Free software finally gets significant coverage on BBC TV's Click show this week, but the angle is very much Linux rather than GNU/Linux, and zero cost rather than freedom. They mentioned free security software and even raised the possibility of trojans, but didn't mention how free (as in freedom) software allows any end-user to check it, or have it checked.
Quite a missed opportunity! However, Click has a regular letters section, so watch it (times below), email click@bbc.co.uk and see if we can get the free software view across.
The letters section this week flamed the proprietary SaaS social network Facebook for its pathetic default-permit approach to the security of user details. I really think there's a role for something like noserub.com in free software social networking.
bbc.co.uk/click is shown on BBC News Channel Saturday 1130, Sunday 0430 and 1130, Monday 0030 and Sunday 0430 on BBC-1 (times BST)
bbcworld.com/click is shown Thursday 19:30 GMT, Repeated Friday 09:30 and 12:30 (Asia Pacific only), Saturdays 06:30, Mondays 15:30, Tuesdays 01:30 (not Asia Pacific, Middle East or South Asia) & 07:30 GMT
Anyone else see this?
Regards,
Florian Weimer fw@deneb.enyo.de writes:
- MJ Ray:
didn't mention how free (as in freedom) software allows any random end-user to check or have it checked.
How is this different from proprietary software?
Either this is obvious, or I'm not understanding the question.
Software that doesn't give the user freedom to inspect the source code and pass it on to others doesn't allow the user to check the software themselves or have someone else check it and pass it along to them. This is distinct from free software, which allows all of this.
* Ben Finney:
Florian Weimer fw@deneb.enyo.de writes:
- MJ Ray:
didn't mention how free (as in freedom) software allows any random end-user to check or have it checked.
How is this different from proprietary software?
Either this is obvious, or I'm not understanding the question.
Software that doesn't give the user freedom to inspect the source code and pass it on to others doesn't allow the user to check the software themselves or have someone else check it and pass it along to them. This is distinct from free software, which allows all of this.
These days, there's hardly any widely used piece of proprietary software for which you can't get the source code. You can't make modifications, and there might be restrictions with whom you can share your results, but security reviews based on source code are definitely possible.
It's also not clear if source code availability is that helpful for uncovering security bugs.
Florian Weimer fw@deneb.enyo.de wrote: [...]
These days, there's hardly any widely used piece of proprietary software for which you can't get the source code.
I wasn't aware of this. The Norton Security tools on Windows cause some associates of mine many problems. Even if the apparent bugs can't be fixed, knowing the precise details of how the software works would help. Where can they get the source code?
[...]
It's also not clear if source code availability is that helpful for uncovering security bugs.
Would either the recent openssl/debian zero-entropy mistake or the openssl dangerous use of uninitialised memory have been uncovered without source code availability?
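To see why the Debian/OpenSSL mistake was really only findable in source, recall that the patched code effectively left the PRNG seeded by little more than the process ID, collapsing the keyspace to at most 2^15 possibilities. A rough Python sketch (not OpenSSL's actual code; `weak_keygen` is a hypothetical stand-in for key generation):

```python
import random

def weak_keygen(pid):
    # Stand-in for the broken OpenSSL: the PRNG is seeded (in effect)
    # by nothing but the process ID, so at most 32768 distinct "keys"
    # can ever be produced.
    rng = random.Random(pid)
    return rng.getrandbits(128)

# An attacker simply enumerates every possible key:
victim_key = weak_keygen(pid=4321)
candidates = {weak_keygen(pid) for pid in range(32768)}
print(victim_key in candidates)  # True -- the victim's key is always in the set
```

Black-box observation of individual keys would show nothing unusual; only reading the code (or comparing many keys across machines) reveals that the whole space is enumerable.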
It seems to me that closed security software is a bit dangerous. Treating it as a black box and prodding it with different inputs and outputs is an inadequate way of testing it; it's not really checking.
Regards,
On Sun, 2008-05-18 at 12:42 +0200, Florian Weimer wrote:
- Ben Finney:
Florian Weimer fw@deneb.enyo.de writes:
- MJ Ray:
didn't mention how free (as in freedom) software allows any random end-user to check or have it checked.
How is this different from proprietary software?
Either this is obvious, or I'm not understanding the question.
Software that doesn't give the user freedom to inspect the source code and pass it on to others doesn't allow the user to check the software themselves or have someone else check it and pass it along to them. This is distinct from free software, which allows all of this.
These days, there's hardly any widely used piece of proprietary software for which you can't get the source code. You can't make modifications, and there might be restrictions with whom you can share your results, but security reviews based on source code are definitely possible.
But you might of course get sued by an IPR holder if you then worked on a similar project and they claimed you had stolen their idea that you saw in their code. Shared source has specific risks to the user that FOSS doesn't have.
It's also not clear if source code availability is that helpful for uncovering security bugs.
Certainly there are some deterrents to exercising the right to go and take a look, depending on how the proprietary software is licensed. Whether this makes a practical difference? Who knows?
Ian
On 18 May 2008, at 11:42, Florian Weimer wrote:
- Ben Finney:
Florian Weimer fw@deneb.enyo.de writes:
- MJ Ray:
didn't mention how free (as in freedom) software allows any random end-user to check or have it checked.
How is this different from proprietary software?
Either this is obvious, or I'm not understanding the question.
Software that doesn't give the user freedom to inspect the source code and pass it on to others doesn't allow the user to check the software themselves or have someone else check it and pass it along to them. This is distinct from free software, which allows all of this.
These days, there's hardly any widely used piece of proprietary software for which you can't get the source code. You can't make modifications, and there might be restrictions with whom you can share your results, but security reviews based on source code are definitely possible.
It's also not clear if source code availability is that helpful for uncovering security bugs.
Hi Florian, I'm not sure about your assertions about security.
These days, there's hardly any widely used piece of proprietary software for which you can't get the source code.
Do you have any evidence to support this view?
All sorts of concerns come into proprietary software companies' decisions about access to source code: competition, corporate culture, industrial espionage (think Huawei and Cisco), and the importance of the party seeking source code (Microsoft gives more code to governments than to individual researchers, for example).
Certainly a few proprietary software companies have opened substantial amounts of source. But generally speaking, one of the defining aspects of a proprietary software company is that it doesn't allow source code access, and those that do tend to use the restrictive licences you describe.
Even if you do get full access, it's unlikely that you would actually be able to build it from source and verify that the result is identical to the binaries you already have. So you can't actually say anything about the security of the binaries you're running.
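The verification step itself is trivial: hash your rebuild and the shipped binary and compare. A minimal Python sketch (file names are hypothetical; in practice matching the vendor's binary also requires reproducing the exact toolchain, timestamps and build paths, which is precisely what is usually impossible with proprietary software):

```python
import hashlib

def sha256_file(path):
    """Hash a file in fixed-size chunks so large binaries
    don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical comparison -- with proprietary software this check
# generally can't even be attempted, because you can't rebuild:
# assert sha256_file("my_rebuild.bin") == sha256_file("vendor_shipped.bin")
```

This is the core idea behind reproducible builds: if independent rebuilds hash identically, the source you audited is demonstrably the binary you run.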
there might be restrictions with whom you can share your results
This rather defeats the point of checking for security holes. You might miss a bug that someone else notices; if they can't share that information with you, the bug goes unpatched for you.
Of course, responsible disclosure procedures are needed, but that's pretty trivial.
It's also not clear if source code availability is that helpful for uncovering security bugs.
I don't agree with this.
Decompilation is a nasty business, and auditing a closed-source binary as a black box gives limited scope for evaluating the security of the system's internal workings. The program's output might be correct, but it is difficult to expose, or evaluate, internal vulnerabilities.
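A contrived sketch of why black-box prodding falls short: a flaw that is obvious on the first read of the source can survive enormous amounts of input fuzzing. The backdoor string and `check_login` function here are invented purely for illustration:

```python
import random

def check_login(password):
    # Deliberately planted flaw: one line of reading reveals it,
    # but black-box probing almost never triggers it.
    if password == "maintenance-override":   # hypothetical backdoor
        return True
    return password == "correct horse battery staple"

# Black-box testing: 100,000 random 8-letter probes never hit the flaw.
rng = random.Random(0)
alphabet = "abcdefghijklmnopqrstuvwxyz"
hits = sum(
    check_login("".join(rng.choice(alphabet) for _ in range(8)))
    for _ in range(100_000)
)
print(hits)  # 0 -- the backdoor survives all of that prodding
```

With the source in hand, the suspicious comparison is visible immediately; without it, only exhaustive or very lucky input generation would ever find it.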
I think it's possible to make the argument that source code availability is neutral for security _overall_ (though I think there are other reasons why that's not true), but not that it doesn't help uncover security bugs.
That more bugs are found is one of the key advantages of free and open source software. That you can patch them yourself (something you can't do under the 'read-only' licences for commercial software to which you refer) is another. "Given enough eyeballs, all bugs are shallow."
Regards,
Graeme West
Florian Weimer fw@deneb.enyo.de writes:
These days, there's hardly any widely used piece of proprietary software for which you can't get the source code.
There's a huge gulf between "possible through illegal and dubious channels to get some source code for what might be the program", and "can get the corresponding source code easily, openly, and legally from the vendor themselves".
For the purposes discussed in this thread, the former isn't adequate.
It's also not clear if source code availability is that helpful for uncovering security bugs.
This is contrary to all prevailing wisdom from security professionals. You're making an extraordinary claim, that needs extraordinary evidence to support.
On Sun, 2008-05-18 at 12:42 +0200, Florian Weimer wrote:
These days, there's hardly any widely used piece of proprietary software for which you can't get the source code.
To be honest, I don't see how this statement is true.
You can't make modifications, and there might be restrictions with whom you can share your results,
This is the problem. In most cases, the person who finds a bug is someone who is considering or trying to reuse the code in something new. If you are building something new, you generally exercise parts of the existing code that were not exercised by previous uses. This is a great way to find new bugs or deficiencies, and sometimes these turn out to be security issues as well.
If you can't reuse the code, there are fewer chances to catch errors in it. The more the code is reused, and the more it is tested in different conditions, the more robust it becomes and the more flaws are eliminated. There is still space for subtle bugs that may not cause visible errors, but the mere fact that lots of people look at the code while working on it makes it possible for some of those bugs to be spotted.
Add to that the "community" aspect. In most cases, when you hit a bug in some widely used proprietary software, you simply live with it. Or maybe you even change software, but you don't report it, because there is no accessible community that will take the issue seriously and help you unless you are paying big money to some company. More reports of this kind mean more fixes, and again more exercise and feedback.
but security reviews based on source code are definitely possible.
It is, and availability of the source allows you to run tools that automatically analyze and discover defects. But these tools can only go so far.
It's also not clear if source code availability is that helpful for uncovering security bugs.
When source code is available, you get individuals willing to uncover bugs. In the Samba community it is not uncommon to get security reports from professional bug hunters. That is possible mostly because the source code is available: the protocol is complex enough that blind testing alone (which we already do in many cases with our protocol analysis tools) is not enough to find all defects.
Black boxes are definitely harder to check.
Simo.