get that man to Gitmo
Dangerous, he has ideas and values. His ideas will disrupt the new economy. From the CIA, who would have thunk?
Computer security luminary Dan Geer has proposed a radical shakeup of the software industry in hope of avoiding total disaster online. Geer played a crucial role in the development of the X Window System and the Kerberos authentication protocol, and is now the chief security officer of the CIA’s VC fund In-Q-Tel. And during …
It's a lovely idea but will never work.
Currently these people have tried and failed to shut down the sites hosting the information, and have resorted to going after search companies like Google instead. This is exactly like trying to get rid of a book in a library by removing its index card from the card catalogue.
As soon as somebody reindexes, the information is back.
It's a total waste of time that only benefits lawyers.
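The card-catalogue analogy can be made concrete with a toy inverted index. The page names and texts below are invented, and this assumes nothing about how any real search engine works — it just shows that deleting an index entry leaves the underlying documents untouched, so anyone who still has the documents can rebuild the entry:

```python
# Toy inverted index: the "card catalogue" in code. Removing an entry
# from one index does nothing to the documents themselves; anyone
# holding the documents can rebuild the index from scratch.
# (Page names and contents are made up for illustration.)

def build_index(pages):
    """Map each word to the set of page names containing it."""
    index = {}
    for name, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(name)
    return index

pages = {
    "page-a": "how to unlock widget firmware",
    "page-b": "widget recipes and tips",
}

index = build_index(pages)
assert index["widget"] == {"page-a", "page-b"}

# A takedown order removes "widget" from this particular index...
del index["widget"]
assert "widget" not in index

# ...but the pages still exist, so a reindex restores the entry.
index = build_index(pages)
assert index["widget"] == {"page-a", "page-b"}
```

The point of the sketch: the takedown only ever touched one copy of the catalogue, never the shelves.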
Making unpatched software open source is a great idea on the surface of it, but dig a little deeper and you quickly see a glaring problem - code re-use.
If large chunks of code are ported across from version to version, is the old version really abandoned? Perhaps the code that was left out is, but how do you isolate that?
I am very much for a lot of what this chap is saying but an unrealistic goal is worse than none in this instance because it allows the software companies to take the initiative and re-frame the discussion as they please.
The problem that I am seeing - whether it is a big hurdle or not - is that code is intellectual property, at least inside the current legal framework.
Making software open-source means that you expose that code to the masses. If part of that code is being re-used in new software then that will open up commercial secrets you are currently earning money from.
In this way, I suppose, it is similar to the way record labels re-release best-ofs and remasters every so often to extend copyright. Note that I am not condoning the practice, just drawing a parallel.
However, with an album of music, each song can be dealt with individually without much problem and you could release one song from an album from copyright while keeping it on the remaining tracks. With code, this is substantially more difficult as the bits of code that are no longer in use (and thus eligible for release as open source) require other bits of code that are still in use.
As I said, it's a good idea but the details trip it up somewhat. You'd have to have strict criteria for what software fits the bill and one suspects that, just like music copyright, there would be ways to make sure your software never qualified for release as open source.
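The entanglement problem in the comment above can be sketched with a small dependency graph. Everything here is hypothetical — the module names and which ones count as "abandoned" are invented — but it shows why "release just the dead code" is hard: the dead modules typically depend on modules that still ship in the current product.

```python
# Sketch of the code-reuse entanglement problem: modules marked
# "abandoned" may depend on modules that live on in the current
# product, so open-sourcing "just the dead code" drags live code
# along with it. Names and edges are invented for illustration.

deps = {
    "legacy_ui":    {"core_crypto", "file_io"},  # abandoned
    "old_importer": {"file_io"},                 # abandoned
    "core_crypto":  set(),                       # still shipped
    "file_io":      set(),                       # still shipped
}
abandoned = {"legacy_ui", "old_importer"}

def entangled(deps, abandoned):
    """Abandoned modules that depend on at least one live module."""
    return {m for m in abandoned if deps[m] - abandoned}

# Here every abandoned module pulls in live code, so none of them
# can be released in isolation without exposing current IP.
print(entangled(deps, abandoned))
```

In this toy graph both abandoned modules are entangled, which is exactly the "bits no longer in use require bits still in use" situation described above.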
Actually, code re-use is a good argument FOR the measure.
If your code is old enough that you aren't willing to support it, then why are you still using it in the newer product?
In truth, the whole idea is to force software companies to do something all other companies have to do: take responsibility for the products they offer. By forcing them to re-internalise a lot of the costs of the crap they put out, you make it financially worthwhile for them to fix it.
My own thought was that at that point you should lose your copyright protection and the code should pass into the public domain. But given the way most people treat open source, Stallman and his acolytes are the only people who would argue the difference is more than semantics.
Whether or not a chunk has been abandoned is irrelevant. The monolithic code has been abandoned and it is in the public interest that the monolithic code continue to be supported. Yes, you do need to indemnify the company so that their continued use of a code segment isn't a violation of copyright on the old code, hence my preference for copyright expiration over OSS.
Whether or not a chunk has been abandoned is irrelevant. The monolithic code has been abandoned
OK, now show us how to define "chunk" and "monolithic code" in a manner that is legally useful, fair, and can't trivially be avoided by software authors.
I like the idea of copyright lapsing on orphaned software, and the source being published, I really do; but it's patently unworkable.
It's an unworkable proposition in any case, because the cost of "maintaining" software can be reduced to nearly zero. Physical property, children, etc are abandoned because they have high minimal maintenance costs (at least in terms of opportunity costs and the like, if we're talking about something that doesn't require active maintenance). A company could "maintain" software indefinitely by issuing an annual vapid update of some sort. There's also the inheritance argument made by other posters - what's to stop a company from saying "this new product is the maintenance release for the old product"? (Try to create a legally useful definition of maintenance that prohibits that while recognizing whatever you consider legitimate maintenance.)
It would be impossible to regulate what significant maintenance (to prevent abandonment) would entail - the range of software is just too wide - so software could often only be determined to be abandoned by costly litigation.
The proposal also doesn't work well with a lot of embedded software.
It's a cute idea, but it would require a massive, hugely complicated, highly fraught regime of software oversight. Ain't gonna happen, and the cure would be worse than the disease.
History teaches that humans remain complacent until a crisis occurs. Then they try to patch the system in an "economical" way. They do it until a real catastrophe happens. Then the whole old system has to be ripped up and rebuilt from scratch.
All is well for a time until a crisis ....
Humanity is at a civilizational crux point, Geer said, where meatspace and the digital world are converging with very few controls. He said that unless humans reassert control over the digital sphere and make it work to human rules, humanity will not be able to take back control once code is law.
Take back control from whom?
Geer also suggested a new way to stamp out the exploitation of software security vulnerabilities for which no patches exist – dreaded zero-day vulns: the US government should make a standing offer to pay a bug bounty equivalent to TEN times the price companies are willing to pay for the security flaws, and then make them public after a patch has been developed.
This could go a long way to end inter-state cyberwar and stop common criminals, we're told. No mention of the NSA, of course.
Such would create, or more accurately legitimately and attractively fabulously reward the extremely intelligent entity able to create and deploy, and autonomously and anonymously both micro and macro manage, inter-state cyberwarfare. And attractively fabulously because such an ability and facility and ubiquitous virtual utility is worth an absolute fortune to companies willing to pay for national, international and internetional security flaws which deliver to them inequitable overwhelming global executive advantage. But hey, is that not the Wild Wacky Western American Dream in the Capitalist Way, and perfectly normal in its own big little perverse and corrupt sort of way?
However, as for the the need and feed of the flourish ……”and then make them public after a patch has been developed” ……. rather than the intellectual property remaining MkUltra Sensitive TS/SCI and Strictly Need to Know Private Pirate Renegade Rogue Information and Super CyberIntelAIgent, well, that might be best given a rethink.
Fancy inferring that the CIA might have an ulterior motive for suggesting that the US government run a global bug bounty offering huge money for information on product flaws.
Honestly, Dan Geer is obviously just thinking of the children / out to punish terrorists and paedos / <insert boogeyman of the day here>, and this information most emphatically would not be used for (industrial) espionage purposes.
Pfft, perish the thought...
1) US buys all zero days instead of just some of them
2) US sets them free instead of using them for itself
I could see #1, because Congress has an unlimited appetite for giving money to defense and spy agencies with little or no oversight. #2 has no chance in hell of ever happening.
The proposal for government to buy 0day exploits and altruistically manage them assumes that government is a disinterested party. As the article said, the chap made no mention of NSA etc.
As an ideal, Dan Geer's proposals have some merit, but they assume a level of trust we, the people, simply don't have in our governments.
I was wondering why this rather critical flaw in the logic wasn't commented on immediately....
You cannot trust any government with a scheme like this, let alone one that has proven to have a rather cavalier attitude when it comes to civilians' rights. Within a week the first accusations of [government body x] not releasing all acquired data, or retaining data for [nefarious purpose x] will fly. Nationally and internationally.
And there would be no way to really prove the opposite even if the agency involved was squeaky clean and holier than St Peter.
Instead of... "offer to pay a bug bounty equivalent to TEN times the price companies are willing to pay for the security flaws, and then make them public after a patch has been developed."
...how about making them automatically public one month after the purchase, to make damn sure that a patch gets developed in the meantime!
I know ... I'll get my coat.
I like the way this guy thinks and agree with many of his points but find the proposed solutions unlikely to ever be implemented, at least without being modified severely enough that they would be useless.
When it comes to buying-up all vulnerabilities, wouldn't this just cause a bidding war and drive up the price of vulnerabilities? It would be nice to think that the government of a major world power could consistently outbid any crime syndicate but I find that hard to believe given how much money/power/etc. is involved in exploitation of certain vulnerabilities and the limitations imposed by government budgeting.
"He said that in the intelligence community building plausible false identities is becoming much harder in the digital age and will only get harder. These days it’s a much better solution to steal someone’s identity and use that, Geer opined."
NaNoWriMo coming up. Interesting idea, that one. LegendOMatic, to steal from le Carré. eBay for IDs. Have bots manufacturing a social media history for agents.
It could make a good story but the problem is that HUMINT is pretty much dead these days with all the major agencies relying almost exclusively on SIGINT. Exceptions include the police, who still illegally infiltrate groups they have been told might at some future point be a threat to somebody important.
The John le Carré/Len Deighton spy is dead, although to be fair Deighton's world-weary bureaucrats were also closer to the reality anyway.
Somebody should tell this guy that there is something between big commerce and open source.
How about the army of bedroom coders who release their software for free, but don't want to make it open source? There's quite a few of them around, and applying the same legal liabilities for something given away for free would most likely make them think "sod it", especially if they don't feel (for whatever reason) that they want to make the source available.
To be honest, these days I'm quite wary of those exact people - supposedly disinterested enough to let you use their software for free, but not enough to actually let you make proper use of it by also giving you the source ("please let me put my hand into your pocket, I promise I won't take anything! Well, maybe later, just a little bit..."). I prefer to use as few such 'gifts' as I can (as opposed to fully open source ones), so as far as I'm concerned, good riddance...
quote: "How about the army of bedroom coders who release their software for free, but don't want to make it open source? There's quite a few of them around, and applying the same legal liabilities for something given away for free would most likely make them think "sod it", especially if they don't feel (for whatever reason) that they want to make the source available."
If those bedroom coders have performed rigorous security testing, and write code taking basic security principles into account, then there is no issue. If they are unaware or unwilling to, then I think I'd rather they do say "sod it I won't bother", as it cleans up the install base for those functions to software that is either open source (and vetted) or proprietary and properly tested.
Just as some bedroom coders are exceedingly skilled and conscientious, some are writing their first Hello World app and then vanity publishing it, without even considering input validation, ASLR or other techniques that help mitigate potential exploits. Discouraging insecure development is a pre-requisite of making computing a more secure place, and any bedroom coder worth the name won't begrudge putting the extra effort in to learn how to do it properly, IMO ;)
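As a minimal sketch of the kind of basic input validation the comment above has in mind: reject anything outside an explicit allow-list before it touches the filesystem. The filename pattern, `data/` directory, and function name are all hypothetical, not anything from the article:

```python
# Minimal input-validation sketch: an allow-list check on a
# user-supplied name before it is used to build a file path,
# blocking path-traversal attempts like "../../etc/passwd".
# The naming rules and data directory are invented for this example.

import re

SAFE_NAME = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def load_user_file(name):
    """Return a path for an allow-listed name; reject everything else."""
    if not SAFE_NAME.fullmatch(name):
        raise ValueError(f"rejected unsafe name: {name!r}")
    return f"data/{name}.txt"   # safe to build the path now

assert load_user_file("report-1") == "data/report-1.txt"
try:
    load_user_file("../../etc/passwd")
except ValueError:
    pass  # traversal attempt rejected before reaching the filesystem
```

The design choice worth noting is allow-listing rather than block-listing: enumerating what is permitted is far harder to bypass than trying to enumerate every dangerous character.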
" proprietary and properly tested "
You must be talking about Microsoft.
No? Maybe you are talking about Flash?
Or...? Ubuntu? Android? iOS? Everything has patches and updates to correct errors (and sometimes, to introduce new ones!).
If the big companies have to provide regular patches for the same sorts of flaws (buffer overrun and failing to sanitise inputs), isn't it a bit rich to expect a non-pro part time coder to turn out something better? If in doubt, refer to OpenSSL for an example of the supposed specialists getting it wrong, and to WPS for an example of a protocol broken from the outset. Software is a very complex thing with zero tolerance for mistakes, created by creatures who are imperfect and make mistakes. I disagree that we the populace should serve as an army of beta testers, but likewise I think expecting absolute perfection is a dream...
Somebody should tell this guy that there is something between big commerce and open source. .... heyrick
The Danegeld Route Root though, as crazily rewarding and exciting as it is, is not something for everyone or anyone who hasn't done a minefield of crash testing, heyrick. But then highly specialised and potentially unique and original have always been naturally prized and much sought after.
Geer's ideas are an interesting blend of old-fashioned capitalist market economics and ... common sense. Can't wait to hear/read his full remarks. Whether it will actually have legs depends on a lot of factors, but the trend in tech seems to be that individuals at the bottom will continue to gain more power as the tech itself becomes more capable. Of course, even if we see courts (and legislatures) holding publishers responsible for bad software and withdrawing legal protection from abandoned code, it's not likely to happen in an orderly way. It will probably come about through public pressure and civil disobedience - something that's already been building for at least two decades. One indicator of which way the wind is blowing will be the outcome of the current campaign to further spread the reach of the DMCA via the latest round of trans-regional "Partnership" treaties over the next few years. It will be an interesting ride.
"Software houses will yell bloody murder and pay any lobbyist they can to scream that this will end computing as we know it,” he said.
Well, yes, of course. Publicly-traded companies have a fiduciary responsibility to oppose burdensome regulation, unless they have reason to believe it will give them a competitive advantage. Doesn't mean it's not a good idea, but blaming software vendors for opposing it is more than a little disingenuous. This is precisely why we have governments - to force people to do things they'd rather not.
In any case, the idea of software liability is an old one, and has been vigorously debated by the IT security community. Schneier used to talk about it a fair bit, and there have been some interesting discussions of it on his blog.
Yes, but apparently his point is that if you have software where you can "remove features you don't want" - i.e. open source software - you aren't liable. That makes it quite feasible, as there's no longer any reason not to distribute your source code, except for malware.