
Just what constitutes a “critical vulnerability” in a word processor or Android app is not explained.
That'll be anything that allows users to block ads...
Google has decided it's fair to tell the world about newly-discovered security flaws seven days after it learns about them, even if that's not enough time for vendors of vulnerable software to provide a fix. The Chocolate Factory used its Online Security Blog to deliver this edict, writing that “We recently discovered that …
True. Google now have a lot of software out there, which could get interesting if some other vendors decide to get childish and go tit-for-tat.
Also, how does this sit with Google's policy on Android? Are they going to out all the phone manufacturers and networks who haven't put out Android updates? There are an awful lot of people not running all the bug-fixes on their versions because their phones are abandonware.
If it is already being actively exploited in the wild, it seems the only ones who do *not* know about the problem are the users. The vendor is in the know; the script-kiddies are in the know.
Seems like a sensible thing to inform the users too. They can avoid using the software in question, or use an alternative, or take whatever steps are needed.
I'm with MO. Give users the opportunity to uninstall the software until a fix is found. What's so bad about that? Just how damaging these vulnerabilities are likely to be is an open question, but certainly in other areas users would like to block any attack vector as soon as possible - never mind a week later.
Give users the opportunity to uninstall the software until a fix is found. What's so bad about that?
The problem is that this is completely impractical. If uninstall-on-flaw is the approach adopted, you'll soon find that you are not running any web browser, for example, since they all have significant long-standing issues. That's before you get into the gritty problem of finding a replacement: if you are using advanced features or non-standard extensions, you may well find there isn't one. Bear in mind that for many users their computer platform is not some optional plaything that can be done without, but an essential tool for real work. If your mission-critical in-house app requires some plug-in that has only ever been released for Firefox, jumping ship to IE for a week or two simply isn't on the radar. Nor is it if you have the simple issue of managing a rollout across an entire estate of thousands of users and machines.
Seriously, I'm beginning to wonder how many commentards here have real-world commercial experience.
Giving users the information lets them make their own risk assessment: weigh the costs of taking action (which might stop short of uninstalling, it might just mean disabling a feature) against the benefits, and reach an informed decision about whether to continue to be exposed. Better?
weigh the costs of taking action (which might stop short of uninstalling, it might just mean disabling a feature)
Or configuring your content-inspecting firewall to detect the exploit signature.
All you folks with Internet-facing production-critical apps have content-inspecting firewalls, right? And administrators who are capable of adding new signatures to them, right?
If not, perhaps you're not in a position to complain about vulnerability disclosure.
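For anyone wondering what "adding a signature" amounts to in practice, here is a minimal sketch in Python of the idea behind signature-based content inspection. The rule names and byte patterns below are invented for illustration; real firewalls and IDSes such as Snort or Suricata use dedicated rule languages and far more efficient matching engines.

    import re

    # Hypothetical signatures for a just-disclosed exploit: each maps a
    # made-up rule name to a byte pattern expected in a malicious payload.
    SIGNATURES = {
        "hypothetical-cve-overlong-header": re.compile(rb"X-Evil-Header: A{256,}"),
        "hypothetical-cve-nop-sled": re.compile(rb"\x90{16,}"),
    }

    def inspect_payload(payload: bytes) -> list[str]:
        """Return the names of any signatures that match this payload."""
        return [name for name, pattern in SIGNATURES.items()
                if pattern.search(payload)]

    if __name__ == "__main__":
        benign = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
        suspect = b"GET / HTTP/1.1\r\nX-Evil-Header: " + b"A" * 300 + b"\r\n\r\n"
        print(inspect_payload(benign))   # []
        print(inspect_payload(suspect))  # ['hypothetical-cve-overlong-header']

The point is that a new signature is a small, quickly deployable artefact - provided you have the kit, and the administrators, to deploy it.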
"snip> If your mission-critical in-house app requires some plug-in that has only ever been released for Firefox jumping ship to IE for a week or two simply isn't on the radar. Nor is it if you have the simple issue of managing a roll out across an entire estate of thousands of users and machines.
Seriously, I'm beginning to wonder how many commentards here have real world commercial experience."
Seriously? You'd build a business-critical application on some rather untested features of a browser that might change versions without warning, and leave it connected to the internet knowing that it had been exploited... and you wonder how many of US have real-world commercial experience?
This will get interesting. Google have been put in a situation where the best they can do is choose the lesser evil, because a no-evil solution doesn't exist.
I've puzzled over the problem for quite some time and still don't have even an inkling of a theoretical solution, let alone a practical one. The reality is that the coders have to be the ones to issue the patch; sometimes that takes an awfully long time, and sometimes it isn't practical to uninstall or block whatever is broken until the patch is released. Here is one of the places where I do see an advantage for publicly posted OSS source: any coder can post a fix, so once a flaw is disclosed the time-to-fix seems to fall drastically compared with closed/non-public source.
"They can avoid using the software in question, or use an alternative"
Does that mean after every Pwn2Own competition where Firefox, Chrome and IE are all taken down, people will just uninstall their browsers?
I'm all for quicker patches and for preventing companies from ignoring vulnerabilities, but full disclosure after 7 days seems odd. That's like a car manufacturer disclosing details of how to break into a competitor's car if they don't fix it within 7 days. How does that protect consumers?
It says, as you quote, that Google will support researchers in making that disclosure, not that Google will make it itself. You may think that being one step removed from the process makes no difference at all, but Google will say it is just encouraging others to adopt its own position.
1. Will they support researchers if the company that hasn't fixed a critical bug within the timeline is Google?
2. If the bug is complex, and rushing the fix out means insufficient QA and a broken userspace as a consequence... will they support the vendor when the world goes completely Torvalds on them?
Too many excuses and reasons why companies should have more time.
As Google say, the bug is known and being exploited. A company needs to take responsibility for its actions and its products rather than bitching about it and finding ways not to fix the issue.
Working for many councils, the NHS and other organisations, I see this sort of crap all the time: people inside companies justifying their wages by being heard and seen on email trails, making spurious excuses, basically arguing the toss. This situation carries over to the world of real work all too often.
A second problem is that so many software companies are in it for the quick buck, making all sorts of claims about their ability to deliver, hitting the low-hanging fruit, then screwing the customer towards the end of the project once they see the money well drying up. Generally these bugs and exploits are in there because they were quick fixes, or, even worse, they were known about but not brought to attention because the way around them was just too much work.
Software development is one of those industries that needs to grow up, and fast.
And Google highlighting vulnerabilities in other people's software will surely be beneficial for Google. But they're playing a long game, so a quick buck isn't what they're after. Putting the fox in charge of the henhouse again...
This doesn't seem that unreasonable for vulnerabilities under active exploitation in the wild. As long as they aren't releasing a proof-of-concept, I don't see the harm. When Java zero-days were recently discovered being exploited in the wild, the vulnerability was widely published so that people would know to disable the plugin.
"Most the people vulnerable to the attacks wont know how to disable the plugin and wont read information about the vulnerabilities as IT news, security and vulnerabilities aren't on the radar."
So what? Those ignorant folks would have a better chance of being informed if the vulnerabilities are published than if everyone keeps quiet about them. The people with the most to gain from NOT publishing the vulnerabilities are the folks exploiting them and the vendors punting broken products to their unsuspecting users.
Slightly off-topic: a lot of vendors still have a nasty habit of ignoring or attacking the folks who report faults in their products. Those vendors need to get wise or be culled, in my view, because their attitude costs users more pain, time and money than their products are worth.
Shock horror - Google place a limit on how long apps with 'critical vulnerabilities' are allowed to go unreported. Anyone would think they are trying to protect their users from bad people!
If the vendor can't or won't fix a 'critical vulnerability' within 7 days - or suspend the app from the store while it is fixed - you have to call into question the competence and/or integrity of that vendor.
What's the problem with this? All we're really talking about here is embarrassment - I'd argue that most of the larger software vendors care more about their "reputation" than about their customers, so all that's happening is that their feet are going to be held to the fire. Maybe they will finally get off their collective arses and fix the problems.
This is such a non-issue, I'm bored.
While this is a reasonable idea on the face of it - too many times serious issues are buried, only to get a rough patch job when the issue blows up...
Given Android's screwy permissions that favour the app over the user and leak private data, the ability for carriers to stuff phones with apps you can't get rid of, plus an update infrastructure so messed up that phones are still being sold with 2.3 (or less!) onboard...
I wonder how long it'll be before Android gets the fix-or-disclose treatment?
There might be some justification for this course when the vulnerabilities are known to be actively exploited - in other words, when the exploit was uncovered as a result of an investigation into unexpected activity.
But if an exploit is discovered and there is no evidence that it is being actively exploited, the balance is different: disclosure risks arming the blackhats, with the likelihood of immediate exploitation, while waiting for a scheduled update cycle means most users will likely get the patch before the exploit can be widely deployed.
To announce a Windows exploit 8 days before "Patch Tuesday", for example, rather than waiting until a patch was deployed, would be the action of a real jobsworth. The same would now go for Flash, which has finally adopted a monthly patch cycle, or Firefox, which has an auto-update mechanism and fast patch deployment. What if, 5 days after being notified of the bug and just before deploying a patch, Mozilla discovers that the patch itself introduces a new vulnerability? Should Google just go ahead and publicize a vulnerability that isn't being exploited?
In the case of a vulnerability in an application that doesn't have a well-defined patch cycle or auto-update mechanism, it almost doesn't matter when they announce it - lots of end-users will never know that the vulnerability exists, so the people who get the most benefit from the publication will be the blackhats.
If Google is going to be aggressive about this, they should publish a tool that end-users can install that will alert them when Google has published an exploit for an application that is installed on the user's desktop. (It's not as if most of the vulnerable end-users aren't running half a dozen Google services already.)
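No such Google tool exists, but the core of what's being proposed is simple. Here is a minimal Python sketch - with an invented advisory format and made-up application names and versions - of the check such a tool would run against a published feed:

    import json

    def parse_version(v: str) -> tuple[int, ...]:
        # Turn "11.2.202" into (11, 2, 202) for simple tuple comparison.
        return tuple(int(part) for part in v.split("."))

    def check_advisories(installed: dict[str, str],
                         advisories: list[dict]) -> list[str]:
        # Return a warning for each installed app covered by an advisory.
        warnings = []
        for adv in advisories:
            app, fixed_in = adv["app"], adv["fixed_in"]
            if app in installed and \
                    parse_version(installed[app]) < parse_version(fixed_in):
                warnings.append(f"{app} {installed[app]} is vulnerable "
                                f"({adv['id']}); fixed in {fixed_in} - "
                                f"{adv['mitigation']}")
        return warnings

    if __name__ == "__main__":
        # In a real tool the inventory would come from the OS package manager
        # or registry, and the advisories from a signed feed.
        installed = {"ExampleBrowser": "21.0", "ExamplePlugin": "11.2.202"}
        advisories = json.loads('[{"id": "ADV-0001", "app": "ExamplePlugin", '
                                '"fixed_in": "11.2.203", '
                                '"mitigation": "disable the plugin until patched"}]')
        for warning in check_advisories(installed, advisories):
            print(warning)

The version comparison is the trivial part; the hard problems in practice are building a trustworthy inventory of installed software and authenticating the advisory feed.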
What I want is a data vault to protect my notes, phone contacts, and data fields. If Google wants to impress people, stop letting LINE, Kakao, and other apps plunder personal info. Whether that's a bug or a feature depends on who is speaking.
Apps that send and receive data should need tick-box, case-by-case permission to even SEE user-created or input info. That could help make some poor code less threatening to users.
«Seven days is an aggressive timeline and may be too short for some vendors to update their products, but it should be enough time to publish advice about possible mitigations, such as temporarily disabling a service, restricting access, or contacting the vendor for more information. As a result, after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves. By holding ourselves to the same standard, we hope to improve both the state of web security and the coordination of vulnerability management. »
What's wrong with that ? And if Google don't, in fact, «[hold themselves] to the same standard», I'm certain that you will be there to report, impartially as always, on the matter...
Henri
You'd think a topic this controversial would have been discussed once or twice by security practitioners.
Oh, wait - it has. Ad nauseam, for many a year. Hell, it's been 13 years since RFP published the initial version of RFPolicy. It's like no one here reads Bugtraq or RISKS or CERT announcements.
But no doubt the learned opinions of Mr Sharwood and the chorus of Reg commentators rehashing the most preliminary and unsophisticated observations on the topic will provide many new insights.