Embrace, extend, and extinguish?
Google needs to be very careful in the current climate if it wants to roll out services that it proposes will improve the status quo. It does not look as if it is being careful enough.
Google has big ambitions for its new Open Source Vulnerabilities database, but getting started requires a Google Cloud Platform account and there are other obstacles that may add friction to adoption. The Chocolate Factory is not happy with the state of open-source software security, which is a big deal not least because its …
I think if they de-Google the process and put it under common, public governance this approach has real value. I work in open source middleware/data processing, so I have to frequently put my Vulnerability Manager hat on for customers. While CVEs are valuable and are the lingua franca, they are _entirely_ beholden to the level of effort vendors or open source communities are willing to put into writing them. The first thing to go is bothering to check when the bug was really introduced. Just list the last couple of releases as "affected" and your latest release as "fixed" and call it a day. Bisection keeps this process honest.
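(For the curious, a minimal sketch of what I mean by keeping it honest, in Python. It assumes a local checkout, a known-good tag, a known-bad tag, and a hypothetical reproduce_bug.sh script that exits non-zero when the bug is present.)

```python
import subprocess

def bisect_introducing_commit(repo_dir, good_tag, bad_tag, test_script):
    """Find the commit that introduced a bug by driving `git bisect run`."""
    def git(*args):
        return subprocess.run(["git", "-C", repo_dir, *args],
                              capture_output=True, text=True, check=True)

    git("bisect", "start", bad_tag, good_tag)
    try:
        # `git bisect run` walks the history automatically, using the test
        # script's exit code: 0 = good, non-zero (except 125) = bad.
        git("bisect", "run", test_script)
        # When the run finishes, refs/bisect/bad points at the first bad commit.
        return git("rev-parse", "bisect/bad").stdout.strip()
    finally:
        git("bisect", "reset")

# Hypothetical usage:
# sha = bisect_introducing_commit("/src/libfoo", "v1.2.0", "v1.4.1",
#                                 "./reproduce_bug.sh")
# print("bug introduced in", sha)
```

That commit, not a guess at a release range, is what belongs in the "introduced" field.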
Oh and what's that? Your upstream community uses a custom versioning scheme that you have to pass a course in cryptoversionology to understand? It was agreed by four guys in a Palo Alto sports bar in the mid 2000s? Well sure let me get right on automating the checks for which bits of our software - with our own bonkers versioning schemes - depend on the vulnerable bits of your software. Should only take me a few weeks per dependency. Articulating vulnerabilities based on git commits and release tags helps with this.
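(Something like this is what an entry keyed to commits and tags looks like, roughly following the published OSV schema; field names are from memory and all values are made up, so treat it as illustrative only.)

```python
# Rough sketch of a vulnerability record expressed in terms of git commits
# and release tags instead of a vendor's home-grown versioning scheme.
# Field names approximate the OSV schema; identifiers and hashes are made up.
osv_style_entry = {
    "id": "EXAMPLE-2021-0001",
    "summary": "Buffer overflow in the frobnicator",
    "affected": [{
        "package": {"ecosystem": "ExampleEcosystem", "name": "libfoo"},
        "ranges": [{
            "type": "GIT",
            "repo": "https://example.org/libfoo.git",
            "events": [
                {"introduced": "3f1c2ab"},   # commit found by bisection
                {"fixed": "9d84e07"},        # commit carrying the fix
            ],
        }],
        "versions": ["v1.3.0", "v1.3.1", "v1.4.0"],  # affected release tags
    }],
}
```

With ranges expressed that way, "is my fork affected?" becomes a question git can answer, rather than one the vendor's release notes have to.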
But if it stays this way it'll just go the way of CVE's own CPE - barely used, so barely useful.
This seems to be a side-effect of the necessary rate limiting, which is probably because it was thrown onto GAE to get it up and running. Difficult to see the need for Bigtable, and thus this kind of limiting, for a production system. Will be interesting to see what Google does with it because it could be useful. But other services such as Safety for Python are available.
"This seems to be a side-effect of the necessary rate-limiting."
Which they need exactly why? They don't have rate limiting on so many other products they develop. I can send searches through their system by automatic means, and each of my searches has to go to many nodes so they can search a massive database quickly, find advertisements for me (which means going to another database), plus a bunch of sharding and protection against endpoint failure. This one is much easier: a single database, maybe five queries on it for each search. Yet this one needs authentication and rate limiting?
This is Google. They have plenty of resources to host a database. Also, it's supposed to be an open resource for the benefit of everybody [cough as if cough]. That should mean open access, with an open API which lets me clone the entire thing if I have a reason to. It should also mean open management, where they wouldn't even have to provide all the resources. If this was an extension or replacement of the CVE system designed by Google with others who were going to use it and encourage its adoption, they could get lots of other big companies to chip in for the hosting and bandwidth costs. I disagree entirely that rate limiting of any kind is necessary, let alone rate limiting by requiring a painstaking authentication process with contact information.
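(To be clear about how little is being asked for: a lookup against this thing is a single small request. A sketch in Python, assuming the v1 query endpoint at api.osv.dev and leaving out whatever key the sign-up process hands you; both details may differ from the service as it actually stands.)

```python
import json
import urllib.request

# Minimal sketch of a vulnerability lookup by commit hash. The endpoint name
# and the lack of an API key here are assumptions; the real service gates
# access behind a GCP-issued key, which is exactly the friction complained
# about above.
def query_by_commit(commit_sha):
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps({"commit": commit_sha}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage:
# print(query_by_commit("3f1c2ab0000000000000000000000000deadbeef"))
```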
Your comparison is flawed: rate limiting is required here because they don't have a read optimised service so all queries are billable. I wouldn't expect that to stay that way if this becomes a real service.
Search is a read-optimised service which they provide as part of their business model, though you will find that they do occasionally greylist IP addresses.
If this isn't read optimized when it's a database that rarely gets written to, that's on their head. I have no sympathy. It also doesn't matter what's billable and what isn't, because it's hosted on their own systems; they don't have to structure it so they pay the cloud team per transaction. Compare it with search on both counts: they don't have a structure where they pay some other department for each search, and search is still the more complex arrangement. They have spent longer optimizing search, but the search process itself is already more complex than a query on this database. Any design decision that leads to the rate limiting is either a stupid design choice ("Should we optimize this database that we expect people to be reading for read operations? No, let's not bother.") or unnecessary ("We're going to host this database as one of our corporate services on servers which are corporate property. Let's set it up to bill ourselves rather than the way we do every other system.").
"The Chocolate Factory is not happy with the state of open-source software security, "
Chrome OS is the problem: it is designed for data collection and marketing, NOT security. A marketing company will never make a "secure for the end user" OS because that is not their business; data collection and marketing is. If the OS were secure they could not collect data.
It's silly to buy into the hype of them promoting security, when the reason for the OS is the opposite. (well unless they are talking about their financial security)
Back when they were calling for all of this, we knew what it would look like and how useless it would be. I warned them, assuming any of them read comments sections:
"If you construct [any new databases or systems you think people need] to lock in developers, expect to be snubbed."
Well, they did and we will. Nice going.
"The company wants to see more discipline and checks in critical open-source software, and revealed that it maintains its own private repositories for many projects to guard against compromised code or newly committed vulnerabilities."
As a former Googler, I'm calling BS. At least, it had better be. At the time, ALL code that the build system used was in Perforce. Precisely 0 outside calls were permitted. And I can imagine 0 reasons to change that. Calls to the outside during builds are a vulnerability both as to availability and as to security. Full stop.
I understand that small companies might not have the bandwidth to set up the appropriate mirroring architecture. This is one thing that Google has had right for quite some time.
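(A minimal sketch of that kind of mirroring, in Python driving git; the repo list and paths are hypothetical, and a real setup would also pin commits and verify what it mirrors.)

```python
import subprocess
from pathlib import Path

# Keep local bare mirrors of upstream dependencies so that builds only ever
# fetch from machines we control, never from the public internet.
# The repository list and mirror path are hypothetical.
UPSTREAMS = {
    "libfoo": "https://example.org/libfoo.git",
    "libbar": "https://example.org/libbar.git",
}
MIRROR_ROOT = Path("/srv/mirrors")

def sync_mirrors():
    for name, url in UPSTREAMS.items():
        target = MIRROR_ROOT / f"{name}.git"
        if target.exists():
            # Refresh an existing bare mirror from upstream.
            subprocess.run(
                ["git", "-C", str(target), "remote", "update", "--prune"],
                check=True)
        else:
            # First-time clone as a bare mirror.
            subprocess.run(
                ["git", "clone", "--mirror", url, str(target)],
                check=True)

if __name__ == "__main__":
    sync_mirrors()

# Build configuration then points at file:///srv/mirrors/<name>.git, so an
# outage or a compromise upstream cannot reach into a build.
```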
It sounds like they are doing what you said they're doing. They maintain private repos, where they store the code and don't allow any outside calls. Which you corroborated. I'm curious why you're doubting their statement when your experience is that they're already doing it?
Oh, I read that differently. I assumed "for many projects" meant "for many open source projects that they use internally, but not all of open source since there are projects they don't use". I'm assuming you think it's referring to "for many internal Google projects". I can't tell which reading is correct.
If Google wants open source projects to fix critical vulnerabilities then Google should pay the maintainers.
Tools and gimmicks only go so far to help.
The limiting factor is time and time is money even for open source projects.
As part of my day job, if I find a problem in a piece of open source software and it's critical to me, I push a fix upstream, or an RBT with bisect info if I can't figure out how to fix the issue.
I've only managed once to get management to pay for a fix. That was one of the most painful experiences ever... even though it was significantly more cost effective than trying to fix the problem myself.