# We're almost into the third decade of the 21st century and we're still grading security bugs out of 10 like kids. Why?

The way we rate the severity of computer security vulnerabilities and bugs needs to change to better protect people and businesses from malware and cyber-crime. So says Marc Rogers, executive director of cybersecurity at Okta and head of security at the world's biggest hacking conference DEF CON. Speaking to The Register at …

1. #### The logical next step is the two-dimensional risk rating approach

This is already well established; risks are assessed both on likelihood and consequence. So this could capture how vulnerable the issue makes a system (e.g. ease of exploit, local access required, etc.) as well as the severity or harm if exploited.

It's a reasonable criticism that 1-10 over-simplifies, but you don't want to over-complicate things, either; that way lies analysis paralysis.
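For illustration, the likelihood-and-consequence idea can be sketched as a toy risk matrix. The thresholds and bands below are invented for the example, not taken from any published standard:

```python
# Hypothetical likelihood x consequence risk matrix (illustrative
# thresholds only, not from any published framework).
def risk_rating(likelihood: int, consequence: int) -> str:
    """Both inputs on a 1 (lowest) to 5 (highest) scale."""
    score = likelihood * consequence  # ranges 1..25
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# An easy exploit with limited harm, and an unlikely but severe one,
# get rated on both axes rather than a single number:
print(risk_rating(5, 2))  # easy to exploit, modest impact -> medium
print(risk_rating(1, 5))  # hard to exploit, severe impact -> low
print(risk_rating(4, 4))  # likely and damaging -> high
```

The point of the two dimensions is visible in the examples: a single 1-10 score would collapse the first two cases into similar-looking numbers, while the matrix keeps the "why" attached.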

1. #### Re: The logical next step is the two-dimensional risk rating approach

What tends to happen is that someone comes up with a clever multi-factor way to score the problems. For simplicity, that's then boiled down to a 100-point score. Then the fact that there's no difference between a score of 90 and 91 means it's boiled down to a 10-point score!

Then the 10-point score is considered too simplistic, so we add a number after the decimal point...

1. #### Re: The logical next step is the two-dimensional risk rating approach

Then my management only want risks in terms of green/amber/red...

1. #### Re: The logical next step is the two-dimensional risk rating approach

Our colour system is the wrong way round. We've made the highest-energy colour the "safest", and the lowest-energy one the most dangerous. So if something is serious I have to go, "Boss, this one is infra-red." Or "Boss, this one is radio waves."

But if we matched danger with energy then I'd always have the option of going "Boss, this is ultraviolet." Or "Boss, this one will vaporise half your fucking face."

1. #### Re: The logical next step is the two-dimensional risk rating approach

Blame that on nature.

Generally speaking, in nature, dangerous (poisonous or venomous) animals tend to be brightly coloured, with red liberally used.

Therefore it is natural to associate red with danger.

If you reverse it, you will have problems. Try teaching kids that in nature anything red should be avoided, but in industrial/technical contexts, red is good and green is bad...

So, avoid that snake with a red belly. But that bright red button is the good one, the one that opens the door, but the green button is the emergency "vent reactor" button.

2. #### Re: The logical next step is the two-dimensional risk rating approach

Please look closer. CVSS scoring is already multi-dimensional. It is essentially polynomial.

The problem is that end-user news articles only show the CVE numbers and final CVSS summary scores. Without direct links to the CVE report and the CVSS calculator, the one-dimensional summary score tends to dominate the discussion. I've called for El Reg to improve on this, but more of us need to make a stink about it--instead of just shouting into the wind, like the prophets of the boomer generation (yeah, that includes me).

I'm more than ready to see a CVSS version 4, 5, 6, etc. Evolution is inevitable. But, more of us need to demand better from the places where we discuss these things. Are you listening El Reg? We love you, but step up.

1. #### Re: The logical next step is the two-dimensional risk rating approach

You're referring to the vector, which for most normal human beings is basically meaningless.

E.g.

CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

I can't see how a news article becomes more useful to an average person if they post that instead of the base score.

Perhaps if one or more of the corners of "THE ALMIGHTY TRIAD" were next to the score to balance things, the base score might be more useful.

1. #### Re: The logical next step is the two-dimensional risk rating approach

CVSSv3 also incorporates a base, temporal, and environmental score. Most outlets don't do a good job of reporting or explaining those.

While publishing the vector isn't useful for human readers (that's not the intended audience), there's nothing to stop someone from providing a concise text explanation.

CVE-2019-xxxx has a CVSSv3 base score of 10 (over the network, easy to attack, no privilege required, no user interaction required; high risk to confidentiality, integrity, and availability).

Obviously there's still some jargon, or at least terms of art, in that, but you don't have to be an IT security expert to understand it.
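A plain-language rendering like that is easy to mechanize. Here is a minimal sketch; the metric names and value abbreviations follow the CVSSv3 base-metric definitions, but the English phrasing of the output is my own:

```python
# Expand a CVSSv3 base-metric vector string into plain English.
# Metric/value abbreviations per the CVSS v3 specification; the
# wording of the output is invented for this sketch.
METRICS = {
    "AV": ("attack vector", {"N": "network", "A": "adjacent", "L": "local", "P": "physical"}),
    "AC": ("attack complexity", {"L": "low", "H": "high"}),
    "PR": ("privileges required", {"N": "none", "L": "low", "H": "high"}),
    "UI": ("user interaction", {"N": "none", "R": "required"}),
    "S":  ("scope", {"U": "unchanged", "C": "changed"}),
    "C":  ("confidentiality impact", {"N": "none", "L": "low", "H": "high"}),
    "I":  ("integrity impact", {"N": "none", "L": "low", "H": "high"}),
    "A":  ("availability impact", {"N": "none", "L": "low", "H": "high"}),
}

def explain(vector: str) -> str:
    parts = []
    for item in vector.split("/"):
        key, _, value = item.partition(":")
        if key in METRICS:  # skips the "CVSS:3.0" prefix
            name, values = METRICS[key]
            parts.append(f"{name}: {values[value]}")
    return "; ".join(parts)

print(explain("CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))
```

Run against the vector quoted earlier in the thread, it yields the same sort of summary a careful journalist could write by hand.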

You left off the temporal and environmental scores in your example vector, so explaining those in plain language is left as an exercise for the reader.

It's all very well for Rogers to say we need a different scoring system and representation, but CVSSv3 does incorporate a threat model, and considering combinations of vulnerabilities quickly falls foul of combinatorial explosion. While he raises some good points, and while theoretical speculation is useful, it won't get us very far until someone has a concrete proposal. I'd say that CVSSv3 does a good job at the function it's intended to perform; that function is valuable; and interpreting combinations of vulnerabilities under richer threat models is the job of human experts, not a mechanical scoring system.

2. #### Re: The logical next step is the two-dimensional risk rating approach

Those are fair points, but I don't think that means we can't have a parenthetical link to the calculator for those of us who can deal with it, in addition to whatever manner the writer chooses to summarize. El Reg is actually pretty good at going beyond a 0-to-10 score with some explanation that hints at the details in the vector, in terms that work for most people. And those who are ready to step up to the next level might well be encouraged by a link. The industry could well be served by having more people who are informed.

3. #### Re: analysis paralysis

Excellent phrase, sir. My uncharitable attitude to Vulnerability Management (and pen-testers and auditors) is that they're paid to find and "manage" issues, but not to help fix them.

If a company has a CMDB and a regular patching & hardening program, it's relatively easy to accelerate the process and the schedule to handle out of cycle fixes. If you don't have those in place, you'll waste a lot of time analysing, planning and firefighting.

Posted as AC, but my colleagues in Vulnerability Manglement will probably know it's me anyway - I've said it to their faces a few times.

1. #### Re: analysis paralysis

Yes, they're paid to find and manage, because if they also get involved in fixing there's a massive conflict of interest.

Let's flip the whole thing round from your perspective. I'm assuming you're a sysadmin or engineer.

Let's say 6 months ago you finished upgrading the network and you've been continually patching and monitoring ever since to the best of your ability.

Your boss has asked you, the sysadmin, to perform a vulnerability assessment on the network. It's all going well until...whoops, you find a bunch of vulnerabilities and threats you didn't realise were there and frustratingly some of them exist because of decisions you made when you performed the upgrade.

How do you report this to your boss without lying?

Do you come clean and admit to the fuck up and come off as an incompetent sysadmin?

Do you secretly fix the problems and report that there were no problems found and hope that no problems will be found in the future to highlight your poor upgrade management?

The idea is to keep things objective and continually improve processes, policies and practices.

1. #### Re: analysis paralysis

I'm all for independent auditors, but I've come across my share of corrupt and/or incompetent ones. Example: a web event registration application that used /errorlog.txt to record every possible piece of information, including credit card number, expiration date, cardholder name, and CVV code. The PCI-DSS auditor (an independent, fairly well-known company) signed off on it.

Also, in my entire IT career (going back to the 90s) I have NEVER seen an auditor ask the grunts any questions. In my experience the average admin/engineer knows about a bunch of issues, but they never get catalogued formally. Even communicating them up the chain tends to do nothing more than make oneself hated across all tiers in the company.

4. #### Re: The logical next step is the two-dimensional risk rating approach

I don't think the scoring is the real issue. The real issue is the way risk is determined.

Most security frameworks define risk as threat * vulnerability.

The trouble with that is that vulnerabilities are generally easier to identify than threats.

That's what I think this researcher is pointing out: you can have a couple of relatively low-scoring vulnerabilities which won't be prioritised because they aren't aligned with some sort of high-scoring threat, thus producing a low risk rating in a risk assessment, which in turn leads to the vulnerabilities not being dealt with or compensated for.

I think the vulnerability scoring system is... OK. Threat identification, not so much. With threat identification being as woolly as it is, risk assessment becomes less accurate and more prone to error.

2. #### Who is "we", Kemosabe?

"We need to come up with a more dynamic process that takes in the CVSS score, but also factors in knowledge from the system."

Those of us in the business have been doing that very thing for decades, usually without bothering with the CVSS score as the superfluous thing that it is. Got anything new to share with us, Rogers?

(Note: It matters not one whit what you tell Management; all they care about is the bottom line. If your solution can turn a profit, you're Golden. If it's a cost center, they will fight it tooth and nail. Turn broken security into a real, not imagined, cost center (with numbers!) and Bob's your Auntie.)

1. #### Re: Who is "we", Kemosabe?

Indeed

Chaining "low level" bugs together to own a machine has been a thing for a long time.

Everyone with even a casual interest in the field (e.g. me, as I'm in IT but not directly involved in security issues on a daily basis) knows that it's not just a look-at-the-numbers game.

2. #### Re: Who is "we", Kemosabe?

While what you said is absolutely true, I think he's getting at the fact that some companies operate on the assumption that security breaks things, and that to them the downtime risk matters more than a security breach. The only recourse the security team has is to present management with some formal notion of gross negligence, using the number as a cudgel to put them individually at risk. That number can often serve as the trigger point for gross negligence, forcing them to divert resources, delay a product release, etc., when they would otherwise jam things through because "we haven't been hacked yet". I'm not saying that companies which work that way are right, or even sane, but that's the way it often is.

3. #### Re: Who is "we", Kemosabe?

Right. When we do a device failure risk assessment, we have to boil everything down to three levels, and even then assign them colours (Red, Yellow, Green) for the mangler's benefit.

3. #### The task sounds enormous

So, two low-scoring vulns could be combined into one big problem. Sure, theoretically, but how do you evaluate just how many low-scoring things can be combined, and in what ways, before you can rate all of them properly?

Security is always in hindsight. We know to look out for privilege escalation issues because some hacker one day taught us that they work. We have a body of knowledge today that is certainly impressive, and it will be one hell of a task to knit all that knowledge together into a proper rating system. But there is no such thing as fully automating risk evaluation - it has to be analyzed by a human, and humans don't know everything and are rather bad at taking hundreds of parameters into account at once.

It is obvious the CVSS is not very valuable, but crafting a good replacement is going to be a massive headache. And yet, it should definitely be done. Good luck with that, then.

1. #### Re: The task sounds enormous

"It is obvious the CVSS is not very valuable"

Rubbish. CVSSv3 serves a number of important purposes:

- It encourages various judges (original researchers, PSRT members, secondary researchers investigating CVEs and patches) to examine vulnerabilities from a variety of angles and consider a number of important aspects.

- It provides a measure of consistency in describing and evaluating a number of critical attributes of vulnerabilities, and a shared and well-defined vocabulary for discussing those attributes.

- It provides a multidimensional rating mechanism that, while necessarily simplified, assists in triage and discussion with non-experts.

- It also constitutes an industry-standard representation of those things, so we can avoid duplication and miscommunication among different organizations.

- It gives us a machine-readable representation, amenable to various sorts of automatic processing.

Frankly, I'm rather dubious about the IT-security credentials of anyone who dismisses CVSS. Standardization is critical for industrial scaling and efficiency.

4. We're almost into the third decade of the 21st century and we're still grading security bugs out of 10 like kids. Why?

Because it gives an easy and quick method of identifying their severity. However, as Jake says above, if you only rely on the CVSS then you're doing it wrong.

1. For any non-trivial task, if you rely exclusively on one tool you're almost certainly "doing it wrong". This is a facile observation and not a meaningful objection to CVSS.

5. #### Marc Rogers is right.

The scale should go up to 11.

6. #### It's not that simple though is it

There is value in a simple message: your out-of-date software scores a CVSS of 10, therefore it is "critical", therefore you must fix it.

That's a nice simple message to give to non-technical decision makers.

The nuance around the 'risk' presented by that vulnerability is much harder to articulate, and is probably best managed by whatever means is most appropriate for the organisation. That doesn't remove the value of the simple scoring mechanism; it just puts an onus on security staff and accreditors to use that information properly.

7. #### Six Degrees of Kevin Bacon

Perhaps it needs a method to calculate a degree of separation: how far away from the bad bugs is this trivial bug?

I claim it and I'll name it the Tremor Scale of vulnerabilities.

8. #### Not like kids...

Kids do it in stars - min 1, max 5.

They cannot count as far as 10.

9. #### Y2K?

"We're almost into the third decade of the 21st century"

I thought that error was fixed years ago. There is still more than 10% left of the second decade.

I blame the artist formerly known as squiggle for partying too soon.

1. #### Re: Y2K?

Yes. To be pedantic (I am very OCD), our calendar system begins with year 1, not year 0. In the B.C./AD dating system it went from 1 B.C. to AD 1. So, we are coming up on the end of the second decade in 2020. Third decade will begin 2021. Did I beat that to death? :-)

10. FMEA, anyone?

11. Simple answer: because a score from 1-10 is easy for most people to understand. It may not be the best method, but telling people there is a severity 9 bug they need to plug (i.e. upgrade software) is going to be a lot more effective than (e.g.) a 14R8-level bug.

1. Maybe outside of the USA. Here they think in F-A, which is notoriously fuzzy. In addition, A for Effort is a very real problem.

On the other hand, in the USA nothing gets the message across better than walking into a bank branch (or executive office) with a color printout from ssllabs.com showing a nice, big, red F...

12. Tenable already have a new system called VPR (Vulnerability Priority Rating), which taken on its own works pretty well.

1. #### re: tenable

What do they overcharge for it?

13. #### A threat centric approach

This is a bit of an out-of-date perspective, talking about a problem that already has solutions. Multiple approaches already exist for improving the prioritisation of vulnerabilities, augmenting CVSS's lack of real-world information on the probability that a vulnerability is being, or will be, exploited. Tenable has "Predictive Prioritization" in their enterprise products, and Kenna's "Prediction Model" is available to sit on top of other VM vendors' data. Both use ML to estimate the likelihood of exploitation, irrespective of the CVSS base and temporal metrics, to better focus on the small handful of vulnerabilities that will actually be leveraged, versus the massive number that are disclosed but will never be favoured by attackers.

It's not the scale that's the problem, it's the lack of real world context.
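As a sketch of what "real-world context" buys you, one simple approach is to weight a static severity score by an estimated probability of exploitation. All numbers below are invented for illustration; this is not any vendor's actual model:

```python
# Hypothetical prioritisation: blend a static severity score with an
# estimated exploitation probability (both values invented here).
def priority(cvss_base: float, p_exploit: float) -> float:
    """cvss_base on 0-10; p_exploit in [0, 1]."""
    return cvss_base * p_exploit

vulns = {
    "CVE-A (critical, never seen in the wild)": priority(9.8, 0.02),
    "CVE-B (medium, actively exploited)": priority(6.5, 0.80),
}
# Highest priority first: the actively exploited medium bug outranks
# the theoretical critical one.
for name, score in sorted(vulns.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.2f}  {name}")
```

The ordering flips relative to raw CVSS, which is exactly the effect the exploit-prediction products are selling.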

1. #### Re: A threat centric approach

Kenna are spamming scumbags and leech off fear and confusion.

And yes, I can demonstrate that in court if they want to dispute it.

1. #### Re: A threat centric approach

That may be true, but it says nothing about whether their predictive model is a useful tool, much less whether the approach in general can be useful.

1. #### Re: We're almost into the third decade of the 21st century

In a hundred years time people will look back at our times in the same way we look back at 1919.

1. #### Re: We're almost into the third decade of the 21st century

With binoculars?

1. #### Re: We're almost into the third decade of the 21st century

Not binoculars. A spotting scope.

15. #### helicopter

It will be made as complicated as possible, completely unusable, have governments' highest approval, and be made mandatory. Half the InfoSec peeps will say F this and take jobs that don't stress them out so much.

Great plan. Just like making a helicopter: a million parts flying in formation, and when one goes bad, they all do.

16. #### Can't say that I agree at all

The point of the scoring system is to draw attention to the bloody obvious. In general it works, for reasons already stated in the comments: it is simple enough to understand.

If your organization actually takes risk management seriously, it has in-house staff to do the scoring in the context of how it affects the organization. Risk and threat modeling is not something I would expect someone outside my organization to understand as far as it applies to my organization, because they lack the information to do so. As a general rule I would not expect any agency or third-party organization to produce a valid scoring for my organization (auditing etc. not included). They can't, and even if they could, it would not scale to try to keep such a vast library up to date. The simplicity scales.

17. #### Same old, same old...

Since the days of Windows 95, upon booting the system checks/maps network drives BEFORE initialising the WiFi. The result is an error report/notification upon a reboot.

It isn't as if Microsoft haven't noticed this: I reported it as a tester on Windows 95, NT4 and Windows 10. When the simple stuff is never done because the 'bells and whistles' to attract non-IT-savvy users have priority, we will forever be plagued by bugs that never seem to be correctly prioritised.

Just saying...

18. #### Borderline meaningless

No one who has spent any time thinking about this issue uses CVSS scores as anything other than a rank order for evaluating bugs that came in overnight. I've never gone to management and said, "Boss, we have to get on this one--it's a CVSS 7!" or 8 or 9 or 10. If a point upgrade of the associated libraries is not enough, you evaluate the vuln in the context of your business and, depending on how much process is needed, present the case to manglement about the actual threats the business faces relative to the known vulnerabilities in the system.

And while I'm in a bad mood: why are people talking about vuln chaining as if it were new? Maybe it's a civilian mindset that I miss, but you infiltrate the same way you build--one step at a time. This has always been the case. Again, I've never said, "Oh, it's just a privilege escalation bug" or "Oh, it's just an unauthorized access issue." Every vuln in your system is a stepping stone to the next (potentially currently unknown) bigger vuln.

19. #### Not helped by bad scores

We’re seeing a spate of CVEs with bloody stupid CVSS scores recently, not helped by bloody stupid bugs that aren’t security issues.

For example, flaws that require physical access to exploit are being marked as exploitable over the network which pushes the score up. Bugs that fail to check a return value during boot are also being marked as network exploitable even though there is no network at the time and no practicable way to exploit the thing anyway.

NVD are unwilling to address this problem, so the scores are rapidly becoming useless, and we see a lot of people asking for better ways of scoring. I’d settle for NVD accepting their limitations and asking experts for help to score vulns.

It’s exasperating.

20. #### Gave up . . .

I gave up this far into the report: ""The challenge is the whole vulnerability management space has been evolving," Rogers said, "but it is being outpaced by the evolution of how we leverage attacks."

Over the years I've developed a distinct aversion to anything about levers, leverage and leveraging along with curator, curating and curation and all the other bumfluff that's taken root in the vocabulary of the loquacious if not the intelligible.

I'm beginning to think it's time we, all of us, embarked on a journey in which others regardless of ethnicity, faith and gender join in The Conversation about what a lever is and what a lever does.

21. #### "still ranking bugs from nought to ten"

The CVSS, for all its shortcomings (and there are some) has a resolution of one percent although the support documentation advises that its accuracy may be lower than that. Fair enough, it's handling a lot of rather imponderable parameters. However, no organisation I have ever worked with that used the CVSS has ever calculated the environmental score, which is intended to address the local susceptibility of the target to the technological vulnerability being attacked. Its lack of use is probably why it's remained rather a crude metric as the rest of the CVSS has evolved quite well.

In reality, practically no business for which there's a reliable post-breach incident report has shown up as robust or resilient. Most have been wide open to the supposedly "sophisticated attacks" which have usually turned out to be relatively trivial to execute.

The most important consideration in security is resilience against the unexpected, but almost all proactive effort so far has been directed at modelling the adversary rather than the target. A more sophisticated (and much less purely technocentric) CVSS environmental metric would be an excellent move towards resilience, but only if people were prepared to use it.

22. CVSS has flaws for sure, but it's easy to say it sucks and not offer a better solution. For me, I find the subscores/vectors very useful: I combine them with threat, business and technical impact, and internal controls, in a heavily modified version of the OWASP risk rating that is more contextual and useful than Tenable or Kenna prioritization alone. For instance, I'm classifying assets and business drivers, or other risk indicators like safety and grid reliability, to drive this logic. You can't do that with a generic prioritization score. Check out Fortress Information Security and you will be pleasantly surprised: https://fortressinfosec.com - or you can always visit OWASP and roll your own.
