It's a bug ...
as long as it is fixed in a shortish time, don't make too much of a fuss.
Oops! Microsoft has published an advisory on a bug in its Internet Information Services (IIS) product that allows a malicious HTTP/2 request to send CPU usage to 100 per cent. An anonymous Reg reader tipped us off to the advisory, ADV190005, which warns that the condition can leave the system CPU usage pinned to the ceiling …
I was looking to snark all over it, but after reading the article, it's like "meh".
Glad it's fixed, anyway.
It's not like that 'Code Red' thing was, from (nearly) a couple o' decades ago, at any rate. That thing went unpatched for YEARS by end-users and created a LOT of intarweb traffic...
This post has been deleted by its author
Exactly. That min/max is pretty damn broad. They should have a reasonable default setting to kick off with. I can run a basic IIS server, but I'm afraid the fine detail of protocol implementations is beyond me. (And no, I'm not that interested in getting to that level of detail either - I'm a mechanic, not an engineer.)
Also, I'm surprised it's not a security update, considering the flaw can DoS your system. I get that it's a "bug", but surely security flaws are also "bugs". I say this from a general philosophy of being cautious about applying feature updates to servers, while always applying security updates in a timely fashion - I know I'm not the only one.
For Http2MaxSettingsPerFrame, given the minimum value is 7 and there are only 6 defined SETTINGS options, I guess that means setting it at the minimum, or just above in case of future enhancements? Sounds dangerous if that's the case - surely IIS should set it based on the supported SETTINGS options and allow sites to override it if required.
There's also Http2MaxSettingsPerMinute, which seems friendlier: 7 x the number of expected clients per minute, and I assume you bump it up or down if you see issues. I would have thought MS could calculate a value based on CPU speed, which could then be overridden by sites that needed to adjust it, rather than leaving it as an exercise for the reader...
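If you'd rather script the change than poke the registry by hand, here's a minimal sketch - assuming I've read the advisory right that both settings are DWORD values under HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters; the numbers used here are placeholders to tune for your own traffic, not recommendations:

```python
# Minimal sketch: set the two HTTP/2 SETTINGS thresholds from ADV190005 in the
# Windows registry. Run elevated. Path and value names are as I read them in
# the advisory; the numbers are example placeholders, not recommendations.
import winreg

HTTP_PARAMS = r"SYSTEM\CurrentControlSet\Services\HTTP\Parameters"

thresholds = {
    "Http2MaxSettingsPerFrame": 7,      # the advisory's stated minimum
    "Http2MaxSettingsPerMinute": 4096,  # placeholder: roughly 7 x expected clients/min
}

# CreateKeyEx opens the key, creating the Parameters subkey if it isn't there yet.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, HTTP_PARAMS, 0,
                        winreg.KEY_SET_VALUE) as key:
    for name, value in thresholds.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        print(f"Set {name} = {value}")

# HTTP.sys only picks the new thresholds up after the HTTP service is
# restarted (or after a reboot).
```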
HTTP/2 is a highly complex protocol, so it's very unlikely we'll see a fully correct implementation within the next few decades. On the other hand, laboratory tests show only about a 30% performance improvement compared to unoptimized normal HTTP.
If I were a secret service, I'd do my best to promote HTTP/2, as it'll mean lots of bugs and therefore many exploitable security issues. Any kind of complexity increase helps those who want to exploit it.