
Did MS retire the updates?
I can't see them in Windows Update now.
Microsoft's first Patch Tuesday of 2022 has, for some folk, broken Hyper-V and sent domain controllers into boot loops. A Register reader got in touch concerning KB5009624, which they said "breaks hypervisors running on WS2012R2." "I'm currently dealing with this right now and it's a hassle," our reader said. "After several …
"No, they retired (*Cough*Fired*Cough*) their entire QA testing department " HR later went to their office to tell them the bad news, however they couldn't be found. It later transpired, there was no QA or testing department, the employees were canned years ago to save costs.
Very few have the time and resources to run a test environment that replicates everything in production.
Many don't have any resources for testing unless it's code their company has created or commissioned.
Sure, you don't deploy to every box at once, and you start with the most trivial machines, but those aren't going to be DCs and Hyper-V hosts, so you wouldn't find the issue in a low-pain fashion.
For the sums of money we are paying MS, we expect them to do a reasonable degree of testing.
In this case, it's clear they didn't. The scenarios that result in the patch breaking fundamental Windows components are not rare edge cases.
Even in a test environment, the DC issue didn't fire at install time or on the initial reboot; it apparently causes the machine to crash and restart periodically after startup.
So you would need to let it run in your test lab long enough to crash and reboot a few times. Most organizations likely can't devote that much testing to every patch run, so it slipped past some of them. The Hyper-V thing is more glaring, but it showed up about when you would expect for tiered deployments.
Storage backends and hypervisor hosts are always some of the last things we get to patch, both because the impact of a bad patch is high and because of the need to take down so much of the rest of the deployment to get to them.
So the patches get applied like reverse growth rings, even in testing. If a bug only shows up on the equipment in those last, inner tiers, then we get what we saw with this month's patches.
Instead of pointing fingers at each other and our test environments, we need to start pointing fingers back at M$. Yes, they need to raise their QC game, but they also need to unbundle the individual fixes so we can roll back a single issue w/o having to remove every fix from that month.
The attackers are already reverse-engineering the patches MS issues, and can weaponize any of those exploits in a couple of days. Several of those were preview-pane-level nasties that can trigger automatically. That level of exploit should be released as a spot fix, separate from the rollup, so the most serious threats can be addressed even when there's a problem with the monthly rollup patch.
>"we need to start pointing finger back at M$. Yes they need to raise their QC game, but they also need to unbundle the individual fixes so we can roll back a single issue w/o having to remove every fix from that month.
The attackers are already retroengineering the patch they issued, and can weaponize any of those exploits in a couple of days."
This is a big problem that MS et al need to address. Events like this mean more people will delay implementing server updates, so we have both an increased window and an increased population at risk from the attackers reverse-engineering the patches. Unbundling to increase fix granularity will help, as will up-to-date security suites.
Now, now, let's not let common sense cloud our thinking! The real question we should be asking is "why haven't you moved all of your business data up to The Cloud(tm) and let MS worry about the infrastructure instead?" /sarcasm
There are reasons why we lag a month behind on patches, and this crap is one of them.
This is still the long shadow of the dissolution of the "Trustworthy Computing Initiative" inside Microsoft. While that program certainly needed further internal reforms, it gave quality issues a high-level voice inside the company. That team was dissolved and spread around to other teams, on the (now proven flawed) notion that its members would spread their skills and knowledge to all of the other teams.
Instead, they ended up as junior members of established groups with no formal or soft power to effect change within those new teams. Most were ignored and many moved on, but the impact goes on and on. Worse, even if the essential parts of that program were reinstated today, it would take a decade with mop and bucket to catch up on the cleanup.
It really depends on your risk appetite, since MS hotfixes are known to often have problems.
But that sort of thing should be brought out during the change management process, if it is anything more than a box-ticking exercise.
And presumably everybody has pre-release environments to test these things on? :-)
P*ssed because some client admin decided the Hyper-V host and associated VMs should get updates auto-installed, so calls started around 3pm about loss of service.
Result on WS2012R2:
1. Lost Hyper-V due to the update - but simple to resolve.
2. Restarted VMs were in the process of installing updates...
The Domain Controller boot loop was a bugger to resolve - getting the DC VM sufficiently stable that KB5009624 could be removed (a rough sketch of the removal step is below).
Exchange 2013 (on WS2012R2) seemed to have difficulty digesting the updates, getting stuck in a reboot loop for a few hours.
Other WS2012 VMs seem happy...
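For anyone hitting the same boot loop, here is a minimal sketch of that removal step. Assumptions: WS2012R2, where the monthly rollup can still be pulled with wusa.exe (newer builds may need DISM's remove-package instead); an elevated session on the DC that stays up long enough to run it; and Python used purely as illustrative glue around the built-in tools.

    import subprocess

    KB_NUMBER = "5009624"  # the January 2022 rollup discussed in this thread

    def kb_installed(number: str) -> bool:
        # Get-HotFix lists installed updates; check whether the rollup is present.
        ps = "Get-HotFix | Select-Object -ExpandProperty HotFixID"
        out = subprocess.run(["powershell", "-NoProfile", "-Command", ps],
                             capture_output=True, text=True, check=True).stdout
        return f"KB{number}" in out.split()

    def remove_kb(number: str) -> None:
        # Quiet uninstall via wusa.exe, with the reboot deferred so it can be scheduled.
        subprocess.run(["wusa.exe", "/uninstall", f"/kb:{number}",
                        "/quiet", "/norestart"], check=True)

    if __name__ == "__main__":
        if kb_installed(KB_NUMBER):
            remove_kb(KB_NUMBER)
            print(f"KB{KB_NUMBER} uninstall requested - reboot when convenient.")
        else:
            print(f"KB{KB_NUMBER} is not installed.")

If automatic updates are on, the rollup will likely come straight back on the next scan unless it is declined in WSUS or hidden first, so that's worth doing in the same maintenance window.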
While this started off in the right direction, it appears to be a dead duck. The biggest red flag was when they released the new format and the disk repair and recovery tools were missing.
If the people who built a filesystem didn't support fixing and recovering it from the beginning, it will probably (A) fail as a project and (B) take your data with it. Not because the lack of developer-supported tools will sink the project on its own, but because the "field of dreams" model doesn't work for low-level disk formats, and because the team clearly failed at basic use case/requirements analysis if they put the cart before the horse and released the filesystem without the repair/resize/snapshot/convert-back parts already sorted out.
And this morning I sent out an email detailing this incident, stressing the need to seriously look at switching away from Windows, as this is a risk we cannot ignore.
Imagine your (critical) production environment going poof after applying a not-so-kosher update... Been there, done that, don't want any repeats, kthanxbai.
Or worse still, ReFS showing as RAW, and somebody thinking a quick format will resolve things... (one poster above commented that there are no repair tools or recovery software available for ReFS, so what's the use of it then?)
Even more so if it is a remote server in a remote location and somebody needs to be sent out to do the gefingerpoken at the server in order to fix things... most of us most certainly do not need extra fun and games of this sort.
Still running Win 7 here (plus Mint of course) and getting ESU updates (we all know how, don't we?).
The security rollup for .NET arrived this week and seemed to install OK. Then I tried to load a movie for editing in AVIdemux 2.7 - the file opened, indexing took place, then instead of displaying the first frame the application just crashed.
Tried a clean install of v2.8 and the problem persisted. Rolled back the .NET update and everything is fine - $deity knows what M$ did there.
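For what it's worth, a rough sketch of how that rollback hunt can be scripted is below. Assumptions: a stock Windows box; PowerShell's Get-HotFix and wusa.exe doing the actual work; Python only as glue; and the KB number left as a placeholder, since the poster doesn't name the specific .NET rollup.

    import subprocess

    def show_recent_updates() -> None:
        # List installed hotfixes, newest first, so the suspect rollup stands out.
        ps = ("Get-HotFix | Sort-Object InstalledOn -Descending | "
              "Select-Object -Property HotFixID,Description,InstalledOn -First 10 | "
              "Format-Table -AutoSize")
        print(subprocess.run(["powershell", "-NoProfile", "-Command", ps],
                             capture_output=True, text=True, check=True).stdout)

    def uninstall_kb(kb_number: str) -> None:
        # wusa wants the bare number, e.g. "5009624" -> /kb:5009624.
        subprocess.run(["wusa.exe", "/uninstall", f"/kb:{kb_number}",
                        "/quiet", "/norestart"], check=True)

    if __name__ == "__main__":
        show_recent_updates()
        # Eyeball the list, then uninstall the offending rollup, e.g.:
        # uninstall_kb("<number of the .NET rollup on your box>")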
What, .NET updates break software? INCONCEIVABLE! .NET is PERFECT!
.. sorry, I can't continue, I'm laughing too hard.
Especially when installing a .NET patch breaks little things like, say, Exchange, and the only fix for it is to install the latest version of .NET so that you can install the latest CU, and then PRAY that it all works.
::wanders off grumbling::