
Clusterfuck
Techies are scratching their heads after Red Hat pulled a CPU microcode update that was supposed to mitigate variant two of the Spectre design flaw in Intel and AMD processors. This U-turn follows VMware, Lenovo, and other vendors stalling on rolling out microcode patches after Intel admitted its firmware caused systems to …
Very clusterfuck.
I've seen a similar thing with the VMware ESXi hypervisor. Last week they issued a patch supposedly to cover Spectre/Meltdown. This week they recalled the patch, saying it contained vulnerable microcode from Intel.
A similar pattern with vendors, presumably unable to solve a very complex issue quickly without Intel getting their own shit together first.
Exactly this. VMware have pulled patches (https://kb.vmware.com/s/article/52345), as have Lenovo (https://support.lenovo.com/gb/en/solutions/len-18282), due to the bug in Intel's microcode fix. I'm not sure why the article suggests this is anything else.
Why stop at getting rid of systemd? How about getting rid of multithreading/multiprocessing altogether? That would have avoided all these Spectre/Meltdown-type bugs and would provide a more genuine 70s-era feel than going back to init.d could ever hope to do.
cs9, I'm all for change. Change is good. But only when that change is actually good. Look at the source for init, and think about what, exactly, this little bit of code actually does. It's really quite simple and elegant. You can read it for yourself; it's not all that difficult to understand. One tiny little thing that does one job, and does it well. It's almost a poster child for what un*x code is all about.
Then I look at the grossly overweight (and growing by leaps and bounds!) clusterfuck called systemd, with all its bells and whistles and unnecessary hangers-on, with more getting press-ganged into it on a regular basis with no rhyme or reason ... and I just say no. It's an accident waiting to happen. I want nothing to do with that kind of sloppy shit anywhere near my PID1.
You think you had it bad? Our old vCenter Server wouldn't take a bloody upgrade, so I've been running around like a madman deploying a new one and cutting over the hosts out-of-hours so that I can install a later version of ESXi without losing management of the host.
All of the pissing about trying to get the vCenter Server VM onto the vDS without it falling over. All of the hassle cutting the hosts from the old vDS to the new one (shutdown, reconfigure network, restart each one). Then Veeam has (understandably) decided that these are all new VMs and have nothing to do with the existing backups, so our backups have suddenly exploded in size. Some of the backups were still running into working hours the other day as Veeam tried to catch up. Now VMware's Update Server wants to remediate all the VMs with a new version of VMware Tools, so cue another bout of updates...
All I wanted was a nice simple patch for an old version of ESXi. Actually, ignore that. All I wanted was for vCenter to actually accept the update that I've tried putting on it a dozen times... So yeah, I've put in a lot of hours over the last couple of weeks, I'm tired, and it'd better bloody not be all for naught. :-/
I had to rebuild a vCenter Server last year due to OS corruption in Windows. The database for VMware lives on a separate Linux host with Oracle. I used the same vCenter version, but the process itself was straightforward (reinstall and connect hosts). No complications. No issues with the vDS or anything else. I spent a lot of time trying to repair the main vCenter since I was quite paranoid about rebuilding it live (never had to do it before). But once I gave up on that, the process took just a few hours.
External DB was probably the way to go, and it's something we've discussed from time to time, but it's not all that big a deployment and we wanted to keep it simple, so we did the nasty SQL Express onboard job.
I'm not sure I'd be terribly comfortable having the database on a separate SQL server in case of trouble. But I guess if I deployed a pair of vCenter Servers talking to the same database on an Availability Group then the odds of something breaking hard enough to shut the lot down are pretty slim.
On the plus side, two of our sites are on Hyper-V, so it was relatively simple to get them up to speed. This third site is 5 timezones away, so we can get at least an hour and a half of clear patching time before even the most enthusiastic of users arrive.
With vCenter, ESXi patches and hardware firmware updates all needed before the guest hardware updates and OS patches that implement the fixes, it has become a monumental process from what started out as just an OS patch, one now known to be flaky, to break many key applications (anyone brave enough to patch SAP servers yet?) and to ship with no microcode.
Given they had 6 months to sort this out, I would have hoped they could have got it all together and presented it as one unified "here's your plan and what you need to do", rather than drip-feeding patches and constantly changing their minds as to what actually needs to be done and by whom.
A complete balls-up by the whole industry, which really needs to get its shit together and realise it only exists because of its customers.
"A senior techie who spoke to us on condition of anonymity".
He wants to be anonymous because he's embarrassed; any senior should know this. The answer is very simple. There is just one way to mitigate Spectre variant 2: you need new instructions added to the CPU, which the OS will then invoke when appropriate. These new instructions are added with a CPU microcode update. The update doesn't "stick" to the CPU, so each time the system is rebooted the microcode update needs to be re-applied. Linux already offers a way to load new microcode each time the OS boots; Windows DOES NOT.

So, what do you do for Windows? You need a BIOS update. The BIOS update re-applies the microcode update each time the system is restarted, pretty much the same way Linux already does. So there you go, Windows needs a BIOS update because Microsoft is too lazy to implement microcode loading in the OS. If you have a Linux system and did the BIOS update, you don't need the OS microcode update. If you run Linux and absolutely want the microcode update, you need to download the file from Intel's website and put it in the right place.

Otherwise just sit back and relax, there are no exploits right now, so no hurry people ;-)
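For what it's worth, if you're on Linux and want to check which microcode revision your CPUs actually booted with (say, before and after a BIOS or distro microcode update), here's a minimal sketch, assuming an x86 box where /proc/cpuinfo exposes a per-CPU "microcode" field:

```python
#!/usr/bin/env python3
# Minimal sketch: report which microcode revision each logical CPU is running.
# Assumes an x86 Linux system where /proc/cpuinfo has a "microcode" field;
# run it before and after a BIOS or OS microcode update and compare the values.

from collections import Counter

def microcode_revisions(path="/proc/cpuinfo"):
    """Count logical CPUs per reported microcode revision."""
    revisions = Counter()
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("microcode"):
                # Lines look like: "microcode       : 0x2000043"
                revisions[line.split(":", 1)[1].strip()] += 1
    return revisions

if __name__ == "__main__":
    for revision, cpus in sorted(microcode_revisions().items()):
        print(f"{cpus} logical CPU(s) on microcode revision {revision}")
```

On most distros the early loader also logs something like "microcode updated early to new revision ..." in dmesg, which is another quick way to confirm the update actually took.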
>So there you go, Windows needs a BIOS update because Microsoft is too lazy to implement microcode loading in the OS. If you have a Linux system and did the BIOS update, you don't need the OS microcode update.<
I don't know what you thought you meant, because you haven't explained it very well. Windows does do microcode updates, and has for years.