Related?
https://www.bbc.co.uk/news/articles/cv2g5lvwkl2o Multiple major Aussie systems down....?
An update to a product from infosec vendor CrowdStrike is bricking computers running Windows globally. The Register has found numerous accounts of Windows 10 PCs crashing, displaying the Blue Screen of Death, then being unable to reboot. “We're seeing BSOD Org wide that are being caused by csagent.sys, and it's taking down …
Ah, bless that you even think this might happen! ;-)
To be honest, while this is Crowdstrike's fault, it's also showing the problems of too many systems set up to update this software blindly. Mission critical systems must have redundant setups that can be brought online ASAP if problems like this occur.
Current legislation means that Crowdstrike is exempt from liability, but companies that use their software to provide services are not. Doesn't that make you think? You're right: we should extend that exemption umbrella!
It's a tricky thing though, when the software product is related to cyber security and threat mitigation. Yes, ideally, updates, signatures, etc should be thoroughly tested to ensure that nothing goes wrong when you push it out to the world. On the other hand, that takes time, time in which customers are running at risk, may be under attack and are desperate for your latest and greatest to be deployed to them forthwith.
Having said that, you'd think that there was at least enough time to see if an update is going to crash a box. I think it almost inconceivable that Crowdstrike launched this on to the world completely untested, and given the seemingly universal impact it's had you'd think that even if Crowdstrike had done as little as a single quick-looksee test the problem would have been exhibited. Either it really is true muppetry on a monumental scale, or something very odd has happened. There's probably going to be something for us all to learn from whatever the final analysis is.
And you can bet your bottom dollar that, should Crowdstrike survive this episode, their product is likely to be one of the most thoroughly tested in the field. I mean, it'd have to be. They'd not survive causing this scale of outage twice...
Agreed. If this was caused by a malformed AV signature update, then in most cases, these get installed without being tested. When signature updates come out every few hours, running through a test cycle is a lot of effort, and it's assumed (wrongly?) that a signature update won't cause a BSOD. Though I have worked on environments where even thrice daily Defender AV updates do get tested before being pushed out, due to the mission criticality of the systems and risk averse nature of the business.
And you can bet your bottom dollar that, should Crowdstrike survive this episode, their product is likely to be one of the most thoroughly tested in the field. I mean, it'd have to be. They'd not survive causing this scale of outage twice... ...... bazza
Common sense would have one thinking that so, bazza, but systems administering to Boeing prove that notion nonsensical with lessons to be learned saved for another day yet come.... which all sort of more than just suggests such outages/outrages/missteps are to be the new norm in systems administering elite and/or vital and/or privileged executive advantage and/or protection.
What y’all can also be absolutely sure of is ...... the ambulance chasing, life blood sucking parasites of the criminal justice system will be salivating at the prospect of making a fortune from persuading Crowdstricken Cloudstroke victims and patients to launch an attack on common sense against someone/something they can bill in order that they can survive and prosper to blight and blot the future landscape with their shenanigans in another inevitable 0day ....... for someone/something has to be held accountable and responsible for globalised losses, isn’t that right.
It is quite probably up there with the creation of war as one of the greatest of pathetic psychotic shows enacted by sad robotic humans down on Earth.
And that has one thinking such is the Perfect Prime High Time for an Earthed Systems NeuroLinguistic Programming Reboot ..... and Great Advanced IntelAIgent Reset. It is not as if all of the necessary virtual and physical infrastructures/quantum communications networks are not already in place and globally available for services use .... and great abuse too if that is decided vital for the greater good and protection of the evolution of future leading species hosting universal programs on/from Earth.
Actually... based on a past experience with a certain Oxfordshire-based AV vendor, yes, they tested, but they didn't test enough... In their case, they tested a Unix signature file on Unix, but didn't quite test it on Windows, and then, well, it deployed onto Windows and took out anything with the word 'update' in it. Remember that? Oh how much fun was had trying to recover systems... I'm sure the El Reg vultures can dig up that series of incidents. Was around 2012/13...
It would not remotely surprise me if that's what happened to Crowdstrike, because it happens to them *all* eventually...
Been there. Did that. Have the vendor T-shirt.
Strangely enough, they now have an arrangement whereby the clueful can designate a test group and release the software to that before deciding to release it to the entire estate.
Others say "we only deploy $newstuff to 10% of your estate" (so only 10% is fscked, and hopefully that does not include both of your solitary pair of DCs). Scream loudly enough and the other 90% shall be saved.
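(For the curious, ring-based rollout doesn't need anything exotic. A back-of-a-napkin sketch in Python, where deploy() and healthy() are made-up stand-ins for whatever your real tooling does:)

    import random
    import time

    def plan_rings(hosts, fractions=(0.01, 0.10, 1.00)):
        """Split the estate into rings covering 1%, 10%, then everyone."""
        shuffled = random.sample(hosts, len(hosts))
        rings, start = [], 0
        for frac in fractions:
            end = max(start + 1, int(len(shuffled) * frac))
            rings.append(shuffled[start:end])
            start = end
        return rings

    def rollout(hosts, deploy, healthy, soak_seconds=3600):
        """deploy() pushes the update to one host; healthy() asks if it still boots."""
        for ring in plan_rings(hosts):
            for host in ring:
                deploy(host)
            time.sleep(soak_seconds)                # soak before promoting further
            if not all(healthy(h) for h in ring):
                raise RuntimeError("canaries unhappy - halting rollout")

The fractions and soak time are the knobs you argue about in change meetings; the point is that the last ring only ever sees content the first ring has already survived.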
And one would hope that the clueful go for the 'quarantine the file' option, not 'delete the file', which is ultimately what borked those who had automatically received said broken Linux AV signature... At least quarantining allows you to *un*quarantine a file and unbork much of your estate.
:-)
THAT and making a BSOD boot optionally jump to a screen that gives you the ability to ROLL BACK THE SYSTEM or "safe boot" instead of showing an UNRECOVERABLE TOMBSTONE SCREEN
You know like "press F1 for recovery options" somewhere on the BSOD screen. XP used to have something like this as I recall, detecting a failed boot the next time you start up. But I only use 10 in a VM...
Brought down a lot of our first wave patched servers today, as well as a huge proportion of our desktops (Aus BTW). Most had to be "fixed" by the recommended workaround, so I guess next week's second wave of patching has been put on hold. I was super glad I'm only in a backfill role, so didn't have to stress while the perm team fixed everything, though I did fix 4 systems just to feel useful.
Monoculture is exactly the right way to look at this (e.g. the Irish potato famine, caused by just one plant pest taking out the whole crop). Just for once M$ look to be a genuine victim, rather than the cause. Goes to show how careful you have to be who you make friends with. Wish I was a lawyer in the insurance business - reckon I could retire on the proceeds about a year or two from now.
Glad I went Penguin years ago.
Genuine victim!
You are kidding here, aren't you?
It's like Boeing accepting parts for the new launcher off the back of a truck and fitting them without question or inspec ... oh.
Of course they are culpable as it's their OS and laissez faire attitude to inserting any old crap in there on an almost daily basis.
This fault was easily discoverable, I would have thought, had there been any systems that nurture the most basic curiosity in play.
"It's like Boeing accepting parts for the new launcher of the back of a truck and fitting them without question or inspec ... oh."
Not like that at all. Nothing gets added to or changed on an airliner without a massive paper trail and testing (yes, we all know about Boeing). Microsoft does not have control over the software that runs on its platform (it's not Apple, after all). This could happen on any operating system - it's always possible to run some code which will have bad effects, especially when it runs at the low level an AV product has to.
From the post I'd say definitely smugness. When something like this happens, rather than patting yourself on the back that it didn't happen to you, you should be checking that something else can't. In a way, it's just as wrong-headed for a company to rely on Linux distros as it would be to rely on Microsoft Azure VMs. However, decades of experience running unix systems have provided fairly robust tools for testing and rolling out updates with minimum downtime, though I still think that the BSDs generally got this right first.
It'd be possible to have a kernel that can catch a subset of crashes in kernel space (and maybe eject a driver or something), but more to the point:
kernel crash means a bug in kernel space.
Either MS's code has bugs itself, or has a security hole that allows other software to crash the kernel (yes, in reality it's both).
Unless you're using a non-MS kernel, MS has something to do with this.
And yes, the same could happen with other kernels, sure, but it usually happens with Windows
Disclosure: Clueless end user here
This BSOD/boot loop thing. Is there no way of instructing client machines to boot from an emergency OS over the network? I recollect that was how systems got reinstalled after being messed up in one of the colleges I worked in. Just thinking of the logistics of recovering thousands of client machines from this situation without an army of IT bods running about.
Icon: after the recovery
> Is there no way of instructing client machines to boot from an emergency OS over the network?
Maybe on systems with IPMI, you might be able to remotely instruct the BIOS to boot Windows into Safe Mode to apply the workaround, but then you might not get network access to the machine as it won't be running its VNC service or whatever, so you might still have to go round the racks with a keyboard and mouse. I'm not sure if the IPMI itself can provide a remote display.
Frankly, I despair that people still use Windows in datacentres. It's never really been designed for remote operation.
https://i.imgflip.com/66uud2.jpg <-- work safe
IPMI in the datacenter, or Intel’s vPro/AMT for clients. The latter, once configured, gives what is essentially hardware-level VNC access with KVM control and network drive mounting / boot redirection. It’s saved my bacon more than once. Combined with a simple script it could be used to deploy this Crowdstrike workaround in seconds per PC.
Kind of, but then the system (server) itself will still be useless. The listed fix for this is a rather simple but very manual affair and while it is not beyond the capabilities of suitably tech-savvy individuals to create an emergency boot environment (OS) that can start, access the normal boot environment and make the change required and then restart the system, this is also risky.
However, for more secure systems this pretty much becomes a non-option due to encryption of the OS volumes involved, where it becomes impossible to externally modify the file system. As a result this could very easily become an issue that can only be resolved manually.
Server options like iDrac, ILO et al exist for a reason.
For the humble workstation - you have various Intel vulnerabilities.
Once you have access to the crashed system, I understand that a simple wildcard delete and restart cures all. Am not a MobPunt customer so cannot confirm.
A driver is for things like hardware.
e.g. if your sound drivers fail, you can just lose audio rather than killing the system.
drivers are not for security (well, there are hardware security tokens, but a failure there fails in a contained way)
And external companies injecting code into the kernel for security? yeah, we have an example today of why that's not ideal
See also: "Rootkit"
Most so-called "anti-virus" software fits the definition of a rootkit. It installs a kernel module "driver" to override system calls, placing a man-in-the-middle on calls like fopen() and read()
It's a terrible security paradigm, because it means that software can be insecure, we will just rely on this rootkit-thingy to protect us when something nasty happens.
It's a bit like leaving all your doors open but instead paying someone from the local mafia to house-sit
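(For anyone wondering what "a man-in-the-middle on calls like fopen()" actually looks like, here's a toy userspace analogy in Python - monkey-patching the built-in open() - purely to show the shape of interposition; real AV does this in kernel space, which is exactly why it can take the whole machine with it:)

    import builtins

    _real_open = builtins.open

    def guarded_open(path, *args, **kwargs):
        # Inspect the request, then pass it through or refuse it -
        # the same pattern a filter driver applies to every file operation.
        if "malware" in str(path):
            raise PermissionError(f"blocked by toy filter: {path}")
        return _real_open(path, *args, **kwargs)

    builtins.open = guarded_open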
The issue here is worse than that. It wasn't the kernel module itself that was replaced, it was a data file which triggered a bug that had been there all along. There was no good version to roll back to. It was entirely the responsibility of the kernel module or whatever it was to handle the bad data file.
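(Handling a bad content file without downing the box is a well-worn pattern: validate, and fall back to the last known-good copy if the new one doesn't parse. A minimal sketch - file names and the "rules" check are invented for illustration:)

    import json
    import shutil
    from pathlib import Path

    CHANNEL = Path("channel.json")             # newly delivered content file
    LAST_GOOD = Path("channel.last-good.json")

    def load_channel():
        try:
            data = json.loads(CHANNEL.read_text())
            if not isinstance(data, dict) or "rules" not in data:
                raise ValueError("missing 'rules' section")
        except (OSError, ValueError) as exc:
            print(f"bad channel file ({exc}); reverting to last known-good")
            data = json.loads(LAST_GOOD.read_text())
        else:
            shutil.copyfile(CHANNEL, LAST_GOOD)   # promote to last known-good
        return data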
"leaves you system running but with all the security turned off"
It's not a straightforward question. If the system is in some way compromised but apparently undamaged, this is not good. However, just crashing the system and making it unbootable deprives you of your best tool for diagnosis and repair - the system itself. The best course of action might be to boot into some sort of safe mode. The advised solution is indeed to do this, although it doesn't seem to be automatic. From what I've read here, the limitation is that if BitLocker is installed it would normally obtain the keys from the server, but this isn't an option when booting to safe mode; they need to be entered manually, which depends on there being a more accessible copy. It would obviously have helped if systems were able to obtain the server's copy of the keys when booting to safe mode. Of course, if the server is also down because of this...
The issue, as always, is to try to anticipate problems at design time and set a goal of having as few problems as possible that can prevent the system getting into a state where it can at least act as a tool to help recover to the intended production state.
I think the main things we can blame on Microsoft are:
1. The prevailing practice of allowing third-parties to run non memory-safe code in kernel space, and the normalisation of installing a "rootkit" as an anti-virus tool
2. Training users/admins that they must allow all software to automatically apply updates as soon as they are pushed. Anyone who had delayed CrowdStrike's updates for 24 hours would be breathing a huge sigh of relief right now.
This, again: the idea of "let our rootkit in before someone else's rootkit gets in" was and is absurd. A good OS kernel should allow NO rootkits inside. Microsoft promised to end this practice with Kernel Patch Protection (introduced in all x86-64 versions of Windows) but didn't; they left loopholes for people like Crowdstrike to use.
BTW this is the reason the only antivirus I have is Windows Defender, because I know it doesn't install any rootkits.
Microsoft are reported to be seriously considering, or already using, Rust in their OS. It's not surprising, I think most OSes will have to go that way once one does. I'm sure that one reason why Linus has consented to Linux accepting Rust code is because everyone else is beginning to head that way, and there'd no doubt be security benefits from doing so; no sense letting Linux fall behind.
I'm not familiar with CrowdStrike's software, but it might be that there's no end user option to delay updates. With a cyber-security type application, delaying updates sort of runs counter to the purpose of the software; you're still vulnerable, whilst everyone else is OK. It's a tricky thing to balance. The admin who did delay updates only to find their entire corporate network got hacked through a zero day attack may have some explaining to do to their boss! Damned if you do, damned if you don't.
The best thing is not to be the person who selected the software, and to constantly raise the potential for disaster at company risk-register update meetings. In a good organisation, that risk would attract a lot of attention because a day's lost trading could easily be far more money than the cost of mitigating it.
A friend who does Linux kernel wizardry as part of his day job pointed out to me this morning that an OS _could_ cope. This would be the case if the OS did not rely on arbitrary shoehorning of endpoint monitoring modules into the OS kernel, instead running them sandboxed via eBPF - for example, Tetragon on Linux.
eBPF is a bit nuts; it's allowing arbitrary code - with constraints - to run within the kernel. Exploit it successfully, and you own the machine.
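(To make the difference concrete: this isn't Tetragon, just a classic BCC-style hello world, but it shows the model - the monitoring snippet is compiled and then checked by the in-kernel verifier, which refuses anything that could crash or loop, rather than being trusted the way a driver is trusted:)

    # Requires the bcc package and root; traces execve calls kernel-side.
    from bcc import BPF

    prog = r"""
    int on_exec(void *ctx) {
        bpf_trace_printk("exec observed\n");
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="on_exec")
    b.trace_print()   # stream trace output until interrupted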
I occasionally wonder if this kind of thing isn't a good reason for OSes to use more than 1 CPU privilege ring for things like eBPF, or indeed virus checkers.
Maybe there really is an argument for operating systems using more than just 1 CPU privilege ring for all kernel stuff, and 1 for all user land stuff. As all current OSes put everything into those single rings, they're perhaps missing out on opportunities to be robust to events like this.
> using more than just 1 CPU privilege ring for all kernel stuff, and 1 for all user land stuff.
It would be good to finally actually use all the transistors in our CPUs, the ones that implement the full set of rings. Always assuming they still work, of course (when were those bits of the designs given a thorough workout?)
But AFAIK the most used chips - still x86/x64 - only really provide 4 rings via memory segments, which is a bit too crude for most OSes, who like to instead rely on the daintier, less clunky, memory divisions available via Page Tables, which only have the one protection bit per entry.
Yup. Also see https://www.theregister.com/2024/07/21/crowdstrike_linux_crashes_restoration_tools/ before we get (even further) into a smugathon. Anything working that far down the stack has the potential to break things. The era of Windows device driver issues was one of those things that caused some major re-thinks 20-30 years ago.
Thanks to the recent self-inflicted Tory armageddon election, I'd say we're probably back on the right track in that regard, for now at least.
Maybe. When GNER was in government-run hands, it did very well, much better than the franchise holder it was taken back from did. But we need to remember that was happening in a world of competition with other franchises and, at the time, the desire to get it into good shape to put it back out to franchise. A fully re-nationalised rail network, tracks'n'trains, will only have set targets, probably with little penalty if not met, and so may well start off good but will need to be careful that laissez faire doesn't set in.
This post has been deleted by its author
Yeah, over £100 to get into London from Suffolk. That's so much "better"
Where do you get that from?
An Ipswich to London ticket for next week costs £39, or £25 if you split the tickets due to the stupid fare system.
In 1983 it would have cost you £11.50, which is £50 in today's money.
Obligatory Ronnie Barker on British Rail sketch.
Some things used to be better.
Guaranteed connections for many services, so if your train was delayed they held the connection until it arrived.
Guard's van on most trains so you could just turn up and put your bike on without any of today's palaver.
Proper buffet service on many trains. Take the piss out of BR sarnies all you like, but a freshly cooked bacon sarnie and strong tea from an everlasting pot will always be better than the pre-packaged crap from today's trolley.
Those train carriages with compartments but no corridor between them. Once you were in one there was no chance of being disturbed until the next station.
Reminds me of a post in a newspaper by someone who recounted how, back in the BR days, while travelling from London to see his dying father in hospital, he was going to miss his last connection due to delays, so a guard arranged for the connecting service to be held until his train arrived and he'd transferred over.
Apparently, the chap had been a bit stroppy to the guard earlier, mentioning that it looked like he wasn't going to make the connection and his father was on his deathbed in hospital. The guard had then gone on his way, but at the next stop had made the arrangements to hold the other train. Won't happen today.
Gawd yeah, I remember being able to leg it out of work in London, jump on the first available train to wherever I was going, and the guard / ticket officer would sell me the very best return ticket for my journey. No issues, no problems, just two adults discussing options and working things out.
Then a couple of years after privatisation, WOW... it was like they'd fired all the good guards overnight and replaced them with ***holes. Jump on a train without a ticket and you'd be charged the absolute most expensive option for where you were going.
This post has been deleted by its author
LOL, I clearly did use them. Yes, there were strikes - there were strikes everywhere (more due to us vs them on both sides, and an absolute inability to work together), yes, the coffee was awful (like it was everywhere back then), but there wasn't the chaos there is now.
But the trains were more reliable, and while old they had a lot more room. And you could always get a seat (outside rush hour in London I presume, I wasn't there then) - remember those days?
So reliable I used to catch the 10h00 Manchester Euston train by turning up at Piccadilly at 11h00. Catch the train on the platform at 10 & you’d be on the 9 o’clock service & be hit for using a cheap day return on a non cheap day service. And the station food & drinks were truly abysmal.
British Rail by the 70s was dysfunctional & appalling; it is the nature of any organisation that can ignore its clients, like all government departments & all nationalised industries. Ignoring basic human/economic facts always ends in tears for the taxpayer & massive payouts for the people in the organisation.
It would be good, that's how it would be.
It was obvious to me even as a child that the ideology of privatisation made no sense: it essentially just means trying to run the same service with less money (because some is siphoned off into shareholders' pockets) and without the economies of scale provided by nationalised infrastructure. It literally doesn't add up, in the most distressingly basic way possible.
@Hubert Cumberdale
"It was obvious to me even as a child that the ideology of privatisation made no sense: it essentially just means trying to run the same service with less money"
That is what we should all strive for. Private business looks to bring in revenue from customers. The public sector has to spend all of its money in any way possible to then claim it needs the same or more in the next budget. The largest purchaser of fax machines, still using them long past any reasonable sanity, was the public sector (NHS). Catch them up to the 20th century and they think it's magic, let alone the 21st.
"and without the economies of scale provided by nationalised infrastructure"
Such as being held hostage because they are the sole behemoth user that can be shafted, particularly as they will just demand more money. See the MOD with their aircraft carriers (catapults) or again the NHS overpaying for drugs. But it's OK, it's only taxpayers' money and there is always more.
"It literally doesn't add up, in the most distressingly basic way possible."
Give someone a credit card with a decent limit that someone else pays for no questions asked. Consider how you use your credit card that you are responsible for paying. Suddenly things add up very easily.
@AC
"Must have not actually worked in a largescale private sector if they think this is only a public sector problem."
I have a dislike of working for large private sector but the problem is very different. Look at the tanking share price vs the public sector just borrow and blow more.
"NHS overpaying for drugs"
I refer you directly to the offensively profit-mongering shitshow that is the US healthcare system and the way that the insurance racket leads directly to massive overinflation of the cost of everything. I do this not just because it's true, but specifically because I know it's likely to lead to another rant from you and I like wasting your time. Please bear in mind that I won't read it.
@Hubert Cumberdale
"I refer you directly to the offensively profit-mongering shitshow that is the US healthcare system"
I am always amazed that the UK looks down on the US system, the US system looks down on the UK system and we dare not look at the rest of the world.
"Please bear in mind that I won't read it."
You don't need to make excuses; I will try to make my post simple and short enough for you
I would be looking at France to start with.
You've got to be kidding. The state (i.e. social security) only pays 60-70% of your health care, and you're required to have insurance for the rest. Your employer pays that if you have a job, otherwise you'll need your own policy (unless you're young/old).
There are whole areas of France known as medical deserts, with no GP services, to the point where the government were considering making it mandatory for newly-qualified GPs to be sent to work in such areas for the first 2 years after they qualify, even if they're the other side of the country.
If you need to get an eye test there's no point in looking for something like "Specsavers", you need to find an ophthalmologist, who are so rare that the practice I used to use opened their appointment books for a few weeks in December to take appointments for the following year, so you had to wait until then and usually got offered a date the following July or August.
If you need to see a specialist your GP isn't permitted to refer you to anyone. At best they'll give you a list of names, and you have to ring round to see if any of them have appointments available.
Sure, they do some things better than the UK, just as they do others much worse, but it's by no means a system to emulate.
I live in France. I can get a GP appointment within the week. Better than the UK where the chances of getting an appointment are very near to zero in most cases.
GP referals? I've had 3.
Sight tests you can get from an optician. Yes, full eye tests are from an ophthalmologist. (This was not the case a decade ago.)
Oh, and when you get any kind of test, you get the results in your hand. You don't have to wait weeks to get another appointment with your GP or specialist. For example blood test: same day. MRI or any other imaging: in your hand before you leave the clinic.
I live in the UK.
I get Dr appointments same day.
GP referrals: at least 10.
Sight tests: no problem.
Never had a problem with tests.
Everyone's situation is different, and locations vary, like everywhere.
The only thing health insurance adds is someone taking a profit (which over time gets worse) and reasons to deny action due to insurance taking the piss.
trying to run the same service with less money (because some is siphoned off into shareholders' pockets) and without the economies of scale provided by nationalised infrastructure
Whereas in public ownership the managers don't need to care about efficiency or profit since there's always more taxpayer money to bail them out, and they can concentrate on the important issues like making sure their department is bigger than the one next door.
By the time the next election comes round we'll all be bemoaning the mess that nationalised railways are in.
Yeah, because a business that has money siphoned off to pay bonuses and dividends is so much more productive than one that doesn't. (Sarc)
Typical right wing mentality: Run it badly, blame government, claim private sector does it better. Sell off.
How about... I dunno. Fixing the issue in-house? Same deal with water and any other critical infrastructure.
It's always going to be more inefficient if people are extracting money from the operation. If it isn't, it's being done wrongly.
A lot of the fans who went to Euro 2024 and traveled by train would disagree with you on the German rail system. Late or cancelled trains, overcrowding...
Does that seem familiar? DB in Germany, just like BR in the UK, has been starved of investment by its government.
There was a long piece on the Euro 2024 issues on the BBC News site on Monday.
The German railway system at the moment resembles that of the UK in the 1980s. In the 1990s the decision was taken to privatise it and the board was told to make it attractive to investors. This led to the usual closure of branch lines, vastly reduced maintenance schedules, investment only in a few high-profile projects, and the sacking or retirement of skilled staff. Then, about 10 years ago, the political winds shifted and the privatisation plans were shelved, but not much money was made available for maintenance, and the stuff that hadn't been properly maintained for a decade started to fail more frequently. We're now at the point where even just keeping the trains running costs a lot more than it used to, and renovating the whole network is going to take years. Still, based on my most recent experiences of both, I'd say that the German railways haven't yet got as bad as in the UK!
There's a lesson there for governments somewhere but round the corner I can see someone waving a banner with "cut government waste" on one side and "the private sector can do it better" on the other…the leader of the opposition recently suggested more closures as a way out of the current situation. Easy for him to say as he tends to fly everywhere in a private plane!
"Who would notice if UK trains were not running. Strike laden networks with abysmal track (sic) records of operation & punctuallity."
I was reading just yesterday (BBC website??) that UK trains have a better percentage on-time record than Germany now. Worse, UK trains are classed as "not on time" if arrival or departure is more than ONE minute out, while Germany only classes trains as "not on time" if they are more than SIX minutes out. I don't use trains at all in the UK, so have no skin in this game other than what I read :-)
Ah, found it: How Euro 2024 busted legend of German efficiency
Just had a call from my GP to cancel a vaccination I had booked for this morning (with a lead time of almost 4 weeks) as they're unable to access the patient record system and were using a printed list to phone round and cancel all non-urgent appointments.
I suggested that, for a scheduled vaccination, they could perhaps administer it anyway and make a paper note and update their patient records when they came back online but it seems to be beyond their comprehension that medicine might be practised in the absence of a functioning computer. And they could offer no assurance that it might not be another 4 weeks before an alternative appointment could be made.
I think my local surgery use EMIS, but I have no idea whether this is merely a local problem or more widespread.
However, given the increasing frequency of major system outages, it's depressing that service providers have such poor contingency plans when in many cases they could perhaps continue to function with a little forethought.
"The NHS is aware of a global IT outage and an issue with EMIS, an appointment and patient record system, which is causing disruption in the majority of GP practices.
"The NHS has long-standing measures in place to manage the disruption, including using paper patient records and handwritten prescriptions, and the usual phone systems to contact your GP.
"There is currently no known impact on 999 or emergency services, so people should use these services as they usually would.
"Patients should attend appointments unless told otherwise. Only contact your GP if it’s urgent, and otherwise please use 111 online or call 111."
https://www.abc.net.au/news/2024-07-19/global-it-outage-crowdstrike-microsoft-banks-airlines-australia/104119960
appears this is world wide. I suspect this company is going to be in a world of hurt when the washout of this comes out. Fortunately it is late Friday afternoon for the East coast in Australia so I suspect there are going to be a lot of early marks taken. However you may be struggling to get a beer unless you have cash as a lot of banks and POS machines have been hit as well.
We went to one of those annoying pubs last week. Ordered 2 beers. Barman poured them. I offered cash. He said it was cards only. I gave him my Amex card. "We don't take Amex", he said. "All I've got", I lied and watched him pour the beer down the sink before I left and went to a decent pub. Can't wait to go back.
How are kids supposed to pay?
You know that pained look kids give when they receive a present they didn't really want and won't use, but know they're supposed to fake happiness for the gift giver? I got that from my nieces the last time I gave them cash for Christmas. (They asked for money over toys, books, or anything else, so I carefully got some fresh bills from a live bank teller, dropped them in funny greetings cards, and waited for beaming faces on Christmas morning.)
As my sister-in-law patiently explained to me, cash isn't useful to kids now. They can't pay for online games with cash, exchange cash with friends by phone, buy stuff online with cash, pay for Door Dash to deliver a Starbucks in the morning with cash, or any other modern activity.
So: prepaid gift cards. Digital currency makes the kids happy.
I think that if you provide goods/services to someone, and then present them with a bill - if the customer offers legal tender to pay the bill then you either accept it or you let the bill go. That's what legal tender means, unless society has already collapsed.
WTF! Wait until they've dropped at least another 20%! Otherwise you're buying into something that's in freefall!
Then you've still got to hope that they actually survive this shitshow, and that the shares will actually go up at some point in the future.
.
I wouldn't be surprised if the shares are valued less than the paper they are written on within a few months.
This post has been deleted by its author
"blocks attacks on your systems while capturing and recording activity as it happens to detect threats fast."
The update is progress. The updated system is very effective at protecting against any and all threats. It doesn't even need to record and capture to make you perfectly safe.
I was waiting for that :) :) :).
Yes, you have to admit it's the safest a Windows machine can be.
You could say that it's not very useful, but you have to admit that that too is not a major difference from normal...
:)
(on a more serious note, does nobody test updates anymore?)
Taken directly from CrowdStrike's UK website:
"62 minutes could bring your business down.
That’s the average time it takes an adversary to land and move laterally through your network. When your data, reputation, and revenue are at stake, trust the pioneer in adversary intelligence."
Oh how right they are!
https://www.crowdstrike.com/en-gb/
Although they should have replaced "an adversary" with "us"
> some pissant tiny one man band outfit.
Oi!
One man bands do a lot of good stuff* and at least you (quite literally) know who you are dealing with.
Where do you think you'd be now if it weren't for all the "one (or two) men in a garage"?
"Pissant"! Bloody cheek.
*ok, some don't but it is bell curve, like everything else
I was thinking that, when I was looking after AV and patching, if something like this happened, nearly all users were in an office, so we as staff could get our arses in gear and head off and fix - often multiple machines at the same time
.
Try explaining over the phone to a user how to get into some of this, with Bitlocker rolling in for a bit of fun, and how to fix it.
Either show the user there is no local password in some scenarios - erk - or an inbuilt password that they now know, or you have a rotating password and no idea what it may be now if it is out of sync with the parent server.
Glad that is behind me. Still, where is the testing and change control, or are these devices configured to update automatically without a parent server pushing (as Symantec and Sophos used to do)? I can see that more for users WFH, but not servers.
This is bad... real bad. Tens of thousands of networks down worldwide.
There is supposedly a fix that involves booting affected computers in safe mode, and deleting/renaming a Crowdstrike file in System32. Which is great if all your workstations/servers are remote and the workstations all have Bitlocker. And the Bitlocker keys are all on a server that's affected...
Why on Earth do people roll out everything in production without testing it? I was once told to never do that...
Ok, for my personal systems I do just install updates, but then I don't have duplicated systems, no real "testing" stage etc., and if stuff breaks I can usually fix it, and nothing is really "mission critical".
Setting up rolling updates takes effort. Also, while you delay updates to your critical systems, they're vulnerable to the latest malware/viruses so there's definitely an impetus to roll out AV/anti-malware updates ASAP.
I'm still not sure how the hell Crowdstrike managed to release an update that was so badly broken, though. Wasn't it tested? Or was it a supply chain attack?
We used to get the updates and roll out to our test machines, do some *basic* tests and then onto pilot machines before prod deployments.
If there was a 0-day, then we would try to follow the above, but speed the process up with a few hands on.
If management insisted on an instant push with no testing, they approved that change in case of any shitshow - which was an Emergency.
If you are a big company like a bank, the national rail company, or an airline, you should have the resources to do that. And yes, while there is a window of opportunity with zero day exploits (they are already under attack at that point in time), you should not just f' up your production system by flying blind.
I agree on CrowdStrike having to test this, and tests should have caught that. But still, if you are a big company and your actual core business depends on your IT systems working you should be careful with applying updates.
The effort on rolling updates is a definite perceptible effort which carries much more weight than a hypothetical effort to recover the systems when it breaks, so people think "meh, it'll be fine" and crack on with immediate rollouts. Doesn't make it right, but that's how people think, particularly if they have to do manual approvals of AV signatures on an almost daily basis.
I expect a lot of companies will be reviewing those processes now, though.
It is anti-virus software. It always gets updated automatically, otherwise the update comes too late to be effective.
NO "anti-virus" software actually works. It's always reactive, so if the (l)user gets infected with something new - and bearing in mind that putting together a couple of dozen variants of an obsfuscated Windoze virus takes about 5 minutes - none of the AV snale-oil works effectively. The only real cure is NOT to "run" Windows...
> Why on Earth do people roll out everything in production without testing it? I was once told to never do that...
This was an antivirus update and because of zero day exploits it has become the habit (or indeed the default setting from most providers) for these to be applied automatically, invisibly and 'seamlessly'. Before I retired from IT I always used a sacrificial goat (my PC, test servers) for any Windows updates with a roll-back/bare metal restore option if needed. Day to day AV updates were just applied automatically - major releases treated as Windows updates.
I couldn't find the origin of my quote on testing above, but it could have been addressed at Crowdstrike. The issue there is one that Microsoft are familiar with - almost infinite variants of installations and third party addons which could interact. Mind you, this sounds to be a major sector affected, so a definite testing failure.
"This was an antivirus update and because of zero day exploits it has become the habit (or indeed the default setting from most providers) for these to be applied automatically, invisibly and 'seamlessly'."
Providing it's seamless, fine. That, possibly unconscious, trade-off of risks is now going to have to be revisited.
Is this another case of directly pushed updates that bypass an organization's preflight checks? (Because of course *everyone* runs all updates through their UAT environment before mass deployment.)
In before the rest: My home office server (Debian) and desktop (Mint) seem OK this morning.
Remote working + Bitlocker + Support Desk being offline too = A free day off.
Currently around 30% of our remote workers leave their laptop on and updates get auto installed.
That's a lot of users.
I do find it amusing that this has occurred a day after this article was published.
https://www.theregister.com/2024/07/18/security_review_failure/
Out of morbid curiosity I tried https://account.microsoft.com/devices/recoverykey and after all the InTune/Authenticator hoop jumping it just took me to https://myaccount.microsoft.com/ so it wouldn't have helped me.
This is why IMHO disk encryption software should only be used on systems that contain secret information and are at a realistic risk of being stolen for the data they contain. It's far more likely that the encryption is going to hamper you from recovering your systems than prevent some largely imaginary threat.
At least here in the UK, it became the norm to encrypt laptops as a minimum when civil servants and politicians developed a distressing habit of leaving them on public transport and tools like Locksmith that MS provided as part of MDOP/DaRT made them trivially easy to access if not encrypted.
(I'm sure people in private companies did and do as well, but that doesn't tend to make the news)
Completely agree with this, in my experience bitlocker is far more of a problem than it's worth.
I'd much rather have an application, if it must store data locally, handle the encryption itself. Yes, that adds complexity to key management, but it's not that hard to manage and certainly easier than trying to get a bitlocker 'protected' system repaired.
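(A sketch of what "the application handles the encryption itself" can look like, using the Python cryptography library's Fernet - key storage and rotation are the genuinely hard part and are hand-waved here:)

    from cryptography.fernet import Fernet

    # In real life this key lives in a vault/KMS, not next to the data;
    # it's generated inline purely for the sketch.
    key = Fernet.generate_key()
    f = Fernet(key)

    ciphertext = f.encrypt(b"locally cached customer record")
    # ...write ciphertext to local disk; the volume itself stays readable,
    # so a recovery environment can still get in and repair the OS.
    assert f.decrypt(ciphertext) == b"locally cached customer record"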
If anybody is struggling to get hold of the fix here is the CS Alert text:
Tech Alert | Windows crashes related to Falcon Sensor | 2024-07-19
Cloud: US-1 | EU-1 | US-2
Published Date: Jul 18, 2024
Summary: CrowdStrike is aware of reports of crashes on Windows hosts related to the Falcon Sensor.
Details
Symptoms include hosts experiencing a bugcheck\blue screen error related to the Falcon Sensor.
Current Action
CrowdStrike Engineering has identified a content deployment related to this issue and reverted those changes.
If hosts are still crashing and unable to stay online to receive the Channel File Changes, the following steps can be used to workaround this issue:
Workaround Steps:
Boot Windows into Safe Mode or the Windows Recovery Environment
Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
Locate the file matching “C-00000291*.sys”, and delete it.
Boot the host normally.
Latest Updates
2024-07-19 05:30 AM UTC | Tech Alert Published.
2024-07-19 06:30 AM UTC | Updated and added workaround details.
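(Once you can get code running against the affected volume - WinRE, a recovery image, or the drive mounted elsewhere, and after BitLocker has been unlocked - the delete step itself is trivially scriptable. A sketch only, assuming the standard drive letter and the file pattern from the alert above:)

    from pathlib import Path

    # Path and pattern taken from CrowdStrike's Tech Alert; adjust the drive
    # letter if the system volume is mounted somewhere else in recovery.
    drivers = Path(r"C:\Windows\System32\drivers\CrowdStrike")

    for f in drivers.glob("C-00000291*.sys"):
        print(f"removing {f}")
        f.unlink()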
>> In a fair world, this would be the end of crowdstrike.
> and yet they're after Kaspersky
Incoming claim this was a Soviet[1] attack in retaliation in 5 4 ...
On TV news this morning you could almost see the sofa dollies salivating at the idea, as they pushed their various experts to say all the troubles today are down to some "hack".
[1] I know, *I* know.
There's an argument that this validates the decision to ban Kaspersky. If the Russian government went in, they could conceivably force them to release a "bad" update to non-Russian IPs which bricked devices beyond the ability to recover by simply deleting a file in safe mode. Far-fetched? Certainly. Possible? Absolutely. You'd get to do it once ever, but the potential impact to Western IT might make it worthwhile from a Russian perspective.
In any case, it's re-highlighted the value of a supply chain attack on anti virus/malware vendors.
It’s Microsoft’s fault on various levels.
For decades now they’ve engendered an expectation that this sort of experience is perfectly normal and acceptable instead of pulling their fingers out and making a suite of products that are fit for a range of purposes.
And they’ve created a generation of “professionals” who know no alternative and are incapable of asking critical questions such as “should my POS terminal really be downloading live updates from the internet???”.
Sure, everything has its vulnerabilities and deficiencies, but the MS model of doing business is on a unique level of its own. Maybe a few senior IT managers will wake up and start thinking about the meaning of the phrase "mission critical". But probably not...
The downvote button is down a bit and to the right for those of you who don’t like their faces being rubbed in the obvious.
See above re:bitlocker. Can't reboot in safe mode without it, and getting those keys to large numbers of users will be entertaining, not to mention the security implications of slinging all those keys around in the first place. Assuming there's a user, and they're capable of applying that work around. It looks like massive numbers of POS systems are borked, so good luck there.
It's well past beer o'clock round here anyway.
This post has been deleted by its author
"Earlier that day, my sandwich was mistreated as a bomb threat. Thing is as it had been left outside for a long time it sort of spoiled. Anyway, I was hungry and ate it after the bomb squad gave it to me. Later that night in the middle of a software update, I had to visit the toilet for an explosive evacuation...."
Yes.
Or disable your home's internet connection.
NB: re no-internet: this assumes it didn't download an update yesterday which takes effect on reboot. If so, it'll brick itself on restart and you'll have to try the "fix" above.
BUT! Note that their tech guy posted this on X with the caveat that it doesn't work for everyone. So you might like to hold off until a "harder" fix is issued.
This is a third party issue, not Microsoft!
If Microsoft produced secure systems in the first place then add-ons like CrowdStrike would not be needed.
Security is hard, needs work to get right. Even on Linux systems many do not bother to get it right, for instance SE-Linux can stop things working and there is a big temptation to just switch it off. SE-Linux is not a panacea but does make it harder for malware to get in.
That's only part of it.
Going back through the years, they've always prioritised making things easier for people over good security.
Which did work, as it helped with the spread of computers and with making them more money
But we're continually seeing the downsides
Windows started as single-user & standalone. You didn't really need that much in the way of security (until sneakernet virii, but that's a bit later).
As opposed to "real" OSes that started out with some sense of being multiuser (primitive and broken though it was half a century ago)
It's changed a lot over the years, but you can still see some of the legacy of those early days
Windows today is nothing like Windows of old, structurally and it's not even that similar in terms of the UI. The "real" multiuser OSes of old were not particularly secure and any access controls were easily bypassed if anyone with some knowledge of the systems cared to do so. Mostly the system operators relied on "access control " by limiting the people who had keys to the computer lab door rather than complex encryption and salt/hash password files. This worked about as well as you might expect. Networking just added to the fun -- the "Morris worm" exploited a bug in sendmail running on Unix, for example.
Yes, security on those old systems was poor. But the idea of multi-user and having some sort of isolation (and access control) was being worked on.
There's always been a conflict between security and usability (if it's done well, you can get a fair bit of both, but that takes skill and effort)
MS doesn't exactly have a track record of focusing on security - but if they did, random other corporations couldn't, say, install kernel code - and that'd upset a lot of people too (anti-cheat rootkits, for example)
First: MS did try to stop ring-0 antimalware but was threatened with anti-trust lawsuits by the usual crowd (including, IIRC, McAfee[1]). (Can't now find the reference.)
Second: Crowdstrike did the same thing to RedHat Linux last month https://access.redhat.com/solutions/7068083 caused kernel panic
[1] CEO of CrowdStrike was McAfee CTO when McAfee was breaking systems...
I tend to use car analogies to illustrate IT issues and in this case, it works too....
No-one blames the car companies for the fact most ppl get better insurance prices for security add-ons like Thatcham Category 3 anti-theft or tracking/location devices.
Well maybe the petrolheads do, but if you asked them, 90% of them would be better than average at driving!
This is a great example of why you shouldn't automagically take every update, and why your critical business systems should be air-gapped from the Wild West that is the Internet.
Watching the share price of Crowdstrike today should be entertaining, and a few CIOs should have their b*ll*cks in a vice.
I think that the only winners will be the lawyers.
Contracts will be reviewed (on both sides). The MS lawyer army will make sure this is covered and excluded, and the lawyer armies of other companies will probe the MS defences.
A few class actions will peter out with a few peanuts distributed while the lawyers feast.
It looks like it was a virus checking update, so not a new version of the program (which some admins might test before rolling out), but some sort of 'definitions' update.
Generally you'd want to roll out new virus definitions within a few hours of them being released, which makes testing difficult.
My utterly uninformed guess is that they released an update which identified a critical file in Win 10 as a virus, and blocked access, (but they'd only tested it on Win 11 perhaps?).
CIO bollocks in a vice ?
What a joke; in all bad events the leadership are the last who get punished. Small people get fired, leadership get parachutes. Not sure how that's getting your bollocks in a vice.
This is the problem: people claim leadership are responsible, but they have twisted what the word actually means beyond recognition. The truth is leadership are not responsible; they speak bullshit, but that's not responsibility. Saying sorry is not being responsible.
I wouldn't have this software running on any linux boxen under my care and wasn't too happy about Mandiant's FireEye either. For me they were cures far worse than the purported ailments and introduced massive vulnerabilities themselves. Reminiscent of the nursery rhyme of an old lady who swallowed a fly.
As Marcus J Ranum once commented in the source to the TIS fwtk's smap.c, you can be right, or you can work.
Fortunately I had reached the point in my life's journey* of having the luxury of choosing the former. :)
* verso la fine del cammin di nostra vita / mi ritrovai nella terra desolata, / ché la diritta via era smarrita. (Towards the end of our life's journey, I found myself in a wasteland, for the straight way was lost.) :)
Unfortunately, Crowdstrike offers a version of their software on Linux.
My end-client has it as a mandatory tool for Linux systems running in their wider environment, which is checked for compliance on a regular basis.
This does not seem to be affected by the current problems, but I have read that if you have it on a Linux system with Secure Boot enabled, it won't run as it is not signed appropriately. But even then, it does not brick the system, merely doesn't run.
Unfortunately, Crowdstrike offers a version of their software on Linux.
For quite a while I think.
I had encountered it on RHEL7 VMs (not under my care :) running in Amazon's cloud. For whatever reason it appeared from the logs to attempt to load a kernel module (and fail), and I think likewise for some BPF code, both of which gave me the heebie-jeebies - I think RHEL7's kernel was too old, or RH had fiddled with the upstream kernel incompatibly, or their SELinux rules didn't permit the loading.
From what I read and what is widely speculated (because that's fun to do), it seems like Crowdstrike released a dodgy update, and Microsoft rolled it out to production, probably without testing, and borked Azure. If that's true, there are so many questions, like why Microsoft is deploying things globally, or, if you deployed it locally and borked something, why it has a global effect - i.e., worldwide dependency on some service - AD, cough cough. But it is all speculation, and that's what we are here for, right? ...now where is my pitchfork
According to the status history, yesterday's Central US Region issue (not at all world wide) was a configuration change issue in a storage cluster. Which is very much not good, but not a malware update.
The status page does have an alert about VMs running CrowdStrike Falcon agent having problems.
Crowdstrike Falcon is a cloud-enabled item of software. As far as I can tell, it uses Cloud services to deploy the virus signatures, and also passes off some analysis and logging of issues to services run in one of the Cloud providers. They claim that this allows them to be aware of issues quicker than other anti-virus tool providers because of their use of AI.
It is not the cloud services that are the problem here, but the very nature of deploying software directly from remote servers, whether they are in the cloud or not.
My work Win11 laptop is borked too. I noticed it at the first BSOD around 2.30 Sydney time, rebooted and ran long enough to investigate the error message, figured it was a crowdstrike issue *that has happened at least a couple of times previously*, reported all this against my support ticket, then an hour or so later the infinite BSOD/reboot loop started.
At least our mob has a self-service get-your-bitlocker-key option they claim is working, so as an IT company we *may* have a decent proportion of people up and running on Monday. If they read their emails on their phones. I wonder about the coloured pencil brigade though.
Anyway, I don't feel the need to spend my weekend sorting my laptop out though, that can wait till Monday.
To be fair, it's only because (a) the bad guys will happily do so and (b) (assuming Windows) Microsoft have built the castle with no moat, a "Welcome" sign as you enter, non-existent doors and Windows (ha!) wide open, bereft of even glass.
I used to be a big fan of Avast and ZoneAlarm on our personal Windows PCs.
Over the last couple of years they have become so bloated and invasive, gobbling up all the CPU on older, lower-end machines. I've had to start removing them from our dual core and low end quad core Win 10 PCs as they are just unusable otherwise.
I have 2 PCs here on which I can compile a very large application.
Compiling from scratch on Ubuntu LTS takes about 19 minutes.
Compiling from scratch on Windows 10 takes over 2 hours.
The difference is that the Windows machine has all the IT spyware, while the Linux has just CrowdStrike.
But, IT upper management still complain that we have Linux machines that they cannot monitor - they really are the anti-work team!
Updated at 0730 UTC to add Brody Nisbet, CrowdStrike's chief threat hunter, has confirmed the issue and on X posted the following:
There is a faulty channel file, so not quite an update. There is a workaround... 1. Boot Windows into Safe Mode or WRE. 2. Go to C:\Windows\System32\drivers\CrowdStrike 3. Locate and delete file matching "C-00000291*.sys" 4. Boot normally.
Thanks Brody, great workaround. I'll ask my 5,000 users to reboot into safe mode, get around BitLocker and delete a file
Weird that a company like Crowdstrike allows non-spokespersons to put out statements like the one above. Where I work, that's a trip to HR for tea and biscuits, and a chat about my NDA.
"Threat Hunter" ≠ spokesperson
Spokespersons don't issue official communications using words like "faulty" in a reply to someone else's tweet.
And given what a shitshow this has proven to be, I'd bet a crispy tenner that Crowdstrike wants to manage the situation using people who don't pour oil on the fire.
Safe mode is perfectly possible to boot into even with Bitlocker enabled, but it asks for the recovery key first.
The decryption (recovery) key is unique and consists of 48 digits.
In an AD environment - with proper configuration - the decryption keys can be stored in the AD schema. If computers are enrolled into Intune then the keys are in Azure.
In a home environment or without (Azure-)AD you can still enable Bitlocker - the wizard will prompt to save the recovery key either into your Micros~1 Account (online); save it into a file NOT on the drive you are going to encrypt; or print it - the encryption process will not start until one of these is done.
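(And for the AD case, pulling the recovery passwords out in bulk is scriptable, provided you have an account permitted to read them and a domain controller that's still standing. A rough sketch with the Python ldap3 module - server, OU and credentials are obviously invented:)

    from ldap3 import Server, Connection, SUBTREE

    # Recovery keys are stored as msFVE-RecoveryInformation child objects of
    # each computer account; the parent DN tells you which machine a key is for.
    conn = Connection(Server("dc01.example.com"),
                      user="EXAMPLE\\bitlocker-reader", password="...",
                      auto_bind=True)
    conn.search(search_base="OU=Workstations,DC=example,DC=com",
                search_filter="(objectClass=msFVE-RecoveryInformation)",
                search_scope=SUBTREE,
                attributes=["msFVE-RecoveryPassword"])
    for entry in conn.entries:
        print(entry.entry_dn, entry["msFVE-RecoveryPassword"])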
InTune forces the key to get changed from time to time, so the printout is useless. Nobody has their own key.
So, someone in the IT dept needs to get the keys out of AD. Every key.
Then somehow get those keys out to all the employees - many of whom do not have any access to corporate communication channels because their company PC is BSOD.
Then, change all the keys, as it's certain that miscreants got hold of some of them, defeating the entire point of bitlocker.
So, someone in the IT dept needs to get the keys out of AD. Every key.
Using a machine that is either built afresh, in which case it may not have network credentials, or a standard build if they have PXE booting available.
Otherwise the machines you need for recovery are just as buggered as the systems you're trying to restore, and you're facing a chicken-and-egg problem.
You know, it's almost the perfect argument to keep a few Macbooks or Linux laptops around :).
"InTune forces the key to get changed from time to time"
Intune forces rotation if rotation is configured, yes.
If the computer is not Intune'd, but a regular AD is used without the Bitlocker policies I mentioned - or it's a home computer - then the printout or the saved keyfile is needed.
I replied to someone who implied that safe mode with Bitlocker is hard/impossible - possibly because (s)he does not have any knowledge of Bitlocker and felt it necessary to let others know about his/her incompetence.
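On the rotation point above: for Intune-managed devices, rotation can also be triggered on demand - I believe the Microsoft Graph action is rotateBitLockerKeys on the managed device (do check the current docs; it has lived under the beta endpoint). A hedged sketch, with token acquisition omitted and the device ID made up:

```python
# Hedged sketch: ask Intune (via Microsoft Graph) to rotate the BitLocker key
# for one managed device. Assumes a bearer token with the appropriate
# DeviceManagementManagedDevices permission; the device ID is hypothetical.
import requests

GRAPH = "https://graph.microsoft.com/beta"
token = "<access token from your usual auth flow>"
device_id = "00000000-0000-0000-0000-000000000000"

resp = requests.post(
    f"{GRAPH}/deviceManagement/managedDevices/{device_id}/rotateBitLockerKeys",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()  # a 2xx response (typically 204 No Content) indicates success
```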
"So, someone in the IT dept needs to get the keys out of AD. Every key. Then somehow get those keys out to all the employees[...]
Look, I'm not disagreeing with you one bit. This is a lot of work.
My biggest client uses Crowdstrike worldwide. The scenario is harder than what you write, since perhaps 1% of users actually have admin rights to their laptops, thanks to good IT practices. This means that most of the affected users will need to bring their laptops to the IT service desk.
Luckily - because July is the most popular vacation month in my part of the world - I would expect many Crowdstrike users here to have evaded the problem. I know I have. (Writing this from a hammock.)
The big one will be a late Friday update to DNS or BGP on Edge Routers.
1. Too many things are using "Cloud" when it should be the users' own data centres.
2. Cloudflare is too big and used by too many.
3. Google Chromium code and standards are too dominant in browsers and other things that render HTML.
There wasn't any need for the famines in 19th-century Ireland, and to a lesser extent in Scotland and Germany. Poor people were too dependent on potatoes due to the greed of some rich people.
Bit surprised not to see the Microsoft PR machine in action.
On this occasion, it's not actually their fault, but they're taking all the flak in the mainstream press. If there was ever a justifiable time to run a press offensive defending your name, this should be it....
Agreed - anti-malware is highly privileged software; it's specifically allowed to do significant stuff, like stop systems booting for their own protection (say, if the drive is about to get encrypted...).
Ultimately, people put a lot of trust in this software, and if it fails them, they're in trouble.
The cost of this incident is surely going to run into $trillions. The airport shutdowns alone will see to that. But you can bet that the CrowdStrike licence will have the usual disclaimer for consequential loss or damage in the small print.
In any case, the chances of CrowdStrike having the resources to cover the cost of having crashed the world are infinitesimal. So the world will soon have obstructive security software that's supported by a company that has vapourised.
More popcorn, please.
Me: "Antiviral software is a backdoor into your system that should never be used and 99% of the time only exists to tick an IT Health Check box."
Companies: "So, you're saying we can install it and get that box ticked?"
Me: "..."
Nothing will be learned from this episode.
Call me cynical, but the lesson is most likely to be along the lines of "We could move our booking/record/payment/etc system off Windows onto something else and pay to have a custom app written and maintained or even bring it in-house given it's critical to our ops, but in the long run it would cost more than suffering an occasional meltdown which our insurance partly covered anyway so let's cross our fingers and hope for the best"
I like Windows bashing as much as the next guy, but it's easy to do without trying to shoehorn it in to situations where it's not appropriate.
Crowdstrike Falcon is of course also used on millions of Linux boxes across the world as well. It just happened to be the Windows build they screwed up on this occasion....
Why do major broadcasters give airtime to 'tech experts' who clearly have absolutely no idea at all what they are talking about?
Please, if you MUST broadcast crap, don't try to disguise it by claiming it came from a 'tech expert' when the only thing they're expert in is spouting crap.
BBC I'm looking at you.
Funny how we all first assumed it was Redmond. Maybe it's because they can't even keep SSL cert renewals up to date.
Even funnier that it was "anti-virus" software. The ultimate placeboware. If you are running anything Windows it will never, ever be secure. That's what Unix/Linux and firewalls are for. So all anti-virus software is a waste of resources. And money. Now, locked-down network firewalls are worth their weight in gold. As is a no-download policy / wiping and reinstalling the base image on a regular basis.
You want real security? Then it's Linux/Unix boxes ringed by firewall DMZs. And no Intel CPUs. AMD and ARM have no hidden execution modes or interesting no-longer-documented register flags. I wonder how those got there.
Feel sorry for the poor sods in IT/support who have to clean up after this clusterf*ck of truly epic proportions.
Sky News goes off air after worldwide tech outage linked to Microsoft Windows - as trains stop running, airlines are grounded and banks and IT firms hit across the globe
https://www.dailymail.co.uk/news/article-13650333/Sky-News-Windrush-TV-channel-technical-issues.html
Hmmm.
While the (latest) opportunity to bash MS/Windows (some more) is welcome, this one isn't really on them, but rather on C{l|r}o{u|w}dstrike, isn't it?
Unless of course a (proper) OS shouldn't allow such 3rd party shenanigans to begin with,
. . . . unless of course such shenanigans are necessary for AV etc applications to provide security,
. . . . . . . . unless of course a (proper) OS should not need shoring up security by 3rd party shenanigans to begin with.
Hmmm.
Probably shouldn't mention the image it conjured up for me, as it is rather un-PC*.
On an unrelated note, many people will be out looking to collect scalps over this debacle.
* hint: preface with the word "Big", possibly also "Heap" if you are into search algorithms.
I'm no expert in OS design, or security, but OSes should be concentrating on resilience. And I don't mean resilience to threats, I mean the ability not to behave like a house of cards and fall down at the first hint of a problem. Surely it can be part of the intrinsic design of an OS to detect when a file is causing a boot loop and automatically remove it? With all the advances in... everything?
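Something along these lines, purely as a sketch of the idea and nothing to do with how Windows actually implements recovery (the paths, the marker file and the threshold are all invented): count consecutive failed boots, and past a threshold move the most recently changed third-party driver out of the boot path.

```python
# Sketch only: a hypothetical "boot-loop guard". Every early boot bumps a counter;
# a successful boot would reset it. After too many consecutive failures, the most
# recently modified third-party driver is quarantined so the next boot can proceed.
import os
import shutil
from pathlib import Path

DRIVER_DIR = Path(r"C:\Windows\System32\drivers\ThirdParty")  # hypothetical
QUARANTINE = Path(r"C:\Recovery\QuarantinedDrivers")          # hypothetical
MARKER = Path(r"C:\Recovery\boot_attempts")                   # hypothetical
MAX_FAILED_BOOTS = 3

def record_boot_attempt() -> int:
    """Bump the failed-boot counter; a successful boot would reset it to zero."""
    MARKER.parent.mkdir(parents=True, exist_ok=True)
    count = int(MARKER.read_text()) + 1 if MARKER.exists() else 1
    MARKER.write_text(str(count))
    return count

def quarantine_newest_driver() -> None:
    """Move the most recently modified driver out of the boot path."""
    drivers = sorted(DRIVER_DIR.glob("*.sys"), key=os.path.getmtime)
    if drivers:
        QUARANTINE.mkdir(parents=True, exist_ok=True)
        shutil.move(str(drivers[-1]), str(QUARANTINE / drivers[-1].name))

if record_boot_attempt() >= MAX_FAILED_BOOTS:
    quarantine_newest_driver()
    MARKER.write_text("0")  # start counting afresh after remediation
```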
I'm not saying that Linux-based OSes are perfect or anything, but let's face it, Microsoft have normalised the enshittification of OSes. We expected Windows 1, 2 and 3 to be unreliable because we understood how DOS worked.
Then came Windows 95, which was essentially DOS-based. It was like a house built on foundations of sticks. It would come crashing down at the slightest provocation. This unreliability continued, even into NT-based OSes like XP. And though it's got better, the underlying capriciousness of the OS is still there. We have learnt to put up with this. And all the while all Microsoft can be bothered with is putting ads into the start menu, moving the taskbar icons around, forcing its customers to use bing, set up Microsoft accounts and switch to Edge. They're like spurned lovers acting up and being petty.
Let me be clear about this: Crowdstrike is the root cause of this problem. But the blame should be squarely placed at Microsoft's door. It's their OS that cannot look after itself and cannot repair a bit of damage from a single program running on it. We deserve better than this.
People and companies know this, and still they build critical infrastructure on it. An airline ticketing app, medical records database or card payment app doesn't need the full "power" of Windows to perform. It's a bit like deciding I want to make a cheese sandwich and going out to procure a professional kitchen with every conceivable gadget and utensil before I set about buttering the bread. I'm not daft - I understand why companies use Windows and that they have little choice, but they have to take some of the blame for this; "critical" should mean more than just ticking the "accept licence" box and hoping for the best. It wouldn't be so bad if the vendors took some responsibility for performance, but the licence that users have with Microsoft, Crowdstrike, or whoever is completely one-sided, and MS/Crowdstrike/whoever will take no responsibility nor pay any compensation for any shortcomings of their product, no matter the impact. Sure, Crowdstrike's share price is plummeting and they might suffer as a result of that, but if they go bust their competitors are no better at this, and MS is too big to go bust.
Blame the vendor of whichever software has fucked your business, by all means, but go and read the contract and you'll see that, legally, it's nothing to do with them.
Wasn't there a discussion about exactly that a few weeks ago? Like comparing software supply chains to... I forget, was it the food industry, with its many layers of checks and rules (cooling chain?). And there were people arguing against software vendors taking responsibility for their products.
This is what you get, folks.
Way back when, MS had an OS where a crashed device driver could not brick the whole OS. It was NT 3.51 and earlier.
Then the chief architect of NT 4.0 was so f*cking technically clueless (but boy was Jim Allchin a genius at MS company politics and self-promotion) that he did not know how to speed up memory copies across ring boundaries for video drivers, so he decided to stick ALL video and printer drivers in Ring 0. With the OS kernel. What could possibly go wrong? So driver BSODs, which were pretty much unknown on NT 3.51 and earlier (and I tried), became very common with NT 4 and later. Saw my first one a few days after installing NT 4. He also made totally unnecessary changes to the DDK architecture every release, causing huge churn in device drivers. Hell for both users and hardware vendors due to hard-to-find newly introduced incompatibilities. Drivers would work fine for weeks / months on the new release, then... boom. Blue Screen.
And NT 4.0 begat Win32. Which begat Win2k. Which begat XP. Etc. Etc.
He is also a crap guitarist.