It's obviously a govt project... but
... how exactly do you spend $10B if you are going to use 'already existing' assets to build it?
In the dawn of time I downloaded something called "linux" onto recycled NT 3.1 floppies donated to our school by Microsoft. Rushed home, installed for several hours. Achieved root... typed "mv / /tmp"... and things got weird fast. Think that bug was fixed somewhere around the 0.98 kernel release.
This sort of reminds me of the retail trend of checking my ID when I buy beer. I'm nearly 50, and I'm not sure what purpose this actually serves. You can check my ID all you want, but it doesn't stop me from handing off the beer to the teenager waiting outside one bit. (I don't do this... but it all comes down to trust eventually, doesn't it?)
TLS is broken, and cert longevity has nearly nothing to do with that. It's broken because the CAs don't really do their job, either because they are run by MBAs or because they exist to be shady... and due to politics, browsers end up trusting a lot of CAs that I don't really want to trust, ever. Furthermore, unless you actually use DNSSEC fully, the domain I'm connecting to isn't necessarily mapped to an authoritative IP address... which can have a perfectly valid cert from Let's Encrypt.
Security software is just like this. It doesn't matter if it's in the kernel or runs in userspace etc. (well, actually it matters quite a lot - userspace is NOT effective and has terrible performance and compatibility problems). But by its very nature security software MUST be able to modify the running system... which means if it f's up... your system may not run anymore. This time it was a BSOD in their driver due to some quality issues, but simply blocking particular processes due to a false detection would have the exact same result on Windows (a BSOD, and a looping one at that).
The actual _issue_ here is that McAfee did all this in 2007, 2009, etc., and Kurtz was CTO at McAfee... and apparently did not take those lessons to his new organization. Security software must be built and supported with a culture of obsessive safety, not just safety from 'attackers'. Which means testing... but also looking at designs critically and avoiding ones that are going to fail in interesting ways. Culture comes from the top.
My effective tax rate is about 48%, largely due to many, many layers of taxation between me and the federal government... for which I'm not entirely clear that I'm getting a good deal in return, and my vote against such things never works... so I question if democracy is really helping me either. Joining a union just seems like putting the president of the HOA (you know the one) in charge of how much you can make. It's just another tax you have to pay to reduce your freedom to do things. No thank you.
Very much an easy thing to say... and obviously the view of somebody that has never built any software, nor had to actually pay for the result. The thing about software is it just doesn't work that way. You can design with the best of intentions, build in quality at every step, and really grind on it... and still have bugs. NASA is an interesting example. They spend some crazy amount _per line_ of code and still crash into Mars about half of the time when they do something new.
The truth is security is a manageable risk that is a rounding error against the net productivity that software brings to the user... even if it's insecure by design, especially given that insecure by design is the only _usable_ design. I mean hell... the internet isn't very secure... but we get by.
I write this kind of code for security software. I didn't write CrowdStrike, but I did write a competitor or two. Crwd seems to have taken the approach of pre-compiling search trees and directly loading those into their kernel filter driver. Every cycle counts, since the job is to eliminate system I/O events against a vast corpus of rules as fast as possible. However, I've always taken the approach of building search trees dynamically, directly from the source content on the endpoint, since you can do much more validation against what the sensor can actually support and avoid this kind of f'up. The tradeoff is that rules are shipped in source form to the endpoint and are readable by anybody that cares to look. I suspect Crwd was all paranoid about people reading them and decided to ship compiled search trees instead, which has some risky edge cases between sensor code and compiled content, as aptly demonstrated here.
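To make the validate-then-compile idea concrete, here's a minimal C sketch (the field names and rule syntax are invented for illustration, not Crwd's or anyone's real format): rules arrive in source form, and anything the sensor build doesn't actually support gets rejected up front instead of being trusted as an opaque pre-compiled blob.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical rule syntax: "<field>:<pattern>". The field names are made up;
   the point is the validation step, not the grammar. */
static const char *supported_fields[] = { "imagepath", "cmdline", "hash" };

static int field_supported(const char *field) {
    for (size_t i = 0; i < sizeof supported_fields / sizeof *supported_fields; i++)
        if (strcmp(field, supported_fields[i]) == 0) return 1;
    return 0;
}

/* Validate, then compile: only rules this sensor version understands ever
   reach the matching engine, so bad content degrades to a skipped rule
   rather than a crashed kernel driver. */
static int load_rule(const char *rule) {
    char field[32];
    const char *sep = strchr(rule, ':');
    if (!sep || sep == rule || (size_t)(sep - rule) >= sizeof field) {
        fprintf(stderr, "reject (malformed): %s\n", rule);
        return 0;
    }
    memcpy(field, rule, (size_t)(sep - rule));
    field[sep - rule] = '\0';
    if (!field_supported(field)) {
        fprintf(stderr, "reject (unsupported field): %s\n", rule);
        return 0;
    }
    printf("compile into search tree: %s = %s\n", field, sep + 1);
    return 1;
}

int main(void) {
    load_rule("imagepath:\\temp\\*.exe");    /* accepted, inserted */
    load_rule("registrykey:HKLM\\Foo");      /* unknown field, skipped safely */
    load_rule("no-separator-here");          /* malformed, skipped safely */
    return 0;
}
```

Shipping a pre-compiled blob skips that whole step and trusts that the content compiler and the sensor agree on the format forever... which is exactly the kind of edge case that bit them.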
I recently had a bit of a road trip through the 'flyover states'. Literal ghost towns everywhere... with lots of infrastructure and housing... just not enough people to keep it working. Pretty sure Missouri has the internet; nothing stops many of us from working from Missouri except the will to live in Missouri.
There isn't a housing shortage. There is simply an unequal distribution of it. Turns out if you want to live in a 'not Missouri' place you have to pay more... and anybody whining about that price might consider Missouri.
Culture is very much at the root here... and McAfee was toxic. There were lessons learned at McAfee with the 5958 incident. Some good, some bad. Ultimately 5958 led to the internal stagnation of the organization and is why McAfee just isn't really a relevant player anymore. And yes, he was in the room then as CTO.
It's my job to talk with CISOs. I've discovered there are a few types:
1) Former 'technical' person. Probably worked on a SOC team or did some red teaming at some point in their career. They view CISO as 'defend the network'. These guys tend to fail in the boardroom but tend to do a competent job with what little budget they get.
2) Former 'cops'. Law enforcement, legal backgrounds, program managers for TLAs, etc. They view being a CISO as 'risk management'. They do somewhat better with the boardroom, but also tend to be a bit brittle since they have impostor syndrome pretty hard around the technical team that reports to them.
2b) Subset of 2 that has done X things that have now 'solved security'. These are the ones that get hacked hard.
And finally, I'll take objection to the Senator's statement:
"The cyberattack against UHG could have been prevented had UHG followed industry best practices," said Wyden, concluding his rousing letter-cum-tirade. "UHG's failure to follow those best practices, and the harm that resulted, is the responsibility of the company's senior officials including UHG's CEO and board of directors"
MFA is a good thing, but the REAL question is how did somebody already have credentials? They were already breached, and they still haven't found root cause.
If you have a renewable token with access to things... that is a post-authentication token. It's a shared secret between you and whatever API honors it. If I as an attacker happen to read it out of your browser's session data... (which is exactly what an info stealer like Lumastealer, the specific variant cited here, does) I have the exact same access as you do, and if it's renewable, I can get new tokens just as easily. Info stealers aren't after your passwords as much, because to use them they need to also defeat MFA... but who needs a password if you already have access with a token?
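To illustrate what 'renewable' means in practice, this is a standard OAuth2 refresh-token grant (RFC 6749 §6), sketched with libcurl against a placeholder endpoint, client_id and token. Note what isn't in the request: no username, no password, no MFA prompt - the refresh token alone is the credential.

```c
#include <curl/curl.h>
#include <stdio.h>

int main(void) {
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Endpoint, client_id and token value are all placeholders; the token is
       whatever got lifted out of the browser's session storage. */
    const char *body =
        "grant_type=refresh_token"
        "&client_id=example-client"
        "&refresh_token=TOKEN_READ_FROM_BROWSER_PROFILE";

    curl_easy_setopt(curl, CURLOPT_URL, "https://login.example.com/oauth2/token");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    /* A successful response carries a fresh access token (and usually a fresh
       refresh token), so the access keeps renewing itself until somebody
       revokes the session. */
    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}
```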
popcnt is a very useful instruction... it returns the number of bits set to 1 in a word. This sounds trivial, but it's a huge perf boost since this is a very common thing to need to do, and the obvious for-loop alternative is fast... but not as fast. You _could_ detect it dynamically - there is a CPUID bit that indicates whether it is supported - but then you would have to dynamically replace every possible location of it with a call to some other routine, or somehow trap the exception and retry, etc. It's just not practical to support it dynamically, since the effort to do so negates the performance advantage of the instruction.
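For the curious, this is roughly what the compile-time choice looks like in C (GCC/Clang): build the hot path with -mpopcnt and you get the instruction, otherwise you fall back to the loop - no per-call runtime dispatch.

```c
#include <stdint.h>
#include <stdio.h>

/* Portable fallback: Kernighan's trick clears the lowest set bit each pass,
   so it loops once per set bit rather than once per bit position. */
static unsigned popcount_loop(uint64_t x) {
    unsigned n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

int main(void) {
    uint64_t v = 0xF0F0F0F0F0F0F0F0ull;

#if defined(__POPCNT__)
    /* Built with -mpopcnt (or an -march that implies it): the builtin
       compiles down to a single popcnt instruction. */
    printf("popcnt: %d\n", __builtin_popcountll(v));
#else
    /* Baseline build for old CPUs: no runtime dispatch, just the loop. */
    printf("loop:   %u\n", popcount_loop(v));
#endif

#if defined(__GNUC__) && defined(__x86_64__)
    /* The CPUID bit is easy enough to query; the hard part, as noted above,
       is doing something efficient with the answer at every call site. */
    printf("cpu reports popcnt: %d\n", __builtin_cpu_supports("popcnt"));
#endif
    return 0;
}
```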
All that said, the min CPU spec is really about security and the colossal screwup that is meltdown/spectre. Forcing out chips that bleed information due to meltdown/spectre/etc. and can't really be effectively mitigated (or worse, where the mitigations further drag down performance) is required to have basic security guarantees. SSE4.2 just comes along for the ride. Sure, use the older machine for something by running Linux on it... just make sure it isn't anything important.
Microsoft has done a really _terrible_ job of actually explaining the hardware motivations here. Simply terrible.
The SEC has reporting rules now. They are already being used by attackers as leverage to get people to pay ransom... pay or we'll turn you in. Even dumber, a health care provider was recently issued a fine after reporting an attack... for exposing patient records (to the attacker). As with most regulation, reporting largely doesn't accomplish what was intended and has some really perverse incentives built into it. Blaming the victim is probably not the solution.
Yeah, that is some bad ransomware data... and it sort of calls into question the rest of the report. Ransomware per Chainalysis was at least $1.1 BILLION last year (https://www.chainalysis.com/blog/ransomware-2024/), 2x the 2022 figure. That does NOT include the MASSIVE costs of incident response, with or without paying a ransom. I think you could find single incidents that cost more than $52M last year.
Well yes... This is EXACTLY why anything the government is involved with is so ridiculously expensive. You literally have to charge them 3-4x what you would charge anybody else as a paperwork tax... and since so few vendors can even get certified, it's not like 'open bids' have competition a great deal of the time; you, the vendor, having achieved the activation energy to deal with the government, can in fact name your price and they have to pay it. Not that they care, it's not like it's the government's money anyway...
It's obvious Boeing is culturally bankrupt. It's not just the MAX fiasco. And the door plug was one week after a previous 'ground them all' inspection where there were loose bolts on a safety-critical system in the tail assembly. This shit should be checked and rechecked... probably IS being checked and rechecked, and it's still wrong.
The Starliner program has also had some serious issues, and not just the ones on flight hardware where it didn't quite make it to orbit. They managed to take _checkout photos_ documenting the state of the parachutes on a drop test that clearly showed one of the parachutes was NOT connected to the airframe... and somebody then proceeded to pack the parachutes in that state. I'm sure they are ISO compliant in more ways than I could even imagine... but compliant isn't the same thing as giving a damn about quality. I mean, good news! 2 out of the 3 parachutes opened, good thing for redundancy... but wow, they had _photos_.
Exactly this. Cloud applications depend on your client application being able to keep tokens secret. Android and iOS were designed with this in mind, and generally a client app can store its tokens with reasonable assurance that other apps (aka malware) can't read them. Windows, Linux, Unix, OSX were all designed long before 'the web' was really a thing. Client apps store data as _you_. As such, any application you are running (aka malware) can read any data you can read... including your tokens. Running Linux doesn't make you safe here, it just makes you less likely to be a target in the first place... but that is only because the market share of "Linux on the desktop" is essentially a rounding error and thus irrelevant to a malware business.
Nobody cares where you live; they care where Google or whoever thinks you are living. This is so that when they use the tokens they just harvested, they can spoof the correct geolocation and not set off any alarms in the cloud services from a token performing 'time travel'. Time-travel detection (aka you are suddenly in SE Asia, despite logging in from Redmond, Washington not 5 minutes ago) is the bread and butter of cloud identity security.
https://upsight.ai/blog/beyond-passwords-decoding-the-vulnerability-of-identity-tokens
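Roughly, the 'time travel' check is nothing more exotic than distance over time between two sightings of the same token. A minimal C sketch (the cities, the 5-minute gap and the ~900 km/h threshold are all made up for illustration):

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Great-circle distance between two lat/lon points (haversine), in km. */
static double haversine_km(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371.0, rad = M_PI / 180.0;
    double dlat = (lat2 - lat1) * rad, dlon = (lon2 - lon1) * rad;
    double a = sin(dlat / 2) * sin(dlat / 2) +
               cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * R * asin(sqrt(a));
}

int main(void) {
    /* Two sightings of the same token, 5 minutes apart:
       Redmond, WA and somewhere in SE Asia (Singapore). */
    double km    = haversine_km(47.67, -122.12, 1.35, 103.82);
    double hours = 5.0 / 60.0;
    double kmh   = km / hours;

    /* Anything much faster than an airliner (~900 km/h) can't be the same
       person; spoofing the expected geolocation is how attackers avoid
       tripping exactly this check. */
    printf("%.0f km in 5 min -> %.0f km/h -> %s\n", km, kmh,
           kmh > 900.0 ? "impossible travel" : "plausible");
    return 0;
}
```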
Ransomware is a business, not a tactic. The trend in this business is towards 'surprise backups' of victim data. Ransom isn't quite the right word; blackmail is more like it.
Imagine you have gained control over a law firm; it has many people's secrets in its files and a professional obligation to protect those secrets. You could 'sell' a one-time license to the law firm so they can avoid using their backup... or you could sell an _annual_ subscription service of not telling others about all of their secrets. Any MBA will tell you that the recurring revenue is better... and it's so much harder to defend against.
I've had I think 6 EVs now:
- 1 Chevy Spark EV died in a minor crash; the insurance totaled it due to the cost of the electronics.
- 1 Chevy Bolt EV was sold on the used market at a loss - it was basically a Chevy Malibu and had tons of software glitches and piss-poor design, possibly due to the retrofit.
- 1 Tesla Model X died 30 minutes after delivery, upon arriving home. Some sort of central computer had stopped working. This was a COVID build, so Elon may have fitted the QA-failed part from the scrap bin himself. 'Twas returned to Tesla.
- 1 Chevy Spark EV died after 2 years due to the charge controller failing. GM was unable to find a replacement board after 9 months of it sitting at the dealer... I got a buyout check from them.
- 1 Kia rental EV was 'fine'... but it was a rental, so who knows what happened after I returned it following 9 months of me driving it around...
- current Nissan Leaf - still going strong. Very boring car... but I've high hopes of it having a natural end as a result.
The Chevy Spark EV is still my favorite... but I suspect the fundamental problem is that people expect more from EVs due to the price, so the normal build-quality issues stand out more, and when something does go wrong, it's very wrong. What you might accept in a $25k Malibu, you are going to be pissed about in a $37k Bolt.
The real action is in getting post-authentication tokens. All I need to do is read your tokens out of your profile directory and I _am_ you to whatever you happened to be logged into. I don't need your username or your password, and I don't care if your MFA is legit or just SMS. I'm still _you_.
Malware, and the AV engines that VT is aggregating across, are so 1990s. Attackers don't send you malware. They send you _links_ to malware, or better yet they MacGyver it from bits you already have on disk using duct tape and zip ties. In this case it was a fairly pedestrian DLL abuse to download malware as part of an update.
As such, checking your binaries against VT isn't going to flag anything, and a goodly amount of the time neither is your AV scanner. This was a multi-stage attack - the malware part that VT would be able to flag is downloaded much later in the attack.
Duh, but hardware as the reason it would be bad? Seriously? Hardware is just going to keep on working with VMware; it's nearly impossible for it not to. You should be far, far more concerned about the _virtual hardware_ becoming inaccessible. The choice is VMW or "the cloud". AWS, Google and Azure no doubt look forward to this acquisition; THAT is the bad you should be worried about.
I'm pretty sure my Irish and Scots ancestors didn't come to the USA because they woke up one morning and decided to emigrate. There were reasons, possibly including judicial murder, starvation and general religious suppression. Those same ancestors went on to homestead in the west, in some cases literally over the graves of the previous inhabitants. Apologies are owed all around... but this is an accounting that simply can't be settled. Learn and live for the future.
That's the problem though. Your control over the system is the weak point of using your local system to store passwords! You have access to all of your data all of the time, ergo so does any attacker that you happen to let in through a momentary lapse of humanity. And beyond that, the password itself isn't really that interesting; the hot new trend is local token theft. I don't need your password if you already authenticated for me - I have something better, a token! (You are NOT asked for a password on each and every API call your browser/application makes; a token stands in for it.) And since you are in control of your system, the same attacker can just read your keystrokes too.
It's all a shell game. The only "secure" device I might trust is inherently entirely out of my control because it won't let me control my own data.
Having gone through this process... the vetting is done by a 3rd-party CA issuing an EV certificate to the company. That certificate is used to sign submissions to the Microsoft signing process. (Before Microsoft did the signing directly, the CA-issued cert was used to sign the drivers themselves, since the CA had a Microsoft-issued cross-cert.) The 'Extended Validation' in an EV cert amounts to... 1) can you pay $400? 2) Do you have an attorney who will attest that you are you and answer a phone call to repeat the same?
I suspect the latter is the weak link, since it is not clear how the attorney is vetted as actually being a member of their respective bar association, nor would I trust in professional ethics equally in all places (see also: the Panama Papers). Some number of years back Microsoft was talking about doing the vetting directly and cutting out the 3rd-party CAs entirely. Probably a good move, but I've not heard much about it since.
Signing does provide security value in several ways. There is an audit chain, to some extent - somebody somewhere does in fact have to swear that they are not up to no good and sign contracts to that effect. Not everything has to be a technical solution. The second way is actually fairly important - it's very difficult to create a polymorphic driver because of the signing requirement. While there are lots of 'bad' drivers and many more abusable drivers out there... the number is not infinite, and you can in fact build effective rulesets around them.
The dystopian part here is that, as a driver developer, there is quite a lot of process to get through to ship some code. Annoying, I don't like it, but I'm not arguing against it either.
Meh. This has been going on forever; the key statement is "In these attacks, the attacker had already gained administrative privileges on compromised systems prior to use of the drivers." The only thing different here is that there is at least _some_ audit trail, due to the signing, about who wrote the code... the fact that Microsoft signed it is less relevant than it sounds (prior to Microsoft signing, this was effectively done by a cross-cert from Microsoft to particular 3rd-party CAs... net - same thing).
However, there are _plenty_ of drivers out there that export functions to just about any process to read/write kernel memory, modify/terminate processes, etc. For instance, Process Hacker was used to attack Sony Pictures years and years back. Microsoft's own Sysinternals procexp.sys can _also_ terminate protected processes via a driver (indirectly at least - it can close handles, and close the right handle and you can cause a process to exit).
The thing to actually watch out for is drivers that are not actual _device_ (as in hardware) drivers. Software drivers have their uses and there is nothing wrong with them per se... but you should audit them more closely. If you are running a driver from, say, a major AV vendor, but you don't use their software... you are probably looking at a problem. Or you have procexp.sys loaded but aren't running Process Explorer. And so on.
VMware isn't like Twitter. A fool can buy and destroy Twitter - no big deal, people can spout stuff in lots of other places... such as right here. Nothing much actually depends on Twitter's existence. VMware is unique, and things that you most certainly DO rely on depend on it; there are no viable alternatives.
It's a stretch to call it anti-trust... but there is clearly a public interest in keeping Broadcom from doing what Broadcom is going to do to VMware.
The EU, and really any government that has the public interest in mind, should look _very_ skeptically at this... not only from a competition standpoint, but a national security one as well. VMware is the only technically viable and operationally mature alternative to 'the cloud'... A Broadcom acquisition would very likely shift things to a 'big 3 cloud or nothing' set of alternatives.
I see zero reason to think Broadcom would handle VMware any differently than CA or Symantec... VMW _is_ legacy tech and sort of 'missed that whole cloud thing', despite some frankly confused efforts in the past few years to 'also run' on the cloud. Unlike CA or SYM, VMW also happens to be very, very important legacy tech and should not be meddled with.
I think the only thing holding it up is the EUC quite rightly asking wtf is going to happen if Broadcom destroys all of the non-cloud infrastructure by acquiring its sole viable vendor. (Yes, there are alternatives... if you have the time and inclination... but ESX stands alone.)
One can only hope Broadcom is going to Musk this... though it's too late, since the trust damage was already done to customers, employees and shareholders by the VMW board in even agreeing to the offer.
Not particularly relevant to the tech stack in question, but remapping around bad sectors is a very old problem and an SSD is no different. NTFS, for instance, has had a feature since the NT 3.5 era where, if there is a write error, it will mark the cluster as bad in its internal tables, remap the LBA to another free block and perform the write again. Doesn't help for read errors obviously... but it's one of those little things where, if you start to see this happening in the event log... you've got a disk on borrowed time. Then came SMART, which basically does the same thing at the hardware level; again, if you notice these events, your disk is on borrowed time. These behaviors are great... right until they aren't, since it's easy to ignore (or never see) the events that tell you of the pending failure.
Repeat after me: "Linux does not mean you are 'secure' "
... you either aren't worth attacking or you've been owned and just don't know it yet.
Security on Linux is far, far, far more difficult than on Windows. And no, I'm not talking about anti-malware; that doesn't work even on Windows and is laughable on Linux. If you consider a modern approach to security such as EDR, the number of 'potentially interesting security events' is 10x that of Windows - fork()/exec() and the unix tool philosophy mean there are many, many more objects to keep track of... AND the BIGGEST threat to Windows is PowerShell and the various other sorts of built-in script interpreters such as Office macros. Well... Linux is nothing BUT a huge interpreter; no malware need apply, just live off the land. And finally, when you consider the intersection of licensing politics and kernel code, the interfaces to build useful security controls are there if you don't mind doing everything yourself, but if you just want to buy a security service like in the Windows world... said service provider can't really give you a first-class experience due to GPL complications.
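As a small taste of the bookkeeping involved: even answering "what is running right now?" means walking /proc yourself. A quick sketch (snapshot only - a real EDR additionally needs fork()/exec() tracing via kernel code, audit or eBPF so it doesn't miss short-lived processes):

```c
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    DIR *proc = opendir("/proc");
    if (!proc) return 1;

    struct dirent *de;
    while ((de = readdir(proc)) != NULL) {
        /* Numeric directory names under /proc are PIDs. */
        if (!isdigit((unsigned char)de->d_name[0])) continue;

        char link[64], exe[4096];
        snprintf(link, sizeof link, "/proc/%s/exe", de->d_name);

        ssize_t n = readlink(link, exe, sizeof exe - 1);
        if (n < 0) continue;            /* kernel thread, or no permission */
        exe[n] = '\0';

        printf("%6s %s\n", de->d_name, exe);
    }
    closedir(proc);
    return 0;
}
```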
Like many cool things... if the first step is 'change everything': you've failed.
Successful technology improves upon the previous generations or layers in a way that acknowledges the technical history and keeps the old stuff working. See also: Windows backwards compatibility (yes, yes - they seem to have lost their way here a bit... but it built an empire for sure). Unsuccessful technology asks you to first change everything and do things a new way. See also: IPv6.
The latter _can_ work... but it has to be pretty damn compelling.
I was at McAfee during the Intel era... It was always a complete mystery to everybody wtf it meant to put 200+ MB of MD5 hashes 'into the silicon'. Though it was apparently on several executive bonus plans that we needed to increase sales of 'sockets', not that anybody in the McAfee mgmt chain had a clue how to make that happen. I'm pretty sure the confusion on the Intel side was pretty much the same.
See, 'security' as Intel understood it meant door locks; 'security' as McAfee understood it meant 'background checks'. I could see a sort of world where those two notions worked together... but customers were not that interested in paying _more_ for the privilege, and given how Intel slices up their CPU features in pretty arbitrary ways... you end up with the 'why does this brand-new computer not run Windows 11' problem all over again, or in this case 'why does my door lock fail open'. Thus ended any actual engineering attempts to bring those ideas together.
I'm not sure much has changed, but the main thing is software businesses should be software businesses.
I spent a fair amount of time yesterday learning about 'Bit Manipulation Instruction Set 2' (BMI2), and in particular why a library I'm using was causing a #UD fault on a brand-new Windows 11 laptop from Asus. The reason is that it's an 'Intel Inside' (aka 'Celeron') and... despite meeting the weighty requirements for Windows 11, lacks some Haswell-era instruction sets. (The answer is to recompile the library with some defines that do things the hard way.) Mind you, I know this is a cheap laptop; that is why I purchased it, since I'm developing software and want to know the worst-case performance scenario. Naively I was hoping that Win11 at least meant we could count on some sort of baseline of CPU features from, say, 2013. Nope.
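For anyone hitting the same wall, the 'defines that do things the hard way' pattern looks roughly like this - a sketch around BMI2's pext; the software fallback here is my own illustration, not the library's actual code:

```c
#include <stdint.h>
#include <stdio.h>

#if defined(__BMI2__)
#include <immintrin.h>   /* _pext_u64 */
#endif

/* Software PEXT: gather the bits of src selected by mask into the low bits.
   Slower, but it runs on anything. */
static uint64_t pext_soft(uint64_t src, uint64_t mask) {
    uint64_t out = 0, bit = 1;
    while (mask) {
        uint64_t lowest = mask & (0 - mask);   /* isolate lowest set mask bit */
        if (src & lowest) out |= bit;
        bit <<= 1;
        mask &= mask - 1;                      /* clear that bit, continue */
    }
    return out;
}

int main(void) {
    uint64_t src = 0x12345678abcdef00ull, mask = 0x00ff00ff00ff00ffull;

#if defined(__BMI2__)
    /* Only compiled in when built with -mbmi2; executing this path on that
       Celeron is exactly what raises the #UD fault. */
    printf("pext (hw): %016llx\n", (unsigned long long)_pext_u64(src, mask));
#else
    /* The "hard way" build the defines select on baseline CPUs. */
    printf("pext (sw): %016llx\n", (unsigned long long)pext_soft(src, mask));
#endif
    return 0;
}
```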
Utter bollocks. Cars? That is just loony.
The only party this deal is good for is Mr. Dell. VMware, its customers and its employees will suffer. Broadcom is buying an empty bag of air because it plainly does not culturally match VMware even a little bit; it literally can't help but kill VMware. Whatever promise VMware held out for customers... some sort of neutral cloud layer... well, they will find they can run on AWS and Azure just fine without it, not long after this goes through.
This reminds me of when Yahoo rejected a takeover offer from Msft. Msft's offer was way more than they eventually fire-saled it off for. It almost seemed like Musk woke up one morning, bought 9% of Twitter on Robinhood, and _then_ discovered that despite all of the noise... Twitter isn't really that central to that many people's lives. Take the money and run, fools.
Microsoft may bundle OneDrive, and be super pushy about using it... but the APIs ARE open and have been for years now. You have to provide your own sync engine and cloud... but the OS APIs are there and not welded to OneDrive. Microsoft knows exactly where the line is. iOS on the other hand....
A kangaroo court is what happens when you are just taken out back after the 'trial' and shot. I think you need to readjust your rhetorical scale a bit.
Point still stands - prosecution is a civilized way of dealing with crime. The alternative is you rob the wrong guy and just end up dead someplace. We don't want that.
The point of prosecution is that the government has to prove that you committed a crime, in a manner that requires particular rules to be followed. People are acquitted all the time (some news to that effect over here, in fact). The US justice system may not be perfect, but it is no kangaroo court either. Ducking prosecution just leaves you 'beyond the law'... which is where vigilantism comes in. I think we would all really rather keep with the civilized path to punishment.