Re: Sell strike aircraft not spyware
Tu quoque fallacy. One bad act does not excuse another.
9034 posts • joined 21 Dec 2007
Certainly no one who's paying any attention believes NSO Group are the only smartphone-APT vendors. Others such as Cytrox and Candiru have been exposed; some of them operate in the open. NSO have just become the most notorious thanks to a series of (unfortunate, for them) high-profile cases and the Pegasus Project exposé.
They're bad actors, and I'm happy to see them squirm; I'd be happier yet if they were shut down. But they're far from the only ones.
Different exploits for different use cases.
The NSA exploits leaked by Shadow Brokers were very useful to many people, and remain useful in many cases because lots of folks never update their systems. But they don't include zero-click APTs for current Android and iOS devices.
Pegasus really is very well-done malware, and then there are all the services provided by NSO Group once it's installed – you don't need your own penetration team to make use of it. You can buy similar capabilities from other top-shelf malware vendors, but there's nothing equivalent available for free.
The vulnerabilities exploited by Pegasus are more than adequately explained by normal programming errors. Given the state of software development, there's no need to go to the expense and risk of planting agents within the organizations doing the development. Those resources can be put to better use elsewhere.
Yes, there are a number of commentators here who seem to believe Pegasus is a static malware package that uses a single exploit each for Android and iOS. It's not. It's an evolving software product, just like other ISVs produce, and it makes use of multiple exploits that change over time.
As with all software security, this is a game of whack-a-mole.
I'm not a fan of EVs myself, but, yeah, it's not like ICE vehicles aren't full of combustible and flammable materials, hazardous waste, etc. Complex machines that have to perform a number of functions generally will be.
The spontaneous-fire issue with EV batteries is newsworthy mostly because it's novel. I have a friend who had a towing service and car lot on one of his commercial properties for several years, and while there was never a major fire there, that was at least partly luck. You get gasoline spills and whatnot at a wrecking yard. And water isn't great for putting out gasoline fires, either.
Hell, remember magnesium-block engines?
Yeah. And X11 was used properly in the days of X terminals, with mostly protocol messages flowing between the client and server, letting the ddx layer do the rendering. Very little of this Qt-style "I'll do my own rendering into a bitmap and shove it over to the server" crap.
So you didn't need nearly as much network bandwidth. And since the primitives could compress a lot of information and rendering was slow compared to today's hardware, latency was less noticeable, too. If an xterm sent 800 characters in a single XDrawString to a server, it would have a little while before the next message needed to get there.
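A rough back-of-envelope sketch of that bandwidth difference, with assumed numbers (a 16-byte request header and a 6x13 bitmap font are my illustrative guesses, not real protocol or font figures):

```python
# Back-of-envelope comparison: a protocol-level text request carries
# roughly the character bytes plus a small fixed header, while
# client-side rendering ships rendered pixels instead.

chars = 800                # characters in one XDrawString-style call
request_overhead = 16      # assumed fixed header size (order of magnitude)

protocol_bytes = chars + request_overhead

# Client-rendered equivalent: assume a 6x13 glyph cell, 1 byte per pixel.
glyph_w, glyph_h = 6, 13
bitmap_bytes = chars * glyph_w * glyph_h

print(protocol_bytes)                   # 816 bytes on the wire
print(bitmap_bytes)                     # 62400 bytes of pixels
print(bitmap_bytes // protocol_bytes)   # roughly a 76x difference
```

The exact numbers don't matter; the point is the order-of-magnitude gap between "send the string" and "send the pixels".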
"hipster slang"? There's a big disparity between the demographics of the Occupy movement (which coined the "one-percenter" label) and the hipster subculture. The former was dominated by progressives, a significant portion of them older (about 20% over 45, according to the survey by Cordero-Guzman & Schultz), and mostly lower-income (about 85% under $75K/yr). Hipsterism is dominated by upper-middle-class and affluent teens and young adults, and is culturally retrogressive, with a postmodern iconography scavenged indiscriminately from the real and imagined past. There's little ground for calling "one-percenter" a "hipster" term.
There's no reason for a judge to avoid using vernacular terms to describe a defendant's state of mind when they express the appropriate idea; in fact, when they're terms that the defendant might use, they clarify the opinion by providing relevant connotations. Using them would be very weak evidence of bias indeed, and frankly I'm not sure how you would draw that conclusion.
Legal writing is often so stodgy that it obscures its own meaning. The movement for more use of plain language in legal writing is of benefit to us all. It certainly shouldn't be obstructed by unwarranted shibboleths like this.
I agree. I've been writing code in C since 1988, before the language was standardized. I had the original K&R book mostly memorized, and most of the C90 standard memorized; I reviewed the changes to the language in C94 (which admittedly weren't much), C99, and C11. I've read the Rationales that accompany the standards. I've read (and re-read) a number of books on C development, such as van der Linden's Expert C Programming and Jones' The New C Standard.
I've written, tested, debugged, and maintained hundreds of thousands of lines of C, my own and other people's, most of it system software such as middleware and application engines. I've done device drivers, graphics kernels, compilers, debuggers. I've worked on embedded systems, micros, minis, mainframes. C is still the language I work in most often.
Software quality is one of my research areas; security is another. These are things of considerable concern to me. I do extensive refactoring and prefactoring of code to wrap the (very primitive) C standard library in higher-level abstractions and simplify implementing cross-cutting aspects.
I still make mistakes that the Rust ownership model and borrow checker would catch at compile time. I know that because I've written a little Rust and I'm gradually doing more of it.
I've worked with a great many programming languages, from pretty much all the families, from assembler (on a dozen architectures) through 3GLs, functional, OO, and multiparadigm languages. I've used scripting languages and data-flow languages and constraint languages and lots of DSLs.
Rust is pretty good. It makes a substantial contribution to improving software quality. A very disciplined C developer can indeed write C which is much, much better than run-of-the-mill C code; but automating the detection of significant classes of issues at compilation time is still better.
And it's not like hobbyists don't have all sorts of ways to explore and tinker and make things. Just look at Hackaday or similar sites to see all the great things people are doing on their own.
If someone wants to play with custom chips, there are FPGAs, which will suit many purposes. I mean, most folks tinkering in the shed aren't looking to build thousands of commercial-grade CPUs to sell, so the performance and gate counts of off-the-shelf FPGAs would probably be fine for whatever experiments they want to do. Hell, simulators are probably fine for most of it. People have implemented SPARC-64 on FPGAs, for example.
It's really not clear what the hell OP was thinking people might do at home, but looking at the vast array of projects documented on "maker" sites like Hackaday I don't think individuals or even small teams would be able to be much more ambitious even with access to a fab.
It's not always in key management. People make plenty of other mistakes, too. I note that of the attacks listed in the article, none appear to be key-management vulnerabilities as such. There are a couple of oracles that leak key material, which is a protocol implementation error; there are integrity violations, which are separate from keying; and there appears to be a vulnerability that escalates the key-recovery attacks, possibly due to the use of ECB mode. (I haven't read the paper, and it isn't clear from the article.)
There are a great many mistakes people make. Many of them have to do with key generation, key hygiene, and other keying issues; but many do not.
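The ECB point is easy to demonstrate. Here's a toy sketch (NOT real cryptography, and not Mega's actual cipher; the hash-based "block cipher" stand-in is my own contrivance) showing why ECB mode leaks structure even when the key stays secret:

```python
# Toy demonstration: a deterministic per-block "cipher" in ECB mode maps
# equal plaintext blocks to equal ciphertext blocks, so repeated
# structure is visible in the ciphertext.
import hashlib

def toy_ecb_encrypt(key: bytes, plaintext: bytes, block: int = 16) -> bytes:
    out = b""
    for i in range(0, len(plaintext), block):
        chunk = plaintext[i:i + block].ljust(block, b"\x00")
        # Stand-in for a real block cipher: hash of key||block, truncated.
        out += hashlib.sha256(key + chunk).digest()[:block]
    return out

# Two identical 16-byte plaintext blocks...
ct = toy_ecb_encrypt(b"secret key", b"ATTACK AT DAWN!!" * 2)
first, second = ct[:16], ct[16:32]
print(first == second)  # True: the repetition shows through
```

Modes like CBC or GCM avoid this by mixing in an IV or counter so equal blocks encrypt differently.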
It's quite likely various groups have examined it and found these or other vulnerabilities, and kept them to themselves. You know there's a whole industry around selling vulnerabilities and exploits that haven't been published, right? There's a good free RAND study on that business.
And researchers research the things that attract their attention. It's not like the IT-security community is rigorously organized. Someone says, hmm, today I'll poke at this thing. Read researcher blogs if you want to see how it generally goes. Some people (e.g. SANS researchers) typically chase attacks that come into their honeypots; some track down malware and analyze it (Marcus Hutchins does a bunch of this, as does someone who posts to Full Disclosure as "malvuln"); some go after particular commercial targets (e.g. Stefan Kanthak and Microsoft, particularly software installers); some follow what's hot in the news (Graham Cluley, say, or Paul Ducklin); some are more interested in people (Brian Krebs) or policy (Bruce Schneier).
So this may just be the first time a prominent research organization has had some of its people publish research on Mega. Or maybe it's happened before and you and I just didn't notice. It's a big industry.
Did I miss something?
An opportunity to learn how Diffie-Hellman (which should really be "Diffie-Hellman-Merkle", as Diffie pointed out some years ago, and arguably should be called "Diffie-Hellman-Merkle-but-only-because-Ellis-Cocks-Williamson-weren't-allowed-to-publish") key exchange works without UNNECESSARY SHOUTING.
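For anyone who wants the short version: each side combines its own secret exponent with the other's public value and arrives at the same shared secret. A toy sketch with textbook-sized numbers (p=23, g=5 are for illustration only; real deployments use large vetted groups or elliptic curves):

```python
# Toy Diffie-Hellman(-Merkle) key exchange.
p, g = 23, 5   # tiny public parameters, trivially breakable; demo only

a = 6    # Alice's secret exponent (normally chosen at random)
b = 15   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends A over the open channel
B = pow(g, b, p)   # Bob sends B over the open channel

# Each side raises the other's public value to its own secret:
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)

print(alice_shared == bob_shared)  # True: both derive the same secret
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete-log problem, which is what makes the exchange work.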
OP's post was wrong. That's the other thing you missed.
Encrypted data requires a key to decrypt it; that's what "encryption" means. Can you replace the actual key data with a procedure to generate it? Sure. Does that gain you anything, from a technical or legal perspective? It does not. By Kerckhoffs's Principle, whatever is secret is the key; making the key fancy doesn't mean it stops being a key. And laws are rarely stymied by someone being a clever dick. A judge isn't going to say "oh well done, you've got us there!". He'll just throw you in the pen for contempt.
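A concrete sketch of the "procedure instead of key data" idea: derive the key from a passphrase with a standard KDF. The passphrase and salt values here are made up for illustration, but the point stands: whatever input must stay secret (here, the passphrase) is the key, Kerckhoffs-wise.

```python
# The "procedure" (PBKDF2) is public; the secret input IS the key.
import hashlib

passphrase = b"correct horse battery staple"  # the actual secret
salt = b"example public salt"                  # can be public

key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)
print(len(key))  # 32 bytes of key material, fully determined by the secret
```

A court compelling "the key" is compelling whatever lets them decrypt, and that's the passphrase, however many derivation steps you put in between.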
It's also worth noting that the issues mentioned in the article – I haven't read the DARPA report – have all been documented in published research before. There were papers on Bitcoin network partitioning in Colyer's Morning Paper when that was still active, for example.
But reproduction and confirmation of results is useful, even if it isn't new to people who have been following the research.
I was writing X11 applications, window managers, libraries, and extensions in the late 1980s / early 1990s, and I didn't need a "wall" of documentation. There was book 0, the protocol (which was also the specification); book 1 for the libx11 API; and book 2 for the major widget sets, if memory serves. It wasn't something you'd slip into your hip pocket but it wasn't SNA, either.
As Nick said in Metropolitan, I'm not entirely joking.
But, curmudgeonly as I am, even I make use of some online services that wouldn't be feasible without CDNs or some other edge-delivery mechanism. (Someone else mentioned IPFS, but I'm dubious.) Could I live without them? Absolutely. Would I miss them if they vanished? Eh, a bit, but to be honest I'd miss good old fashioned paper books far more if I lost those.
Still, I can't pretend I get no value from CDNs. And I suspect that's true for the vast majority of people who do anything online.
Cloudflare is used for things that aren't exclusively web-related, such as DNS. Per comments above.
But, yeah, if you weren't caught out by using Cloudflare-backed DNS, you probably wouldn't have observed too many issues with your SSH connections or whatever.
I missed the outage, thanks to my time zone and working hours, but it probably wouldn't have troubled me too much since I'd still have the corporate network and there's no shortage of things I can be doing.
I have mixed feelings about Cloudflare, but they are generally quite good about explaining what went wrong. They also publish a lot of good technical content in general.
Mark Boost, on the other hand, sounds like a spoiled brat. "Everything isn't perfect! My gratification isn't immediate! How dare you!"
I've been using the public Internet since a few years after Flag Day, and I've managed to avoid panicking when I can't "access the online services that are part of the fabric of all our lives". Sometimes there are, y'know, network interruptions. Or power interruptions. Grow the fuck up, Mark.
Technically, though, OP is correct. That's not "intrinsic value". It's exchange value.
The problem with the cryptocurrency fans is, first, that they're hoping the exchange value of the particular horses they've picked, in a very crowded race for new, unregulated, volatile exchange media (many of them with various other unfortunate properties), will outperform the established top-down (i.e. government-issued) currencies.1 And sometimes they do, for the short term. But it's a highly risky position to take – and the short term is the only term we've had with cryptocurrency. No one knows what that market will look like in ten years.
And second, cryptocurrency only succeeds as a medium of exchange (and thereby has exchange value) if you can actually exchange it for something with use value. Sometimes you can, apparently; that's what made the Silk Road successful. But I sure wouldn't want to depend on it.
So government currencies have superior exchange value to cryptocurrencies now, and they may always (with the possible exception of government-issued cryptocurrencies, which, ugh, I can't even). But they're not different in the type of value they have, just in the amount and liquidity of that value.
1Of course cryptocurrencies aren't the only example of bottom-up consensus currencies, even in the modern era. There are plenty of groups of people in places like Papua New Guinea using those, and for a while in Somali markets the merchants were circulating their own notes. But historically bottom-up currencies have only succeeded in fairly small markets where there was a strong in-group identity and widely-observed mores against exploiting them.
I wouldn't pay $25 for all the BTC in the world
Really? I would.
I mean, it'd be a great line to drop at parties. "What do I do? Oh, I work in computers. Also I own all the Bitcoin. Yup, bought it all, shut the whole thing down. Just a hobby, really."
Or: "Yeah, I remember when everyone was talking about Bitcoin a few years back. Whatever happened to that?" "Well, funny you should ask..."
Also, by definition "all the BTC" would be an NFT, and $25 is pretty cheap for one of those. "Sure, I dabbled in the NFT fad, just for a lark. Only spent a few dollars on it. Got all the Bitcoins."
It's the sort of thing Hat from XKCD might do.
Also, anyone who hasn't placed a credit freeze with each of the big three credit reporting agencies (Equifax, Experian, TransUnion) should.
Anyone in the US, anyway. By law, freezing at the big three is now free. Just do it. Do it for your underage children, too; if they have SSNs, they're vulnerable.
And then freeze at as many of the smaller agencies as you can. There are dozens of them, so good luck. Innovis and Chex are a good place to start. Here's one article which lists some of them. I've only skimmed it so I can't vouch for its quality.
We've received free "identity theft protection" dozens of times, thanks to the regular parade of breaches. It's never notified us of anything. On the other hand, we've never discovered evidence of successful identity theft – just the occasional compromised debit or credit card details (which has been a widespread problem in the US thanks to foot-dragging on adopting EMV).
Usually what it means is we can expect a flurry of offers to start paying for the "service" in a few months.
In what context have you seen CAPTCHAs used as a security mechanism to prevent malware from impersonating you?
Every use I've ever seen of the damn things is an attempt to block bots from 1) creating accounts or 2) posting fake UGC.
CAPTCHAs were a bad idea when they were invented and have gotten steadily worse, because of course they degrade into problems which are easier for machines than they are for people. Anything that helps get rid of them is fine with me. (I am not an Apple user. Haven't liked anything they've done since the //e, and don't care for the corporate attitude.)
It doesn't entirely "get rid of the problem".
Using a private code repository that's not shared with people and organizations outside yours reduces the attack surface and risk quite a bit, yes.
Using an in-house code repository that's not accessible on the public Internet reduces it further. But as we know, the "egg model" (hard network perimeter, soft inside) fails all over the place, because some attackers do get in, and then they pivot and elevate. So security is improved but there are still serious vulnerabilities.
Using a code repository that is just a code repository and not some glorified all-in-one mess of repository and CI/CD system and code-review tool and problem-ticketing tool and probably there's a flight simulator in there somewhere, like GitHub Enterprise, considerably reduces the attack surface and further improves security.
Using a code repository where some developer hasn't broken the permissions mechanism with a random change that wasn't caught until an external security researcher looked at it improves security.
You can never get to absolutely secure – there's no such thing. But, yeah, not using public fucking GitHub certainly improves the situation.
Seventy-three million developers can be wrong.
the APE format leverages that by embedding a binary structure that can alternately be interpreted as a script loader or as an executable container
Yes, plus some other goodies.
Basically, PEs are recognized on Windows by the magic number in the first two bytes, which happens to be "MZ" (for good historical reasons) in ASCII. There are similar magic numbers for various binary executable formats used by many other OSes. (Claiming "any x86 OS" is clearly a little broad, because who knows how many people have implemented their own experimental OSes with crazy formats and conventions.)
Bourne shell scripts have to contain legal Bourne shell commands, but they don't have to start with anything in particular, because the interpreter ("hash-bang") line convention didn't yet exist when they were introduced.
So, when a POSIX OS is asked to execute a file, it sees if it starts with the magic number of any of the executable formats it knows how to run. If not, it has to try to run it as a Bourne shell script. (It can't really apply any heuristics except that if the first byte is binary 0, it can't be a valid Bourne shell script. Otherwise, in POSIX land, even a 2-byte file could be a valid script, because you can use any character other than NUL or / in a filename, and the script might just execute another file. But I digress.)
So, I can start a Bourne shell script with "MZ" and it looks like a PE to Windows. If I write that script carefully, I can actually make it both a valid Bourne script and a valid PE, because they're both pretty forgiving. In the case of APE, the script starts off by turning that MZ into the beginning of a variable name, as you can see if you follow the link in the article.
Then, if it's running as a PE, great; you just stick in the various PE sections after redirecting around the script stuff. If it's running as a script, it can do various things to get itself re-executed as the proper sort of file.
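The type-sniffing logic described above can be sketched in a few lines. The magic-number table here is illustrative and far from exhaustive (real kernels know many more formats):

```python
# Sketch: classify a file by its leading bytes, falling back to
# "try it as a shell script" the way a POSIX exec does.
MAGICS = {
    b"MZ": "PE/DOS executable",
    b"\x7fELF": "ELF executable",
    b"#!": "interpreter script",
}

def sniff(data: bytes) -> str:
    for magic, kind in MAGICS.items():
        if data.startswith(magic):
            return kind
    if data[:1] == b"\x00":
        return "not a valid shell script"
    return "try it as a Bourne shell script"

print(sniff(b"MZqFpD='\n..."))          # PE/DOS executable
print(sniff(b"#!/bin/sh\necho hi\n"))   # interpreter script
print(sniff(b"echo hi\n"))              # try it as a Bourne shell script
```

Note that the first example is exactly the APE trick: bytes that read as "MZ" to Windows and as the start of a variable assignment to the shell.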
And then there's the web-app zip trick, which is one of the first tricks you learn when writing polyglots, and one that was used by Phil Katz's original PKZip (in the pksfx utility). A zip file has its metadata at the end, and the directory in the metadata contains offsets to the compressed contents. So anything can come before the zip data; decompressors will start at the end of the file and then jump into the middle of it. So you just take your executable and append the zip data to it, and you have both an executable and a zip archive.
I do this occasionally with a PNG image and a zip archive. The PNG image is of text that reads "This is a zip file. Rename it with a zip extension and open it again." It's handy for sending zips to people whose email clients block them, for example.
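You can demonstrate the appended-zip trick with nothing but a standard library. The prepended bytes below are a stand-in for a real PNG or executable:

```python
# Because zip readers locate the central directory from the END of the
# file, prepending arbitrary bytes leaves the archive readable.
import io
import zipfile

# Build a small zip archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("readme.txt", "hello from inside the zip")

# Prepend some other payload (pretend this is a PNG or a PE).
polyglot = b"\x89PNG...pretend-image-bytes..." + buf.getvalue()

# The combined file still opens as a zip.
with zipfile.ZipFile(io.BytesIO(polyglot)) as zf:
    recovered = zf.read("readme.txt").decode()
print(recovered)  # hello from inside the zip
```

This is the same property self-extracting archives have always relied on: an executable stub in front, zip data behind, and both interpretations stay valid.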
Nah. Polyglots already existed before this. Small hosted execution environments existed before this. Unless you're a fairly feeble exploit developer, this is not new.
As I wrote above, it's good work. It's not breaking new ground, particularly not in the world of advanced malware (APTs and such).
It's a fine technical accomplishment. I'm not sure it rises to "remarkable" if you follow this sort of work. Like, say, PoC||GTFO 14, which is a PDF (of the issue's contents, of course), a zip archive with the samples from the articles, and a Nintendo ROM with a playable game, all in one file.
And the text of the PDF includes the PDF's own MD5 sum, so they had to calculate an MD5 collision on top of their three-way polyglot. (Because, of course, if you take the MD5 hash of a PDF and then add it to the text, you'll change the hash.)
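The chicken-and-egg problem there is easy to see for yourself:

```python
# Embedding a document's own hash changes the hash, which is why the
# PoC||GTFO crew needed a collision rather than a simple append.
import hashlib

doc = b"The MD5 of this document is: "
digest = hashlib.md5(doc).hexdigest()

new_doc = doc + digest.encode()
new_digest = hashlib.md5(new_doc).hexdigest()

print(digest == new_digest)  # False: adding the hash changed the hash
```

MD5's known collision weaknesses are what make escaping that loop practical at all; with a sound hash function, it wouldn't be.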
So, yes, congratulations to Justine for some fine hackery in the best sense, attending to how things work at the low level and bumming bytes. Good stuff. Not astonishing, though.
All respect to Justine, but creating polyglots is not "computer science" under any useful definition of the term. It's good old-fashioned hacking, of the sort that could get you a nice article in 2600 or PoC||GTFO.
Indeed, if you read the PoC||GTFO collections you'll find plenty of examples of similar work.
Hell, some of the DeFi platforms and other cryptocurrency outgrowths (cryptocurrency exchanges and funds, DeFi insurers, NFTs, ...) make CDOs look almost sensible. I mean, at least with CDOs you could roughly gauge your exposure by what tranche you were in and what sort of diversity might be in the underlying securities. And even doomed mortgages have some real estate backing them, even if it's overvalued.
Many of the cryptocurrency enthusiasts seem to be not so different from the Sovereign Citizen true believers, in that they live in a fantasy world where recording things has magical power, and nothing more is necessary to win any argument. But I suppose that's not surprising – that's hardly the only point of resemblance.
Oh yeah. Just search for "flash loan" on web3isgoinggreat if you want some representative examples. (Not all the flash-loan attacks involve increasing stake for a voting takeover, but some do.)
Those suffering a schadenfreude deficit may want to head over to web3isgoinggreat occasionally for more like this story, by the way. It's generally pretty horrortaining.
It's not impossible, but it's really hard to tell (even with the advances that have been made in de-anonymization of cryptocurrencies and related tech). Thanks to the 2021 bubble, even many individual investors ended up with notional hundreds of millions of dollars' worth of cryptocurrencies. So this could be J. Random Dude.
Biting the hand that feeds IT © 1998–2022