Re: I remember
SimCity.
(They actually created a special memory allocator mode just for SimCity :) -- no memory freed until the game exited (because it would read memory after freeing it) )
You need to also see his many other efforts, eg his briefings to government etc, variously reported over the last 3-4yrs. He has a very standard mental problem.
Eg: if you are getting near-daily reports of city-wide incitement to death in 3+ major cities (over 40% of Australia's population) and nationally -- public marches; violent attacks on people and their property, driven by a foreign ideological basis itself explicitly anti-pleb, anti-West, etc; people collared in the process of implementing mass shootings/bombings; and so on, all now at SUCH a level that even the media feels it must report bits of it to avoid losing any more credibility -- but publicly you try to deflect by pointing at, and declaring Top National Priority, one --perhaps as many as two-- dozen wannabe LARPers who come together occasionally in one already-messed-up city to pwetend to be weal-wife white Nazis, then you are operating according to priorities starkly at odds with the nominal purpose of your job.
(Anyone else remember how Ken Thompson backdoored every unix? Twiddled a core compiler.)
There's another major risk that's arisen recently: not obviously nation-state, but ideological, and a lot larger/wider than most nation-states can manage.
You may have noticed the Rust OSS community has been hijacked by people you could generously describe as psychotic: anti-pleb, anti-West, anti-personal-choice, anti- pretty much everything that's created the society & societal wealth that lets them play on computers with their adopted Language Of Virtue.
You may also have noticed that they've done a systemd: forced out previous code versions in favour of their own language's copies of same, across multiple distros.
This is parTICularly insane, given that their versions are regressions: failing even basic Unit Tests(!).
It is also parTICularly odd that they've focussed in the first instance on very simple&reliable but core utilities which every box and —more importantly— every complex or large installation critically relies on.
Eg NTP. Heard of any egregious memory safety bugs in NTP in the last many years? Nor have I. But tell you what, if every box in a high-availability replicating database backend switched to different times all of a sudden, WOULDN'T we have fun watching the front-end's service-provision collapse in a shower of "argh".
An oddly fawning Ubuntu team post credited the entire drive -- swapping out long-standing, battle-tested, functional code for test-failing Rust regressions -- to one particular group. That group and its funders likewise seem obsessed with swerving the usual memory-bug-risky culprits in favour of taking over core low-level tools which lend themselves very well to sabotage. sudo, for example.
Check out Trifecta Tech Foundation, then their funders. Eg, one of their funders proudly boasts that their code has now completely replaced the NTP servers for Let's Encrypt.
Worth noting that Rust's compiler is now Rust. Anyone else remember how Ken Thompson backdoored every unix?
HE did it innocuously, just as a mental exercise and also to point up the security exposure.
Diametrically-opposite-wise, given the anti-pleb psychosis (proudly) displayed by many of the Rust community, we are looking at all but a handful of Linux distros now carrying material risk of that exposure being not just created but "weaponised".
I would not be deploying a Rust-affected Linux for anything critical until I'd seen the results of some pentesting teams reverse-engineering the binaries.
It's a lot like people today (especially on this forum) praising Libre Office et al as the equal of MS Word or --god help them-- MS Excel.
"Well...if that's your idea of productivity...and that's all you know of the larger world...then...yeah. Yeah, that's absolutely a great idea.
"...in your tiny, tiny context."
BeOS had it, that I've used.
MacOS UI & GUI (the former VASTLY better than anything available since -- it got its "reputation" for crashing (without data loss...) because EVERYONE was hacking their kernel; have a think about how many people you know who are comfortable doing that nowadays, then at what tech skill level they're operating...), but sh*t-off-a-shovel speed and a file system that was almost a relational View on a database.
But Gassée crippled Apple's core Culture permanently (first-mover privilege/impact), massively exaggerating Jobs's own framework, and in turn crippled his own company.
Alas.
The insta-downvote is rather amusing, if you know the larger context.
Simple and unarguable fact, copypaste&verify repeatable, pointing out & evidencing the Syndrome now infesting much of OSS:
sought to dismiss via exogenous mechanisms but without the balls to attempt to engage,
because they knew they'd have their arse handed to them.
Finally had time to check a hunch -- I was right. The EOL bug is NOT present in the text search.
The literally insane dependency on LANG affecting Binary search at all, let alone massively, was the clue. Binary should be simpler to search for&in than text (simply go straight to the search code), but instead it's behaving like they've actually used the text code but bandaided some sort of pre-/post-/both- processing around it. Code structure 180º inverted from reality structure. And they mungled it; assumption/understanding mismatch somewhere.
Demo:
Show that String grep can handle what Binary grep can't: match patterns across/including EOL/LF/ctrl-J/0x0a.
0.1 Test file "test_text.txt" of:
abcdef
ghijkl
mnopqr
st
0.2 Put eeeeeevil binary (ctrl-J) in a var for readability:
EOL="
"
1.1 & 1.2: run String grep across First and Second EOL respectively.
echo "== 1.1: grep across First EOL =="
grep -o "def${EOL}g" test_text.txt
echo "-----"
echo "== 1.2: grep across Second EOL =="
grep -o "jkl${EOL}mn" test_text.txt
echo "-----"
OUTPUT:
== 1.1: grep across First EOL ==
def
g
-----
== 1.2: grep across Second EOL ==
jkl
mn
-----
Bingo and LOL.
And people wonder why I warn about the exponentially degrading attitude of coders/communities, and hence the quality of code, nowadays.
>those aren't bugs, as grep is designed to work on characters and data lines after all - not binary values.
Incorrect. First time I've heard anyone attempt that description since I started using grep in my first unix R&D job in 1991. Sounds like retconning to "explain" why the GNU-version bug that "doesn't exist" now DOES exist and "here's why that's a good thing". Revisionism & ego over reality.
grep is a pattern matcher. The clue's in the name. It has special treatment of text, making allowances for text-only concepts such as "lines". But that's it so far as "characters and data lines". The only fiddle we had grepping binary in the 90s was the usual faff getting it cleanly into the pattern strings from the command line (cf. above).
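(Aside: if you genuinely need to match across line boundaries, modern GNU grep's null-data mode will do it -- with no NULs in the file, the whole file counts as one "line". A minimal sketch, assuming a reasonably recent GNU grep built with PCRE; older versions refused to combine -P with -z:
printf 'abcdef\nghijkl\nmnopqr\nst\n' > test_text.txt
grep -zoP 'def\ng' test_text.txt | tr '\0' '\n'
That prints def<LF>g as ONE match spanning the newline, not two separate pattern hits. Which rather underlines the point: the engine is perfectly capable; it's the default line-chopping front-end that gets in the way.)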
But it's actually TWO bugs, not one.
I can confirm that GNU grep 2.6.3 does NOT have the second bug: UTF-8 search works fine in 2.6 across at least the full 256-byte range, but collapses at 128 in 3.11.
However it DOES still have the first bug: in the context of my above test snippet, patterns 10 & 11 still fail.
So GNU grep has a long-standing bug (at least 16yrs): an inability to search for, or in/around/through, the EOL character (linefeed/ctrl-J/0x0a) in ~all circs.
I was fiddling an irrit in my syntax there, and idly thought to start the count from 0. And... there's ANOTHER grep bug in there, which I have absolutely no explanation for: patterns 10 & 11 fail.
All others work correctly, up until file byte 128.
SCRIPT: slightly prettier output
## SAME BUT DONT SUBSET REGION: START AT 0
for (( x=$((0x00)); x<$((0x8f)); x++ ))
do
p=`printf "\x5cx%02x\x5cx%02x" $x $((x+1))`
echo -n "$p: "
grep -c -a -o -b -P "$p" test
done
OUTPUT:
\x00\x01: 1
\x01\x02: 1
\x02\x03: 1
\x03\x04: 1
\x04\x05: 1
\x05\x06: 1
\x06\x07: 1
\x07\x08: 1
\x08\x09: 1
\x09\x0a: 0
\x0a\x0b: 0
\x0b\x0c: 1
\x0c\x0d: 1
\x0d\x0e: 1
\x0e\x0f: 1
\x0f\x10: 1
.
.
.
Errr... Now... THAT's a weird one. Not just a size-limit cockup; also some sort of (probably separate) interaction effect? Which matches your own test's (non) symptoms.
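(A quick way to confirm the common factor, for anyone following along: patterns 10 & 11 are precisely the two containing the LF byte, 0x0a. Single-byte probes either side behave exactly as that predicts -- a sketch against the same test file:
grep -c -a -o -P '\x09' test
grep -c -a -o -P '\x0a' test
The first prints 1; the second prints 0, because grep strips the line terminator from each line before matching, so a bare LF can never match. Patterns 10 & 11 each straddle or start with the very byte grep has already thrown away.)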
So... that's THREE behaviours showing (possibly more, but) at least TWO showstopper bugs in grep.
>I suspect that 128-byte limit only applies to ancient proprietary Unix grep implementations
GNU grep 3.11 is what's installed for me, 3.12 for you. The 3.12 release notes show only a documentation change relating to binary search.
BTW: I ran across various comments online saying this used to work fine in old grep (pre-2.30ish?) but no longer does. So it's a NEW bug created by GNU, not "ancient proprietary" etc.
Problem with your test is you've chosen a size exactly WITHIN a "good" size. Bonus is that you've accidentally shown a peculiar interaction of the bug with the pattern size AND the offset(-size?). (OR everything intra-patternspace running past a 128-byte file offset is failing silently? You could test this by changing ONLY file byte 128 -- see the sketch below.)
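(For that single-byte probe, a one-liner sketch -- the fill byte 'A' is an arbitrary choice of mine; conv=notrunc is the important bit, making dd patch in place instead of truncating the file:
printf 'A' | dd of=test bs=1 seek=128 count=1 conv=notrunc
Then re-run the pattern loop and see whether the matches past offset 0x80 come back.)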
Here's a much better test+demo of the problem:
SCRIPT:
## CREATE TEST-FILE
# - 256 bytes, holding byte-value 0 to 255
printf `for (( x=0; x<256; x++ )); do printf "\x5cx%02x" $x; done` > test
#
## GREP FOR 2-BYTE PATTERN ALONG ENTIRE FILE
# - pattern = {byte,byte+1}
# - just do region around bug-start point: 0x70-0x8f
for (( x=$((0x70)); x<$((0x8f)); x++ ))
do
p=`printf "\'\x5cx%02x\x5cx%02x\'" $x $((x+1))`
echo -n "$p "
echo $p test | xargs grep -c -a -o -b -P | cut -d: -f1
done
(Hmph. ElReg's code tag is swallowing the indenting, and the pre tag is the same but double-linefeeding)
OUTPUT:
'\x70\x71' 1
'\x71\x72' 1
'\x72\x73' 1
'\x73\x74' 1
'\x74\x75' 1
'\x75\x76' 1
'\x76\x77' 1
'\x77\x78' 1
'\x78\x79' 1
'\x79\x7a' 1
'\x7a\x7b' 1
'\x7b\x7c' 1
'\x7c\x7d' 1
'\x7d\x7e' 1
'\x7e\x7f' 1
'\x7f\x80' 0
'\x80\x81' 0
'\x81\x82' 0
'\x82\x83' 0
'\x83\x84' 0
'\x84\x85' 0
'\x85\x86' 0
'\x86\x87' 0
'\x87\x88' 0
'\x88\x89' 0
'\x89\x8a' 0
'\x8a\x8b' 0
'\x8b\x8c' 0
'\x8c\x8d' 0
'\x8d\x8e' 0
'\x8e\x8f' 0
BOOM, there's the end of grep's binary pattern-matching range: hex 0x80, dec 128.
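(Given the insane LANG sensitivity noted above, worth probing whether the locale is the trigger for that exact cutoff: 0x80 is where bytes stop being valid single-byte UTF-8, which smells like the pattern engine being flipped into UTF-8 mode by the environment. A one-line check, same test file:
LC_ALL=C grep -c -a -o -b -P '\x80\x81' test
If that prints 1 where the unforced run printed 0, the "128 wall" is the UTF-8 locale poisoning the match, not a hard size limit.)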
(BTW, just another tip for anyone who ever finds themselves suddenly dropped in this situation: ntfscluster is crucial for this sort of binary reconstruction work. Maps physical device addresses to logical files; bit of twiddling the output and you can pin down which parts of which files are now all 0s, then go hunting.)
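(A sketch of that workflow -- the device and cluster number here are placeholders, not from my actual recovery, and this assumes ntfs-3g's ntfscluster with its --cluster option:
ntfscluster --cluster 123456 /dev/sdX1
...tells you which file, if any, owns that physical cluster on the partition; loop it over the zeroed range and you have your casualty list.)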
In the event you're manually rebuilding NTFS files in binary by hand,* be aware of a VERY surreal design failure. It's in $Secure, one of the MFT's 12 metadata files (one of which contains 100% of YOUR files -- which is pretty meta). $Secure has internal replication/backup, which is good. Each chunk is written out as 128 clusters (4kb each), broken down as 2x64 clusters, the 2nd 64 being a backup: a replication of the 1st.
The primary copy (A) is contiguous on disk. The replication copy (B) is sprayed randomly all over your disk.
So far, so good.
PROBLEM:
Clusters A1..A64,B1 are written contiguously as a 65 cluster strip, then the remaining 63 (B2..B64) are sprayed.
That is, the first cluster of the backup/replication copy is attached to the end of the primary copy, is physically contiguous with it on disk.
So anything which damages that chunk's primary region, actually cripples BOTH the primary copy AND the backup/replication copy.
As in, you've lost the lot simply because the backup was physically joined to the primary.
As in, they just completely destroyed the entire POINT of the intra-chunk backup/replication. By this one bizarro choice.
Utterly beyond belief, given the excellent intelligence in and effort on the rest of the file system design. Why go to all that brilliant effort & care, then arbitrarily cripple it with a bodge like that?
No idea, but something to be aware of.
Fortunately the first cluster is fairly constant relative to the remaining 63 clusters, so I was able to manually re-construct it with a bit of research & crawling over other files, then just find & gather the other 63 surviving clusters across the rest of the disk and rewrite the 64+1 chunk.
.
I do not recommend doing any of this voluntarily, by the way.
(Although you DO develop enormous respect for the file system. Re-interleaving wildly semantically different bytes with everything abutting --semantic meaning switching at offset boundaries-- felt very much like sticking my hand into a running sewing machine to repair it.)
.
* as to WHY... A new bug in an old tool I'd used for over a decade. Instead of twiddling 16 bytes on one device, it ZeroedOut 512kb on a different device. My daily driver's boot disk... Thence ensued a massive deepdive into binary** ntfs I had had NO interest in learning previously.
** btw, the ONLY linux hex editor that actually works is wxHexEditor. Although its binary search appears to rely on linux's built-ins, which are stunningly broken: grep's only correct for the first 128 bytes IIRC. So wxHexEditor can't usefully search on binary, only on text.
NTFS Sparse Files can ONLY be created on Linux using ntfsclone, part of the (currently) standard ntfs-3g package.
Despite all the documentation's claims, the other tools' sparse-file functionality/options (eg dd, cp) simply don't work.
ntfsclone, on the other hand, Just Works. Quite delightful in this day and age.
I have, for example, a 62GB image of a 1TB partition which, in a hex editor, is byte-for-byte identical to the raw partition at any offset (I lost patience waiting on a cmp after an hour with no diffs, so just bounced around on known files then random jumps).
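(For anyone wanting to replicate, a minimal sketch -- device path is a placeholder:
ntfsclone -o backup.img /dev/sdX1
du -h --apparent-size backup.img
du -h backup.img
ntfsclone copies only the in-use clusters and leaves holes for the rest; the gap between the apparent size and the allocated size reported by du is your sparseness.)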
Dave Plummer said this behaviour could be replicated in his design/code if there's a bug in the code for the tray icon (now renamed the taskbar or whatever) that's "wrapping"/providing the user's non-TaskMgr window visibility & access. His DestroySelf code would have timed out on checking for that tray process then shooting it, so either it's something else, or subsequent coders on TaskMgr have cocked up the basics/deleted his timeout.
>Wow. I hope that's been fixed
The only change I've seen is that they've now actually acknowledged in their doco that that's what they do.
NB: if you know someone in that situation who's about to bite the bullet and start wiping apps to achieve 1GB free, WARN THEM THAT ONLY WORKS FOR 24 HOURS,
UNLESS:
* they enable the Developer Settings (fast-tap half a dozen or so times on the Build Number entry in Settings/About Phone)
* they switch OFF auto-update of the System/OS, in those developer settings.
If they don't do this, the following day they'll discover they've been "up"dated/downgraded to the largest OS that Android can squeeze into the space. This normally re-removes the ability to install/update, AND Android's RAM footprint just keeps getting fatter and fatter, so their perfectly fine & zippy phone is reduced to essentially a crawling single-app non-multitasker: slow as a wet weekend.
>an ARPU trick to maintain revenues once existing phones are paid off
Google has actually built one of those into the Android code itself.
After 12mths, the minimum free space required to install an app, or even update an existing one, switches from the space actually needed to a minimum of ~1GB. It's explicitly keyed to the calendar date and to that size (at least for the older Androids -- they've probably bumped it up for the newer versions as the phones get "bigger").
So after a while people get stuck with apps they can't even update, and then, as new "up"dates are demanded for poorly-thought-out data/API accesses, more and more apps become simply unusable.
I realised they were clueless parasites when they started flopping around about CISA being critical etc. Then ranting that lack of DEI etc cripples security.
Just another member of the parasite contingent, weeping and wailing and trying to get their snouts properly back in the trough.
Charity Commission has no teeth, unfortunately.
I can remember, 15-20yrs ago, they came out with a big report & media push/campaign to try to _shame_ "charities" into being even remotely the organisations people thought they were. They pointed out that something like 95% of "charities" didn't meet the Charity Commission's guidelines on the maximum percentage spent on themselves rather than on the intended recipients of the money. Eg, Oxfam was singled out as about the worst, spending over 98% of its donations on itself and less than 2% on any of the "starving in Africa etc".
Oxfam's response was to bring in a crack team of accountants who relabelled all their spending, and 6mths later they proudly announced some frankly hilariously virtuous percentage.
Along with all the other problematic behaviour, I'd rate Oxfam as about the most financially parasitic/corrupt of the major charities, albeit not aggressively anti-pleb campaigners like MSF, etc.
This is not parody:
Github: detect-fash: A utility to detect problematic software and configurations
Searches Linux systems for installations of the Ladybird Web Browser, Hyprland, or if the system is Omarchy Linux. Also contains code to detect if the user is DHH.
“systemd-detect-fash detects execution in a fascist environment. It identifies the fascist technology and can distinguish full machine fascism from installed fashware. systemd-detect-fash exits with a return value of 0 (success) if a fascism technology is detected, and non-zero (error) otherwise.”
>military veterans
No. Prima bureaucracy winners in an ossified peacetime bureaucracy.
>The *only* message was that "we're morons". Possibly hinting that any dissidents will be fired.
The former is approximately 100% accurate, but only after this cmd: sed "s/we're/you're/". And post-cmd the latter is not just correct, not just valid, but tremendously beneficial. Approximately none of them do anything militarily useful.
I invite you to read AND UNDERSTAND their military's own documents -- a century-plus of them, early warnings of same included.
The infestation you're so proud of and so defensive of has been a long time growing.
And it's not confined to the military.
In the UK, the local NGOs only give them Androids. Cheaper.
Not sure what the US NGOs were doing. After US"AID" got cut, suddenly a lot of illegal support collapsed.
Also, amusingly, various high-profile Hard-Left media people costing their employers a net many millions of dollars a year, suddenly went quiet.
Interestingly, Qatar has now been documented stepping in to replace USAID funding, and they're reappearing. Qatar continues to be the major funder of Hamas, the Muslim Brotherhood, etc.
Interestingly, the requirement to arrest & separate children from their family was created by Obama.
(Law enforcement HATE it, but no one important enough has had it high enough on their priorities to try to get Congress to reverse it.)
Likewise the "children's cages" on the border.
Rather "lovely" photographs issued by the White House at the time, of Obama inspecting & proudly showing off his creations.
My "favourite" was the surreal, high, naked, cubical, iron-bar cages inside and widely separated from a brutalist concrete shell with, walking beside them, the State Governor looking like she'd been suddenly dropped into hell then hit very hard in the back of the head with a hammer, and Obama strutting beside her looking like a cross between a rooster and the cat that got the cream.
>try to woo them
Actually, he gave them the mother and father of all kicks up the arse.
You'd know that if you put down your pre-packed marketing material and instead addressed the real world: the speech transcript is readily available online.
It *does*, I warn you, utterly scramble your script -- it's entirely orthogonal to it.
This is quite a good example. Posted by another chap here on a different topic but on precisely the same theme.
"Vibe Coding is a Dangerous Fantasy", Namanyay Goel.
“invisible complexity gap”
"you don’t know what you don’t know"
Everything looks good for a while as you burn through your buffer (luck, trust, social capital).
Then your ignorance/dismissal of the fact that your technical problem is just one tiny part of a much larger technical and social/cultural network of factors "unexpectedly" creates "unexpected" problems.
There's a very limited set of job types -- and number of jobs -- which can be done remotely, and only up to slightly-better-than-average level. (Minor exception to the last: pure IT sysadmin roles can run higher on the max-skill ceiling.) (Worth noting that the tiny subset remaining of ElReg's commentards are overwhelmingly in these subsets.)
But they're crippled on face-to-face teamwork benefits.
And they're crippled on learning speed. Death to juniors.
And it really shows, once your needs get above the median. Eg, emergency. Or just high aspirations.
And it really shows, as time goes on. Burning capital ALWAYS looks good; hell, GREAT. For a while. Then goes horribly wrong.
Rather amusing/"totally unexpected" story in Oz media recently. A chap had booked some time off, and decided to set aside a day to go and talk to the people he kept amazedly and jealously seeing out the window: having fun on the beach every day, surfing every day, etc. While HE was at work every day, working.
Standard response was: "I'm working from home."
I discovered last year, on a forensic audit of a legal matter, that anyone making full use of Salesforce is breaching most countries' data protection laws, eg GDPR.
I also discovered their sysadmin team is either sloppy or overworked.
But at least some of them might also retain an old-skool hacker's sense of humour. Eg, San Francisco servers self-geolocating to Frisco, Indiana. Could be cockup. Could be joke.
I'm leaning to joke because the particular building nominated is an isolated shack thing that looks like an overfed outhouse.
Wait for it... Wait for it...
>and backed up by gas generation
... THERE it is!
As always: the quiet mumble of "just build double your generation needs".
Then swerve the capital-cost implications, and swerve the lifetime/replacement-cycle consequences of ongoing ("sustainable!") supply.
Hey presto! If we delete the major costs and just look at this one tiny little bit, it's magic!
It's all so tiresome.