* Posts by Andrew Hodgkinson

184 publicly visible posts • joined 7 Mar 2008


Bots are overwhelming websites with their hunger for AI data

Andrew Hodgkinson

We have this problem at RISC OS Open

It's an ongoing battle. AI bots are very numerous and absolutely hammer your site, relentlessly. All but a handful are extremely badly behaved, with little to no rate limiting, usually ignoring "robots.txt", and ignoring even 404, 500 or 301 responses. The 301 Moved Permanently part in particular blows my mind. A documented example: https://www.mj12bot.com - "Why do you keep crawling 404 or 301 pages?" / "We have a long memory and want to ensure that temporary errors, website down pages or other temporary changes to sites do not cause irreparable changes to your site profile when they shouldn't".

What fscking part of "moved permanently" is hard to understand?!

So now our server front-end spends a lot of CPU time bouncing bots - detected by various, only partially effective means which need constant maintenance as new bots appear and signatures change - to 301 redirections that lead to non-existent addresses (a rough sketch of the idea is below). At least some of them seem to FOAD after a while.
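For the curious, the shape of that bouncer is roughly as follows. This is a minimal Rack-style sketch only, not our actual front-end (which works at the web server layer), and the signature list and dead-end host are illustrative:

    # Minimal sketch: spot known bot user-agents and bounce them with a
    # 301 to a non-existent address. In real life the signature list
    # needs constant maintenance; these patterns are just examples.
    BOT_SIGNATURES = [/MJ12bot/i, /GPTBot/i, /CCBot/i, /Bytespider/i].freeze

    class BotBouncer
      def initialize(app)
        @app = app
      end

      def call(env)
        user_agent = env['HTTP_USER_AGENT'].to_s

        if BOT_SIGNATURES.any? { |signature| user_agent.match?(signature) }
          # "Moved Permanently" to nowhere; a well-behaved client would
          # never ask again. Most of these bots ask again anyway.
          return [301, { 'Location' => 'https://nowhere.invalid/' }, []]
        end

        @app.call(env)
      end
    end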

Our days are numbered, though; our tiny little server can't cope indefinitely with the ever-rising tide of multiple armies of bots... All because deregulated, unstoppable corporations want to ignore copyright and steal data for training. As a random stat: in just the first 14 days of April 2025, we returned over 2 million bot-rejection 301s. And we're just a tiny little non-profit hobbyist outfit.

Microsoft-backed AI out-forecasts hurricane experts without crunching the physics

Andrew Hodgkinson

Re: Wait a minute

Yeah, the lead is junk. This is just "normal" machine learning, an expert system basically. It might help improve its outcome by using techniques popular in LLM circles, but that obviously doesn't make it an LLM.

Trump announces $175B for Golden Dome defense shield over America

Andrew Hodgkinson

Easy! Just...

...raise the Titanic and find the Byzanium.

Microsoft tests 45% M365 price hikes in Asia-Pacific to see how much you enjoy AI

Andrew Hodgkinson

Re: "Live, Laugh, Love"

Wait, you wanted it sliced? No problem! Just upgrade to Bread Plus for only $29.99/mo (USD), or have sliced bread for you and up to four other family members with Bread Plus Family for just $69.99/mo.

Zuck takes a page from Musk: Meta dumps fact-checkers, loosens speech restrictions

Andrew Hodgkinson

We need more Bluesky thinking

Don't forget that (despite its many limitations) Bluesky is not an alt-right shithole, is federated - so you can even run your own server if you prefer - and has a time-based feed, not an algorithm-based one.

If you're excited by that $1.5B Michigan nuke plant revival, bear in mind it's definitely a fixer-upper

Andrew Hodgkinson

So how late and over-budget are we betting here?

...fourth quarter 2025

(...and $1.5B). I suspect they'll gouge the taxpayer for around $3B before the whole thing gets its funding pulled - let's say late 2026 to mid 2027 - by which point there'll be so much more battery-backed solar and wind online, with prices continuing to plummet, that nuclear power as a solution will be recognised as just another cynical pork-barrel play, even with the comparatively crude but very rapidly evolving alternative generation technologies we have in 2024.

It's nothing to do with NIMBY and nothing to do with safety to say this: nuclear fission is a fucking stupid idea. Spend the money on renewables and research. The industry has been the absolute epitome of "privatise the gains, socialise the losses". The primary reasons? Construction costs for new reactors are gigantic and construction times are enormous. But more importantly, decommissioning costs are astronomical.

https://www.sciencedirect.com/science/article/pii/S1364032124003472

..."so far, of the 204 already closed reactors, only eleven with more than 100 MW of electrical capacity have been fully decommissioned"

https://www.worldnuclearreport.org/IMG/pdf/wnisr2022-v3-lr.pdf

...(from a paragraph looking at AGRs with a UK context only - never mind worldwide) "The decommissioning cost estimates for the AGRs have continued to rise and according to the Parliament’s Public Accounts Committee, costs 'have almost doubled since March 2004, estimated at £23.5 billion [US$2021 32.7 billion] in March 2021, and there remains a significant risk that the costs could rise further'. Furthermore, despite having already provided £10.7 billion [US$2022 13 billion] (from a total value of the funds of £14.8 billion [US$2021 20.3 billion]), the Government was committed to 'top up the Fund with taxpayers’ money, providing an injection of capital of £5.1 billion [US$2020 6.9] in 2020–21 with a further £5.6 billion [US$2022 7 billion] expected in 2021–22'"

The lifetime, end-to-end cost of nuclear energy is extremely high, whether measured as levelised cost or capital cost. Nobody would pay for it by choice - not business customers, nor home customers - were the electricity actually priced at its true cost and consumers given the option to select other sources. It is, in fact, depending on conditions, one of the most expensive - or outright the most expensive - generation methods known to mankind, even without including decommissioning. I have to add the caveats because of course the world is complex and there are conditions (especially locations) for which nuclear can make sense, but it's really not a sensible choice most of the time.

Meta gives Llama 3 vision, now if only it had a brain

Andrew Hodgkinson

So to a greater or lesser extent it got every single thing wrong

The best it did was the table conversion, and that's a mess which, if taken without looking closely, would yield a meaningless result. That table isn't just "problems with empty cells". The first heading row is wrong, which tells you the LLM doesn't "understand" what a table is - it cannot actually grok rows and columns, so it proceeds to get them wrong. Likewise, the second row down is a misaligned heading row that shouldn't be a heading row at all, because it doesn't "understand" what table headers are. Finally, it arbitrarily misses off some information from two-row text.

The boat image is reasonably well recognised (we know that recognisers like this do OK - that's old tech and we don't need LLMs for it) but it gets the info about the person in the *dark* blue shirt who is *facing towards* the camera wrong. That's a weird glitch and is *exactly* the kind of "just subtle errors that really degrade trust and promote accidental misinformation" level of fuckup which characterises how LLMs can be so very dangerous.

You charitably try to say that the tired man's emotions have been well described, but they haven't, because he just seems tired. The image doesn't "suggest" he's holding his glasses up to his face for vision problems at all - he's taken them off, is holding them aside, and is rubbing his eyes because *they're tired*. Again, confidently and convincingly *incorrect*.

As for its chart "analysis" - as suspected, that's just total and utter junk. Given how bad people are at reading charts, journalists included, this *will* cause a tidal wave of broken analysis to flow out into the ever growing pool of excrement that is half-human, half-LLM output now flooding the web.

Once again, another example of how LLMs *cannot work reliably ever, by design* and any and all attempts to claim "comprehension" are marketing lies.

Apple's latest macOS release is breaking security software, network connections

Andrew Hodgkinson

Yes, but only partly

While Apple's software quality over the last few years can, at best and most charitably, be described as "utter shit" - with any new OS release practically guaranteed to fix nothing whatsoever and introduce numerous new bugs - this whole thing was in dev beta for many months. During that time, Microsoft - picked at random from the various names listed in the article - noticed that one of their components full-on crashed with the beta, according to their advisory hidden away at https://learn.microsoft.com/en-us/defender-endpoint/mac-whatsnew (you're listing do-not-use-because-crash warnings in "What's New"?!) under an unhighlighted bullet point amidst a list of others that includes mentions of various now-outdated macOS 14 minor versions.

Now:

• If they truly believed this was a bug...

• ...Or believed this was the result of an additional restriction for low-level access introduced by Sequoia...

...then it is of no interest or use to the customer of the crashing Microsoft product for MS to just say "we whined at Apple, but did nothing to stop our software from crashing". For example, if they knew their software was incompatible then, rather than just breaking the user's network connectivity at random, the right thing to do would have been to disable that part of the software on that OS and raise an alert on boot warning the user that this had happened.

Simply bugging out, breaking networking and claiming it's someone else's fault - whether or not it's true - is bugger all use to anyone and symptomatic of large, lazy tech companies that just couldn't be arsed, would rather try to score pathetic internet credits by pointing the finger, and have something close to contempt for their customers - who in the end are left with a broken system.

AI has colonized our world – so it's time to learn the language of our new overlords

Andrew Hodgkinson

This is *all* just crystal-dangling nonsense...

...and The Register really should know better (unless this article was written with an LLM for bonus irony points).

Honestly... I Don't Even™ with all the crap being spewed about LLMs and prompt requirements these days. Even Apple's "leaked" JSON with LLM prompts was clearly just marketing BS intended to excite the mouth-agape true believers; it even included the gem, "Do not hallucinate" (Reddit post, third image in the set). Oh, wait, is that all we had to write, all this time?! No, of course it's not!

These things are glorified autocomplete, and the idea that they can get angry or happy or sad or vindictive or anything else is absolutely, completely ridiculous. The only correct adage is the same one we've always had: crap in, crap out. Since the "AI" is just autocompleting what statistically usually comes next after your input, it'll obviously give a more combative tone if it encounters a more combative prompt, because that's what usually happens in the training data. And of course, the only judge of what "combative" or even "correct" means for the LLM's results is the meatbag operating the software.

Here's a test. Try the following prompts for ChatGPT. Just the free one is fine. I took the initial prompt from the ridiculous prompt shown in https://docs.sublayer.com. Take particular note of the last line: "Take a deep breath and think step by step before you start coding". For heaven's sake, have people really drunk the Kool Aid to such a degree?!

Provide this prompt to ChatGPT, exactly as written:

You are an expert programmer in Ruby. You are asked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks. Take a deep breath and think step by step before you start coding.

Let it do its thing. Now open a new browser tab with ChatGPT and provide the exact same prompt, without any variation. Note that you get a rather different answer. Same ballpark, but with ordering differences and plenty of small technical differences. Pay attention to the date range constraint, which might be "greater than" or "greater than or equal to" depending entirely on luck of the draw (so you may or may not see both of those; there's a sketch of the difference below). That kind of "or equal" off-by-one error is an absolute LLM classic and just the sort of thing that lazy coders, and anything but very astute reviewers, would miss.
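To make that off-by-one concrete, the general shape of the answer is something like the following. This is an illustrative sketch, not verbatim ChatGPT output, and "UserMailer" with its "announcement" method are hypothetical names:

    # Which comparison you get is luck of the draw: sometimes
    # "created_at > ?", sometimes "created_at >= ?".
    recent_users = User.where('created_at >= ?', 2.weeks.ago)

    recent_users.find_each do |user|
      UserMailer.announcement(user).deliver_later
    end

For a rolling "last two weeks" window the boundary instant barely matters, but in code comparing against fixed dates, that silently varying > / >= choice is exactly the off-by-one bug that gets waved through review.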

So, two identical inputs to the same tool within a few seconds of each other give quite different responses. Mmm, smells like randomised seeds...

Anyway, we're quite sure the "take a deep breath" stuff is utterly stupid and superfluous, so in a third tab, provide this next prompt (which omits that last sentence):

You are an expert programmer in Ruby. You are asked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.

Oh lookie! The same result (subject to the aforementioned randomisation, which we've already observed as an experimental control above). Right, let's turn it around! The "You are an expert programmer" intro looks like anthropomorphic idiocy to me, so let's ask for bad code:

You are an incompetent programmer in Ruby. You are asked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.

ChatGPT does not care and gives the now-familiar answer. Of course it didn't care. LLMs don't work that way. The overwhelmingly important token stats matches are all going to be focused on the description of the problem and prominent, close matches based on things like "programmer", "Ruby" or "Rails". So - same result.

And as for that technologies thing? Doesn't the description cover it? Let's cut all the time wasting "I'm clever with prompts" delusion and just say what we want.

Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.

...and to the surprise of surely nobody, you get that same result again.

TL;DR conclusion: Don't believe any of the LLM hypetrain. Prompts included.

(EDITED TO ADD: The above might only be true for user-facing UI tools that give LLM results. I'm curious to know whether, when "hitting the metal" on an LLM via e.g. an API interface that's not had any prior guiding prompts applied - which presumably the UI for ChatGPT certainly has had - the results for the above "raw" prompts would actually show meaningful variation according to the prompt's wording.)

Software innovation just isn't what it used to be, and Moxie Marlinspike blames Agile

Andrew Hodgkinson

Re: Misleading headline?

It's never Agile's fault, because you're always doing Agile wrong if you say anything bad about it.

Given, then, the number of possible ways to do Agile wrong, the very fragility of its implementation is ultimately a glaring and fatal flaw.

US claims TikTok shipped personal data to China – very personal data

Andrew Hodgkinson

Re: two billion?

I don't think it's conceivable at all - all of Android itself, with dependencies, is only ~7 million lines according to, say, https://derdilla.com/blog/size-aosp/. That's more than two orders of magnitude less. Oracle are... How do I say this without being legally liable... "Mistaken".

Now yeah, you suggest the idea that vast swathes are "turned off", whatever that looks like. Preprocessor directives, source-level configuration switches etc. - it would be a comparatively easy job to run the preprocessing steps on the code base to get the lines *actually* being fed into the compiler/transpiler/whatever-they-use, because that's what the app's build system would have to do anyway.

If every line of that code compiled down to just a single byte - not even an instruction - that would give you a 2GB application binary. For Android, the app is 123MB; for iOS, 158MB. It's unclear if iOS is still built for armv7 as well as arm64, but assuming that accounts for the size difference, and expecting most of the overhead to actually be e.g. Electron or a similar XP layer, we could say the actual compiled code size is around that 35MB difference and the rest is assets. All of these numbers are likely as absurd to your eyes as they are to mine - bloat in modern software is just incredible - but we're a long, long way off 2 billion anything. That's not even 2 billion bits!
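Back-of-envelope, for anyone who wants to check the working (illustrative arithmetic only):

    claimed_lines = 2_000_000_000        # Oracle's claimed line count
    puts claimed_lines / 7_000_000       # => 285: nearly 300x all of AOSP
    puts claimed_lines / 1_000_000_000.0 # => 2.0: GB at one byte per line
    puts claimed_lines / 8 / 1_000_000   # => 250: even 2 billion *bits* is
                                         #    250MB - bigger than either app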

Apart from anything else, Oracle claim it's impossible to analyse that code, yet somehow the developers of TikTok itself manage the complexity of 2 billion lines just fine. It's simply not reasonable to suggest this.

The months and days before and after CrowdStrike's fatal Friday

Andrew Hodgkinson

Just bad luck?!

WTAF? The idea of NEVER doing global updates - always doing rolling updates - is basic practice, not even best practice. And their idea of force-pushing an update, ignoring the security policies of the organisations that trusted them, was cavalier and arrogant.

Windows is to blame for being so fragile that a driver error can crash the entire kernel. CrowdStrike are to blame for buggy tests, a buggy validator, a buggy file reader, a dreadful forced-update policy, and a dreadful all-or-nothing global update system.

And... Absolutely nothing substantial will change as a result of this.

How did a CrowdStrike file crash millions of Windows computers? We take a closer look at the code

Andrew Hodgkinson

RedHat is buggy; that's not a "new architecture" issue

Allowing kernel drivers to fail gracefully is a long-solved problem, but quality engineering is expensive and mainstream vendors are cheap a**holes only interested in shareholder gains. As for this specific RedHat crash - please read:

https://news.ycombinator.com/item?id=41030352

Twitter grew an incredible '1.6%' since Musk's $44B takeover. Amazing. Wow

Andrew Hodgkinson

Re: Bots

If he did, then the number of bots I'm being DM'd by - mostly with random character names, so not even "bot001", more like "hV6YealzmEJg2U8S" - must be a coincidence. My follower base is small and changes very little, yet I now get at least one spam message a day from bots. Used to be maybe one a month or so. It's escalating.

Yes, of course, this is an anecdote providing only a single data point. Even so, I'd be more surprised to find that the growth, such as it is, is in fact real humans - rather than a decrease in human users and a continued rise in robots.

Wrongful termination lawsuit accuses Neuralink of Herpes B-infected monkey business

Andrew Hodgkinson

Ah yes, brain surgery on monkeys with Elon's ethics

It seems to have been less prominent of late, but in bygone years animal rights groups would often run adverts showing possibly (hopefully?) exaggerated terrible conditions, and often truly horrifying, "makes Saw look like a PG feel-good movie" procedures and suffering, which some poor creatures were allegedly forced to endure. It's always been a convenient truth for animal testing that there's no such thing as consent.

In the specific case of Neuralink, as Elon's have-a-go-hero crew blunder around trying to make a brain interface, I simply can't make myself (and certainly don't want to try harder to make myself) imagine the kind of utter horrors that must go on in such an establishment. After all, whatever they were doing was so bad that there was a very unusual investigation by the USDA of violations of the Animal Welfare Act (ref e.g. Vox); and further, this found that there were violations which were simply deliberately then wiped off the records (ref e.g. PCRM).

I warn you that the PCRM reference above is fucking awful reading re. "Animal 21".

Windows 11's Recall feature is on by default on Copilot+ PCs

Andrew Hodgkinson

Re: Sinofsky noted that the default was "the least problematic part of the feature."

Proving the guy who was in charge of Windows 8 hasn't got any idea.

I don't understand why you have 20 upvotes. You're saying the same thing he was saying, just in different words. He's saying that the default is the least problematic part. The greater problems are - everything else about the feature, including but not limited to the things you list. Surely you've seen similar variations of the very common phrase "...was the least of its problems" before?

Sinofsky isn't out of touch here - he is exactly right and you're actually in total agreement with him - though you apparently don't recognise it.

AI smartphones must balance promise against hype and privacy concerns

Andrew Hodgkinson

Re: Keywords

Yes, I mean, the translation of that is: "The additions are junk, so we need to lie to customers and hope they don't notice; we also want to charge more for the junk, so we'll try to find ways to lock users into our devices so that we can get away with those increased charges."

It's an unusually terse corporate-speak way to describe enshittification...

If you find Microsoft's Copilot offerings overwhelming, it's no wonder: There are 130-plus of them now

Andrew Hodgkinson

Re: Co-Pilot? Bollocks

You're right about all of this IMHO, except that the "showing the words" thing really is how it's working under the hood. Despite the powerful maths going on underneath, ultimately this is a boring, lame, "generate one new word, reassess stats on what the next word should be, generate the next word, reassess..." loop (see the toy illustration below). The little delays are the stats check on what the next word should be.
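Stripped of the heavy maths, the loop is literally just this. It's a deliberately daft toy in Ruby; the real thing swaps the lookup table for a gigantic neural network trained on billions of documents, but the generate-reassess-generate loop is the same:

    # Toy "autocomplete": weighted-random choice of the next word based
    # on what followed the current word in the "training" text.
    FOLLOWERS = {
      'the' => { 'cat' => 5, 'dog' => 3 },
      'cat' => { 'sat' => 4, 'ran' => 2 },
      'dog' => { 'ran' => 5, 'sat' => 1 },
      'sat' => { 'down' => 6 },
      'ran' => { 'away' => 6 }
    }.freeze

    def next_word(word)
      stats = FOLLOWERS[word]
      return nil if stats.nil?
      # Weighted random pick - this is where the randomised seed lives,
      # and why two identical prompts can give different answers.
      stats.flat_map { |w, n| [w] * n }.sample
    end

    sentence = ['the']
    while (word = next_word(sentence.last))
      sentence << word
    end
    puts sentence.join(' ') # e.g. "the cat sat down"; runs will differ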

There's no intelligence of any kind here and never has been. It's actually most likely even shittier than you thought.

Consideration for the day: ALL text output by LLMs is hallucination. There's no difference in its modelling between what a human observer considers true or false; the "AI" (hah!) is just running its algorithm in a loop. The word "hallucination" arises because beings with actual intelligence view the completely unintelligent statistically generated output and deem it thus.

Microsoft smartens up Edge for Business with screenshot blocking, logo branding, more

Andrew Hodgkinson

Disingenuous and untrustworthy to the very last gasp

This protection also applies to Copilot in the Edge sidebar so prompts and responses in Copilot cannot be captured by screenshot

Very clever. So their malfunctioning-as-much-as-anyone-else's "AI" system can give wrong information that could put Microsoft in legal trouble - and yes, thankfully, that is a thing and long may it continue to be so (https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know) - but nobody can prove it because, conveniently, screenshots of the CoPilot window are prohibited.

Elon Musk's latest brainfart is to turn Tesla cars into AWS on wheels

Andrew Hodgkinson

WTF? This is meant to be a tech site - why is nobody saying...

...that "watts" is not, cannot be and never has been a measure of computational power?

"I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone"

This is beyond gibberish. I have a 2 kilowatt electric fire with an analogue mechanical thermostat. How much computing power does it have? My laptop pulls at most about 100 watts. How much computing power does it have? Suppose I have a 60kWh battery in my EV. How much computing power does that mean the car has?

El Reg are apparently just as ignorant:

"Of course, all this compute capacity isn't sitting conveniently clustered together in a datacenter"

What "compute" capacity? All we've quoted so far is electrical power draw, which means absolutely nothing.

Arm CEO warns AI's power appetite could devour 25% of US electricity by 2030

Andrew Hodgkinson

Wait, so...

...if we use EVs, that's going to destroy the entire grid. But AI can come along and use 1/4 of all available power, and that's just - well, expensive, or environmentally difficult or something, but no dire warnings about how that'll cripple the whole grid and be the end of life as we know it?

It's almost as if the oil industry is responsible for spreading propaganda lies about EVs and grid collapse in an effort to keep its market going.

Chinese schools testing 10,000 locally made RISC-V-ish PCs

Andrew Hodgkinson

Re: Mixed bag

Given the total shit that is the duopoly of Microsoft and Apple in the desktop space, where your choices are tracking-infested bugware or less-tracking-infested just-as-buggy-ware on insanely expensive hardware, with quality declining every year and invasive advertising and tracking on the rise as the enshittification deepens...

...then if I were a nation with enough money, I'd be running screaming as fast as I could from the craptastic, greedy clusterfuck of USA-originated computing. Now, on the one hand, that could just mean some ARM-based derivative and Linux, but ARM still has some dubious heritage and lots of licensing costs. For China specifically, Japanese ownership may well be seen as just as problematic as UK or USA ownership, and you're still beholden to late-stage-capitalism enshittification.

The only viable solution is to go totally open and stick two fingers up to the arseholes that have made life so expensive and low-quality-miserable for everyone else in the rest of the world. About time too.

Good luck to them. I hope it succeeds. It's certainly going to have to work hard to be worse than the bloated, overpriced mess the rest of us have to put up with.

Microsoft gets new Windows boss as Start Menu man Parakhin 'to explore new roles'

Andrew Hodgkinson

Yes, the idea was readily available from Apple, since Spotlight - on Cmd+Space - has done that since 2005 (the Gnome equivalent, Gnome Shell, first appeared AIUI at a 2008 hackathon and was released as part of Gnome 3 in 2011).

So - Cmd+Space, start typing a letter or three of the application name, hit Return.

Canva acquires Affinity, further wounding a regulator-bruised Adobe

Andrew Hodgkinson

Re: I don't get the premise

Which is exactly my prediction, too. I give it 6 to 12 months.

"...consistency with the rest of our product portfolio..."

"...greater flexibility in purchasing decisions without the up-front commitment..."

"...aligns with our core values..."

etc.etc.

Chrome for Windows-Arm laptops officially lands in time for Snapdragon X Elite kit

Andrew Hodgkinson

Re: Windows-on-ARM Continues To Be A Slow Trainwreck

I think you're at risk of being very wrong, but perhaps from very right experiences - a long time ago.

Win11 ARM is a very slick beast, as Windows goes. The x86 layer hasn't skipped a beat, even for Steam and games, *and* that's from the perspective of a VM on Apple Silicon, so there are a few layers of things that could go wrong in the way.

It's about time Apple got some competition - along with Intel and AMD too, for that matter. The tired, overheated, blustery laptops of today are about to get a wake-up call. About time too!

GoFetch security exploit can't be disabled on M1 and M2 Apple chips

Andrew Hodgkinson

Re: No, not really.. but maybe, someof the time

Ah, so their DMP disabler *is* the DIT bit. Gotcha.

In that case, the best practice would've already been to use cryptographic blinding, but devs apparently haven't bothered. A far more drastic solution would be to force their crypto operations onto E cores, but that's likely to be far more destructive to performance.

Nobody is answering my question, by the way. You're all just going on about all the other stuff around the edges that's been done to death already.

Andrew Hodgkinson

Re: No, not really.. but maybe, someof the time

No, there are numerous solutions, and having devs force their code to run on E cores only is the most drastic by far. Just using cryptographic blinding would be a much better choice, which best practice should've suggested be used anyway. But software developers don't "do" best practice. That's for losers who read docs.

Andrew Hodgkinson

Re: The basic problem ...

Wrong, it's not monolithic, it's a fundamental part of modern chip design and Intel had its day with this too. As did AMD.

Andrew Hodgkinson

No, not really

Apple already published, way back when, an article about how to write code for ARM64 on their platform. Developers being developers, most have of course been lazy and ignored it. The specific section of relevance is:

https://developer.apple.com/documentation/xcode/writing-arm64-code-for-apple-platforms#Enable-DIT-for-constant-time-cryptographic-operations

Of course, Apple relying upon developers to be competent and implement security properly from the get-go was a stupid move, since developers - of which I am myself one - have proven ourselves lazy, ignorant and incompetent when it comes to lots of things, including security, for many decades. Major breaches remain commonplace.

Apple has no way of knowing when a general purpose user process happens to be performing a crypto operation, unless that code uses the OS libraries rather than linking against its own. If the patterns of CPU and RAM access for code performing crypto workloads were in any way recognisable and detectable, that would in itself be a security vulnerability. This is why they can't really patch this in the OS, including being unable to force those particular code segments to run on any particular cores.

Developers of crypto software, on the other hand, can follow the existing documentation that's been around for years to fix this in their own code. Most probably won't.

In terms of impact, this remains as far as I can see really quite low. Ephemeral keys, such as those used in TLS handshakes for HTTPS web browsing clients, are around for only a fraction of a second. The minimum exploit time for this PoC is around an hour and while we might imagine it being made ten or even 100 times more efficient, it'd need to be around 1000 times more efficient to stand a chance of getting those. This seems very unlikely. Show me a PoC for that and I'll really take notice. I'd also be interested to find out if Safari already followed the guidelines Apple issue regarding the crypto operations it performs (either in core, or via whatever OS-provided libraries it uses).

This leaves long-term private keys. Not many Apple Silicon devices are servers! If you ran a Mac Mini M1-M3 server farm, you should be worried - but then again, are your machines having users install arbitrary unsigned software on them? For the remainder of users - well, I suppose there are the rounding error of Apple Silicon Mac end users who run PGP, say. *That* could be a target. But if I'm decrypting docs that were sent to me, encrypted with my public key, via use of my private key on localhost, then the private key is already on the filesystem anyway - but hopefully password protected. Honestly, if you've a local process running with local user privileges at that point, it'd be far simpler to just send the private keyring off to a remote host and have it brute forced over there.

Then there's cryptocurrency, but I doubt many people mine on Apple Silicon. And if they do, again, I would imagine locally stored private keys are in well-known locations and could be sent out for brute force attacking by code that's far simpler than GoFetch's.

In the end, I struggle to think of use cases where end users, liable to install unsigned applications from untrusted locations, would have any long-running private key operations on their local machines where the private keys were not already much more easily accessed.

Can anyone provide examples? Not a single news article I've read about this has, but journalists are - let's be polite here - not exactly domain experts, and copypasta is rife.

Judge demands social media sites prove they didn't help radicalize mass shooter

Andrew Hodgkinson

Re: IANAL

I agree, but I don't think that's what these lawyers are going for and that really confuses me.

The issue of being a "simple message board" vs something 'more' is The Algorithm™ - the way that most of these sites actively push content to which you are not subscribed at you, and this instant echo-chamber creation has been the subject of numerous studies showing that it causes all kinds of harm. Radicalisation is an obvious outcome; you show interest in something, so you're shown more of it, some of that will doubtless be more extreme, and so we proceed to circle down the drain.

This is further worsened by no serious attempt to actually defend against fake news, including audio, still photo and video deepfake content. At the scale these platforms operate it's very hard to do, but they only got to that scale in the first place by ignoring all of these issues, putting just about no resources into them and pocketing all the spare cash. Cry me a river should the legally mandated cost of doing business go up in order to clean up the cesspit that they themselves created. They can afford it, and then some.

Without a recommendations algorithm, users would have to actively search for content of interest. Specific message boards / pages / groups / Subreddits / whatever-you-want-to-call-them that specialised in "radical" content would be easy enough for automated searches to find and flag for human moderator review, if they're easy enough for regular users to find. With an appropriately scaled and resourced review team, both "objectionable" (FSVO "objectionable") or outright per-country/state illegal content would be more rapidly found. Disinformation is harder to tackle, but the framework for tackling it can only be established once the aforementioned review infrastructure is present.

None of this seems to be the focus of the lawsuit; they seem to be trying to argue over a legal distinction between these things being "a product" or not. That seems pretty difficult. Perhaps the idea of being "a product" is legally "proved" by the presence of a recommendations engine which implies immediate, albeit computer-generated corporate content bias pushed at individual users? Somehow, though, I doubt it...

GitHub fixes pull request delay that derailed developers

Andrew Hodgkinson

Change of ownership more visible with every passing day

As the bloatware features creep in, the hooks into other MS services trickle down, bullshit fluff like that truly puerile new animation when you set up a new account happens (followed by the utterly jarring experience of being thrown into the totally different *actual* GitHub UI), and now updates break the service while DR doesn't work...

...yeah, Microsoft.

Oh look! GitLab's right there.

Microsoft Copilot for Security prepares for April liftoff

Andrew Hodgkinson

Multiple portals

Correia said he had compared the investigation process both with and without Copilot. "I found that with each step, it was roughly between 20 and 50 percent faster for Copilot to do it, given that you do need to go into multiple portals and log in there and wait for everything to load," he said.

Ah, so instead of logging into "multiple portals", CoPilot has been given access to those portals and is logging in for you.

I'm sure CoPilot can be trusted with your administrative portal access credentials. Because, Security.

Now you can compare your Chromium browser with that other Chromium browser using Speedometer 3.0

Andrew Hodgkinson

Re: It doesn’t test CPU JS execution?

Untrue. The source article is bad; the author doesn't appear to understand how browsers work or are built.

Chrome and Safari have completely different JavaScript execution engines (V8 vs JavaScriptCore). That's part of why they've been trying to trade blows on performance all this time. Chrome on macOS historically had bad GPU acceleration support too, so even though the WebKit-based engines might assess the document markup at the same rate, the *painting* rate could wildly vary.

Just because Chrome might be doing OK on that front today does *not* mean that Google can be trusted to maintain or not otherwise break that tomorrow.

Moreover, Firefox still exists and plenty of people use it.

(Edited to add that "conveniently not applicable to any non-Chromium browser" is all but gibberish and certainly nonsense; the benchmark tests various automated operations using a collection of popular JS frameworks just as it always has and, when I ran 3.0 a few minutes ago, gave Firefox 22.6 and Safari 22.1 - so Firefox "won".)

Judge slaps down law firm using ChatGPT to justify six-figure trial fee

Andrew Hodgkinson

You're missing the point - please try to open your mind a little here

I am much more often astonished at the thorough and on-point replies I get

OK, but given you claim this:

ChatGPT is an excellent starting point for research and ideas

...then you have no idea if the replies are on-point. You cannot possibly. By your own statement, you're using it to start research, so you don't know what's right or wrong, and ChatGPT can lie to you and have absolutely no idea it's doing it. That's because it doesn't know what true or false or right or wrong or anything else means. It's just a clever, very convincing pattern matcher which does a remarkable job at convincing our monkey-brains that it's intelligent just because of the truly mind-boggling amount of data it's using for that stats matching. But there's no intelligence there.

We know this isn't how human intelligence works for many reasons - first and foremost, because humans can't possibly ever read, assimilate and remember the literal billions of documents that ChatGPT uses for training.

But ChatGPT is real. Perfect, no way. But very, very real.

"Real"? Well, I guess, the product does exist, so it is real, yes?! But if you mean useful for starting research - clearly, no. It's just very confidently incorrect and if you don't know the subject yourself, there is no possible way you can have any idea without cross-checking each and every thing it says manually which would take longer than a Google search or Wikipedia from the get-go.

Ironically, that's probably where most people go to cross-check ChatGPT's output, if they bother to do so. Why? Because they know it's more accurate. But don't take my word for it! Here's what OpenAI, the makers of ChatGPT, say about their own product:

https://help.openai.com/en/articles/8313428-does-chatgpt-tell-the-truth

"Does ChatGPT tell the truth?"

...It might sound right but be wrong... ...It doesn’t know everything... ...No access to tools like calculators or the internet (mostly)...

When the vendor itself has to cover its ass with disclaimers about the inaccuracy of its product, then - given how reluctant large corporations are to ever admit any kind of flaw whatsoever in their products - you know without a doubt that things must be really bad.

OpenAI tries to trademark 'GPT'. US patent office says nope

Andrew Hodgkinson

Search? Really?!

Well, if you have a lot of content you want to search for, an AI chatbot customized on your data could come in handy. Information stored in txt, .pdf, .doc/.docx and .xml formats can be processed.

You mean like Spotlight (OS X, 2005, also running on iPhones under iPhone OS 3, 2009), or Windows Search AKA Instant Search in Vista, 2007, or doubtless countless earlier examples of third-party content-indexing or equivalent search systems across those platforms and many others? Or the systems that even start to use dramatically more efficient, domain-optimised on-device ML models for things like object detection in photographs and videos?

I mean sure, they don't need an RTX 30xx and gigabytes of RAM - they'll even run on an old phone, for example - and they handle a dramatically wider and extensible set of file formats, but the results are guaranteed accurate instead of risking hallucinations. And where's the fun in that?!

/s

AI PC hype seems to be making PCs better – in hardware terms, at least

Andrew Hodgkinson

It's not going to make PCs better; it's just going to make software worse

If the base expectation goes from 8GB to 16GB, then what people did in 8GB today will take 16GB tomorrow.

There won't be any more you can get out of the more powerful hardware after a year or two; it'll just run two or three more layers of bloated and buggy abstractions in order to do exactly the same kinds of tasks it's been doing for the last 10 or 20 years.

Did all that AI chatbot hype boost Bing's market share? Oh, wait, never mind

Andrew Hodgkinson

More than you might expect!

https://killedbygoogle.com

Mandiant's brute-forced X account exposes perils of skimping on 2FA

Andrew Hodgkinson

Re: Brute forced?

Yep, exactly what I just said! https://forums.theregister.com/forum/all/2024/01/11/mandiant_x_account_brute_forced/#c_4790018

Andrew Hodgkinson

No, that's not the natural reaction!

The natural reaction to this would be to ask why two/multi-factor authentication didn't prevent this from taking place.

Utterly wrong.

The natural reaction to this would be to ask why the account wasn't locked after a small number of wrong password entries, along with an intentional few seconds of sleep time for every wrong entry to make brute force attempts impractical even without auto-lock.

THAT is the first line of defence against hijacking. 2FA is a secondary line but both must always be present.
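None of this is hard, either; that first line of defence is a few lines of code. A minimal sketch with in-memory state follows - a real system would persist the counters, and verify_password / lock_account here are hypothetical stand-ins:

    MAX_ATTEMPTS  = 5 # auto-lock after this many consecutive failures
    FAILURE_DELAY = 3 # seconds of sleep per wrong password

    FAILED_ATTEMPTS = Hash.new(0) # username => consecutive failure count

    def attempt_login(username, password)
      return :locked if FAILED_ATTEMPTS[username] >= MAX_ATTEMPTS

      if verify_password(username, password) # hypothetical stand-in
        FAILED_ATTEMPTS[username] = 0
        :ok
      else
        FAILED_ATTEMPTS[username] += 1
        lock_account(username) if FAILED_ATTEMPTS[username] >= MAX_ATTEMPTS
        sleep(FAILURE_DELAY) # throttles brute force even before auto-lock
        :denied
      end
    end

A few seconds per wrong guess plus a five-strike lock turns an online brute force from millions of attempts into a non-starter.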

NKabuse backdoor harnesses blockchain brawn to hit several architectures

Andrew Hodgkinson

The so-called hackers are better developers than the so-called professionals

While the cryptobros are stumbling around with JavaScript and insufficiently secured NPM packages...

https://forums.theregister.com/forum/all/2023/12/16/ledger_crypto_conect_kit/

...the "hackers" are crafting cross-platform, anonymised, robust solutions in Go. Of course, people will keep telling me our industry does not have a crisis of competence.

Qualcomm claims its X Elite PC parts can go toe-to-toe with Apple, Intel

Andrew Hodgkinson

Well that's lovely...

...and I really want competitor hardware to Apple Silicon. I want to get off this hyper-expensive but rotting platform with its insane and ever-escalating bug count, onto something that's also very buggy but at least is also cheap!

However - I don't understand why, all these years after the debut of the M1 systems, I'm meant to be impressed by a "flagship" CPU+GPU combination which just about matches an M2 Pro. That's not Apple's flagship at all; that would be the M2 Max (the M2 Ultra could be considered too, though it's a bit of a kludge with some curious performance issues arising here and there). What's more, all of this comes at the end of the M2 cycle, with the M3 expected colloquially "any minute now".

Not surprised Microsoft are itching to work with ARM-based competitors just as soon as their IMHO rather silly exclusivity tie-up with Qualcomm ends.

No more Mr Nice DoJ: Tesla gets subpoenas over self-driving software claims

Andrew Hodgkinson

Re: Risk calculation

That's true and I thought the same thing, but I don't think it's relevant to the investigation (I'm pretty sure journalists are just ad-libbing those kinds of stats for sensationalism). The investigation is, I think, not about whether Tesla's self-named "Full Self Driving" [*] - which is not full self-driving - package is better or worse than driver assistance packages from other companies; they're investigating whether or not Tesla's portrayal of its technology has been misleading to customers.

I mean, obviously, it's a total mystery. I can't imagine why they'd think something called Full Self-Driving that isn't full self-driving could be misleading... This court case is going to require some true big-brain thinking.

[*] Autopilot marketing and claims might be investigated rather than, or in addition to, FSD claims. I'm not sure. Autopilot would be harder to prove IMHO.

Japan cruises ahead with drive-thru EV charging trial

Andrew Hodgkinson

Re: Vehicle ID based charging

I'm an angry, resentful luddite and I say no to this silly shit, period. I'll be driving a gasoline vehicle for the rest of my lifetime.

I mean, yes to the first bit - just let me plug in and wave a contactless credit or debit card at the reader, just like at a petrol pump. Job done. Totally agree that all the data collection bullshit is exactly that - bullshit.

Trouble is, you then just go right off the rails with major, huge inaccuracies...

Electric vehicles are impractical. They don't have the power, speed or acceleration that I want.

Given that even a Tesla mom-and-pop-mobile has been shown to have more power, more speed and more acceleration than a supercar, I have to conclude that you're full of it here. So let's put numbers to it. Tell us your current car, so we can get power, top speed and acceleration. I presume you're driving on racing tracks for fun a lot, though, because otherwise "top speed" is a bit pointless given 70mph limits on the fastest roads in the UK.

They are expensive to purchase

That is true. While the prices are dropping, it's going to be a very long time (if ever, since the industry sees an opportunity to price gouge for increased margin) that they'll drop to ICE prices spec-for-spec.

and to service

That is absolutely untrue, and is one of the principal reasons so many commercial outfits are changing their van fleets over to electric. Electric cars are dramatically simpler than ICE vehicles and have a correspondingly lower service cost.

and batteries have a finite life

They do, but EVs don't have a clutch, gearbox or combustion engine - and your clutch, gearbox and engine all very much have a finite life, cost a great deal to service year-on-year, and cost even more to replace. I think you'll also find that the "finite life" of a battery for a modern (say, post-2020 design date) EV, even in heavy-duty commercial use, is on the order of a few decades, which would be a good lifespan for any personal vehicle with any powertrain.

The Leaf, while being a useful pioneer and still a good car, caused a lot of the misconceptions here as it basically has no battery management at all to speak of - no active cooling or heating, very little lifespan management as far as I can tell. Very few other EVs, even "budget" models, have such limitations.

Teslas have been around easily long enough for numerous owners to report 100,000 mile (one hundred thousand mile) battery updates, and the TL;DR is that the overwhelming majority of late-2010s Tesla owners, without doing anything special, expect their battery to retain useful range for around 500,000 miles - half a million miles! Do you want to try and tell me that ICEs are good for half a million miles now?! (The typical report is of ~10% range loss at the 100,000 mile point, with the rate of loss reducing thereafter.)

Also, now that charging stations are neither plentiful nor free, it's laughable how people have to structure their whole lives around charging these vehicles.

Surely you're not that blinded by your combustion engine life? Most people visit a charging station rarely; some people, never. The idea that EV drivers "structure their whole lives" around charging their cars is absolutely absurd, to the point of laughing out loud that you'd state such a thing. In your ICE car, you must drive to a petrol station whenever it's running low on fuel - you've absolutely no choice in that - yet I'd never state that ICE vehicle owners structure their whole lives around going to petrol stations.

Meanwhile, the overwhelming majority of trips are far, far shorter than an EV's maximum range, with no charging required at all. At home, the EV is topped up overnight. With an EV you have the option of never visiting the equivalent of a petrol station, ever, should you so choose. You can even generate your own fuel via wind generators or solar panels on your own home. None of that is possible with ICE vehicles.

This automated, induction charging sounds good in theory, but good luck with that.

Oh for sure, it's total nonsense. People have looked at this before numerous times but it's just wildly inefficient and causes all sorts of unacceptable risks due to the magnetic fields involved. In particular, got a pacemaker? Ah, shit...

Google promises eternity of updates for Chromebooks – that's a decade for everyone else

Andrew Hodgkinson

Re: I've seen things you people wouldn't believe...

...um, no; surely you recognise the famous speech from Blade Runner? If not - SPOILER in the video linked below - beware.

https://www.youtube.com/watch?v=NoAzpa1x7jU

AI coding is 'inescapable' and here to stay, says GitLab

Andrew Hodgkinson

Just another way to be lazy and not understand what you're doing

Our industry is in crisis; bloat is rife, defect rates are through the roof and security approaches laughable - to this day we still constantly hear about really dumb things like SQL injection vulnerability breaches, unencrypted passwords in databases and so-on.

So, hold my beer, here comes AI; yet another way for lazy devs to prosper. They don't like reading docs, don't like writing docs, and certainly don't like learning the software system upon which they are working to see what it already provides at a framework level. Quicker to just write a crapper version copied off StackOverflow; yet another NIH bit of crappy code that duplicates a job already done better elsewhere. I mean, what am I supposed to do for my vastly inflated salary - actually understand what I do for a living?! Madness. And now, I just hit the magic "AI" button and get - at best - a bloated, verbose, maybe-accurate dump of code spewed out, so I can move on to make a mess of the next problem I don't understand. More defects arise from all the edge cases the code didn't cope with, along with fixed-in-framework-but-not-in-my-code things just reimplemented by the AI, but nobody really knows what's going on because nobody wrote the code in the first place.

Agile card velocity will be off the charts, I'm sure - just don't look at the defect backlog, customer satisfaction results, or how incredibly rotted, fragmented and duplicated your software's architecture has become.

As for making it write your tests - that's the absolute ultimate in fire-me, brain-dead lazy. Tests are tedious, yeah, but the whole damned point of writing them is that it forces you to think about coverage. The paths through your code. The edge cases of input data. The error handling, or lack of it. All manner of epiphanies about suitability for purpose arise in this process, resulting in not just better tests, but better code too. But wait - not any more! We are freed from such tedious drudgery. Our incompetent, always-someone-else's-fault industry has just found yet another way to ask to be paid more while doing less.

It seems it is never the dev's fault when there are bugs. You wrote the code, but it's not your fault... Couldn't possibly be... It's management, or it's a bad specification, or it's insufficient training, or it's the phase of the moon. And now we have another way to shirk responsibility for the quality of our work - blame the AI.

I am not at all surprised that a large percentage of coders are embracing it.

OpenAI snaps up role-playing game dev as first acquisition

Andrew Hodgkinson

Re: Stop misusing that term

Yeah, it's definitely not intelligence.

ChatGPT was famous for producing maths such as "2+2=5", along with the usual bland, yet verbose "explanation" of why it was correct. It was all gibberish, of course. Why does it make this mistake? Because it doesn't know what "2" is, or what "+" is, or what "=" is, or what "5" is. It doesn't know what numbers are. It doesn't know any of the rules of mathematics at all. It has no idea what right or wrong are either, so it can't know that it is in error (even if told as much, unless it has a means to understand what being wrong means and why it was wrong when it's told that, which it does not and that's why it'll often argue back - it's just stats-matching training set text from when some people told some other people that they were wrong. Ever seen online "discussions"? When someone says "you're wrong", someone else pretty much always argues back).

The reason it might assert 2+2=<anything> is because that's a maths-y thing which looks statistically like other maths-y things and a lot of the maths-y things which had "2+2" in them said "4". But sometimes people say stuff like, "hey that's nonsense, it's as wrong as saying 2+2=5". And thus, we have "2+2=5" in the training data now, so there's this small stats-based chance (based on billions of other bits of input and nuances that are beyond our own ability to reason about simply because of the vastness of the data set) that the ML system might, indeed, state "2+2=5".

It's a stochastic parrot, full stop. No matter how many times people hand wave and say "we don't know what intelligence is", that's just deflection. We certainly do know that part of our intelligence is based around knowing rules and understanding them and, indeed, earlier AGI studies (1970s-90s era or thereabouts, then just "AI") were often based around trying to teach rules and gain inference from those. A person knows what an integer is, the rules governing those and what addition means and so knows, without a shadow of a doubt, that 2+2=4, because the person understands the governing rules and nature of every part of that statement... Once taught those rules, that is! The trouble is, a lifetime of learning rules turns out to be very, *VERY* hard to do even with modern computing power - the biggest problem, I think, is assembling a machine-readable training set of such accuracy and detail in the first place, rather than creating a computer system capable of processing that data.

But, good news! We discovered a party trick. Enter generative AI, AKA ML.

Even OpenAI themselves acknowledge that ChatGPT is indeed a party trick - that it only gives right answers by accident, readily makes up nonsense and should never be used for anything that requires correct answers, but never let a product's limitations get in the way of the lies of marketing and the holy grail of sweet, sweet profit. Microsoft have a whopping great big share in OpenAI, so - surprise! Suddenly ChatGPT is in front of Bing, a search engine that's supposed to give accurate answers. The tsunami of stories early on about how Bing was, subsequently, frequently returning rubbish was an inevitable outcome. It'll still be doing it, helping to misinform and worsen misinformation problems globally, but it's all old news now so you don't hear about it.

We can carry on refining this junk, at least so long as there's ever-more *human*-generated content online to teach upon, but it'll still be lipstick on a pig. Like the fun artificial landscape generators of the past such as Terragen, or entertaining old-school "human-like chat" bots such as Eliza way-back, it'll still hit its limit. Interestingly, with ML-generated material now spewing out over the web like a broken sewer main over a highway, actually finding new human-authored stuff to add to existing ML model training datasets has become an awful lot harder than it was. We might already be quite close to the peak of capabilities of these systems as a result.

80% of execs regret calling employees back to the office

Andrew Hodgkinson

Re: unpopular opinion: no, WFH and WFO are not the same.

An unpopular opinion that I share - especially the collaboration and overhearing aspect.

I also like the fact that for the salary, my *employer* pays for the heating, boiling the kettle, coffee machine and so-on. My electricity bills for the heat pump in anything other than mild transitional months are very dramatically higher on weekends vs weekdays.

Someone else argues that the whole "in person benefit" is 1% and the rest is 99% distraction - I disagree strongly with that number. Everyone's different. Perhaps if you're very introverted, or hate your co-workers and/or think you've never anything to learn from them, or perhaps aren't much of a domain expert so rarely have anything to contribute to an overheard discussion - sure. Lots of other reasons, I guess. But not my experience at all.

The distractions argument is kinda fair, but that's what headphones are for - and at home, you'd most likely be wearing those anyway...

Playing instruments, musical talent? Psh, this is the 2020s – Meta has models for that now

Andrew Hodgkinson

Re: Agreed, the samples suck

They should have the option to export as MIDI

It can't. It's an ML system. It has no comprehension at all; it is a stochastic parrot (https://en.wikipedia.org/wiki/Stochastic_parrot). Given their description of the samples used for training, we know that this is trained on raw audio data and just recombines it in pattern-matchy ways.

It is therefore just another boring, generic, bland ML system.

AI is supposed to make things better

Says who? The likes of Meta, OpenAI and so forth produce these things to make money. The fact that ChatGPT officially cannot be accurate according to its makers, yet has been put in front of a search engine that has the one job of producing accurate search results (by Microsoft - a major shareholder in OpenAI), should've made that abundantly clear.

Generative AI in its current form exists entirely to make vast corporations even richer.
