* Posts by Andrew Hodgkinson

171 publicly visible posts • joined 7 Mar 2008


Twitter grew an incredible '1.6%' since Musk's $44B takeover. Amazing. Wow

Andrew Hodgkinson

Re: Bots

If he did, then the number of bots I'm being DM'd by - mostly with random character names, so not even "bot001", more like "hV6YealzmEJg2U8S" - must be a coincidence. My follower base is small and changes very little, yet I now get at least one spam message a day from bots. Used to be maybe one a month or so. It's escalating.

Yes, of course, this is an anecdote providing only a single data point. Even so, I'd be more surprised to find that the growth, such as it is, is in fact real humans - rather than a decrease in human users and a continued rise in robots.

Wrongful termination lawsuit accuses Neuralink of Herpes B-infected monkey business

Andrew Hodgkinson

Ah yes, brain surgery on monkeys with Elon's ethics

It seems to have been less prominent of late, but in bygone years, animal rights groups would often run adverts showing possibly (hopefully?) exaggerated depictions of terrible conditions and often truly horrifying, "makes Saw look like a PG feel-good movie" procedures and suffering, which some poor creatures were allegedly forced to endure. It's always been a convenient truth for animal testing that there's no such thing as consent.

In the specific case of Neuralink, as Elon's have-a-go-hero crew blunder around trying to make a brain interface, I simply can't make myself (and certainly don't want to try harder to make myself) imagine the kind of utter horrors that must go on in such an establishment. After all, whatever they were doing was so bad that there was a very unusual investigation by the USDA of violations of the Animal Welfare Act (ref e.g. Vox); and further, this found that there were violations which were then simply and deliberately wiped off the records (ref e.g. PCRM).

I warn you that the PCRM reference above is fucking awful reading re. "Animal 21".

Windows 11's Recall feature is on by default on Copilot+ PCs

Andrew Hodgkinson

Re: Sinofsky noted that the default was "the least problematic part of the feature."

Proving the guy who was in charge of Windows 8 hasn't got any idea.

I don't understand why you have 20 upvotes. You're saying the same thing he was saying, just in different words. He's saying that the default is the least problematic part. The greater problems are - everything else about the feature, including but not limited to the things you list. Surely you've seen similar variations of the very common phrase "...was the least of its problems" before?

Sinofsky isn't out of touch here - he is exactly right and you're actually in total agreement with him - though you apparently don't recognise it.

AI smartphones must balance promise against hype and privacy concerns

Andrew Hodgkinson

Re: Keywords

Yes, I mean, the translation of that is: "The additions are junk, so we need to lie to customers and hope they don't notice; we also want to charge more for the junk, so we'll try to find ways to lock users into our devices so that we can get away with those increased charges."

It's an unusually terse corporate-speak way to describe enshittification...

If you find Microsoft's Copilot offerings overwhelming, it's no wonder: There are 130-plus of them now

Andrew Hodgkinson

Re: Co-Pilot? Bollocks

You're right about all of this IMHO except that "showing the words" thing really is how it's working under the hood. Despite the powerful maths going on underneath, ultimately this is a boring, lame, "generate one new word, reassess stats on what next word should be, generate next word, reassess..." loop. The little delays are the stats check on what the next word should be.
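
For anyone who wants to see the shape of that loop, here's a minimal Python sketch - with a completely made-up toy bigram table standing in for the neural network, so the words and weights are illustrative only:

    import random

    # Toy stand-in for a language model: for each word, the possible next words
    # and their relative weights. A real LLM computes these weights with a huge
    # neural network over tokens, but the outer loop is this same shape:
    # pick a next word from the stats, append it, reassess, repeat.
    NEXT_WORD_WEIGHTS = {
        "<start>": {"the": 5, "a": 3},
        "the":     {"cat": 4, "moon": 2},
        "a":       {"cat": 3, "dog": 3},
        "cat":     {"sat": 5, "<end>": 1},
        "dog":     {"sat": 4, "<end>": 2},
        "moon":    {"<end>": 1},
        "sat":     {"<end>": 1},
    }

    def generate(max_words: int = 20) -> str:
        word, output = "<start>", []
        for _ in range(max_words):
            options = NEXT_WORD_WEIGHTS.get(word, {"<end>": 1})
            word = random.choices(list(options), weights=list(options.values()))[0]
            if word == "<end>":
                break
            output.append(word)
        return " ".join(output)

    print(generate())

Nothing in there knows what a cat or a moon is; it just keeps rolling weighted dice.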

There's no intelligence of any kind here and never has been. It's actually most likely even shittier than you thought.

Consideration for the day: ALL text output by LLMs is hallucination. There's no difference in its modelling between what a human observer considers true or false; the "AI" (hah!) is just running its algorithm in a loop. The word "hallucination" arises because beings with actual intelligence view the completely unintelligent statistically generated output and deem it thus.

Microsoft smartens up Edge for Business with screenshot blocking, logo branding, more

Andrew Hodgkinson

Disingenuous and untrustworthy to the very last gasp

This protection also applies to Copilot in the Edge sidebar so prompts and responses in Copilot cannot be captured by screenshot

Very clever. So their malfunctioning-as-much-as-anyone-else's "AI" system can give wrong information that could put Microsoft in legal trouble - and yes, thankfully, that is a thing and long may it continue to be so (https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know) - but nobody can prove it because, conveniently, screenshots of the CoPilot window are prohibited.

Elon Musk's latest brainfart is to turn Tesla cars into AWS on wheels

Andrew Hodgkinson

WTF? This is meant to be a tech site - why is nobody saying...

...that "watts" is not, cannot be and never has been a measure of computational power?

"I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone"

This is beyond gibberish. I have a 2 kilowatt electric fire with an analogue mechanical thermostat. How much computing power does it have? My laptop pulls at most about 100 watts. How much computing power does it have? Suppose I have a 60 kWh battery in my EV. How much computing power does that mean the car has?

El Reg are apparently just as ignorant:

"Of course, all this compute capacity isn't sitting conveniently clustered together in a datacenter"

What "compute" capacity? All we've quoted so far is electrical power draw, which means absolutely nothing.

Arm CEO warns AI's power appetite could devour 25% of US electricity by 2030

Andrew Hodgkinson

Wait, so...

...if we use EVs, that's going to destroy the entire grid. But AI can come along and use 1/4 of all available power, and that's just - well, expensive, or environmentally difficult or something, but no dire warnings about how that'll cripple the whole grid and be the end of life as we know it?

It's almost as if the oil industry is responsible for spreading propaganda lies about EVs and grid collapse in an effort to keep its market going.

Chinese schools testing 10,000 locally made RISC-V-ish PCs

Andrew Hodgkinson

Re: Mixed bag

Given the total shit that is the duopoly of Microsoft and Apple in the desktop space, where your choices are tracking-infested bugware or less-tracking-infested just-as-buggy-ware on insanely expensive hardware, with quality declining every year and invasive advertising and tracking on the rise as the enshittification deepens...

...then if I was a nation with enough money, I'd be running screaming as fast as I could from the craptastic, greedy clusterfuck of USA-originated computing. Now, on the one hand, that could just mean some ARM-based derivative and Linux, but ARM still has some dubious heritage and lots of licensing costs. For China specifically, Japanese ownership may well be seen as just as problematic as UK or USA ownership, and you're still beholden to late-stage capitalism enshittification.

The only viable solution is to go totally open and stick two fingers up to the arseholes that have made life so expensive and low-quality-miserable for everyone else in the rest of the world. About time too.

Good luck to them. I hope it succeeds. It's certainly going to have to work hard to be worse than the bloated, overpriced mess the rest of us have to put up with.

Microsoft gets new Windows boss as Start Menu man Parakhin 'to explore new roles'

Andrew Hodgkinson

Yes, the idea was readily available from Apple, since Spotlight - on Cmd+Space - has done that since 2005 (the GNOME equivalent, GNOME Shell, first appeared AIUI at a 2008 hackathon and later shipped with GNOME 3 in 2011).

So - Cmd+Space, start typing a letter or three of the application name, hit Return.

Canva acquires Affinity, further wounding a regulator-bruised Adobe

Andrew Hodgkinson

Re: I don't get the premise

Which is exactly my prediction, too. I give it 6 to 12 months.

"...consistency with the rest of our product portfolio..."

"...greater flexibility in purchasing decisions without the up-front commitment..."

"...aligns with our core values..."

etc.etc.

Chrome for Windows-Arm laptops officially lands in time for Snapdragon X Elite kit

Andrew Hodgkinson

Re: Windows-on-ARM Continues To Be A Slow Trainwreck

I think you're at risk of being very wrong, but perhaps from very right experiences - a long time ago.

Win11 ARM is a very slick beast, as Windows goes. The x86 layer hasn't skipped a beat even for Steam and games, *and* that's from the perspective of a VM on Apple Silicon, so there are a few layers of things that could go wrong in the way.

It's about time Apple got some competition - along with Intel and AMD too, for that matter. The tired, overheated, blustery laptops of today are about to get a wake-up call. About time too!

GoFetch security exploit can't be disabled on M1 and M2 Apple chips

Andrew Hodgkinson

Re: No, not really.. but maybe, someof the time

Ah, so their DMP disabler *is* the DIT bit. Gotcha.

In that case, the best practice would've already been to use cryptographic blinding, but devs apparently haven't bothered. A far more drastic solution would be to force their crypto operations onto E cores, but that's likely to be far more destructive to performance.

Nobody is answering my question, by the way. You're all just going on about all the other stuff around the edges that's been done to death already.

Andrew Hodgkinson

Re: No, not really.. but maybe, someof the time

No, there are numerous solutions, and having devs force their code to run on E cores only is the most drastic by far. Just using cryptographic blinding would be a much better choice, which best practice should've suggested be used anyway. But software developers don't "do" best practice. That's for losers who read docs.

Andrew Hodgkinson

Re: The basic problem ...

Wrong, it's not monolithic, it's a fundamental part of modern chip design and Intel had its day with this too. As did AMD.

Andrew Hodgkinson

No, not really

Apple already published, way back when, an article about how to write code for ARM64 on their platform. Developers being developers, most have of course been lazy and ignored it. The specific section of relevance is:

https://developer.apple.com/documentation/xcode/writing-arm64-code-for-apple-platforms#Enable-DIT-for-constant-time-cryptographic-operations

Of course, Apple relying upon developers to be competent and implement security properly from the get-go was a stupid move, since developers - of which I am myself one - have proven ourselves lazy, ignorant and incompetent when it comes to lots of things, including security, for many decades. Major breaches remain commonplace.

Apple has no way, unless code uses the OS libraries rather than linking against its own, of knowing when a general purpose user process happens to be performing a crypto operation. If patterns of CPU and RAM access for code performing crypto workloads were in any way recognisable and detectable, that would in itself be a security vulnerability. This is why they can't really patch this in the OS, including being unable to force those particular code segments to run on any particular cores.

Developers of crypto software, on the other hand, can follow the existing documentation that's been around for years to fix this in their own code. Most probably won't.

In terms of impact, this remains as far as I can see really quite low. Ephemeral keys, such as those used in TLS handshakes for HTTPS web browsing clients, are around for only a fraction of a second. The minimum exploit time for this PoC is around an hour and while we might imagine it being made ten or even 100 times more efficient, it'd need to be around 1000 times more efficient to stand a chance of getting those. This seems very unlikely. Show me a PoC for that and I'll really take notice. I'd also be interested to find out if Safari already followed the guidelines Apple issue regarding the crypto operations it performs (either in core, or via whatever OS-provided libraries it uses).

This leaves long-term private keys. Not many Apple Silicon devices are servers! If you ran a Mac Mini M1-M3 server farm, you should be worried - but then again, are your machines having users install arbitrary unsigned software on them? For the remainder of users - well, I suppose there are the rounding error of Apple Silicon Mac end users who run PGP, say. *That* could be a target. But if I'm decrypting docs that were sent to me, encrypted with my public key, via use of my private key on localhost, then the private key is already on the filesystem anyway - but hopefully password protected. Honestly, if you've a local process running with local user privileges at that point, it'd be far simpler to just send the private keyring off to a remote host and have it brute forced over there.

Then there's cryptocurrency, but I doubt many people mine on Apple Silicon. And if they do, again, I would imagine locally stored private keys are in well-known locations and could be sent out for brute force attacking by code that's far simpler than GoFetch's.

In the end, I struggle to think of use cases where end users, liable to install unsigned applications from untrusted locations, would have any long-running private key operations on their local machines where the private keys were not already much more easily accessed.

Can anyone provide examples? Not a single news article I've read about this has, but journalists are - let's be polite here - not exactly domain experts, and copypasta is rife.

Judge demands social media sites prove they didn't help radicalize mass shooter

Andrew Hodgkinson

Re: IANAL

I agree, but I don't think that's what these lawyers are going for and that really confuses me.

The issue of being a "simple message board" vs something 'more' is The Algorithm™ - the way that most of these sites actively push content you haven't subscribed to at you; this instant echo-chamber creation has been the subject of numerous studies showing that it causes all kinds of harm. Radicalisation is an obvious outcome; you show interest in something, so you're shown more of it, some of that will doubtless be more extreme, and so we proceed to circle down the drain.

This is further worsened by no serious attempt to actually defend against fake news, now including audio, still-photo and video deepfake content. At the scale these platforms operate it's very hard to do, but they only got that scale in the first place by ignoring all of these issues, putting just about no resources into them and pocketing all the spare cash. Cry me a river should the legally mandated cost of doing business go up, in order to clean up the cesspit that they themselves created. They can afford it, and then some.

Without a recommendations algorithm, users would have to actively search for content of interest. Specific message boards / pages / groups / Subreddits / whatever-you-want-to-call-them that specialised in "radical" content would be easy enough for automated searches to find and flag for human moderator review, if they're easy enough for regular users to find. With an appropriately scaled and resourced review team, both "objectionable" (FSVO "objectionable") and outright per-country/state illegal content would be more rapidly found. Disinformation is harder to tackle, but the framework for tackling it can only be established once the aforementioned review infrastructure is present.

None of this seems to be the focus of the lawsuit; they seem to be trying to argue over a legal distinction between these things being "a product" or not. That seems pretty difficult. Perhaps the idea of being "a product" is legally "proved" by the presence of a recommendations engine which implies immediate, albeit computer-generated corporate content bias pushed at individual users? Somehow, though, I doubt it...

GitHub fixes pull request delay that derailed developers

Andrew Hodgkinson

Change of ownership more visible with every passing day

As the bloatware features creep in, the hooks into other MS services trickle down, bullshit fluff like that truly puerile new animation when you set up a new account happens (followed by the utterly jarring experience of being thrown into the totally different *actual* GitHub UI), and now we get updates which break the service and DR that doesn't work...

...yeah, Microsoft.

Oh look! GitLab's right there.

Microsoft Copilot for Security prepares for April liftoff

Andrew Hodgkinson

Multiple portals

Correia said he had compared the investigation process both with and without Copilot. "I found that with each step, it was roughly between 20 and 50 percent faster for Copilot to do it, given that you do need to go into multiple portals and log in there and wait for everything to load," he said.

Ah, so instead of logging into "multiple portals", CoPilot has been given access to those portals and is logging in for you.

I'm sure CoPilot can be trusted with your administrative portal access credentials. Because, Security.

Now you can compare your Chromium browser with that other Chromium browser using Speedometer 3.0

Andrew Hodgkinson

Re: It doesn’t test CPU JS execution?

Untrue. The source article is bad; the author doesn't appear to understand how browsers work or are built.

Chrome and Safari have completely different JavaScript execution engines (V8 vs JavaScriptCore). That's part of why they've been trying to trade blows on performance all this time. Chrome on macOS historically had bad GPU acceleration support too, so even though the WebKit-based engines might assess the document markup at the same rate, the *painting* rate could wildly vary.

Just because Chrome might be doing OK on that front today does *not* mean that Google can be trusted to maintain or not otherwise break that tomorrow.

Moreover, Firefox still exists and plenty of people use it.

(Edited to add that "conveniently not applicable to any non-Chromium browser" is essentially gibberish and certainly nonsense; it tests various automated operations using a collection of popular JS frameworks just as it always has and, when I ran 3.0 a few minutes ago, gave Firefox 22.6 and Safari 22.1 - so Firefox "won").

Judge slaps down law firm using ChatGPT to justify six-figure trial fee

Andrew Hodgkinson

You're missing the point - please try to open your mind a little here

I am much more often astonished at the thorough and on-point replies I get

OK, but given you claim this:

ChatGPT is an excellent starting point for research and ideas

...then you have no idea if the replies are on-point. You cannot possibly. By your own statement, you're using it to start research, so you don't know what's right or wrong, and ChatGPT can lie to you and have absolutely no idea it's doing it. That's because it doesn't know what true or false or right or wrong or anything else means. It's just a clever, very convincing pattern matcher which does a remarkable job at convincing our monkey-brains that it's intelligent just because of the truly mind-boggling amount of data it's using for that stats matching. But there's no intelligence there.

We know this isn't how human intelligence works for many reasons - first and foremost, because humans can't possibly ever read, assimilate and remember the literal billions of documents that ChatGPT uses for training.

But ChatGPT is real. Perfect, no way. But very, very real.

"Real"? Well, I guess, the product does exist, so it is real, yes?! But if you mean useful for starting research - clearly, no. It's just very confidently incorrect and if you don't know the subject yourself, there is no possible way you can have any idea without cross-checking each and every thing it says manually which would take longer than a Google search or Wikipedia from the get-go.

Ironically, that's probably where most people go to cross-check ChatGPT's output, if they bother to do so. Why? Because they know it's more accurate. But don't take my word for it! Here's what OpenAI, the makers of ChatGPT, say about their own product:

https://help.openai.com/en/articles/8313428-does-chatgpt-tell-the-truth

"Does ChatGPT tell the truth?"

...It might sound right but be wrong... ...It doesn’t know everything... ...No access to tools like calculators or the internet (mostly)...

When the vendor itself has to cover its ass with disclaimers about the inaccuracy of its product, then - given how reluctant large corporations are to ever admit any kind of flaw whatsoever in their products - you know without a doubt that things must be really bad.

OpenAI tries to trademark 'GPT'. US patent office says nope

Andrew Hodgkinson

Search? Really?!

Well, if you have a lot of content you want to search for, an AI chatbot customized on your data could come in handy. Information stored in txt, .pdf, .doc/.docx and .xml formats can be processed.

You mean like Spotlight (OS X, 2005, also running on iPhones under iPhone OS 3, 2009), or Windows Search AKA Instant Search in Vista, 2007, or doubtless countless earlier examples of third-party content-indexing or equivalent search systems across those platforms and many others? Or the systems that even start to use dramatically more efficient, domain-optimised on-device ML models for things like object detection in photographs and videos?

I mean sure, they don't need an RTX 30xx and gigabytes of RAM - it'll even run on an old phone, for example, and they handle a dramatically wider and extensible set of file formats - but the results are guaranteed accurate, instead of risking hallucinations. And where's the fun in that?!

/s

AI PC hype seems to be making PCs better – in hardware terms, at least

Andrew Hodgkinson

It's not going to make PCs better; it's just going to make software worse

If the base expectation goes from 8GB to 16GB, then what people did in 8GB today will take 16GB tomorrow.

There won't be any more you can get out of the more powerful hardware after a year or two; it'll just run two or three more layers of bloated and buggy abstractions in order to do exactly the same kinds of tasks it's been doing for the last 10 or 20 years.

Did all that AI chatbot hype boost Bing's market share? Oh, wait, never mind

Andrew Hodgkinson

More than you might expect!

https://killedbygoogle.com

Mandiant's brute-forced X account exposes perils of skimping on 2FA

Andrew Hodgkinson

Re: Brute forced?

Yep, exactly what I just said! https://forums.theregister.com/forum/all/2024/01/11/mandiant_x_account_brute_forced/#c_4790018

Andrew Hodgkinson

No, that's not the natural reaction!

The natural reaction to this would be to ask why two/multi-factor authentication didn't prevent this from taking place.

Utterly wrong.

The natural reaction to this would be to ask why the account wasn't locked after a small number of wrong password entries along with an intentional few seconds of sleep time for every wrong password entry, to make brute force attempts impractical even without auto-lock.

THAT is the first line of defence against hijacking. 2FA is a secondary line but both must always be present.
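
A rough sketch of what that first line of defence looks like, in Python; check_password is a stand-in for whatever credential check already exists, and a real system would keep this state in a database rather than in memory:

    import time

    MAX_ATTEMPTS = 5           # lock the account after this many consecutive failures
    FAILURE_DELAY_SECONDS = 3  # deliberate pause after every wrong password

    failed_attempts: dict[str, int] = {}  # in-memory stand-in for per-account state
    locked_accounts: set[str] = set()

    def attempt_login(username: str, password: str) -> bool:
        if username in locked_accounts:
            return False  # require an out-of-band unlock, e.g. support intervention
        if check_password(username, password):  # assumed existing credential check
            failed_attempts[username] = 0
            return True
        # Wrong password: slow the caller down, count the failure, maybe lock.
        time.sleep(FAILURE_DELAY_SECONDS)
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
        if failed_attempts[username] >= MAX_ATTEMPTS:
            locked_accounts.add(username)
        return False

A few seconds per wrong guess plus a lockout threshold turns a brute force run from hours into centuries; 2FA then catches whatever slips past.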

NKabuse backdoor harnesses blockchain brawn to hit several architectures

Andrew Hodgkinson

The so-called hackers are better developers than the so-called professionals

While the cryptobros are stumbling around with JavaScript and insufficiently secured NPM packages...

https://forums.theregister.com/forum/all/2023/12/16/ledger_crypto_conect_kit/

...the "hackers" are crafting cross-platform, anonymised, robust solutions in Go. Of course, people will keep telling me our industry does not have a crisis of competence.

Qualcomm claims its X Elite PC parts can go toe-to-toe with Apple, Intel

Andrew Hodgkinson

Well that's lovely...

...and I really want competitor hardware to Apple Silicon. I want to get off this hyper-expensive but rotting platform with its insane and ever-escalating bug count, onto something that's also very buggy but at least is also cheap!

However - I don't understand why all these years after the debut of the M1 systems, I'm meant to be impressed by a "flagship" CPU+GPU combination which just about matches an M2 Pro. That's not Apple's flagship at all; that would be the M2 Max (the M2 Ultra could be considered too, tho is a bit of a kludge with some curious performance issues arising here and there). What's more, all of this happens at the end of the M2 cycle, with M3 expected colloquially "any minute now".

Not surprised Microsoft are itching to work with ARM-based competitors just as soon as their IMHO rather silly exclusivity tie-up with Qualcomm ends.

No more Mr Nice DoJ: Tesla gets subpoenas over self-driving software claims

Andrew Hodgkinson

Re: Risk calculation

That's true and I thought the same thing, but I don't think it's relevant to the investigation (I'm pretty sure journalists are just ad-libbing those kinda stats for sensationalism). The investigation is, I think, not about whether Tesla's self-named "Full Self Driving" [*] - which is not full self-driving - package is better or worse than driver assistance packages from other companies; they're investigating whether or not Tesla's portrayal of its technology has been misleading to customers.

I mean, obviously, it's a total mystery. I can't imagine why they'd think something called Full Self-Driving that isn't full self-driving could be misleading... This court case is going to require some true big-brain thinking.

[*] Autopilot marketing and claims might be investigated rather than, or in addition to, FSD claims. I'm not sure. Autopilot would be harder to prove IMHO.

Japan cruises ahead with drive-thru EV charging trial

Andrew Hodgkinson

Re: Vehicle ID based charging

I'm an angry, resentful luddite and I say no to this silly shit, period. I'll be driving a gasoline vehicle for the rest of my lifetime.

I mean, yes to the first bit - just let me plug in and wave a contactless credit or debit card at the reader, just like at a petrol pump. Job done. Totally agree that all the data collection bullshit is exactly that - bullshit.

Trouble is, you then just go right off the rails with major, huge inaccuracies...

Electric vehicles are impractical. They don't have the power, speed or acceleration that I want.

Given that even a Tesla mom-and-pop-mobile has been shown to have more power, more speed and more acceleration than a supercar, I have to conclude that you're full of it here. So let's put numbers to it. Tell us your current car, so we can get power, top speed and acceleration. I presume you're driving on racing tracks for fun a lot, though, because otherwise "top speed" is a bit pointless given 70mph limits on the fastest roads in the UK.

They are expensive to purchase

That is true. While the prices are dropping, it's going to be a very long time (if ever, since the industry sees an opportunity to price gouge for increased margin) before they drop to ICE prices spec-for-spec.

and to service

That is absolutely untrue, and is one of the principal reasons so many commercial outfits are changing their van fleet over to electric. Electric cars are dramatically simpler than ICE and have a correspondingly lower service cost.

and batteries have a finite life

They do, but EVs don't have a clutch, gearbox or combustion engine, and your clutch, gearbox and engine all have very much a finite life and cost a great deal to service year-on-year and even more to replace. I think you'll also find that the "finite life" of a battery for a modern (say, post-2020 design) EV even in heavy duty commercial use is order-of-a-few-decades, which would be a good lifespan for any personal vehicle with any powertrain.

The Leaf, while being a useful pioneer and still a good car, caused a lot of the misconceptions here as it basically has no battery management at all to speak of - no active cooling or heating, very little lifespan management as far as I can tell. Very few other EVs, even "budget" models, have such limitations.

Teslas have been around easily long enough for numerous owners to report 100,000 mile (one hundred thousand mile) battery updates, and the TL;DR is that it looks like the overwhelming majority of late-2010s Tesla owners, without doing anything special, can expect their battery to retain useful range for around 500,000 miles - half a million miles! Do you want to try and tell me that ICEs are good for half a million miles, now?! (The typical report is of a ~10% range loss at the 100,000 mile point, with the rate of loss reducing thereafter.)

Also, now that charging stations are neither plentiful nor free, it's laughable how people have to structure their whole lives around charging these vehicles.

Surely you're not that blinded by your combustion engine life? Most people visit a charging station rarely; some people, never. The idea that EV drivers "structure their whole lives" around charging their cars is absolutely absurd, to the point of laughing out loud that you'd state such a thing. In your ICE car, you must drive to a petrol station whenever it's running low on fuel - you've absolutely no choice in that - yet I'd never state that ICE vehicle owners structure their whole lives around going to petrol stations.

Meanwhile, the overwhelming majority of trips are far, far shorter than the EV's maximum range, with no charging required at all. At home, the EV is topped up overnight. With an EV, you have the option of never visiting the equivalent of a petrol station, ever, should you so choose. You can even generate your own fuel via wind generators or solar on your own home. None of that is possible with ICE vehicles.

This automated, induction charging sounds good in theory, but good luck with that.

Oh for sure, it's total nonsense. People have looked at this before numerous times but it's just wildly inefficient and causes all sorts of unacceptable risks due to the magnetic fields involved. In particular, got a pacemaker? Ah, shit...

Google promises eternity of updates for Chromebooks – that's a decade for everyone else

Andrew Hodgkinson

Re: I've seen things you people wouldn't believe...

...um, no; surely you recognise the famous speech from Blade Runner? If not - SPOILER in video linked to below - beware.

https://www.youtube.com/watch?v=NoAzpa1x7jU

AI coding is 'inescapable' and here to stay, says GitLab

Andrew Hodgkinson

Just another way to be lazy and not understand what you're doing

Our industry is in crisis; bloat is rife, defect rates are through the roof and security approaches are laughable - to this day we still constantly hear about really dumb things like SQL injection vulnerability breaches, unencrypted passwords in databases and so-on.

So, hold my beer, here comes AI; yet another way for lazy devs to prosper. They don't like reading docs, don't like writing docs, and certainly don't like learning the software system upon which they are working to see what it already provides at a framework level. Quicker to just write a crapper version copied off StackOverflow; yet another NIH bit of crappy code that duplicates a job already done better elsewhere. I mean, what am I supposed to do for my vastly inflated salary - actually understand what I do for a living?! Madness. And now, I just hit the magic "AI" button and get - at best - a bloated, verbose, maybe-accurate dump of code spewed out so I can move on to make a mess of the next problem I don't understand. More defects arise from all the edge cases the code didn't cope with, along with fixed-in-framework-but-not-in-my-code things just reimplemented by the AI, but nobody really knows what's going on because nobody wrote the code in the first place.

Agile card velocity will be off the charts, I'm sure - just don't look at the defect backlog, customer satisfaction results, or how incredibly rotted, fragmented and duplicated your software's architecture has become.

As for making it write your tests - that's the absolute ultimate in fire-me, brain-dead lazy. Tests are tedious, yeah, but the whole damned point of writing them is that it forces you to think about coverage. The paths through your code. The edge cases of input data. The error handling, or lack of it. All manner of epiphanies about suitability for purpose arise in this process, resulting in not just better tests, but better code too. But wait - not any more! We are freed from such tedious drudgery. Our incompetent, always-someone-else's-fault industry has just found yet another way to ask to be paid more while doing less.

It seems it is never the dev's fault when there are bugs. You wrote the code, but it's not your fault... Couldn't possibly be... It's management, or it's bad specification, or it's insufficient training, or it's the phase of the moon. And now, we have another way to shirk responsibility for the quality of our work - blame the AI.

I am not at all surprised that a large percentage of coders are embracing it.

OpenAI snaps up role-playing game dev as first acquisition

Andrew Hodgkinson

Re: Stop misusing that term

Yeah, it's definitely not intelligence.

ChatGPT was famous for producing maths such as "2+2=5", along with the usual bland, yet verbose "explanation" of why it was correct. It was all gibberish, of course. Why does it make this mistake? Because it doesn't know what "2" is, or what "+" is, or what "=" is, or what "5" is. It doesn't know what numbers are. It doesn't know any of the rules of mathematics at all. It has no idea what right or wrong are either, so it can't know that it is in error even if told as much - it would need a means to understand what being wrong means and why it was wrong when told, which it does not have. That's why it'll often argue back: it's just stats-matching training set text from when some people told other people that they were wrong. Ever seen online "discussions"? When someone says "you're wrong", someone else pretty much always argues back.

The reason it might assert 2+2=<anything> is because that's a maths-y thing which looks statistically like other maths-y things and a lot of the maths-y things which had "2+2" in them said "4". But sometimes people say stuff like, "hey that's nonsense, it's as wrong as saying 2+2=5". And thus, we have "2+2=5" in the training data now, so there's this small stats-based chance (based on billions of other bits of input and nuances that are beyond our own ability to reason about simply because of the vastness of the data set) that the ML system might, indeed, state "2+2=5".

It's a stochastic parrot, full stop. No matter how many times people hand wave and say "we don't know what intelligence is", that's just deflection. We certainly do know that part of our intelligence is based around knowing rules and understanding them and, indeed, earlier AGI studies (1970s-90s era or thereabouts, then just "AI") were often based around trying to teach rules and gain inference from those. A person knows what an integer is, the rules governing those and what addition means and so knows, without a shadow of a doubt, that 2+2=4, because the person understands the governing rules and nature of every part of that statement... Once taught those rules, that is! The trouble is, a lifetime of learning rules turns out to be very, *VERY* hard to do even with modern computing power - the biggest problem, I think, is assembling a machine-readable training set of such accuracy and detail in the first place, rather than creating a computer system capable of processing that data.

But, good news! We discovered a party trick. Enter generative AI, AKA ML.

Even OpenAI themselves acknowledge that ChatGPT is indeed a party trick - that it only gives right answers by accident, readily makes up nonsense and should never be used for anything that requires correct answers, but never let a product's limitations get in the way of the lies of marketing and the holy grail of sweet, sweet profit. Microsoft have a whopping great big share in OpenAI, so - surprise! Suddenly ChatGPT is in front of Bing, a search engine that's supposed to give accurate answers. The tsunami of stories early on about how Bing was, subsequently, frequently returning rubbish was an inevitable outcome. It'll still be doing it, helping to misinform and worsen misinformation problems globally, but it's all old news now so you don't hear about it.

We can carry on refining this junk, at least so long as there's ever-more *human*-generated content online to teach upon, but it'll still be lipstick on a pig. Like the fun artificial landscape generators of the past such as Terragen, or entertaining old-school "human-like chat" bots such as Eliza way-back, it'll still hit its limit. Interestingly, with ML-generated material now spewing out over the web like a broken sewer main over a highway, actually finding new human-authored stuff to add to existing ML model training datasets has become an awful lot harder than it was. We might already be quite close to the peak of capabilities of these systems as a result.

80% of execs regret calling employees back to the office

Andrew Hodgkinson

Re: unpopular opinion: no, WFH and WFO are not the same.

An unpopular opinion that I share - especially the collaboration and overhearing aspect.

I also like the fact that for the salary, my *employer* pays for the heating, boiling the kettle, coffee machine and so-on. My electricity bills for the heat pump in anything other than mild transitional months are very dramatically higher on weekends vs weekdays.

Someone else argues that the whole "in person benefit" is 1% and the rest is 99% distraction - I disagree strongly with that number. Everyone's different. Perhaps if you're very introverted, or hate your co-workers and/or think you never have anything to learn from them; or perhaps aren't much of a domain expert so rarely have anything to contribute to an overheard discussion - sure. Lots of other reasons I guess. But not my experience at all.

The distractions argument is kinda fair, but that's what headphones are for. And at home WFH, you'd most likely be wearing those anyway...

Playing instruments, musical talent? Psh, this is the 2020s – Meta has models for that now

Andrew Hodgkinson

Re: Agreed, the samples suck

They should have the option to export as MIDI

It can't. It's an ML system. It has no comprehension at all; it is a stochastic parrot (https://en.wikipedia.org/wiki/Stochastic_parrot). Given their description of the samples used for training, we know that this is trained on raw audio data and just recombines it in pattern-matchy ways.

It is therefore just another boring, generic, bland ML system.

AI is supposed to make things better

Says who? The likes of Meta, OpenAI and so-forth produce these things to make money. The fact that ChatGPT officially cannot be accurate according to its makers, yet has been put in front of a search engine that has the one job of producing accurate search results (by Microsoft - a major shareholder in OpenAI) should've made that abundantly clear.

Generative AI in its current form exists entirely to make vast corporations even richer.

Computer scientist calls for new layers in the tech stack to make generative AI accurate

Andrew Hodgkinson

Usual facepalm applies

So a guy that heads an investment firm knows all about AI coding and software architectures. Yeah. OK.

I love the idea that there can be a layer which has known-true sources which can fact check the AI (LLM). So this is a layer that knows what the AI has been asked, and can check its answers to see if they match the answers it has.

So... why didn't we just ask that fact-checking layer the question in the first place? It has to understand the question the "AI" was asked, so it must be an LLM itself - but somehow it knows The Truth.

Just a muppet with no clue waffling on with science-y sounding words that make equally clueless other investors nod, smile and hand over their cash.

Think of our cafes and dry cleaners, says Ohio as budget slashes WFH for govt workers

Andrew Hodgkinson

It's such a North American viewpoint...

...which other countries seem to want to adopt even if their situations are very, very different. Labo(u)r laws in the USA aren't exactly in favour of workers; commutes are often extraordinarily long; there are usually no public transport options at all; and traffic jams seem commonplace.

No wonder so many North American employees don't want to go through that!

While some cities might have people that feel the same, many places in the world *do* have viable public transport, *don't* have such huge commutes and often have much more pro-worker labour laws. What's more, we recognise that it's really a very clever move for the corporate profit sheet to make *us* ask *them* to let us take on the burden of paying all day for our heating, water, internet, drinks etc., rather than using the company's already-paid-for services in its building. It's kinda bizarre - we take on all the cost and risk, and actually *beg* for this to happen and, in some cases, even accept a *cut* in pay when if anything it should be a raise.

For many towns and cities in many countries, when it comes to clamouring for work-from-home, be careful what you wish for - you might just get it.

Oh, wow. OK. Apple really is making a $3.5K VR ski-mask. Dev tools are now out for it

Andrew Hodgkinson

Y'all said the same about the iPod, the iPhone and the iPad...

...and were wrong about every last one of them.

Yes, this product is expensive. That said, in a very, very unusual move for Tim Cook it does not seem overpriced - show me who else is offering 23 megapixels of OLED display at that kind of density along with all the optics needed to view it. In fact you barely get a couple of normal sized lowest-end 4K HDR OLED TVs for that kind of money, and they're only providing around 16 megapixels in total. So, if anything, I was surprised it was so cheap when you also factor in the M2 SOC, the brand new real-time sensor chip, the sensors themselves and the RTOS layer that the developers had to write on top of the Mach kernel to service it all. The resulting 12ms from-input-to-display latency is very low. Oh - and I almost forgot another very expensive part - there's even that crazy, entirely bespoke, one-of-a-kind lenticular front OLED display that's providing true no-glasses perspective/3D effects to anyone looking at a person wearing the headset.

Yes, this product is also boring. All these people going on about "just a bunch of everyday applications" are just like 3D TV evangelists from years gone by - just hawking tedious, lame tech demos, flashy nonsense with no real-world use. Whereas a general purpose computing device that is *also* equivalent to a huge 4K HDR true stereoscopic OLED TV, a laptop screen mirror and all the other bells and whistles to make a spatial model actually work? That is the point - being boring is exactly the point. This isn't some stupid pie-in-the-sky junkware with no serious applications and/or only very niche use cases, it's an attempt to provide a third general purpose computing model (after mobile/touch and WIMP) that people can actually use routinely.

Finally, yes, this headset is quite big and heavy (and it makes a huge amount of sense to have an external battery pack given the limitations of even the best battery and miniaturisation technology available in 2023). But - obviously - it'll get smaller and lighter over time. The cameras will get better from iteration to iteration, dealing with low lights and highlights better; the displays will get better, with similar results; latency will drop; sensor accuracy will improve and so-on. This is a generation 1 product, and in many ways could be considered a proof-of-concept; nobody's expecting mass-market appeal.

Many of the predecessors and current in-market offerings are trying for "mass market appeal" but haven't achieved it, because they didn't grasp what people might actually want and just focused on the niche use cases (game-only headsets, or Hololens) or VR style tech dream hypefest (Meta's nonsense).

It'll take 2-4 years for it to get small, light and cheap enough, but I'm pretty confident that there's going to be a huge market for this in due course.

Surprise! GitHub finds 92% of developers love AI tools

Andrew Hodgkinson

Re: Kids of today, eh?

> The 60% of programmers are not the problem, the problem is the employers/management that have chosen not to judge on code quality.

OK, so at what point as an "industry" of developers do we take responsibility for our work?

Do you?

Does anyone?

How come, whenever anything comes up about the fucking horrific quality of code shat out by our no-professional-standards "industry" - both commercial and open source - we somehow always, always get to blame someone else?

Take responsibility for the quality of your work.

Tesla Semi, out since December, already facing a recall over brakes

Andrew Hodgkinson

...which is a shame...

...because even in tiny New Zealand, they're looking promising - see for example https://www.1news.co.nz/2023/03/25/new-breed-of-electric-trucks-put-to-work-on-central-interceptor/ or on a smaller scale, https://www.greengorilla.co.nz/ev-trucks/.

Seeing as GPT-3 is great at faking info, you should lean into that, says Microsoft

Andrew Hodgkinson

For those who still don't realise...

...this isn't AI/AGI, it's just ML. A large language model. It understands nothing. It knows no *rules* - just pattern matches, which with a big enough data set can seem convincing; but this, combined with the parameter programming designed to make it respond in a very confident and authoritative manner, makes it downright dangerous.

It can't give you a right or wrong answer because it doesn't know what right or wrong means. All it knows is that your pattern of text input mathematically correlates highly with other inputs which in turn led to other outputs, and these are then combined to produce a result which mathematically looks like it's probably following the grammar patterns it has learned, combined with the expected response patterns for the input text.

Crap analogy time: Imagine an image matching ML system trained on cats. One day, it's given something that to a human is clearly a plant, but it's been trimmed and the colours of leaves and branches have been set up to make it look like a cat. If it mathematically matches very closely to "cat", the ML system might get a weak "plant" signal, but it'll strongly get "cat" - and it won't know the *rule* that a cat can't be a plant, because it *understands* nothing. It cannot apply logic or reason. So it'll say "cat", and be wrong. LLMs are the text equivalent of the same thing. Give it enough data and it might start being able to say "cat *and* plant"; or it might have enough parameters that, never having seen something which is both a cat and a plant, it knows no rules for it but statistically leans that way anyway. And so it gives the illusion of understanding, without any. It doesn't know biology; RNA, DNA, how those work; nothing. No objective foundation in the laws of mathematics; not AGI. Just *fake*; an illusion.
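
To make the analogy concrete, here's a deliberately silly nearest-match classifier in Python - made-up features, made-up numbers - showing how "closest match wins" with no rule anywhere saying a cat can't be a plant:

    import math

    # Entirely made-up (furriness, leafiness, whiskers) feature vectors.
    TRAINING_EXAMPLES = {
        "cat":   (0.9, 0.1, 0.8),
        "plant": (0.1, 0.9, 0.0),
    }

    def classify(features: tuple) -> str:
        # Whichever stored example is mathematically closest wins. There is no
        # rule anywhere that a cat cannot be a plant; there are only distances.
        return min(TRAINING_EXAMPLES,
                   key=lambda label: math.dist(features, TRAINING_EXAMPLES[label]))

    # A plant trimmed and coloured to *look* furry and whiskered lands nearer
    # the "cat" example, so the classifier confidently answers "cat".
    print(classify((0.8, 0.6, 0.7)))  # -> cat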

It's also why image synthesisers like Lensa or MidJourney mangle fingers. They don't know what anything *is*, so they don't know what "rules" fingers have. Don't know there should be four fingers and a thumb, rules about symmetry, the way they can bend; just kinda makes something fingers-like that's close enough based on the maths and stats -> job done. And the result is, typically, very broken. Imagine the text equivalent of not knowing what fingers are. Yeah. That's where we are.

All this is why ChatGPT infamously responded with confidence about the bears Russia put into space. Sure, subsequent case-by-case hacks close off the embarrassing, very public mistakes, but that doesn't mean the technology suddenly magically works in any different a way. OpenAI is an Elon Musk venture originally and now largely controlled by Microsoft, so with either form of leadership, it's going to be (my opinion) ruthlessly immoral and entirely interested in profit, seeking it at any cost, legal if forced to, illegal if it can get away with it - e.g. by misleading advertising in countries where that's not permitted or embrace-extend-extinguish to remove competition (all, again, just my opinion).

So, the company is IMHO spouting BS about what this can do, the public are largely buying it and other companies are then spending money like it's going out of fashion to incorporate the largely useless and broken technology into their systems. They've either not so much drunk, as drowned in the Kool-Aid, or they're well aware that it's all nonsense, but think there'll be a good ROI because their own *customers* don't know any better and they're very happy to maintain that status quo. The net result is software that's even more bloated, even more buggy and even more all-round unpredictable. Slow - fucking - clap.

Any ML system can be a fun (if legally/morally dubious, due to training set!) way to generate fiction/expressive works, be it text or image, where accuracy or "makes any sense" aren't required. To front a search engine with it, where accuracy is key, is one of the most serious and flagrant breaches of duty of care I've ever witnessed and will *very severely* increase the misinformation problem with which our society is already struggling (and largely failing) to cope.

Tesla reports two more fatal Autopilot accidents to the NHTSA

Andrew Hodgkinson

I still don't understand how they get away with this!

In the UK, as far as I'm aware, advertising cannot be misleading.

If Tesla called their system ADAS, it'd be fine - Advanced Driver Assistance System, yeah, got it. Assists me. Instead, Tesla called it "Autopilot" and the name alone certainly gives the impression of being rather more than just assistance. I'm pretty sure their marketing-of-the-time was trying to give a self-driving impression too, but they've obviously reined it in a lot since.

Lately, however, Tesla have had something which surely crosses the line - they call it Full Self Driving. That is literally its name. It is not full self driving at all; it's just ADAS. They warn you that Full Self Driving is not full self driving, as if that's somehow supposed to relieve them of any responsibility...

https://www.tesla.com/support/full-self-driving-subscriptions

It's bizarre. How can they be so completely misleading with a product named in a way that specifically says it is something it is not and this be allowed? Even in the USA, it seems like a stretch.

Multi-factor auth fatigue is real – and it's why you may be in the headlines next

Andrew Hodgkinson
FAIL

Why are they sending notifications at all?

The article appears to not mention the most sensible solution - using a 3rd party MFA app and prompting the user to type in the 6-digit code, rather than using any kind of SMS or notification. SMS should be a fallback only for users who insist they can't run an app and notifications should just not be a thing.
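
For reference, the entire server-side mechanism behind those 6-digit codes is a few lines of standard-library code (RFC 6238). A hand-rolled Python sketch, with the shared secret assumed to be the usual base32 string the user enrolled with:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, at: int, digits: int = 6, step: int = 30) -> str:
        # RFC 6238: HMAC the current 30-second counter with the shared secret,
        # then dynamically truncate to a short numeric code.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", at // step)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
        # Allow one step of clock drift either side; compare in constant time.
        now = int(time.time())
        return any(hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
                   for drift in range(-window, window + 1))

No SMS, no push notification to spam - so there's simply nothing with which to fatigue the user.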

You can't bombard a user with notifications when there aren't any. This whole thing is bizarre - once again, our industry sucks - it never learns anything from past mistakes while simultaneously inventing new ways to fail. This is why I had to stop reading "comp.risks" in the end; the repetition was too depressing.

If Twitter forgets your timeline preference, and you're using Safari, this is why

Andrew Hodgkinson

Or...

...lazy, not competent or hobbled-by-crap-company-structure (probably the latter) devs stored user preferences client-side in the browser rather than in the user record on their side.

Why is this a stupid, broken design? Try using more than one web browser, or more than one device.

The whole point of cleaning out the web-dev-dumpster-fire of client-side databases is to stop malicious actors - such as advertisers - from storing things on your computer indefinitely without your consent. Thank god at least one browser out there seems to be trying to stop it.

GitHub saved plaintext passwords of npm users in log files, post mortem reveals

Andrew Hodgkinson

Why is anyone surprised?

Microsoft bought GitHub, then Microsoft bought NPM, then Microsoft integrated the two.

We're surprised about elementary and severe security failures why, exactly?

Safari is crippling the mobile market, and we never even noticed

Andrew Hodgkinson

Today in "nonsense", we have...

...this article.

Web apps are slower because the technologies upon which they are based are inherently and irrevocably slower and more resource intensive than native coded applications. An ahead-of-time compiled application (actual native code) will always be tighter than cross-platform web technologies, including WASM since - apart from numerous other reasons - the WASM modules are still _driven from_ JavaScript. Even if a JIT's going to produce native code, that all comes at a cost of RAM and, on a resource-constrained device, RAM is *very* precious.

Below the JavaScript, you've got HTML and CSS which were never designed for application-like UIs, so it's a tortured mess of DIV soup and reams of CSS - typically messing around with the hyper-convoluted flexbox, especially if you have the audacity to want something that pre-flex Web found super hard and super advanced, like, y'know, vertical centring. Woah. Advanced stuff, web guys. As for autolayout with springs, struts and the like? Yeah, right. Once again, we're hacking around with bits of CSS that can be coerced into behaving in a similar fashion, given enough time and effort - and device resources to interpret and execute it all.

(The recent example of the performance of LibreOffice ported to WASM was a pretty stark example of how efficient those technologies aren't).

Moreover, there's no access to the native UI toolkit from these applications (no, HTML form elements are *not* an application user interface toolkit). You need to construct everything from scratch. If you're lucky, you might be able to use a native form button and maybe an input field - but photo pickers, toolbars, popups, map views, tabs, master-detail views, navigation overlays, all of the animated transitions...

Your device's global settings offer a *built-in* native toolkit dark mode? Text size options? Bold-text-everywhere? High contrast mode? Distinguish-without-colour? Button shape settings? On-off labels? Transparency reduction? Motion reduction? Numerous accessibility options for navigation like switch control or audio descriptions that just work out-of-box on native elements? Tough. Reimplement it all again, from scratch, different every time, limited at best by the comparatively meagre attribute decorations that HTML offers for accessibility *and* only if your devs know to use them (and use them everywhere at, again, great cost in time, testing and maintenance).

Even something as basic as proper scrolling mechanics often have to be coded from scratch, depending on what you're trying to scroll inside your giant tower of DIVs.

The whole debate is asinine. If you want a web page, write a web page. If you want a "web app", fine, you don't want to pay fees except your own hosting. Live with the fact that you're either going to produce a sub-par user experience on the lowest of lowest common denominator cross-platform options, or you're going to burn a truly vast amount of money on extra engineering resource to try and reimplement all the things that native code would've given out-of-box on Android or iOS - right up until next year, at least, when a new iOS or Android version changes how things look, or introduces new features that all the already-written native framework apps just 'get', but your web app doesn't.

Want free of the "walled garden"? Good news! Android exists, and has a *huge* market share compared to iOS. Deploy off-store. Don't want to be limited by Safari on macOS? Good news! Windows and Linux exist and have an even *more* huge market share compared to macOS. I mean, who cares if you need to tell your users to bugger off and download the latest Chrome or Firefox or whatever because we all just *loved* it in the 1990s when web sites would tell us that our browsers weren't good enough, right? So knock yourselves out, use all those shiny new APIs that evil Apple isn't giving you.

But what if you wanted that juicy income from those rich iOS folks but don't wanna bother writing a native app because hosting it on the App Store (or Google Play Store, for that matter) means 30%? Well then yeah, it's not about your users, is it? It's about the money. The users have to accept something slower, of unknown security, of unknown privacy and with no control over when updates happen.

If any web app was worth beans then it'd be popular AF on Android, with people clamouring for a version on their Apple device, making Apple look bad until they did something about it. Ever heard of that for a web app? Even once? Nah, me neither.

As for when Google is pushing the latest new web API? Be afraid - or did you think somehow that Google were any less evil, or any less self-interested, than Apple?
