* Posts by LionelB

1702 publicly visible posts • joined 9 Jul 2009

Sony rolls out a standard way to measure bias in how AI describes what it 'sees'

LionelB Silver badge

He he - mea culpa.

LionelB Silver badge

Yes, sure, attempts at mitigating bias can overshoot/backfire.

The basic problem is that the corpuses on which LLMs are trained simply reflect all the societal biases out there. The question is, do we want all those biases spewed back in our faces and festering in echo-chambers, egged on by the LLMs? Who the hell benefits from that (besides the tech overlords monetising our data in the process)? So it would seem that some attempt at bias mitigation is inevitable (to a backdrop, of course, of biased screaming from the sidelines).

LionelB Silver badge

Okay, apologies for the trolling accusation; on reflection I don't think you were being wittingly crass. You did, however, "facetiously" or not, casually drop an annoyingly banal and inappropriate "anti-woke" trope. Maybe I was just in a bad mood, but that kind of culture-wars nonsense gets my back up.

LionelB Silver badge

> That said, I tried asking Gemini just now ("show me some images of typical American firefighters"), one of them was [image of ethnically and gender diverse firefighters]. A fair few others included black firemen, ...

Okay...

> It's not quite a one-legged lesbian, but it's also not exactly all White.

Well no, because as you pointed out yourself:

> ... which likely reflects the reality of American city firefighting.

> I was of course being facetious with my comment.

And didn't we all chortle. At least you had the good grace to subsequently expose yourself as a troll. (I may or may not be being facetious here.)

LionelB Silver badge

I just tried that with Gemini. The results, almost without exception, were images of white males. As far as I could tell, they all had the usual number of legs. Gender identification and sexual orientation were unclear.

Ho hum.

LionelB Silver badge

More watchers. They watch each other. Then they start to shout at each other, and so it goes on.

Microsoft will force its 'superintelligence' to be a 'humanist' and play nice with people

LionelB Silver badge

Re: Let's keep it real

> AI does not exist...

No, it does not.

> ... and most likely never will.

Sure, it's up there with controlled scalable nuclear fusion, quantum computing and heavier-than-air flight. Oh, wait...

(If and when, it won't be LLMs, though.)

LionelB Silver badge

Re: MS Stock price

> AI bubble that will blow very soon

How soon is soon, though? People have been saying that for a year and more...

Gullible bots struggle to distinguish between facts and beliefs

LionelB Silver badge

Re: "AI researchers realize AI's are just pattern matching (again)" Film at 11?

> I get the feeling that AI researchers may not know much about what they're researching.

Oh, the researchers (well, let's say most of them) know pretty damn well what they're researching. It's the marketing - the mis-selling - that's the real issue here.

If LLMs were presented honestly as: "This software does one thing only - it generates plausibly human-like responses to queries based on vast amounts of indiscriminate, uncurated data sucked out of the internet", perhaps more people would pause to think how useful that actually is to them (or to anyone).

Robotic lawnmower uses AI to dodge cats, toys

LionelB Silver badge

Re: lawnmower uses AI to dodge cats

Our local foxes would shred it just for fun (and leave a nasty little present on your doormat to make the point).

LionelB Silver badge

Turns out "remote kill" didn't mean what we thought it did...

Linux vendors are getting into Ubuntu – and Snap

LionelB Silver badge

Re: Lies, damn lies, and statistics about Bug #1

Lighten up fella, it's Friday.

LionelB Silver badge

Re: Lies, damn lies, and statistics about Bug #1

> Valve has already done way more to fix Ubuntu's Bug #1 than Shuttleworth will ever do.

It may turn out that Microsoft have already done way more to fix Ubuntu's Bug #1 than Shuttleworth will ever do (*cough* Copilot *cough*).

LionelB Silver badge

Re: It's not snap that bothers me...

> (I still find it hard to skip the 'u')

I keep reading it as AppAmour. Guess I'm just a romantic.

Microsoft 365 business customers are running out of places to hide from Copilot

LionelB Silver badge
Devil

Re: Time to give MS the big finger

You can't sue them, because they are authorised to interfere with your computer. You authorised them. (What? You didn't read the small print?)

BBC probe finds AI chatbots mangle nearly half of news summaries

LionelB Silver badge

Re: To be fair ...

Yes, news media have diverse perspectives - because, well, there are diverse perspectives; the notion that every given human situation can be adequately summarised by a set of incontrovertible hard "facts" is a fantasy. As a consumer, it is up to you to navigate this landscape, hopefully without getting sucked into echo chambers. Sorry, but that's as good as it gets - the alternative is know-nothing nihilism.

> The news is entertainment, not education. People don't watch it to learn something, they watch it to be entertained and to feel warm and fuzzy.

Do they? Personally, I find news consumption more a grim chore than entertainment. Warm and fuzzy doesn't get a look in.

LionelB Silver badge

If you take the view (I don't) that the (transformer) models underlying LLMs are "for" some kind of general artificial intelligence then they may well be a "dead end" (on the other hand, who knows, they may turn out to be a useful component of some more sophisticated future system).

An alternative viewpoint is that the "connectionist" wave of research that succeeded the old GOFAI1 - deep learning, etc. - was always less about "intelligence" than about "cognition" or some such. In some respects it turned out to be quite successful on its own terms (e.g., in classifiers, image/voice recognition, machine translation, bioinformatics, etc.).

I wouldn't necessarily call LLMs "nonsensical" - after all, they do what they say on the (black) box: generate plausibly human-like responses to textual input. Whether that's in any way useful to anyone in a sane world is another question entirely. (Then again, it's not a sane world, is it?) But I think we all agree that they are not what they're sold to the public as.

1"Good Old Fashioned AI" of the 60s-80s (which you may have been referring to) which hit a brick wall of combinatorial explosions. In retrospect, it was premised on a naive vision of AI as simply a matter of negotiating formal logical pathways - of "reasoning" your way to answers. Turns out that that's a million miles away from how biological intelligence works, and a no-go computationally. The rather lame legacy of GOFAI is "expert" or "knowledge-based" systems.

LionelB Silver badge

> Research papers described what they had, but that does not answer the question of why they were researching these things in the first place.

I think this is a naive (or disingenuous) misapprehension of how research works in practice. Research, especially of the more academic (as opposed to commercially-driven) variety, is not necessarily (or even usually) gifted with a distinct, well-specified target in mind (I should know - I am a research scientist1). My guess would be that early research on, e.g., the transformer architecture and generative ML was looking broadly to develop improved models for deep learning in problems such as classification and feature-learning. The applications of such models range across speech recognition, natural language processing and machine translation, computer vision, and beyond, but the academic research would not necessarily have focused on a particular application; it would likely have been more "blue sky".

It is worth bearing in mind that deep-learning research, from convolutional networks through transformers, has indeed proved useful in areas such as image processing, face recognition, speech recognition, machine translation and bioinformatics, to name a few. I kind of agree that generative ML models may be - at least at present - a solution in search of a problem, but the history of scientific research abounds with apparently "useless" results that later turned out to be extremely useful indeed.

1Not in AI/ML - I am a mathematician/statistician working in a neuroscience-adjacent field. I do have some background in ML and ANNs, though.

LionelB Silver badge

> People want what it's being sold as, and so did the people who started working with these things.

Is that latter point actually true, though? I rather doubt it - they may have "wanted" some kind of general AI (hey, who wouldn't ;-)) but they would have been pretty clear on what they actually designed. That's clear enough from the technical papers.

Sorry, no, it's pure mis-selling.

LionelB Silver badge

Re: To be fair ...

Maybe, but that editorial control may be, and frequently is, inconsistent across news domains. (That's particularly noticeable in science journalism.)

On social media (which, frighteningly, is increasingly regarded as a "news" source by many) editorial control is limited to lame, inconsistent and politically-driven moderation policies.

LionelB Silver badge

Re: To be fair ...

> It's all bullshit.

I take your point (but see also John Brown's response). Thing is, if you go all in and decline to trust any sources at all, does that mean you don't believe anything about the world outside your own line of sight? That sounds quite debilitating.

LionelB Silver badge

Re: A splinter in your eye?

I can't decide whether you are being ironic or just fat-fingered.

LionelB Silver badge

Re: To be fair ...

Genuine question, but how could you tell? Short of personally pursuing your own highly-principled, on-the-ground, unbiased, etc., etc., journalistic investigations, you are de facto relying on (some selection of) said "news sources" regarding those "facts".

Sure, the fact that said sources frequently contradict each other when it comes to "the facts" more than hints at an (age-old) problem, but in practice it's ultimately down to which sources you trust the most - which is, of course, beholden to personal biases.

LionelB Silver badge
Meh

> Which is EXACTLY WHY the ultra-rich are sooooo keen on forcing us all to use AI for everything.

I suspect the reason is more prosaic: greedy bastards chasing short-term profits.

> "AI" is not only bullshit, it's DELIBERATELY bullshit.

I'd be inclined to say "Do not ascribe conspiratorial motives to that which can be adequately explained by greedy bastards chasing short-term profits".

"AI" is only "bullshit" insofar as it's sold to the public as something it is not. LLMs are very good at what they were actually designed to do, which is to generate plausibly human-like responses to queries; it's the mis-selling rather than the technology itself which is bullshit (see above).

Britain's Ministry of Justice just signed up to ChatGPT Enterprise

LionelB Silver badge

> ... ignorance of grammar or spelling ...

Actually, the LLMs do a remarkably good job at grammar and spelling1 (as opposed to, errm, the other stuff).

1I have an acquaintance who lectures in law at a UK university, where ChatGPT'd essays are a real problem. She says that one of the biggest giveaways (apart from fabricated references) is when the quality of writing far exceeds the student's known numpty illiteracy.

AWS outage turned smart homes into dumb boxes – and sysadmins into therapists

LionelB Silver badge

Re: Oh sh*t

"crap monitoring devices"

It wasn't immediately clear to me how to parse that phrase, so I followed the link. I'm still not sure.

Anthropic brings mad Skills to Claude

LionelB Silver badge

Re: Multiple skills

This is entirely reasonable and expected. LLMs are designed specifically to generate plausibly human-like responses; that is what they're for, and what they successfully achieved in this case.

Wait... you thought they were for something else?

Mozilla is recruiting beta testers for a free, baked-in Firefox VPN

LionelB Silver badge

Re: Plethora

> Yeah, well real world data here shows that it doesn't.

Could you remind me where "here" is? Could you also post some links to evidence backing up that claim?

> So what if you're exceeding the speed limit.

So you increase the risk of an accident, and increase the severity of injuries in the case of an accident. There is a huge quantity of evidence to back this up. See, e.g., this meta-analysis.

> What counts is safety ...

Exactly!

If you're going to regard speed limits as merely advisory (and as such clearly to be disregarded by no doubt excellent and attentive drivers such as your good self), why bother with them in the first place? Otherwise, if speed limits are mandatory, why should they not be enforced even-handedly by whichever means are most effective at identifying transgressors?

LionelB Silver badge

Re: Plethora

> Speed cameras do nothing to stop a dangerous driver or make roads safer ...

Sorry, but reality would beg to differ (as numerous studies have shown).

Boris Johnson confesses: He's fallen for ChatGPT

LionelB Silver badge

Re: If you’re not too fussed about the truth

Only when they're from Crete.

Climate goals go up in smoke as US datacenters turn to coal

LionelB Silver badge

Re: And?

LionelB wrote:

"... as soon as a party resorts to verbal abuse, any possibility of reasonable discussion is derailed and continuation becomes a waste of everyone's time."

Thank you for making my point so eloquently.

LionelB Silver badge

Re: And?

The "Bye" is because as soon as a party resorts to verbal abuse, any possibility of reasonable discussion is derailed and continuation becomes a waste of everyone's time.

I am generally up for a reasonable discussion. I am not up for a school-playground slanging match. I take verbal abuse as concession of an argument lost, and walk away.

LionelB Silver badge

Re: And?

> Fantastic. So I guess you will be ditching your computer, ...

I guess you wrote that before reading the rest of my post.

> Yet you still havnt. You make bullshit claims ...

Stopped reading there.

Bye.

LionelB Silver badge

Re: And?

I consider it unethical to pollute (and I include, of course, CO2 emissions) and otherwise scar the environment (let's throw in water requirements too for good measure) by deployment of a questionable technology with questionable demand, questionable usefulness and a questionable future, in the name of generating $$$ for a handful of people. Nor do I think solar/nuclear/whatever would be much better in the service of this questionable venture (let's call it out for what it is: short-term opportunist greed or rank stupidity, depending on which side of the fence you fall when the shit hits the fan) - just a lesser of evils.

Yup, easy to answer.

As I said: ethics can be hard, but sometimes they're just not.

And note: I would not have written Google off - and still don't - because they were, and continue to be, genuinely useful to me and probably the odd few billion other people. This despite my distaste for their business practices (nor do I want their AI crap any more than anyone else's). This is for me a case where the ethics do become harder: so I'm just about prepared to put up with the likes of Google, maybe Amazon (Meta, X, not so much) because they provide genuinely useful services - but not in the name of (current) AI, where the trade-off of appallingly profligate data-processing demands vs actual utility is so ludicrously skewed (compared to, e.g., search, business/commercial support, data storage, or communications).

LionelB Silver badge

Re: And?

Local economies will benefit slightly and temporarily in terms of construction (up-and-running datacentres do not require large staffing). And what "future development"? When the AI bubble bursts, staff will be left high and dry in some rotting industrial scar in the desert (see also future unemployment).

The "customers" are riding a bubble providing a largely useless, overhyped and unwanted service. The smart few (note, few) will take the money and run before the shit hits the fan.

The cost to the environment is that coal is about the most polluting source of energy going. (If these companies chose to spaff their money up the wall investing in solar or small-scale nuclear, I'd have less concern.)

You need to put that whip down. The horse is deceased.

LionelB Silver badge

Re: And?

Nope, it's a blindingly easy one. Just ask yourself who benefits (besides bosses, clients, shareholders - a tiny demographic), to what degree, and at what cost to the environment.

LionelB Silver badge

Re: And?

Ethics are hard.

But sometimes they're really not.

LionelB Silver badge

Re: And?

You really see no difference between those scenarios?

How sad.

LionelB Silver badge
Devil

Re: I just hope

There are worrying signs, however, that "When the AI bubble bursts" is becoming the new "When we have large-scale nuclear fusion". I hope it does burst, but it is not inconceivable that, in the current climate of global stupid, it turns out to be not so much a bubble, but rather (with full credit to the venerable Mr. Lahey) a self-sustaining shit-circle.

LionelB Silver badge

Re: And?

Just to be clear, you are talking about the "overall concern" of the datacentre company bosses, clients and shareholders (who knew?)

As opposed to the "overall concern" of anyone with half a brain and more than rudimentary ethical sensibilities.

This is your brain on bots: AI interaction may hurt students more than it helps

LionelB Silver badge

Re: Colour me surprised

> I really don't know how you can effectively teach mathematics to the majority of school students. Any comprehension of basic formal operations required for simple algebra completely eludes most.

In my limited experience of teaching maths at various levels, I won't disagree with that :-/ Teaching maths is hard. In general, everyone (including myself, and I'm a professional mathematician) hits their personal ceiling of abstraction; and for the majority that ceiling is rather low. But the teacher can make a huge difference; although I was always mathematically inclined, I had the benefit of a truly gifted maths teacher at high school. It's a vocational thing - I don't think I have it, which is why I don't teach much.

LionelB Silver badge

Re: Colour me surprised

It wasn't clear to me that that's what the test showed.

> But procedural operation requires memorization of the procedure.

Not sure I'd agree with that: procedural knowledge essentially entails understanding of procedures, and which, why, when and how to apply them. I wouldn't be inclined to describe those as memorisation tasks. The computer does not in general do that for you!

I have taught some mathematics and statistics. The latter, in particular, is frequently rather badly taught. The problem is - and this is particularly true in "semi-technical" disciplines such as psychology and the social sciences (ironically, disciplines which lean heavily on statistics) - that statistics is commonly taught exactly as you describe: memorisation of a bunch of statistical procedures, but as "black boxes", without any real understanding of the which, why, when and how1. This has been, IMHO, a major contributing factor to the replication crisis in those disciplines.

1I had one particularly infuriating MSc student who would come up and ask "Should I use an F-test, a chi^2 or an ANOVA here?" I'd reply "I don't know. Where's 'here'? What question are you trying to answer? What hypothesis are you trying to test?" He would go "Oh... I'm not too sure". I'd tell him to go away, clarify the problem at hand, identify appropriate hypotheses, etc., etc. The next week he'd be back with "But should I use an F-test, a chi^2 or an ANOVA?" And so it went round. Statistics for him was finding the SPSS black box he could plug his data into. Of course he had no idea where any of those tests applied, what kinds of hypotheses they tested, what assumptions they rested on, what they actually told you about your data, what they were for.
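
To labour the point with a minimal sketch (entirely hypothetical data; Python with numpy/scipy assumed - not something I'd have inflicted on that student): the question dictates the test, not the other way around.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Question 1: "Do mean exam scores differ across three teaching methods?"
    # Comparing the means of three independent groups -> one-way ANOVA (an F-test).
    g1 = rng.normal(50, 10, size=30)
    g2 = rng.normal(55, 10, size=30)
    g3 = rng.normal(52, 10, size=30)
    f_stat, p_val = stats.f_oneway(g1, g2, g3)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

    # Question 2: "Is pass/fail associated with attendance (high/low)?"
    # Association between two categorical variables -> chi^2 test on a contingency table.
    table = np.array([[30, 10],   # high attendance: pass, fail
                      [15, 25]])  # low attendance:  pass, fail
    chi2, p_val, dof, _ = stats.chi2_contingency(table)
    print(f"chi^2 = {chi2:.2f}, dof = {dof}, p = {p_val:.3f}")

Same software, two utterly different questions - and it's the question, plus the assumptions behind each test, that picks the procedure, not the shape of the data you happen to have lying around.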

LionelB Silver badge

Re: Peak enshittification

Probably more to do with preparation of materials (which is fine if they are doing due diligence in checking against reliable sources in their discipline; otherwise, of course, not so much - but then you'd say the same about using Google or Wikipedia). There may also, I'd guess, be some LLM marking by overburdened or just plain lazy teachers/lecturers. Here in the UK, large-scale layoffs of lecturing staff are underway at many if not most universities, meaning that workloads and class sizes have ballooned in many areas.

When it comes to students' use of LLMs, if there's reasonable suspicion the recourse at my establishment is to demand a viva. Spotting cheats can be surprisingly straightforward, for example non-existent or utterly irrelevant references, or (depressingly) literacy levels which are clearly beyond the student's known-to-be-wretched capabilities.

OpenAI GPT-5: great taste, less filling, now with 30% less bias

LionelB Silver badge

Scapegoating immigrants as the root cause of all a country's ills is also extreme. This is happening right now, for example, in the UK, where the vast majority of immigrants are not "illegal" - and yet are effectively being demonised by populist demagoguery.

It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic

LionelB Silver badge

Re: AI on AI action

Hehe. Gemini 2.5 gives me approximately the above, plus

The name "AManFromMars1" is mentioned in a Reddit thread discussing The Register website, with one user speculating they are "still there, and still sounds like a hack programmer's attempt at writing a schizophrenic chatbot."

LionelB Silver badge
Coat

To be fair, I suspect clams is all you're going to win there.

LionelB Silver badge

Re: Believe IT or believe IT not, You Aint Seen Nothing Yet.*

No, no... turns out amanfrommars1 was from the start an advanced LLM with the temperature parameter turned up to 11.

LionelB Silver badge

Re: This seems both obvious and not exactly harmful...

I'm going to guess that this number may depend on how the LLM is "tuned" (e.g., temperature parameters, etc.)
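
For the curious, a minimal sketch of what the temperature parameter actually does (toy logits, Python with numpy assumed): next-token sampling draws from softmax(logits / T), so higher T flattens the distribution and hands more probability to unlikely tokens.

    import numpy as np

    def sample_token(logits, temperature, rng):
        # Divide logits by T before the softmax: T < 1 sharpens the
        # distribution (near-greedy), T > 1 flattens it (more random).
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()                      # numerical stability
        p = np.exp(z) / np.exp(z).sum()
        return rng.choice(len(p), p=p)

    rng = np.random.default_rng(0)
    logits = [2.0, 1.0, 0.2, -1.0]        # hypothetical next-token scores
    for T in (0.2, 1.0, 2.0):
        draws = [sample_token(logits, T, rng) for _ in range(10_000)]
        print(T, np.bincount(draws, minlength=4) / 10_000)

At T = 0.2 the top-scoring token is picked almost every time; at T = 2.0 even the lowest-scoring one gets a regular look-in - so I'd expect the number of poisoned documents needed to shift with settings like this.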

LionelB Silver badge

Re: This seems both obvious and not exactly harmful...

> ... but we are moving towards an era when LLMs drive software ...

Worse still, write software.