Posts by LionelB
1314 publicly visible posts • joined 9 Jul 2009
'Error' causes Alexa to endorse Kamala Harris, refuse to discuss Trump
Upgrading Linux with Rust looks like a new challenge. It's one of our oldest
Re: Screw this, Let's Rewrite Everything in Cobol
> ... if-and-only-if you overcome your inner urges to show people just how god-damned clever you are.
See also C++ template metaprogramming.
(I once wrote a template metaprogram for super-efficient arbitrary-length bitstrings. It was, if I may say so myself - and I do - extremely clever. QED.)
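(For the curious - a minimal modern-C++ sketch of the general idea, namely sizing the underlying storage at compile time from the bit count; nothing like as clever as the original, which of course predates constexpr, and every identifier here is my own invention:)

#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

template <std::size_t NBits>
class BitString {
    static constexpr std::size_t word_bits = 64;
    // Number of 64-bit words needed to hold NBits, rounded up at compile time.
    static constexpr std::size_t n_words = (NBits + word_bits - 1) / word_bits;
    std::array<std::uint64_t, n_words> words{};  // zero-initialised storage

public:
    void set(std::size_t i)        { words[i / word_bits] |=  (std::uint64_t{1} << (i % word_bits)); }
    void clear(std::size_t i)      { words[i / word_bits] &= ~(std::uint64_t{1} << (i % word_bits)); }
    bool test(std::size_t i) const { return (words[i / word_bits] >> (i % word_bits)) & 1u; }
    static constexpr std::size_t size() { return NBits; }
};

int main() {
    BitString<1000> b;               // storage (16 x 64-bit words) fixed at compile time
    b.set(3); b.set(999);
    std::cout << b.test(3) << b.test(4) << b.test(999) << '\n';  // prints 101
}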
We're in the brute force phase of AI – once it ends, demand for GPUs will too
Re: "Generative AI is.. not @AC-ng for u
> Can you not think of one use for a high-scoring probability engine?
We have several, like... um... Google Search, for instance - with the added advantage that search engines strongly encourage you to check multiple information sources and actually do a little due diligence, unlike LLMs, which encourage you to accept a single (quite likely flawed) response.
> That is a step up from...
Maybe it will be some time in the future (I'm not holding my breath), or at least for some very specific and well-delineated knowledge domains (like maths).
Re: "Generative AI is"
> There is one extremely powerful use case for generative AI: knowledge discovery ... The change is comparable to what Google did to Search.
Seems to me, though, that as regards knowledge discovery, as it stands generative AI is significantly worse than (Google or comparable) search, for the following reason: an LLM will give you one answer to your query, which may be nonsense. A search may throw up hundreds of answers to your query, many of which may be nonsense. But at least search results - assuming you are not a complete moron who always accepts as gospel the first answer that comes up - encourage you to acknowledge conflicting answers, and thus to dig deeper in order to discern which are the not-nonsense ones. LLMs give you no such encouragement; in fact they actually encourage you to be that moron.
Re: "Generative AI is"
As it happens, I've not personally found much use for computer algebra systems, beyond the odd integral that I'm too lazy to hack out for myself - but if they work for you, then fine.
> Gartner is right to point out that older AI technologies are still a lot better for many jobs - but too many of the comments about this article ABSOLUTELY MASSIVELY understate the value of a really good LLM like Gemini Advanced: ...
I'll take your word for it.
> ... it's like having a clever person who's eager to help you whenever you ask.
I'm rather a fan of Maths Stack Exchange: it's almost like having 10,000 clever persons who're eager to help you whenever you ask - and human persons at that, so they tend to answer with more wit and creativity than any LLM is likely to muster any time soon.
And some not-so-clever people too, of course, but they tend to be quickly corrected (and generally rather politely, at that - Reddit it ain't).
Re: "Generative AI is"
> Well, I sure hope you are willing to correct yourself.
I sure am - and others' mistakes too.
> And I hope you don't make the kind of obscene mathematical errors that LLMs tend to make.
I am in fact a mathematician, statistician and research scientist. As such - like every mathematician ever - I have made mistakes ranging from the trivial to the... well, at least significant, if not "obscene". And I have identified such mistakes in others' work.
Interestingly, probably the most serious mathematical error I've actually published (an error of omission which was pointed out to me by a fellow researcher) is still sitting in the published paper because I have completely failed to get the journal in question to publish an erratum - not for lack of trying. This is ridiculous and annoying - it means that every time I reference the paper myself, I have to put a footnote in there pointing out the error.
> I, for one, try not to make confident claims unless I can back them up with facts, citations, and reasoning.
Oh, absolutely.
Finally, well... why on earth would anyone want to ask a Large Language Model to give them answers about mathematics? There is much more appropriate software around for doing that.
Re: Programming techniques are yet to be refined
Um, maybe for some values of "elegant".
As regards LLMs (at least those based on the transformer architecture), my suspicion would be (though I'd be happy to be proven wrong) that no reduction in computational complexity (and data resource demands) is actually possible; hence my suggestion that new approaches are likely to be necessary.
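(For concreteness - and this is just the textbook back-of-envelope figure, not a claim about any particular implementation - vanilla self-attention compares every token with every other, so a single layer over a context of n tokens with model dimension d costs on the order of n^2 * d operations, plus n^2 memory for the attention matrix; stack L layers and it's roughly L * n^2 * d. You can trade that down with approximations - sparse or linearised attention and the like - but the exact mechanism is quadratic in context length by construction.)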
Re: "Generative AI is.. not @AC-ng for u
> I would say Turing kicked it off. Took a while to get there though.
He may well have. The Turing test was, though, one might argue, rather quickly met - and, with hindsight, somewhat naive. Great man that he was, Turing was no psychologist ;-)
> Because emotions. Needs to be tethered in.
Not sure I understand your point (unless it was the same as mine re. Turing).
> Character-based probability engines that abstract into numbers(sort of) ...
Thanks; I said I didn't understand what they're for, not what they are, or how they work. I know what they are, and have a fair grasp of how they work.
Don't entirely disagree with your remarks on generative AI, although I'm not convinced that the mere fact that they're generative accounts for their obscene resource demands. I can envisage equally resource-greedy models that are not generative (beyond the fact that all putative AI models are going to have to generate something, even if that's just a yes/no).
Re: "Generative AI is"
Wind your neck in. My intention was not actually to suggest that LLMs are anything like statistical mechanics models - beyond the observation that they are both, in fact, statistical models.
Sure, regression models (widely and successfully deployed in myriad domains) are a closer match - although actually not that close, if you are familiar with the transformer architecture underlying LLMs.
My real point - which I concede I may not have made clear - is that the proliferation of comments on any article involving "AI" and particularly LLMs pushing the "just a statistical model" trope (my emphasis) is naive and misguided. No, I certainly do not think that LLMs are anything I'd like to call "intelligent", but that's little to do with the mere fact that they happen to be statistical models, let alone "just". Statistical models come in many flavours, depending on the domain in question, and, as I pointed out, are commonplace and generally uncontroversial in science. There are, indeed, statistical models under development for brain function and cognition; see, e.g., Predictive Coding and the Free Energy Principle. (And no, I am not suggesting that LLMs are anything like those. They're not.)
Re: "Generative AI is"
> The real AI researchers have recognized this and already moved on to use the term AGI to refer to what they used to mean by "AI".
You mean they moved the goalposts ... again?
When, and by whom, was it decided that intelligence has to be "general"? Not a rhetorical question: for example, I think octopuses are pretty damn intelligent - but I'm not sure their intelligence would count as "general" from a human perspective. Why the anthropomorphism around "intelligence"?
(Disclaimer: I am not claiming that LLMs are particularly intelligent in any sense of the word I can think of. The best you can say is that they're quite good at assembling plausibly human-like textual responses to textual queries - which is almost impressive as a feat of engineering, but also not. I don't really understand what they're for.)
Programming techniques are yet to be refined
I don't think it's a question of programming techniques. The computational demands of LLMs, for instance, are intrinsic to the model itself and to the manner in which it scales (i.e., badly) - no amount of programmatic twiddling or ingenuity will mitigate that to any significant degree. New, different models are required.
I'm not sure I actually agree with Gartner's main argument either. Take pattern recognition, for instance. That only really took off when hardware resources to run convolutional neural networks became available. Today it is mainstream, and almost exclusively run on GPUs - which ain't going away any time soon. Sure, "elegant" programming is nice when you can do it, but it rarely* makes serious headway into the scaling of a problem in terms of computational and data demands.
*There are exceptions; the Fast Fourier Transform, which sits at the core of almost all digital signal processing applications, is a nice example of that.
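(By way of illustration, a toy recursive radix-2 Cooley-Tukey sketch of my own - power-of-two lengths only, and nothing you'd use in anger in place of FFTW or the like: the naive DFT takes roughly N^2 complex multiply-adds, whereas this takes roughly N log2 N.)

#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

using cd = std::complex<double>;

// In-place FFT of a sequence whose length is a power of two.
void fft(std::vector<cd>& a) {
    const std::size_t n = a.size();
    if (n <= 1) return;
    std::vector<cd> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {  // split into even- and odd-indexed halves
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fft(even);                                 // recurse: two half-size transforms
    fft(odd);
    const double pi = std::acos(-1.0);
    for (std::size_t k = 0; k < n / 2; ++k) {  // combine using the twiddle factors
        const cd t = std::polar(1.0, -2.0 * pi * static_cast<double>(k) / static_cast<double>(n)) * odd[k];
        a[k]         = even[k] + t;
        a[k + n / 2] = even[k] - t;
    }
}

int main() {
    std::vector<cd> x = {1, 1, 1, 1, 0, 0, 0, 0};  // a simple rectangular pulse
    fft(x);
    for (const auto& c : x) std::cout << c << '\n';
}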
Re: "Generative AI is"
"Can we stop calling this current orgasmic hype-ware AI?"
The bird has bolted and the horse has flown, I'm afraid.
"It's just a large language statistical model."
Yes it is, but no disrespect to statistical modelling - it can be extremely powerful (large chunks of science, for instance, are grounded in statistical modelling; cf. statistical mechanics).
"AI would imply the software has some form of understanding of what it's dealing with."
Would it? I kind of like to think I have intelligence, and yet I frequently get by without much understanding (whatever that means) of what I'm dealing with - I just get on and do it.
"It doesn't - as it clearly evident by some of the utter rubbish it produces."
I confess to producing utter rubbish on occasion (or so my family, friends and work colleagues tell me).
Python script saw students booted off the mainframe for sending one insult too many
Confession: while working as a programmer for a well-known UK telecoms company in the early-mid 80s, I wrote a small script which sent the unfeasibly noisy line printer in the print room next to Accounts a sequence of characters that clacked out the rhythm of the Birdie Song in a loop: drr drr drr drr drr drr DRRR, drr drr drr drr drr drr DRRR, drr drr drr drr drr drr DRRR DRRR DRRR DRRR DRRR, etc. Much mirth ensued.
GNU screen 5 proves it's still got game even after 37 years
Microsoft decides it's a good time for bad UI to die
Indeed. Personally I'm stuck in the late 90s - been using Fluxbox for over twenty years, Blackbox before that. Minimalist, customisable and ergonomic, with a distinctive aesthetic. Not for everyone, but I've found no reason to switch (aside from the odd foray into Xfce - ironically, perhaps, mostly for its well-organised UI options for system and peripherals configuration).
I did, but reading your post the first person that came to mind was Keith Floyd - a man whose finest moments involved lurching blind drunk around a French country barbecue. I'm not sure how that is relevant to the Windows Control Panel, beyond that his signature Big Fat Glass Of Wine may conceivably make coping with its successor a little less stressful.
Brace for glitches and GRUB grumbles as Ubuntu 24.04.1 lands
AGI is on clients' radar but far from reality, says Gartner
Re: Intelligence
> Finally, since the advent of computers, too many people have thought that what our brains do is "computing," but I'm far from convinced of that.
Amen to that. I work in consciousness science (on the mathematical/neuroscience side), and that's one of my pet peeves. If brains can be said to "compute" anything, that is surely stretching the idea of what "computation" means to most people (including computer scientists). At best it seems like a lousy metaphor for what brains actually do, and at worst encourages a simplistic, reductionist way of thinking about cognition and intelligence.
On the other hand, my other pet peeve is the idea that brains construct "models of the world" (including, presumably, a homunculus representing the owner of said brain). Again, "model" here stretches its well-defined scientific meaning, and encourages a misleading, dualistic and static view of brain function. (I have less of a problem with dynamic, interactive modelling of brain function as epitomised, e.g., by Predictive Coding theory, which may well be what you had in mind.)
Brains, as you say -- in all organisms which can be said to possess one -- operate in tight sensory-motor loops with the environment. There's a strong sense in which they function more like glorified Watt governors than computers. (This argument was put powerfully by the early cyberneticists; cf. Braitenberg vehicles.) The Enactivist school of thought makes this explicit.
Re: AGI is possible.
"AGI would come from a completely different place, establishing multiple basic concepts (things, uses, preferences, outcomes), linking them, and then allowing extension."
That approach was heavily researched in the 70s - early 90s (a.k.a. "GOFAI" = Good Old-Fashioned AI), but foundered in a morass of combinatorial explosions (which would not be mitigated significantly by subsequent increases in computing capacity). That is almost certainly not how human intelligence functions - and LLMs are almost certainly not how human intelligence functions either. We are still very much in the dark about the (evolved) organisational principles which underpin human intelligence - or other animal intelligence, for that matter.
In the absence of a major breakthrough in understanding, whatever direction AI might take over the next few decades, it will not, I suspect, be terribly human-like. (Which may not be a Bad Thing.)
UK tech pioneer Mike Lynch dead at 59
The problem is, I guess, that many people do not appreciate the difference between those scenarios, or even recognise them as distinct.
Human intuition for probability and statistics is notoriously rubbish. We are evolved to see patterns everywhere - we are inclined to overfit the world (to varying degrees... conspiracy theorists sit at one end of that spectrum).
Okay, how would you work out that probability then? To calculate a probability, you need to state in advance (1) the set of all things which fall into the class of events under consideration that could potentially occur, (2) the probability distribution over that set of events (which might or might not be uniform), and (3) the subset of those events for which you want to know the probability.
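(Slightly more formally - and purely to pin the point down, not to suggest anyone actually sits down and does this calculation: with Omega the set of possible outcomes, p the distribution over it, and A the subset of outcomes you care about, the probability is P(A) = sum of p(w) over all outcomes w in A, which for a uniform distribution reduces to |A| / |Omega|. Every one of the choices below changes Omega, p or A - and hence the answer.)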
So... e.g.,
Would you include cases where the victim was not Lynch, but someone else, say, any business tycoon and (one of) his associates?
Did it have to be a waterspout or would some other unpredictable weather phenomenon make the cut?
Would it count if the trial was for something different? Or there was no trial, but perhaps something else contentious going on with the victims?
Over what period are we considering the probability? A year? A decade? A century? A millennium?
How close together in time do the deaths have to be to count?
...
...
...
I'm sure you can think of a few zillion more.
I know I've already done so in this thread, but I'm again going to have to leave the last word on coincidence to Richard Feynman:
“You know, the most amazing thing happened to me tonight... I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!”
Nvidia's latest AI climate model takes aim at severe weather
Re: Weathermarket
Atmospheric dynamics at least are underpinned by (mostly) known, if inherently difficult-to-predict, deterministic (if chaotic) physics. Market dynamics are underpinned by human individual and herd psychology in all its perverse and irrational glory, and (as I learned during a brief stint as a quant) a woefully poor signal-to-noise ratio. Those are very different beasts.
Core Python developer suspended for three months
I think the term is "ideologue" -- fanatical subscribers to some -ism or other -- and yes, they are bad news. Not long ago here in the UK, we had one of those who inexplicably managed to become Prime Minister for a (very) short time. Blinded to reality by her ideology, she tanked the economy in record time - we have barely recovered from the damage.
> "Repeat a lie often enough and it becomes the truth”, is a law of propaganda often attributed to the Nazi Joseph Goebbels.
Sure, propaganda is a powerful tool, and catching them at an early age is particularly effective (I guess you may have been referring to religious indoctrination, perhaps the paradigm example). A particularly insidious variety popular nowadays is to undermine the very notion of truth - "truth" becomes, by definition, what the demagogue(s) declare it to be (the cult of Trump speaks strongly to that one - see also Russia, North Korea, China ...).
> Most people don't think for themselves, if such were the case we would not have the politicians/politics that we have right now.
Or, perhaps, do think for themselves, but find themselves in agreement with the demagoguery in question. Or, worse, they may, deep down, recognise propaganda as lies, but have simply ceased to care about truth (see above).
So it's not the idealism per se that's bad - it's "enforcing that others believe your lies".
A couple of problems with that:
1. Wanting to enforce people to believe your lies is hardly the preserve of idealists. In fact it may even help not to be an idealist, if that's your agenda; cynicism seems more appropriate.
2. How do you "enforce" people to believe anything? I mean you may wish to, and you can give it a go -- you may even coerce people into pretending to believe your lies (popular in many parts of the world) -- but people have an annoying tendency to have minds of their own.
In any case, are there not also good ideals? To be trite, suppose in my ideal world I'd want people to be kind to each other. Would that make me a terrible, dangerous person?
Nah. It's just become a meaningless catch-all term of abuse from the reactionary right for anyone who doesn't share their perspective; a tribal "virtue" signal, if you like.
And when did idealism become a Bad Thing, by the way? Don't you think those of a reactionary persuasion -- especially the religious right -- likely see themselves as holding rather strong ideals, to say the least? (Personally, if I really have to subscribe to any -ism, it would probably be humanism.)
Re: "It was their behavior that got them there in the first place"
> The thing is, we are living under a yoke where a lot of things cannot be said ...
Can't they? You seem to have managed to say an awful lot. I'm guessing the jackboots have not arrived at your door yet.
There are a number of places where a lot of things really cannot be said; Putin's Russia, North Korea and various Islamic states come to mind. Your rants are at best snowflaky, at worst an insult to the poor folk who have to actually live under such regimes.