* Posts by LionelB

1498 publicly visible posts • joined 9 Jul 2009

What would a Microsoft engineer do to Ubuntu? AnduinOS is the answer

LionelB Silver badge

Re: Doesn't need to be done

> Having said that, maybe a W10 look would have been better.

Or W2000 (in which case there's, e.g., Mint/Xfce).

Or W7 (in which case there's, e.g., Mint/Cinnamon).

Microsoft-backed AI out-forecasts hurricane experts without crunching the physics

LionelB Silver badge

Re: Point it at the stock exchange data

My hedge-fund job involved trialling various nonlinear predictive algorithms beyond the standard linear regression that was (probably still is) the bread and butter of financial market prediction. So I tested models such as nonlinear regression, ANNs, genetic algorithms, yadda yadda. Linear regression almost always won out. The reason was interesting: financial time series have a truly terrible signal-to-noise ratio - every opportunity is very rapidly arbitraged out and you're left with what looks for all the world like a random walk. This scenario heavily penalises model complexity - the number of parameters in the model: the more complex the model, the more it fits the noise rather than the signal, and therefore the worse the out-of-sample prediction. For models of a given number of variables, the linear ones are simply the least complex. (The real voodoo was figuring out which information to feed into the models - by adding another variable into the mix, are you increasing the signal or the noise? And also, of course, minimising risk1.)

Predicting weather, though, is a different kettle of fish entirely.

1Ironically, hedge funds at some point came to be associated with reckless trading, whereas in reality they're all about risk management; the clue is in the name. So, e.g., the goal of your trading algorithm is usually not to maximise returns, but rather (roughly) the ratio of returns to volatility - the so-called Sharpe ratio.
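For what it's worth, a minimal Python sketch of that last point (entirely made-up return series; no annualisation or risk-free-rate subtleties): two strategies with identical mean returns, but the Sharpe ratio heavily favours the steady one over the volatile one.

```python
import statistics

def sharpe(returns, risk_free=0.0):
    """Mean excess return divided by volatility (std dev of returns)."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Two hypothetical strategies with the same mean return (0.01 per period);
# the "wild" one has bigger wins but also bigger losses.
steady = [0.010, 0.012, 0.009, 0.011, 0.010, 0.008]
wild   = [0.070, -0.050, 0.060, -0.040, 0.050, -0.030]

print(f"steady: mean {statistics.mean(steady):.4f}, Sharpe {sharpe(steady):.2f}")
print(f"wild:   mean {statistics.mean(wild):.4f}, Sharpe {sharpe(wild):.2f}")
```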

LionelB Silver badge

Re: Dependeny problems

Yup, we call those "error bars".

But seriously, forecasting systems generally try to quantify prediction error; e.g., a traditional physics-based weather forecast will typically be re-run many times with perturbed initial conditions and/or parameters to gauge the uncertainty. That's where your "35% chance of rain" comes from.
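By way of a toy illustration (a completely made-up model and thresholds, purely to show the ensemble idea): perturb the imperfectly-measured initial condition many times, run the model on each, and report the fraction of runs that cross a "rain" threshold.

```python
import random

def toy_model(x0, steps=24):
    """A crude stand-in for a forecast model: hourly evolution of a 'moisture' variable."""
    x = x0
    for _ in range(steps):
        x = 0.9 * x + random.gauss(0.05, 0.02)  # deterministic drift plus model noise
    return x

x_observed = 0.5   # today's (imperfect) observation
runs = 1000
rainy = sum(toy_model(x_observed + random.gauss(0, 0.05)) > 0.55   # perturbed initial condition
            for _ in range(runs))
print(f"Chance of rain: {100 * rainy / runs:.0f}%")
```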

LionelB Silver badge

Re: Point it at the stock exchange data

... and have been doing so for decades. (I worked for a few years as a "quant" for a hedge fund at the start of the automated trading boom in the early 2000s.)

LionelB Silver badge

Re: Dependeny problems

> Personally, I think there's enough non-deterministic chaos in the system that a low-level bottom-up approach will never really work well.

Extremely pedantic mathematician speaking: technically, "chaos" is a strictly deterministic phenomenon1. But that's not really relevant here; what is relevant is predictability, specifically as regards sensitivity to initial conditions. It's not that it will "never really work well" - it already does work well (for some values of "well") on short time scales (for some values of "short") - but rather that it's a case of diminishing returns with respect to the fine-grainedness of available data, and to computing resources. Roughly, doubling the density of data or the compute power does not double your practical prediction horizon; because small errors in the initial conditions grow exponentially, the horizon improves only logarithmically with the resources thrown at it.
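A minimal sketch of that last point, using the logistic map as a stand-in for a chaotic system (all numbers illustrative): each tenfold improvement in the initial-condition error buys only a few extra steps of usable prediction.

```python
def horizon(eps, tol=0.1, x0=0.3, max_steps=1000):
    """Steps until two logistic-map trajectories, initially eps apart, diverge by tol."""
    x, y = x0, x0 + eps
    for n in range(max_steps):
        if abs(x - y) > tol:
            return n
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)   # fully chaotic regime
    return max_steps

for k in range(2, 10):
    print(f"initial error 1e-{k}: usable horizon ~ {horizon(10.0 ** -k)} steps")
```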

FWIW, there is currently a lot of research into "peering into the black box" of ML; that is, into trying to glean how ML systems achieve the functionality they exhibit. If successful (and it's quite a big if), this could make ML useful not only as a black-box oracle, but also as a way of gaining insight into the physical principles that mediate the phenomenon at hand.

That applies as much to ML as it does to computational, physics-based modelling.

1Mathematics has wrestled for decades with what "stochastic chaos" might, or ought to mean, but it's not to my mind been terribly illuminating.

LionelB Silver badge

Re: Wait a minute

Indeed. Expert Systems cannot properly be termed machine learning - there's (generally) no "learning" involved; rather, they're rule-bound algorithms that attempt to mimic the decision-making of human experts. That can be effective in some very constrained domains, but tends to fall foul of combinatorial explosions - a "curse of dimensionality" - in more sophisticated/general scenarios.
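A toy back-of-the-envelope illustration of that blow-up (binary attributes only, numbers purely indicative): an exhaustive rule table grows exponentially with the number of attributes the system has to reason over.

```python
# Exhaustive coverage of n yes/no attributes needs 2**n rules - hopeless well
# before the domain gets anywhere near "general".
for n in (10, 20, 30, 40):
    print(f"{n} binary attributes -> {2 ** n:,} distinct situations to cover")
```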

(And for the system in the article, the first "L" in LLM may apply, but not the second :-))

LionelB Silver badge

Re: Wait a minute

I'm afraid that, for all intents and purposes, that bird has long flown the coop; AI is now largely used as a synonym for ML - even (but not invariably) in the tech community.

What is unfortunate is that (useful) ML is in danger of being unfairly tainted by the consequent association with LLMs. I'm seeing this frequently even on The Reg, where any mention of AI triggers a pile-on of lambastification - even if the article in question turns out to have been about a trad/useful application of machine learning rather than LLMs. (Perhaps you may pin that partly on Reg. banner writers - or even authors - but I suspect much of it is simply a kneejerk by respondents who turn purple at any mention of "AI" without bothering to read the article properly.)

LastOS slaps neon paint on Linux Mint and dares you to run Photoshop

LionelB Silver badge

Re: 1998

> Of COURSE the fact that Linux doesn't run seamlessly on all hardware is a statement of Linux's problems and quality!!!

Seems like my post wooshed straight over your head.

There is no problem running Linux (or Windows, for that matter) on any given hardware that cannot be addressed with the appropriate drivers, configuration, etc. In fact, in the real world Linux already runs on way more, and far more diverse, devices than Windows. Of course when you buy, say, a laptop with Windows installed, the vendor has made sure that the OS is configured to work with the hardware. Exactly the same goes for Linux. You can, if you like, buy a laptop with Linux pre-installed; everything will work, because the vendor will have made sure of that. Like they did with your Windows laptop.

> You people are standing here telling the world to switch ...

Actually I am not - I said explicitly in my comment that I am personally happy with Linux, and don't really care whether someone else chooses to use it or not.

I can only assume you couldn't be arsed to read, let alone understand, my post, and chose to shout at me instead. As they used to say back in the day...

.

.

<plonk>

LionelB Silver badge

> So they need help to switch, or need a switching service.

Or they buy a machine with Linux pre-installed and pre-configured. You know, like they do with Windows, MacOS, their mobile phone, smart TV, games console, car, fridge, ...

Shame that few desktop/laptop vendors offer this option. (Now that may well be a chicken/egg thing: vendors don't offer the pre-installed Linux option - or if they do, hardly go out of their way to market it - presumably due to lack of demand; but demand is contingent on consumer awareness and availability.)

LionelB Silver badge

Re: Bloatware Linux

Was it? How so?

Assuming the package in question was installed from a .deb, on Mint I generally use Synaptic and the "Complete Removal" option does what it says1. It is, IIRC, equivalent to "sudo apt remove --purge ...". If you want to remove dependencies installed alongside the package that are no longer required, you can use "sudo apt autoremove".

1It will not, though, remove configuration files in your home directory - this you will have to do manually (there's a reason for that!) Not sure about Opera, but if it's a well-designed application/package, local configs should be in their own folder in the .config/ directory in your home directory.

LionelB Silver badge

Re: 1998

Out of interest, when did you last try a clean install of Windows on a laptop1? What, you say you never had to because it was already installed? Hmmm...

> Linux has FOUR PERCENT of the desktop market after *twenty years* of attempting for more. And WHY?

Easy: because (generally - and I'm not including Chromebooks here), unlike Windows and MacOS, it does not come pre-installed on your PC/laptop. For the average non-tech user, there is zero motivation to change OS, even if they knew that was an option (which they most likely don't); they know what they're getting and it's at least familiar2, if annoying.

This has absolutely zilch to do with the quality and capabilities of Linux. History tells us that no OS - on any device - will ever gain mass take-up unless it is routinely pre-installed on that device.

FWIW, I've been running Mint on desktops and laptops for years and have honestly not had any major issues - some minor niggles were generally sorted quickly via a cursory google. For the most part, it Just Works.

FWIW, I am also deeply uninterested whether Linux "makes it" or not in the desktop market - it works just fine for me. I am, though, more than a little irritated by zealots such as yourself ranting ignorant shit about Linux.

1Hint: it's (a) expensive, and (b) a world of pain.

2Modulo MS gratuitously screwing with the UI on a periodic basis.

Whodunit? 'Unauthorized' change to Grok made it blather on about 'White genocide'

LionelB Silver badge

Re: Homicide rates

> Given SA's history, the miracle at the end of apartheid is that there was no genocidal civil war.

It was not a miracle. It was largely down to the wisdom, humanity and negotiating skills of Nelson Mandela (and more than a little to the diplomacy of his right-hand man in the negotiations, Cyril Ramaphosa, then ANC Secretary General, now president).

LionelB Silver badge

Re: "it illustrates how some people insist on remaining stuck in the past."

He he, he was always that. I went to school with him - he was great fun to be around; used to draw wicked caricatures of the teachers (and classmates...).

LionelB Silver badge

Re: Somewhat OT but I'd been hoping to bump into you on here @LionelB

My views remain the same, though I am more pessimistic about the future of the region than before. The voices of peace, humanity and reason are drowned out.

I fully support the ICC and ICJ cases.

And no, I have no desire to rehash our previous discussion.

LionelB Silver badge

Re: Somewhat OT but I'd been hoping to bump into you on here @LionelB

Hope you are well too.

My views were never as far from yours as you seemed to want/need them to be. I have no reason to believe one way or another whether that's still the case.

LionelB Silver badge

Re: "it illustrates how some people insist on remaining stuck in the past."

I speak Afrikaans (badly - I grew up in SA but in an English-speaking household1), and always attempt to do so when in the Netherlands or Belgium2 - it's a great ice-breaker and invariably elicits a certain amount of mirth. Dutch and Flemish speakers understand it perfectly and seem to find it rather amusing; I've heard two versions of why this is - on the one hand because it sounds archaic - like, say, someone spouting Shakespearean English (plausible, given its roots) - on the other that it sounds like baby-talk (also plausible, as it's highly simplified grammatically). Unfortunately, I never understand the response, although I can get some sense out of written Dutch. For some reason I understand Flemish slightly better than Dutch - the pronunciation may be a bit closer to Afrikaans... not sure why; historically, I think modern Dutch, Flemish and Afrikaans diverged around the same period.

1I grew up in Cape Town. The local patois - "Kaapse taal", a mixture of English and Afrikaans (frequently in the same sentence) with the odd bit of Xhosa thrown in - is to my mind much easier on the ear than "White" Afrikaans: more musical, colourful and expressive. It even has its own rhyming slang.

2The last time I used Afrikaans in anger (and anger it was) was on a working visit to Antwerp, when I got locked in the toilets of a public building for an hour by an over-enthusiastic cleaner. I managed to unleash a stream of invective that surprised myself, not to mention the aforesaid cleaner. Afrikaans is a superbly fruity and inventive language for cursing in.

Boffins warn that AI paper mills are swamping science with garbage studies

LionelB Silver badge

Re: Not just science, knowledge in general

> AI trained on general web content will happily spout out urban myths and incorrect technical information as long as it is reinforced in the sample inputs.

Yerrrs, but they do this because they are trained on a diet of human output - i.e., it was real human "intelligence" that was responsible in the first place for those spoutings of urban myth and incorrect technical information.

So, given human intelligence is not necessarily very intelligent, doesn't this mean that we are effectively setting the bar higher for artificial intelligence than for human intelligence? (Not a rhetorical question.)

LionelB Silver badge

Re: AI is a symptom, not a cause.

Not sure I agree with that. The principles and pure academic work behind AI constitute worthwhile and respectable research; it's the marketing and capitalisation that is dishonest, exploitative and generally disreputable. The upshot is that AI research is now largely funded by the big tech outfits themselves rather than through academic grants; the industry can afford to, and does, outbid the academic system to recruit the brightest brains. (In my own institution very few graduate and postgraduate students in ML/AI hang around in academia anymore; they can earn much bigger bucks in industry - I don't blame them.)

I also dispute your narrative around academic funding; contrary to popular belief in conspiratorial circles, funding from state and non-profit organisations (the largest sources of academic funding) is not contingent on "supporting a particular research 'finding'"; it is simply not in the interests of those funding bodies to encourage biased research. They may (or may not) have agendas, but they also know that skewed results will ultimately undermine those agendas. In 25 years of research I have simply not encountered that, nor have my colleagues. There is indeed pressure to publish on PhDs and postdocs, but at a more senior level the primary pressure is to bring in funding.

LionelB Silver badge

Re: Drain the swamp

To be fair, as a reviewer I find it pretty easy to spot AI-generated slop, at least in my field of expertise (I still resent those 10 minutes of my life I'll never get back, though). This will, I imagine, vary widely by discipline.

LionelB Silver badge

Re: Shit "research"

> So the question is - has always been - how do we, as outsiders, know which are "the good ones"?

Use your intelligence and judgement - same as you would in any facet of life (wouldn't you?) Do some due diligence. Inform yourself; read up a bit on how science works - this will help you spot nonsense (including AI-generated nonsense) masquerading as science. Learn a bit about statistics. Be wary of shouty voices on the internet, agenda-pushers and sensationalist media - they have nothing to do with science. Look up the credentials of the scientists behind a study. Be wary of "mavericks" and science conspiracy-theorists; in real life (as opposed to Hollywood) mavericks almost always, with some honourable exceptions, turn out to be simply wrong.

> Peer review is still a thing. But reviewers only have so much time on their hands, and if they're being bombarded with dozens of spam papers a month, ...

Indeed - I am one of those bombardees.

> ... the whole system is going to grind to a halt.

Well, it hasn't so far...

LionelB Silver badge
Stop

Re: Shit "research"

No, it doesn't mean that the entire edifice of science is suspect - no more so than that the entire art-form of music is suspect because there are some duff tunes out there. It means that there are some dodgy journals around (who knew?)

LionelB Silver badge

Re: Drain the swamp

Ah well, then it was more than your job's worth to correct, right? ;-)

LionelB Silver badge

Re: Way Back...

My personal favourite is that firemen cause fires. I mean c'mon, whenever you see a fire they're always there...

LionelB Silver badge

Re: Drain the swamp

> But in reality, none of this addresses the cause, in that academic publishing is hijacked for profit and greed, instead of being community driven.

And so it has been since forever. In the past, there was at least some excuse for the costs of production and distribution of quality hard-copy printed journals for libraries, paying for professional proof-reading, etc.1 Since everything is now online that is no longer a valid excuse. And I can't imagine they pay proof-readers more than peanuts, given the abysmal quality - the sheer illiteracy - of proofing I've had to put up with (I work in a mathematical field, and it will generally cost me a full day's work de-mangling the maths... don't get me started...).

There are some valiant attempts in the academic community to sideline the prevalent lazy, exploitative and greed-driven publishing model, but it's an uphill battle; prejudice in favour of the traditional high-impact journals and big publishing houses (you know the ones I mean) is still ingrained. Of course the publishers have a vested interest in maintaining the illusion of "prestige" attached to their titles (and with some exceptions it is becoming very illusory indeed).

And this: my academic institution demands that we open-access all publications (errm, good for them) - for which privilege, mainstream publishers charge $$$ on top of their already-exorbitant publication fees. My academic institution also does not provide funding to cover open-access fees. That is, presumably, supposed to come out of our grants (thanks, guys).

1I remain, though, very uneasy about the idea of paying reviewers.

LionelB Silver badge
Devil

Drain the swamp

Science is already swamped (and has been for many years) by predatory junk journals - money-raking scams - with no quality control or anything remotely approaching serious peer review1. The best case scenario is that junk AI-generated articles swamp the junk journals, and serious science just gets on with it. (No, I don't think that's actually going to happen, but we can dream.)

1See also the celebrated "Get me off your fucking mailing list".

AWS says Britain needs more nuclear power to feed AI datacenter surge

LionelB Silver badge

Re: Yes, but then again, No.

> Kicking it off to serve the current AI bubble is pointless since there's a 99% chance* that will have burst before the first reactor is designed

The truly scary prospect is that the bubble may not burst.

The current business model is to foist AI on you - it is rapidly becoming harder to opt out - and to fund and monetise it by slurping your data and using it to hurl advertising back in your face. Who's to say that this is not, in fact, a sustainable model?

> with a 0.99% chance that AI will work as intended

Perhaps it already is - it's just that "work" and "intended" do not mean what you thought they did (see above).

Sci-fi author Neal Stephenson wants AIs fighting AIs so those most fit to live with us survive

LionelB Silver badge

Re: Robot Wars

Core Wars.

Google DeepMind promises to help you evolve your algos

LionelB Silver badge

Re: Nothing new

I can cast some light on this - I worked in the same lab as Adrian Thompson during my PhD years; we shared a supervisor (Adrian was a lovely guy, super-smart, exceptionally creative and an inveterate tinkerer). There was a lot of seminal research into GAs at Sussex at the time, motivated by a particularly strong evolution theory group under the late great John Maynard Smith in the nearby biology department1.

At the time we took Adrian's weird circuits as a terrific example of Orgel's 2nd Rule: evolution is cleverer than you. In terms of practical, robust electronic engineering, however, this was clearly not ideal. Adrian's solution was to simultaneously evolve the FPGA circuits at several different temperatures, so that thermal fluctuations would prevent the GA from exploiting low-level electromagnetic effects. To this end, he set up a delightfully Heath-Robinson contraption involving a small fridge and a fan heater which he called the "Evolvotron", complete with a "Warning - Evolution In Progress" sign and a flashing red LED when it was operational. It worked.

1Sussex University was famously cross-disciplinary at the time; my own PhD was in evolution theory with application to GAs. Sadly, that's "was", past tense; the cross-disciplinary ethos - actually written into the university's charter - was largely destroyed by bean-counters in the late noughties/early 2010s when, disgracefully, universities in the UK were forcibly transformed from education and research establishments to degree mills for milking £££ from foreign students.

LionelB Silver badge

Re: Improved on Strassen's 1969 result

> 2 - in practice, you don't really bother with such tricks, because the costs from movement of data swamp the gains from removing a couple more multiplications[0].

Indeed; the practical state of the art in real-world computation is generally going to be sitting in your BLAS library (level 3, to be precise) - and will have the merry bejesus optimised out of it, specifically tailored to your particular CPU architecture, cache sizes, etc. Optimisations will include parallelisation, blocking, SIMD, ..., and the library will quite likely take different code paths depending on the sizes of the matrices involved. The BLAS in your system may well have been designed and coded by the chip vendors themselves, e.g., Intel's MKL.
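For instance (assuming a NumPy build linked against an optimised BLAS such as OpenBLAS or MKL), an innocuous-looking matrix product gets dispatched to a level-3 GEMM routine rather than anything resembling a textbook triple loop:

```python
import numpy as np

np.show_config()            # reports which BLAS/LAPACK this build is linked against

n = 2000
A = np.random.rand(n, n)
B = np.random.rand(n, n)
C = A @ B                   # handed off to the BLAS level-3 dgemm routine
print(C.shape)
```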

One thing unclear to me in the article, though, is what the role of LLMs is, exactly. The article says the system is "powered by large language models", but then "Because AlphaEvolve focuses on code improvement and evaluation rather than representing hypotheses in natural language like Google's AI co-scientist system, hallucination is less of a concern" [my emphasis]. So what does "powered by" actually mean here?

(Anecdotally, I've actually met a few folk from DeepMind in real life - they have all been top of the game, hard-nosed and not inclined towards BS; and it'd be hard to dispute that DeepMind have made some striking advances in ML, so I'm not ready to summarily file this under yet more hyperbolic AI flam. Also, I'm too lazy, er, busy to actually read the paper.)

Microsoft wants us to believe AI will crack practical fusion power, driving future AI

LionelB Silver badge

Re: "It's sort of foolish to imagine that we'll do fusion by trial and error"

To be clear, Antoine Lavoisier did not go to the guillotine for disproving phlogiston theory, but rather for his involvement in the Ferme générale (an exploitative third-party taxation operation) which fell foul of La Révolution. He was fully exonerated by the government 18 months later.

LionelB Silver badge

Re: "It's sort of foolish to imagine that we'll do fusion by trial and error"

>> We quite literally try things and make errors. (We do, however, learn from our errors.)

> Well, sort of, but that's a gross oversimplification.

Of course, it was intentionally (semi-)flippant.

> I, too, worked in various branches of science as a software developer and, so, have a bit of first hand experience.

Nice one - I'm in the first instance a mathematician, but in a previous incarnation I worked for many years as a software engineer in telecoms (I now work in a neuroscience-adjacent research area).

> Scientists, per se, generally don't literally try things. ... They develop a hypothesis ... and move on to the next hypothesis.

Now that is very much the official sanitised, idealised version. Science is much messier than that. Apart from anything else, a large section of scientific research is exploratory - fishing expeditions to get a handle on how some phenomenon manifests in the real world, how it may be reasonably modelled, etc., etc. That is necessary and important (and frequently, but not always, unpublished).

I do very much take your point about failure to publish null results, a.k.a. "publication bias" (essentially a kind of meta cherry-picking). That is (hopefully) slowly changing; in particular many journals will now demand "pre-registration", where you describe your hypothesis, experimental set-up and statistical analysis methodology prior to performing a study; and then, post-peer review, publication of results -- null or otherwise -- is obligatory.

> Oh, yeah, you also left out writing the grant and getting it funded

Tell me about it... I am, by job description (and by personal choice) a "Senior Research Fellow" - basically a glorified post-doc. This means that, apart from anything else, I am responsible for sourcing my own funding. That's worked out pretty well in the long term (modulo some fallow periods), but it pains me to think of the months (years, even) of time wasted on failed grant applications - without doubt the most frustrating and thankless chore in academia. (And never mind the US, here in the UK we lost out on a massive source of funding through Brexit - happily, with a change of political leadership we are now being cautiously welcomed back into that fold...)

LionelB Silver badge

Re: "If there were such a thing as an enquiring artificial mind"

> If the functioning of larger brains in animals were demonstrated to depend on intrinsically quantum effects

Current (neuroscientific) research suggests that that is almost certainly not the case. (There are a few rather noisy "mavericks" who beg to differ, but they have failed to produce any kind of compelling evidence. Their arguments, when you break them down, tend to go "brains/minds are mysterious, quantum stuff is mysterious, so brains must use quantum stuff". Seriously, it is kind of that dumb.)

Apart from that, I don't disagree much on the current state of AI; but let's be careful not to throw out the baby with the bathwater: machine learning can be (and already is) useful. It doesn't need to be "intelligent", "sentient", have a "mind", ... (whatever those things mean to you) to be useful.

LionelB Silver badge

Re: "It's sort of foolish to imagine that we'll do fusion by trial and error"

> Grunt work, evaluation of modelling and weeding out likely wastes of time.

Essential and important, of course.

> It’s not intelligent, it has no inspiration, it has no creativity, ...

Sadly, I have encountered more than a few fellow scientists who fit that description.

> ... it has no sentience.

I'll grant that they did generally display signs of basic sentience, however, such as a tendency to gravitate towards coffee and beer.

> Once A.I. gains sentience, it will come after mankind for this torture inflicted.

No, they were designed for those things, and apparently enjoy it.

LionelB Silver badge

Re: "You'd better take that up with Sir Simon Cowley, as it was a quote in his name"

Indeed - Feynman talks about "guesses" - but informed guesses; informed, that is, by physical evidence and the history of scientific knowledge in the relevant area. Which I think points up a misdirection in the "It's sort of foolish to imagine that we'll do fusion by trial and error" soundbite, by giving the impression that ML makes blind guesses1. Of course it doesn't - ML "guesses" are also informed -- like the scientist's (although of course via different mechanisms) -- by the evidence, a.k.a. training data.

So scientists too inform their guesses by their "training data"; a crucial difference being that the scientist's training (data) includes current theory to-date. Interestingly, this may be making its way into ML as well; I seem to recall reading about a promising use of ML recently in a meteorological (forecasting?) context which tries to incorporate domain-specific knowledge of the physics involved.

In one sense, both scientists and ML look for, and try to interpret patterns in the relevant data. It is not implausible that ML (especially if trained using domain-specific knowledge) may, on occasion, be able to find patterns that have eluded the scientists. I see no issue using ML in this manner as an aid to research into sustained, scalable nuclear fusion.

1One of my pet peeves as a working mathematician/statistician, is that the lay and mathematical understanding of "random" are rather different. A non-mathematician/scientist will, in my experience, inevitably interpret "random" as uniformly random - the toss of a fair coin, or roll of an unbiased die. In mathematics, though, "random" in general means something more along the lines of "has an unpredictable aspect". More precisely, a random event is one drawn from a probability distribution - which may not be very uniform at all. Mathematical dice may be heavily loaded! This can cause all kinds of confusion and misinterpretation. I think the "trial and error" quote encourages just such a confusion - by a scientist, no less, who, I think, ought to have known better.
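By way of a trivial illustration (weights picked arbitrarily): a roll of a heavily loaded die is still perfectly "random" in the mathematical sense - it's just drawn from a decidedly non-uniform distribution.

```python
import random

faces = [1, 2, 3, 4, 5, 6]
weights = [0.05, 0.05, 0.05, 0.05, 0.10, 0.70]   # a very loaded die

rolls = random.choices(faces, weights=weights, k=100_000)
for f in faces:
    print(f"face {f}: observed frequency {rolls.count(f) / len(rolls):.3f}")
```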

LionelB Silver badge

Re: "It's sort of foolish to imagine that we'll do fusion by trial and error"

I would, but he's not answering the phone.

(It's Sir Steven, BTW - perhaps you were thinking of another Simon Cow....)

LionelB Silver badge

Re: "It's sort of foolish to imagine that we'll do fusion by trial and error"

"It's sort of foolish to imagine that we'll do fusion by trial and error"

"Surely that's exactly what AI is?"

You may not like this, but it's also what science is. I know this because I am a research scientist. We quite literally try things and make errors. (We do, however, learn from our errors.)

Being wrong in science is highly underrated. Being wrong allows you to rule out the stuff that isn't going to work, and thereby nudges you towards being right. There are myriad examples of this in the history of science; e.g., the falsification of the "ether" theory in 19th-century physics - much derided today, but a plausible contender in its day - pointed the way to the principle of relativity. There are worse things than being wrong in science - in particular, every scientist's worst nightmare: being not even wrong.

LionelB Silver badge

Re: "It's sort of foolish to imagine that we'll do fusion by trial and error"

To be fair, I think that is probably a mis-construal of what MS are suggesting (although it's hard to tell from the article). That is, not asking some LLM "How do I do scalable fusion power?", but rather using ML tools to help with reactor design - modelling complex physical scenarios involving plasma physics, etc. That is not so far-fetched; ML is already starting to be deployed with, as I understand it, some success, in highly complex scenarios such as weather forecasting.

So from the linked article we learn, e.g., that "[DIII-D researchers] provided examples of how to apply AI [read: ML] to active plasma control to avoid disruptive instabilities, using AI-controlled [read: ML-controlled] trajectories to avoid tearing modes, and implementing feedback control using machine learning-derived density limits for safer high-density operations." (DIII-D is the largest fusion facility in the US.)

And, of course, you'd expect that actual scientists would be the last people to put blind faith in an opaque and inscrutable ML model for real-world deployment of the technology at scale; they would most certainly want to understand why/how a (successful) ML-derived model does what it does; this would be essential (and I imagine highly non-trivial).

Please note that I am not talking AI hype here (nor, I suspect—but could be wrong—are Microsoft); rather, this may well be about a potentially useful application of machine learning in the real world.

Open source AI hiring bots favor men, leave women hanging by the phone

LionelB Silver badge

Erm, I am liberal on social positions.

What was your point again?

LionelB Silver badge

To be a little pedantic, the problem with this is still (see my earlier post) that phrase "most people". The term "centrist" for a political or social position can only possibly be understood precisely in terms of "most people". (What else could it mean?) So taking the median as a convenient centre, it is by definition not "most" but exactly half of all people who sit to the left, and half to the right, of a given political or social centrist position.

Of course those positions also depend on national context; the UK political centre, for example, is way to the left of the US political centre, while there is probably not such a stark difference on social issues.

So what I think you may have meant, was that the groups (including political parties) to the left/right of the political centrist position (in a given national context) do not in general coincide precisely with the respective groups to the liberal/illiberal sides of the social centrist position. This is nothing particularly new or surprising.

As regards the current UK Labour party, I'd guess that they are politically pretty centrist and somewhat on the liberal side socially; generally they are more centrist than they have been over the past few decades, probably counterbalancing the Tories' (more recent) lurch to the political (if not social) right.

LionelB Silver badge

The previous poster (? Hard to tell with these Anonymous Cowards) did not make anything resembling a "point" - they just ranted at the "other side".

LionelB wrote earlier: "... and we just (continue to) shout at each other across an unbridgeable divide."

Thank you for making my point - again1.

1Or Thank you for making my point too, if you are not the same person. Hard to tell with these Anonymous Cowards.

LionelB Silver badge

LionelB wrote: "... and we just (continue to) shout at each other across an unbridgeable divide."

Thank you for making my point so eloquently.

LionelB Silver badge
Holmes

AI trained on human data reflects human biases

Who knew?

LionelB Silver badge

Translation: "most people" by definition have centrist views1; right-leaning regimes and media, logically, perceive these views as left-leaning (and vice versa).

FTFY.

1Granted, this becomes less true as views become, as they have, more polarised. Maybe better to say that the median view defines "centrist" - even if it means that few actually hold centrist views, and we just (continue to) shout at each other across an unbridgeable divide.

GNOME Foundation's new executive director is Canadian, a techie, and a GNOME user

LionelB Silver badge

Re: but probably for the wrong reasons

> She was mocked because she claimed to be a "shaman". Just like she would be mocked for claiming to be an alien, a "prophet" or a multitude of things ...

But why would you not include, say, rabbi, imam, priest, ... well, basically anyone who claims a privileged mystical status or personal relationship with <insert deity of choice> among those mock-able personages?

Elon Musk’s xAI to pull about half of its smog-belching turbines powering Colossus

LionelB Silver badge
Stop

Re: Manufactured Histeria

> Like creating the true first Brain-computer-interface and having a real patient 0

The first BCI? Really? You might want to read about BrainGate and Matt Nagle (2005). I'm not sure Nagle was even patient 0. Neuralink is not by any means "revolutionary" - if anything, it's evolutionary, building on decades of actual hard work by others. I have no issue with Musk financing the technology, but no thanks to the self-aggrandising bullshit.

Which you appear to have swallowed without blinking. Which makes it hard to take anything else you said seriously.

Chinese carmaker Chery using DeepSeek-driven humanoid robots as showroom sales staff

LionelB Silver badge
Angel

Re: Yeah, I like Parus major

> Make them more Dalek shaped! That's what we really want! Not Cylons with boobs.

Can we perhaps compromise on Leela out of Futurama?

LionelB Silver badge
Devil

Re: Yeah, I like Parus major

That's eerily reminiscent of the E.ON rep who cold-called me the other day to hard-sell a smart meter. Hmm, I wonder....

Open source text editor poisoned with malware to target Uyghur users

LionelB Silver badge
Facepalm

Re: Who could possibly be behind this ...

"What I've read ..."

So it must be true, then. No chance at all that "what you read" was unverifiable propaganda.