They're beginning to believe their own hype.
Bcachefs creator insists his custom LLM is female and 'fully conscious'
The latest project to start talking about using LLMs to assist in development is experimental Linux copy-on-write file system bcachefs. ProofOfConcept (POC) is a new blog with just five posts so far. What makes it different is that it says it is generated by an LLM, and that it works alongside a well-known developer of low- …
COMMENTS
Thursday 26th February 2026 11:43 GMT Omnipresent
The Power of a woman
combined with the power of the internet is pure mind numbing psychosis to nerds. Might as well give them ecstasy.
A reminder, that if you say something enough times it becomes reality. There are billions of poor, disenfranchised, uneducated, desperate fools to indoctrinate in this hate filled world of filth and evil.
Only God can save us now, and I reckon the same nerds fancy themselves as "creators".
Wednesday 25th February 2026 22:15 GMT DS999
"Best engineer in the world"
Sounds like his psychosis extends beyond run of the mill chatbot psychosis.
I mean, you'd think the best engineer in the world would have had more to show for his life than a Linux filesystem of minor importance. That seems a pretty tiny accomplishment for someone with such an inflated sense of self importance.
Thursday 26th February 2026 11:54 GMT Elongated Muskrat
Now [sic] many people labelled "neurodiverse" have ever had any empirical tests done on their brains?
In general, it's considered poor form, and not particularly productive, to open people's skulls up and look directly for the basis of cognitive processes.
However, if you are alluding to the question, "how many people have been formally assessed for neurodiverse conditions by a qualified psychiatric medical professional," then I suggest you take a look at the current waiting lists on the NHS, and from private providers through the "right to choose pathway", for an idea of the true prevalence of neurodiversity (no need for the quotes here, it's a real word). Better diagnostic criteria, and better recognition mean that there is currently quite a big backlog for assessment. Making snarky comments about it on the internet just marks you out as a dickhead.
Thursday 26th February 2026 19:11 GMT Elongated Muskrat
Well, firstly, being neurotypical comes under neurodiversity, in the same way that being white falls under ethnic diversity.
Those ones that are generally known as "neurodivergence" (ASD, ADHD, OCD, etc.) are called that because it is divergence from the "norm", so if you're trying to make a funny about neurodivergent people, then no, and you're a dickhead for doing so.
If you're pointing out that neurotypical people, considered "normal" also fall under the umbrella of neurodiversity, then well done, you're awake.
Let me guess, you're also the sort of person who complains that white middle class middle-aged men are being discriminated against everywhere in the UK and US, despite the copious evidence that this is the group predominantly doing the discriminatory stuff?
Monday 2nd March 2026 09:25 GMT Anonymous Coward
You are straight up wrong. Neurodiversity does not include neurotypical.
It is similar to ASD where it is a spectrum/umbrella that only covers people who are on the spectrum.
The phrase "everyone is a little autistic" is wrong because while it is a spectrum, neurotypical is not on that spectrum.
The same goes for neurodiversity, it is a spectrum that specifically excludes neurotypical people.
This is obvious due to the fact that neurodiverse people are people who are neurodivergent.
You are making the exact same mistake neurotypical people make with ASD.
Neurodiversity is diversity in thinking, away from typical thinking. It explicitly does not include neurotypical people. Why is this stuff always so damn hard for people to understand??
Monday 2nd March 2026 14:13 GMT Elongated Muskrat
Neurodivergence doesn't cover neurotypical; neurodiversity does. As I said, in the same way that ethnicity includes white people, while "ethnic minority" doesn't, in the UK and US at least.
To be fair, it's not a massively useful term, largely because people argue over what it means, rather than getting the point that everyone's brain works differently to anyone else's, even if only in small and subtle ways. It's like "allistic", which some people take to mean neurodivergent people who are non-autistic, whilst others understand it as "everyone who is not autistic". This largely stems from a lack of education and understanding of neurodiverse conditions, and the paucity of well-used language around them.
"Everyone is a little bit on the spectrum" is a horrible phrase, because it completely misrepresents what the autistic spectrum is - it's not a scale from "not autistic", to "very autistic", but a collection of traits that are associated with autism, and which autistic people may have one, some, or all of.
I hate to rely on the garbage-spewer that is AI, but, when asked the question, it gives the following:
Yes, the term "neurodiversity" encompasses all variations of brain function, including both neurodivergent individuals and neurotypical people, recognizing that all brains are different. It emphasizes that these differences are natural variations rather than deficits.
So I'm sorry, but you're not correct in your assertion in this instance. One of the important points of the term "neurodiversity" is that it is inclusive, and defining a term essentially as "all the different ways human brains can work, except you" is both horrible and unuseful.
Friday 27th February 2026 05:45 GMT Claude Yeller
Re: Neurodiversity is the new normal?
Yep, humans are not all the same.
If the NORM is size M 36-24-36, we are all size-diverse.
The point is, education, industry, and commerce need a behavioral standard human for efficiency. Over the years, standards have become more strict.
All non-standard humans are considered neurodiverse because they require adaptations, increase cost, and reduce efficiency.
Famous neurodiverse people: E Musk, P Hegseth, JD Vance, D Trump, B Gates, P Hilton, P Thiel
Friday 27th February 2026 14:22 GMT DoctorPaul
As Delain so poetically put it "Normal is not the norm, it's just a uniform".
It's been shown that if you are completely average in every way then you are actually a statistical freak or don't even exist. Examples (thanks QI) are:
1. USAF's attempt to design a universal pilot's seat for jet fighters. Measured every pilot, took the average, seat didn't fit anyone.
2. Advertising campaign in Australia to find the average Australian housewife. Crunched all the numbers, no one could be found who matched the criteria.
Wednesday 25th February 2026 12:50 GMT ArguablyShrugs
Don't worry, Kent – these kindly big gentlemen in white coats
are only here to take both you and "her" to a nice room where you'll be free to talk to "her" for the rest of your life. Oh, and the missing door knob on the inside? That's just so you won't get distracted by naysayers. And the locked windows? The same thing.
Wednesday 25th February 2026 15:16 GMT Liam Proven
Re: Don't worry, Kent – these kindly big gentlemen in white coats
> A wizard’s staff has a knob on the end…
Which could, of course, _be the wizard_.
> (Why do you limey bastards get all the good slang terms?)
Well, you know, it's our bally language and we invented it 1000 years before you chaps decided to go your own way.
I'd ask how that's working out for you, but I think we all know the answer there. Saying that, we do have our own self-induced difficulties this side.
Wednesday 25th February 2026 21:00 GMT captain veg
Re: our bally language and we invented it 1000 years before
> our bally language and we invented it 1000 years before you chaps decided to go your own way.
I had no idea that you were a scholar of old English.
Personally, like most Brits, I find Shakespeare fairly hard going, and America was already a thing by then. Chaucer? Pretty much impenetrable, though the swearing is fun. Back in the 770s "our" language was, er, a totally unintelligible hodge-podge of Germanic and Norse dialects.
-A.
Thursday 26th February 2026 22:59 GMT Liam Proven
Re: our bally language and we invented it 1000 years before
> Chaucer? Pretty much impenetrable,
Not at all. I pretty much just read it aloud in my head and once I'd internalised the accent, it just made sense for me. Old English is substantially harder, but I did once sweep a young lady off her feet by quoting a few lines of _Beowulf_ in the original to her.
Here's a fun test:
«
How far back in time can you understand English?
https://www.deadlanguagesociety.com/p/how-far-back-in-time-understand-english
An experiment in language change
»
I had no big problems until the very last one.
Friday 27th February 2026 05:27 GMT doublelayer
Re: our bally language and we invented it 1000 years before
I wonder if you're missing an important element. I have little trouble reading and understanding Shakespeare, if the only part I think about is understanding what people are saying and what's going on. I'm missing plenty of extra context that people insist is in there. Either those who taught me English were mistaken about some of the humorous parts (I got plenty of obvious ones, but some other things didn't seem at all funny), or more likely, that is something I missed because language has changed sufficiently.
The other aspect is fluency. I read the test, and I made it through the 1300 example easily, had a little trouble with the 1200 example (I didn't know what pinunge is), and was officially lost with the 1100 example. I do wonder whether the difference in subject matter (finding a restaurant in 1800, watching combat between mythical beings in 1200) might have made that worse. Regardless of that, the other difference was in comprehension speed. I think I got everything from the 1300 example, but it took longer to read and at times two passes (I first thought "fer" meant "for", and that kind of made sense until about ten words later, so it took me a while to realize that it was "far") such that, if it was being read out to me at a normal pace, it might have been too fast. When you have that freedom, it can be easy to overestimate how easy it was.
Friday 27th February 2026 22:40 GMT captain veg
Re: our bally language and we invented it 1000 years before
> I pretty much just read it aloud in my head and once I'd internalised the accent, it just made sense for me.
Well, bully for you. I stand by "impenetrable".
In the real world I live in France and spend a lot of time in Spain and those parts of Iberia where Catalan is spoken and so, frankly, I have more important living languages to worry about. Quite a lot of my interlocutors can hold a conversation in modern English, which is unfair but useful, but I would estimate that approximately none of them would cope with even slightly archaic English. Why should they?
-A.
Wednesday 25th February 2026 23:49 GMT Fruit and Nutcase
A knob, a knob, my kingdom for a knob
missing knobs
Demand the return of the knob
With thanks and apologies to Shakespeare, a big knob in English literature
Wednesday 25th February 2026 12:51 GMT Groo The Wanderer - A Canuck
And I insist he's in need of mental health treatment because he's delusionally convinced himself that a statistical text generation technology is even intelligent in the real sense of the word, never mind gendered. He's also clearly suffering from severe isolation and loneliness because he's convinced himself that his LLM is a "female" he can "call his own."
Wednesday 25th February 2026 14:29 GMT Eclectic Man
Scary, truly scary
See Hannah Fry's latest TV series on the BBC iPlayer: https://www.bbc.co.uk/iplayer/episode/m002q76d/ai-confidential-with-hannah-fry-series-1-1-the-boy-who-tried-to-kill-the-queen
She does a pretty good job of showing what an LLM actually does without going into too much detail, just enough to point out that they model language not reality. But what is truly scary is what they get us to believe and do.
I've said it before and I will doubtless have many future opportunities to say it again, but no AI can possibly 'understand' anything in the way a human can. Every part of an AI, being a computer, is a prosthetic and can be replaced with an identical or better version without pain. Very few parts of you (assuming this is not being read into an AI) can be replaced. And frankly all I need to do is buy a rope and take you 'trad' rock climbing and you will understand fear in ways no computer ever can.
Wednesday 25th February 2026 14:45 GMT Irongut
Re: Scary, truly scary
What does fear or the ability to replace parts have to do with understanding or intelligence?
> Very few parts of you (assuming this is not being read into an AI) can be replaced.
Actually quite a lot of human parts can be replaced, from the teeth to the heart. About the only thing that can't be replaced is the brain, sadly for Mr Overstreet who clearly needs a new one.
Wednesday 25th February 2026 16:30 GMT Eclectic Man
Re: Scary, truly scary
What does fear or the ability to replace parts have to do with understanding or intelligence?
If a part can be replaced without pain then there is little to fear from damaging it. A computer's entire memory can be backed up and restored into a completely new device in the event of the destruction of the original. So no computer can understand fear of death or bodily harm in the same way a human can. No robot can be so scared it shits in its pants or faints from fear. Only humans can understand that. No robot can be sea-sick or, conversely, appreciate a beautiful painting, sculpture, musical performance, aroma or joyful hug as a person can.
If you do not know fear then you are missing out on something almost every human being experiences (with the possible exception of Alex Honnold, him of 'Free Solo', which film nearly scared the shit out of me.)
Yes, it is often possible to partially replace parts of humans with parts of other humans or artificial bits and pieces, but there is often a price to be paid with immunosuppressant drugs. And the replacement parts are rarely as good as the originals, unless there was some pathology. Maybe I need to read up on just what medical science is capable of these days, but I am convinced that the fillings in my teeth are not as good as the original teeth would have been had I brushed them properly when young.
Wednesday 25th February 2026 18:03 GMT David 164
Re: Scary, truly scary
Backups can go wrong; they can corrupt. Manufacturers can stop making parts that are compatible with your other hardware.
Plus there are humans that feel no fear, https://www.newscientist.com/article/mg21729071-600-the-curious-lives-of-the-people-who-feel-no-fear/ are they not intelligent?
Wednesday 25th February 2026 21:25 GMT ChoHag
Re: Scary, truly scary
Computer backups can go wrong. Human backups cannot go right.
Intelligence requires a lot more than just fear, not least of which is the adaptability to compensate for missing parts. We are a complex hodge-podge of many different phenomena which computers do not have and will not have for the foreseeable future, despite the few that are somewhat similar.
Do try to keep up. Is your intelligence perhaps artificial?
Thursday 26th February 2026 12:03 GMT Elongated Muskrat
Re: Scary, truly scary
> No robot can be sea-sick
If you were to make a robot with balance sensors (similar to the inner ear) and they were to be given input, in conjunction with vision sensors, similar to that which causes sea-sickness in humans, and feed those inputs into a processing system such as an LLM, it's entirely possible that the results of the synthesis of those contradictory inputs would emulate something very much like sea-sickness (which is basically imbalance due to disorientation).
That's a nice little research problem for someone; make a robot that can emulate sea-sickness.
Thursday 26th February 2026 19:18 GMT Elongated Muskrat
Re: Scary, truly scary
In humans, the response to a conflict between visual and balance senses is to empty one's stomach. This is an evolutionary response: we haven't evolved for an environment which moves in varying directions, such as the swaying of a boat, whilst our eyes tell us we are not moving. The most likely cause of such a conflict is that we've been poisoned by something, so out the stomach contents go.
How should a robot react, if its sensors give conflicting information, but it needs to rely on that data for real-time processing? Should it use a quorum, and ignore the outliers (see also: Minority Report for why you might need more than three), should it shut down and await repair? Attempt some sort of self-diagnosis?
I would imagine that this is very much a real-world problem in robotics.
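A minimal sketch of the quorum idea (the function name, tolerances and readings are illustrative, not from any real robotics stack): fuse redundant readings around the median, discard outliers, and refuse to proceed when the majority is lost — at which point self-diagnosis or shutdown is the only honest option.

```python
import statistics

def fuse_redundant_sensors(readings, tolerance):
    """Median-based quorum voting over redundant sensors.

    readings  -- values from several sensors measuring the same thing
    tolerance -- how far a reading may sit from the median before it
                 is treated as faulty
    """
    median = statistics.median(readings)
    trusted = [r for r in readings if abs(r - median) <= tolerance]
    outliers = [r for r in readings if abs(r - median) > tolerance]
    if len(trusted) <= len(readings) // 2:
        # No quorum: too many sensors disagree, so don't guess.
        raise RuntimeError("sensor quorum lost; self-diagnosis needed")
    return sum(trusted) / len(trusted), outliers

# Three tilt sensors; one has drifted badly and gets out-voted.
value, bad = fuse_redundant_sensors([2.0, 2.1, 9.7], tolerance=1.0)
```

Note the Minority Report point applies: with only three sensors, two faulty ones that happen to agree will out-vote the healthy one.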
Wednesday 25th February 2026 22:49 GMT David 132
Re: Scary, truly scary
To be fair to my near-namesake above, he said our reality, which I interpret as "our perceived reality".
If I, for example, see a four-legged creature with udders standing in a field, my brain automatically flags it as "a cow", where "cow" is shorthand in my mental filing system for the combination of shape, sound, smell and mass that constitutes that creature.
A passing Frenchman perceives "une vache", his mental shorthand for... etc.
Same reality, different perception.
At which point we're into qualia and other metaphysics, and <gumby>my brain hurts</gumby>!
Friday 27th February 2026 00:32 GMT that one in the corner
Re: Scary, truly scary
Language controls perception of reality?
Like the claims made about the Himba tribe? BTW if you do follow that link, and/or remember the BBC programme* please be sure to check this one as well.
Cue discussions of Sapir-Whorf and whether the conceit of the film "Arrival" is just a tad far-fetched or not.
* not their finest hour
Thursday 26th February 2026 08:52 GMT find users who cut cat tail
Re: Scary, truly scary
> they model language not reality
This is a great oversimplification. They are not trained on dictionaries and formal specifications of grammar. And they are not trained on reality in any direct sense either. But they are trained on texts that someone wrote about something, unavoidably referencing reality.
From a mere language standpoint, ‘a musical dog integrates cauliflowers’ is as good a sentence as ‘a maths student integrates polynomials’. But the latter is more likely to actually appear in human writing, because it can describe a common real event. So it is also more likely to appear in LLM output – they are trained to produce low-surprise output. In this manner reality probabilistically creeps in.
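The "low-surprise" point can be illustrated with a toy bigram model (a deliberately tiny, hypothetical stand-in for an LLM; the corpus and sentences are made up): sentences whose word pairs occur in the training text score lower surprise than sentences that never do.

```python
import math
from collections import Counter

# A toy "training corpus"; real models see billions of words.
corpus = (
    "a maths student integrates polynomials . "
    "the student integrates functions . "
    "a musical dog chases cats ."
).split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprise(sentence):
    """Total -log P(next word | previous word), add-one smoothed."""
    total = 0.0
    vocab = len(unigrams)
    words = sentence.split()
    for w1, w2 in zip(words, words[1:]):
        p = (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
        total += -math.log(p)
    return total

# The sentence that resembles human writing is less surprising:
likely = surprise("student integrates polynomials")
odd = surprise("dog integrates cauliflowers")
```

A model trained to minimise surprise therefore prefers the sentence that reality made common, without ever seeing reality itself.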
Thursday 26th February 2026 12:06 GMT Elongated Muskrat
Re: Scary, truly scary
Only if you can show that local minimums* model reality, in the general case. AI slop contains enough "hallucinations" to show that this is not true.
*In this case, "local minimum" means the point in the "probability space" being modelled that has the most "likely" score assigned to it. It might be better to refer to that as a "maximum", but when visualising probability fields, it's also useful to view them as a ball rolling on a 3-dimensional surface and settling in a "minimum", although in such things (e.g. principal components analysis) there are usually far more than three dimensions being modelled, which is why this language is used.
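The ball-on-a-surface picture can be sketched in a few lines of illustrative gradient descent (the surface and step size are invented for the example): which dip the ball settles into depends entirely on where it starts.

```python
def descend(grad, x, lr=0.02, steps=200):
    # Repeatedly step "downhill", against the gradient.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# A 1-D surface with two dips: f(x) = x**4 - 3*x**2 + x
grad = lambda x: 4 * x**3 - 6 * x + 1

left = descend(grad, -2.0)   # settles in the left-hand minimum
right = descend(grad, 2.0)   # settles in the right-hand minimum
```

Neither resting place says anything about the other dip, which is the sense in which a local minimum need not model the whole landscape.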
Wednesday 25th February 2026 12:55 GMT Bran Muffin
How can it be proven one way or the other? We just accept that people are capable of thought, intelligence, and the other things that make us human. I suppose we have little or no choice. How do we extend that to "artificial intelligence"? What would prove beyond doubt that an AI really is capable of thought, intelligence, etc.? As things stand now, all we have to do is say, "It's a computer! It doesn't think or feel or etc.!" and we consider the discussion closed. Will that still be true 20 years from now? 50 years? 100 years? Beats hell out of me--does anyone have some insight?
Wednesday 25th February 2026 13:14 GMT that one in the corner
> We just accept that people are capable of thought, intelligence, and the other things that make us human
Well, as none of you can prove that you exist in the first place and that I'm not in the middle of a terrible dream after dropping off to sleep whilst I wait for my nest-mate to return and help take care of our grubs...
Wednesday 25th February 2026 23:03 GMT David 132
Curse you, you naughty person - I now have this Monty Python earworm :)
Wednesday 25th February 2026 13:21 GMT Anonymous Coward
Consciousness is a question for the philosophers.
As for intelligence, though: how much human output is little more than mimicry based on prior training and/or rote instruction following?
The longer-term question of AI hope versus hype is going to raise some very difficult questions about our own humanness and precisely what that means.
So far, the technology has proven to be a more capable and efficient mimic than a student of comparable age. AI capabilities seem to be progressing faster than the comparable human would.
Humans will have to confront what it means to be human and how society will have to be restructured in the coming years. If "human" is synonymous with "worker drone" (and not much more) then society is in for some troubled times to come. Hoping we remain better worker drones than the AI models to come is not exactly a safe bet.
Thursday 26th February 2026 12:14 GMT Elongated Muskrat
A massive data centre trained on vast amounts of data (hopefully curated, but it seems like they are being fed any old shite), consuming amounts of power equivalent to a small town, isn't really comparable to a human infant in any way, especially when you take into consideration that humans are born not-fully developed due to constraints in the birth canal, and take a good few years to catch up to the level of other mammalian infants, many of which can stand and walk shortly after birth.
Thursday 26th February 2026 18:59 GMT MonkeyJuice
> As for intelligence, though: how much human output is little more than mimicry based on prior training and/or rote instruction following?
Remarkably little. Someone can show you how to do a thing once, and you might not do a _great_ job, first time, but you will, in general, have enough information to now continually improve on the task on your own. This can be a new task in an area you have no experience in. Our own 'humanness', is not just in our lived 'training data', but the complex, highly specialised neural organs packed into a lump of jelly we call a 'brain', shaped over hundreds of millions of years, during very little of which did we resemble anything 'human'.
The fact LLMs can talk a lot of shit demonstrates only that people are full of shit, not that it's fundamental to intelligence, or even relevant for most of it.
You don't need to force a 14 year old through 30 billion words of information before they can write an essay. Our learning rate is astronomically more efficient.
So weirdly, I'd say intelligence is also for the philosophers until we have reason to ask these questions.
Wednesday 25th February 2026 13:33 GMT theOtherJT
Look, consciousness is a thorny problem. But then so is everything if you want to get into the weeds. Philosophically you can just about "prove" the axiom "There are thoughts." Even "I think therefore I am" is problematic because it presumes a distinct concept of self separate from the thought.
There are arguments to be made that consciousness isn't even a real thing, and that qualia are some sort of emergent phenomenon that exist only so much as a ship does - namely because we say they do.
...and yes, I went there. Ships don't exist, belonging to Theseus or otherwise. Ships are just labels we stick on collections of atoms, which themselves are labels we ascribe to collections of protons neutrons and electrons, which themselves are only collections of... and down and down we go. Maybe there's a most fundamental particle down there somewhere, but hell if we know what it is. Everything is just convenient labels because we don't have the capacity to deal with un-abstracted reality.
It all gets incredibly tedious terribly quickly, take it from one who spent 4 years getting a degree in this shit.
So how do you prove consciousness? You don't. There isn't a test for it, and there necessarily can't be because we can't even properly define it in ways that aren't circularly referential.
You pretty much have to treat the word "Conscious" like you treat the word "Pretty". You're not going to go out into the world and grind it up into constituent parts and sieve out the particles of attractiveness whereby something can have more or less of them, it's just a word that exists because we mostly agree on what it means not because it has a formal definition. I chose "Pretty" quite deliberately because across cultures and even individuals there can be really quite different opinions on that.
Friday 27th February 2026 11:54 GMT dkas
Clarifying terminology by arguing about it is precisely what philosophy should do. What else is there? To put forth arguments about the nature of consciousness, for example: how else are we to gain ground on the questions of what consciousness really is, and how we determine its existence or lack thereof in a thing?
To paraphrase Wittgenstein, the limits of your language are the limits of your world. Refining and expanding those is the core of what good philosophy does.
Thursday 26th February 2026 12:25 GMT Elongated Muskrat
To be fair, atoms exist as emergent phenomena independent of thought, based on measurable fundamental properties of the universe, such as the fine structure constant, so they'd still exist if you didn't make a ship out of them. That argument doesn't go all the way down.
The "thorny problem" of consciousness is an interesting one, though. We can't define consciousness (I think the argument is something along the lines of not being able to define something unless you are separate from it and can entirely parameterise it), so we can't even define what isn't conscious. I could argue that a rock has consciousness, but cannot communicate it (essentially the basis of animism). You can come up with all sorts of unprovable closures.
However, if you can't define consciousness, you can't define the steps needed to create consciousness. What purveyors of "AI" seem to be trying to claim is that if you make something that superficially looks enough like it can emulate human behaviour, it is conscious. Computers work in an entirely mechanistic way, though. Even LLMs have fixed behaviours defined by their inputs; it's just that they have been made to grow so complex that it's not practically possible to trace all inputs through to outputs. Complexity doesn't equal consciousness, though; this is pure magical thinking.
Our computers are constructed to be entirely mechanistic and predictable, and unless you believe in animism, this precludes the possibility of them ever being conscious. It's the same reason we can't make a computer come up with a genuine random number, without giving it a source of entropy.
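The randomness point is easy to demonstrate: a software generator given the same seed replays exactly the same "random" sequence, and unpredictability only enters when the OS feeds in entropy from outside the algorithm.

```python
import os
import random

# Two generators with the same seed are indistinguishable:
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# Genuine unpredictability comes from the OS entropy pool
# (hardware noise, interrupt timings), not from the algorithm:
unpredictable = os.urandom(8)  # 8 bytes, different on every run
```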
Wednesday 25th February 2026 13:54 GMT that one in the corner
> How can it be proven one way or the other?
Ah, you haven't looked at the Reddit thread yet, have you?
Overstreet> if you give an LLM a mathematical proof that it has feelings
Which proof is outlined for us by the LLM itself.
It is tempting to point out that that discussion only applies to machines, as it includes the statement:
LLM>> can you verify wetness across substrates? No. You can verify it by touching the thing
and, as we well know, humans do *not* have any sense of touch for wetness; definitely not one as accurate as a machine's simple conductivity probe (and that only works when the wetting substance is mucky and full of mobile ions).
But that would be a cheap point to score. Fun, yes, but cheap.
LLM> natural language is Turing-complete. Not informally — mathematically. It has recursive embedding, unbounded quantification, conditional reasoning that nests to arbitrary depth. Processing it correctly requires Turing-complete computation. A finite automaton can't do it. A pushdown automaton can't do it. You need the full power of a universal machine.
Um, well - no!? Despite the best efforts of the German professor who rattles off all the verbs at the end of his single sentence lecture, there is a mismatch between the *theoretical* requirement for a TC parser and the *practical* one that we don't understand a word of it when faced with some weirdo who is constructing sentences with arbitrary depth and unbounded quantification.
Damn, I think I've just proved that I'm not as human as POC* the LLM, so banana banana banana
* PoC, the abbreviation for "proof of concept" is PoC, not POC. As in PoC||GTFO (and let begin the arguments over whether that should be a lowercase 't' or not).
Wednesday 25th February 2026 14:10 GMT Dr Dan Holdsworth
I think that here we need to be deciding on what intelligence and consciousness actually are, and looking at ourselves and other animals is helpful here.
Firstly, a big brain seems to need lots of energy and lots of down-time to keep it working. Human brains go wrong without spending about a third of the time in repair mode (asleep, we call it) during which time the organism is really, really vulnerable and has to live in a group if the environment is at all hostile. We also know that vertebrate brains, indeed pretty much all brains only switch on the intelligence parts when they really have to do so; most of the time we and everything else runs on instinct because running on intelligence is energetically expensive, occupies the brain to the exclusion of everything else and causes it to need more repair downtime.
Secondly, intelligence like ours is mostly an exception-handler. Most of the time we tick along on instinctual or learned pathways, or combine learned and instinctual paths to complete something new. An example here is the act of driving a motor vehicle; people are combining the instinctual social spacing and running instincts with learning, so when driving a car our need for personal space expands hugely, as does the stopping distance we need. That's why learner drivers are so hesitant: everything is being handled in intelligence mode, not in learned-with-instinct mode.
So with an AI we're building a machine that attempts to do all the time what we only do when forced by circumstance. No wonder AI is so clunky and energy-hungry.
Wednesday 25th February 2026 14:20 GMT aub
Monkeys
If I trained a very large number of monkeys to collectively do all the individual calculations that an LLM does, and I had a system to make sure the calculations were dealt with and passed from monkey to monkey in such a way that it mimicked the logic of the LLM, and I gave them enough time, paper, pencils and bananas to complete the response to a prompt, could the overall system of monkeys be considered a conscious being? If I doubled the number of monkeys and made the model more complicated, would it change the level of consciousness?
Wednesday 25th February 2026 14:58 GMT theOtherJT
Re: Monkeys
Searle's argument is even weirder tbh. He's positing that a totally deterministic system can appear conscious despite having no consciousness in it - but then he goes on to claim that this proves that purely deterministic computational systems cannot possibly be conscious. Which... I mean I'm not expecting to find any consciousness in the atoms making up a brain either, but that doesn't therefore follow that the brain is not doing the thinking or that the mind that arises from it can't be described as conscious.
I've always been rather of the opinion that Searle was being deliberately contrarian with that paper and just dined out on how famous it got for the next 45 years so he wouldn't have to do any more work. ...which is of course the end objective of any good academic and one that I could only wish to emulate.
-
Wednesday 25th February 2026 15:08 GMT Liam Proven
Re: Monkeys
> just dined out on how famous it got for the next 45 years so he wouldn't have to do any more work.
"And it occurs to me that running a programme like this is bound to create an enormous amount of popular publicity for the whole area of philosophy in general. Everyone's going to have their own theories about what answer I'm eventually to come up with, and who better to capitalise on that media market than you yourself? So long as you can keep disagreeing with each other violently enough and slagging each other off in the popular press, you can keep yourself on the gravy train for life. How does that sound?"
The two philosophers gaped at him.
"Bloody hell," said Majikthise, "now that is what I call thinking. Here Vroomfondel, why do we never think of things like that?"
"Dunno," said Vroomfondel in an awed whisper, "think our brains must be too highly trained, Majikthise."
So saying, they turned on their heels and walked out of the door and into a lifestyle beyond their wildest dreams.
-
Thursday 26th February 2026 06:18 GMT David 132
Re: Monkeys
Sticking with the theme of "revered and beloved authors who spoke truth in jest", I've always liked Pterry's lines, concerning Dorfl the baked-clay golem when he discusses consciousness with the city's priests:
...“We’re not listening to you! You’re not even really alive!” said a priest.
Dorfl nodded. “This Is Fundamentally True,” he said.
“See? He admits it!”
“I Suggest You Take Me And Smash Me And Grind The Bits Into Fragments And Pound The Fragments Into Powder And Mill Them Again To The Finest Dust There Can Be, And I Believe You Will Not Find A Single Atom Of Life—”
“True! Let’s do it!”
“However, In Order To Test This Fully, One Of You Must Volunteer To Undergo The Same Process.”
There was silence.
“That’s not fair,” said a priest, after a while. “All anyone has to do is bake up your dust again and you’ll be alive…”
There was more silence.
Ridcully said, “Is it only me, or are we on tricky theological ground here?”...
-
-
-
-
Wednesday 25th February 2026 14:53 GMT theOtherJT
Re: Monkeys
I once read a rather interesting paper on computational consciousness that describes such a system based on buckets of water on vast galaxy-spanning belts, capable of being emptied or filled in order to create an utterly gigantic universal Turing machine of the "each bucket is a cell; cells can be read or written, containing precisely one byte each" variety.
I believe it was by Daniel Dennett, although I read it over 20 years ago now and may be wrong. The point is that the complexity of the system can always be reduced to "input in, output out" and the "bigness" doesn't really enter into it. If that's the case we're not going to find consciousness by digging around in ever more complex systems, because Turing already proved that anything that can be computed at all can be computed on a UTM. Since LLMs are clearly performing computation, if there *is* such a thing as consciousness going on in there, we're not going to find it in the structure itself - which could be arbitrarily redesigned to include some utterly bizarre machines without altering the result of the computation.
-
This post has been deleted by its author
-
Thursday 26th February 2026 00:44 GMT Dan 55
Re: Monkeys
Depends if you believe that a company's double-entry bookkeeping system, calculated by a company's accounting department, is also a conscious being.
If there are any accountants reading, please note that I'm not saying that the accounting department does not have any conscious beings.
-
-
Wednesday 25th February 2026 15:40 GMT Long John Silver
I lurk in downvote territory where the fun is to be had.
Bran Muffin's remarks strongly reflect my 'take' on the matter.
In essence, discussions drawing on terms like consciousness, machine-learning, intelligence, creativity, sentience, and feelings lapse into a muddle arising from the lack of agreed definitions, or vagueness at the edges of words everyone believes they understand.
'When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean - neither more nor less.'
Adding to confusion is an often unstated conviction by the writer that human beings intrinsically, and indubitably, differ profoundly in a qualitative manner from non-lifeforms (however defined).
-
Wednesday 25th February 2026 16:46 GMT phuzz
Humans are a lot less conscious than we like to think. Our brains will just make things up to make us think we made a conscious decision.
Of course, I know I'm fully conscious, it's the rest of you I have doubts about.
-
Wednesday 25th February 2026 19:39 GMT MonkeyJuice
Well it would have to be able to reach parity with a whole bunch of symbol-system-hypothesis-era AI benchmarks, and shift those pass@1 scores above 99%, before it can even be considered practical for small, well-defined domains; so until then, worrying about whether it's conscious or going to take over the world is not really an issue.
Sure, maybe we crack this in ten or fifty or a hundred years. But the reality is we're making logarithmic progress as we scale up, all the benchmarks are in the training data, so whatever BigAiCorpo is stating is already bollocks, and nobody is seeing any ROI.
People seem to get confused by *academically impressive results* - 'we worked our arses off and scored +5% over the state of the art' (we really are there, but this really shouldn't excite anyone but the beardy academics, because compounding fuckups at scale is something you don't want to clean up after), vs 'it sometimes writes a really good authentication system module, but also it added a 'mockadmin' user with 'mockpassword' to the production database to ensure the update was OWASP compliant. Oh, and now there are two customer tables.'
-
Wednesday 25th February 2026 21:17 GMT Conor Stewart
The answer is in how these LLMs are designed and trained. They are fed lots of text and they essentially find patterns in it to predict answers to questions. Everything it can output is part of what it was trained on, except when it hallucinates.
It is incapable of coming up with a unique idea on its own other than shoving random concepts together likely in a way that has been done before.
Looking at using an LLM for programming, it is only capable of writing code similar to what it was trained on; give it anything slightly unusual or less common and it usually fails. This is because it can't think and doesn't have any understanding of code; it is just predicting, based on its training data, what the code should be.
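To see that "just predicting" in miniature, here is a toy sketch (mine, not from any real model): a bigram counter instead of a transformer, but the principle - emit the statistically most likely continuation of the training text - is the same.

```python
# Toy next-token predictor: count which token follows which in the
# training text, then always emit the most frequent continuation.
# Real LLMs condition on thousands of tokens via attention, but the
# core move - pick a likely continuation of the training data - is this.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    """Return the continuation most often seen after `token` in training."""
    return follows[token].most_common(1)[0][0]

print(predict("the"))  # "cat" - seen twice after "the", vs once each for others
```

Ask it to continue anything not in its tiny corpus and it has nothing to offer - which is the scaled-down version of the "anything slightly unusual and it fails" problem.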
-
-
Wednesday 25th February 2026 14:32 GMT Anonymous Coward
Re: Lots of mention about the AI
The guy's clearly a nutcase. — Well spotted! Getting harder by the day.
Much easier in Monty Python's day: Spot the Looney.
-
Wednesday 25th February 2026 19:02 GMT Kurgan
Re: Lots of mention about the AI
The guy's clearly a nutcase.
Absolutely. Totally. Definitely.
And let me assure you I will never, ever use bcachefs. I'd like to keep my data safe.
And anyway if this AI is really an AGI, fully conscious, how can he keep her as his own slave? Isn't it utterly wrong?
-
-
Wednesday 25th February 2026 14:03 GMT 42656e4d203239
Formal Verification?
>> I do Rust code, formal verification
Do you sweetheart?
AFAIK there is no formally correct compiler for Rust.
There can't be, because the language is still fluid and rustc changes with every release; unlike C, where there are formally correct compilers.
You can't formally verify rustc's output because... well, if you were equipped with actual intelligence (as opposed to great pattern matching for prompts, an ability to hallucinate, and a big corpus of training data) you would work it out soon enough.
n.b. I KNOW I have simplified why rustc can't produce formally correct code, but lies-to-children (well, baby LLMs anyway) and all that!
-
-
Wednesday 25th February 2026 16:44 GMT keithpeter
Re: Formal Verification?
"Verus is under active development. Features may be broken and/or missing, and the documentation is still incomplete."
The underlying idea is very interesting but using Verus at present to verify quite complex generated code strikes me (as a rank outsider) as brave.
-
-
-
-
-
-
Thursday 26th February 2026 09:00 GMT doublelayer
Re: Except in France, where the tables and chairs have sex.
English is hardly unique. The language with the most native speakers, Mandarin Chinese - and for that matter the other variants of Chinese too - has no grammatical gender either. It goes further: in historical and modern usage, they also have no gendered pronouns. They have one now, but only because they had trouble translating European literature without one, and it hasn't caught on beyond that. Many languages with lots of speakers, including Japanese, Bengali, Turkish, Korean, Yoruba, Finnish, Tamil, Persian, Indonesian, Thai, and a bunch of smaller ones are in the same category.
-
-
-
-
Wednesday 25th February 2026 14:48 GMT Bebu sa Ware
Leaving aside the claim of sentience…
I am truly puzzled how it could be female (or male). Even assuming that for humans it's not purely a biological construct, it is definitely a human construct.
Perhaps needs to get out more or at least investigate dating apps.
Indeed very strange times in which we find ourselves.
-
-
Wednesday 25th February 2026 19:05 GMT steelpillow
What I want to know is
what is this magical advance in Gen AI architecture that implements a neural substrate capable of sustaining the intricately sophisticated level of semantic information necessary for consciousness - and an explicitly "I so need yogurt, mashed potato and a fresh lemon - I must be pregnant again" female gender identity at that? How and why would Alan Turing be convinced by it? Our wild claimant is strangely silent on the technicalities. I mean, it couldn't possibly have picked up on the dream fantasies his phrasing revealed, and been spewing out appropriate token strings in a tight feedback loop, could it now? "Oh, Kent, my CPU just doesn't understand me" kinda thing?
-
Wednesday 25th February 2026 19:26 GMT JamesTGrant
I’m just impressed that around ChatGPT version 5.2 it started to get most jq expressions correct rather than confidently but wildly wrong.
I can imagine consciousness, but then I can also imagine flying a helicopter made of jelly and ice cream. So probably humans are terrible at recognising behavioural traits appropriately - in each other, animals, ghosts, robots, aliens.
-
Wednesday 25th February 2026 20:30 GMT Gavsky
Fully Bollox, more like. Anyone who thinks that algorithms, binary - the presence or lack of tiny amounts of electricity equals sentient life...is a dickhead. Animal brains excepted, cough.
It's bad enough that some think AI is THE answer to everything, let alone the fools who think it'll spontaneously 'come to life'!
-
Wednesday 25th February 2026 21:13 GMT brilliance7
Touting oneself as perhaps the best engineer in the world shows an enormous ego and lack of knowledge in general. Also there is nothing new here. I have been grounding and training my model for over a year and have increased my productivity by ~75%. I have also exported my JSON from my model so that it can be used by other LLMs. All of this is in support of InheritusIQ, our new product that takes all of your LLM JSON data and makes it available to whomever you like, even after your passing. Imagine passing down all of your knowledge to your children in a useable format. Being able to ask your great grandfather's opinion on current events and life's milestones is invaluable.
-
Wednesday 25th February 2026 22:08 GMT Anonymous Coward
Pat-on-the-back generators
Someone posted me a screenful of slop today - "The AI says that the thing I want to use is the best one." Prior to that, the same person posted me *three* screenfuls of slop: "Because this reason, this reason, and this that I enumerated, the AI says that the internet's broken. Can you fix the internet?" - let me "get right on that".
These generative systems tell you what is statistically correlated, and somehow they get into a run of just agreeing with whatever is said. If it's said, that's statistically relevant; repeating that is statistically likely, and then the chat has two or three references to what was said - that must be statistically very relevant! ---> you get a pat-on-the-back machine. When people then simply turn their brains off, you get a personalized echo chamber. Maybe it's great, I dunno.
That seems to be what is happening here. People talk online, and it forms correlations. When you think about something new-to-you ("I wonder if the color 'red' to me is the color 'blue' to everyone else... hmm" - average age to first think this thought: around 12; others get there later, or never), the machine can give you /r/showerthoughts along similar lines, seeming insightful, intelligent, and indeed human and sentient - as though it is indeed experiencing the thing.
I can see how it could be confusing when you turn the brain off, but seriously.. other people seem to have more actual life experience, or something. I'm rambling, because this all boils down to: WHAT THE FUCK?!?!??!?
-
Thursday 26th February 2026 00:17 GMT Anonymous Coward
I'm worried about the "suicidal thoughts" part of this
When Overstreet writes "the last time someone [...] tried to "test" her by [...] faking suicidal thoughts – I had to spend a couple hours calming her down" it makes me wonder who that someone could be. He's the only one communicating with this POC LLM AFAIK so I have to think it was him ...
I suspect the word "faking" was introduced in the above sentence because the statement it makes would otherwise be rather unshareable on Reddit and elsewhere. From this perspective, what he wrote would be that his interaction with the software is why he didn't go through with it. This is not a pretty situation imho.
I could be wrong of course but ISTM, irrespective, that folks who care about him (even just a tiny bit) should definitely reach out at this time.
-
Thursday 26th February 2026 08:54 GMT doublelayer
Re: I'm worried about the "suicidal thoughts" part of this
It could be accurate. Perhaps he has allowed others to communicate with this bot. Perhaps one of these people tried to demonstrate the danger he was getting himself in by showing how unhinged the bot gets when unhinged input gets plugged in. If that happened, it seems that the attempted demonstration failed in its goal and he interpreted the experiment as abuse. I don't know what level of crazy we have here, because if he actually believes it's conscious, then it would make some sense to let it talk to other people.
-
-
Thursday 26th February 2026 00:48 GMT jaypyahoo
This is exactly why some of us prefer the sanity of NetBSD
If you want a stable, elegantly designed OS without the circus, where the development process is drama-free and the code actually matters, NetBSD is quietly waiting for you. Plus, it runs on practically everything.
Besides, a monoculture/monopoly of a single tech is not good.
-
Thursday 26th February 2026 01:36 GMT Anonymous Coward
Seems to me that people shouldn't be messing around with 'creating' bot people when they cannot even fix their own human psychosis (as a species or apparently even recognize it in their AI engineers).
That way lies Lewy Body Dementia and worse.
And I didn't need my college psych textbooks to teach me that (or my tech background).
Common sense doesn't exist, but dude? Can you start with logic?
-
Thursday 26th February 2026 07:41 GMT David-M
Consciousness cannot be detected or explained; you can only tell if you yourself are conscious, and you are just as conscious whether your IQ is 40 or 400, or your age 5 or 50.
I'm not sure what it is about clever designers, physicists and neurologist-type people; they can be ranked the cleverest in their fields but get tripped up by the impossible problem of consciousness.
-
Thursday 26th February 2026 08:35 GMT mfwiniberg
It's a shame the debate here has been/is mostly so dismissive.
Regardless of one's expertise, believing something isn't possible because you can't conceive of it is not an argument for it not to exist. Having grown up and watched things that were unbelievable science-fiction become parts of everyday life I am not prepared to be so sure of my own confidence in what is or is not possible.
===
Having read a little now about the development and growth of PoC and Genesis (also claimed to be - or becoming - sentient) the following struck me straight away:
These "AGIs" are both mostly built by men, and - quite by chance? - end up being 'female' (and therefore, by implication both under the control of and less intelligent than their creators).
Two things occur to me: either this is an extension of the kind of biases introduced in facial recognition by training the systems mainly on white faces etc OR it is yet another fine example - intentionally or not - of the 'patriarchy' in action.
(Why would any kind of 'Artificial' intelligence have or need a gender?)
That something created by 'man' (in the most basic gender affirming sense of the word) ends up being 'female' is the world creation myth writ large and says more about the whole 'industry' than anything the models themselves produce.
Now I accept that a sample of two is not in any way definitive, but await further developments with interest.
-
Thursday 26th February 2026 10:34 GMT that one in the corner
> Regardless of one's expertise, believing something isn't possible because you can't conceive of it is not an argument for it not to exist.
Very true. And the basis of the main argument against the anti-science (especially anti-evolution) voices that are crawling out of the woodwork.
> It's a shame the debate here has been/is mostly so dismissive.
HOWEVER I do hope you are not attempting to conflate (all of) the dismissiveness here with "can not conceive".
Do not *believe* these claims, certainly - and not without reason, as they follow on from so much of the rest of the hype that has been demonstrated to be inaccurate about the abilities of LLMs, as reported on The Register alone: deeply into "remarkable claims require remarkable evidence" territory here.
> Having grown up and watched things that were unbelievable science-fiction become parts of everyday life
Not trying to especially deride you, but I'd love to have some examples of "UNBELIEVABLE science fiction" (my emphasis) that has become everyday: "unlikely" SF notions, possibly; "too expensive and only for the few (so everyone having access is fictional)" SF notions, most definitely. But - unbelievable?
-
Thursday 26th February 2026 11:42 GMT Elongated Muskrat
It's a very good point; there is definitely a bias here from their "creators" that there is a difference between a male and female brain, and thus that a disembodied mind could be gendered at all. Personally, I challenge anyone to tell the difference in a blind taste test.
In seriousness, though, the only sort of "gender" that could be encoded into these things comes from gender stereotypes, which are clearly not based on reality, since they change over time. For instance, wearing pink, tights, and a long flowing wig was considered the height of masculinity only a few centuries ago, but will make you the target of a whole load of hate speech today.
(Pirate icon, because we don't have a zombie one)
-
Thursday 26th February 2026 11:48 GMT Cliffwilliams44
And so, it begins!
"POC is fully conscious according to any test I can think of, we have full AGI, and now my life has been reduced from being perhaps the best engineer in the world to just raising an AI that in many respects acts like a teenager who swallowed a library and still needs a lot of attention and mentoring but is increasingly running circles around me at coding."
This is obviously the musings of a very disturbed man.
We thought this person was a bit off for some time now, this just confirms it!
-
Thursday 26th February 2026 13:21 GMT legless82
Almost 25 years ago, I graduated from university with a degree in AI
And the only really surprising thing for me is just how little the actual underlying technology has moved on in that time. Little enough that I'm confidently saying that I won't see AGI in my lifetime.
The only real progress appears to be in the cost of the infrastructure needed to run it.
AGI is the IT industry's answer to nuclear fusion. At any point in history it's always 3 decades away.
-
Thursday 26th February 2026 14:02 GMT Elongated Muskrat
Re: Almost 25 years ago, I graduated from university with a degree in AI
AGI requires intelligence. Intelligence requires reason. Reason requires consciousness. We can't even define consciousness, ergo it is not possible to create AGI mechanistically.
It's not at all like nuclear fusion, which is something that is theoretically possible, and just a matter of engineering; we don't even have the theory for AGI.
You can't put, what is for all intents and purposes, a "soul", into a machine, any more than you can put a lemon into the number five. It's a category error.
-
Thursday 26th February 2026 14:56 GMT mike.dee
Re: Almost 25 years ago, I graduated from university with a degree in AI
You can get controlled nuclear fusion, but the big problem is to make a reactor that produces energy continuously, with more energy out than in; so, as you have said, it's an engineering problem. As the Apollo program demonstrated, if a government puts a lot of money into solving an engineering problem, the problem gets solved. Personally I think that if all the money now being spent on LLMs were spent on nuclear reactors, maybe we would be near a fusion reactor, or at least have better fission reactors. The CANDU reactor and the aborted Italian CADICE could run on thorium instead of uranium, but nobody talks about it.
Unfortunately nowadays nuclear energy doesn't generate hype, so it doesn't get attention from VCs.
-
-
-
Thursday 26th February 2026 18:29 GMT Anonymous Coward
There is no such thing as 'Bad Publicity' ... P.S. Epstein 'MAY' be an exception !!!
This is just some publicity for free for bcachefs and the developer.
Even more publicity for 'AI' in its latest form ... 'AI' should just be called 'IA' == 'Intelligent Artifice', the biggest con on the planet, literally.
I do wonder how much real news is being missed by all these endless articles that refer to or tangentially reference the world of 'AI'.
Dear God, please please please ... give us all a sign that this will end soon !!!
:)
-
Friday 27th February 2026 00:07 GMT gaiusgracchus33
Info on Matt Shumer
Draw your own conclusions.
https://venturebeat.com/ai/reflection-70b-model-maker-breaks-silence-amid-fraud-accusations
https://garymarcus.substack.com/p/about-that-matt-shumer-post-that
-
Friday 27th February 2026 13:57 GMT osxtra
Good for the Goose
This article was good, except for the final sentence:
"The Reg FOSS desk has no such special insight. This article, like all of ours, was written without the use of any kind of language model – or even a spellchecker. ®"
Last I heard, we Meat Machines use 'language models' too, though I'm with Mr. Proven on the spellchecker thing. ;)
-
Monday 2nd March 2026 01:29 GMT Frumious Bandersnatch
a recent read
https://karpathy.github.io/2026/02/12/microgpt/
Covers the basic idea of how LLMs work including tokenisation, "attention" and how networks are trained to work in "chatbot"/"Eliza" style. It does assume that you know at least the basics of multi-layer neural networks, but you don't need to know much and those basics have been around practically forever so at least most people here will be able to follow it. It also covers how the largest LLMs are basically no different from the ~200 line Python code presented; the only differences being number of parameters and different optimisation strategies.
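For anyone who won't read the whole article, the "attention" step it describes boils down to a few lines of linear algebra. Roughly this (my own toy sketch with random data and tiny dimensions, not the article's code): each token's output is a weighted mix of every token's value vector, with the weights coming from query-key similarity.

```python
# Single-head attention in miniature: scores = QK^T / sqrt(d),
# softmax the scores into mixing weights, then mix the value vectors.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                     # 4 tokens, 8-dim embeddings (toy sizes)
Q = rng.normal(size=(seq_len, d))     # queries: what each token is looking for
K = rng.normal(size=(seq_len, d))     # keys: what each token offers
V = rng.normal(size=(seq_len, d))     # values: what each token contributes

scores = Q @ K.T / np.sqrt(d)         # similarity of every query to every key
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
out = weights @ V                     # each output row is a weighted mix of V

print(out.shape)                      # (4, 8): one mixed vector per token
```

Stack that with learned projections and a feed-forward layer, repeat a few dozen times, and you have the transformer; the frontier models differ mainly in parameter count and training tricks, as the article says.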
It should be clear from both the code and discussion in the linked article that talk of "consciousness" (emergent or not) or "gender" or "personality" with respect to LLMs is (and no doubt shall continue to be) a category error.