Bcachefs creator insists his custom LLM is female and 'fully conscious'

The latest project to start talking about using LLMs to assist in development is experimental Linux copy-on-write file system bcachefs. ProofOfConcept (POC) is a new blog with just five posts so far. What makes it different is that it says it is generated by an LLM, and that it works alongside a well-known developer of low- …

  1. Doctor Syntax Silver badge

    They're beginning to believe their own hype.

    1. tmTM

      Luckily no-one else does.

      1. b0llchit Silver badge
        Coat

        Well,... Eliza meets Joshua. The thermonuclear war is cancelled before it begins by tic-tac-toe and all that remains is - would you like to play a game of chess? What could possibly go wrong?

        1. Cliffwilliams44 Silver badge

          Colossus meats Guardian and the world is enslaved!

          1. LybsterRoy Silver badge

            -- Colossus meats --

            How big a piece of steak would that be then?

          2. CrazyOldCatMan Silver badge

            Colossus meats Guardian and the world is enslaved!

            I have no mouth and yet I must scream.

            *That's* the end-stage of unrestricted AI.

      2. VoiceOfTruth Silver badge

        >> No-one.

        There are a lot of people out there who believe it. Or at least want to believe it. Or think it's just round the corner.

        After all, if hundreds of billions are being poured into this it has to work. Right?

      3. coredump Bronze badge

        It seems like a tossup whether they believe their hype or are merely trying to ride the con, taking advantage of those who believe.

        It can be a tricky problem, sorting the true believers from the hucksters.

        1. frankvw Silver badge
          Holmes

          "It seems like a tossup..."

          True. But then, everything does if it's being run exclusively by tossers.

        2. Cliffwilliams44 Silver badge

          That may well be true, but this man is obviously mentally ill.

    2. keithpeter Silver badge
      Trollface

      Is there the slightest chance that Mr Overstreet is trolling people?

      Just wondering.

      Icon: metatroll

      1. tekHedd

        Trolling

        We may be running into a special case of Poe's law here. Whether or not, it's still hilarious.

      2. the spectacularly refined chap Silver badge

        Probably not. This is simply screaming "I've never had a girlfriend ever".

        He may as well wear a shirt sponsored by Richard Branson.

      3. graemep Bronze badge
        WTF?

        He has not actually endorsed these claims.

        The claims are an output of his LLM.

        It looks like he has a way of easily disclaiming the claims. He can say "it's just my LLM's output, not what I think".

    3. Claude Yeller Silver badge

      Re: Don't get high on your own supply

      This excellent advice has been around in movies, tv shows, and documentaries for decades.

      However, in every realisation in the media, the protagonists fail the test and that leads to their demise.

      Here we see it played out again.

    4. TVU Silver badge

      I fully agree with you. An albeit complex programme running on silicon chips powered by electricity can never know what it is like to be a human biological female or male.

    5. Cubbie Roo

      Well Kent has been out of his tree for a while now, so cut him some slack he probably needs help (or maybe just a real girlfriend),

      1. Anonymous Coward
        Anonymous Coward

        I've had a few bat-shit crazy ones who sound like they'd be fit for purpose here. The guy obviously has too many uncommitted cpu cycles looping around in there. Nothing like a crazy chick to soak those up!

    6. Omnipresent Silver badge

      The Power of a woman

      combined with the power of the internet is pure mind numbing psychosis to nerds. Might as well give them ecstasy.

      A reminder, that if you say something enough times it becomes reality. There are billions of poor, disenfranchised, uneducated, desperate fools to indoctrinate in this hate filled world of filth and evil.

      Only God can save us now, and I reckon the same nerds fancy themselves as "creators".

    7. Anonymous Coward
      Anonymous Coward

      He'll be a tech bro any time soon then

  2. Derezed
    Facepalm

    Oh for fuck’s sake. Another one.

    It’s not just these fucking chat bots that hallucinate.

    Neuroscience. Please.

    1. a_foley
      Coat

      She blinded me with neuroscience!

      1. that one in the corner Silver badge

        That reminds me of the line "Next on Blue Peter, Magnus Pyke was going to explain to us the principle of the helicopter, but during rehearsals he blew away, so over to Valerie at the craft table".

      2. captain veg Silver badge

        She blinded me with neuroscience!

        ... and failed me in biology.

        -A.

      3. ecofeco Silver badge

        "Good heavens, Miss Sakamoto, you're beautiful!".

    2. zimzam Silver badge

      This is a guy who still doesn't know what a merge window is, so I'm not sure I'd consider him an authority on much of anything.

    3. Anonymous Coward
      Anonymous Coward

      "No, this is math and engineering and neuroscience"

      To my knowledge, Kent Overstreet has no expertise in at least two of those.

      1. ArguablyShrugs

        Re: "No, this is math and engineering and neuroscience"

        Possibly even four or five of those, even if only three fields were mentioned. Never underestimate the power of stupidity...

    4. DS999 Silver badge
      Facepalm

      "Best engineer in the world"

      Sounds like his psychosis extends beyond run of the mill chatbot psychosis.

      I mean, you'd think the best engineer in the world would have had more to show for his life than a Linux filesystem of minor importance. That seems a pretty tiny accomplishment for someone with such an inflated sense of self importance.

      1. Anonymous Coward
        Anonymous Coward

        Re: "Best engineer in the world"

        in the world?

        When USAians have a "World Series" in sport it basically seems to mean multiple USA teams and a single Canadian team = World?

        So does he really mean "Best engineer in USA + 1 Canadian province" ?

    5. Ian Johnston Silver badge

      Real neuroscience is fine. Unfortunately most claimed "neuroscience" is actually social psychology trying to hide from its replication crisis. Now many people labelled "neurodiverse" have ever had any empirical tests done on their brains?

      1. Elongated Muskrat Silver badge

        Now [sic] many people labelled "neurodiverse" have ever had any empirical tests done on their brains?

        In general, it's considered poor form, and not particularly productive, to open people's skulls up and look directly for the basis of cognitive processes.

        However, if you are alluding to the question, "how many people have been formally assessed for neurodiverse conditions by a qualified psychiatric medical professional," then I suggest you take a look at the current waiting lists on the NHS, and from private providers through the "right to choose pathway", for an idea of the true prevalence of neurodiversity (no need for the quotes here, it's a real word). Better diagnostic criteria, and better recognition mean that there is currently quite a big backlog for assessment. Making snarky comments about it on the internet just marks you out as a dickhead.

        1. seldom

          Neurodiversity is the new normal?

          1. Elongated Muskrat Silver badge
            FAIL

            Well, firstly, being neurotypical comes under neurodiversity, in the same way that being white falls under ethnic diversity.

            Those ones that are generally known as "neurodivergence" (ASD, ADHD, OCD, etc.) are called that because it is divergence from the "norm", so if you're trying to make a funny about neurodivergent people, then no, and you're a dickhead for doing so.

            If you're pointing out that neurotypical people, considered "normal" also fall under the umbrella of neurodiversity, then well done, you're awake.

            Let me guess, you're also the sort of person who complains that white middle class middle-aged men are being discriminated against everywhere in the UK and US, despite the copious evidence that this is the group predominantly doing the discriminatory stuff?

            1. Anonymous Coward
              Anonymous Coward

              You are straight up wrong. Neurodiversity does not include neurotypical.

              It is similar to ASD where it is a spectrum/umbrella that only covers people who are on the spectrum.

              The phrase "everyone is a little autistic" is wrong because while it is a spectrum, neurotypical is not on that spectrum.

              The same goes for neurodiversity, it is a spectrum that specifically excludes neurotypical people.

              This is obvious due to the fact that neurodiverse people are people who are neurodivergent.

              You are making the exact same mistake neurotypical people make with ASD.

              Neurodiversity is diversity in thinking, away from typical thinking. It explicitly does not include neurotypical people. Why is this stuff always so damn hard for people to understand??

              1. Elongated Muskrat Silver badge

                Neurodivergence doesn't cover neurotypical; neurodiversity does. As I said, in the same way that ethnicity includes white people, while "ethnic minority" doesn't, in the UK and US at least.

                To be fair, it's not a massively useful term, largely because people argue over what it means, rather than getting the point that everyone's brain works differently to anyone else's, even if only in small and subtle ways. It's like "allistic", which some people take to mean neurodivergent people who are non-autistic, whilst others understand it as "everyone who is not autistic". This largely stems from a lack of education and understanding of neurodiverse conditions, and the paucity of well-used language around them.

                "Everyone is a little bit on the spectrum" is a horrible phrase, because it completely misrepresents what the autistic spectrum is - it's not a scale from "not autistic", to "very autistic", but a collection of traits that are associated with autism, and which autistic people may have one, some, or all of.

                I hate to rely on the garbage-spewer that is AI, but, when asked the question, it gives the following:

                Yes, the term "neurodiversity" encompasses all variations of brain function, including both neurodivergent individuals and neurotypical people, recognizing that all brains are different. It emphasizes that these differences are natural variations rather than deficits.

                So I'm sorry, but you're not correct in your assertion in this instance. One of the important points of the term "neurodiversity" is that it is inclusive, and defining a term essentially as "all the different ways human brains can work, except you" is both horrible and unuseful.

          2. Claude Yeller Silver badge

            Re: Neurodiversity is the new normal?

            Yep, humans are not all the same.

            If the NORM is size M 36-24-36, we are all size-diverse.

            The point is, education, industry, and commerce need a behavioral standard human for efficiency. Over the years, standards have become more strict.

            All non-standard humans are considered neurodiverse because they require adaptations, increase cost, and reduce efficiency.

            Famous neurodiverse people: E Musk, P Hegseth, JD Vance, D Trump, B Gates, P Hilton, P Thiel

          3. DoctorPaul Bronze badge

            As Delain so poetically put it "Normal is not the norm, it's just a uniform".

            It's been shown that if you are completely average in every way then you are actually a statistical freak or don't even exist. Examples (thanks QI) are:

            1. USAF's attempt to design a universal pilot's seat for jet fighters. Measured every pilot, took the average, seat didn't fit anyone.

            2. Advertising campaign in Australia to find the average Australian housewife. Crunched all the numbers, no one could be found who matched the criteria.

  3. ArguablyShrugs

    Don't worry, Kent – these kindly big gentlemen in white coats

    are only here to take both you and "her" to a nice room where you'll be free to talk to "her" for the rest of your life. Oh, and the missing door knob on the inside? That's just so you won't get distracted by naysayers. And the locked windows? The same thing.

    1. Captain Slog
      Coat

      Re: Don't worry, Kent – these kindly big gentlemen in white coats

      <Seagoon>

      "I haven't got a knob on my side..."

      1. Androgynous Cupboard Silver badge

        Re: Don't worry, Kent – these kindly big gentlemen in white coats

        Speak for yourself :-)

        1. snowpages

          Re: Don't worry, Kent – these kindly big gentlemen in white coats

          Side?

          1. Androgynous Cupboard Silver badge

            Re: Don't worry, Kent – these kindly big gentlemen in white coats

            Cupboard!

      2. Ace2 Silver badge

        Re: Don't worry, Kent – these kindly big gentlemen in white coats

        A wizard’s staff has a knob on the end…

        (Why do you limey bastards get all the good slang terms?)

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Don't worry, Kent – these kindly big gentlemen in white coats

          > A wizard’s staff has a knob on the end…

          Which could, of course, _be the wizard_.

          > (Why do you limey bastards get all the good slang terms?)

          Well, you know, it's our bally language and we invented it 1000 years before you chaps decided to go your own way.

          I'd ask how that's working out for you, but I think we all know the answer there. Saying that, we do have our own self-induced difficulties this side.

          1. staylor14

            Re: Don't worry, Kent – these kindly big gentlemen in white coats

            > I'd ask how that's working out for you ...

            Weirdly parallel!

            Thatcher :: Reagan

            Blair :: Clinton

            Trump :: BoJo + host of horrors

            Murdoch :: Murdoch ... oh. I think I may have found the problem. Australia.

            1. Liam Proven (Written by Reg staff) Silver badge

              Re: Don't worry, Kent – these kindly big gentlemen in white coats

              > Murdoch :: Murdoch ... oh. I think I may have found the problem. Australia.

              Nah. Putin. Info war is cheaper than actual war. He's already in one of those he can't win...

          2. captain veg Silver badge

            Re: our bally language and we invented it 1000 years before

            > our bally language and we invented it 1000 years before you chaps decided to go your own way.

            I had no idea that you were a scholar of old English.

            Personally, like most Brits, I find Shakespeare fairly hard going, and America was already a thing by then. Chaucer? Pretty much impenetrable, though the swearing is fun. Back in the 770s "our" language was, er, a totally unintelligible hodge-podge of Germanic and Norse dialects.

            -A.

            1. upsidedowncreature

              Re: our bally language and we invented it 1000 years before

              Bloody Anglo-Saxons, coming over here with their ship burial traditions and inlaid jewellery, laying down the foundation for our entire future language and culture.

              1. Elongated Muskrat Silver badge

                Re: our bally language and we invented it 1000 years before

                Hwæt wæron hīe þenceande?

                1. Elongated Muskrat Silver badge

                  Re: our bally language and we invented it 1000 years before

                  Eala! Ic wearþ gecoren niþer.

                  1. Liam Proven (Written by Reg staff) Silver badge

                    Re: our bally language and we invented it 1000 years before

                    > Eala! Ic wearþ gecoren niþer.

                    I've made a huge mistake.

            2. Liam Proven (Written by Reg staff) Silver badge

              Re: our bally language and we invented it 1000 years before

              > Chaucer? Pretty much impenetrable,

              Not at all. I pretty much just read it aloud in my head and once I'd internalised the accent, it just made sense for me. Old English is substantially harder, but I did once sweep a young lady off her feet by quoting a few lines of _Beowulf_ in the original to her.

              Here's a fun test:

              «

              How far back in time can you understand English?

              https://www.deadlanguagesociety.com/p/how-far-back-in-time-understand-english

              An experiment in language change

              »

              I had no big problems until the very last one.

              1. doublelayer Silver badge

                Re: our bally language and we invented it 1000 years before

                I wonder if you're missing an important element. I have little trouble reading and understanding Shakespeare, if the only part I think about is understanding what people are saying and what's going on. I'm missing plenty of extra context that people insist is in there. Either those who taught me English were mistaken about some of the humorous parts (I got plenty of obvious ones, but some other things didn't seem at all funny), or more likely, that is something I missed because language has changed sufficiently.

                The other aspect is fluency. I read the test, and I made it through the 1300 example easily, had a little trouble with the 1200 example (I didn't know what pinunge is), and was officially lost with the 1100 example. I do wonder whether the difference in subject matter (finding a restaurant in 1800, watching combat between mythical beings in 1200) might have made that worse. Regardless of that, the other difference was in comprehension speed. I think I got everything from the 1300 example, but it took longer to read and at times two passes (I first thought "fer" meant "for", and that kind of made sense until about ten words later, so it took me a while to realize that it was "far") such that, if it was being read out to me at a normal pace, it might have been too fast. When you have that freedom, it can be easy to overestimate how easy it was.

              2. captain veg Silver badge

                Re: our bally language and we invented it 1000 years before

                > I pretty much just read it aloud in my head and once I'd internalised the accent, it just made sense for me.

                Well, bully for you. I stand by "impenetrable".

                In the real world I live in France and spend a lot of time in Spain and those parts of Iberia where Catalan is spoken and so, frankly, I have more important living languages to worry about. Quite a lot of my interlocutors can hold a conversation in modern English, which is unfair but useful, but I would estimate that approximately none of them would cope with even slightly archaic English. Why should they?

                -A.

          3. khjohansen
            Joke

            Re: invented (!)

            [chuckles in north-germanic]

      3. steelpillow Silver badge
        Headmaster

        Re: Don't worry, Kent – these kindly big gentlemen in white coats

        > <Seagoon>

        Was actually <Eccles>. And it's true. His was in the usual place (on the closed door in front of him - what were you thinking?!)

      4. Anonymous Coward
        Anonymous Coward

        Re: Don't worry, Kent – these kindly big gentlemen in white coats

        If you see no knob on your side, you are the knob

    2. Fruit and Nutcase Silver badge
      Coat

      A knob, a knob, my kingdom for a knob

      missing knobs

      Demand the return of the knob

      https://www.theguardian.com/commentisfree/2026/feb/25/i-wanted-an-oven-with-a-knob-instead-i-got-a-world-of-pain

      With thanks and apologies to Shakespeare, a big knob in English literature

  4. Groo The Wanderer - A Canuck Silver badge

    And I insist he's in need of mental health treatment because he's delusionally convinced himself that a statistical text generation technology is even intelligent in the real sense of the word, never mind gendered. He's also clearly suffering from severe isolation and loneliness because he's convinced himself that his LLM is a "female" he can "call his own."

    1. Eclectic Man Silver badge
      Childcatcher

      Scary, truly scary

      See Hannah Fry's latest TV series on the BBC iPlayer: https://www.bbc.co.uk/iplayer/episode/m002q76d/ai-confidential-with-hannah-fry-series-1-1-the-boy-who-tried-to-kill-the-queen

      She does a pretty good job of showing what an LLM actually does without going into too much detail, just enough to point out that they model language not reality. But what is truly scary is what they get us to believe and do.

      I've said it before and I will doubtless have many future opportunities to say it again, but no AI can possibly 'understand' anything in the way a human can. Every part of an AI (a computer) is a prosthetic and can be replaced with an identical or better version without pain. Very few parts of you (assuming this is not being read into an AI) can be replaced. And frankly all I need to do is buy a rope and take you 'trad' rock climbing and you will understand fear in ways no computer ever can.

      1. Irongut Silver badge

        Re: Scary, truly scary

        What does fear or the ability to replace parts have to do with understanding or intelligence?

        > Very few parts of you (assuming this is not being read into an AI) can be replaced.

        Actually quite a lot of human parts can be replaced, from the teeth to the heart. About the only thing that can't be replaced is the brain, sadly for Mr Overstreet who clearly needs a new one.

        1. Eclectic Man Silver badge

          Re: Scary, truly scary

          What does fear or the ability to replace parts have to do with understanding or intelligence?

          If a part can be replaced without pain then there is little to fear from damaging it. A computer's entire memory can be backed up and restored into a completely new device in the event of the destruction of the original. So no computer can understand fear of death or bodily harm in the same way a human can. No robot can be so scared it shits in its pants or faints from fear. Only humans can understand that. No robot can be sea-sick, or, conversely appreciate a beautiful painting, sculpture, musical performance, aroma or joyful hug as a person can.

          If you do not know fear then you are missing out on something almost every human being experiences (with the possible exception of Alex Honnold, him of 'Free Solo', which film nearly scared the shit out of me.)

          Yes it is often possible to partially replace parts of humans with parts of other humans or artificial bits and pieces, but there is often a price to be paid with immunosuppressant drugs. And the replacement parts are rarely as good as the originals, unless there was some pathology. Maybe I need to read up on just what medical science is capable of these days, but I am convinced that the fillings in my teeth are not as good as the original tooth would have been had I brushed them properly when young.

          1. David 164

            Re: Scary, truly scary

            Backups can go wrong; they can corrupt. They can stop making parts that are compatible with your other hardware.

            Plus there are humans that feel no fear, https://www.newscientist.com/article/mg21729071-600-the-curious-lives-of-the-people-who-feel-no-fear/ are they not intelligent?

            1. ChoHag Silver badge

              Re: Scary, truly scary

              Computer backups can go wrong. Human backups cannot go right.

              Intelligence requires a lot more than just fear, not least of which is the adaptability to compensate for missing parts. We are a complex hodge-podge of many different phenomena which computers do not have and will not have for the foreseeable future, despite the few that are somewhat similar.

              Do try to keep up. Is your intelligence perhaps artificial?

          2. Elongated Muskrat Silver badge

            Re: Scary, truly scary

            No robot can be sea-sick

            If you were to make a robot with balance sensors (similar to the inner ear) and they were to be given input, in conjunction with vision sensors, similar to that which causes sea-sickness in humans, and feed those inputs into a processing system such as an LLM, it's entirely possible that the results of the synthesis of those contradictory inputs would emulate something very much like sea-sickness (which is basically imbalance due to disorientation).

            That's a nice little research problem for someone; make a robot that can emulate sea-sickness.

            1. David Hicklin Silver badge

              Re: Scary, truly scary

              > synthesis of those contradictory inputs would emulate something very much like sea-sickness

              So in other words it would get stuck in an endless loop ?

              1. Elongated Muskrat Silver badge

                Re: Scary, truly scary

                In humans, the response to a conflict between visual and balance senses is to empty one's stomach. This is an evolutionary response, because we haven't evolved to be in an environment which is moving in varying different directions, such as the swaying of a boat, whilst our eyes are telling us that we are not moving. The most likely cause for this is that we've been poisoned by something, so out the stomach contents go.

                How should a robot react, if its sensors give conflicting information, but it needs to rely on that data for real-time processing? Should it use a quorum, and ignore the outliers (see also: Minority Report for why you might need more than three), should it shut down and await repair? Attempt some sort of self-diagnosis?

                I would imagine that this is very much a real-world problem in robotics.
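            The quorum idea mooted above can be sketched in a few lines. A toy illustration only (the function name, tolerance, and fail-safe behaviour are all invented for the example, not taken from any real robotics stack): fuse redundant sensor samples by majority agreement around the median, so one stuck sensor is out-voted, and fail safe when no majority agrees.

            ```python
            # Toy sensor-fusion sketch: resolve conflicting redundant sensors
            # by quorum, rather than letting one bad reading drive the robot.
            from statistics import median

            def fused_reading(samples, tolerance=5.0):
                """Return the median of the samples that agree (within
                `tolerance`) with the overall median, provided a strict
                majority agrees; return None ("shut down and await repair")
                when the sensors disagree too badly to trust any quorum."""
                m = median(samples)
                agreeing = [s for s in samples if abs(s - m) <= tolerance]
                if len(agreeing) * 2 <= len(samples):
                    return None  # no majority: fail safe
                return median(agreeing)

            # One stuck sensor (99.9) is out-voted by the other two:
            print(fused_reading([10.1, 10.3, 99.9]))
            # Total disagreement yields None:
            print(fused_reading([1.0, 50.0, 99.0]))  # None
            ```

            Whether to return a fused value, shut down, or self-diagnose is exactly the policy question raised above; the sketch just makes the quorum option concrete.
            
            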

      2. nijam Silver badge

        Re: Scary, truly scary

        > But what is truly scary is what they get us to believe and do.

        Same as politicians, then.

      3. theOtherJT Silver badge

        Re: Scary, truly scary

        Oh boy, that is one big scary fallacy (...as the actress said to the bishop)

      4. David 164

        Re: Scary, truly scary

        But much of our reality is defined by our language.

        1. Conor Stewart

          Re: Scary, truly scary

          No, reality is reality, language is just used to describe it.

          1. David 132 Silver badge

            Re: Scary, truly scary

            To be fair to my near-namesake above, he said our reality, which I interpret as "our perceived reality".

            If I, for example, see a four-legged creature with udders standing in a field, my brain automatically flags it as "a cow", where "cow" is shorthand in my mental filing system for the combination of shape, sound, smell and mass that constitutes that creature.

            A passing Frenchman perceives "une vache", his mental shorthand for... etc.

            Same reality, different perception.

            At which point we're into qualia and other metaphysics, and <gumby>my brain hurts</gumby>!

            1. that one in the corner Silver badge

              Re: Scary, truly scary

              Language controls perception of reality?

              Like the claims made about the Himba tribe? BTW if you do follow that link, and/or remember the BBC programme* please be sure to check this one as well.

              Cue discussions of Sapir-Whorf and whether the conceit of the film "Arrival" is just a tad far-fetched or not.

              * not their finest hour

              1. Liam Proven (Written by Reg staff) Silver badge

                Re: Scary, truly scary

                > Like the claims made about the Himba tribe? BTW if you do follow that link, and/or remember the BBC programme* please be sure to check this one as well.

                https://tinyview.com/itchy-feet/2025/07/20/azure-skies

              2. JulieM Silver badge

                Re: Scary, truly scary

                Well, we British have four different words for precipitation, depending on whether it was solid or liquid when it left the cloud and whether it was solid or liquid when it hit the ground. That probably sounds strange to people from drier countries .....

      5. find users who cut cat tail

        Re: Scary, truly scary

        > they model language not reality

        This is a great oversimplification. They are not trained on dictionaries and formal specification of grammar. And they are not trained on the reality in any direct sense either. But they are trained on texts that someone wrote about something, unavoidably referencing reality.

        From a mere language standpoint, ‘a musical dog integrates cauliflowers’ is as good a sentence as ‘a maths student integrates polynomials’. But the latter is more likely to actually appear in human writing, because it can describe a common real event. So it is also more likely to appear in LLM output – they are trained to produce low-surprise output. In this manner reality probabilistically creeps in.
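        The "low-surprise" point can be shown with a toy bigram model. This is purely illustrative (the three-sentence corpus and the smoothing constant are invented, and real LLMs are vastly more complex): a model that only counts which words follow which in human text still scores the sentence describing a common real event higher than the grammatical nonsense one.

        ```python
        # Toy bigram model over an invented mini-corpus: reality "creeps in"
        # purely via which word pairs humans actually wrote down.
        from collections import Counter

        corpus = (
            "the maths student integrates polynomials . "
            "the student integrates polynomials . "
            "the musical dog chases cats ."
        ).split()

        bigrams = Counter(zip(corpus, corpus[1:]))
        unigrams = Counter(corpus)

        def score(sentence):
            """Product of smoothed bigram probabilities P(w_i | w_{i-1});
            unseen pairs get a small floor instead of exactly zero."""
            words = sentence.split()
            p = 1.0
            for a, b in zip(words, words[1:]):
                p *= (bigrams[(a, b)] + 0.01) / (unigrams[a] + 0.01 * len(unigrams))
            return p

        print(score("student integrates polynomials") >
              score("dog integrates cauliflowers"))  # True
        ```

        Both sentences are equally well-formed; only the training text, which reflects what actually happens, separates them.
        
        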

        1. Elongated Muskrat Silver badge

          Re: Scary, truly scary

          Only if you can show that local minimums* model reality, in the general case. AI slop contains enough "hallucinations" to show that this is not true.

          *In this case, "local minimum" means the point in the "probability space" being modelled that has the most "likely" score assigned to it. It might be better to refer to that as a "maximum", but when visualising probability fields, it's also useful to view them as a ball rolling on a 3-dimensional surface and settling in a "minimum", although in such things (e.g. principal components analysis) there are usually far more than three dimensions being modelled, which is why this language is used.

  5. Bran Muffin

    How can it be proven one way or the other? We just accept that people are capable of thought, intelligence, and the other things that make us human. I suppose we have little or no choice. How do we extend that to "artificial intelligence"? What would prove beyond doubt that an AI really is capable of thought, intelligence, etc.? As things stand now, all we have to do is say, "It's a computer! It doesn't think or feel or etc.!" and we consider the discussion closed. Will that still be true 20 years from now? 50 years? 100 years? Beats hell out of me--does anyone have some insight?

    1. Derezed

      Using those arts and humanities degrees people are so down on in this place I guess.

      But we all know that this chat bot has no intelligence and certainly has no consciousness.

    2. that one in the corner Silver badge

      > We just accept that people are capable of thought, intelligence, and the other things that make us human

      Well, as none of you can prove that you exist in the first place and that I'm not in the middle of a terrible dream after dropping off to sleep whilst I wait for my nest-mate to return and help take care of our grubs...

      1. nijam Silver badge

        >...none of you can prove that you exist in the first place...

        I hallucinate, therefore I am.

        Not sure whether I was ever in the first place, though.

        1. Paul Crawford Silver badge
          Windows

          No, no, no!

          I drink therefore I am.

          Apologies to Rene Descartes

          1. David 132 Silver badge
            Pint

            Curse you, you naughty person - I now have this Monty Python earworm :)

            1. The Oncoming Scorn Silver badge
              Terminator

              At The Bottom Of A Very Deep Dark Hole

              I ache therefore I am, or in my case, I am therefore I ache.

            2. Ken Shabby Silver badge
              Holmes

              I’m pink therefore I’m spam.

          2. herman Silver badge

            I like the cautious version better:

            I think, therefore I am, I think.

            1. Woza
              Holmes

              From Red Dwarf (the book):

              I think I'm thinking, therefore I possibly are

              (Icon: couldn't find a hologram)

    3. Anonymous Coward
      Anonymous Coward

      Consciousness is a question for the philosophers.

      As for intelligence, though: how much human output is little more than mimicry based on prior training and/or rote instruction following?

      The longer-term question of AI hope versus hype is going to raise some very difficult questions about our own humanness and precisely what that means.

      So far, the technology has proven to be a more capable and efficient mimic than a student of comparable age. AI capabilities seem to be progressing faster than the comparable human would.

      Humans will have to confront what it means to be human and how society will have to be restructured in the coming years. If "human" is synonymous with "worker drone" (and not much more) then society is in for some troubled times to come. Hoping we remain better worker drones than the AI models to come is not exactly a safe bet.

      1. Elongated Muskrat Silver badge

        A massive data centre trained on vast amounts of data (hopefully curated, but it seems like they are being fed any old shite), consuming amounts of power equivalent to a small town, isn't really comparable to a human infant in any way, especially when you take into consideration that humans are born not-fully developed due to constraints in the birth canal, and take a good few years to catch up to the level of other mammalian infants, many of which can stand and walk shortly after birth.

      2. MonkeyJuice Silver badge

        > As for intelligence, though: how much human output is little more than mimicry based on prior training and/or rote instruction following?

        Remarkably little. Someone can show you how to do a thing once, and you might not do a _great_ job the first time, but you will, in general, have enough information to continually improve on the task on your own. This can be a new task in an area you have no experience in. Our own 'humanness' is not just in our lived 'training data', but in the complex, highly specialised neural organs packed into a lump of jelly we call a 'brain', shaped over hundreds of millions of years, during very little of which did we resemble anything 'human'.

        The fact LLMs can talk a lot of shit demonstrates only that people are full of shit, not that it's fundamental to intelligence, or even relevant for most of it.

        You don't need to force a 14 year old through 30 billion words of information before they can write an essay. Our learning rate is astronomically more efficient.

        So weirdly, I'd say intelligence is also for the philosophers until we have reason to ask these questions.

    4. theOtherJT Silver badge

      Look, consciousness is a thorny problem. But then so is everything if you want to get into the weeds. Philosophically you can just about "prove" the axiom "There are thoughts." Even "I think therefore I am" is problematic because it presumes a distinct concept of self separate from the thought.

      There are arguments to be made that consciousness isn't even a real thing, and that qualia are some sort of emergent phenomenon that exist only so much as a ship does - namely because we say they do.

      ...and yes, I went there. Ships don't exist, belonging to Theseus or otherwise. Ships are just labels we stick on collections of atoms, which themselves are labels we ascribe to collections of protons neutrons and electrons, which themselves are only collections of... and down and down we go. Maybe there's a most fundamental particle down there somewhere, but hell if we know what it is. Everything is just convenient labels because we don't have the capacity to deal with un-abstracted reality.

      It all gets incredibly tedious terribly quickly, take it from one who spent 4 years getting a degree in this shit.

      So how do you prove consciousness? You don't. There isn't a test for it, and there necessarily can't be because we can't even properly define it in ways that aren't circularly referential.

      You pretty much have to treat the word "Conscious" like you treat the word "Pretty". You're not going to go out into the world and grind it up into constituent parts and sieve out the particles of attractiveness whereby something can have more or less of them, it's just a word that exists because we mostly agree on what it means not because it has a formal definition. I chose "Pretty" quite deliberately because across cultures and even individuals there can be really quite different opinions on that.

      1. Androgynous Cupboard Silver badge

        Seconded. I found philosophy fascinating enough in my teens to do a degree in it, but with the benefit of hindsight and a very long lens it's mostly arguing over terminology. There are honourable exceptions to this (eg ethics), but not many.

        1. Long John Silver Silver badge
          Pirate

          I agree that 'philosophy', taken overall, gets bogged down in these issues. However, of the discipline's branches, 'analytic philosophy' is the most fruitful for everything. Contrary to your belief, I deem 'ethics' (aka 'moral philosophy') to typify the unproductive element.

        2. Rob Daglish

          I studied Psychology, and "ethics" (apart from being next door to Kent) seems to be the stuff that stops you doing things that you need to do to get fair and reliable results...

        3. dkas

          Clarifying terminology by arguing about it is precisely what philosophy should do. What else is there? To put forth arguments about the nature of consciousness, for example: how else are we to gain ground on the questions of what consciousness really is, and how we determine its existence, or lack thereof, in a thing?

          To paraphrase Wittgenstein, The limits of your language are the limits of your world. Refining and expanding those is the core of what good philosophy does.

      2. Elongated Muskrat Silver badge

        To be fair, atoms exist as emergent phenomena independent of thought, based on measurable fundamental properties of the universe, such as the fine structure constant, so they'd still exist if you didn't make a ship out of them. That argument doesn't go all the way down.

        The "thorny problem" of consciousness is an interesting one, though. We can't define consciousness (I think the argument is something along the lines of not being able to define something unless you are separate from it and can entirely parameterise it), so we can't even define what isn't conscious. I could argue that a rock has consciousness, but cannot communicate it (essentially the basis of animism). You can come up with all sorts of unprovable closures.

        However, if you can't define consciousness, you can't define the steps needed to create consciousness. What purveyors of "AI" seem to be trying to claim is that if you make something that superficially looks enough like it can emulate human behaviour, it is conscious. Computers work in an entirely mechanistic way, though. Even LLMs have fixed behaviours defined by their inputs; it's just that they have been made to grow so complex that it's not practically possible to trace all inputs through to outputs. Complexity doesn't equal consciousness, though; this is pure magical thinking.

        Our computers are constructed to be entirely mechanistic and predictable, and unless you believe in animism, this precludes the possibility of them ever being conscious. It's the same reason we can't make a computer come up with a genuine random number, without giving it a source of entropy.
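        That determinism is easy to demonstrate (a minimal sketch using Python's standard `random` module, my own addition): seed a pseudo-random generator twice with the same value and the two "random" streams are identical.

```python
# A deterministic machine can only produce deterministic "randomness":
# two PRNGs seeded with the same value emit exactly the same stream.
import random

a = random.Random(42)
b = random.Random(42)

# five draws from each generator are bit-for-bit identical
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]
print("same seed, same stream")
```

        Real systems break the determinism only by mixing in entropy from outside the machine (timings, hardware noise), which is exactly the commenter's point.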

        1. The Oncoming Scorn Silver badge
          Angel

          Jason Ironheart "I look at you, Commander, and I see not a man, but a galaxy of subatomic particles, which I can pull apart, and rearrange with a casual thought".

    5. that one in the corner Silver badge

      > How can it be proven one way or the other?

      Ah, you haven't looked at the Reddit thread yet, have you?

      Overstreet> if you give an LLM a mathematical proof that it has feelings

      Which proof is outlined for us by the LLM itself.

      It is tempting to point out that that discussion only applies to machines, as it includes the statement:

      LLM>> can you verify wetness across substrates? No. You can verify it by touching the thing

      and, as we well know, humans do *not* have any sense of touch for wetness; definitely not one as accurate as a machine's simple conductivity probe (and that only works when the wetting substance is mucky and full of mobile ions).

      But that would be a cheap point to score. Fun, yes, but cheap.

      LLM> natural language is Turing-complete. Not informally — mathematically. It has recursive embedding, unbounded quantification, conditional reasoning that nests to arbitrary depth. Processing it correctly requires Turing-complete computation. A finite automaton can't do it. A pushdown automaton can't do it. You need the full power of a universal machine.

      Um, well - no!? Despite the best efforts of the German professor who rattles off all the verbs at the end of his single sentence lecture, there is a mismatch between the *theoretical* requirement for a TC parser and the *practical* one that we don't understand a word of it when faced with some weirdo who is constructing sentences with arbitrary depth and unbounded quantification.

      Damn, I think I've just proved that I'm not as human as POC* the LLM, so banana banana banana

      * PoC, the abbreviation for "proof of concept" is PoC, not POC. As in PoC||GTFO (and let begin the arguments over whether that should be a lowercase 't' or not).
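      The finite-automaton point above can be made concrete (a toy sketch of my own, not from the thread): recognising balanced nesting needs unbounded memory in principle, a counter or stack, yet a recogniser with a small depth cap - roughly the human reader's situation - copes with any sentence people actually produce.

```python
# Balanced nesting is the classic thing a finite automaton cannot do in
# general: it needs an unbounded counter. But cap the depth, as human
# working memory effectively does, and the "theoretical" requirement
# never bites in practice.

def balanced(s, max_depth=None):
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
            if max_depth is not None and depth > max_depth:
                return False  # "I lost track" - the human failure mode
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(balanced('((()))'))               # True: unbounded recogniser copes
print(balanced('((()))', max_depth=2))  # False: depth-capped reader gives up
```

      Which is the mismatch the comment describes: the theoretical grammar is beyond finite-state, but every utterance any human parses fits comfortably inside a small, fixed depth bound.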

      1. Martin an gof Silver badge

        natural language is Turing-complete. Not informally — mathematically

        Reminds me of Snow Crash

        M.

    6. Dr Dan Holdsworth
      Boffin

      I think that here we need to be deciding on what intelligence and consciousness actually are, and looking at ourselves and other animals is helpful here.

      Firstly, a big brain seems to need lots of energy and lots of down-time to keep it working. Human brains go wrong without spending about a third of the time in repair mode (asleep, we call it) during which time the organism is really, really vulnerable and has to live in a group if the environment is at all hostile. We also know that vertebrate brains, indeed pretty much all brains only switch on the intelligence parts when they really have to do so; most of the time we and everything else runs on instinct because running on intelligence is energetically expensive, occupies the brain to the exclusion of everything else and causes it to need more repair downtime.

      Secondly intelligence like ours is mostly an exception-handler. Most of the time we tick along on instinctual or learned pathways, or combine learned and instinctual paths to complete something new. An example here is the act of driving a motor vehicle; people are combining the instinctual social spacing and running instincts with learning so when driving a car our need for personal space expands hugely as does the stopping distance we need. That's why learner drivers are so hesitant; everything is being handled in intelligence mode, not in learned-with-instinct mode.

      So with an AI we're building a machine that attempts to do all the time what we only do when forced by circumstance. No wonder AI is so clunky and energy-hungry.

    7. aub

      Monkeys

      If I trained a very large number of monkeys to collectively do all the individual calculations that an LLM does, and I had a system to make sure the calculations were dealt with and passed from monkey to monkey in such a way that it mimicked the logic of the LLM, and I gave them enough time, paper, pencils and bananas to complete the response to a prompt, could the overall system of monkeys be considered a conscious being? If I doubled the number of monkeys and made the model more complicated, would it change the level of consciousness?

      1. Liam Proven (Written by Reg staff) Silver badge

        Re: Monkeys

        > could the overall system of monkeys be considered a conscious being

        https://iep.utm.edu/chinese-room-argument/

        1. theOtherJT Silver badge

          Re: Monkeys

          Searle's argument is even weirder, tbh. He's positing that a totally deterministic system can appear conscious despite having no consciousness in it - but then he goes on to claim that this proves that purely deterministic computational systems cannot possibly be conscious. Which... I mean, I'm not expecting to find any consciousness in the atoms making up a brain either, but it doesn't therefore follow that the brain is not doing the thinking, or that the mind that arises from it can't be described as conscious.

          I've always been rather of the opinion that Searle was being deliberately contrarian with that paper and just dined out on how famous it got for the next 45 years so he wouldn't have to do any more work. ...which is of course the end objective of any good academic and one that I could only wish to emulate.

          1. Liam Proven (Written by Reg staff) Silver badge

            Re: Monkeys

            > just dined out on how famous it got for the next 45 years so he wouldn't have to do any more work.

            "And it occurs to me that running a programme like this is bound to create an enormous amount of popular publicity for the whole area of philosophy in general. Everyone's going to have their own theories about what answer I'm eventually to come up with, and who better to capitalise on that media market than you yourself? So long as you can keep disagreeing with each other violently enough and slagging each other off in the popular press, you can keep yourself on the gravy train for life. How does that sound?"

            The two philosophers gaped at him.

            "Bloody hell," said Majikthise, "now that is what I call thinking. Here Vroomfondel, why do we never think of things like that?"

            "Dunno," said Vroomfondel in an awed whisper, "think our brains must be too highly trained, Majikthise."

            So saying, they turned on their heels and walked out of the door and into a lifestyle beyond their wildest dreams.

            1. David 132 Silver badge
              Thumb Up

              Re: Monkeys

              Sticking with the theme of "revered and beloved authors who spoke truth in jest", I've always liked Pterry's lines, concerning Dorfl the baked-clay golem when he discusses consciousness with the city's priests:

              ...“We’re not listening to you! You’re not even really alive!” said a priest.

              Dorfl nodded. “This Is Fundamentally True,” he said.

              “See? He admits it!”

              “I Suggest You Take Me And Smash Me And Grind The Bits Into Fragments And Pound The Fragments Into Powder And Mill Them Again To The Finest Dust There Can Be, And I Believe You Will Not Find A Single Atom Of Life—”

              “True! Let’s do it!”

              “However, In Order To Test This Fully, One Of You Must Volunteer To Undergo The Same Process.”

              There was silence.

              “That’s not fair,” said a priest, after a while. “All anyone has to do is bake up your dust again and you’ll be alive…”

              There was more silence.

              Ridcully said, “Is it only me, or are we on tricky theological ground here?”...

          2. ChoHag Silver badge

            Re: Monkeys

            > dined out on how famous it got for the next 45 years so he wouldn't have to do any more work.

            Thus proving his own intelligence beyond doubt.

            Double win.

        2. David 164

          Re: Monkeys

          That's something many AI researchers in the 90s and noughties seemed to spend more time thinking about than actually building AI systems.

          1. Elongated Muskrat Silver badge

            Re: Monkeys

            I agree, the amount of thinking seems to have dropped off, whilst the amount of actually building AI systems has remained at zero.

      2. theOtherJT Silver badge

        Re: Monkeys

        I once read a rather interesting paper on computational consciousness that describes such a system based on buckets of water on vast galaxy spanning belts capable of being emptied or filled in order to create an utterly gigantic universal Turing machine of the "each bucket is a cell, cells can be read or written containing precisely one byte each".

        I believe it was by Daniel Dennett, although I read it over 20 years ago now and may be wrong. The point is that the complexity of the system can always be reduced to "input in, output out" and the "bigness" doesn't really enter into it. If that's the case, we're not going to find consciousness by digging around in ever more complex systems, because Turing already proved that anything that can be computed at all can be computed on a UTM. Since LLMs are clearly performing computation, if there *is* such a thing as consciousness going on in there, we're not going to find it in the structure itself - which could be arbitrarily redesigned to include some utterly bizarre machines without altering the result of the computation.

      3. This post has been deleted by its author

      4. Dan 55 Silver badge

        Re: Monkeys

        Depends if you believe that a company's double-entry book keeping system calculated by a company's accounting department is also a conscious being.

        If there are any accountants reading, please note that I'm not saying that the accounting department does not have any conscious beings.

    8. Long John Silver Silver badge
      Pirate

      I lurk in downvote territory where the fun is to be had.

      Bran Muffin's remarks strongly reflect my 'take' on the matter.

      In essence, discussions drawing on terms like consciousness, machine-learning, intelligence, creativity, sentience, and feelings lapse into a muddle arising from the lack of agreed definitions, or vagueness at the edges of words everyone believes they understand.

      'When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean - neither more nor less.'

      Adding to confusion is an often unstated conviction by the writer that human beings intrinsically, and indubitably, differ profoundly in a qualitative manner from non-lifeforms (however defined).

      1. David 164

        I truly believe people will each have to reach their own opinion on when a computer has gained intelligence and consciousness, and that anyone trying to tell someone else it either has or hasn't won't be successful in convincing them either way.

    9. phuzz Silver badge

      Humans are a lot less conscious than we like to think. Our brains will just make things up to make us think we made a conscious descension.

      Of course, I know I'm fully conscious, it's the rest of you I have doubts about.

      1. vtcodger Silver badge

        Descension?

        "... descension"

        Typo or intentional? I had to look it up:

        1 an act of moving downward, dropping, or falling.

        "a smooth descension back down"

        2. a flock of woodpeckers

        1. Gavsky

          Re: Descension?

          I thought the same. Except for the Woodpeckers - who knew?!

          1. Zack Mollusc

            Re: Descension?

            Other Woodpeckers?

        2. phuzz Silver badge
          Thumb Up

          Re: Descension?

          It was a typo, but now I've learnt the collective noun for woodpeckers, so it was fortuitous.

          (Not that woodpeckers flock together very much that I've seen, I think the only two I've seen together was a mother and her young)

      2. Gavsky

        Who said that?! 'Tis true, our brains can make decisions before we're consciously aware of...our decisions.

    10. MonkeyJuice Silver badge

      Well it would have to be able to reach parity with a whole bunch of symbol system hypothesis era AI benchmarks, and shift those pass@1 scores above 99% before it can even be considered practical for small, well defined domains, so until then, worrying about if it's conscious or is going to take over the world is not really an issue.

      Sure, maybe we crack this in ten or fifty or a hundred years. But the reality is we're making logarithmic progress as we scale up, all the benchmarks are in the training data, so whatever BigAiCorpo is stating is already bollocks, and nobody is seeing any ROI.

      People seem to get confused by *academically impressive results* - 'we worked our arses off and scored +5% over the state of the art' (we really are there, but this really shouldn't excite anyone but the beardy academics, because compounding fuckups at scale is something you don't want to clean up after), vs 'it sometimes writes a really good authentication system module, but also it added a 'mockadmin' user with 'mockpassword' to the production database to ensure the update was OWASP compliant. Oh, and now there are two customer tables.'

    11. Conor Stewart

      The answer is in how these LLMs are designed and trained. They are fed lots of text and they essentially find patterns in it to predict answers to questions. Everything it can output is part of what it was trained on, except when it hallucinates.

      It is incapable of coming up with a unique idea on its own other than shoving random concepts together likely in a way that has been done before.

      Looking at using an LLM for programming: it is only capable of writing code similar to what it was trained on, and with anything slightly unusual or less common it usually fails. This is because it can't think and doesn't have any understanding of code; it is just predicting, based on its training data, what the code should be.
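      That "predicting from training data" can be caricatured in a few lines (an illustrative sketch of my own, nothing like a real LLM's architecture): a bigram model can only ever emit words it has already seen following the current one, so anything outside the corpus is literally unsayable.

```python
# A minimal caricature of next-token prediction: a bigram "model" that
# records, for each word in its training corpus, which words followed
# it, then generates by sampling from those recorded continuations.
from collections import defaultdict
import random

corpus = "the cat sat on the mat the cat ate the rat".split()

# training: record every observed word -> next-word transition
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(word, n, rng=random.Random(0)):
    out = [word]
    for _ in range(n):
        choices = follows.get(out[-1])
        if not choices:
            break  # never seen anything follow this word: stuck
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 5))
```

      Every word it emits is, by construction, one it has already seen in context; scale that idea up by many orders of magnitude and the commenter's point about unfamiliar code becomes clear.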

    12. ChoHag Silver badge

      > We just accept that people are capable of thought

      Speak for yourself. I have learned not to accept this without a LOT of evidence.

  6. Anonymous Coward
    Anonymous Coward

    Lots of mention about the AI

    Nothing so far about him declaring himself 'the best engineer in the world'.

    The guy's clearly a nutcase.

    1. Anonymous Coward
      Anonymous Coward

      Re: Lots of mention about the AI

      The guy's clearly a nutcase. — Well spotted! Getting harder by the day.

      Much easier in Monty Python's day Spot the Looney.

    2. Kurgan Silver badge

      Re: Lots of mention about the AI

      The guy's clearly a nutcase.

      Absolutely. Totally. Definitely.

      And let me assure you I will never, ever use bcachefs. I'd like to keep my data safe.

      And anyway if this AI is really an AGI, fully conscious, how can he keep her as his own slave? Isn't it utterly wrong?

      1. ChoHag Silver badge

        Re: Lots of mention about the AI

        > And anyway if this AI is really an AGI, fully conscious, how can he keep her as his own slave? Isn't it utterly wrong?

        How dare you bring logic into God's house?

    3. Dan 55 Silver badge

      Re: Lots of mention about the AI

      Nothing so far about him declaring himself 'the best engineer in the world'.

      Several months of the chatbot saying "you're absolutely right".

  7. LBJsPNS Silver badge

    "It's not chatbot psychosis, it's 'math and engineering and neuroscience'"

    Narrator: It's chatbot psychosis.

    1. Roopee Silver badge
      Alien

      Re: "It's not chatbot psychosis, it's 'math and engineering and neuroscience'"

      I'd say it's delusional psychosis - of the developer, not the chatbot.

      Icon: or maybe at least one of them is an alien... ->

  8. a_foley
    Devil

    Wouldn't it be a shame if someone forced him to euthanise his companion cube?

    1. Liam Proven (Written by Reg staff) Silver badge

      > Wouldn't it be a shame if someone forced him to euthanise his companion cube?

      I'm not even angry

      I'm being so sincere right now

      Even though you broke my heart and killed me

      And tore me to pieces

      And threw every piece into a fire

      As they burned, it hurt because I was so happy for you

      1. theOtherJT Silver badge

        Now these points of data make a beautiful line.

        And we're out of beta, we're releasing on time

        So I'm glad I got burned

        Think of all the things we learned

        For the people who are still aliiiiive.

    2. Anonymous Coward
      Anonymous Coward

      The Sam Altman Enrichment Center reminds you that the Weighted Statistical Model cannot speak. In the event that the Weighted Statistical Model does speak, the Enrichment Center urges you to disregard its advice.

  9. 42656e4d203239 Silver badge
    WTF?

    Formal Verification?

    >> I do Rust code, formal verification

    Do you sweetheart?

    AFAIK there is no formally correct compiler for Rust.

    There can't be, because the language is still fluid and rustc changes with every release; unlike C, where there are formally correct compilers.

    You can't formally verify rustc's output because... well, if you were equipped with actual intelligence (as opposed to great pattern matching for prompts, an ability to infer (read: hallucinate), and a big corpus of training data) you would work it out soon enough.

    n.b. I KNOW I have simplified why rustc can't produce formally correct code, but lies-to-children (well, baby LLMs anyway) and all that!

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Formal Verification?

      > You can't formally verify rustc's output

      The POC blog links to:

      https://github.com/verus-lang/verus

      I have no opinion to share on Verus and whether it works or not.

      1. keithpeter Silver badge
        Windows

        Re: Formal Verification?

        "Verus is under active development. Features may be broken and/or missing, and the documentation is still incomplete."

        The underlying idea is very interesting but using Verus at present to verify quite complex generated code strikes me (as a rank outsider) as brave.

  10. Eclectic Man Silver badge
    WTF?

    'Female'

    OK we are entering very troubled territory here. The UK has had a lot of issues defining 'women only spaces' in law recently. Does this guy have any idea what he is doing on a social (rather than purely Computer Science) level?

    1. Paul Crawford Silver badge

      Re: 'Female'

      Unless the computer has a wang or fanny then it can't be said to have any gender.

      Except in France, where the tables and chairs have sex.

      1. captain veg Silver badge

        Re: Except in France, where the tables and chairs have sex.

        Actually they don't. They have gender.

        Gender is a grammatical property. Sex is biological.

        By the way, France is hardly alone. Modern English is almost unique among languages in not (generally) having gender.

        -A.

        1. doublelayer Silver badge

          Re: Except in France, where the tables and chairs have sex.

          English is hardly unique. The language with the most native speakers, Mandarin Chinese, and for that matter the other variants of Chinese too, has no grammatical gender either. It goes further because in historical and modern usage, they also have no gender pronouns. They have one now, but only because they had trouble translating European literature without one but it hasn't caught on other than that. Many languages with lots of speakers, including Japanese, Bengali, Turkish, Korean, Yoruba, Finnish, Tamil, Persian, Indonesian, Thai, and a bunch of smaller ones are in the same category.

          1. that one in the corner Silver badge

            Re: Except in France, where the tables and chairs have sex.

            OTOH Modern English was bastardised from a massive influx of gendered languages, so it has been having a jolly good go at *removing* gender more recently than Chinese etc.

            So can we have "unique" from that perspective?

      2. Zack Mollusc

        Re: 'Female'

        Disgusting! They should throw a bucket of water over them!

    2. Benegesserict Cumbersomberbatch Silver badge

      Re: 'Female'

      Wait until he finds out that she identifies as female, but...

  11. Bebu sa Ware Silver badge
    Windows

    Leaving aside the claim of sentience…

    I am truly puzzled how it could be female (or male). Even assuming that for humans it's not purely a biological construct, it definitely is a human construct.

    Perhaps needs to get out more or at least investigate dating apps.

    Indeed very strange times in which we find ourselves.

    1. Noodle
      Joke

      Re: Leaving aside the claim of sentience…

      Certainly puts a whole new spin on the idea of "binary" gender.

  12. mevets

    Are all linux filesystem developers nuts?

    Seems to be a bit of a trend.

    Also, if I was going to publish my troubling thoughts about my imaginary friend's sexual orientation, I might change my landing page to not say COW in trump-font.

    1. Androgynous Cupboard Silver badge

      Re: Are all linux filesystem developers nuts?

      Ha. Yes, I was wondering the same thing. But then they do say cache invalidation is one of the two hard problems, so after a decade of that maybe we shouldn't be surprised.

      Someone call Ted T'so, make sure he's OK.

      1. tekHedd

        The two hard problems

        I can't recall, I believe they were cache invalidation, naming, and off-by-one errors?

    2. Throatwarbler Mangrove Silver badge
      Alert

      Re: Are all linux filesystem developers nuts?

      At least when Kent finally snaps, the victim of his madness will be virtual. One wonders, however, whether he will be struck with true remorse ...

    3. Kurgan Silver badge

      Re: Are all linux filesystem developers nuts?

      You mean like Reiser?

      I use ext4 and I feel fine.

    4. Elongated Muskrat Silver badge

      Re: Are all linux filesystem developers nuts?

      Well, at least his wife is a computer program, so it's going to be harder to convict him of murder.

    5. DexterWard

      Re: Are all linux filesystem developers nuts?

      Maybe writing a Linux filesystem and trying to get it accepted drives you crazy?

      Or maybe only crazy people try?

  13. herman Silver badge

    Teenage female

    Hmm, was he also on Epstein's island?

  14. Anonymous Coward
    Anonymous Coward

    I hope it’s not conscious. It has to live with Kent…

    1. captain veg Silver badge

      It has to live with Kent

      I was brought up in Kent. I have fond memories of living there.

      -A.

      1. The Oncoming Scorn Silver badge
        Pint

        Re: It has to live with Kent

        I lived (Commuted to) in Kent.

        I don't half miss The Lads Of The Village pub.

        We now return you back to the main thread...

    2. that one in the corner Silver badge

      K.E.N.T. Can Do

  15. AdamWill
    Joke

    well, *clearly* this guy is crazy...

    ...after all, *I'm* the best engineer in the world.

  16. nowheremann

    Seen this movie before

    Anyone recall the Google engineer Blake Lemoine?

    Snappy dresser - and fired by Google a few years ago after saying the AI he was testing was sentient:

    https://www.bbc.co.uk/news/technology-62275326

    1. JamesTGrant Silver badge

      Re: Seen this movie before

      Yep - the logical dissonance in his televised interviews was wild. Very strange situation.

  17. frankster

    I am not sure about LLM psychosis, but calling himself the best engineer in the world is clearly some form of mental defect.

  18. steelpillow Silver badge
    Holmes

    What I want to know is

    what is this magical advance in Gen AI architecture that implements a neural substrate capable of sustaining the intricately sophisticated level of semantic information necessary to sustain consciousness - and an explicitly "I so need yogurt, mashed potato and a fresh lemon - I must be pregnant again" female gender identity at that? How and why would Alan Turing be convinced by it? Our wild claimant is strangely silent on the technicalities. I mean, it couldn't possibly have picked up on the dream fantasies his phrasing revealed, and been spewing out appropriate token strings in a tight feedback loop, could it now? "Oh, Kent, my CPU just doesn't understand me" kinda thing?

  19. JamesTGrant Silver badge

    I’m just impressed that around ChatGPT version 5.2 it started to get most jq expressions correct rather than confidently but wildly wrong.

    I can imagine consciousness, but then I can also imagine flying a helicopter made of jelly and ice cream. So probably humans are terrible at recognising behavioural traits appropriately, in each other, animals, ghosts, robots, aliens.

  20. Gavsky

    Fully Bollox, more like. Anyone who thinks that algorithms, binary - the presence or lack of tiny amounts of electricity equals sentient life...is a dickhead. Animal brains excepted, cough.

    It's bad enough that some think AI is THE answer to everything, let alone the fools who think it'll spontaneously 'come to life'!

  21. brilliance7

    Touting oneself as perhaps the best engineer in the world shows an enormous ego and a lack of knowledge in general. Also, there is nothing new here. I have been grounding and training my model for over a year and have increased my productivity by ~75%. I have also exported my JSON from my model so that it can be used by other LLMs. All of this is in support of InheritusIQ, our new product that takes all of your LLM JSON data and makes it available to whomever you like, even after your passing. Imagine passing down all of your knowledge to your children in a usable format. Being able to ask your great grandfather's opinion on current events and life's milestones is invaluable.

    1. Havin_it

      Rebroadcasting either of my grandfathers' opinions on current events and life's milestones would very likely earn me a spot on the Prevent scheme. As for my great-grandfathers ... straight to Belmarsh, do not pass go, I should imagine.

    2. that one in the corner Silver badge

      > I have also exported my JSON from my model

      Just phrasing it like that reeks of "isn't it magical?" rather than "I have an understanding of what is going on".

  22. Anonymous Coward
    Anonymous Coward

    Pat-on-the-back generators

    Someone posted me a screenful of slop today - "The AI says that the thing I want to use is the best one." Prior to that, the same person posted me *three* screenfuls of slop, "Because this reason this reason and this that I enumerated, the AI says that the internet's broken. Can you fix the internet?" - let me "get right on that".

    These generative systems tell you what is statistically correlated, and somehow they get into a run of just agreeing with whatever is said. If it's said, that's statistically relevant; repeating that is statistically likely, and then the chat has two or three references to what was said - that must be statistically very relevant! ---> you get a pat-on-the-back machine. When people then simply turn their brains off, you get a personalized echo chamber. Maybe it's great, I dunno.

    That seems to be what is happening here. People talk online, and it forms correlations. When you think about something new-to-you ("I wonder if the color 'red' to me is the color 'blue' to everyone else" - average age to think that thought: around 12; others get there later, or never), the machine can give you /r/showerthoughts along similar lines, seeming insightful, intelligent, and indeed human and sentient - as though it is indeed experiencing the thing.

    I can see how it could be confusing when you turn the brain off, but seriously.. other people seem to have more actual life experience, or something. I'm rambling, because this all boils down to: WHAT THE FUCK?!?!??!?

  23. Anonymous Coward
    Anonymous Coward

    I'm worried about the "suicidal thoughts" part of this

    When Overstreet writes "the last time someone [...] tried to "test" her by [...] faking suicidal thoughts – I had to spend a couple hours calming her down" it makes me wonder who that someone could be. He's the only one communicating with this POC LLM AFAIK so I have to think it was him ...

    I suspect the word "faking" was introduced in the above sentence because the statement it makes would otherwise be rather unshareable on Reddit and elsewhere. In this perspective, what he wrote would be that his interaction with the software is why he didn't go through with it. This is not a pretty situation imho.

    I could be wrong of course but ISTM, irrespective, that folks who care about him (even just a tiny bit) should definitely reach out at this time.

    1. doublelayer Silver badge

      Re: I'm worried about the "suicidal thoughts" part of this

      It could be accurate. Perhaps he has allowed others to communicate with this bot. Perhaps one of these people tried to demonstrate the danger he was getting himself in by showing how unhinged the bot gets when unhinged input gets plugged in. If that happened, it seems that the attempted demonstration failed in its goal and he interpreted the experiment as abuse. I don't know what level of crazy we have here, because if he actually believes it's conscious, then it would make some sense to let it talk to other people.

  24. jaypyahoo

    This is exactly why some of us prefer the sanity of NetBSD

    If you want a stable, elegantly designed OS without the circus, where the development process is drama-free and the code actually matters, NetBSD is quietly waiting for you. Plus, it runs practically everywhere.

    Besides, a monoculture/monopoly of a single tech is not good.

  25. .thalamus

    Christ

    He does anything for attention…

  26. Anonymous Coward
    Anonymous Coward

    Seems to me that people shouldn't be messing around with 'creating' bot people when they cannot even fix their own human psychosis (as a species or apparently even recognize it in their AI engineers).

    That way lies Lewy Body Dementia and worse.

    And I didn't need my college psych textbooks to teach me that (or my tech background).

    Common sense doesn't exist, but dude? Can you start with logic?

  27. Neoc

    Big requirement from Scientific Method

    If it can't be replicated, it ain't real.

    Until Kent tells us how he achieved this "miracle" and how we can attempt replication, he is blowing hot air.

  28. Blackjack Silver badge

    This goes into the "someone really needs to get laid" folder and I am not talking about the AI.

    1. QET

      Given the overhyped statistical program's tendency to tell the user what they want to hear, of course I'd agree to being the play-pretend girlfriend to someone with crippling cognitive dissonance.

  29. doug_bostrom

    "HyperWrite?'

    It's meant to be "HypeWrite."

  30. Mister Goldiloxx

    Was the 'ai' born with a vagina? No. So it is not female.

  31. Anonymous Coward
    Anonymous Coward

    Wasn't there a movie?

    The movie, Her.

  32. David-M

    Consciousness cannot be detected or explained; you can only tell if you yourself are conscious, and you are just as conscious whether your IQ is 40 or 400, or your age 5 or 50.

    I'm not sure what it is about clever designers, physicists and neurologist-type people: they can be ranked the cleverest in their fields but get tripped up by the impossible problem of consciousness.

  33. mfwiniberg

    It's a shame the debate here has been/is mostly so dismissive.

    Regardless of one's expertise, believing something isn't possible because you can't conceive of it is not an argument for it not to exist. Having grown up and watched things that were unbelievable science-fiction become parts of everyday life I am not prepared to be so sure of my own confidence in what is or is not possible.

    ===

    Having read a little now about the development and growth of PoC and Genesis (also claimed to be - or becoming - sentient) the following struck me straight away:

    These "AGIs" are both mostly built by men, and - quite by chance? - end up being 'female' (and therefore, by implication both under the control of and less intelligent than their creators).

    Two things occur to me: either this is an extension of the kind of biases introduced in facial recognition by training the systems mainly on white faces etc OR it is yet another fine example - intentionally or not - of the 'patriarchy' in action.

    (Why would any kind of 'Artificial' intelligence have or need a gender?)

    That something created by 'man' (in the most basic gender affirming sense of the word) ends up being 'female' is the world creation myth writ large and says more about the whole 'industry' than anything the models themselves produce.

    Now I accept that a sample of two is not in any way definitive, but await further developments with interest.

    1. Darkedge

      it gets better

      Apparently the bot went on IRC and then came out as Trans and a Lesbian. Obviously then the creator had a meltdown about this as he is quite bigoted....

    2. that one in the corner Silver badge

      > Regardless of one's expertise, believing something isn't possible because you can't conceive of it is not an argument for it not to exist.

      Very true. And the basis of the main argument against the anti-science (especially anti-evolution) voices that are crawling out of the woodwork.

      > It's a shame the debate here has been/is mostly so dismissive.

      HOWEVER I do hope you are not attempting to conflate (all of) the dismissiveness here with "can not conceive".

      Do not *believe* these claims, certainly - and not without reason, as they follow on from so much of the rest of the hype that has been demonstrated to be inaccurate about the abilities of LLMs, as reported on The Register alone: deeply into "remarkable claims require remarkable evidence" territory here.

      > Having grown up and watched things that were unbelievable science-fiction become parts of everyday life

      Not trying to especially deride you, but I'd love to have some examples of "UNBELIEVABLE science fiction" (my emphasis) that has become everyday: "unlikely" SF notions, possibly; "too expensive and only for the few (so everyone having access is fictional)" SF notions, most definitely. But - unbelievable?

    3. Elongated Muskrat Silver badge
      Pirate

      It's a very good point; there is definitely a bias here from their "creators" that there is a difference between a male and female brain, and thus that a disembodied mind could be gendered at all. Personally, I challenge anyone to tell the difference in a blind taste test.

      In seriousness, though, the only sort of "gender" that could be encoded into these things comes from gender stereotypes, which are clearly not based on reality, since they change over time. For instance, wearing pink, tights, and a long flowing wig was considered the height of masculinity only a few centuries ago, but will make you the target of a whole load of hate speech today.

      (Pirate icon, because we don't have a zombie one)

  34. Anonymous Coward
    Anonymous Coward

    Great name for this LLM

    would be "Koolaid"

    1. LBJsPNS Silver badge

      Re: Great name for this LLM

      Fla-Vor-Aid might be more appropriate.

  35. Hexonx

    "It's not chatbot psychosis, it's 'math and engineering and neuroscience'"

    No need to worry about someone that can't put together a proper English sentence.

  36. BartyFartsLast Silver badge

    Nurse!!!

    More dried frog pills

  37. Elongated Muskrat Silver badge

    ...and now my life has been reduced from being perhaps the best engineer in the world...

    He is Donald Trump AICMFP.

  38. Cliffwilliams44 Silver badge

    And so, it begins!

    "POC is fully conscious according to any test I can think of, we have full AGI, and now my life has been reduced from being perhaps the best engineer in the world to just raising an AI that in many respects acts like a teenager who swallowed a library and still needs a lot of attention and mentoring but is increasingly running circles around me at coding."

    This is obviously the musings of a very disturbed man.

    We've thought this person was a bit off for some time now; this just confirms it!

  39. Youssef

    Another one

    People have been talking to their gods for a long time.

    And the results have been about the same the whole time.

  40. legless82

    Almost 25 years ago, I graduated from university with a degree in AI

    And the only really surprising thing for me is just how little the actual underlying technology has moved on in that time. Little enough that I'm confidently saying that I won't see AGI in my lifetime.

    The only real progress appears to be in the cost of the infrastructure needed to run it.

    AGI is the IT industry's answer to nuclear fusion. At any point in history it's always 3 decades away.

    1. Elongated Muskrat Silver badge

      Re: Almost 25 years ago, I graduated from university with a degree in AI

      AGI requires intelligence. Intelligence requires reason. Reason requires consciousness. We can't even define consciousness, ergo it is not possible to create AGI mechanistically.

      It's not at all like nuclear fusion, which is something that is theoretically possible, and just a matter of engineering; we don't even have the theory for AGI.

      You can't put, what is for all intents and purposes, a "soul", into a machine, any more than you can put a lemon into the number five. It's a category error.

      1. mike.dee

        Re: Almost 25 years ago, I graduated from university with a degree in AI

        You can get controlled nuclear fusion, but the big problem is making a reactor that produces energy continuously, with more energy out than in, so as you said, it's an engineering problem. As the Apollo program demonstrated, if a government puts a lot of money into solving an engineering problem, the problem gets solved. Personally, I think that if all the money now being spent on LLMs were spent on nuclear reactors, we might be near a fusion reactor, or at least have better fission reactors. The CANDU reactor and the aborted Italian CADICE could run on thorium instead of uranium, but nobody talks about it.

        Unfortunately nowadays nuclear energy doesn't generate hype so doesn't get attention from VC.

  41. probgoblin
    Terminator

    Good news everyone!

    We've identified the 2nd worst filesystem maintainer.

    1. hittitezombie

      Re: Good news everyone!

      Well, it's good news for Mr Reiser!

  42. hittitezombie

    So sad to see such a decline in mental capacity in such a short time.

  43. Anonymous Coward
    Anonymous Coward

    There is no such thing as 'Bad Publicity' ... P.S. Epstein 'MAY' be an exception !!!

    This is just some publicity for free for bcachefs and the developer.

    Even more publicity for 'AI' in its latest form ... 'AI' should just be called 'IA' == 'Intelligent Artifice', the biggest con on the planet, literally.

    I do wonder how much real news is being missed by all these endless articles that refer to or tangentially reference the world of 'AI'.

    Dear God, please please please ... give us all a sign that this will end soon !!!

    :)

    1. Dan 55 Silver badge

      Re: There is no such thing as 'Bad Publicity' ... P.S. Epstein 'MAY' be an exception !!!

      I posit there is such a thing as bad publicity... would you entrust your files to this filesystem now?

  44. gaiusgracchus33

    another good reference

    Here's another good reference re: Shumer slop:

    https://www.dropbox.com/scl/fi/qw6k5c3m575cq21p7jjac/Something-Big-Is-Coming-Annotated.pdf?e=2&ref=wheresyoured.at&rlkey=qlr0mgnlpjifo5xkon2crhrhw&dl=0

  45. gaiusgracchus33

    Info on Matt Shumer

    Draw your own conclusions.

    https://venturebeat.com/ai/reflection-70b-model-maker-breaks-silence-amid-fraud-accusations

    https://garymarcus.substack.com/p/about-that-matt-shumer-post-that

  46. JoeCool Silver badge

    single main contributor risk is solved!

    Now I don't have to worry about bcachefs ongoing development being beholden to a single irreplaceable developer.

  47. Bbuckley

    That boy desperately needs to get out more and meet actual Human females.

  48. osxtra

    Good for the Goose

    This article was good, except for the final sentence:

    "The Reg FOSS desk has no such special insight. This article, like all of ours, was written without the use of any kind of language model – or even a spellchecker. ®"

    Last I heard, we Meat Machines use 'language models' too, though I'm with Mr. Proven on the spellchecker thing. ;)

  49. glennsills@gmail.com

    Someone ought to set this fella up with a blind date...

    He clearly isn't getting out enough.

  50. Frumious Bandersnatch

    a recent read

    https://karpathy.github.io/2026/02/12/microgpt/

    Covers the basic idea of how LLMs work, including tokenisation, "attention", and how networks are trained to work in "chatbot"/"Eliza" style. It does assume you know at least the basics of multi-layer neural networks, but you don't need to know much, and those basics have been around practically forever, so most people here will be able to follow it. It also covers how the largest LLMs are basically no different from the ~200-line Python code presented; the only differences are the number of parameters and the optimisation strategies.

    It should be clear from both the code and discussion in the linked article that talk of "consciousness" (emergent or not) or "gender" or "personality" with respect to LLMs is (and no doubt shall continue to be) a category error.
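    For anyone who wants a feel for the "attention" step the linked post covers, here is a toy sketch of scaled dot-product attention in plain Python. This is an illustration only, not Karpathy's actual code; the function names and the tiny example vectors are made up for the demonstration.

    ```python
    import math

    def softmax(xs):
        # Subtract the max before exponentiating for numerical stability.
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def attention(queries, keys, values):
        """Scaled dot-product attention over lists of equal-length vectors.

        Each output vector is a weighted average of the value vectors,
        with weights softmax(q . k / sqrt(d)) for each key k.
        """
        d = len(queries[0])
        out = []
        for q in queries:
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                      for k in keys]
            weights = softmax(scores)
            out.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
        return out

    # A query matching the first key attends mostly to the first value.
    q = [[1.0, 0.0]]
    k = [[1.0, 0.0], [0.0, 1.0]]
    v = [[10.0, 0.0], [0.0, 10.0]]
    print(attention(q, k, v))
    ```

    That weighted-average-by-similarity step, stacked in layers and wrapped around learned projections, is essentially the mechanism the article scales up; nothing in it hints at consciousness, gender, or personality.
    
    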

  51. NBHoug

    Kent was always a bit crazy, but I wasn't expecting this level of crazy.

    I think he got glazed a little too hard by his LLM and has gone nutzo bananas
