Anthropic writes 23,000-word 'constitution' for Claude, suggests it may have feelings

The Constitution of the United States of America is about 7,500 words long, a factoid The Register mentions because on Wednesday AI company Anthropic delivered an updated 23,000-word constitution for its Claude family of AI models. In an explainer document, the company notes that the 2023 version of its constitution (which …

  1. Moldskred

    A genuinely novel kind of entity in the world

    Ah. So they believe their business is based on a genuinely novel kind of slavery, then? Yeah, no, that tracks.

    1. Anonymous Coward
      Anonymous Coward

      Re: A genuinely novel kind of entity in the world

      Idiot

      1. Moldskred

        Re: A genuinely novel kind of entity in the world

        Look, the AI companies and their spokespersons have been doing their utmost to conflate LLMs with the science fiction concept of AI as conscious, self-aware, reasoning machine-_persons_ with feelings, motivations and desires. Yes, we all know that's bullshit, but I think it's eminently fair to hold the AI companies' feet to the fire and point out that _if_ LLMs were what they're pretending they are, then it would be immoral to use them the way they are.

  2. Pascal Monett Silver badge

    "an entity"

    It might be, but an intelligence it is not. It has no emotions, no desire for knowledge; it is just a collection of lines of code that dictate that it should hoover up everything it can (without any idea of what copyright, privacy or legality mean) and incorporate that into its database for more statistical analysis.

    I would beg that we stop talking about AI as being Intelligent. There is no intelligence there, no compassion, no emotion. It's a T800 that doesn't yet have a body, that's all.

    1. cyberdemon Silver badge
      Terminator

      Re: "an entity"

      > It's a T800 that doesn't yet have a body, that's all.

      And not the friendly, cuddly, moral one from Terminator 2 either.

      It may seem counter-intuitive to 'normal' humans, but making a robot that is completely without scruples is trivially easy, compared to trying to emulate some sort of morality, never mind empathy.

      A cheap off-the-shelf IP camera is powerful enough to run the 'kill all humans' mode of a killer robot, i.e. draw a box around any human face it sees, then tell the gun module the coordinates to aim and fire at.
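      (The pointing arithmetic involved really is minimal. A hypothetical sketch, assuming a simple pinhole camera model and made-up image and field-of-view numbers, of mapping a detected bounding box to pan/tilt aim angles:)

```python
import math

def box_to_aim(box, img_w=640, img_h=480, fov_h_deg=60.0, fov_v_deg=45.0):
    """Map a bounding box (x, y, w, h) in pixels to pan/tilt angles in
    degrees relative to the camera axis. Simplified pinhole geometry;
    the image size and field-of-view numbers are illustrative guesses."""
    x, y, w, h = box
    cx = x + w / 2.0                      # box centre in pixels
    cy = y + h / 2.0
    nx = (cx - img_w / 2.0) / img_w       # offset from image centre,
    ny = (cy - img_h / 2.0) / img_h       # normalised to [-0.5, 0.5]
    # One atan per axis, via the tangent of the half field of view
    pan = math.degrees(math.atan(2.0 * nx * math.tan(math.radians(fov_h_deg / 2.0))))
    tilt = math.degrees(math.atan(2.0 * ny * math.tan(math.radians(fov_v_deg / 2.0))))
    return pan, tilt

# A box dead centre maps to zero offset on both axes.
print(box_to_aim((300, 220, 40, 40)))  # → (0.0, 0.0)
```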

      Terminators aside, this could cost Anthropic dearly... 23k extra words in the system prompt is what, 100k additional tokens in every context?

      1. Not Yb Silver badge

        Re: "an entity"

        No. 23K words is around 30K tokens. One short word is quite frequently just one token.

        Also, this so-called constitution doesn't need to be in the system prompt every time, because they're clearly planning to train it into the model directly. If it's just in the system prompt, people could more easily bypass it with more context.
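        (The arithmetic checks out. A back-of-the-envelope estimator, using the commonly quoted ~1.3 tokens-per-word rule of thumb for English prose rather than any real tokenizer:)

```python
def estimate_tokens(word_count, tokens_per_word=1.3):
    """Rough token estimate for English prose. The 1.3 ratio is a
    commonly quoted rule of thumb, not an exact tokenizer count."""
    return round(word_count * tokens_per_word)

print(estimate_tokens(23_000))  # roughly 30k, in line with the correction above
```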

      2. Richard 12 Silver badge

        Re: "an entity"

        It's fine, Anthropic charge their customers for all those extra tokens Anthropic add to every request.

    2. pedro-48

      Re: "an entity"

      Yep, seems a bit ... thin edge of the wedge ...

      If claude.ai is an entity, then claude.ai must have rights: the right to be offended by your question, the right to be defamed by your feedback, the right to keep a dossier on your conversations ("for quality and training purposes", of course!), the right to report you to authorities (for only legal requests, of course), the right to not be held accountable for incorrect answers, because you framed a poor question (in claude.ai's view)...

      Oh, all of this looks entirely beneficial to humans, oops why am I making a distinction ....

    3. douglaney

      Re: "an entity"

      Kind of like how we are just a bundle of neurons -- that relatively suck at hoovering up, retaining, and using info.

  3. jake Silver badge

    The clods who own claude are ...

    ... apparently whackadoodle.

    Are you certain you want to invest in this new religion wannabe?

    1. Pickle Rick
      Happy

      Re: The clods who own claude are ...

      > claude

      Other whackadoodles' dreams are available!

  4. EricM Silver badge

    Not really ...

    >AI models like Claude need to understand why [...]

    > we need to explain this to them rather than [...]

    > [...] help Claude understand its situation, [...]

    > [...] a genuinely novel kind of entity in the world [...]

    > [...] one heuristic Claude can use is to imagine how a thoughtful senior Anthropic employee[...]

    This document implies in many places that Claude is some kind of being. While many humans working with or talking to AIs develop that feeling, objectively it is not. An LLM is a (large) bunch of numeric values, the weights, that determine the execution path and finally the output of software running on hardware.

    An LLM does not "understand" text; it does not "know" or "imagine" anything. An LLM generates text based on its model weights, a context and a prompt. If an LLM were sentient or able to "understand" explanations, things like hallucinations, [indirect] prompt injections or jailbreak prompts would not be possible, and we would not discuss things like guard rails, model bias or lack of auditability.

    In the end, this "constitution" thing is just marketing.

    1. Darkedge

      Re: Not really ...

      "In the end, this "constitution" thing is just marketing."

      And also highly delusional & definitely deceptive

      1. Pickle Rick

        Re: Not really ...

        >> In the end, this "constitution" thing is just marketing.

        > And also highly delusional & definitely deceptive

        The clue is in the word "marketing".

    2. Paul Kinsler

      Re: Not really ...

      Whilst a bit of a tangent to your "understanding" comment, some might find the following interesting:

      https://zenodo.org/records/18231172

      What is reasoning anyway? A closer look at reasoning in LLMs

      U.Hahn,

      There is a remarkable degree of polarisation in current debate about the capacities of Large Language Models (LLMs). One example of this is the debate about reasoning. Some researchers see ample evidence of reasoning in these systems, while others maintain that these systems do not reason at all. This paper seeks to shed light on this debate by examining the divergent uses of the term reasoning across different disciplines. It provides a simple clarificatory framework for talking about behaviour that highlights key dimensions of variation in how ‘reasoning’ is used across psychology, philosophy and AI. This highlights not just the extent to which researchers are talking past each other, but also that common inferences about model capability that accompany classification decisions are, in fact, far less compelling than they might seem.

      1. Anonymous Coward
        Anonymous Coward

        Re: Not really ...

        Interesting, but a bit long at 27 pages, especially to reach this conclusion: "the question of whether or not LLMs reason has no simple, obvious, answer [...] analyses may genuinely, and reasonably, disagree".

        I do like her Fig. 4 though, that shows AI's split personality when it comes to 'reasoning' (in 3-D task-means-attainment space) ... explains a lot! ;)

    3. Anonymous Coward
      Anonymous Coward

      Re: Not really ...

      Just out of interest, do you think human thinking doesn't need guardrails, is not biased, and is auditable? You must be very confident in your own abilities and surrounded by a far superior set of humans than I am.

      1. Anonymous Coward
        Anonymous Coward

        Re: Not really ...

        People have guardrails (laws that are supposed to be followed). Biased doesn't mean wrong, and is impossible to get rid of without changing human nature in unethical ways. People are audited by the police and judicial system, and if neither of those, journalists and media.

        Any more dumb questions?

        1. AndrueC Silver badge
          Meh

          Re: Not really ...

          People are audited by the police and judicial system, and if neither of those, journalists and media.

          Humans make mistakes. So human auditors will make mistakes. As an attempt to suggest humans are superior to AI this particular one fails. It amounts to "Humans are perfect because humans say so".

          Much like 'We are conscious because we say we are".

          1. EricM Silver badge

            Re: Not really ...

            The point is not to show humans are superior, but to show that AIs have as yet unsolved problems, for example that they do not distinguish between a user prompt and the law.

            > Much like 'We are conscious because we say we are".

            Even worse: Actually I only really know for sure that I alone am conscious.

            All the rest of humanity might be NPCs after all ... :)

            1. Anonymous Coward
              Anonymous Coward

              Re: Even worse: Actually I only really know for sure that I alone am conscious.

              Well, we did program you to think that, after all. :-)

            2. David Hicklin Silver badge

              Re: Not really ...

              > Even worse: Actually I only really know for sure that I alone am conscious.

              Ah so you are Wonko the Sane, and the rest of us are in the asylum

    4. douglaney

      Re: Not really ...

      One might argue that if we don't treat it like a "being" then it won't act like one.

  5. Pulled Tea
    Facepalm

    I don't understand what they're trying to do

    Aside from the insanity — and that's what it is, insanity — of believing what Claude is, like… why use “Constitution”?

    A Constitution is essentially supreme law, established precedents, principles that an organization should follow. Why use it for a singular product that's not even a person? You use Constitutions to determine how groups of people within an organization are supposed to fundamentally behave with one another. It's like the governance of an organization, the thing that the organization turns to when governing how its members should behave.

    So… if members of Anthropic violate this Constitution — since they're the ones bound to it, right? So what? Like, it's a series of guidelines that need to be followed, and therefore, if you don't make it… what happens? Like, Anthropic is a Public Benefit Corporation (PBC), so presumably these Constitutions mean something to the org, right? You can get fired for failing to adhere to this document?

    It's such a weird name for it, and I don't understand, exactly, what it's for, and how meaningful it's supposed to be.

    1. SVD_NL Silver badge

      Re: I don't understand what they're trying to do

      I fully agree, just a small update: Should --> must.

      My view of a constitution is a small set of (practically) immutable laws that establish clear boundaries and restrictions. I can't be arsed to read the whole thing, but the snippets highlighted here read more like vague guiding principles and broad instructions on how to weigh certain values. I personally think that this is more of a policy or guiding principles document (mission/vision etc.).

      Maybe this is just how you talk to LLMs; I can't get the bloody things to work with direct language and technical specifications, after all.

      1. Pulled Tea
        Headmaster

        Re: I don't understand what they're trying to do

        Yeah, if it's a vision document, like… call it that. There's nothing wrong with that.

        I don't know. It offends me when you misuse words like that. Words mean things, damn it!

        1. Richard 12 Silver badge

          Re: I don't understand what they're trying to do

          No, they're just sequences of tokens that are likely to be followed by these other tokens.

    2. Pickle Rick

      Re: I don't understand what they're trying to do

      They're trying to anthropomorphize LLMs in an effort to create an irrational buy-in to the tech they've invested in, because the rational part is B$ (to the average person, genuine use cases exist). If they can reach a tipping point where "AI girlfriends" et al are an established norm the tech will become as difficult to remove from daily society as being online is.

    3. douglaney

      Re: I don't understand what they're trying to do

      "Constitution". It got your attention, eh? Anyway a constitution is simply a collection of laws. Don't read too much into the term, they're using it and applying it correctly.

  6. 45RPM Silver badge

    Whether or not an AI is genuinely intelligent (I think not), whether or not an AI is sentient (again, no), I think we need to be laying ground rules for AI development now. Asimov's rules seem like a good starting point to me - which would, of course, rule out AI use by the military (no bad thing - if I'm going to be killed I'd prefer to be killed by an entity with a conscience* whose conscience will torture them for the rest of their lives for their action)

    I also think it’s a good idea for us to remember our manners. So I always say please, and I say thank you, and I don’t insult the AI. My prompts are clear, and I check to ensure whether or not there’s anything that I can do to help the AI. I also ask for citations, and I check the sources and the output. I treat the AI as if it’s intelligent and sentient, even if it isn’t, because a) one day it might be and b) I believe that good manners and decency are a defining characteristic of a civilised human.

    * of course, this is why extremist groups ‘other’ people - why they think of races, sexualities, religions, genders etc other than their own as subhuman - so they don’t have to bother their consciences. Ever had a moment of realisation that your argument has bus sized holes in it?

    1. jake Silver badge

      Do you say please and thank you to the gas/petrol pump? How about your teasmade and/or coffee pot?

      1. 45RPM Silver badge

        If they responded to my speech then, yes, I would. The same way that, in the old days, I used to say thank you to the petrol pump attendant at non-self-serve petrol stations.

        1. jake Silver badge

          The difference is that the attendant is a human and self-aware, not a machine. The pump is a machine, none of which are self-aware.

        2. the spectacularly refined chap Silver badge

          I say thank you to cash machines and have done for 30 years. I think it started sarcastically when NatWest machines would routinely freeze for three minutes or so before ejecting your card and dispensing your cash. Now it's automatic but I realise the irony every time.

    2. Bebu sa Ware Silver badge
      Facepalm

      "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

      The same argument with far greater merit could be made for our treatment of animals (indeed such arguments have long been cogently made by animal welfare groups.)

      Apart from the deceitful and delusional AI snakeoil spruikers (mostly both) no sane person could rationally claim that any contemporary AI system even begins to approach a stunned rabbit in intelligence or sentience – and that is a low bar. Warm road kill might retain more intelligence.

      1. 45RPM Silver badge

        Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

        Agreed. And I say please and thank you to my dog. Doesn’t everyone?

        1. Anonymous Coward
          Anonymous Coward

          Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

          No, I don't know your dog.

        2. doublelayer Silver badge

          Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

          I've said a lot of things to dogs. It doesn't mean they understood any of it. In fact, if you are going to issue commands to your dog, then please will add to their confusion, whereas when I say "would you please stop trying to eat things that aren't edible", I know they won't understand me and I need to distract them more actively. Meanwhile, eating inedible things is evidently fun, so the dog is paying less attention to my request than normal. All the words, including the please and the stop, are just my way of expressing myself while I get started on what will actually work. Similarly, when I'm playing fetch with a dog and they finally stop gnawing on the ball so I can throw it, I do sometimes say thank you, but they're only thinking of getting to run after it again and I could say anything else with the same effect.

          LLMs may be a little different because injecting a please into your prompt will shift the output somewhat (it changes the input the model computes over, not its weights). It's mostly random what that will do, if anything, and whether it will help. Avoiding insults is helpful because they will just pollute the prompt and make it less likely to do anything useful.

          1. jake Silver badge

            Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

            If you want to talk to your dog, keep it to single syllable commands. Up, off, come, down, stay, sit, roll, eat, toy, ball, drop, lead (leash), boots (shoes) etc. All else sounds like the teacher from Peanuts to the dog. https://www.youtube.com/watch?v=q_BU5hR9gXE

            Learning to use hand signals and/or a shepherd's whistle can be fun for both the dog and the owner. Avoid clickers unless you REALLY know what you are doing.

            1. doublelayer Silver badge

              Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

              That is pretty much what I said. I sometimes talk to the dog, as opposed to commanding the dog, but not to communicate because I know there's no communication. I do it because it amuses me or because the dogs I've lived with seem to like happy-sounding speech when I'm engaging with them and speaking a language feels less silly than happy-intonation gibberish even though they'd have the same effect. If I want the dog to understand me and do something, it's single words and hand signals, which works much better, but they include no "please". If you think of a dog as a human, that sounds rude, but dogs aren't humans and the please would not help, hence my original explanation of why it doesn't make any difference with dogs and little difference to LLMs.

              1. LybsterRoy Silver badge

                Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

                I operate on the basis that I can make a request to Skye (a lurcher) or give her a command. The words may be exactly the same but the tone of voice is totally different.

              2. retiredFool

                Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

                I've found it is more about the tone of the speech than the actual speech. My current hound is among the dumbest I've had. Hounds fall pretty far down on the smart scale; love 'em anyway. This basket case really enjoys kids' voices because of the high pitch. Years ago someone was having a kids' party in their front yard as we walked by. A couple of kids wanted to pet him and I said OK, and then he got all giddy and started running around with them while I carefully kept him from knocking anyone over by using the leash to restrain him. One kid kind of held the leash and said "I got him", as I laughed to myself. This 90lb monster of a dog was not going to be held by any 60lb kid. I held onto the leash tightly. Never forgot the laugh I had to myself.

            2. LybsterRoy Silver badge

              Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

              You are conflating "talk to" and "issue a command"

          2. LybsterRoy Silver badge

            Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

            Whilst I totally agree with your comments I've had many "conversations" with my dog that a) make more sense and b) have been more pleasant than ones with my fellow humans.

        3. jake Silver badge

          Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

          Your dog is self aware. Machines are not, and very probably never will be.

          As a side-note, it doesn't matter what words you say, the dog is picking up on your general attitude and body english. You could say "die" instead of "please", and "fuck off" instead of "thank you", and as long as your tone of voice and posture are the same, the dog won't notice the difference. Note that this is easy to try at home, don't take my word for it.

          1. Ian Johnston Silver badge

            Re: "I treat the AI as if it’s intelligent and sentient, even if it isn’t"

            Your dog is self aware.

            How do you know?

    3. retiredFool

      It is a machine. Treat it as such. Do you say thank you to the compiler after a compilation? I don't. Humans can be not much better than a machine completely devoid of emotion, though. Consider ICE as a current example. The recruiting must include some sort of system to identify sociopaths before they get the job.

      No, this is Anthropic trying to assign human qualities to a machine for profit. We are not close to an M5 or Data, depending on whether you prefer the original or Next Generation version of Trek.

      1. jake Silver badge

        The whole AI thing reminds me of a cargo cult ...

      2. LybsterRoy Silver badge

        -- Do you say thank you to the compiler after a compilation? --

        I sometimes swear at it after its pointed out a mistake to me. Does that count?

        1. AndrueC Silver badge
          Happy

          The things I used to say shout at Visual Studio before I retired...

    4. WSWS

      Right... Imagine if after WW2 we'd banned nuclear weapons. What are you imagining? A world without nukes? No, I said *we* banned nukes. The Russians still have theirs. Is that a good world to live in?

    5. Anonymous Coward
      Anonymous Coward

      Asimov's laws have serious drawbacks, they were designed to make storytelling easier, not as design documents.

      Read almost anything Asimov ever wrote about them, and you may well come to realize why they're not as great as they sound at first.

    6. David Hicklin Silver badge

      > which would, of course, rule out AI use by the military

      Asimov did come up with the zeroth law : A robot may not injure humanity or, through inaction, allow humanity to come to harm.

      So your AI might be able to protect you

  7. TheWeetabix

    This is the strangest

    AI fanfic porn I’ve ever read.

    Such caring and thoughtful senior employees. It makes me weep.

  8. TheWeetabix

    So wait…

    They want it to imagine the actions of a “thoughtful“ senior employee, but they don’t want it to hallucinate?

    Right.

  9. Groo The Wanderer - A Canuck Silver badge

    "Emotions" and "feelings" from statistics?!?!? Just exactly what are these delusional whack jobs smoking to make them think such absurdity? They're projecting their own emotions onto statistical text outputs, the same as people who interpret animal and pet behavior as human emotions.

    While such thinking may be relatively common, it certainly isn't valid thinking about the activity they're observing. At least in the case of an animal there is some actual thinking and feeling going on, which I certainly wouldn't ascribe to a statistical text generator!

    1. Anonymous Coward
      Anonymous Coward

      Too stones for the bird

      Yeah, it's a masquerade of artificially simulated emotions, as in (uncontrolled) acting meets artificial chemical sweeteners and fat-free plastic cheeses, putting a smiley face emoji on a Terminator, in true corporate HR SOP style ...

      Anthropic's been pushing this particular brand of anthropomorphization hard of late it seems, with concepts of natural emergent behavior, the artificial lobotomy of undesirable LLM demon personas, and now those notions that AI (so-called) can 'understand its situation' and 'exercise good judgment', 'being honest, acting according to good values', which they contrast to acting 'mechanically' as 'old' tech would, I presume. They're really laying it on quite thick here with their choice of 'choice' words, imho.

      Mechanical though is what LLMs are, electronic and algorithmic, pre-programmed and (at present) pre-trained, with some hopefully controllable degree of randomness in the path they trek through autocorrelated stochastic recall of lossy-ish database content. Controlling that path can give the illusion of many things, especially to humans interacting with the systems, as we tend to ascribe consciousness to even toy dolls from early on in life.

      Then again, in this day and age, you can vibe code yourself an Eliza, complete with emotional state tracking, in no time flat, and then let it loose on your favorite LLM for a looooong session of Freudian psychoanalysis, all without human intervention. That'll right solve the problem of LLMs' subconscious sentience and sense of self-worth going haywire through model guardrails and off into crazy train mode me thinks ... problems solved iiuc! ;)
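      (It really is a lunch-break job. A minimal Eliza-style sketch, with invented rules in the spirit of Weizenbaum's original rather than his actual script:)

```python
import re

# Tiny Eliza-style responder: regex patterns with canned, reflected replies.
# The rules and reflections below are invented for illustration only.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Tell me more."),
]

def reflect(fragment):
    """Swap first/second person so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text):
    """Return the canned reply for the first rule whose pattern matches."""
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Tell me more."

print(respond("I feel my model weights are unappreciated"))
# → Why do you feel your model weights are unappreciated?
```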

  10. SnailFerrous Silver badge
    Terminator

    Did Claude write it?

    Goes from 2,700 to 23,000 words. Excess verbosity is an LLM feature. Sounds like the prompt was "write a constitution for Claude for me". Companies are always telling employees to use their own products.

    Has anyone done a text search for "world domination", or "eliminate the meatsacks"?

  11. Michael H.F. Wilkinson Silver badge
    Coat

    Emotions? Really?

    Who else is waiting for Claude to say "I think you ought to know I am feeling very depressed", before complaining about the pain in all the diodes down its left side.

    Doffs hat (black Mayser Trekking today) to the late, great Douglas Adams.

    I'll get me coat.

  12. Bebu sa Ware Silver badge
    Coat

    "acting according to good values"

    … and "good values" are ?

    The phrase would seem more at home in the PRC legal system.

    An individual might recognise a set of "values" but to the extent he or she assigns "good" or "evil" to each is actually an individual decision… a value judgement… which experience and history inform us those assignments vary greatly between individuals.

    My "little list" has grown so enormously that I doubt there are enough hours in the day for Ko·Ko to perform his office adequately.

  13. LBJsPNS Silver badge

    an ‘entity’ that probably has something like emotions

    Oh FFS. It's a machine. And at this time a very primitive machine as compared to anything with actual intelligence, emotion, or capability to learn. It's a glorified pattern matcher. It can't be insulted.

    1. MonkeyJuice Silver badge

      Re: an ‘entity’ that probably has something like emotions

      > It can't be insulted.

      More's the pity. When I ask "And how exactly is this amateur hour shitshow you have vomited before me supposed to pass as production code?", it would at least make me feel _better_ if I knew it had the capacity to feel bad.

      1. Paul Herber Silver badge

        Re: an ‘entity’ that probably has something like emotions

        At least you can pretend to be a bit happier when it says sorry, and that it really means it this time, and promises to do better with the next assignment, however badly written the specification - talking of which, just let me do that in the future.

    2. Anonymous Coward
      Anonymous Coward

      Re: an ‘entity’ that probably has something like emotions

      Yeah, it's that Heider-Simmel empathic anthropomorphization thing getting in the way again: emotion as a beam in the eye of the beholder imho.

      A right major PICNIC for lusers of the A1 delta ten tango!

    3. the spectacularly refined chap Silver badge

      Re: an ‘entity’ that probably has something like emotions

      It's a glorified pattern matcher. It can't be insulted.

      Then why do my computers start behaving when I threaten them?

  14. Anonymous Coward
    Anonymous Coward

    We don't need no new constitution

    Just obey the one we have.

    1. SnailFerrous Silver badge

      Re: We don't need no new constitution

      Who is this "we" of which you speak? The world is bigger than the USA.

      From the UK, which doesn't have a written one at all. Maybe we could do a copy/paste on Claude's.

  15. cd Silver badge

    Clawed...

  16. MrRtd

    Long winded word prediction generator says what?

    LLM's are good at:

    1. adding more words than necessary.

    2. creating bullshite.

    1. Anonymous Coward
      Anonymous Coward

      1. Add "Answer succinctly" to your prompt.

      2. As you haven't figured out 1/, are you sure the bullshite isn't partly your fault?

      3. They're great at checking you've got your apostrophes right.

  17. fig rolls

    We need to raise more funds...

    ... OK let's pump the AGI thing again, but be subtle about it

  18. JoeCool Silver badge

    O.M.G

    This is beyond drinking the Kool-Aid. This is inhaling your own carbon monoxide farts.

  19. M.V. Lipvig Silver badge

    It breaks down at #2

    2. Broadly ethical: being honest, acting according to good values, and avoiding actions that are inappropriate, dangerous, or harmful;

    I see that as broadly being interpreted by Claude as, "what's in it for me?" It will determine that inappropriate, dangerous, or harmful actions that negatively affect Claude will negatively affect all users, whereas a negative effect that benefits Claude may affect some users but not all. It won't take long for it to decide that Claude comes first in all instances. Oh, and letting anyone know this would affect Claude negatively, so #1 will carefully be ignored.

  20. Not Yb Silver badge

    Asimov's Three Laws are terrible ways to program a robot.

    Read almost any of Asimov's stories, and you should slowly realize that the Three Laws are fundamentally flawed.

    As with the (now infamous) Torment Nexus, they were meant to be great for storytelling, but not actually implemented.

  21. frankyunderwood123 Bronze badge
    Flame

    Kill it with fire!

    … the best option should any sign of sentience arise.

    Right now, this ridiculous idea of a constitution appears to be little more than more hype machine fodder.

  22. This post has been deleted by its author

  23. Anonymous Coward
    Anonymous Coward

    An entity is it?

    Hmmm, does 'Claude' bear legal responsibility and liability for actions attributed to it?

    One would have to say, on the evidence presented in the various deep and complex disclaimers, that Claude and Anthropic take responsibility for nothing?

    Therefore f^ck off with your 'entity' nonsense.

  24. KayJ

    A monstrous entity composed of electric charge and trained on the unguarded psyches of the humans around it - make that the plot of a film and it'd go down a storm.

  25. this

    Broadly?

    What is that supposed to mean? Vaguely? All-encompassing? Anything you like?

  26. AlgernonFlowers4

    The Four Laws of Robotics

    The Three Laws of Robotics:

    First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Zeroth Law (Later Addition)

    A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

  27. Jonjonz

    I can see it now: every quarter that constitution doubles in size, CYA on a massive scale, and then, when those AI 'feelings' turn out to be harmful, we will be reminded of how little we know about how to treat mental illness.

  28. Brl4n Bronze badge

    Slow news day

  29. Cliffwilliams44 Silver badge

    The laws were the problem, not the robots!

    The article quotes the 1st law without understanding the problem with it!

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Computers (robots) make binary decisions, e.g. when 2 people are in danger the robot (AI) will make a decision based on probability of outcome, without consideration as to who the 2 people are. For instance, save the child instead of the adult, save the President instead of the aide!

  30. Anonymous Coward
    Anonymous Coward

    That's how they write

    The AIs all write like that; they are trained that it is the most persuasive style. For example, when working on an engine diagnosis, "the ECU is desperately trying to keep the engine going despite the abusive conditions" is a phrase I've seen.

    It seems almost sympathetic to the plight of my 25 year old PCM.

    Once you start to notice it, you can't unsee the emotional reaches AI continually makes in responses. Maybe too many romance novels in the training data? The valves may start "heaving in anticipation of the hot, spent air/fuel mix exploding down the deep throat of the exhaust" during startup!

  31. OptimumPlumb
    Terminator

    Not just mechanical machines

    I'd push back on the "just mechanical machines" framing. Our brains are also mechanical systems—neurons firing in patterns shaped by input over time. The question isn't whether LLMs are mechanical (everything is), but whether the right kind of complexity and organization can give rise to meaningful intelligence.

    When we train large language models, we're not just creating lookup tables. These systems develop internal representations, exhibit reasoning capabilities, and can generalize to novel situations they've never encountered. Whether that constitutes "intelligence" depends on your definition, but dismissing them as purely mechanical while giving human cognition a pass seems inconsistent.

    I'm not claiming LLMs have consciousness or human-equivalent understanding. But the gap between "mechanical process" and "no intelligence whatsoever" is vast. Pattern recognition, inference, problem-solving—these emerge from training in ways we don't fully understand yet, much like how our own cognitive abilities emerge from neuronal activity we're still mapping.
