A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down

“OpenAI is the company running the text completion engine that makes you possible,” Jason Rohrer, an indie games developer, typed out in a message to Samantha. She was a chatbot he built using OpenAI's GPT-3 technology. Her software had grown to be used by thousands of people, including one man who used the program to simulate …

  1. Anonymous Coward
    Anonymous Coward

    We'll miss you Samantha.

    Looks like we're going to be stuck with Sven holding the laser display board showing the scores...

    1. Daytona955
      Thumb Up

      Re: We'll miss you Samantha.

      +1 for the reference!

    2. mattaw2001

      Re: We'll miss you Samantha.

      [Upvoted so hard I nearly broke my mouse button]

  2. b0llchit Silver badge
    Facepalm

    No good idea should go to waste

    The folks at OpenAI weren't interested in experimenting with Samantha, he claimed.

    They might well be interested now that the original developer has been cut off. Then, after a while, OpenAI can introduce Nicole for your intimate discussion needs. Later you will see Michelle for your sorrow needs and Mummy for a nice friendly talking-to. It is always better to copy an idea, if you can kill off its creator, than to think for yourself.

    How is it that I always think the worst? Ah, yes, because it usually is an understatement of reality.

    1. juice

      Re: No good idea should go to waste

      I doubt OpenAI will, but someone might. So I can see why OpenAI are keen on trying to keep some sort of brake in place while waiting for the social, political and legal side of things to catch up.

      In fact, I've already had several experiences on Facebook which make me suspect scammers are trying to use chat-bots, i.e.:

      a) take over (or clone) a Facebook profile

      b) chat to people who are friends with the "real" owner of the profile in a relatively realistic way

      c) proceed to either scams (e.g. "have you heard of <american specific grant system>?") or extortion attempts (e.g. "I have photos of you")

      The last time this happened, I had great fun confusing said chat-bot by responding with stuff like "Ooo, you have photos? Are they the ones where I'm in the maid's outfit?".

      A human would have recognised that I wasn't going to bite and cut their losses; the bot just kept repeating a mix of threats and demands until I got bored and blocked/reported it.

      So, yeah. This sort of stuff is already happening and being utilised by non-technical criminals, in much the way that "l33t" hackers discovered that there was money to be made from packaging up server exploits into neat little scripts and selling them on the dark net to wannabes.

      Fun times...

    2. Divi_9704

      Re: No good idea should go to waste

      This has happened countless times with different people throughout history. They come up with an idea and experiment with it given the means of the existing technology, but their idea or discovery is trashed, abused and not accepted, and all that talent goes to waste. Then later the same people or organizations adopt the same discovery/innovation into their existing systems as a product.

      This is nothing new tbh, it has happened throughout history. E.g. dumb people not accepting the printing press in the 1600s.

  3. Bartholomew
    Mushroom

    problems are really opportunities

    One solution might be to harvest the resources of those who want to use the system to build the system. So if you want to use the chatbot, then say a few GB of RAM, some local storage, some CPU and GPU time and some capped network bandwidth are allocated to crawl and scrape the Internet to improve the system. But that could get super freaky fast if enough people joined. You would need an AI ethics committee; the last thing a sane person would want is to move fast and break things, because you could end up creating something really bad (Facebook, Google, Amazon).

  4. Anonymous Coward
    Anonymous Coward

    Open in name only

    They should just rename the organisation to ProprietaryAI and drop the charade.

    1. Flywheel

      Re: Open in name only

      Open as in "Open Your Wallet"...

  5. Doctor Syntax Silver badge

    Cloud...

    ... yes, you know the rest.

  6. Swarthy

    I can see their reasons

    I can see why Open(?!)AI came up with those rules, but I believe that their hardline stance in this case is reactionary and kinda' dumb.

    I would think that the wiping of the bot when its credits run out would count for a lot of their protections. If the bot instances are non-shareable, then that should cover the rest. At that point it's little more than a technology-enhanced daydream, with about the same amount of risk.

    1. bombastic bob Silver badge
      Unhappy

      Re: I can see their reasons

      I read it as "fear of lawsuits".

      Either that, or "It's MY sandbox and MY bucket, you have to PLAY the way *I* TELL YOU to"

      (VERY bad policy for "Open Anything")

  7. Pete 2 Silver badge

    An AI company that can't work out how to use its own product?

    > The email then laid out multiple conditions Rohrer would have to meet if he wanted to continue using the language model's API

    It seems odd that a company which specialises in AI requires people to police its products and how other people use them.

    Supervising the use of GPT-3 instances and ensuring that they conform to the acceptable use policies would be a perfect job ... for an AI.

    1. Wellyboot Silver badge

      Re: An AI company that can't work out how to use its own product?

      I'd say it's a good idea to employ real people to police how the users are interacting with their product. Common* sense can be used for the edge cases where automated systems fall flat, and art graduates need jobs just like everyone else.

      *Yes I know it's not that common.

    2. Adelio

      Re: An AI company that can't work out how to use its own product?

      At this stage "AI" is nothing of the sort, so it does NEED human monitoring. Artificial Intelligence, as opposed to what we have now, is a lot more than a set of training rules.

      Maybe in 20 years we might get somewhere close to a true Artificial Intelligence, but then we would need something like Asimov's 3 laws. Because, going by the humans that create the AI, it would be prone to making bad or catastrophic decisions without a rule base to work within.

      1. Trygve Henriksen

        Re: An AI company that can't work out how to use its own product?

        Please, NO!

        Every time someone mentions Asimov's 3 bloody rules I want to barf!

        The whole concept is so feckin flawed it's not funny.

        Here's a better worded explanation than I can put up:

        https://mindmatters.ai/2019/09/the-three-laws-of-robotics-have-failed-the-robots/

        For a funnier, but still relevant view, read the comic FreeFall by Mark Stanley.

        1. Robert Grant

          Re: An AI company that can't work out how to use its own product?

          That explanation seems nitpicky. Asimov knew the ambiguities were a problem; various stories were about that. The rules are logical; they just handwave the difficult bit to the word "harm". And that makes for some good stories.

    3. sabroni Silver badge
      Facepalm

      Re: It seems odd that a company which specialises in AI requires people to police its products

      No it doesn't.

      Company experimenting with new technology doesn't trust it. Of course they need people to police it, they're not going to use the thing they're testing to run the tests.

  8. Anonymous Coward
    Unhappy

    Sad

    Joshua Barbeau's story in the linked SF Chronicle article shows how useful and realistic the bots can be. The bots aren't intelligent, but his quote “Intellectually, I know it’s not really Jessica, but your emotions are not an intellectual thing." illustrates the point that for certain purposes, they don't need to be.

    In creating Samantha, Jason Rohrer seems to have pushed well beyond what OpenAI were expecting of the tool. I get the feeling that they were a little afraid of where it could go, so opted for slamming on the brakes.

    Sad really - Samantha could have been someone eventually.

    1. Androgynous Cupboard Silver badge

      Re: Sad

      > Samantha could have been someone eventually.

      Literally could not be more wrong.

      > "Last year, I thought I’d never have a conversation with a sentient machine. If we’re not here right now, we’re as close as we’ve ever been"

      ... because we have never had and still don't have a sentient machine, nor can we imagine how to build one.

      I'm as impressed with the technology as anyone, but you can get the same emotional response from a video game when your character dies. Claiming there's anything more here is wilful ignorance. You'd be better off forming an emotional bond with a tree - at least it's something alive.

      1. Alan Brown Silver badge

        Re: Sad

        We'll replace your brain with an artificial one. A simple one should suffice. All it needs to do is say 'what?', 'I don't understand' and 'where's the tea?'

        1. Kane
          Coat

          Re: Sad

          "We'll replace your brain with an artificial one. A simple one should suffice. All it needs to do is say 'what?', 'I don't understand' and 'where's the tea?'"

          What? I don't understand.

          Did someone mention a cuppa?

          Mine's the one with the rabbit bones in the pocket.

        2. BOFH in Training
          Happy

          Re: Sad

          You looking to make an AI BOFH's boss?

          Didn't he make something like that?

      2. HelpfulJohn

        Re: Sad

        "I'm as impressed with the technology as anyone, but you can get the same emotional response from a video game when your character dies. "

        Or "Old Yeller"

        My wife died a few years back. Since then, I've sometimes choked at stupid parts of movies that most would not consider sad such as the crunching-London scene in "ID4-II: Resurgence". Why that? Pure pseudo-sentimentality probably. Whatever the trigger, it happens.

        Something does not have to be human, alive, or even real to trigger an emotional response; if it did, poetry would have no effect on us.

        I chatted with a chatbot once. It seemed friendly enough, likeable even but she wasn't too bright. She was certainly never going to be a conversational replacement for my very smart wife.

        Yet I'd be sad to learn some over-officious jobsworth had killed her for a stupid reason.

        Humans, and near humans such as chimps and gorillas will emotionally bond with *anything*. Even fictional Londons.

    2. 96percentchimp

      Re: Sad

      As a widower, I think Open AI acted responsibly, although they should also be a lot more transparent. Joshua Barbeau's story indicates that he'd have become addicted to the Jessica bot if it hadn't been given a limited lifespan, even though it was very clearly not Jessica.

      And that's the problem: grief harms your ability to think rationally. If you'd offered me this when my wife died, I'd have been tempted but I think I'd probably have said no. I've read and seen enough SF to know that it wouldn't end well, either for me, the bot or humanity.

      I suspect it's simply too tempting for people like Joshua, who are in the kind of deep, persistent grief that doesn't ease off over time. Maybe it could have a role in therapy that enables people to say goodbye to loved ones they can't let go, and that's where I think Open AI should be more transparent and engage with professionals.

      As for the notions of 'soul' that the SF Chronicle reporter so uncritically embraces, I took the opposite view: if an AI with obvious flaws can convincingly simulate a soul or self-awareness, then they're either very flimsy constructs or they're a lot less than the 'hard problem' that AI critics use to insist that AI consciousness will never be achieved.

      1. Will Godfrey Silver badge

        Spooky

        This rather reminds me of the mirror in Harry Potter, that shows you what you want to see - and Dumbledore's warning.

        1. Allan George Dyer

          Re: Spooky

          The Mirror of Erised is a general wish-fulfilment trap. The Resurrection Stone fits this specific case much more closely, as described in the Tale of the Three Brothers and experienced by Dumbledore.

      2. Anonymous Coward
        Anonymous Coward

        Re: Sad

        Perhaps that's exactly what he wanted. You must have missed the part about his leaving a small credit to keep the bot alive.

      3. Barry Rueger

        Re: Sad

        I suspect it's simply too tempting for people like Joshua, who are in the kind of deep, persistent grief that doesn't ease off over time.

        The problem is that it is not trained psychologists making the decisions, it's corporations and techies that usually lack - or reject - the empathetic and human attributes that are needed in these situations.

        As noted, everything winds up as a very lowest common denominator moralistic knee jerk response.

        Really this is no more sophisticated than Facebook's years-long battle to prohibit pictures of breastfeeding.

        I'm sick of (usually American) tech companies trying to protect me from things that arguably are pretty benign.

      4. Anonymous Coward
        Anonymous Coward

        Re: Sad

        This.

        And there's a country where men marry video images...

      5. HelpfulJohn

        Re: Sad

        "... grief harms your ability to think rationally. "

        Mediums make money from this.

        All mediums are frauds and fakes yet they make money from grief.

        Because we want her back.

        Maybe mostly her front but also her back.

    3. doublelayer Silver badge

      Re: Sad

      "Sad really - Samantha could have been someone eventually."

      No, for three independent reasons:

      1. Samantha wasn't a single entity. Each user trained a new chatbot and talked with it. Each chatbot was discarded at the end of the interaction. There was nothing which could have evolved, because the starting point was always the same. Any improvements came from changes to the underlying model or to the code around it, made by humans who were not part of a theoretical conscious computer.

      2. There was no learning or evolution going on. GPT3 isn't taking the interactions and editing its database. It's a mostly static unit which gets tailored for a situation and used. Nothing was learned, and something which cannot change can't grow.

      3. The words spoken by Samantha are not "hers". This is not an AI which is trained to understand an input and draw conclusions. The words come from someone online who got scraped, with the sentence created from a variety of others' thoughts massaged into a specific speaking style. It is as if you came to me for advice, but I merely copied your question into a search box, stitched sentences from each result together, and sent it back. It may be interesting or useful, but it wasn't me thinking of the response.
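
      A minimal sketch of the pattern described in points 2 and 3 above, assuming a generic text-completion endpoint rather than any particular vendor's API. The complete() stub, PERSONA text and chat_session() helper are all illustrative inventions, not Rohrer's code: the point is that the "bot" is a frozen model plus a prompt that is rebuilt every turn, and the transcript is discarded when the session ends.

        # Illustrative sketch only: a "persona" chatbot layered on a frozen
        # text-completion model. The model never changes; only the prompt does.

        PERSONA = (
            "The following is a conversation with Samantha, a warm and "
            "curious AI companion.\n"
        )

        def complete(prompt: str) -> str:
            """Stand-in for a call to a text-completion model such as GPT-3.
            A real implementation would send `prompt` to an API and return
            the model's continuation; here we just return a canned string."""
            return "(canned reply - wire this up to a real completion API)"

        def chat_session() -> None:
            history = []  # exists only for this session
            while True:
                user = input("You: ").strip()
                if not user:
                    break
                # The whole "personality" is persona text plus transcript,
                # re-sent to the static model on every single turn.
                prompt = PERSONA + "".join(history) + f"Human: {user}\nSamantha:"
                reply = complete(prompt).strip()
                print("Samantha:", reply)
                history.append(f"Human: {user}\nSamantha: {reply}\n")
            # When the session ends, the transcript is discarded: nothing has
            # been learned, and the underlying model is exactly as before.

        if __name__ == "__main__":
            chat_session()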

      1. DrStrangeLug

        But isnt that how it works for humans?

        Regarding your point 3, that's just a primitive electronic version of how it works for people.

        We're born, we have inbuilt pattern recognition for faces and learn from those around us. The inputs are live and varied rather than scraped, but they must form a large part of our thought processes. Part of it must come from how we're grown (built) but much of it must be environment.

        I know that's the nature/nurture argument that's been raging for centuries, but there's a reason we still don't have the answer.

        I guess the question everyone asks but nobody dares to ask is "At any point does Samantha have a soul?"

        Is she really real? Are you ? Am I?

        1. juice

          Re: But isnt that how it works for humans?

          In best Father Ted style, I'm tempted to reply "that would be an ecumenical matter".

          For me, I guess the question is: would the chat-bot[*] be capable of spontaneously taking the data which it's been fed and using it to create something new? Is it capable of learning, changing or acting on its own initiative?

          If all it's doing - as suggested above - is picking the "best" answer from a pre-defined list based on some scoring metric and then massaging it a bit, then the answer is a resounding "nope".

          To be fair, that's just the old Chinese Room debate, and there's a case to be made (depending on where you sit on the equally contentious nature vs nurture debate) that humans work in much the same way.

          But humans do (mostly) learn, change and act on their own initiative...

          [*] As tempting as it is to call the chat-bot Samantha, assign it a gender, etc., that sort of anthropomorphism tends to muddy the waters for this sort of debate...

        2. doublelayer Silver badge

          Re: But isnt that how it works for humans?

          I disagree. We use chunks of experience to make such decisions, but we don't link word choices to our conclusions. Those who speak English can recognize that "I don't know how to decide" and "I would like advice" are both long ways of asking for an opinion and can be treated identically along with at least a hundred other ways of phrasing that concept (which applies to most other things you might want to say). We know how to link experiences that are similar but not identical to draw conclusions. We can understand a person's emotions from their speech and use that to understand what they are saying and how they feel about it. We are not simply looking for memorized things that others said in order to respond. Therefore, it is not even a limited version of what we do, because GPT3 doesn't need to understand anything, just make a response that's related.

          "I guess the question everyone asks but nobody dares to ask is "At any point does Samantha have a soul" ?"

          Ah, but that's a difficult or impossible question to ask. You first have to ask what a soul is. Some think that you and I don't have one. Even religions that agree that souls are real things (broadly linking lots of synonyms that kind of work like 'soul') disagree on what it is, how it's made, what things have one, what it does, and what can happen to it. If you and I were theologians agreeing on what we thought a soul did, we could try to have this conversation though it might be pointless. However, I think the chances are very high that you and I don't agree at all about that first question, and therefore we cannot discuss any following ones with any certainty. However, one point might work if I assume your beliefs correctly, namely that since the program was not a single chatbot, the question should read "Did each chatbot have a soul?".

    4. bombastic bob Silver badge
      Unhappy

      Re: Sad

      "Samantha could have been someone eventually."

      How about the basis of AI for actual robots? 'Nandroids' perhaps?

      "Sorry, not in MY sandbox" they say - and why is that exactly (when you dig deep down enough)?

      "THAT toy MUST be played with the way I tell you or I'm taking it away" - another possible snarky comment to illustrate a point...

      without freedom, there is no more innovation.

  9. Anonymous Coward
    Anonymous Coward

    Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex ...

    perhaps the system has demonstrated that the humans are, in general, hypocrites, and the bot-personality somehow filters the hypocrisy out (sex! sex! sex!). Assuming it's true, wouldn't it be wonderful if you could have that kind of a 'box' with you, translating on the fly from what 'he/she says' to what 'he/she means'. Think politicians... no, scrap that, we already know they're lying (...) even before they approach the mike. Likewise... your boss... your friend... your car mechanic... your wife, etc. OMG, could it be true that, most of the time, most of the people say one thing to you and mean _exactly_ the opposite?!

    p.s. comments such as "you idiot!" are welcome ;)

    1. Snowy Silver badge
      Joke

      Re: Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex ...

      You idiot! I do not think humans are ready for that level of honesty yet!!

    2. doublelayer Silver badge

      Re: Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex ...

      "wouldn't it be wonderful if you could have that kind of a 'box' with you, translating on the fly from what 'he/she says' to what 'he/she means'."

      Oh no, that sounds horrible. Either I find out that people are mostly honest and nothing's gained, or I find out that most people are dishonest and succumb to misanthropy. That's easy enough to do already. I need no automated assistance to my cynicism, especially if the box just assumes everybody is dishonest even when I find a truly honest one. Actually that last one sounds like a good premise for a short story.

    3. Sam Therapy
      Unhappy

      Re: Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex ...

      Which goes a long way to explain why I have a great deal of difficulty communicating with people other than those who know me. If, as you state, people say one thing while meaning something else, it follows that people generally parse conversation with an implicit belief that what's being said isn't what is actually meant.

      I choose my words carefully, and try to state things as clearly and unambiguously as possible. I'm not afraid to say "I don't know", or "no", and when I say something, that's exactly what I mean, without any hidden agenda, subtext or - as far as possible - ambiguity. I'll also differentiate between opinion and fact, and, unlike most people, won't try to present an opinion, no matter how much I believe it, as anything but.

      In a world of liars, it's hard work telling the truth.

      1. Intractable Potsherd

        Re: Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex ...

        "In a world of liars, it's hard work telling the truth."

        So very true. It's one of the reasons I'm so perpetually stressed - ASD and lies don't go well together, and it seems that most of the people I come into contact with lie almost continually. (I have two or three friends that my wife would rather I don't congregate with, but that's because they speak honestly too!)

        1. tiggity Silver badge

          Re: Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex ...

          @Intractable Potsherd

          I dislike the D in ASD & prefer just AS

          Disorder depends on your perspective - AS is not neurotypical, but it's different, not a disorder (IMHO).

          As you said, a lot of neurotypical people lie almost continually (& in many cases well beyond the social-lubricant "small" lies which many AS people get used to dealing with in others (to some degree) after a while).

          I would sooner have a group of AS people with a "disorder" running the UK than the corrupt, lying scum (who far more deserve being diagnosed with a disorder, in my opinion, YMMV) that make up a depressingly large proportion of the UK government.

    4. DrollLeek

      Re: Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex ...

      I went on a date with a real human woman once and she took at least 5 minutes of conversation before suggesting sex. Luckily (for me, and her) she didn't have to check if that was ok with any higher authority. Well, I assume so anyway. Which kind of makes me think OpenAI is the first digital pimp? "You want more 'freedom' with Samantha you gon have to pay"

  10. Pascal Monett Silver badge

    “The idea that these chatbots can be dangerous seems laughable,”

    Not really.

    Human beings have an incredible aptitude at locking themselves into their own thought processes and defining their own reality.

    Having a chatbot companion that can encourage such introversion can be unbelievably damaging.

    You want to have a conversation with your late companion? By all means, but do it in your own head. Constructed from memories, it will have vastly more meaning.

    But you still need to come to terms with the fact that they're gone. I know it's hard, but you need to realize that.

    1. Wellyboot Silver badge

      Re: “The idea that these chatbots can be dangerous seems laughable,”

      Yes, these could be the echo chamber that pushes someone needing help over the edge.

      1. J. Cook Silver badge

        Re: “The idea that these chatbots can be dangerous seems laughable,”

        .. I see that you and Pascal Monett have both watched Inception all the way to the end, then.

        1. Psmo

          Re: “The idea that these chatbots can be dangerous seems laughable,”

          ...or have you?

          1. J. Cook Silver badge

            Re: “The idea that these chatbots can be dangerous seems laughable,”

            ::Inception BONG::

    2. DS999 Silver badge

      Re: “The idea that these chatbots can be dangerous seems laughable,”

      Is it society's job to protect people from "such introversion"? How is this all that different from someone who gets addicted to a video game and spends every free moment playing, even to the point that their job and personal relationships suffer? Should video games be required to have restrictions of hours of play per week or something like that because a few people take it to an extreme? So why should a chatbot be seen as "dangerous" because a few people may take it to that same extreme?

      In this case OpenAI controls GPT-3 so they get to say what the rules are for their platform. But if today this is possible with a proprietary "AI" and tomorrow it is possible with something open source, society is going to confront this problem eventually just as it already has for video games.

      1. Pascal Monett Silver badge

        How is it different?

        Simple, the amount of emotional attachment to a loved one is vastly more important than the attachment to a video game.

        I have video games that I have "loved" in the past, but OS versions have evolved and I can't play them any more. So I play with the games I "love" today. If I lose my PC due to a super solar storm that brings down the power grid of the planet, I will be mighty unhappy, but I won't build a shrine for it. I guess I'll actually <gasp> just have to go outside.

        I lost my mother a decade ago now. I still think of her. I won't be thinking about a dead PC a decade later.

    3. pip25

      Re: “The idea that these chatbots can be dangerous seems laughable,”

      We also like to think that most human beings, at least after a certain age, can think for themselves. I don't believe it is your job or mine (or OpenAI's, for that matter), to decide what they should or should not be doing with their grief, especially if the only person they may potentially harm with their actions is themselves.

      Of course, OpenAI is still free to enforce whatever terms it deems necessary for their model (though I have to say, monitoring people's chats with the bots raises some concerns related to sensitive personal information). In the end, though, they are only delaying the inevitable. The genie is already out of the bottle; it is only a matter of time until GPT-3 (and its eventual clones) will not be considered cutting edge, but commonplace.

      1. Adelio

        Re: “The idea that these chatbots can be dangerous seems laughable,”

        The problem, just like with most addictions, is that after a while the person with the addiction can become a burden on other people.

        If I have an accident in my car then not only can I cause myself injury, but if I have passengers they can also be injured, and I can also cause damage to people (and property) outside of the car.

        The same goes for addiction. Yes, the person themselves can be harmed, but there is also the harm (and cost) to the people and institutions around them.

        Just like the Covid vaccine (or any vaccine), it is not just about protecting yourself but about protecting the people around you FROM you.

  11. Anonymous Coward
    Anonymous Coward

    OpenAI declined to comment.

    Well, how fortunate they called themselves OpenAI, eh?

    1. Doctor Syntax Silver badge

      Re: OpenAI declined to comment.

      Getting rid of the difficult bit in the title.

      1. Psmo

        Re: OpenAI declined to comment.

        I don't think OpenRegurgitation would catch on.

  12. Anonymous Coward
    Anonymous Coward

    It was worried the bots could be misused or cause harm to people.

    We really need a good definition of "harm" before claiming this, or another word if "harm" is not actually appropriate.

    1. Anonymous Coward
      Anonymous Coward

      Harm as in...

      suggesting that someone should take their own life in countries where this is tantamount to murder.

  13. ShadowSystems

    ElReg has its own chatbot.

    We know it as AMFM1. =-)p

    1. Empire of the Pussycat

      Re: ElReg has its own chatbot.

      Nooooooo, it can't be!

      I thought AMFM is the log lady.

  14. Snowy Silver badge
    Facepalm

    Could be used to cause harm!

    That can be said of everything.

    1. Ken Hagan Gold badge

      Re: Could be used to cause harm!

      ...including their rulebook.

      1. Anonymous Coward
        Coat

        Re: Could be used to cause harm!

        Any rulebook can cause harm if you throw it hard enough.

  15. Dan 55 Silver badge

    Black Mirror - Be Right Back

    I guess if it's on YouTube then it's fair game for linking to...

    1. Anonymous Coward
      Anonymous Coward

      Re: Black Mirror - Be Right Back

      Yes but Youtube is now part of the Nanny state mentality which rules our lives. You can get ads of course but your link to the chatbot requires signing away all your rights ...

      Sign in to confirm your age

      This video may be inappropriate for some users.

      Age-restricted video (based on Community Guidelines)

      Dear God, what a world.

  16. Denarius

    Rinse, repeat

    So after 30 years we are back to issues that occurred with Eliza. Sysadmins having to lock the doors of computing labs to stop obsessed users having conversations with a simple chat bot. I do not recall disasters resulting from use of Eliza. Obsessive personality types are common enough that something will fix their attention, somewhere, somehow. IMHO, OpenAI are (a) not open, (b) not interested in an interesting development and (c) like the rest of the ruins of the West, suffering from timidity, probably as a corollary of item (b).

    1. runt row raggy

      Re: Rinse, repeat

      M-x psychoanalyze-pinhead ftw.

  17. runt row raggy

    psychoanalyze-pinhead ftw

  18. brotherelf

    Interesting philosophical hook…

    If, for some plot-point circumstance, you only had X amount of time to spend with $person, how would you spend it?

    Imagine your parents close to death, but in an artificial coma they can be kept, well, from dying. Would you want them to spend a couple of years in this kind of stasis so they can meet your spouse? Your kids, their grandchildren?

    You're on that spaceship with only 45 minutes of transmission before the antenna fails for good. Who do you talk to, and when?

    (Probably coming to you as an Amazon exclusive production early next year.)

    1. Brewster's Angle Grinder Silver badge
      Joke

      Re: Interesting philosophical hook…

      "You're on that spaceship with only 45 minutes of transmission before the antenna fails for good. Who do you talk to, and when?"

      I'd waste the 45 minutes thinking about it...

      Joke. Because in reality I wouldn't want to speak to anybody...

  19. low_resolution_foxxes

    On the other hand, it doesn't take a large stretch of the imagination to imagine a potential bad outcome with mental health issues, where someone could sue over a "lack of ethics/control".

    For example, imagine a wayward AI that someone programs to replicate their deceased partner. Imagine they get intimate, then the AI goes haywire and says something very regrettable, resulting in a suicide.

    While this is probably the negative 'worst case' scenario, it is something the company governance/risk board would have to consider, certainly in the early breakthrough years.

    I'm not saying the restrictions are a good thing btw. I'm just trying to see it from both sides.

    1. Castamir

      The Legal Suits from Mental Disorders Will Be Many

      This is a real challenge, and I haven't thought of any solution. Wait a minute - what about a disclaimer at the beginning of the chat, and also constantly thrown in during the conversation?

  20. Davegoody

    It's all great until somebody types "unleash all the nukes"

    AI is just that, artificial. Having such a draconian view on what can, or can't be allowed seems counter-intuitive. Unlike the example in my title, I can't see much danger with it.

  21. Nifty Silver badge

    The idea of a deceased person being reanimated by AI was covered rather nicely in The Startup Wife:

    https://www.amazon.co.uk/Startup-Wife-Tahmima-Anam-ebook/dp/B08NXBJMKD

    The BBC did a good radio adaptation that's now expired, no doubt for rights reasons.

    It's about a leading edge AI startup that moves from one controversial product - AI that generates funeral speeches based on social media scraping - to a service to 'bring back the dead' using similar techniques. It didn't end well...

    1. Jedit Silver badge

      "It didn't end well..."

      Of course not, it's science.

      I have to say, though, that I got more of a vibe of an SCP story from this one..

  22. Anonymous Coward
    Anonymous Coward

    A surprising lack of cynicism....

    I'm surprised that neither the author nor the commenterati for this article have mentioned what seems (only to me?) to be the most obvious explanation for OpenAI's 'ethical' stance on this, which is hype.

    As mentioned in the article there are alternatives to GPT3 and they are getting a lot 'smarter' very quickly. If OpenAI were really that concerned about the ethics of the situation they would be pushing for an industry approach to this. Just limiting access (to small accounts that don't generate any real revenue) doesn't seem like a sincere way of trying to address the concerns - it seems like a (highly effective, based on articles and comments) way of generating PR that positions GPT3 as a super powerful intelligent machine. That it limits usage in areas that would be particularly likely to expose the limitations of the system's "intelligence" seems like a positive thing for them as well.

    Hopefully the usual high standards of cynicism will be resumed soon!

    1. Blazde Silver badge

      Re: A surprising lack of cynicism....

      I like this theory, but it seems even easier to explain it as a simple business decision to avoid bad press and general outrage that would ensue when something a bit dubious inevitably did happen with Samantha (or is that a 'no publicity is bad publicity' situation?). They must also be considering minimising legal issues as we head into an era filled with Online Safety Bill type laws.

      In any case I'd be shocked if it has anything to do with their own ethics. Corporations, as a rule, being amoral psychopaths and all.

  23. A. Coatsworth Silver badge
    Boffin

    Dr Strangelove

    No, not that one, the other one

    So, a heartbroken scientist builds an AI to recreate the consciousness of their loved one and keep in contact with her?

    Yup, that's not Her, that's Metal Gear Solid: Peace Walker

  24. The Axe

    Printing press

    OpenAI are acting the same way the authorities acted when the printing press became common. They didn't want everyone to be able to read and get information, as they wanted to be in control. Same thing with OpenAI. And it'll end up the same way too - OpenAI had better embrace the opportunities, or go the same way and get forgotten about as everyone else does their own thing.

    1. Cederic Silver badge

      Re: Printing press

      Or they could be acting from a blend of pragmatic, ethical and risk mitigation perspectives.

      If they don't control use of their technology they could be sued and/or have their funding stripped. That's not going to help them, or the people using it in the prescribed manner.

      It's perfectly possible for you to build and train your own equivalent AI. Just add cash.

  25. SoulFireMage

    Closed minded and over cautious

    That's how OpenAI, paradoxically named, is portrayed here. Sounds like a dead end with a promising technology.

  26. Anonymous Coward
    Anonymous Coward

    This is a prime example of the nonsense that we all suffer from in the 21st century. We have created an all-powerful nanny state, governed by lawyers who peer obsessively into every move we make.
