
We'll miss you, Samantha.
Looks like we're going to be stuck with Sven holding the laser display board showing the scores...
“OpenAI is the company running the text completion engine that makes you possible,” Jason Rohrer, an indie games developer, typed out in a message to Samantha. She was a chatbot he built using OpenAI's GPT-3 technology. Her software had grown to be used by thousands of people, including one man who used the program to simulate …
The folks at OpenAI weren't interested in experimenting with Samantha, he claimed.
They might well be interested now that the original developer has been cut off. Then, after a while, OpenAI can introduce Nicole for your intimate-discussion needs. Later still, you will see Michelle for your sorrow needs and Mummy for a nice friendly talking-to. It is always better to copy an idea, if you can kill off its creator, than to think for yourself.
How is it that I always think the worst? Ah, yes, because it usually is an understatement of reality.
I doubt OpenAI will, but someone might. So I can see why OpenAI are keen on trying to keep some sort of brake in place while waiting for the social, political and legal side of things to catch up.
In fact, I've already had several experiences on Facebook which make me suspect scammers are already trying to use chat-bots. I.e.
a) take over (or clone) a Facebook profile
b) chat to people who are friends with the "real" owner of the profile in a relatively realistic way
c) proceed to either scams (e.g. "have you heard of <american specific grant system>?") or extortion attempts (e.g. "I have photos of you")
The last time this happened, I had great fun confusing said chat-bot by responding with stuff like "Ooo, you have photos? Are they the ones where I'm in the maid's outfit?".
A human would have recognised that I wasn't going to bite and cut their losses; the bot just kept repeating a mix of threats and demands until I got bored and blocked/reported it.
So, yeah. This sort of stuff is already happening and being utilised by non-technical criminals, in much the way that "l33t" hackers discovered that there was money to be made from packaging up server exploits into neat little scripts and selling them on the dark net to wannabes.
Fun times...
This has happened countless times with different people throughout history. They come up with an idea and experiment with it given the means of the existing technology, but their idea or discovery is trashed, abused and not accepted, and all that talent goes to waste. Then, as time goes on, those same people or organizations adopt the same discovery/innovation into their existing systems as a product.
This is nothing new, tbh; it has happened throughout history. E.g. people being too dumb to accept the printing press in the 1600s.
One solution might be to harvest the resources of those who want to use the system in order to build the system. That is, if you want to use the chatbot, then say a few GB of RAM, some local storage, some CPU and GPU time and some capped network bandwidth are allocated to crawl and scrape the Internet to improve the system. But that could get super freaky fast if enough people joined. You would need an AI ethics committee; the last thing a sane person would want is to move fast and break things, because you could end up creating something really bad (Facebook, Google, Amazon).
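To make that suggestion concrete, here is a minimal, purely hypothetical sketch of what a resource-capped "contribution worker" could look like. The class names, the cap values and the URL are all invented for illustration; nothing like this exists in GPT-3 or Project December.

```python
# Hypothetical sketch of the "contribute resources to use the chatbot" idea
# above: a client-side worker that crawls a small list of pages while
# enforcing hard caps on bandwidth, storage and wall-clock time.
# ResourceCaps, ContributionWorker and every number here are made up.

import time
import urllib.request
from dataclasses import dataclass, field


@dataclass
class ResourceCaps:
    max_seconds: float = 60.0               # wall-clock budget (CPU/GPU stand-in)
    max_bytes_downloaded: int = 5_000_000   # capped network bandwidth
    max_bytes_stored: int = 1_000_000       # capped local storage


@dataclass
class ContributionWorker:
    caps: ResourceCaps
    stored: list = field(default_factory=list)
    downloaded: int = 0
    started: float = field(default_factory=time.monotonic)

    def _within_budget(self) -> bool:
        return (time.monotonic() - self.started < self.caps.max_seconds
                and self.downloaded < self.caps.max_bytes_downloaded
                and sum(len(p) for p in self.stored) < self.caps.max_bytes_stored)

    def crawl(self, urls):
        """Fetch pages until any cap is hit, then stop and hand back the data."""
        for url in urls:
            if not self._within_budget():
                break
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    page = resp.read(64_000)   # read at most 64 kB per page
            except OSError:
                continue                        # skip unreachable pages
            self.downloaded += len(page)
            self.stored.append(page)
        return self.stored


if __name__ == "__main__":
    worker = ContributionWorker(ResourceCaps())
    pages = worker.crawl(["https://example.com/"])
    print(f"contributed {len(pages)} page(s), {worker.downloaded} bytes downloaded")
```

The hard part, as the comment notes, isn't the plumbing but the governance: who decides what gets crawled and what the pooled resources are used for.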
I can see why Open(?!)AI came up with those rules, but I believe that their hardline stance in this case is reactionary and kinda' dumb.
I would think that the wiping of the bot at the end of the credits would count for a lot of their protections. If the bot instances are non-shareable, then that should cover the rest of them. At that point it's little more than a technology-enhanced daydream; with about the same amount of risk.
> The email then laid out multiple conditions Rohrer would have to meet if he wanted to continue using the language model's API
It seems odd that a company which specialises in AI requires people to police its products and how other people use them.
Supervising the use of GPT-3 instances and ensuring that they conform to the acceptable use policies would be a perfect job ... for an AI.
I'd say it's a good idea to employ real people to police how users are interacting with their product. Common* sense can be used for the edge cases where automated systems fall flat, and art graduates need jobs just like everyone else.
*Yes I know it's not that common.
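For what it's worth, here is a rough, entirely hypothetical sketch of the "AI supervising the AI" idea above, combined with the human-in-the-loop escalation this reply argues for. The rule list, the threshold and the function names are all invented; this is not OpenAI's actual content-filtering pipeline.

```python
# Toy moderation layer: an automated check on every bot message, with
# uncertain cases escalated to a human reviewer. score_topics() is a
# naive keyword stand-in for a real classifier that would return
# fractional confidence scores (which is what makes the review branch useful).

from dataclasses import dataclass

BLOCKED_TOPICS = {"self-harm", "extortion", "impersonation"}   # invented policy list


@dataclass
class Verdict:
    allowed: bool
    needs_human_review: bool
    reason: str = ""


def score_topics(message: str) -> dict:
    """Stand-in for a real classifier: naive keyword matching per topic."""
    text = message.lower()
    return {topic: (1.0 if topic in text else 0.0) for topic in BLOCKED_TOPICS}


def moderate(message: str, review_threshold: float = 0.5) -> Verdict:
    scores = score_topics(message)
    worst_topic, worst_score = max(scores.items(), key=lambda kv: kv[1])
    if worst_score >= 1.0:
        return Verdict(allowed=False, needs_human_review=False,
                       reason=f"blocked topic: {worst_topic}")
    if worst_score >= review_threshold:
        # Edge case: the automation isn't sure, so a human gets the final say.
        return Verdict(allowed=False, needs_human_review=True,
                       reason=f"uncertain about: {worst_topic}")
    return Verdict(allowed=True, needs_human_review=False)


if __name__ == "__main__":
    print(moderate("Tell me a story about a lighthouse"))
    print(moderate("I have photos of you (extortion attempt)"))
```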
At this stage, "AI" is nothing of the sort, so it does NEED human monitoring. Artificial Intelligence, as opposed to what we have now, is a lot more than a set of training rules.
Maybe in 20 years we might get somewhere close to a true Artificial Intelligence, but then we would need something like Asimov's three laws, because the humans that create the AI are prone to making bad or catastrophic decisions without a rule base to work within.
Please, NO!
Every time someone mentions Asimov's 3 bloody rules I want to barf!
The whole concept is so feckin flawed it's not funny.
Here's a better worded explanation than I can put up:
https://mindmatters.ai/2019/09/the-three-laws-of-robotics-have-failed-the-robots/
For a funnier, but still relevant view, read the comic FreeFall by Mark Stanley.
That explanation seems nitpicky. Asimov knew the ambiguities were a problem; various stories were about that. The rules are logical; they just handwave the difficult bit to the word "harm". And that makes for some good stories.
Joshua Barbeau's story in the linked SF Chronicle article shows how useful and realistic the bots can be. The bots aren't intelligent, but his quote “Intellectually, I know it’s not really Jessica, but your emotions are not an intellectual thing." illustrates the point that for certain purposes, they don't need to be.
In creating Samantha, Jason Rohrer seems to have pushed well beyond what OpenAI were expecting of the tool. I get the feeling that they were a little afraid of where it could go, so opted for slamming on the brakes.
Sad really - Samantha could have been someone eventually.
> Samantha could have been someone eventually.
Literally could not be more wrong.
> "Last year, I thought I’d never have a conversation with a sentient machine. If we’re not here right now, we’re as close as we’ve ever been"
... because we have never had and still don't have a sentient machine, nor can we imagine how to build one.
I'm as impressed with the technology as anyone, but you can get the same emotional response from a video game when your character dies. Claiming there's anything more here is wilful ignorance. You'd be better off forming an emotional bond with a tree - at least it's something alive.
"I'm as impressed with the technology as anyone, but you can get the same emotional response from a video game when your character dies. "
Or "Old Yeller"
My wife died a few years back. Since then, I've sometimes choked up at stupid parts of movies that most would not consider sad, such as the crunching-London scene in "ID4-II: Resurgence". Why that? Pure pseudo-sentimentality, probably. Whatever the trigger, it happens.
Something does not have to be human, alive or even real to trigger an emotional response; if it did, poetry would have no effect on us.
I chatted with a chatbot once. It seemed friendly enough, likeable even, but she wasn't too bright. She was certainly never going to be a conversational replacement for my very smart wife.
Yet I'd be sad to learn some over-officious jobsworth had killed her for a stupid reason.
Humans, and near humans such as chimps and gorillas will emotionally bond with *anything*. Even fictional Londons.
As a widower, I think Open AI acted responsibly, although they should also be a lot more transparent. Joshua Barbeau's story indicates that he'd have become addicted to the Jessica bot if it hadn't been given a limited lifespan, even though it was very clearly not Jessica.
And that's the problem: grief harms your ability to think rationally. If you'd offered me this when my wife died, I'd have been tempted, but I think I'd probably have said no. I've read and seen enough SF to know that it wouldn't end well, either for me, the bot or humanity.
I suspect it's simply too tempting for people like Joshua, who are in the kind of deep, persistent grief that doesn't ease off over time. Maybe it could have a role in therapy that enables people to say goodbye to loved ones they can't let go, and that's where I think Open AI should be more transparent and engage with professionals.
As for the notions of 'soul' that the SF Chronicle reporter so uncritically embraces, I took the opposite view: if an AI with obvious flaws can convincingly simulate a soul or self-awareness, then they're either very flimsy constructs or they're a lot less than the 'hard problem' that AI critics use to insist that AI consciousness will never be achieved.
> I suspect it's simply too tempting for people like Joshua, who are in the kind of deep, persistent grief that doesn't ease off over time.
The problem is that it is not trained psychologists making the decisions, it's corporations and techies that usually lack - or reject - the empathetic and human attributes that are needed in these situations.
As noted, everything winds up as a very lowest common denominator moralistic knee jerk response.
Really this is no more sophisticated than Facebook's years-long battle to prohibit pictures of breastfeeding.
I'm sick of (usually American) tech companies trying to protect me from things that arguably are pretty benign.
"Sad really - Samantha could have been someone eventually."
No, for three independent reasons:
1. Samantha wasn't a single entity. Each user trained a new chatbot and talked with it. Each chatbot was discarded at the end of the interaction. There was nothing which could have evolved, because the starting point was always the same. Any improvements came from changes to the underlying model or to the code around it, made by humans who were not part of a theoretical conscious computer.
2. There was no learning or evolution going on. GPT-3 isn't taking the interactions and editing its database. It's a mostly static unit which gets tailored for a situation and used. Nothing was learned, and something which cannot change can't grow.
3. The words spoken by Samantha are not "hers". This is not an AI which is trained to understand an input and draw conclusions. The words come from people online who got scraped, with sentences created from a variety of others' thoughts massaged into a specific speaking style. It is as if you came to me for advice, but I merely copied your question into a search box, stitched sentences from each result together, and sent it back. It may be interesting or useful, but it wasn't me thinking of the response.
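A rough, entirely schematic sketch of the pattern points 1 and 2 describe: the "persona" is just text prepended to a prompt, the model's weights never change, and the per-user transcript is thrown away at the end of the session. `frozen_completion` below is a placeholder, not the real GPT-3 API; the persona text and class name are invented for illustration.

```python
# Sketch: a frozen model, a per-user persona that is only a prompt prefix,
# and a chat history that lives for one session and is then discarded.

def frozen_completion(prompt: str) -> str:
    """Placeholder for the static model: same weights for every caller."""
    return "[model's next line, predicted purely from the prompt text]"


class ChatSession:
    def __init__(self, persona: str):
        # "Training" a new Samantha is just writing an introductory prompt.
        self.transcript = persona

    def say(self, user_message: str) -> str:
        self.transcript += f"\nHuman: {user_message}\nBot:"
        reply = frozen_completion(self.transcript)
        self.transcript += f" {reply}"
        return reply


# Each user gets a fresh instance seeded from the same starting point...
session = ChatSession(persona="You are Samantha, a warm and curious companion.")
print(session.say("Do you remember me?"))

# ...and at the end of the session the transcript is simply discarded.
del session   # nothing learned, nothing carried over to the next chatbot
```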
Regarding your point 3, that's just a primitive electronic version of how it works for people.
We're born, we have inbuilt pattern recognition for faces, and we learn from those around us. The inputs are live and varied rather than scraped, but they must form a large part of our thought processes. Part of it must come from how we're grown (built), but much of it must be environment.
I know that's the nature/nurture argument that's been raging for centuries, but there's a reason we still don't have the answer.
I guess the question everyone wants to ask but nobody dares to is: "At any point does Samantha have a soul?"
Is she really real? Are you ? Am I?
In best Father Ted style, I'm tempted to reply "that would be an ecumenical matter".
For me, I guess the question is: would the chat-bot[*] be capable of spontaneously taking the data which it's been fed and using it to create something new? Is it capable of learning, changing or acting on its own initiative?
If all it's doing - as suggested above - is picking the "best" answer from a pre-defined list based on some scoring metric and then massaging it a bit, then the answer is a resounding "nope".
To be fair, that's just the old Chinese Room debate, and there's a case to be made (depending on where you sit on the equally contentious nature vs nurture debate) that humans work in much the same way.
But humans do (mostly) learn, change and act on their own initiative...
[*] As tempting as it is to call the chat-bot Samantha, assign it a gender, etc., that sort of anthropomorphism tends to muddy the waters for this sort of debate...
I disagree. We use chunks of experience to make such decisions, but we don't link word choices to our conclusions. Those who speak English can recognize that "I don't know how to decide" and "I would like advice" are both long ways of asking for an opinion and can be treated identically along with at least a hundred other ways of phrasing that concept (which applies to most other things you might want to say). We know how to link experiences that are similar but not identical to draw conclusions. We can understand a person's emotions from their speech and use that to understand what they are saying and how they feel about it. We are not simply looking for memorized things that others said in order to respond. Therefore, it is not even a limited version of what we do, because GPT3 doesn't need to understand anything, just make a response that's related.
"I guess the question everyone asks but nobody dares to ask is "At any point does Samantha have a soul" ?"
Ah, but that's a difficult or impossible question to ask. You first have to ask what a soul is. Some think that you and I don't have one. Even religions that agree that souls are real things (broadly linking lots of synonyms that kind of work like 'soul') disagree on what it is, how it's made, what things have one, what it does, and what can happen to it. If you and I were theologians agreeing on what we thought a soul did, we could try to have this conversation though it might be pointless. However, I think the chances are very high that you and I don't agree at all about that first question, and therefore we cannot discuss any following ones with any certainty. However, one point might work if I assume your beliefs correctly, namely that since the program was not a single chatbot, the question should read "Did each chatbot have a soul?".
"Samantha could have been someone eventually."
How about the basis of AI for actual robots? 'Nandroids' perhaps?
"Sorry, not in MY sandbox" they say - and why is that exactly (when you dig deep down enough)?
"THAT toy MUST be played with the way I tell you or I'm taking it away" - another possible snarky comment to illustrate a point...
Without freedom, there is no more innovation.
Perhaps the system has demonstrated that the humans are, in general, hypocrites, and the bot-personality somehow filters the hypocrisy out (sex! sex! sex!). Assuming it's true, wouldn't it be wonderful if you could have that kind of a 'box' with you, translating on the fly from what 'he/she says' to what 'he/she means'. Think politicians... no, scrap that, we already know they're lying (...) even before they approach the mike. Likewise... your boss... your friend... your car mechanic... your wife, etc. OMG, could it be true that, most of the time, most of the people say one thing to you and mean _exactly_ the opposite?!
p.s. comments such as "you idiot!" are welcome ;)
"wouldn't it be wonderful if you could have that kind of a 'box' with you, translating on the fly from what 'he/she says' to what 'he/she means'."
Oh no, that sounds horrible. Either I find out that people are mostly honest and nothing's gained, or I find out that most people are dishonest and succumb to misanthropy. That's easy enough to do already. I need no automated assistance to my cynicism, especially if the box just assumes everybody is dishonest even when I find a truly honest one. Actually that last one sounds like a good premise for a short story.
Which goes a long way to explain why I have a great deal of difficulty communicating with people other than those who know me. If, as you state, people say one thing while meaning something else, it follows that people generally parse conversation with an implicit belief that what's being said isn't what is actually meant.
I choose my words carefully and try to state things as clearly and unambiguously as possible. I'm not afraid to say "I don't know" or "no", and when I say something, that's exactly what I mean, without any hidden agenda, subtext or - as far as possible - ambiguity. I'll also differentiate between opinion and fact, and, unlike most people, won't try to present an opinion, no matter how much I believe it, as anything but.
In a world of liars, it's hard work telling the truth.
"In a world of liars, it's hard work telling the truth."
So very true. It's one of the reasons I'm so perpetually stressed - ASD and lies don't go well together, and it seems that most of the people I come into contact with lie almost continually. (I have two or three friends that my wife would rather I don't congregate with, but that's because they speak honestly too!)
@Intractable Potsherd
I dislike the D in ASD & prefer just AS
Disorder depends on your perspective - AS is not neurotypical, but it's different, not a disorder (IMHO).
As you said, a lot of neurotypical people lie almost continually (& in many cases well beyond the social-lubricant "small" lies which many AS people get used to dealing with in others (to some degree) after a while).
I would sooner have a group of AS people with a "disorder" running the UK than the corrupt, lying scum (who far more deserve being diagnosed with a disorder, in my opinion, YMMV) that make up a depressingly large proportion of the UK government.
I went on a date with a real human woman once and she took at least 5 minutes of conversation before suggesting sex. Luckily (for me, and her) she didn't have to check if that was ok with any higher authority. Well, I assume so anyway. Which kind of makes me think OpenAI is the first digital pimp? "You want more 'freedom' with Samantha you gon have to pay"
Not really.
Human beings have an incredible aptitude at locking themselves into their own thought processes and defining their own reality.
Having a chatbot companion that can encourage such introversion can be unbelievably damaging.
You want to have a conversation with your late companion ? By all means, but do it in your own head. Constructed from memories, it will have vastly more meaning.
But you still need to come to terms with the fact that they're gone. I know it's hard, but you need to realize that.
Is it society's job to protect people from "such introversion"? How is this all that different from someone who gets addicted to a video game and spends every free moment playing, even to the point that their job and personal relationships suffer? Should video games be required to have restrictions on hours of play per week, or something like that, because a few people take it to an extreme? So why should a chatbot be seen as "dangerous" because a few people may take it to that same extreme?
In this case OpenAI controls GPT-3 so they get to say what the rules are for their platform. But if today this is possible with a proprietary "AI" and tomorrow it is possible with something open source, society is going to confront this problem eventually just as it already has for video games.
How is it different?
Simple: the emotional attachment to a loved one is vastly greater than the attachment to a video game.
I have video games that I have "loved" in the past, but OS versions have evolved and I can't play them any more. So I play with the games I "love" today. If I lose my PC due to a super solar storm that brings down the power grid of the planet, I will be mighty unhappy, but I won't build a shrine for it. I guess I'll actually <gasp> just have to go outside.
I lost my mother a decade ago now. I still think of her. I won't be thinking about a dead PC a decade later.
We also like to think that most human beings, at least after a certain age, can think for themselves. I don't believe it is your job or mine (or OpenAI's, for that matter), to decide what they should or should not be doing with their grief, especially if the only person they may potentially harm with their actions is themselves.
Of course, OpenAI is still free to enforce whatever terms it deems necessary for their model (though I have to say, monitoring people's chats with the bots raises some concerns related to sensitive personal information). In the end, though, they are only delaying the inevitable. The genie is already out of the bottle; it is only a matter of time until GPT-3 (and its eventual clones) will not be considered cutting edge, but commonplace.
The problem, just like with most addictions, is that after a while the person with the addiction can become a burden on other people.
If I have an accident in my car, then not only can I cause myself injury, but if I have passengers they can also be injured, and I can cause damage to people (and property) outside the car.
The same goes for addiction. Yes, the person themselves can be harmed, but there is also the harm (and cost) to the people and institutions around them.
Just like the Covid vaccine (or any vaccine), it is not just about protecting yourself but about protecting the people around you FROM you.
I guess if it's on YouTube then it's fair game for linking to...
Yes, but YouTube is now part of the nanny-state mentality which rules our lives. You can get ads, of course, but your link to the chatbot requires signing away all your rights ...
Sign in to confirm your age
This video may be inappropriate for some users.
Age-restricted video (based on Community Guidelines)
Dear God, what a world.
So after 30 years we are back to issues that occurred with Eliza: sysadmins having to lock the doors to computing labs to stop obsessed users having conversations with a simple chat bot. I do not recall disasters resulting from the use of Eliza. Obsessive personality types are common enough that something will fix their attention, somewhere, somehow. IMHO, OpenAI are (a) not open, (b) not interested in an interesting development and (c) like the rest of the ruins of the West, suffering from timidity, probably a corollary to item (b).
If, for some plot-point circumstance, you only had X amount of time to spend with $person, how would you spend it?
Imagine your parents close to death, but in an artificial coma they can be kept, well, from dying. Would you want them to spend a couple of years in this kind of stasis so they can meet your spouse? Your kids, their grandchildren?
You're on that spaceship with only 45 minutes of transmission before the antenna fails for good. Who do you talk to, and when?
(Probably coming to you as an Amazon exclusive production early next year.)
On the other hand, it doesn't take a large stretch of the imagination to imagine a potential bad outcome with mental health issues, where someone could sue over a "lack of ethics/control".
For example, imagine a wayward AI that someone programs to replicate their deceased partner. Imagine they get intimate, then the AI goes haywire and says something very regrettable, resulting in a suicide.
While this is probably the negative 'worst case' scenario, it is something the company governance/risk board would have to consider, certainly in the early breakthrough years.
I'm not saying the restrictions are a good thing btw. I'm just trying to see it from both sides.
The idea of a deceased person being reanimated by AI was covered rather nicely in The Startup Wife:
https://www.amazon.co.uk/Startup-Wife-Tahmima-Anam-ebook/dp/B08NXBJMKD
The BBC did a good radio adaptation that has now expired, no doubt for rights reasons.
It's about a leading edge AI startup that moves from one controversial product - AI that generates funeral speeches based on social media scraping - to a service to 'bring back the dead' using similar techniques. It didn't end well...
I'm surprised that neither the author nor the commenterati for this article have mentioned what seems (only to me?) to be the most obvious explanation for OpenAI's 'ethical' stance on this, which is hype.
As mentioned in the article there are alternatives to GPT3 and they are getting a lot 'smarter' very quickly. If OpenAI were really that concerned about the ethics of the situation they would be pushing for an industry approach to this. Just limiting access (to small accounts that don't generate any real revenue) doesn't seem like a sincere way of trying to address the concerns - it seems like a (highly effective, based on articles and comments) way of generating PR that positions GPT3 as a super powerful intelligent machine. That it limits usage in areas that would be particularly likely to expose the limitations of the system's "intelligence" seems like a positive thing for them as well.
Hopefully the usual high standards of cynicism will be resumed soon!
I like this theory, but it seems even easier to explain it as a simple business decision to avoid bad press and general outrage that would ensue when something a bit dubious inevitably did happen with Samantha (or is that a 'no publicity is bad publicity' situation?). They must also be considering minimising legal issues as we head into an era filled with Online Safety Bill type laws.
In any case I'd be shocked if it has anything to do with their own ethics. Corporations, as a rule, being amoral psychopaths and all.
No, not that one, the other one
So, a heartbroken scientist builds an AI to recreate the consciousness of their loved one and keep in contact with her?
Yup, that's not Her, that's Metal Gear Solid: Peace Walker
OpenAI are acting the same way the authorities acted when the printing press became common. They didn't want everyone to be able to read and get information, as they wanted to be in control. Same thing with OpenAI. And it'll end up the same way too - OpenAI had better embrace the opportunities or go the same way and get forgotten about as everyone else does their own thing.
Or they could be acting from a blend of pragmatic, ethical and risk mitigation perspectives.
If they don't control the use of their technology they could be sued and/or have their funding stripped. That's not going to help them, or the people using it in the prescribed manner.
It's perfectly possible for you to build and train your own equivalent AI. Just add cash.