But when dealing with service desk operators, there is often little to suggest they are sentient.
Numerous people start to believe they're interacting with something sentient when they talk to AI chatbots, according to the CEO of Replika, an app that allows users to design their own virtual companions. People can customize how their chatbots look and pay for extra features like certain personality traits on Replika. …
but this is THE core part of their training! Service desk operators, or any 'service', are a cost. The shorter and less frequent the interaction with the 'valued customer', the lower the need for support / service desk, the lower the cost, and thus the more profit to the biz. Once you've trained the valued customer to realize the support is not sentient, so no use, so no point contacting, so don't bother, through a short process of learned helplessness, you increase your margin and profit...
and lose your customers, I hear you say? Aha! But here you are wrong, and the reason is simple: it's already standard business practice across your whole sector, so your valued customer has nowhere else to go. Checkmate.
"AI algorithms detect and analyse things like a person's eye movement, facial expression ..." It's documented extremely well in Blade Runner; you can see how well it works as they try to figure out whether Rachel is a Replicant or a lesbian ... in the movie, AI is just Alien Intelligence.
It took more than 100 questions, and even then they were only able to guess. Watch the whole movie and see if you can figure out the answer by the end. Essentially, AI chatbots seem to have all the same issues that Blade Runner illustrates for Replicants.
"Essentially AI chatbots seem to have all the same issues that Blade Runner illustrates for Replicants."
That's an interesting and essential get out of jail free card right there, Version 1.0, and a valuable hedge for any and all making surefire bets they cannot afford to lose ...
40 years is absolutely ages for perfection to practise disguising its stealthy wares ... and there's nothing to say or suggest that developments weren't perfected long before even 1982, is there?
" Millions have downloaded the app and many chat regularly to their made-up bots. Some even begin to think their digital pals are real entities that are sentient."
'A few' per million folks will probably be found to believe almost anything. The comment is so vague as to be effectively meaningless as a statement of 'fact'. However, if it is a valid observation, it's another example of the big problem that besets the 'Chinese room' and the Turing test. The results of all three are utterly dependent on the perceptiveness of the observer.
"The results of all three are utterly dependent on the perceptiveness of the observer." ... Mike 137
Such an absolute dependency on the correct raw and rare perceptiveness of a random human observer provides an unparalleled and far-reaching stealthy advantage to the subject in question, and to all manner of matters coincidental and yet to be raised for further discussion and acceptance/realisation ... by humans.
"People can customize how their chatbots look and pay for extra features like certain personality traits on Replika. Millions have downloaded the app and many chat regularly to their made-up bots. Some even begin to think their digital pals are real entities that are sentient."
To me it looks like imaginary friends v2.0, with the mobile paradigm and all that blah, blah. The only difference is that old-school imaginary friends were free and didn't depend on battery life.
"People who regularly talk to AI chatbots often start to believe they're sentient..."
I'm just as certain that people who regularly talk to chatbots not only start to believe that they are sentient, but also often do not seek much-needed psychiatric care. After all, someone has to assure them that they just might be sentient.
> These systems are not sentient, however, and instead trick humans into thinking they have some intelligence. They mimic language and regurgitate it somewhat randomly without having any understanding of language or the world they describe.
This is an overreach. It's like you're saying "People who claim that ultralight planes can beat the speed of sound are being scammed. These toys merely imitate flight and can barely manage hops of a dozen meters."
Yes, language models are probably not conscious. However, no, language models don't "regurgitate language randomly without having any understanding of the world." These models routinely set records on benchmarks of commonsense understanding. That they don't have a reliable, easy grasp of basic physics, the relations between objects, causality, placements, etc. does not mean that they have none.
Language models probably have some limited understanding of the world. As they are scaled up and redesigned, this understanding will expand.
These bots are not understanding the world any more than I would demonstrate understanding of something by rephrasing a Wikipedia article. I could take that text, written by someone who understands it, and use my knowledge of language to move the words around in a way that seems natural. Hopefully, I'd do it without making the facts incorrect, though AIs fail to meet that requirement all the time and somehow you don't appear to think that counts. In any case, any correctness seen in the result was generated by someone else. The chatbots we've seen the workings of don't read text to understand its meaning, but instead read it to copy chunks that are hopefully relevant.
I think that some people go "oh, a demonstration of an error, this means the LM doesn't really understand what it's saying", and I go "one time in three, the correct answer comes out - do you get that this would be impossible if the model didn't have understanding?"
You cannot demonstrate ignorance, only knowledge. If you give the model an input and the wrong answer comes out, that might mean the model doesn't know - or it doesn't understand what you're asking it, or it's answering a different question than you think you're asking.
There's an old saying:
Q: When will we know how to build AI?
A: Never. If we knew how to do it, we wouldn't call it AI.
I think there's a tendency to exclude algorithms that seem "too simple." Language models can generalize over arbitrary patterns. They can assign multiple meanings to symbols, correlate different concepts, and apply them in novel contexts and in novel ways. They don't have introspection, sure, and they don't have arbitrary recursion, granted, and they don't have online learning, fine, but I don't view those as necessary to understanding per se. Nobody is saying that GPT-3 is a general intelligence. But as I understand the term "to understand", it does understand some things.
"one time in three, the correct answer comes out - do you get that this would be impossible if the model didn't have understanding?"
It wouldn't be impossible. It has the answer, written by someone else who has understanding; it correctly found the right snippet. It's like a person who doesn't know how to write code but finds a Stack Overflow post containing exactly the code they want. They don't understand the code, or they could have written it themselves, but when they paste it in, it works. And when the model gets the wrong snippet, it has no clue that it's messed up.
You're ascribing something that is the entire point of the model to understanding, but no understanding is needed to produce that result.
Yeah but the model can solve problems that it's never seen before, as long as they're structurally similar to a problem that it has seen. That's why I think it has some level of understanding.
Ultimately, if you apply enough abstraction, anything that any human being does can be reduced to "find the right past-experience to refer to." That's not literally what we're doing, but then again it's not literally what a language model is doing either; it's not like it actually searches StackOverflow as you query it.
The truth may be that these people have shallow, vapid friends who have the verbal and intellectual complexity of chatbots. So they don't notice any difference when they are talking to silicon.
The similarity to people who believe in the supernatural is apt. Both groups could do with watching some science documentaries to bump up their IQ to somewhere near average and using fewer recreational drugs.
If you aren't sure whether you're on the phone to what was, many years ago, called 'customer service' (and some UK government depts are using them already): chatbots will talk over you on the phone and tend to speak well, without regional accents, contractions or colloquialisms. Their tone is measured and they never laugh. They sound nothing like call centre staff, but because you are phoning a call centre, you assume they are human by default.
As someone who has actually talked to several chatbots, because reasons, I can honestly say I have become convinced that no, they aren't smart.
Be it some WhatsApp taxi thing, needing help with something, or just a chatbot program I downloaded to fool around with: no, they aren't people. And no, they are not smart. Many are so badly made you need help to get them to help you. Like the time I spent ten minutes trying to get the stupid AI to call me a taxi when a simple phone call would have done it in 30 seconds or less.
“ Those who manually change their date of birth to register as over 18 have the option of uploading a video selfie, and Yoti's technology is then used to predict whether they look mature enough”
Change their date of birth? Wonder how that works.
Also, it’s not “predict”. It’s “estimate”/“judge”.
Do chatbots who regularly talk to chatbots often start to believe they're sentient?
What about dogs? I think my dog is sentient; I talk to her a lot. It takes up much of my day. Can I get a chatbot to take over for me?
Also, I don't get the AI angle. What does artificial insemination have to do with any of this, or is this only with porn chatbots?
"Millions have downloaded the app and many chat regularly to their made-up bots..."
Get away from your computer before it's too late! Get out!
On the other hand... I commute across the meadows twice a day and talk to a... buzzard. :)
"Good morning, gorgeous!" or some other nonsense like that. He's now so accustomed to me, that he no longer takes off!
It's good, life off-line...
I wonder: to how many people it has occurred that the phrase, "Artificial Intelligence", is one of the more elegant examples of an oxymoron.
"My artificial flowers died because I didn't artificially water them." [paraphrase]---Dave Barry
The only people who believe in 'artificial intelligence' are the Artificial Intelligentia.
"I have found that the reason a lot of people are interested in artificial intelligence is for the same reason a lot of people are interested in artificial limbs: they are missing one."--David L. Parnas