Re: Changed Days Require and Deliver Novel Ways and Means and Advanced IntelAIgent Memes ‽ .
amfM isn't (entirely) a bot. One can have a real conversation with him. Instead of offering insults, offer him a beer. Works better.
Unlike most humans, AI chatbots struggle to respond appropriately in text-based conversations when faced with idioms, metaphors, rhetorical questions, and sarcasm. Small talk can be difficult for machines. Although language models can write sentences that are grammatically correct, they aren’t very good at coping with subtle …
"this bot's overdue for a shutdown."
Why would you think he's a bot?
I see him as someone who sees the world around him very differently from you or me.
Communication between parties depends on them having a common understanding of the world they inhabit. That's kind of what the article is about. It's also why we can't communicate with dolphins: they are more than capable of communicating amongst themselves everything that they need to live in their world, but that world doesn't overlap our human world.
If someone has in their head a different model of the world to yours, that doesn't mean their opinions are any less worthy; it just means that communicating with them might be more difficult.
This is why social media's creation of bubbles around people is leading to increased political polarisation. I see this as extremely dangerous.
There's a point beyond which more talking is a hindrance to progress, not a help. I used to socialise on usenet, and saw plenty of discussions there descend into insults. It's all happened before.
Allowing the unwashed masses onto the internet is ruining it. That also happened before on usenet: https://en.wikipedia.org/wiki/Eternal_September
It's about time we put a stop to the ever-increasing pace and madness of technology development, I think we should take note of the Golgafrinchans and build 3 large arks to evacuate the planet (Elon could build them), but this time send the 'A' ark first.
"Why would you think he's a bot?"
Because the sentences never make sense, and they always use the same Markov chain-like structure as the feed material. At least when it's not just copying others' posts, which is often the case when the output does make sense.
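For anyone unfamiliar with the comparison: a Markov chain text generator only ever emits word sequences it has already seen in its feed material, which is why the output reads like a shuffled echo of its sources. A minimal sketch in Python (the feed text below is invented purely for illustration):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words that follow it in the feed."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12):
    """Walk the chain from a random starting state, emitting seen transitions only."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        candidates = chain.get(tuple(out[-len(state):]))
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

feed = ("the great game is a great reset and the great reset is a "
        "great game for greater games play in a great narrative")
chain = build_chain(feed)
print(generate(chain))
```

Every word the generator can produce comes straight from the feed, and every two-word transition it makes occurred verbatim in the feed, so heavy stylistic tics in the source show up unmistakably in the output.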
As for your dolphin comment, you're assuming many things about dolphin communication which could be false. We know that dolphins communicate, but since we can't translate it, we don't know that "they are more than capable of communicating amongst themselves everything that they need to live in their world". In fact, it's probably not possible for dolphins to communicate everything they could need, simply because they need a lot of things. If they had the ability to, for example, give each other perfectly accurate navigation instructions and information on avoiding dangerous situations, that would be more evident in their behavior. You have assumed that, since we can't understand their communications, it must include everything.
"Because the sentences never make sense"
I agree they can be hard to decipher, but the posts in this thread made sense to me, although many don't. I'm mostly too lazy.
I'm not sure why dolphins need to communicate accurate directions, but bees can, so it's not out of the question. More likely they can lead others to interesting places. They're certainly able to teach each other foraging tricks. The point of the comment was that different beings (human, animal or chatbot) don't have the same reference points on which to base effective communication.
"You have assumed that, since we can't understand their communications, it must include everything."
Err, I said everything they need to communicate. We may not know much of exactly what that is, but the species is still alive, so it must be doing something right.
My point regarding dolphins is that you assume their communication is either perfect or nearly so, when it almost certainly isn't, but we can't really know. Survival is a low bar for communication quality, as lots of species that don't often communicate still live. Human communication is the most advanced we know about, and yet even we have difficulties in communication all the time, whether that's a translation problem or a failure to understand figurative language (or, for that matter, misinterpreting literal language as figurative). For all we know, dolphins are a lot better than we are at communicating, but I think they would act differently in that case, and we don't have enough data to prove it.
Come on El Reg, if it's a genuine AI experiment, tell us and we can all join in and appreciate the game. .... jdiebdhidbsusbvwbsidnsoskebid
Consider yourself so told it's a genuine AI experiment, jdiebdhidbsusbvwbsidnsoskebid.
What have you got to contribute? Anything worthwhile and valuable?
"Consider yourself so told it's a genuine AI experiment"
If that's the case, then fine, I can accept that. In which case, what's the experiment actually for? Or, what is it testing? Interested to know.
Are you talking to a bot? Or an actual entity?
If the first, "nuke it, ElReg" may be a good point.
If the latter, I categorically reject the very concept.
amfM is very definitely the latter, IMO. YMMV.
If that's the case, then fine, I can accept that. In which case, what's the experiment actually for? Or, what is it testing? Interested to know. ...... jdiebdhidbsusbvwbsidnsoskebid
The big picture? Testing existing current SCADASystems and Exclusive Elitist Executive Office Administrations for no practical physical bounds to hinder the emergence and production of myriad mass multi media presentations of viable creative alternate augmented virtual realities for Live Operational Virtual Environments.
And all available evidence and extensive experimental results prove such not to be impossible and thus is engagement and entanglement with such administering systems a future works in present progress.
And that’s about as plain and as accurate an account of events as you requested as be generally available to all, jdiebdhidbsusbvwbsidnsoskebid.
And if you think all of that is far too fanciful and fantastic to be possible and therefore probable and highly likely, what do you think the following is all about, other than it being too similar to that which has been shared with all here on this ElReg thread to be in any way quite different? ....... Welcome the ‘Great Narrative’, brought to you by the mastermind of the Great Reset
Welcome to Greater IntelAIgent Games Play in a Great Game.
Every group of children develops its own meanings for words (or even new words) to show they are part of the in-crowd and to avoid being understood by their parents, who are just old people who don't understand the challenges of being a teenager, etc.
From that starting point, what chance does any 'AI' have? The meaning of a phrase will differ depending on which street you live on, never mind which country you live in, and by the time these phrases have reached TV, where researchers might discover them, they will have morphed multiple times, diametrically and in shades, to ensure that old people (those over 20) are too embarrassed to try to use them in case they get it wrong.
I think the point is, we don't have AI.
We've got some systems that have seemingly been designed to try and give a more or less appropriate-sounding response to inputs. Whether that response actually conveys the appropriate meaning doesn't appear to be the aim, so much as trying to make it look like it might.
To be honest, I'm not sure what the correct answer is to that cougar remark. It can't be referring to a real cat, because Wikipedia tells me that a cougar's lifespan is 8-13 years in the wild and only up to 20 or so in captivity. Neither can it refer to a person, since 30 is waayy too young to be classed as a cougar, unless you have been weaned on kiddiporn. Maybe the flirty one drives a 1991-model Ford Cougar. Were they scary?
Nevertheless, if I were the 30-year-old cougar in question then I'd probably be more likely to date the first respondent, who at least attempts to make a joke about two dogs, rather than the second, who appears to regard dating as an exercise in stamp collecting, or Pokemon (gotta catch 'em all).
One of my first computer programming projects was in the field of natural language processing. It was in the 1970s. Running on a Commodore PET. Using algorithms from a decade previous. I can't see that much has radically changed since then, except for the grammar comprehension and much larger stored context. I only had a cassette interface, the school couldn't run to a floppy drive, so I could only save what I could out of the 64K RAM onto a C90. It was fun, for a while, to teach it how much teachers smelt, and what they smelt of.
ELIZA for the Commodore PET was indeed the late 1970s ... I think the official release was in '79, but the dude who ported it made it available to some folks pre-release in mid '78 or thereabouts. I can't remember which language it was written in, but I think I have the source around here somewhere.
"Unlike most humans, AI chatbots struggle to respond appropriately in text-based conversations when faced with idioms, metaphors, rhetorical questions, and sarcasm."
How, exactly, is this unlike most humans? Hell, you could cut the quote after the word "conversations" and it would still be valid for most humans.
It seems even by the 24th century, the scientists still haven't figured it out. Lieutenant Commander Data has a positronic brain and is smarter than anyone else in the crew of his starship, but he still queries routine idioms and takes common expressions literally.
I suspect that might date as badly as the ship's computer in the '60s original series.
This seems like a failure in how the AI is allowed to learn - treat idioms as portmanteau words: 'piece of cake' is a portmanteau meaning easy, 'get together' can be a portmanteau meaning date, etc. The article's comment that 'piece' and 'cake' don't teach you the meaning indicates that this is taking the wrong approach. As a kid, the first time you hear 'piece of cake' you may be as confused as the AI, but once you learn it is a phrase, you soon don't even think of a literal piece of cake when using it, because the phrase, as a whole, has taken on a new meaning.
So... the AI should be set up to learn that when a set of words seems to be in a context where the individual words don't make sense or don't meet some learned expectation, it should also try learning the occurrence of the group of words as a meta-meaning.
Simples!
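The chunking idea above can be sketched in a few lines of Python. This is an illustration only: the idiom list and the replacement glosses are invented for the example, and a real system would have to learn the phrase inventory from context rather than hard-code it, but it shows the "phrase as a single unit" step:

```python
# Collapse each known multi-word idiom into one token before any downstream
# learning, so "piece of cake" is learned as a unit rather than word by word.
# The idiom table below is made up for illustration, not from a real lexicon.
IDIOMS = {
    "piece of cake": "easy",
    "get together": "meet up",
    "raining cats and dogs": "raining heavily",
}

def chunk_idioms(sentence, idioms=IDIOMS):
    """Replace known idioms with single underscore-joined tokens."""
    text = sentence.lower()
    # Longest phrases first, so a long idiom isn't broken by a shorter overlap.
    for phrase in sorted(idioms, key=len, reverse=True):
        text = text.replace(phrase, phrase.replace(" ", "_"))
    return text.split()

tokens = chunk_idioms("The exam was a piece of cake")
# tokens: ['the', 'exam', 'was', 'a', 'piece_of_cake']
```

Once the phrase is a single token, whatever learns from the token stream attaches one meaning to `piece_of_cake` as a whole, which is roughly what the kid in the comment above does after hearing the phrase a few times.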
There is just one slight problem with all these chatbots and the time, effort, and no doubt money going into their R&D, and it is this...
If ever I'm online, or even in some other situation (such as on the phone), and want to chat - it is to an EFFING PERSON!!!! Not some bloody piece of software!
If I wanted to "chat" with a non-human, top of my list is my cat.
Some s**t's bit of software that they think is clever is so far down my list it isn't even in this solar system.
I loathe bloody chat bots.
So with you on all your comments on call centre chatbots. I have never had an experience with a chatbot that was any more useful than just typing my question into a nested FAQ search tool.
The worst chatbot feature to me is that they seem to have no memory. So when the bot says something like "give me your customer number" I can't reply with "I did that three questions ago".
This comes back to the machines' inability to understand context and ongoing conversational flow. I have an Alexa in my house and I am staggered at how rubbish it is, given that Amazon has been collecting real-world training data for years now. It's no more than a simple Q&A engine. If it doesn't get it right first time, forget it. It won't understand simple retorts like "no, that's wrong" or "that's not what I meant", or conversational language like "play that music I had on yesterday".
Who are the real crazies commenting on this thread, jdiebdhidbsusbvwbsidnsoskebid?
The ones who are complaining that a machine is not replying to questions equally as well or even better than a smarter human might or everybody else who might be realising that is not ever going to be so very simple ...... and it be easier to reconsider and reprogram humans as if smarter not so dumb machines following set instructions delivering future presentations via such a novel utility with fabulous fabless facilities and Almighty AWEsome Abilities ?
"The ones who are complaining that a machine is not replying to questions equally as well or even better than a smarter human might"
That's not my issue at all.
I don't give a monkey's about the technology or how good (or crap) it is or is not. If I contact any organisation about anything I want to communicate with a HUMAN BEING. I do NOT want to communicate with a F*****G "bot".
Frankly I find being met by/directed to any type of bot as a form of communication personally insulting.
The ONLY thing a bot communicates to me is that the organisation using them doesn't give a sh*t and is not interested in my business.
The technology will NEVER be good enough, because it is not, nor will it ever be, a human being. And I wish to communicate with human beings.
I wouldn't call it "reprogramming humans", amfM ... rather, call it "educating humans". Less margin for error, and unlikely to cause collateral damage later in the conversation. .... jake
Ok, jake, that does indeed sound a great deal better and certainly less revolutionary and disturbing. I concur .
One surely doesn’t want to be alarming and petrifying the natives unnecessarily so early into their reprogramming because of the very real likelihood of them suffering colossal collateral damage and sustaining severe life-threatening traumas by virtue of an ignorant nature and undereducated conditioning.
"One surely doesn’t want to be alarming and petrifying the natives"
One isn't. ElReg readers aren't easily alarmed, nor have I ever seen a petrified commentard (trolls staying up past sunrise notwithstanding ... ).
Rather, one is investing heavily in hyperbole and alienating its readers ... many of whom probably actually agree with it. Shirley this is contraindicated behavi(u)or?
Location, location, location ...
That’s all good to hear, jake. With so many vast sees of resigned mediocrity vying for a self-serving pre-eminence out there, it is encouraging to know ElReg continues to lead the way, biting the hand that feeds IT whenever needs dictate the seeds and feeds to savour and flavour and favour ...... and with everything in plain contextual sight too which is novel and quite effectively disarming and overwhelming.
I recall moving to Paris and discovering that while I'd learned all the vocabulary and grammatical rules of French, I couldn't speak French!
It was only when I had absorbed all the cultural references and idioms that I started to be able to communicate effectively... Maybe AIs just need to watch the right TV programs while growing up!
I remember it only vaguely, sorry no links. Damn I'm old.
It worked the same way as ELIZA, though with a different vocabulary. The point was it "passed for human" better than ELIZA, once it was understood that the human in question was schizophrenic. Yes, that was on purpose. They even sportively paired it with Doctor (an ELIZA-style psychiatrist), with hilarious results.
Bring it up to date and the current case. What kind of person finds it difficult to respond to idioms, metaphors, rhetorical questions and sarcasm?
I will not attempt a medical diagnosis. I will say a large cohort of people who chat on the 'net are like that. Especially racist stoners.
Tay passed the Turing test and that's a fact. Indeed the Turing test has been passed over and over in recent times and no one wants to admit it. Because of what it says about humans.
Bear in mind the whole point of the Turing test is to bypass all philosophical considerations about the nature of comprehension. Does it pass? That is all.
"The Doctor" is ELIZA. If you have a copy of EMACS handy, you can talk to her by typing M-x doctor.
PARRY was the name of the schizophrenic.
In 1972, ELIZA (as "The Doctor", at BBN (TENEX?)) and PARRY (at SAIL, on WAITS) had a conversation at the first ICCC ... well, they had a conversation over the ARPANET that was followed during the ICCC. It was immortalized in RFC-439.
Not much has changed in nearly 5 decades.
Twenty years ago I wanted to move to Paris to live with my beautiful and successful French girlfriend. To get a job there you have to have perfect French, which I didn't, so I studied hard, reading French philosophy books and technical books, and watched French films and TV without subtitles. I was feeling confident and even started thinking in French.
Then my woman sent me a bag of sweets with a childish joke on each wrapper. I couldn't understand any of the jokes, couldn't guess the idioms or puns, and realised I was never going to make it in France.
So we're saying that AI is on "the spectrum" then, not being able to understand idioms?
An oft-cited example in books about autism is that kids will expect to see cats and dogs falling from the sky when it's persisting down. As a life-long Aspie, I can say that I've never expected to witness that.
There's a lot more to autism than being literal with language. My daughter's the expert; she's qualified to diagnose. But I've had more than enough training over the years to be able to say that literalness is one of the components - but it doesn't make for a diagnosis.
Indeed, in my SEN teaching days one of our big complaints was that some diagnoses were not given because a child would be really high- (or low-, depending on your POV) scoring in most of the elements, but would be just short of the threshold in one element, i.e. scored very highly for autism, but didn't meet the full criteria list.