I for one...
...welcome our new nonsensical garbage spouting overlords.
Reading Uni's cybernetic media strumpet, Kev "Captain Cyborg" Warwick, is poised to put six computer programmes to the ultimate test - the one devised by Alan Turing, in which the machine must engage in convincingly human banter, thereby heralding "the most significant breakthrough in artificial intelligence since the IBM supercomputer …
Go ahead and read the two conversations - hard to tell them apart? Yes!
Why though? Because just as the programs try to imitate a human, the humans imitate the machines!!!
Their answers become whimsical, meaningless non-sequiturs. Just like the rubbish spewed by one of these pathetic pieces of trashy code masquerading as AI.
They should stop this competition - it's an insult to real artificial intelligence research, and bad comedy to boot. The mass media might be fooled into thinking we're talking about AI, but this site is for IT people, who shouldn't be so gullible.
Complete toss. If there's any consciousness there, then my bloody phone is conscious (it's spooky how it "knows" what I want to write when I'm sending texts).

Why do news outlets continue to bill Captain Cyborg as an expert in AI when he drops a clanger like this in every single interview he does? I'm not talking about the Register here - I'm sure you only report these things for the comic value.

Agree with most here that chatterbots != "AI". Artificial Stupidity would be a better description. The Turing Test may be the best we have, and I've a lot of respect for Turing's contributions to computing/AI, but if this is what it leads to, we should be prepared to accept the possibility that he got it wrong, stop wasting time and effort on such drivel, and start asking other questions instead - "how can computing help people?", "what are the limitations?" and the like.
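The predictive-text comparison is actually apt: a phone's suggestions can come from nothing more than frequency counting over what you've typed before, no understanding required. A toy sketch of that idea (purely illustrative - not how any particular phone actually works):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in past
# text, then suggest the most frequent follower. "Knows" what you
# want to write without any consciousness involved.
class Predictor:
    def __init__(self):
        self.followers = defaultdict(Counter)

    def train(self, text: str) -> None:
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.followers[prev][nxt] += 1

    def suggest(self, word: str):
        counts = self.followers.get(word.lower())
        if not counts:
            return None  # never seen this word before
        return counts.most_common(1)[0][0]

predictor = Predictor()
predictor.train("see you at the pub see you soon")
```

Having seen "see you" twice, it will suggest "you" after "see" - spooky, but it's just a lookup table.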
10 WRITE PROGRAM
20 IF INTELLIGENT THEN END
30 PATCH PROGRAM
40 GOTO 20
Pretty standard '80s AI paradigm really. Big claims, patch-and-pray programming, funds dry up. Some researchers got past this. Others implanted themselves with the history-repeat chip...
MeThinks that in Order to be Relevant today and Fit for Future Purpose, Man must be able to be Mistaken and Accepted as AIMorphing SMARTer Virtual Machine, given the Destructive Mindless Behaviour of Predatory Humans, a Particularly Nasty Sub-Species of Phormed Life/Programmed Action.
And an Abject Failure to do so, will leave their Hubristic leaderships, floundering in ITs Virtual Machinery Wake, Battling a Losing Battle against their own Deficiencies and Inefficiencies.... which are XXXXPosed at an ever Increasing/Exponential rate until Collapse with a Catastrophic Failure to Progress. ...... with the present "Troubles", more than a Valid Indicator that Changed Systems are Needed rather than Feeding Figures and Freshly Printed Paper into a Systemically Failed, Rotten to ITs Core, Spin Waiting System.
The Party is Over, some would say, and now there is the Bill Presented, 42 Pay. With Nothing Basically and Radically Changed, will Nothing Basically and Radically Change....... so the Crooked Rigged Markets and Dollar Collapse is bound to Continue and Gather Speed. For the System has been Programmed to Server to its Greed Elements/Ring Leaders rather than Charting AI Novel Course, Mindful of All.
They are pretty easy to tell apart. Ultra Hal's conversation shows no signs of understanding context, and the responses are of the same kind as Eliza's - i.e., when it doesn't know how to handle the sentence, it tries to change the subject. After two minutes you just get frustrated talking to it.

If the first conversation were by a program, I'd be more impressed. It shows handling of context (remembering that we're still talking about humans vs computers two sentences later) and has a "happiness simulator" which actually answers questions about its own happiness rather than just reflecting them back to KW. On the other hand, conversation 1 would probably also be doable by a program - you'd just have to actually add some AI into the system, not just a phrasebook with pattern matching.
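For anyone who hasn't poked at one of these, the "phrasebook with pattern matching" really is about this shallow. A minimal ELIZA-style sketch (patterns and canned replies are made up for illustration, not taken from any actual entrant):

```python
import random
import re

# A phrasebook of regex patterns with canned replies. A captured
# topic is spliced into the reply to fake engagement.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bare you (.+)", re.I),
     ["Why do you ask whether I am {0}?"]),
]

# When nothing matches, change the subject - exactly the behaviour
# complained about in the transcripts.
FALLBACKS = ["Let's move on from that basis.", "Why? I like this subject."]

def respond(line: str) -> str:
    for pattern, replies in RULES:
        match = pattern.search(line)
        if match:
            topic = match.group(1).rstrip("?.! ")
            return random.choice(replies).format(topic)
    return random.choice(FALLBACKS)
```

Anything off-script gets a non-sequitur fallback, which is why two minutes of conversation is enough to expose it.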
When the strong AI cheerleaders come out of the woodwork to remind us that after fifty years of research they still haven't managed to come up with a proper definition of intelligence, and they still can't resist summing up those fifty years of research with what amounts to a sophisticated parlour game, bless them.
"Their answers become whimsical, meaningless non-sequiturs. Just like the rubbish spewed by one of these pathetic pieces of trashy code masquerading as AI."
Kevin Warwick was _always_ like that; he's a mini Turing test all by himself. Sometimes it's difficult to distinguish him as a sentient human even when you're in the same room with him.
...Those typos almost had me convinced that UltraHAL was human, though personally I think the human in the first example failed the Turing test: who, in normal conversation, answers a question, asks their own in return, and then assumes an answer to draw a conclusion ("Yes, I am. Are you? Good. Then we are both happy.")? And they apparently completely forget the subject of the conversation between answers, too.
I could tell the difference between the human and the computer easily - the PC feigned typing errors to try to lead the interrogator into observing 'simulated' human failure. And the conversation tailed off to absolutely nothing, not that it started too well either.
The first (human) one also seemed more to the point in a lot of ways - the subject revealed something about their job and made general sense, something which AIs typically lead you away from in some twisted way -
KW: Does that worry you?
Subject: Don't worry, we'll work everything through.
I mean, who would say this!? Picked from a random response list, I'm sure.
Then there's this, following straight on -
KW: Do you have worries then?
Subject: Let's move on from that basis.
Subject: Why? I like this subject.
Again an example of a supposed AI simply avoiding the questions and making a completely nonsensical last statement there.
NOT difficult to see which is which.
@PHIX8 - The human wasn't trying to imitate a machine in any way I could see; for one, he answered the questions properly, e.g. -
KW: To go to a restaurant, for example?
Subject: Then I would much prefer going with a human.
This demonstrates a true understanding of the conversation, showing that restaurants are places you go to socialise as well as eat, and computers are not social objects. You just wouldn't get this far with the computer.
Can I be one of the (highly paid) judges?
Thus we must throw strategic speeling (sic) mistakes into the illogical, rubbish conversation to make it more realistic. I am a robot - or am I? Am I simply a man with a piece of metal in his arm, blowing my own self-publicity horn to any publication that will listen, or am I a robot? Bored yet? Cos I am.
"They are pretty easy to tell apart. Ultra Hal's conversation shows no signs of understanding context and the responses are the same kind as Eliza's. I.e. when it doesn't know how to handle the sentence it tries to change the subject. After two minutes you just get frustrated talking to it." .... By Anonymous Coward Posted Monday 6th October 2008 15:28 GMT
Oh so Similar, if not Identical, to a Spinning/Spin Waiting Politician, AC.
The difficulty with Cap'n Cyborg is this: in his/its drive to become increasingly interfaced, are his questions coming from the viewpoint of a human who is inadvertently using language similar to what the machine might come up with?
Apart from the Borg set-up where assimilation is desired, there is a wee Jeff Noon story in Pixel Juice called 'Orgmentation'. It is set at a party where mechanoids wear human bits just as we would wear bits of metal: "One shining boy of non-specific machinery had a human index finger pierced through his lower lip". How long before my PC sulks because I didn't get it a birthday card?
Anyway MfM - there's no proof that the machines will be intelligent - Mr Bush jnr doesn't appear to be much more than other people's construct. It looks lifelike but talks drivel.
The Turing Test should be exactly what it says on the tin - the non-human on the wire should be able to fool the person in front of the terminal about its non-human-ness. No five-minute time limit, no "judge can't decide, we have a winrar" decision. Just roll with the conversation, and it should not consist of "softball" questioning and limp-wristed verbal sparring either. These are reserved for the media sucking up to politicians.
The poor machine will then be in the situation of a mathematical genius with deficient real-world experience but an extensive pocket library trying to keep up a bar conversation with Joe "Manimal" Sixpack, but - so be it!
The time when such a conversation can be meaningfully had has not yet come.
For those of you who are interested, here's a link to my rejected Loebner Prize 2008 Entry. His name is Chip Vivant:
And here is a link to an article I wrote about my experiences entering this contest this year:
One of the contest organizers said that I should request the judges' feedback after the completion of the contest, so I'm still trying to keep an open mind. (It was my first-time entry and there were a lot of bugs initially.)
Anyway, I'd be interested in your feedback and reactions. Type "!Feedback <your text here>" when talking to Chip to flag a message as feedback for me.
Actually, do I really need to say anything more?
A patriotic dumb-****. Of course I'd vote for her, even though she'd "counsel" a twelve-year-old raped by her father to be "pro-life"...after having made abortion illegal. I can picture it: "Oh, my dear, you should have taken my advice. Now you'll have to spend the rest of your life in prison for murder. Don't you know to love your father?"
...actually, Artificial Un-intelligence
(Paris: "Wow, she's really stupid.")