Are the machines getting smarter...
...or are the humans getting dumber?
Politicians are not quite ready to pass themselves off as human: but machines are almost there. This was the shock conclusion from a Reading University competition run at the weekend designed to sort out machines from people. Five computer programmers from across the globe competed for the $100,000 Loebner prize for Artificial …
The unquestioning way the mainstream press has lapped up the bullshit spewed forth by Captain Cyborg in this story defies belief. "Two of the machines getting very close to passing the Turing Test for the first time"? "Conversational abilities of each machine was scored at 80 and 90 per cent"?
These are chat-bots, little better than Eliza. The winning entry, Elbot, can be tried online at www.elbot.com. It does a few clever tricks, but what kind of moron would confuse this thing for a human? This line of research, while certainly good fun, seems unlikely to provide any insight whatsoever into true machine intelligence.
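For anyone curious just how little machinery is under the hood of an Eliza-style bot, the basic trick is keyword-spotting plus canned templates. Here's a minimal sketch in Python; the rules and phrasings are invented for illustration, not Elbot's actual code:

```python
import re

# Hypothetical Eliza-style rules: spot a keyword, reflect the user's own
# words back inside a canned template. No parsing, no understanding.
RULES = [
    (re.compile(r"\bI feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\b(computer|machine)\b", re.I), "Do machines worry you?"),
]
FALLBACK = "How interesting. Please go on."

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return FALLBACK  # no keyword matched, so dodge the question

print(respond("I feel tired of chatbots"))
# -> Why do you feel tired of chatbots?
print(respond("What is the capital of France?"))
# -> How interesting. Please go on.
```

Note the second exchange: a real question with no trigger keyword just gets the evasive fallback, which is exactly the "vague or definitely evasive" behaviour the judges reported.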
"if the responses from the computer were indistinguishable from that of a human, the computer could be said to be thinking"
This statement makes the assumption that the human in the situation is also thinking... I know many humans whose responses to anything could not be classed as 'thinking'.
Some might say that those are the politicians. I wouldn't say that, because they really are following me... even though my tinfoil hat shields the tracker that they implanted in my teeth.
Has anyone considered the possibility of a virus/phishing scam being extended through Eliza style emailing?
People are pretty wary of clicking unsolicited links, but what if the link came from a "friend", or was only dropped in after an inane chat back and forth for a while? Equally, this could be used with MSN-deployed trojans.
Mine's the one with the black hat underneath
That "elbot" thing fooled a human 25% of the time? What kind of dummies did they use? Whenever you ask a slightly complex question, Elbot either ignores you or spouts nonsense, and if you ask it to clarify the nonsense it tries to change the topic. Unless you are purposefully being soft with your questions, or the human judges in the experiment are being purposefully dense, there's no way it can pass for a person.
"Tell me about your mother"?
:-)
The Elbot thing is so blatantly useless I can't see how anybody wouldn't know it was a machine after 2 questions max. Any test should allow the human participants to lead the conversation. Until we're approaching the ability to do that, these contests are just a joke.
...and since when has Captain Cyborg been a "boffin"?
Paris, because people are unable to distinguish her responses from those made by a machine.
R Kev is enthusing, as usual: "This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time." But didn't the Cap'n write the questions himself?
Maybe if a real human drew up the questions we'd have a different result. I just think the questions were too easy for machines to work on.
"However, one of the organisers did suggest that the obvious machines didn’t really answer questions - they were quite vague or definitely evasive, obviously picking up on key words, and building an answer around those, rather than what they had been asked."
That'll be the Tech Support desk they were talking to then.
At what level of conversational ability does the machine have to perform to be deemed 'thinking' and 'human-like'?
If we made a machine capable of communicating at the level of a 5 year old child and used 5 year old children as the human control group, would adults talking to both the machines & the 5 year olds be able to tell the difference?
If so does this mean we have created a thinking machine? Most 5 year olds I know can come out with a fair amount of random rubbish when asked simple questions.
If this is not allowed, then what age group do the machines have to represent?
What implications does this have for human 'thought' if the machines have to communicate with people of high intelligence and trick them into believing they are 'real'?
Are 5 year olds capable of 'thought' if they fail a Turing test?
What happens if you used Jade Goody as one of the control group instead?
"Dr Warwick is an enthusiast when it comes to the rise of the machine. His personal website – I, Cyborg – explains in detail how he became the world’s first Cyborg, when he agreed to have a silicon chip transponder implanted in his forearm."
Let me guess - the 'I, Tosspot' domain was already taken?
Did they provide a "telepathy-proof room" in which to conduct the test? Alan Turing himself seems to have thought that one would be required. Perhaps this was his concession to the problem of Gödel incompleteness in finite state machines.
His starting point for the test, the imitation game, could be much more fun. Maybe next year...
So there were machines pretending to be humans and humans pretending to be humans, and testers asking them questions, and trying to correctly identify which were machines...
They give the stats for machines being correctly identified, but how many of the human control group were incorrectly identified as machines? That would be an interesting statistic! And it's back to the point about the human in the conversation perhaps not actually thinking!
This is a great achievement, but sadly the Turing test was never a true test of intelligence: it cannot show there is semantic meaning behind the answers. You could ask a machine "what is your favourite food?" and it might say "my favourite food is pizza". Here it has answered the question, but this does not show it knows what a pizza is.
The 'Chinese room' is a good way of illustrating this. Stick someone in a room and ask them written questions in Chinese symbols; given an answer book listing Chinese questions with appropriate answers, and enough speed, they could answer them all correctly. The person answering would have no idea of the meaning of the question or answer, yet they could pass as a Chinese speaker. This is exactly how a computer would do it: fitting questions to the set answers it has. I know it is far more complex than this, but it is not 'thinking' or showing any intelligence.
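The Chinese Room really is just a lookup table, and you can sketch it in a few lines. The phrases below are my own invented examples, obviously a toy next to a real chatbot, but the principle is the same:

```python
# A toy Chinese Room: the "operator" maps question strings to answer
# strings with zero understanding of either. Entries are invented examples.
ANSWER_BOOK = {
    "你叫什么名字?": "我叫小明。",        # "What's your name?" -> "I'm Xiaoming."
    "你喜欢吃什么?": "我喜欢吃比萨。",    # "What do you like to eat?" -> "I like pizza."
}

def chinese_room(question: str) -> str:
    # Pure symbol matching: no parsing, no semantics, just table lookup.
    return ANSWER_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你喜欢吃什么?"))
# -> 我喜欢吃比萨。
```

The operator of this table could be you, me, or a CPU; the answers come out the same either way, which is the whole point of the argument.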
Shame on you, El Reg, for missing the Quote of the Year from the Grauniad when discussing this ridiculous contest. Reproduced below for your delectation and delight:
"The event's credibility was hardly aided by the insistence of Hugh Loebner, the prize's American sponsor, that he had no interest in the result and had only set up the competition 18 years ago to promote his firm's roll-up plastic lighted portable disco dance floors."
in response to By Steven Hunter, "If "simulated" intelligence is indistinguishable from "actual" intelligence, does it really matter if it's "simulated"?"
It makes a massive difference: true intelligence and a computer program that gives believable answers but understands nothing of what it is saying are not even comparable. Think about what I said about the Chinese room. If I did it, I would just be matching up symbols; I would have no idea of what the questions or answers meant. In fact, you wouldn't even need to tell me they were questions and answers: I could think it was just a match-the-symbols game and still provide the correct answers. This is exactly how the artificial 'intelligence' is working.
Another way to look at it: if I wrote (over many years) a book which had an answer to every question a human could think of, the book could pass the Turing test, yet it is just bits of paper. It does not think.