"The Guardian has performed a Reverse Ferret"
Even Tom Daley can't perform that one!
For 14 years, The Register has been chronicling the publicity stunts of Kevin Warwick, an attention-seeking academic with a sideline in self-mutilation*. In fact, Warwick has been making improbable claims to the press for much longer than that: over twenty years. But the world has continued to relay Warwick's stunts and …
What strikes me as odd about AI is that we really do not understand how our minds or brains function, and yet we persevere in the hope that someone is going to programme a binary computer which has the same functionality as the mass of grey cells in our head. OK, I'll relax that criterion and say "mimic"... but it's still a tall order...
Yes we can programme a machine to play chess (the moves are known; the number of possible moves and the consequences of those moves are lost on all but the best chest masters, hence a machine can play and win), and we can programme certain solutions in which the problem space comprises a set of rules and objects to which those rules are applied.
Excusing the "chest masters"...
Worse than that... we can't even decide what intelligence is. The Turing Test is not really well-defined and certainly isn't a cover-all explanation of when we hit "AI". I would reject it out of hand as having achieved nothing more than this article points out - deception by one human of another using a machine as the tool of that deception. That's not really "intelligence" on the part of the computer, only a display of intelligence, or lack of it, by the humans.
All AI that I've ever seen is highly dubious and amounts to nothing more than basic heuristics, or gibberish. Although I can happily claim that humans give the latter all the time, we tend to perceive those humans as "non-human"... a guy telling you what his gerbil had to say about a certain place is the kind of guy you'd move down the bus away from. And the former do not describe humans at all - in fact, there are a vanishingly small number of "rules" that any particular instance of a human will follow blindly every time. And en masse, we're even more dumb.
I hate the focus on "AI". We don't have it. We don't have anything on the horizon near it. And we've been doing and saying the same things for nearly a hundred years, if not more. We're not even close. We can do interesting things, we can automate machines to a high degree and we can have them do some wonderfully difficult work a lot quicker than we ever could (e.g. in the computer vision areas, etc.), but there's not a peep, not a glimpse, not a sign, of any real "intelligence" that I've ever seen.
Which makes me wonder, sometimes, if there's more than just a "complexity" problem to be solved to approach AI... whether there's some inherent physical process or characteristic that inserts enough "illogical" or random elements into the physics to make everything just that bit more capable of breaking from logic and rules and into decisions and intelligence. I'd place my bets on something quantum, personally.
And the bit that really annoys me? It takes a human baby several years of constant, intense, personally-focused training to get close to mastering the baby-end of intelligence. Yet every AI project I've seen tends to be a year or two old at the most - usually just long enough to write a paper, get your doctorate and then flee before someone asks you to do any more on it. And the computer systems we have don't even approach the complexity of the human brain, nor its genetic "bootstrap" headstart on being successful at forming intelligence quickly from a blank slate.
Start an AI project that is intended to run continuously and last 100 years, using the most powerful hardware available, which we train constantly with the same intense amounts of data and detail as we would a baby. Then we might approach something akin to a three-year-old. It's no wonder we've got nowhere with it so far.
A nice observation, and pretty much a textbook example of the elephant in the room!
Studies on infant humans and chimpanzees (Pan troglodytes) and sign language interpretation show that the language centre of the human brain is pre-constructed.
So you need the billion years of evolution, and then you can leave it watching 1970s Open University programmes...
P.
Studies on infant humans and chimpanzees (Pan troglodytes) and sign language interpretation show that the language centre of the human brain is pre-constructed.
That is so wildly over-simplified it's nearly meaningless. It's also irrelevant, unless you think that "pre-construction" involves some magical, non-mechanical operation.
The real problems with Natural Language Processing have nothing to do with the innate language capabilities of primates, or the lack thereof in machines; innate capabilities can be black-box modeled. The problems that occupy NLP research are the many extremely complex processes involved in natural language use, such as inferring elided predicates and conversational entailment, coupled with the vast amounts of world and domain knowledge that humans bring to every incidence of parole. These are quantitatively difficult problems with many qualitative unknowns, but they are not ineffable.
In any case, serious contemporary AI research has little to do with the popular mischaracterization of AI, and little to do with this Turing-Test-in-practice rubbish.
I'm not quite sure what your point is here. You just seem to be indiscriminately pouring scorn on all aspects of the AI field.
Sure, it's proved to be a lot more difficult than anyone expected; it may not even be possible! But what would you have us do? Just give up?
Your bit about "every AI project I've seen tends to be a year or two old at the most - usually just long enough to write a paper, get your doctorate and then flee before someone asks you to do any more on it", is grossly disingenuous. You seem to be implying that the sum total of activity in the field of AI amounts to a handful of pre-doc students taking random pot-shots at the problem?!
I couldn't agree more, but I think you, and most AI researchers, are underestimating just how much "programming" humans start life with. Obviously I have no hard evidence, no one does, but having watched two young children grow up from birth to four, I'd say that evolution has pre-programmed us a lot more than people suspect.
Drawing an analogy between humans and computers, I'd say the human POST is about 3 to 3.5 years in duration. The human BIOS provides a basic framework to acquire language, emotions, movement, self-preservation etc. The parents' role is to install higher-level programs, e.g. one or more spoken languages, don't touch that particular furry caterpillar, etc.
If you look at this from an evolutionary point of view, coming out partially pre-programmed makes a lot of sense, as it's so much quicker than having to teach everything from scratch every time. That doesn't mean every bit of pre-programming has to become active immediately at birth, though; it switches on at the appropriate developmental points.
As I say this is just a suspicion but I really think we need to move away from the idea that we are a blank slate at birth.
"As I say this is just a suspicion but I really think we need to move away from the idea that we are a blank slate at birth."
It's worth remembering that many other creatures are born and are up and running within minutes. Humans seem to be the exception to that.
It may be that other creatures have so much more inbuilt pre-programming that there's less space left for new programming. Maybe we are more intelligent because we are born with less pre-programming and more space for learning.
We don't have the largest brains on the planet but maybe that larger brain + little pre-programming = adaptable intelligence.
The Turing Test is not really well-defined and certainly isn't a cover-all explanation of when we hit "AI".
Hardly surprising, since it was never intended to be anything of the sort.
I would reject it out of hand as having achieved nothing more than this article points out - deception by one human of another using a machine as the tool of that deception.
Perhaps it would be more useful to learn what the Turing Test is for, rather than embarking on some middlebrow rant.
The TT is a philosophical gedankenexperiment intended to illuminate a position on the epistemology of thought - how we can know whether a machine is a thinking creature (Heideggerian Dasein, more or less, in one conception). Turing proposed it not as a practical exercise but as an epistemological line in the sand: if we can't find a decision procedure based only on the direct evidence of thought[1] to distinguish between humans and these hypothetical machines, then, he says, we have no grounds for considering the machines as non-thinking entities.
Turing's position, interestingly, is more closely allied to American (US) pragmatism, which basically disavows metaphysical epistemology in favor of an exclusive reliance on measurable properties, than to the philosophical schools dominant in the UK at the time. Conversely, the US philosopher John Searle's famous attack on one form of AI[2], the Chinese Room gedankenexperiment, is more closely allied to English logical positivism: what do we mean when we use the word "thinking"?
Robert French, in a very good piece in CACM, has pointed out why these Turing Test competitions may be interesting for people developing NLP systems and the like, but have little or nothing to do with AI. And the same can be said of the Test itself, except as a philosophical stake in the ground.
[1] Which Turing in effect argues is blind interpersonal discourse.
[2] Namely what Searle called "symbolic manipulation". It's worth noting that Searle believed in the strong AI project, in the sense that he thought the mind was an effect of a mechanical substrate, and thus in principle could be reproduced by a machine, and in fact he claimed in print that he expected some day it would be. He just thought the strong-AI researchers of his day were using ridiculously oversimplified models and approaches. History seems to agree.
"Start an AI project that is intended to run continuously and last 100 years, using the most powerful hardware available"
Great idea! We could call it "Deep Thought" and it could tell us the answer to The Great Question.
Might need more than 100 years to run tho...
Surely the point of the research is to find out what kind of logic - logic on a large scale - is needed to produce something looking like intelligence. So, for example, neurons might be modelled at a logical level but not at the biological level.
Of course, it is possible to suggest that there is something rational in a biological cell, i.e. not a 'soul', which is not currently known but enables a large group of them to show intelligence. After all, the eukaryotic cell is immensely complex. But this is conjecture, and arguably contravenes Occam's Razor. However, what a discovery if it turned out to be true.
All fascinating stuff, though. If done properly, it is pure science.
Re: AC
Well put. The problem seems to be that we are trying to construct something that has a high probability of being correct when in reality the problem is more akin to finding the least wrong answer. Though these two seem similar, they are actually opposites.
Re: Lee D
Very well put, with the exception of the Blank Slate invocation. As phil dude points out, primates have a brain structure that is ready to soak up language; see Pinker:
http://en.wikipedia.org/wiki/Steven_Pinker
___________________________________________
Regarding the problem of logic and neurons: they do not work in any way like electronics; there is no set on/off. Even the exact same neurotransmitter release does not always correspond to the same membrane-potential response, nor do the same pathways being triggered lead to the same output in repeated trials.
Modelling neural networks as circuits works for understanding how the different areas of the brain work together, but it tells us nothing of the 'computations' going on. The brain is better viewed as an evolving, massively parallel feedback system than as a computer. Like systems composed of humans, it is designed to work well in the face of repeated errors and to limit the costs of those errors, rather than to work out some Pareto-optimised ideal. Perhaps this is where the true power lies, and not in trying to get a 'correct' answer.
"Yes we can programme a machine to play chess"
Chess programs are a good example of how the definition of AI changes over the years. Back in the 70s and 80s they were seen as the cutting edge of AI; now they're just seen as the dumb brute-forcers that they are (albeit with some finessing code added for endgames). I suspect the same will go for computer vision and speech recognition in the future, even though today they still have the wow factor. For some people, anyway.
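For anyone curious, the brute-force core really is as dumb as it sounds. Here's a minimal sketch in Python of plain minimax search; a toy take-1-2-or-3 Nim pile stands in for the board, since a real engine would swap in chess move generation and a proper evaluation function (no claim this matches any actual program):

```python
# A minimal sketch of brute-force game search (plain minimax).
# The "game" here is a toy Nim pile - take 1-3 counters, last take wins -
# standing in for chess; a real engine swaps in move generation, a static
# evaluation function and pruning to tame the tree.

def legal_moves(pile):
    return [t for t in (1, 2, 3) if t <= pile]

def minimax(pile, maximising):
    if pile == 0:
        # the previous player took the last counter and won
        return -1 if maximising else 1
    scores = [minimax(pile - take, not maximising) for take in legal_moves(pile)]
    return max(scores) if maximising else min(scores)

if __name__ == "__main__":
    for pile in range(1, 9):
        result = "win" if minimax(pile, True) == 1 else "lose"
        print(f"pile of {pile}: player to move should {result}")
```

Every leaf of the game tree gets visited; the only "intelligence" is exhaustive search plus a scoring rule.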
I remember my dad was really into speech recognition when it first came out in the mid-90s, thinking it would speed up document writing no end. Unfortunately, the software couldn't cope with a non-Received Pronunciation accent, and he ended up going back to the keyboard.
Looking at the comedy howlers that Siri and the like still throw up today, I'm thinking the tech won't really move on at all.
What strikes me as odd about AI is that we really do not understand how our minds or brains function, and yet we persevere in the hope that someone is going to programme a binary computer which has the same functionality as the mass of grey cells in our head.
That has precious little to do with contemporary AI research. It might be some popular misconception of AI research, and it was true to some extent decades ago, but any researcher claiming these days to be attempting to emulate human cognition in toto is almost certainly a crank or charlatan.
Since some time in the 1980s the overwhelming majority of AI research has been into approaches for practical flexible problem-solving in constrained domains, and cognate subfields such as natural language processing, deriving (claimed) facts from narrative descriptions, and the like.
Turing was a very clever man but he hadn't grasped the open-ended nature of human conversation, with its almost infinite number of possible variations, so he underestimated the time before his test would be passed, and the amount of hardware needed to do it.
I predict that the Turing test won't be genuinely and convincingly passed for very many years. And that's just as well. I can't see a lot of legitimate uses for a computer that can successfully fake a human life history and experience, but I can see some nefarious ones.
Also, it may be that, as with a program able to play chess, passing the Turing test can be achieved without any recourse to AI.
You know how Google Translate works, compared to how people tried to build machine translation for 30+ years? A big Rosetta Stone and search. Not clever parsing and grammar.
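For illustration, the "Rosetta Stone and search" idea can be sketched in a few lines of Python: translation as lookup over phrases seen in aligned text, longest match first, with no parsing at all. The phrase table here is invented; real systems mine billions of aligned sentence pairs and score candidates statistically.

```python
# Toy sketch of phrase-table translation: no grammar, just search over
# phrases harvested from aligned text. The table below is invented for
# illustration only.

PHRASE_TABLE = {
    ("guten", "morgen"): "good morning",
    ("wie", "geht", "es", "dir"): "how are you",
    ("danke",): "thanks",
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        # greedily try the longest known phrase starting at position i
        for length in range(len(words) - i, 0, -1):
            phrase = tuple(words[i:i + length])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            out.append(words[i])  # unknown word: pass it through untouched
            i += 1
    return " ".join(out)

print(translate("guten morgen wie geht es dir".split()))
# -> "good morning how are you"
```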
As one who has studied AI on and off for the last 30 years, and read everything I can on Turing, I am unconvinced that Turing really thought the Turing Test to be a practical test for AI.
Turing was often asked whether computers would ever be intelligent. Directly proving a box of switches to be intelligent is hard, particularly since intelligence is itself such a hard concept to define, let alone measure.
Turing was a mathematician, and mathematicians love to use "tricks" to crack problems. One of those handy tricks is reductio ad absurdum: http://en.wikipedia.org/wiki/Reductio_ad_absurdum
Devise a test that shows a computer is not intelligent. If the test fails to prove the computer is not intelligent, then we have to accept the opposite: it is intelligent.
The question is really: Did Turing intend this as a real test or was it just a way to cut through all the "can computers ever be intelligent" bs?
>>"but he hadn't grasped the open-ended nature of human conversation, with its almost infinite number of possible variations"
At this point we find one of the biggest problems with the Turing Test: not the limits of AI, but the limits of experience. We could (in theory) create a machine that never made a misstep in terms of language use, but which we could still identify as a machine through the limitations of what it was familiar with.
Consider the following:
Questioner: "Where did you grow up?"
Respondent: "I grew up in Manchester".
Syntactically and stylistically a correct answer. However, if that is a machine responding we then get the following:
Questioner: "Was it sunny there?"
Respondent: "Oh yes, all the time."
Again, syntactically and stylistically correct, but the machine just doesn't know. We are identifying it as a machine not because of a lack of intelligence, but because it is forced to lie, not having the same repertoire of facts and experiences as a real human being.
So the Turing Test really needs to be redefined. If a machine can pass the components - understanding questions and using language correctly - then that should count as a pass. Incorporating experience into the test pushes back any possibility of passing to absurd levels, regardless of the sophistication of the program.
> We are identifying it as a machine not because of a lack of intelligence, but because it is forced to lie, not having the same repertoire of facts and experiences as a real human being.
While I appreciate your broader point, I don't think this example shows that we're dealing with a machine at all. It might show that we're dealing with a liar. Or just some confusion: for instance:
Questioner: "Where did you grow up?"
Respondent: "I grew up in Manchester".
Questioner: "Was it sunny there?"
Respondent: "Oh yes, all the time."
Questioner: "In Manchester? Are you on crack?"
Respondent: "Oh, sorry, are you English? I meant Manchester, Tennessee."
At which point we have to go and check whether such a place even exists and what the weather is like there, and even then we still don't know whether we're dealing with a machine or a liar. Exactly these kinds of tests are used to spot spies, after all.
Which raises the very interesting subject of what kind of knowledge, if any, is inherent and universal to being human, and whether we can base questions on that knowledge in such a way that right or wrong answers are easily spottable. I think the best candidates are probably things like "Tell me about the last time you stubbed your toe" or "Do you prefer Spring or Summer?" But only as long as we assume intelligence has to mimic humanity. An AI is certainly conceivable that is genuinely intelligent but has very little shared experience with us. In which case, what the fuck do we ask it to verify that it's intelligent?
This is where the Turing Test is quite perceptive, I think: it recognises that we may not actually have the ability to recognise intelligence per se.
I "dodged" that one as well... specifically the cybernetics course at Reading. In the end I avoided AI as much as I could because I quickly considered that none of what was being taught as AI was in fact AI: at best it was Logical Reasoning.
As for Professor Warwick, I consider that he's a very good promoter of the subject, rather over-enthusiastic at times, and he does, in his own way, raise the profile of a lot of interesting problems that could do with being raised - for example, the boundaries between human and machine. Eccentric, outspoken, often technically wrong, but largely harmless.
It would be interesting if after all this time he could be persuaded to directly speak with El Reg...
"He installed a chip in his arm, for instance, and claimed that he had become the advanced guard of the Terminators thereby."
I've installed lots of chips in my stomach and bathed them in a special alcohol solution as a fuel source. Nothing's happened yet, so I think they need more fuel liquid.
You need to present your findings like an academic would.
Finish off the conclusions with the statement: More research is required.
You might also suggest some alternative directions for future research, different types of fuel mix for instance.
There's a whole lot of funding out there looking for good research projects; it can't find them and therefore gets spent on finding out why people with long hair prefer carrots to peas.
Finally tracked down something at least appearing to be an official statement: (from http://turingtestsin2014.blogspot.co.uk/2014/06/eugene-goostman-machine-convinced-3333.html)
For people seeking transcripts of the conversations from the Royal Society tests, please note along with the Judges' scores these will be submitted in peer-reviewed scientific journals and conferences.
Along with this note from Captain Cyborg:
"As you might imagine we are yet to unravel the transcripts but when we do these will become available via the normal academic route through academic papers, with our commentary as support. When the papers appear so others will be able to examine the transcripts and see why 33.3% of the interrogators were convinced. We will most likely present each of the transcripts alongside their corresponding hidden human transcript as this is an important part of the tests."
Having read the transcripts of the conversations with the bot, it seems extraordinary that anyone could imagine they were human. But what I've not seen are the transcripts with the real 13-year-old Ukrainian against which they must have been compared. Right? Oh, hang on. Hmmm. Yeah, so I guess perhaps there wasn't a "control" human at all, making the "experiment" entirely useless.
Perhaps it really tells us more about the abilities of the judges than of the judged - after all, a panel of chimpanzees could only be expected to find the humans with 50% accuracy. For the results to be significant and worthy of reporting they would have to be repeatable.
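A quick back-of-envelope check of that chimpanzee point, in Python: if each judging session were a coin flip, how often would at least a third of them come up "convinced"? The session count here is an assumption for illustration; the exact number at Reading isn't given in this thread.

```python
# Probability that at least a third of n coin-flipping judges are
# "convinced" the machine is human. n = 30 is an assumed, illustrative
# session count, not a figure taken from the event itself.
from math import comb

def p_at_least(k, n, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 30
k = -(-n // 3)  # smallest whole number of judges reaching one third
print(f"P(at least {k}/{n} convinced by pure guessing) = {p_at_least(k, n):.2f}")
# roughly 0.98 - a guessing panel clears the 33% bar almost every time
```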
When benchmarking against a modern-day 13-year-old, would you pre-filter out the roughly 50% of the words that are simply "like", or would you leave them in?
... if it was Imperial it'd be derived from the accepted "slack handful", as used in any self-respecting ironmongers. Although on that basis, claiming more than two slack handfuls might be considered overly boastful, though not out of step with anyone claiming that this stunt worked 110%...
Unfortunately, no. My dogs, Emergency Meal I & II, are microchipped. In the course of my experiments into increasing the shelf life of self-propelled emergency ration transporters, I have exposed them to massive amounts of radiation and it hasn't been overly exciting.
Unfortunately, my veterinarian now refuses to deal with the dogs. She says the tissue and fluid samples she had tested at three independent labs indicate the dogs are what she calls 'The Not'.
According to her, not only do the dogs not have a single bacterium on, or in, them, but the tissues have nothing required for something to be alive. In fact, the tissue indicates they have been dead for several centuries. I know that's bullshit, I got them as pups. Stupid superstitions.
At any rate, if your experiments go like mine it's entirely possible you'll be able to bring forth an immortal abomination that will roam the land for all time, carrying with it despair, madness, pestilence and conflict. As a bonus, you, and those you choose to share the knowledge with, will be able to receive advance warning, as the tracking chips seem to be completely impervious to radiation as well as the blackest Macumba.
"And as theories go this was all very fine and pleasant until Veet Voojagig suddenly claimed to have found this planet, and to have worked there for a while driving a limousine for a family of cheap green retractable's, whereupon he was taken away, locked up, wrote a book and was finally sent into tax exile, which is the usual fate reserved for those who are determined to make fools of themselves in public.”
Douglas Adams
A truly gifted futurologist who foresaw the coming of Kevin Warwick and his ilk.
Back in 1987 or so there was a public domain program for the CBM Amiga which would attempt to learn whichever language you used to talk with it. It was a completely blank slate, and every single word association/pattern repetition and grammar rule was learnt from the user's input.
It was an interesting exercise, albeit a simple one. The main thing I learnt, though, was that the machine AI and I both shared an appreciation for Kylie Minogue.
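No idea how that Amiga program was actually implemented, but the blank-slate flavour is easy to sketch: a bigram learner in Python that starts knowing nothing and picks up word associations purely from what you type at it.

```python
# Sketch of a blank-slate word-association learner (a bigram model).
# It knows nothing at startup; every association comes from user input.
import random
from collections import defaultdict

followers = defaultdict(list)  # word -> words seen immediately after it

def learn(sentence):
    words = sentence.lower().split()
    for a, b in zip(words, words[1:]):
        followers[a].append(b)

def babble(seed, max_words=8):
    out = [seed]
    for _ in range(max_words):
        candidates = followers.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

learn("i should be so lucky")
learn("i should know better by now")
print(babble("i"))  # e.g. "i should know better by now"
```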
As part of the Turing Test instead of a Q&A, how about an interrogation scenario?
To pass the test the AI would have to be able to adopt a criminal persona and lie to an interrogator. It would also have to display the characteristics of stress which humans suffer in these situations. This could be measured by voice stress analysis and compared to an identical human-to-human interrogation to see if the "spikes" are similar.
Here's an awful, awful interview with Cap'n Cyborg done by the IET in 2011, when Warwick was plugging a new book.
http://www.theiet.org/membership/member-news/27/kevin-warwick-interview.cfm
He occasionally does lectures for them too (and BCS membership probably gets you in as well). I'm tempted to go to one to ask him if he's playing a bad joke, trying to expose the abysmally low standard of understanding of science in the UK in general, and in the mainstream media in particular.
Useless organisation, the IET. Please can I have my IEE back.
The term 'Artificial Intelligence' is so ill-defined as to be near-useless.
We have only recently come to accept that 'real' intelligence comes in many forms. To me, 'intelligence' is the capability to take input and process it in such a way that it is understood.
People who have Asperger's can display a lack of cognitive empathy - part of what is sometimes generalised as 'emotional intelligence'. It fits well here because the inputs are there but the person cannot really process them in such a way that provides actual understanding.
Likewise there are people (some of those on the 'spectrum') who don't understand facial expressions. They can be taught to recognise certain expressions and develop rules but these will be inflexibly applied and so miss subtlety and context.
That is very much like what happens with most AI projects - they can be taught (programmed) rules, but these get applied without any real understanding. Thus you get the odd responses that even a non-native speaker wouldn't give.
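A sketch of that rules-without-understanding failure mode, ELIZA-style, in Python. The patterns are invented; the point is that keyword matching produces fluent replies right up until it produces tone-deaf ones.

```python
# ELIZA-style pattern matching: rules applied with no understanding.
# The patterns below are invented for illustration.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # canned fallback when no rule fires

print(reply("I feel happy today"))        # plausible enough
print(reply("My grandmother just died"))  # fluent, but tone-deaf
print(reply("The building is on fire!"))  # "Please go on." - no clue
```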
The question, of course, comes to what it is to 'understand' something.
For much 'intelligence', I believe that understanding is rooted in our ability to place things into the context of our own experiences. For language, we build up this context gradually, starting from a very young age. It's a vast and complex structure with fluid links and convoluted dependencies. New experiences may subtly redefine our understanding or connection with certain words because those words are really just labels for concepts.
There is a similar requirement for the ability to process images. When you see an image you can identify the individual components - people, buildings, trees, cars, cats, etc... - because you have an understanding that each of these things can exist in isolation or in different contexts. We know that a sign is not part of a building because we have experiences of signs without buildings and buildings without signs, and know that signs are added to and removed from buildings.
The point is that our intelligence comes from our ability to place items correctly (or even inventively) in relation to others.
There may come a time when we can imbue a computer with this kind of knowledge - perhaps force-feeding it masses of raw data, such as somehow piping the whole of the Internet into it. Full texts of innumerable works; dictionary and thesaurus definitions; billions of photos and images; movie scripts and synopses; blogs; news; videos of cats on pianos, dogs on surfboards and people on each other; encyclopedia entries, research papers and textbooks; court transcripts; religious texts; product descriptions, political speeches and song lyrics.
It's amazing the breadth of things we humans process and remember and can draw upon to help us assess a familiar situation, interpret a new one, or even imagine fantastical and impossible things.
This is especially true in the interpretation of ambiguity. For example, when faced with the name 'Sam', I might assume a male but someone else - say with a wife named 'Samantha' - might assume a female. If the name was referring to a nurse then, having been in hospitals before, most people would assume female. If the context was "I took Sam for a walk" then we assume a dog.
One big hurdle with 'AI' is that many things that are obvious to most people are near-impossible to determine programmatically.
For example - how would you program a computer to identify stage directions in a play or script? Admittedly, that is apparently difficult for humans* as well . . .
* - Or demigods if you like.
Kevin Warwick has the most unbelievably boring, monotone Brummie accent you could possibly imagine. I was interested in one of the Royal Institution Christmas Lectures a good few years ago (2000), but after listening to him for 15 minutes I gained a very strong and strange urge to throttle myself in order to alleviate the suffering.
I can't imagine what it must be like to have to sit through his lectures.
During a recitation by their poet-master, Grunthos the Flatulent, of his poem ‘Ode to a Small Lump Of Green Putty I Found In My Armpit One Midsummer Morning', four of his audience died of internal haemorrhaging, and the president of the Mid-Galactic Arts Nobbling Council survived by gnawing one of his own legs off. Grunthos was reported to have been "disappointed" by the poem's reception, and was about to embark on a reading of his twelve-book epic entitled ‘My Favourite Bath-time Gurgles', when his own major intestine, in a desperate attempt to save humanity, leapt straight up through his neck, and throttled his brain.