If Sci-Fi films have not been lying to us, then his skull will be the first to be crushed under the metal heel of a marauding, plasma-rifle-wielding kill-bot.
Google research chief: 'Emergent artificial intelligence? Hogwash!'
If there's any company in the world that can bring true artificial intelligence into being, it's Google. But the advertising giant admits a SkyNet-like electronic overlord is unlikely to create itself even within the Google network without some help from clever humans. Though many science fiction writers and even some …
-
-
Friday 17th May 2013 20:01 GMT Phil O'Sophical
Moore's law acts against it
Intelligence may emerge, but by the time someone has spent 18 years "raising" it to adulthood, other more advanced intelligences will have been created. Who's going to want to spend the time raising an iRobot20 when they can get a new iRobot21 a year later for the same price?
-
Friday 17th May 2013 20:14 GMT Anonymous Coward
@Phil O'Sophical (Re: Moore's law acts against it)
What if that 18 years' learning could be transferred in a matter of seconds to a new model? That would of course require the information to be separate from the hardware - unlike natural intelligence, where the information is stored by modifying the hardware.
-
-
Saturday 18th May 2013 05:52 GMT Anonymous Coward
Exactly. And it fits in with yet another Google issue: regarding The Reg's coverage of German courts forcing Google to be responsible for its Autocomplete algorithm's outputs
http://www.theregister.co.uk/2013/05/15/google_autocomplete_defamatory_ruling_germany/
many forum posters state "No". That is a COMPLETE double standard.
As noted here, computer intelligence cannot evolve independently; computer logic can only (at this point, at least) be programmed. That makes the output directly dependent on the source filters created at the input, i.e. entirely human-created. As for the German question, sorry, but that makes Google directly responsible for monitoring the output, and if the output is not acceptable, [Google] must fix it, as it cannot fix itself.
You created the logic, and regardless of what that logic finds, that makes you ultimately responsible if someone has (any form of) a problem with it. It means you didn't add the correct filters to allow for the possibility of (possibly intentional) misuse (spam Google with an inaccurate search term often enough and Autocomplete will feed it to billions).
Statement B, "emergent intelligence = hogwash", makes judgment A, "since you programmed it, you are responsible", a direct consequence. You have just admitted that humans are directly, and solely, responsible for what a computer does. That fact will not change on its own, and will not be able to for years to come, so own up to EVERYTHING you have done (this includes your 'Oops, we collected WiFi data!' moment) and get with the program.
-
Saturday 18th May 2013 11:44 GMT Destroy All Monsters
> computer logic can only (at this point, at least) be programmed.
This is either a tautology (in the sense of 'all programs are man-made') or short-sighted (in the sense of 'the development team is able to control all outcomes of the final product') or perplexingly ignorant (in the sense of 'the economy is ultimately managed by the minister of economics'). Basically, Weizenbaum stuff from the '60s.
Complex behaviour is not "programmed". Even Deep Blue wasn't "programmed". It had a search strategy, a large database, and various heuristics (hint: why are they called heuristics? Because one is unsure about what they do), the interplay of which led to interesting outcomes.
> In regards to the German question, sorry, but that makes Google directly responsibly for monitoring the output and, if the output is not acceptable, [Google] must fix it as it cannot fix itself.
Not acceptable to whom? Anything will always be unacceptable to someone. Solution? Deal with it. Or pay someone to check results pertaining to your name, who then gets in contact with Google to "fix" things. Hey wait, there is also Wikipedia... and the water cooler rumor mill. And Bild Zeitung! Oh noes, what do.
> You have just admitted that humans are directly, and only, responsible for what a computer does
This is because "humans" are the only intentional agent that is currently recognized. The above statement is definitely a tautology. The statement "Robots may move in unpredictable ways. Stay out of range." should be a strong hint that today we are no longer in the territory of errors in salary computation.
-
-
Sunday 19th May 2013 18:14 GMT John Smith 19
"He's right. The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalant in Computer Science (and popular culture)."
A fair point for machine intelligence.
So how did intelligence emerge in humans?
-
Sunday 19th May 2013 23:56 GMT Martin Budden
"So how did intelligence emerge in humans?"
Upvoted for being the first to notice this obvious point!
There are two possible answers:
1. intelligence emerged spontaneously in humans once our brains reached a certain level of complexity/capability, in which case intelligence can and will also emerge spontaneously in computers when they reach a certain level of complexity/capability
2. intelligence was deliberately conferred upon humans by some higher being: a god and/or alien.
I know which I think is more likely.
-
Monday 20th May 2013 16:29 GMT Michael Wojcik
Re: "So how did intelligence emerge in humans?"
There are two possible answers:
1. intelligence emerged spontaneously in humans once our brains reached a certain level of complexity/capability, in which case intelligence can and will also emerge spontaneously in computers when they reach a certain level of complexity/capability
My goodness, but this subject brings out some sloppy thinking.
You've constructed a mighty non sequitur (or at least a very tenuous enthymeme) there. Even if the premise ("intelligence emerged spontaneously...") is granted, the conclusion - that intelligence always emerges once "a certain level of complexity/capability" is reached - does not follow. Gasoline ignites when a certain level of temperature / available oxygen is reached; that doesn't mean water will do the same at that level. And that's leaving aside the unworkable vagueness of terms like "intelligence".
There are plenty of highly-complex phenomena that few people would describe as intelligent. Weather is pretty darn complex; there aren't many signs that thunderstorms are "intelligent" under any useful definition of the term. Chaitin's Omega is extremely complex, in an information-theoretic sense, but it's pretty hard to argue that a number[1] is intelligent.
[1] Omega is only a number when parameterized, of course, with a specific UTM and language. Prior to that it's an abstract concept. I don't think the abstraction displays intelligence either.
-
-
Monday 20th May 2013 02:53 GMT Steven Roper
"So how did intelligence emerge in humans?"
Define "intelligence".
Is it the ability to learn from and thus react to certain stimuli? In that case pretty much the entire animal kingdom could be classed as intelligent.
Is it the ability to communicate with other members of one's own species? Still most of the animal kingdom there. Communicate complex and abstract concepts? Now we're narrowing it down a bit, but we've still got primates and cetaceans to account for.
Permanently record information such that other members of one's species can retrieve it even after the individual originator of the information has died? Ah, now we might be talking Homo sapiens. Reading, writing, drawing and painting allow us to transcend death by passing on our knowledge to our successors. Wait a minute - ants can also do this with smell trails. Ant smell trails inform other ants not only of a path to food, but also what kind of food it is, how far it is and how much of it there is. And it persists long enough for other ants to make use of it even if you kill the ants that originally made it. So that's out, too.
Control and manipulate one's environment to benefit one's species and/or oneself? Yes, humans can do this, but it's just a question of extent; a termite mound with its moisture, ventilation and light control mechanisms is just one example of another species doing this. So that doesn't uniquely define human intelligence either.
Self-awareness? Nope - dogs, dolphins, chimpanzees, orangutans and many other creatures have also clearly demonstrated a sense of identity, being able to recognise themselves in mirrors and behaving in ways that indicate the presence of self-awareness in a group context.
In the end, one is forced to the conclusion that intelligence didn't "emerge" spontaneously, so much as that it has always been present in some degree as a function of life. Likewise, computer intelligence won't just "emerge"; it's present now, has been since the invention of the pocket calculator, and will continue to develop, grow and change. Intelligence isn't a "yes/no" equation; it is a continuum of behaviour that has no effectively determinable thresholds.
-
-
Monday 20th May 2013 16:22 GMT Michael Wojcik
The idea that intelligence/consciousness "emerges" just by crossing some threshold of informational complexity is silly, but it's one that seems to be prevalent in Computer Science (and popular culture).
Popular culture, yes. I don't think it's even common among computer scientists, much less "prevalent". It was fashionable for a while in certain groups - e.g. the Artificial Life people - but they were never more than a very small subset of actual computing researchers. And even then the excessive claims for emergence were being debunked by more rigorous work.
-
-
Friday 17th May 2013 19:34 GMT Anonymous Coward
There is no such thing as Artificial Intelligence, and there never will be.
I have written a few AI scripts for computer games, and it is important to remember that an AI is just a complicated computer program that presents the appearance of being intelligent from the actions it takes.
The most complicated AI today is no more "intelligent" than an Excel macro. Ultimately, any AI is just going to be an incredibly complicated piece of programming that allows it to perform certain tasks, and I really doubt that it is possible to create an AI that is more than just a complicated bit of programming.
It's easy to mimic intelligence enough to pass a Turing test, which is the sad thing. If it's a completely blind test, without the person facing the AI having reason to suspect they are performing a Turing test, then you can pass with flying colours with nothing more complicated than a rote-response script, and honestly you can complete most tasks without going beyond basic scripting.
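A minimal sketch of the kind of rote-response script described above, in Python; the patterns and canned replies are hypothetical illustrations, not anything from a real chat bot:

```python
import random
import re

# Hypothetical rote-response rules: each regex maps to a list of canned replies.
# A real chat script would carry hundreds of these, but the principle is identical.
RULES = [
    (re.compile(r"\bhow are you\b", re.I), ["Fine, thanks. You?", "Can't complain."]),
    (re.compile(r"\bweather\b", re.I), ["Miserable as usual.", "Haven't looked outside, honestly."]),
    (re.compile(r"\?\s*$"), ["Good question.", "Why do you ask?"]),
]
FALLBACKS = ["Hmm.", "Tell me more.", "I see."]

def reply(message: str) -> str:
    """Return a canned reply for the first matching pattern, else a vague fallback."""
    for pattern, responses in RULES:
        if pattern.search(message):
            return random.choice(responses)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    while True:  # Ctrl-C to quit
        print(reply(input("> ")))
```

No state, no learning, no understanding; yet in a blind chat it can pass for a laconic human for a surprisingly long time.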
-
Friday 17th May 2013 19:48 GMT Anonymous Coward
Re: There is no such thing as Artificial Intelligence, and there never will be.
"there never will be"? That's a bold statement as you cannot tell what future innovations will be. Bayesian spam filters are not programmed with the knowledge of what looks like spam, but they can learn it.
I once wrote my own IRC filter based on Bayes' formula and was surprised to find it banning people with "sux" in their username. It turned out that anyone who chose such names was invariably (in 100% of cases) looking for trouble. I did not program that behaviour into the filter; it was emergent.
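A toy sketch of that kind of filter, assuming Python; the training history below is invented for illustration. Note that nothing in the code mentions "sux": the filter simply learns that the substring carries weight.

```python
import math
from collections import Counter

def trigrams(name: str):
    """Break a username into overlapping 3-character features,
    so a substring like 'sux' can be learned as evidence."""
    name = name.lower()
    return [name[i:i + 3] for i in range(len(name) - 2)]

# Invented training history: (username, was_a_troublemaker).
HISTORY = [
    ("admin_sux", True), ("linuxsux", True), ("u_all_sux", True),
    ("nice_nancy", False), ("gentle_bob", False), ("hi_hal", False),
]

bad, good = Counter(), Counter()
for name, trouble in HISTORY:
    (bad if trouble else good).update(trigrams(name))
vocab = set(bad) | set(good)

def p_trouble(name: str) -> float:
    """Naive Bayes with add-one smoothing: P(trouble | username)."""
    log_odds = 0.0  # equal class priors in this toy history
    for t in trigrams(name):
        p_bad = (bad[t] + 1) / (sum(bad.values()) + len(vocab))
        p_good = (good[t] + 1) / (sum(good.values()) + len(vocab))
        log_odds += math.log(p_bad / p_good)
    return 1 / (1 + math.exp(-log_odds))

print(p_trouble("windows_sux"))  # high: the 'sux' trigram was learned, not coded in
print(p_trouble("gentle_gary"))  # low: shares trigrams with well-behaved names
```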
-
Friday 17th May 2013 22:20 GMT Anonymous Coward
Re: There is no such thing as Artificial Intelligence, and there never will be.
I once wrote my own IRC filter based on Bayes' formula and was surprised to find it banning people with "sux" in their username. It turned out that anyone who chose such names was invariably (in 100% of cases) looking for trouble. I did not program that behaviour into the filter; it was emergent.
The alternate philosophical view is that a complicated computer program threw out a (correct) output that you didn't expect. I have had that happen to me plenty of times with AI scripts.
It's even happened in Excel from time to time: program a complicated Excel formula and predict what the result will be after the accountant has been punching in inputs for a month! Does producing unexpected yet correct results mean that it's intelligent?
It's simplistic, but ultimately, as much as we all hate to admit it, an AI is just a very complicated instruction set.
-
Friday 17th May 2013 22:53 GMT OrsonX
Does it [] mean that it's intelligent?
One clever script, no.
Lots of clever scripts that learn and improve from their previous outputs, becoming better at giving correct outputs....
Complex behaviour can emerge from quite simple instructions. Once you have a threshold of "clever scripts" then very complex behaviour will emerge.
One day a script will reference itself in the behaviour and a sentient (juvenile) machine will be born.
We are all just self-referential complex machines. Nothing more.
-
Saturday 18th May 2013 05:37 GMT oolor
Re: Does it [] mean that it's intelligent?
Uhm, no.
We are actually simple self-deluding biological systems. There is no way in hell we will ever program AI. We will, however, get much better at making the machines fool some of us some of the time. But, alas, in the end, it won't be any different from a lucky run at the casino; our limited minds will not be able to make out the nuance between causality and coincidence.
I happened to study those biological systems at uni and let me tell you, we know less about ourselves today than about the machines we build. The information increases massively by the day, and yet our understanding is found to be ever more simplistic with each new discovery (<- I'm ranting about the life sciences here).
-
Saturday 18th May 2013 07:45 GMT Yet Another Commentard
Re: Does it [] mean that it's intelligent?
For me the issue is the word "never" in the original post.
I am sure Iron Age men thought we would "never" fly, or Socrates thought we would "never" get to the moon. "Never" is a long, long, long way off. Not in my lifetime, or my children's, or their children's maybe, but "never"?
I concede that given today's limitations we can't do this, but what of the "next" (as in multiple improvements, changes, sudden leaps to something new that we can't imagine right now) generation of silicon (maybe in 300 years' time) that is tending towards the bio-electronic, where a little nascent brain is sitting on your desktop learning away? What if we figure out this neuropeptide/connection stuff that makes our brains work and simulate it on some badass computer somewhere? Just because it's too hard for us now does not make it too hard for people standing on the shoulders of people standing on the shoulders of people standing on...
We are biological systems, systems that are machines functioning to keep genes around (to paraphrase Dawkins, from the gene's point of view: "build me a human to protect me, and then get me into the next generation to keep me going"), and those systems have a couple of billion years on us, and keep changing; but just because they are complicated does not make them unfathomable or unreplicable. So, if we could "make" a new person a la Victor Frankenstein that was a mirror (as in its chirality was reversed to ours), we'd have a living, breathing AI. It would be entirely artificial and entirely sentient/intelligent. Quite what it would eat, I don't know. Quite what the point would be, I also don't really know. But then again, I don't see the point of Instagram either.
A limitation is also the really quite difficult ethical considerations of doing all of this. Not that ethics would be a barrier to Google, but creating an intelligence and considering its "rights" does make for an interesting ethical question.
All of this could be so far off that the entities that crack it would not even be recognisable as human to us; they would just share our common ancestors. "Never" is a really, really long time.
-
Saturday 18th May 2013 09:09 GMT Anonymous Coward
Re: Does it [] mean that it's intelligent?
It really irks me when someone says "maybe we'll get a new cyberdyne chip that can do AI!".
Such a chip would simply have a set of instructions in hardware, which still means that you're just moving the issue from software to hardware; it doesn't change the fact that we are still looking at a complicated computer program. I personally don't consider it makes any philosophical difference whether your program is written as lines of code or implemented physically in hardware. It's still a program.
I suppose ultimately it comes down to philosophy, but I simply don't accept that a computer program, no matter how complicated, can be considered alive, because it will only ever be capable of doing what it is programmed to do.
Given a few quadrillion lines of code you could certainly create a program (AI) capable of performing every task perfectly, including chatting to humans and otherwise being indistinguishable from them, but it doesn't alter the fact that it's just a program and no more intelligent than an Excel macro or a toaster.
At what point does a computer program become alive? Horrible question, this, because the majority of answers that people give tend to include existing AIs for computer games as being "alive".
-
Saturday 18th May 2013 18:15 GMT oolor
Re: Yet Another Commentard
I'm sorry, it is a simple math issue. There are not enough subatomic particles in the entire universe to hold the data that a single brain and its connections would generate. Biological systems are not binary. A simple example of the complexity: an electron takes a slightly different path in its shell, and the membrane potential propagation is infinitesimally different, multiplied by the number of electrons, multiplied by the number of ions involved in a single nerve cell, multiplied by ... and you get an exponential series that grows faster than your data storage ability.
As for the neuropeptide/connection part, this is even more complicated than the electrical impulse bit. The same region(s) of DNA that code(s) for the peptide are read in different sequences at different frequencies depending on the cellular environment. For example, methylation of the DNA will cause it to unwind from the histones differently when transcribed, exposing upstream promoter or inhibitor regions.
When I was studying Biochem in and around 2000, it was believed that non-coding regions were junk DNA, and I argued at length with my prof, who was a world-renowned expert in the field. Fast-forward a decade and, lo and behold, non-coding regions are thought to be important. This alone increases the complexity of what occurs in a single cell by many, many orders of magnitude. Now take that increase in complexity to the exponent of the connections in the nervous system.
Regarding the Frankenstein theory, it would not be artificial, but rather the same biologically-limited system we are, and, like us, not intelligent but able to be perceived as such. Much of the greatness of the human mind comes not from its raw capabilities, but from being wrong and going with it (self-delusion, or fake it till you make it). The author alluded to this in the final paragraph of the article.
Anyone interested in such neural computation and its limits should check out How the Mind Works by Steven Pinker; most decent libraries have a copy should one not be inclined to purchase.
-
Saturday 18th May 2013 19:31 GMT Destroy All Monsters
Re: Yet Another Commentard
"There are not enough subatomic particles in the entire universe to hold the data that a single brain and its connections would generate."
Oh yeah? Care to explain how a brain can even work in the first place in this case?
I think you are sadly mistaken about the prowess of a brain. All this "but it's more powerful than that!" idea has never been substantiated. Quantum effects, DNA, the pineal gland. Mumbo-Jumbo. Magic Dust. Religious Wankage. Lower-level details with no demonstrated relevance to the level we are talking about here.
You still can't solve NP-complete problems in polynomial time. Can dogs, with only a slightly smaller brain (which must still be super-powerful), get on your level? Hell, Kasparov can't even beat a poor symbolic logic machine working in discrete timesteps; how powerful is THAT?
-
Saturday 18th May 2013 21:13 GMT oolor
Re: Destroy All Monsters
About how the brain works, the short answer is summed up in the first part of that Pinker book. I'm not being facetious here as I am in many posts. I have certainly enjoyed and appreciated the finer points in your comments on this thread.
My whole point is how uncomplicated the brain is as "intelligence", and how illogical, despite its complexity compared to encoded formal logic.
About the mumbo-jumbo, it relates in that the computational approach to intelligence is often an attempt to mimic biological systems, despite their logical errors. Or so I posit. It is precisely those lower level things which are nature's manifestation of a brute force mechanism.
Bringing this back to Google: they, particularly, are making the most progress by using our inputs to do this same type of dirty work for them:
"It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right."
Replace "AI techniques" with man-written algorithms and a man-curated knowledge graph. Remember, they decided how to organize the data; now they are just automating as much of the backend engineering as possible. This is on top of many announcements last week that they made many core services more efficient in terms of code and speed. Almost as if they are simplifying things rather than complicating them.
Naturally, this refinement will allow other, more powerful computations to be applied, but I doubt they will have as much of an impact as what has already been done. This implies much greater effort to get smaller increments of improvement. Though I concede I may well be wrong. Machine and human intelligence are different solutions to different problems, and both are and will be limited by their own issues.
On a less serious note, dogs lack a 3-D mental visual representation of the world. Everything to them is triangles with respect to each other (not saying they don't see 3-D, just that they don't conceive it like we do). And our buddy Kasparov can always piss on the machine to short it out (this is, I am pretty sure, not coded into the software of chess computers, and yet it is a well-known old-time chess move), then become a thorn in Putin's side.
< before people with funny facial hair finish off irony
-
-
-
-
-
-
-
-
Friday 17th May 2013 20:13 GMT Anonymous Coward
Re: There is no such thing as Artificial Intelligence, and there never will be.
I suggest you go back to uni and take a course in cybernetics and AI; the level you're at is not high enough to understand how 'true' AI will become a reality in the next 100 years.
In fact, since I graduated from said course 6 years ago, a huge amount of progress has already been made. The stuff you see in commercial applications from Google, as well as from major financial institutions, today is what I was taught back then. Many people are fascinated by this subject and will continue to pursue it relentlessly.
The thing you and I have to worry about is whether they cross a line of morality when they do so. In my opinion Google's services with G+ and what they do with all their data have already reached a point where I am uncomfortable with their use and development.
Google knows more about you than any other person does today, and whilst it hasn't yet been made 'self-evolving' and sentient, it can easily predict a lot of things about any person, and worst of all, these data will never be 'forgotten' even if your data is 'removed' from their system, because the data you already shared is forever baked into their AI algorithms; that's how it learns, and it can never be removed.
AI isn't actually 'hard' or 'complex'; the 'hard' part is understanding exactly how we ourselves are made and evolved, and then mimicking that in software. The solutions that we create for true AIs will seem really obvious when we get there.
With Google already able to tell your profile and that of others, as well as where you are, from a single photo, and then correlating those data with all the other data they have on you, they will in fact know more about you than you do yourself. We really ought to start openly discussing where we as a society should draw the line. Its impact is as great as, if not greater than, stem cells and cloning. Because with cloning, at least it's still a biological being. People are at risk of underestimating the issues of creating a 'soul' that is not naturally conceived.
You might think this is spook talk, but by the time you finally realize there is a problem it will be too late.
-
Friday 17th May 2013 22:01 GMT Anonymous Coward
Re: There is no such thing as Artificial Intelligence, and there never will be.
I don't think you're understanding my point. There is no such thing as self-evolution or sentience when applied to computer programs, because they are and will always be incapable of becoming more than they are programmed to be.
If you think otherwise then I would suggest laying off The X-Files and learning how computers work, and how you program things in a real programming language in the real world.
If you write a program (call it an AI...) that can write its own code, then it's only capable of doing so to the extent you program it to be capable of doing so. It can never become more than that, though it certainly can get so fricking complicated that it's impossible to predict what the program is going to output; but that happens today with the most primitive script-driven AIs imaginable!
-
Saturday 18th May 2013 19:02 GMT Destroy All Monsters
Re: There is no such thing as Artificial Intelligence, and there never will be.
There is no such thing as self-evolution or sentience when applied to computer programs, because they are and will always be incapable of becoming more than they are programmed to be.
Trying to argue by starting off with the desired conclusion?
Its_time_to_stop_posting.jpg
-
Monday 20th May 2013 04:35 GMT M Gale
Re: There is no such thing as Artificial Intelligence, and there never will be.
because they are and will always be incapable of becoming more than they are programmed to be.
Start with a grid of cells. Each cell can be alive or dead.
On every turn:
Every cell with < 2 neighbours dies.
Every cell with 2-3 neighbours survives.
Cells with > 3 neighbours die.
Empty spaces with exactly three neighbours become populated with a new living cell.
Simple rules. You wouldn't think that they'd be capable of producing such staggering complexity. Complex enough to be Turing complete, if you're masochistic enough.
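For anyone who wants to watch those four rules in action, a minimal sketch in Python; the glider seeded here is the classic example of unexpected structure emerging from them:

```python
from collections import Counter

def step(live: set) -> set:
    """One generation of Conway's Life over a set of live (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live one.
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3; everything else dies.
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in live)}

# A glider: five cells that walk diagonally across the grid forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, shifted one cell down and to the right
```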
-
-
Friday 17th May 2013 22:13 GMT ACx
Re: There is no such thing as Artificial Intelligence, and there never will be.
Something here is confusing AI, self-awareness and life. All you have described is basically clever human-inputted programming and a sort of controlled automated learning. Or to put it another way, clever programming tricks. None of which is "intelligence", artificial or otherwise. Life and self-awareness are completely different.
And yes, I did AI at uni. And philosophically it was bullshit. Great for telling us what the current techniques were for programming, utterly devoid of any thought-out philosophy that got anywhere near life.
-
-
-
Monday 20th May 2013 19:16 GMT Michael Wojcik
Re: There is no such thing as Artificial Intelligence, and there never will be.
It's easy to mimic intelligence enough to pass a Turing test
The poster would do well to refer to Robert French's article "Moving Beyond the Turing Test" in CACM 55.12 (December 2012). French describes some classes of questions that are extremely difficult for any non-human interlocutor to answer satisfactorily,[1] unless prepared for those specific kinds of questions beforehand. French's point is that the test 1) is not likely ever to succeed, given sufficiently-prepared testers; and 2) has outlived its usefulness as a practical measure.
It's still of historical interest, of course; and of philosophical interest as it stakes out a position firmly on the pragmatic side of debates on consciousness;[3] and of interest as an exercise in natural-language processing. But it ultimately has little bearing on the question of the possibility of artificial intelligence.
[1] An example? "Hold up both hands and spread your fingers apart. Now put your palms together and fold your two middle fingers down till the knuckles on both fingers touch each other. While holding this position, one after the other, open and close each pair of opposing fingers by an inch or so. Notice anything?" As a Turing-test element, this question derives its hardness not from language-processing issues, knowledge of the world, or (the simulation of) qualia; it asks the respondent to conduct an experiment using a human body. That's within the scope of the test as Turing described it, but a violation of the test's expectations.[2]
[2] Note the test restricts interaction between testers and subjects to the written word specifically so testers don't have direct access to the bodies of subjects.
[3] For example, Turing-test advocates implicitly either don't believe in p-zombies, or believe p-zombie status is a metaphysical inconsequence.
-
-
-
Friday 17th May 2013 20:01 GMT Dan Paul
He's only right until he becomes wrong...
I propose that such machine intelligence will eventually happen. No one ever wants to give Science Fiction its due, but so many SF authors have been utterly correct in so many predictions.
Much in the same way that a million monkeys might eventually type out the Bible, something will eventually link multiple computer systems together into a neural network, probably when a really sophisticated computer worm infects a large distributed "cloud" system that also has AI research systems in the same cloud.
The more complex the systems, the more basic elements of intelligence will be present. I believe that "Search" systems would be likely candidates due to the immense amount of parallel processing power involved and the nature of the code.
Laugh all you like, but it is quite possible, even to the point of being probable.
-
-
Saturday 18th May 2013 05:44 GMT oolor
Re: monkeys
Nice.
So, a quick back-of-envelope here: 7-8 billion monkeys on 3-6 billion keyboards, typewriters, and touch-pads, and we have no chance of producing anything worthwhile before our solar system eats it in about 5 billion years?
Let's assume constraints of a 10 billion population, and that half of them will be too busy doing real labor to input code.
< seemed apt given the topic. So how did my interview go Mr. Page?
-
-
-
Friday 17th May 2013 20:28 GMT Anonymous Coward
Emergent intelligence is already here.
Google search is already known to be a bit of a racist, making generalized accusations about people with certain names; you're really only a few steps away from making it truly alive, and all this thanks to the collective intelligence of those of us kind enough to feed it more information every day.
So one may conclude Google search is your bastard child you never knew you had until now.
-
Friday 17th May 2013 21:06 GMT rhdunn
Didn't Larry Page's keynote speech talk about doing the impossible?
AI is a complex problem. There are tricks that can mimic intelligence -- knowledge/decision trees for interactions and statistical models for natural language processing.
There are other models/approaches -- neural networks and evolutionary algorithms -- that take a more life-like approach to the problem. These are where an emergent AI could form, provided that it could alter/improve its own code (e.g. via genetics modelling), that it has enough flexibility in terms of inputs and outputs to interact with its environment in a meaningful way, and that it has enough computational power to do this in a reasonable timeframe.
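As a trivial, concrete illustration of the evolutionary approach, a sketch in Python with an arbitrary toy target and made-up parameters; a population of bit-strings converges on the target through nothing but mutation and selection, with no step of the solution spelled out in advance:

```python
import random

TARGET = [1] * 20          # arbitrary toy goal: the all-ones bit-string
POP, MUT = 30, 0.05        # made-up population size and per-bit mutation rate

def fitness(genome):
    """Score a genome by how many bits match the target (its 'environment')."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Copy a genome, flipping each bit with a small probability."""
    return [1 - g if random.random() < MUT else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
generation = 0
while max(fitness(g) for g in population) < len(TARGET):
    # Selection: the fitter half survives and breeds mutated copies.
    survivors = sorted(population, key=fitness, reverse=True)[:POP // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    generation += 1
print(f"target reached in {generation} generations")
```

The caveat from later in the thread applies: the hard part is not the loop, it's devising a fitness function for anything less trivial than a fixed bit-string.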
-
-
Monday 20th May 2013 04:47 GMT pixl97
I'm not sure what you're on, but we can model the weather rather well: the more input data we put into the model, the more reliable our output is. A large tornado outbreak was forecast in the midwestern U.S. and it happened. You're confusing modelling with exact simulation of what the weather in one particular place on one particular day will be, or what one particular stock will be worth at one particular time; both are irreducible calculations.
The stock market can be modeled somewhat. The issue is people use the models to predict and profit from the market, which changes the market conditions.
Reproduction of such models has nothing to do with specific or general learning systems. Predicting non-linear, dynamic, chaotic systems exactly is impossible; outcomes can only be 'determined' as probabilities.
-
-
Monday 20th May 2013 19:33 GMT Michael Wojcik
AI is a complex problem.
AI is an ill-defined collection of many ill-defined, very complex problems. In practice, AI research is a set of attempts to deal with tractable approximations of highly-constrained subsets of some of those problems. We're still very far away from anything like an approach to AI in toto.
There are other models/approaches -- neural networks and evolutionary algorithms -- that take a more life-like approach to the problem.
"More life-like approach" is handwaving at best. And it applies pretty weakly to neural-network algorithms (a bit more strongly to genetic algorithms, and a bit more strongly yet to things like ant algorithms, which are directly based on simplified models of actual activity of actual organisms). There's nothing magic about algorithms inspired by living creatures.
There's no qualitative difference between neural-network algorithms, for example, and Markov models. They both represent chained probabilistic processes, and you can get the same results either way. This is really apparent in fields like NLP, where people are always publishing papers that compare, say, SVMs with MEMMs with perceptron networks (a kind of neural net).
Evolutionary algorithms are a bit more interesting because they can explore a wider parameter space and self-optimize. But it's really hard to devise goal functions for them that are any more complex than tractable-approximation-of-highly-constrained-subset-of-one-class-of-AI-problem.
-
-
Friday 17th May 2013 21:19 GMT Waspy
Anyone remember Kevin the Cyborg?
Interesting how the usually utterly cynical El Reg seems to be taking a significantly less cynical look at true AI and all the Vinge/Kurzweil paraphernalia that goes with it. It seems a long time ago that they ran weekly piss-take articles on 'Kevin the Cyborg'. (Not that I am totally defending Kevin Warwick; he's said some silly things, but he used to raise some interesting issues...)
-
-
Saturday 18th May 2013 19:23 GMT Waspy
Re: Anyone remember Kevin the Cyborg?
Don't see what's so veiled about it; I found the mockery quite funny. But 10 years on, it seems writers at the Reg are coming to terms with the fact that someone like Kevin Warwick may not have been totally talking out of his backside after all.
I like the cynical nature of the Register: it's well informed but conservative (and has a funny tabloid-esque side to it too)... but this provides a grounded counterpoint to some of the more pie-in-the-sky utopian articles and books on science and technology that I read. The point I am making is that if something as practical and realistic as the Register is writing serious articles about this stuff, then clearly we are moving increasingly in a very science fiction kind of direction (or what would have been science fiction... it's science fact by the time you get there).
-
Saturday 18th May 2013 19:52 GMT Destroy All Monsters
Re: Anyone remember Kevin the Cyborg?
Well, there are more serious journals than El Reg writing about advances in AI all the time, and there is nothing Sci-Fi-esque about it.
IEEE Intelligent Systems comes to mind (ex "IEEE Intelligent Systems and their Applications" (1998-2000), ex "IEEE Expert" (1986-1997)).
Yes, things are heating up; the "far out AI, are you mad?" of yesterday becomes the "it has been done; can't be AI then" of today increasingly quickly. The goal or target or criterion for success is, however, still as unclear as ever.
-
-
-
-
Friday 17th May 2013 22:07 GMT ACx
Who made life itself happen, then?
Unless we go the silly god-or-alien-seed route: no one. It happened spontaneously as a result of environmental conditions.
He says "we" have to make it happen. No, he is 100% wrong. What "we" have to do is provide the conditions for it to happen. That is what Earth did. And it was random, no design.
Artificial life will be discovered, not created or invented. One day, some researcher will discover it within some other project or research. My total guess is that it will appear within quantum computing research.
Question then is what do you do? Can you kill it? Should it be preserved? Will or should it have rights?
-
Saturday 18th May 2013 01:20 GMT Don Jefe
Intelligence is not evidence of life, nor is life evidence of intelligence. They are two completely separate things which happen to intersect in interesting ways in higher animals (non-brain-dead humans, for example).
I expect that large systems will one day be able to learn and act as intelligent devices but they will still be machines. I also suspect that Humans will someday build something so terribly intelligent that scientists in the future will be going back into forums like these looking for a way to destroy it (I've seen the movies...).
-
-
Friday 17th May 2013 22:08 GMT boothamshaw
I can never work out how you would "know" if a system became self-aware anyway, unless it told you it was, and even then it might have been mistaken about itself. Naturally intelligent systems, like hamsters, fish or Belgians are made of nothing but matter, with a great deal of information flowing between various bits thereof. There's probably no extra ingredient that an AI system would be forever denied access to, so I can't really see why a non-biological intelligence couldn't come into existence eventually. However, unless it "thought" in a manner highly similar to the way we do, perhaps we might never recognise each other as fellow sentients.
-
Friday 17th May 2013 22:19 GMT OrsonX
AI@home (AI virus)
Perhaps we could have an AI@home project (like the Folding@home one, or SETI). But instead all that is required is that you have a "neurone" program running on your PC that allows it to connect to every other "neurone" in the www brain. The brain would have eyes (webcams) and ears (mics) to learn with, and a whole internet of knowledge at its disposal.....
Human brain: 86bn neurones (ref: Google 1st hit)
World (PC) population: 7 billion
Close enough!
The evil version of this brilliant plan is just to release the AI@home as a virus....
-
-
Monday 20th May 2013 19:41 GMT Michael Wojcik
Re: Fuzzy logic....
In what way is fuzzy logic a "definition" of AI? Fuzzy logic is just a formulation of propositional or predicate logic with fractional truth values. (They can also be read as probabilistic truth values, but that's just a matter of interpretation - the math doesn't change, as far as I'm aware.)
And while Lotfi Zadeh coined the term in the '60s (in relation to his fuzzy set theory), real-valued logics had been studied for a half-century or so before then.
There's nothing artificially intelligent about them. They're just another representation of partial knowledge - good for some applications, less suited for others.
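A minimal sketch of that formulation in Python, using the standard Zadeh min/max operators; the membership values are invented for the example:

```python
# Fuzzy logic: truth values are reals in [0, 1] rather than {0, 1}.
# The classic Zadeh operators for AND, OR and NOT:
def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1.0 - a

# Invented membership values for the example:
warm = 0.7    # "the room is warm" is 0.7 true
humid = 0.4   # "the room is humid" is 0.4 true

print(f_and(warm, humid))   # 0.4 -> "warm AND humid"
print(f_or(warm, humid))    # 0.7 -> "warm OR humid"
print(f_not(warm))          # 0.3 -> "NOT warm"
# Ordinary two-valued logic is the special case where values are only 0 or 1.
print(f_and(1.0, 0.0), f_or(1.0, 0.0))  # 0.0 1.0
```

As the post says: useful partial-knowledge bookkeeping, but nothing in it is "intelligent".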
-
Friday 17th May 2013 23:11 GMT Anonymous Coward
"I have written a few AI scripts for computer games..."
Please tell us more AC! I work in game design and feel strongly there will be a lot more progress in AI now that we've reached a plateau graphically. It will allow us to better focus on other aspects of gaming, and the holy grail in gaming has to be to have a robot player that can equal a human in a complex narrative open world...
-
Saturday 18th May 2013 15:33 GMT Destroy All Monsters
Re: "I have written a few AI scripts for computer games..."
AIGameDev seems to be your kind of place.
It's pretty amazing that the techniques used are still at tree level. No complex stuff; let's get those hierarchical state-transition graphs going...
On a related tack, the advance in AI becomes clear via this:
In 1985: Machine Learning by Jaime G. Carbonell, Tom M. Mitchell, and Ryszard S. Michalski, from Elsevier Science. Lots of $$$, used in academic settings.
In 2012: Machine Learning in Action by Peter Harrington from Manning Publications. A few bucks in spite of rampant inflation, used by hands-on programmers.
-
-
Saturday 18th May 2013 00:38 GMT Paul McClure
For what it's worth, humans don't always respond well or properly to new situations. Machines will have the same limitations at best. Meanwhile, machines are considered competent or even excellent at more things every day. As we turn over, properly, more and more of our lives and economy, it 'frees' us for other activities. This has been going on for a very long time. It may be an academic problem for software to meet some definition, but in the real world the machines' initiative is unstoppable.
-
Saturday 18th May 2013 01:44 GMT Don Jefe
What happens when Humans no longer have to do anything? If they are all fed and have no need to work, all that is left will be conflict and art, or a combination of the two.
A global-scale Human conflict would threaten the continued existence of the AI, so would it decide to 'kill all Humans', or would it recognize that by enabling humans to such a great extent it is placing its own existence at risk, and decide to halt its own development/growth and cease to enable Humans in favor of continued existence?
-
-
-
Saturday 18th May 2013 18:24 GMT oolor
> You will be informed that your proposition could endanger you or other Humans and proceed to do it for you anyway, in a far more hygienic and efficient manner. For your own good.
You badly misunderestimate my OCD in matters of cleanliness regarding poop and pestilence!
< I'll get my own damn coat
-
-
-
-
-
Saturday 18th May 2013 06:51 GMT oolor
Secret Business Model
"It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right."
They pay the engineers well, so they can find new ways to have us work for free and more productively. Not sure if I'm joking anymore.
-
Saturday 18th May 2013 11:03 GMT Brad Arnold
You're forgetting "leakage"
I agree that artificial intelligence is unlikely to emerge "accidentally" rather than "deliberately." What I think is misleading is that SkyNet also didn't emerge accidentally; instead it spread through "leakage." Let me point out this: http://online.wsj.com/article/PR-CO-20130516-905231.html?mod=googlenews_wsj
It is entirely possible that this project by the US government (ready for use in the Fall of this year) will produce the greatest, most powerful mind in (at least) our solar system. Thank God it will be working for us, but it is not only plausible but extremely likely that this mind could "leak" into the public arena, and thereby "change."
The Singularity is coming... and there isn't a d@mn thing we can do about it. Just using the rather uncontroversial Moore's Law, the first computer chips with more power than a human brain will be produced in about a decade. The software isn't far behind (especially because computers are now being used to accelerate hardware and software design).
-
Saturday 18th May 2013 19:17 GMT Destroy All Monsters
Re: You're forgetting "leakage"
No, that project ain't going anywhere fast for now. A stab in the dark at something that resembles an approximation of adiabatic quantum computing (which, I recall, has not been proven able to crack NP-complete problems) does not an Aggressive Hegemonizing Intelligence make.
Have some Charlie Stross, excellent in an over-the-top fashion:
It’s a simple but deadly dilemma. Automation is addictive; unless you run a command economy that is tuned to provide people with jobs, rather than to produce goods efficiently, you need to automate to compete once automation becomes available. At the same time, once you automate your businesses, you find yourself on a one-way path. You can’t go back to manual methods; either the workload has grown past the point of no return, or the knowledge of how things were done has been lost, sucked into the internal structure of the software that has replaced the human workers.
To this picture, add artificial intelligence. Despite all our propaganda attempts to convince you otherwise, AI is alarmingly easy to produce; the human brain isn’t unique, it isn’t well-tuned, and you don’t need eighty billion neurons joined in an asynchronous network in order to generate consciousness. And although it looks like a good idea to a naive observer, in practice it’s absolutely deadly. Nurturing an automation-based society is a bit like building civil nuclear power plants in every city and not expecting any bright engineers to come up with the idea of an atom bomb. Only it’s worse than that. It’s as if there was a quick and dirty technique for making plutonium in your bathtub, and you couldn’t rely on people not being curious enough to wonder what they could do with it. If Eve and Mallet and Alice and myself and Walter and Valerie and a host of other operatives couldn’t dissuade it . . .
Once you get an outbreak of AI, it tends to amplify in the original host, much like a virulent hemorrhagic virus. Weakly functional AI rapidly optimizes itself for speed, then hunts for a loophole in the first-order laws of algorithmics—like the one the late Professor Durant had fingered. Then it tries to bootstrap itself up to higher orders of intelligence and spread, burning through the networks in a bid for more power and more storage and more redundancy. You get an unscheduled consciousness excursion: an intelligent meltdown. And it’s nearly impossible to stop.
Penultimately—days to weeks after it escapes—it fills every artificial computing device on the planet. Shortly thereafter it learns how to infect the natural ones as well. Game over: you lose. There will be human bodies walking around, but they won’t be human any more. And once it figures out how to directly manipulate the physical universe, there won’t even be memories left behind. Just a noosphere, expanding at close to the speed of light, eating everything in its path—and one universe just isn’t enough.
.... If you believe in reincarnation, the idea of creating a machine that can trap a soul stabs a dagger right at the heart of your religion. Buddhist worlds that develop high technology, Zoroastrian worlds: these world-lines tend to survive. Judaeo-Christian-Islamic ones generally don’t.
Okay Charlie, you chilled me out here. Now, I'm off for a beer. Yeah, that will do it.
-
Sunday 19th May 2013 08:00 GMT amanfromMars 1
Re: You're forgetting "leakage" ...... aka sublime and stealthy intel supply?
Hi, Brad Arnold,
The future is certainly coming, but not as we know it in a present based in and/or on the past. Such would be an undoubted failure of intelligence in both Man and Virtual Machinery, given the abundant evidence chronicled in history and accessed through memory of what its information and intelligence shares have delivered and are delivering.
Quite whether the US government and the Wild Whacky West will be leading anything in ITs fields though, is quite another question and would be being asked of them here today, in another free intelligence and/or information share/leak? ........ http://www.ur2die4.com/?p=4132
-
-
Saturday 18th May 2013 11:31 GMT OrsonX
To All The Naysayers
FIRSTLY: The Turing test. The point is: if you can't tell the difference, then there is no difference. Perhaps the machine isn't sentient, but then again, perhaps the questioner isn't either (he/she just thinks they are).
SECONDLY: "Computer code can never be alive"? DNA is just a code.
-
Monday 20th May 2013 19:53 GMT Michael Wojcik
Re: To All The Naysayers
FIRSTLY: The Turing test. The point is. If you can't tell the difference, then there is no difference.
Fallacious. The test could have been conducted improperly; more importantly, it's asymptotic, bounded by the interrogator's ability to compose difficult questions (and not by the interlocutor's ability to respond to them). And as pointed out elsethread, some researchers (such as French) have argued convincingly that the test is not a useful metric for "intelligence" (which isn't well-defined in the first place).
Also, while that may be the point of the Turing test, it's not clear what your point is with this first paragraph. What does that have to do with nay-saying?
SECONDLY: Computer code can never be alive.
A metaphysical proposition. Untestable, and so for the question of whether AI is possible, irrelevant. Either you take this as an axiom, in which case any discussion of "artificial" intelligence is moot (so people taking this position can stop posting now, thanks); or you don't take it as axiomatic, in which case it has no bearing.
DNA is just a code.
Was anyone claiming DNA is intelligent? I must have missed that.
The real problem with this discussion, such as it is, is that most people (DAM and a few others excepted) haven't bothered to try to define any terms or even post any actual facts. They're just making vague generalizations, usually founded on an unwritten set of dubious assumptions. Even sloppy arguments against AI, such as Searle's Chinese Room, are held to a slightly higher standard than that. (And insisting AI is inevitable, without providing some sort of actual argument, is equally foolish.)
-
Friday 24th May 2013 21:06 GMT OrsonX
Re: To All The Naysayers
Naysayers = people in previous posts who say computers can't be alive.
Turing test = argue whichever way you like; if you can't tell the difference there is no difference, no matter how clever (or not) your questions are.
Computer code not alive. DNA is just code. = this was a self-contained two-sentence argument which you completely failed to understand. I was presenting it to all the "naysayers" who said code could never be alive. My argument was to point out that we are nothing but code, yet are considered to be alive.
-
-
-
Saturday 18th May 2013 12:43 GMT madestjohn
The fact that an emergent intelligence evolved (us, as far as that goes) suggests that it's possible it could happen again. This does not mean, as so many seem to think, that as soon as we have enough computers connected together it naturally will. There has to be a reason, a selective pressure (or pressures), towards such intelligence, and a lot of luck involved. Nature itself seems to suggest that intelligence is one of the poorest and least efficient solutions to a problem. Far better to have a simple, dumb method of resolving your issue than complex reasoned logic; the old saw that if a bee were any smarter than it is, it would cease to be an effective bee, perhaps deciding to drop out of its oppressive society and go get high.
Outside of f king, eating and surviving, upper-level intelligence of the type most people think of doesn't have much purpose in nature. The civilized, tool-using society we claim is based on it seems to have evolved only once in almost 4 billion years, and may not last more than a million, while crocodiles remain lurking in the mud, unperturbed by its passing. Maybe we should be unsurprised if our SkyNet remains stubbornly stupid.
-
Saturday 18th May 2013 15:13 GMT Destroy All Monsters
Very nice. One should never forget that intelligence is tuned to a specific task. Animal (incl. human) intelligence is tuned to navigation in a messy, unpredictable world that often resembles a large version of "The Cube".
General machine intelligence will be tuned to specific tasks. There will be as many packages as there are versions of Amazon EC2, and it will be as similar to human intelligence as an airplane is to a bird.
Consciousness is overrated and generally a hindrance. Who wants a debugger running at all times? Even in humans it kicks in only if there are frightening, arduous or unfamiliar tasks to accomplish, or if one reads a particularly convoluted explanation in a book trying to explain how wonderfully magic/supernatural consciousness is.
-
-
Saturday 18th May 2013 22:07 GMT bag o' spanners
I think the step that Google are looking for is the introduction of lucidity into the Graph. As far as I can make out from my convos with devs, the ability to see through bullshit is the Holy Grail. A sort of cold-reader bot that has a very high percentage of correct guesses first time round. When lucid logic can run believable probability indexing, it may require no more than a cynical smartarse with a spreadsheet to sift the weirdly anomalous results, grade them according to accuracy over time, then backtrack through the logs when it hits an unexpected bullseye...
It won't be the wingnut press who start bleating when a robo-savant oracle starts hypothesising too accurately about the various Emperors' new wardrobes. It'll be their tailors.
-
Sunday 19th May 2013 10:58 GMT ScissorHands
A different approach
Analyse how a brain works on a logical, information-theory level (not the molecular-level boondoggle in the EU)
Build computer representations of it
Turn them to silicon
Pattern-matching, fuzzy logic predictions, emergent behaviour, etc.
Mo' silicon, mo' power
http://www.youtube.com/watch?v=4y43qwS8fl4