But like world-beating chess-playing computers, it's only AI if you redefine AI. It's clever programming by humans. Not AI.
A Google-designed artificial intelligence system has for the first time beaten a top human player at the board game Go. A team of researchers from Google DeepMind said in a Nature article [PDF] that their AlphaGo program is not only able to beat 99 per cent of all previous Go-playing systems, but has also beaten Fan Hui – …
There's always someone who whines about people saying "that's not AI" and claims they are moving the goalposts.
Preston has it right. It is AI when a general purpose intelligence can read the rules of go for itself, watch a couple games, try playing itself, and work its way up to beating all the meatbags. Programming in the rules, pattern searches, all that stuff is cheating. It's like when Neo learned karate in The Matrix.
It is when you can tell a computer to play itself at tic-tac-toe and it figures out for itself that the game is pointless and stops on its own. Then you have true AI. And you run.
Oh for fuck's sake...
I wonder how many of those who are saying "it's not an AI!" have spent any time in formal education or doing formal research in the field. I'm saying this as I incidentally do have a related academic degree, and, well - what I learned about the definition of what AI is and isn't aligns pretty much with what DAM said.
But hey, feel free to disagree because your 5 minutes spent reading up on Turing, or your headcanon about the definition of "hacker" say otherwise. You still basically sound like someone saying "it's not a screwdriver, because it has a flat head, and all real screwdrivers are pozidriv!"
I'm looking forward to seeing how many of you will still be saying "it's not AI, it's clever programming!" when said AI puts you out of your job.
Icon because the Reg readership should know better.
All those saying "it's not AI because MUH opinion!!" - you are eejits whose expectations are based on a diet of Hollywood movies.
Back in 1956:
"Hey, let's do something more interesting than computing ledgers on computers. Like, symbol processing, learning, maybe even play chess and checkers, invent appropriate programming languages with new concepts, work on that neuron stuff, I dunno ... lots of things to do".
"We need a catchy name"
"How about 'Artificial Intelligence'. Sounds out there. 'AI' for short".
"Check it out, guys! I have built a machine to solve checkers!!"
An utter eejit in a wifebeater comes in. "It's not like in the movies. This is not AI! Do expert systems first."
"Check it out, guys! I can find out what ails the patient via a question-answer system"
An utter eejit in a wifebeater comes in. "It's not like in the movies. This is not AI! Do expert systems first."
"Check it out, guys! I have built a machine to play chess!"
An utter eejit in a wifebeater comes in. "It's not like in the movies. This is not AI! Do expert systems first."
Fuck you all, go back to your jerkoff website development in crapnode.js
formal education or doing formal research in the field
I have, and this is exactly why I expressed that opinion: "It is not". I will stand by it, because I have done _NOT_ _ONLY_ _AI_ but have also seen other real scientific fields and how they work.
I tried doing a PhD in AI and theory of cognition and dropped it after a year. I found it disgustingly fuzzy, full of blah-blahisms and massive hand waving (+/- smoking large amounts of pot by a lot of people involved which is not my fav pastime).
To put things into perspective, I have done work and published cited papers in 3 different fundamental scientific fields: Chemistry, Molecular Biology and Applied Mathematics. So based on the perspective from these, as well as doing the first one third of an AI PhD, I will quote Heinlein: "If it can't be expressed in figures, it is not science; it is opinion".
This, unfortunately, describes the present state of AI all too well. Once you remove the bits which originate from Probability and Stats, the rest contains way too much voodoo, alchemy and "tricks of the trade" to qualify for the label science. It also contains precious few numbers.
Engineering - sure. Science - nope. Not even close.
So, what are the weights in the two neural networks, if not numbers?
What we have here is an example of Weak AI. Yes, it can be construed as clever programming, but the whole thing about moving the goalposts in the field of AI is well known. It kind of happened when chess-playing programs were written. Before they were, Turing and Minsky and others (of the Dartmouth project) thought that game-playing programs could be considered intelligent. Once they'd been written, it was obvious to everyone+dog that they weren't intelligent, it was just the application of some newly-discovered software engineering principles that surely anyone with a brain could have foreseen.
The actual definition of what intelligence is, either in humans or computers, and whether the two intelligences could ever be comparable, is still debated and still no nearer any kind of resolution. General (or Strong) AI still seems an elusive prize. In the meantime, lots of systems that employ weak AI (like this one) are built, and prove very useful, if not informative and entertaining.
Won't the interesting point be when we achieve, in software, equal intelligence and flexibility to the human mind, but realise that it's still just "clever programming"?
The problem with that scenario is that the first clause describes something that is not well-defined, and the second describes something that is endlessly debatable.
There's no useful metric for "intelligence" or for intellectual "flexibility", so you can't meaningfully compare those attributes for two entities, except in a very subjective and vague fashion.
And there's no consensus on whether or when certain types of machine-learning systems become something qualitatively different than their initial state. Many people claim there are "emergent features" in long runs of certain unsupervised-learning and artificial-life algorithms, and that it's not proper to ascribe those features to the programming, since the developer had no idea what they would be. It's a contentious position, but a valid one.
*"...It is AI when a general purpose intelligence can read the rules of go for itself, watch a couple games, try playing itself, and work its way up to beating all the meatbags..."*
Why the pre-condition that a general purpose AI must become better than all humans at whatever it learns? Would not the fact of being able to pick up the rules by itself and work out some kind of strategy for applying them suffice, even if it turned out not to be a great player?
After all, we don't expect every "meat-bag" to be brilliant at everything, not even those rare ones we deem to be "intelligent".
PS: I always wondered why Go-playing ability was one of the holy grails of "AI" programming, when it seemed such a basic game. However, I've only just discovered [from articles about this 'breakthrough'] that Go is not, in fact, just another name for Othello.
No it is not. Neither by Turing's nor by Asimov's criteria.
Turing did not propose "criteria" for determining whether a machine was intelligent. He posed an epistemological claim, which was really an ontological claim, disguised as a thought experiment. The gist of his position is that whatever rubric we do use for judging intelligence, it should be based on methodologically-objective measurement of concrete attributes, and not some metaphysical quibbling. (In effect it's a pragmatist position.)
Asimov was not a computer scientist, and his "criteria" don't constitute a useful rubric.
The fact of the matter is that, as a term of art in computer science, "artificial intelligence" has enjoyed a wide range of definitions. People like you and Mage who insist on some particular narrow subset are merely being obnoxious prescriptivists. Your arguments have no consequential foundation.
Fucking tired of the "it's not AI if it has been done" meme.
Then you won't be long for this world, because there's not a single real AI in existence, and probably won't ever be by my estimation.
You'll know we finally have real AI when someone can create software that can learn the rules and moves of a game, pick up strategies from its failures, and apply all of that to beat a human player, without any tweaking from a programmer.
Hence, it's not fucking AI. Period.
> ...pick up strategies from its failures, and apply all of that to beat a human player, ...
Sounds like that's what actually has been done. From the article:
> "... the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement methods."
Still "supervised", but very impressive. To me this sounds like state-of-the-art AI.
I agree with Destroy All Monsters. Given that us meatbags can't even agree on what exactly it is that intelligence tests actually test when applied to humans - and it certainly isn't a generalised ability to learn absolutely anything to perfection - then it seems curmudgeonly at the very least to deny that a program capable of playing Go extremely well, without being able to predict the consequence of absolutely every possible future move, is demonstrating a limited form of artificial intelligence.
I wrote a semi-lengthy post on the similar topic of what is a hacker years back. I think the idea applies, so I dug it out and it follows below. The short version is that whatever you think a word or expression should mean is rather irrelevant. What matters is what most people think it means, and that is all that actually defines a word. When universities all over the world, the media and whatever part of the public that has any interest in the topic use an expression one way, you are more than welcome to fight for any other use of it, but I fail to see what benefit you will gain from that fight.
My older post was this:
Words are tricky, they are just a bunch of sounds put together, and they have no natural/god/God-given meaning. Some sound alike, some don't. Some change and mean the same, some don't change, but change meaning. Their meaning is given by what we think they mean. That presents a problem of course, because you and I are not alike. Sometimes we don't even think alike; this might be shocking for you to hear, but just change what you think the words I just wrote meant, and you will be all right. You may also ask an adult for advice, or you may give them some.

Back to the words we use. When you say a word and think it means something, and I think it means something else, I will hear something else than you say. So how does this work out? We could as well skip the talking altogether then. Well, a long time ago somebody managed to agree on what some words meant. I don't know who those people were, but it isn't important. Let's say they were children, so you can say that in a conversation at a party sometime. Say it like you have thought about it a lot. It sounds very deep, even though it isn't, but you might get lucky because of it. If you don't understand what I mean by "getting lucky", don't ask an adult about it, ask your older brother instead.

Anyway, these people managed to agree on a few solid good words and what they meant. One word perhaps was "banana". "Banana" one said and pointed at the fruit we mean by the word "banana", and everybody nodded because they knew it was so. Soon we agreed on a whole bunch of words, and we could communicate. As time passed some words turned out to be too long, too complicated or too "uncool" for the youngsters, so they changed them to shorter and better words. The elder people didn't agree on this, but since they die first they tend to lose. So some words were thus changed and shortened, and had added coolness.
Sometimes old unused words changed meaning because the youngsters didn't like that the elder people knew what they were talking about. When the elder people once again died, the old meaning was forgotten. I will try to end this now, so I'll get to the point. Once in a while some people try to invent something, often a whole new field of things, and they invent words to go with them. They said "This is the ultra portable cellular cordless telephone!", and the public didn't nod because they knew it wasn't so; it was too long. So they said: is it a cell phone then? The clever inventors then said: "Yes, a cell phone is what it is, will you buy it?" The not-so-clever ones insisted that it wasn't a cell phone or a cell, and were never heard from again.
Some time ago somebody broke into a computer. When I say break, they didn't actually crack it open or throw it to the ground or something like that. They accessed it without the owner's permission, and managed to get past the security measures that the owner had placed to prevent that access. Somebody else referred to this person as a hacker to the general population, who didn't really know what a computer was or what a hacker was, but they nodded and we all knew it was so. Except for a few geeks who thought being a hacker sounded much more cool than being called a geek. So the geeks forfeited the quest for a life, and the quest to someday get laid (again, ask your older brother about this, not an adult), and they picked up this new futile quest to be called hackers, and nobody cared.
A word only carries the meaning that most of us think it does. You are of course free to think something else, but don’t expect to be understood (or get laid).
I’ll get my coat.
There's a lot going on out there now, and in the here and now here on El Reg.
This is ITs Program Prime Directive for Supply ..... LOVE’s Immaculate Tool is the Sweetest of Attacking Defence Weapons and in Live Operational Virtual Environments are Immaculate Tools, Default Heavenly Issue and Self-Actualised. Now that is quite something and well worth whatever you're looking for.
What protects and drivers your Operating System
I worry that one day amanfrommars will get some kind of huge mecha body and conquer us all.
I mean given two things, 1) it's probably immortal and 2) eventually the tech for huge mecha bodies will exist, kinda makes it a certainty.
amanfrommars is probably the emergent sentience of the internet.
Thank you. So it seems that the media decided to talk about Facebook, which just updated an old paper today. So the whole media circus gathered around Facebook, while Google had their far superior paper published today. And yet the media were talking about Facebook.
Deep neural networks (and similar constructions, such as recurrent NNs) are not simply the common ANNs of the '50s through the '90s (and indeed the '40s, since the theory was around for a while before anyone had hardware that could execute it). While the layers of a DNN are basically older-style ANNs, the feed-forward networks between the layers introduce another dimension and there's considerable research on the various ways to configure them.
That's not to say there weren't DNNs before this century, of course; they basically started in the 1980s, with the Neocognitron in 1980 through the successful use of backpropagation at the end of the decade. Basically this was a matter of hardware resources growing to accommodate techniques that had earlier been discussed in theory, along with some theoretical refinements.
Similarly, the deep-learning machines of today can accomplish tasks that some people felt were still well in science-fiction territory - like beating top human Go players - largely because of the ridiculous growth in hardware resources available for the problem. But there's real new algorithmic work too. It's not just a matter of taking 1990-era software and running it on a big distributed system.
From the article:
"... the possible number of moves in a given match (opening at around 2.08 x 10^170 ..."
From Wolfram Alpha:
Estimated number of atoms in the Universe: 1 x 10^80
It's really incredible. If our universe were an atom in a 'higher order universe' made of other universes, there'd be a combination for every (normal) atom in all those universes, with a good security margin!
Ha. They've simply expanded the known quantity of operable data particles in the Universe up to the possible number of moves in a given match, opening at around 2.08 x 10^170 pieces. Congratulations. Now we all have to deal with it (-;
The more interesting news is that each field on which a move is made represents a comparable number of chess boards "behind" a single field.
This is exactly where we come closer to "intervariables". Simple fractals haven't become way too boring an explanation, anyway...
A 19 x 19 grid. So there are 361 intersections, or nodes. Each intersection can be Black, White or Empty.
So you have 3^361. That's 1.74×10^172, of which only about 1.196% are legal positions. So that's a mere 2.08168199382×10^170 possible combinations. That's 208168199381979984699478633344862770286522453884530548425639456820927419612738015378525648451698519643907259916015628128546089888314427129715319317557736620397247064840935
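The arithmetic above is easy to reproduce. Note the legal-position count and the ~1.196% fraction are the figures quoted in this thread (and the 10^80 atom estimate from the earlier post), not recomputed from scratch:

```python
# Every intersection of the 19x19 board is Black, White or Empty.
total = 3 ** 361
legal = 2.08168199382e170   # legal-position count quoted above
atoms = 1e80                # estimated atoms in the observable universe

print(f"all colourings: {total:.2e}")            # ~1.74e+172
print(f"legal fraction: {legal / total:.3%}")    # ~1.196%
print(f"legal positions per atom: {legal / atoms:.1e}")
```

The last line makes the "universe of universes" comparison concrete: even with 10^80 atoms each containing a universe of 10^80 atoms, that's only 10^160 atoms, leaving roughly a 10^10 "security margin" of positions per atom.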
Point of order: That's atoms in the observable universe. Our Hubble volume, in other words. Current cosmological thinking appears to be that there are way more atoms in the entire universe; but since we'll never interact with those outside our Hubble volume, it's largely a philosophical proposition.
Go makes an interesting analogy for Warfare.
If you play too aggressively and spread yourself thinly over the entire board, you will end up being surrounded and totally annihilated.
If you play too defensively and stay in one corner, your opponent will capture the rest of the board, and you will lose badly.
The game of Go is a fantastic way to measure the advances in machine learning and artificial intelligence by. The rules are deceptively simple, but the board is large and the possible patterns and combinations are mind-numbing. You can learn the rules in an afternoon and then spend the rest of your days getting slowly better and better by playing and playing and playing and getting lessons and looking at pro games and reading book after book after book.
It is a truly remarkable achievement that Google's DeepMind division has produced an artificial entity that, once trained, can beat a weak professional. I wouldn't say it is epoch-defining, but it is a watershed. Go is the most difficult board game known to man. How could anyone belittle this feat?
The only worry I have is that we are creating competitive algorithms. Surely that's not in our interest? I think we should be using these algorithms to model societal phenomena so that they can aid us in producing better societies. Clearly at this point there is little doubt that Humans+Algos > Humans, therefore we should put them to work on solving social ills, not playing games.
Note, too, that "will a machine ever beat an expert human at Go?" (or variations thereof) is one of those perennial topics in popular discussions of computing, AI, machine learning, etc.
For example, here's a Wired piece from the long-ago era of 2014. It's typical Wired (breathless and short on technical detail), but it's the sort of thing that appeared regularly in the middlebrow press.
So there's a certain historical weight to this accomplishment, beyond its simple technical complexity.
What would happen if they put their program on 2 different computers and let them play each other? If they just let the programs run for months, playing many games would they learn to get better and better? Would they learn more and more branches that can be pruned, thus making each game faster and faster as they can ignore more and more options?
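On the "let two copies play each other" question: that is essentially self-play reinforcement learning, and the loop can be sketched on a toy game. Everything below is illustrative (single-pile Nim, a crude Monte-Carlo value update, arbitrary parameters), nothing from the AlphaGo paper, but it shows the shape of the idea: two copies of one learner share a value table and improve purely by playing each other.

```python
import random
from collections import defaultdict

# Toy self-play learner for single-pile Nim: take 1-3 stones per turn,
# whoever takes the last stone wins. Optimal play leaves the opponent a
# multiple of 4, which the learner should discover from self-play alone.
Q = defaultdict(float)        # Q[(stones_left, take)] -> estimated value
ALPHA, EPSILON = 0.5, 0.1     # learning rate, exploration rate (arbitrary)

def moves(stones):
    return [t for t in (1, 2, 3) if t <= stones]

def pick(stones, greedy=False):
    if not greedy and random.random() < EPSILON:
        return random.choice(moves(stones))                   # explore
    return max(moves(stones), key=lambda t: Q[(stones, t)])   # exploit

def play_one_game(start=21):
    stones, history = start, []   # both "players" use the same Q table
    while stones > 0:
        take = pick(stones)
        history.append((stones, take))
        stones -= take
    # The last mover won: +1 for the winner's moves, -1 for the loser's,
    # propagated backwards through the alternating plies.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

random.seed(0)
for _ in range(20000):
    play_one_game()

# Should print "1 2 3": from 5, 6 or 7 stones, leave a multiple of 4.
print(pick(5, greedy=True), pick(6, greedy=True), pick(7, greedy=True))
```

The games do converge in the sense the poster asks about: as the table fills in, greedy play settles on fixed lines instead of wandering. What doesn't carry over to Go is the table itself - far too many states - which is why AlphaGo uses neural networks to generalise across positions rather than storing one value per state.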
Interesting... but I would be more impressed if all this "deep mind" stuff found a better way to *play* the game. Y'know, like humans do, with a *finite* biologically powered resource. Sure, practice helps (search), but strategy must count for something (trajectory)?
Here is a counter example:
Protein folding/threading. This is a class of problems with a massive search space, and there have been competitive efforts to solve it (CASP), so we know it is objectively hard.
A) a leading algorithm for "small" structures is Rosetta (brute force with a great deal of library support and a bit of empirically derived physics).
B) use molecular dynamics to fold a protein using just the physics, by integrating the trajectory (D.E. Shaw's machines Anton/Anton II can do that).
Google's effort is more in the A)* camp than B)**.
With B) we can use it on any molecule. With A) we can only use it on things we have seen before...
A) is more like a search, B) is an integration of a trajectory.
Does this make sense to others?
*took a lot of computing and decades of empirical results.
**took 1000s of years of maths and physics, and then a bit of computing.
You people do realise that human chess players are 'programmed' how to play, right? They are taught the rules, they read books on chess theory, they learn openings and end-games, they take lessons, they have their games studied and are given feedback and they study past games.
This Go system sounds far more like an AI than chess-based systems, if it is learning on its own BY playing or by interacting with humans. If they programmed it how to go about learning how to get good at Go, rather than how to play Go, this IS a big step.
It's not a human-level AI, but an ant is not a human-level intelligence either. We don't have to create something of reasoning, abstract thought and language for it to be an AI.
To me, "intelligence" (the existential state of being, not the method of gathering information) combines several concepts, some of which have been achieved with computers, and some which have not.
1. Sentience, which is the ability to differentiate between Self and Other. This establishes a self-internalised sense of identity, the existential concept of "I am," and its counterpoint, "The universe I am not." To the best of my knowledge, no AI has yet achieved this. Yet it is this ability that forms the crux of human identity, and will form the crux of "robot rights" and related issues for a long time to come.
2. Learning, which is a function of memory linked to a decision tree. A system, living or otherwise, undergoes interactions with its environment, which we call experiences, that have positive and negative impacts on its existence. From these experiences, it builds a decision tree that allows it to avoid the causes of negative effects and to seek out the causes of positive ones. AI has achieved this to some extent, albeit only in highly specialised arenas. However a generalised learning system that can process any kind of existential experience has yet to be achieved.
3. Abstraction, or conceptualisation, which is the ability to extrapolate consequences from actions without prior experience. This is the driving force of invention. A good example is the discovery of the principle of leverage; using a long stick pinioned over a fulcrum with a short distance to an object too heavy for one to move, allows one to move it. To us this may seem almost instinctive; but most other animals cannot figure it out. To date, no AI has even come close to demonstrating this component of intelligence.
4. Imagination, which is a function of abstraction. It is the ability to conceptualise that which is not, to spontaneously create, store and communicate experiences and information that one has not encountered existentially. To the best of my knowledge this facility is the province of humans alone; no animal has demonstrated a capacity for imagination. Likewise, no AI has come close to demonstrating this ability.
5. Communication, which is the ability to transfer concepts relating to sentience, learning and/or abstraction to other entities like oneself. Most animals have evolved this ability with respect to learning, but to my knowledge only humans, chimpanzees and dolphins have demonstrated the ability to transfer abstraction. Current AI has demonstrated notable ability in communicating learning, and in the case of AIs like Siri and Cortana can simulate the appearance of sentience, but they have not actually demonstrated it.
6. Recording, which is the ability to transfer information by enduringly altering the environment. In humans this is accomplished by drawing, painting, sculpture and writing. It is a groundbreaking achievement because unlike immediate (verbal/gestural etc) communication, it transcends death. Experience and existential knowledge can be passed to other entities even though the original source of that knowledge has ceased to exist. This is why we can re-experience the thoughts of Shakespeare, a man who has been dead and gone for more than half a dozen human lifetimes. AI, of course can do this via the medium of computer data storage.
So the question of whether or not we have truly achieved artificial intelligence comes down to a question of any or all. That is, if you hold intelligence to be any of the above traits, then we have achieved AI. But if you hold intelligence to be all of the above traits, then far from having achieved AI, we are likely a long way from doing so - perhaps not even in the lifetime of anyone now alive.
The process of evolution is both incremental and, occasionally, rapid.
The predetermined genetic behaviours to feed, mate and run away are traits that are required for pretty much all multi-cellular animals.
Therefore, there are incrementally complex behaviours which can provide a roadmap of trait complexity, if not a scale, of intelligence.
e.g. Recognition of self - humans, chimps, not cats. Development of complex mathematics: humans.
There is an *enormous* bias in the human perception of the natural world, because just like the winners of wars, we won the evolutionary race (well for this epoch in history - an asteroid could roll the dice again...).
It is ironic you pick on the ability to record things - humanity has only survived because we had sufficient stability to REPRODUCE many observations.
Hence, any species that can carry out science, is intelligent.
The impermanence of things on the internet is a worrying trend, however, as the signal/noise ratio continues to decrease...
I'll get my (lab) coat...
@CCCP - aye. The fun will really start when someone tries plonking a bunch of the limited AIs we've developed so far into one machine, their joint efforts overseen by some kind of goal-seeking program that can choose the right limited AI to best attack a given problem.
Come to that, how DO my fingers manage to press the keys I want when I set myself the goal of typing these words? Durned if I know, but obviously SOMEthing in here somewhere has learnt how to do the necessary motions under the direction of the goal-seeking bit of me that I'm conscious of.
Oh dear - 'scuse me, my brain's esploding, must go watch some kitteh videos, kthxbye!
Go is a complex game with an astronomical number of variations that challenge conventional digital computer programming. The brute force technique fails, but a feedback technique reinforcing connections in simulations of brain neural circuits makes this a viable alternative solution. Millions of random iterations are possible, which should lead to better and better performance. But, and it's a big BUT, this is only possible in a scenario where algorithm performance can be easily measured. Hence Go: hard to figure out, but easy to measure the result.
Let's try the technique with, say, translating English to Japanese. Run a million iterations, but at the end of each iteration how do you judge the result? It's not easy. Must a human stand in at the end of each iteration? That would plainly slow down the technique by many orders of magnitude. The computer has lost its feedback loop. It can no longer judge itself. This is a significant problem to be solved when we try to migrate from games to real-world situations.
My ultimate challenge for AI: place two computers side by side and let them talk to each other via audio, one in English and the other in Japanese, and the conversation should be purely improvised. A human would then need to overhear the conversation and figure out whether that was a machine or human conversation. I think we are nowhere near that goal, but I would be privileged to be alive to witness it.