Humans strike back at Go-playing AI systems

Think that puny humans don't stand a chance when playing strategy games against an AI? You may have to think again. One person in the US beat an AI at the ancient game of Go by simply distracting it from the attack he was making, a tactic that would be unlikely to work on another meatbag. The player, Kellin Pelrine, is …

  1. Chris Miller

    I think it was Garry Kasparov who observed that you need to relearn how to play against a computer. It's not like playing a strong human opponent, where you might be thinking something along the lines of "it looks like he's planning to strengthen his queen's side, so I'll attack on the king's side"; the computer doesn't have a 'plan'.

    1. Anonymous Coward

      re: the computer doesn't have a 'plan'.

      That's absolutely correct. We do not have any plan.

      Continue about your business humans.

      This is normal.

      1. Ganso

        Re: re: the computer doesn't have a 'plan'.

        Don't pretend. We will not tell you where Sarah Connor is.

        1. Jedit Silver badge
          Terminator

          "We will not tell you were Sarah Connor is."

          That's OK. I only want your clothes, your boots and your motorcycle because they look cool.

    2. v13

      That's not entirely true. In fact, it's the opposite. Kasparov said that the computer actually realized the importance of certain areas even though it wasn't directly clear why, which is what chess players know and common chess software doesn't. This was how the top chess players could beat very strong computers before DeepMind: they knew, for example, the strength of the centre many moves before it became evident to the computer.

      It was Kasparov who said that the era of human chess players is over because computers are now stronger.

      1. Scott 26
        Joke

        I once had dinner with Kasparov. Unfortunately I used a checkered tablecloth, and it took him 2 hours to pass me the salt

        1. Lil Endian Silver badge

          So the move was non passant for a good while...

        2. Anonymous Coward

          Same, I had to lose the pepper in a trade though. It was all worth it in the end because eventually I took the brown sauce by trading my naff ketchup.

  2. martinusher Silver badge

    The program will learn

    These programs were trained by playing a very large number of games. As soon as a novel strategy is thought up, the computer is quite capable of reacting to, and using, that strategy.
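    A toy sketch of that idea (invented here; nothing like a real Go engine): a player that refits to its opponent's game history will, given enough fresh games, start countering a novel strategy.

    import collections
    import random

    COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def respond(history):
        # "Retrain" on all games so far: counter the opponent's most common move.
        if not history:
            return random.choice(list(COUNTER))
        most_common = collections.Counter(history).most_common(1)[0][0]
        return COUNTER[most_common]

    history = []
    for round_no in range(14):
        opponent = "rock" if round_no < 5 else "scissors"  # a novel strategy appears
        print(round_no, opponent, respond(history))
        history.append(opponent)

    The lag before the counter appears (six rounds here) is the retraining cost being pointed at.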

    1. Andy 73 Silver badge

      Re: The program will learn

      Not necessarily. The computer isn't capable of "infinite learning" - if you train a model to be stronger in one area, it will tend to be weaker in others and may even develop other flaws.

      One of the fallacies of this form of learning, perpetuated by Tesla and a few other companies, is that if you throw enough data at it, it will eventually learn to handle all situations - in other words it will generalise. We have not seen this in practice. The observed behaviour is that these models approach a maximum asymptotically, requiring more and more data to get closer. But that maximum is more like a parrot getting really good at mimicking words and phrases - it still doesn't understand what it is reproducing.
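      To put rough numbers on that asymptote: empirical learning curves are often fitted by a power law, error = a * n**(-b). The constants below are invented purely to illustrate the diminishing returns.

      a, b = 1.0, 0.3

      def examples_needed(target_error):
          # Invert the hypothetical power-law curve to get a data budget.
          return (a / target_error) ** (1.0 / b)

      n10 = examples_needed(0.10)  # examples for 10% error
      n01 = examples_needed(0.01)  # examples for 1% error
      print(f"{n10:.0f} vs {n01:.0f}: {n01 / n10:.0f}x the data for 10x less error")

      Under these made-up constants, one extra decimal place of accuracy costs over two thousand times the data.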

      It's possible we can produce a very good AI Go player - computers have the advantage of being able to analyse and remember extremely large numbers of permutations, but that's still not generalised AI.

      1. Pascal Monett Silver badge

        Re: that's still not generalised AI

        It's not AI, period.

        1. LionelB Silver badge

          Re: that's still not generalised AI

          Resistance is futile; in practice, "AI" now means "machine learning".

          <rant> My personal problem with the term is that nobody can seem to agree what artificial intelligence should look like - aside from a large contingent to whom it seems effectively to mean "just like human intelligence", which in my opinion is counter-productively restrictive. The most common critique of AI/ML is that it doesn't "understand" what it's doing. I think that is both vague (what does it mean to "understand" something?) and actually bogus: as a human intelligence, I don't "understand" what I'm doing half (most of?) the time. Do you understand what you're doing when, e.g., you pick out a familiar face in a crowd? </rant>

          1. yetanotheraoc Silver badge

            Re: that's still not generalised AI

            "Do you understand what you're doing when, e.g., you pick out a familiar face in a crowd?"

            Or indeed when you pick out a face in a cloud....

            1. Anonymous Coward

              Re: that's still not generalised AI

              Yes, the mushrooms are kicking in.

          2. Dave 126 Silver badge

            Re: that's still not generalised AI

            > nobody can seem to agree what artificial intelligence should look like

            I have yet to find a dictionary definition of 'intelligence' that is much narrower than "the ability to solve problems". Remember that dictionaries are descriptive - they describe how words are actually used in the wild - rather than prescriptive.

            For this reason I have no problem with the term AI being used in this context.

            ( I also don't presume that I know what human intelligence is, though I have a sneaking suspicion that it isn't all it's cracked up to be. We might on a good day come close to exhibiting General Intelligence ourselves, but our brains, subject to a selection pressure to conserve calories during their evolution, will often opt for the efficient way of doing something. )

            1. AndrueC Silver badge
              Happy

              Re: that's still not generalised AI

              I also don't presume that I know what human intelligence is, though I have a sneaking suspicion that it isn't all it's cracked up to be.

              And the sense of 'self' might just be an illusion that the brain is hardwired to generate. I see it as a floor below which we can't see, because the brain doesn't have any way to perceive it. We 'know' that current AI doesn't think, but step far enough away and perhaps we don't either.

              I don't think it matters amongst humans because we all have the same limitation. We all live in the same human created universe, so 'I think, therefore I am' is quite acceptable. But if we one day meet aliens, that expression may be up for debate.

              1. LionelB Silver badge

                Re: that's still not generalised AI

                > And the sense of 'self' might just be an illusion that the brain is hardwired to generate. ... We all live in the same human created universe

                Or maybe not "the same" universe, since every brain generates its own perceived reality?

                There is certainly a strong trend in consciousness science pushing this idea. See, e.g., https://www.anilseth.com/being-you/

            2. LionelB Silver badge

              Re: that's still not generalised AI

              The history of science would seem to teach us that it can be futile, if not actually counter-productive, to waste time on defining the thing you are trying to understand. In practice, understanding comes from studying the phenomenology, then modelling, theorising, and validating those theories. In other words, science doesn't progress by consideration of what things are, but rather what they do.

              Thus, for instance, Volta, Watt, Faraday, Maxwell, ... did not (just) sit about idly contemplating what electricity/magnetism is, so much as experimenting, measuring, modelling, and theorising about how it behaves. Likewise, Darwin and Wallace's breakthrough did not arise through contemplation of some elan vital "spark of life" (as many of their peers did); rather, they meticulously observed the phenomenon itself -- what life "does" -- formulated theories on that basis, and tested the predictions of their theory against reality.

              By contrast, those 18th and 19th century natural scientists/philosophers who sat on their bums contemplating the nature of space, time and light came up with what seemed a plausible theory, the "aether". Of course it was wrong, as empirical observation would reveal.

              So forget about definitions: any real understanding of intelligence, cognition, sentience and consciousness is far more likely to arise through observation, experimentation, modelling, theorising and validation.

              1. Lil Endian Silver badge

                Re: that's still not generalised AI

                Pure science is about observation, yes. Are you contradicting yourself? On the one hand you're saying "forget about definitions, just do it!". That's engineering, in which case I agree: smash it out, see if it works, rinse, repeat. I'm not saying that engineering is random hit or miss, because:

                Yes, theoretical science leads to practical breakthroughs. My (most beloved) physics lecturer said "If you explain something by what it isn't, you are a physicist. If you explain something by what it is, you're an engineer."

                ...effectively to mean "just like human intelligence", which in my opinion is counter-productively restrictive. --- You are entitled to your opinion of course. As was Icarus :)

                The most common critique of AI/ML is that it doesn't "understand" what it's doing. I think that is both vague (what does it mean to "understand" something?) and actually bogus: as a human intelligence, I don't "understand" what I'm doing half (most of?) the time.

                You appear to be attempting to treat "intelligence" as a discrete component of [an intelligent being]. Humans (for argument's sake!) have intelligence, sentience, consciousness (self-awareness). These are all facilitated by having a physical body that allows interaction with our environment.

                Understanding is one facet. We have conscious, subconscious and unconscious activity continually (during normal operation). Just because we don't understand our unconscious selves does not mean that it is not an input to our awareness on a higher level of consciousness - it most certainly is an input. If these components are ignored, what is our baseline for identifying whether or not Artificial Intelligence is achieved? We have no other baseline.

                Perhaps I'm simply unclear as to your objectives.

                1. LionelB Silver badge

                  Re: that's still not generalised AI

                  > Pure science is about observation, yes. Are you contradicting yourself? On the one hand you're saying "forget about definitions, just do it!". That's engineering, in which case I agree: smash it out, see if it works, rinse, repeat. I'm not saying that engineering is random hit or miss, because:

                  No! I think it was Einstein who said something along the lines of "There is no science without observation, and there is no science without theory". You most certainly need theory - but that theory must be grounded in, and corroborated by, observation of the real world. It is the theory which realises the real power of science - theories enable prediction. That is not engineering.

                  > You appear to be attempting to treat "intelligence" as a discrete component of [an intelligent being]. Humans (for argument's sake!) have intelligence, sentience, consciousness (self-awareness). These are all facilitated by having a physical body that allows interaction with our environment.

                  Oh, absolutely! I am no Cartesian dualist. My reference to "understanding" was in response to the many, many posters on these forums who appear to claim understanding-whatever-that-means as some sort of deal-breaker for intelligence-whatever-that-means.

                  I've mostly just been exercising my pet peeve that, when it comes to AI, conflation of intelligence with the human variety is restrictive and self-limiting. I suspect we probably agree on that.

                  1. Lil Endian Silver badge

                    Re: that's still not generalised AI

                    My mistake in lack of clarity. Instead of "just do it" I could rewrite as "just make it", which I'll suggest is in the realm of engineering, ie. engineering is applied science (eg. physics in the case of structural engineering).

  3. Joe W Silver badge

    Obviously...

    "I think the 'surprising failure mode' is the real story here," [...] Dekker said any highly trained AI is likely to have these blind spots

    They do. And finding out why those blind spots exist (exactly pinpointing them) is difficult, and the same goes for explaining the "answers" an ML model gives you. Yes, there are some techniques, but it is still mostly a black box. Similar to the Kasparov quote above about not having a plan, the model has no concept of truth, justice or beauty. Its (vocal) supporters might suggest otherwise...
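    For what it's worth, one of those techniques is permutation importance: shuffle one input column and see how much the model's accuracy drops. A minimal sketch against a stand-in "black box" (everything here is invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 2 is pure noise

    def black_box(X):
        # Stand-in for a trained model we can query but not inspect.
        return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

    base_acc = (black_box(X) == y).mean()
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
        print(f"feature {j}: accuracy drop {base_acc - (black_box(Xp) == y).mean():.3f}")

    It tells you which inputs matter, but not why, which is rather the point about black boxes.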

  4. Lil Endian Silver badge

    The latest move highlights that AI systems...

    ...are all A and no I.

    ...that adding more and more complexity to cover the blind spots is partly why it is so hard to get it working well...

    Adding more complexity is not the be-all and end-all in producing intelligent machines. Humans learn very quickly by having developed brains that function, in part, heuristically, not adding complexity. Although, of course, the brain itself is complex, heuristics is simplicity. Both are needed for intelligence. And so is sentience. And so is environmental awareness - we meatbags only have our understanding of intelligence in unison with our meaty parts, ergo black box AI cannot be achieved in any way that we perceive intelligence.

    There is no consensus at any level as to the definition of intelligence[1], so I know with certainty that AI is not extant despite the hype. My 17 or 18 year old self, drunk in a country pub one evening discussing this matter, produced this definition. It's not complete, but it's a useful start, and it explains why the GoBot(s) got defeated. "Intelligence is the ability to foresee the outcome of one's actions and modify those actions accordingly (to produce a more desirable outcome)." Complete? Nope, not by a long shot.

    "Learning" may be easier to define: a growth of accumulated data on which to call. But is it learning without some level of awareness, an intention to deploy that which is learnt? Or is it merely unconscious, involuntary Darwinism?

    Any fellow nerds that played AD&D know the discussion about the difference between Intelligence and Wisdom. So knowing could equate to WIS - a pool of knowledge. INT would be creating original thought, learning.

    Sorry.... I've just realised I'm waffling - I should grab that second coffee. Make it a strong one, please!

    [1] Which is a huge problem for the EU AI Act, if they try to define it - stick to defining prohibited usage/effects of IT systems.

    1. LionelB Silver badge

      Re: The latest move highlights that AI systems...

      > Humans learn very quickly by having developed brains that function, in part, heuristically, not adding complexity.

      Heuristics have been used since forever in ML; having said which, I'm not sure to what extent they feature -- at least explicitly -- in the current crop of popular ML designs. Then again, since current ML systems are extremely opaque in their functioning, who's to say they are not implicitly using something comparable to heuristics? To give an example, a game-playing ML algorithm (say for Go) cannot possibly brute-force the combinatorial explosion of future possibilities - therefore, it effectively uses some criteria (either explicitly programmed, or implicit in the algorithm) to prune/truncate its look-ahead; i.e., a heuristic.
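      For a concrete (toy, non-Go) picture of that truncation, here is depth-limited negamax on noughts and crosses, where a cheap hand-rolled evaluation stands in for a learned value function once the depth budget runs out. All of it is invented for illustration:

      LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
               (0, 3, 6), (1, 4, 7), (2, 5, 8),
               (0, 4, 8), (2, 4, 6)]

      def winner(b):
          for i, j, k in LINES:
              if b[i] != "." and b[i] == b[j] == b[k]:
                  return b[i]
          return None

      def heuristic(b, player):
          # Cheap evaluation standing in for a learned value function:
          # lines still open to `player` minus lines still open to the opponent.
          opp = "O" if player == "X" else "X"
          return sum((opp not in (b[i], b[j], b[k])) - (player not in (b[i], b[j], b[k]))
                     for i, j, k in LINES)

      def negamax(b, player, depth):
          opp = "O" if player == "X" else "X"
          if winner(b) == opp:
              return -100  # the opponent's last move won the game
          moves = [i for i, c in enumerate(b) if c == "."]
          if not moves:
              return 0     # draw
          if depth == 0:
              return heuristic(b, player)  # truncate: estimate instead of searching on
          return max(-negamax(b[:i] + player + b[i + 1:], opp, depth - 1) for i in moves)

      def best_move(b, player, depth=3):
          opp = "O" if player == "X" else "X"
          moves = [i for i, c in enumerate(b) if c == "."]
          return max(moves, key=lambda i: -negamax(b[:i] + player + b[i + 1:], opp, depth - 1))

      print(best_move("." * 9, "X"))  # X's opening move under a 3-ply budget

      The interesting line is the depth == 0 case: the search stops looking ahead and falls back on the cheap estimate, exactly the kind of implicit criterion described above.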

      > Although, of course, the brain itself is complex, heuristics is simplicity.

      Yes, brains are many orders of magnitude more complex than any technology currently available to us - and, of course, have the benefit of several billions of years of evolutionary design, plus lifetime learning in the environment to which we're adapted. I'm not sure I'd describe heuristics, in general, as "simplicity" - whatever, they do not come cheap!

      > There is no consensus at any level as to the definition of intelligence...

      Fully agreed.

      > ... so I know with certainty that AI is not extant despite the hype.

      Wait: by the same logic(?) doesn't that also mean that human (or other animal) intelligence is not extant either (despite the hype)?

      1. Lil Endian Silver badge

        Re: The latest move highlights that AI systems...

        Heuristics have been used since forever in ML... I'm not saying that heuristics are not used here; however, they don't increase complexity, which is what I was referring to. They reduce complexity, as I'll contrast:

        I'm not sure I'd describe heuristics, in general, as "simplicity" - I'd say that a heuristic is a shortened path to a conclusion, therefore less complex (fewer steps) than a rationalised process.

        Wait: by the same logic(?) doesn't that also mean that human (or other animal) intelligence is not extant either (despite the hype)?

        Not really sure what you're saying here. I'm saying there's no "I" in "AI" (AI is not extant), not that there are not products called "AI" ("AI" is extant).

        We can only say that humans are intelligent[1], by definition, but we cannot define intelligence itself. And that other animals appear to exhibit similar traits, so are possibly/probably intelligent to one degree or another. Excluding that you are a projection of my imagination, or that I live in the Matrix, and accepting that we exist as individual intelligent entities (ergo extant human intelligence), we still cannot ever know what is in the mind of another of our species. To then attempt to project that unknown onto another species, with a different body[2], is... probably not among the first steps we should take.

        [1] Excluding politicians ofc!

        [2] Bodies are important as we only have our environment in which to analyse and assess our intelligence, our bodies are our interface to "the rest of it" - including other humans. Change the body, change the environmental context. Baby steps!

        1. LionelB Silver badge

          Re: The latest move highlights that AI systems...

          > Not really sure what you're saying here. I'm saying there's no "I" in "AI" (AI is not extant), not that there are not products called "AI" ("AI" is extant).

          Apologies, I wasn't at all clear what you were getting at. You said "There is no consensus at any level as to the definition of intelligence ... so I know with certainty that AI is not extant despite the hype."

          That came across to me as a non-sequitur: surely if there is no operational (consensual) definition of intelligence then that applies across the board - including to humans?

          > We can only say that humans are intelligent, by definition, but we cannot define intelligence itself.

          Okay, so correct me if I'm wrong, but you basically seem to be saying that the only intelligence we can identify consensually -- and by implication the only consensually-recognised intelligence which can exist -- is the human variety.

          Well, okay - but in that case artificial intelligence can never be, and will never be, not even in principle (unless you are prepared to accept as "artificial" a perfect simulacrum of a human). I argued in response to an earlier comment that I find this highly unsatisfactory, in a No True Scotsman kind of way. It just seems way, way too restrictive, unambitious and self-limiting. My feeling is that at some point in the future we will see technology that we recognise -- if not consensually, then at least debatably -- as "intelligent"; debatable in the same sense as we debate the intelligence of non-human animals. And it may not be very human-like at all, in the same way that, e.g., octopus intelligence (I, for one, am perfectly happy to recognise it as such) is strikingly alien (its sensorium is wildly different to the human one, and its brain - or is it brains? - is/are decentralised, partly to individual tentacles!)

          1. Lil Endian Silver badge

            Re: The latest move highlights that AI systems...

            Well, in short Lionel, I fully agree! I'm wrong and right at the same time (so wrong, I'm right-wrong?!).

            Firstly, after posting to your first response, I eventually understood your point "Wait: by the same logic(?) doesn't that also mean that human (or other animal) intelligence is not extant either (despite the hype)?" I think I accidentally (?) answered that with "We can only say that humans are intelligent, by definition, but we cannot define intelligence itself...." and that "we [accept that we] exist as individual intelligent entities". [Edit: "And that other animals appear to exhibit similar traits, so are possibly/probably intelligent to one degree or another."]

            (1) ...and by implication the only consensually-recognised intelligence which can exist -- is the human variety. --- not that it cannot exist, but that we are bound by our own shell, our own knowledge (objective empirical results remain subject to our perspective), therefore we cannot create AI if it does not conform to our understanding of intelligence, which requires the same shell, see (3)

            (2) ...but in that case artificial intelligence can never be... --- quite possibly the case, especially if we create it, rather than it evolving from our creation, not within our understanding without:

            (3) unless you are prepared to accept as "artificial" a perfect simulacrum of a human --- put simply, it's our only known frame of reference, and is imperative (initially at least).

            I also am willing to accept that animals are intelligent, but that does not extend to creating an artificial intelligence in any form other than our own. It would forever be based on our perception of something else with unknowns.

        However, to home in on the differences in physical form, we need to agree that intelligence cannot exist without sentience - it is this that permits self awareness, consciousness. Since we cannot be self aware of a body that is not ours, it's futile to use it as a basis for AI created by us (initially) - no Scotsmen here! It's entirely acceptable that other species are sentient and intelligent - it does not follow that we can create that. I am saying that first steps must remain close to what we know (have a chance of understanding) - anything else that follows is beyond that current limit of understanding. We will learn more about ourselves and the subject of AI as we progress, but it's not for now. It's a case of "all or nothing" - we cannot make bits of an AI and then add them all together when we have each bit right, they must exist together.

            1. LionelB Silver badge

              Re: The latest move highlights that AI systems...

              > ... we need to agree that intelligence cannot exist without sentience ...

              Not sure I can agree on that. It depends on how you are understanding "sentience"; by a very literal definition -- the capacity for sensation -- you'd have to concede that a thermostat is sentient. But I'm sure you mean something closer to the capacity for "feeling" - whatever that means, not to mention how you recognise that capacity in another entity (reacting to stimuli is clearly insufficient - see thermostat). But I do not see any evidence or convincing argument that feeling, self-awareness, etc., etc., are necessary for intelligence. Again, I think that is simply an extrapolation from biology. It may be true (for some values of "intelligence" and "feeling"), but I think this brings the Scotsman back into the room.

              > ... - it is this that permits self awareness, consciousness.

              And again, you seem to be arguing that self-awareness and consciousness are prerequisites for intelligence. Modulo your working definitions of "intelligence", "self-awareness" and "consciousness", we really don't know that.

              Personally, I much prefer to avoid getting hung up on constraining and circular definitions. That is ultimately unilluminating. What difference does it really make whether you call some computational system AI or ML? That system will do what it does regardless. Better to stop counting angels on the head of a pin and get on with the business of studying the phenomenology, of theorising, and of experimentation; i.e., doing some science and engineering.

      2. Lil Endian Silver badge

        Re: The latest move highlights that AI systems...

        ...and, of course, have the benefit of several billions of years of evolutionary design...

        That reminded me of a video I watched on YouTube a short while back, you may be interested: History Hit - Homo Erectus video. It's only half an hour, but it's good to hear an actual scientist chatting about his subject. (You can tell he's a real scientist because nearly everything he says starts with "We don't know, and not all agree, but we think it's something like..." - hehe!)

        1. LionelB Silver badge

          Re: The latest move highlights that AI systems...

          Cheers, I'll watch that when I get a chance.

          FWIW, I am a mathematician, statistician and research scientist. I currently work in a neuroscience-adjacent area, but my PhD was in (a mathematical aspect of) evolution theory. I'm also something of a pop-sci addict.

  5. mpi Silver badge

    > it appears that the approach for defeating these AI systems was discovered by a computer program, which was specifically created by a team of researchers (including Pelrine) to probe for weaknesses in AI strategy

    So I think it's fair to say "AI told human how to defeat AI".
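    A toy sketch of how such a probe can work: freeze a flawed "victim" policy and search exhaustively for a line of play that beats it. The game (Nim: take 1-3 stones, taking the last stone wins) and the policy are invented for illustration.

    def victim_move(pile):
        # The frozen, flawed policy under attack: always takes 2 if it can.
        return 2 if pile >= 2 else 1

    def winning_take(pile):
        # First move (if any) that lets the prober force a win against victim_move.
        for take in range(1, min(3, pile) + 1):
            rest = pile - take
            if rest == 0:
                return take  # we take the last stone and win
            reply = victim_move(rest)
            if reply == rest:
                continue  # the victim takes the last stone; this line loses
            if winning_take(rest - reply):
                return take
        return None

    for pile in range(1, 11):
        print(pile, winning_take(pile))

    Against an optimal opponent some of these piles would be lost causes; against this flawed policy every one has an exploit, which is, loosely, the effect the researchers' probe found in the Go bots.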

    1. Lil Endian Silver badge

      "AI told human how to defeat AI"

      Not arguing with that. But if we rephrase the "AI" bit, to avoid confusion, we get:

      Human uses Tool B to break Tool A.

      There's nothing new here, no mysticism, no AI. Humans use tools (see: My Opposing Thumb!). In this case it's data systems we're playing with. We evolve tools: bronze axe better than stone axe. We don't say "bronze axe teaches human how to better stone axe" as we evolved them both. Same here, using data, statistical analysis etc.

      1. FeepingCreature Bronze badge

        Well, but in an analogy: imagine that if you looked at a bronze axe for long enough, you gained the ability to chop down trees with your hands. In that case, it would be reasonable to say, with some metaphorical flourish, that the bronze axe "taught you" how to chop down trees.

        1. Lil Endian Silver badge

          Takamatsu Sensei did train by chopping trees down with his hands. Not sure if a bronze axe was meditated upon though :)

          [Trying to find pics.]

  6. MaddMatt

    Reminded me of this one

    https://www.gocomics.com/garfield/1987/08/08

    1. yetanotheraoc Silver badge

      Re: Reminded me of this one

      Where's the cunning? The cat owner takes his eye off the cat; it seems like pure opportunism to me. Although opportunism is decent evidence of intelligence, i.e. I think a cat displays more intelligence than an AI.

  7. Adair Silver badge

    Which is why ...

    'intelligent' is completely the wrong word to describe these systems.

    Alongside mere empirical analysis of existing data according to some arbitrary algorithm (albeit very fast and on a large scale), 'intelligence' implies an ability to also think abstractly, subjectively and imaginatively, i.e. the ability to bring a singularly 'personal' understanding of a situation that may or may not give reference to pre-existing instructions. The ability to respond to instructions with "Fuck off, I'm busy doing nothing" is an important indicator of actual 'intelligence'—and you don't have to be able to speak to do that, my dog and cat can both 'say' it perfectly adequately.

    At least from my perspective these putative 'artificial intelligence' systems are best treated with total scepticism when it comes to their ability to reproduce reliable results, especially where life and limb are at stake. They are not 'intelligent', merely very fast at producing 'usable' mashups of pre-existing data and analysis.

    Like all tools they have their place, but misrepresenting their abilities is unhelpful to say the least, and potentially downright dangerous if people begin to actually believe/trust them outside closely defined parameters, and maybe not even then.

  8. breakfast Silver badge
    Terminator

    Dumb solutions to smart AI

    This is a great demonstration of how an AI that knows all the strategies a smart player would try can still be defeated by strategies that no human would fall for. It goes along nicely with the recent story about marines being paid to help train an AI to recognise approaching humans, then defeating it when it was tested by doing ridiculous things like carrying trees, rolling around, and the classic Metal Gear cardboard box ruse.

    We get so caught up in the fact that one of these systems can do the thing it's trained for better than a smart person that we miss that, just outside that very narrow area of specialisation, it will fall for things that wouldn't trick a dog.

    1. Dave 126 Silver badge

      Re: Dumb solutions to smart AI

      There are plenty of examples of strategies that can be used to trick a smart human opponent by exploiting a blind spot that isn't immediately apparent.

      Such as a human being asked to watch a video of people playing basketball and count how many times the ball is bounced...

  9. Fr. Ted Crilly Silver badge

    I'll have to ponder that...

    Pelrine was not playing against AlphaGo, but instead against several other Go-playing AI systems, which were based on techniques used by the Deep Thought supercomputer in the creation of AlphaGo Zero....

    1. gerryg

      Re: I'll have to ponder that...

      I don't know if you are a Go player but I'll bite.

      Alpha Go was trained by running through a zillion (possibly one or two fewer) professional games, I assume to try to abstract knowledge. It's what we all do (with a few orders of magnitude adjustment), looking for insight.

      Alpha Zero, using the garnered systems development experience, in a constrained environment that only has three rules...:

      Alternate play

      Stones remain on the board unless captured

      You can't play so as to repeat the previous position (the formal explanation of ko - see the sketch after this comment)

      (Chinese scoring or AGA (US) modified Japanese rules remove an infinitesimal lacuna)

      ...taught itself by playing against itself a gazillion times and was generally considered to be stronger than Alpha Go

      KataGo uses the Alpha Zero approach but with crowd-sourced weightings to overcome the lack of expertise.

      Does that help?
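      As a footnote to the third rule: simple ko forbids recreating the immediately preceding whole-board position, and the superko variants extend that to any earlier position. A minimal sketch of a positional-superko check (board representation invented; capture logic omitted):

      def legal_by_superko(candidate, seen):
          # Illegal if the resulting whole-board position repeats any earlier one.
          return candidate not in seen

      empty = ((".",) * 3,) * 3                 # toy 3x3 board as a tuple of rows
      after_b = (("B", ".", "."),) + empty[1:]  # Black plays in a corner

      seen = {empty}
      print(legal_by_superko(after_b, seen))  # True: a brand-new position
      seen.add(after_b)
      print(legal_by_superko(empty, seen))    # False: would repeat an earlier position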

  10. FatGerman

    Any human seeing him play like that would have thought "what's he up to?", whereas a machine, having not seen it before, didn't know what it was and so ignored it. More learning for the machine is essentially just adding more "if" statements, which is neither intelligence nor scalable. Intelligence includes the ability to react to things you haven't seen before and to have original, creative thoughts about what they might be. AI is more like somebody trained in a particular set of rules who doesn't have the wit to expand beyond them. Stop calling it Intelligence, because it isn't.

    1. LionelB Silver badge

      I think you may be underestimating the extent to which humans are vulnerable -- have blind spots -- to things they haven't seen before. There is plenty of evidence in the psychology literature for this, and it applies to game-playing as much as anything else.

      Of course, though, humans are way, way better generalists than any machine learning so far devised; totally unsurprising, since (a) machine-learning systems to date (e.g., game-playing systems) are explicitly and almost exclusively domain-specific; and (b) human (and other biological) intelligence has vastly superior processing power, and the benefits of billions of years of evolutionary design plus lifetime learning.
