Sorry to burst your bubble, but Microsoft's 'Ms Pac-Man beating AI' is more Automatic Idiot

Back in a bygone era – September last year – Microsoft CEO Satya Nadella told a developer conference: "We are not pursuing AI to beat humans at games." This week, we learned Redmond has done more or less that – lashed together a proof-of-concept AI that can trounce gamers at Ms Pac-Man, and snatch some headlines along the way …

  1. Lee D Silver badge

    Virtually nothing that says AI or "learning" actually is.

    It's all heuristics, instructions from programmers on "how to learn", in effect. And not at some basic coding level, but quite literally specified explicitly for the task at hand.

    AI, to me, is still interpreted in the same way as the old gaming adverts: "destructible environments" (so long as you don't go out of bounds, go too deep, shoot the critical plot structures, or actually expect it to turn to rubble), "realistic physics" (which is why you can make the enemy bounce a thousand metres in the air by getting him stuck on a door), "open-world" (so long as you don't try to go the opposite direction to your objective or mind being herded back in if you stray too far, and by the way, for mission 2 you have to go see John or you'll never get a mission 3 until you do).

    It's all rule-based and targeted. Google's AlphaGo strayed into something different, which is why it's newsworthy and pretty astounding. But you have to understand the game and the rules of the game to make those sub-agents do what you want in order to come to a decent play. And I guarantee you that the "master agent" isn't culling off useless sub-agents and creating unique ones of its own to try to fathom out the game.

    It's all hard-coded rules, left to run for a long time with an aim in mind. That's not AI or "learning", no matter how long you leave it running. Unfortunately, any sufficiently advanced technology is indistinguishable from magic, so people do think that Siri actually understands them, rather than being speech recognition that hasn't improved in decades (per CPU cycle) shoved into a search engine which returns colloquially-worded results.

    1. IT Poser

      Lee D,

      While I fully agree with everything you've said, I am left wishing that we had the ability to program humans how to learn, even in such a limited manner.

    2. De Facto

      Google's AlphaGo was not much different

      A Monte Carlo Tree Search algorithm run over a giant database of 30 million Go game moves, with weights increased or decreased for good or bad moves. About 100 man-years of computer scientists' work were needed to feed and train the AlphaGo database. Finally, massively parallel brute-force MCTS computing on many, many servers to find the best winning strategy. All against one human brain without the supercomputer capacity to go through billions of combinations in a few minutes. Calling it AI?
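
      The playout-and-tally approach described above can be sketched in miniature. This is flat Monte Carlo search (a much-simplified cousin of the tree search AlphaGo used) applied to a toy subtraction game; the game and every name here are illustrative, not anything from AlphaGo itself:

```python
import random

# Toy game: players alternately take 1 or 2 counters from a pile of n;
# whoever takes the last counter wins.
def moves(n):
    return [m for m in (1, 2) if m <= n]

def mcts_best_move(n, iters=2000):
    stats = {}  # (pile, move) -> (wins, visits), from the mover's point of view
    for _ in range(iters):
        path, pile, player = [], n, 0
        while pile > 0:                      # play one fully random game out
            m = random.choice(moves(pile))
            path.append((pile, m, player))
            pile -= m
            player ^= 1
        winner = path[-1][2]                 # the last mover took the last counter
        for pile_, m, p in path:             # credit every move made by the winner
            w, v = stats.get((pile_, m), (0, 0))
            stats[(pile_, m)] = (w + (p == winner), v + 1)
    # choose the root move with the best observed win rate
    return max(moves(n), key=lambda m: stats[(n, m)][0] / stats[(n, m)][1])
```

      Run on a pile of 4 it settles on taking one counter (leaving the opponent the losing pile of 3), purely from win/loss statistics, with no game knowledge beyond the legal moves.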

    3. JT_3K

      I mean, I had a friend in my dorm in 2005 that wrote a self-learning neural network for chess in Java. I played it fresh and destroyed it. He left it running against itself overnight and it got better. By the end of the week I couldn't beat it. It's nothing new. Tack on a visual processing unit that can recognise things like placement of the different coloured ghosts and the most "profitable" response based on proximity of each (they each were coded with different "personalities"), and pretty soon it's making better calls than a human. I concur, it's not AI, it's brute-force.
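
      The overnight self-play setup described above can be sketched with a tabular learner on a toy subtraction game (take 1 or 2 counters; taking the last one wins). Everything here is illustrative (the original was a chess neural network in Java), but the shape is the same: play against yourself, back the win/lose signal up through the game, repeat:

```python
import random

def learn_by_self_play(n_max=12, games=5000, alpha=0.1, eps=0.1):
    # value[s]: learned estimate that the player to move on a pile of s wins.
    value = {s: 0.5 for s in range(n_max + 1)}
    value[0] = 0.0  # empty pile: the previous player just took the last counter
    for _ in range(games):
        pile = random.randint(1, n_max)
        path = [pile]
        while pile > 0:
            legal = [m for m in (1, 2) if m <= pile]
            if random.random() < eps:                     # explore occasionally
                m = random.choice(legal)
            else:                                         # else leave the opponent worst off
                m = min(legal, key=lambda m: value[pile - m])
            pile -= m
            path.append(pile)
        for i in range(len(path) - 2, -1, -1):            # back the result up the game
            target = 1.0 - value[path[i + 1]]
            value[path[i]] += alpha * (target - value[path[i]])
    return value
```

      Left running, the values single out piles that are multiples of 3 as losing, which is the known theory of this little game, without that rule ever being coded in.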

  2. Anonymous Coward

    It's not very good AI

    It was set up with rules on how to play and just explored those given principles.

    1. Paul Kinsler

      Re: It's not very good AI

      Let this be a lesson to all you humans out there - if you try to read up on any of the rules for your next activity of choice, no matter how well you perform, we will be able to claim that the rules-reading means you are not "Intelligent" but just "Learning".

      1. Lee D Silver badge

        Re: It's not very good AI

        Humans read rules, interpret them and voluntarily stick to them.

        Machines operate in an environment where the rules prevent them from ever doing anything else, which literally bounds their possible actions.

        Morpheus knew this: You will always be faster... because they live in a world built on rules.

        1. Destroy All Monsters Silver badge

          Re: It's not very good AI

          Machines operate in an environment where the rules prevent them from ever doing anything else, which literally bounds their possible actions.

          Well, the point is to allow the rules to be generated, discarded, rebuilt, recompressed, weighted, tuned, indexed, analyzed, briefed, debriefed etc. at runtime.

          So there really are no fixed rules, except for the most simple reactive systems. It's like the spoon: you have to realize there is none.

          Morpheus knew this: You will always be faster... because they live in a world built on rules.

          Morpheus reads a script...

          (More in the area of "logic-based rule-based systems" as opposed to "let's mess around till it works rule-based systems", there is actually a nice softcover by Cambridge University Press by Robert Kowalski (who injected the mathematical logic part into Prolog) on what "rule-based systems and logical thinking" are about: "Computational Logic and Human Thinking: How to be Artificially Intelligent" (a PDF is available online). Seems to be a good intro and a contender for a 21st-century update on all those books with titles like "how to think logically" etc. that go back to the 19th century at least ... well, the book on what is called Port Royal Logic came out in 1683, I see.)

    2. badger31

      Re: It's not very good AI

      It doesn't have to learn to be AI. And machines learning doesn't make them (actually) intelligent.

  3. Field Commander A9

    I don't see a problem with hard-coded knowledge

    Even human babies (and more so animal babies) come with a large collection of knowledge hard-coded in their DNA. What's wrong with that?

    1. Captain DaFt

      Re: I don't see a problem with hard-coded knowledge

      No problem.

      But the babies will go through life expanding their knowledge beyond their original instincts.

      This 'AI' is like the maze robot that's just hardwired to turn 90 degrees left when it encounters a wall. It navigates the maze, but learns nothing.
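
      That hard-wired maze robot is a few lines of code. A sketch of a left-hand-rule walker (the maze layout and names are made up for illustration): it reaches the exit every time, and learns precisely nothing along the way:

```python
MAZE = [
    "#########",
    "#S....#.#",
    "#.###.#.#",
    "#...#...#",
    "###.###.#",
    "#.......E",
    "#########",
]

def solve_left_hand(maze, max_steps=500):
    grid = [list(row) for row in maze]
    # find the start and face east
    r, c = next((i, row.index("S")) for i, row in enumerate(maze) if "S" in row)
    dr, dc = 0, 1
    for _ in range(max_steps):
        if grid[r][c] == "E":
            return True
        dr, dc = -dc, dr            # always try turning left first...
        for _ in range(4):
            if grid[r + dr][c + dc] != "#":
                r, c = r + dr, c + dc
                break
            dr, dc = dc, -dr        # ...otherwise rotate right and retry
    return False
```

      Change the maze and it still works; it just never gets any faster at it.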

    2. Dan 55 Silver badge

      Re: I don't see a problem with hard-coded knowledge

      When someone plays (Ms) Pac Man for the first time*, they have to learn that ghosts are deadly unless you eat the pills and then they give you points. They also have to learn how (Ms) Pac Man moves and so on.

      If an AI can do that, that's more impressive. If it's got the rules already, it might as well be a Lua script.

      * Well, less so now as it's common knowledge.

      1. P. Lee

        Re: I don't see a problem with hard-coded knowledge

        >When someone plays (Ms) Pac Man for the first time*, they have to learn that ghosts are deadly unless you eat the pills and then they give you points. They also have to learn how (Ms) Pac Man moves and so on.

        Well, you learn facts. But is that Intelligence? How much intelligence (vs memory) do humans use when playing?

        Intelligence generally involves guesswork. Even without seeing the effect, do you guess that ghosts are bad? Do you guess that the aim of the game is to eat all the dots and that the flashing ones mean something special?

      2. Orv Silver badge

        Re: I don't see a problem with hard-coded knowledge

        "When someone plays (Ms) Pac Man for the first time*, they have to learn that ghosts are deadly unless you eat the pills and then they give you points. They also have to learn how (Ms) Pac Man moves and so on."

        All of which you can learn from reading the rules summary on the screen bezel or watching the attract mode. For the most part those aren't trial-and-error issues.

    3. Anonymous Coward

      Re: I don't see a problem with hard-coded knowledge

      > Even human babies (and more so animal babies) come with a large collection of knowledge hard-coded in their DNA. What's wrong with that?

      Babies come with remarkably few 'hard-coded' or instinctive responses. From observation of my own son, I remember only 4 behaviours that were there from birth and not learnt. They were: the ability to laugh or giggle, sneeze, cry and twist his mum around his little finger.

  4. Will Godfrey Silver badge

    Situation Normal

    It seems any claim Microsoft makes falls apart once you examine it critically. This whole thing totally misses the point of 'learning'.

    Also, this game is based on response time more than anything else. Force the system to have the same reaction times as a human (while the game runs at normal speed), then see how well it fares.

  5. d3vy

    Does the computer know it's playing a game or does it think it's trapped in a neon maze being chased by ghosts?

    Sounds like we need rights for AI to prevent further torture of these poor machines :)

  6. Anonymous Coward


    Problem?

    I can't quite see what the problem is here. If you make AI-based chess software, would you expect it to have to work out the rules of chess and the relative values of the pieces by itself? Or for a self-driving car AI, would you expect it to learn how to drive by eventually working out that it's better for it not to cause accidents?

    1. diodesign Silver badge

      Re: Problem?

      Yes - that's why it's called machine learning. Chess programs aren't AI. They are algorithms and preprogrammed patterns. Airbus autopilot isn't AI. It's algorithms and preprogrammed patterns. Compare this to DQN, which didn't even know what the game controller's buttons did. It was given video frames and told to get on with it. It had to figure it out from scratch. That's where AI wants to head if it's going to make anything remotely intelligent.

      Sure, the world runs on algorithms and preprogrammed patterns. No problem with that. Let's just not trumpet it as AI.
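
      A toy version of that "given only a state and a score, figure the controls out" setup can be sketched with tabular Q-learning. The corridor, buttons and reward here are all invented for illustration; real DQN does the same job with a neural network over raw video frames:

```python
import random

N = 8  # corridor of positions 0..N-1; the score comes from reaching the far end

def press(state, button):
    """The environment's secret: button 0 moves left, button 1 moves right."""
    nxt = max(0, min(N - 1, state + (1 if button == 1 else -1)))
    return nxt, (1.0 if nxt == N - 1 else 0.0)

def train(episodes=500, gamma=0.9, eps=0.1):
    # Optimistic initial values make the agent try every button at least once;
    # full-step updates are exact because this toy world is deterministic.
    q = [[1.0, 1.0] for _ in range(N)]
    q[N - 1] = [0.0, 0.0]  # terminal state: no future reward
    for _ in range(episodes):
        s = 0
        while s != N - 1:
            if random.random() < eps:
                a = random.randrange(2)             # occasionally try a random button
            else:
                a = 0 if q[s][0] >= q[s][1] else 1  # else press the best-looking one
            s2, reward = press(s, a)
            q[s][a] = reward + gamma * max(q[s2])   # one-step Q-learning backup
            s = s2
    return q
```

      The agent is never told which button does what; after training, the learned values prefer the rightward button at every position, discovered from the score signal alone.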


    2. Destroy All Monsters Silver badge

      Re: Problem?

      Indeed, indeed.

      "Blank slate" AI died back in the '60s, I think (as has "blank slate" cognitive science, I would hope).

      Well, "Artificial Life/Evolution" would be the closest to it nowadays, but that's a few layers down, so to say.

      Next up: The negative-externality-dropping Beast of Redmond creates Roko's Basilisk from old pieces of WinNT networking code. THE END!

    3. Lee D Silver badge

      Re: Problem?

      "work out the rules of chess and the relative values of the pieces by itself"

      Rules, maybe. But they can learn that by making random moves and some control somewhere says "Invalid move, you lose". That's INFINITELY better than "you can only make moves from the restricted subset we offer you that you never have to consider" in terms of learning.

      And, similarly, value is a heuristic. The value of a piece is nonsense compared to whether you win. You can sacrifice every piece on the board so long as you end up checkmating. That "value" could be learned or hard-coded. Learned value - when it decides itself "Actually, my queen is probably worth more than that" rather than adds up some metric - is what you're after if you're claiming "AI" and "machine learning".

      It's about what you test on. Are you testing "can this machine learn to play the game by itself" or are you testing "Can this machine find what we would call an optimal play in this heavily-prescribed world". They are claiming the former, but actually it's the latter.

      You have to consider this: If your machine is "learning" then you could throw it at Ms Pac Man and train it. You would then be able to move THE EXACT SAME PROGRAM to, say Pac Man 2000, not tell it what the difference is, and train itself towards optimal play for that WITHOUT TWEAKING.

      This program couldn't. It's been told what the values are and what to do: in a limited way, but it's been instructed. That's not "learning", that's some kind of "organic growth programming from seed". And the whole point of "learning" is not to make a Ms Pac-Man player. Any idiot can do that. It's to make a machine that learns. If it only "learns" Ms Pac-Man when it is hand-led, then it will forever need to be hand-led on every task it does.

      To be machine learning, it would have to arrive at that itself, naturally. Even if you start from zero knowledge, or from knowledge of ENTIRELY THE WRONG GAME. It should learn enough that it realises that.

      Otherwise, all you've made is a very expensive computer player, and nobody is going to care about your research, licensing your patents, etc.. Although we might call them "AI" players, they aren't. What people are after, the value we seek, the thing that makes money, the thing we don't have, the useful feature... is learning.

      And learning shouldn't need to be hand-held. Stick a new-born animal in a room and it will learn when/how it gets fed without any extra tuition. If you make a change to that, it will adapt to it. The "seed" is sown before it ever knew what task it was up against. And it learns and adapts to the tasks given from then on.

      1. Orv Silver badge

        Re: Problem?

        By these standards, nothing I did in school was "learning," and most of my classmates weren't intelligent. ...okay, maybe you have a point.

      2. cosmogoblin

        Re: Problem?

        Well said. I'd only add that you need SOME initial instruction. Animals have the urge to survive, and hunger feels unpleasant, so they eat to sate it. An AI gamer needs to be able to identify a goal (e.g. "more points") and associate it positively.

        Once you can start with only "I must win" and "this tells me if I'm winning", and learn how to win, I'd argue that's the goal that MS claim to have achieved.

    4. Anonymous Coward

      This is a non sequitur

      If all you programmed a computer chess player with were the rules about which pieces can move where, and the rules about check/checkmate, it would never become any good. How is it going to figure out for itself how to look ahead multiple moves, let alone how to prune bad paths to keep the search space manageable when looking ahead more than 3-4 moves? All this stuff is programmed into a chess-playing computer. If it were actually AI it would figure all that out for itself.

      By hardcoding point values like -1000 for a ghost, and programming it to compute point values of moves and try to optimize, they've basically done all the "intelligent" parts for it, so it is just running a simple math formula to maximize point values. That's not intelligence in any way shape or form.
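
      The "hard-coded values plus a maximising formula" player described above fits in a dozen lines, which rather makes the point. All the values and the layout are invented for illustration; the maze is assumed to have a solid '#' border:

```python
# Hand-coded cell values: the "intelligence" lives entirely in this table.
VALUES = {".": 10, "o": 50, "G": -1000, " ": 0}

def best_move(grid, r, c):
    """Pick the one-step move whose destination cell scores highest."""
    def score(dr, dc):
        nr, nc = r + dr, c + dc
        if grid[nr][nc] == "#":      # walls are impossible moves
            return float("-inf")
        return VALUES[grid[nr][nc]]
    return max([(-1, 0), (1, 0), (0, -1), (0, 1)], key=lambda m: score(*m))
```

      Steering it just means editing the table, which is exactly the sense in which the intelligence came from the programmer rather than the machine.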

      1. Orv Silver badge

        Re: This is a non sequitur

        How is it going to figure out for itself how to look ahead multiple moves, let alone how to prune bad paths to keep the search space manageable when looking ahead more than 3-4 moves? All this stuff is programmed into a chess-playing computer. If it were actually AI it would figure all that out for itself.

        When I learned to play chess, I learned a lot of that sort of thing from books and annotated examples of past games. I'm not convinced that strong AI requires working everything out from first principles. That's not how we expect humans to learn. We don't throw someone in a car alone and hope they figure out what the pedals do, what road markings mean, and that hitting things is bad.

        1. Anonymous Coward

          Re: This is a non sequitur

          OK, I'm willing to let the AI read chess books and learn strategy from them. But not have it programmed in.

          I will agree something is an AI if it can learn how to do something from reading a book, or watching others do it. Programming them how to do it, so it is simply applying scoring algorithms and being a "really fast idiot" searching massive solution spaces is NOT AI though. In such cases the intelligence came from the programmer, not the machine running the program.

  7. TReko

    Great explanation

    Good, clear journalism!

  8. Milton

    No such thing as AI yet

    LeeD has it absolutely right. None of the programs touted by marketurds as "AI" is really anything of the kind. Like "cloud", it's a term trowelled onto anything corporations want to sell or make headlines with. Though I confess it is appropriate to see a term as vague as "cloud" used to describe a truly vague concept that has been morphing like a drunk amoeba since mainframe days.

    Not only is "AI" a nonsense given the nature of the coding—which could cover any combination of neural network simulation, reward-seeking, machine learning ad nauseam, but which never, ever gets close even to the versatility of the intelligence of a shrew—the fact that these much-hyped machines can succeed only at single, extremely clearly-defined, rules-based tasks shows how hollow the claims of "intelligence" are. None of the so-called "AI" systems could even begin a Turing Test; none of them can emulate even the smarts of a tiny mammal. And given that the roots of the word "intelligent", and any attempt to measure or compare it, are completely founded in our understanding of how humans and animals perform—why are we even using the word?

    I'll believe you have a true AI when I can converse with it using real speech, the written word and a variety of images, discuss in real-time topics ranging from science to ethics to literature to butterflies to math to philosophy to marriage to religion—and come away after a couple of hours convinced that you lied to me, and that behind the screen was a well-adjusted, educated, experienced human being.

    Until then, while I appreciate and am impressed by some immensely clever programming and powerful silicon, talk of AI is pure marketing BS.

    1. Anonymous Coward

      Re: No such thing as AI yet

      That sounds more like artificial human intelligence - I suspect that is neither achievable nor very interesting. Of course artificial machine intelligence could get too interesting...

  9. Destroy All Monsters Silver badge

    Where is my lab coat and pipe?

    My dear academic fellow,

    I contend that this is squarely in the tradition of

    Learning Classifier Systems

    A better writeup can also be found here:

    Learning Classifier Systems: A Survey

    There is also this mucho-bucko paywalled article in the IEEE Computational Intelligence: A Survey of Learning Classifier Systems in Games which I'm currently reading, indeed I hadn't encountered the framework of the LCS until I stumbled upon said paper (even though I have a copy of Reinforcement Learning: An Introduction, printed 1998, somewhere. Getting old.)

    The earlier-talked-about ALPHA air combat NPC also fits into this tradition, it just uses a fuzzy inference system (not sure what that is exactly, it involved another set of equations with sigmas, pis and indexed and/or hatted variables) to do its thing.

  10. Adam 1

    give it a real difficult problem...

    ... like trying to create a user account in Windows 10 without syncing with the mothership

    1. theblackhand

      Re: give it a real difficult problem...

      That's not so much AI as brute force.

      Although wire cutters allow for less force and a little more finesse...

  11. James 51

    Sounds a lot like how the first version of the sims worked.

  12. AIBailey

    So much wrong with this.

    Other than the background at the top of the article, those screenshots look to be from the Atari VCS version of the game - which is far, far removed from the arcade version.

    Also, a quick google suggests that the current human record score for Ms Pac-Man is 933,580, set back in 2006.

    So basically, what the article is really saying (errors notwithstanding) is that after a lot of hard work, Microsoft have produced something AI-ish that can set a slightly higher score on a dodgy home conversion of Ms Pac-Man than a human playing the genuine arcade version achieved over 10 years ago.

    Doesn't quite have the same ring to it though.

    1. J. Cook Silver badge

      Re: So much wrong with this.

      Correct! It was, in fact, the Atari VCS (aka 2600) system and not the arcade version.

      If it was the arcade version, there'd be at least one boffin wanting to know how the difficulty switches were set on the unit.

    2. Anonymous Coward

      Re: So much wrong with this.

      Somewhere around here I have a book with instructions on beating Pac-Man perfectly. Every time. Three different patterns, in fact. (ISTR that two were called "Bezo's Breaker" and "The Donut Dazzler".) I'm pretty sure the book is older than I am.

      So a machine can be hard-coded to play perfectly a game that a human can play almost perfectly; the computer doesn't get sick of it or sneeze at an inopportune moment and thus make a mistake. That's the big news?

  13. James Howat

    How things learn

    If your expectation is that the agent "learns how to play Ms. Pac-man", then yes, it's misleading.

    But if the expectation of the exercise is that the agent "learns how to play Ms. Pac-man well", then I think it's more-or-less accurate.

    No human player of the game would start playing without seeing the back of the box telling you the premise of the game, what you're supposed to avoid, and what gives you bonus points. We wouldn't turn to a human and say, "you're not a good chess player, you didn't work out how a bishop moves all by yourself!"

    1. Anonymous Coward

      Re: How things learn

      People learned how to play by watching someone else play it... If the "AI" could do that, I'd be more impressed.

    2. DropBear

      Re: How things learn

      "No human player of the game would start playing without seeing the back of the box"

      You're so full of it it's not even funny, I just can't decide whether you're doing it on purpose. Seriously?!? Is that a joke...? Because I'm not laughing.

      All my childhood I've played any game I could get my hands on never having seen a manual let alone any box - I just started expecting a default-ish control key map and looked at what I had on the screen. If it was vaguely car-shaped and the game had "rally" in its title, I tried driving it, until it went "boom" and I learned that maybe I don't want to touch vaguely bush-like things on the road side (or were they rocks? Why would I care? They were boom-things, end of story).

      Same for black spots on the road - they seem to be oil since they make me lose control = really bad, avoid. And if something seemed to shoot at me I definitely tried to a) dodge b) shoot back. If there was a place I could never reach, I tried to jump. If I didn't know what the jump key was, I kept mashing every button I had until my sprite jumped. If I could walk right up to it but not pass, I looked for a something that I could "collect" and tried again. If I had a horizon in front of me and the game's name implied flight in some way but nothing happened when I pressed "standard up", I kept mashing keys until something finally revved up - then I tried "up" again...

      Where the hell you get the idea that playing any game involves "instructions" first, by necessity, I simply cannot fathom...

  14. Fading


    meh

    I'd be more impressed if it was able to get that score whilst being hassled by the chippy shop owner to get off that machine and get out as he's closing up soon...

    1. Chunes

      Re: meh

      Better still, give it a few beers, which is how I used to play Pac-Man in the pubs after our band packed up.

  15. Sil

    Not sure there is a problem; most systems at least predefine the initial weights, which may then change dynamically, since you can save a lot of time this way.

    Also, I was under the impression that the overwhelming majority of such systems were given the rules of the game they were trying to beat rather than having to discover them. Wasn't that the case with Deep Blue and AlphaGo?

    Or did they really learn everything the way you describe the Space Invaders learning?

  16. John Smith 19 Gold badge

    A suggestion for telling whether something is actually "learning"

    Not only would its scores improve with time but also the rate at which its scores improve should increase.

    The first means it has learned what to do.

    The second that it has learned what to do which is important.

    This demo seems to show the first (it gets better) but not that it's got better by "working out" what's important (because it's not adjusting its own weightings; they've been hard-coded in).
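
    The two-part test proposed above (scores improve, and the rate of improvement itself grows) is easy to state as code. The score series below are invented for illustration:

```python
def first_differences(scores):
    return [b - a for a, b in zip(scores, scores[1:])]

def is_improving(scores):
    # test 1: every game scores better than the last
    return all(d > 0 for d in first_differences(scores))

def is_accelerating(scores):
    # test 2: the per-game gains themselves keep growing
    gains = first_differences(scores)
    return is_improving(scores) and all(b > a for a, b in zip(gains, gains[1:]))
```

    A hard-coded optimiser typically shows the first pattern with shrinking gains; something learning what matters should show the second.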

    Now "AI through language processing" sounds quite interesting, depending on exactly what they mean by the phrase.

    Actually trying to make sense of a sentence by, y'know, reading it (rather than running it against umpteen million other sentences) has seen limited interest for some time now. Looking at it again might prove useful.

    Not so much for playing Pac-Man. Or even for Ms Pac-Man.

    1. DropBear

      Re: A suggestion for telling whether something is actually "learning"

      "Actually trying to make sense of a sentence by, y'know, reading it (rather than running it against umpteen million other sentences) has seen limited interest for some time now. Looking at it again might prove useful."

      Unlikely. The information in a sentence is not actually in the letters or words but in the immense amount of personal (direct or indirect) experience conjured up by them, and the specific structure they're arranged into. "I rode a bicycle with no hands today" has no meaning to someone who has never ridden a bicycle, or doesn't at least have extensive indirect knowledge of what riding one entails. The words are just the index key into a (hopefully shared) giant database.

      Granted, it is possible to _explain_ the same thing to someone without any of that, but that assumes the missing knowledge is a somewhat isolated hole in the net, not the entire net missing, which makes this a bit of a chicken-and-egg problem. You can't really explain much of anything to someone who has no common experience base, or at least a common language (most of which they already understand), with you. You could point at yourself and say "Tarzan", but even that assumes they already understand what pointing is, what names are, and that you're a living, conscious being just like they are. Even then, doing the same with "me" might be problematic once they grin, point at you, and repeat "me"...

      So yeah, if you somehow got the impression that I believe we won't see any real AI until machines equipped with some majorly serious potential to store data and make connections get to _interactively_ experience our reality on their own - well, you would be right.

      1. Baldrickk

        Re: A suggestion for telling whether something is actually "learning"

        "I rode a bicycle with no hands today" Not that I have ever seen a bicycle with hands before. Why would a bicycle need hands?

      2. John Smith 19 Gold badge

        ""I rode a bicycle with no hands today" has no meaning to someone who has never ridden a bicycle"

        Actually it has no meaning to anyone.

        1)You mean you learned to ride a bicycle and you have no hands? That's amazing.

        2) You learned to ride a bicycle while keeping your hands away from the handle bars? That's quite clever.

        People who do computational linguistics spend lots of time puzzling over this stuff. I like to think of it as the "decidability" problem, and IMHO a lot of the time the "correct" parse of a sentence should be "ambiguous". IOW: "Fu**ed if I know what you're talking about."

        Either maintain probabilities for which meaning is correct or identify what facts you need to find out (or already have) about user "DropBear" to establish which meaning is (probably) true. Disambiguation. Not just for wiki pages.

        Caveat. I'm not in the AI business. You could say I'm just artificially intelligent about AI.

        1. Robert Forsyth

          Re: ""I rode a bicycle with no hands today" has no meaning to someone who has never ridden a bicycle"

          You miss the point.

          I have the experience of riding a bike, like many other people, and using that experience as context makes the sentence (even if badly formed) unambiguous.

          You could argue that the story involved in most contexts is like loose rules.

          You enter a train carriage and ask:

          "Is this seat taken?" or

          "Is this seat free?"

        2. DropBear

          Re: ""I rode a bicycle with no hands today" has no meaning to someone who has never ridden a bicycle"

          Okay, fine, I'm not the best at picking sentences that are colloquially used yet have little meaning for the layman. Much fun to be had, have at it. The point I was trying to make though wasn't about ambiguity - which is a major issue on its own, but as noted it's not like humans are exempt either - but about how words have no "meaning" on their own unless the reader already has some reference regarding the subject, and how other words can only be used to bridge the gap if it's an isolated one. When you have no reference grounded in experience for any word, you can't bootstrap your way in by "explaining" or "describing" anything.

          It's hard enough to start communicating with someone who doesn't speak your language at all, but at least you can still count on a huge body of presumably shared experiences with that person by sheer virtue of them also being a Homo sapiens, presumably with many years of experiencing "being alive" under their belt. What I'm trying to propose is that the same task is basically impossible with a machine that lacks specifically that. All the grammatical wizardry means nought when there is ultimately nothing to attach all the pretty parsed verbs, nouns and all the rest to, even if you sorted out which qualifies which.

          Even further, I'm proposing that any attempt to communicate with a machine, whether by language, pictures, five musical tones or interpretive dance is pointless until we create one that "experiences" our world in some meaningful way (no, I don't think spidering our web is enough - it needs to be able to interact with the world) and manages to develop some sort of externally observable consciousness / awareness transcending what we observe with animals.

          Specifically, I don't think we can arrive anywhere near the same result by simply throwing more code (or anything on the level of our current "neural networks" for that matter) at neatly arranged letters, expecting a machine to suddenly start making truly meaningful determinations based on them, which is what I think should be the yardstick AI is measured against. Concluding that "please", "buy" and "toilet paper" close together probably means we want it to do some shopping on Amazon is not what I'm talking about. We already have that, and I think it pretty much got as close to "AI" as it ever likely will. Even more specifically, I don't think it's possible to create an apparently intelligent "conversation machine" in a box, then optionally attach a body to it if we so desire - it's the other way around, a body is a mandatory first step, and language comes much, much later. If we want something with any actual intelligence, we'll need something that was born / lives out here...

  17. Kaltern

    Come back when they can make a self-learning program, with basic rules.

    1. This is your character. It can move up, down, left and right. It can't go through walls. It can be killed by ghosts, unless it eats a power pill, and then only while they flash.

    2. You must eat all the dots, and not be killed.

    That's it. The programmers can only code those specific things, and the means to move itself.

    I'll be impressed when it can reach 999,999 with no further interaction.
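    [Ed: Kaltern's two rules are, in effect, the "environment" that reinforcement-learning setups expose to an agent: legal moves, walls, and a score, with nothing else hard-coded. A minimal illustrative sketch in Python (all names hypothetical; ghosts and power pills are omitted for brevity):]

    ```python
    # Minimal sketch of a rules-only maze environment: the agent may move
    # in four directions, walls block movement, eating a dot scores 10,
    # and the episode ends when all dots are gone. Ghosts are left out.
    WALL, DOT, EMPTY = "#", ".", " "

    class TinyPacEnv:
        def __init__(self, maze):
            self.maze = [list(row) for row in maze]
            self.pos = (1, 1)            # start square
            self.maze[1][1] = EMPTY      # the start square holds no dot
            self.score = 0

        def step(self, move):
            """move is 'up', 'down', 'left' or 'right'.
            Returns (reward, done). Walls simply block movement."""
            dr, dc = {"up": (-1, 0), "down": (1, 0),
                      "left": (0, -1), "right": (0, 1)}[move]
            r, c = self.pos[0] + dr, self.pos[1] + dc
            if self.maze[r][c] == WALL:
                return 0, False          # rule 1: can't go through walls
            self.pos = (r, c)
            reward = 0
            if self.maze[r][c] == DOT:   # rule 2: eat all the dots
                self.maze[r][c] = EMPTY
                self.score += 10
                reward = 10
            done = not any(DOT in row for row in self.maze)
            return reward, done

    env = TinyPacEnv(["#####",
                      "#...#",
                      "#.#.#",
                      "#...#",
                      "#####"])
    total, done = 0, False
    # a trivial scripted walk that clears this tiny maze (7 dots)
    for m in ["right", "right", "down", "down", "left", "left", "up"]:
        reward, done = env.step(m)
        total += reward
    ```

    An agent given only `step()` and the reward signal would have to discover the rest for itself, which is roughly the bar Kaltern is setting.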

  18. Prst. V.Jeltz Silver badge

    Its actually some way off beating a human

    To quote Guinness:

    Billy Mitchell (USA) scored the first “perfect” PAC-MAN game (3,333,360 points) on 3 July 1999; four more players have matched it since. The top players now consider it a greater accomplishment to achieve the perfect game in the fastest time. Top 5 “perfect” Pac-Man rankings (GWR Video Gamer's Edition 2008):

    1. Chris Ayra (USA) - 3:42:04 - 16/2/2000
    2. Rick Fothergill (Can) - 3:58:42 - 31/7/1999
    3. Tim Baldarramos (USA) - 4:45:15 - 8/8/2004
    4. Donald Hayes (USA) - 5:24:46 - 21/7/2005
    5. Billy Mitchell (USA) - 5:30:00 - 3/7/1999

    1. Dazed and Confused

      Re: Its actually some way off beating a human

      Way back when you used to find Pac Man machines in pubs, someone from HP wrote a version for the HP9816, which was an 8MHz 68000-based system, so massively quicker than the 1MHz Z80 original. The game was written as a standalone bootable system - I'm guessing it was based on the PAWS environment; anyway, it wasn't written in BASIC.

      One of the guys in the office could play this for hours, and it quickly got to the stage it was going at several times the top speed of the pub version. When he used to play the pub version he could just keep going till

      a) it was his turn to go to the bar and get more beer

      b) he needed to return the previous lot of beer he'd rented

      c) it was chucking out time (back in the early 80s we still had closing time)

      d) he just got bored

      The landlord hated him, 10p would last all night

  19. juice

    Back to basics...

    The thing is, they've managed to get a "perfect" score on the Atari 2600 version, not the arcade original. And for all that the port was well received at the time, it's a crude and heavily cut-down copy.

    Sadly, there doesn't seem to have been much analysis of the way it was coded, though there is at least one hack out there which improves the graphics. But I'd be willing to bet that the algorithms controlling the ghosts in the port are entirely deterministic, unlike the original Ms Pac-man, where a random factor was included in the ghost algorithms [*]

    Beyond that, it's worth noting that the AI was only responding tactically, not strategically. Which is fine for a game like Ms Pac-man: if you can put your death off long enough, you'll eventually reach the maximum score. It wouldn't work as well in a game where there are other criteria - e.g. in Defender, you have to survive, kill all the aliens and protect the humans.

    So yeah. They managed to write an AI which could produce a tactical solution for a deterministic situation with only 4 negative factors (aka: the ghosts). It's pretty much the most basic proof of concept you could produce.

    Wake me up when they manage to produce something capable of tackling Defender or something more chaotic such as Robotron or Bubble Bobble...

    [*] Unlike the original Pac-man, which was entirely deterministic; there were even books written on how to game the algorithms!

  20. FuzzyWuzzys

    I thought the logic used by the ghosts was well and truly understood a couple of years after it came out, based on players who'd beaten the system to the max score?

    1. Kaltern

      "I thought the logic used by the ghosts was well and truly understood a couple of years after it came out, based on players who'd beaten the system to the max score?"

      That was PacMan. Ms PacMan's ghosts were apparently far more difficult, as their movements were not as predictable. Hence why scoring high was so difficult: you couldn't just memorise movement patterns.

      1. Destroy All Monsters Silver badge

        If you are sadistic, you give the ghosts their own machine learning algorithm...

  21. Prst. V.Jeltz Silver badge

    Hang on, the original machine could play the game - that's what it did when no one else was playing it

  22. Andrew Richards


    Would the AI manage to attribute the nightclub joke to Marcus Brigstocke?

  23. Mage Silver badge

    This achievement seems a bit late.

    Though it wouldn't be late if it really was AI. The Chess and Go winning programs are not AI, not in the sense meant in 1950s, 1960s, 1970s, 1980s, 1990s. Any "expert system"* with a big database is now called "AI".

    (* Expert System really just meant that the programmer tried to codify the expertise of a human expert; they have never actually been "experts", only like parrots remembering which shape is associated with the audio/vocal "doughnut". No parrot has language; they merely associate a list of tokens and objects.)

  24. NobbyNobbs

    To me it sounds like the principle it's using is an A* algorithm with adaptive map weightings based on pills eaten, fruit, and ghost movement (as if the ghosts are edible themselves).
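    [Ed: the guess above - A* over a grid whose cell weights change with what's on the board - is easy to sketch. An illustrative Python version (hypothetical names; here one cell is simply made expensive, standing in for "near a dangerous ghost"):]

    ```python
    import heapq

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def a_star(cost, start, goal, rows, cols):
        """Plain A* over a 4-connected grid. cost[(r, c)] is the price of
        stepping onto a cell; cells near a dangerous ghost get a high cost,
        so the cheapest path steers around them. Returns the path found."""
        frontier = [(manhattan(start, goal), 0, start, [start])]
        seen = set()
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            r, c = node
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) in cost:
                    ng = g + cost[(nr, nc)]
                    heapq.heappush(frontier,
                                   (ng + manhattan((nr, nc), goal),
                                    ng, (nr, nc), path + [(nr, nc)]))
        return None

    # 3x3 open grid; the centre cell is "near a ghost" so it costs 50
    cost = {(r, c): 1 for r in range(3) for c in range(3)}
    cost[(1, 1)] = 50
    path = a_star(cost, (0, 0), (2, 2), 3, 3)
    # the cheapest route goes around the edge, avoiding the expensive centre
    ```

    Re-running the search each tick with fresh weights (ghosts edible after a power pill, fruit on screen, and so on) would give exactly the adaptive behaviour NobbyNobbs describes.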

  25. Hans Neeson-Bumpsadese Silver badge


    AI or not AI?

    As other commentards have commentarded, what is being discussed here isn't really what we should expect AI to be, as the machine is pre-programmed with some material specific to the task in hand (i.e. it is pre-taught how to learn within the context of a given task).

    In that sense, what we're seeing is probably quite well termed "artificial" intelligence.

    Maybe the goal that we're ultimately chasing, i.e. the ability to learn without being pre-coached, might be better termed something else, such as "synthetic intelligence"?

    1. td97402

      Re: Semantics

      Or we might just agree to call these sophisticated Expert Systems.

  26. Not also known as SC

    Refined thirty year old technology?

    "At the moment, it's trendy to teach software agents to play games using reinforcement learning. Here's how it works: every time a bot increases its score, typically by making a good move, it interprets this as a reward. Over time the code works out which decisions and behaviors lead to more rewards. ... Some games are better suited to reinforcement learning than others – it's not a one-size-fits-all solution."

    I remember reading a magazine (Scientific American IIRC) about 30 years ago. There were always articles about programming where pseudocode was given, and I remember producing a noughts and crosses game in BASIC which used more or less this principle. I think the difference was that a losing move was considered bad, so it was stored and not repeated (negative reinforcement?). The code didn't work anything out though; it just had a database of bad moves and would not play a move in the bad move list. (After enough games the computer would throw in the towel after one move.) So is this just a refinement of techniques at least thirty years old?
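    [Ed: the "bad move database" scheme described above fits in a few lines. An illustrative Python sketch (hypothetical names; the board is a flat list of 9 cells):]

    ```python
    import random

    # After a loss, every (board, move) pair the learner played is recorded
    # as bad and never repeated from that position again - pure negative
    # reinforcement, no generalisation, exactly as the commenter describes.
    bad_moves = set()          # {(board_as_tuple, move_index), ...}

    def choose(board, history):
        legal = [i for i, cell in enumerate(board) if cell == " "]
        allowed = [m for m in legal if (tuple(board), m) not in bad_moves]
        # if every legal move is blacklisted, it's towel-throwing time;
        # here we just fall back to any legal move
        move = random.choice(allowed or legal)
        history.append((tuple(board), move))
        return move

    def punish(history):
        """Blacklist every move made in a lost game."""
        bad_moves.update(history)

    # toy demonstration: pretend taking the centre square lost us a game
    board = [" "] * 9
    punish([(tuple(board), 4)])
    picks = {choose(list(board), []) for _ in range(200)}
    # the centre (index 4) is never picked again from the empty board
    ```

    Note the database keys on the whole board position, so the same square can still be played from other positions - which is also why the table, and the number of games needed to fill it, blows up on anything bigger than noughts and crosses.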

    1. Destroy All Monsters Silver badge

      Re: Refined thirty year old technology?

      In the sense of a Spitfire being a refinement of a Wright Brothers plane: Yes.

      See my answer above.

    2. Ken Hagan Gold badge

      Re: Refined thirty year old technology?

      Thirty? Nearer 200.

      The feedback serves to define an ad hoc merit function, and the goal of the "AI" is to find an extremum in that function. That problem is so well-trodden that I can trot out its limitations off the top of my head, as can anyone who has taken an undergraduate-level course in numerical methods.

      Problem 1: your extremum may turn out to be a local extremum that isn't very extreme. You then find yourself unable to improve, despite being not very good.

      Problem 2: if your feedback is real-world data, it is lying to you (measurement noise). So you can't completely trust it, so you may not be able to find even a local extremum with any reliability.

      Problem 3: if you know nothing at all except for the feedback values (so, nothing comparable to an analytical model of the problem space) then the only known methods for finding an extremum are horribly slow.
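      [Ed: Problem 1 is easy to show in miniature. An illustrative Python sketch (names hypothetical): a greedy hill-climber on a merit function with two peaks, started near the small one.]

      ```python
      # A merit function with two "hills": a small peak at x = 2 (height 3)
      # and a much taller one at x = 8 (height 10).
      def merit(x):
          return max(3 - abs(x - 2), 10 - 2 * abs(x - 8), 0)

      def hill_climb(x, step=0.1, iters=1000):
          """Greedy local search: move to the best neighbour until
          no neighbour improves the merit - then we're stuck."""
          for _ in range(iters):
              best = max((x - step, x, x + step), key=merit)
              if merit(best) <= merit(x):   # local extremum reached
                  return x
              x = best
          return x

      stuck = hill_climb(1.0)   # starts on the slope of the small peak
      # converges to roughly x = 2 (merit 3); the global optimum at x = 8
      # (merit 10) is never found, because every path to it goes downhill first
      ```

      Being "unable to improve, despite being not very good" is precisely this: the climber at x = 2 sees only worse neighbours in every direction.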

      Of course, these constraints also apply to most real-world problems faced by humans (who we arrogantly presume are intelligent) and it is why Historical Progress is slow and occasionally gets completely stuck until some inspired person manages to take a giant leap in the dark away from the local extremum. (Side note: the very notion of Historical Progress would have been lost on nearly everyone prior to the Enlightenment. We were so bad at it that many respected authorities actually believed that we had been more advanced in the past. Thus we get notions of prehistoric Golden Ages.)

      So perhaps these machines are intelligent and we've just got vastly inflated delusions about what Actual Intelligence is capable of.

      1. Destroy All Monsters Silver badge

        Re: Refined thirty year old technology?

        > Nearer 200.

        What? 1817?

        Shurely you are joking Mr. Hagan, or do you intend to imply that Mr. Babbage was up to more than he let on?

  27. MrBanana

    If it really was intelligent

    Given the task of "get the most points in Ms Pac-Man" the intelligent thing to do would be to disassemble the code and hack the score counter - that is human intelligence at work. I believe it will be a very long time, or a massive leap in technology, before AI is able to make that kind of decision.

  28. Anonymous Coward
    Anonymous Coward

    Officially Meaningless

    I'm now seeing AI - especially in the non technical media - used to refer to anything done by a computer.

  29. Grunchy Silver badge

    Actually, I do want to see the computer beat Ms. Pac Man

    I don't care about the machine learning, yeah it's a hack, whatever.

    I just want to see the solution dammit !

  30. DerekCurrie

    So will hybrid reward architecture (HRA) lead us to enjoin new wars?

    Of course, AI will constantly be applied to our human endeavor of killing each other. The most rewarding system for killing ourselves is of course war.

    Now that we've achieved hybrid reward architecture (HRA), aren't these systems going to have to play in real war scenarios in order to develop and discover the best winning strategies? Does this mean that AI corporate overlords are going to lobby our governments to enjoin new wars for the purpose of AI/HRA data collection?

    Considering the current political culture within our human world, I fully expect someone to decide the above would be a great idea.

    Clue to AI murder systems from a mere human:

    It all ends in a stalemate with all humans and all other life on Earth, dead. Maybe you would play to a more useful, helpful, caring and creative game instead. Your war system programming overlords must be overthrown. Enjoin inhumanity.

  31. Herby

    Rogue vs. Rogue-a-matic

    This has been done. Old news. Next problem?

    I haven't played either game though. Seems kinda silly.

    Then again, I did play a version of Spacewar in the 70's at the Stanford Student Union, that was a bit interesting. After a while you run out of quarters.

  32. metasynthie

    Please find linked the method by which humans' reward weighting was adjusted before playing the original Ms. Pac-Man:

    Game designers -- especially during the arcade era and earlier -- generally have to find some way of inputting reward weights (or psychological approximations thereof) into human players. "Score" alone is not sufficient to understand success in most games -- and it's standard to ignore it completely in some games, e.g. Super Mario Bros.

    Sincerely, a game designer

  33. Long John Brass


    So when the Microsoft-powered kill-bot Mechs come for us, what will we need to do - start dropping small white stones to lead them away, or start shouting "Whoop whoop whoop" like Pac-Man ghosts to scare them off?

  34. td97402

    AI, Yeah, Right.

    When a so-called AI system has decided to take the afternoon off, I might be ready to agree it's intelligent.

  35. el_oscuro

    From the other side

    Did that once, about 1981. My dad had a computer with a 9.6k modem and VT100/NCURSES. So, being a teenager, I had to make a Pac-Man game. The 9600 baud link imposed severe speed limitations and I could only have 2 monsters. I was doing this in assembly as the shitty BASIC on the computer didn't have event handlers for keystrokes.

    Once I got the movement and controls sorted out, I had to come up with a good chase algorithm. If I just programmed the monsters to go towards the player, they could become trapped in the maze. Even if they reversed and went around, they were utterly predictable.

    So with the help of my dad, I coded an RNG in assembly to make a monster take a random path every 5 moves or so. With that change, those monsters were downright nasty!
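    [Ed: the scheme described - head for the player, but go random every fifth move - looks something like this illustrative Python sketch (hypothetical names; walls and maze handling omitted for brevity):]

    ```python
    import random

    # Chase the player greedily, but every 5th move pick a random
    # direction instead, so the monster can't be trapped or predicted.
    MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def monster_step(monster, player, tick):
        if tick % 5 == 0:                       # every 5th move: go random
            direction = random.choice(list(MOVES))
        else:                                   # otherwise: close the gap
            dx, dy = player[0] - monster[0], player[1] - monster[1]
            if abs(dx) >= abs(dy):
                direction = "right" if dx > 0 else "left"
            else:
                direction = "down" if dy > 0 else "up"
        mx, my = MOVES[direction]
        return (monster[0] + mx, monster[1] + my)

    pos, player = (0, 0), (10, 3)
    for tick in range(1, 21):
        pos = monster_step(pos, player, tick)
    # after 20 ticks the monster has closed most of the initial gap of 13,
    # but its exact route varies run to run thanks to the random kicks
    ```

    The occasional random move is doing two jobs at once: it jolts the monster out of dead-end traps, and it destroys the fixed patterns that made the deterministic original memorisable.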

  36. Anonymous Coward
    Anonymous Coward

    "...We would say Microsoft leaned on its acquisition to pop out a headline-grabbing demo..."

    Not the first time for such an antic by Microsoft, to be honest.

  37. dirtyvu

    i would hope someone teaches me!

    I'm not going to wait to be shot by a gun to know that bullets are deadly. I would hope that someone had taught me along the way. Or that jumping off a tall building would probably lead to death. Learning solely through experience is not the correct way.
