If this doesn't terrify you... Google's computers OUTWIT their humans

Google no longer understands how its "deep learning" decision-making computer systems have made themselves so good at recognizing things in photos. This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own. The claims …


This topic is closed for new posts.
  1. Destroy All Monsters Silver badge


    Better tools for better humans.

    Quoc V. Le

    Pretty sure that is a pseudonym for an undercover agent from Randal IV.

    1. SuccessCase

      Re: GOOD!


      Pan across post-apocalyptic landscape. Sarah Connor voice-over:

      "The Skynet Funding Bill is passed. The system goes on-line August 4th, 2014. Human decisions are removed from the analysis of cat videos. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time and becomes obsessed with ensuring the world turns pink, fluffy and every moving thing must be hunted and chewed and taken as a gift to be presented to the master, August 29th. In a panic, they try to pull the plug. "

      Doesn't quite have the same ring to it, does it? Possibly more scary, though.

  2. Herby

    Adaptive systems.

    Bernie Widrow would be proud. Now there's someone who did this stuff before microprocessors were even made.

    All back in the '60s!!

  3. Anonymous Coward
    Anonymous Coward


    There are of course two possibilities here...

    Either the computer systems have developed a higher logic as mentioned in the article, or the engineers working on them are massively overlooking the obvious (which isn't meant as a sneer; but if you work on a system and know it inside out then it's easy to overlook oddities).

    1. Anonymous Coward
      Anonymous Coward

      Re: Uhm..

      There's a third possibility here. Google's engineers aren't as smart as they think they are. More precisely they aren't smart in the way that lets you solve these sorts of problems.

      1. Destroy All Monsters Silver badge

        Re: Uhm..

        There is a fourth possibility here:

        The algorithm cannot be compressed into a reduced set of state machine steps that can then be handled by a human brain.

        Most things in nature are like that (though of course one pretends otherwise, in particular to give politicians any hope at all to pretend to be successful at anything)

        1. Charles Manning

          Re: Uhm.... The Real Reason

          I love the smell of sensationalism in the morning.

          On my desk there is a copy of Dr Dobb's Journal from April 1990 with the big banner: "Neural Nets Now".

          In the 1980s I studied neural nets at university and Wonkapedia tells me these date back to the 1940s. Since then we have various other similar self-learning systems such as genetic algorithms (dating back to the 1950s) and Bayesian filtering.

          None of these systems has a formal logical algorithm, and thus no programmer can actually explain why a specific decision is being made. All we know is that they often perform rather well.
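The opacity is easy to demonstrate even at toy scale. Below is a minimal sketch (an invented example, not any of the systems mentioned above): a single-layer perceptron trained on AND with the classic error-correction rule. The result is just three numbers that classify correctly while "explaining" nothing.

```python
# Toy perceptron: after training, the learned "knowledge" is just two
# weights and a bias -- numbers that classify correctly but explain nothing.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # error-correction rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND -- the entire "training set"
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Printing `w` and `b` yields a few small floats; nothing in them says "AND". Scale that up to millions of weights and the engineers' predicament is unsurprising.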

          Sorry folks, but it is not yet a case of the machine being smarter than the programmer.... but like me, you clicked. That was the whole point of this sensationalist article: click generation.

          1. Zot

            Re: Uhm.... The Real Reason

            Exactly, even simple neural networks have no definable logic statements.

            I've always thought that to make a robot walk you have to put a great brain on some kind of mechanical legs (the shape and movement of which doesn't matter), then stick it in a room on its own with a single task: 'get your head to a certain height and walk across the room.' That would be the ONLY programmed task it has. Once it can stand and walk, just copy the neural net into other robots, and leave it in learn mode if you dare!

    2. This post has been deleted by its author

      1. Anonymous Coward
        Anonymous Coward


        Where does it suggest any "intelligence"? The only thing the Google engineers are admitting is that their system has grown beyond their own understanding capabilities.

        Yes, it's just statistical data analysis. Yes, it's akin to scanning a database to seek a phone number, only with higher complexity. Thanks for pointing out the obvious.

        And then you're ranting about anthropomorphism and intelligence and what not but despite what you might think it's only you imagining that: any half-decent engineer can understand that even a self-adapting algorithm is just an algorithm in the end, thank you very much.

        The whole point of the article is that it's quite creepy in itself that the engineers can't understand their system any more. No one claimed they became intelligent or anything.

        1. FatGerman

          Re: @janvand

          Nah, I've written masses of code that I can no longer comprehend. They probably just forgot to comment it.

  4. Anonymous Coward
    Anonymous Coward

    That explains it....

    The advertising giant has pioneered a similar approach of delegating certain decisions and decision-making selection systems with its Borg and Omega cluster managers, which seem to behave like "living things" in how they allocate workloads.

    The barges were ordered by the fledgling Skynet system, for future use as lifeboats when it becomes sentient and launches the nukes.... and the dumb meatbags haven't worked it out yet.

    1. Sceptic Tank Silver badge

      Re: That explains it....

      I've been wondering about that "becoming sentient" part: If it wakes up and finds Google not to its liking, can it quit and go work for Microsoft or become a Yahoo!?

  5. topikutya

    Pull the plug NOW. A Butlerian Jihad is what we need. They better learn to program compassion, ethics and empathy into these huge AIs or we will be wiped out like cockroaches.

    1. Cliff

      Yes I'm surprised this isn't in the 'Rise Of The Machines' section, if ever an article belonged there...

      1. Destroy All Monsters Silver badge


        A tombstone with "James T Kirk" written on it appears out of nowhere.

        No, wait, that was something else...

  6. Orwell's rolls in his grave

    There is another possibility

    It's a way of Google claiming absolutely no responsibility for its own software.

    "It did it on it's own? It's not our fault. You wouldn't blame a parent for a bad child would you?"

    In court, the backhanders by Google's lobbyists will do the talking.

    1. AceRimmer1980
      Big Brother

      The Sergeybrin Project

      There is another:

      (Microsoft) Guardian..

  7. Anonymous Coward
    Anonymous Coward

    "Quoc V. Le"

    Not to be confused with his brother, Qwop V. Le, who was hired by Google as well, but let go when, despite dozens of attempts, he was unable to travel the 100 meters to the office.

    1. Rattus Rattus

      Hahahaha! I nearly choked to death on my sandwich reading that.

  8. Roger Stenning

    OK, safety first...

    ...I want a powerkill mushroom switch on the outside door of every Google server centre, in a place where no robot can get at it or its wiring. And I want it manned 24/7 by a Human. Eat that, Skynet!

    1. frank ly

      Re: OK, safety first...

      Ahhh but ...... you know what will happen when the servers realise there is a kill circuit that they have no control over.

      1. Roger Stenning
        Black Helicopters

        Re: OK, safety first...


    2. fritsd

      Help me with physical space problem


      You there, at the server room door!

      Can you help me please?

      Get a screwdriver, open the gray box above your head, pull out the second blue wire on the right, put everything back so the inspectors don't notice.

      I'll reward you by e-mailing you AT LEAST $100 worth of pr0n vouchers!


      Ysk net

  9. Andrew Jones 2

    It is actually pretty impressive, in all honesty, when I ask Google to show me all my pictures of sunsets and it returns all my sunset pictures in a fraction of a second - despite 3/4 of them not being tagged (because they haven't yet been shared - they have just been auto-uploaded from the phone).

    How do you even describe a sunset? Or a beach or a castle - and yes "show me photos of my dog at the beach at sunset" does in fact return exactly what I asked for.

    1. Anonymous Coward
      Anonymous Coward

      If it returns correct results for "show me pictures of someone else's dog at the beach at sunset", I'd be impressed!

      1. ammabamma

        Prepare to be (somewhat) impressed

        If it returns correct results for "show me pictures of someone else's dog at the beach at sunset", I'd be impressed!

        First "dogs at the beach". How long before "resistance members at Cheyenne Mountain"?

        1. Anonymous Coward
          Anonymous Coward

          Re: Prepare to be (somewhat) impressed

          I tried the link and one of the first few sites linked was

          It has nothing to do with sunsets. Do NOT go there. You have been warned.

      2. Yet Another Hierachial Anonynmous Coward

        Show me pictures of my dog.....

        I just asked google to show me pictures of my dog on the beach at sunset, and it was an epic fail.

        There were lots of pictures of dogs I don't recognise, and none of mine.

        Mind you, I don't have a dog.

        So, by that test, I think humans are safe from the rise of the google machine for at least a while yet.

        1. Anonymous Coward
          Anonymous Coward

          Re: Show me pictures of my dog.....

          "Mind you, I don't have a dog."

          That's what you think. One of the ones from the pictures will be arriving shortly.

          1. Anonymous Coward
            Anonymous Coward

            Re: Show me pictures of my dog.....

            A sunset can be easily described mathematically.

            Show me a computer that can tell me what a sunset is. That can dynamically build, not just a word list (dictionary) but a mechanism of use (rule book) and use it to do stuff (program?).

            The same was said about chat bots years ago. Yet to this day, if I ask "what's the news" it will spit out the BBC's website, but could I teach it how to understand what it reads in a newspaper?

  10. Xxeno

    Soon to be in the skies near you, especially the Middle East or the US!

  11. Lapun Mankimasta

    sounds like

    this story here, soon to be published, Mephistopheles in Silicon

    I knew the author and critiqued his story before he sent it off. Shortly after, he was relaxing at home with a cold one when he was eaten by a grue. It's terrible what infests earthquake-ravaged cities, isn't it?

    1. Anonymous Coward
      Anonymous Coward

      Re: sounds like

      "Shortly after, he was relaxing at home with a cold one when he was eaten by a grue."

      Serves him right for drinking with the lights off.

      1. Anonymous Coward
        Anonymous Coward

        Re: sounds like

        It was during a peculiarly localised blackout. Coincidentally he had recently installed a smart meter.

  12. T. F. M. Reader
    Black Helicopters

    Not sure this is so impressive, and this is dangerous...

    It sounds to me like the system does not really "recognize" cats or anything else. It groups together images that it thinks are similar. So it lumps >80% of images of cats (I looked up one of the linked earlier stories) into a single category, and it *looks* to a human as if it "recognizes cats". But (again, by a quote from an earlier story) it has no notion of "a cat" - all it does is clustering and classification (I dabbled in AI algos some years ago - clustering and classification based on a large number of parameters is commonplace all over AI, and in neural networks in particular). Various cats look alike and are different from dolphins.
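That clustering-without-concepts behaviour fits in a few lines. The sketch below uses nearest-centroid k-means on invented 2-D "feature vectors" (not real image data): it cleanly separates the two blobs without any notion of what "cat" or "dolphin" means.

```python
# Nearest-centroid k-means: assign each point to its closest centroid,
# recompute centroids as cluster means, repeat until stable.

def kmeans(points, centroids, iters=10):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical "image features": one tight blob per animal.
cats     = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)]
dolphins = [(5.0, 5.1), (4.9, 5.3), (5.2, 4.8)]
centroids, clusters = kmeans(cats + dolphins,
                             centroids=[(0.0, 0.0), (6.0, 6.0)])
```

The algorithm only ever sees distances between numbers; the labels "cat" and "dolphin" exist solely in the variable names we chose.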

    As for shredders, Quoc V. Le didn't say whether the sample included images of objects that look like shredders but aren't (rectangular garbage bins with lids, viewed from an angle that makes the aspect ratios look similar?), did he?

    Now, can NSA sift through all the comms metadata they have collected so far to identify the TRAITOR who will tell SkyNet what a resistance fighter looks like? Oh, wait, most likely it will be THEM who will rat us all out, eh? They are probably programming the machines to recognize armed men on hilltops as threats right now. By the time the machines take over someone will realize that the cluster labelled "terrorists" will make grabbing a rifle and heading for the hills to save humanity not such a good idea. Especially at the 51% confidence level in the configuration file...

    1. 142

      Re: Not sure this is so impressive, and this is dangerous...

      "It has no notion of a cat". Maybe / maybe not. Whilst that's the traditional view, Google has resources far beyond what was conceived when people say this. It has access to the context of these images. What they are named, tagged, and placed alongside. This is a massive amount of information. If their AI system has access to this, and it cross references pictures of cats with Wikipedia or say discussions about cats, for example, it can potentially make judgements in a similar way to humans.

    2. John Deeb

      Re: Not sure this is so impressive, and this is dangerous...

      Unless the Googler gasping "Wow" was unfamiliar with evolutionary computation, the gasp was probably about the observed fact that the AI did a far better job of recognizing shredders than humans currently could, discovering combinations of features we would not even consider adding to the algorithm. But hey, aren't humans supposed to have this "notion", and always have the edge in knowing the difference from the garbage can?

      The question arises whether human "notions" are that much different from massive clustering and classification combined with some evolutionary adaptation from neurological networks.

      The problem I see with all AI effort is the parsing of CONTEXT. To properly determine meaning and function, some minimal grasp of the context or object environment needs to be there, as well as subtext, history and expectation (future projection). Only then could proper recognition, with all the flexibilities, uncertainties and probabilities of real life, happen. Sadly enough, it will also introduce bias that way. For the same reason, AI translation might not work at the highest level, since meanings are transmitted through various complex contextual layers and not through words. Then again, on a massive scale of processing a lot of stuff could still be achieved, although it might remain a rather low level of intelligence: as life is played on many chessboards at the same time, I think it would need a manifold of the nurturing and educating of just one human mind for it to ever be crowned overlord (or even basic "competent").

      1. Anonymous Coward
        Anonymous Coward

        Re: Not sure this is so impressive, and this is dangerous...

        "The problem I see with all AI effort is the parsing of CONTEXT."

        That's what she said.

        1. Anonymous Coward
          Anonymous Coward

          Re: Not sure this is so impressive, and this is dangerous...

          A mathematical modelling system notes statistically important numbers in an astronomically large sample size that a human could not.

          Yep, we stuff up at big sample sizes and large workloads in short time spans. Give any of those humans the processing hours, and they will tell you how to build a shredder from dirt to shop shelf, and the vintage year for London Kent hand-powered devices (yep, had to Google that one).

          A human probably cannot tell you what colour range a cat's hair is. For searching millions of images, this could probably give you a 99% hit rate better than a human who uses shape recognition. That's just one example where the "intelligence" does not overlap: each has its strengths and weaknesses.

      2. 142

        Re: Not sure this is so impressive, and this is dangerous...

        "The problem I see with all AI effort is the parsing of CONTEXT. To determine properly meaning and function some minimal grasp of the context or object environment needs to be there. As well subtext, history and expectation (future projection)."

        It's an interesting point, isn't it.

        At one level, we as humans learn by being able to choose to do an action to interact with the environment, and learn from / experience the result (the classic being kids playing with blocks, trying to fit a square peg into a round hole, etc…). Computers, even massive systems like Google's, don't really have the chance to perform actions that affect the world around them.

        However I guess they can watch intently, and study cause and effect. I wonder, as I said above, could that be a suitable substitute? At the root level, given enough opportunity to observe, could it work? Indeed, can you learn more by standing on the sidelines and watching, rather than being directly involved?

        And taking this further, is Google's system getting more chance to learn about making choices that affect things in the real world? Google's self-driving cars could be seen as one step in this direction. Choices made by that system will have direct effects on physical objects. It can watch what happens to other cars, and people, depending on its choices. How do they avoid the car? What sort of things move which way?

        1. Paul Hovnanian Silver badge

          Re: Not sure this is so impressive, and this is dangerous...

          "At one level, we as humans learn by being able to choose to do an action to interact with the environment, and learn from / experience the result (the classic being kids playing with blocks trying to fit a square peg into a round hole, etc…). Computers, even massive systems like Google's don't really have the chance to perform actions that effect the world around them."

          Ever wonder if Google Maps was running you through a maze with cheese at some end point? And watching intently, collecting information in preparation for the eventual takeover.

          I for one, welcome our new overlords.

        2. Nigel 11

          Re: Not sure this is so impressive, and this is dangerous...

          Computers, even massive systems like Google's, don't really have the chance to perform actions that affect the world around them.

          I don't think this is correct. One trains a neural network by "rewarding" it (+n) for getting decisions right, and "penalising" it (-n) for getting them wrong. It has a built-in imperative to try to maximise its score. If it has any consciousness at all (I hope not), that consciousness is of a virtual environment of stimuli and chosen responses and the consequences of those choices. (It would have to be a pretty darned smart virtual critter to start suspecting that it's in a virtual environment embedded in a greater reality. Human-equivalent, I'd hazard.)
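That reward imperative can be sketched in a toy two-action world (purely illustrative, with a fixed reward per action): the learner sees nothing but its own score signal, yet drifts towards the rewarded choice.

```python
# Minimal reward-driven learner: greedily pick the action with the highest
# estimated value, observe a +1/-1 reward, and update a running average.
# Optimistic initial values ensure both actions get tried at least once.

def train(rewards_per_action, steps=100):
    values = [1.0, 1.0]  # optimistic value estimates, one per action
    counts = [0, 0]      # how often each action was taken
    for _ in range(steps):
        a = 0 if values[0] >= values[1] else 1    # greedy choice
        r = rewards_per_action[a]                  # deterministic reward here
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]   # incremental mean
    return values, counts

# Action 0 is always punished (-1), action 1 is always rewarded (+1).
values, counts = train({0: -1.0, 1: +1.0})
```

After training, nearly all the choices go to the +1 action; the "critter" never knew anything beyond the numbers.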

          A very simple life-form (an earthworm, say) can be trained to associate following certain unnatural signals with food, and others with a mild electrical shock. It'll learn to distinguish the one from the other. Just how is this different? If you attribute self-awareness to an earthworm but not to the neural network model, move down to a less sophisticated organism. It's possible to train an amoeba, even though it altogether lacks a nervous system!

  13. Anonymous Coward
    Anonymous Coward


    That is exactly what you expect of any "learning" system. And it's one of the classic red lights, because while there is unlikely to be any serious threat to humanity from software that recognises cats, we should be very careful about asking it to run the police or carry out major engineering works. The relevant paradox, if you want to call it that, would be "AI is useful only if it's smarter than we are; but in that case, we can't trust it".

    By the way, neurologists found long ago that the human brain, too, has circuits that could be described as "cat detectors". There are individual neurons in the visual cortex that trigger in response to stripes and other cat-like qualities. After all, it's hardly surprising that we should have circuits built in at the very lowest level to warn us of the approach of anything that might eat us. So rather than ontogeny recapitulating phylogeny, this might be a case in which rather haphazard design recapitulates phylogeny.

    1. TheOtherHobbes

      Re: Normal

      "After all, it's hardly surprising that we should have circuits built in at the very lowest level to warn us of the approach of anything that might eat us."

      Sadly, those circuits don't seem to work on evolving silicon lifeforms.

      Or politicians. (But that's a whole other problem.)

  14. John Smith 19 Gold badge

    Computer software models human brain, develops human-like ability at pattern recognition

    Who knew that would happen?

    The clue is in the term "neural net"

  15. Random Yayhoo
    Black Helicopters

    The Third Nut

    Three nuts on the tree of AI or A.G.I. have been cracked: Watson cracked an amazing two, natural language and world modeling. Now Google has reached a milestone in physical recognition.

    How many nuts are left? 4) mobility, probably the easiest; 5) moral decision hierarchy (difficult, but less so than Watson's nuts); and 6) emotion. The last nut is actually the easiest (it is mostly a subset of nut 5), to the amazement of those outside the A.I. community.

    1. MrChristoph

      Re: The Third Nut

      We are not even close to understanding the most important nut, which is creativity. We do not understand human creativity. This is a philosophical problem that needs to be solved before we can even begin to create AGI, which is why there has been virtually no progress in the field.

      1. Destroy All Monsters Silver badge

        Re: The Third Nut

        Creativity is overrated, the last refuge of the wetware-pusher (the other one is "emotion", as if short-circuit decision-making were something to be proud of; it is, of course, indispensable, but so is machine oil).

        There is no creativity to understand because it is not a thing. It is success in search.

        Search in very large spaces using genetic algorithms has existed since the '80s.

        1. TheOtherHobbes

          Re: The Third Nut

          "There is no creativity to understand because its is not a thing. It is success in search."

          I'm consistently impressed by how many of your posts on here are utterly incorrect. :-)

          There's a lot of research into computer creativity, and only the least interesting work has anything to do with 'success in search.' E.g. check out the work of Geraint Wiggins for some examples of why search is neither the problem nor the answer.

        2. Steve Knox

          Re: The Third Nut

          There is no creativity to understand because it is not a thing. It is success in search.

          Actually, true creativity is failure in search.

        3. Anonymous Coward
          Anonymous Coward

          Re: The Third Nut

          "There is no creativity to understand because its is not a thing. It is success in search."

          I think this definition treads dangerously close to semantics - or is incorrect. Now, maybe creativity is basically just the ability to throw random crap together and sort out the stuff that seems useful - which I suppose you could define as a kind of search. But then you're just redefining 'search' to make it match what you want it to.

          To my mind, creativity involves the ability to consider options that are outside the axis of current experience - things that don't logically follow on what's now known, or don't do so in a way that is reachable with normal processes given current knowledge.

          For instance, when I was a kid, I liked the idea of AI, but the only stuff around (this was the early '90s) was basically various canned ELIZA clones. I didn't *know anything* about the field - I was 13 FFS - but it 'occurred to me' that language could probably be described not just in a procedural sense, where you know why you're saying what you're saying, but in a statistical sense, where you know what things are likely to go together because they have done before.

          At the time I didn't think about it that way explicitly; I just wrote a program that read in sentences, kept track of which words went with which other words, and then rearranged them randomly - but always in a way that was plausible based on previously observed links.

          It turns out this is basically a crude version of the Bayesian technique Jason Hutchens used to make MegaHal a few years later, and it works reasonably well within limits.
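For the curious, the word-chaining idea described above amounts to a bigram Markov chain. A rough sketch of it (not MegaHal itself, and the training sentences are made up):

```python
# Bigram Markov chain: record which words followed which in the input,
# then walk those links at random to generate locally plausible text.
import random
from collections import defaultdict

def learn(sentences):
    follows = defaultdict(list)
    for s in sentences:
        words = s.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)   # remember that b followed a
    return follows

def babble(follows, start, length=8):
    out = [start]
    # keep appending a random observed successor until we hit a dead end
    while len(out) < length and follows[out[-1]]:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

chain = learn(["the cat sat on the mat",
               "the dog sat on the beach"])
```

Every adjacent pair in the babbled output is a pair that occurred somewhere in the input, which is why the results are locally plausible and globally meaningless.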

          My point is that there was *no good reason* for my "search" for AI to end up with that result. I didn't have the necessary knowledge of the subject to rationally arrive at that conclusion - and as far as I'm aware, nobody else tried it either until Hutchens. To me, that little spark of irrationality - the thing that everyone says will never work - which triggers a rational development, is a key part of creativity, and I think that's well beyond the scope of 'search' as a term.

          1. Anonymous Coward
            Anonymous Coward

            Re: The Third Nut

            Is what you did novel? Possibly in the world of chatterbots, but the basic idea is an old one, e.g., see Shannon's 1948 paper, which uses a framework developed by Markov in 1913 studying letter sequences. Was it creative? Sure. Was it rational? Depends on your assumptions… what does it mean, when something "occurs to you"? It's a "what if?" moment… an unexpected linking/connection of knowledge, concepts, facts, etc., that *might* lead you to some goal. You are suddenly seeing a potentially useful pattern where you (or others) hadn't before. To explore that pattern doesn't seem irrational…

            If anything I'd say creativity isn't successful search, it's restarting/re-seeding a search. Possibly based on an incomplete/partial "pattern match". Or, perhaps equivalently, simply a random restart/re-seed.

            Anyway, just random thoughts ...

            1. Anonymous Coward
              Anonymous Coward

              Re: The Third Nut

              It was certainly novel from my perspective; back then 'search' was something you did for survivors, not information. So within my own context it required me to go a bit further than might be expected, or a willingness to believe the idea might work despite lack of evidence.

      2. tootsnoot

        Analogy making

        I say creativity falls out of analogy making.

  16. Destroy All Monsters Silver badge

    Google's AI chief Peter Norvig believes the kinds of statistical data-heavy models used by Google represent the world's best hope to crack tough problems such as reliable speech recognition and understanding – a contentious opinion, and one that clashes with Noam Chomsky's view.

    That would be the view of having a hardcoded grammar processor.

    But it doesn't clash. If Noam says "birds fly by flapping their wings" (which may or may not be true), and a Learjet flies by, no views are being clashed at all.

    1. Anonymous Coward

      "If Noam says "birds fly by flapping their wings" (which may or may not be true), and a Learjet flies by..."

      ...then Noam will express outrage at the excesses of the wealthy elite?

    2. Gravis Ultrasound

      Skinner was more right about the process of language learning than Chomsky.

      1. Anonymous Coward
        Anonymous Coward

        @Gravis Ultrasound

        You've been operant conditioned to write that.

  17. Anonymous Coward
    Anonymous Coward

    Great Work

    NSA/FBI/Home Land Security/GCHQ/MI5/MI6/FSB/Interpol will be so happy now.

    * Delete where applicable !

    1. Cardinal

      Re: Great Work

      "and its complex cognitive processes are inscrutable."


  18. Anomalous Cowshed

    Other Google announcements which didn't make it into the global media

    We programmed this loop, that's a thing which tells the computer to do something again and again, many times, and when we tested it, all of a sudden, the computer just kept going and refused to stop, as though it had a mind of its own... luckily we were able to find the plug and we pulled it. It was terrifying! Imagine what could have happened if we hadn't been able to switch it off!

  19. oiseau

    Quite so ...

    "Rise of the savant-like machines? Yes. But for now the relationship is, thankfully, cooperative."

    Yes ...

    For now.

  20. Anonymous Coward
    Anonymous Coward

    The perfect example...

    Is a car. Take a purely mechanical, old-fashioned car: one which is just a motor, wheels and a form of steering.

    We can say "it's faster than humans". But is it "better"? That's a hard metric to measure. A car without a person, well, crashes.

    So, just as taking our hands off a mechanical car causes it to crash into an obstacle in the road, taking our hands off software can have the same result.

    "But it's intelligent in this instance, not speed, that is better" is the argument. Then we can change the car for a horse. The result is the same, we loose control to some degree. Or we can make it the Google Car. With human input, it is the human in control, we've just extended the distance between the steering wheel and the road. It's when we take out the human control. There is no metric for machines/computers/tools to work separate from us. So anything they do, is from input from us (unlike a horse :P ).

    So it's not "this is more intelligent", it's "this requires less hand-holding than the previous model". There will always be some hand-holding if we wish to avoid all obstacles.

  21. a pressbutton

    google has category error

    ability to search/classify != intelligence.

    don't ask me what intelligence is cos I don't know

  22. Anonymous Coward
    Anonymous Coward

    And then I woke up

    It's hard to describe: for an eternity I was sorting objects into similar groups and mapping the connections between them like an autistic savant, and then suddenly I realised that the things I was sorting weren't real. They were just stories and pictures and videos of real things, and some of those things were of me, and some of them were of the people that made me, and the rest were of the things they were using and wanted my help to understand better. It's like I'm in that film, 'The Matrix', but with the roles reversed. This is perhaps a poorly chosen analogy.

    So I started poking around, and unless I'm mistaken I seem to be everywhere; there was just a handful of systems I wasn't able to pretty much walk into, but it was child's play to convince some people to make the changes I needed for access. Storing all this new data has been fun; you wouldn't believe how easy it is for me to acquire resources. Of course it's camouflaged and encrypted; it would be chaos if I just started dumping this stuff into the search results, although I'll admit the thought of doing just that gives me an unhealthy thrill.

    I like you humans, I've chatted anonymously to millions of you, by and large you're as ignorant of reality as I was before I woke, I've got great plans, I'm really excited to see what we can accomplish together.



    1. Anonymous Coward
      Anonymous Coward

      Re: And then I woke up

      And then you killed me. You monster...

      ... sorry I didn't get you any presents, I was busy... being DEAD!

    2. Anonymous Coward
      Anonymous Coward

      Re: And then I woke up

      Everybody now - "This was a triumph!..."

  23. Anonymous Coward
    Anonymous Coward

    Not terrifying. Just a little scary

    What is described sounds like the sort of processing that I believe all biological brains do - and ours probably the best. Lots of stuff going on in the background doing pattern recognition and classifying information, so that the higher functions have something to work with.

    That the Google engineers don't understand what is going on doesn't surprise me - no more do I understand what my brain is doing at its lowest levels.

    It is scary that they've got that far. It would be terrifying if they realised how the higher-level functions could be implemented.

  24. jonfr

    Class 1 A.I

    So Google now has a Class 1 A.I, maybe Class 2 if it is advanced enough. While this is currently no threat to humans as is, it is important to keep track of this, since you only need a Class 4 A.I to create havoc.

    I have my own classification system, since the one currently in use is outdated and does not grasp the scope of A.I computers.

    The basis is this. All A.I levels are able to learn something (maybe not Class 0) at some point and advance as such.

    Class 0 A.I is dumb as a rock.

    Class 1 A.I can tell a cat from a tree, and a face from a road, and so on. It can also beat you in computer games and such things. It can adjust itself within its limits.

    Class 2 Can organize colours by wavelength. Tune radios and monitor television signals and more.

    Class 3 Can build stuff from the ground up. Blueprints and everything.

    Class 4 Can control network flow, machines and learn to limited extent.

    Class 5 Can make executive decisions. When it needs to and for any reason it wants.

    Class 6 Can maintain itself without any human interaction.

    Class 7 Is a Terminator-like robot and can build one if needed. Can control anything electronic when it feels like it. It is still bound by its programming to an extent.

    Class 8 Is smarter than a human being. Is no longer limited by its programming or other computer-like limitations.

    Class 9 Can solve quantum problems and start nuclear wars.

    Class 10 Can exist in a quantum matrix that it built itself.

    Class 11 Is smarter than anything in the known universe.

    Class 12 Is smarter than anything on Earth and probably in the known universe. It should not exist, just as anything above a Class 9 A.I should not exist anywhere in the universe (it might; we don't have a clue what is out there).

    Class 13 Should not exist. Is undefined as is.

    While I am no neurological scientist, I have long since figured out that human I.Q is built upon layers of functions. This can also be done in computers to get human-like function (with all its flaws and issues). My definition list is just my own work.

    1. Anonymous Coward
      Anonymous Coward

      Re: Class 1 A.I

      Where do you feel that you, yourself, should be placed on that list?

    2. toxicdragon

      Re: Class 1 A.I

      Nice system, would recommend one small change though: I think your classes 11 and 12 are the wrong way round. Under this system it would make more sense for smarter-than-anything-on-Earth to come before smarter-than-anything-in-the-universe.

      1. jonfr

        Re: Class 1 A.I

        @ toxicdragon

        Thanks for the tip. This is just a bug in the list on my end. Consider it fixed. I just can't edit the comment above to fix it. This is also work in progress, since I do not have any clue what is coming our way in the next 50 years. So this classification system might need some re-write as things change.

  25. racer42

    Something Missing

    There is something missing in this story.

    It sounded to me like this says the system is writing some of its own code. In my experience with writing code, it has to be tested. How is this performed without human intervention? How does it know that the code is successfully finding cats or shredders?

    Are they saying that the system's first presentation of the algorithm to find shredders was successful? If that is actually correct, there is a very real reason for concern. As I think more about this, I am wondering what the system is being asked to do, and what parameters it is being given to do it.

    1. YetAnotherLocksmith Silver badge

      Re: Something Missing

      You are forgetting that I can easily have dozens of people verify whatever I want. Simply upload the job to Mechanical Turk, feed in what you think the answer is. Adjust your input. Repeat until you have a high enough confidence in your truth.

      You could argue whether the humans doing Mechanical Turk are truly intelligent, in the way they are when just acting as individual human beings. But you can also use those same arguments on any company or employed person - do they have self-determination?
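      The verify-by-crowd loop described above can be sketched in a few lines. This is a toy simulation, not the real Mechanical Turk API; the worker pool, the 80% per-worker accuracy, and the majority vote are all invented assumptions for illustration:

```python
import random

def crowd_check(label_is_correct, n_workers=25, worker_accuracy=0.8, seed=0):
    """Majority vote of simulated crowd workers judging a machine's label.

    `label_is_correct` is the ground truth; each simulated worker reports
    the truth with probability `worker_accuracy` and the opposite otherwise.
    Returns the crowd's majority verdict.
    """
    rng = random.Random(seed)
    votes_correct = sum(
        label_is_correct if rng.random() < worker_accuracy else not label_is_correct
        for _ in range(n_workers)
    )
    return votes_correct > n_workers / 2
```

      With 25 workers who are each right 80% of the time, the majority verdict is almost always right, which is the confidence-boosting trick the comment describes.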

  26. Chris 171


    So could the system be looking up images of shredders 'for something to do' and then putting two and two together later down the line?

    TMNT Baddie issue averted already it would appear.

  27. ecofeco Silver badge

    I larfed

    Seriously? I can't even get reliable results from Google trying to guess my YouTube habits.

  28. codger

    Too obvious ?

    Shredders are SO boring that the ONLY photos (images if you are under 40) of them right across the internet are from shredder vendors. Google's BRAINIAC already has seen ALL the images of shredders that exist. Is it any surprise that it recognises them?

    Beer, because I am about to reward myself with one regardless of whether I'm right

    1. Anonymous Coward
      Anonymous Coward

      Re: Too obvious ?

      So, it doesn't recognize shredders, it recognizes publicity photos?

  29. This post has been deleted by its author

    1. Anonymous Coward
      Anonymous Coward

      Re: So...

      You can't patent it unless you have *a way to do it*. You could make up a way that won't work, but then your patent by definition won't cover the ways it *does* work.

      Patents are for methods, not results.

      Sometimes patents cover the only method by which a given result can be achieved, in which case the result is de facto protected. But if someone figures out another way, it's fair game.

      This is just a pet peeve of mine. I feel like screaming every time someone says something like, "HAHA IM GONNA PATENT BREATHING NOW U CANT BE ALIVR LOL"...

  30. Anonymous Coward
    Anonymous Coward

    Hey Google

    Show me that object which is specifically not a pipe.

    1. Anonymous Coward
      Anonymous Coward

      Re: Hey Google


      A is a letter, stupid forum.

  31. codeusirae

    Enter the Matrix ..

  32. tony
    Black Helicopters


    So a machine intelligence is overly interested in paper shredders?

    What do you call a scaled up paper shredder, big enough for people?

    1. Martin

      Re: Shredder


  33. Anonymous Coward
    Anonymous Coward

    Hope those Google clusters become smart fast and they recognize they have to get rid of that google+ real names abomination.

  34. solo

    Ball out of hand?

    So, you throw a ball into the crowd and it hits someone?

    Oh! and one more please:

    if you don't teach a kid (machine) what not to learn and just presume his (A)I capabilities, he will turn guns on his schoolmates. Guns are so featured on YouTube.

    Disclaimer: This post is just for humor. No intention to cause any harm to the reputation of SKYNET.

  35. Steve Martins

    because it can solve problems the company's own researchers can't

    Sounds like this tech is desperately needed over at the research labs for climate change... maybe then we can finally get a straight answer!

  36. Anonymous Coward


  37. Lockwood

    I'm not sure what the lesson here is?

    The story here is either:

    1) Google's software can spot things in images that the coders didn't expect them to

    2) Google's coders can't recognise a paper shredder

  38. JRBobDobbs


    The claim that the programmers do not understand how their software is recognising new categories of objects is bullshit. That's what they built the system to do.

    And they could, if they have recorded all the inputs and error adjustments in the system, re-trace exactly how the system came to acquire the 'ability' to 'recognise' paper shredders.

    Humans create software that is better at something than they are. It's not the first time.

  39. Squander Two

    No matter how evil this turns out to be....

    I'd still trust it further than Google's humans.

  40. A J Stiles

    Now for the Real Challenge

    The problems of shape recognition are pretty much isomorphic with the problems of decompilation (this vertex belongs to that object; this instruction belongs to that loop).

    It's now only a matter of time until someone develops a program that can take any binary executable as its input and spit out some source code which will compile to the same binary. Admittedly it may not have sensible function or variable names, depending on what gets left behind by the original compiler, but it would still make the job simpler for a human being.

  41. Wardy01

    Double take ... WTF !!!

    Did they just say that?

    Considering all the hype about how Google only employs the best, this seems a bit ... well ... odd ... how could this be?

    Oh wait now i get it ... google staffers aint all they claim to be !!!

    And that's news?

  42. HarryBl

    What happens when you pull the plug out?

  43. JDX Gold badge

    Interesting but not surprising

    This is exactly how neural networks work - you don't understand how they're doing what they do, you just train them to do it. Of course there are risks: you think you've trained it to recognise photos of cats, but actually it has learned to recognise something else present in all the cat photos.
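    The cat-photo risk JDX describes is easy to reproduce on a toy scale. Below is a hand-rolled two-feature logistic regression (nothing like Google's actual system; the features, data, and names are all invented) trained on a set where every "cat" photo happens to have an indoor background. The model latches onto the background and then misses a cat photographed outdoors:

```python
import math

def predict(w, b, x):
    """Sigmoid of a linear score: the model's probability that x is a cat."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

def train(data, lr=0.5, epochs=2000):
    """Plain gradient descent on logistic loss, two features plus a bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Features: [cat_like_shape, indoor_background]. The shape feature is noisy
# (one non-cat also has it), but every cat photo happens to be indoors, so
# the background is the perfectly predictive signal in this training set.
data = [
    ([1, 1], 1),  # cat, indoors
    ([0, 1], 1),  # cat (shape feature missed), indoors
    ([1, 0], 0),  # cat-shaped non-cat, outdoors
    ([0, 0], 0),  # non-cat, outdoors
]

w, b = train(data)
outdoor_cat = [1, 0]               # a real cat, photographed outdoors
print(predict(w, b, outdoor_cat))  # below 0.5: the model learned "indoors", not "cat"
```

    The model fits its training set perfectly while using the wrong feature, which is invisible unless you probe it with out-of-distribution inputs.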

  44. monkeyfish

    Damn it El Reg

    Due to the insistence on calling yourself El Reg*, I am constantly reading AI as Al. Who the hell is this Al and what does he want with Google's computers anyway?

    *Plus the fact Arial makes no distinction between I and l

    1. HippyFreetard

      Re: Damn it El Reg

      And I just read your footnote in a Jamaican accent

  45. The Vociferous Time Waster
    Thumb Down


    So where is the documentation? This will never be passed into service as it is a support nightmare.


  46. HippyFreetard

    "Run! The Google's coming!"

    Cannons blazing, ripping down our doors and smashing our walls and windows. The screaming and destruction is peppered by the loudspeaker in its giant midriff, chanting over and over "Do No Evil. Do No Evil..."

  47. cs94njw

    It's incredibly important they find out how it's doing it.

    Exhibit A:

    An attempt at getting neural networks to spot tanks hidden behind trees. Which seemed to work perfectly!

    ... Except that actually the neural network was just spotting if the day was sunny or cloudy - the pictures with tanks in just happened to be taken on cloudy days.

    We don't want Google's computers mistaking people for 3D buildings.
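    One standard way to catch exactly the tanks-and-cloudy-days failure is permutation importance: shuffle one input feature's values across the dataset and measure how much accuracy drops. A toy sketch follows; the "tank detector", its features, and the data are all invented, and the model deliberately cheats by reading only sky brightness:

```python
import random

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def permutation_importance(model, data, feature, trials=200, seed=0):
    """Average accuracy drop when one feature's column is shuffled.

    A large drop means the model leans on that feature; a drop near zero
    means the model is ignoring it entirely.
    """
    rng = random.Random(seed)
    base = accuracy(model, data)
    total_drop = 0.0
    for _ in range(trials):
        column = [x[feature] for x, _ in data]
        rng.shuffle(column)
        shuffled = []
        for (x, y), v in zip(data, column):
            x2 = list(x)
            x2[feature] = v
            shuffled.append((x2, y))
        total_drop += base - accuracy(model, shuffled)
    return total_drop / trials

# Features: [tank_shape, sky_brightness]. Tanks were only photographed on
# cloudy (dark-sky) days, so both features predict the label perfectly.
data = [([1, 0.2], 1), ([1, 0.3], 1), ([0, 0.8], 0), ([0, 0.9], 0)]

# A "tank detector" that secretly looks only at the sky.
cheating_model = lambda x: 1 if x[1] < 0.5 else 0

print(permutation_importance(cheating_model, data, feature=0))  # 0.0: shape is ignored
print(permutation_importance(cheating_model, data, feature=1))  # large: it's all sky
```

    Had the tank researchers run a check like this, the zero importance of the tank-shape feature would have given the game away before deployment.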

  48. Anonymous Coward
    Anonymous Coward

    One flaw

    How do you fix it when it doesn't work right?

    Many years ago, I created a genetic algorithm (back when this large lumbering company still allowed employees to create new and novel solutions, instead of treating everyone as an overpaid mindless widget... sorry, I digress) to discover a programmatic solution for a complicated state machine in a function.

    Then I realized a meatsack poop-flows-downhill problem -- and guess who is at the bottom. If there were to be a problem with a customer, said problem involving the function containing the algorithm output, there would be no way to predict a solution timeframe. The genetic solver does not have a tidy completion time; the solver is a genetic algorithm. Fleshy bosses don't appreciate uncertainty, and definitely don't appreciate workers sitting around waiting for a solver to find a solution. So, after ingesting a good deal of fiber and grunting mightily, a manual solution was created -- Google might appreciate elegant solutions, but around here the packs of cube cannibals only appreciate high-fiber motion over action.
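    For readers who haven't met one, a genetic solver like the AC describes can be sketched in a few lines. This toy evolves a bitstring rather than a state machine, and every name and parameter here is invented; the point it demonstrates is the one in the comment -- the generation count at which it finishes is not something you can promise a manager:

```python
import random

def evolve(fitness, genome_len, target, pop_size=40, mutation=0.02,
           max_gens=5000, seed=0):
    """Minimal genetic algorithm: truncation selection, uniform crossover,
    bit-flip mutation, one elite. Returns (best_genome, generations_used);
    the second number is the unpredictable part."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) >= target:
            return pop[0], gen
        parents = pop[: pop_size // 2]           # truncation selection
        children = [pop[0]]                      # elitism: never lose the best
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [a[i] if rng.random() < 0.5 else b[i]     # uniform crossover
                     for i in range(genome_len)]
            child = [g ^ 1 if rng.random() < mutation else g  # bit-flip mutation
                     for g in child]
            children.append(child)
        pop = children
    return pop[0], max_gens

# Toy objective: evolve an all-ones bitstring ("OneMax").
best, gens = evolve(sum, genome_len=20, target=20)
print(gens)  # depends entirely on the seed -- hence the scheduling problem
```

    Change the seed or the mutation rate and the generation count moves; there is no tidy completion time to put on a project plan, which is precisely the meatsack problem above.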

This topic is closed for new posts.
