Skynet it ain't: Deep learning will not evolve into true AI, says boffin

Deep learning and neural networks may have benefited from huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment. Gary Marcus, ex-director of Uber's AI labs and a psychology professor at New York University, argues …

  1. Locky
    Terminator

    Deep learning?

    Don't talk to me about deep learning

    1. Anonymous Coward
      Anonymous Coward

      Re: Deep learning?

      The first rule of Deep Learning is: Don't talk about Deep Learning?

      Here's one for the anthropic-principled ones, if they want to burrow deeper into the rabbit hole: why are we living at exactly the epoch where we are working on AI and General AI emergence seems possible, maybe by some research group doing that one weird trick with large amounts of Google hardware crap?

      Could it be that we are being simulated by a Brobdingnagian AI running a large quantum computer that wants to experience how all this existential bullshit it has to put up with every. single. second. came to pass?

      (Update: Googling reveals there is the idea of Roko's Basilisk floating around ... I kekked. Humans really are crazy.)

      1. Muscleguy

        Re: Deep learning?

        If we're in a simulation, where is the I/O bus? There must be one. A simulation has to run ON something, and there must be information flow between the simulation and its host.

        We live in a universe with a speed limit, which means there must be lots of little local I/O links. Where are they? Why hasn't CERN seen signs?

        Simulation angst is just a psychological peculiarity of an era in which we run simulations ourselves: in gamespace, in climate modelling, etc. etc. Just as waking dreams once conjured culturally specific incubi and succubi, now they conjure abducting aliens.

        If, in this environment, people were NOT thinking weird thoughts about it, that would be strange. But to decide that culturo-scientific musings are the universe talking to us is not just putting the cart before the horse; it is an act of enormous hubris.

        The universe not only has not noticed us rising apes, it has no mechanism to do so.

        1. Matthew Taylor

          Re: Deep learning?

          "If we're in a simulation where is the I/O bus? there must be one. A simulation has to run ON something and there must be information flow between them.We live in a universe with a speed limit, which means there must be lots of little local I/0 links. Where are they? why hasn't CERN seen signs?"

          Super Mario can only run at a certain maximum speed, so by that logic there must be lots of I/O links in his world. Why has he not yet discovered these links? Surely he must at least see a hint of them if he looks really hard.

        2. julianbr

          Re: Deep learning?

          The creator of the simulation has coded specifically for us, the simulated, to be unable to detect, by any means, the bounds or edges of the simulation. As such, we are only empowered to contemplate that such things may (or may not) exist; we have no power to prove our contemplated (un)reality.

          1. Muscleguy

            Re: Deep learning?

            Sophistry: easy to write, but prove it can be done. Also, if you have crippled your simulation that badly, then it will be crippled in other ways, and so what value does it have as a simulation?

            Then there's the Planck length, something the Silicon Valley billionaires who thought this up have not considered. They cited the 'photo-realism of games' as evidence, when in fact the effective Planck length in those games would be on the order of a centimetre or so in our world.

            No simulation can have a Planck Length smaller than or equal to the Planck Length of the universe it is being simulated in. Otherwise you are trying to compute with more information than your universe contains.

            So for every level of simulation (it was posited that it might be simulations ALL the way down, really) the Planck Length has to go up, significantly. Very significantly unless you are using a large proportion of the mass of your universe to run it on.
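
            A back-of-the-envelope version of that counting argument (a sketch of my own, not gospel, assuming the host's storage capacity scales with volume in Planck units):

            ```latex
            % Host devotes a region of size R (its Planck length \ell_P) to the
            % simulation: at most about (R/\ell_P)^3 storage cells are available.
            % Simulating a region of size L at effective resolution \ell_{sim}
            % needs (L/\ell_{sim})^3 cells, hence
            \left(\frac{L}{\ell_{\mathrm{sim}}}\right)^{3} \lesssim \left(\frac{R}{\ell_P}\right)^{3}
            \quad\Longrightarrow\quad
            \ell_{\mathrm{sim}} \gtrsim \ell_P \cdot \frac{L}{R}
            ```

            So unless the host devotes a region much larger than the one being simulated (R much greater than L), the effective Planck length grows at every level of nesting; the holographic bound (area rather than volume scaling) only tightens the squeeze.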

            The Planck length of this universe is very, very small. This very much limits the room for it to be a simulation, even without hand-waving stuff you cannot prove. Which is like when someone asked the Star Trek writers how some piece of sci-fi kit worked: 'very well' was the reply. I decline to suspend my disbelief for your piece of asserted sci-fi though.

            I'm only a mere Biology PhD, though mine is in Physiology, with Physics and Chemistry 101s a requirement, including equations and even algebra and calculus (biological things move and change), and I understand this stuff.

            1. Phil Bennett

              Re: Deep learning?

              You could argue that the existence of a Planck length is weak evidence that we're in a simulation - why would nature need to quantise everything, including distance and time, unless it was doing the equivalent of computing at a certain precision? Why isn't everything analog?

              The second point is that the people within the simulation can't see the outside universe, so what we think of as very small or very large might be a small fraction of the scales available to the outside. If their Planck length is ridiculously smaller, like 20 orders of magnitude, then running us as a simulation becomes much much easier.

              The third point is that the simulation doesn't have to run at or above real time - we're looking at simulating brains (I think from memory mouse brains?) but it'll run at 1% real time because we simply don't have enough compute available at the moment.

              The fourth is that you don't know the bounds of the simulation - it's almost certainly at least the size of the inner solar system, now we've got permanent satellites lurking around other planets and the Sun, but it would be pretty trivial to intercept e.g. Voyager and produce plausible radio waves from the edge. There would essentially be a screen around the simulation beyond which everything was roughly approximated - think of the draw distance in computer games.

              I don't personally believe we're in a simulation, if only because surely no ethics board would allow the creation of an entire civilisation of sentient beings capable of misery.

            2. Matthew Taylor

              Re: Deep learning?

              "No simulation can have a Planck Length smaller than or equal to the Planck Length of the universe it is being simulated in. Otherwise you are trying to compute with more information than your universe contains."

              You are assuming:

              1. that the universe is simulated at "Planck fidelity" throughout all of space-time. Depending on the simulation's purpose, that might well not be necessary.

              2. That "space" has the same meaning in the simulator's reality that it does in ours. For example, there may be many more dimensions.

  2. Lee D Silver badge

    What I've been saying for ages.

    What we have are complex expert models, built by simple heuristics on large data sets, providing statistical tricks which... sure, they have a use and a purpose, but it's not AI in any way, shape or form.

    Specifically, they lack insight into what the data means, any rationale for their decision, or any way to determine what the decision was even based on. If identifying images of bananas, it could just as easily be looking for >50% yellow pixels as for a curved line somewhere in the image. Until you know what it saw, why it thought it was a banana, and what assumptions it was making about the image and bananas in general (i.e. they're always yellow and unpeeled), you have no idea what it's going to do with random input, and no reasonable way to adjust its inputs (e.g. to teach a chess AI to play Go, etc.).
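
    To make the banana point concrete, here's a deliberately dumb sketch (toy code, illustrative only, not any real system): a "classifier" that just counts yellowish pixels, and is therefore fooled by a plain yellow square.

    ```python
    import numpy as np

    def naive_banana_detector(image: np.ndarray, threshold: float = 0.5) -> bool:
        """Call anything with enough yellowish pixels a 'banana'.

        image: H x W x 3 uint8 RGB array. A pixel counts as yellow
        when red and green are high and blue is low.
        """
        r = image[..., 0].astype(int)
        g = image[..., 1].astype(int)
        b = image[..., 2].astype(int)
        yellow = (r > 150) & (g > 150) & (b < 100)
        return yellow.mean() > threshold

    # A plain yellow square scores as well as any banana photo:
    fake = np.zeros((32, 32, 3), dtype=np.uint8)
    fake[..., 0] = 255   # red channel high
    fake[..., 1] = 255   # green channel high -> pure yellow
    print(naive_banana_detector(fake))   # True, and no banana in sight
    ```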

    This isn't intelligence, artificial or otherwise. It's just statistics. Any sufficiently advanced technology is indistinguishable from both magic and bull. In this case it's bull.

    The scary thing: people are building and certifying cars to run on the roads around small children using these things, and yet we don't have a data set that we can give them (unless someone has a pile of "child run under car" sensor data from millions of such real incidents), nor do we have any idea what they are actually reacting to in any data set that we do give them. For all we know, it could just be blindly following the white line and would be happy to veer off Road-Runner style if Wile E Coyote were to draw a white line into a sheer cliff in a certain way.

    We don't have AI. We're decades away from AI. And this intermediate stuff is dangerous because we're assuming it is actually intelligent rather than just "what we already had, with some faster, more parallel computers under it".

    1. Anonymous Coward
      Anonymous Coward

      Great comments!

      And yes, someone has already done that! https://www.vice.com/en_us/article/ywwba5/meet-the-artist-using-ritual-magic-to-trap-self-driving-cars

      (Though, as it's an art project, I don't know if they actually tested the software?)

      1. Anonymous Coward
        Anonymous Coward

        Re: Great comments!

        Magic: actually lifehacks from the future, sent accidentally to the past.

        1. amanfromMars 1 Silver badge

          Re: Great comments! Seconded!

          Magic: actually lifehacks from the future, sent accidentally to the past. ..... Anonymous Coward

          Oh? Accidentally, AC?

          Are you sure? Absolutely positive about that?

          There are certainly A.N.Others who would fail to agree and would be able to Offer a Different Discourse, and Not By Accident.

    2. colinb

      Seems clear, refuse to use it if that's what you believe

      So if on the NHS you, or any of your family, get offered a system called Ultromics to review your cardiovascular health, you will of course refuse, point blank, because 'it's dangerous' and 'bull'?

      It uses machine learning (a form of AI, as per their press release) to review ultrasound heart scans and, while currently going through peer review, looks to "greatly outperform ... heart specialists" who would review those scans. UK Tech:

      http://www.bbc.co.uk/news/health-42357257

      A friend of mine had a heart attack Tuesday so personally I feel this s**t needs rolling out as fast as it possibly can be.

      A.I. is a misused label, so what.

      1. JellyBean

        Re: Seems clear, refuse to use it if that's what you believe

        > "So if on the NHS you, or any of your family, get offered a system called Ultromics to review your cardiovascular health you will of course refuse, point blank, because 'its dangerous' and 'bull'?"

        colinb: you sound a bit hysterical. After using my "deep learning", you are in danger of blowing a gasket.

        [ from the article in question ]

        "Humans, as they read texts, frequently derive wide-ranging inferences that are both novel and only implicitly licensed, as when they, for example, ..." (edit) - read colinb's comments. =)

        > "It uses machine learning (a form of AI as per their press release) to review ultrasound heart scans and while currently going through peer review looks to "greatly outperform ... heart specialists" who would review those scans. UK Tech:"

        http://www.bbc.co.uk/news/health-42357257

        What a wonderful tool to have and use. (I did read your link.) But Ultromics' workings do not relate to the problems discussed in the article. The article states that narrowly confined and focused AI performs very well.

        > "A friend of mine had a heart attack Tuesday so personally I feel this s**t needs rolling out as fast as it possibly can be."

        All the best to your friend.

        > "A.I. is a misused label, so what."

        It certainly is uncontaminated by cheese.

        p.s. I will remember your friend in my prayers.

        1. Lee D Silver badge

          Re: Seems clear, refuse to use it if that's what you believe

          Would I take the advice of an AI over a doctor's interpretation of the same result?

          No.

          P.S. For many years I was living with a geneticist who worked in a famous London children's hospital but has also handled vast portions of London's cancer and genetic disease lab-work. Pretty much, if you've had a cancer diagnosis (positive or negative) or a genetic test, there's a good chance the sample passed through her lab and/or she's the one who signed the result and gave it back to the doctor / surgeon to act upon. Doctors DEFER to her for the correct result.

          Genetics is one of those things that's increasingly automated, machinified, AI pattern-recognition, etc. nowadays. Many of her friends worked in that field for PhDs in medical imaging, etc. It takes an expert to spot an out-of-place chromosome, or even identify them properly. Those pretty sheets you see of little lines lined up aren't the full story you think they are. She has papers published in her name about a particular technique for doing exactly that kind of thing.

          The machines that are starting to appear in less fortunate areas to do that same job (i.e. where they can't source the expertise, let alone afford it)? All have their results verified by a human capable of doing the same job. The machines are often wrong. They are used to save time preparing the samples etc. rather than actually determining the diagnosis (i.e. cancerous cell or not, inherent genetic defect or not, etc.), and you can't just pluck the result out of the machine and believe it to be true; you would literally kill people by doing that. Pretty much, the machine that could in theory "replace" her costs several million pounds plus ongoing maintenance, isn't as reliable, and needs to be human-verified anyway.

          So...er... no. A diagnostic tool is great. But there's not a chance in hell that I'd let an AI make any kind of medical diagnosis or decision that wasn't verified by an expert familiar with the field, techniques, shortcomings and able to manually perform the same procedure if in doubt (hint: Yes, often she just runs the tests herself again manually to confirm, especially if they are borderline, rare or unusual).

          If one of London's biggest hospitals, serving lab-work for millions of patients, with one of the country's best-funded charities behind it still employs a person to double-check the machine, you can be sure it's not as simple as you make out.

          Last time they looked at "upgrading", it was literally in the millions of pounds for a unit that couldn't run as many tests, as quickly, as accurately, wasn't able to actually sign off on anything with any certainty, was inherently fragile and expensive to repair, and included so many powerful computers inside it I could run a large business from it. You can put all the AI into it that you want. It's still just a diagnostic tool. The day my doctor just says "Ah, well, the lab computer says you'll be fine" is the day I start paying for private healthcare.

          Computers are tools. AI is an unreliable tool.

          1. Muscleguy

            Re: Seems clear, refuse to use it if that's what you believe

            Depends. A lot of medics make statistical errors of the sort 'it is unlikely you have X because you are too young, too old, the wrong sex/race/culture, etc., so I don't have to test for it despite the symptoms'. Myself and various family members have been victims of this and been proved right in the end with good old-fashioned middle-class educated persistence.

            Just because I/you are at or towards one end of the normal distribution of disease incidence, that does not mean I CANNOT have disease/condition X. If my symptoms are entirely consistent with that diagnosis then it should be tested for. It seems young women are very badly served by this common error.

            If the AI doesn't make those errors then I'm all for it.

            Doctors seem to be good at finding post hoc 'reasons' to subvert the diagnostic heuristic tree. When you add in GP practice funds it gets pernicious.

    3. Anonymous Coward
      Anonymous Coward

      Finally!

      It is nice to see the Reg finally writing a realistic article on "AI" and covering the points many of us try to make in the comments, instead of believing the hype from Google or the doom from Musk!

      1. Andrew Orlowski (Written by Reg staff)

        Re: "range"?

        You must be new here, Doug.

        https://www.theregister.co.uk/2017/01/02/ai_was_the_fake_news_of_2016/

        https://www.theregister.co.uk/2017/10/11/el_reg_meets_the_lords_to_puncture_the_aipocalypse/

        But thanks!

    4. Lomax

      Hear, hear. I often argue that the big risk with "AI research" is not that we will somehow by accident create a "super AI" which takes over the world and enslaves us all as lanthanide miners, but that we will attribute "intelligence" to systems which are anything but, and hand over control of essential infrastructure to algorithms which are in fact incompetent. Human history, it would seem, is littered with examples of similar hubris. And investor-hyped belief in the superiority of algorithms carries an even greater potential risk: that we will start to shape society, and ourselves, to fit their narrow and unimaginative conclusions. Some might say this is already happening.

  3. Anonymous Coward
    Anonymous Coward

    The problem is..... Try telling any of that to:

    Wall Street / Silicon Valley / Big Media and Bitcoin chasing elites...

    Apart from today (Intel), guessing few of those elites read El Reg.

    Or they just skip over articles like this one that came before today:

    ~~~

    https://www.theregister.co.uk/2018/01/03/fooling_image_recognition_software/

  4. Adair Silver badge

    'Skynet it ain't: Deep learning will not evolve into true AI, says boffin' - well who'd a thunk it?

    'AI', one of the great hypgasms of the early 21stC.

    When a putative 'AI' can decide it 'can't be arsed' to do what it's told, can put 'moral sensibility' ahead of 'empirical determinism', and can generally be awkward, then I may begin to be impressed.

    1. Pascal Monett Silver badge

      Totally agree.

      We'll have AI the day we ask a question and it answers it can't be arsed to care.

      What we'll do with it then is another issue.

      1. I ain't Spartacus Gold badge

        We'll have AI the day we ask a question and it answers it can't be arsed to care.

        So you're saying that printers achieved AI decades ago. They've simply been too smart to let us find out and do even more horrible things to them than we already want to do to printers...

    2. Naselus

      "'AI', one of the great hypgasms of the early 21stC."

      And the late 20th. And the mid 20th, too.

      Basically, every 30 years we have this huge hyperventilation over the latest tiny incremental trend in AI research (LISP machines, anyone?), AI researchers don't do enough to manage the public's expectations, and then when they fail to produce a fully self-aware robot who can dance the fandango while asking "what is this human thing you call 'love'?" within 18 months, the sector collapses and the funding dries up for the following 20 years.

      Wait and see what happens in the 2040s, I'm guessing.

      1. amanfromMars 1 Silver badge

        AIMessages from Heaven? :-) Or Secret Intelligence Services at Their/Our Work For You?

        and then when they fail to produce a fully self-aware robot who can dance the fandango while asking "what is this human thing you call 'love'?" within 18 months, .... Naselus

        Are El Regers up for accepting and fulfilling that Immaculate Challenge ....... with Live Operational Virtual Environment Vehicles at the Disposal of Advanced AI Researchers ...... who be Knights of the Virtual Kingdom.

        What have you to lose? Not even your shirt is at risk?:-)

      2. a_yank_lurker

        @Naselus - About once a generation, a new set of wide-eyed, sci-fi-enthralled groups gets the AI religion. It lasts a few years as they hype some trivial exploit as meaning AI is just around the corner. Sort of sounds like fusion research.

        1. I ain't Spartacus Gold badge

          Yeah, but in fusion research we have many groups using different methods who regularly achieve actual fusion. OK, it might only last for microseconds, and it currently uses more energy than it puts out - but the point is that they can point to success and claim that all they need to do is refine the process.

          Whereas we've currently observed one type of natural intelligence, and still don't even know how that works. Meanwhile we're busily trying to replicate it, using a completely different set of physical mechanisms.

          So given that fusion is just 20 years away (and has been for 40 years), how far are we from working AI?

      3. DrBobK

        The thing that is so odd about the current Deep Learning hype is that it is the *same* AI that was being hyped 40 years ago (ok, actually 30 years ago, back-propagation of error through nets with hidden layers), just with more raw computing power and bigger datasets.
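
        For the curious, that 1980s recipe still fits in a screenful. A minimal sketch (toy code, not any particular paper's implementation): back-propagation of error through one hidden layer, learning XOR.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

        W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # hidden layer
        W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # output layer
        lr = 0.5

        for _ in range(5000):
            h = sigmoid(X @ W1 + b1)              # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = out - y                       # cross-entropy gradient at the output
            d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back through the hidden layer
            W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

        print(out.round(2).ravel())   # converges towards [0, 1, 1, 0]
        ```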

  5. Triumphantape

    Wait...

    "Neural networks sort stuff – they can't reason or infer"

    That sounds a lot like most of the humans I see.

  6. JimmyPage
    Stop

    This is news ?

    I have said, and will continue to say ...

    If Google (for example) *are* developing "AI", then they are keeping it a long looooong way from their search engine.

    Bear in mind that almost the first thing I would do with real "AI" is to train it to zap adverts and other unwanted cruft.

    1. Steve Knox

      Re: This is news ?

      If Google (for example) *are* developing "AI", then they are keeping it a long looooong way from their search engine.

      Of course they are. They're optimizing their search engine for the average user. Artificial Stupidity is much more relevant for that use case.

  7. aenikata

    Current systems, perhaps

    The public does need to understand the difference between a sophisticated but specific AI and the concept of General AI. Work on the latter is currently very limited, although there are researchers looking specifically at it, in projects such as OpenWorm, which aims to simulate a nematode worm.

    However, it may be that a more general intelligence actually doesn't act in this way. Some of the more sophisticated systems use a blackboard approach where discrete subsystems process some data and return the results to a shared space where other elements can then operate on it. Games-playing systems may be added into such a blackboard, picking up data from other systems. Creation of a more general intelligence may involve some kind of overall prioritisation system that selects which systems to run, chooses (perhaps with some randomness) which of the tasks or goals to pursue out of the ones available, and simply aims to maximise its score overall. Learning wouldn't necessarily involve researchers, there could be sharing of successful networks. While a network that can play Go isn't directly useful for playing Chess, there may be scenarios where parts of a network can be re-used - this is known as Transfer Learning. A sophisticated system could try to identify networks which might be similar to a new task and try various networks that take some of the deeper elements of the other network as a starting point - it wouldn't necessarily be 'good' immediately, but it may have some ability to recognise common patterns shared with the existing tasks it can do.
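
    A minimal sketch of that transfer-learning idea (assuming PyTorch with torchvision 0.13 or later; ResNet-18 and the ten-class head are just illustrative choices): reuse the deeper feature-extraction layers of a network trained on one task as the starting point for another, and train only a new head.

    ```python
    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    # Take a network trained on one task (ImageNet classification) and
    # reuse its deeper feature-extraction layers for a new task.
    base = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    for p in base.parameters():
        p.requires_grad = False                   # freeze the transferred layers

    base.fc = nn.Linear(base.fc.in_features, 10)  # fresh head for the new task

    # Only the new head is trained; the reused layers supply the shared
    # "deeper elements" described above.
    optimizer = optim.Adam(base.fc.parameters(), lr=1e-3)
    ```

    As described above, the result won't be 'good' at the new task immediately, but it starts from features that already capture common patterns.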

    These wouldn't necessarily be 'intelligent' in the sense that some people think, but such a system could potentially transfer what it knows to related subjects, have likes and dislikes (in terms of what it has given a higher scoring to from previous success) and could communicate with other such systems to share and improve its knowledge, and you're then heading a long way towards a system that could interact in a manner that seems increasingly intelligent. After all, if it can recognise people, talk, understand enough of language to at least beat a young child (it can be useful while still naive in its understanding), recognise emotions, play a range of games, learn new things and express its own preferences, how general does the intelligence need to be?

  8. Anonymous Coward
    Anonymous Coward

    I got as far as

    The same skills learnt from one game can't be transferred to another.

    Transfer learning has been a thing since 1993. The way things are going, I give it five years to get the first automated demonstration.

    1. Charles 9

      Re: I got as far as

      Part of the intelligence problem is that we're not ourselves fully aware of how we think. For example, we haven't much insight into subconscious concepts like intuition, which figures into things like driving where we can sense something coming without consciously thinking about it. We can't teach what we ourselves don't understand.

      1. Anonymous Coward
        Anonymous Coward

        Re: I got as far as

        "We can't teach what we ourselves don't understand."

        Hasn't stopped the last few generations of teachers and management consultants.

        (Anon because I work in a school).

      2. Anonymous Coward
        Anonymous Coward

        Re: I got as far as

        I'd say that an even bigger problem is that we don't actually think in as much detail as we think we do.

        1. Anonymous Coward
          Anonymous Coward

          Re: I got as far as

          "I'd say that an even bigger problem is that we don't actually think in as much detail as we think we do."

          Oliver Sacks wrote about some "autistic" people who could draw very detailed scenes from memory after only a short exposure. The implication was that our minds remember far more detail than that of which we are conscious.

          The rub there is "conscious". Too much access to detail by our conscious mind would give information overload. It is probable that unconscious "thinking" is using that data to influence our conscious mind.

          How many times do you say "I had forgotten I knew that" - but only after you have surprised yourself by factoring in something you had forgotten you once knew.

          It has been said that usually we don't seem to be able to remember much that happened to us before about the age of 15. When people reach extreme old age they apparently can get crystal clear recall of early memories - even if their short term memory doesn't exceed a few minutes.

          1. Charles 9

            Re: I got as far as

            So therein lies the rub. We can't teach a computer how to reason, infer, and draw on relatively obscure things when we don't even know how we ourselves do it. What's the specific process by which our brains identify stuff, make not-so-obvious observations, reason, infer, etc.?

      3. 's water music

        Re: I got as far as

        Part of the intelligence problem is that we're not ourselves fully aware of how we think

        The secret of intelligence? Post facto rationalisation of what transpired

        1. amanfromMars 1 Silver badge

          Re: I got as far as

          The secret of intelligence? Post facto rationalisation of what transpired ... 's water music

          That's a recipe for CHAOS, 's water music, and we can do much better with all of that.

          But leaping further into the future discovers you Clouds Hosting Advanced Operating Systems ..... with Wonders to Share and Magnificent to Behold.

          I've got so far ..... and quite whether I would quickly, or even ever choose to move on to Elsewhere with Lesser Wonders, is a sweet muse to feed and savour, seed and flavour.

  9. Herbert Meyer
    Boffin

    old result

    Minsky and Papert wrote a book about this, a long time ago:

    https://books.google.com/books?hl=en&lr=&id=PLQ5DwAAQBAJ&oi=fnd&pg=PR5&dq=Minsky+and+S.+Papert.+Perceptrons&ots=zyDCuJuq23&sig=g6U9pngheQkbaRqqFiyPRgWbtBA#v=onepage&q=Minsky%20and%20S.%20Papert.%20Perceptrons&f=false

    Nobody read or understood it then.

    1. Destroy All Monsters Silver badge
      Headmaster

      Re: old result

      Nobody read or understood it then.

      Apparently nobody does now.

      Perceptrons are one-layer neural networks. They are irrelevant to Deep Learning, which uses very deep neural networks with bells and whistles of all kinds.

      Back when I was in school, people were well informed about the problem with perceptrons. They were used as simple models to teach students. Everyone including Pinky and the Brain was working on 3-layer NNs, and possibly looking at Boltzmann machines, while the first NN chips were being talked about in BYTE and IEEE Micro.
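
      The problem in question, as a toy sketch (illustrative, not taken from the book): the perceptron learning rule never settles on XOR, because no single linear threshold separates it.

      ```python
      import numpy as np

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
      y = np.array([0, 1, 1, 0])     # XOR: not linearly separable

      w = np.zeros(2); b = 0.0
      for _ in range(1000):          # perceptron learning rule
          for xi, ti in zip(X, y):
              pred = int(xi @ w + b > 0)
              w += (ti - pred) * xi
              b += (ti - pred)

      print([int(xi @ w + b > 0) for xi in X])   # never equals [0, 1, 1, 0]
      ```

      Add a hidden layer (see the backprop sketch further up) and the problem disappears, which is exactly why the 3-layer work continued.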

      1. Rosco

        Re: old result

        I don't think you've understood Fodor & Pylyshyn's argument.

        Their argument is that cognition operates at a higher level of organisation than the physical substrate. True cognition involves generating internally consistent symbolic representations of causal relationships and ANNs on their own aren't capable of that. They - like all approaches to AI so far - must have problem-space representations baked into them before they can generate solutions.

        I'm not saying they were right, by the way. I'm just saying that simply adding more hidden layers or using a convolutional training algorithm doesn't go any distance towards invalidating their rather deep philosophical argument because those techniques don't add causal symbolic processing. It's not clear what would add symbolic processing to a neural network, although it is clear that nature has found a way at least once.

        1. Rosco

          Re: old result

          Whoops!

          I moved quickly from Minsky & Papert's Perceptron paper to Fodor & Pylyshyn's work but forgot that you hadn't. Sorry.

          (Fodor & Pylyshyn's work is probably more relevant to this article than Minsky & Papert's, mind, so I'll leave my comment in place.)

  10. Anonymous Coward
    Anonymous Coward

    The late great Dr. Christopher Evans ...

    covered AI in wonderful layman-friendly detail in "The Mighty Micro".

    Not much has changed since 1979.

  11. Rebel Science

    Deep learning must be discarded like yesterday's garbage in order to solve AGI

    A number of us have been saying this for many years. But the AI community, like all scientific fields, is extremely political. Only the famous leaders have influence, even if they are clueless.

    Why Deep Learning Is a Hindrance to Progress Toward True AI

  12. amanfromMars 1 Silver badge
    Mushroom

    Surely you didn't think you'd been left here all alone?

    but they won't take us all the way to artificial general intelligence, according to a recent academic assessment.

    And that would be because of A.N.Other Human IT AI Intervention? A Simple Advanced IntelAIgent Future FailSafed DeProgramming of Sublime Assets for ESPecial ReProgramming in Quantum Communications Channels which at their Best both Provide and Protect, Mentor and Monitor Heavenly Streams of Augmented Future Virtual Realities ...... for Out of This World Worldly Global Presentation?

    And the Intervention works wonderfully well, with All Deliveries Cast Iron Guaranteed to not Implode or Explode upon Epic Fails and Failure to Best Use Provided Prime Assets. And you won't get many offers as good as that today, that's for sure, Amigo/Amiga.

  13. Dinkrex

    Finally!

    Someone finally said it. What is called 'AI' today is not AI, not even weak AI. I know why it's so prevalent, though: AI researchers don't want to repeat the over-promising that led to the last two "AI winters", but they're leaving the door open to different over-promising by corporations who want to turn it into a buzzword and by media who want a soundbite.

  14. John Smith 19 Gold badge
    Coat

    "deep learning teaches computers how to map inputs to the correct outputs. "

    Or as humans call it "Growing up."

    Because when you take a human brain apart, what do you find?

    Multiple highly interconnected layers of neurons (up to 10,000 to 1), loosely connected to other sections of multiple highly interconnected neural layers.

    Everything else is built on top of that hardware.

    Which leaves 2 questions.

    Are humans as un-"intelligent" as existing multi-layer NN systems, but we're too stupid to recognise it? And

    If not, why are existing "deep learning" systems so s**t?

    1. Anonymous Coward
      Anonymous Coward

      Re: "deep learning teaches computers how to map inputs to the correct outputs. "

      ‘Mapping’ technique finds links between brain connections and intelligence.

    2. grumpy-old-person

      Re: "deep learning teaches computers how to map inputs to the correct outputs. "

      Apparently neurons make up only 10% of the brain with glial cells of many kinds making up the remainder.

      90% of brain tissue is padding?

      How about glial networks?

  15. Rebel Science

    Deep learning is not and has never been intelligent

    Deep learning systems have no idea what they're seeing; we must give them a label, and even then they still have no clue. An adult human brain can instantly see and interact with a completely new object or pattern it has never seen before, and it can do so from different perspectives. A DNN, by contrast, must be given hundreds if not thousands of samples of the object in order to detect it properly in an invariant manner. And you still have to give it a label.

  16. Anonymous Coward
    Anonymous Coward

    68 million matches

    "That's far above what any human professional will play in a lifetime."

    Why is that relevant? If the AI can achieve the same result as a human regardless of whether it has to learn something 68 trillion, 68 million or 68 times, it's the outcome that matters.

    1. Charles 9

      Re: 68 million matches

      Because the REAL purpose isn't the destination but the journey. Take this: why does it take a computer millions of simulated matches to match wits with someone like Kasparov, who could only have played tens or hundreds of matches, tops? How come newborn babies too young to be taught in the usual way can nonetheless identify differences and certain abstract concepts no computer can distinguish?

      1. Anonymous Coward
        Anonymous Coward

        Re: 68 million matches

        The journey? Are you serious?

        Whilst that may be absolutely fascinating to the engineers and techies on this site, Joe Bloggs is only interested in the destination, i.e. whether the result is better or worse than a human could achieve. He really doesn't care about the detail when compared to yesterday's technology, e.g. smartphones or cars.

        Your justification implies that you think there is only one way to learn, and that if the AI doesn't conform to the "human" way, then it doesn't count. I would suggest being a little more open-minded.

        1. Charles 9

          Re: 68 million matches

          And that's precisely part of the problem. People want results, not realizing that the route to get there can be as important as the end result; otherwise, you can inadvertently end up with a one-trick pony and find yourself screwed when a slightly different problem comes along. As noted, a computer trained to play chess would have a hard time playing Go (because chess is a game of movement and Go is a game of placement, there are significant differences in strategy), because it can't figure out which things it learned from chess actually apply well to Go.

        2. John Smith 19 Gold badge
          Unhappy

          "Your justification implies that you think there is only one way to learn "

          Wrong.

          What he's saying is that it's clear this thing does not learn as a human does. This implies

          a) It's a very poor model of how humans think.

          b) The conclusions it reaches from the same data may be completely different from a human's.

          So the article looks right. This stuff won't turn into general, have-a-conversation-with-it-that-makes-sense AI.

          BTW, people who post AC with highly argumentative views tend to look like astroturfing marketroids on a retainer.

  17. Jim Birch

    Has anyone noticed that the brain is a neural network? It's obviously a question of scale and architectural complexity. Our brains have the advantage of 500 million years of evolution and of being targeted at all the problems the organism encountered, but the disadvantage of being developed by a slow, random, stepwise process. It won't take artificial neural networks 500 million years to catch up, but when they do, the architecture will have advanced. Compare a current multicore SoC to an Intel 4004 chip (etc) and use your imagination.

    1. Dinkrex

      Our brains are only like neural networks in the most superficial ways. They're only called neural networks because of the way neurons look, not the way they function. Neurons alter their chemical composition for every process; we don't have one neuron for each memory, thought or instinct. Neural networks are not capable of altering themselves beyond tacking on more branches, and even that is severely limited by what is already in the network.

      This touches on one of the key reasons why neural networks are not AI. If a net receives input it can't identify, it just throws up an error, ideally using it to improve its detection capabilities for next time. An AI needs the ability to identify something it hasn't seen before, even if only to flag it as an unknown entity to be corroborated with any future examples. A neural net is only designed to identify the thing it's made to work on, so it just throws information on new items away in favour of improving its systems.
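
      A common workaround, sketched below (toy code; it does not solve the deeper open-set problem, since the net can still be confidently wrong on genuinely novel inputs): reject anything whose top softmax probability falls below a threshold.

      ```python
      import numpy as np

      def softmax(logits: np.ndarray) -> np.ndarray:
          z = np.exp(logits - logits.max())
          return z / z.sum()

      def classify_or_reject(logits, labels, threshold=0.8):
          """Return a label, or 'unknown' when the network isn't confident."""
          p = softmax(logits)
          i = int(p.argmax())
          return labels[i] if p[i] >= threshold else "unknown"

      labels = ["cat", "dog", "banana"]
      print(classify_or_reject(np.array([4.0, 0.5, 0.2]), labels))  # 'cat'
      print(classify_or_reject(np.array([1.1, 1.0, 0.9]), labels))  # 'unknown'
      ```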

    2. AdamWill

      definitely not, no.

      "Has anyone noticed that the brain is a neural network?"

      No. No-one has noticed that at all. It's not like that's more or less what the name *literally means* or anything. Congratulations. You're the first. Here. Have a cookie.

  18. Anonymous Coward
    Anonymous Coward

    Are they confusing AI with sentience?

    Or are they saying the AI has to conform to human understanding of intelligence before they call it AI?

    If so, that's somewhat narrow minded and unimaginative.

  19. Britt Johnston
    Go

    we will know when it happens

    I'll recognise true AI when there is a job type to reverse-engineer a better expert system and dig out the factors that we hadn't considered before.

    In the meantime, I'm waiting for someone to hash the Amazon and Facebook databases to provide more ideas for Christmas gifts, and to remind you that tomorrow is last orders for timely delivery.

  20. AdamWill
    Joke

    welp

    "It may be selecting to move a pawn, or knight, or queen across the board, but it doesn't learn the logical and strategic thinking useful for Go."

    It sounds like the author is about as good at learning the actual rules of games as AI is, since you're not going to get very far in Go if you try to "move a pawn, or knight, or queen across the board"...:P

  21. SeanC4S

    Uber have started to use evolutionary algorithms to train neural networks, which allows them to go beyond the restrictions imposed by backpropagation. Basically, unsupervised and reinforcement learning become much more possible, and all you ever need is a Boolean good/bad signal to the system.

    Gary Marcus doesn't know what's going on. Repeat 3 times.
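
    A minimal sketch of the idea (toy code, not Uber's actual work; theirs is the "Deep Neuroevolution" line of research): a (1+1) evolution strategy that improves a weight vector using nothing but a scalar better/worse signal, no gradients and no backpropagation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    TARGET = np.linspace(-1.0, 1.0, 16)   # hidden optimum, a stand-in for "good behaviour"

    def fitness(w: np.ndarray) -> float:
        """The only feedback: a scalar score, higher is better. No gradients."""
        return -float(np.sum((w - TARGET) ** 2))

    w = rng.normal(size=16)               # stand-in for a network's weight vector
    sigma = 0.1                           # mutation step size
    for _ in range(2000):
        child = w + sigma * rng.normal(size=w.size)   # mutate
        if fitness(child) >= fitness(w):              # select: keep the child if no worse
            w = child

    print(round(fitness(w), 3))           # climbs towards 0, the optimum
    ```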

  22. Anonymous Coward
    Anonymous Coward

    ah, smoke and mirrors. got it.

  23. Milton

    I don't like to say—

    —I told you so. That said, most Reg readers didn't need to be told this, I admit. Unlike the general tabloid-wiping population, technologists have a healthy scepticism about claims for "AI", as should anyone who's ever considered what a Turing test really tries to elicit, or indeed, who has ever considered what intelligence really is.

    "AI" is not here, not in any form worth the "I" part of the acronym, and only marketurds, fathead politicians (well, *all* morons, as a supergroup) and some lazy journos believe otherwise. "AI" is not going to be here for at least another two or three decades at the soonest, I would suggest, because even if we could solve the problem of simulating the number and type of neuronal connections in a human brain—monumentally difficult all by itself—we have barely begun to appreciate how biological implementation may be critically different from the electronic kind, and have gone basically nowhere in synthesising an understanding of motivation, agency and emotion in a machine context.

    Here's a stick-me-neck-out prediction. First, uncontroversially I think, within ten years, computing platforms will exist which seek to pass a (video) face-to-face voice-to-voice Turing-type test when challenged by a reasonably well-educated normal human being. The 'Uncanny Valley' problem of human face simulation will have been solved and vocal intonation, use of grammar etc will also be good enough to be convincing. The sheer quantity of data and pattern recognition resources available to this "AI" will make it seem awesomely well informed.

    And my prediction? I predict that as soon as our "AI" goes online, there will be competition to see who can devise the neatest, simplest, most elegant ways to expose it most quickly. There will be burning rivalry among those who seek kudos for formulating the fastest ways of fooling the computer—the questions, answers, statements, requests, digressions, lies, emotional cues and responses which most rapidly reveal the machine behind the curtain. Think of them as "filters" which briskly separate a definite human from a definite fraud.

    And for many years, I submit, the best filters will always succeed within 60 seconds. I wouldn't be particularly surprised to find that we'll still be concocting them in 50 years' time. Who knows, we may even call them the Voigt-Kampff Test ....

  24. jackharbringer

    Still Useful

    Please don't overreact and think that machine learning, neural networks and the current "AI" crop of solutions are all bull. That is also not what the article is driving at:

    1) Current "AI" is not general AI and probably shouldn't be given the label Artificial Intelligence, as it is misleading; it will probably never achieve what we think of as AI.

    2) Neural networks can be extremely powerful tools for solving some types of problems.

    There's even room for research in both areas. Research into machine learning is not fake or disingenuous just because it doesn't achieve general AI.

    Yes, there is a lot of smoke and mirrors and people trying to make money out of a hype cycle; that's what the industry does. Yes, it's based on research many of us have heard about before, with some incremental twists. But don't throw the baby out with the bathwater. Neural networks and machine learning are here to stay, and I think that's a good thing. We will be able to solve many problems we couldn't up until now. We just have to learn where and when to use the new tools, and when not to.

    1. Anonymous Coward
      Anonymous Coward

      Re: Still Useful

      Exactly. Many on this site won't accept something is "AI" unless it can replace all human abilities as a single entity. Whilst that won't happen any time soon, AI will be able to outperform humans at many functions in the not too distant future, probably based on repetitive learning, self-tuition and communication with other AIs.

      And that makes many people very defensive. Especially when people like Stephen Hawking state that AI could be an existential threat to humanity. Denial is a method often used to avoid facing an unwelcome reality rather than dealing with the issue.

      1. Charles 9

        Re: Still Useful

        Because when the issue becomes something as simple as, "You're obsolete. Game Over. No Continues," "dealing with the issue" as you put it is not possible as that means going against the survival instinct.

        IOW, what you describe gets dangerously close to Butlerian Jihad territory.

  25. StuntMisanthrope

    Its me noggin not me peepers.

    I'm starting to think that there's no such thing as consciousness; it's merely a trick of the imagination, or a combination of learnt behaviour and stimulus. For example, I could program a text conversation such that a close family member outside of computing would not be aware they're talking to a machine. Then there's another friend, who recently lost an eye to a tumour: does he still dream stereoscopically? #answersonapostcard
