Skynet it ain't: Deep learning will not evolve into true AI, says boffin

Deep learning and neural networks may have benefited from huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment. Gary Marcus, ex-director of Uber's AI labs and a psychology professor at New York University, argues …

  1. Locky
    Terminator

    Deep learning?

    Don't talk to me about deep learning

    1. Anonymous Coward
      Anonymous Coward

      Re: Deep learning?

      The first rule of Deep Learning is: Don't talk about Deep Learning?

      Here's one for the anthropic principled ones if they want to burrow deeper into the rabbit hole: Why are we living at exactly the epoch where we are working on AI and General AI emergence seems possible, maybe by some research group doing that one weird trick with large amounts of Google hardware crap?

      Could it be that we are being simulated by a Brobdingnagian AI running a large quantum computer that wants to experience how all this existential bullshit that it has to put up with every. single. second. came to pass?

      (Update: Googling reveals there is the idea of Roko's Basilisk floating around ... I kekked. Humans really are crazy.)

      1. Muscleguy

        Re: Deep learning?

        If we're in a simulation, where is the I/O bus? There must be one. A simulation has to run ON something, and there must be information flow between the two.

        We live in a universe with a speed limit, which means there must be lots of little local I/O links. Where are they? Why hasn't CERN seen signs?

        Simulation angst is just a psychological peculiarity of the fact that we ourselves run simulations, in gamespace, in climate modelling, etc. Just as in the past waking dreams conjured culturally specific incubi and succubi, now they conjure abducting aliens.

        If, in this environment, people were NOT thinking weird thoughts about it, that would be strange. To decide that culturo-scientific musings are the universe talking to us is not just to put the cart before the horse but an act of enormous hubris.

        The universe not only has not noticed us rising apes, it has no mechanism to do so.

        1. Matthew Taylor

          Re: Deep learning?

          "If we're in a simulation, where is the I/O bus? There must be one. A simulation has to run ON something, and there must be information flow between the two. We live in a universe with a speed limit, which means there must be lots of little local I/O links. Where are they? Why hasn't CERN seen signs?"

          Super Mario can only run at a certain maximum speed, so therefore there must be lots of I/O links in his world. Why has he not yet discovered these links? Surely he must at least see a hint of them if he looks really hard.

        2. julianbr

          Re: Deep learning?

          The creator of the simulation has coded specifically for us, the simulated, to be unable to detect, by any means, the bounds or edges of the simulation. As such, we are only empowered to contemplate that such things may (or may not) exist, but we have no power to prove our contemplated (un)reality.

          1. Muscleguy

            Re: Deep learning?

            Sophistry: easy to write, but prove it can be done. Also, if you have crippled your simulation that badly then it will be crippled in other ways, and so what value does it have as a simulation?

            Then there's the Planck Length, something these Silicon Valley billionaires who thought this up have not thought about. They cited the 'photo-realism of games' as evidence, when in fact the effective Planck length in those games would be on the order of a cm or so in our world.

            No simulation can have a Planck Length smaller than or equal to the Planck Length of the universe it is being simulated in. Otherwise you are trying to compute with more information than your universe contains.

            So for every level of simulation (it was posited that it might be simulations ALL the way down, really) the Planck Length has to go up, significantly. Very significantly unless you are using a large proportion of the mass of your universe to run it on.

            The Planck length of this universe is very, very small. This very much limits the room for it to be a simulation, even without hand-waving stuff you cannot prove. Which is like when someone asked the Star Trek writers how some piece of sci-fi kit worked: 'very well' was the reply. I decline to suspend my disbelief for your piece of asserted sci-fi though.

            I'm only a mere Biology PhD, though mine is in Physiology, with Physics and Chemistry knowledge and 101s a requirement, including equations and even algebra and calculus (biological things move and change), and I understand this stuff.

            1. Phil Bennett

              Re: Deep learning?

              You could argue that the existence of a Planck length is weak evidence that we're in a simulation - why would nature need to quantise everything, including distance and time, unless it was doing the equivalent of computing at a certain precision? Why isn't everything analog?

              The second point is that the people within the simulation can't see the outside universe, so what we think of as very small or very large might be a small fraction of the scales available to the outside. If their Planck length is ridiculously smaller, say by 20 orders of magnitude, then running us as a simulation becomes much, much easier.

              The third point is that the simulation doesn't have to run at or above real time - we're looking at simulating brains (mouse brains, if I remember rightly), but they'll run at 1% of real time because we simply don't have enough compute available at the moment.

              The fourth is that you don't know the bounds of the simulation - it's almost certainly the size of the inner solar system now that we've got permanent satellites lurking around other planets and the sun, but it would be pretty trivial to intercept e.g. Voyager and produce plausible radio waves from the edge. There would essentially be a screen around the simulation beyond which everything was roughly approximated - think of the draw distance in computer games.

              I don't personally believe we're in a simulation, if only because surely no ethics board would allow the creation of an entire civilisation of sentient beings capable of misery.

            2. Matthew Taylor

              Re: Deep learning?

              "No simulation can have a Planck Length smaller than or equal to the Planck Length of the universe it is being simulated in. Otherwise you are trying to compute with more information than your universe contains."

              You are assuming:

              1. that the universe is simulated at "Planck fidelity" throughout all of space-time. Depending on the simulation's purpose, that might well not be necessary.

              2. That "space" has the same meaning in the simulator's reality that it does in ours. For example, there may be many more dimensions.

  2. Lee D Silver badge

    What I've been saying for ages.

    What we have are complex expert models built by simple heuristics on large data sets, providing statistical tricks which... sure, they have a use and a purpose, but they're not AI in any way, shape or form.

    Specifically, they lack insight into what the data means, any rationale for their decision, or any way to determine what the decision was even based on. If identifying images of bananas, it could just as easily be looking for >50% yellow pixels as for a curved line somewhere in the image. Until you know what it saw, why it thought it was a banana, and what assumptions it was making about the image and bananas in general (i.e. that they're always yellow and unpeeled), you have no idea what it's going to continue doing with random input and no reasonable way to adjust its input (e.g. teach a chess AI to play Go, etc.).
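    The banana point can be made concrete with a toy sketch (entirely invented data, and a made-up `yellow_fraction` "feature"): a classifier keying on nothing but colour statistics scores perfectly on its own data set, yet happily calls any plain yellow square a banana.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_image(is_banana):
        # Toy 8x8 RGB "images": bananas are mostly yellow, everything else is noise.
        img = rng.random((8, 8, 3))
        if is_banana:
            img[..., 0] = img[..., 1] = 0.9  # high red + green = yellow
            img[..., 2] *= 0.1
        return img

    def yellow_fraction(img):
        # The only "feature" this classifier ever looks at.
        return np.mean((img[..., 0] > 0.5) & (img[..., 1] > 0.5) & (img[..., 2] < 0.3))

    # A one-feature threshold "model": banana iff >50% of pixels are yellow.
    threshold = 0.5
    images = [make_image(i % 2 == 0) for i in range(100)]
    labels = [i % 2 == 0 for i in range(100)]
    accuracy = np.mean([(yellow_fraction(im) > threshold) == lab
                        for im, lab in zip(images, labels)])
    print(accuracy)  # 1.0 on this data - looks like it "knows" bananas

    # ...but a plain yellow square is also a "banana" to it:
    yellow_square = np.zeros((8, 8, 3))
    yellow_square[..., :2] = 0.9
    print(yellow_fraction(yellow_square) > threshold)  # True
    ```

    Until you inspect what the model actually keyed on, perfect accuracy on the data you happened to feed it tells you nothing about what it will do next.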

    This isn't intelligence, artificial or otherwise. It's just statistics. Any sufficiently advanced technology is indistinguishable from both magic and bull. In this case it's bull.

    The scary thing: people are building and certifying cars to run on the roads around small children using these things, and yet we don't have a data set that we can give them (unless someone has a pile of "child run under car" sensor data from millions of such real incidents), nor do we have any idea what they are actually reacting to in any data set that we do give them. For all we know, one could just be blindly following the white line and would be happy to veer off Road-Runner style if Wile E Coyote were to draw a white line into a sheer cliff in a certain way.

    We don't have AI. We're decades away from AI. And this intermediate stuff is dangerous because we're assuming it is actually intelligent rather than just "what we already had, with some faster, more parallel computers under it".

    1. Anonymous Coward
      Anonymous Coward

      Great comments!

      And yes, someone has already done that! https://www.vice.com/en_us/article/ywwba5/meet-the-artist-using-ritual-magic-to-trap-self-driving-cars

      (Though artistically, I don't know if they actually tested the software?)

      1. Anonymous Coward
        Anonymous Coward

        Re: Great comments!

        Magic: actually lifehacks from the future, sent accidentally to the past.

        1. amanfromMars 1 Silver badge

          Re: Great comments! Seconded!

          Magic: actually lifehacks from the future, sent accidentally to the past. ..... Anonymous Coward

          Oh? Accidentally, AC?

          Are you sure? Absolutely positive about that?

          There are certainly A.N.Others who would fail to agree and would be able to Offer a Different Discourse, and Not By Accident.

    2. colinb

      Seems clear, refuse to use it if that's what you believe

      So if on the NHS you, or any of your family, get offered a system called Ultromics to review your cardiovascular health, you will of course refuse, point blank, because 'it's dangerous' and 'bull'?

      It uses machine learning (a form of AI, as per their press release) to review ultrasound heart scans and, while currently going through peer review, looks to "greatly outperform ... heart specialists" who would review those scans. UK tech coverage:

      http://www.bbc.co.uk/news/health-42357257

      A friend of mine had a heart attack on Tuesday, so personally I feel this s**t needs rolling out as fast as it possibly can be.

      A.I. is a misused label, so what.

      1. JellyBean

        Re: Seems clear, refuse to use it if that's what you believe

        > "So if on the NHS you, or any of your family, get offered a system called Ultromics to review your cardiovascular health you will of course refuse, point blank, because 'its dangerous' and 'bull'?"

        colinb: you sound a bit hysterical. After using my "deep learning", you are in danger of blowing a gasket.

        [ from the article in question ]

        "Humans, as they read texts, frequently derive wide-ranging inferences that are both novel and only implicitly licensed, as when they, for example, ..." (edit) - read colinb's comments. =)

        > "It uses machine learning (a form of AI as per their press release) to review ultrasound heart scans and while currently going through peer review looks to "greatly outperform ... heart specialists" who would review those scans. UK Tech:"

        http://www.bbc.co.uk/news/health-42357257

        What a wonderful tool to have and use. (I did read your link.) But Ultromics' workings do not relate to the problems discussed in the article. The article states that narrowly confined and focused AI performs very well.

        > "A friend of mine had a heart attack Tuesday so personally I feel this s**t needs rolling out as fast as it possibly can be."

        All the best to your friend.

        > "A.I. is a misused label, so what."

        It certainly is uncontaminated by cheese.

        p.s. I will remember your friend in my prayers.

        1. Lee D Silver badge

          Re: Seems clear, refuse to use it if that's what you believe

          Would I take the advice of an AI over a doctor's interpretation of the same result?

          No.

          P.S. For many years I was living with a geneticist who worked in a famous London children's hospital but has also handled vast portions of London's cancer and genetic disease lab-work. Pretty much, if you've had a cancer diagnosis (positive or negative) or a genetic test, there's a good chance the sample passed through her lab and/or she's the one who signed the result and gave it back to the doctor / surgeon to act upon. Doctors DEFER to her for the correct result.

          Genetics is one of those things that's increasingly automated, machinified, AI pattern-recognition, etc. nowadays. Many of her friends worked in that field for PhDs in medical imaging, etc. It takes an expert to spot an out-of-place chromosome, or even identify them properly. Those pretty sheets you see of little lines lined up aren't the full story you think they are. She has papers published in her name about a particular technique for doing exactly that kind of thing.

          The machines that are starting to appear in less-fortunate areas to do that same job (i.e. where they can't source the expertise, let alone afford it)? All have their results verified by the human capable of doing the same job. The machines are often wrong. They are used to save time preparing the samples etc. rather than actually determining the diagnosis (i.e. cancerous cell or not, inherent genetic defect or not, etc.) and you can't just pluck the result out of the machine and believe it to be true, you would literally kill people by doing that. Pretty much the machine that could in theory "replace" her costs several million pounds plus ongoing maintenance, isn't as reliable and needs to be human-verified anyway.

          So...er... no. A diagnostic tool is great. But there's not a chance in hell that I'd let an AI make any kind of medical diagnosis or decision that wasn't verified by an expert familiar with the field, techniques, shortcomings and able to manually perform the same procedure if in doubt (hint: Yes, often she just runs the tests herself again manually to confirm, especially if they are borderline, rare or unusual).

          If one of London's biggest hospitals, serving lab-work for millions of patients, with one of the country's best-funded charities behind it still employs a person to double-check the machine, you can be sure it's not as simple as you make out.

          Last time they looked at "upgrading", it was literally in the millions of pounds for a unit that couldn't run as many tests, as quickly, as accurately, wasn't able to actually sign off on anything with any certainty, was inherently fragile and expensive to repair, and included so many powerful computers inside it I could run a large business from it. You can put all the AI into it that you want. It's still just a diagnostic tool. The day my doctor just says "Ah, well, the lab computer says you'll be fine" is the day I start paying for private healthcare.

          Computers are tools. AI is an unreliable tool.

          1. Muscleguy

            Re: Seems clear, refuse to use it if that's what you believe

            Depends. A lot of medics make statistical errors of the sort 'it is unlikely you have X because you are too young, too old, the wrong sex/race/culture, etc., so I don't have to test for it', despite the symptoms. I and various family members have been victims of this, and been proved right in the end with good old-fashioned middle-class educated persistence.

            Just because I/you are at or towards one end of the normal distribution of disease incidence, that does not mean I CANNOT have disease/condition X. If my symptoms are entirely consistent with that diagnosis then it should be tested for. It seems young women are very badly served by this common error.

            If the AI doesn't make those errors then I'm all for it.

            Doctors seem to be good at finding post hoc 'reasons' to subvert the diagnostic heuristic tree. When you add in GP practice funds it gets pernicious.

    3. Anonymous Coward
      Anonymous Coward

      Finally!

      It is nice to see the Reg finally writing a realistic article on "AI" and covering the points many of us try to make in the comments, instead of believing the hype from Google or the doom from Musk!

      1. Andrew Orlowski (Written by Reg staff)

        Re: "range"?

        You must be new here, Doug.

        https://www.theregister.co.uk/2017/01/02/ai_was_the_fake_news_of_2016/

        https://www.theregister.co.uk/2017/10/11/el_reg_meets_the_lords_to_puncture_the_aipocalypse/

        But thanks!

    4. Lomax

      Hear, hear. I often argue that the big risk with "AI research" is not that we will somehow by accident create a "super AI" which takes over the world and enslaves us all as lanthanide miners, but that we will attribute "intelligence" to systems which are anything but, and hand over control of essential infrastructure to algorithms which are in fact incompetent. Human history, it would seem, is littered with examples of similar hubris. And investor-hyped belief in the superiority of algorithms carries an even greater potential risk: that we will start to shape society, and ourselves, to fit their narrow and unimaginative conclusions. Some might say this is already happening.

  3. Anonymous Coward
    Anonymous Coward

    The problem is..... Try telling any of that to:

    Wall Street / Silicon Valley / Big Media and Bitcoin chasing elites...

    Apart from today (Intel), guessing few of those elites read El Reg.

    Or they just skip over articles like this one that came before today:

    ~~~

    https://www.theregister.co.uk/2018/01/03/fooling_image_recognition_software/

  4. Adair Silver badge

    'Skynet it ain't: Deep learning will not evolve into true AI, says boffin' - well who'd a thunk it?

    'AI', one of the great hypgasms of the early 21stC.

    When a putative 'AI' can decide it 'can't be arsed' to do what it's told, can put 'moral sensibility' ahead of 'empirical determinism', and generally be awkward, then I may begin to be impressed.

    1. Pascal Monett Silver badge

      Totally agree.

      We'll have AI the day we ask a question and it answers it can't be arsed to care.

      What we'll do with it then is another issue.

      1. I ain't Spartacus Gold badge

        We'll have AI the day we ask a question and it answers it can't be arsed to care.

        So you're saying that printers achieved AI decades ago. They've simply been too smart to let us find out and do even more horrible things to them than we already want to do to printers...

    2. Naselus

      "'AI', one of the great hypgasms of the early 21stC."

      And the late 20th. And the mid 20th, too.

      Basically, every 30 years we have this huge hyperventilation over the latest tiny incremental trend in AI research (LISP machines, anyone?) and AI researchers don't do enough to manage the public's expectations, and then when they fail to produce a fully self-aware robot who can dance the fandango while asking "what is this human thing you call 'love'?" within 18 months, the sector collapses and the funding dries up for the following 20 years.

      Wait and see what happens in the 2040s, I'm guessing.

      1. amanfromMars 1 Silver badge

        AIMessages from Heaven? :-) Or Secret Intelligence Services at Their/Our Work For You?

        and then when they fail to produce a fully self-aware robot who can dance the fandango while asking "what is this human thing you call 'love'?" within 18 months, .... Naselus

        Are El Regers up for accepting and fulfilling that Immaculate Challenge ....... with Live Operational Virtual Environment Vehicles at the Disposal of Advanced AI Researchers ...... who be Knights of the Virtual Kingdom.

        What have you to lose? Not even your shirt is at risk?:-)

      2. a_yank_lurker

        @Naselus - About once a generation, a new set of wide-eyed, sci-fi-enthralled groups gets the AI religion. It lasts a few years as they hype some trivial exploit as meaning AI is just around the corner. Sort of sounds like fusion research.

        1. I ain't Spartacus Gold badge

          Yeah, but in fusion research we have many groups using different methods who regularly achieve actual fusion. OK, it might only last for microseconds, and currently uses more energy than it puts out - but the point is that they can point to success and claim that all they need to do is refine the process.

          Whereas we've currently observed one type of natural intelligence, and still don't even know how that works. Meanwhile we're busily trying to replicate it, using a completely different set of physical mechanisms.

          So given that fusion is just 20 years away (and has been for 40 years), how far are we from working AI?

      3. DrBobK

        The thing that is so odd about the current Deep Learning hype is that it is the *same* AI that was being hyped 40 years ago (ok, actually 30 years ago, back-propagation of error through nets with hidden layers), just with more raw computing power and bigger datasets.
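        For reference, the 1980s-era recipe mentioned above - back-propagation of error through a net with a hidden layer - fits in a few lines of NumPy. This is a toy sketch with arbitrary illustrative sizes and learning rate, trained on XOR; it only demonstrates that the decades-old algorithm reduces its error, nothing more:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(z):
            return 1 / (1 + np.exp(-z))

        # XOR: the textbook task for backprop through a hidden layer.
        X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        y = np.array([[0.], [1.], [1.], [0.]])

        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

        def forward(X):
            h = sigmoid(X @ W1 + b1)
            return h, sigmoid(h @ W2 + b2)

        loss_before = np.mean((forward(X)[1] - y) ** 2)

        lr = 1.0
        for _ in range(5000):
            h, out = forward(X)
            # Backward pass: propagate the error signal through the hidden layer.
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out
            b2 -= lr * d_out.sum(0)
            W1 -= lr * X.T @ d_h
            b1 -= lr * d_h.sum(0)

        loss_after = np.mean((forward(X)[1] - y) ** 2)
        print(loss_before, loss_after)  # the loss falls
        ```

        Modern deep learning is, at its core, this same update rule scaled up with more layers, more data, and GPUs.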

  5. Triumphantape

    Wait...

    "Neural networks sort stuff – they can't reason or infer"

    That sounds a lot like most of the humans I see.

  6. JimmyPage Silver badge
    Stop

    This is news ?

    I have said, and will continue to say ...

    If Google (for example) *are* developing "AI", then they are keeping it a long looooong way from their search engine.

    Bear in mind that almost the first thing I would do with real "AI", is to train it to zap adverts and other unwanted cruft.

    1. Steve Knox

      Re: This is news ?

      If Google (for example) *are* developing "AI", then they are keeping it a long looooong way from their search engine.

      Of course they are. They're optimizing their search engine for the average user. Artificial Stupidity is much more relevant for that use case.

  7. aenikata

    Current systems, perhaps

    The public does need to understand the difference between a sophisticated but specific AI and the concept of General AI. Currently the latter is very limited, although there are researchers looking specifically at it, such as the OpenWorm project, which aims to simulate a nematode worm.

    However, it may be that a more general intelligence actually doesn't act in this way. Some of the more sophisticated systems use a blackboard approach where discrete subsystems process some data and return the results to a shared space where other elements can then operate on it. Games-playing systems may be added into such a blackboard, picking up data from other systems.

    Creation of a more general intelligence may involve some kind of overall prioritisation system that selects which systems to run, chooses (perhaps with some randomness) which of the tasks or goals to pursue out of the ones available, and simply aims to maximise its score overall. Learning wouldn't necessarily involve researchers; there could be sharing of successful networks.

    While a network that can play Go isn't directly useful for playing Chess, there may be scenarios where parts of a network can be re-used - this is known as Transfer Learning. A sophisticated system could try to identify networks which might be similar to a new task and try various networks that take some of the deeper elements of the other network as a starting point - it wouldn't necessarily be 'good' immediately, but it may have some ability to recognise common patterns shared with the existing tasks it can do.

    These wouldn't necessarily be 'intelligent' in the sense that some people think, but such a system could potentially transfer what it knows to related subjects, have likes and dislikes (in terms of what it has given a higher scoring to from previous success) and could communicate with other such systems to share and improve its knowledge, and you're then heading a long way towards a system that could interact in a manner that seems increasingly intelligent. After all, if it can recognise people, talk, understand enough of language to at least beat a young child (it can be useful while still naive in its understanding), recognise emotions, play a range of games, learn new things and express its own preferences, how general does the intelligence need to be?
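    The Transfer Learning idea described above can be sketched minimally: freeze a shared feature extractor (standing in for the "deeper elements" of a network trained on some earlier task) and train only a new output head on the target task. The weights below are hand-picked purely for illustration, not taken from any real pre-trained network:

    ```python
    import numpy as np

    # A hand-built "feature extractor" standing in for the frozen lower
    # layers of a previously trained network (illustrative weights only).
    W_hidden = np.array([[5.0, 5.0],
                         [5.0, 5.0]])
    b_hidden = np.array([-2.5, -7.5])

    def features(X):
        H = np.tanh(X @ W_hidden + b_hidden)
        return np.hstack([H, np.ones((len(X), 1))])  # bias column for the head

    def train_head(X, y, lr=0.5, steps=2000):
        # Transfer learning in miniature: the shared layers stay frozen;
        # only a new logistic-regression "head" is trained on the new task.
        H = features(X)
        w = np.zeros(H.shape[1])
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(H @ w)))
            w -= lr * H.T @ (p - y) / len(y)
        return w

    # Target task: XOR, which is linearly separable in the transferred features.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 0.])
    w = train_head(X, y)
    preds = (features(X) @ w > 0).astype(float)
    print(preds)
    ```

    The head converges only because the borrowed features already make the new task easy - which is exactly the caveat above: the transferred network helps where the tasks genuinely share structure, and not otherwise.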

  8. Anonymous Coward
    Anonymous Coward

    I got as far as

    The same skills learnt from one game can't be transferred to another.

    Transfer learning has been a thing since 1993. The way things are going, I give it five years to get the first automated demonstration.

    1. Charles 9

      Re: I got as far as

      Part of the intelligence problem is that we're not ourselves fully aware of how we think. For example, we haven't much insight into subconscious concepts like intuition, which figures into things like driving where we can sense something coming without consciously thinking about it. We can't teach what we ourselves don't understand.

      1. Anonymous Coward
        Anonymous Coward

        Re: I got as far as

        "We can't teach what we ourselves don't understand."

        Hasn't stopped the last few generations of teachers and management consultants.

        (Anon because I work in a school).

      2. Anonymous Coward
        Anonymous Coward

        Re: I got as far as

        I'd say that an even bigger problem is that we don't actually think in as much detail as we think we do.

        1. Anonymous Coward
          Anonymous Coward

          Re: I got as far as

          "I'd say that an even bigger problem is that we don't actually think in as much detail as we think we do."

          Oliver Sacks wrote about some "autistic" people who could draw very detailed scenes from memory after only a short exposure. The implication was that our minds remember far more detail than that of which we are conscious.

          The rub there is "conscious". Too much access to detail by our conscious mind would give information overload. It is probable that unconscious "thinking" is using that data to influence our conscious mind.

          How many times do you say "I had forgotten I knew that" - but only after you have surprised yourself by factoring in something you had forgotten you once knew.

          It has been said that usually we don't seem to be able to remember much that happened to us before about the age of 15. When people reach extreme old age they apparently can get crystal clear recall of early memories - even if their short term memory doesn't exceed a few minutes.

          1. Charles 9

            Re: I got as far as

            So therein lies the rub. We can't teach a computer how to reason, infer, and draw from relatively obscure things when we don't even know how we ourselves do it. What's the specific process by which our brains identify stuff, make not-so-obvious observations, reason, infer, etc.?

      3. 's water music

        Re: I got as far as

        Part of the intelligence problem is that we're not ourselves fully aware of how we think

        The secret of intelligence? Post facto rationalisation of what transpired

        1. amanfromMars 1 Silver badge

          Re: I got as far as

          The secret of intelligence? Post facto rationalisation of what transpired ... 's water music

          That's a recipe for CHAOS, 's water music, and we can do much better with all of that.

          But leaping further into the future discovers you Clouds Hosting Advanced Operating Systems ..... with Wonders to Share and Magnificent to Behold.

          I've got so far ..... and quite whether I would quickly, or even ever choose to move on to Elsewhere with Lesser Wonders, is a sweet muse to feed and savour, seed and flavour.

  9. Herbert Meyer
    Boffin

    old result

    Minsky and Papert wrote a book about this, a long time ago:

    https://books.google.com/books?hl=en&lr=&id=PLQ5DwAAQBAJ&oi=fnd&pg=PR5&dq=Minsky+and+S.+Papert.+Perceptrons&ots=zyDCuJuq23&sig=g6U9pngheQkbaRqqFiyPRgWbtBA#v=onepage&q=Minsky%20and%20S.%20Papert.%20Perceptrons&f=false

    Nobody read or understood it then.

    1. Destroy All Monsters Silver badge
      Headmaster

      Re: old result

      Nobody read or understood it then.

      Apparently nobody does now.

      Perceptrons are one-layer neural networks. Irrelevant to Deep Learning, which uses very deep neural networks with bells and whistles of all kinds.

      Back when I was in school, people were well informed about the problem with perceptrons. They were used as simple models to teach students. Everyone including Pinky and the Brain was working on 3-layer NNs, and possibly looking at Boltzmann machines, while the first NN chips were being talked about in BYTE and IEEE Micro.
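      The perceptron limitation from Minsky & Papert, and the multi-layer fix taught in those classes, both fit in a few lines (a toy sketch, not production code): a single-layer perceptron can never master XOR, because no single line separates the classes, while one hidden layer suffices.

      ```python
      import numpy as np

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
      y_xor = np.array([0, 1, 1, 0])

      def step(z):
          return (z > 0).astype(int)

      # Perceptron learning rule on the raw inputs: it cycles forever on XOR,
      # because the classes are not linearly separable (Minsky & Papert's point).
      w, b = np.zeros(2), 0
      for _ in range(100):
          for xi, ti in zip(X, y_xor):
              err = ti - step(w @ xi + b)
              w, b = w + err * xi, b + err
      acc_single = np.mean(step(X @ w + b) == y_xor)
      print(acc_single)  # stuck at <= 0.75, never 1.0

      # One hidden layer is enough: XOR = AND(OR(a, b), NAND(a, b)).
      def two_layer(X):
          h1 = step(X.sum(1) - 0.5)      # OR
          h2 = step(1.5 - X.sum(1))      # NAND
          return step(h1 + h2 - 1.5)     # AND
      print(np.mean(two_layer(X) == y_xor))  # 1.0
      ```

      Which is the point above: the 1969 result is about one-layer nets, and says nothing about the multi-layer networks everyone was already moving to.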

      1. Rosco

        Re: old result

        I don't think you've understood Fodor & Pylyshyn's argument.

        Their argument is that cognition operates at a higher level of organisation than the physical substrate. True cognition involves generating internally consistent symbolic representations of causal relationships and ANNs on their own aren't capable of that. They - like all approaches to AI so far - must have problem-space representations baked into them before they can generate solutions.

        I'm not saying they were right, by the way. I'm just saying that simply adding more hidden layers or using a convolutional training algorithm doesn't go any distance towards invalidating their rather deep philosophical argument because those techniques don't add causal symbolic processing. It's not clear what would add symbolic processing to a neural network, although it is clear that nature has found a way at least once.

        1. Rosco

          Re: old result

          Whoops!

          I moved quickly from Minsky & Papert's Perceptron paper to Fodor & Pylyshyn's work but forgot that you hadn't. Sorry.

          (Fodor & Pylyshyn's work is probably more relevant to this article than Minsky & Papert's, mind, so I'll leave my comment in place).

  10. Anonymous Coward
    Anonymous Coward

    The late great Dr. Christopher Evans ...

    covered AI in wonderful layman-friendly detail in "The Mighty Micro".

    Not much has changed since 1979.

  11. Rebel Science

    Deep learning must be discarded like yesterday's garbage in order to solve AGI

    A number of us have been saying this for many years. But the AI community, like all scientific fields, is extremely political. Only the famous leaders have influence, even if they are clueless.

    Why Deep Learning Is a Hindrance to Progress Toward True AI

  12. amanfromMars 1 Silver badge
    Mushroom

    Surely you didn't think you'd been left here all alone?

    but they won't take us all the way to artificial general intelligence, according to a recent academic assessment.

    And that would be because of A.N.Other Human IT AI Intervention? A Simple Advanced IntelAIgent Future FailSafed DeProgramming of Sublime Assets for ESPecial ReProgramming in Quantum Communications Channels which at their Best both Provide and Protect, Mentor and Monitor Heavenly Streams of Augmented Future Virtual Realities ...... for Out of This World Worldly Global Presentation?

    And the Intervention works wonderfully well, with All Deliveries Cast Iron Guaranteed to not Implode or Explode upon Epic Fails and Failure to Best Use Provided Prime Assets. And you won't get many offers as good as that today, that's for sure, Amigo/Amiga.

  13. Dinkrex

    Finally!

    Someone finally said it. What is called 'AI' today is not AI, not even weak AI. I know why the label is so prevalent, though: AI researchers don't want to repeat the over-promising that led to the last two "AI winters", but they're leaving the door open to a different kind of over-promising by corporations who want to turn it into a buzzword and the media who want a soundbite.

  14. John Smith 19 Gold badge
    Coat

    "deep learning teaches computers how to map inputs to the correct outputs. "

    Or as humans call it "Growing up."

    Because when you take a human brain apart, what do you find?

    Multiple highly interconnected layers of neurons (up to 10,000 to 1), loosely connected to other sections of multiple highly interconnected neural layers.

    Everything else is built on top of that hardware.

    Which leaves two questions.

    Are humans as un-"intelligent" as existing multi-layer NN systems, but we're too stupid to recognise it? And

    if not, why are existing "deep learning" systems so s**t?
