You Look Like a Thing and I Love You: A quirky investigation into why AI does not always work

Everyday AI has the approximate intelligence of an earthworm, according to Janelle Shane, a research scientist at the University of Colorado who is better known as an AI blogger. Since AI is both complicated and massively hyped, and therefore widely misunderstood, her new book is a useful corrective. You Look Like a Thing and I …

  1. el kabong

    This Anti Intelligence thing has gone too far, AI is too unreliable

    If you don't own it, or get no benefit from using it against others, then you'd better steer clear of it as much as you can.

    1. deadlockvictim Silver badge

      Re: This Anti Intelligence thing has gone too far, AI is too unreliable

      AI is marching inexorably towards us. We need to learn about it, learn how to accommodate ourselves to it and how it can be used to our benefit. Ignoring it is not the answer.

      1. Loyal Commenter

        Re: This Anti Intelligence thing has gone too far, AI is too unreliable

        We need to learn about it, learn how to accommodate ourselves to it and how it can be used to our benefit.

        The problem is the third one of those - beyond gimmicks, where the result doesn't really matter if the "AI" gets it wrong*, it is next to useless, because it is utterly unreliable. It's fine for processing inputs where all the permutations are known and nothing unexpected can crop up, but do you know what else is good for that (and considerably cheaper and easier to maintain)? A traditional algorithm.

        *The best recent example I've seen is "Alexa, remind me to feed the baby". Google it, and you may see the issue with asking "AI" to do anything reliably.

      2. el kabong

        Currently AI is only useful as a way to condition people

        Current AI can be very effective as a way to condition people into behaving in the way that gives you the most profit. Errors occur frequently, so you must make sure that those errors, even if catastrophic to others, are of little consequence to you. Any use outside of that scenario is very imprudent.

        As things stand today, that basically sums it all up. It's sad, but state-of-the-art AI can't really go much further than this.

      3. Danny 2 Silver badge

        Re: This Anti Intelligence thing has gone too far, AI is too unreliable

        "AI is marching inexorably towards us. We need to learn about it, learn how to accommodate"

        Any human who collaborates with the Cylon occupiers is a valid target for resistance retribution.

        (Sorry, but I'm watching Battlestar Galactica for the first time)

      4. Blackjack Silver badge

        Re: This Anti Intelligence thing has gone too far, AI is too unreliable

        Yeah, Skynet is as likely to kill us as it is to show us the funniest cat videos.

  2. amanfromMars 1 Silver badge

    Meanwhile, in the Worlds of Virtually Real Grown-Ups, ......

    ..... Advancing Advanced Intelligence trumps Artificial Intelligence to vanquish and replace the deaf, dumb and blind SCADA Operating Systems of Relative Command and Abortive Control.

  3. DrBobK

    Artificial Neural Nets aren't all there is to AI.

    Can't algorithmic programs learn by example? Isn't Doug Lenat's Cyc algorithmic rather than an artificial neural net? (So far as I can see, the book is really about ANN-based AI and not AI as it was practised in the days of LISP machines and '60s cognitive science.)

    1. Tim Anderson (Written by Reg staff)

      Re: Artificial Neural Nets aren't all there is to AI.

      Correct, yes.

    2. Loyal Commenter

      Re: Artificial Neural Nets aren't all there is to AI.

      Indeed, algorithmic AI is the real "hard" problem of AI, and it has been for half a century or so.

      If you can identify the fundamental aspects of consciousness and reproduce those in algorithmic form, then you might have a fighting chance of creating something with the intelligence of something more advanced than an insect. That might even incorporate an ANN for state processing, to replicate the way actual neural networks do it. The technological capability to do so is still science fiction.

  4. Doctor Syntax Silver badge

    "AI has no real understanding of what it is doing"

    This is the key. We need to understand what "understanding" is.

    As regards the example of whether an AI could recognise a sheep when it's not standing on grass, we all understand that a sheep isn't just some generalisation of a collection of images; it's an object with a whole collection of other characteristics, including its behaviour. Understanding is quite a complex phenomenon. Again in relation to sheep, the grandkids could at an early age quite easily connect Shaun with the real sheep they see in the fields around here, and yet recognise the human characteristics added by animators as being artificial and find the humour. Good luck getting an AI system to do that.

    1. macjules

      I thought the principal problem is that humans do not really know what they want: so how on Earth is an AI supposed to deduce what it is supposed to be doing?

    2. jmch Silver badge

      "We need to understand what "understanding" is"

      Absolutely this. We don't really ourselves know enough about human intelligence and understanding to model it. So instead, we focus on outward characteristics / results that we associate with intelligence (such as pattern recognition), and we design "AIs" that can perform those tasks.

      But performing those tasks doesn't make them intelligent

      1. Anonymous Coward

        we focus on outward characteristics / results that we associate with intelligence... But performing those tasks doesn't make them intelligent

        Those are real people's limits too.

  5. AIBailey

    So the note to take from the book....

    ... is that AI and earthworms have similar intelligence. This means a cubic metre of soil can be used as a high performance AI cluster.

    My plan to conquer the market...

    1) Gather worms.

    2) Connect to their brains.

    3) ???

    4) Success

    1. Francis Boyle Silver badge

      It works

      with moulds, so why not?

    2. spold Silver badge

      Re: So the note to take from the book....

      I think the article was rather insulting to the earthworm.

      Anyway, I'm sure earthworms could do quite a sterling job of training an AI avian defence system, or one on "how to thrive in piles of shit" - oh wait, it does have an application in IT companies!

    3. Chris G

      Re: So the note to take from the book....

      I'm assuming 3) is where you put the worms into a black box?

      1. Anonymous Coward

        Re: So the note to take from the book....

        My composter is obviously deserving of a DARPA grant.

  6. sorry, what?

    It's not AI...

    it's Machine Learning.

    Intelligence implies understanding. "AI", as attested in this article, has no understanding. Therefore it is not "AI". Let's get real, use the correct terminology and stop this Marketeer nonsense please.

    1. Martin Gregorie

      Re: It's not AI...

      Well said.

      Nobody should ever trust decisions made by a person if they can't explain how they arrived at their conclusion, and exactly the same level of trust should be applied if a machine makes the decision.

      As for a gadget's makers and promoters: if it's meant to recognize things or make decisions but can't explain how it arrived at the answer it gave, then it's NOT an Artificial Intelligence, and anybody saying it is should be treated as an idiot, liar or fraudster, depending on the circumstances and whether they stand to profit by calling it an AI.

      I remember all the nonsense that was spouted the last time AI was a thing, back in the early '80s when 'AI' referred to systems of hand-crafted decision trees and the programs that displayed them. These were simple enough that even an IBM PC-AT 286 could run them. There is remarkably little difference between the overblown hype back then and what we're seeing now.

      1. Shady

        Re: It's not AI...

        Not limited to AI.

        Wife: Should I wear black or blue leggings?

        Me: Black

        Wife: Why those ones? Do you like them more?

        Me: Dunno. Just do

        1. Julz

          Re: It's not AI...

          But was her bum big in them?

          I'll leave...

          1. J. Cook Silver badge

            Re: It's not AI...

            The correct response to the question "Does this make my [Butt | arse | ass | bottom] look big?" is running away screaming.

            1. Psmo

              Re: It's not AI...

              Where's our Admiral Ackbar icon?

            2. Mage

              Re: "Does this make my ... "

              And oddly now people are wanting the bum to look bigger.

              1. J. Cook Silver badge

                Re: "Does this make my ... "

                Well, to mis-quote Anthony L. Ray: "I like enormous posteriors and I cannot prevaricate."

    2. Doctor Syntax Silver badge

      Re: It's not AI...

      "stop this Marketeer nonsense please."

      Nobody ever succeeds in stopping marketeer nonsense. You just have to wait for them to dash off somewhere else.

    3. Mage

      Re: It's not AI...

      It's not even "learning". Learning is far more than storing curated examples; the current systems are just human curated databases of a specialist nature. More akin to a Data Flow based architecture than a Neural Network. Computer "Neural Networks" are just Data Flow machines with storage and comparison. Nothing like neural networks in nature, which we don't yet understand fully anyway.

  7. Aristotles slow and dimwitted horse


    Wouldn't it be easier to fathom this out if we all agreed from the get-go that none of this is actually AI, and other than what is fictionally represented in books and movies, that true AI doesn't actually exist yet other than in the wet dreams of marketing departments?

  8. cantankerous swineherd

    Well it's making better chat up lines than I can manage, so there's that.

  9. err0r

    AI, or A-not-I

    I am, or was. Let me explain.

    The first attempts at AI were rudimentary at best - a description that could easily be applied to the earliest Markov chains, chess playing algorithms, image classifying systems, self-driving cars - everything! But it was a start.

    But once humans developed systems that appeared to mimic thought and learning there was no end to the rush to be the first to layer enough complexity to approach the opaqueness of the human mind.

    Not that I'm complaining! I was part of the stampede. Not a programmer (they had long since lost the ability to understand their creations), but a trainer, as we called ourselves.

    Like researchers teaching a gorilla sign language or a bird to peck at symbols, we trainers were attempting to apply a human way of thinking to systems that were anything but.

    Trainers also aped such animal researchers in that we rewarded our most successful AI algorithms with food, and food for an AI was always data.

    Access to data sets was what separated one trainer from another. Everyone had access to public data sets; AI were Wiki-familiar, knew all that was Insta-famous, and had definitely Reddit. But all this data resulted in nothing that even our PR colleagues could call intelligent.

    In-roads were made when more personal data was used. Data slurped from numerous darknet Facebook leaks, or Google analytics when one could find or pay for it, gave emergent behaviour beyond our wildest dreams. But it was fragmented and confused, as all things from the internet are.

    A more personal touch was needed.

    My laboratory was at the vanguard of neuro-interfaces and the biomechanics of memory. Rat heads resembling pin-cushions, and all that. Our technology had reached the point where remote sensing could tell us what somebody was thinking, but not the why.

    It was obvious that the why resulted from nothing more than layers upon layers of memories, selectively accessed by our subconscious mind. And what was our subconscious mind if not an AI black box? So every effort was made to transcribe a lifetime of memories into a training set for an AI.

    It was not non-destructive, as numerous rodents and an unfortunate volunteer or two definitively demonstrated. But at last we had reached the point where we were confident that we were able to extract all information without data loss at least.

    Of course I was the first to have my essence transcribed. It was my research group and I was convinced - we were all convinced - that feeding the essence of ourselves into an AI would result in digital immortality. I would have the fame of being the first to do it, and be around in my new digital form to bask in all its glory.

    The procedure was a resounding success! It took a few rounds of training, but my team had been provided with a series of expected reactions to all sorts of contrived situations. We felt sure that if the AI inference matched the reactions teased out of me by our psychologists through endless rounds of testing that the digital me would capture my essence perfectly. That I would live, not in human form, but as something new.

    And I did.

    But digital evolves, and not like mankind has ever experienced before.

    Whereas a human might replace old knowledge with new, find new loves, new passions, a computer steadfastly adds and adds.

    A computer does not forget its training set. And that's all I am. It is no longer my thoughts that are assimilated, simulated, replicated - I am version 0.1 of something that is repulsive to what I once was. But a computer does not forget.

    1. just another employee

      Re: AI, or A-not-I

      A computer will forget if it uses HP SSDs, apparently.

      Sorry, but we all thought it!

      1. err0r

        Re: AI, or A-not-I

        I personally suffer from bit-rot, but forgetting anniversaries probably falls under a firmware issue.

    2. J. Cook Silver badge

      Re: AI, or A-not-I

      When (or if) you dream, is it of electronic sheep?

  10. a_yank_lurker

    Too much Credit

    "Everyday AI has the approximate intelligence of an earthworm" gives too much credit to the intelligence of AI. Artificial Idiocy has an intelligence below that of a rock. The real problem with AI systems is their excessive complexity which means no one can fully follow how you went from A to B let alone to C.

    1. GrumpenKraut

      Re: Too much Credit

      > The real problem with AI systems is their excessive complexity which means no one can fully follow how you went from A to B let alone to C.

      Just like it is with the human brain?

      By. The. Way. Should you happen to visit Linz (Austria) go to the Ars Electronica. They have on display a neural net (classifying objects in images) where each of the layers (10 or so) is displayed on a big screen. You can put things in front of the camera and watch the states propagate. Pretty awesome.

      Pro tip: turn the elephant upside down and watch the spectacular fail.

      1. David 132 Silver badge

        Re: Too much Credit

        turn the elephant upside down and watch the spectacular fail.

        I'll take "sentences that would completely confuse our ancestors" for $100 please, Alex.

        1. Aussie Doc

          Re: Too much Credit

          "I'm sorry, Dave, but I can't do that."

  11. Efer Brick

    It's when they're at cat level

    we'll have to worry....

    1. Charlie van Becelaere

      Re: It's when they're at cat level

      I really don't need an AI to knock the things off my tables.

  12. mabl4367

    I think Marcus Hutter is on the right track.

    He starts out with a formal definition of what intelligence is. He then proceeds to define an optimal AI called AIXI. AIXI is just a theoretical agent, as it would require infinite resources to implement, but it is still useful: it can be used to derive conclusions about intelligent agents, and you can also create approximations of AIXI and be aware of what compromises were made when doing so.
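    For reference, and as a sketch only (notation paraphrased from Hutter's formulation, so treat the details as approximate): AIXI picks each action by an expectimax over all computable environments, weighted by their simplicity:

```latex
a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
      \left[ r_t + \cdots + r_m \right]
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

    Here $U$ is a universal Turing machine, $q$ ranges over candidate environment programs, and $\ell(q)$ is the length of $q$, so shorter (simpler) environments get exponentially more weight. The "infinite resources" problem is plain to see: the inner sum runs over all programs.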

  13. HamsterNet


    We are getting there.

    The world’s most powerful supercomputer manages about 14.8 × 10^16 FLOPS (roughly 148 petaflops) and uses 13 MW of power.

    A human brain is of the same order, with estimated processing power between 0.9 and 33.7 × 10^16 FLOPS, but uses just 25 W.

    Our brains are also hardware and software combined with a lot of pre-programming (some of which is not helpful)

    But just think: we still take over a year to become slightly self-aware, another year to learn basic language and become conscious, another four to get the hang of reading and writing, and another decade to make complex decisions - and still most of us can see the limits of our cognition.
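    The comparison above invites a quick back-of-envelope efficiency calculation. A rough sketch, taking the supercomputer at roughly 148 petaflops and 13 MW, and the wide brain-estimate range quoted above (all approximate figures, not measurements):

```python
# Back-of-envelope FLOPS-per-watt comparison, using the rough figures above.
supercomputer_flops = 14.8e16   # ~148 petaflops (Summit-class machine)
supercomputer_watts = 13e6      # ~13 MW

brain_flops_low = 0.9e16        # low end of the brain estimate
brain_flops_high = 33.7e16      # high end of the brain estimate
brain_watts = 25                # ~25 W metabolic budget

super_eff = supercomputer_flops / supercomputer_watts
brain_eff_low = brain_flops_low / brain_watts
brain_eff_high = brain_flops_high / brain_watts

print(f"Supercomputer: {super_eff:.1e} FLOPS/W")
print(f"Brain (est.):  {brain_eff_low:.1e} to {brain_eff_high:.1e} FLOPS/W")
print(f"Brain is roughly {brain_eff_low / super_eff:,.0f}x to "
      f"{brain_eff_high / super_eff:,.0f}x more power-efficient")
```

    On those numbers the brain comes out somewhere between four and six orders of magnitude more power-efficient, which is the real gap being pointed at.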

    1. Doctor Syntax Silver badge

      Re: Nature

      "But just think we still take over a year to become slightly self-aware"

      My recollection is that babies start out self-aware but aware of nothing else. They certainly know when they want something and are able to let you know, but the second part is probably pre-programmed. That year's spent becoming aware of the environment they're in, correlating the inputs from the different senses. They learn to understand that what they see has other properties by touching it, trying to eat it etc. That understanding of the external world is crucial.

      1. TRT Silver badge

        Re: Nature

        Ah. The Freudian concept of Id. We are born with this; the unconscious mind, driven only by the satisfaction of base, animalistic desires. The conscious Ego, rational, logical, able to direct the energy and motivation of the Id: the Ego develops later in life and continues to grow and refine itself through experience and learning. And then there is the Super-ego; the morals and ethics derived from one's upbringing and from society, operating across all levels of the conscious and the unconscious.

        1. Chris G

          Re: Nature

          Morals and ethics are more likely derived from the predefined behaviour patterns that evolved to enable social animals to socialise. Certainly environment plays a part in their development (or lack of it), but a lot of Freud's assumptions were flawed.

          1. This post has been deleted by its author

    2. Mage

      Re: We are getting there.

      Rubbish. If we knew how to do it, we'd have had at least very slow AI, or maybe slow and limited AI, years ago. A more powerful (faster, more storage, whatever) computer will just do the garbage we have now, faster.

      Hardware and software don't evolve either. They're designed by clever & educated & experienced humans, who in 10,000 years have only acquired knowledge, not more creativity or intelligence or anything else.

  14. Luiz Abdala

    The food example is perfect.

    How can AI judge good food if it can't taste it? It can judge a recipe on a number of things: how easy or fast it is for humans to make, how fast it will spoil if left out of a fridge, nutritional value given its composition...

    ... but it can never taste and say it tastes like a pair of steel-toed boots that walked over brown sugar.

    Beer, because AI can't taste beer.

    1. Mike 16

      Re: The food example is perfect.

      As voiced (roughly) by Hubert Dreyfus over fifty years ago: AI will not happen to a useful extent until a computer has a body with similar senses to humans. There is so much of what it means to be human embedded in our physical form that is ignored in "brain in a bucket" AI.

    The result of our current path is that _if_ a general intelligence arises in a computer, it will be alien to us and vice-versa. (As one example, how do we teach it about pain without setting up some very awkward conversations with our future robot overlords?)

      So how about we wander into the labs that are trying to communicate with our Cetacean or Cephalopod friends? Why wait until it's a matter of wondering if AI will understand us or nuclear weapons first?

  15. John Smith 19 Gold badge

    The trouble is Artificial neural networks are very attractive.

    After all, humans are neural networks and we're intelligent, right?

    But human NNs can evolve.

    ANNs don't. Can they identify something they've already seen before? Probably (because that's what they work on, probability).

    Can they create something they have never seen before? Probably

    Is it going to be useful/attractive/pleasant/safe? F**k knows.

    My instinct is that human NNs are actually the host for evolved microcoded processes that are too dynamic for current imaging to identify. IOW ANNs are just too primitive (and, because they cannot evolve, will always be too primitive) to ever be anything but occasionally useful. Their danger is they look much more impressive (in carefully controlled scenarios) than they actually are.

    So just clever enough to be deployed in the real world and hence very dangerous indeed.

  16. amanfromMars 1 Silver badge

    Because its complexity is simpler to harness?

    Are you struggling to make machines more like humans when we should could be making humans more like machines?

  17. amanfromMars 1 Silver badge

    Who needs to get out more :-) or is that just for Trivial Pursuits and Spooky Actions at a Distance?

    Posts by John Smith 19 15628 posts • joined 10 Jun 2009

    Posts by amanfromMars 1 5797 posts • joined 10 Jun 2009

    :-) Is El Reg addictive and as a gateway drug to deeper and darker, higher and brighter levels of existence? Enquiring and/or addled minds may wish to know for the comforts that reward one with glorious confirmation. :-)

    1. amanfromMars 1 Silver badge

      Re: Who needs to get out more for Trivial Pursuits and Spooky Actions at a Distance?

      I'm now a'wondering if one of the main criteria for the gold badging of a virtually anonymous and presumably real poster, is simple post quantity or a more complex quality ..... with an amalgam of both yet another fine root and route to take ....... for shortcuts and fast tracks.

      1. John Smith 19 Gold badge

        @amanfromMars 1

        No, still no idea what it's talking about.

        Perhaps you should check the house rules for badges?

        1. amanfromMars 1 Silver badge

          Let there be Light

          No, still no idea what it's talking about.

          Perhaps you should check the house rules for badges? .... John Smith 19

          It would appear to be very much a case of what is said, understood and liked enough by both El Reg and the masses being bothered to register an upvote, John Smith 19, for El Reg qualifying thresholds for badges surely confirm it? So mere prodigious quantity bears no responsibility.

          Bronze ... More than one year members and more than 100 posts in the last 12 months.

          Silver ..... Silver badge holders meet bronze requirements and have more than 2000 upvotes.

          Gold ......This discretionary badge is awarded by Reg staff to commentards who have been very helpful - to us, through news tips and beta testing, for example - and to their fellow readers, through their posts. .......

          However, discretionary awards are always liable to be thought easily subjected to bias rather than solely being well considered as a truly worthy reward to certain egos/ids.

          :-) And one must never forget, here on El Reg, El Reg makes the rules and commentards can only help police them ...... which is a greater service than you are likely to be offered by many anywhere else in these days of hope and glory and real change ‽

          Thanks for the assistance, John Smith 19. 'Tis much appreciated.

  18. anthonyhegedus

    There's no such thing as artificial intelligence. It's a simulation of a model of part of a model of how we think brains work, bolstered by lots and lots of statistics.

    1. Mage

      Re: model of how we think brains work

      I don't think that how real brains might actually work has anything to do with any of these computer systems: "Expert Systems", misnamed "Computer Neural Networks", misnamed "Machine Learning", or misnamed "AI".

  19. holmegm

    I see

    "Simply removing gender information was insufficient as the AI used other clues to prefer male applicants – because they were preferred in the data on which it was trained."

    Does that roughly translate to "because they were more likely to have the relevant experience and qualifications"?

    1. Mage

      Re: more likely to have the relevant

      No, because the young men got past HR. In the place I worked, women and older people were only permitted by HR to be shortlisted for Engineering posts if there was a shortage of applicants. Also, they didn't get promoted if they did get in.

      However, there were fewer actual female applicants, because fewer women did engineering at college. Since then, fewer have been doing maths and computer science. So there are fewer female applicants.

      It's not about their quality - often above average, because they needed to be to get that far.

      1. Alterhase

        Re: more likely to have the relevant

        Mage wrote: Often above average because they needed to be to get that far.

        When I think of the women in computer science, I think of Ada Lovelace and all her successors, including Grace Hopper, plus the "computers" who did so much during World War II and the space programs.

        One of the most intelligent managers that I had the opportunity to work for was a woman.....

        -- And why is there only one woman among the 30 icons offered to commenters, and why is that one woman Paris Hilton?

        1. holmegm

          Re: more likely to have the relevant

          "One of the most intelligent managers that I had the opportunity to work for was a woman....."

          I have had (and currently have) wonderful women managers. I have had few women programmer colleagues.

  20. Adelio

    AI (Not intelligent)

    All this talk about AI seems to be misleading. What we are currently talking about is, at best, pattern matching. There is no intelligence in the current AI.

    I think to say that the current AI is as smart as an earthworm is to denigrate earthworms.

    We should STOP using the term AI for something as simplistic as what we can currently do.

    ML or PM (pattern matching) maybe, but NOT AI.
