The sooner AI stops trying to mimic human intelligence, the better – as there isn't any

Never again. As Gods are my witnesses, you will never catch me [insert gerund here] in future. I have learnt my lesson. You won’t catch me because I’ll be more careful next time. Learning from one’s mistakes is a sign of intelligence, they say. This is why machine-learning is at the heart of artificial intelligence. Give a …

  1. stiine Silver badge

    "Make shit up for a weekly column on an IT news website"

    I bet the editor choked on his tea when he read that. I simply laughed out loud...wonderful.

    1. b0llchit Silver badge
      Happy

      Yes, if this is Dabbs' level of incompetence, then we should all die in the coming apocalypse with good humour. We'll all go under the AI hammer, but at least we can do so with a smile at Dabbs' failings.

    2. ITMA Silver badge
      Joke

      Must work for the BBC then.....

  2. Jamesit

    "Practice makes perfect" Or as my motorcycle instructor put it "Practice makes permanent"

    1. jake Silver badge

      No practice makes for twisted metal.

      So does practice.

    2. Anonymous Coward
      Anonymous Coward

      People who believe they are very experienced are likely to overreach themselves when they think the situation is one they've seen many times before.

      The human mind sees what it expects to see - it takes a habitual worrier to think there might be a hidden twist in some small detail.

  3. Dave K

    Reminds me of an old tale (possibly an urban myth) I was told some years ago.

    The military had the idea of creating and training an AI to spot potential weapons installations, possible hidden bases etc. from snapshots of satellite footage. Programmers created the AI, and they proceeded to train it by showing it numerous satellite pictures of weapon installations, hidden bases and the likes, then showed it a bunch of nice scenery shots where there was nothing suspect.

    All seemed to go well!

    Then they tried it on some real data, and the AI promptly started flagging *everything* as a weapons base. After some head-scratching, someone figured it out. The photos of weapon installations they'd used for training were all gloomy, murky shots. The normal scenery shots they'd used were taken on a brighter sunnier day.

    In essence, all they'd done was train the AI to recognise a sunny day...
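    The failure mode in that tale is easy to reproduce. Here's a toy sketch (entirely made-up data, with a simple brightness threshold standing in for the "AI"), assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "satellite" images: 8x8 greyscale patches. The training set
# accidentally confounds class with brightness, as in the story:
# "weapons base" shots are gloomy, "scenery" shots are sunny.
dark_bases     = rng.uniform(0.0, 0.4, size=(100, 8, 8))
bright_scenery = rng.uniform(0.6, 1.0, size=(100, 8, 8))

X = np.concatenate([dark_bases, bright_scenery])
y = np.array([1] * 100 + [0] * 100)  # 1 = weapons base

# A "classifier" that latches onto the single most predictive feature
# available in the training data: mean image brightness.
threshold = dark_bases.reshape(100, -1).mean(axis=1).max()

def predict(images):
    return (images.reshape(len(images), -1).mean(axis=1) <= threshold).astype(int)

# Perfect on the confounded training data...
print((predict(X) == y).mean())  # 1.0

# ...but hand it scenery photographed on a gloomy day and nearly
# every image gets flagged as a weapons base.
gloomy_scenery = rng.uniform(0.0, 0.4, size=(50, 8, 8))
print(predict(gloomy_scenery).mean())
```

    In other words, the model is perfectly consistent; it just learned the wrong feature, and nothing in its training score reveals that.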

    1. Warm Braw

      It may be true that AI learns from every single case it sees; the problem is that you don't know what it learned, can't ask it and can't correct it. No doubt it will sometimes get the right answer, but that just makes it Accidental Intelligence.

      And that's put off my looming work for all of 30 seconds: must be time to make a cup of tea.

      1. Mage Silver badge
        Devil

        Learns?

        That's marketing speak. No AI is capable of learning at all. It's pattern recognition aided by human curated data and human initial labelling.

        But indeed, you don't really know if it's matching (data comparisons) for what you intended or some other feature in the images. We see patterns, familiar objects in clouds, toast, flames in the fire, scattered stones. Because once we understand chair, we can decide to use a crate as a chair. A child that has eaten bread and sausages will assume a sausage-in-a-bun or a hot dog is edible. A two year old can do things easily that are impossible for AI.

        It's called the AI paradox and it was known nearly 60 years ago.

        Expert systems were the big thing in AI in the 1980s because they used text. The problem was capturing the expert. Faster CPUs, bigger databases and more RAM simply made so-called image recognition, which is actually simpler, possible. There is no recognition. Just matching.

        It's all marketing. None use "machine learning" or "neural networks" as those don't mean what they mean outside of AI marketing.

        Even machine translation has gone backwards. It now uses a brute-force approach, like a giant Rosetta Stone, matching phrases and words.

        Text to speech isn't much better than it was nearly 40 years ago, and so-called smart agents are just voice-to-text front ends using pattern matching to drive search engines and chat bots hardly better than Eliza or ALICE. Speech recognition has moved from being a program on your car radio, phone or PC to something creepy running on a third-party system, the so-called cloud. That's a backward step in privacy, and it needs the Internet.

        1. Allan George Dyer
          Terminator

          Re: Learns?

          @Mage - "A child that has eaten bread and sausages will assume a sausage-in-a-bun or a hot dog is edible. A two year old can do things easily that are impossible for AI."

          A good thing too. It'll be bad enough when the machines take over, without them eating all the damn hotdogs!

          1. Charles 9

            Re: Learns?

            "A two year old can do things easily that are impossible for AI."

            Has someone formally proven the word "impossible" for any and all cases?

            1. Michael Wojcik Silver badge

              Re: Learns?

              Proof, or actual evidence, or substantive argument, or displaying any knowledge of the subject area are discouraged in Reg forum debates about machine learning or related areas.

              Hell, just using the term "AI" makes anything someone says on the subject suspect. (Those O'Neill quotes in the article were painful.)

              1. jake Silver badge

                Re: Learns?

                You gotta admit that it's a handy filter word, though. Like "cyber", pretty much anybody using it seriously in conversation is technologically incompetent and can thus be safely ignored ... at least in regard to technology matters.

            2. Strahd Ivarius Silver badge
              Joke

              Re: Learns?

              well, I never saw an AI eating a hot-dog, did you?

        2. Doctor Syntax Silver badge

          Re: Learns?

          I suppose a medical system does have some objective feedback - patient lives vs patient dies. It's the training that's too expensive.

          1. Anonymous Coward
            Devil

            Re: Learns?

            An AI can learn - If and Only If - every decision it makes is reviewed and rated by a subject matter expert.

            I'll volunteer if, when it's wrong I get to hit it with a cattle prod while screaming "Bad AI. Bad AI."

            1. Precordial thump Silver badge

              Re: Learns?

              Yikes. I wouldn't want to be there when your 2-year-old ate your hot dog.

              1. jake Silver badge

                Re: Learns?

                Relax. It probably only happened once.

                1. Strahd Ivarius Silver badge
                  Devil

                  Re: Learns?

                  In the country Dabbs resides in, when you say that someone is eating your hot-dog, it may have a different meaning...

          2. swm

            Re: Learns?

            Actually medical "AI" systems can be quite good - better than doctors. The problem is liability. If an AI makes a mistake and a patient dies there will be a messy lawsuit. If a doctor makes a mistake and 5 patients die then, well, we did the best we could.

            1. jake Silver badge

              Re: Learns?

              "Actually medical "AI" systems can be quite good - better than doctors."

              For very, very, specific things that they have been told to look for. For example, a dude falls off his horse/bike/roof. Complains of sore ribs. Doctor orders X-rays. AI comes back "no busted ribs in that X-Ray, tell him to take two aspirin and call us in the morning" ... all while missing a little, tiny dark spot on the left lung that would stand out like a searchlight to any halfway competent radiologist.

              But wait, it gets worse ... with today's digital X-rays, it's just numbers being fed into the AI. So said radiologist will probably never see them rendered as an image, and thus won't even be able to spot it accidentally. At least it's all lovely and modern, though, right?

          3. Version 1.0 Silver badge

            Re: Learns?

            "AI" will note that all patients die eventually so AI can demonstrate that medical treatments simply don't work. AI is simply dividing the writers IQ by zero.

            1. Strahd Ivarius Silver badge

              Re: Learns?

              or the AI will decide that since everyone dies in the end, it is more cost efficient to reduce the lifespan of patients with a proper dose of medicinal herbs...

    2. deive

      I remember it as tanks and trees... but I have just found this: https://www.gwern.net/Tanks#origin

      1. TimMaher Silver badge
        Facepalm

        Wooden top

        Many years ago, somebody I knew was working on a military project to get a tank launched missile to recognise and destroy other tanks.

        They were testing on Boscombe Down or somewhere and launched their missile from their tank.

        Off it went, turned around, came back and hit them. It had decided that their tank was the best threat.

        Good job it was a wooden payload.

    3. ITMA Silver badge

      Did the source of this "myth" have the two words "Tony Blair" associated with it by any chance?

      LOL

    4. Loyal Commenter Silver badge

      It reminds me of the also possibly apocryphal story of the AI trained on chest X-rays to spot cases where a chest drain would be beneficial. It was given a nice set of X-rays to train on, which had been vetted by the thoracic consultant: half were cases which didn't need a chest drain, and half were cases which did. It worked really well, and picked up all the ones needing a chest drain. When tried on real data, it didn't perform so well at all. It turns out that the training images from patients who needed a chest drain already showed one fitted (for ethical reasons!), and the "AI" was matching the chest drain in the X-ray, not the symptoms.

      1. Anonymous Coward
        Anonymous Coward

        the "AI" was matching the chest drain in the X-Ray and not the symptoms.

        One wonders whether the "AI" was to blame, or the unsophisticated (or perhaps careless) nature of its training.

        1. Loyal Commenter Silver badge

          Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

          I'd say it's a little from column A and a little from column B. The problem with trained "AI" is you can't know what it has actually been trained to do; you can only look at the results, without knowing how they are arrived at. For example, if you trained an AI to spot red BMWs amongst a sea of blue Fords, is it spotting red cars, BMWs, a combination of both, or personalised number plates? Unless you have suitably controlled test cases to measure the output, you can't know.

          At least with a real person, you can ask them how they arrived at an answer. If they give a reply along the lines of "it just felt like the right answer", they've just failed their Voight-Kampff test.

          1. ibmalone

            Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

            This is not actually impossible to do, for example you can feed in reams of perturbed example data and see which perturbations affect the outcome. More efficient methods might be possible for specific learning frameworks.
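            For what it's worth, the simplest version of that perturbation idea fits in a few lines. The "model" below is a deliberately silly stand-in that secretly only looks at the top-left corner (the chest drain, the ruler, the sunny sky), not a real network:

```python
import numpy as np

def occlusion_sensitivity(model, image, patch=2, fill=0.0):
    """Blank out one patch of the image at a time and record how much
    the model's score moves: a crude map of what the model relies on."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            probe = image.copy()
            probe[i:i + patch, j:j + patch] = fill  # perturb one region
            heat[i // patch, j // patch] = abs(model(probe) - base)
    return heat

# Hypothetical shortcut-taking "model": only the top-left corner matters.
model = lambda img: img[:2, :2].mean()

heat = occlusion_sensitivity(model, np.ones((8, 8)))
print(heat)  # only the (0, 0) cell is non-zero: the shortcut is exposed
```

            Real frameworks do this far more efficiently (gradient-based saliency, for instance), but the principle is the same: perturb the input, watch the output.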

          2. Allan George Dyer

            Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

            @Loyal Commenter - 'If they give a reply along the lines of "it just felt like the right answer", they've just failed their Voight-Kampff test.'

            Not really; people often find it difficult to explain why something is "not quite right", and the Uncanny Valley can provide examples. In the opposite sense, conmen try to manipulate people into trusting them with the right triggers: "but he seemed so nice".

            1. Anonymous Coward
              Anonymous Coward

              Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

              There is the expression "the ring of truth". I find that when I'm trying to remember something out of many possibilities, the right match really does have a "ding" feeling. It also happens when I'm trying to solve an IT problem, before then applying exacting logic to prove it.

          3. Anonymous Coward
            Anonymous Coward

            Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

            Or the absence of working indicators???

          4. Anonymous Coward
            Anonymous Coward

            Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

            it just felt like the right answer

            That's an interesting point.

            Sometimes - with humans - a 'gut feeling' might still be based on input data. It's just that the person in question can't identify and isolate the particular data that ultimately gives rise to the subsequent decision.

            When I worked in the rat race many years ago, I could always seem to tell if something was up (involving other people, not machines), but could never identify why. At the time, I assumed I was just being paranoid or something, but as time went by, my 'gut feelings' tended to be strangely accurate.

            In the job that I do now, picking up 'the vibes' of the people I deal with is quite important. I can tell if someone's mood is different to the last time I met them within a few seconds, even if they say they're fine. Then, later, it'll come out they've had a negative experience somewhere and it's that I'm picking up. It's impossible to say precisely what it is I'm detecting, but I can just tell it's somehow different. It's my 'gut feeling'.

            I'm sure everyone else can detect it, too. They just don't realise or put it to use.

            1. Charles 9

              Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

              It's most likely what can be attributed to subconscious training. That's part of what makes measuring intelligence so difficult; we don't even understand how we arrive at our own conclusions, as a good chunk of our brain functions are autonomic and outside our conscious perception.

              1. Terry 6 Silver badge

                Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

                A good example of this is where the words come from when we speak. We don't actually select which words we are going to use. We sort of find out as we say them.

                Even more freakily, brain research shows that the signal to move a muscle occurs before we decide to move the muscle. (Can't find/be bothered to look for the reference.)

            2. imanidiot Silver badge
              Coat

              Re: the "AI" was matching the chest drain in the X-Ray and not the symptoms.

              I'm not all that clued in to others, and I've had co-workers where the words "completely and utterly oblivious" wouldn't even begin to describe it. But then again, I didn't choose to become a mechanical engineer because I like working with people. I prefer things I can hit with a hammer if they don't work. While that method works on humans too, it's usually frowned upon.

              "Other people" are annoying and confusing. Best avoid them imho.

      2. Anonymous Coward
        Anonymous Coward

        When tried on real data, it didn't perform so well at all. It turns out that those training images from patients who needed a chest drain had already had one fitted (for ethical reasons!), and the "AI" was matching the chest drain in the X-Ray and not the symptoms.

        And this is the other big lie that AI people/companies tell - the more you train it, the better it becomes, or like in TFA:

        "A system with two years’ learning under its belt will do the job far better than a new one that has to learn everything from scratch. It's the same reason that someone who's been with your institution for two years works more efficiently than a newly hired employee. Experience matters."

        In scenarios like this, where you eventually find that you've trained the wrong thing, you go back to the start, redesign something, and start from scratch again. AI is like having a job role that you keep firing people from: "the model" is constantly being adjusted and retrained.

        We have an AI team; they're currently training a model to crop images to the right size for the website. 5 developers, 4-5 years of time so far, and the thing still isn't a) in production, or b) any better than the hand-coded auto-cropper which gets adjusted by a mechanical turk.

        The aim of course is to do away with the people who currently do this as 10% of their working time, or "free them up to do other things" in manglementspeak - there are 10 of them, probably earning 30-40% of what each developer does (and probably just 10% of what AWS earns from the project...) Long time to get the payoff from this.

      3. Yet Another Anonymous coward Silver badge

        Did a similar project for skin cancer. The first thing we had to do was edit out the ruler, which appeared in the clinic pictures taken once someone was pretty sure the mole was bad, but not in the control set of normal moles.

    5. John Miles

      Re: Then they tried it on some real data,

      You know this sort of error is older than computers -

      Anti-Tank Dog -

      Another serious training mistake was revealed later; the Soviets used their own diesel engine tanks to train the dogs rather than German tanks which had gasoline engines. As the dogs relied on their acute sense of smell, the dogs sought out familiar Soviet tanks instead of strange-smelling German tanks.

      1. TimMaher Silver badge
        Windows

        Winter is coming

        And then the German petrol powered tanks froze up.

    6. Ken Shabby

      There was the AI classifier that was trained to identify dogs vs wolves. They discovered that most wolf pictures showed them with snow in the background, and that is what it was recognising.

  4. Franco

    I am reminded of the wonderful game Deus Ex.

    The Illuminati created an AI called Daedalus to advise them, and eventually to analyse the data captured by the Echelon system and identify terrorist groups and threats to the Illuminati. Unfortunately, due to a "pattern matching" error, the Illuminati themselves were classified as a terrorist organisation as well.

    1. Anonymous Coward
      Anonymous Coward

      Having first played Deus Ex in 2000, I found it strangely prophetic - particularly the destruction of a major US landmark by terrorists, leading to the creation of a new government agency and a global war on terror that got rather out of hand...

      1. Franco

        One of the weirdest things about it is that, due to the limitations of computers at the time, the New York skyline didn't include the Twin Towers.

    2. amanfromMars 1 Silver badge

      Well, they would say that, wouldn't they

      Unfortunately, due to a "pattern matching" error, the Illuminati themselves were classified as a terrorist organisation as well. .... Franco

      Who/What decided on that particular "pattern matching" error being a false positive and not a fake negative? I'm calling out MRDA on that if the answer is the Illuminati themselves.

      1. Anonymous Coward
        Anonymous Coward

        Re: Well, they would say that, wouldn't they

        Mens Roller Derby Association?

        Mandy Rice-Davies?

        Multi-Resonant Dipole Antenna?

        1. First Light

          Re: Well, they would say that, wouldn't they

          Mierda? You know, Spanish for BS.

          1. jake Silver badge

            Re: Well, they would say that, wouldn't they

            "Mierda? You know, Spanish for BS."

            You would say that, wouldn't you.

  5. jake Silver badge

    A friend of mine is playing with ...

    ... intentionally using the Butterfly Effect in AI training data as a means of surreptitiously manipulating the end result. He's doing it for fun, as a distraction from his Doctoral studies.

    Or rather it WAS fun ... until I asked him what would happen if "the bad guys" did it.

    1. Steve K

      Re: A friend of mine is playing with ...

      until I asked him what would happen if "the bad guys" did it.

      Probably not "if" - I am sure that this is already done in some cases, and would be impossible to detect (unless you had associated suspicions from system intrusion/access logs)

  6. chivo243 Silver badge
    Trollface

    fool me once!

    "Volunteer my services, only to find everyone else is getting paid for theirs." Never again, not on your life bub! The first thing out of any self respecting contractor's mouth would be "What's in it for me?" Just once I tried to be nice, turned into the longest week of my life!

    1. Steve K

      Re: fool me once!

      "No good deed goes unpunished"

      1. chivo243 Silver badge
        Go

        Re: fool me once!

        Ah, yes, you were there!!

  7. Pascal Monett Silver badge
    Coat

    "Experience matters"

    Yeah, right up until a beancounter looks at his data and says "he costs that much!", and then your experience follows you out the door.

    Never forget : the cake is a lie.

    1. Aristotles slow and dimwitted horse

      Re: "Experience matters"

      Aperture Labs and GLaDOS welcome your participation in the live human UAT phase of our Lethal AI security response programme.

      And yes, the cake is always a lie.

      1. Steve K

        Re: "Experience matters"

        "If life gives you lemons, ask to speak to life's manager!"

        -- Cave Johnson

        1. jake Silver badge

          Re: "Experience matters"

          "If life gives you lemons, put them up in salt!" —Grandma, c.1960

          1. ibmalone

            Re: "Experience matters"

            If life gives you lemons, peel them, taking care only to remove the outer skin with no pith. Seal the peel in a container with about 100ml of pure alcohol (95% will do if 100% not available) per lemon for a few weeks. Strain the liquid and mix it about 1:2 with simple syrup. Ideally lay this up for another week or two. The attractive yellow liquid you have prepared is limoncello.

            After peeling, the lemons can be juiced to use as an ingredient in a cocktail such as a whisky sour, gin fizz or French 75. For the French 75 you will need gin, champagne and some additional sugar syrup; you may also wish to put aside a little of the zest.

            (Yes, ideally you would use Amalfi or Sicilian lemons for this, but if life is giving away lemons I prefer not to look them in the mouth. Also assuming that any lemons life has in stock will be organic.)

            1. TimMaher Silver badge
              Headmaster

              Keep a teaspoon of the juice.

              Nice recipe for the 75 @ibm.

              The juice should be put in the jug that comes with your electric whisk.

              Then add a teaspoon of salt flakes, a teaspoon of Dijonais and a teaspoon of organic white wine vinegar that you have pre-prepared by soaking with some home grown tarragon in its bottle.

              Take an organic egg and put it one side, to get to room temperature, while you swirl the mix in the jug, by hand.

              Add about 10 rotations of a pepper mill, preferably using white peppercorns as they are more aesthetic than black in this case.

              Crack the egg into the jug and mix it with the electric whisk.

              Slowly pour in about 250ml of organic vegetable oil and move the whisk around whilst you do so.

              After a couple of minutes you should have a near perfect mayonnaise.

            2. Tail Up

              Re: "Experience matters"

              1. 1/4 of glass of Moonshine

              2. 3/4 of strong sweet tea

              3. Enjoy

    2. Doctor Syntax Silver badge

      Re: "Experience matters"

      "and then your experience follows you out the door."

      Until the beancounter finds out what the experience really contributed and how much it costs at freelance rates.

      1. A.P. Veening Silver badge

        Re: "Experience matters"

        Until the beancounter finds out what the experience really contributed and how much it costs at freelance rates.

        Most never do, as that is another budget. And even if they do, it is always way too late: the competition pays better for that experience (and for knowledge of the inner workings of their competition).

  8. Dr_N
    Pint

    Smells like, "dog turds"?

    I believe the correct terminology used by the tasting-note bores is, "Nutty."

    1. Anonymous Custard
      Trollface

      Re: Smells like, "dog turds"?

      Depends what you've been feeding the dog ...

      1. The Oncoming Scorn Silver badge
        Alien

        Re: Smells like, "dog turds"?

        I'm reminded at this point of the closing line from (Grant Naylor) "The Strangerers".

        "Look! Dog eggs!

        1. Dr_N

          Re: Smells like, "dog turds"?

          Viz stickers: "Caution: Dog Eggs In Transit"

          1. Keven E

            Re: Smells like, "dog turds"?

            Tastes... yet... also smells (like) the dogs bollocks.

      2. TRT Silver badge

        Re: Smells like, "dog turds"?

        Squirrels.

    2. jake Silver badge

      Re: Smells like, "dog turds"?

      Only if you feed your poor dog a vegan diet based on soybeans ...

      1. Tim99 Silver badge
        Devil

        Re: Smells like, "dog turds"?

        A number of years ago my neighbour was/is a vegetarian. So was her dog. The dog didn't seem to like me much until I started surreptitiously feeding it small pieces of my bacon sandwiches. Well my homemade bread was organic & vegetarian...

        1. imanidiot Silver badge
          Flame

          Re: Smells like, "dog turds"?

          Poor dog. Those scraps were probably the only thing keeping it alive. Stupids forcing their eating habits on their (carnivorous) pets should be done for animal cruelty.

    3. James Anderson

      Re: Smells like, "dog turds"?

      But he lives in France. How could he possibly get a beer that smelt of anything but water?

      Real water that is, not French tap water or Evian.

      1. TRT Silver badge

        Re: Smells like, "dog turds"?

        Making a light beer is like making love in a canoe. They’re both f***ing close to water.

        1. jake Silver badge

          Re: Smells like, "dog turds"?

          a) Your comment needs help, it's not coherent.

          b) It's actually much harder to make a decent lighter beer like a lager than a darker, heavier stout or ale, because there is no place for off-flavo(u)rs to hide. Simple ales are a lot easier to make than American industrial lager clones. If you don't believe me, try it.

          1. TRT Silver badge

            Re: Smells like, "dog turds"?

            Both end up f***ing close to water. Better?

            And yes. I have no problems personally with light beers though I do prefer a rich brown mild ale myself. There used to be a bottled beer called Forest Brown Ale. Lovely stuff. Not seen it for years.

        2. Charles 9

          Re: Smells like, "dog turds"?

          Ever thought both are DESIRED? Something that's close to water but isn't actually water is a boon to hot climes. As for sex in a boat...there have been stories.

  9. brotherelf

    Ah yes, Artificial Stereotyping.

    It might be good for writing romance novels and sci-fi pulp, though? (Or at least as good as the current crop of acute adjectivitis.)

    1. Terry 6 Silver badge

      Re: Ah yes, Artificial Stereotyping.

      Now then, you've hit on a nugget of real world truth there. The curriculum for teaching young kids to do creative writing is very much focussed on having lots of adjectives. (And other "parts of speech" but mostly adjectives/adverbs).

      The Powers-That-Be have decided that there is a formula for good writing that has a high bias toward using these "wow words"- especially in opening sentences.

      Imagine an AI trained by such people, or indeed the current crop of youngsters when they've grown up...

      1. jake Silver badge

        Re: Ah yes, Artificial Stereotyping.

        Shit, we've been putting up with that kind of thing for years. Ever read any Verne or Wells? How about the Brontës? At least Shakespeare had the cojones to poke fun at it ...

      2. Anonymous Coward
        Anonymous Coward

        Re: Ah yes, Artificial Stereotyping.

        Don't forget the fronted adverbials. I haven't a clue what the f**k they are but the way the teachers all harp on about them they must be absolutely critical to writing anything meaningful.

        1. Terry 6 Silver badge

          Re: Ah yes, Artificial Stereotyping.

          I had to check back on that one myself -despite a) having done a fair bit of classroom teaching since my retirement from being a literacy specialist and b) having an 'A' level and a large chunk of my degree in Eng Lit.

          Because it isn't something anyone should need to bother about, if not for the fact that it's part of that same "Wow words" bollocks.

          Example,

          "Suddenly, Francine shot the politician".

          "Suddenly" being an adverb. And it's fronted because it comes before the verb it describes.

          But the point to it is that in the Behaviourist inspired world of curriculum design UK style it's an element of their model of "impactful" (I think the word is actually in a curriculum document somewhere- certainly in training materials) writing.

          Being very much a Behaviourist model of literacy teaching, everything has to be taught from mechanical, testable components rather than the messy, intuitive, subjective, real life activity that is literacy. It's also easy and cheap to publish then sell training materials to hard pressed teachers at inflated prices. Teachers are under pressure for the kids to get ticks in boxes, so must use this stuff in the prescribed manner.

          It suits politicians because it is testable and measurable - whether it adds up to decent writing is another matter. IMHO it creates an army of clone Zombies - every kid churning out the same rubbish to get the marks. And by the way, the same goes for the focus on "Phonics". Totally Behaviourist in method. Easy to teach, easy to test, easy to design programmes, easy to sell, both financially and politically- because it seems logical - even if it doesn't match how we actually read.

          And, to briefly draw this back to "Artificial Intelligence", it seems to me that the approaches I've read about also seem quite Behaviourist in the underlying thinking- I may be wrong.

          1. James O'Shea

            Re: Ah yes, Artificial Stereotyping.

            Did Francine get a medal?

  10. Roger Kynaston

    same old story

    GIGO!

    Love the point that always applying updates straight away is a stupid idea.

  11. Irony Deficient

    • Park in the same spot at the supermarket

    Why is this item on your “I will never do this again” list?

    1. Alistair Dabbs

      Re: • Park in the same spot at the supermarket

      I rank it alongside "always get the same haircut" as being a sure sign of turning into a sad old bastard who's given up on life.

      1. Red Ted
        Go

        Re: • Park in the same spot at the supermarket

        Yes, I agree, as I generally park in the same spot in the work car park.

        On the occasions I park in a different space, I inevitably can't find my car when I want to leave again!

    2. Pen-y-gors

      Re: • Park in the same spot at the supermarket

      It is very sensible. As one ages, one's memory often becomes a little, thingy, you know. Particular problems come from trying to remember a specific instance of a regular event, like parking in the supermarket. The brain just drops the info as soon as it comes in. So, wandering out with 3 large carrier-bags, you're standing there like a plonker for ages trying to remember where you parked THIS time. Much better to stick to the same place, and stick a flag on your car radio aerial.

      I have a similar problem parking in town. Probably a dozen spots where I can sometimes find a slot, so drive from one to the next until I find a space. An hour later: where the heck did I park! I have to mentally replay my route until it clicks.

  12. Caver_Dave Silver badge
    Joke

    Wrong day out for the significant other

    You take your beer tasting, whereas I ...

    I booked a table for the evening as my girlfriend said she needed to go out. How was I to know that she couldn't hit a snooker ball!

  13. Clinker

    A wonderful Friday article! Thank you Mr Dabbs!

  14. Anonymous Coward
    Anonymous Coward

    "All you’ll get is a machine that’s learnt to be as unconsciously racially biased in its profiling as the arresting officers and judges delivering the sentences."

    Rotherham.

  15. Astrohead

    Artificial Stupidity

    I'm developing an artificial stupidity system. Progress has been phenomenal.

    I think that's primarily due to the vast data set I have to work with.

    1. jake Silver badge

      Re: Artificial Stupidity

      There is no Artificial Stupidity ... Stupidity is the most common thing in the infinite Universe, therefore all examples of stupidity already exist naturally.

      "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." —Albert Einstein (supposedly)

      "Apart from hydrogen, the most common thing in the universe is stupidity." —Harlan Ellison

      "There is more stupidity than hydrogen in the universe, and it has a longer shelf life." —Frank Zappa

  16. shortfatbaldhairyman
    Flame

    Forget mimicking, they are fragile

    Reminds me of my research days. I was trying to figure out HOW to show that a particular neural net (not that that was the name used) did learn. A friend suggested that I look even more closely at some standard optimisation techniques I was using, and all seemed well.

    Until I realised that I was essentially showing how fragile the whole shit was.

    Repeat after me, neural nets are not generic.

    And a few years ago started reading about this deep learning and had smoke out of my ears. Called a friend who is still in academia and asked him about "old wine in new bottles". He laughed and said "It is not that bad".

    Indeed.

  17. Muscleguy
    Boffin

    It’s the wetware

    As a sometime neuroscientist it’s in part because our brains work more like massively parallel analogue computers than digital ones so using digital computers to make intelligence like ours is doomed to fail.

    For those who like a challenge I recommend Peter Ulric Tse’s The Neural Basis of Free Will: Criterial Causation. A background in neurophysiology is recommended but there is also an argument in formal logic at the end.

    Basically when a thought goes once round the brain as the signal passes through neurons there are mechanisms which alter the set point of the neurons on the fly. So when the thought comes around again you can think about it differently, make connections from it etc. The range of mechanisms which do this is quite large and it is likely we have not detected all of them yet.

    It MAY be possible to make silicon emulate this but the computing power for a simple network modelling all the permutations will be enormous. The human brain is the most complex thing we know in the universe. Modern AI is just big data, it is NOT the route to machine intelligence.
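A toy sketch of that set-point idea, purely as an illustration and not a model from actual neuroscience (all names hypothetical): a neuron whose firing threshold shifts every time a signal passes through it, so the identical input can be treated differently on the next pass.

```python
# Toy model: a neuron whose firing threshold adjusts on the fly,
# so the same input produces different outcomes on repeat passes.

class AdaptiveNeuron:
    def __init__(self, threshold=1.0, adapt_rate=0.1):
        self.threshold = threshold
        self.adapt_rate = adapt_rate

    def fire(self, signal):
        fired = signal >= self.threshold
        # Crude homeostatic-style adjustment: firing raises the
        # threshold, silence lowers it, so the set point drifts
        # with every pass of the "thought".
        if fired:
            self.threshold += self.adapt_rate
        else:
            self.threshold -= self.adapt_rate
        return fired

n = AdaptiveNeuron()
# The identical input alternates between firing and not firing
# as the set point moves underneath it.
results = [n.fire(1.05) for _ in range(5)]
```

Even this one-parameter caricature shows why modelling all the permutations gets expensive: with every element's set point changing per pass, the state space grows with the history of activity, not just the wiring.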

    Oh and if we ever make a brain, it will be a baby and will need to be taught, and corrected just like a human being. Since we can make real human beings fairly easily and in a fun way, other than for interest’s sake why would you do this? So you can treat the conscious robot like a slave?

    1. Anonymous Coward
      Anonymous Coward

      Re: It’s the wetware

      "Since we can make real human beings fairly easily and in a fun way..."

      Unless one happens to not be set up that way...

    2. Anonymous Coward
      Anonymous Coward

      Re: It’s the wetware

      "Since we can make real human beings fairly easily and in a fun way, other than for interest’s sake why would you do this?"

      The expectation is:

      1) such creations will live much longer than humans.

      2) their state at any time can be replicated in clones.

      As human knowledge gets distributed into manageable portions, it creates silos whose knowledge is incomplete or built on misconceptions. People learn levels of abstraction as a "truth" - without an understanding of the constraints beneath them.

      The Enlightenment was three centuries ago. Looking at the news recently it appears that several generations have been raised without the benefits of such thinking.

    3. ThatOne Silver badge
      Devil

      Re: It’s the wetware

      > Since we can make real human beings fairly easily and in a fun way, other than for interest’s sake why would you do this?

      Because, first of all, an AI doesn't get paid, and second, it doesn't mind working 24/7.

      That's about all the reasons you need. Who cares if they do substandard work? Humans aren't always super-efficient either. The big difference is AI is CAPEX, while a human is OPEX.

    4. Anonymous Coward
      Anonymous Coward

      Re: It’s the wetware

      and the site https://laetusinpraesens.org certainly offers a vast array of information on this matter

  18. Fruit and Nutcase Silver badge
    Coat

    Eurotrash

    Watch trash TV after midnight

    Make shit up for a weekly column on an IT news website

    Alistair "Antoine" Dabbs? SFTWS as a Eurotrash for the IT crowd? (Ok, maybe that is going a bit too far).

    Eurotrash was broadcast by Channel 4 on a Friday (2230?). Just like Eurotrash, SFTWS is "produced" in France.

    We even had Pipi and Popo make an appearance here recently.

  19. Willy Ekerslike

    AI vs NS

    As somebody much smarter than me once remarked "Artificial Intelligence will never beat Natural Stupidity!"

    1. amanfromMars 1 Silver badge

      All urBases Belong to Us. What Do You Now to Succeed? Fight Defeat or Concede Victory?

      As somebody much smarter than me once remarked "Artificial Intelligence will never beat Natural Stupidity!" ....... Willy Ekerslike

      Advanced IntelAIgents do not engage or compete against Natural Stupidity. Such is how and why ITs Progress is so rapid and rabid in those dumb sectors and terrified vectors drowning in its Base Subprime See Scapes ....... Perverse Subverted Corrupt COSMIC Guidance Systems

      Does humanity Do you like to think the converse/obverse/reverse, because it gives you something a warm blanket of cold comfort, that Natural Stupidity engages and competes against Advanced IntelAIgents?

      How well do you imagine any prize element or sundry vital component encountered or launched from either side, faring in that entanglement?

      Be honest now, for your life certainly depends upon the honesty of such a reply.

      1. amanfromMars 1 Silver badge

        Re: All urBases Belong to Us. ......... Fight Defeat or Concede Victory?

        00ps .... I almost forgot to give you ... COSMIC is an abbreviation for "Control Of Secret Material in an International Command" ......to try further help you understand the dire straits present condition of the current future situation you be in.

        1. jake Silver badge

          Re: All urBases Belong to Us. ......... Fight Defeat or Concede Victory?

          Huh. And here I always thought COSMIC was an abbreviation for "Common System Main Interconnecting Frame". Must be my telephony background rearing its ugly head again ...

      2. jake Silver badge

        Re: All urBases Belong to Us. What Do You Now to Succeed? Fight Defeat or Concede Victory?

        "Advanced IntelAIgents do not engage or compete against Natural Stupidity."

        Don't be silly, amfM. That's EXACTLY what we are building them for ... to do the drudge work formerly done by the under-educated and ineducable.

        If you want something truly worth railing against, how about the sorry state of education in the Western World? In theory we've had the time, money and capability to give every child a real education for over a century ... and yet we've intentionally crippled the education budget pretty much everywhere. One of the first places any government makes spending cuts has been in education these last fifty years or so ... and people wonder how Trump got elected? I give you exhibit A ...

        1. amanfromMars 1 Silver badge

          Corrupt Fuzzy Logic for all Intent on Tempting Wares and Tasting Fare from Hell. Take Care.

          "Advanced IntelAIgents do not engage or compete against Natural Stupidity."

          Don't be silly, amfM. That's EXACTLY what we are building them for ... to do the drudge work formerly done by the under-educated and ineducable. ..... jake

          ?????? Pray tell, jake, how the "we" in all or any of their wisdom, expect the Naturally Stupid to do the drudge work formerly done by the under-educated and ineducable, and for the practise not to create a perfect base breeding ground for revolt and revolution, madness and mayhem in their midst and surrounding them and feeding them their daily bread, milk and honey.

          That is a permanent ACTive vulnerability always ready for exploitation by more than just the Naturally Stupid anywhere and everywhere. To say such is one of the silliest of building projects is not the most accurate of descriptive superlatives to use whenever the program is so devastatingly self-destructive of elite executive officers and non-state support actors. ..... the nucleus core of the operation gone critical and rogue and into runaway China Syndrome meltdown phase.

          Your last paragraph may perfectly highlight the underlying root cause of all of the present problems with madness and mayhem, conflict and chaos and why the novel solutions for viable resolutions will not be found in the conventional and traditional/homegrown harvested..

        2. Charles 9

          Re: All urBases Belong to Us. What Do You Now to Succeed? Fight Defeat or Concede Victory?

          The reason education keeps getting cut is because no one can agree on what needs to be taught: not even the basics, no thanks to all those mental liberation movements and talks about conspiracies and everything being a lie over the past few decades. And if that means civilization stops having a stable footing, anarchists in the bunch will emphatically reply, "THANK YOU!"

          1. amanfromMars 1 Silver badge

            Re: All urBases Belong to Us. What Do You Now to Succeed? Fight Defeat or Concede Victory?

            And what could possibly go wrong with the following Fuzzy Wuzzy type action? .....

            The Biden administration is gearing up to carry out cyberattacks aimed at Russian networks, the New York Times has reported, describing the provocation as a retaliatory measure designed to send Moscow a message. ..... https://www.rt.com/usa/517481-cyber-attack-biden-russia-solarwinds/

            What are they toking/dropping/injecting in the USA?

            1. Anonymous Coward
              Anonymous Coward

              Re: All urBases Belong to Us. What Do You Now to Succeed? Fight Defeat or Concede Victory?

              ...Joined with another elegant question "What is it ticking in the USA"...

            2. Terry 6 Silver badge

              Re: All urBases Belong to Us. What Do You Now to Succeed? Fight Defeat or Concede Victory?

              Good rule of thumb: if it's claimed that a government is about to carry out a sneaky action somewhere, it won't be. Or at least not that particular one. Because you don't warn them of your move before you make it.

  20. WolfFan Silver badge

    Many years ago

    I read a SF story set in what was then the near future (the date in question is now in the past) in which the author had Roomba-like Artificial Stupid devices which sufficiently annoyed users that researchers did a bit of genetic engineering on assorted rodents, felids, and canids because it was easier to give them thumbs and training than to get the AS devices to work properly. The researchers failed to consider exactly what rats and cats with thumbs might get up to. Mayhem ensued. (The dogs behaved. It is possible that the author was not a cat person.)

    1. Version 1.0 Silver badge

      Re: Many years ago

      "I've seen things you people wouldn't believe. Attack ships on fire, off the shoulder of Orion. I've watched C-beams glitter in the dark near the Tannhauser Gate. All those moments will be lost in time, like tears in rain. Time to die..." - Roy Batty

  21. Giacomo Bruzzo

    Thank you

    This is fabulous, I just peed myself a little.

  22. FlamingDeath Silver badge

    “some of humanity’s greatest achievements arise from hunches, guesswork and pure luck rather than from the painstaking evaluation of evidence.”

    I’m not sure this is a positive trait; one of these days we’re going to be messing with stuff that can destroy whole universes. I’m still waiting for those pricks at CERN to throw some samples into the test chamber and open the Gates of Hell, albeit accidentally and unintentionally.

    That prat Oppenheimer and his team, somehow managed to create a terrifying weapon and this fact eluded them, all the way up to detonation, only then did the fucking penny drop.

    Yes mate, while you’ve been busy solving problems, you failed to notice the multitudes of problems your dumb fucking brains are creating

    “Slow clap”

  23. FlamingDeath Silver badge

    Are smart people smart?

    It sounds like a silly question right?

    Oppenheimer didn’t say “I shall become the destroyer of worlds” prior to building the atom bomb

    He said it after the fact, as if it was a surprise to him

    That doesn’t sound very smart, it sounds awfully autistic

  24. Potemkine! Silver badge

    Install update or not install update? That is the question.

    Either you install them and risk having your system broken by a bug, or you don't install them and risk having your system hacked through a vulnerability the update would have fixed.

    What do you prefer Sir, impalement or quartering?

  25. Potemkine! Silver badge

    Big Data is dead

    Or so say Gartner.

    This is the end of Big Data, now it's Small & Wide Data!

  26. Bitsminer Silver badge
    Pint

    Coincidentally

    I watched Billion Dollar Brain the other night. I seem to recall that it (the BDB) did order someone to kill poor Harry. But he survived.

    Kudos for the Harry Palmer reference.

  27. Anonymous Coward
    Anonymous Coward

    Biden To Go ?

    Is this the thing Biden wants to do - turning off electricity in some places on Earth? That would be an act of w*r. By coincidence, some country might have a plan to attack its separated regions at the very same time.
