AI in the enterprise: AI may as well stand for automatic idiot – but that doesn't mean all machine learning is bad

Welcome to the inaugural Register Debate in which we pitch our writers against each other on contentious topics in IT and enterprise tech, and you – the reader – decide the winning side. The format is simple: a motion is proposed, for and against arguments are published today, then another round of arguments on Wednesday, and a …

  1. Robert Grant

    Deploying machine learning models still requires traditional hardcoded techniques

    What are traditional hardcoded techniques?

  2. katrinab Silver badge
    Meh

    1. Machine Learning is not AI

    2. The training data is the code, and the "AI" algorithm essentially compiles this into assembly language that the computer understands.

    If there are problems with the training data, then the algorithm won't work.

    Problems with the training data might not always be the most obvious ones. For example, there was an attempt to train a computer to recognise a medical condition from images. The doctors who supplied the images helpfully circled the areas indicating the condition was present. A human learning from these images would understand that the circle was an annotation added by the doctor, and concentrate on the area it highlighted. The computer simply learned to look for images with circles drawn on them, because it had no way of knowing the circles were not part of the original image.
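    That shortcut failure can be sketched in a few lines. Below, a naive "learner" simply picks whichever single feature agrees with the training labels most often; because the doctor's circle is a perfect proxy for the label, that is the feature it keys on. The data and learner here are entirely made up for illustration:

```python
# Toy illustration (synthetic data): each sample is
# (has_circle_annotation, has_subtle_lesion_texture, label).
# In this training set the circle is a perfect proxy for the label,
# while the actual lesion feature is noisy.
train = [
    (1, 1, 1), (1, 1, 1), (1, 0, 1),   # positives: all were circled
    (0, 0, 0), (0, 1, 0), (0, 0, 0),   # negatives: never circled
]

def best_feature(data):
    """Naive learner: pick the feature whose value agrees with the label most often."""
    scores = []
    for i in range(2):
        agree = sum(1 for row in data if row[i] == row[2])
        scores.append(agree)
    return scores.index(max(scores))

print(best_feature(train))  # 0 -> the learner keys on the circle annotation
```

    Nothing in the training data tells the learner that feature 0 is an artefact of annotation, so it happily latches onto it.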

    Computers are no more intelligent now than they were in the 1970s when Unix came out. Most of them still run Unix, or something based on the Linux kernel, which is basically the same as Unix. Windows came later and is different, but not in any way that is relevant to intelligence.

    Matt Parker has a series of videos on YouTube about using machine learning to teach a computer noughts & crosses (tic-tac-toe, in US English). As noughts and crosses is a very simple game, the ML model is small enough that you can understand what is going on. With suitable training it can learn to play a perfect game, but it doesn't understand the different strategies you can use (eg. corner / centre / edge) or how to deploy them to your advantage.

  3. Chris G

    "1. Machine Learning is not AI"

    And there you have it. Stop calling machine learning, deep learning and neural networks intelligent; they are not. Using the term Artificial Intelligence for any of the current uses, at the current state of the technology, is fraudulent.

    Without doubt there are machine learning algorithms that are extremely useful, particularly in industry and medicine.

    Are they clever? Yes.

    Are they intelligent? No!

  4. nautica Silver badge
    Happy

    Does this ring a bell, artificial-intelligentia?

    "I have found that the reason a lot of people are interested in artificial intelligence is [for] the same reason a lot of people are interested in artificial limbs: they are missing one." -- David L. Parnas

  5. nautica Silver badge
    Happy

    "cold fusion", anyone?

    You will perhaps notice that there is no longer a stand-alone entity at MIT known as The Artificial Intelligence Laboratory.

    Perhaps MIT, taking a cue from Duke University (which, perhaps, waited a little TOO long), did not want its name sullied DIRECTLY by association with a dubious, and certainly NOT falsifiable, concept. Duke's close call with disaster? Nothing less dubious than parapsychology and ESP: Duke went so far as to build and fund an entire laboratory for researching it, and only after many years was that entanglement, with J.B. Rhine, dissolved.

    This is not to suggest, in any way, that there might possibly, even in the slightest, be something dubious about the concept of "Artificial Intelligence". Not one bit.

    Or perhaps it is no coincidence that the AI Lab was quietly downgraded at about the same time as Marvin Minsky (the Lab's founder and head of many years) accepted a $100,000 research grant from Jeffrey Epstein in 2002, four years before Epstein's first arrest for sex offenses.

    Do the searches yourself; they are simple and readily available. You'll find them enlightening, to be sure.

    1. jake Silver badge

      Re: "cold fusion", anyone?

      You will notice that SAIL had the good grace to close down in 1980 ... only to be reopened in 2003, when they noticed a largish cash injection was available from a new generation of suckers.

  6. John Geek
    Devil

    I've always considered AI an abbreviation for Artificial Ignorance.

    1. jake Silver badge

      It has long been the abbreviation for ...

      Artificial Insemination.

      About three years ago my large-animal Vet came in with a funny bit of advertising. This guy's in his second career: he became a Vet after 25 years as a DBA working for IBM. He knows I'm a computer guy and thought I'd be amused. The ad was for a large-animal veterinary practice management software package, "NOW WITH AI!!!"

      The Vet was laughing, and wondered how many times the company in question got Vets inquiring about their new Artificial Insemination package. Without a pause, I dialed the 800 number ... the answer was over 80% of calls! The guy on the other end wasn't amused when I suggested they fire their marketing genius and hire an AI expert ...

      1. Evil Scot
        Paris Hilton

        Re: It has long been the abbreviation for ...

        But marketing genius is just one half of the AI process.

  7. Anonymous Coward
    WTF?

    AI = Advertising Insertion

    AI has become a marketing feature like blockchain. It is something that you claim to have if you can.

    The vast majority of what is claimed to be AI was and remains pattern recognition. At best it can eliminate the human bias to find patterns where none exist or to find patterns which humans miss. However it still generates results with significant false positives and false negatives mostly due to a flawed starting dataset or flawed constraints.
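    The false-positive / false-negative point is easy to make concrete. Here is a toy tally over synthetic predictions and ground-truth labels; nothing below comes from a real system:

```python
# Count true/false positives and negatives for a toy classifier.
def confusion(preds, labels):
    """Return (tp, fp, fn, tn) for paired 0/1 predictions and labels."""
    tp = sum(1 for p, l in zip(preds, labels) if p and l)
    fp = sum(1 for p, l in zip(preds, labels) if p and not l)
    fn = sum(1 for p, l in zip(preds, labels) if not p and l)
    tn = sum(1 for p, l in zip(preds, labels) if not p and not l)
    return tp, fp, fn, tn

preds  = [1, 1, 0, 0, 1, 0]   # what the "AI" said
labels = [1, 0, 0, 1, 1, 0]   # what was actually true
print(confusion(preds, labels))  # (2, 1, 1, 2)
```

    Even a classifier that is right most of the time still produces both kinds of error, and a flawed starting dataset shifts both counts.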

    True AI research is going on with neural networks that simulate organisms with a limited and defined number of neurons and neural connections. But don't expect these to be commercially available in our lifetime.

    1. werdsmith Silver badge

      Re: AI = Advertising Insertion

      I know that some are using simple linear regression and calling it AI. I believe LR goes back to Gauss in the early 19th century, or even earlier.
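      For what it's worth, the whole of simple linear regression, the early-19th-century technique in question, fits in a few lines of plain Python via ordinary least squares (the data points below are invented):

```python
# Ordinary least squares for a single variable: fit y = slope * x + intercept.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x, with a little noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # 1.96 0.15
```

      No training clusters, no GPUs: just a closed-form formula that Gauss would recognise.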

    2. Mike 137 Silver badge

      Re: AI = Advertising Insertion

      "At best it can eliminate the human bias to find patterns where none exist"

      In theory, maybe. But in reality, because nothing can "make sense" to the machine, it can, and frequently does, misinterpret patterns. There's a mass of research on this, from road signs to turtles.

      The human capacity for recognising the incongruous has not so far been understood well enough for any real attempt to implement it on machines. However, I think it may be based to a great extent on the truly vast amount of experience a human has gained by the time they're an adult. The "training set" is probably many orders of magnitude larger than any we have for "AI", as the human has been gathering and sifting information with context, 24/365, for a couple of decades by the time they're compared to the machine.

      In any case, every implementation of "AI" so far has been a one-trick pony, whereas humans are nothing if not versatile.

      However, for basic repetitive jobs, "AI" can be cheaper and faster than humans. On the radio today it was reported that a machine learning based cherry sorter on a fruit farm can sort 30 cherries a second and replace 40 people.

    3. Anonymous Coward
      Anonymous Coward

      Re: AI = Advertising Insertion

      I agree. Without a model of causation, everything touted as AI in marketing bumph is a pattern-recognition machine. And often not a very good one, because it 'knows' only which parts of a pattern are common, not which parts are significant, and if a pattern wasn't in its training set it is literally clueless. If it's being used to look for patterns in a dataset that can't modify its behaviour or be modified (e.g. tumours on X-rays), it can be a powerful adjunct to human skill. The real danger comes when you start polluting the AI's training set with the results of its own pattern recognition and calling it 'Machine Learning', because that's the fast route to algorithmic reinforcement of biases, implicit or inadvertent.

  8. USER100
    Headmaster

    Agree with the gist of the article but

    It's badly worded.

    > Feed them [the algorithms] an input that slightly deviates from the ones encountered during the training process and it’ll make mistakes. {...and THEY'LL make mistakes, or feed IT [the algorithm] an input...}

    > There are all sorts of issues with modern machine learning that it’s no wonder you’re highly suspicious of the rising number of companies {There are [SUCH] issues with modern machine learning that it's no wonder... or 'There are all sorts of issues with modern machine learning, (SO) it's no wonder...'}

    On a non-grammatical point,

    > AI has now progressed enough that cloud companies even offer off-the-shelf models to help companies perform simple tasks like translating languages or recognizing objects and faces.

    Recognizing faces is not a simple task for a computer.

    1. big_D Silver badge

      Re: Agree with the gist of the article but

      Neither is translating languages. Google's efforts, especially with English <-> German are still laughable, when not downright dangerous.

      It was a few years ago now and I did submit the correct translation afterwards, so it did learn, but:

      Do not open the case, high voltage inside -> Das Gehäuse öffnen, Starkstrom drinnen (Open the case, high voltage inside)

      Do not open the case, no user serviceable parts inside -> Das Gehäuse öffnen, nichts drinnen (Open the case, nothing inside - not what you want to read, when you just paid 4,000€ for a new industry PC).

      Google Translate has real problems with formal English. Use "don't" and it translated correctly as "tu es nicht", but use "do not" and it ignored the "not" and gave out "tu es" (do it).

      I did an internship at a translation company and, whilst my translations weren't anywhere near the professional standards required, they were still readable, accurate and a million times better than what Google, Bing or other translation engines could or can manage.

      While telling someone to open the case because there is high voltage inside is laughable, it can also be downright dangerous if you have absolutely no grasp of the source language and are 100% reliant on the translation. I've tried feeding German news articles through Google Translate to post them on English forums, for speed, but I still usually have to go back and re-edit the result, because it can't cope with the subtle way a sentence in German can imply the negative of a situation without seeming to do so at first glance - and not even after analysis by their so-called AI.

      1. Nifty Silver badge

        Re: Agree with the gist of the article but

        I've noticed exactly the same type of German-English errors with Google Translate. I think the reason is, at some point users decided it was 'good enough' and have stopped feeding back corrections to train the system.

    2. diodesign (Written by Reg staff) Silver badge

      "It's badly worded"

      I've fixed those minor issues -- software has little bugs, articles have slightly wonky grammar from time to time. Don't forget to email corrections@theregister.com if you want to report things like this.

      Also on the point of facial recognition being simple -- the end result is, yes. It mostly works in a lot of production systems (hi, China). The process inside isn't simple.

      C.

  9. tfewster
    Facepalm

    I was hoping for a more compelling argument...

    ...in favour of AI. I'm a cynic regarding the current state of AI, and a pessimist, so I'd like to be proven wrong!

  10. anthonyhegedus Silver badge

    I'm very cynical about the use of AI in use-cases more critical than asking Google when kittens lose their baby teeth (which I did the other day). It appears to me that the current state of AI is basically lots of statistics and lots of computing heft. None of it actually 'understands' a situation.

  11. Pete 2 Silver badge

    Not AI, but not dumb, either

    Right now (after round #1) I am leaning towards Claburn's argument. Primarily because the term "AI" has been hijacked by marketeers.

    There probably are some true instances of limited Artificial Intelligence out there. But they are in highly specialised areas and might only have a function as research tools.

    Take the example of identifying a cat in a photo. The "dumb" approach is to train a system with many images of cat / no-cat and then have a correlation system that settles on a probability of cat / no-cat. But a truly intelligent system would be given the attributes of "catishness" and would then determine whether any object in the image possessed those features, and how many of them. Part of that intelligent behaviour would be the ability to exclude attributes that are not apparent from a photo - for example: is warm, meows. That might be asking too much, as it is veering towards a model of general intelligence.
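    A crude sketch of that attribute-based idea, with invented attribute names: score an object only on the photo-visible parts of "catishness", explicitly excluding attributes a photo cannot show:

```python
# Hypothetical attribute sets -- the names are invented for illustration.
CAT_ATTRIBUTES = {"pointed_ears", "whiskers", "fur", "tail", "is_warm", "meows"}
PHOTO_VISIBLE = {"pointed_ears", "whiskers", "fur", "tail"}

def catishness(observed):
    """Fraction of photo-checkable cat attributes the object displays."""
    checkable = CAT_ATTRIBUTES & PHOTO_VISIBLE  # drop "is warm", "meows"
    return len(observed & checkable) / len(checkable)

print(catishness({"pointed_ears", "whiskers", "fur", "tail"}))  # 1.0
print(catishness({"fur", "tail"}))                              # 0.5
```

    Knowing which attributes are even checkable from a photo is exactly the step the "dumb" correlation approach skips.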

    However, Quach does not present a convincing case against the motion that AI is not just dumb algorithms, talking about "deep learning" and "machine learning" rather than actual AI. This rebuttal seems to be looking more to the future - what will come rather than what is here now - for example "But there is still a touch of machine learning in the works", and using examples about startups.

    In summary, I get the impression that the two participants have very different ideas of what "artificial intelligence" actually is.

  12. herman
    Devil

    Annoying Idiot

    I always thought AI stands for Annoying Idiot - the technology behind all the tech support bots of ISPs world wide.

  13. Version 1.0 Silver badge
    Joke

    AI is not that bad

    I've never seen AI code do policy U-turns all the time, lie about where the code goes even when there are restrictions on data access (the AI was just checking its DLL, that's not illegal), or say it will get the program done and then wipe out the entire code, claiming the "AI Got It Done".

  14. ThatOne Silver badge
    Terminator

    AI does not exist

    Except maybe in some lab.

    Beyond that, "AI" is just a marketing buzzword for computer code making decisions according to a simple "IF...THEN" condition. Even in complicated cases like driving a car there is no "intelligence" whatsoever; the only difficulty is deciding whether the "IF" condition is met in our non-digital, fuzzy environment (it's far easier to determine whether 2 = 3 than whether that vague blob of pixels is a pedestrian or just a painting on a wall).
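    The "IF...THEN" claim can be caricatured directly: once some upstream process has produced a confidence score, the "decision" really is a one-line comparison. The function name and threshold below are invented:

```python
# The decision step is trivial; all the hard, fuzzy work is producing
# blob_score from raw pixels in the first place.
def is_pedestrian(blob_score, threshold=0.9):
    """Trivial decision: compare a confidence score against a threshold."""
    return blob_score >= threshold

# Comparing numbers is the easy part...
print(is_pedestrian(0.95))  # True
print(is_pedestrian(0.40))  # False
# ...whereas computing blob_score from a vague blob of pixels is where
# all the difficulty (and all the errors) actually live.
```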

    Marketing has always tried to call a nag a purebred, and in this case it's not even a nag, just a wooden horse. Yet, because we have all grown up with SciFi movies full of talking computers displaying character and initiative, we are all tempted to believe those claims. Sorry people, "those are not the Droids you're looking for."

  15. DutchBasterd

    The first url is broken, it lacks _mon

  16. Denarius
    Unhappy

    not again

    AI, so far, is just slightly more flexible expert systems: a narrow set of tasks, a narrow field of "knowledge", narrow application. In its set field an expert system may do well. That does not mean it fits any useful definition of intelligence.

  17. TonySomerset

    Teacher also Judge

    Fallible humans set the teaching tasks and fallible humans decide whether a 'good' outcome should be rewarded. So it is just humans training for human responses, albeit a couple of paces removed - except that no group of humans can simulate the real-world interactions of the whole world of humans.

  18. Il'Geller

    Ilya Geller

    The wrong technology is being used! Instead, personalization, which I call Lexical Cloning, should be applied.
