AI in the enterprise: Prepare to be disappointed – oversold but underappreciated, it can help... just not too much

Welcome to the inaugural Register Debate in which we pitch our writers against each other on contentious topics in IT and enterprise tech, and you – the reader – decide the winning side. The format is simple: a motion is proposed, for and against arguments are published today, then another round of arguments on Wednesday, and we …

  1. Andy Non Silver badge

    Status of AI now

    I think AI is still in the very early stages of development: of limited use and over-hyped. However, in the longer term I foresee AI becoming much more sophisticated and powerful. As a hardware analogy, I'd say AI as it stands now is the equivalent of the original valve-powered computers, and of very limited use.

    I think the software behind AI and the hardware it runs on needs to vastly improve... which will probably come with time.

    1. Anonymous Coward
      Anonymous Coward

      Re: Status of AI now

      "I think the software behind AI and the hardware it runs on needs to vastly improve... which will probably come with time."

      An IBM quote from the mid-90s talking about fuzzy logic?

  2. Anonymous Coward
    Anonymous Coward

    "Artificial intelligence in the enterprise is just yesterday's dumb algorithms rebranded as AI"

    This strikes me as a very poorly phrased debate subject, in that it is probably only partially true (or false) at best, but is also far too broadly scoped, and is nothing like the "AI" question(s) we want to argue about [1].

    "Artificial intelligence" has a range of meanings, "enterprises" differ greatly; "yesterday's dumb algorithms" could be any of a number of approaches, perhaps even including some unfairly maligned examples of yesterday's smart approaches; and I'm not even sure that the in-use meanings of "artificial intelligence" and "AI" always match up.

    Thus it will no doubt generate a great deal of heated discussion, a great proportion of which will be at cross purposes. Well done! :-)


    [1] Of course we have to argue about which is the best question first, before getting around to arguing about the answer.

    1. juice

      Re: "Artificial intelligence in the enterprise is just yesterday's dumb algorithms rebranded as AI"

      > This strikes me as a very poorly phrased debate subject

      I dunno - it feels pretty accurate to me.

      In much the same way as "blockchain" became the answer to all problems - despite solving none of them - AI has become the current big buzzword for enterprises, and is being very loosely interpreted, to the point where it pretty much covers any algorithm which involves an IF() statement.

      Even if you look at neural networks, they're just tools designed to do a single job, and they show no intelligence whatsoever outside of their specialised subject.

      E.g. a neural network designed to identify giraffes can't be repurposed to predict the weather. It can't even be repurposed to identify pictures of lions!

      And even when you're just looking at its specialist subject, there's no guarantee it'll do a good job, because you're dependent on the quality of the data which has been fed into it, which may have a bias, or some other element which the black-box training has decided to give a weighting to.
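      As a toy sketch of that bias problem (hypothetical features, nothing like a real vision model): train a one-neuron "giraffe detector" on photos where every giraffe happens to stand on a savanna, and it cheerfully learns the background instead of the animal.

```python
# Toy "giraffe detector": each image is reduced to two made-up features,
# (has_long_neck, savanna_background). In the biased training data,
# giraffes only ever appear against a savanna.
training = [((1, 1), 1),  # giraffe on savanna  -> "giraffe"
            ((0, 0), 0)]  # empty grey car park -> "not giraffe"

def predict(w, x):
    # Single threshold unit: fire if the weighted sum is positive.
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

w = [0.0, 0.0, 0.0]  # weights for neck, background, plus a bias term
for _ in range(10):  # classic perceptron update rule
    for x, target in training:
        err = target - predict(w, x)
        w[0] += err * x[0]
        w[1] += err * x[1]
        w[2] += err

# An empty savanna, no giraffe in sight, is confidently labelled "giraffe":
print(predict(w, (0, 1)))  # -> 1
```

      The training data never forces the network to tell the neck and the background apart, so it doesn't: the savanna alone triggers a "giraffe".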

      (There's also the point that once a neural network is trained up to the desired level, it's then "frozen" and shipped out. I'd argue that a key element of intelligence is the ability to change and adapt. But then, we're getting into philosophical-discussion-at-the-pub-on-Friday levels...)

      So, yeah. What enterprises call AI is often just a dumb algorithm.

      To be fair, we are increasingly entering a zone where the science is almost indistinguishable from magic, especially on a personal level.

      E.g. point your phone at a plant, and get a full description. Talk to your phone and it'll talk back. Paint out the background in a video conference, or give yourself virtual cat-ears. Or if you receive a social-media message, you can choose from one of the predefined responses your phone has selected for you.

      And so on.

      But these are all individual, highly specialised tools, and it doesn't take too much to blow the smoke away from the mirrors. The fun will come when we do finally figure out how to integrate them into something which is capable of adaptation and evolution...

    2. chroot

      Which option to pick?

      Exactly. AI simply does not exist yet; see Max Tegmark's book "Life 3.0". There is Machine Learning, and that does use new algorithms. Which option to pick?

    3. diodesign (Written by Reg staff) Silver badge

      "Thus it will no doubt generate a great deal of heated discussion"

      Yes, that was the point. It was deliberately loosely defined. I didn't want it to be something like: "Backups are good for your business. Discuss." That's a bit of a dead end.

      I wanted to spark an argument over what exactly AI is, whether it has a place in business, and whether it's just previous software routines with better PR.

      This is El Reg, not Cambridge.


  3. Mike 137 Silver badge

    Whose intelligence?

    The key question that nobody seems to be considering is what criterion we use when comparing machine and human intelligence. Having watched AI developments for some 35 years, I still note that pretty much every implementation is, and always has been, a "one-trick horse".

    The key aptitudes that distinguish smart humans are versatility, adaptability and the ability (which we still don't understand) to exercise intuition, and in over three decades I've seen no evidence that "AI" can do any of these autonomously. Consequently the greatest danger is that we may be aiming to replace humans with simulacra of rather dumb humans. The inevitable outcome of this will be ever-diminishing societal expectations of human mentation, which as a result will be less and less cultivated. This ultimately cannot avoid adversely affecting the developers of AI, as they are drawn from the same population, so there's a good chance the quality of the technologies will spiral downward.

    It might be more effective in the long run to improve our education systems so we can unlock the vast pool of increasingly wasted human potential.

    1. This post has been deleted by its author

    2. JClouseau

      Re: Whose intelligence?

      Not directly related to the topic, but amen to that :

      It might be more effective in the long run to improve our education systems so we can unlock the vast pool of increasingly wasted human potential.

      Emphasis on "increasingly wasted".

      RIP Sir Ken.

    3. ohrm

      Re: Whose intelligence?

      Of course, humans with little experience lack intuition/gut feel too. I remember running a team of young IT engineers in the days when Google had just started to become ubiquitous. They could google an exact error message and try the recommended solution, or do a reboot, but not much else. And when they told me "Google hasn't come back with anything", they could never answer my question: what should you do next, then?

      The main function of intelligence is to be able to make decisions with incomplete information while recognising what biases are trying to fill in the gaps. Even if the decision is 'find out more information'.

      Algorithms have biases (and errors); true AI would recognise these and counterbalance them. We're a million miles away from that at the moment.

  4. USER100

    Performant in the Enterprise

    Whether or not it's "in the enterprise", I take issue with this: 'AI pioneer Marvin Minsky says that AI is basically machines doing what we do'. Yes, AI can be useful for various tasks, but saying that it's 'machines doing what we do' sounds a bit silly.

    +1 for the AC who said this debate is poorly phrased.

    > Of course we have to argue about which is the best question first, before getting around to arguing about the answer.

    I propose building a giant supercomputer...

    1. Anonymous Coward
      Anonymous Coward

      Re: Performant in the Enterprise

      The question seems perfectly adequately phrased; people (including you) just want to answer a different question about AI. That's understandable too: it's a big, emotive area.

      But as it stands the question will show what our fellow IT bods think about AI - whether it's bullsh*t or worth exploring.

      Ergo, well phrased question. Useful too.

      1. doublelayer Silver badge

        Re: Performant in the Enterprise

        I disagree. For one thing, what is a "dumb algorithm"? Does "dumb" just exist in the subject to contrast with "intelligence"? Does it actively mean "stupid", which means what is called AI would be less useful? For that matter, what about "yesterday's"? Does it mean that, if anything new was created and called AI, then I have to disagree because that algorithm wasn't here yesterday? Or perhaps it means that it was based on things known to us already, making pretty much everything a thing of yesterday.

        I think both sides are agreeing that AI is a nebulous term that has come to be applied to many different things, but the question asks us to decide what AI is and we only have two choices. I've seen things called "AI" which are old code and most certainly stupid. I've seen "AI" which is old code but it's rather useful. I've seen "AI" which uses new techniques and has shown dramatic improvement recently. I've seen "AI" which uses new techniques and is either going to be abandoned as useless after eating through a large budget or cause active damage to its unfortunate users. How am I supposed to assign all of this to one of two buckets when I don't even know where the boundary is?

    2. werdsmith Silver badge

      Re: Performant in the Enterprise

      I believe the question is “ Artificial intelligence in the enterprise is just yesterday's dumb algorithms rebranded as AI”

      So, not a question about the current state and capability of AI, but a question about old stuff being rebranded as AI.

      1. doublelayer Silver badge

        Re: Performant in the Enterprise

        So the question is "Have people rebranded old things as AI"? Yes. Question answered. A pity that doesn't seem to be what the arguments in the articles are talking about, since they both agree that this has happened.

        Maybe it's "Everything called AI is something old which has been rebranded"? Depending on your definition of "old", that's either an obvious yes, because every program is going to be based on things that were known to us a while ago, or an obvious no, because I can point to at least a couple tools we didn't have before but we now do.

        Either way, if we're just arguing about whether AI is old, we're going to come up with obvious answers. I interpreted the spirit of the question as involving some level of "Is the stuff called AI of use compared to what was previously available", which would make the debate more worthwhile, but I've now seen at least four interpretations of what the question really asks so I haven't a clue now.

  5. avakum.zahov

    AI - The new hot trend ...

    ... especially at organizations where NI is severely lacking.

    1. Anonymous Coward
      Anonymous Coward

      Re: AI - The new hot trend ...

      NI?
      1. el kabong

        Re: NI?

        Natural Intelligence

  6. Potemkine! Silver badge

    I made my first AI software in 1993, using that beautiful language called LISP (especially beautiful if you find parentheses beautiful). It was able to guess a word you were mentally visualising, and if it failed, it learned so it could succeed the next time.

    What is new, IMHO, is not the algorithms or heuristics, but the volume of data to process, which is far larger than before.

    1. mhoneywell

      Yes. Volume is part of it, agreed. But equally the accessibility: the languages, the tooling, the hardware/cloud/chip support.

  7. all ears

    As the article states, this debate "question" is vague and conflates some very different things. You will certainly get a lot of heat and smoke in return, but not much light.

  8. Schultz

    The primary function of AI is to attract funding.

    Looking for investors? Grant money? Resources for some IT related project in the company? Well, then you better get on the train, because AI captures attention and opens wallets. Until it stops doing so, but I am sure the Next Big Thing (AKA fashion trend) is around the corner ...

  9. Version 1.0 Silver badge

    Proof that it's AI

    Proof will not be seen until AI writes its own code and it works error-free. Until then, it's just programmers thinking that they are smart enough to write code that AI can't write.

    1. JerseyDaveC

      Re: Proof that it's AI

      Did you ever see Cognos PowerHouse back in the 1990s? It was a so-called 4GL, or 4th Generation Language: the idea was that you told it what you wanted in something resembling natural language (yeah, right) and it wrote the 3GL code for you, which was then compiled to a lower level via traditional means.

      In real life, it was rubbish and slow. But that was 25 years ago, so I'm a little surprised we've not seen a 21st Century reboot using modern technology and algorithms.

  10. Huckleberry Muckelroy

    The Concept of Real AI is Years Off

    The massive manipulation of humongous databases by ginormous arrays of infinite processing power is NOT AI.

    It is reminiscent of 1950s muscle cars, which could go very fast, but could not turn, stop, nor last past 50K miles. What they call "AI" is just an exercise in bloat, and a misnomer bigger than calling offsite-storage-somewhere as "The Cloud".

    Real AI cannot be achieved in binary.

  11. martinusher Silver badge

    Great Marketing Tool

    Artificial Intelligence has been around in one form or another for many decades. It's failed to live up to the promise because it's oversold; it's not really 'AI' so much as 'SI', simulated intelligence. This doesn't mean it's not useful, it's just that to get the best out of it you need to understand both its capabilities and its limitations.

    I may be a bit out of date, but I think of the techniques used as Inference Engines, Production Systems and Neural Nets/Perceptrons. Each technique can yield what appears to be intelligent behavior to people who don't know what's going on, but if you know what you're working with then it's easy to drive the system mad (i.e. get erratic and/or meaningless answers from it). My fear is that, given the properties of typical marketing folk and the eagerness of a significant subset of programmers to please, we're going to be saddled with algorithms that 'kinda/sorta/almost' work and people who believe the machine is infallible. Not an exciting prospect.

    1. Anonymous Coward
      Anonymous Coward

      Re: drive the system mad (i.e. get erratic and/or meaningless answers ...)

      Thank goodness you can't do that with people! :-)

  12. Pascal Monett Silver badge

    There is no such thing as AI

    It's all statistics. I did the Google on AI: nothing but statistics. It even says so in the article; it's statistics.

    I have nothing against the tool itself. I'm quite sure that there are many areas in which a statistical analysis machine will indeed help large companies greatly, but it's a question of the volume of data and the expertise of the technician in charge. Small companies will not benefit from a statistical analysis machine because they don't have the volume of data to make the analysis significant. Large companies with reams of data can benefit if they define the problem correctly and apply the proper analysis.
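    That volume point is just sampling statistics: the uncertainty in an estimated mean shrinks roughly with the square root of the number of observations, so a small company's dataset may simply be too small for any analysis to be significant. A rough sketch (made-up numbers):

```python
import math
import random

random.seed(42)

def sem_of_sample(n):
    # Draw n noisy observations around a true mean of 5.0 and return the
    # standard error of the sample mean: spread / sqrt(n).
    xs = [5.0 + random.gauss(0, 2.0) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return math.sqrt(var / n)

for n in (100, 10_000):
    print(n, round(sem_of_sample(n), 3))
# 100x the data gives roughly 10x the precision on the estimate.
```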

    As usual, the bigger you are, the more benefits you get.

  13. Jusme

    AI Is snake oil

    I once wrote a simple "bot" to act as a CPU opponent in an on-line game. It had various levels of ability, from playing purely random (but legal) moves, to analysing the game state and making the move most likely to result in a win, but tempered with a varying degree of randomness. It worked very well - most human players were really impressed by this "AI" opponent, especially when a random move appeared to be "inspired" gameplay.
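    That kind of bot can be sketched in a few lines: score every legal move, usually play the best one, and occasionally play a random one. (The moves and scores below are made up for illustration.)

```python
import random

def choose_move(legal_moves, score, temper=0.2, rng=random):
    # Mostly play the highest-scoring legal move, but with probability
    # `temper` pick a random legal one instead -- the occasional oddball
    # move that onlookers read as "inspired" play.
    if rng.random() < temper:
        return rng.choice(legal_moves)
    return max(legal_moves, key=score)

# Hypothetical game state: each legal move scored by estimated win chance.
scores = {"a": 0.10, "b": 0.90, "c": 0.40}
print(choose_move(list(scores), scores.get, temper=0.0))  # prints "b"
```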

    I've recently taken (well, was forced to take) a course in AI and neural networks. It convinced me that even the experts in this field don't have a clue how it works, and just keep turning up the complexity dial until they get acceptable results from the test data. A big mistake they seem to make is then extrapolating these results to new inputs - at a small distance outside the training set it can look quite convincing, but the further the real-world data gets from the training set the worse the results, up to the point where RNG would be just as effective.
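    The training-set extrapolation failure shows up even in the simplest possible learner. A toy 1-nearest-neighbour "model" of y = x² trained only on the range [0, 2] looks fine near its training data and is nonsense far away (not a neural network, obviously, but the failure mode is the same):

```python
# "Train": sample the true function y = x^2 only on the range [0, 2].
train = [(x / 10, (x / 10) ** 2) for x in range(21)]

def predict(x):
    # 1-nearest-neighbour: answer with the y of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

print(abs(predict(1.55) - 1.55 ** 2))  # inside the range: error ~0.15
print(abs(predict(10.0) - 10.0 ** 2))  # far outside: error ~96
```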

    A further mistake is to misrepresent what their AI baby is doing. "This model recognises numbers". No it doesn't: it has absolutely no concept of "numbers", only a set of arbitrary shapes that it has been told to classify into specific buckets that we call "numbers". Show it a shape that any human would instantly (and yes, sometimes incorrectly) recognise as a number - e.g. a stylised 7-segment numeral or a heavily cursive one - and the AI would fail.

    "So it just needs more training data...", but that's not how human intelligence works: we can recognise numbers, with great accuracy, in forms and contexts we have never seen before. This "AI" is nothing but a poor, over-complicated, incomprehensible pattern-recognition algorithm. A decent engineer could do a much better job at number recognition by writing a proper pattern-recognition algorithm, but that is hard work and needs skill. The "AI" solution just throws lots of data at a black box until it gets good enough results to satisfy the test criteria, no skill required.

    The mistake is to then apply this outside the limited domain of the training dataset and expect "computer" accuracy (i.e. believe it 100%). Intelligence is not a brute-force game; it's much more subtle.

    Phew, good to get that rant off my chest :)

  14. This post has been deleted by its author

  15. elregukac

    Is a Calculator "artificially intelligent"?

    Even a basic calculator can perform multi-digit calculations instantaneously, which a normal human would struggle to do.

  16. Big_Boomer Silver badge

    AI is just another Marketing term for your Buzzword-Bingo sheet

    Nothing I have yet seen even comes close to AI, or even SI for that matter. There are a few that can simulate intelligence briefly, but it pretty rapidly becomes obvious you are dealing with a program. I hope that we are still researching AI, but as far as I can tell these last few years have been an attempt to monetise research systems that are not ready, don't work as they should, and as such are effectively a con. There are very few exceptions out there that I would put my own money into.

    As a concept, AI worries but excites me. You have no idea what I would give to be able to have a chat with Andrew Martin or R.Daneel Olivaw, but given human greed and corruption, I suspect that Skynet and The Matrix are far more likely outcomes.

  17. AVee

    Oh! Shiny!

    "AI can allow you to do new things in new ways."

    As opposed to doing old things in new ways, which would actually be an improvement? Or as opposed to doing new things in old ways which seems impossible?

  18. shortfatbaldhairyman

    Random flailing

    Oh well, here goes. At a tangent, yea yea.

    How can an algorithm be anything else but dumb? Quoting Dijkstra "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

    So yes, may or may not be yesterday's algorithms (they mostly are, with slightly better understanding of structure, FAR better hardware and maybe minor algorithm extensions) but that word intelligence is overused.

    And, when I started reading about deep learning, called up a friend asking what the fuss was about, old wine in new bottles and all. His response was that it was not as bad as that. Maybe not.

    Oh well, can go on and on.

  19. Pete 2 Silver badge

    We probably don't even want AI

    ISTM the great attraction of AI (apart from being novel and hence open to exploitation) is that it is seen as doing things like humans do, only better. But is that the way to go?

    Fish have many techniques for swimming. People also have different methods of propelling themselves through water. Boats or submarines mimic neither of these. Yet more people cross the ocean by boat than by swimming. This is because a purely artificial method is better than the ways that developed naturally. Natural swimming styles are limited by the nature of organic life and by the course of evolution. Building a boat from metal with an engine powered by oil or gas is not.

    And so, back to computing and AI. "Natural" intelligence has evolved (in some people, at least) due to the way the brain is constructed. To try and create AI without that same set up of neurons and their organisation seems like a flawed strategy. However the history of IT is filled with flawed strategies: programming languages that came and went, techniques that had some fleeting trendiness - or that survived far beyond their usefulness, standards that promised much and delivered nothing and a plethora of computer architectures.

    But so far we have come about as close to developing a true AI as we have to making a computer that works like the brain, and at the scale of the human brain. Or as close as we have come to a ship with arms and legs that thrashes through the water.

    1. This post has been deleted by its author

  20. USER100

    The comparison (made by others also) between swimming and submarines, while initially appealing, is not apt. 'Thought', 'sentience', 'consciousness', 'awareness', 'intelligence', 'feeling' (so many words for it, yet it's still so hard to describe) - whatever you want to call it, it is in no way analogous to an engine turning a propeller. You might as well compare a person lifting a weight to a crane, which can lift thousands of times more.

    While totally impractical, a ship with arms and legs that thrashes through the water could be built, but computers versus actual brains? It's not even a mismatch.

  21. mevets

    Yeah, No. It is a pity both can't lose.

    Both sides are little more than apologists for the marketing of automated regression analysis as AI. Since the first was buffoon enough to quote an actual respected scientist in the same context as a pulp science-fiction writer, an implied rebuttal from David Parnas is in order: "Artificial intelligence has the same relation to intelligence as artificial flowers have to flowers".

    There are other quotable bons mots about why anybody with a working limb would choose an artificial one that are pertinent to the discussion, but the bottom line is the same: marketing hype, nothing to see here. Array processors are much cooler now, so we can do better statistics, end of story.

    1. USER100

      Re: Yeah, No. It is a pity both can't lose.

      I agree. Neither side has laid their cards on the table, i.e. they've failed to address the real debate everyone wanted to see: for or against Strong/True AI.

      Maybe because they know the whole thing is a philosophical concept rather than a scientific one.
