Adding AI to everything won't make sense until we can use it for anything

We are teetering on the brink of a golden age of AI. It must be true, we keep being told so. This week's preacher is Samsung, which says it has integrated processors with memory to achieve stellar AI numbers: "Approximately twice the performance in AI-based recommendation applications and a 40 per cent decrease in system-wide …

  1. Anonymous Coward
    Anonymous Coward

    Machine learning is done in Python; AI is done in PowerPoint. Managers don't really care about the technology so much as being able to stuff "AI" into their slides in order to gravitate towards the executive bathroom. It's just a career vector in companies where data isn't seen as the primary revenue stream, and there's very little investment in long-term data infrastructure to facilitate any kind of deep learning venture, because that bit's boring and doesn't make the headlines at the annual executive retreat. A "winter of AI" has been predicted for a number of years now, and the industry kind of needs it to clear out the charlatans and get the hype curve onto the slope of enlightenment.

    1. Anonymous Coward
      Anonymous Coward

      I somewhat agree with this...when execs are involved in AI/ML it's usually a load of bollocks, as the execs tend to have little to no understanding of their request.

      When used as a tool rather than a product, though, machine learning has its merits.

      The major problem with machine learning, though, is data...I recently embarked on a machine learning adventure in the fashion industry. I soon discovered that data across this industry is horrendously inconsistent, which made developing the particular application I was interested in virtually impossible.

      The data is garbage because there is no foresight in the industry in terms of how complete datasets might be useful. The focus in the industry is "get something to market as fast as possible and see if people buy it". Only when they've manufactured a shit load of a product and discovered that it doesn't sell do they start investigating why, or just send it off to a different market/country to test it there.

      A massive problem with fashion at the moment is the sizing of garments. When you shop online you can't try a garment on, so you have to guess the right size and fit (therein lies the key to solving the problem), order it, try it, and then send it back if it doesn't fit. I haven't found any independent statistics on the scale of this, so I can't quantify the problem.

      Size is easy for a customer to work out, but fit isn't...body shapes vary a lot, and just because something is your size, it doesn't necessarily follow that it "fits".

      The problem with "fit" is that you need to work out a customer's body shape, which can be done mathematically, but it's a bit rough and ready and leaves large gaps where "fit" might still be a bit dodgy.

      What I tried to do was work out the shape of someone using their dimensions (hip, high hip, waist, bust, etc.) and then apply the same methodology to garment dimensions.

      Here's the rub...getting a customer's dimensions is easy...you can tell them exactly what to measure, have them input it into a form, and off you go...however, getting all the relevant dimensions for a garment is not so simple...there is always a critical measurement missing. I'm not sure if this is by design or because the industry sucks...I suspect the latter.
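
      To give a flavour of the comparison I was attempting, here's a toy Python sketch (the measurement names, ease tolerances and numbers are invented for illustration, not my actual model):

          # Toy fit check: compare a customer's measurements against a garment's,
          # allowing some "ease" (extra room) per measurement.
          EASE_CM = {"bust": 4.0, "waist": 2.0, "hip": 3.0, "high_hip": 2.5}

          def fits(customer, garment):
              """Both arguments are dicts of measurement name -> centimetres."""
              for name, ease in EASE_CM.items():
                  if name not in customer or name not in garment:
                      return None  # the industry's missing-measurement problem
                  gap = garment[name] - customer[name]
                  if gap < 0 or gap > ease * 2:
                      return False  # too tight, or so loose the "fit" is off
              return True

          customer = {"bust": 92.0, "waist": 78.0, "hip": 100.0, "high_hip": 88.0}
          garment = {"bust": 96.0, "waist": 80.0, "hip": 103.0}  # high hip missing!
          print(fits(customer, garment))  # -> None: can't even score the fit

      Even this trivial version falls over the moment the garment data is incomplete, which in my experience was every time.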

      1. Pascal Monett Silver badge

        The industry sucks

        I've been saying for a while already: what is it with Size 8 (or pick another number, it's the same problem)? Why can't the industry just state the size in centimeters?

        A Size 8 at one store is not a Size 8 at another store, and it's possible it isn't even a Size 8 between two lines of clothes in the same store.

        I'm very sorry if someone has an issue with looking for a 78cm-wide shirt instead of a Size 7 (just inventing numbers here), but wouldn't it be easier to know that if you buy that 78cm-wide shirt, it will fit?

        1. veti Silver badge

          Re: The industry sucks

          I don't see what problem that would solve. You're still trying to describe an entire garment with a single number, when there are many more variables - diameter of cuffs and sleeves and collar, length of sleeve, length of shirt, etc.

          Even the "78cm" will be measured by one label around the shoulders, by another around the chest, and a third will measure the average all the way down the shirt. There will still be enormous differences between shops.

          1. Chris G

            Re: The industry sucks

            Correct, it doesn't make any difference whether the size is stated in centimeters, inches or bananas. What really matters is 'fit'; fit is much more than style, it's the dimensional relationships between the standard measurements.

            What is the distance vertically between waist and chest? Is there a simple taper or is it a concave or convex line between the two measurements and so on?

            The fashion industry, from the designers' point of view, does not want standards, because standards feel limiting and make copying or pirating easier. It also goes beyond simple numbers: fabric type, orientation and seams all make a difference.

            I got involved in a fashion project a decade back with an acquaintance; he spent several tens of thousands of euros, and that was enough to tell him he needed a lot more, so he got out while he could still afford to.

            Fashion software, if it's any good, is expensive, but it is still a fair way from getting you to a single solution that will produce a good fit on every size and shape.

            1. Anonymous Coward
              Anonymous Coward

              Re: The industry sucks

              Yes exactly.

              Fit is what ML can be used for.

              This is what I was trying to do.

              Unfortunately though you need a lot of specific data to define fit.

          2. Nick Ryan Silver badge

            Re: The industry sucks

            But... the entire fit of a man's ("formal") shirt is based on neck size. It doesn't matter how long the body, how broad the shoulders or chest, or what the arm length is... the entire sizing is based on neck size. It's just insane.

            At least with trousers we have waist size and length, which works most of the time; the sizing metrics of other things, however, are often just ridiculous.

            1. Anonymous Coward
              Anonymous Coward

              Re: But... the entire fit of a man's ("formal") shirt is based on neck size.

              Doesn't have to be - that's just the retail clothing industry keeping things simple and cheap. You can always try to find a tailor who'll do something bespoke or semi-bespoke, and they will take more measurements. It costs more, but not necessarily as much as you might think. However, fashion shirts might use more wacky patterns and features that a non-specialist tailor might not find easy to source or replicate.

            2. Anonymous Coward
              Anonymous Coward

              Re: The industry sucks

              ...and chest if you shop outside supermarkets.

        2. Neil Barnes Silver badge

          Re: The industry sucks

          Why can't the industry just state the size in centimeters?

          When a company (looking at you, Levi's!) finds it possible to make the same model and size of a pair of jeans actually the same size, independent of colour, and not have the blue ones noticeably tighter on the waistband than the black ones (consistently, and for at least ten years), then I might start to believe that sizing in cm - or even inches - might work.

          Until then, as the OP pointed out, there are just too many variables. I guess the only solution is to have all of one's garments custom made by one's personal tAIlor...

          1. jake Silver badge

            Re: The industry sucks

            With me, the black Levi's are a hair tighter than the blue (pre-shrunk 501s). So I usually get the original shrink-to-fit variation ... which incidentally seems to last a trifle longer than the pre-shrunk ones, and is a hair cheaper. Win-win.

            Don't forget shoes ... Different manufacturers use different sized lasts. And plants on different continents belonging to the same manufacturer also seem to use different sized lasts. The only good way to purchase shoes is to physically try them on.

            Last time I bought "walking the dawgs" shoes, I tried on four identically labelled pairs. One pair was too small, one too big, and two fit; I bought those two. The two that fit were made in Mexico, the large pair was made in Malaysia, and the small pair was made in Taiwan. Caveat emptor.

            1. Ken Moorhouse Silver badge

              Re: Tailors...

              ...don't like lawsuits.

            2. Anonymous Coward
              Anonymous Coward

              Re: The industry sucks

              Trying them on doesn't take into account wear and stretch. If you're buying leather shoes they ought to be slightly tight, so as to stretch and mould to your foot.

              You can also adjust fit with appropriate socks. I think you'll find fit in various countries is related to climate and the market the shoes were originally made for: where people wear thicker socks, the fit will be less tight to allow for them.

              Unfortunately, with shoes, if you're buying a cheaper pair of what you think is the same model, you're probably buying a batch that was diverted from where it was supposed to go, because the distributor got a better deal on stock priced for a different region.

        3. RegGuy1 Silver badge
          Joke

          Why can't the industry just state the size in centimeters?

          Inches please. We've left...

          (Added icon just in case there are some out there that mistake me for a brexiter.)

        4. Anonymous Coward
          Anonymous Coward

          Re: The industry sucks

          It's a vanity thing. Based on what I've learned, they sell fewer women's clothes if they have an exact measurement on them.

      2. NoneSuch Silver badge
        FAIL

        The smarter machines get...

        The dumber humans get...

        Gen Z kids have "smart" phones in front of their faces all day and by god it's like the walking dead out there.

      3. EarthDog

        All data suck unless they are carefully reviewed and vetted, which is impossible. The data stream is hopelessly polluted; once a bad datum enters, cleaning the stream is impossible. This can be unintentional (typos happen), a bug in the system (Buttle can become Tuttle), intentional (I often use a false DOB or other information when I am forced to sign up for something) or malicious.

        We then use those data to "train" AI. Of course the AI is going to suck. At worst, when used in state security, military, or other life-critical applications, it will be deadly.

        AI shouldn't be used unless 1) the application is non-critical, e.g. suggesting movies, or 2) it operates in a well-constrained environment where data can be vetted, e.g. space exploration.

        Remember children, GIGO.

        1. batfink

          Yup. GIGO was the first thing I learnt when I were a young wide-eyed lad coming into the field back in the 70's.

          Still at least as true today, but now we have new labels to disguise the problem.

      4. Anonymous Coward
        Anonymous Coward

        I was working on this in the 90s with a major shoe company and its supply chain - glad to hear what progress has been made since ;-)

      5. Keven E

        Is less more?

        "...there is no foresight in the industry in terms of how complete datasets might be useful."

        There aren't a lot of shared goals out there, so *foresight seems unattainable by definition...especially from generations of "me"-inspired "capitalists". Perhaps just as foolish as expecting intelligence to coalesce... artificial or otherwise.

    2. Doctor Syntax Silver badge

      "the industry kind of needs it to clear out the charlatans"

      History says it won't happen. At best they'll just move onto something else.

      1. Il'Geller

        Don’t panic! I am here. Microsoft, SAP and others help me.

        1. Doctor Syntax Silver badge

          As I said.

    3. Anonymous Coward
      Anonymous Coward

      re: get the hype curve into the slope of enlightenment.

      Thank god for that, I thought you'd posted a bunch of management bullshit for a minute!!

  2. b0llchit Silver badge
    Flame

    It is all about marketing

    Adding AI to everything won't make sense until we can use it for anything

    But it is used for something! It is the greatest marketing ploy in a long while. It is a reason for everybody to buy a new computer, because it now can do, ehm, the same in a different and more profitable way.

    There is a good reason for AI to mean Artificial Idiot and ML to be Machine Limping. A good marketing department will convince the standard poor soul that his machine is limping after less than 12 months. The poor soul will buy into this and prove beyond a shadow of a doubt that intelligence is no part of that soul, and that it was artificially created by the marketing department. Therefore, the new artificial machine is a perfect match for the target idiot (aka the consumer). Just be sure to add some blinking lights and buttons to push.

    1. Schultz
      Thumb Up

      Re: It is all about marketing

      ... and the best marketing can be done when you sell potential as opposed to specified function.

      Because you can't understand or predict the AI, it's impossible to predict that it won't work. Put in some data, shake it to make the magic happen, and see for yourself how quickly we get to within some 90% of recognizing cats. Clearly it'll only take some more money / hardware / software to make it perfect. It's the ideal tool for people who don't understand a problem: now they can pretend to solve it without actually having to put in the blood, sweat and tears. Magic makes life so much easier -- so let's all embrace AI!

  3. Brewster's Angle Grinder Silver badge

    My impression of this stuff is that it's going to turn main memory into a vector processing unit - for integer vectors. Which is nice. Has its uses. But it is not going to help me out with a great deal of the code I write.

  4. Mike 125

    think Bluetooth

    "won't make sense until we can use it for anything"

    Everything is better with Bluetooth AI. (Even an electric toothbrush.)

  5. John H Woods Silver badge

    25 years in IT has made me increasingly cynical about AI ...

    ... but being a biologist before that certainly doesn't help. For all her vocabulary and uncanny-valley sex appeal, Hanson Robotics' Sophia appears vastly less intelligent and engaging than all but the dumbest of canines, let alone primates. In thinking per watt, we still seem just as far from the intelligence of the corvids (crow family) as we were when I started in IT.

    Machines that can outperform humans in highly specific fields are not, in my opinion, doing it in a manner that we would normally recognise as 'more intelligent'. A lot of stuff that was almost impossible in the 80s and 90s, such as image recognition and moving-object tracking, has vastly improved, but I still don't believe there's anything yet going on in the silicon that could justifiably be regarded as thought.

    1. Brewster's Angle Grinder Silver badge

      Re: 25 years in IT has made me increasingly cynical about AI ...

      But do we even want them to think? Or do we really want them to do the kind of things we do, but in a different way? I mean, you don't want your self-driving car to be distracted by the cute little postal delivery bot it's just spied on the pavement.

      And then there's the ethics of having things that might be conscious. Machines are machines. But sentient computers can be exploited.

      1. fajensen
        Terminator

        Re: 25 years in IT has made me increasingly cynical about AI ...

        Philip K. Dick's intelligent robot cars are always bitching and moaning - programmed to let off steam gradually, as a way to keep it from coming out suddenly and all at once in a robot apocalypse.

      2. Anonymous Coward
        Anonymous Coward

        Re: 25 years in IT has made me increasingly cynical about AI ...

        30 years in Computer Science, most of those working in AI-like applications (started with Turbo Prolog!)

        We don't really need general artificial intelligence (except maybe hardware makers and futurologists?), but some specific-domain assistance can be wonderful. I wouldn't want to classify terapixels of remotely sensed images by hand (did that too in the 80s).

        Explainable ML could also be very useful.

        1. Neil Barnes Silver badge

          Re: 25 years in IT has made me increasingly cynical about AI ...

          This. In many ways, except in very specific fields, 'AI' is pointless and - to my mind - increasingly dangerous because of its black-box lack of transparency.

          I won't be convinced of artificial intelligence until I see sentience in a box: a person who just happens to be running on silicon instead of proteins. Which is a whole bag of ethical worms in and of itself...

          1. jake Silver badge

            Re: 25 years in IT has made me increasingly cynical about AI ...

            "increasingly dangerous because of its black-box lack of transparency."

            In my mind it is increasingly dangerous because people treat the results like religious dogma, as if they were automagically irrefutable because the computer said they were true.

            Scary, that.

      3. Filippo Silver badge

        Re: 25 years in IT has made me increasingly cynical about AI ...

        I suspect that lots of the tasks we are currently trying to automate using ML cannot actually be automated with any degree of real-world effectiveness without the system being AI, or very close to it.

        For example: self-driving cars. Seeing road signs and surrounding objects is nice, but all those ambiguous situations that happen all the time in any city centre? I don't see an ML system being able to handle those reliably unless it gets to the point where it can actually reason about what it's looking at.

        1. Brewster's Angle Grinder Silver badge

          Re: 25 years in IT has made me increasingly cynical about AI ...

          If things happen quickly, you'll need to convince me humans have time to reason about the situation rather than react on instinct as shaped by experience and emotion.

          And even if it happens slowly enough for higher order thinking, we often dial up a pre-learnt model and use that. (A sense of how things "ought" to be.) I imagine most of us here are smart enough to construct new models on the fly, but even then, I suspect, we get it wrong more often than not. (Look at the reaction to Covid-19 to see how even experts struggled to adjust their mental models.) Human brilliance comes from building up a body of knowledge which we transmit to others.

          You're also thinking about it in a very human way. For example, why do self-driving cars need to understand signs? They need to at the moment because the world is set up for humans, and that is the way we communicate dynamic information to each other. But once self-driving vehicles become common (and perhaps not very good at reading signs), we start asking people to log the info in a database that can be transmitted to vehicles. And from there we go to only logging it in the database (because putting up signs is expensive and slow compared to entering it into a database). And suddenly human drivers end up disadvantaged...

          1. Keven E
            Holmes

            Re: 25 years in IT has made me increasingly cynical about AI ...

            "reason about the situation (rather than) react on instinct as shaped by experienced and emotion."

            Perhaps perceiving those as separate things will permanently relegate the descriptor *artificial.

        2. LionelB Silver badge

          Re: 25 years in IT has made me increasingly cynical about AI ...

          I'm not sure self-driving vehicles is a good example for human-like AI because, well, humans are not necessarily particularly good at driving vehicles. Human intelligence did not, after all, evolve to deal gracefully with piloting large hunks of metal at biologically unfeasible speeds on tracks jammed with other hunks of metal travelling at biologically unfeasible speeds. Those ambiguous situations are, furthermore, largely born out of fallible human attempts to second-guess other fallible humans. Nor are our processing speeds and reaction times spectacular compared to the technological potential. (Or maybe that's just my driving...)

          It seems to me plausible that a dedicated non-human-like intelligence ought to be able to manage the driving scenario much better than a human-like AI - or indeed humans. Are we there yet (sic)? No, but it doesn't feel that far off.

          As another poster noted, perhaps we should simply get used to the idea that the future of "real" AI is not going to mean human-like AI. And why should it? We already have humans for that.

      4. Paul Crawford Silver badge

        Re: But do we even want them to think?

        Hopefully not a corvid-like "Oh, those eyeballs look tasty"

        1. LionelB Silver badge

          Re: But do we even want them to think?

          A bit melodramatic, no? Where I live we're more concerned about gull-like intelligence - as in "Oh, those chips look tasty", or "That garbage would look so much better strewn over the pavement".

    2. Primus Secundus Tertius

      Re: 25 years in IT has made me increasingly cynical about AI ...

      @John Woods

      I have long felt that AI is a bag of tricks that ignores any lessons from biology about how RI (Real Intelligence) evolved.

      OCR has improved since the 1980s, but voice recognition has not. British and American people can chat easily, but US machines do not handle British accents. So the machines are lacking something that RI handles easily. It's a similar story with translating natural languages: it is obvious from the results that the machine does not 'understand' the material. The grammar checks in Word and its competitors do not understand complicated sentences, so their parsing is hopelessly wrong.

      There was a hiatus in AI research in the 1980s, when the money men had become deeply unimpressed. It is time for another hiatus.

      1. LionelB Silver badge

        Re: 25 years in IT has made me increasingly cynical about AI ...

        Yes... but the problem is that biology doesn't actually leave that many clues as to how it evolved "real" intelligence (not to mention the vast time spans it had at its disposal to do so). Evolution is, notoriously, an unholy kluge of hacks built upon hacks built upon hacks. No doubt crucial constructive and organisational paradigms emerge in the course of this process (indeed neural networks might be regarded as a necessary but not sufficient one), but we are light-years away from understanding to any depth what those paradigms might be, and how we might exploit them in silico.

        So yes, major breakthroughs would seem to be required. Which is easy but unhelpful to say.

    3. EarthDog

      Re: 25 years in IT has made me increasingly cynical about AI ...

      Like usable nuclear fusion, AI has always been 20 years away.

    4. veti Silver badge

      Re: 25 years in IT has made me increasingly cynical about AI ...

      This is a two part challenge.

      First, define "thinking". Describe an experiment, with an observable outcome that would convince you that a machine was thinking. If you can't or won't do that, then you're only really arguing with yourself. (We used to have the Turing test, but machines blew through that years ago and it convinced precisely nobody.)

      Second, tell us why anyone would want a machine that does whatever it is you just described. If the spec says it acts like an average human, isn't it easier just to hire an average human for that?

      1. Doctor Syntax Silver badge

        Re: 25 years in IT has made me increasingly cynical about AI ...

        "We used to have the Turing test, but machines blew through that years ago and it convinced precisely nobody."

        I've come across customer service people - at least I think they were people - who reliably failed it.

  6. Boris the Cockroach Silver badge
    Terminator

    We have AI

    Actually, we don't.

    Let's look at a process, e.g. I'm given a job to build at work. Now, beyond the usual "What idiot designed this?" and "for god's sake take the crayons away from the drafter" comments, I apply a vast store of what has worked in the past, combined with pattern recognition (this looks like XXXX), and then ram it into machine cell #7 while knowing what #7 is capable of, and not forgetting to make some custom grippers for the robot.

    Now apply AI to this process, as skilled injeeners like wot I are are extremely expensive and an AI system is much cheaper (in theory).

    The insults at the start of the process are fairly easy to do: big string array, search for insults with 'designer' in them and insults with 'drafter' in them, then, using these terms as a reference point... come up with "The designer sat on a point containing the drafter". Now, anyone with a good grasp of English knows that although that sentence makes sense according to the rules of English, it's hardly insulting.
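
    Roughly like this, as a toy Python sketch (the fragment lists are invented; the output is about as cutting as you'd expect):

        # Naive insult generator: mash together fragments that mention the
        # designer and the drafter, with zero grasp of what makes an insult sting.
        import random

        designer_bits = ["The designer sat on a point", "What idiot designed this thing"]
        drafter_bits = ["containing the drafter", "while the drafter ate the crayons"]

        def generate_insult():
            # grammatical-ish by construction, insulting to nobody
            return random.choice(designer_bits) + " " + random.choice(drafter_bits) + "."

        print(generate_insult())  # e.g. "The designer sat on a point containing the drafter."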

    Which doesn't bode well for the rest of the process......

    The part is 76.2mm dia carbon fibre. The AI will need the knowledge that #7 is the only cell that can do carbon fibre, but it will also need to design a holding system that allows the part to be clamped while still letting the robot fingers in to pick it up. Easy peasy if you are human, but for a machine..... hmmm... I feel that AI-generated insult looming large in my vision.....

    Anyway... moving away from my world into more AI suited applications.

    Scanning pictures of millions of people with a name looking for a match with a picture of a bad guy with no name

    Sorry, but that doesn't sound AI-ish to me..... that sounds more like an exercise in pattern recognition combined with data processing.

    Digitise the picture, rotate/flip/mirror any close matches to see if they match more closely, and finally spit out the one you are after.

    Who turns out to have a cast-iron alibi, on the basis that he was 2000 miles away and in prison at the time of the crime.....
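
    The whole pipeline is a brute-force loop; a toy Python sketch (the naive pixel distance is just a stand-in for whatever the real systems use):

        # Try rotations/flips of the unnamed suspect photo against every named
        # face on file and return the closest - pattern matching, not intelligence.
        # Assumes square images so the rotations keep the same shape.
        import numpy as np

        def variants(img):
            for k in range(4):
                rotated = np.rot90(img, k)
                yield rotated
                yield np.fliplr(rotated)  # mirrored version too

        def pixel_distance(a, b):
            return float(np.abs(a.astype(int) - b.astype(int)).mean())

        def best_match(suspect, named_faces):
            # named_faces: dict of name -> image array, same shape as suspect
            return min(named_faces,
                       key=lambda name: min(pixel_distance(v, named_faces[name])
                                            for v in variants(suspect)))

    Note that it cheerfully returns the closest match whether or not the villain is in the database at all - hence the chap in prison 2000 miles away.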

    But AI and ML are the latest buzzwords to go around executive meetings... so, as with most executives, just ignore what they are drooling on about, because chances are they'll be gone in 12 months and their AI/ML project with them (although their replacements will have plans based on the buzzwords of 12 months' time...)

    1. Pascal Monett Silver badge

      Re: We have AI

      We most certainly do not.

      As I repeat all the time, what we have is Statistical Analysis Machines.

      And we can't even tell how they come to their conclusions.

      So, they're intelligent because we're too stupid to understand them.

      1. Doctor Syntax Silver badge

        Re: We have AI

        We had statistical analysis machines with techniques such as cluster analysis at least as far back as the mid-'60s. About 50 years ago I was helping an archaeology student with his honours project, trying machine classification of bronze swords.

        That sort of analysis could certainly give the exact reasons why it made its choices, provided you could handle the volume of print-out. The real problem, now as with the bronze swords, is which of the input data are meaningful and how they should be weighted. Measuring is hard if you want to do it accurately, so people will measure what's possible, what's easiest, or just what they recognise as being measurable. These are not necessarily the things which are relevant, or the subtle cues we use when we recognise things. And it seems things can be even worse when the machine is left to make its own decisions.

      2. Geoffrey W
        Happy

        Re: We have AI

        RE: "what we have is Statistical Analysis Machines"

        It's even narrower than that implies. We have Expert Systems with a very limited domain of knowledge, nothing like AI at all. So yes, Statistical Analysis Machines, but with a very tightly controlled boundary on that knowledge. A fridge knows about food and dates, but nothing about how to put them together to make a tasty snack... Well, I suppose that example fits me too; I am hopeless in the kitchen. The difference between me and that fridge, however, is that I could learn how to cook if I wanted to (which I don't), but the fridge would need a new brain to learn something new.

        And as for self-driving cars: I am coming to the conclusion that the only way a car will be able to drive itself safely is if all the other cars on the road are also self-driving - unless you have a car made in the US running in the UK, or vice versa, and all the cars are trying to drive on the same side of the road. With human drivers you never know which side of the road they might be on at any given moment, likely causing nervous breakdowns in self-driving AI; we'd better hope they don't learn how to do road rage.

        Ahem...I digress. Anyway, my term for these smart things is Expert Systems. It's what I was taught. That's all I wanted to say. :-)

      3. LionelB Silver badge

        Re: We have AI

        You assume implicitly that we are not Statistical Analysis Machines. Since we can't even tell how we arrive at conclusions (and I mean "conclusions" in the broadest sense), that assumption looks shaky.

        Are we, then, intelligent because we're too stupid to understand ourselves?

    2. a pressbutton

      Re: We have AI

      We don't have real AI.

      Call me when the AI needs to tighten a screw, can't find a screwdriver, so uses a kitchen knife.

      Real AI that we can engage with and recognise as AI will be able to see, have a memory and hands, want things / to do things, and talk and listen.

      Everything labelled as AI so far is a pentalobe screwdriver - great if you need to do something with a pentalobe screw of that size.

      Otherwise not.

      1. doublelayer Silver badge

        Re: We have AI

        If innovation is your wish, it's been done before: robots instructed to perform actions have done them in unexpected ways because they decided how to do it without instruction - for example, using an object that isn't a hammer to pound in a nail, even though a hammer was present. Was this intelligent because it improvised, dumb because it should have used a hammer, or inconclusive because it didn't know what a hammer was and chose a policy which, while not optimal, got the goal accomplished?

        The question for the "AI is impossible/we have never seen it" people remains: what does a computer need to do to convince you it is intelligent in some way, keeping in mind that you can be intelligent in one thing without knowing everything else there is to know?

        1. Ken Moorhouse Silver badge

          Re: Was this intelligent because it improvised

          This is the problem with AI.

          Human "Intelligence" is not something we were born with, in non-autonomous* cases, we learn it. The scenario you outlined could highlight this learning process, if this were a true AI robot (which I very much doubt). The robot was making a mistake with which it might learn something.

          Maybe this business of AI making catastrophic errors while having to learn might be the thing that puts people off AI, forever.

          * Autonomous being things like breathing, where the longer-term learning operation is called evolution.

        2. Shalghar

          Re: We have AI

          " what does a computer need to do to convince you it is intelligent in some way,"

          Commit suicide as soon as it realises what situation it is in.

          Funnily enough, even by this definition there might be "AI" out there, given the many, many siliconics that suddenly commit suicide/self-destruct for no apparent reason.

          1. a pressbutton

            Re: We have AI

            Defn of intelligence

            I want to see intentionality.

  7. Anonymous Coward
    Anonymous Coward

    How to piss off a computer...

    One, two, three, four, five... Why am I counting when the computer can do it much faster & better? Because it pisses them off. I'm thinking the "AI" is not; it's merely calculating, & is no more intelligent than any other calculator. It's not truly *thinking* until it can coherently articulate the Throatwarbler Mangrove of the Vogosphere as it applies to the janitorial services of Eroticon6...

  8. Rafael #872397
    Terminator

    Who doesn't want to have a graphics program that will respond to the command...

    That is going to be abused/misused in 0.0000000000000003 seconds.

    1. Paul Crawford Silver badge
      Paris Hilton

      Re: Who doesn't want to have a graphics program that will respond to the command...

      Surely in under 69 seconds?

    2. Falmari Silver badge
      Trollface

      Re: Who doesn't want to have a graphics program that will respond to the command...

      "Cock" ;)

  9. herman

    "AI/ML is the poster child here because it is uniquely good for obtaining vast amounts of venture money" - TFIFY.

  10. fidodogbreath

    Those tools won't happen without proper standards to give them a wide set of target platforms

    ...and those standards won't happen -- or if developed, won't be implemented -- because proper standards and interoperability reduce vendor lock-in.

  11. jake Silver badge

    Time for another AI winter.

    "We are teetering on the brink of a golden age of AI. It must be true, we keep being told so."

    And have been since the early 1960s.

    Methinks the Marketing bozos have cried "wolf" enough for a couple of decades. Time to let this subject matter rest; it stinks like last week's fish. Methinks we're heading into another so-called "AI winter", and about time, too!

  12. jake Silver badge

    "But why does Samsung specify AI? "

    Because Samsung's marketing department (which knows nothing of AI, it's just a buzzword to them) insisted that it be included in the description on product roll-out. And that is the ONLY reason.

    Sticking a label on something doesn't automagically confer that attribute on the thing. The map is not the territory.

    1. Doctor Syntax Silver badge

      Re: "But why does Samsung specify AI? "

      "The map is not the territory."

      And the title of the map even less so.

  13. Tron Silver badge

    AI will steal at least 1 job.

    AI makes computers unreliable. And that's Microsoft's job.

  14. steelpillow Silver badge
    Holmes

    Horses for courses

    "have it work no matter where the AI hardware lives?"

    Err, nope.

    If by AI you just mean Big Data + Machine Learning + Endless Patches for Pixel Fuck-ups, then you are looking at proprietary solutions, and that means proprietary hardware gets weaponised, not avoided.

    If you mean an AI that can handle the general problems that the general hardware is built to encounter, then no again. Check out the realities exposed by Darwinian evolution in Mother Nature: the day octopus, manta ray, crow, horse and human are all intelligent (i.e. they have the necessary cortical architecture and display complex cognitive behaviours). But beyond being neural nets featuring certain structural topologies, those neural architectures are not exactly standardised. Why should semiconductors be any different, be they silicon crystals or carbon organics or whatever? It is, literally, a world of horses for courses.

  15. Nick Ryan Silver badge

    Microsoft's prior "AI" art...

    Does anyone else remember the awful feature in Microsoft Office where some utterly braindead "designer" vomited into the UI a feature that hid menu options you didn't use very often? It entirely missed the point of menus: they exist for discoverability of functionality, not to damn well hide it.

    If Microsoft introduced such a feature into Microsoft Office now, it would be termed "AI", despite all it does being little more than recording the last time a particular menu item was selected and hiding those past a certain time threshold. That's pretty much still the cutting edge of AI UX. The "AI" component would probably be deciding how long this threshold period is. Genius.
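
    The entire "feature", as a toy Python sketch (threshold and menu items invented):

        # Hide any menu item not used within the threshold. The one tunable
        # number, HIDE_AFTER_DAYS, is presumably the bit they'd now call "AI".
        import time

        HIDE_AFTER_DAYS = 30
        DAY = 86400  # seconds

        last_used = {
            "Open": time.time(),                   # used today
            "Mail Merge": time.time() - 90 * DAY,  # untouched for 90 days
        }

        def visible_items(menu):
            cutoff = time.time() - HIDE_AFTER_DAYS * DAY
            return [item for item, ts in menu.items() if ts >= cutoff]

        print(visible_items(last_used))  # ['Open'] - Mail Merge quietly vanishes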

    1. Shalghar

      Re: Microsoft's prior "AI" art...

      Let's take that "bright" idea to the limit.

      Windows 12 will delete itself if it does not get enough attention and is not even allowed to phone home extensively.
