Winter is coming for AI. Fortunately, non-sci-fi definitions are actually doing worthwhile stuff

When British Prime Minister Theresa May bigged up AI at the World Economic Forum in Davos last month, it was as if she had nothing better to talk about. Name-dropping DeepMind, perhaps the only justification for her claim that the UK can be a "world leader in artificial intelligence", seemed a little desperate, especially as …

  1. Anonymous Coward

    Google DeepMind

    Beware of Cupid Hunts bearing Sweetheart Deals.

  2. artem

    Can we please drop the "I" from "AI"? Because there's little if any intelligence in any existing AI algorithm. I'd really love "AI" to be replaced with "automation" or "fuzzy algorithms", because that's what it is.

    People are throwing "intelligence" around as if we understand what intelligence is. We don't. There isn't even a universally accepted definition of it.

    Worst of all is, of course, the fact that people believe that the brain is a computing device which processes information. We do NOT know that. It's akin to medieval people saying that the Sun is burning. Yes, the Sun is "burning", but it's not a chemical reaction; in fact it has nothing to do with chemistry.

    Likewise with the brain - we see what it does. We created computers roughly 30 years ago, noticed some resemblance between what we do and what computers do, and concluded that the brain must therefore be a computational device. That inference is false.

    1. Chemist

      "We created computers roughly 30 years ago"

      What, in 1988? I think I'd have remembered that!

      (I was taught Physics to A-level in the mid-sixties by someone who had worked on the Manchester 'Baby' 20 years before, BTW)

      1. artem

        OK, I should have been more specific: we created computers which resemble modern PCs roughly 30 years ago. To be precise, the first PC was released in 1981. And, of course, the first digital computers were created shortly after WW2, but their performance and usability were so lacking that I wouldn't call them "computers" ;-)

        1. big_D

          I nearly gave up on the article when it lauded the Google translation AI. It has improved from downright dangerous to laughable, but is that really "good"?

          I do a lot of translations and I am nowhere near professional at it, but I can still run rings around Google's efforts, between German and English.

          At least it has gone from falling off my chair laughing bad to sniggering bad.

          As to the article, better an AI winter of discontent than a discontented Wintermute...

          1. mrjohn

            I know some nervous translators and interpreters. The translations may not be good, but they are good enough because they are cheap & quick. Automated transcription is also impressive. It's not perfect, it needs checking, but it does a lot of the drudge work.

            1. big_D

              @mrjohn

              Nope, the translations are generally not good enough.

              Automated transcription is not impressive and it certainly needs checking.

              The last time I tried to translate a large piece of text, the Google Translate output was "so good" that I only had to re-write 95% of it - and I just needed a "good enough" translation for an email. I also did a short internship at a translation service, and the standards there are even stricter.

              It works for some simple themes, as long as you don't need an accurate translation.

            2. Dave800

              It may remove a lot of drudge work for someone bilingual in the relevant languages. Indeed, I know someone who does translations by using Google Translate and then carefully fixing the result.

              The problem is, if you don't know the original language, you can't tell when the automatic translation contains a conceptual error.

          2. Anonymous Coward

            Best aphorism I've seen this year

            "...better an AI winter of discontent than a discontented Wintermute..."

            I like it, I like it. With your kind permission (and with attribution of course), that is going into my collection of memorable and amusing aphorisms.

            1. big_D

              Re: Best aphorism I've seen this year

              @Archtech glad you liked it. Feel free.

              Just a coincidence, I started re-reading Neuromancer yesterday.

        2. joeldillon

          What the bloody hell does the release of the first IBM PC have to do with, well, anything about this? The AI researchers of the time were using rather more beefy (and usable) kit than that.

    2. Orv Silver badge

      I feel like we sort of redefined AI downward until it matched what we already knew how to do, and then declared we'd conquered AI.

      A lot of what's going on is statistical methods, like Bayesian classification. Calling it "intelligence" is a big stretch. Even calling it "learning" is a bit iffy.
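
      By way of illustration, here is a minimal naive Bayes text classifier in Python - a sketch of the sort of "statistical method" being rebranded as AI, with all training data invented:

        # Naive Bayes: pick the class whose word statistics best match the input.
        # No understanding anywhere - just counting and multiplying probabilities.
        import math
        from collections import Counter

        train = [("cheap pills buy now", "spam"),
                 ("meeting agenda attached", "ham"),
                 ("buy cheap watches now", "spam"),
                 ("lunch meeting tomorrow", "ham")]

        counts = {"spam": Counter(), "ham": Counter()}
        priors = Counter()
        for text, label in train:
            priors[label] += 1
            counts[label].update(text.split())

        def classify(text):
            vocab = len(set(counts["spam"]) | set(counts["ham"]))
            scores = {}
            for label in priors:
                total = sum(counts[label].values())
                # Log-probabilities with Laplace smoothing for unseen words.
                score = math.log(priors[label] / len(train))
                for word in text.split():
                    score += math.log((counts[label][word] + 1) / (total + vocab))
                scores[label] = score
            return max(scores, key=scores.get)

        print(classify("buy pills now"))  # -> "spam", by arithmetic alone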

    3. itzman

      Please can we drop the I from HI, because it's clear to anyone watching the media that humans do not think; they merely respond to programming by the media, coupled with a crude form of pattern recognition.

    4. Anonymous Coward

      Computing is a (very) small subset of what brains do

      People (and other animals) have been using their brains to good effect for millions of years. Very recently, a few intelligent people drew up rules for certain highly artificial abstractions that turned out to have practical applications - for example, geometry and trigonometry allow accurate measurement of fields for tax purposes.

      Subsequently, the disciplines of mathematics and logic were identified and explored in some depth. Then machines were designed and built to implement a few mathematical and logical operations. The speed and small size of microelectronic circuits have allowed an increasing number of useful tasks to be performed by computers, often much faster - and sometimes better - than human brains could do them (a goal mentioned by Blaise Pascal nearly 400 years ago).

      Computers, then, perform a small subset of the functions of human brains - those that can be specified by very simple and limited abstractions such as logic and arithmetic. On the other hand, unlike all brains, they are not fundamentally driven by instincts to seek some outcomes and avoid others - their goals must be programmed in, and do not evolve. In consequence, they don't combine emotions and urges with logic, as brains do.

  3. Anonymous Coward

    Currently the general population see AI as a method of superimposing one face onto another in a pr0n film.

    Interesting comment about proprietary versus open. Personally, I think open would be the best way forward, as closed would leave people at the mercy of algorithms they can't see or control, and there has been plenty of noise about AI potentially being unintentionally discriminatory by design.

    1. Destroy All Monsters Silver badge

      A downwards-shifted Bell Curve is fact, better deal with it.

      AI potentially being unintentionally discriminatory by design.

      Reality is Racist, especially if uncovered by non-affirmatively-actioned statistics.

      Will "open" be a way forward? Can anyone understand what a neural network does?

    2. Orv Silver badge

      The problem is you're unlikely to find a smoking gun in the algorithm itself. What you really care about is the training data.

      But to be honest we don't really need the algorithm to determine if something is discriminatory. For example, the algorithm for credit scoring is proprietary, making it a black box, but its disproportionate burden on certain groups when they try to not just borrow money, but also rent housing and land a job, is well-known.

    3. find users who cut cat tail

      Lots of things are unintentionally discriminatory, including nature itself (think genetic lottery and generally the role of chance -- which people completely underestimate, clinging to the delusion they are in control). AI is one of them...

      The problem with AI is that there is no appeal against its decisions. We are already neck deep in 'this must be correct because the computer says so', but AI makes it orders of magnitude worse. You can -- in principle -- reason with people, and you can -- in principle -- point to bugs in code. You can do neither with AI. So even discrimination that appears purely at random will likely become a self-fulfilling prophecy.

  4. Neil Barnes Silver badge

    Please tell me

    The difference between current 'AI' and statistics?

    1. Anonymous Coward

      Re: Please tell me

      Statistically speaking it's not really AI.

    2. Christian Berger

      Simple

      Statistics will give you metrics you can use to make decisions. Neural networks give you weird metrics and make the decisions for you.
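
      A toy illustration of that difference (Python, with scikit-learn assumed and data invented): logistic regression hands you coefficients you can inspect and reason about, while the neural net just hands you its verdict.

        # Classical statistics: a model whose numbers you can read.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))                  # two invented features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # invented ground truth

        stats = LogisticRegression().fit(X, y)
        print("inspectable metrics:", stats.coef_)     # weights you can reason about

        # Neural network: a model that just hands you the decision.
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
        print("verdict:", net.predict([[0.3, -0.2]]))  # take it or leave it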

  5. Destroy All Monsters Silver badge

    Welp!

    See you at RuleML+RR 2018, then!

  6. steelpillow Silver badge

    Intelligence is a graded thing

    Intelligence is not a binary thing. Any creature with a complex body plan has sufficient "intelligence" to know its own spatial posture and to base decisions on that. Even a 1980s computer chess game had that level of intelligence. By the time animals get eyes and stuff, like say an insect, their level of "intelligence" grows to match the Big Data pouring in from their senses. Frogs, mice, crows, dogs, humans all represent an evolutionary chain of incremental advances in intelligence. Machines are no different. Currently they are probably around the frog level - once they level with the crow we can start crying "AGI".

  7. Pascal Monett Silver badge

    "We have only just seen the beginning of what AI can achieve"

    No. What we are seeing is the beginning of what correlating and evaluating massive amounts of data can achieve with a bespoke program.

    We are nowhere near AI, and the improvements in translation, although impressive indeed, have nothing to do with AI and everything to do with better coding (meaning code that does the job better).

    Besides, translation still has a way to go before you can feed an English text to Google Translate and get a proper German/French/Spanish/your-choice version that does not need to be almost completely rewritten by a competent linguist to be up to par with the original.

    1. DropBear

      Re: "We have only just seen the beginning of what AI can achieve"

      I reckon the translation thing is approximately going from "finding translations for each word in the original and returning them roughly in the order they were in the original text" (equal to Classic Chinglish) to "returning the most common phrase the resulting words tend to be found in, possibly also preferring the phrases typically found closest to the rest of the phrases in the text" (Modern Chinglish a.k.a. "just run it through Google"). True context awareness is nowhere in sight of course, seeing as how that would require actual proper Turing-resistant AI.
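
      Roughly, and with an invented three-word example (Python, toy lookup tables only), the two stages contrast like this:

        # Stage one ("Classic Chinglish"): word-by-word lookup, source order kept.
        word_table = {"ich": "I", "habe": "have", "hunger": "hunger"}
        # Stage two ("Modern Chinglish"): prefer the most common whole phrase.
        phrase_table = {("ich", "habe", "hunger"): "I am hungry"}

        def word_by_word(sentence):
            return " ".join(word_table.get(w, w) for w in sentence.split())

        def phrase_based(sentence):
            words = tuple(sentence.split())
            return phrase_table.get(words, word_by_word(sentence))

        print(word_by_word("ich habe hunger"))   # -> "I have hunger"
        print(phrase_based("ich habe hunger"))   # -> "I am hungry"
        # Neither function has any idea what hunger is - which is the point.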

  8. Anonymous Coward

    AI is primarily a word used to disguise sales & propaganda

    Processes that are largely not intelligent, but are able to make guesses and inferences about data given lots of training; either that, or they use a lot of rules dreamed up by fallible meatbags (usually where "the computer says no...").

    Identifying pictures as "cat picture" or "not a cat picture" is fine, but not that intelligent when the system fails to work out that one of them was actually a fire extinguisher with a hose wrapped round the bottom and two tags on the handle.

    An algorithm generated by people will usually run into scenarios that were not originally considered, which can make its decisions appear arbitrary.

    As the article says, though, the hype engine has worked and people are buying the concept.

    I still remember the '80s, when software like ELIZA was going to revolutionise interactions with computers and make them seem like people. I don't see a huge difference between that and even the expensively constructed digital assistants now proliferating.

    The biggest danger in all this is that people end up trusting computers instead of their common sense...

  9. amanfromMars 1 Silver badge

    Cosmic Flight Controls for Future Commanded Systems of Administration

    It's an old idea that has regained momentum in the past few years, fuelling more hype ..

    It is not hype whenever Reading and Leading AI Gospels .... for they be Virtual Flight Travel Instruction Manuals.

    For Per Ardua ad Astra AIMaster Pilot Programs .....Remotely Delivering Immaculate Missions.

    cc .. Air Chief Marshal Sir Stephen Hillier, Chief of the Air Staff, in command of the Royal Air Force

    El Regers and Commentards All,

    I think it is high time that it should be made more widely known that SMARTR Machines are now Generating Future Raw Source for IntelAIgent Lead into New Cyber Terrain Territory ..... Live Operational Virtual Environments under Heavenly Protection and Control for an Almighty Command.

    And such has been freely offered to home intelligence services and defence forces ... for their own trial beta servering, for the blazing of new trails.

    How very odd that is not widely known. Is it residing and presiding in some Deep Google Minded NEUKlearer Silo?

    Inquisitive Minds would have an Answer in Reply.

  10. a_yank_lurker

    Artificial Idiocy

    AI is not intelligence, nor does it even resemble intelligence. It is very complex pattern matching with a massive amount of data. Intelligence is, in part, the ability to use limited data to make accurate decisions.

  11. JohnFen

    We still haven't solved the very first problem

    The first problem is defining what "intelligence" is. We have no solid technical definition of it. It's pretty hard to solve a problem that you haven't really defined. Or rather, it's easy, since you can adopt a "shoot first, then call whatever you hit the target" sort of "solution".

    1. Anonymous Coward

      Re: We still haven't solved the very first problem

      Alan Turing himself remarked (with what I think was very subtle irony) that as soon as we understand the mechanism behind any apparently "intelligent" behaviour, we insist that it isn't really intelligent. Thus, he argued, true intelligence must forever remain wrapped in mystery.

      I very much wish I could have met and talked with him. People like that come along once in a very, very long time - and then the government kills them if it can. (The so-called "cyanide apple" was never analyzed for poison, and it seems likely on the whole that Turing was murdered for some reason comprehensible only to moronic apparatchiks).

      For my money, Turing was as close as England has ever come to its own Leonardo da Vinci. Sadly, he only lived a bit more than half as long - and even then Leonardo's constant lament was "Di mi se mai fu fatta alcuna cosa". ("Tell me if anything was ever done/completed"). There is even more crushing irony in the thought that Leonardo survived for 67 years in the chaotic and violent 15th-16th centuries, whereas Turing only made it to 41 in supposedly "civilized" 20th-century England.

      1. quxinot

        Re: We still haven't solved the very first problem

        Sadly, Turing was a man ahead of his time, and was treated rather poorly for what we today consider to be pretty dumb reasons. He isn't alone in being beaten down by the "normal" people of his time.

        (Eppur si muove - "And yet it moves.")

  12. Michael Sanders

    Let's not start patting each other on the back just yet.

    You don't need to be HAL to take out quite a bit of the middle tier of IT and accounting/exec jobs. An AI baked into the Windows AD environment would have a far easier time of it than an AI trying to administer things via PowerShell scripts. The only thing saving us is that Microsoft's AI is dead last in the rankings. Can't drone on, so please use your imagination to fill in the gaps.

    1. Dave800

      Re: Let's not start patting each other on the back just yet.

      So has someone written such an AI application? Lots of projects can seem like perfect AI applications until you start to look at the details.

      1. Anonymous Coward

        Re: Let's not start patting each other on the back just yet.

        Exactically. "The devil is in the details". Otherwise politicians would have solved all the world's problems by now.

  13. Teiwaz

    As noted, the same thing happens over and over: boffin-hobbits make a few baby-steps, which get over-hyped by companies or the media, and we end up with a lot of over-excited up-talking of what the technology is going to be capable of doing.

    When this fails to manifest, it'll get quietly buried for another ten or twenty years, dug up every so often so people can laugh at the naivety.

    It happened with earlier A.I., and with the premature over-excitement over terms like VR in the 1990s.

    I think it might actually be worse this time round, as Marketing has indeed latched onto it as another nebulous hype term to throw around, making previous exotic IT-term usage look as naive as 1950s washing-powder commercials.

  14. Mike 16

    "A few bricks shy of the moon"

    Or roughly that was what I heard Hubert Dreyfus say back in the early 1970s. Some of you cursed that name when you read it, some nodded in agreement, and most went "Whodat?".

    Anyway, his point was that much of AI seemed to consist of grant proposals that were equivalent to someone stacking one brick atop another and noting that the height of the stack had doubled, thus "proving" that in only about 30 such steps the stack would reach the moon.

  15. Dave800

    I am very cynical about AI, having experienced what happened in the 1980s. It was just the same as now, with all the hype but without the threat of AI-controlled cars.

    The idea of a driverless car sounds great, until all the qualifications are spelled out - such as that these cars might be restricted to motorways and would have to revert to manual control in other circumstances - so you can't be driven home after a boozy party, nor can you send your children somewhere alone by car, nor can you start a walk at point A and tell the car to meet you at point B!

    A real driver-less car would have to figure out whether a child (or adult) was likely to blunder into the road, whether a lorry shedding some of its load was a danger or not, understand the difference between a horse with rider, and one without (which I encountered once on a motorway), recognise the sound of some piece of debris getting stuck under the car, etc etc.

    The 1980s AI hype was dominated by the idea of Logic Programming, which is hardly ever mentioned nowadays. LP was great at figuring out family relationships, but only if nobody was adopted, changed sex, or whatever! There is no steady progress towards AI - just a series of hype-fuelled lurches.
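
    For anyone who missed the 1980s: the classic LP family-relationships demo boils down to a handful of facts plus a rule, sketched here in Python with invented names - along with exactly the brittleness described:

      # Facts: parent(tom, bob), parent(bob, ann), parent(bob, pat).
      parents = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

      # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
      def grandparent(x, z):
          return any((x, y) in parents and (y, z) in parents
                     for y in {c for _, c in parents})

      print(grandparent("tom", "ann"))  # True
      print(grandparent("tom", "pat"))  # True
      # Adoption, step-parents, name changes: none of it fits until a
      # human hand-writes more facts and rules - which was the problem.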

    1. Tom 7

      @Dave800 Prolog still lives

      I've just finished Bratko's 'Logic Programming for AI', and it's alive and kicking. It's way, way beyond family relationships - get a copy from your local library. I'm self-unemployed at the moment and ploughing through as much AI as possible, and compared with what I saw in the 80s it has come a very long way down an extremely long road. In the right hands I think it's capable of some remarkable things - but then so is software itself, in the right hands, with the right people managing it. I think we may be coming up to what will be called a winter when it is really a plateau - a bit like when you realise you need to completely re-factor something to make it go that bit further. Libraries will be consolidated, and people who know what they are doing will help other people who know what they want to do in an AI-ish way. And unlike the massive effort spent managing paper-shaped documents on computers, which has seriously wasted the output of some of the most intelligent minds from our universities for the last 30 years, we will see some serious jumps forward.

      1. Anonymous Coward

        Re: @Dave800 prolog still lives

        I don't see us coming to a "winter" or a plateau. I see us coming to a fork, where businesses want to take "AI" in one direction, advancing the mechanisms that already exist; while researchers want to take AI in another direction, attempting actual AI and not just a simplification of the definition.

  16. Anonymous Coward

    "If the brain was simple enough to understand, we'd be too simple to understand it."

    I read this in a book about neural networks during a Cognitive Science degree, 25 years ago. One of the few things I remember about my time at University!

  17. Name3

    No wonder, A.I. is still as dumb as in the 1990s

    Siri, Cortana, Alexa, etc. are still as dumb as the annoying Office Assistant in M$ Office 1997.

    Meanwhile, back in 1997 we had Dragon NaturallySpeaking (the engine Nuance now owns) running on a Pentium 1 with a whopping 166 MHz. Now the speaking agent needs constant "cloud" web access, so the Nuance software that powers Siri, Cortana and Alexa runs on internet "cloud" servers instead of on a high-end 3 GHz octo-core smartphone that runs rings around the good ol' Pentium 1.

    And of course Google shut down Freebase.com (which was the biggest open-source ontology) to prevent competition from smaller companies in software assistants.

    We have had these expert systems - the hated telephone computers - for four decades. And guess what: A.I. winter is coming again, because the progress just didn't happen. Well, except for semi-automated FakePorn, I guess.

  18. Tom 7

    Like much technology, it's still in the wrong hands.

    I'm not saying I'm the right hands, but I have a Pi3/camera combination that currently recognises the difference between a chicken, a duck, a rat and a magpie with over 90% accuracy. This is quite useful when it comes to stopping the latter two nicking the food and eggs. If I can get it to 99.9% on the rat I might just automate a trap, and for the magpie a loud alarm (which would put the hens off lay). It's not much, but it would pay for itself quite quickly.
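
    A rough sketch of what such a recognition loop might look like (Python with tflite_runtime, one common way to run a small classifier on a Pi 3; the model file, label order and trap threshold here are all hypothetical, not the actual setup):

      # Hypothetical classifier loop for a Pi 3 camera trap.
      import numpy as np
      from tflite_runtime.interpreter import Interpreter

      LABELS = ["chicken", "duck", "rat", "magpie"]     # assumed label order

      interpreter = Interpreter(model_path="yard_model.tflite")  # hypothetical file
      interpreter.allocate_tensors()
      inp = interpreter.get_input_details()[0]
      out = interpreter.get_output_details()[0]

      def classify(frame):
          # frame: one camera image, resized to the model's input shape
          interpreter.set_tensor(inp["index"], np.expand_dims(frame, 0))
          interpreter.invoke()
          scores = interpreter.get_tensor(out["index"])[0]
          return LABELS[int(np.argmax(scores))], float(np.max(scores))

      # Only act on the rat at very high confidence, as described above.
      label, confidence = classify(np.zeros((224, 224, 3), dtype=np.uint8))
      if label == "rat" and confidence > 0.999:
          print("spring the trap")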

    A similarly trained device for insect recognition (with a laser!!!) could roam crops and do away with the need for pesticides, and weed recognition could likewise reduce the need for herbicides. Both solutions, in mass production, could work out cheaper than the chemicals, without the likelihood of resistant mutations cropping up (OK, you do get mirrored beetles and silvered plants).

    Useful solutions will continue to develop - it's the twats trying to sell you shit you don't want that are going to have a winter, not AI.

  19. amanfromMars 1 Silver badge

    Meanwhile, in the Exotic Erotic East, Quite Another Atypical Confection?

    Looking at things through Other Cloudy Lenses .....

    amanfromMars Feb 9, 2018 3:21 AM [1802090821] ..... upping the ante on https://www.zerohedge.com/news/2018-02-07/china-developing-ai-enabled-nuclear-submarines-can-think-themselves

    The one question we ask: What piece of military hardware will China infuse an AI system on next?

    Another question for answering, and one which is much more difficult to realise is true - and there is precious little that one can do to either halt or divert it from an already chosen path - is: what piece of hardware, military or civilian, will AI systems infuse for China next?

    1. amanfromMars 1 Silver badge

      Re: Meanwhile, in the Exotic Erotic East, Quite Another Atypical Confection?

      Something Truly Different to Look Forward to Witnessing. ? !

      All Necessary Sublime InterNetworking Technologies Exist to Currently Server such Feats.
