Artificial Intelligence: You know it isn't real, yeah?

"Where's the intelligence?" cried a voice from the back. It's not quite the question one expected during the Q&A session at the end of the 2019 BCS Turing Talk on Artificial Intelligence. The event was held earlier this week at the swanky IET building in London’s Savoy Place and the audience comprised academics, developers and …

      1. TomPhan

        Re: No...

        Perhaps you'd like a bagel?

        1. Spamfast

          Re: No...

          Strike a light! I'm a genius!

    1. Uncle Slacky Silver badge

      Still waiting for Artificial People Personalities(tm), aka "Your Plastic Pal Who's Fun To Be With".

  1. mr-slappy

    It's Just Pattern Recognition

    It's not AI - it can't be because we don't even understand what intelligence is in humans, never mind in machines.

    It's not Machine Learning, because we don't really understand what learning is in humans either, never mind in machines. (I'm speaking as a school governor who spends a lot of time with teachers, many of whom are excellent, a few not so much. It's really complicated. If you could distill the essence of a really good teacher, someone would have done it by now.)

    It's just advanced pattern recognition, operating from very large but inevitably biased and flawed data sets.

    1. Swarthy

      Re: It's Just Pattern Recognition

      My theory is that intelligence is pattern recognition. Well, pattern recognition and prediction, several layers deep.

      We see a pattern and make a prediction based on it; we then review our predictions for patterns, predict our predictions, note those patterns, and alter our future predictions to give a better pattern.

      And then we see Jesus in a grilled cheese.
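
      Half in jest, but the layered idea sketches easily. A toy illustration, with everything below invented for the purpose: a base layer guesses the next value, and a second layer predicts the base layer's own error from the pattern of its past errors and corrects for it.

      ```python
      # Layered prediction, as a toy: level 1 predicts the series, level 2
      # predicts level 1's errors and adjusts. All names are made up.

      def base_predict(history):
          # Level 1: naive pattern guess - assume the last step repeats.
          return history[-1] + (history[-1] - history[-2])

      def corrected_predict(history, past_errors):
          # Level 2: estimate level 1's systematic error from its past
          # errors, then correct the new guess by that amount.
          guess = base_predict(history)
          bias = sum(past_errors) / len(past_errors) if past_errors else 0.0
          return guess - bias

      series = [1, 2, 4, 6, 9, 11, 14]
      errors = []
      for i in range(2, len(series) - 1):
          predicted = corrected_predict(series[:i + 1], errors)
          errors.append(base_predict(series[:i + 1]) - series[i + 1])
          print(f"predicted {predicted:.1f}, actual {series[i + 1]}")
      ```

      Grilled-cheese Jesus is what you get when the correction layer never says "stop".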

      1. doublelayer Silver badge

        Re: It's Just Pattern Recognition

        Effectively, this is true. We see patterns in what we observe, then predict how each possible action will affect the situation before choosing a set of actions to take, so any functioning artificial sapient system would need this too. However, pattern recognition and statistical analysis are slightly different things, and human pattern recognition is quite different from limited pattern recognition over a subset of the available data.

        I have fewer problems with the term machine learning, because a model genuinely does learn from its training set. If the set is faulty, it will learn the wrong thing and act on it, just as you could teach a person that circles have straight edges, that the bright thing in the sky is called a tree, and that certain types of people have ingrained qualities which can be applied to anyone else in that category, and they would act on those flawed notions.
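
        A minimal sketch of that faulty-training-set point, with data and labels invented for illustration: a trivial nearest-centroid classifier trained on mislabelled examples learns the wrong rule and applies it with complete confidence.

        ```python
        # Sketch: a nearest-centroid "model" trained on a deliberately
        # faulty set learns the wrong rule. The data is invented.

        def train(examples):
            # Compute one centroid per label from (value, label) pairs.
            centroids = {}
            for value, label in examples:
                centroids.setdefault(label, []).append(value)
            return {label: sum(vs) / len(vs) for label, vs in centroids.items()}

        def predict(model, value):
            # Assign the label of the nearest centroid.
            return min(model, key=lambda label: abs(model[label] - value))

        # Faulty set: every bright thing in the sky is labelled "tree".
        faulty_set = [(0.9, "tree"), (0.8, "tree"), (0.1, "rock"), (0.2, "rock")]
        model = train(faulty_set)

        print(predict(model, 0.95))  # -> "tree", learned faithfully and wrongly
        ```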

        1. donk1

          Re: It's Just Pattern Recognition

          I was able to walk across the road blindfolded between 3am and 4am, hence I can walk across the road blindfolded at any time... go ahead! The stock market has been going up all year, hence it will always go up... hmmm!

      2. Doctor Syntax Silver badge

        Re: It's Just Pattern Recognition

        "And then we see Jesus in a grilled cheese."

        No, intelligence is reminding oneself that it's just grilled cheese, not even a religious painting.

      3. Toni the terrible Bronze badge
        Devil

        Re: It's Just Pattern Recognition

        Jesus is in a cheese sandwich. He is everywhere, even in your Flat White!

    2. veti Silver badge

      Re: It's Just Pattern Recognition

      You're saying that we need to understand exactly what intelligence is, before we can create it?

      Counterpoint: your mother.

      I have yet to see anyone define intelligence in any form that holds water past a couple of rounds of analysis, and so I'm not willing to dismiss AI as readily as some people who seem to believe that intelligence is some kind of magic that's inherently impossible to create.

      1. Charles 9

        Re: It's Just Pattern Recognition

        No, not inherently impossible, just something so vague and incomplete that we ourselves don't know yet what intelligence really means.

        In simpler terms, how can we teach what we don't understand ourselves?

        1. BrownishMonstr

          Re: It's Just Pattern Recognition

          Isn't that what teachers do, though?

        2. Michael Wojcik Silver badge

          Re: It's Just Pattern Recognition

          how can we teach what we don't understand ourselves?

          Name any one phenomenon we understand completely.

          Don't get me wrong. I think we are far, far away from machine intelligence that's even roughly as powerful, by some reasonable set of metrics, as human intelligence; and if we do produce such a machine intelligence, I don't expect it to look much like human intelligence (i.e. to have similar visible attributes). But as usual the clichéd handwaving objections raised about AI in this forum have little significant content.

          1. Charles 9

            Re: It's Just Pattern Recognition

            The concept of binary: on/off, white/black, 1/0. If we don't understand this, we don't understand anything, AND it's the basis for computer logic, too.

            Of course, that doesn't exclude the possibility of things that cannot easily fit into a binary world. The infinite shades of gray and all. That's part of the reason Trolley Problems keep getting brought up: they pose a dilemma that demands a (usually binary) answer, and no answer satisfies everyone.

  2. Joe W Silver badge

    "where is the intelligence"

    Coffee through the nose hurts. A lot.

    Now I'll read the rest of the article...

    1. Nick Kew
      Holmes

      Re: "where is the intelligence"

      Intelligence is knowing better than to combine coffee with Dabbs.

  3. Gordon861
    FAIL

    […takes a slug of Relentless…]

    Does anyone still drink this stuff since they changed the recipe a while back? It's now syrup.

  4. Version 1.0 Silver badge
    Unhappy

    Is it an oxymoron?

    I think it's just a marketing term for a poor database... no different really from "search engine" - who cares about Truth and Reality when you can market crap and make billions?

    1. Rich 11

      Re: Is it an oxymoron?

      who cares about Truth and Reality when you can market crap and make billions?

      Are we back on the subject of Trump University?

      1. sprograms

        Re: Is it an oxymoron?

        Perhaps. I thought it was referring to the synthetic mortgage-backed securities business, or perhaps the investment advisory industry.

  5. GX5000

    "He is also the only man in the world who can articulate the word "recidivism" mid-sentence without a few practice runs or pausing for a swig of Monster Energy between syllables."

    Way to go, lowering the bar underwater.

    No wonder my younger subordinates are all lost unless I explain in emojis.

  6. Justthefacts Silver badge

    Logical fallacy alert.....

    So, current algorithms aren't able to give answers matching the best of human thought. But that's neither a reasonable requirement nor a necessary one. Just like automated driving, they only have to match the *average* human. And the truth is, average humans are way more biased than we admit.

    When we recruit people, do you think we genuinely hire the best? Or just people who match our judgement of the skills required, with a background similar to people we have previously seen perform well in the job? Do you think juries and judges are unbiased? These algorithms are presenting us with evidence that, if we examine the data honestly, human decision making is not great and we are embarrassed about it.

    Of course we should try to outperform them, and decision making that tends to revert to the mean can't get us there. But the truth may well be that the algorithms are no worse than the guy who slopes off early, or always approves loans to people he was at school with because their business plans seem sensible to him, or only ends up hiring white people, not because they are white but because in each case they talk a good talk about being a team player in the interview, i.e. follow his norms.

    1. doublelayer Silver badge

      Re: Logical fallacy alert.....

      I'm not sure that's good enough. Self-driving cars should reach the safety level of an average driver before they're used, which they have done and exceeded in tests; that's why they're acceptable, though of course they need to pass those tests under more difficult conditions too. However, even if we get a system to make judgements at the level of an average person (difficult to quantify for topics like bigotry), it can still degrade the situation. If we can quantify negative events like these, we can also identify places where their frequency is excessive and improve them, raising the average. We can also find ways of reducing the likelihood of those events when the stakes are higher, for example by moving the trial of a person likely to face discrimination to a location with less connection to the case. With an automatic tool, the parameters can't easily be changed without outright manipulating the result, and a great deal of oversight is needed to ensure that no unforeseen biases are affecting the people to whom the model is applied.

      A uniform mediocrity is not always enough, and that's still assuming we can achieve it with these tools. I think the evidence shows that, sometimes, we fail even to reach that threshold.

      1. Justthefacts Silver badge

        Re: Logical fallacy alert.....

        I partly agree. What you're saying is that we need some supervisory oversight of the outcomes, where the supervisor has higher expertise, can analyse the outcomes that fall below the average level (which will often have some special characteristic that the lower-level decision maker hasn't accounted for), and can tweak the decision criteria of the lower-level decision maker to move its normal distribution upwards. That, I agree with.

        I also agree that higher authority needs to be human, and we shouldn’t defer to “computer knows best”. Plus, with classical AI it’s really difficult to tweak the parameters in a semantically meaningful way. That is, in ML terms, we don’t want to overfit, such that we are only training it to be more lenient to people with the same surname. So, yes, the evidence *does* show that sometimes we fail to meet that threshold.

        Where we differ:

        a) I don't see how this differs from the current situation, where in many fields we see "failing institutions" that cause serious harm and then we hold public inquiries to correct them. Care homes that abuse their patients. Investment banks with cultures encouraging traders to manipulate interest rates. Hospitals that build up inventories of surgical waste, failing to realise that one person's logistical hitch is someone else's mother, post mortem.

        b) *Of course* AI would be expected to replace junior-level decision making first. We shouldn’t up-end or flatten our hierarchies of decision-making or appeal just because we automate one layer. Today, senior bank staff can overrule junior ones. But we need fewer senior staff than junior ones. And that applies *even amongst judges*. Most cases are routine.

        c) I think the *real* problem in the long term is the hollowing out of expertise. How do you grow an upper layer of really good decision makers if there is no lower layer for them to grow from? We will get a set of people who have never been "on the ground" working through the morass of easier decisions. They will become increasingly blinkered and academic. And that's related to your point about manipulating results. When there are 10,000 court officials, there is a variety of viewpoints and expertise, and they remain culturally connected by debate. If the easiest 99% of decisions are delegated to software, we only need a top layer of 100 supervisors setting policy by parameter rule. That looks rather like an autocracy, and seems very vulnerable to single-point manipulation.

        One of the things that protects our democracy is that the lower layers don't always follow the rules set down by their supervisors. Ironically, the very feature that enables individuals to enforce their own bigoted ideas in opposition to societal morals also protects us from the diktats of dictators.

  7. Anonymous Coward
    Anonymous Coward

    Turkish pronouns

    Turkish does not have gendered pronouns: no he or she, him or her; just one word for all. Anecdotal proof that you can have a very sexist society without gendered pronouns! All that he/she business cracks me up, having known this for years...

  8. SVV

    Data bias

    It's slightly incorrect to say that the data is biased in the example given here. The data is accurate and produces a correct answer with whatever statistical analysis you choose to run on it. It is better to say that the data reflects an underlying bias (prejudice in average sentences, in the case described).

    It is worth emphasising that this sort of data processing has nothing to do with intelligence, other than the intelligence of the people working out how to do the analysis. It is also far from new: the insurance industry, for example, is entirely based on the ability to use statistical analysis to calculate risk from a combination of facts, and that industry needs to be as "biased" as possible in order to maximise profits.
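
    To make that distinction concrete, a toy sketch with invented numbers: the analysis is perfectly correct, and what it reports is the disparity already baked into the records.

    ```python
    # Sketch: "accurate" sentencing records (numbers invented) analysed
    # correctly. The statistics are right; the bias they reflect is in
    # the underlying data, not in the arithmetic.

    records = [
        {"group": "A", "sentence_months": 12},
        {"group": "A", "sentence_months": 14},
        {"group": "B", "sentence_months": 20},
        {"group": "B", "sentence_months": 22},
    ]

    def average_sentence(records, group):
        months = [r["sentence_months"] for r in records if r["group"] == group]
        return sum(months) / len(months)

    for group in ("A", "B"):
        print(group, average_sentence(records, group))  # A: 13.0, B: 21.0
    ```

    Any model fitted to those records will recommend the disparity right back at us, which is the sense in which the data "reflects" rather than "is" the bias.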

  9. naive

    Bias, facts and statistics

    Like it or not, all AI will be based on statistics and things people know.

    Suppose one has to share a hotel room with either a rabbit or a lion.

    Any sensible AI system helping to decide which room would offer the better experience would probably recommend the room with the rabbit as roommate.

    The AI system would base its decision on information like "lions are large carnivorous predators with large teeth", so the fluffy bunny is probably preferable to the lion.

    It would be interesting to see whether this is considered biased information.
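
    For what it's worth, that kind of "decision" is just a lookup over stored assertions. A minimal sketch, with the facts and scoring invented: nothing below knows what a hotel room, a lion or a human is.

    ```python
    # Sketch: roommate choice as attribute scoring over stored facts.
    # The facts and the scoring rule are invented for illustration.

    facts = {
        "lion":   {"carnivorous_predator": True, "large_teeth": True},
        "rabbit": {"carnivorous_predator": False, "large_teeth": False},
    }

    def roommate_risk(animal):
        # Crude score: each dangerous attribute present adds a point.
        return sum(1 for present in facts[animal].values() if present)

    best = min(facts, key=roommate_risk)
    print(best)  # -> "rabbit", but only because someone chose these facts
    ```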

    1. John G Imrie

      Re: Bias, facts and statistics

      Feed it Monty Python's Holy Grail, then see what it thinks about rabbits.

      -- Tim (the Enchanter)

    2. Doctor Syntax Silver badge

      Re: Bias, facts and statistics

      "The AI system would base its decision on information like 'lions are large carnivorous predators with large teeth', so the fluffy bunny is probably preferable to the lion."

      An AI basing its decision on no more than this would just as likely prefer lion or refuse to give an answer at all. In order to give a sensible answer it needs to "know" the implications of large carnivorous predators for human beings and, indeed, that the sharer is a human being.

      1. Doctor Syntax Silver badge

        Re: Bias, facts and statistics

        I should also have said that the AI needs to be instructed that it's to make the decision on behalf of the human, not the lion or the rabbit. It's easy to take so much for granted.

        1. kirk_augustin@yahoo.com

          Re: Bias, facts and statistics

          Exactly. The computer will not know what a hotel room is; for all it could work out, this could be a circus act that requires lions. The problem is that computers never really know anything at all.

    3. kirk_augustin@yahoo.com

      Re: Bias, facts and statistics

      There is no way a computer can know information about lions and bunnies in a way that lets it make decisions like that. It takes a human programmer to try to simulate reasonable choices based on data like size and danger, and that is far too unreliable ever to be put to the test. So those sorts of choices should instead be left to humans, who have a built-in value system and knowledge of the world.

  10. Anonymous Coward
    Anonymous Coward

    So - Garbage in, Garbage out? Just on bigger data sets?

  11. holmegm

    There's a pretty large assumption being made here that the data is unfairly biased, as opposed to simply reflecting an unpalatable reality.

    It's probably necessary to show your work.

  12. Ken Hagan Gold badge

    "If the AI were intelligent, it would work this out for itself. It's not so it doesn't."

    I'd dispute that. I'm always being told that *we* are intelligent, but the hard evidence is that millions of people have spent several thousand years on the problem and are only very slowly figuring it out.

    That's probably why we *still* don't have a definition of "intelligence" that isn't circular (with an embarrassingly small radius).

  13. hellwig

    And Facebook wondered...

    Why it couldn't create a software algorithm to provide relevant, impartial news and social media posts? The fallacy of the modern nerd is thinking they're smarter than everyone else despite evidence to the contrary.

    "Just because Einstein couldn't rationalize his theories of relativity without the cosmological constant doesn't mean my hemp-based dating app can't solve the mysteries of the universe!"

  14. Rich 10

    chaos theory

    And all those people in this thread looking up data on Google to find examples to justify their answers are introducing new biases into Google's predictive "AI" algorithm. At the end of the day the world is now different, just because of this one little query storm. When a (technical) butterfly flaps its wings at El Reg.....

  15. Doctor Syntax Silver badge

    Numbers and the like have an unfortunate effect on people: people tend to believe them. It leads to quoting results to infeasible levels of precision. It leads to measuring and acting on the stuff which is easy to measure and ignoring the stuff which is more difficult to measure, even if it's more meaningful (a simple example is the setting of arbitrary speed limits and installing equipment to enforce them whilst ignoring tailgating).

  16. Anonymous Coward
    Anonymous Coward

    Given biased training data, do we not want the AI to be equally biased?

    I.e., with the doctor/nurse example: if the majority of training examples refer to doctors in the masculine and nurses in the feminine, and assuming the training set is reasonably representative of real input, the likelihood is that this is precisely the translation the majority of users want or expect. The fact that the training data is biased simply implies that the end user is likely equally biased. If the AI were to deliberately remove this bias, that would be getting worryingly close to the machines imposing their will on us... and I can see nothing but madness down that road.
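
    As a sketch of why the model lands on the "expected" translation (corpus counts invented; real systems are statistical in a far more elaborate way), a frequency-based choice simply returns whichever gender dominated the training data:

    ```python
    # Sketch: noun gender chosen by training-corpus frequency.
    # The counts are invented for illustration; the majority wins.

    corpus_counts = {
        ("doctor", "masculine"): 900, ("doctor", "feminine"): 100,
        ("nurse", "masculine"): 80, ("nurse", "feminine"): 920,
    }

    def pick_gender(noun):
        # Choose the gender most often seen with this noun in training.
        return max(("masculine", "feminine"),
                   key=lambda g: corpus_counts.get((noun, g), 0))

    print(pick_gender("doctor"))  # -> masculine
    print(pick_gender("nurse"))   # -> feminine
    ```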

    1. Charles 9

      Then how do you handle the edge cases (female doctors and male nurses) without a fuss about discrimination being thrown?

      1. holmegm

        "Then how do you handle the edge cases (female doctors and male nurses) without a fuss about discrimination being thrown?"

        We had a few female doctors and male nurses back when people assumed "he" and "she" for the generic cases. Somehow everyone survived this just fine.

  17. Garahag

    If machines are not intelligent, just algorithmic... are humans not algorithmic, or not intelligent?

    1. Charles 9

      The better question to ask is, "What is intelligence?" Because we don't even have a concise answer to that question yet.

    2. kirk_augustin@yahoo.com

      It does not matter at all whether humans are also computers and algorithmic. The point is that we have an inherent, built-in and functioning value system, emotions, unambiguous data storage and retrieval, instincts, pain/pleasure motivations, etc., which we will likely never understand or be able to program into a computer.

      We function in complex ways relevant to our inherent system of values, instincts, etc.

      Since computers can never share this exact set of instincts and values, they will never be relevant to us in terms of those human instincts and values.

      1. Charles 9

        I wouldn't say our value system is inherent, because it differs from person to person. It's more that it's acquired but subconscious, which is why we don't understand it ourselves. As for our data storage, I wouldn't call it unambiguous, given how easily we MIS-recall things (hence my constant password protest: "Was it correcthorsebatterystaple or donkeyenginepaperclipwrong?")

  18. bpfh

    As for the nurse example...

    Well, this is also the great thing about English having mostly neutral nouns; where nouns are male or female it is usually by convention (a ship is a "she" in English, but a "he" in French and Russian, for example), so the translation has to do some guesswork to get from a neutral noun to the gendered form you want.

    So "I talked to the nurse today" becomes "j'ai parlé à l'infirmière aujourd'hui", but if you specify "I talked to the male nurse today" it changes to the male form, "j'ai parlé à l'infirmier aujourd'hui". You can overrule the guess if you need to by being explicit.
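
    That override behaviour can be sketched too; the two-entry lexicon below is invented, but the shape is the point: an explicit marker in the source text beats the frequency-based default.

    ```python
    # Sketch: explicit gender in the English source overrules the
    # corpus default. The lexicon and default are invented.

    lexicon = {
        ("nurse", "feminine"): "l'infirmière",
        ("nurse", "masculine"): "l'infirmier",
    }
    default_gender = {"nurse": "feminine"}  # the corpus-frequency default

    def translate_nurse(phrase):
        gender = default_gender["nurse"]
        if "male nurse" in phrase:
            gender = "masculine"  # explicit marker wins
        elif "female nurse" in phrase:
            gender = "feminine"
        return f"j'ai parlé à {lexicon[('nurse', gender)]} aujourd'hui"

    print(translate_nurse("I talked to the nurse today"))       # ...infirmière...
    print(translate_nurse("I talked to the male nurse today"))  # ...infirmier...
    ```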

  19. Anonymous Coward
    Anonymous Coward

    The reason for calling it AI is simply to accelerate the current trend of justifying doing nasty things to other people with "the computer said".

  20. JeffyPoooh
    Pint

    If these conclusions are...

    If these conclusions are shocking to you, then you're an AI fanboi.

    Although being a fanboi gives a warm and pleasant syrupy feeling inside the skull, it is not actually a good thing as it's the exact opposite of actually keeping your brain switched on. Many parallels with cults.

    (The AI-propelled spell checker in my device keeps insisting that the word fanboi should be spelled 'cannot'. Artificial Imbecile.)

  21. I.Geller Bronze badge

    From the discoverer: Machine Learning and Intelligence

    Machine Learning is the addition of structured texts, where each pattern is a direct analogue of a programming-language command. The structured text sets the context and multiple subtexts for these patterns.

    As for intelligence: it is the ability to find, use and modify sets of tuples, where in mathematics a tuple is a finite ordered list (sequence) of elements. I.e., speaking about intelligence, we speak about sets of phrases, each of which is explained by a set of other phrases.

  22. StuntMisanthrope

    There isn’t any.

    That’s the point. Two types of path. Squid eye or pressure chemical with about a petabyte in use. Ranking FAQ cache, the 64 million dollar question. See me, for a bollocking, I thought it was live chat. #programadatelinecondition
