AI pioneer reckons China's where the Rise of the Machines will start

Artificial intelligence pioneer and former head of the Google Brain project, Andrew Ng, has said China's fast-growing internet economy has created the conditions for the best AI research opportunities in the world. The software guru was poached from Google by Baidu Research last year to become the firm's chief scientist, and …

  1. TonyWilk

    I , for one...

For my part, I welcome our new "AI" overlords

    (best I can do with Google Translate :)

    1. Khaptain

      Re: I , for one...

Do they perhaps have any pretty female robots... like in Ex Machina

      1. Graham Marsden
        Coat

        Re: I , for one...

        Can I have Prawn Crackers with that, please...

  2. Anonymous Coward
    Anonymous Coward

    He's not wrong

    "...a willingness to challenge even basic assumptions"

    I have worked with Chinese software engineers on a number of occasions and they certainly challenged my basic assumptions.

    1. Bucky 2
      Black Helicopters

      Re: He's not wrong

      I remember calling a plumber and finding that it was not standard practice for him to bring his own tools.

      As I chat with people who still live in China, I understand that this continues to be true to the present day.

      My perception is that the innovation that comes out of China is along the lines of the WWII innovation that was called "Jerry rigging." It is certainly true that the engineers in China enjoy a similar lack of availability of resources.

      As for Baidu, well, it's impossible to talk about a "search" engine in China without addressing the increasingly terrifying effect of Chinese censorship. I'm sure that it is indeed very streamlined once you eliminate all concepts which are not approved ahead of time.

    2. Mike Shepherd

      Re: He's not wrong

      I once worked with a diminutive Japanese software engineer. I refrained from physical assault on the annoying t**t only because I thought he might be skilled in karate.

  3. Andy Non Silver badge

    I'm sorry Dave, but I can't let you do that.

    Why don't you sit down and take a stress pill...

    1. Destroy All Monsters Silver badge

      Re: I'm sorry Dave, but I can't let you do that.

      ...bought via Baidu!

  4. John Smith 19 Gold badge
    Unhappy

    "you need data and you need compute power,"

    And yet the brains of babies have neither.

    Do you not think they are doing it wrong?

    1. Triggerfish

      Re: "you need data and you need compute power,"

      I thought the whole thing about babies brains was that it was a shedload of computing power dedicated to processing data?

      1. Destroy All Monsters Silver badge
        Pint

        Re: "you need data and you need compute power,"

        I thought the whole thing about babies brains was that it was a shedload of computing power dedicated to processing data?

        Exactly. There are two things in this universe: Bulk matter and structures able to process data.

      2. Michael Wojcik Silver badge

        Re: "you need data and you need compute power,"

        I thought the whole thing about babies brains was that it was a shedload of computing power dedicated to processing data?

        Yes. And they acquire data very quickly.

The OP should look at some of the extensive contemporary research into infant learning. A lot of good, methodologically sound work has been done in the past couple of decades - a huge improvement on much of what went before, which was either anecdotal and invented nonsense or shallow, narrow compilations of statistics (which in turn fed the developmentalism we still haven't broken free from in the "West").

  5. Destroy All Monsters Silver badge
    Holmes

    "I enjoy working with people. I have a stimulating relationship with Dr. Ng of Baidu"

    Seriously, I hope his statement was marketing material by Baidu and he just read it off.

    Content compression: "I love Baidu"

  6. Captain DaFt

    Another wrong place to look for AI

    Authoritarian states like China are never going to develop an AI, because they couldn't control it.

    Same for most other major Governments, corporations, and universities.

The two most likely scenarios for true AI emergence seem, to me at least, to be the following:

    Created by some enthusiast working on his own on 'something cool'.

    Or, accidentally via increasingly complex interactions between programs, data sets, and hardware.

    The second one would be the real problem, nowhere to pull the plug, because it arose naturally from everything. You'd have to try to shut down its environment, with it resisting*.

    ... and that's enough speculation from me for one day.

    *The traits I'd expect in a practical AI are curiosity, an ability to learn and remember, and self awareness. I'd posit that self awareness implies either self preservation or suicide, and an AI that immediately terminates itself could hardly be called practical in evolutionary terms.

    1. Chris G Silver badge

      Re: Another wrong place to look for AI

      "*The traits I'd expect in a practical AI are curiosity, an ability to learn and remember, and self awareness. I'd posit that self awareness implies either self preservation or suicide, and an AI that immediately terminates itself could hardly be called practical in evolutionary terms."

Unfortunately, given the nature of the beast, a couple of the most likely traits will be lack of empathy and lack of inhibition - two of the primary aspects of psychopathy. So the case where true AI (whatever that actually is) and self-awareness occur spontaneously, rather than being developed under controlled conditions with a learning process that imbues morals and ethics, will likely be the most dangerous. How would you teach empathy, something that animals and humans evolved, to a computer?

      1. amanfromMars 1 Silver badge

        Smarter AI learns a lot from Absolutely Fabulous Fabless Humans

        Evolved humans can choose to temporarily, whenever needed, turn off empathy, Chris G.

        And 'tis always a lingering question as to whether that be an advanced process or retrograde step.

    2. Triggerfish

      Re: Another wrong place to look for AI

      "Or, accidentally via increasingly complex interactions between programs, data sets, and hardware."

River of Gods by Ian McDonald explores a similar emergence of AI, as do the prequel stories. It's a good novel as well.

    3. WalterAlter
      Megaphone

      Can You Spell C R I M I N A L - E L E M E N T ?

      >>Authoritarian states like China are never going to develop an AI, because they couldn't control it.

Axiom: TECHNOLOGY IS INHERENTLY DEMOCRATIZING. Therefore there will be nothing to keep criminal elements from adding A.I. to their heist bag. For my reference I shall cite a fascinating motion picture graced with seriously heretical proclivities: "Kingsman: The Secret Service"

      1. Michael Wojcik Silver badge

        Re: Can You Spell C R I M I N A L - E L E M E N T ?

        Axiom: TECHNOLOGY IS INHERENTLY DEMOCRATIZING

        Which is why technological progress has inevitably led us to the present state where power is shared equally by all.

        Axiom: If it fits on a bumper sticker, it's been simplified past the point of meaning anything useful.

        Ah, September. Will you never end?

    4. Michael Wojcik Silver badge

      Re: Another wrong place to look for AI

      I love the comments on articles like this. They do such a great job of rehashing, in greatly simplified form, the AI debates of the 1960s and '70s.

      As the Avett Brothers said, "Ain't it funny how most people (I'm no different), they love to talk on things they don't know about?"

  7. Shades

    Translation

    "He added China has been able to leapfrog other countries in terms of its tech developments, with the country's existing tech infrastructure being less of a "chore" to navigate."

    Translation: China pretty much doesn't give two shits about Intellectual Property Rights.

  8. Schlimnitz

    "willingness to challenge even basic assumptions"

    Also willingness to turn a blind eye to any ethical concerns.

    1. Little Mouse Silver badge
      Boffin

      Re: "willingness to challenge even basic assumptions"

      I was expecting a brain-in-a-jar angle to the story at the very least.

      Or a network of babies all wired-up together in a lab somewhere, perhaps causing amoral scientists' noses and ears to bleed when they get upset.

  9. Michael Wojcik Silver badge

    Reasonable goals

    It's conceivable that Deep Learning is the royal road to strong AI - though I tend to doubt it - but even if it is, it's going to be a long road.

People who don't pay attention to the field sometimes don't understand just how tremendously far we are from solving even basic problems; and marketing fluff like this does nothing to clarify the picture. Ng knows his field (DL), and presumably has a decent perspective on AI research in general, but he's being disingenuous. When he talks about a 99% "success rate" for speech recognition, he's really referring to the ~95% rate that's currently achieved only with a single speaker under reasonably good conditions - applications like Siri and Dragon NaturallySpeaking.

    Try that with, say, parlor discourse, with multiple speakers carrying on multiple conversations, people entering and leaving conversations, etc. You have to deal with conversation entailment; with sarcasm, jokes, in-group references; with all sorts of implicit antecedents and predicates; with a vast cultural context. It's hard for humans.
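
For anyone curious how those headline "accuracy" figures are actually computed: speech recognition is conventionally scored by word error rate (WER), with accuracy roughly 1 minus WER. A minimal sketch (my own illustration, not from the article or Ng):

```python
# Word error rate: Levenshtein edit distance over words,
# normalised by the length of the reference transcript.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1 sub / 6 words ≈ 0.167
```

Note that WER is measured against a single reference transcript in controlled benchmarks - exactly the "single speaker, good conditions" caveat above. Overlapping parlor conversation has no such clean reference.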

Even creating metrics for testing that sort of NLP system is difficult, because we get into the realm where human judges can't arrive at a consensus on the precise interpretations of discourse. (That's been shown by a variety of methodologically sound studies - it's not just speculation. Human language use is stochastic and heuristic: we toss words at each other until we're satisfied that we've probably arrived at sufficiently-congruent meanings, or we give up.)
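
The standard way to quantify that lack of consensus is chance-corrected inter-annotator agreement, e.g. Cohen's kappa. A minimal sketch (my own illustration; labels are hypothetical):

```python
# Cohen's kappa: agreement between two annotators, corrected for
# the agreement expected by chance given each annotator's label mix.
from collections import Counter

def cohens_kappa(a, b):
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    if expected == 1.0:        # degenerate case: both always use one label
        return 1.0
    return (observed - expected) / (1 - expected)

# Two judges labelling the same four utterances:
judge1 = ["joke", "literal", "joke", "sarcasm"]
judge2 = ["joke", "literal", "sarcasm", "sarcasm"]
print(cohens_kappa(judge1, judge2))  # ≈ 0.64: moderate, not consensus
```

Raw agreement of 75% sounds high, but kappa deflates it once chance agreement is factored out - which is why discourse-interpretation tasks with low kappa make poor benchmarks.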

    Of course, there are potential benefits. I eagerly await the day when a machine model can accurately distinguish between an adjectival noun phrase and a noun phrase in apposition - a task that seems to elude Reg writers and editors. (Those commas in the first paragraph of the story. They should not be there.)

    And that's just the NLP domain, which is a small part of strong AI.

    It's much too soon, in other words, for serious CS researchers to be talking about achieving strong AI. Set realistic goals and work toward those. Leave the strong-AI speculations to the philosophers.

    1. Queasy Rider

      Re: Reasonable goals

      You just couldn't restrain yourself, could you? I was all ready to up vote you till you conflated UI with speech recognition. It is not the task of speech recognition to understand sarcasm, jokes, in-group references or cultural contexts except for certain homonyms, which even plenty of people don't get, especially those with first vs. second language limitations. No up vote for you.

