I, for one...
拿我来说，欢迎我们的新“AI”霸主 ("Speaking for myself, I welcome our new 'AI' overlords")
(best I can do with Google Translate :)
Artificial intelligence pioneer and former head of the Google Brain project, Andrew Ng, has said China's fast-growing internet economy has created the conditions for the best AI research opportunities in the world. The software guru was poached by Baidu Research from Google last year to become the firm's chief scientist, and …
I remember calling a plumber and finding that it was not standard practice for him to bring his own tools.
As I chat with people who still live in China, I understand that this continues to be true to the present day.
My perception is that the innovation that comes out of China is along the lines of the WWII-era improvisation known as "jury-rigging." It is certainly true that engineers in China face a similar scarcity of resources.
As for Baidu, well, it's impossible to talk about a "search" engine in China without addressing the increasingly terrifying effect of Chinese censorship. I'm sure that it is indeed very streamlined once you eliminate all concepts which are not approved ahead of time.
I thought the whole point about babies' brains was that they're a shedload of computing power dedicated to processing data?
Yes. And they acquire data very quickly.
The OP should look at some of the extensive contemporary research into infant learning. A lot of good, methodologically sound work has been done in the past couple of decades - a huge improvement on much of what went before, which was either anecdotal and invented nonsense or shallow, narrow compilations of statistics (which in turn fed the developmentalism we still haven't broken free from in the "West").
Authoritarian states like China are never going to develop an AI, because they couldn't control it.
Same for most other major governments, corporations, and universities.
The two most likely scenarios for true AI emergence seem, to me at least, to be the following:
Created by some enthusiast working on his own on 'something cool'.
Or, accidentally via increasingly complex interactions between programs, data sets, and hardware.
The second one would be the real problem, nowhere to pull the plug, because it arose naturally from everything. You'd have to try to shut down its environment, with it resisting*.
... and that's enough speculation from me for one day.
*The traits I'd expect in a practical AI are curiosity, an ability to learn and remember, and self awareness. I'd posit that self awareness implies either self preservation or suicide, and an AI that immediately terminates itself could hardly be called practical in evolutionary terms.
"*The traits I'd expect in a practical AI are curiosity, an ability to learn and remember, and self awareness. I'd posit that self awareness implies either self preservation or suicide, and an AI that immediately terminates itself could hardly be called practical in evolutionary terms."
Unfortunately, given the nature of the beast, a couple of the most likely traits will be lack of empathy and lack of inhibition, two of the primary aspects of psychopathy. So the case where true AI (whatever that actually is) and self awareness occur spontaneously, rather than developing under controlled conditions with a learning process that imbues morals and ethics, will likely be the most dangerous. How would you teach empathy, something that animals and humans evolved, to a computer?
>>Authoritarian states like China are never going to develop an AI, because they couldn't control it.
Axiom: TECHNOLOGY IS INHERENTLY DEMOCRATIZING. Therefore there will be nothing to keep criminal elements from adding A.I. to their heist bag. For my reference I shall cite a fascinating motion picture graced with seriously heretical proclivities: "Kingsman: The Secret Service"
Axiom: TECHNOLOGY IS INHERENTLY DEMOCRATIZING
Which is why technological progress has inevitably led us to the present state where power is shared equally by all.
Axiom: If it fits on a bumper sticker, it's been simplified past the point of meaning anything useful.
Ah, September. Will you never end?
I love the comments on articles like this. They do such a great job of rehashing, in greatly simplified form, the AI debates of the 1960s and '70s.
As the Avett Brothers said, "Ain't it funny how most people (I'm no different), they love to talk on things they don't know about?"
It's conceivable that Deep Learning is the royal road to strong AI - though I tend to doubt it - but even if it is, it's going to be a long road.
People who don't pay attention to the field sometimes don't understand just how tremendously far we are from solving even basic problems; and marketing fluff like this does nothing to clarify the picture. Ng knows his field (DL), and presumably has a decent perspective on AI research in general, but he's being disingenuous. When he talks about a 99% "success rate" for speech recognition, he's referring to the 95% rate that's currently only achieved with a single speaker under reasonably good conditions - applications like Siri and Dragon NaturallySpeaking.
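For what it's worth, those "success rate" figures are usually derived from word error rate (WER): the word-level edit distance between the recognizer's transcript and a reference transcript, divided by the reference length, so 5% WER is roughly the "95%" figure above. A minimal sketch in plain Python, with made-up transcripts:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / reference
    length, computed via a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[-1][-1] / len(ref)
```

On a 20-word utterance, a single substitution already costs five points of "accuracy" - and that's on clean, single-speaker audio, before any of the conditions below come into play.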
Try that with, say, parlor discourse, with multiple speakers carrying on multiple conversations, people entering and leaving conversations, etc. You have to deal with conversation entailment; with sarcasm, jokes, in-group references; with all sorts of implicit antecedents and predicates; with a vast cultural context. It's hard for humans.
Even creating metrics for testing that sort of NLP system is difficult, because we get into the realm where human judges can't arrive at a consensus on the precise interpretations of discourse. (That's been shown by a variety of methodologically-sound studies - it's not just speculation. Human language use is stochastic and heuristic: we toss words at each other until we're satisfied that we've probably arrived at sufficiently-congruent meanings, or we give up.)
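The judge-consensus problem above is usually quantified with chance-corrected agreement statistics such as Cohen's kappa, which measures how much two annotators agree beyond what random labelling would produce. A minimal sketch for two annotators (the labels and the sarcasm task are invented for illustration):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both pick the same label independently.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical judges labelling five utterances as sarcastic or literal:
a = ["sarcastic", "literal", "literal", "sarcastic", "literal"]
b = ["sarcastic", "literal", "sarcastic", "sarcastic", "literal"]
```

Kappa of 1 means perfect agreement, 0 means no better than chance - and on discourse-interpretation tasks of the kind described above, human judges routinely land well short of 1.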
Of course, there are potential benefits. I eagerly await the day when a machine model can accurately distinguish between an adjectival noun phrase and a noun phrase in apposition - a task that seems to elude Reg writers and editors. (Those commas in the first paragraph of the story. They should not be there.)
And that's just the NLP domain, which is a small part of strong AI.
It's much too soon, in other words, for serious CS researchers to be talking about achieving strong AI. Set realistic goals and work toward those. Leave the strong-AI speculations to the philosophers.
You just couldn't stop yourself, could you? I was all ready to up vote you till you conflated UI with speech recognition. It is not the task of speech recognition to understand sarcasm, jokes, in-group references or cultural context except as it pertains to homonyms. And even people have homonym problems, especially non-native listeners. So, no up vote for you in spite of your excellent stochastic and heuristic reference.
Biting the hand that feeds IT © 1998–2022