Yup, Real Life is tough
These statistical analysis machines are going to have a hard time analyzing reality.
AI does not exist, and any use of the term today is abuse.
All the biggest labs leading AI research will have you believe that their fancy game-playing software bots will one day be applicable to the real world. The skills from playing Go, Poker or Dota 2 will be transferable to algorithms designing new drugs, controlling robots, teaching computers how to negotiate – you name it. One …
"AI does not exist, and any use of the term today is abuse." ... Pascal Monett
Well, one can only imagine and conclude it doesn't exist in your worlds, PM, just as you can imagine and suggest it doesn't appear to exist anywhere else either.
What you need is to hook the robot up to another ML AI back in the lab, dedicated to tuning the hyperparameters of other AIs. You could call the lab one "Teacher" and allow it - ohh, I dunno, fifteen years or so to train up the "pupil" robot.
Now where have I met that paradigm before?
"Deep learning systems work well under specific conditions set by developers; these hyperparameters are carefully tuned to help them learn patterns from data."
So in other words, the programmers can train this AI... It's less "AI" and more "Human Intelligence"; it's just that we are the humans, and we are imparting the intelligence.
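To make the point concrete: even the "tuning" is a human decision. Here is a minimal, purely illustrative sketch of hyperparameter grid search, where the search space and the stand-in loss function are entirely made up for this example - the humans decide what is even tryable.

```python
# Toy grid search: the "intelligence" in picking hyperparameters is human.
# toy_loss is a hypothetical stand-in for "train a model, measure validation loss".
import itertools

def toy_loss(learning_rate, batch_size):
    # Pretend loss surface with a minimum the humans happened to include.
    return (learning_rate - 0.01) ** 2 + (batch_size - 32) ** 2 / 1e4

# The grid itself is hand-chosen by a developer.
grid = itertools.product([0.001, 0.01, 0.1], [16, 32, 64])
best = min(grid, key=lambda params: toy_loss(*params))
print(best)  # whichever combination the human-picked grid happened to contain
```

If the right values aren't in the grid, no amount of searching finds them - which is rather the commenter's point.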
Proof that real AI does not exist: get an ant, worm or other bug. It has a few (in comparison) neurons, perhaps a community (so a collection of AIs ;) ), and it can do amazing things! Ants can find food, build habitats and *farm*.
Now do the same with current "AI", and watch the robot barely be able to stand. XD
All the machine learning stuff is very domain-specific: one subsystem is used for processing LIDAR data, another one is used for video, etc. AFAIK no one is using ML itself to orchestrate this, though no doubt certain combinations are probably running through ML controllers. But, of course, more data is always needed, which is why all the companies are keen to shift stuff as quickly as possible so that their customers (eternal beta-testers) can collect that data for them.
What caught my attention was the amount of time real-world training actually takes. Unless the same AI can power multiple robots, these systems will have to learn at exactly the same rate we meatbags do, or even more slowly. Other machine learning scenarios involve many hours of trial and error or guided learning done in parallel; that does not seem to have been the case in this example. There is also the possibility of direct transfer of learning as the technology progresses: once one system has learned a skill, it can be given to similar systems without their having to go through the same learning process.
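The "direct transfer" idea above amounts to copying learned parameters between identical systems. A minimal sketch, with an entirely hypothetical Controller class and made-up weights, of why the second robot doesn't need the 960 hours:

```python
# Illustrative only: once one controller has learned its weights,
# an identical controller can copy them rather than re-learn.
import copy

class Controller:
    def __init__(self):
        self.weights = [0.0, 0.0, 0.0]  # untrained

    def act(self, x):
        # Trivial stand-in for a learned control policy.
        return sum(w * x for w in self.weights)

teacher = Controller()
teacher.weights = [0.2, -0.5, 1.3]  # pretend these took 960 hours to learn

pupil = Controller()
pupil.weights = copy.deepcopy(teacher.weights)  # transfer is just a copy

assert pupil.act(2.0) == teacher.act(2.0)
```

Humans can't do this; two trained networks with the same architecture can, which is the asymmetry the comment is pointing at.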
Exactly so. If you try to understand the human mind, you find you have to study the brain and its constituents - neurons, synapses, neurotransmitters, etc. But the brain is not an isolated system; it is just the biggest and most obvious part of the nervous system, without which it would have no function at all. And the nervous system is linked tightly to the rest of the body.
What computers can do is a subset of what the brain can do, abstracted away into a machine. The brain can count and do logic, and computers can do those things much faster and more reliably. But in the absence of the reasons why brains count and do logic, so what?
I didn't go into the details of the study. But considering how much of our mental performance is interlinked with (and limited by) our body, with its actuators and sensors, it seems unlikely we'll ever get any robotic AI that is not, to at least a certain extent, "aware" of its body. ... Evil Auditor
How about considering the much easier and quicker provided route/root, Evil Auditor. Have Humans recognise they be as Robots too. Advanced IntelAIgent Machines.
And Future Programmed Always Almighty Creative and Never Oft Unnecessarily Destructive is a Fine Lead to Follow with Questions to Answer.
All of these things are still just statistical models.
People hope that by training on enough data, these things will produce inferences. But they don't. They just train on the data.
A baby who grows up to the age of, say, five doesn't need to be taught separately that a baseball bat on the head hurts just as much as a Tonka toy or whatever else. It infers that from the data it has and applies it to everything it sees.
This is the crucial step that we can neither define, detect nor induce in our "AI" of today. People just hope that a complex enough system will somehow display this trait (one we can't even define) without us doing anything different from what we're doing now. I can't see it. I'm not even sure it's possible with classical computing.
960 hours of training is also NOTHING for a system that is itself only a slow-running tiny-sliver mockery compared to even insect-sized neural networks running in real-time.
We honestly need to just find something new... anything that needs ANY human parameter-tweaking or hand-holding obviously is not sufficiently able to make its own inferences about what those parameters should be, what's important or not, and what's going to lead to success.
"All of these things are still just statistical models."
Exactly! And any of them that have a useful function are, in real terms, simple tools. Compare that usefulness to, say, a screwdriver, without which getting a screw into a piece of wood would be next to impossible: by that standard the current applicability of most AIs is not that great. The alternative to a screwdriver is a hammer, cruder and likely to cause damage, whereas the alternatives to most AI uses so far are often better, quicker and more efficient.
From Goodfellow, Bengio, Courville (2016) - Deep Learning:
Dataset size on which common computer models learn: 10^4 - 10^9 examples (e.g. images). // Humans, when we are awake: full-time video, audio, tactile, smell, etc.

Connections per neuron: computer model: 100 - 10^3 (there are outliers, but not commonly used). // Human: >10^4

Number of neurons: computer model: 10^6 - 10^7 (also, neuron models are simplified a lot). // Human: 10^10
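The order-of-magnitude arithmetic behind those figures (using the upper ends quoted above, nothing more precise than that):

```python
# Total "connections" ~ neurons x connections per neuron, very roughly.
model_synapses = 10**7 * 10**3    # common computer model, upper end: 1e10
human_synapses = 10**10 * 10**4   # human figures quoted above: 1e14

print(human_synapses // model_synapses)  # the brain has ~10,000x more connections
```

And that factor of 10^4 ignores the point made below: each biological neuron is also far more complicated than its simplified silicon model.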
There is a lot we cannot efficiently model in silicon and more we don't know about individual neurons and their local interaction. One problem: we want to be able to transfer learning (topology, weights) between computers, so we have to use digital circuits.
Network size is limited by computation speed, even though the workload is well-parallelizable. I hope that Intel releases Stratix 10 MX soon, and not only to national-interest buddies in the US. I think the work that can be done by current neural networks is impressive.
I am not worried about AI ever being a threat to humans. Without human guidance, a machine will still just be a machine, incapable of escaping its programming parameters. I’m not sure if humans can be taught creativity, which means we could never teach it to machines.
The true danger will be when the human brain can directly interface with, and be augmented by, the capabilities of machines. Imagine having all of human experience to draw upon to decide your actions, plus the speed and strength of machines. The first human who successfully does that will be given a new name, God, and his first act will be to ensure that no such connectivity is ever done using any other human brain.
Biting the hand that feeds IT © 1998–2021