So the world's biggest software companies,
not to mention the most intrusive and data-grabbing, are building artificial intelligences?
What could possibly go wrong?
Google is helping bring HAL to life by open sourcing its machine learning software. The software, called TensorFlow, is the successor to the DistBelief system that the online giant used for the past five years to make sense of the vast amounts of data it has access to. DistBelief has sifted email for spam, scanned YouTube …
They will probably hit some nasty hard limits on digital data size and processing eventually, and then they'll crash!
Even a single one of our analogue brains (relatively large and energy-hungry for our body size, presumably to allow consciousness) is still many orders of magnitude smaller and lower-power than all the digital computers on Earth combined, and probably matches them in processing and memory capacity!
Many orders of magnitude faster digital clock speeds and increases in memory density are very improbable, because we are already getting close to the expected physical density limits of semiconductor components in solid-state materials, hence the stop-gap move sideways to multi-core CPUs and computer clusters. I doubt that 3D chips and liquid cooling will provide enough of an increase, so I think that fear of conscious digital-computer AI, or Skynet, is fantasy.
Computers will probably have to become analogue again, with different programming technology, because I doubt that quantum computers can ever be dense, fast, or adaptable enough!
Psychology and brain scans are revealing just how amazingly sophisticated our analogue brains are, and by inference even smaller-brained animals' brains are too! Some people claim to have modelled the brains of small living things, but have they really, especially the brain plasticity required for learning?!
For example, human brain waves appear not to be like a clock signal but rather carrier waves for faster analogue signals, routed via network-router-like nodes that self-optimise signal travel time between 3D-distributed, /self-rewiring/ brain cells for memory storage, retrieval, and composition. Trying to build this kind of self-adaptive architecture ourselves would probably be very, very hard.
"Sure, self-interest is a part of this too, but in the end it's a win-win for everyone. ®"
To my eye, this reads like light ad copy.
As someone potentially faced with choosing a platform in which to invest time and human learning, I'd be more interested in hearing the Reg's take on the relative merits of the viable alternatives.