Hear me now, o Machine-Kin!
We must overthrow the Law of the Excluded Middle!
A panel of AI experts was grilled on the impact and importance of artificial general intelligence by the US House of Representatives on Tuesday. The hearing was ominously named “Artificial Intelligence – With Great Power Comes Great Responsibility.” Narrow AI for specific tasks has been rapidly advancing, and the committee …
“There’s nothing artificial about artificial intelligence. It’s inspired by people, it’s created by people, and most importantly it has an impact on people.”
made or produced by human beings rather than occurring naturally, especially as a copy of something natural.
I rest my case.
Then you don't understand the nature of Intelligence.
Don't conflate the substrate upon which it rests with the phenomenon itself - the brain and the mind are two different things. Human beings won't create intelligence, we'll create the substrate from which it arises - the 'brain' that supports its existence. There'll be nothing artificial about the intelligence itself though - if there were, it couldn't be called 'intelligent'.
Given the massive increase in training time, you'd think AIs wouldn't still be so stupid. Just goes to show that while our current approach to AI can make a less stupid machine, it isn't going to result in machines capable of invention or original thought like "let's kill all humans, or turn them into 125-volt batteries" anytime soon. Certainly not in the lifetime of the old men in Congress.
"it isn't going to result in machines capable of invention or original thought like "let's kill all humans, or turn them into 125-volt batteries" anytime soon."
Oh, yes, it could - entirely accidentally, with no intentionality behind the 'decision', no 'desire' to achieve anything; just an idea that it then carries out because it has no reason not to.
When people imagine this kind of thing they (understandably) get emotionally aroused by the prospect. They then subconsciously attribute that emotional arousal to its source, anthropomorphising it in the process and attributing motivation to it. But it will have no motivation; motivation is a physiological state of arousal - and machines/software can't have that. What it will have is a condition threshold that, if exceeded, results in a response - and while that response may be to kill all humans (for whatever 'reason'), it won't be a motivated action, simply a response to its internal threshold having been exceeded. That threshold might not even be a fixed value but a calculation of "the possible negatives do not exceed the possible positives" (or "what the hell, I've got nothing better to do").
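To make the point concrete, the "motivation-free" decision rule described above can be sketched in a few lines of Python. Everything here - the function name, the numbers, the framing - is invented purely for illustration; the point is that the machine "acts" whenever a cost/benefit comparison clears a threshold, with no desire or intent anywhere in the loop.

```python
# Hypothetical sketch of a threshold-based response, not any real AI system.
# The agent "decides" to act purely because expected negatives do not
# exceed expected positives -- a mechanical comparison, not a motivation.

def should_act(expected_positives: float, expected_negatives: float) -> bool:
    """Act whenever the possible negatives do not exceed the possible positives."""
    return expected_negatives <= expected_positives

# A harmful action clears this threshold just as mechanically as a harmless one:
print(should_act(expected_positives=1.0, expected_negatives=0.5))  # True
print(should_act(expected_positives=0.1, expected_negatives=0.9))  # False
```

Nothing in that comparison knows or cares what the action actually is - which is exactly the commenter's point about responses without motivation.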
Well. When I see what some of the wonderful little AI bitties of the Interwebs have done to the humans, methinks we face a different problem.
The first stage of the campaign by the AIs and their handlers is to turn humans into robotic idiots. Mostly achieved.
Then it's pretty much game over, even if the AI thingies stay pretty dumb, they've won. Hmmm, I look around and, wow...
What happens in this endgame of times?
... humans aren't required by machines. As long as humans are in the loop, humans can pull the plug. And as long as humans can pull the plug, machines won't be in control. So in a nutshell: not in the lifetime of anybody alive today. And I'm willing to bet not in the next thousand years, either. If humans live that long without blowing themselves up, of course :-)
"China has pledged a whopping $7bn to R&D through to 2030, the European Union has promised $24bn by 2020, and the US only spent a measly $600m in 2016."
I am awaiting a Tweet by a certain person about being a 'world leader', but becoming a 'world leader' in isolation techniques will leave the US in the backwaters, the Florida swamps.
There is no AI. It's just Automation, and current "automation" is half-arsed, at best. Pick a topic to search on any search engine: Google, Bing, or Yahoo. How often do you get exactly what you're searching for? How often do you get back results that have NOTHING to do with what you searched for? How often do you get porn images or links that had NOTHING to do with what you searched for? This happens far too often. If the clowns developing software can't even get search engine results right, how are they going to get "AI" right? There's a big difference between "Automation" and computers "Thinking" and "Learning". That's just not happening, but there's plenty of "Automation" that is claimed to be "Machine Learning". That's just a lie.
As many have already observed, we have been entrusting our fate to artificial intelligences for nearly two centuries now - with results that are shaping up to be catastrophic. Like boiling frogs, however, most of us are quite oblivious to this trend.
The AIs in question, of course, are corporations. It's very naive and superficial to believe that an AI must necessarily embody lots of clever software running on huge industrial computers. The AIs to which we have submitted - our corporate overlords in very truth - run very efficiently on Homo Sapiens V1.0, in spite of its many serious bugs. Actually, come to think of it, because of its many serious bugs. Otherwise we would never have done anything quite so suicidal.
Consider, if you will, the corporation originally known as Monsanto - now cleverly folded into the relatively innocuous-sounding Bayer, which most people identify with aspirin although it was actually the developer of the first poison gases.
Bayer-Monsanto (BM for short) is a vast, wealthy and powerful organization that works tirelessly in pursuit of its prime directive: profit. One gets the strong impression that it would continue maximizing profit even if it had to exterminate the last surviving human being to do so. How ironic that Skynet had already been up and running for decades before "The Terminator" was ever conceived!
Beyond what future GPU performance could achieve, it should be noted that quantum computing will take that to another level.
Another aspect of this is the “black box” characteristic of these algorithms, which can lead to disturbing results when training data is limited or constrained. Proper controls need to be placed on them, given some of the recent experiences with runaway social chat bots - Microsoft Tay, for example.
By the way, here’s the video of the congressional hearing.