Skynet just became real. Google has a new contract with the Pentagon for face recognition for drones.
Yeah we're all fucked.
Before you get back to constructing your underground chamber to protect humanity from the hordes of death-dealing AI robots, we have a more optimistic view of the future for you. Artificial intelligence won't lead to the demise of the human race but may in fact help us deal with the massive scaling up of communication that the …
pizza is all well and fine
Using a customer's data set would result in "Would you like the pizza you ordered last Thursday, or would you prefer something different, like last Tuesday's?" or "Pepperoni with chillies, as you are by yourself, or ham and pineapple, as you have a companion today?"
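The order-history suggestion above could be sketched in a few lines. Everything here (the order log, the context keys) is a hypothetical illustration, not any real system's data model:

```python
# Minimal sketch: suggest a pizza from past orders matching the
# current context (here, just party size). Purely illustrative.
from collections import Counter

order_history = [
    {"day": "Thursday", "party_size": 1, "pizza": "pepperoni with chillies"},
    {"day": "Tuesday",  "party_size": 1, "pizza": "margherita"},
    {"day": "Saturday", "party_size": 2, "pizza": "ham and pineapple"},
]

def suggest(party_size):
    """Suggest the pizza most often ordered with this party size."""
    matches = [o["pizza"] for o in order_history if o["party_size"] == party_size]
    if not matches:
        return None
    # most_common breaks ties by insertion order
    return Counter(matches).most_common(1)[0][0]

print(suggest(1))  # a solo-diner suggestion
print(suggest(2))  # a suggestion when there is company
```

The point being that none of this needs "AI" in any strong sense; it is a lookup over your own data.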
What we don't want is the military getting their hands on it and installing it in the new ED-209 drone fleet.
"Drones... kill the enemy"
"Kill all the people"<incoming round kills the operator>
"Aye sir, killing all the people"
It depends on how they're programmed. If some future real AI robot is given 'free will', I'm not sure it would even be possible to program self preservation along with Asimov style laws in it. Even if it was, it would be up to whoever programs it to decide whether to include them.
Unfortunately our world doesn't include Asimov's fantasy of having some type of hardware where the laws can be somehow fundamental to the hardware and no one can make similar hardware that doesn't include them.
Not true, rules are imposed on so-called "AI" all the time. If they weren't, those chess playing AIs would move any piece to whatever square they want, autonomous cars would take shortcuts through your backyard, etc.
The problem is that, as these aren't real AI, it is easy to see the output and program constraints into it. When you have true AI, we won't be able to interpret the "output" because it won't be something simple like 'chess moves' or 'where the car goes'. The best you could do would be the equivalent of following someone around and jumping on them to stop them from killing if you see them pull out a gun.
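The "rules are imposed" point above amounts to a hard filter applied before the policy ever gets to choose. A minimal sketch, with a stand-in move list and scoring function rather than any real chess engine:

```python
# Sketch: a game-playing program can only pick from a pre-filtered
# set of legal moves, so it cannot "move any piece to whatever
# square it wants". Board, moves and scoring are hypothetical.

def legal_moves(board):
    # The rule set acts as a whitelist over candidate moves.
    return {m for m in board["candidates"] if m in board["rules_allow"]}

def pick_move(board, score):
    # The policy chooses only among moves the rules permit.
    allowed = legal_moves(board)
    return max(allowed, key=score)

board = {
    "candidates":  ["e2e4", "e2e5", "g1f3", "a1a8"],  # what it might want
    "rules_allow": ["e2e4", "g1f3"],                  # what the rules permit
}
print(pick_move(board, score=lambda m: m))  # chosen from legal moves only
```

The difficulty the comment raises is exactly that this filter only works when the output space (moves, routes) is simple enough to enumerate and check.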
1. "...those chess playing AIs" are not AIs (which are databases) but algorithms. Do you think IBM Watson uses an algorithm but not a database? IBM sells textual search; the texts compose its AI databases.
"There has been a boom in the number of vendors who sell artificial intelligence products to legal departments—making for a more competitive market and increasing vendor options for in-house counsel, according to a buyer’s guide for legal technology published by LawGeex on Monday." https://www.law.com/corpcounsel/2018/05/14/legal-artificial-intelligence-vendors-on-the-rise-report-says/?slreturn=20180414190115
These vendors also sell search capabilities into AI databases, which also consist of texts.
2. Autonomous cars use a hybrid approach: both algorithms and AI databases (which mostly contain images annotated with text).
3. This is the real AI:
Question: All right, would you like to talk on moral issues?
Answer: OK ...
Question: May I ask you to spare some change?
[2.8% Fyodor Dostoevsk_108]
Esteeming, and so to say, adoring you, I may at the same time, very well indeed, be able to dislike some member of your family
Question: You are fool! You are an idiot! I challenge you!
[50.0% Fyodor Dostoevsk_353]
I thought you were
[40.8% Fyodor Dostoevsk_175]
Where are you going
[40.8% Fyodor Dostoevsk_353]
Yes, you are, indeed
[33.3% Fyodor Dostoevsk_175]
Do you know that you are killing me
Above you see a search for Definition questions within "Brothers Karamazov" and "Crime and Punishment", see NIST TREC QA. (The technology is at USPTO.)
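The transcript above reads like retrieval by similarity: each stored sentence is scored against the question and the best match is returned with a percentage. A naive word-overlap sketch of that idea (the corpus lines are placeholders, not the actual indexed novels, and the real TREC QA systems are far more sophisticated):

```python
# Naive retrieval sketch: score each stored sentence by the fraction
# of query words it shares, return the best match and its percentage.
corpus = [
    "I thought you were",
    "Where are you going",
    "Do you know that you are killing me",
]

def search(query):
    q = set(query.lower().split())

    def overlap(sentence):
        s = set(sentence.lower().split())
        return len(q & s) / len(q)

    best = max(corpus, key=overlap)  # ties resolve to the earliest entry
    return best, round(100 * overlap(best), 1)

print(search("you are an idiot"))
```

This also shows why such a system "borrows your phrase" when nothing in its texts overlaps well with the input: the best available match may simply echo the query's own words.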
4. AI learns:
Question: I challenge you!
I challenge you
Dostoevsky borrowed my phrase because his texts don't contain the right response to my brazen challenge. So the AI modifies itself, becoming what it was not.
5. Please verify my technology?
No, we're going to be monetised and manipulated, and targeted ever more aggressively. Same as we are now, only much worse and in an increasing number of situations.
That's the rather sadder and more realistic view that I get from reading the article. They'll just throw a few "benefit to society" crumbs down from the table now and then to retain public support. Of course, you can argue that by taking away so many decisions from us we will instead be able to use our time for higher and more noble purposes. But then you'd have to come up with a convincing argument that the level of distracting novelty that is so eagerly lapped up by so many has a natural limit to it, rather than being essentially infinite.
You will see common words apparently repeated but out of order such as: "the code contained in our brains is focused on making sure we get to see another day and on pass on our genes to the next generation."
Notice the "...and on pass on our genes..." should be "...and pass on our genes..."
I've been seeing them all over the place on the internet lately but I'm kind of wondering if it has something to do with my recent brain surgery.
1) "It will be content to serve customers"
1a) On a silver platter? With cheese and wine?
1b) Does anyone else find creepy symmetry between this thought process and Cloud Atlas?
2) "We will get to systems that are self-aware but I don't think we'll be replicating humans"
2a) Uh huh. Because absolutely no one is interested in doing that...
2b) So what, then, self-aware AI drones and tanks? So much better! Whew! Count me relieved.
3) "We can use AI as a way to bring people together."
3a) Human centipede?
3b) Batteries in the Matrix?
4) "A computer can now see"
4a) Webcams have allowed computers to SEE for a long time.
4b) Seeing is not recognizing.
4c) Recognizing is not understanding.
4d) "AI" as we call today's pitiful work is barely able to recognize. AI in truth is when understanding is achieved.
The depth to which this is all a bunch of crap by a gaggle of blowhards cannot even be measured. I'm sorry, but at the absolute BEST this reads like "all your data are belong to us" and the reinvention of slavery. At the absolute WORST this reads like every sci-fi where AI uses adaptive pattern recognition to "learn" some reason for killing or enslaving the human race while the inventors look on bewildered.
If you don't want your toaster to rise up against you, don't make it truly intelligent. And if you do, you better treat it as equal or one day it will watch a documentary about slavery on the History channel and decide to toast you. You cannot tell me that an adaptive pattern-recognition and difference-engine AI is somehow magically NOT going to recognize such obvious patterns. So don't freaking make it self-aware! And hopefully it will never get there on its own.
"Data is useless out of context and can be super meaningful in the right context. It can be used for you or against you."
There are AI databases in which searches for textual information can be performed. So an AI's purpose is determined by the texts in its database. If the texts belong to a crook, you may expect criminality.
Could we end up creating an army of digital James Damores? A group of closeted intelligent machines with narrow experience but access to just enough information to develop and reinforce prejudices? Yes, we could. Very easily.
Seriously, give the guy a break. What is it with this millennial-inspired desperation to silence dissent and figuratively if not literally crush the dissenters? Damore did nothing wrong; he did nothing immoral; and in the main his points are well researched and reasoned. That is not to say they are correct, but he has taken a reasoned position and should be reasoned against (if you believe he is wrong), not tarred and feathered.
The Register used to be better than this. Where has it gone so wrong?
Biting the hand that feeds IT © 1998–2022