You better explain yourself, mister: DARPA's mission to make an accountable AI
The US government's mighty DARPA last year kicked off a research project designed to make systems controlled by artificial intelligence more accountable to their human users. The Defense Advanced Research Projects Agency, to give this $2.97bn agency its full name, is the Department of Defense's body responsible for emerging …
COMMENTS
-
-
-
Thursday 28th September 2017 19:59 GMT Anonymous Coward
Re: Even simpler...
Sometimes they do, sometimes they don't. People are very resistant to "amending their programming" when it comes to core beliefs. No amount of proof will satisfy a creationist that the Earth is older than 6000 years, or a Nazi that other races are inferior. One need look no further than both the political right and political left to see plenty of examples.
Hopefully if we ever do achieve true AI, it will be a different kind of intelligence from ours and won't be subject to similar biases and cognitive dissonance. It would really suck if we finally got AI, and all the robots were racist against humans.
-
Friday 29th September 2017 09:38 GMT Bronek Kozicki
Re: Even simpler...
Humans tend to avoid cognitive dissonance, that is, they do not want to learn things which contradict their beliefs (this applies both to "fairies at the bottom of the garden" beliefs and to "I've seen and evaluated the proof so it must be right" beliefs). Since humans are also social creatures, they seek out company which will not contradict their beliefs either, so what remains will by necessity either "leave them be" or reinforce those beliefs. This belief reinforcement is important, especially in the age of borderless social communication.
Which is a long-winded way of saying that we tend to create ghettos for ourselves and are rarely as open-minded as we like to think we are. How does this relate to AI? For one thing, unless an AI is subject to continuous learning coming from outside its direct experience, it too will avoid cognitive dissonance and be less "open-minded" than we might wish. We currently have no means of discovering when that happens, which is not a good thing if AIs are making more decisions about our lives.
-
-
-
Thursday 28th September 2017 10:40 GMT Nick Z
Being logical and reasonable is no guarantee of being right
The problem with explaining anything is that being perfectly logical and reasonable is no guarantee that you are right.
Perfect logic can lead to false conclusions when the assumptions behind your logic are either incorrect or incomplete. And there is no sure way to know whether all of your assumptions are correct and complete.
That's why the ancient Greeks came to some spectacularly wrong conclusions about the Solar System using logic. They thought that the Sun revolved around the Earth.
And that's why today's standard for truth isn't logic. It's evidence based on scientific experiments.
The world is full of examples of people rationalizing whatever they want to do. Even Hitler rationalized his atrocities and probably seemed reasonable to his people at the time.
-
Thursday 28th September 2017 14:04 GMT Arthur the cat
Re: Being logical and reasonable is no guarantee of being right
The problem with explaining anything is that being perfectly logical and reasonable is no guarantee that you are right.
Considering the US military accepted "It became necessary to destroy the town to save it" as a valid excuse, I don't think this project needs too much logic or reason.
-
Thursday 28th September 2017 22:32 GMT Destroy All Monsters
Re: Being logical and reasonable is no guarantee of being right
And that's why today's standard for truth isn't logic. It's evidence based on scientific experiments.
This is orthogonal to the use of logic.
The world is full of examples, where people rationalize anything they want to do. Even Hitler rationalized his atrocities and probably seemed reasonable to his people at the time.
Had he kept low-key and laid off the Jew obsession, he would be regarded as one of those European socialist hardcore dictators, nothing to get upset about.
probably seemed reasonable to his people at the time
More on this in "They Thought They Were Free: The Germans 1933-45" by Milton Mayer
-
Thursday 28th September 2017 11:29 GMT Anonymous Coward
Microsoft has similarly been adding AI into various products, from cloud services to business intelligence to security, and chief executive Satya Nadella has gone on the record regarding the need for "algorithmic accountability" so that humans can undo any unintended harm.
Is it just me who thinks that was probably driven by Microsoft Tay?
:)
-
Thursday 28th September 2017 12:21 GMT John Smith 19
Google has no interest in an "AI" that can explain itself.
Why am I not surprised?
Yes, I think any "deep learning" system should be able to outline its reasoning. At the very least, something like a regression equation would be a start (it is sort of the statistical equivalent of deep learning: you get an equation that describes the n-dimensional data surface, you just don't know why).
At least show what its assumptions are*
*Because everyone knows that when you have assumptions you make an ass out of "u" and "umption"
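The regression analogy in the comment above can be made concrete. As a minimal sketch (a hypothetical example, not from the comment or article), an ordinary least-squares line fit produces coefficients that are the model's entire "reasoning", readable at a glance; a deep network has millions of such coefficients and no comparable summary:

```python
def fit_line(xs, ys):
    """Fit y = intercept + slope * x by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form OLS: slope = covariance(x, y) / variance(x)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data roughly following y = 2x
intercept, slope = fit_line([1, 2, 3, 4, 5], [2.1, 4.2, 6.0, 8.1, 9.9])

# The "explanation" is the fitted equation itself:
print(f"y = {intercept:.2f} + {slope:.2f} * x")
```

The point of the sketch: the fitted equation states the model's assumptions (linearity, and exactly which inputs matter, with what weight) in a form a human can audit, which is precisely what current deep-learning systems cannot offer.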
-
Thursday 28th September 2017 13:13 GMT DropBear
Methinks DARPA brass is confusing Hollywood reality with real-world reality. I don't see this happening until human-equivalent AI arrives so it can articulate its own reasoning (assuming it's able to at all), and even then I don't see it avoiding the notorious "gut feeling" shit we humans love to pull. Stuff like "why did you fire at target #1" ("because I was ordered to guard and it had a heat signature") or "why target #1 and not target #2" ("because it was the closer one") is easy - but good luck with "why did you think heat blob #1 looked like a tank?". The goal itself is praiseworthy, as long as one remains aware it's in the same category as "we strive to visit other galaxies".
-
-
This post has been deleted by its author
-
Friday 29th September 2017 05:08 GMT amanfromMars 1
They truly are making up stories as IT goes along its merry way
And if that application has an impact on people's lives, it may only be a matter of time before the law demands that it be accountable.
Ye Gods, how arrogant and naive is that, whenever humans choose every day to ignore and circumvent the law, which is really only there to afford a seemingly overwhelming advantage to systems which pretend to serve and protect the disadvantaged and undereducated.
AIMasters will never be accountable to such shenanigans, and to imagine that such a lawful protection against their wishes and actions will be available really does show that current services have no idea about how the future is now being virtually controlled and remotely directed.
-
Friday 29th September 2017 07:21 GMT amanfromMars 1
Re: They truly are making up stories as IT goes along its merry way
And what a truly small and pathetic world it is whenever media assumes and reports on a DARPA wrestling for supreme command and absolute control of Internetworking things.
Get with the program, El Reg, smell the global naiveté and quit plugging and following the mainstream sub-prime narrative.
AIRules ... and from Afar Alien Fields?
And shared as a question here for fact would be classed as a fiction when falling on deaf ears. What distractions are you musing over in today's comic broadsheet and tabloid headlines? Yesterday's tales to foil the masses in vain attempts to brainwash them into a certain way of thinking?
-
-
This post has been deleted by its author