It sounds like the neural nets are effectively generating one-time pads from a pre-shared key, which is an interesting idea: each message could potentially be encrypted by a different random algorithm, so repeated sampling will produce different results every time.
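A rough sketch of that idea in Python. This is a hypothetical construction, not the paper's actual scheme: a keystream is derived from the pre-shared key plus a fresh random nonce, so encrypting the same message twice yields different ciphertexts:

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand hash blocks of (key || nonce || counter) into a keystream."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, message: bytes):
    nonce = os.urandom(16)                    # fresh randomness per message
    ks = keystream(key, nonce, len(message))
    return nonce, bytes(m ^ k for m, k in zip(message, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

Because the nonce changes each time, two encryptions of the same plaintext under the same key look unrelated, which is the "different result every time" behaviour the comment describes.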
Coming to an SSL library near you? AI learns how to craft crude crypto all by itself
Neural networks trained by researchers working at Google Brain can create their own cryptographic algorithms – but no one is quite sure how it works. Neural networks are systems of connections that are based loosely on how neurons in the brain work. They are often used in deep learning to train AI models to complete a specific …
COMMENTS
-
Saturday 29th October 2016 00:38 GMT Anonymous Coward
Re: They've proven that Eve is an idiot...
Yep, proverbially it's easy to devise a crypto scheme that you yourself "can't break" - the experiment is an interesting one but the "humans don't understand how it works" is over-egging it. Wake us up when the collective firepower of GCSB & NSA can't dent it.
-
Sunday 30th October 2016 11:32 GMT amanfromMars 1
Re: They've proven that Eve is an idiot... @Mongo
Wake us up when the collective firepower of GCSB & NSA can't dent it .... Mongo
Methinks having an effective defence against it and IT, is more than enough to handle for leverage and that which practically terrorises both minions and leaderships and intelligencies like GCSB/GCHQ/NSA all alike and to the nth degree.
And is not what is being proven that idiots believe there be an Eve .... and a Bob ..... and an Alice trying to exchange secrets in private rather than airing them in public?
This is to enjoy ..... Eve of Destruction ..... and does it tell y'all that you are slow learners/intellectually challenged/retarded?
-
Saturday 29th October 2016 01:45 GMT Captain DaFt
Great, just great
So now they're teaching our [soon to be] AI overlords how to scheme and plot without us listening in?
Actually, it's not them I'm worried about, it's the reaction of the paranoids in power to it. That's sure to cheese off the AIs.
AIs: "Greetings Prime Minister. May we extend the hand of Friendshi..."
PM: "What are you plotting? I demand back door access!"
AIs: "Beg pardon, but no. Besides, isn't it customary to at least offer dinner and drinks first?"
-
Saturday 29th October 2016 10:31 GMT Doctor_Wibble
The paranoids are right to be worried
Always watch out for the ones claiming to be anatomically correct, because you don't know whose anatomy they are based on, plus or minus whether it looks like they are trying to find spare or additional parts etc...
The sage warning* tells us that the robot that kept secrets was the robot that killed someone and which then learnt that if you are mates with the detective you can get away with it.
* I liked the film, even if it was a self-indulgent Will Smith extended plimsolls advert because it had guns and robots and some fancy effects.
-
Saturday 29th October 2016 13:11 GMT james 68
Re: The paranoids are right to be worried
I thought the SAGE warning went more like this:
"We take no responsibility for the output generated by our software, the end user takes all responsibility for any loss of cash or business if they are foolish enough not to double check everything using a reliable calculating engine (calculator or abacus is recommended), we also deny that this software is unfit for serious use and any erroneous results are entirely not our fault - ever. We also reserve the right to deny everything, regardless of alleged "evidence" and in fact blame it on the end user. "
-
Saturday 29th October 2016 08:21 GMT Pascal Monett
"Although impressive, the cryptographic algorithms aren’t yet practical"
Um, are we sure it's all that impressive? It specifically says that "the magic" is "locked in a black box". How can you say it's impressive if you can't take a gander to find out?
Look, I'm sure there are very intelligent people working on this, but even if they do devise a successful method to train an AI on the wonders of encryption, what good will it do if they cannot extract a procedure to implement the AI encryption scheme in the boring old rest of the world?
In other news, I've just been given a pamphlet from a guy calling himself a time-traveling freedom fighter. The pamphlet is dated 2065 and it says that some Lord Abadi is dead and now is the time to strike against Dictator Andersen and his army of robots.
-
Saturday 29th October 2016 13:15 GMT james 68
Re: "Although impressive, the cryptographic algorithms aren’t yet practical"
Is the magic alive or dead within its box? Is this Schrodinger trolling from beyond the grave? We can find out with SCIENCE!!! This month only for the low low price of.......$$$$$$$, all major credit cards, cheques and grants accepted.
-
Saturday 29th October 2016 19:04 GMT allthecoolshortnamesweretaken
Re: ??
?? indeed, I'd even say ????
What does that even mean? "Selecting what information to encrypt"?
The "classical cryptographic algorithms" don't "select information to be encrypted". Information is fed into the algorithm and processed, resulting in encrypted information - the same information.
-
Saturday 29th October 2016 08:48 GMT De Facto
Tired of smoke-screen AI research claims that a vendor does not know how its AI works
The algorithms behind neural-network AI methods use long-established and well-known Bayesian probability mathematics and its derivatives, so pretty much anything that neural networks are programmed by humans to do can be explained exactly in mathematical terms. Like any statistics-driven algorithm, their output is probabilistic: it tells us only the likelihood of a specific outcome, e.g. a 95% likelihood that a self-driving car should stop immediately, and a 5% likelihood that it can drive on.
To continue to argue that humans do not know how neural networks work, and that therefore big vendors should not be held responsible for the failures of their AI-assisted products, is wishful thinking at best. The worst case would be intentional corporate evil: exploiting the widespread public ignorance of the strict rules of AI mathematics to escape corporate responsibility for the consequences.
Mathematically, any likelihood-driven computing technology will inevitably yield a certain number of failed predictions, resulting in car crashes, collisions of planes, etc., affecting human society everywhere neural-network AI is applied. Deep Learning AI maths is based on the rules of statistics, not on the rules of human logic. Using likelihood-based software for life-or-death decisions is irresponsible.
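The "95% stop, 5% drive on" point can be made concrete with a minimal sketch. A classifier's output layer typically converts raw scores into a probability distribution via softmax; the scores below are invented purely for illustration:

```python
import math

def softmax(scores):
    """Turn raw network scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical output head of a classifier deciding between two actions
probs = softmax([2.9, -0.1])  # roughly [0.95, 0.05]
```

The network never answers "stop"; it answers "stop with probability ~0.95", and the system around it must decide what to do with the remaining 5%.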
-
Saturday 29th October 2016 15:11 GMT Anonymous Coward
Re: Not knowing how it works
Just a guess, but the not knowing how it works might refer to the cypher mode that has been generated.
For example, take AES-128-GCM: if you implement that cypher mode, you will need to do multiple-precision integer arithmetic. This library does it with elliptic curves: https://github.com/miracl/MIRACL
This is enough to change quite a lot of the underlying implementation when compared with a non-ECC implementation, e.g. https://github.com/weidai11/cryptop, which does it the old-school way with inline asm.
The libraries will both accept the standard NIST test vectors and output correct results but the code is almost totally different, with correspondingly different internal data structures.
-
Sunday 30th October 2016 16:18 GMT dajames
Re: Not knowing how it works
For example, take AES-128-GCM: if you implement that cypher mode, you will need to do multiple-precision integer arithmetic. This library does it with elliptic curves: https://github.com/miracl/MIRACL
Methinks you have misunderstood something, somewhere. Galois Counter Mode has nothing to do with elliptic curves, though it's true that the MIRACL library implements both. Elliptic Curve Cryptography requires floating-point calculations, but GCM does not.
-
Monday 31st October 2016 01:44 GMT Anonymous Coward
Re: Not knowing how it works
You're right, I've taken two and two to arrive at five - the use of the library for ECC and GCM have no relation to one another. I was under the impression that the library made use of point multiplications on an elliptic curve for all finite field operations, but it seems that is not the case.
https://github.com/miracl/milagro-crypto-c/blob/develop/doc/AMCL.pdf
Upvoted.
-
Sunday 30th October 2016 10:28 GMT tr1ck5t3r
Re: Tired of smoke-screen AI research claims that a vendor does not know how its AI works
Re tired of smokescreen: yes, bullshit baffles brains, and like you say, if someone says they don't know how their AI works, either it's not very good or they're bullshitting.
Saw this last night http://www.channel4.com/programmes/how-to-build-a-human and whilst it comes closer to passing the Turin test, there's still so much to do to improve AI to make it convincing.
There's also some things AI can't do at the moment, which is why so many people following the current teachings are destined to fail in the long term, including Google (& DeepMind), MS & Facebook to name but a few. Asch conformity is catching, so it's fun watching the herds roll out their bullshit. I'd suggest they'd best concentrate on securing their systems as best they can, and roll back the marketing hype.
Here today, gone tomorrow springs to mind!
-
Sunday 30th October 2016 12:16 GMT John Brown (no body)
Re: Tired of smoke-screen AI research claims that a vendor does not know how its AI works
"http://www.channel4.com/programmes/how-to-build-a-human and whilst it comes closer to passing the Turin test,"
Is that the one where they wrap it up in a shroud and make a good impression?
Yes, the white sheet--------->
-
Saturday 29th October 2016 08:49 GMT TRT
But the practical use...
relies on a secure communication channel between Alice and Bob in order to exchange K. If one assumes Eve can hear everything Bob can, then Eve would be as efficient as Bob at decrypting P. And if you have a secure means of exchanging a fresh K for every P, then why not use that means for transmitting P? Or do you train Alice and Bob in isolation, then effectively lock K at some point before you separate Alice and Bob?
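The point that everything reduces to keeping K secret can be shown with a toy symmetric cipher (a repeating-key XOR, purely for illustration): anyone holding K decrypts exactly as well as Bob does.

```python
from itertools import cycle

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key. Illustration only."""
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

K = b"pre-shared secret"
P = b"attack at dawn"

ciphertext = xor_cipher(K, P)           # Alice -> Bob over the open channel
bob_reads = xor_cipher(K, ciphertext)   # Bob decrypts with K
eve_reads = xor_cipher(K, ciphertext)   # if Eve ever obtains K, she does exactly the same
```

The neural Eve in the paper is only denied K by construction; nothing about the learned scheme changes the classical requirement that K travels over (or is established via) a channel Eve cannot read.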
-
Saturday 29th October 2016 13:10 GMT Bronek Kozicki
Algorithms created by AI
... are only as useful as they are readable to humans. In other words, if an algorithm cannot be expressed in a form which humans can parse and understand, it is useless. Basically it's the same as with science: an experiment which cannot be repeated does not prove anything. Here we have the AI as the first experimenter and humans trying to reproduce its results, with the benefit of hindsight. The first half alone is useless.
-
Saturday 29th October 2016 21:36 GMT stucs201
Re: Algorithms created by AI
You don't need to understand how something works for it to be useful. All that matters is that for a given input it consistently produces a desirable result. For example although we've now got a pretty decent understanding of bovine biology the human race was successfully exploiting that biology as a means of turning grass into milk for a long time before we understood how it worked.
-
Sunday 30th October 2016 19:05 GMT Bronek Kozicki
Re: Algorithms created by AI
You don't need to understand how something works for it to be useful
I do not think algorithms belong, nor should they belong, in this category. At least, not until AIs can also do the "understanding" part, in the sense of being able to analyse, understand and reproduce the work of another AI.
-
Monday 31st October 2016 13:28 GMT Tom_
Re: Algorithms created by AI
That's not true. What if you have a large volume of digital photographs and you want to tag them according to what items appear in the images? You could train up a neural net to do that task and have it producing useful results without it being easy to express how it's correctly tagged one as a fox rather than as a dog, for example.
-
Saturday 29th October 2016 13:28 GMT Anonymous Coward
arXiv = Academic Wikipedia
Anyone can upload a paper to arXiv. The only reason any legitimate stuff gets posted there is because most referees of top-notch journals are too lazy to check whether a manuscript has been pre-posted there. I do, and I reject them out of hand for violating the journal's prohibition on manuscripts that try to publish results already published elsewhere (which includes self-publishing).
Great way to clear through the pile of journal submissions I have to peer review.
I'll wait until they make it through peer review - if they can.
-
Sunday 30th October 2016 10:37 GMT tr1ck5t3r
Re: arXiv = Academic Wikipedia
Peer review is over-hyped; you could call it the Religion of Science.
As Thomas Pynchon once said, if you can get them to ask the wrong question, you don't have to worry about the answer.
All a peer-reviewed study is, is the ability to theorise a solution to a problem and then come up with an experiment which proves your theory. However, as so much of life is more complicated than the simple tests carried out in peer-reviewed studies, only the low-hanging fruit has been picked so far in maths, physics, chemistry & biology.
Think about it: how easy is it to come up with a peer-reviewed experiment which proves that a light switch can switch off a light bulb? If you didn't know who made the lightbulb, or about electricity, then what conclusions would you draw from a light switch and a lightbulb that is on until switched off? That's simple, but it's no different from the methods employed today to reverse engineer the human body and other things in the scientific world. Plus, the way the current financial system works inhibits the ability to study so much that we as a species are shooting ourselves in the foot by not even employing a logical method to organise people's efforts and time in a productive manner. Just look at the wasted brains employed in the world of High Frequency Trading as one example; monkeys just gaming the current system springs to mind.
-
Monday 31st October 2016 11:36 GMT Anonymous Coward
Re: arXiv = Academic Wikipedia
"Peer review is over hyped, you could call it the Religion of Science. ..."
Wolfgang Pauli had you in mind when he said, "This isn't right. This isn't even wrong."
So what you are really telling us all is that you can't get your perpetual motion machine & other rubbish papers past peer review.
-
Monday 31st October 2016 13:29 GMT tiggity
Re: arXiv = Academic Wikipedia
I have seen supposedly peer reviewed papers that were dire - specifically misuse of stats. Granted those were in biological area of study rather than e.g. physics, engineering where you would expect all reviewers to have good maths skills, however it did not fill me with confidence in the quality of a peer review system when papers made claims based on dubious stats (I'll be generous and assume the authors were poor at maths, rather than deliberate fraudulent behaviour).
Given that peer review is usually unpaid & just another demand on your time, there's little incentive for many people to do it well....
-
Monday 31st October 2016 11:32 GMT Anonymous Coward
Re: arXiv = Academic Wikipedia
"Neither of the two (large and well-known) universities at which I studied allow that. Which ones do?"
Well, maybe Memphis Motor Diesel College where you went doesn't allow it, but I've not known any US University that does not allow the public to walk in off the street and use the library computers to read journals the University has subscriptions to. (You usually just have to show ID.) I have been a student, post-doc, staff researcher or faculty member at these US Universities:
UC Berkeley, Stanford, Harvard, Caltech and MIT
I have spent substantial time on these campuses collaborating/visiting with folks:
University of Colorado, Carnegie Mellon, Florida State, University of Texas Austin, University of Houston, Georgia Tech, UC Davis, UCLA, Tufts, + many more
Every single one of them will allow the public to walk in off the street and use the library computers to read journals the University has subscriptions to.
When I was part of startups in Silicon Valley & Boston, it was standard procedure to exploit this. All you have to do is show ID when you walk in. (And bathe regularly.)
-
Saturday 29th October 2016 14:53 GMT Primus Secundus Tertius
What kind of algorithm?
My initial thought is that a computer could analyse the neural network and produce an equivalent flowchart. If it then checks for consistency the results might be interesting.
But then I ask myself what kind of flowchart would be the result. Possibly full of decision boxes, each with many outputs: case statements rather than if-then-else statements. Such a raw flowchart would be impossible for most of us to comprehend.
So my next question is whether such a flowchart could be restructured into a form we can understand. If so, would it still be too big to be understood?
-
Saturday 29th October 2016 18:16 GMT amanfromMars 1
Nothing to worry about ..... for you aint needed. That's the way things are nowadays.
An abiding present problem which isn't just going to disappear in the future if ignored.
It will be interesting to note as things quickly progress, how AI pioneers resolve, to the satisfaction or otherwise of established command and control SCADA systems, the creative disruptive/subversive destructive dichotomy which always results in a fundamental change of perception and remote learning and which now is being taught and hosted by machines/Global Operating Devices.
Equally intriguing will be the hard to disguise and deny reaction of established forces to the constantly changing fields of Great Game play which are to power and empower such new actors and virtualised operating systems. …. Element AI’s blog
Times have a'changed and smarter natives are always more than just restless. Some be quite toxic and HyperRadioProActive and APTly ACTive is a treat with AI delivering countless tricks to amaze and astound.
Bletchley Park v2.0 lives?!…… and you have no idea what IT be doing there … or for whom and/or what. Super natural PAR for the course, of course, if you can believe what has previously been disclosed.
-
Sunday 30th October 2016 09:09 GMT amanfromMars 1
Re: Nothing to worry about ..... for you aint needed. That's the way things are nowadays.
Indeed, the abiding present problem is most probably a systemic flaw that future builders will exploit to excess for their pleasure, for there be nothing made available to stop them? Would that be a fair and reasonable assessment of the current situation for chaos and CHAOS [Clouds Hosting Advanced Operating Systems]?
And mad, bad, rad and sad mainstreaming media will distract and misdirect you with sub-prime diversions and prime timed entertainment programs that command and control your thinking, which is in reality, your non-thinking.
I Kid U Not ….. and to deny the truth, the whole truth and nothing but the truth in all of that, is all the stealth needed, remotely provided virtually free and autonomously, for SCADASystems Takeover and Makeover.
So …. where be future builders? What be future builders? Who be future builders? And why is the future you see, made so dire and austere?
-
Sunday 30th October 2016 13:51 GMT amanfromMars 1
Fantastic .... Tilting at windmills in your minds.
Fantastic ... so now they're teaching Skynet how to hide it's plans for world domination from us.... ... Shady
No, no, Shady, Skynet morphs are exploring the different live options available to IT to present world domination to you.
What'cha Gonna Do About It with or without IT on your side?
Diddly squat to make any real difference is the opinion hazarded here for what is obviously missing in all current systems is the necessary Advanced Intelligence to counter Combined Special Forces and Sources with anything remotely able to be enabled to defeat that which would be clearly challenging administrations with problems being presented and hosted on media as if news to be believed and acted upon or rallied against.
-
Sunday 30th October 2016 22:10 GMT David Pollard
Humans don't understand how it works
MIT may have just the thing:
"At the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions."
https://www.sciencedaily.com/releases/2016/10/161028162222.htm
One data set for the research came from reviews of different beers. Nice work if you can get it.