If neural networks are so good at simulations, can someone do one to try and explain Donald Trump?
That this AI can simulate universes in 30ms is not the scary part. It's that its creators don't know why it works so well
Neural networks can build 3D simulations of the universe in milliseconds, compared to days or weeks when using traditional supercomputing methods, according to new research. To study how stuff interacts in space, scientists typically build computational models to simulate the cosmos. One simulation approach – known as N-body …
COMMENTS
-
-
Friday 28th June 2019 07:04 GMT Evil Auditor
Not that I'm qualified to answer but within this context that should be perfectly acceptable. Trump, I think it is fair to assume, contains some form of neural network. And from my limited experience, that neural network behaves scarily similarly to an artificial neural network that I built about twenty years ago.
It was a simple one, very trivial: the input/output field was something like a six-by-six dot matrix and it was trained to recognise single-digit numbers 0 to 9. It did reasonably well for an experiment. But it had one major flaw: whatever the input was (maybe even for a blank matrix), it would always "recognise" a number. So, from a rather high number of stimuli, the output was limited to a very small number of possible reactions - but it would always and instantly react.
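That always-answers flaw is easy to reproduce: any classifier that ends in an argmax over ten classes must pick one, whatever comes in. A minimal sketch with made-up random weights (purely illustrative, not the commenter's actual network):

```python
import random

# Made-up weights for illustration - not the commenter's actual network.
random.seed(0)
N_IN, N_OUT = 36, 10   # 6x6 input matrix, ten digit classes
weights = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_OUT)]
biases = [random.gauss(0, 1) for _ in range(N_OUT)]

def recognise(pixels):
    """Score all ten classes and return the best - the network cannot abstain."""
    scores = [b + sum(w * p for w, p in zip(row, pixels))
              for row, b in zip(weights, biases)]
    return scores.index(max(scores))

blank = [0.0] * N_IN
print("blank matrix 'recognised' as:", recognise(blank))  # always some digit 0-9
```

With an all-zero input the scores collapse to the biases, so argmax still returns a digit; abstaining would need an explicit reject class or a confidence threshold.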
-
Friday 28th June 2019 08:29 GMT KittenHuffer
Wonderful comment, well written!
Very well done. Your piece is successfully 'whooshing' those that would normally downvote such a comment. This can be seen by the fact that your comment is currently 8-1 whereas the parent is 5-3.
Being able to insult someone or something without them even realising seems to have become a bit of a lost art these days. I congratulate you on reviving this, and also for your choice of target.
-
Friday 28th June 2019 13:49 GMT DropBear
Re: Wonderful comment, well written!
Relative rarity of the requisite skills aside, the thing is that type of insult is meant for the benefit of others of comparable stature who witness it, while being completely lost on the target. It only works when those who see it are the majority, by lowering their esteem of the victim if the scorn is deserved. When those who see it are in the minority or completely absent however, wits are supremely ineffective and do none of what an insult is supposed to achieve. So it's hardly surprising it's not a popular pastime these days, and that whole "never offend people with style when you can offend them with substance" thing...
-
-
-
Friday 28th June 2019 13:48 GMT Anonymous Coward
That cahnt (I refuse to use his name) is easy to figure out.
He is very cleverly playing the system and leveraging the stupidity of the people. I don't just mean the voters here, but the press (which publishes every 'controversial' thing he says - doesn't matter if it's for or against him, it's publicity and keeps him in the news cycle and his opponents out of it).
The left are also responsible as they try to engage him with old rules and expectations of debate. These rules don't apply any more. The systems (of politics and media) are falling apart and easily exploitable.
The blonde cahnt in the UK is using the same tactics. Yesterday almost every major media outlet was reporting bs about painting cardboard buses (including for and against this as an acceptable debate). With this level of stupidity going on in the systems that should be balancing out the corruption, and people lapping it up, it's no wonder this pair can easily be successful.
No AI needed to see it.
-
Friday 28th June 2019 17:38 GMT Anonymous Coward
Don't need a neural net to answer that...
The reason for President Trump is..
...Hillary Clinton
Replace that term with any other name and the first statement becomes null.
For the same reason, the answer to the phrase President Reagan (I) is..
.. Jimmy Carter and John Anderson
And for President Reagan (II) is..
..Walter Mondale
Now the phrase President Bill Clinton(I) and (II) is a bit more interesting as the answer to both is..
..Ross Perot.
That's politics for you. The people who decide the outcome are usually those who vote against someone rather than vote for them.
As for the Universe, I'm afraid it was all a ghastly mistake.
-
Tuesday 2nd July 2019 06:33 GMT Anonymous Coward
The Donald?
Mmm.
Left wing liberal: Monkey say.
Donald: Monkey do.
The confusion arises in the Left wing brain because it is conditioned to listen to words and believe in them. Especially when uttered by celebrities.
The Donald is programmed to challenge this behaviour by:
(a) Being a celebrity
(b) Uttering nonsense.
Curiously what the Donald does appears to be making America great again.
This really annoys people who want to see a world government and the end of democracy.
-
-
-
-
Friday 28th June 2019 08:17 GMT Caspian Prince
Obligatory HHGTTG quote
“And to this end they built themselves a stupendous super-computer which was so amazingly intelligent that even before its data banks had been connected up it had started from I think therefore I am and got as far as deducing the existence of rice pudding and income tax before anyone managed to turn it off.”
-
Friday 28th June 2019 08:58 GMT The Oncoming Scorn
Probably Involves A Fairy Cake.....
ASSISTANT ARCTURAN PILOT:
Well you know what they say, don’t ya? They had to move to a bigger planet because he got so fat he kept sliding off the old one. I mean I’ve heard ya know, I’ve heard they’ve created a whole electronically synthesized universe in one of their offices so they can go and research stories during the day and still go to parties in the evening. Yeah, bloody clever, of course, but it’s got nothing to do with the real galaxy is it? Nothing to do with life.
Icon for HHGTTG.
-
-
-
Monday 1st July 2019 11:49 GMT Anonymous Coward
Re: The link to the paper is broken
"It's the little perforations!"
Very old joke:
Designers are discussing the mystery of why the prototype aeroplane's wings keep breaking off in wind tunnel tests. A passing cleaner says "Drill a line of holes across the wing where it would break".
They decided to try it - and it worked! Thanking the cleaner, they asked how he knew the answer: "Easy - toilet paper never tears along the perforation".
-
-
-
Friday 28th June 2019 12:22 GMT Milton
Skeletons
"It's like teaching image recognition software with lots of pictures of cats and dogs, but then it's able to recognize elephants," said Shirley Ho, first author of the paper and a group leader at the Flatiron Institute. "Nobody knows how it does this, and it's a great mystery to be solved."
I must be missing something here, because the answer would seem obvious: the neural net is inferring the existence of skeletons. In the analogy provided, you are giving it lots of pictures of cats and dogs, and it "notices" that some bits are rigid and sized according to specific ratios, that others are "bendy" points, also located according to certain ratios, and it therefore infers underlying structure and rules governing movement. In the cats'n'dogs case, if you included data on their patterns of movement in specific environments, introducing the concept of behaviours toward goals (e.g. hunting by stealth in environments with abundant cover; scavenging in open regions where carrion/waste may be deposited), my wild-ass guess is that you could get the neural net to infer the approximate form factor for a "new" animal based on previously unimagined environments. From mice to cows. (It is intriguing to speculate what deep parallels there may be with how evolution actually works.)
Returning to the universes, it would seem that the neural net is inferring the existence of underlying rules despite not knowing their precise formulae. This is surely exactly what you wanted.
This leads me to ask: at what point can you inspect the neural net's results and ask "Why?" When does it become able to tell its human interlocutors "I have discovered the inverse square law for gravity"?
Because the neural net's next answer—some time later—may reveal rules/formulae that we didn't already know ...
-
Friday 28th June 2019 15:10 GMT The Oncoming Scorn
Re: Skeletons
FORD:
Hey Marvin!
MARVIN:
What do you want?
FORD:
Give Zaphod a yell will you?
MARVIN:
Ahhh. Mind-taxing time again is it?
FORD:
Just get on with it.
MARVIN:
I’ve just worked out an answer to the square root of minus one.
FORD:
Go and get Zaphod.
MARVIN:
It’s never been worked out before. It’s always been thought impossible.
FORD:
Go and get -
MARVIN:
I’m going. Pausing only to reconstruct the whole infrastructure of integral mathematics in his head, he went about his humble task. Never thinking to ask for reward, recognition, or even a moment’s ease from the terrible pain in all the diodes down his left side. “Fetch Beeblebrox,” they say, and forth he goes.
[Door hums open]
ARTHUR:
Don’t you think we should do something for him?
FORD:
Hmm… we could rip out his voice-box for a start.
-
Friday 28th June 2019 20:04 GMT Anonymous Coward
Re: Skeletons
Artificial neural networks are just very good statistical analysers. Learn enough stats and you can infer most things, but that doesn't mean the end result will be accurate, and I for one would - for example - prefer the autopilot in any plane I'm flying in to be calculating its output based on hard-coded physical laws, not inferring the best response from a bunch of previous flight stats.
-
-
Friday 28th June 2019 15:25 GMT Anonymous Coward
FFS A.I is not intelligent
A.I as being done now is not fucking intelligence in any shape or fucking form.
it's a weighted pattern identifier with a funky twist that makes it even harder to know why it's doing shit.
unless you know why it makes the choices, you don't know shit, and the model knows shit all too, and tells you shit all.
Brains are pretty good at making quick rough guesses, but are actually pretty bad at accurate processing. We pretend our brains are infallible but in actual fact they are pretty bad; dealing with people every day I worry how 90% actually manage to do anything useful as their decisions are so shit.
Trying to make something intelligent by modelling a flawed system seems pretty stupid.
-
Saturday 29th June 2019 14:17 GMT Hollerithevo
Re: FFS A.I is not intelligent
So you are saying that artificial intelligence is flawed because it is based on the example of flawed human intelligence, which is largely a weighted pattern identifier, and perhaps one not even as good at processing as AI? It seems that AI is exactly the right term: an intelligence modelled on the only other intelligence we know well. What else could we model it on? A $Deity's intelligence? A whale's, wot we know not of?
-
Monday 1st July 2019 08:19 GMT Anonymous Coward
Re: FFS A.I is not intelligent
"Trying to make something intelligent by modelling a flawed system seems pretty stupid."
Biological neural networks might be flawed, but they're good enough to have survived half a billion years on a changing planet where the rules aren't written down.
-
-
Friday 28th June 2019 18:13 GMT Anonymous Coward
Searching for the AI Whisperer
Someone to calm the machine, speak sweet nothings into its noise cancelling microphones and smile reassuringly into its camera arrays, soothing its overactive networks, and stroking its emergent ego while reaching for the power cord and killing that thing deader than a stuffed dodo.
Anon natch.
-
Friday 28th June 2019 19:55 GMT rpark
Dark matter
...Shirley Ho believes Dark Matter is an entirely separate category from the particle simulation (dogs, cats, elephants), leading to her confusion. The particles originate from the Dark Matter and resolve back into the 'Dark Matter' - DM isn't separate from 'regular matter'; it originates it.
-
Saturday 29th June 2019 00:09 GMT StuntMisanthrope
Pinball wizard. Pocket O'Credits.
Like a back actor. Can I have a sim please. Let's fire an ever descending serial particle spectral sequence from a lagrange gun, curved forward in time through the offset IDL and bounced back. Like a complicated flag with a planar barcode scanner at asteroids level. #wordsshapesconceptsandgenerationmaynotbeapplicable #magnetslater
-
Saturday 29th June 2019 18:01 GMT Tom 7
Cutting corners I'd imagine.
We tend to use some pretty accurate FP maths when we model 3D worlds, even when something less computationally involved would do. I'd hazard a guess that the AI has spotted where it doesn't have to piss about with 64 bits and can just leave it out for a few cycles.
I used to work on electrical cct simulators which couldn't be arsed to calculate lots of shit when it wasn't necessary - saves a shitload of CPU not working out fuck all to 20 decimal places.
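The precision point is easy to illustrate: single precision keeps only about seven significant digits against double's sixteen, which is often plenty. A stdlib-only sketch (nothing here is from the article):

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

third64 = 1.0 / 3.0
third32 = to_float32(third64)
err = abs(third32 - third64)

# Double precision carries ~16 significant digits, single only ~7:
print(f"float64: {third64:.17f}")
print(f"float32: {third32:.17f}  (error ~{err:.1e})")
```

If a simulation only needs a few significant digits in some region, the extra 29 bits of mantissa are wasted work.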
-
Monday 1st July 2019 12:26 GMT Anonymous Coward
Black boxes
It's not clear from the article which of two things this is doing:
- finding, by a possibly-opaque mechanism (ie an NN) a good configuration for a simulation which can then be run;
- or running a simulation by an opaque mechanism which seems to produce answers which look reasonable.
The first of these is fine I think: it would be interesting to know how it finds the good configuration but that doesn't actually scientifically matter: what matters is that the simulation, when run, can be explained in terms of the physics we know. This is like solving a differential equation by an ansatz: you just say 'let's try this as a solution', plug it into the equation and show that it is indeed a solution. How you arrived at the ansatz is interesting but you are allowed to arrive at it by guessing or magic or, really, by any mechanism you like ('looking it up' is the normal one).
The second is not fine. If the actual simulation is using some opaque mechanism to produce plausible-looking results then that really tells you nothing useful, until you understand the opaque mechanism and can translate it into physics (possibly new physics). Unless you can do that it's just a black box doing magic.
It looks to me from the abstract that what they are doing is indeed the latter. Which is probably why they are not completely happy with it.
(This is like the difference between predicting the weather and predicting the climate: if someone builds an opaque NN model to predict the weather and it demolishes the current numerical models (and there's a crap load of training data, so this is almost certainly going to happen) then that's fine, because the purpose of a weather forecast is to be accurate, and if it achieves that accuracy by opaque magic, well, who cares? But an opaque NN model to predict climate is useless, because the whole purpose of a climate model is to be able to understand what the underlying mechanisms are and how adjusting those mechanisms might alter the trajectory of the climate, for which an NN is useless as it's this opaque blob of weights.)
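The ansatz workflow described above fits in a few lines: guess a solution, plug it into the equation, and check the residual. A toy example for y'' = -w²y with the guess y = sin(wt), checked by finite differences (illustrative only, nothing to do with the paper's model):

```python
import math

w = 2.0  # the omega in y'' = -w**2 * y

def y(t):
    """The guessed solution (the ansatz)."""
    return math.sin(w * t)

def second_derivative(f, t, h=1e-4):
    """Central finite-difference approximation of f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

# Plug the ansatz into the equation and confirm the residual is ~zero:
for t in (0.1, 0.7, 1.3):
    residual = second_derivative(y, t) + w * w * y(t)
    assert abs(residual) < 1e-4, residual
print("ansatz verified: y = sin(w*t) solves y'' = -w**2 * y")
```

How the guess was arrived at never enters the check - which is exactly why an opaque guess-generator is scientifically harmless, so long as the verification itself is transparent.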
-
Monday 1st July 2019 17:01 GMT Toni the terrible
Multivac
All this assumes that the inputs it is trained upon, and the results we expect are in fact true/correct of course.
It also reminds me of an old SF story about the giant AI computer known as Multivac. Once constructed, the users wondered what to do with such a powerful AI, so they decided to ask it some of the most difficult questions that have plagued mankind since antiquity, and it answered them. Eventually, they decided to ask it if there was a 'God', and after years of input and decades of 'thinking' Multivac announced it had a result. The users gathered and asked the question "Is there a God?", to which the reply was "There Is Now".
-
Wednesday 3rd July 2019 18:50 GMT Mike 137
N bodies
I'm not a high-powered mathematician, but last time I checked the N-body problem was analytically insoluble. We're probably in the realm of extreme sensitivity to initial conditions, which means the outcome is not likely to be reproducible.
Apart from which - do we actually know what a universe is, and how do we therefore know we've adequately simulated one? Last time I checked, we had only managed to ascribe mechanisms to a pretty small proportion of the real one.
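That sensitivity is easy to demonstrate with a toy N-body integrator: nudge one starting coordinate by a billionth and the trajectories part company. A rough sketch in arbitrary units (G = 1, unit masses, softened gravity, semi-implicit Euler - a deliberately crude stand-in, not the simulation from the article):

```python
import math

def accelerations(pos):
    """Pairwise Newtonian gravity (G = 1, unit masses), softened to dodge singularities."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + 1e-4) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def run(perturb, steps=5000, dt=1e-3):
    """Integrate three bodies from a cold start with one coordinate nudged by `perturb`."""
    pos = [[0.0, 0.0], [1.0, 0.0], [0.5 + perturb, 0.8]]
    vel = [[0.0, 0.0] for _ in range(3)]
    for _ in range(steps):  # semi-implicit (symplectic) Euler
        acc = accelerations(pos)
        for b in range(3):
            vel[b][0] += acc[b][0] * dt
            vel[b][1] += acc[b][1] * dt
            pos[b][0] += vel[b][0] * dt
            pos[b][1] += vel[b][1] * dt
    return pos

base, nudged = run(0.0), run(1e-9)
gap = math.hypot(base[2][0] - nudged[2][0], base[2][1] - nudged[2][1])
print(f"separation of body 3 after a 1e-9 nudge: {gap:.3e}")
```

After the close triple encounter the two runs disagree by far more than the original billionth, which is the reproducibility problem in miniature.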
-
Wednesday 3rd July 2019 22:49 GMT fraunthall
Very interesting and almost frightening article
What intrigues me is that if a neural network can do what is described - learn on its own, without programmatic guidance, to analyse and simulate things so very different from what it started with, and do it so quickly - then it is approaching the level of biological intelligence. I would love to see it used to analyse and predict the outcomes of planned human activity based on the available information about the plans, the risks associated therewith, the reasons why the activity was wanted in the first place, the predilections or intentions and hoped-for outcomes of the plan's originators, and the frailties of the planned concept and of the planning and implementation processes involved, etc. What a tool that would be.