This is so shoe-horned into "physics" that I think I'm hallucinating.
AI godfather-turned-doomer shares Nobel with neural network pioneer
If you needed another sign that we've well and truly entered the AI age, here it is: The first Nobel Prize has been awarded for contributions to artificial intelligence. AI "godfather" Dr. Geoffrey Hinton, and his intellectual predecessor in the realm of learning machines, Dr. John Hopfield, were jointly awarded the 2024 …
COMMENTS
-
-
Wednesday 9th October 2024 04:24 GMT Joe W
Quite rightly so, hallucinating like an AI - hey, maybe the committee outsourced that decision to an AI?
I find this irritating. It is maths, and there is nothing wrong with that - it's even interesting maths. Give the dudes a Fields Medal, but this is not physics by any means. It does not describe the world; it is not a groundbreaking observation of a natural process, or a new proposed particle, or any of that.
-
Thursday 10th October 2024 08:59 GMT LionelB
> this is not physics by any means
It's not such a stretch: Hopfield and Hinton's work was based explicitly on statistical physics models. And Nobel prizes in physics have most certainly been awarded for quite abstract and highly mathematical work. Hopfield & Hinton's work, furthermore, was not Fields Medal material - it does not solve any outstanding mathematical problem. Physics is about as close as it gets.
-
Wednesday 9th October 2024 07:47 GMT O'Reg Inalsin
John Hopfield was a highly productive physicist first, and the statistical-mechanical properties of spin glasses inspired his paper on an associative memory model (btw there was other very similar work at the time, so the idea wasn't completely original). Unlike today's NNs, Hopfield net simulations remember exact patterns - in other words, perfect memory without hallucinations (but also without flexibility).
If only there were an effective way to shoehorn actual verbatim "facts" into an LLM, that might alleviate the hallucination problem greatly - i.e., basing logic upon a foundation of axioms (foundational facts). I know I do this - when opining about something I may internally reference specific past life events (associated with a place, time, occasion, etc.) that somehow remain as landmarks (while most events are forgotten completely).
Perhaps the point is not that the Hopfield net is a predecessor to the ubiquitous backprop NN that runs today's LLMs, but that the behaviour of the Hopfield net is exactly what is missing from the backprop NN.
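For readers who haven't met one: the "perfect memory" recall described above can be sketched in a few lines. This is a minimal, illustrative Hopfield net (standard bipolar states with a Hebbian outer-product rule - not Hopfield's original code, and the function names are my own), showing a stored pattern being recovered exactly from a corrupted cue:

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix from a list of +/-1 pattern vectors."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)   # strengthen co-active connections
    np.fill_diagonal(W, 0)    # no self-connections
    return W / len(patterns)

def recall(W, state, max_steps=10):
    """Synchronous sign updates until the state stops changing."""
    s = state.copy()
    for _ in range(max_steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

# Store one 8-bit pattern, then present a cue with two bits flipped.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train([pattern])
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1
print(recall(W, cue))  # settles back onto the stored pattern exactly
```

The net falls into the nearest stored attractor, so it either reproduces a memorised pattern verbatim or gets it wholesale wrong - no plausible-sounding confabulation in between, which is exactly the contrast with an LLM being drawn here.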
-
Wednesday 9th October 2024 02:33 GMT Peter Prof Fox
If somebody else...
I did invent it (in 8 bits) but left it to others.
Was I wrong?
Or morally weak?
Or worry too much about how easy it was for humans to let computers do the hard things like... think?
PS I have reinvented the wheel (Wheel patent). Some of us are quite clever, you know!
The OTHER thing about AI is that if you throw enough MIPS at something you can make it APPEAR real, just like your child telling you details about their imaginary friend when (a) the friend doesn't exist and (b) they don't have any friends.
I know people in general can be well sub-normal and terribly deluded, so perhaps I shouldn't be surprised. I'm beginning to think that, like left-handedness and red-hairedness and tone-deafness and attention-to-detailness and amusement-at-sufferingness and believing-their-liesiness and so on, the ten-eighty-ten rule applies to intelligence. 'Ten percent' are properly clever. 'Ten percent' are unbelievably dumb. There could be an evolutionary reason why 10% of a population have a different trait: they might survive a wipe-out, or of course be the ones to be clobbered by circumstances. If you are one of the few who run far away from home, then you might be the only drop left of the gene pool which was wiped out by the something-plague. Look at the scale from tone-deaf to perfect pitch, or from incapable of driving to smoothly anticipating and understanding physics, then wonder how the human world of outliers has evolved. Should AI be renamed Artificial Competence?
-
-
Wednesday 9th October 2024 12:46 GMT druck
Re: If somebody else...
My 9-year-old is pretty indistinguishable from an AI chatbot at the moment. He does nothing but watch YouTube all day, shovelling stuff into his head, some of which is good factual knowledge, but mostly crap. He loves to jabber on at me about what he's learnt, and if I'm actually listening I'll correct him when he gets things wrong, explaining how stuff works in the real world. He then does a very AI thing by saying sorry, then expanding on what I've just said as if he knew the answer all along, but embellishing it with more guesswork.
It's more annoying than an AI, as I don't care how crap they are - I have no intention of actually using one for any purpose. But I want my son to grow up into a useful human being. Hopefully his continued formal education will allow him to establish a grounding in facts, so he can judge whether the torrent of information coming from the internet is true or even useful, and not just let his mind and his mouth run at a million miles an hour regurgitating and embellishing nonsense. There is a possibility that AIs will eventually be able to develop the same way, but I'm not holding my breath.
-
-
-
Wednesday 9th October 2024 08:01 GMT O'Reg Inalsin
if it matters
Even mushy-brained humans are made of matter, and are bound by the laws of physics. Hopfield spent two years in the theory group at Bell Laboratories working on the structure of hemoglobin [~1959] ... In 1974 he introduced a mechanism for error correction in biochemical reactions known as kinetic proofreading, to explain the accuracy of DNA replication. His fields of research really had a very wide range ... I think it likely that his multi-faceted observations of nature across a wide range informed and multiplied his insights.
-
-
Wednesday 9th October 2024 11:05 GMT Gene Cash
Google DeepMind co-founder shares Nobel Chemistry Prize
https://www.bbc.com/news/articles/czrm0p2mxvyo
> David Baker, Demis Hassabis and John Jumper have won the Nobel Prize for Chemistry for their work on proteins.
> Demis Hassabis co-founded the artificial intelligence research company that became Google DeepMind.
> UK-based Demis Hassabis and John Jumper used artificial intelligence to predict the structures of almost all known proteins and created a tool called AlphaFold2.
That's not chemistry, that's biology, but at least the Committee has a long history in that.
-
This post has been deleted by its author
-
Wednesday 9th October 2024 16:56 GMT Eclectic Man
Pantomime season
"Companies like OpenAI can't just put safety research on the back burner."
Oh yes they can!
Just like in large corporations, where "Security is everyone's responsibility" - until something goes wrong, at which point it was exclusively the Data Custodian's responsibility (for personal data breaches), the System Administrator's responsibility (for hacking, DDoS attacks and ransomware attacks), or that of the person who clicked on a link in an unsolicited phishing email - but definitely not, never, the direct responsibility of any member of the Board of Directors.
Safety research does not generate dollar revenue or get the attention of C-Suite executives like some new ability to predict the weather (or whatever).
I'll get my coat (it's behind you).