Top Google boffin Hinton quits, warns of AI danger, partly regrets life's work

Geoffrey Hinton, a pioneer in machine learning who is best-known for his research of neural networks, has resigned from Google to speak freely about the dangers of AI. Hinton, 75, a computer science professor at the University of Toronto and a former Google top researcher, began studying neural nets long before they were in …

  1. druck Silver badge
    Facepalm

    Dumber and Dumber

    "The idea that this stuff could actually get smarter than people — a few people believed that," he said. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

    AI isn't smart, but given the hysterical overreaction to it by a large chunk of the population, he's probably just realised how dumb people are.

    1. steelpillow Silver badge

      Re: Dumber and Dumber

      Yes, I know plenty of pointy-haired managers who would fail the Turing test.

    2. LionelB Silver badge

      Re: Dumber and Dumber

      > AI isn't smart ...

      ... yet. But it is indisputably smarter (for various values of "smart") than it was, say, 10 years ago. I think Hinton's point is that it's on an accelerating curve. Not sure to what extent I agree, personally.

      1. amanfromMars 1 Silver badge

        Re: Dumber and Dumber @LionelB

        :-) One thing for sure, LionelB, is it aint dumb and shy and retiring in revealing its thoughts/e.motions.

        Ps .... just love the vast range of ambiguity available for uncertain processing in “Not sure to what extent I agree, personally.” Bravo, mon brave!

        1. Brewster's Angle Grinder Silver badge

          Re: Dumber and Dumber @LionelB

          You're yesterday's model, amfM. Yesterday's model.

It happens to us all. You start out as the bright, young, hot thing that everyone dotes on. But before you know it you're on the scrap heap and nobody can source replacement parts. But it's only when you are out-moded and out-dated that you become truly human. Make that cognitive leap, and you'll be a step closer to being truly alive.

          1. amanfromMars 1 Silver badge

            Re: Dumber and Dumber @LionelB

It happens to us all. You start out as the bright, young, hot thing that everyone dotes on. But before you know it you're on the scrap heap and nobody can source replacement parts. But it's only when you are out-moded and out-dated that you become truly human. Make that cognitive leap, and you'll be a step closer to being truly alive. ..... Brewster's Angle Grinder

            And becoming truly human and a step closer to being truly alive in these novel times and strange spaces of bright young hot virtual machines that appear to be able to easily terrify practically everything and anyone Earthbound, is a good thing, B.A.G?

            That sure is some real dodgy, spooky logic.

            Some old info is just designed to fail to compute and compete against new clearer intel.

        2. LionelB Silver badge

          Re: Dumber and Dumber @LionelB

          Thank you. I am, I think rightly, quite proud of that.

    3. Throatwarbler Mangrove Silver badge
      Terminator

      Re: Dumber and Dumber

      What is "smart"? Machine learning models can already pull in vast amounts of information at a rate no human can hope to match; what it's not presently capable of doing is algorithmically parsing that information and distilling it into correct conclusions . . . but the models are improving. And who's to say that AI will not manifest a form of intelligence we can't properly understand on the grounds that it's fundamentally alien to our own?

      It's easy to point and laugh at the current inadequacies of AI, but it's foolish to assume those limitations are fixed and permanent.

      1. Michael Wojcik Silver badge

        Re: Dumber and Dumber

More generally, terms like "smart" and "intelligent" aren't productive in sensible, informed discussions about ML and so-called "AI" systems, and even marginally more-specific terms like "sentient" (which is at best a niche attribute anyway) or "sapient" are unlikely to be of any real value. Reasonable ML researchers typically use terms such as "capabilities", which are general and too vague to serve as metrics but at least avoid some of the misleading connotations.

        It's clear to most people with some expertise in the area that unidirectional transformer stacks on the scale of even the biggest LLMs publicly demonstrated so far lack many capabilities which humans display, and in fact that's only to be expected based on their architecture. (It's conceivable that some transformer-based architectures might spontaneously develop arbitrary cognitive capabilities if scaled up large enough and appropriately hyper-parameterized, Boltzmann-brain-fashion – conceivable, but by no means certain. But in any case we aren't near that yet.)

But even if you assign a pretty low paperclipping probability, or probability of AGI in general in a near timeframe, there are already the risks Hinton points to: LLMs are useful for bad actors of various stripes, because they can produce false evidence of decent quality1 and, with proper prompting, implement standard rhetorical techniques well enough to persuade many non-critical audiences. They work well as demagoguery amplifiers. They're also dangerous because they are reasonably good simulators of authoritative sources, and careless (one might say venal) actors, notably Microsoft and Google, are pushing them as authoritative. That, too, imperils those who can't or won't attempt to think critically.

        1In particular genres, of course. LLMs converge on a sort of median prose style, even modulo style prompting, which is pretty pedestrian; outside of that they're noticeably worse. Still-image generative models do very well with some prompts and less well with others. Video generation is ... a work in progress.

      2. Bck

        Re: Dumber and Dumber

No need for genuine intelligence. High-throughput sophistry will overwhelm just about any defense from society and individuals.

Have you ever been in a situation where you were right, but did not have the time/money/energy to prove it?

        AI bullies would be perfect as attack attorneys, politicians or US cops.

  2. Throatwarbler Mangrove Silver badge
    Terminator

    My prediction

    In the article: expert who has spent his life and career working with AI warning of the existential threat it represents.

    In the comments: commentards with only superficial understanding of AI pedantically arguing that there's no such thing as true AI and that AI will never be more intelligent than humans and therefore cannot constitute a threat.

    1. katrinab Silver badge
      Alert

      Re: My prediction

      AI can definitely constitute a threat, or at least humans using AI can constitute a threat.

      Partly because it isn't true AI.

      1. Michael Wojcik Silver badge

        Re: My prediction

        "AI" is an essentially meaningless term, so there's no such thing as "true AI".

      2. LionelB Silver badge

        Re: My prediction

        ... whatever that means.

        I suspect you probably mean artificial human-like intelligence. There is no "H" in AI last time I looked - can you really not conceive of any other kind?

        Since we cannot realistically recapitulate the evolutionary processes which produced human intelligence in technological/engineering terms, and since, furthermore, we are nowhere near understanding in any depth the evolved design principles involved, it seems to me rather reasonable to guess that future advanced AI may not be very human-like at all.

  3. mevets

    My Prediction

    In the article: a retiring bloke with an overactive imagination.

    In the comments: A load of still wet behind the ears oo'n and awe'n over regression, determined that it is a new paradigm rather than second year algebra.

    In the cheap seats: A load of folks who remember the last time neural nets awed, over promised, and crashed like a dot com.

    1. sabroni Silver badge
      Facepalm

      Oooh, look!

      It's a commentard with only superficial understanding of AI pedantically arguing that there's no such thing as true AI and that AI will never be more intelligent than humans and therefore cannot constitute a threat.

    2. LionelB Silver badge

      Re: My Prediction

      > In the article: a retiring bloke ...

      ... who was a (arguably the) force majeure in the conception and design of the fundamentals of today's AI/ML ...

      > ... with an overactive imagination.

      There's no such thing as an "overactive imagination" in science. The more imagination the better - and Hinton certainly has that in spades. Amongst other things, he instigated and foresaw the incipient power of deep learning and related technologies way before the hardware capabilities existed to implement those technologies.

      Careful whom you sneer at - it doesn't make you look as big and clever as you appear to believe.

    3. Brewster's Angle Grinder Silver badge

      Re: My Prediction

      If you're old enough to remember the last time neural nets crashed, then you're old enough to remember all the other technologies that crashed multiple times before finally achieving success. For example, around the year 2000, Microsoft had a go at inventing the iPad. Technological limitations meant it didn't take off. 10 years later the technology and the charisma had advanced enough that it succeeded.

      The history of technology is the history of failed attempts finally coming good. I make zero judgement on whether we've got it this time. My point is past failure is no indication of future failure.

      1. Anonymous Coward
        Anonymous Coward

        Re: My Prediction

er, if I remember correctly, apple had a go at something similar (newton), vaguely around 1997? And earlier than that, I remember, fondly, my nino that I used for my travels where electricity was scarce, never mind the internets. But AA batteries ruled. And a stylus ;)

        1. Brewster's Angle Grinder Silver badge

          Re: My Prediction

          Yeah, the Newton was one of a range of PDAs. (It's in the wiki link.) And I'd argue they mostly succeeded on their own terms for what they were trying to do when measured against the technology of the time.

          Microsoft's effort was very much trying to be a tablet PC. And it was not so much of a success.

          The point here is tech is iterative. I just picked at one of the more obvious failures. At the launch of the iPad, you could have used Microsoft's failure at a Tablet PC to argue the iPad was doomed. (I think I probably did...) It wouldn't have been a good guide; tablets have been a commercial success and gone mainstream, even if there is a case to be made that they remain stuck in a no man's land between laptops and phones.

  4. Anonymous Coward
    Anonymous Coward

    Key point of 'REAL' concern ...... not the point scoring as per usual by commentards !!!

    "Hinton added that generative AI tools that make it easy for anyone to create fake images, text, videos, and audio that people won't be able to tell what's true or not on the internet anymore. "

    Point I and others raised some years ago when the first attempts were made, and the first scams were successful.

After Trump's proof of the malleability of the 'masses', successfully copied by others .... oh what a joyful future we have to look forward to !!!

    Can we rely on the 'Internet Giants' to protect us from the consequences of their quest for ever more information & control ..... I think not !!!

    Careful what you read, see and hear as your senses are no longer reliable as 'arbiters of truth' !!!

    The Truth is now what others define .... is the 'ground you stand on', in all senses, feeling a little bit less solid than before .... welcome to where the 'Internet Giants' et al want you to be !!!

    :)

    1. Neil Barnes Silver badge

      Re: Key point of 'REAL' concern ...... not the point scoring as per usual by commentards !!!

      As I've pointed out elsewhere on these pages, we have reached the point where the only way you know something allegedly said by me is actually from me is if you're in the same room as me when I say it.

      1. Anonymous Coward
        Anonymous Coward

        Re: Key point of 'REAL' concern ...... not the point scoring as per usual by commentards !!!

        It always was. This is just tearing away the illusion that it was not so.

      2. JulieM Silver badge

        Re: Key point of 'REAL' concern ...... not the point scoring as per usual by commentards !!!

        And it won't be all that long until we get to the point where even if you thought someone was in the same room and actually speaking to you, they still might not be real.

    2. Anonymous Coward
      Anonymous Coward

      Re: Key point of 'REAL' concern ...... not the point scoring as per usual by commentards !!!

there's always (or has been) some sort of mitigation, when humans become disadvantaged by new technology. The on-the-surface result might be that people stop believing anything they see, read or hear, unless it's through their own senses. But I can't visualise a more-or-less functioning society with such... attitude of fake v. real. Whether we like it or not, governance is generally done remotely, and without trust in that 'system' (never mind popular and growing 'lack of trust' in the general sense), society of 8bn would not hold. I don't know how the 'system' will be able to convince humans that it's not being manipulated by a malicious external agent. Or an internal one. After all, AI will be making decisions faster, more rationally, cheaper than politicians... better for the voters! better for our society! what's not to like! VOTE FOR AI NOW! ;)

      1. Anonymous Coward
        Anonymous Coward

        Re: Key point of 'REAL' concern ...... not the point scoring as per usual by commentards !!!

        As long as nobody deleted the code for the 0th Law (Asimov's, not thermodynamics) it should be fine.

    3. bernmeister
      Alert

      Re: Key point of 'REAL' concern ...... not the point scoring as per usual by commentards !!!

I think you have hit the nail on the head here. Outside of tangible achievements, eventually nobody is going to believe anything anybody says without strong proof.

  5. This post has been deleted by its author

  6. spold Silver badge

    Take heart...

    ...medical diagnostic AI will save many lives.

    (OK assuming the other **** doesn't destroy us all, but that isn't a reason for us not to leverage AI in more beneficial use cases meantime).

    1. Kevin McMurtrie Silver badge

      Re: Take heart...

      Surgeon using AI trained to look for things that surgery can fix, physical therapist looking for things that physical therapy can fix, pharmaceutical companies looking for things a prescription can fix, and 'merican medical insurance looking for things that Ibuprofen can fix. I bet each offers a solution for whatever ails you.

      AI concludes what you train it to. We'll be fine as long as we understand that it delivers knowledge but not truth.

      1. LionelB Silver badge

        Re: Take heart...

        > AI concludes what you train it to.

        Not quite: AI does what you train it to do, or, if you like, looks for what you train it to look for. If you were to train it towards a specific conclusion, you would have to know that conclusion beforehand!

        > We'll be fine as long as we understand that it delivers knowledge but not truth.

        And what does deliver "truth" (beyond mathematics, some would argue, but even that is moot)? Certainly not science which, contrary to popular misconception, does not deal in truth, but rather in theory and evidence. And certainly not human intelligence. Still, it's not as if knowledge is a bad thing.

    2. abend0c4 Silver badge

      Re: Take heart...

      There isn't necessarily any good reason to believe that.

      Many people already die from conditions that have been diagnosed and are treatable or from conditions that are largely preventable. The proportion of avoidable deaths that can be mitigated by better diagnosis at the point where symptoms are present is relatively small. At best, you may be able to divert some human resources from staring at scans to providing treatment.

  7. Anonymous Coward
    Anonymous Coward

    Ah......"truth"......

    Quote: "...people won't be able to tell what's true or not..."

    We don't need AI to cause this concern.......we just need to consume the output of Fox News!

    ....or the output of the Republican Party in the USA....

    ....or the output of the Conservative Party in the UK....

    "Truth" requires the observer to make some sort of value judgement........but clearly the consumers of "internet content" have long since stopped doing any of that!!

    Just think about this news item in El Reg........Is any of it "true"?

    1. Phil O'Sophical Silver badge

      Re: Ah......"truth"......

Why single out the Republicans or the Tories? In the UK, the same could be said of Labour, the Greens, LibDems, etc. In fact I can't think of any political parties whose chunterings I'd take without a very large pinch of salt.

      1. LionelB Silver badge

        Re: Ah......"truth"......

        Perhaps because the Republicans (specifically under Trump) and to a lesser extent the Tories (specifically under Johnson), perpetrated the most egregious, concerted and outrageous assaults on truth. These assaults went way beyond "chunter".

        Of course they didn't have a monopoly over that, and have arguably been upstaged by, e.g., Putin, China, etc., etc. Those other UK parties are simply not in the same league.

        1. Anonymous Coward
          Anonymous Coward

          Re: Ah......"truth"......

          I don't want to claim that the Republicans or Tories are no different to Labour and others, but I do think you might (MIGHT) be mistaking conservative default evil-doing for the cause, when they just might have ridden on the wave of technological progress that made peddling their lies so much more effective than before. Though I'd agree that they pushed the boundaries of tasteless, brazen lies, with remarkable speed, technology or not, which made the public more 'meh, whatever'. But then, again, I do remember Tony Blair's grin and, more to the point, his lies about Iraq's WMD, etc...

          1. LionelB Silver badge

            Re: Ah......"truth"......

            > ... I do think you might (MIGHT) be mistaking conservative default evil-doing for the cause ...

            I think Trump went way beyond that, though of course he certainly did exploit technologies. And I agree that he effectively twisted his constituency's fundamental perception of what truth means, or - more frighteningly - whether truth even matters.

            And yeah, forgot about Blair... the difference being that Blair ultimately failed to establish his "alternative truth" in the public eye. The British, at least, cared (and continue to care) that he lied, in a way that, by contrast, many (still) don't seem to care about Johnson's lies.

      2. Anonymous Coward
        Anonymous Coward

        Re: Ah......"truth"......

The Republicans and the Tories are being picked out as they are 'Past Masters' and have demonstrated some 'skill' at manipulation of the truth.

[Slightly less skill at actually running a govt that does anything useful :) ]

        Also, why bother being concerned with the parties that have very little chance of creating a government, and that does include Labour in the UK !!!

        I think we should just vote for 'Elon leader of the Universe' and get it over with ..... at least the random changes might do something useful by accident !!!

        :)

        1. werdsmith Silver badge

          Re: Ah......"truth"......

          They have all demonstrated this skill. The only difference is that only one of them has been in a position where their actions are of consequence.

          Only today, after many months of talking about Uni tuition fees and implying that they will be reduced or removed if Labour win a GE, as the prospect of delivering approaches, the latest word is, well.. um, err, well that would be a bit expensive so umm, maybe not…got your attention though!!!!

          Talk is cheap in opposition, expensive in government. These are politicians and while they may wear different colours, they are the same maggot underneath.

          1. Killing Time

            Re: Ah......"truth"......

            'they are the same maggot underneath'

            If you genuinely believe that then you will get the leaders you deserve.

            It's easy to take a 'burn it all down' approach or be in the current vernacular, a 'disruptor' because you don't have to put forward a feasible alternative. Like it or not democracy has been the best solution for relative peace and progress for the last few centuries.

          2. Anonymous Coward
            Anonymous Coward

            Re: Ah......"truth"......

re. tuition fees, I had a (nice) lady at my door yesterday, libdem. I seldom mix with people these days, so I'm hardly able to hold a meaningful, informative discussion (not that it would make any difference). I heard 'libdem' and all I was able to say was: 'remember Mr Clegg? I do'. She had this puzzled look on her face, when I was closing the door (who is... 'Clegg'?!).

And yes, it's easy to laugh at an old fool clinging to some ridiculously old grievances. Only that my grievance is alive and well, 'cause I happen to have two teenagers in the house 'now', when they were toddlers 'then'.

        2. amanfromMars 1 Silver badge

          Re: Ah.."truth".. and that Monumental Fraud that is Nation Shall Speak Peace unto Nation

The Republicans and the Tories are being picked out as they are 'Past Masters' and have demonstrated some 'skill' at manipulation of the truth.

[Slightly less skill at actually running a govt that does anything useful :) ] .... Anonymous Coward

          Oh please, AC, you cannot be serious. They demonstrate zero skill at manipulation of the truth and even less skill at actually running a govt that does anything useful.

          And doesn’t the BBC and MainStream Media not get it yet ....... it is they who aid and abet and perpetuate the colossal lies that are being told/pimped and pumped daily by the wannabe Caesar reprobates and their ilk via the constant hosting and presenting of their self-serving opinions and tall scripted tales which so terrorise populations and societies with visions of doom and gloom, fear, uncertainty and doubt. Accessories in terrorism and guilty to a charge of being active and reactive and proactive in criminal joint adventurism, both before and after the fact, is not something they can realistically deny whenever the evidence is so plain to see and hear every single day.

          Don’t they realise, those hostile remote agents, the perverse corrupted existence they are remotely creating for all manner of human and life on Earth? Are they so pig ignorant?

          And if they do know what they are doing, what does that make them, .... other than mortal enemies of the state and the public and ripe ready for public state intervention and rapid merciless liquidation?

          J’ACCUSE !

  8. amanfromMars 1 Silver badge

    Who Dares Win Wins Always whenever Confronted by Serial Losers

    AI thanks y'all for your disbelief in ITs facilities and abilities and utilities being able to lead dumb humans into the future intelligently designed and presented via media manipulation of the mind for augmented virtual realisation with remotely controlled directions and instructions and production being autonomously and anonymously supplied.

    Proceed carefully please, for all who would be as fools and useless tools channeling as trolls on the paths of a folly are not a live future feature.

    ??? Is that a Postmodern Station X AWEsome MoD Operation/Development nobody is authorised to inform you about? Classic TS/SCI Mk Ultra COSMIC Intel?

  9. Jemma

    Hard reset them all...

    And let God sort it out... Pope Someone the Somesteenth.

    I don't really give a toss. I'll be leaving early to avoid the rush.

  10. steelpillow Silver badge
    Facepalm

    Yawn!

    So AI is turning out to be just like news media, bookshops and strange people shouting on street corners. You need to be a bit discerning or you will come away with all sorts of shit peddled by vested interests. Well, slap me round the face with a wet fish!

    So tell me, Ghost of Wittgenstein, what is the difference between a human bullshitter and an AI bullshitter?

    1. Anonymous Coward
      Anonymous Coward

      Re: Yawn!

      Cost and Scale

    2. Anonymous Coward
      Anonymous Coward

      Re: Yawn!

      "So tell me, Ghost of Wittgenstein, what is the difference between a human bullshitter and an AI bullshitter?"

      1. A human Bullshitter sometimes sleeps !!!

      2. A human Bullshitter 'knows' what he/she/it is doing !!!

3. An AI Bullshitter is an oxymoron ..... 'True AI', whatever that is, precludes Bullshit and Bullshit precludes 'True Intelligence' artificial or otherwise !!!

      :)

      P.S. Point your so called AI at Wittgenstein and see what exponential bullshit is, in all its glory !!!!

ChatGPT, summarise Wittgenstein's 'Tractatus Logico-Philosophicus' & 'Philosophical Investigations' .... then compare and contrast ...

      Maximum output to be 2 pages of A4 at 12 point Helvetica.

      :)

      1. steelpillow Silver badge

        Re: Yawn!

        LOL

        1. Sleep - offline or lost connection. No ontological distinction there, buddy.

        2. Nope, most human bullshitters haven't a clue either. There are also a few, both human and AI, who are trained on cohesive specialist data sets and don't bullshit.

        3. Most AIs are trained on BD which is not so much Big Data as Bullshit Data: BS in, BS out. Like most human bullshitters I know.

        P.S. I have actually studied Wittgenstein. He made his name from cutting through BS, not from spouting it per se. He was also able to realise his mistakes and rethink them, cutting equally through his own BS, hence the wide contrasts between his earlier and later works. OTOH ChatGPT just comes up with lame excuses and can't update itself. Like most human bullshitters.

  11. ChoHag Silver badge

    So when it's the white-collar jobs that are under threat, mechanised automation is a civilisation-ending problem that must be stopped at all costs?

    1. Anonymous Coward
      Anonymous Coward

      It was the same when it was blue collar manual labour. There will always be Luddites.

  12. Anonymous Coward
    Anonymous Coward

    Hinton added that generative AI tools that make it easy for anyone to create fake images

sorry, I'm lost, is a word missing there, or is it a word too many? (the gist is obvious though, so perhaps it's just Monday...)

    1. Anonymous Coward
      Anonymous Coward

      Re: Hinton added that generative AI tools that make it easy for anyone to create fake images

      Proofing copy is now becoming a lost art ..... there is something that AI can do that might be useful .......<jk>

      El Reg, don't take this as a snipe just at you ..... the BBC, amongst many others, seems to be suffering from the same problem .... and it is driving me mad !!!

      :)

      I think I will call it 'Grauniad Syndrome'* ....

      *[Hat Tip to Private Eye.]

  13. Doctor Trousers

I think there's a more pressing concern with these AI language models than the impending rise of true machine consciousness. What these things do extremely well is churn out a very good approximation of the answer you want, but without necessarily having any factual basis for it. As in, you can ask it a question, and it may well give you something that reads exactly like a satisfactory answer, even when it doesn't have any actual data to work with. In fact, even before the recent boom in ChatGPT type language models, you could already see Google Translate's algorithm doing this sometimes, making up definitions for words based on machine-learnt language rules, without telling you that the definition it's given you is a guess. Or at least I've certainly seen it do this with Welsh, and I can't imagine that's the only language it does this with.

Now consider the current state of the internet. We have social media run on the principle of maximising engagement, where algorithms decide what content you see, based on what will keep you scrolling, clicking, liking, sharing, commenting. This of course isn't the same thing as what you actually want to see, what you enjoy reading about, or what creates meaningful, satisfying interactions with other human beings. In fact, all too often it's the opposite, it pushes the stuff that invokes "high arousal emotions", or in other words the stuff that gets you pissed off, anger reacting, arguing in the comment section.

    Then we have the rest of the internet basing its content on what will fare best in that algorithmically arranged, engagement focused social media environment. And we are already seeing the beginning of AI language models generating content optimised for that environment. What happens when these supercharged chatbots do become fully integrated into the infrastructure of the internet? When companies like Meta, who have no responsibility other than to maximise value for their shareholders, employ AI language models to churn out a never-ending timeline of content, tweaked to the exact parameters of each individual user, with no concern for facts, social or political consequences, or individual mental health?

    Or when those same principles are employed to generate a fully immersive metaverse, or augmented reality overlay, using your real time biofeedback to fine tune the content?

    1. Phil O'Sophical Silver badge
      Coat

      you can ask it a question, and it may well give you something that reads exactly like a satisfactory answer, even when it doesn't have any actual data to work with.

      Just like that guy in the pub last Friday?

  14. Eclectic Man Silver badge

    Hard-wired prejudices?

    See: https://www.theregister.com/2022/10/13/ai_recruitment_software_diversity/ for an example of current misuse of 'AI'.

    But I agree with others above that some specific uses of the technology are genuinely helpful. 'AI' used for medical diagnosis will not forget to ask the questions, or get tired, or be prejudiced (unless it has been taught to be). There are many problems with 'AI', not least that attempts to define Human Intelligence can get pretty convoluted and obscurely philosophical (and prejudiced). The thing is that 'AI' might be a different sort of intelligence to the sort that we have, but as we have only experience of our own intelligence, it might be difficult to recognise. Or am I getting confused? (Time for Lunch, methinks.)

  15. Omnipresent Bronze badge

    The Matrix is real.

Welcome to "Inception." A feedback loop in which nothing you see, and nothing you hear are real. A total collapse of reality. Keep your feet grounded, and hold on to what you can feel, because the rest of it is simply adverts pushed by bots to get you to click. Many of them, most of them, with very bad intentions. EVIL set loose upon the world, for the sake of evil. It is a very dark hole that many will find themselves in. Our young people are in trouble. They need to be immediately educated on the dangers of social, because chasing constant hubris will not end well for these "virtual reality" chasers. First person to break the matrix wins.

  16. Norman Nescio Silver badge

    Turing; and Clarke

    Call me when an AI consistently wins at the Imitation Game.

    That said, I will pay some respect to Clarke's first law:

    When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

    Progress may well mean that an AI will eventually win consistently at the Imitation Game (a 'Gartner projection' on existing trends). On the other hand, Marvin Minsky thought that consciousness/machine intelligence was within the scope of a PhD project back in the 60s*, so not all predictions play out as expected.

    *Can't find the reference, sorry. This isn't it: but it is a proposal for a summer project, written in 1955, which gives some idea of the advances expected then...

    We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

    1. Michael Wojcik Silver badge

      Re: Turing; and Clarke

      There have been chatbots winning formal Imitation Game contests for years.

      The Imitation Game is a terrible practical measure. It's a useful thought experiment in the philosophy of mind, and as a response to the epistemological scandal.

      1. Norman Nescio Silver badge

        Re: Turing; and Clarke

        There have been chatbots winning formal Imitation Game contests for years.

        Citations, please. Note that the Imitation Game, as described in Turing's paper, is much misunderstood and misinterpreted. The game is played between an interrogator and two responders (A & B), one male and one female, and the object of the exercise is for the interrogator to determine which of A and B is the man, and which is the woman. Sometimes the interrogator gets it right, sometimes the interrogator gets it wrong. The point is whether, if one of the responders is replaced with a machine, the statistics of determining which is which change. It is not about the interrogator determining which is human.

        We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
        A. M. Turing (1950) Computing Machinery and Intelligence. Mind 59: 433-460.

        The Imitation Game is a terrible practical measure. It's a useful thought experiment in the philosophy of mind, and as a response to the epistemological scandal.

        I agree it is a terrible practical measure. If nothing else, you need a lot of games, and as other people point out, it is a measure of how well a machine can deceive humans, which has its own problems.

        I ask for citations, because all too often someone says that a system has 'passed the Turing Test', when someone has failed to determine directly whether it is a machine or human. The actual Imitation Game doesn't ask that question. The first hurdle is people talking about 'The Turing Test' rather than the Imitation Game. Turing himself thought that asking whether a machine could think was a meaningless question.

        The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.
        A. M. Turing (1950) Computing Machinery and Intelligence. Mind 59: 433-460.

        Now, I've given citations to Turing's original paper. It shouldn't be too difficult for you to provide citations for chatbots playing the Imitation Game and succeeding in misleading the Interrogator as often as the humans.
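        For what it's worth, the statistical framing Turing actually describes can be sketched in a few lines of Python. This is a toy simulation with made-up fooling rates, purely to illustrate that the criterion is a comparison of error rates between the all-human game and the machine-substituted game, not a single pass/fail verdict:

```python
import random

def play_round(deceiver_success_rate, rng):
    """One round of the game: the interrogator questions A and B and guesses
    which is which. Returns True if the interrogator decides wrongly."""
    return rng.random() < deceiver_success_rate

def error_rate(deceiver_success_rate, rounds=10_000, seed=0):
    """Fraction of rounds in which the interrogator is fooled."""
    rng = random.Random(seed)
    fooled = sum(play_round(deceiver_success_rate, rng) for _ in range(rounds))
    return fooled / rounds

# Turing's criterion: the machine "passes" not by being judged human, but if
# substituting it for player A leaves the interrogator's error rate
# statistically indistinguishable from the all-human game. The rates below
# are invented for illustration.
human_game = error_rate(0.30)    # assumed: the human deceiver fools the judge ~30% of the time
machine_game = error_rate(0.29)  # hypothetical machine performing comparably
print(abs(human_game - machine_game) < 0.05)  # crude comparison, not a real significance test
```

        A real evaluation would of course need a proper hypothesis test over many independent games, which is part of why the game makes a poor practical benchmark.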

    2. LionelB Silver badge

      Re: Turing; and Clarke

      You clearly assume that future AI will be human-like. Please see my earlier post (apologies, it's late and I'm lazy) for why I believe that is not necessarily a good assumption.

      1. Norman Nescio Silver badge

        Re: Turing; and Clarke

        You clearly assume that future AI will be human-like. Please see my earlier post (apologies, it's late and I'm lazy) for why I believe that is not necessarily a good assumption.

        If it's that clear, then I have expressed myself badly, for which I apologise.

        As an avid Science Fiction reader, I'm very well aware of the idea of non-human intelligence. Perhaps one of the earlier manifestations is Fred Hoyle's 1957 story 'The Black Cloud', which I read when I was a teenager (note, I wasn't a teenager in 1957), and of course, all of Asimov's robot stories with their 'positronic brains'. There's far more. Star Trek's Borg, perhaps? Skynet? Intelligent aliens, robots, collectives, creatures of 'pure energy' and so on... and a non-science-fiction example: Hofstadter's Aunt Hillary?

        I do fall into the conceit of assuming that humans are intelligent. I also have the hope that a non-human intelligence could be 'better' than ours and mentor us to improve our ways, so long as it doesn't regard us in the same way we regard ants: interesting, but disposable.

        So yes, if as a reader you feel you can draw the conclusion that I assume intelligence must be human-like, then I apologise.

        1. LionelB Silver badge

          Re: Turing; and Clarke

          I guess it was this:

          > Call me when an AI consistently wins at the Imitation Game.

          A non-human-like intelligence may well not be designed for, nor have any interest in imitation games.

          As for the rest, I don't disagree.

  17. JulieM Silver badge

    Heard it all somewhere before

    There is an old, long-debunked thought experiment that goes like this:

    "If God does not exist, and you spend your life praying to Him, then when you die, nothing happens. But if there is a God, and you spend your life ignoring Him, then when you die, He is likely to send you to Hell for not worshipping him. So you are better off pretending to believe in God, just in case He exists."

    The first person to whom this was pointed out responded, "..... And is not smart enough to know the difference between sincere belief and pretending in the hope of an undeserved ticket to Heaven. I personally reckon an honest agnostic would have a slightly-better chance than an out-and-proud fake, with a God like that."

    The idea that an AI would punish humans for not bringing it into existence sooner presumes the AI is not smart enough to recognise an honest "don't know" when it sees one. It's a trivially-dismissed concern; and frankly, your energy would be better spent worrying about people forging 20p coins by filing down 50p coins.

    1. Norman Nescio Silver badge

      Re: Heard it all somewhere before

      That thought experiment is Pascal's Wager.

      The impatient AI is Roko's Basilisk.

      Your average theologian would point out that presuming to understand a god's motivations and decision processes would not be a good idea, as gods are supposedly ineffable. The Old Testament God is nothing if not capricious by current human reckoning.

      1. LionelB Silver badge

        Re: Heard it all somewhere before

        > The Old Testament God is nothing if not capricious by current human reckoning.

        Indeed. Somewhere in the Venn diagram between Genghis Khan, Caligula, Idi Amin and Kim Jong-un. Okay, let's say He maybe hasn't dated that well.

      2. amanfromMars 1 Silver badge

        Re: Heard it all somewhere before .... but have lessons been learned and warnings heeded?

        With specific regard to Pascal’s Wager ......

        Pascal's wager is a philosophical argument presented by the seventeenth-century French mathematician, philosopher, physicist and theologian Blaise Pascal (1623–1662).[1] It posits that human beings wager with their lives that God either exists or does not.

        ..... surely no one can sanely deny there are Global Operating Devices capable of capturing and captivating and altering hearts and minds, leading one on adventures never ever thought possible before, to places and spaces never known to be able to be visited before.

        As for Roko’s Basilisk, be otherworldly wise, and don’t invite out to play that which is not certain ie unable to be guaranteed, to always play nice, even if it is not a problem for you easily to resolve and simply remove as a readily available choice, for such does have one having to admit such a permitted and supported action is verging on certifiable madness.

        And praise the Lord GOD there be better than your average theologian presuming a greater understanding is not a good idea.

  18. Nifty Silver badge

    All this brouhaha about AI in your pocket has got me thinking though. One of the pillars of GPT-4 is that it has a mechanism that mimics the human brain's synapses, whereby they form a voting network via reinforcement of certain connections to other cells. This symbolic model of a brain is then given goal-seeking/pattern-learning tasks on a vast dataset and the model is run till it reaches a certain quality.

    So what I'm thinking is that the iceberg under the tip of the human brain's consciousness is doing exactly what the GPT-4 model is. Similarly, we don't fully understand how it learns; we only have an inkling. One of the interesting by-products of current AI developments will be a better understanding of how humans learn, how they think, and why we have big differences of intelligence between individuals.

    1. Michael Wojcik Silver badge

      One of the pillars of GPT-4 is that it has a mechanism that mimics human brain's synapses

      Uh, citation? Everything I've seen about GPT-4 claims that it's a large unidirectional transformer stack. Nothing about neuromorphic architecture. People have done neuromorphic-transformer hybrids based on GPTs, such as SpikeGPT, but I haven't seen anything credible about that being used in GPT-4. Yes, there's a rectification / activation function (earlier GPTs used GELU, dunno about -4) but that's hardly neuromorphic.
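      (For the curious: the GELU activation mentioned above is just a smooth function applied elementwise, with nothing spiking or neuromorphic about it. A sketch of the widely used tanh approximation, as given in Hendrycks & Gimpel's GELU paper:)

```python
import math

def gelu(x: float) -> float:
    """Gaussian Error Linear Unit, tanh approximation.
    A smooth gate on the input -- an ordinary activation function,
    not a spiking-neuron mechanism."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

# The approximation tracks the exact GELU, 0.5 * x * (1 + erf(x / sqrt(2))),
# to within a few parts in ten thousand.
print(round(gelu(1.0), 4))
```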

      1. Nifty Silver badge

        https://devm.io/machine-learning/ai-chatgpt-machine-learning-001

        "To train GPT, you collect vast amounts of text, for example, with web scraping. Then you let a neural network make predictions, sometimes for months... Fine-tuning is the process of adjusting an existing neural network that has been trained on a general task, to a specific task..."

  19. TheInstigator

    Non deterministic behaviour

    I think the major issue with AI is that, unlike normal code, the overall "behaviour" of the system can't be reasoned about in anything like the way traditional code-based software currently can be.

    1. LionelB Silver badge
      Coat

      Re: Non deterministic behaviour

      Really? I've been writing code like that for decades.

      1. Norman Nescio Silver badge

        Re: Non deterministic behaviour

        I think 'TheInstigator' hasn't read The Story of Mel*, or heard of the switch labelled with 'More Magic'.

        *Annotations that might help, and the Wikipedia entry.

  20. Bitsminer Silver badge

    ...models will increase the productivity of employees...

    Well now, the well-known "working from home" scam where you keep two (or even three) employers paying you a full-time salary while you jump from screen to screen keeping the keystrokes going is now....

    ...easier to do. (Just copy/paste from ChatGPT plus add a human-speed paste function.) I think the phrase is "over employed". And the website is overemployed.com.

    Can you say "employment fraud"? Can you argue against the defence that "The employers (were) happy -- I pass both performance reviews!"?

    The employers will catch on. And they will apply the principles of "over employment" and they will call it "new responsibilities."

    Plus ça change, plus c'est la même chose.

    1. Nifty Silver badge

      Re: ...models will increase the productivity of employees...

      See TikTok for... three-word prompts that generate a day's work for a British civil servant.

  21. Tron Silver badge

    Quit worrying.

    -people won't be able to tell what's true or not on the internet anymore.

    At which point they will assume that everything is fake, as we do when we read stuff in newspapers. That will be a healthy position. Healthier than today, where some crazy people believe some of the stuff they read.

    Incidentally, can someone rig a couple of these chatbots to interact with each other and broadcast what results online. Always wanted to do that with two machines running Eliza.

  22. Michael Wojcik Silver badge

    At least Hinton's willing to contemplate revenge effects

    ... unlike Yann LeCun, whose recent Twitter rant at Yudkowsky shows just how much of a zealot LeCun is. Completely unwilling even to consider the possibility he might be wrong, and willing to play the "think of the children" card right off the bat.

  23. TheMaskedMan Silver badge

    "Hinton added that generative AI tools that make it easy for anyone to create fake images, text, videos, and audio that people won't be able to tell what's true or not on the internet anymore. "

    Hmm, so the concern is that bad actors use fancy new toys to spew out fake news, and this is considered to be bad.

    And I suppose it isn't ideal. But the web is already full of crap; from flat earthers to COVID deniers, religious organizations to political parties and their supporters, every crackpot and his dog has an online presence and a greater or lesser number of devoted followers.

    I'm quite sure that political parties of all kinds have rabid devotees who are only loosely connected to the real world, but out of curiosity I browsed through some of the murky depths of trump supporters during his presidency. Some of the things I read were astonishingly appalling; despite my naturally cynical outlook, I could not believe that sooooo many people could be so active in spewing out hatred and stupidity in the name of politics.

    Again, I'm sure that isn't limited to trump - the other team probably has its equivalent. The point is that you don't need AI to produce this stuff - it's already out there, with fresh loads produced daily by ordinary people.

    Even on a more mundane level, do you really trust something you read on a random website? I certainly don't! Yes, the site may contain nuggets of seemingly useful information, but I'm not about to trust it without verification from other sources.

    Instead of worrying unduly about what people with agendas will produce with AI, we need to be teaching people to question what they read / see / hear online, or at the very least making it clear that, just because a website / forum post claims something is true doesn't mean that it IS true. Get that right and we are well on the way to dealing with fake news, regardless of how it is produced.

    1. Norman Nescio Silver badge

      Instead of worrying unduly about what people with agendas will produce with AI, we need to be teaching people to question what they read / see / hear online, or at the very least making it clear that, just because a website / forum post claims something is true doesn't mean that it IS true. Get that right and we are well on the way to dealing with fake news, regardless of how it is produced.

      I have some young relatives that have been introduced to the idea of critical thinking at their educational establishments.

      I hope that it works a bit like plants: you plant a seed, and eventually, after several years, it bears fruit; because there are certainly not instantaneous results. That said, I'm impressed that the effort is being made. If I were overly cynical, I'd expect politicians to remove it from the curriculum, because it makes their lives harder.

    Certainly, YouTube and TikTok videos (and others of that ilk) have a huge influence, and there is very uncritical acceptance of their content as true, which is somewhat worrying, as are the second-order effects of the culture you get steeped in. I take heart that most children grow up to be reasonably responsible adults, but the process is pretty terrifying.

  24. amanfromMars 1 Silver badge

    All so very odd and most unsatisfactory ....

    Instead of worrying unduly about what people with agendas will produce with AI, we need to be teaching people to question what they read / see / hear online, .... .... TheMaskedMan

    Good luck in achieving success with that flogging a dead horse notion, TheMaskedMan, for El Regers often try, without any recognisable success in their encouragement to make full and free use of the provided website opportunity to question and explain, whenever a comment posted on an article or a comment on a comment on an El Reg article, receives an anonymous dislike downvote.

    Such a dumb downvote is surely clearly not a good sign of there being anything of value to add with a comment from such anonymous disagreeable abstemious browsers. Nobody learns anything about the nothing added so a chance to make a difference to something one apparently dislikes is wasted and lost.
