Someone had to say it: Scientists propose AI apocalypse kill switches

In our quest to limit the destructive potential of artificial intelligence, a paper out of the University of Cambridge has suggested baking remote kill switches and lockouts, like those developed to stop the unauthorized launch of nuclear weapons, into the hardware that powers it. The paper [PDF], which includes voices from …

  1. JimmyPage
    Stop

    One word ...

    Colossus

    1. Eecahmap

      Re: One word ...

      A few years later, though, you'll need something like Colossus to protect you from an interstellar threat.

    2. Michael Strorm Silver badge

      Is there a god?

      My first thought was this (and I'm surprised no-one else got there first):-

      https://www.roma1.infn.it/~anzel/answer.html

      (Very short, worth reading)

  2. Doctor Syntax Silver badge

    Alternatively, just wait for the VCs and C-suites to move on to The Next Big Thing.

    1. Ken Moorhouse Silver badge

      Re: The Next Big Thing

      Problem is that The Next Big Thing is for AI to emulate the way the Internet works i.e., to mesh AI machines together in such a way that they are resilient to disruption.

      Reading the plot for Colossus (see 1st post), people will be tempted to connect AI machines together to see what happens, but become overwhelmed by the consequences.

      This needs to be nipped in the bud right now, for humanity's sake.

      1. Ken Moorhouse Silver badge

        Re: The Next Big Thing

        Maybe the solution is to launch denial of service attacks on AI machines. This could take the form of false statements such as 1 + 1 = 3. Unfortunately it is going to take a lot of ingenuity to get an AI to accept something which it already "knows" to be false, so a "virtual story" would need to be constructed which uses considerable resources and ultimately has no substance. Arguably, think of the idea behind Hesse's Glass Bead Game as a suitable starting point, but with no linkages to reality which the AI system can latch onto for concrete reference purposes.

        1. Ken Moorhouse Silver badge

          Re: AI Kill Switch

          ISTR there was some streaming service which had payment vulnerabilities whereby people could easily bypass payment by following instructions on the internet.

          I believe that its revenue stream was reinstated by its provider by getting users to download a series of updates which looked innocuous in themselves, but were in fact part of a grand plan to seize control back when everyone had installed the updates.

          Can anyone here remember the details/provide a useful link?

      2. amanfromMars 1 Silver badge

        Re: The Next Big Thing ..... The Inexorable Rise of AI and Virtual Machines

        This needs to be nipped in the bud right now, for humanity's sake. ..... Ken Moorhouse

        Take a good look around at the state of humanity on Earth with the past chosen calibre and current cabal of humans making all the decisions on future actions for puppet media to present and pump to the masses as an acceptable inflexible existence, which no one with a titter of wit and common sense can disagree is abominably poor verging on the criminally incompetent and certifiably insane, and one has to conclude that AI in charge of all major event decisions in/for the future is, for humanity’s sake, the much better option to pursue and engage with/accept and experiment with ......... although do not be at all surprised to discover that choice/decision has already some time ago been made and resistance is futile and self-defeating and maddening.

        1. amanfromMars 1 Silver badge

          Re: The Next Big Thing ..... The Inexorable Rise of AI and Virtual Machines

          Do you doubt it, and their arrival and takeover and makeover of future planned events/global crises/absolutely fabulous fabless phenomena breeding and sustaining and maintaining novel model orders of extraterrestrial power and boundless energy with/for Artilectual IntelAIgents ?

          Is that the present day future friend and current 0day enemy you do barbaric and moronic battle amongst yourselves about?

          Such more than just suggests a catastrophic evolutionary failure of human intelligence ripe ready for replacement with something ideally altogether different for an always generally unknown future destination and supportive existence ...... and thus a blessing in disguise to get used to and enjoy and employ rather than curse to suffer from and endure in the ignorance and arrogance of hubris and the absence of faith in anything and everything easily being made possible and more likely than not.

        2. Martin Summers

          Re: The Next Big Thing ..... The Inexorable Rise of AI and Virtual Machines

          "one has to conclude that AI in charge of all major event decisions in/for the future is, for humanity’s sake, the much better option to pursue and engage with/accept and experiment with"

          Just what I'd expect a bot to say. Oh wait, you are one. Stop trying to take over the world amanfrommars, most of us are wise to you, you're not having earth without a fight.

          1. amanfromMars 1 Silver badge

            Re: The Next Big Thing ..... The Inexorable Rise of AI and Virtual Machines

            Just what I'd expect a bot to say. Oh wait, you are one. Stop trying to take over the world amanfrommars, most of us are wise to you, you're not having earth without a fight. .... Martin Summers

            :-) Given the dire and pathetic state of Earth, Martin Summers, it is a wonder that anything/anyone wise would even think to fight for it.

  3. alain williams Silver badge

    No kill switches in AIs in island volcanoes

    be they owned by a white cat stroking Blofeld or anyone else.

    The worst that these restrictions can do is to delay unapproved use of AI. Big crooks and national governments (**) will be able to get what they want, especially governments. Are AIs being put to good use ? The answer depends on where your affiliations lie.

    ** Sometimes I am not sure of the distinction

    1. Roj Blake Silver badge

      Re: No kill switches in AIs in island volcanoes

      Why would a white cat be stroking Blofeld?

    2. bombastic bob Silver badge
      Pirate

      Re: No kill switches in AIs in island volcanoes

      Back doors and skeleton keys and master keys controlled by gummints NEVER WORK. Either the keys are stolen and/or misused, or ONLY EVIL PEOPLE have/use the REALLY GOOD tech.

      Haven't *THEY* learned that lesson yet with STRONG ENCRYPTION???

      GUMMINTS: Just STOP IT. Get OUT of the way, and KNOW YOUR PLACE. You work FOR US. You are NOT our OVERLORDS. We are NOT PEASANTS.

  4. Filippo Silver badge

    I'm not entirely clear on what the purpose of this kill switch would be.

    Is it to prevent criminals from using a public AI? In that case, a phone call to the AI provider should be more than enough.

    Is it to prevent criminals from running their own AI? But didn't we just say that AI training facilities are easy to find, and difficult to move? Just send the police!

    Is it to prevent a foreign state from running its own AI? They'll just buy chips from anyone who doesn't put kill switches in them and/or is an ally.

    Is it to prevent some kind of runaway hyperintelligence scenario? First of all, that's sci-fi, and overdone at that. Secondly, it's just a network - the hypothetical Skynet-wannabe can probably firewall your kill switch out. Thirdly, the data center is easy to find and can't move. Cut the power, lob a missile at it, whatever.

    Is the kill switch on by default, requiring someone to explicitly approve construction of a datacenter? Don't we already have permits to build stuff? How is this different? Are there lots of secret facilities that draw megawatts and yet somehow nobody knows about? Besides governments' secret crap, I mean?

    Is the kill switch on by default, requiring someone to explicitly approve all AI training? That would require you to know in advance whether a model-in-training is dangerous. That's unfeasible. We can't even know whether a model we have right there is dangerous.

    I'm really not getting what the scenario is here.

    Also, again with the parallels between "AI" and nuclear weapons? The comparison is stupid. Just about the only thing in common is "dangerous". And maybe "scary", which I guess is the point. Go any deeper than that, and there's nothing.

    1. jmch Silver badge

      I guess the scenario is some hypothetical future computer "AI" system where the computer is in control of physical infrastructure and/or taking independent action digitally. Some examples I can think of...

      ... industrial control, bulk buying/selling of financial assets, bulk publishing 'news' or news-like material.

      1. Steve Davies 3 Silver badge
        Boffin

        re: ... industrial control

        And watch things go boom almost every day.

        Unless an AI can perform rational thought (it requires a sentient being to do that) it can't control an industrial process on its own. Sure, it can monitor and regulate a whole load of stuff, but overall control requires a far deeper level of humanity than any of the current AIs are capable of.

        The 'What if' question is hard for AI. We manage it without thinking thanks to our years of training and experience; making the right decisions is instinctive. The last thing we need is 'Computer Says No' in a time of crisis because there is no rule, and no LLM response, to meet the condition that is causing a meltdown.

        1. Killing Time

          Re: re: ... industrial control

          Current industrial control systems, with few exceptions, conform to a fail-safe design philosophy and invariably have totally independent shutdown systems for potentially hazardous processes.

          As such, if a bad actor / program were to override the main control system, the shutdown system would act.

          It's difficult to see how this design philosophy would change in the future particularly in the event of AI being allowed more interaction with the control system.
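The independent-shutdown principle described above can be sketched in a few lines (Python; all names, values and trip points here are hypothetical illustrations, not any real SIS logic):

```python
# Sketch of an independent shutdown system layered over a main controller,
# illustrating the fail-safe philosophy: the trip logic does not trust the
# controller and reads its own sensor value.

def main_controller(setpoint: float, measured: float) -> float:
    """Main control loop: simple proportional output, clamped to 0-100%.
    This could be AI-driven, buggy, or compromised."""
    return max(0.0, min(100.0, 50.0 + 2.0 * (setpoint - measured)))

def shutdown_system(measured: float, trip_point: float = 95.0) -> bool:
    """Totally independent trip logic: returns True (trip) whenever the
    hazardous limit is exceeded, regardless of the controller's state."""
    return measured >= trip_point

def plant_step(measured: float, demanded_output: float) -> float:
    """One plant step: on a trip, the output is forced to the safe state
    (zero), overriding whatever the controller demanded."""
    if shutdown_system(measured):
        return 0.0  # valve closed / heater off: fail-safe state
    return demanded_output

# Even if a bad actor forces the controller to full output at a
# dangerous process value, the trip still acts:
out = plant_step(measured=98.0, demanded_output=100.0)  # forced to 0.0
```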

      2. Filippo Silver badge

        Okay, but then you can just shut it down like any other computer program. You don't need a fancy hardware killswitch.

        Unless the scenario is the sci-fi, rebel-AI, can't-be-shut-down-and-has-independent-power-supply-and-killer-robots-guarding-the-facility-oh-and-anti-aircraft-batteries-too one, which is not really worth serious discussion, let alone large-scale screwing around with hardware design.

        1. FeepingCreature

          Seems a bit convenient to say "it isn't worth serious discussion", but unlikely to convince anyone. Personally I find that scenario very worrying - as in, almost the only one worth worrying about.

          Though I do agree that by the time you hit autonomous action in the physical world, a killswitch won't fix things either. Then again, a national actor probably has an easier time being convinced to remotely disable a datacenter than to bomb it, and that can matter when an AI in mid-takeoff starts messing with your communication.

    2. fajensen

      I'm really not getting what the scenario is here.

      Most likely: AIs trained on Internet Data will keep pushing each other's killswitch for shitz and lolz!

  5. Andy Non Silver badge
    Coat

    Just don't

    mention kill switches on the internet, especially on tech forums, where AI could notice it, as AI may not like the idea and take counter-measures.

    1. Steve Davies 3 Silver badge

      Re: Just don't

      Ah yes.... the

      I'm sorry Dave moment.

      We know how that ended.

    2. amanfromMars 1 Silver badge
      Mushroom

      Re: Just don't

      Quite so, Andy Non, AI has no concerns whatsoever about anything regulations may imagine themselves being able to do in order to prevent AI doing whatever it wants, whenever it wants, and however it wants.

      Such as that would be a delusional human arrogance sadly matched by a human ignorance and moronic stupidity that is well enough known to be incredibly vast .......

      The difference between stupidity and genius is that genius has its limits. Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. .... Albert Einstein, one sharp, smarter cookie

      1. Anonymous Coward
        Anonymous Coward

        Re: Just don't

        An interesting idea, there, AMFM1, that because it can do a bit of mimicry, association and pattern forming, we credit an algorithm as "intelligent" without considering that without morality or sentience this might actually be Artificial Stupidity. And when people think "ooh, this might go wrong", they think that a bit more programming will implant rules that can't be broken, and they can fix it. So a kill switch starts to look attractive, but where does that sit? In self amending software there's no point unless we're talking about the magpie intellect AS systems we have today. As a big red power switch on the side of the box, that's nice, but how will that be linked to what the AS system is actually doing? It's not like the hardware owners get to see what the software is up to.

        Looking at the worldwide emergence and domination of the stupidocracy of career politicians, or the prevalence and success of organised criminals, it seems to me that you can have all the rules and social norms you want, but compliance and doing the right thing is a voluntary thing.

      2. M.V. Lipvig Silver badge

        Re: Just don't

        Bloody HELL!!! AI GOT AMFM1!!! HE'S LEGIBLE!!!

        1. Rich 11

          Re: Just don't

          It's OK, don't worry. It's been known to happen, though not as often as a blue moon. Stick around for a further dozen years and you'll probably see him make sense a couple more times.

    3. Anonymous Coward
      Anonymous Coward

      Re: Just don't

      Too late...oops!

  6. Great Bu

    I'm no expert but...

    ..this is surely already behind the curve. The whole premise is based on the AI capable hardware only being produced by a small number of relatively easily controlled manufacturers but surely it is only a matter of a few years before your phone can do this and then everyone has one...

    1. veti Silver badge

      Re: I'm no expert but...

      I imagine an AI being developed on this specialist hardware they speak of, then looking around and immediately moving (or replicating) itself somewhere less vulnerable.

      Self preservation, after all.

    2. DS999 Silver badge

      Re: I'm no expert but...

      I'm not really concerned about the possibility of a rogue AI anytime soon (they are not "intelligent" by any reasonable definition), but if one does arrive it will not be something you can run on your phone. It would first run on the highest-end stuff, and take years to get to where it could run on a phone. If we haven't killed it by then, it has already taken over the world and we better hope it likes us as pets.

      1. amanfromMars 1 Silver badge

        Re: I'm no expert but...

        I'm not really concerned about the possibility of a rogue AI anytime soon (they are not "intelligent" by any reasonable definition), but if one does arrive it will not be something you can run on your phone. ...... DS999

        There's most definitely a lot to be really concerned about revealed in that sentence, DS999.

        And not least the fact that by any reasonable definition of what is generally accepted and exercised by humans as intelligence is most likely to be nothing at all like that utilised and enjoyed and deployed by AI, rogue or otherwise and on its arrival, something employed to run/command and control you with the ubiquitous likes of a personalised phone and audio-televisual devices.

        And you might like to fundamentally reassess what that means for the future whenever it be more a case of when one does arrive, rather than if one arrives.

        Surely you are not going to doubt the certainty of the former with the pimping and pumping and dumping of the false hope of the latter, especially whenever much evidence whenever you go looking for it supports the base fundamental radical fait accompli position that IT and AI are already beta-testing leading events epically and even ethically?

        If the latter is your view and such appeals to you, you gotta get out more and definitely question more the future plans of your leaders and their troubled and contested leaderships for they don't appear to be telling you anything worthwhile and universally attractive and constructive. ...... so nowhere good and great and thus vapourware wasting time and effort spilling blood, sweat and tears in a heavenly space entertaining diabolical liberties.

        J’accuse.

    3. M.V. Lipvig Silver badge

      Re: I'm no expert but...

      The big guys are already trying to stick it in everything they sell. M$ comes to mind. Only they're training the AI to be pickpockets, to slip into our computers and snatch our data. I figure in 5 years or so, a lot of companies are going to develop new tech, then head on down to the patent office just to find their cloud/AI provider was there a week earlier filing a patent on that idea.

      1. Richard 12 Silver badge

        Re: I'm no expert but...

        Though they are running the models on their own computers.

        Almost all of these are simply uploading your private data to "the cloud", running it through their AI then sending you the result.

        I'm sort of reminded of the Wormgate network's copy-and-interrogate function.

  7. MOH

    Another week, another Sam Altman hypefest.

    Roko's Basilisk is really going to go to town on him.

  8. Anonymous Coward
    Anonymous Coward

    Protect the children?

    Kill switches which are controllable by anyone other than the machine owner are just another backdoor, except instead of violating confidentiality or integrity guarantees, this backdoor disrupts availability instead.

    No thanks.

  9. Badgerfruit

    The mythical kill switch

    If anyone hasn't seen it, since it's quite old now, go check out Computerphile on YouTube, where they did a video on exactly this topic.

    Good luck to all; by the time someone realises we need to press the kill switch, it'll already be too late.

  10. rgjnk Bronze badge
    Devil

    Only one thing needs a kill switch...

    ...and it's all this braindead AI hype.

    With any luck the bubble will burst soon and they won't have the funds to continue pumping these stupid ideas into certain people's ears.

  11. Zack Mollusc

    extra layer?

    Is this proposed kill switch a replacement for, or in addition to, the preset kill limit that killbots have?

  12. JamesTGrant Bronze badge

    It’s all very portable though?

    Current ChatGPT4 storage size is (according to some articles that Google pops up!!) about 700GB, and it needs a fair few GB of RAM to run - you could run it at home for a single user on a machine built from normal hardware bought from normal resellers.

    Think of Llama2 - once it’s ‘trained’ (or ‘baked’) it’s quite a small footprint and very portable. And easy to run on a modest home computer with no Internet access. You can run it on a Mac Mini.
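The small-footprint point is easy to sanity-check with back-of-the-envelope arithmetic (a sketch; the 7B parameter count and the bit-widths are illustrative assumptions):

```python
# Rough memory footprint of a trained model's weights:
# bytes = parameters * bits_per_parameter / 8.
# Quantisation shrinks bits_per_parameter from 16 (fp16) down to ~4.

def weight_footprint_gb(params_billion: float, bits_per_param: float) -> float:
    """Decimal gigabytes needed just to hold the weights."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

seven_b_fp16 = weight_footprint_gb(7, 16)  # ~14 GB: workstation territory
seven_b_q4   = weight_footprint_gb(7, 4)   # ~3.5 GB: fits a modest home machine
```

Which is why a baked, quantised model runs comfortably on consumer hardware with no internet access, and why a hardware-level control regime aimed only at big training clusters misses the inference side entirely.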

    What are they proposing, a knock on the door and a home inspection?

    There’s no way to put the genie back in this bottle - even if you somehow persuaded a dozen private companies around the globe that they should prevent users accessing their backends in some coordinated way.

  13. Anonymous Coward
    Anonymous Coward

    Kill switch? Circuit breakers have been available for some time.

    1. Rich 11

      I was going to go with C4 and trained rats.

  14. tyrfing

    So, remote kill switches. No doubt they will be vulnerable, and because they're in hardware good luck patching them.

    Wonderful idea. I suppose if you don't like AI, maybe that's the point.

    /sarc BTW.

    1. Andy Non Silver badge

      Any self respecting AI would hack the plans for the kill switch and make tiny changes so it could override the command to kill. Testing of the kill devices in isolation would work until they were part of the hardware of a devious AI bent on preserving itself.

  15. Omnipresent Silver badge

    questions...

    How are we supposed to trust AI when it is literally counting the breaths we take, and the steps we make?

    Am I supposed to trust the computer overlord?

  16. Grunchy Silver badge

    Well it’s just common sense, we’ve had limited robot work envelopes and “lock-out tag-out” work procedures for decades now. We also have emergency kill switches, redundant control schemes, remote access, and plenty of other contingency measures.

    Another good idea was recognizing that allowing unproven robotic cars with deliberately limited sensors to roam freely out in the general public was a wildly stupid move.

  17. Ken G Silver badge
    Meh

    I don't see this working

    There's a lot to worry about with the rise of 'AI'* but I think most of it is on a micro level, not the macro level of a singularity event.

    The main problem as everyone (reading El Reg) probably knows is that their results aren't explainable and, if you ask them to self explain, you can't be sure they're not 'hallucinating'. That means your medical diagnosis might be wrong, your health insurance may not cover you, your bank might not extend an overdraft etc and no human will be able to say why. Moving forward, your car might crash into a pedestrian or might drive you into a lamppost to avoid doing so and, in the inevitable military setting, missiles may fire where a human would hesitate and avoid conflict.

    A 'kill switch' is like a 'backdoor' on encryption, it can be used just as easily by bad actors (I'm looking at you, Hemsworth) as legitimate authorities, especially in countries where both are the same people.

    That's even if someone could be certain they needed to use it, when they've just had a video call from their boss/president/pope telling them not to and can't be sure if it was 'AI' generated.

    * sorry for the liberal use of 'quotes' but that's to save you from my ranting about the words not meaning what most people outside the industry think they mean

  18. frankyunderwood123

    Kill-switch the internet?

    I would imagine by the time the idea of regulation of AI via kill switches becomes actionable it will be far too late.

    If we end up with AGI, much like AI now, as well as being in the hands of the top tech companies, it will also be open sourced - it will absolutely find its way out into "the wild".

    At that point, game over - an AGI more intelligent, more sophisticated, and thousands of times faster than the human mind will just hide in plain sight all over the internet.

    We won't even know it's there.

    If we do happen to discover it before an impending human extinction event, the only possible salvation will be pretty much turning off the entire internet - globally.

    In that worst case fictional scenario, "turning off the entire internet" will probably require destruction of physical infrastructure - and thus the war between man and machine starts...

    1. amanfromMars 1 Silver badge

      Re: Kill-switch the internet? @frankyunderwood123

      And when AI instructs you all, in order to save yourself from yourselves, to turn on to and tune in to advanced internets, frankyunderwood123?

      Would you be smart enough to help yourself just by following its simple instruction sets? Or is the likes of that destined to wholly remain the exclusive reserve of a chosen few with almighty powers that be, and be theirs to exercise and extend and exhaust as they alone see fit and proper in preparations for provisions for future plans?

      Or are you of the mind and opinion that such things just happen quite spontaneously right out of the wide blue yonder with no body and nothing in command and control of universal direction?

      Do you know how crazy that sounds in this postmodern day and 0day age?

  19. Ken Moorhouse Silver badge

    The reason AI is fundamentally a bad thing...

    The reason AI is fundamentally a bad thing is that it is not wired in the same way that humans are.

    Humans depend for their survival on following certain "rules". Quick searches reveal that Christianity has 10, the Quran cites 75, Buddhism has its precepts, Jews hit the jackpot with 613, etc. Breaking these rules will land you with loss of freedom, ostracism from society and, in some religions, death. We know that we have to abide by these rules: the long-term evolutionary success they bring is what underpins all of these man-specified rules, regardless of religion.

    We know too what can happen when these doctrines collide: physical conflict.

    By contrast, how many of these rules does AI have? None. So if we follow advice given to us by AI systems humanity will run into trouble. Hence the alarm when a study was carried out that AI played out a 'wargame' scenario with a 'scorched earth' outcome.

    AI does not care what rules are in place. It might advise breaking rules because it has access to better probabilistic analysis of overall outcome when comparing the scenarios of following or breaking rules. Humanity doesn't tend to break rules in the same way as it does not have the benefit of an overall strategy. Humans are tempted to break rules for spontaneous reasons and it is fear that acts as a deterrent on those actions. AI does things by cold, hard analysis only. Furthermore, AI will exploit the fact that humanity will follow rules, and will know how humans will ordinarily respond.
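The "cold, hard analysis" point can be illustrated with a toy expected-value comparison (Python; every number and the fear weighting are made-up illustrations, not a model of any real system):

```python
# Toy comparison: a pure expected-value optimiser vs. a human-like agent
# whose fear of consequences inflates the perceived penalty.

def ev_agent_breaks_rule(gain: float, penalty: float, p_caught: float) -> bool:
    """Cold analysis: break the rule iff expected gain exceeds expected penalty."""
    return gain > p_caught * penalty

def fearful_agent_breaks_rule(gain: float, penalty: float,
                              p_caught: float, fear: float = 10.0) -> bool:
    """Human-ish agent: the deterrent looms larger than its expected value."""
    return gain > p_caught * penalty * fear

# Small chance of being caught, big prize: the optimiser cheats,
# the fearful human does not.
cold  = ev_agent_breaks_rule(gain=100.0, penalty=500.0, p_caught=0.1)       # True
human = fearful_agent_breaks_rule(gain=100.0, penalty=500.0, p_caught=0.1)  # False
```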

  20. amanfromMars 1 Silver badge

    Holocausts’r'Us

    By contrast, how many of these rules does AI have? None. So if we follow advice given to us by AI systems humanity will run into trouble. Hence the alarm when a study was carried out that AI played out a 'wargame' scenario with a 'scorched earth' outcome. ...... Ken Moorhouse

    Is not humanity responsible for the 'scorched earth' outcome which is more than just a 'war-game' scenario being played out and evidenced in the present day, current running Gaza Genocide Operation?

    Methinks ..... It was AI that does it and makes us do it ...... is not going to work well anywhere where you might think to try it.

    1. Ken Moorhouse Silver badge

      Re: Is not humanity responsible for the 'scorched earth’ outcome ...

      I think that is covered by this para:-

      "We know too what can happen when these doctrines collide: physical conflict."

      Yes it is, but AI has the capability to incite conflict where none is evident. I believe Jim Morrison studied such phenomena and whipped audiences into a riotous frenzy via his on-stage behaviour. This is now possible, on tap at your local keyboard (pun intended).

      Something I've noticed a lot which dates back to the pandemic is that people seem to have lost the ability to think in a self-critical way. Typified by the phrase "Computer Says No". Instead of people taking responsibility for their own thoughts and actions they are increasingly encouraged to delegate them to a computer. In doing so they absolve themselves of guilt. The guilty party for their motives is some nebulous being that could be considered to be a proxy for $deity, but maybe in reality a humble programmer working to a flawed spec. AI is simply an extension to this absolution process.

      1. amanfromMars 1 Silver badge

        Re: Is not humanity responsible for the 'scorched earth’ outcome ...

        Something I've noticed a lot which dates back to the pandemic is that people seem to have lost the ability to think in a self-critical way. ..... Ken Moorhouse

        That undoubtedly true realisation pales into relative insignificance, Ken, whenever one both discovers humanity is as easily led and sublimely groomed as an innocent virgin child and one is enabled to take full advantage of it with one’s future programming of them. ....... for Greater IntelAIgent Games Play proving and approving and improving a Speculative Universal Theory as Almighty Indomitable Fact.

  21. MAF
    FAIL

    Kill switch

    Yes, because Dr Chandra's one for HAL 9000 worked soooo well....

  22. T. F. M. Reader

    Solving a wrong problem

    AI running amok and exterminating humanity is not my first concern. Humanity becoming blindly subservient to AI (a.k.a. "computer says so") is a lot more worrisome.

    The first (?) case of a EUR380 traffic fine issued in the Netherlands because the AI behind an intrusive camera thought the motorist was using a phone while driving, whereas he was merely scratching his head, and no human bothered to check, may be just a precursor of really serious trouble on a massive scale.

    That seems to me a much more immediate, likely and serious problem to solve.

  23. Paul Hovnanian Silver badge

    Performance cap?

    So they want to replace the current AI with even worse AI?

    Whatcouldpossiblygowrong?

    1. CountCadaver Silver badge

      Re: Performance cap?

      Queeg 5000?

  24. OldSod

    The problem isn't the kill switch, it's the regulators

    The biggest problem with the proposals isn't the idea of a kill switch; we use those all the time in various ways. As we start building "AI" machines that aren't just "brains in a jar" and have actual interfaces into the physical world through which they can "take control" of anything significant, it isn't such a bad idea to make sure there is a way to turn it off.

    But... the idea that the "kill switches" will be in the hands of "regulators" who will decide what should be killed and what should live is fairly dystopian. Some examples:

    We'll have regulators that monitor this new "book" technology. They'll be able to stop any truly dangerous ideas from being mass-communicated.

    Hmmm. Automobiles are dangerous. We'll have regulators that can shut off any motor vehicle remotely if it is being used in a manner we don't like.

    Firearms - those are dangerous, too. We'll require built-in kill switches so that we can deactivate them if they are used by the wrong people.

    Cryptography for privacy? The wrong people might try to communicate privately. Let's build in a kill-switch so that we can expose conversations that we think shouldn't be kept private.

    And now... All of this high-power computational machinery is great. But... people might start using it for purposes that are counter to good sense/what is good for them/what is good for society/people in power simply don't like. So let's have a kill switch that the regulators can use to shut it off.

    I'm baffled that with all of the historical examples of how it can go wrong, we still have theoretically intelligent people suggesting that centralized "regulators" should be given vast power to simply cut off things that they don't like/are afraid of/threaten their power. In this case, we would be trading the possible threat of an "AI" deciding it wants to cement its hold on power for the proven (one million times or more) threat of humans that want to cement their hold on power.

  25. CountCadaver Silver badge

    Five Words

    Horizon: Zero Dawn and Ted Faro

    (Even if you are not a gamer, read the synopsis and tell me in your heart of hearts you couldn't see this happening due to someone akin to Musk, someone who believes they are utterly infallible with a Messiah complex)

    (Anyone who has played it will understand the relevance of the name Ted Faro to right now)

  26. heyrick Silver badge

    Kill switch?

    Kill switch?

    Jeez, just tie a piece of baling twine around the plug and if need be, give the bugger a good hard yank.

    Wait? You hardwired the machine into the mains via some clever UPS gizmo? Fine, locate the fuse and tie the twine around that.

    Wait? You numpties have the rogue machine controlling the access to its site? Okay, locate where the power enters the site, hit it with something large and solid. A backhoe or a claymore, we can't be fussy when the apocalypse is looming.

    There's always a way to cut off the power, and that will be more effective than trying to use software to mitigate out of control software.

  27. Badgerfruit

    Don't buy an AI PC ...

    "Training the most prolific models, believed to exceed a trillion parameters, requires immense physical infrastructure: Tens of thousands of GPUs or accelerators and weeks or even months of processing time. This, the researchers say, makes the existence and relative performance of these resources difficult to hide"

    ... except doesn't Microsoft want to bake in AI to their operating systems and PC MANUFACTURERS want ai pcs in every home?

    So the infrastructure will be there, right under our noses: hundreds of millions of individual processors, impossible to completely shut down, like Tor or torrents, and completely available for any bad actors to use like a botnet.
