OpenAI calls for global watchdog focused on 'existential risk' posed by superintelligence

An international agency should be in charge of inspecting and auditing artificial general intelligence to ensure the technology is safe for humanity, according to top executives at GPT-4 maker OpenAI. CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever said it's "conceivable" that AI will obtain extraordinary …

  1. b0llchit Silver badge
    Facepalm

    Check the checks to check checks at the check

    Well, at least the IAEA looks at tangible problems, but it is essentially a paper tiger.

    Now they want an organisation to look at intangible problems and hope it has any effect? Are we going to restrict the sale of graphics cards or other compute devices? The required compute hardware isn't even comparable with the nuclear case. We have "AI calculation" practically in the backyard, whereas we are light-years away from the nuclear requirements to get fission going. Get a (very big) PC and get AI-ing. Can you, by contrast, build a gigantic centrifuge facility in your back garden to enrich and refine uranium, with (physical) access to the required technology and feedstock?

    This watchdog blabla is a "good luck with that" prayer. At the speed of compute development, are we going to raid garages because someone ran too big a calculation and used a joule or two too much? The technology behind GPT and other LLMs is essentially uncontrollable because it is software. It is an idea codified in code; a program.

    You cannot un-idea the idea. Trying to control it will simply become a huge game of whack-a-mole where you know beforehand that the moles have won, big time.

    1. Vincent van Gopher

      'Good luck with that' was my initial thought too.

  2. TheMaskedMan Silver badge

    "At the speed of compute development, are we going to raid the garages because someone did a too big calculation using a Joule or two too much?"

    Exactly. Either Altman et al are suffering from tunnel vision and can't imagine that open source / hobbyist folks will soon have enough compute to do the job, or they're pulling the ladder up behind them to reduce competition.

    I'm still not convinced that regulation is even necessary - from a regulatory point of view, at least. It's probably quite useful for sawing the legs off their critics, though.

    I am, however, convinced that it will be ineffective unless it is so Draconian that only the really big players can afford to comply. And even then there will be those who get around the regulations by being government / military bodies. This genie is out of the bottle and isn't going back.

    1. ChoHag Silver badge

      > Either Altman et al are suffering from tunnel vision and can't imagine that open source / hobbyist folks will soon have enough compute to do the job, or they're pulling the ladder up behind them to reduce competition.

      It's marketing through security theatre.

      "Look how successful this is going to be! All those security guards wouldn't protect nothing would they? Please invest more."

    2. mpi Silver badge

      > I'm still not convinced that regulation is even necessary

      Regulation is necessary for real-world problems, as it is for all other software products: compliance with data security, copyright, privacy rights, and high standards in critical applications ... aka exactly what the EU is doing right now.

      What isn't necessary is regulation against imaginary sci-fi scenarios.

  3. Anonymous Coward
    Anonymous Coward

    We also need regulation for hyperspace travel and pocket-size fusion reactors...

    which should arrive about the same time as general artificial intelligence.

    Keep in mind this is the maker of GPT-4 talking. They know full well that we're nowhere near real ("general") artificial intelligence - machines that can think for themselves. This sounds a lot like a way to restrict any "AI" (really just advanced pattern matching) work so it can only be done by the biggest names in the business. Like them.

    1. Anonymous Coward
      Anonymous Coward

      @AC - Re: We also need regulation for hyperspace travel and pocket-size fusion reactors...

      You don't need AI to be that good. It only needs to be good enough, and that is already happening.

      1. Zack Mollusc

        Re: @AC - We also need regulation for hyperspace travel and pocket-size fusion reactors...

        Nah. GIGO.

        These LLMs are not outputting anything new or creative, so they never add to the sum of knowledge, and much of what they churn out is plain wrong, thus diluting it. Since the training set has now been diluted, the LLMs will produce ever crappier output.

        It will be in vast volume and super cheap, though.

      2. mpi Silver badge

        Re: @AC - We also need regulation for hyperspace travel and pocket-size fusion reactors...

        No, it isn't happening.

        We have stochastic parrots that will happily announce that 2 + 2 = 7 or that caterpillars are mammals, if prompted correctly. Please show me any threat scenario for these models that doesn't rely on the humans using them.

  4. Anonymous Coward
    Anonymous Coward

    He means

    an international agency controlled by the US, of course.

    1. Anonymous Coward
      Anonymous Coward

      Re: He means

      An ineffective international talking shop headquartered in a nice western European city with good restaurants and shopping. Maybe FIFA can give them some pointers?

      1. mpi Silver badge

        Re: He means

        Whereas US-led international institutions are a model of efficiency and security, amirite? Take NATO, for example, which has managed to manoeuvre itself into a position where a "nope" from any member state can effectively halt proceedings.

  5. This post has been deleted by its author

  6. DS999 Silver badge

    The IAEA has a much easier time

    Nuclear power plants are large, and a relatively small number of countries have the capability to build them, let alone go beyond that to nuclear weapons. Radiation is detectable from a distance, making nuclear materials difficult to hide.

    Any country could start an AI program if it wanted, as could any reasonably sized corporation or even a wealthy individual or group of individuals - the main limitation is having enough money to buy a sufficient number of A100s or equivalent. There is no chance any regulatory body would know about such a project if its owners decided to hide it.

    So yeah, Elon Musk, George Soros, Bill Gates...pick your boogeyman. Any of them could individually fund development of an AI for their own purposes, and the outside world wouldn't know until they wanted them to know.

    1. amanfromMars 1 Silver badge

      Re: The IAEA has a much easier time

      Countries can't/don't start anything, and most certainly not any AI programs, individual entities do.

      And in consort and conspiring with similarly minded others to do as they do and as they please, do governments of countries, in reaction to what can and is being done by other individual entities/foreign agents, either in glorious free-lancing isolation or co-operative communal joint AI program adventuring, engage and pay them handsomely to either both preserve and protect historically established public and private interests/vital societal infrastructure and/or invest all manner of government public and private finance funds in such AI programs as also are intelligently designed not to wantonly and wilfully remotely destroy them in next to no time at all and with no prior warning, via virtually and practically unstoppable, novel never before known means ...... for such can also so very easily be done by AI developments/developers and is a much prized and extremely valuable invisible export earner for all those nations or individual entities with a genuine need for the import supply of such an incredibly dangerous and explosively destructive prize, to clear the way forward into the future of the present dead wood and putrid detritus of currently failed and failing institutions and zombie ponzi organisations not needed by positively creative postmodern, mutually beneficial constructive AI Programmed and Remotely Programmable Augmented Virtual Reality AIdDVenturing Pioneers ...... Advanced IntelAIgently designed Digitised Venturing Pioneers.

      And what further can you imagine IT/them to maybe be ..... Championing Grand Master Knights of the Cyber Realm in/from an Immaculate Sphere of Alien Influence, or another Corrupt and Perverse Future Space for the Exercise of Earthly Decadence ? ......... and whenever it can be both, or either, which would you prefer to win win and crush and crash all opposition and negative competition?

      PS.... That final question is a no-brainer, having as it does, only the one simple clear correct answer and the consequence suffered for choosing wrongly has one permanently removed from Future Influential Instrumental Greater IntelAIgent Games Play ...... one's bit part in the future is to be fulfilled as a deaf, dumb and blind spectator puppet in the vast seas of other lost souls forever to bob about helplessly in the wakes of near and far troubles and disturbances in the force and forces at play.

      Well? ...... Surely you weren't expecting the future to be simply a copy of the present with reflections of the past being added to consider as memory? That would be an insane expectation in the times and spaces of today which, one does surely have to admit with the advent of cyber and digital manipulation and binary coding and AI Programming, are completely different from all our yesterdays.

      Now there be genuinely unique and novel opportunities to create and deliver something Earth-shattering and quantum-leaping evolutionary and revolutionary .... and that chance is not the one going to being missed or ignored and denied.

  7. amanfromMars 1 Silver badge

    Hope Springs Eternal but Other Needs Must be Exercised for the Sake of Sanity and Humanity

    What on Earth makes CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever think superintelligent AI developments/developers/nation state and non-nation state actors/private pirates and public champions super ACTive in the field will take heed of and follow human based orders and self-serving administrative agencies?

    Such is surely delusional? Just look at the existing chaos and conflicted situations resultant from such interventions presently existing and expanding on Earth without their novel input/output to see what and where that leads to.

    Give Peace a Chance with AI Leading Somewhere/Anywhere Different, and Significantly Better and More LOVEable for Live Operational Virtual Environments easily manage the catering for every human delight if not having to demonstrate their powers in destroying those basic humans so many are advised to fear to disobey, for such is also an easy option readily managed and catered for.

  8. Brian 3

    ..... they thought it couldn't happen, 5 more times!

  9. Someone Else Silver badge

    [...] We can have a dramatically more prosperous future; but we have to manage risk to get there.

    Sure, you can have a dramatically more prosperous future...if you are a fatass at the top of the pyramid. If you're not (i.e. the remaining 99% of us), fuggedaboudit.

    Fatasses don't understand (or, more likely, understand but don't care) that the entire world economy is based on consumption. For there to be consumption, there must be consumers (i.e. ordinary folk with disposable income, so they can buy the latest doodads and geegaws). If nobody is working because a "superintelligence" has subsumed all the income-creating jobs, there ain't gonna be any consumption. No consumption, no "dramatically more prosperous future".

    Unless, of course, world governments are suddenly going to go full-metal socialist, and provide for the needs and wants of their carbon-based populace. I'll hang up and wait for Rep. McCarthy to answer...

  10. Anonymous Coward
    Anonymous Coward

    Wait, isn't their CEO

    the guy backing the creepy scan-your-eyeball crypto project?

    Maybe these people aren't as bright as they think they are.
