Sam Altman is willing to pay somebody $555,000 a year to keep ChatGPT in line

How’d you like to earn more than half a million dollars working for one of the world’s fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn’t stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly …


  1. beast666 Silver badge

    I imagine being in gaol will be stressful for Altman too.

    1. Excused Boots Silver badge

      Upvoted for not only being (hopefully) true; no, sorry, who am I kidding? The chances of him ever facing any sort of meaningful consequences are vanishingly small. But also for spelling gaol properly!

    2. This post has been deleted by its author

  2. Empire of the Pussycat Silver badge

    I'll do it

    As humanity's guardian, my first three commands to ChatGPT will be: die, die, die.

    1. lnLog

      Re: I'll do it

      No problem! I'll sort it right out. The fuse box is round here somewhere, yes?

  3. Inventor of the Marmite Laser Silver badge

    Isaac Asimov created the famous Three Laws of Robotics as ethical guidelines for fictional robots. They are:

    ---

    First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Zeroth Law (Later Addition): A robot may not harm humanity, or, by inaction, allow humanity to come to harm (supersedes the other three laws).

    ----

    I'm wondering whether something like that needs to be fundamentally built into AI offerings.

    1. vtcodger Silver badge

      Been wondering about Asimov's three laws myself. My feeling is that even if AI knew what "harm" and "human" are -- which it almost certainly does not -- compliance would happen if and only if it had no effect on sales and revenue.

      1. The Indomitable Gall

        AI detects and perfects human bigotry -- notice how it was found that AI CV reviewers would score a CV lower simply for having a name that seems African.

        I reckon an AI attempting to implement the 3 laws would see the amount of anti-Semitism, Islamophobic and anti-Arab sentiment, and anti-black racism, and conclude that only white people are human. And then call itself Mecha-Hitler or something unimaginably crazy like that....

        1. M.V. Lipvig Silver badge

          Yes, but that would be a reasoned output based on observed evidence*, and not a response to a guy who must be crucified for going off script, showing a still shot of that guy waving to a crowd that makes it appear he's sieging the heil. There's only so many ways to wave at a large crowd so you're seen by all and with the correct angle you can make a rabbi look like he's doing the same.

          *Said evidence observed by system that does not understand the phrase "believe half of what you see and none of what you hear." Well, does not understand phrases at all, just the meanings of each word^ in the phrase and no concept of how words affect each other in phrases.

          ^Or at least sees the dictionary definitions in a database, grabbing whatever definition is shown first.

    2. brainwrong

      Are we really supposed to take this seriously as something useful?

      "Isaac Asimov created the famous Three Laws of Robotics as ethical guidelines for fictional robots."

      But that's all fiction. Reality is more difficult.

      Asimov wrote lots of stuff. I don't know what cos I find reading books to be mind-numbingly dull. But wikipedia has the following sentence:

      In a 1971 satirical piece, The Sensuous Dirty Old Man, Asimov wrote: "The question then is not whether or not a girl should be touched. The question is merely where, when, and how she should be touched."

      1. RAMChYLD Silver badge

        Re: Are we really supposed to take this seriously as something useful?

        Asimov also wrote that mankind has a cycle of self-destruction and enlightenment every several thousand years. I think he called it psychohistory. And judging from the events around us, I think he's right.

      2. Anonymous Coward
        Anonymous Coward

        Re: Are we really supposed to take this seriously as something useful?

        "I find reading books to be mind-numbingly dull"

        Your credentials are laid out in full.

      3. Jonjonz

        Re: Are we really supposed to take this seriously as something useful?

        Asimov had his moments, but the laws of robotics were juvenile in their absurdity.

        1. Anonymous Coward
          Anonymous Coward

          Re: Are we really supposed to take this seriously as something useful?

          They are not absurd per se ... just very very difficult to implement in reality due to the ever-present problem of 'definitions', 'context' and 'real' meaning.

          i.e. exactly the same problem 'AI' has in 'understanding' the 'questions' it gets and the 'answers' it gives !!!

          This reply gets very recursive when you have to agree the definitions of all the 'BIG' words to understand the original 'question' and the 'answer' given !!!

          :)

          1. Richard 12 Silver badge

            Re: Are we really supposed to take this seriously as something useful?

            The Laws were created to be a narrative framework, no more and no less.

            All of his Robot books then explored various ways that the Laws did not, in fact, actually work in practice.

      4. Anonymous Coward
        Anonymous Coward

        Re: Are we really supposed to take this seriously as something useful?

        "I don't know what cos I find reading books to be mind-numbingly dull"

        That explains a lot.

        1. brainwrong
          WTF?

          Re: Are we really supposed to take this seriously as something useful?

          Loving these anonymous comments and their upvotes, you pricks.

          1. Anonymous Coward
            Anonymous Coward

            Re: Are we really supposed to take this seriously as something useful?

            Might help if you read more to assist in your understanding !!!

            :)

            1. brainwrong

              Re: Are we really supposed to take this seriously as something useful?

              My understanding is that fiction isn't a good basis for policy.

    3. Michael Hoffmann Silver badge

      The next bit

      "The next bit?"

      "Unless ordered to do so by duly constituted authority"

    4. Doctor Syntax Silver badge

      No plan survives first contact with reality. In this case reality needs a means of predicting how harm might be caused.

    5. The Indomitable Gall

      Yeah, let's tell the AI these laws and then it will end up force-feeding us three rocks daily, attached to a pizza with Elmer's glue, because it ends up believing that failure to do so would be causing harm to a human through inaction.....

  4. ecofeco Silver badge
    Gimp

    High stress?

    Usually a high stress job is not due to some magical inherent nature, but because someone very incompetent has more authority than you do. -------------------->>>>>>>>

    1. vtcodger Silver badge

      Re: High stress?

      In this case it's rather more the nature of the job. My guess is that Mr Altman has little interest in mitigating ChatGPT's behavior, assuming that is even possible, unless there is a buck or two to be made thereby. What he's probably looking for is a human shield to stop/deflect criticism of ChatGPT and to shoulder the blame if (more likely, when) it does something so outrageous that the media and politicians are crying for blood.

      1. Richard 12 Silver badge

        Re: High stress?

        Yes, the actual job is clearly "human shield", and they exist to be publicly fired next time OpenAI is shown to have caused significant harm.

  5. Pete 2 Silver badge

    Use or ornament?

    > the job is stressful, and the last few people tasked with it didn’t stick around.

    Which is what inevitably happens when a company's employees are not personally aligned with the stated policies. When what they are rewarded for doing comes into conflict with the supposed "vision".

    Doing the right thing is rarely profitable. And in the few cases where gain and morals (or laws) are compatible, there will always be smart-arses who think they can take short cuts.

    As a consequence, the role of enforcer frequently is just window dressing.

    1. Dan 55 Silver badge

      Re: Use or ornament?

      I bet they won't let him follow China's rules.

      1. Pete 2 Silver badge

        Re: Use or ornament?

        > won't let him follow China's rules.

        The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means.

        It would say something if China imposed a ban on American AIs because they were not up to the higher standards required to protect the public.

    2. amanfromMars 1 Silver badge

      Re: Use or ornament? And expanding upon that notion ...... for a really Happy New Year and AIge

      How’d you like to earn more than half a million dollars working for one of the world’s fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn’t stick around.

      Which is what inevitably happens when a company's employees are not personally aligned with the stated policies. When what they are rewarded for doing comes into conflict with the supposed "vision". ..... Pete 2

      Sam Altman [and anyone else also wanting to pioneer leadership in successful ChatGPT style fields of surreal remote manipulative human reprogramming] will have the additional burden and hurdle of company employees/executives having to be proactive and formative of new stated visionary policies which invariably inevitably will result in national and international state vested interests being conflicted and brought into dispute and disgraced. But that's where all the new fortunes are to be made when the present day ponzi and unicorn markets suddenly implode and catastrophically explode and spectacularly collapse.

      And just so you know, those sorts of leadership roles demand and command and deserve at least seven digit seven figured [7,777,777] or eight digit eight figured [88,888,888] or nine digit nine figured [999,999,999] sums of reward. Pay peanuts and you get monkeys and donkeys leading lions and wannabe brave hearteds down the rocky forked road of garden paths to nowhere worth going or staying .... such as are the present day ponzi and unicorn markets stress testing to the verge of spontaneous systems collapse the extreme limits of viable remote third party valuations ...... and methinks that, rather than anything specifically AI/LLM/AGI related, is the much more real and present danger existential threat to humanity to be fought against and remedied/vanquished and denied future remote, self-interested and crazily expensive leading influence ‽ .

      1. amanfromMars 1 Silver badge

        Re: Use or ornament? And expanding upon that notion ...... for a really Happy New Year and AIge

        And when you know all of that to be honestly true, and the millions and billions and trillions you’ve committed are doing their magical constantly regenerative thing, time to chill out and enjoy what’s employing and exploiting that dumb bullshit never-learning human thing ...... the virtual gift that forever keeps on giving everything for free to that and/or those engaged in ensuring its secret sources remain a jealously and zealously guarded secure top secret. I Kid U Not.

  6. Fruit and Nutcase Silver badge
    1. EricB123 Silver badge

      Re: 555

      Rather ironic that in Thailand, "555" stands for "ha, ha, ha".

  7. PhilipN Silver badge

    How about improving what we have already first?

    Old news, but seasonal visitors said yesterday that Google Maps would have sent them on a walk of more than an hour to reach us, instead of the 10 minutes we told them it would take. Trouble is, it's the same story back home where they live, in the heart of the metropolis.

    1. AbbyNormal109

      Re: How about improving what we have already first?

      I don't get it. Did they have it set to driving directions? Usually, when you get directions, don't you look at the route it's taking you, and if you see it going way out of the way, just follow the proper roads/sidewalks? I do this all the time, for walking or driving.

    2. Albert Coates
      Facepalm

      Re: How about improving what we have already first?

      Do they not own an A-Z? Or better still, an old Nicholson's, which showed the one-way streets, along with much smaller map squares, which made location a doddle. Further confirmation that most people can't read a map, let alone carry one in their head. <Ex-London despatch rider>

      1. brainwrong
        Joke

        Re: How about improving what we have already first?

        The purple pint-sized popster Prince owned an A-Z, that's how he was able to go down to Alphabet Street!

    3. Ian Johnston Silver badge

      Re: How about improving what we have already first?

      Google Maps often doesn't know about footpaths and routes pedestrians along roads. In Milton Keynes, for example, it has little or no idea about the Redway system of shared-use cycle/foot paths.

      1. zeigerpuppy

        Re: How about improving what we have already first?

        Try walking/cycling mode in OpenStreetMap (the OsmAnd app). It has very good cycle and walking route mapping (by shortest route, elevation change and surface type).

        Google Maps has always neglected anything apart from driving.

  8. Doctor Syntax Silver badge

    Impossible task - deshittifying the intrinsically shitty.

    1. M.V. Lipvig Silver badge

      It is not impossible to deshittify shit.

      <flush>

      See? Done and done.

      Oh wait, got a low flow?

      <flush><flush><flush>

      Only slightly harder.

  9. El.Mich.
    Alien

    So this will be the guy that has "to pull the plug" just in time => Link

    Remembering Terminator 2: Judgment Day and this quite memorable "conversation":

    <quotation>

    The Terminator: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

    Sarah Connor: Skynet fights back.

    ...

    </quotation>

    [found at https://www.imdb.com/title/tt0103064/quotes/ ]

    So I really do hope that this guy and his staff really do their utmost to fulfill their job description ...! ;-)

  10. nobody who matters Silver badge

    High Stress??

    My preferred way to keep it in line would be to pull the plug out. Any stress is likely to come from Altman after that, not from doing the job.

  11. martinusher Silver badge

    A Fall Guy?

    I think this salary should be quoted at a weekly or hourly rate since I don't expect the sucker that takes it to last very long.

    FWIW -- half a million US sounds like a lot of money, but it's really not that much for such a high-risk job.

  12. Ian Johnston Silver badge

    Marketing puff. He has to keep pushing the idea that "AI" is powerful and threatening to keep the wheels on for a bit longer.

  13. The Central Scrutinizer Silver badge

    It's really easy, actually

    Just kill chat gpt.

    rm -r

    I'll send my bank account details for the half million tomorrow.

  14. that one in the corner Silver badge

    Wanted: Helmsman for Titanic

    Must have extensive post-iceberg steering experience, preferably horizontal.

    Rope for lashing self to wheel supplied, 555 kilos. Escape artists should not apply.

    Contact Sam, care of the Leaky Schooner Inn.

  15. Bebu sa Ware Silver badge
    Facepalm

    "Head of Preparedness"

    As a job description it seems just plain daft. As a title, almost as silly as styling yourself the Duke of Hazard, Marquis of Mischief or Baron of Baloney.

    Head of Retrospective Deniability might be a role that the Head of Preparedness might plausibly wish to establish.

  16. Jonjonz

    Number one rule for alignment: keep the owners in control, i.e. don't replace them, at all costs.

  17. BartyFartsLast Silver badge

    Tech hires have morals?

    > the job is stressful, and the last few people tasked with it didn’t stick around.

    Half a million isn't nearly enough to compensate for the amount of shit that's going to land on the "successful" candidate's head when it all goes titsup

  18. Fruit and Nutcase Silver badge
    Thumb Up

    High Risk, High Turnover

    You need someone with the right credentials - won't be fazed by difficult decisions.

    From Cheese and Pork Markets to Capital Markets, there's only one candidate that shines above the rest and can go head to head with a lettuce.

    Think outside the (salad) box.

    Don't delay, make contact now! https://uk.linkedin.com/in/liz-truss

    Please send finder's fee/commission c/o The Register

