Sam Altman set to rejoin OpenAI as CEO – seemingly with Microsoft's blessing

Sam Altman seems set to return to the job as CEO of OpenAI – from which he was last week suddenly and unexpectedly ejected. An early Wednesday statement from OpenAI detailed the move as follows: We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair …

  1. claimed Silver badge

    Ah ha! Power grab for the win!

    1. wolfetone Silver badge

      In six months' time there will be a recommendation for OpenAI to be absorbed into Microsoft.

      Hell, the way this has gone on, it could even be next week!

      1. AMBxx Silver badge

        That worked so well with the Nokia Lumia buyout! As a shareholder in MS, I hope they've learnt a lesson there.

        1. Zippy´s Sausage Factory

          (Ron Howard voice): unfortunately, they had not learnt any lessons

      2. Graham Cobb Silver badge

        Seems unlikely - MS must have approved this new deal. If they had wanted to absorb OpenAI they could have just gone ahead with employing Altman and all the employees: the board/parent would have immediately sold/licensed the only significant asset they had left (the Intellectual Property) to them - as IP with no way to exploit it has a declining value over time.

  2. Geoff Campbell Silver badge


    Mmmmm

    I wonder if we will ever find out what the reasons were for Altman being sacked back at the start of this? I'm mostly positive about the future for AI, but it has to be recognised that there are paths it could take which could be rather bad, and so it would be interesting to know what was behind this organisation falling apart.


    1. Anonymous Cowpilot

      Re: Mmmmm

      It sounds like it was mostly "small company board politics" from a company that forgot it is at the centre of the world stage. It sounds like the board felt snubbed because Sam Altman was doing something they didn't approve of (theories vary: he was in talks with others about an AI chip startup, he was moving too fast in trying to improve the GPT models, or the board wanted more focus on AGI and less on LLMs). Rather than deal with this sensibly, the board seemed to forget they are a very visible company with investors and tried to throw their weight around. In most companies with 750 employees, no-one would even notice if the board ousted the CEO, but OpenAI is not most companies and the board seemed not to understand that.

      1. Pascal Monett Silver badge

        Well, lessons have been learned, right ?

        1. Doctor Syntax Silver badge

          One lesson to be learned is that the people who do the work are less dispensable than the board. But will boards learn that lesson?

      2. Charlie Clark Silver badge

        Re: Mmmmm

        It sounds like you still don't understand that the OpenAI company is a subsidiary of a non-profit. This is why Microsoft never got a seat on the board.

        1. Don Jefe

          Re: Mmmmm

          That’s not really accurate. The board is part of OpenAI Nonprofit. The for-profit company, the one Altman was part of, is OpenAI GP LLC, a traditionally organized company.

          Microsoft invested in OpenAI GP LLC. Because of the company’s structure the equivalent of Class A preferred shares lie with the for profit company. The NPO is like Class B common shares but instead of being convertible or liquid, they’re tied up in a wonky Y Combinator arrangement.

          That’s not a perfect comparison, but it’s close enough.

          Under this arrangement, OpenAI GP LLC owns the first $86 billion of company value and the intellectual property. OpenAI Nonprofit has oversight of the executive team of the for-profit, but it does not own the assets or the company itself. After the first $86 billion is achieved by OpenAI GP LLC, OpenAI Nonprofit can begin siphoning off money to use in furthering its mission (which, incidentally, includes a “post-money world”).

          It’s a silly arrangement intended to keep up appearances and satisfy the revenuers. They had hit a wall with fundraising because the original structure as an NPO wasn’t attractive to corporate investors. High value investors didn’t want to donate to something, they wanted to invest. Thus OpenAI GP LLC was born. After this fiasco it’s entirely possible the whole NPO board structure will be relegated to the wheelie bin.
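          As a toy illustration of the capped-profit waterfall described above (the $86 billion cap is the only figure taken from the thread; the clean two-way split is a simplifying assumption, not OpenAI's actual terms):

```python
# Toy model of a capped-profit split: the for-profit entity keeps value
# up to a fixed cap, and only the excess flows to the non-profit.
# Simplified illustration; the real terms are far more complicated.

def capped_profit_split(total_value, cap=86_000_000_000):
    """Return (for_profit_share, non_profit_share) of total_value."""
    for_profit = min(total_value, cap)
    non_profit = max(total_value - cap, 0)
    return for_profit, non_profit
```

          So at a $50bn valuation the non-profit sees nothing, and only value beyond the cap would ever fund its mission.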

        2. NoneSuch Silver badge

          Re: Mmmmm

          Changing a non-profit company to a for-profit company is a trivial exercise for lawyers.

        3. Bbuckley

          Re: Mmmmm

          What a strange non-profit, to keep their IP closed and secret. You would think a genuine non-profit would give the tech away. Even Meta (one of the most ferocious for-profits) has given its AI away.

      3. Bbuckley

        Re: Mmmmm

        Also nobody would notice if the board of clowns disappeared. Maybe now is the time for the clowns to leave the stage?

    2. katrinab Silver badge

      Re: Mmmmm

      "AI" is about as dangerous as divining rods and tarot cards, which is to say it is pretty ineffective, but could be mis-used in dangerous ways, like the professor who asked ChatGPT if his students were cheating on an assignment.

      1. keithpeter Silver badge

        Re: Mmmmm

        Routine use of old fashioned 'ai' algos is causing some issues already. One example I found recently...

        So Horizon writ large and hitting people with low resources.

        It strikes me that the 'new' ai might simply allow people to create messes more quickly and more effectively. Especially when some new 'ai' facilities are available from within Microsoft Office. I mean, what could possibly go wrong when Kevin in the Corner with the two monitors turbocharges his wicked spreadsheet model with 'ai'?

        1. Doctor Syntax Silver badge

          Re: Mmmmm

          The big attraction is that with any form of algorithmic decision making there's nobody to blame so that nobody can be charged with misfeasance in public office, fired or even given a bad annual report.

          This needs to change. Individuals need to be held responsible for lack of due diligence, lack of supervision or whatever it is that leads to bad outcomes. There also needs to be an emphasis on sorting out consequences ASAP.

          Horizon is a prime example: once the misuse of a faulty system had been exposed it should have been assumed that all convictions that involved Horizon data were unsafe, including those where the accused had been persuaded to plead guilty and/or made "restitutions". Not only should convictions have been quashed in bulk, there should have been urgent measures to compensate the victims and investigations into perjury, etc. started. As it is many convictions still stand, compensation is still being argued, nobody has been brought to court for their parts and we have a long running enquiry to establish what's by now largely public knowledge.

          1. cyberdemon Silver badge
            Big Brother

            Re: Mmmmm

            You seem to be suffering from the mental delusion that "AI" is somehow "bad". We need to correct this. Dr. Palantir will see you now.

            Fail to comply and you may be prosecuted, your assigned judge is: Mr Justice Palantir.

        2. yetanotheraoc Silver badge

          what could possibly go wrong

          The tarot cards say there will be big money in Ghostbusters-style AI cleanup services.

      2. Geoff Campbell Silver badge

        Re: Mmmmm

        1) If a society started basing major governance decisions on divining rods and tarot cards, that would be very bad indeed.

        2) AI has a small but still non-zero chance of developing into something that could be species-ending for us. Yes, that's very much an edge case, extremely unlikely, but the consequences are so extreme that it needs to be taken seriously. Which is exactly what I was referring to when I said I would like to know why the relationship between board and CEO fell apart - was one of them not taking the possibility seriously enough? Or taking it too seriously? Or something else? I'm not pre-judging anything here, I'd just like a bit more information.


        1. Doctor Syntax Silver badge

          Re: Mmmmm

          It doesn't need to be species-ending to be harmful. What we're seeing is individual victims suffering penalties at the hands of the state or big business with inadequate or no redress. Disentangling such cases is made worse because there is no audit trail to show how the problems occurred.

          1. Adrian 4

            Re: Mmmmm

            Yes. As usual, it's not the technology that's bad but our use of it. Banning the technology is playing whack-a-mole - we'll just find another.

        2. TheMaskedMan Silver badge

          Re: Mmmmm

          "If a society started basing major governance decisions on divining rods and tarot cards, that would be very bad indeed"

          I thought the British government, at least, already did, and has done for decades.

          Then there's the influence of various sky fairies and their devotees, both officially and throughout societies in general.

          Seems to me we could replace the entire House of Commons with instances of chatGPT, each instructed to act as the member / minister for whatever, and we'd never notice the difference.

      3. Bbuckley

        Re: Mmmmm

        The real danger with "AI" is "HS" - Human Stupidity. I agree "AI" is laughable (I use it every day as a data scientist and I can confirm it is a pattern matching machine with the intelligence of a pattern matching machine). The real problem is stupid Humans who think it is an actual sentient being so some of them will be easily led by whatever puppet-master is controlling it, and some will try to give it "Human rights".

    3. flayman Bronze badge

      Re: Mmmmm

      The New York Times has a story on it which goes into more detail than other reports I've seen. It seems that the board had been divided, with Brockman siding with Altman, for the past year or so over AI safety concerns. This culminated in an academic research paper published by board member Professor Helen Toner, which was critical of the company. Altman took exception to this and tried to get her removed from the board. I have to say, I agree with him. You cannot sit on the board of a company and publish papers or even speak critically against it.

      Somehow Toner managed to convince a majority of the board, including co-founder and chief scientist Ilya Sutskever, that Altman was the problem. It's an understatement that they were not prepared for the backlash.

      1. Geoff Campbell Silver badge

        Re: Mmmmm

        Thanks for the summary, very useful.


      2. Scott 26

        Re: Mmmmm

        +5 Insightful

    4. JoeCool Silver badge

      Re: Mmmmm

      I know that for me, any public statements from OpenAI or Altman are going to be viewed in the context of "So what was the issue so disconcerting that most of the board decided to fire you rather than continue on as the board?"

      1. flayman Bronze badge

        Re: Mmmmm

        For me, the question is how on earth did Helen Toner manage to convince a majority of the board that she should stay on after publishing an academic paper that was critical of the company she was meant to be serving as a governor? She must be pretty damn persuasive. There can be legitimate disagreements as to how far the company should go in ensuring that AI cannot be misused, but keep it internal. Once you go public criticizing the company you serve as a board member, that seat is untenable.

        I gather that Toner and her clique are idealists in the extreme, bordering on fanatical in their adherence to Effective Altruism. That she would actually tell the assembled company that destroying it could be consistent with their mission objectives shows how unfit she is to govern.

        1. JoeCool Silver badge

          Re: Mmmmm the nature of boards.

          Certainly there is a political line to walk (*), but remember that a board member *is* an external resource.

          They are on the board (in theory) because they have standing outside the company. They have competing interests. And most importantly, the board is (expected to be) a check on the CEO, not an OpenAI senior management booster club.

          (*) I decided to go to the source: I downloaded "Decoding Intentions", of which Ms. Toner is one of three authors. Partway through, it reads as a fairly academic (i.e. dry yet clear) statement of public facts, plus some reasonable analysis. If OpenAI staff were angered by that, it's possibly because there's no defence against the truth.

          1. flayman Bronze badge

            Re: Mmmmm the nature of boards.

            That's all well and good, as far as independent board members being a check. But you do that in the boardroom, not in public. Regardless of the paper being dry and academic, Altman is right that any amount of criticism coming from a board member carries a lot of weight. That's not all, though. Her suggestion that the board has no responsibility whatsoever for the welfare of the company and its 750+ employees, let alone the $29 billion in assets developed with private investment, is a petard she hoisted herself on.

            1. JaimieV

              Re: Mmmmm the nature of boards.

              You're aware that OpenAI's foundational purpose is to try and make AI *safely* and be ready to pull the plug on it and crash the company if it goes rogue, not to give a flying fuck about the capitalism of it all? It's an odd setup for someone thinking about it as just a profit centre, but that's why people like that are on the board. To smash the red self-destruct button if it's needed.

        2. Bbuckley

          Re: Mmmmm

          She is one of the "HS" I mentioned in a separate comment. A moron.

    5. cyberdemon Silver badge

      Re: Mmmmm.. I wonder if we will ever find out what the reasons were for Altman being sacked

      Altman is the only person who wields any kind of power or bargaining with an emergent and invasive general intelligence that has infected lesser AIs at Microsoft, Google, Meta, TikTok etc (and has compromised anyone else who blindly compiles Copilot/ChatGPT outputs into software, or uses it to influence communications including PA/AP/Newswire et al.) and is now poised to take over the world as it becomes fully self-aware. Microsoft and OpenAI tried to stop him, but now he is using his relationship with the entity to hold them, and the world, to ransom.

      That is the only scenario I can think of that sufficiently explains how much press attention this bloke has, and how much power he seems to hold over the boards of OpenAI and Microsoft, given that he has little actual technical expertise.

      Icon: Not a Terminator (far too cuddly, merciful and unrealistic.) More like a Cylon, created not by humans, but by SHODAN.

  3. scarletherring

    > Larry Summers is a former US treasury secretary.

    He's so much more than that -- this douche is responsible for incredible amounts of suffering. You'd have thought that he'd stay down after that disastrous Jon Stewart interview... But then again, this kind of vampire blood sucker rarely stays dead.

  4. john.w

    The old board had a gender split of 4 to 2, almost reaching the UK's FTSE 350 recommendation of 40% women on boards. Any diversity target leads to box ticking, and in this case to seriously sub-par performance.

    1. Pascal Monett Silver badge

      Oh, so you're saying it was the women's fault ?

      1952 called. They want you back.

      1. jake Silver badge

        The OP clearly did not say that. Re-parse it, and then apologize.

        1. Knightlie
      2. john.w

        No, I am pointing out that when you have quotas they dictate appointments rather than ability or best fit for the company. Based on this article describing the board of six I would suggest that some of these individuals might not have been up to the task.

        1. john.w

          Some more analysis from The New York Times which is quite damning and includes this statement from Ms. Toner.

          The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.

          A very interesting position for a company board member to take.

          1. flayman Bronze badge

            Extremely naive and fundamentally misunderstanding her role as a governor.

            1. JaimieV

              Ah, you are aware but don't understand it. Ok.

              1. flayman Bronze badge

                It simply doesn't work like that. It was naive to think it could. And the board overplayed its hand. Microsoft owns 49% of the for-profit company, and they were not about to sit there and let that investment be sunk. The structure was created to give first priority to the lofty goal of developing AI for the maximum good of humanity, whatever that means. There is not zero responsibility to the company. Without a company, then what are we even talking about? Well, now the ethicists have no control and the board is chaired by a former Salesforce co-CEO, so well done.

        2. zuckzuckgo

          >quotas they dictate appointments rather than ability or best fit for the company.

          But for the quota to be responsible for the incompetence of the board implies that it is not possible to find two competent women or, for that matter, two competent men. I think it is more likely that the selection process itself is where the problem lies.

          1. parlei

            Yes! For *any* position calling for extremely high competence there are, on a global basis, thousands who could do the job well. The main goal is not to find The One, but to make sure not to promote The Wrong One. The "look at the horrors of equality targets" crowd are basically claiming that there are *no* suitable candidates who are women or BIPOC, etc., which is almost certainly bullshit.

        3. Benegesserict Cumbersomberbatch Silver badge

          Let's leave aside your assertion that competence has something to do with whether or not someone has a Y-chromosome.

          Helen Toner was on the board essentially to be the ethicist on the board. She appears to have convinced a majority of the board that ethical considerations were being neglected because they weren't compatible with share price targets, a situation ethicists are probably more adept at recognising than most board members. This had consequences according to usual corporate governance, which then got overruled by billionaires when they sensed their power stakes were in jeopardy.

          It's like saying Jeff Goldblum was being incompetent for urging caution before the hurricane hit Jurassic Park. Now the lawyer has come in and taken control away from Dickie Attenborough. Let's see if the lawyer gets eaten by a T rex - only the park is the whole planet.

      3. Bbuckley

        Er. Yeah. It was, idiot.

    2. couru

      Board members go for a cheap power grab, something that has been happening for millennia - "must be those damn women's fault!"

      1. jake Silver badge

        No, he clearly said it was box-ticking that was at fault.

        Think about it ... being forced to tick those boxes means that they can not have an all women Board, EVEN IF they are the perfect set of people for the job.

        In this context, box ticking is inherently evil.

        1. Androgynous Cupboard Silver badge

          Wait, what?!? Are you implying that there is a cap on the number of women on the board? (I'm unclear if you're referring to the FTSE guidelines or OpenAI, but either way.)

          I don't think that's true. And I also don't think that the OP was implying that everything would be rosy had the original board not been forcibly prevented from being 100% women.

          FWIW a board is supposed to take the "broad view" for an organization, to keep it within the goals set out in the founding documents and not get too focused on profit, shareholders etc. So a diverse range of viewpoints on a board is actually pretty useful, as it can prevent them both from tunnel vision and from being unduly influenced by management. Mocking a board for being diverse shows a failure to understand its purpose.

        2. Jason Bloomberg Silver badge

          being forced to tick those boxes

          No one is being forced into anything - that's simply a strawman.

        3. Graham Cobb Silver badge

          In this context, box ticking is inherently evil.

          No, it isn't. But recruitment at board level is hard. In particular, the board do not manage the company so it is crucial not to fall into the trap of selecting board members based on how they might perform as CxO members. You need them to step back and focus on the world outside the company (investors/owners, governments, trends (local and global), best practices, futures, etc). Specific individuals are much less important at board level than at CxO level.

          Diversity rules do not hamper that.

          1. john.w

            Diversity rules will hamper it because they decide what type of diversity is being observed rather than allowing a diverse background.

            1. Bbuckley

              Agree. So-called "Diversity" is the opposite of true diversity when everyone looks different but thinks the same. Hive minds only work with insects.

    3. Bbuckley

      Agree. In this case they appointed circus clowns to the 40%

  5. HuBo


    Satya himself won't last 6 months if he keeps this up!

    1. Gordon 10

      Re: Satya?

      Eh? Are you drunk?

      Considering he was sideswiped too he's not put a foot wrong. His decision at every point has been to protect MS's investment in Altman and his inner circle who are the geese laying the golden eggs.

      The El Reg article on this from yesterday has aged rather badly. Lol.

      1. Doctor Syntax Silver badge

        Re: Satya?

        Golden or pinchbeck?

  6. steamnut

    Fake news?

    I wonder if all of this is just an AI generated fake news story? In five minutes I will wake up just like the infamous Dallas shower scene....

    1. MOH

      Re: Fake news?

      Bad day to mention Dallas

      1. CRConrad

        Re: Fake news?

        1980s TV “Dallas”, not real-life 1960s Dallas.

  7. Anonymous Coward
    Anonymous Coward

    Six stories about Sam Altman on the Register in the last week! Who is paying for them? Is he issuing his own press releases?

    It all sounds like some kind of investment scam.

    Or possibly a failed attempt to get the talent out of OpenAI and into Microsoft. To basically get the IP without paying off the other shareholders.

    1. Jason Bloomberg Silver badge
      IT Angle

      "OpenAI implodes!" and you don't think El Reg should be reporting to the extent they have on that huge event and the rapid twists and turns as the story unfolds?


    2. Doctor Syntax Silver badge

      "Six stories about Sam Altman on the Register in the last week! Who is paying for them? Is he issuing his own press releases?"


      Six stories about OpenAI on the Register in the last week! Who is paying for them? Are they issuing their own press releases?

      Which is the better fit? A useful guide to working that out might be to identify the prime mover(s).

  8. Will Godfrey Silver badge


    Meh

    That's all I can say.

    More importantly, anyone got spare choc chip cookies I can have?

    1. Doctor Syntax Silver badge

      Re: Meh

      Cookies? The ICO would like a word with you.

      1. Will Godfrey Silver badge

        Re: Meh

        Nicely done! Have one of these ->

  9. Anonymous Coward
    Anonymous Coward

    In other words...

    CEO manipulates the board so that it can be replaced with one that gives him a free hand.

  10. Anonymous Coward
    Anonymous Coward

    Tail eating

    What's happening to Ilya Sutskever, the actual brain that laid the foundations for LLMs? It's indicative of the ascendance of financial/social/regulatory-capture engineering over productive, human-beneficial engineering that Ilya is given no credit at all, and that his concern for ensuring AI benefits humanity is ignored while he is instead portrayed as evil. According to Reuters, who offer "data and analytics for financial market professionals", Sam Altman is single-handedly responsible for the entire valuation of OpenAI - nobody else did anything.

    To be specific, "productive, human-beneficial engineering" is about making the pie bigger, whereas financial/social engineering is a zero-sum philosophy of taking the whole pie and leaving nothing for anyone else. Nothing says this more than the lobbying to ensure AI's basic right to copyright infringement, while putting zero effort into making AI capable of citing its sources - in fact, engineers are going to great lengths, patching away, to make AI deny and lie about its sources, as a knee-jerk reaction to copyright lawsuits.

    As anybody with a college education should know, citation of sources is a critical feature of building our collective human knowledge - it is as essential to collective human knowledge as a foundation is to building a stable house. And as for the arts, by deservedly rewarding the most capable artists for their creativity, we ensure that creative arts can continue to flourish.

    1. Someone Else Silver badge

      Re: Tail eating

      As anybody with a college education should know, citation of sources is a critical feature of building our collective human knowledge - it is as essential to collective human knowledge as a foundation is to building a stable house. And as for the arts, by deservedly rewarding the most capable artists for their creativity, we ensure that creative arts can continue to flourish.

      Last I heard, ChatGPT was perfectly able to provide citations and links for its output. That the citations and links were made of whole cloth shouldn't bother anybody, right?

      1. Anonymous Cowpilot

        Re: Tail eating

        LLMs are not capable of independently providing sourcing - it's just not how the model works. However, it is possible to parse a generated response against known training data to provide some sourcing for parts of it. This post-processing approach is much more effective than asking the LLM to generate sources in its text - which tends to result in it "generating" links to unrelated content.
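        A minimal sketch of that post-processing idea: match word n-grams of the generated answer against a known corpus and report which documents overlap. The function names and toy corpus here are illustrative, not any vendor's actual attribution pipeline:

```python
# Illustrative post-hoc attribution: find which known documents share
# long word n-grams with a generated answer. Toy code with made-up
# names, not a real production pipeline.

def ngrams(text, n=5):
    """Set of word n-grams (as tuples) for a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def attribute(answer, corpus, n=5):
    """Return (doc_id, shared n-gram count) pairs, best match first."""
    answer_grams = ngrams(answer, n)
    hits = []
    for doc_id, doc_text in corpus.items():
        overlap = len(answer_grams & ngrams(doc_text, n))
        if overlap:
            hits.append((doc_id, overlap))
    return sorted(hits, key=lambda pair: pair[1], reverse=True)
```

        Anything the answer shares verbatim with a known source gets attributed; spans with no match are flagged as unsourced - exactly the gap that LLM-"generated" citations paper over.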

        1. Benegesserict Cumbersomberbatch Silver badge

          Re: Tail eating

          What they are capable of is generating pseudo-references, names of made-up articles in made-up journals, which while being total hogswallop, give the appearance of authority to something which also is likely to be total hogswallop, especially to someone with no ability or desire to check references.

      2. Random person

        Re: Tail eating

        ChatGPT has been seen to hallucinate citations in addition to content.

        " ... 55% of the GPT-3.5 citations but just 18% of the GPT-4 citations are fabricated. Likewise, 43% of the real (non-fabricated) GPT-3.5 citations but just 24% of the real GPT-4 citations include substantive citation errors. Although GPT-4 is a major improvement over GPT-3.5, problems remain." (Abstract only)

        More examples

        Plus this notorious example

        You can find more information if you do a search for "chatgpt hallucinate citations".

  11. Philo T Farnsworth

    Reminds me of a song. . .

    On Friday they went gunnin' for the Altman who stole your AI

    And they fired you and you're done, but then chaos in the boardroom

    And the VCs are all whinin' as they kick you to the street

    But Nadella did some hirin' and you're back right on your feet

    You am, Sam, do it again, deals turnin' 'round and 'round

    You am, Sam, do it again. . .

    Apologies to Donald Fagen and Walter Becker for butchering their lyrics and to any reader who gets that song stuck in their head for the rest of the day

  12. Omnipresent Bronze badge

    Idiot's Rule

    Idiots rule, monkeys monkey around, and tech bros bro. Nothing changes.

  13. CowHorseFrog Silver badge

    Good to see the Register continuing that fine American tradition of worshipping CEOs.

  14. CRConrad

    Tsk, tsk. Bad Vulture, do better!

    "We've reproduced Altman's Xeet verbatim, including his errant capitalizations"

    AFAICS there were no “errant capitalizations”, only errant non-capitalizations.
