Ah ha! Power grab for the win!
Sam Altman set to rejoin OpenAI as CEO – seemingly with Microsoft's blessing
Sam Altman seems set to return to the job as CEO of OpenAI – from which he was last week suddenly and unexpectedly ejected. An early Wednesday statement from OpenAI detailed the move as follows: We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair …
COMMENTS
-
-
-
Wednesday 22nd November 2023 13:16 GMT Graham Cobb
Seems unlikely - MS must have approved this new deal. If they had wanted to absorb OpenAI they could have just gone ahead with employing Altman and all the employees: the board/parent would have immediately sold/licensed the only significant asset they had left (the Intellectual Property) to them - as IP with no way to exploit it has a declining value over time.
-
-
Wednesday 22nd November 2023 08:04 GMT Geoff Campbell
Mmmmm
I wonder if we will ever find out what the reasons were for Altman being sacked back at the start of this? I'm mostly positive about the future for AI, but it has to be recognised that there are paths it could take which could be rather bad, and so it would be interesting to know what was behind this organisation falling apart.
GJC
-
Wednesday 22nd November 2023 08:14 GMT Anonymous Cowpilot
Re: Mmmmm
It sounds like it was mostly "small company board politics" from a company that forgot it now sits at the centre of the world stage. It sounds like the board felt snubbed because Sam Altman was doing something they didn't approve of (theories vary: he was in talks with others about an AI chip startup, or he was moving too fast in trying to improve the GPT models, or the board wanted more focus on AGI and less on LLMs). Rather than deal with this sensibly, the board seemed to forget they are a very visible company with investors and tried to throw their weight around. In most companies with 750 employees, no-one would even notice if the board ousted the CEO, but OpenAI is not most companies and the board seemed not to understand that.
-
-
Wednesday 22nd November 2023 13:59 GMT Don Jefe
Re: Mmmmm
That’s not really accurate. The board is part of OpenAI Nonprofit. The for profit company, that Altman was part of, is OpenAI GP LLC., a traditionally organized company.
Microsoft invested in OpenAI GP LLC. Because of the company’s structure the equivalent of Class A preferred shares lie with the for profit company. The NPO is like Class B common shares but instead of being convertible or liquid, they’re tied up in a wonky Y Combinator arrangement.
That’s not a perfect comparison, but it’s close enough.
Under this arrangement, OpenAI GP LLC owns the first $86 billion of company value and the intellectual property. OpenAI Nonprofit has oversight of the executive team of the for profit, but it does not own the assets or company itself. After the first $86 billion is achieved by OpenAI GP LLC, OpenAI Nonprofit can begin siphoning off money to use in furthering its mission (which, incidentally, includes a "post-money world").
It’s a silly arrangement intended to keep up appearances and satisfy the revenuers. They had hit a wall with fundraising because the original structure as an NPO wasn’t attractive to corporate investors. High value investors didn’t want to donate to something, they wanted to invest. Thus OpenAI GP LLC was born. After this fiasco it’s entirely possible the whole NPO board structure will be relegated to the wheelie bin.
-
-
-
Wednesday 22nd November 2023 11:35 GMT keithpeter
Re: Mmmmm
Routine use of old fashioned 'ai' algos is causing some issues already. One example I found recently...
https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/#
So Horizon writ large and hitting people with low resources.
It strikes me that the 'new' ai might simply allow people to create messes more quickly and more effectively. Especially when some new 'ai' facilities are available from within Microsoft Office. I mean, what could possibly go wrong when Kevin in the Corner with the two monitors turbocharges his wicked spreadsheet model with 'ai'?
-
Wednesday 22nd November 2023 13:14 GMT Doctor Syntax
Re: Mmmmm
The big attraction is that with any form of algorithmic decision making there's nobody to blame so that nobody can be charged with misfeasance in public office, fired or even given a bad annual report.
This needs to change. Individuals need to be held responsible for lack of due diligence, lack of supervision or whatever it is that leads to bad outcomes. There also needs to be an emphasis on sorting out consequences ASAP.
Horizon is a prime example: once the misuse of a faulty system had been exposed it should have been assumed that all convictions that involved Horizon data were unsafe, including those where the accused had been persuaded to plead guilty and/or made "restitutions". Not only should convictions have been quashed in bulk, there should have been urgent measures to compensate the victims and investigations into perjury, etc. started. As it is many convictions still stand, compensation is still being argued, nobody has been brought to court for their parts and we have a long running enquiry to establish what's by now largely public knowledge.
-
-
Wednesday 22nd November 2023 12:59 GMT Geoff Campbell
Re: Mmmmm
1) If a society started basing major governance decisions on divining rods and tarot cards, that would be very bad indeed.
2) AI has a small but still non-zero chance of developing into something that could be species-ending for us. Yes, that's very much an edge case, extremely unlikely, but the consequences are so extreme that it needs to be taken seriously. Which is exactly what I was referring to when I said I would like to know why the relationship between board and CEO fell apart - was one of them not taking the possibility seriously enough? Or taking it too seriously? Or something else? I'm not pre-judging anything here, I'd just like a bit more information.
GJC
-
Wednesday 22nd November 2023 13:18 GMT Doctor Syntax
Re: Mmmmm
It doesn't need to be species-ending to be harmful. What we're seeing is individual victims suffering penalties at the hands of the state or big business with inadequate or no redress. Disentangling such cases is made worse because there is no audit trail to show how the problems occurred.
-
Wednesday 22nd November 2023 17:53 GMT TheMaskedMan
Re: Mmmmm
"If a society started basing major governance decisions on divining rods and tarot cards, that would be very bad indeed"
I thought the British government, at least, already did, and has done for decades.
Then there's the influence of various sky fairies and their devotees, both officially and throughout societies in general.
Seems to me we could replace the entire House of Commons with instances of ChatGPT, each instructed to act as the member / minister for whatever, and we'd never notice the difference.
-
-
Friday 24th November 2023 14:31 GMT Bbuckley
Re: Mmmmm
The real danger with "AI" is "HS" - Human Stupidity. I agree "AI" is laughable (I use it every day as a data scientist and I can confirm it is a pattern matching machine with the intelligence of a pattern matching machine). The real problem is stupid Humans who think it is an actual sentient being so some of them will be easily led by whatever puppet-master is controlling it, and some will try to give it "Human rights".
-
-
Wednesday 22nd November 2023 13:54 GMT flayman
Re: Mmmmm
The New York Times has a story on it which goes into more detail than other reports I've seen. It seems that the board has been divided, with Brockman siding with Altman, for the past year or so over AI safety concerns. This culminated in an academic research paper published by board member Professor Helen Toner, which was critical of the company. Altman took exception to this and tried to get her removed from the board. I have to say, I agree with him. You cannot sit on the board of a company and publish papers or even speak critically against it.
Somehow Toner managed to convince a majority of the board, including co-founder and chief scientist Ilya Sutskever, that Altman was the problem. It's an understatement that they were not prepared for the backlash.
https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html
-
-
Thursday 23rd November 2023 11:52 GMT flayman
Re: Mmmmm
For me, the question is how on earth did Helen Toner manage to convince a majority of the board that she should stay on after publishing an academic paper that was critical of the company she was meant to be serving as a governor? She must be pretty damn persuasive. There can be legitimate disagreements as to how far the company should go in ensuring that AI cannot be misused, but keep it internal. Once you go public criticizing the company you serve as a board member, that seat is untenable.
I gather that Toner and her clique are idealists in the extreme, bordering on fanatical in their adherence to Effective Altruism. That she would actually tell the assembled company that it could be aligned with their mission objectives if the company were destroyed shows how unfit she is to govern.
-
Thursday 23rd November 2023 18:21 GMT JoeCool
Re: Mmmmm the nature of boards.
Certainly there is a political line to walk (*), but remember that a board member *is* an external resource.
They are on the board (in theory) because they have standing outside the company. They have competing interests. And most importantly, the board is (expected to be) a check on the CEO , not an OpenAI senior management booster club.
(*) I decided to use the source. I downloaded "Decoding Intentions", of which Ms. Toner is one of three authors. Partway through, it's a fairly academic (i.e. dry yet clear) statement of public facts, and some reasonable analysis. If OpenAI staff were angered by that, it's possibly because there's no defence against the truth.
-
Friday 24th November 2023 08:56 GMT flayman
Re: Mmmmm the nature of boards.
That's all well and good, as far as independent board members being a check goes. But you do that in the boardroom, not in public. Regardless of the paper being dry and academic, Altman is right that any amount of criticism coming from a board member carries a lot of weight. That's not all though. Her suggestion that the board has no responsibility whatsoever to the welfare of the company and its 750+ employees, let alone the 29 billion dollars of assets developed with private investment, is a petard that she hoisted herself on.
-
Saturday 25th November 2023 19:34 GMT JaimieV
Re: Mmmmm the nature of boards.
You're aware that OpenAI's foundational purpose is to try and make AI *safely* and be ready to pull the plug on it and crash the company if it goes rogue, not to give a flying fuck about the capitalism of it all? It's an odd setup for someone thinking about it as just a profit centre, but that's why people like that are on the board. To smash the red self-destruct button if it's needed.
-
-
-
-
-
Thursday 23rd November 2023 09:56 GMT cyberdemon
Re: Mmmmm.. I wonder if we will ever find out what the reasons were for Altman being sacked
Altman is the only person who wields any kind of power or bargaining with an emergent and invasive general intelligence that has infected lesser AIs at Microsoft, Google, Meta, TikTok etc (and has compromised anyone else who blindly compiles Copilot/ChatGPT outputs into software, or uses it to influence communications including PA/AP/Newswire et al.) and is now poised to take over the world as it becomes fully self-aware. Microsoft and OpenAI tried to stop him, but now he is using his relationship with the entity to hold them, and the world, to ransom.
That is the only scenario I can think of that sufficiently explains how much press attention this bloke has, and how much power he seems to hold over the boards of OpenAI and Microsoft, given that he has little actual technical expertise.
Icon: Not a Terminator (far too cuddly, merciful and unrealistic.) More like a Cylon, created not by humans, but by SHODAN.
-
-
Wednesday 22nd November 2023 08:33 GMT scarletherring
> Larry Summers is a former US treasury secretary.
He's so much more than that -- this douche is responsible for incredible amounts of suffering. You'd have thought that he'd stay down after that disastrous Jon Stewart interview... But then again, this kind of vampire blood sucker rarely stays dead.
-
-
-
-
Wednesday 22nd November 2023 15:07 GMT john.w
No, I am pointing out that when you have quotas they dictate appointments rather than ability or best fit for the company. Based on this article describing the board of six I would suggest that some of these individuals might not have been up to the task.
https://www.cnbc.com/2023/11/18/heres-whos-on-openais-board-the-group-behind-sam-altmans-ouster.html
-
Wednesday 22nd November 2023 15:29 GMT john.w
Some more analysis from The New York Times which is quite damning and includes this statement from Ms. Toner.
The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.
A very interesting position for a company board member to take.
https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html
-
-
-
Monday 27th November 2023 08:54 GMT flayman
It simply doesn't work like that. It was naive to think it could. And the board overplayed its hand. Microsoft owns 49% of the for profit company, and they were not about to sit there and let that investment be sunk. The structure was created to give first priority to the lofty goal of developing AI for the maximum good of humanity, whatever that means. There is not zero responsibility to the company. Without a company, then what are we even talking about? Well, now the ethicists have no control and the board is run by the CEO of Salesforce, so well done.
-
-
-
-
Wednesday 22nd November 2023 17:24 GMT zuckzuckgo
>quotas they dictate appointments rather than ability or best fit for the company.
But for the quota to be responsible for the incompetence of the board implies that it is not possible to find 2 competent women, or for that matter 2 competent men. I think it is more likely that the problem lies in the selection process itself.
-
Thursday 23rd November 2023 07:17 GMT parlei
Yes! For *any* position calling for extremely high competence there are, on a global basis, thousands who could do the job well. The main goal is not to find The One, but to make sure not to promote The Wrong One. The "look at the horrors of equality-targets" crowd are basically claiming that there are *no* suitable candidates who are women, BIPOC, etc., which is almost certainly bullshit.
-
-
Saturday 25th November 2023 21:32 GMT Benegesserict Cumbersomberbatch
Let's leave aside your assertion that competence has something to do with whether or not someone has a Y-chromosome.
Helen Toner was on the board essentially to be the ethicist on the board. She appears to have convinced a majority of the board that ethical considerations were being neglected because they weren't compatible with share price targets, a situation ethicists are probably more adept at recognising than most board members. This had consequences according to usual corporate governance, which then got overruled by billionaires when they sensed their power stakes were in jeopardy.
It's like saying Jeff Goldblum was being incompetent for urging caution before the hurricane hit Jurassic Park. Now the lawyer has come in and taken control away from Dickie Attenborough. Let's see if the lawyer gets eaten by a T rex - only the park is the whole planet.
-
-
-
-
-
Wednesday 22nd November 2023 12:55 GMT Androgynous Cupboard
Wait, what?!? Are you implying that there is a cap on the number of women on the board? (I'm unclear if you're referring to FTSE guidelines or OpenAI, but either way.)
I don't think that's true. And I also don't think that the OP was implying that everything would be rosy had the original board not been forcibly prevented from being 100% women.
FWIW a board is supposed to take the "broad view" for an organization, to keep it within the goals set out in the founding documents and not get too focused on profit, shareholders etc. So a diverse range of viewpoints on a board is actually pretty useful, as it can prevent them both from tunnel vision and from being unduly influenced by management. Mocking a board for being diverse shows a failure to understand its purpose.
-
Wednesday 22nd November 2023 13:34 GMT Graham Cobb
In this context, box ticking is inherently evil.
No, it isn't. But recruitment at board level is hard. In particular, the board do not manage the company so it is crucial not to fall into the trap of selecting board members based on how they might perform as CxO members. You need them to step back and focus on the world outside the company (investors/owners, governments, trends (local and global), best practices, futures, etc). Specific individuals are much less important at board level than at CxO level.
Diversity rules do not hamper that.
-
-
-
-
-
Wednesday 22nd November 2023 12:26 GMT Gordon 10
Re: Satya?
Eh? Are you drunk?
Considering he was sideswiped too he's not put a foot wrong. His decision at every point has been to protect MS's investment in Altman and his inner circle who are the geese laying the golden eggs.
The El Reg article on this from yesterday has aged rather badly. Lol.
-
-
Wednesday 22nd November 2023 10:40 GMT Anonymous Coward
Six stories about Sam Altman on the Register in the last week! Who is paying for them? Is he issuing his own press releases?
It all sounds like some kind of investment scam.
Or possibly a failed attempt to get the talent out of OpenAI and into Microsoft. To basically get the IP without paying off the other shareholders.
-
Wednesday 22nd November 2023 13:22 GMT Doctor Syntax
"Six stories about Sam Altman on the Register in the last week! Who is paying for them? Is he issuing his own press releases?"
Alternately:
Six stories about OpenAI on the Register in the last week! Who is paying for them? Are they issuing their own press releases?
Which is the better fit? A useful guide to working that out might be to identify the prime mover(s).
-
Wednesday 22nd November 2023 14:21 GMT Anonymous Coward
Tail eating
What's happening to Ilya Sutskever, the actual brain that laid the foundations for LLMs? It's indicative of the ascendance of financial/social/regulatory-capture engineering over productive human beneficial engineering that Ilya is given no credit at all, and his concern for ensuring AI benefits humanity is ignored while instead he is portrayed as evil. According to Reuters, who offer "data and analytics for financial market professionals", Sam Altman is single-handedly responsible for the entire valuation of OpenAI - nobody else did anything.
To be specific, "productive human beneficial engineering" is about making the pie bigger, whereas financial/social engineering is a zero-sum philosophy of taking the whole pie and leaving nothing for anyone else. Nothing says this more than the lobbying to ensure AI's basic right to copyright infringement, while putting zero effort into making AI capable of citing its sources - in fact, engineers are going to great lengths patching away to make AI deny and lie about its sources, as a knee-jerk reaction to copyright lawsuits.
As anybody with a college education should know, citation of sources is a critical feature of building our collective human knowledge - it is as essential to collective human knowledge as a foundation is to building a stable house. And as for the arts, by deservedly rewarding the most capable artists for their creativity, we ensure that creative arts can continue to flourish.
-
Wednesday 22nd November 2023 15:38 GMT Someone Else
Re: Tail eating
As anybody with a college education should know, citation of sources is a critical feature of building our collective human knowledge - it is as essential to collective human knowledge as a foundation is to building a stable house. And as for the arts, by deservedly rewarding the most capable artists for their creativity, we ensure that creative arts can continue to flourish.
Last I heard, ChatGPT was perfectly capable and able to provide citations, and links for its output. That the citations and links were made of whole cloth shouldn't bother anybody, right?
-
Wednesday 22nd November 2023 19:49 GMT Anonymous Cowpilot
Re: Tail eating
LLMs are not capable of independently providing sourcing - it's just not how the model works. However, it is possible to parse a generated response against known training data to provide some sourcing for parts of it. The post-processing approach is much more effective than asking the LLM to generate sources in its text - which tends to result in it "generating" links to unrelated content.
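To illustrate the post-processing idea the poster describes (and not any vendor's actual implementation), here is a toy Python sketch: instead of asking the model for citations, match n-grams from the generated text against a corpus of known documents and rank the overlaps. The names `ngrams`, `attribute`, and the sample corpus are all hypothetical.

```python
# Toy post-hoc source attribution: score each known document by how many
# word n-grams it shares with the generated response. Real systems use far
# more robust retrieval/matching, but the principle is the same.
from collections import defaultdict


def ngrams(text, n=5):
    """Return the set of word n-grams in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def attribute(response, corpus, n=5):
    """Rank corpus documents by n-gram overlap with the response."""
    resp_grams = ngrams(response, n)
    scores = defaultdict(int)
    for doc_id, doc_text in corpus.items():
        scores[doc_id] = len(resp_grams & ngrams(doc_text, n))
    return sorted(scores.items(), key=lambda kv: -kv[1])


corpus = {
    "doc_a": "the quick brown fox jumps over the lazy dog near the river bank",
    "doc_b": "stock markets rallied today as investors digested the earnings report",
}
response = "we saw the quick brown fox jumps over the lazy dog yesterday"
print(attribute(response, corpus))  # doc_a ranks first
```

The key point matches the comment: attribution happens *after* generation, by comparing output against known text, rather than trusting the model to name its own sources.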
-
Saturday 25th November 2023 03:59 GMT Benegesserict Cumbersomberbatch
Re: Tail eating
What they are capable of is generating pseudo-references, names of made-up articles in made-up journals, which while being total hogswallop, give the appearance of authority to something which also is likely to be total hogswallop, especially to someone with no ability or desire to check references.
-
-
Thursday 23rd November 2023 10:14 GMT Random person
Re: Tail eating
ChatGPT has been seen to hallucinate citations in addition to content.
" ... 55% of the GPT-3.5 citations but just 18% of the GPT-4 citations are fabricated. Likewise, 43% of the real (non-fabricated) GPT-3.5 citations but just 24% of the real GPT-4 citations include substantive citation errors. Although GPT-4 is a major improvement over GPT-3.5, problems remain."
https://www.nature.com/articles/s41598-023-41032-5 (Abstract only)
More examples
https://news.ycombinator.com/item?id=33841672
https://www.linkedin.com/pulse/i-asked-chatgpt-provide-citations-hallucinated-its-sources-street
https://www.amjmed.com/article/S0002-9343(23)00401-1/fulltext
Plus this notorious example https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt
You can find more information if you do a search for "chatgpt hallucinate citations".
-
-
-
Wednesday 22nd November 2023 15:37 GMT Philo T Farnsworth
Reminds me of a song. . .
On Friday they went gunnin' for the Altman who stole your AI
And they fired you and you're done, but then chaos in the boardroom
And the VCs are all whinin' as they kick you to the street
But Nadella did some hirin' and you're back right on your feet
You am, Sam, do it again, deals turnin' 'round and 'round
You am, Sam, do it again. . .
Apologies to Donald Fagen and Walter Becker for butchering their lyrics and to any reader who gets that song stuck in their head for the rest of the day