>"The OpenAI saga," said Toner, "shows that trying to do good and regulating yourself isn't enough."
When has "regulating yourself" ever worked? For a corporation, I mean. Honest question.
OpenAI's board only learned that ChatGPT had been launched after reading about it on Twitter, according to a former board member. Helen Toner, an AI researcher with an interest in the regulation of the technology, made the allegation on the TED AI Show podcast. Toner was a board member at the time, and departed shortly …
A startup is basically moving out of stealth and putting a product out in public for the first time, and you don't even bother to tell the board? Especially something that you'd know has a lot more potential for blowback than, say, releasing an early prototype of a video game.
The shareholders ought to have a say before something that big happens, or at the VERY least be told so they don't have to find out on Twitter.
It's got nothing at all to do with his bank account
Frankly, after reading quite a bit from various participants and commentators over the past couple of years, I don't think Altman is motivated primarily by money any more. I think he's motivated by his deep conviction that he's right about everything, and everyone else can follow him or fuck right off.
The non-disparagement scandal is the clincher. We have ample evidence that Altman was behind the ND and clawback clauses, and he's still claiming he didn't know anything about them and certainly wouldn't have allowed them if he had. He lies publicly, people point it out, he continues doing it.
"The unusual structure of OpenAI": now that's some understatement.
There is the OpenAI board, which is the board of OpenAI, Inc., the nonprofit OpenAI. The nonprofit wholly owns and controls the management company OpenAI GP LLC. OpenAI GP LLC controls a holding company owned by the nonprofit, employees, and investors. The holding company is the majority owner of OpenAI Global, LLC, the for-profit OpenAI, which, like the holding company, is controlled by OpenAI GP LLC.
Unusual does not do the structure justice.
See https://openai.com/our-structure for a flow chart of the structure.
I'm with Toner on this, and much impressed by her unparalleled courage as she heroically stands our common moral ground on AI safety, in the face of the $80 billion OpenAI steamroller, backed by the $3 trillion Microsoft juggernaut, similar to Tank Man Wang Weilin memorably staring down a whole lineup of the PRC's people-crushers in 1989 Tiananmen Square.
Nadella should be sure to urgently straitjacket that "deceptive and chaotic" *usual suspect* if he hasn't done so already IMHO (the evil one, who was removed from Y Combinator as well, before OpenAI). Lest our future be one of "frantic corner-cutting" that "[stokes] the flames of AI hype", onwards and through to a hellfire robot dog machine gun nonsense apocalypse of doom (or some-nearly-such)! (Quotes are from the Spotify episode and the linked "Decoding Intentions" report by Imbrie, Daniels, and Toner.)
Toner's the real world Sarah Connor!
No.
Someone just corrupted his training set and twisted his LLM weights to produce hallucinations.
I am surprised that anyone expects the utterances of these movers and shakers from any part of the technology sector to be in accord with any part of reality (drug-distorted or otherwise).
"disappointed that Ms Toner continues to revisit these issues"
If such a high-ranking person (Bret Taylor, chair of the OpenAI board) replies like that, he basically confirms the following:
Sam Altman chose not to inform the board (the nonprofit part that was created with the explicit intent to vet any and all technology produced by OpenAI for safety and safety protocols, AND which, if needed, had the power AND the duty to disband all of OpenAI for humankind's sake) before releasing a "rather impactful" technology (measured by its impact on machine-learning investment and market capitalisation) that at the very least started a gold rush (of which there is debate whether it will end as an empty bubble, become a disaster for humanity, or anything in between). For the board to be taken by surprise, Sam Altman likely ordered lower-ranking people to prepare ChatGPT's release under the strictest secrecy *towards the OpenAI safety board*, because otherwise it would probably have been impossible to catch the board so completely off guard.
That says something rather appalling about the safety culture, or better said the "I need to become a billionaire at any and all cost" culture, of OpenAI and its leader Sam Altman. It also indicates how much confidence Sam Altman had that the early release of ChatGPT would be approved by this "safety first" board of OpenAI's nonprofit arm: likely zero.