
OpenAI may be an upstart, but I think "startup" was intended in para 5. Really, the whole sentence probably wants rewriting.
OpenAI's board of directors just fired CEO Sam Altman for not being "consistently candid in his communications." CTO Mira Murati has been appointed as the interim CEO to lead the lab in the meantime as the board finds a new boss. In a statement Friday, the board of directors said: "OpenAI was deliberately structured to …
OpenAI may be an upstart, but I think "startup" was intended in para 5. Really, the whole sentence probably wants rewriting. .... laurence brothers
Par for the course, laurence brothers, ......The Register [vulture icon] Biting the hand that feeds IT :-) And whenever OpenAI presents one with at least four valid alternative descriptor possibilities ...... upstart/startup/upstart startup/startup upstart ...... well, being pedantic is not really helpful whenever it doesn’t actually matter in the great schema of Internetworking Things.
1) Even more loss-making than we thought?
2) Even bigger illegal data grab than we thought?
3) There is more truth to the analysis that GPT4 performs worse than GPT3.5 than we thought?
4) Sam Altman knew all along that transformers, the core of LLMs, are fundamentally incapable of ever becoming intelligent, as Google research revealed this week?
5) Something more mundane such as lying about his actual remuneration?
6) An office affair?
Is this the article and research about 'revelation' 4?
* Google researchers deal a major blow to the theory AI is about to outsmart humans (at businessinsider.com)
* Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models (at Arxiv)
Erm, didn't he publicly say that AI is a threat to humanity? .... cyberdemon
Erm, does anyone doubt it and not have very comprehensive plans to profit and prosper from one's leading treatments for those threatened by it?
This is where the problems arise...OpenAI was founded with one of its missions being to "ensure that artificial general intelligence benefits all humanity"...therefore highlighting the possible threats and suggesting how to mitigate them is in line with their mission.
We may see in the coming weeks that Altman was operating in ways that are counter to this mission...or that the board wants to take the organisation in a direction that is counter to it.
Of course we could be looking at the usual "hire, exploit, expunge" method that many investors use to ensure "their" IP never becomes too closely associated with the people who had a hand in creating it...it can be annoying for investors when a person or people become synonymous with a brand...for example, many people out there still think Bill Gates runs Microsoft and that Steve Jobs built Apple single-handedly.
Doctor Syntax, Hi,
Does that report in Time you supplied .... https://time.com/6300522/worldcoin-sam-altman/ ..... which revealed iris scan payments from a company/entity/organisation in a crypto-currency created by that company/entity/organisation mirror something extremely similar to practices which led to Sam Bankman-Fried being found guilty and convicted on seven counts of fraud and now patiently anxiously awaiting sentencing to umpteen years of jail time?
Or would it be completely different?
...but the organisation is largely a not-for-profit. There is a profit-making arm, but it's not the one bearing most of the expense as far as I can tell. OpenAI...the lab...is what produces the models etc...and OpenAI the ChatGPT "lol it made a funny story about Robocop performing brain surgery on a house plant with a packet of sausages, ok time to get back to cheating on my Uni coursework" arm is the for-profit side.
> 1) Even more loss-making than we thought?
Certainly possible.
> 2) Even bigger illegal data grab than we thought?
I can't see that being the case. They are clearly leaning on the argument that training is equivalent to data-mining, thereby giving them that fair-use carve-out. And there is no feasible data grab bigger than what they've already publicly acknowledged - they have said they trained on the entire internet.
> 3) There is more truth to the analysis that GPT4 performs worse than GPT3.5 than we thought?
That's definitely no longer true with GPT-4 Turbo. It's pretty damn capable.
> 4) Sam Altman knew all along that transformers, the core of LLMs, are fundamentally incapable of ever becoming intelligent, as Google research revealed this week?
I don't see that being a major issue: you bypass that limitation by linking LLMs with different approaches, and there really isn't that much preventing OpenAI from doing so.
> 5) Something more mundane such as lying about his actual remuneration?
Certainly possible.
> 6) An office affair?
Possible, but would it have been phrased like that?
I can't help but notice how the board talk in the statement about protecting the company's mission. I wonder whether there was a severe undeclared conflict of interest.
8) ChatGPT is just the result of a secret information treaty where we in the West provide guidance on which pictures have buses, bikes and bridges in them in return for blocks of code written by people in the East...because we can't code and they can't work out what the fuck is going on around them on the streets.
AI is already pretty saturated to be fair. The open source scene is massive and catching up to OpenAI fast...what isn't catching up fast though is the hardware to run it all. Models like Falcon 180B require heinous amounts of resource to run effectively.
We're entering an era where having lots of RAM is actually going to matter...you need 400GB of RAM to run Falcon 180B. Achievable for businesses / homelabbers, but still well out of reach for the average Joe.
I have no idea how much RAM is required to run GPT-3.5...but we'll find out soon enough if they stand by their pledge to open-source it.
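As a rough back-of-the-envelope sketch of where that 400GB figure comes from (assuming half-precision weights at 2 bytes per parameter; the KV cache and runtime overhead push the real number higher still):

```c
/* Rough sketch: RAM needed just to hold an LLM's weights in memory,
   assuming fp16/bf16 (2 bytes per parameter). KV cache, activations
   and runtime overhead all add to this figure. */
#include <stdio.h>

int main(void) {
    const double params = 180e9;        /* Falcon 180B parameter count */
    const double bytes_per_param = 2.0; /* fp16/bf16 */
    double weight_bytes = params * bytes_per_param;
    printf("Weights alone: ~%.0f GB\n", weight_bytes / 1e9); /* ~360 GB */
    return 0;
}
```

Which is also why the open-source scene leans so hard on quantisation: squeeze the same weights down to 4 bits each and the model fits in roughly a quarter of the memory, at some cost in output quality.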
Probably 1-4 are all part of it, but I suspect whatever it is we won't find out for a while. Possibly not unless the company itself goes belly up, and if that happens I suspect the two things will turn out to be directly linked in some way.
Telling porkies to your customers should be punishable...but telling lies to your board? Surely that's just par for the course when doing business with a group of self-appointed "donors" that have no idea what they're talking about and no idea how the day-to-day operations function?
Pretty much every client I've had that is run by a "board" is run badly...usually because the board is made up of "experts" not "business people"...so they can tell you everything about their academic realm of expertise, but haven't the first clue about running a business...one I worked for was a group run by GPs...not entirely sure of its purpose...but they had a massive event once a year. The entire board was self-appointed and made up of just GPs...when it came to IT decisions, we let them make them, took the money and did it our way anyway...as long as the result was the same and the board felt they'd had the final word, it was all good. None of them could ever tell the difference.
Sometimes one half of the "board" doesn't know what the other half is doing.
I've been in a situation where two halves of the same board had the same idea at the same time but didn't call a meeting to discuss...the result was two separate pentesting firms being booked to arrive on the same day but neither of them knew the other one was there...because they were at different branches.
It was amusing to read the reports afterwards...each one detected the other running scans, exploits etc.
It was confusing to begin with, because the report was damning on both sides..."possible persistent threat detected"...but we quickly worked out that the reports were actually just highlighting the presence of the other pentesters...it was spotted when we noticed the different letterheads on the printed reports and the completely different email domains that the reports came from.
Didn't stop the board shitting themselves though...because they didn't read the reports and a summary was written by an "executive assistant" who didn't understand what she was reading and didn't realise both reports were from different firms on the same day.
Didn't surprise me that the pentesters didn't twig though; they were both like clones of each other...running the same tools etc., the usual shit you'd expect. I did have a laugh with one of them though, the group that turned up at my branch...
Pentester: Ok, so we're going to run a bunch of scans and exploit tests, but we don't expect you to respond any other way than you would normally respond if you detect or suspect an attack.
Me: Cool...so what's next then?
Pentester: Well, just tell us where we can sit and we'll crack on.
Me: Sit? I can't let you in.
Pentester: Why not?
Me: Well I suspect you're going to attack us, and in line with company security policy I have to eject you from the building and report this incident to the board.
The team at the other site managed to write an 8-page report on the vulnerabilities they found in a honeypot box...which found its way to the board...who thought I'd left an extremely vulnerable box on the network and that I should get rid of it...it was them who asked me to deploy a honeypot and monitor it.
I know, and yes...I walked away about a month later.
The conclusion for Altman and others crying about the risk of AI was, and always has been, "this stuff is dangerous … so let me write the rules to stop anyone else from competing with me".
OpenAI loves their own AI.
They just don’t like anyone else.
"....He was partially right though : some people will end up losing their jobs because of AI. As he has now proved........."
Except that what currently exists <isn't> artificial intelligence ;), and I have serious doubts as to whether actual AI will ever be achieved.
I am like Altman: I feel sure someone will try to do something with it before it is capable, which is likely to bring about the end of the human race, possibly of much more :(
This comes not long after Sam's interview with the Financial Times - “Right now, people [say] ‘you have this research lab, you have this API [software], you have the partnership with Microsoft, you have this ChatGPT thing, now there is a GPT store’. But those aren’t really our products…Those are channels into our one single product, which is intelligence, magic intelligence in the sky. I think that’s what we’re about."
1. Is it possible that was too sickening even for some of the directors, who include serious project leads actually involved in the engineering?
According to people familiar with the discussions, Microsoft invested $10 billion in OpenAI earlier this year as part of a 'multiyear' arrangement that valued the San Francisco-based startup at $29 billion. When asked if Microsoft planned to make more investments, Altman responded, "I'd hope so. There's a lot of compute to build out between here and AGI. Training expenses are just huge."
2. Is it possible Nadella & co objected to the spectacle of MS being pressured in the media to write a blank check for Altman's vision? Of course Nadella is under pressure to show that AI will produce profit before making further investments. Undoubtedly there was feedback from that. But it was reported that Nadella only found out about the firing after it happened ....
Sam Altman being the Adam Neumann of 2023? Building a house of cards on lies to scam investors out of billions? It's not outside the realm of possibility.
There’s a large chunk of the LLM crowd that is basically just the blockchain-is-going-to-solve-all-of-humanity’s-problems crowd that needed to find some new tech to prop up their investment rounds after the blockchain was not as useful as promised.
OpenAI isn't really costing Microsoft $$$ - they are paying in Azure time and can schedule when nodes aren't busy.
In return they are getting the only upgrade to O365 that people will pay for.
"Want to upgrade to Office365-pro? Only $49/month and you get 3 new Powerpoint bullet styles" vs "Office365-AI for $99/month, and it writes your Powerpoints for you"
Copilot is seriously impressive for doing a lot of the boilerplate setup and skeleton code for complicated stuff. Try writing Vulkan code just from the specs without it.
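For anyone who hasn't had the pleasure, here's a minimal sketch of just the very first step of a Vulkan program - creating an instance - before a single pixel gets drawn. It's plain Vulkan C API, nothing Copilot-specific, but it gives a flavour of the ceremony Copilot will happily type for you:

```c
/* Minimal sketch: creating a Vulkan instance - the first of many
   boilerplate steps (devices, swapchain, pipelines...) before any
   rendering happens. Build with: cc hello_vulkan.c -lvulkan */
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void) {
    VkApplicationInfo app_info = {0};
    app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app_info.pApplicationName = "hello-vulkan";
    app_info.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    app_info.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo create_info = {0};
    create_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    create_info.pApplicationInfo = &app_info;

    VkInstance instance;
    if (vkCreateInstance(&create_info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }
    printf("Vulkan instance created\n");

    vkDestroyInstance(instance, NULL);
    return 0;
}
```

And that's before enumerating physical devices, creating a logical device, a swapchain, render passes and pipelines...hundreds of lines of ceremony before the first triangle appears.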
...was to ask ChatGPT what happened:
>>>
Title: "Sam Altman's Grand Exit: OpenAI's CEO Fired for Perfecting the Art of Coffee Brewing"
In a shocking turn of events at OpenAI headquarters, Sam Altman, the once-revered CEO, has been unceremoniously shown the door. The reason? Allegedly, he spent more time perfecting the art of coffee brewing than advancing artificial intelligence.
Rumors suggest that Altman's obsession with the perfect cup of java reached epic proportions. Employees reportedly witnessed him passionately debating the merits of different coffee bean blends in the company kitchen, neglecting strategic meetings, and even turning the conference room into a makeshift coffee laboratory.
One anonymous employee quipped, "We used to have board meetings; now we have coffee tastings. Sam would bring in exotic coffee beans, ask us to rate them, and then disappear for hours into his caffeine-fueled haven."
Altman's departure has left the tech world buzzing with speculation about who will fill his rather large coffee mug-shaped shoes. Will the new CEO be a barista turned tech guru or perhaps an AI programmed to detect the perfect roast?
In response to the news, Altman released a statement saying, "I have always believed in the power of a good cup of coffee to stimulate creativity and innovation. Maybe OpenAI needs a CEO who understands the finer nuances of a pour-over brew."
As for OpenAI, the organization seems ready to embrace a new era. The search for a CEO is underway, with candidates being asked to submit their coffee credentials along with their resumes.
In the meantime, OpenAI employees are adjusting to a world where board meetings are no longer punctuated by the soothing hiss of an espresso machine and the aromatic scent of freshly ground beans. The future of OpenAI remains uncertain, but one thing is for sure – the coffee at headquarters will never be the same. <<<
Well, well, well ...... blow me down with a feather. That's a strange turn of events very likely to boost the fortunes of leading AI entrepreneurs/Earth-shattering, novel ground-breakers rather than the members of those company boards that would be milking and bilking them.
And now Altman and Brockman are as renegade rogues set free to do as they wish themselves rather than as they be told by A.N.Others ‽
Nice one. And just in time for everything entertaining so very many new strange things in dire straits in need of novel stellar leadership[s] and forks of direction attending to/mentoring and monitoring future feeds and seeds.
:-) Whenever starting in an apartment with practically nothing and achieving so much, one can only wonder at what the future can bring whenever one has much more than just enough to practically achieve virtually anything.
Hell hath no fury like genii scorned? :-)
If you worked better, Sam would still have a job!..... probgoblin
Well, whether it is the same or not, all I maybe know for certain is, if Sam had given me a job, things would have been working a hell of a lot better.
And the future does not change that being an absolute certainty ....... which must be quite comforting should things ever take a turn for the worse for him and those depending on him and his followers/supporters. And there’s not many surely prepared and able to prove and tell you all that, I bet.