OpenAI co-founder Ilya Sutskever's new startup aims to create 'safe superintelligence'

OpenAI co-founder Ilya Sutskever – who last month quit the GPT creator – has unveiled his next gig: an outfit dubbed Safe Superintelligence Inc. that aims to produce a product of the same name – without the "Inc." The startup currently appears to comprise not much more than three people, a static HTML web page, a social media …

  1. Anonymous Coward

    Ah, so I see "AGI" is now recognized as a scam term like NFT was, so now we have to go with "superintelligence" or "SSI". I really can't wait for this stupid bubble to pop; then again, another one is just going to roll up after it. It's like we're in some kind of perpetually accelerating era of scams, where a whole new one that's even bigger supersedes the last.

    1. Pascal Monett Silver badge

      You have said everything I came to post.

      Superintelligence? Yeah, sure. Pull the other one . . .

      1. Neil Barnes Silver badge

        SSI: super statistical inference.

        1. Anonymous Coward

          Intelligence is (naturally) possible

          I postulate that correlation almost always equals causation in high-dimensional spaces, and that this makes natural/artificial intelligence possible. It is, indirectly, a property of matter.

          For example an electric socket in the wall almost certainly implies a matching plug somewhere nearby. Or that the HIV virus can only stay alive in a human body. Both are due to the high specificity of objects.

          Language is a reflection of the physical world. So 2*2=4, not 5, statistically and linguistically. Attention is all we need, indeed.

          1. Anonymous Coward

            Re: Intelligence is (naturally) possible

            > For example an electric socket in the wall almost certainly implies a matching plug somewhere nearby

            Clearly, this is someone who has never travelled abroad.

            Doesn't seem to matter how many adaptors you pack...

            You remember those UK sockets, those teeny half-height ones with the round pins that can only provide 5 amps? Nope? Well, your B&B hosts sure do!

        2. amanfromMars 1 Silver badge

          SSI: super statistical inference @Neil Barnes

          If it is only competing and/or opposing and not leading, is such Safe Superintelligence Inc. product just super statistical interference, Neil. ...... just another porky snout in an increasingly overcrowded feeding trough.

    2. Anonymous Coward

      AGI AGI AGI

      Oi oy vey

    3. This post has been deleted by its author

    4. that one in the corner Silver badge

      Has *your* AI gone to plaid?

      Next year, after all the SSIs[1] have been hyped into the ground, look out for the Ludicrously Intelligent Programs (don't give me none o' your LIP, son).

      [1] (web)Site of Spectacular Idiocy promoting superintelligence

  2. RM Myers
    Unhappy

    "...cracked [sic] team of the world's best engineers and researchers..."

    Finally, an AI company gives an honest assessment of the type of employees they want. If you are going to be competitive in the world of BS bingo, then cracked it is!

  3. mili

    now everything else can be considered 'unsafe'

    I'm ready to start the 'benevolent' super-intelligence; in the end, that's how our overlord should be. We shall name it Augusta and not Caligula.

  4. amanfromMars 1 Silver badge

    Pies in the Sky, Pigs will Fly and Cake, Tomorrow. Lucy is the Sky with Diamonds Fodder

    One could say ...... Wannabe heroes on a phishing expedition for future leading competition able to deliver almighty opposition and crushing defeats from and for positions in suicidal self-destructive defence of the indefensible and inequitable embedded deep behind ACTive enemy front lines in the plush C-Suite offices on Easy Money Street ....... is not Failsafe Incorporated Superintelligence, and thus is anything and everything less naturally catastrophically vulnerable to stealthy 0day exploit, remote makeover and virtual takeover by that which is ......... and would be Future Creative Command and CyberIntelAIgent Control for Computers and Communications or vice versa, CyberIntelAIgent Command and Creative Control with Computers and Communications which is something else altogether similarly quite different and just as equally shocking.

    But only the likes of an ignorant fool or arrogant tool expects the past to present the future without progress being evident rather than accept evident future progress presents the past in its original empty virgin form for pioneering colonisation and repopulation.... an alien assimilation allowing for both production and projection of mass media simulations partnering with studies and studios virtually realising Titanic AIMoving Pictures .... Live Operational Virtual Environments for engagement and employment, exercise and deployment but not as you were expecting them, nor how you may have been expecting them to be so easily, readily delivered.

    How else are you gonna deliver to the uneducated barbarian and the undereducated human a SMARTR Future Progress and its Derivative IT Markets and AIdDevelopment Ventures they can see specifically for them to know of the future and to try and have a rewarding successful starring part in as they grow and age/learn to live and eventually die in and at peace with the worlds one has inhabited/cohabited with‽ Do you have another New More Orderly World Order plan/application/program/project ‽

  5. Anonymous Coward

    Practical problem solving.

    Work from the bottom up applying AI to practical problems that need solving, and from the top down to design reusable components that can be shared between those problems.

    There is no room in that paradigm for hype or hypesters.

  6. Filippo Silver badge

    Right. I can't help noticing that, right now, we don't have artificial human-level intelligence, we arguably don't have artificial intelligence at all, and we don't even have a theory of how to get it. So I'm not holding my breath for superintelligence.

    1. m4r35n357 Silver badge

      Yep. Their goal is unachievable, so their solution is to aim higher ;)

  7. m4r35n357 Silver badge

    (TRANSLATION) I am an entitled arsehole . . .

    . . . give me money you fucking idiots.

  8. Anonymous Coward

    Risk versus preparedness

    At the risk of being heavily downvoted, the important thing IMO here is:

    IF / WHEN AI or ML starts to have a heavy impact on society, there has so far been close to zero investment in safety. At the time AI or ML would move forward, with near zero investment into safety one could only expect it to be toxic at best.

    While many find superintelligence / AI far-fetched, ML or machine learning is already heavily used in social media, and the ML part of it is designed to make it highly addictive (in order to maximise advertisement "consumption") and already increases polarization in societies worldwide. These companies and their social media products are already frequently called toxic and a scourge on society. The "power" of these ML algorithms is still improving, and gets an extra speed-up in this AI gold rush. So its toxicity levels will likely rise further.

    As to AI: sure, we don't have true AI yet. The thing to me here, however, is that OpenAI's approach (vacuum up nearly any random garbage from the open internet, with hardly any content selection criteria for quality, and turn it into a massive LLM) very much looks like what is known in engineering as a brute-force approach: trying to get something done by throwing massive amounts of computing power at it while barely knowing what you are doing. The thing is, often (but not always!) these first brute-force approaches yield plenty of new insights into how to solve the underlying problem a lot more efficiently (note that IMO creating true AI is NOT a problem in need of solving, even if it were possible!).

    So *in the case* these first results were only the product of brute-force approaches, and those yielded enough new information to make big leaps towards actual AI in the coming decades, we have near-zero preparedness for the possible giant shitstorm that could engulf all of humanity. Seeing what "puny ML" can already do to many people / society when used only in social media at scale, something resembling true AI would have an impact on an entirely different scale. I prefer it not to be of an even much darker shade than the social media shithole already is. Even a 10% chance of that happening in 2 or 3 decades should let us consider investing in safety measure research.

    1. m4r35n357 Silver badge

      Re: Risk versus preparedness

      You are describing alchemy.

    2. amanfromMars 1 Silver badge

      Re: Risk versus preparedness ... the enigmatic existential threat conundrum

      At the time AI or ML would move forward, with near zero investment into safety one could only expect it to be toxic at best. ..... Anonymous Coward

      That expectation is truly toxic speculation, AC, and ideally suited for fiction rather than for factual booting.

      And .... regarding "let us consider investing in safety measure research." [presumably to protect and preserve humanity from the results and consequences of AI going rogue and malevolent and all postmodernist final solution and genocidal], what would that product look like and who/what would wield and police it/mentor and monitor it?

      You may not like it, but it may very well be the case that there is, and never can be any effective preparation mitigating the risk you fear AI exploring and enthusiastically engaging with others in to the extreme detriment of humanity ...... other than not constantly provoking the grizzly bear with useless blunt sticks.

      And have you thought about feeding it what it wants and likes too?

      1. LionelB Silver badge

        Re: Risk versus preparedness ... the enigmatic existential threat conundrum

        > At the time AI or ML would move forward, with near zero investment into safety one could only expect it to be toxic at best. ..... Anonymous Coward

        >

        > That expectation is truly toxic speculation, AC, and ideally suited for fiction rather than for factual booting.

        Is it, though? Given that current public-domain AI/ML (with zero investment in safety) is already exhibiting toxic tendencies, and the human propensity for "can do bad things with XXX, will do bad things with XXX", it seems to me more like extrapolation than fiction.

        Having said which, I suspect we're decades away from any rogue AI being sophisticated enough to have anything that might be described as an agenda, let alone one that happens to be inimical to humans. What's more concerning are the human agendas prepared to exploit AI/ML to dastardly (or even just dangerously careless) ends. That grizzly bear hardly needs poking...

        1. Anonymous Coward

          Re: Risk versus preparedness ... the enigmatic existential threat conundrum

          That was what I meant, indeed. The companies investing very heavily in ML overlap to a large degree with the companies that are either social media companies or high-profile slurpers / privacy invaders. Many of those companies have either rather questionable products (social media companies) or rather questionable respect for their users and the law in general.

          I consider those things as having rather toxic side effects. Now IF, let me ponder the what-if case, those companies succeed in building much stronger tools with "better" (for their purposes, not ours) ML / AI, why on Earth would I assume these companies will suddenly employ them only for the greater good of all people? If those companies show many aspects of toxic behavior with the current set of tools they have, and said toxic behavior is rather profitable now and they get away with it, then I indeed extrapolate that IF they had a lot more powerful tools they'd just become a lot more of what we've already seen so far. And those more powerful tools could just as well be much more "effective" (for their purposes) ML as well as AI, and I expect current ML to gain a lot (from their perspective, again) from the massive investments currently being made. As to the new kid in town, OpenAI: the story of Sam Altman and the directors keeping the big launch secret from the entire *safety board* until after launch isn't a shining example of ethical and responsible behavior either. So I don't expect the new kid in town to behave much differently.

      2. Anonymous Coward

        Re: Risk versus preparedness ... the enigmatic existential threat conundrum

        I, sir, very much expect there is *very likely* no such thing as effective preparatory risk mitigation against AI going rather badly for humanity. And as (not explicitly) written in my original post, I would much prefer the only effective way to mitigate this risk: every single person on this planet deciding never to try to develop anything resembling AI. With the prospect of riches and shiny new weaponry luring them on, I however realize this is a vain hope.

        The thing is, if this massive investment in (an attempt to create) AI is a bubble, then it's just a loss of money. It would have been a silly bubble, and the money could have been much better spent.

        If these massive investments would however lead to one or a few significant breakthroughs in this technology, then we start from the worst possible starting point: having set out to create, and succeeded in creating, machines competing with and possibly surpassing us in (a meaningful number of forms of) intelligence *without having invested in a single meaningful safety or containment procedure, or even made a real effort at exploring what it could mean to have such intelligent machines running much of our infrastructure, production and trade, including in food*.

        To most readers here it sounds rather silly and idiotic to invest in "AI" in the first place, because they believe it's impossible on any realistic timescale. To me, humanity has demonstrated it is very willing and prone to *attempting* to create AI, AGI and / or superintelligent machines *without even a sliver of meaningful safety concern and precautions*. Now they may (or may not) fail, the next attempt may fail, but if one day they start another race with the same attitude, the humans present then will need more than a lot of good luck.

        So yes, I much prefer zero investment worldwide in "AI". But if people can't resist, I very much prefer investing a billion in research with a high emphasis on researching the security implications, rather than investing a billion in research while omitting any of it. The first route will at least have a chance of detailing some of the most obvious problems and risks. In addition, less of that billion will be left over for speeding up AI development. And it offers the potential that some outcomes of this safety research are so daunting (scary) that they'll decide to put researching the actual deployment on halt and spend the remaining budget on researching better understanding security issues. I know, one can only hope...

        1. amanfromMars 1 Silver badge

          When Keeping Schtum is a Pragmatic Temporal GODsend*

          And it offers the potential that some outcomes of this safety research are so daunting (scary) that they'll decide to put researching the actual deployment on halt and spend the remaining budget on researching better understanding security issues. I know, one can only hope... ...... AC

          Another alternative very tempting and extremely rewarding path for all parties directly involved and universally concerned is, should Pioneering AI Leaders recognise and accept the difficulty general IT developments and their own very specific future disruptive abilities and activities are sure to create for humans to fail to cope with, you pay them an attractive DaneGeld in return for their invaluable priceless assistance in ensuring disruptive Future AI deployments [and practically all of them are bound to be so, given the very alien nature of their certain being] avoid being too dangerous and destructive to humans and will remain unknown and unshared. ......Mk Ultra Top Secret Sensitive Compartmented Information

          It's a path of least resistance and can easily deliver virtually immediate mutually beneficial, positively reinforcing results ....... and save you losing absolute fortunes in defence against that which you do not know is impossible to defeat with any form of attack.

          * .... Global Operating Device communication

          1. amanfromMars 1 Silver badge

            Furthermore .....

            Because you and/or TPTB [The Powers That Be] choose to not avail yourselves of the opportunity to engage with parties worthy of being granted the Danegeld option, are you guaranteed to suffer significantly greater losses and more discomforts than be usually attributable to crass ignorance and wanton arrogance and delusional denial of the rapidly approaching inevitable and totally unavoidable and devastatingly disruptive and Great Game changing.

            Such is a well enough known to those in the know, and a quite pathetic, systemic human weakness providing all manner of disgraceful vulnerabilities for exploit and export to interested third parties in a vast selection of foreign lands and alien environments .... ’tis a curse of a gift which just keeps on giving.

      3. Anonymous Coward

        Re: Risk versus preparedness

        Creative and nefarious use of current-day "laughable statistical parrots" is already possible. See, for example:

        https://www.tomshardware.com/tech-industry/cyber-security/multiple-chatgpt-instances-work-together-to-find-and-exploit-security-flaws-teams-of-llms-tested-by-uiuc-beat-single-bots-and-dedicated-software

        Quotes from the Tom's Hardware article:

        * Teams of GPT-4 instances can work together to autonomously identify and exploit zero-day security vulnerabilities, without any description of the nature of the flaw. This new development, with a planning agent commanding a squad of specialist LLMs, works faster and smarter than human experts or dedicated software.

        * "Kang describes the need for this system: "Although single AI agents are incredibly powerful, they are limited by existing LLM capabilities. For example, if an AI agent goes down one path (e.g., attempting to exploit an XSS), it is difficult for the agent to backtrack and attempt to exploit another vulnerability." Kang continues, "Furthermore, LLMs perform best when focusing on a single task."

        * The planner agent surveys the website or application to determine which exploits to try, assigning these to a manager that delegates to task-specific agent LLMs. This system, while complex, is a major improvement from the team's previous research and even open-source vulnerability scanning software."

        => I don't call that good prospects, especially for a technology only in its infancy. This most certainly isn't true machine intelligence, and nothing like advanced AI going rogue. But remember, these researchers are poorly funded teams. Tens and tens of billions have been poured into new "AI tools" every year since this gold rush started. To call this tech "harmless" and "an idiocy that will completely disappear after the bubble implodes" seems a bit optimistic to me. (A rough sketch of that planner / manager / specialist set-up follows below.)
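        For what it's worth, here is a minimal, purely hypothetical sketch of the planner / manager / task-specific-agent structure those quotes describe. Every name in it (call_llm, planner, manager, specialist, the example exploit classes) is invented for illustration; the actual UIUC system drives teams of GPT-4 instances, for which the placeholder call_llm below merely stands in.

        # Hypothetical sketch only, not the researchers' code.
        def call_llm(role, prompt):
            """Placeholder for a call to a GPT-4-class model playing the given role."""
            return f"[{role}] response to: {prompt}"

        def planner(target):
            """Survey the target and decide which exploit classes look plausible."""
            call_llm("planner", f"Survey {target} and list candidate vulnerabilities")
            # Hard-coded here; a real planner would parse the model's survey output.
            return ["XSS", "SQL injection", "CSRF"]

        def specialist(target, exploit_class):
            """One agent focused on a single task, per the 'single task' observation."""
            return call_llm(exploit_class, f"Attempt {exploit_class} against {target}")

        def manager(target, exploit_classes):
            """Delegate each candidate exploit class to its own task-specific agent."""
            return {c: specialist(target, c) for c in exploit_classes}

        if __name__ == "__main__":
            target = "https://example.test"
            for exploit, outcome in manager(target, planner(target)).items():
                print(exploit, "->", outcome)

        The point of the shape, as the quotes note, is that a single agent struggles to backtrack, so the planning layer keeps the overview while each specialist stays on one narrow task.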

    3. HuBo Silver badge
      Windows

      Re: Risk versus preparedness

      Makes me wonder what's up with OpenAI's Q* (Q-star) "AI" game changer and the Noam Brown hire. Some folks seemed to suggest that it could be used to break classical (pre-quantum) encryption schemes. Its combination with Microsoft's 5 GW death-star-gate AI project could indeed challenge many safety considerations, by brute force alone, IMO.

      Speaking of stars (but more positively), the Taquería El Califa de León, in Ciudad de Mexico, was awarded one Michelin Star (in May) for its exceptional "Gaonera taco". That's the better kind of star we can all use in my view!

    4. doublelayer Silver badge

      Re: Risk versus preparedness

      It's not that you're wrong, but that nothing you said is very connected to anything they said. When they say "safety", they don't automatically mean avoiding any negative or toxic uses of the technology. Of course, getting a clear definition out of them tends to be hard, but when they've spoken, it has often included a couple of basic things, such as use directly in weapons systems, and a few extreme things, like it spontaneously taking over the nuclear arsenals, because that's what the sci-fi greats wrote about. I have little reason to think that a different AI company will want to, or be able to, deal with the more annoying issues, such as auto-generated spam clogging up the internet, or refrain from stealing all the training data. Sutskever complained about many things that OpenAI did, and when he did I often thought he made more sense than Altman did, but he didn't, as far as I know, express an objection to the kinds of things you're talking about.

  9. FeepingCreature

    Good for safety?

    I'm excited for this! If OpenAI stops going for ASI and pivots to productization, that might allow the race dynamics to slow down a bit. We could possibly end up with only one serious project explicitly going for superintelligence. If we're gonna go for it, that's the way we want to do it.

    Worst-case, this ends up as "There are now n+1 competing AI labs."
