Sam Altman's chip ambitions may be loonier than feared

OpenAI CEO Sam Altman's dream of establishing a network of chip factories to fuel the growth of AI may be much, much wilder than feared. As reported last month, Altman is supposedly seeking billions of dollars in funding from partners including Abu Dhabi-based G42, Japan's SoftBank, and Microsoft, to build out all those neural …

  1. b0llchit Silver badge
    Mushroom

    Hypeland is (very) expensive

    Throw around numbers and see what sticks. Hype a bit more and ride the current of highs to push the pyramid still higher while you are sitting at the top.

    There are quite a few billionaires to dethrone. Therefore, hype a bit more and revise the numbers upward. Then you hype some more with all new and better numbers. (rinse, repeat)

    1. Fruit and Nutcase Silver badge

      Re: Hypeland is (very) expensive

      or, like Hype-r-loop, what ends up getting delivered is far removed from the vision

    2. Anonymous Coward
      Anonymous Coward

      Re: Hypeland is (very) expensive

      Sam Altman is just another Adam Neumann. It won't end well for anyone but him.

  2. Doctor Syntax Silver badge

    I expect he was very demanding as a toddler.

    1. jake Silver badge

      Still is, from what I can see.

  3. vtcodger Silver badge

    One wonders

    One wonders at times how much, if any, normal garden variety human intelligence is behind the AI push. Not much it seems to me. But I'm old and increasingly cynical.

  4. abend0c4 Silver badge

    A shortage of skilled workers

    Yea, but AI's going to sort that, innit?

  5. Anonymous Coward
    Anonymous Coward

    Oh what big numbers you have, Grandma!

    The better to fleece the sheeple, my dear.

  6. MOH

    There's a lot of competition, but Sam Altman must be in the running for "Modern PT Barnum of the Decade"

    1. lglethal Silver badge
      Trollface

      Yeah Elon was for the 2010's, I think we can give the 20's to Altman...

      If he lasts the decade, of course...

    2. jake Silver badge

      He is quite Jobsian.

      I was immune to Steve's machinations, too.

      I'll probably be first up against the wall.

  7. Barracoder

    I support him

    Roko's Basilisk is a very persuasive argument

    1. jake Silver badge
      Pint

      Re: I support him

      "Roko's Basilisk is a very persuasive argument"

      Only to people who don't understand the technology. Machines do not, and cannot, think[0].

      Before you say it, let me counter: Don't TELL me, SHOW me.

      Until then, it's nothing more than bad science fiction bordering on religion, and used to scare the children.

      [0] amfM will now spontaneously combust. Have a pint to put that fire out, mate.

      1. theOtherJT Silver badge

        Re: I support him

        "Do not" seems fair. I don't see any evidence of it.

        "Cannot" however is a stretch. To quote(ish) Captain Jean-Luc Picard:

        "We too are machines, only of a different construction"

        I don't see any reason we couldn't make a machine that thinks in theory.

        1. jake Silver badge
          Pint

          Re: I support him

          Again, don't TELL me, SHOW me.

          I look forward to meeting your theoretical man-made[0] thinking machine sometime in the somewhat vague, completely undetermined future. Maybe.

          Until then, perhaps we should get in another round?

          [0] Non-biological ... any idiot can reproduce, as can be seen down the Walmart on any given weekend.

          1. doublelayer Silver badge

            Re: I support him

            You are asking them to prove a hypothetical and refusing to prove your own. Neither is going to be possible. They cannot prove that a computer can think by going and building you one, and even if they could manage it, you probably wouldn't accept that they had. Similarly, from what you've said, you don't have any reason to think that such a thing is impossible, you just state it as an axiom. I agree that nobody has built one, and the way we are going, nobody will, but that is not sufficient evidence to prove that it can't exist.

            If you think you have a proof that machines could never be made to think, you could post it, but simply saying to show you is not a valid argument. For example, if I told you that it is impossible for a rock to exist on the ocean floor at 3 km, you would be correct to tell me that my statement is incorrect, but you probably don't have a machine capable of retrieving one of the rocks that are down there to show me that it really is a rock. I cannot take your inability to retrieve a rock from a location as proof that no rock can exist in that location, and you can't treat someone's inability to produce a thinking computer on command as proof that one can never exist.

            1. Michael Wojcik Silver badge

              Re: I support him

              Yes. And also, Roko's Basilisk can apply to a great many systems that may not meet arbitrary definitions of "thinking". It's simply the application of a type of decision theory to a set of circumstances; the decision theory is formalized, and the circumstances can be.

              Ducking behind dualist metaphysics is the refuge of people who don't have any actual theory of cognition and don't want to admit it.

        2. Not_A_Hat

          Re: I support him

          If the physics of the brain is algorithmic, AI is at least theoretically possible on our current hardware.

          Turing, Church, and Gödel proved this something like eighty years ago. If a computation engine is Turing complete, it can emulate any other Turing-complete computation engine. Therefore, if the laws underlying the mechanisms of consciousness can be computed, then they can be computed digitally.

          This is why, I think, people get so excited about neural networks. If the equations of a neural network entirely describe the parts of the brain that actually allow us to think, then we've gotten our foot in the door; the rest of it is just a matter of scale and finding the right bits to focus on.

          There are definitely formulas that can't be computed, though; anything involving the halting problem is provably impossible on a Turing machine. If consciousness relies on something like that, A.I. is definitely impossible on digital computers.

          Moreover, it hasn't been positively demonstrated that consciousness is algorithmic. If it had been practically demonstrated, we'd have AI by now, and to demonstrate it theoretically, we'd need a complete description of the physical laws describing the brain. 'A few more years' of research won't begin to scratch that.

          Personally, I doubt that consciousness is algorithmic. I believe in free will because I experience it; therefore I reject the idea that I'm deterministic - therefore I reject the idea of computable consciousness, and I don't believe digital computers will ever manage to host A.I.

          It may be a slightly solipsistic argument, but that's my view.
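The Church-Turing equivalence the comment leans on — any Turing-complete engine can emulate any other — can be made concrete with a toy simulator. This is purely an illustrative sketch in Python, not anything from the thread; the machine, its rules, and the `run_tm` helper are all made up for the example:

```python
# Minimal one-tape Turing-machine simulator: a digital computer emulating
# another model of computation (the equivalence the comment appeals to).

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (next_state, write_symbol, move),
    with move in {-1, 0, +1}. The machine halts in state 'halt'."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A made-up machine that inverts every bit, then halts at the first blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm(invert, "1011"))  # prints 0100
```

Whether the brain's dynamics can be captured by rules of this kind is, of course, exactly the open question the comment goes on to raise.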

          1. Alex Stuart

            Re: I support him

            > I believe in free will because I experience it

            You experience the (extremely convincing, many-layered) illusion of free will, but it's not technically possible without invoking the supernatural.

            1. theOtherJT Silver badge

              Re: I support him

              Eh.... maybe.

              There's always the wonderful 3rd option that we are both non-deterministic and non-free because the illusion of free will arises from the action of some effectively random process along the lines of atomic decay.

              That being said, I've always harboured a suspicion that atomic decay is not at all random - seeing as it is at least statistically completely predictable - and that we just lack the theoretical grounding to explain what causes it. If that's the case, then it could just as well be the case with mental phenomena. Maybe they do actually "exist" in some physicalist sense, and we just aren't looking in the right place.

              I would immediately concede that this is pure speculation mind you, and anything I suggested to explain it without a theoretical basis that could be in some way tested might as well be defined as "Supernatural".

              1. Alex Stuart

                Re: I support him

                > There's always the wonderful 3rd option that we are both non-deterministic and non-free because the illusion of free will arises from the action of some effectively random process along the lines of atomic decay.

                That is what I believe is the case - non-deterministic yet also non-free.

                Sam Harris (big no-free-will proponent, wrote a book on it) and Daniel Dennett (prominent philosopher/cognitive scientist) had a good debate on this, if you're interested - https://www.youtube.com/watch?v=_J_9DKIAn48

            2. Michael Wojcik Silver badge

              Re: I support him

              Penrose says otherwise.

              I'm not persuaded, but he has a physics Nobel, and I don't (the committee is so stingy with those things), so I have to update a bit in his favor.

              (And he's not the only one.)

              That said, everyone ought to believe they have free will. Either you do, in which case it's a correct belief; or you don't, in which case you have no choice about whether to believe in it or not, and all arguments are vacated.

          2. Pete Sdev

            Re: I support him

            Your post is good and I'll concur with much of it.

            Though:

            I believe in free will

            Most of the current scientific knowledge indicates, somewhat depressingly, that we probably don't have free will.

            Belief in free will is, after all, a cultural attribute.

  8. elsergiovolador Silver badge

    What's next

    What is going to be next after AI craze fades away?

    What is going to be big in 10 years?

    Could those chips be used for something else than AI?

    Once human-made input drastically decreases, AI trained on other AI is simply going to end up with noise.

    They are already desperate to limit that under the guise of "safety" - all this AI watermarking is not designed so that you can know the next PM address is fake - it's so that when it enters the training pipeline, whatever supervises it gets it rejected.

    1. jake Silver badge

      Re: What's next

      "Could those chips be used for something else than AI?"

      Heaters to keep us all warm during the upcoming AI Winter?

    2. Howard Sway Silver badge

      Re: What's next

      Crypto 2.0

      1. elsergiovolador Silver badge

        Re: What's next

        Craipto

    3. Snowy Silver badge
      Joke

      Re: What's next

      What is going to be next after AI craze fades away?

      Easy: AI, just called something else. Maybe the Fuzzy Advanced Neural Network sYstem, or Fan... I do not think I need to completely spell it out for you :)

      1. Flocke Kroes Silver badge

        Re: What's the next name for AI

        I was thinking Deductive Figuring or DeFi for short.

    4. Michael Wojcik Silver badge

      Re: What's next

      Could those chips be used for something else than AI?

      Forget "AI". It's a meaningless term.

      The question is what operations those specialized processors are optimized for. There are useful things you can do with TensorFlow for computing close-to-optimal approximations of complex problems, for example. And tasks like automatic document classification and summarization have many applications.

      Doing big matrix operations on low-precision matrices probably has other uses.
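As a purely illustrative sketch of that point (nothing from the comment itself; the sizes and tolerance are arbitrary assumptions), here is the kind of low-precision matrix multiply these accelerators are built for, done with NumPy on the CPU:

```python
import numpy as np

# Low-precision (float16) matrix multiply vs. a float64 reference -- the
# core operation "AI" accelerators optimize, usable well outside "AI".
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

exact = a @ b  # float64 reference
approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float64)

# Even at 16 bits, the worst-case entry error stays small relative to the
# result's magnitude -- often acceptable for approximate solvers, signal
# processing, graphics, and similar non-"AI" workloads.
rel_err = np.abs(approx - exact).max() / np.abs(exact).max()
print(f"max relative error: {rel_err:.4f}")
```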

      Zvi mentioned recently a study that showed LLMs are better than individual human lawyers at contract review. That's a good application. Contract review is tedious and largely repetitious, and having an LLM do at least the first pass frees up junior partners to do more useful things.

      There will be uses for LLMs in the entertainment sector. Whether those are good uses is debatable[1], but they're inevitable; it's just too easy, most of the audience doesn't care about quality, and there's money to be made.

      How much of a chip glut from the "AI" bubble could be absorbed ... now that's a more difficult question.

      [1] No. There, debated.

      1. very angry man

        Re: What's next

        I direct you to The Matrix movie. Just looking at the tech: humans in coffins all their lives. At last we can be more than 20% efficient.

  9. jake Silver badge

    Personality cults ...

    ... are inherently evil.

    I tend to shun them. YMMV.

    1. PhilipN Silver badge

      Re: Personality cults ...

      My local newspaper no longer syndicates Dilbert, but given that [how many?] decades ago he witnessed VCs similarly swooning over "streaming multimedia" - GASP! - Scott would be able to milk this story for every drop.

      1. DS999 Silver badge

        Re: Personality cults ...

        Considering Scott Adams fell for Trump's personality cult hook, line and sinker I don't think he has the proper perspective to milk a story about personality cults without some serious introspection.

        1. Snowy Silver badge
          Holmes

          Re: Personality cults ...

          One racist rant anyone from being cancelled unless your rich?

          1. Michael Wojcik Silver badge

            Re: Personality cults ...

            Those are certainly several words. For the next exercise, try composing a coherent sentence with them.

            1. Snowy Silver badge
              Coat

              Re: Personality cults ...

              Sorry, sometimes what I think I wrote is not what I've written.

              Nearly anyone is one racist rant from being cancelled, unless you're rich enough?

  10. xyz123 Silver badge

    Its not an antitrust issue if you reserve $1trillion of that for bribes.

    Big bribes. Bribes big enough to make a US senator gasp in shock before creaming his pants.

    1. very angry man

      That's pretty funny. Big bribe.

  11. abufrejoval

    An investment of trillions requires a matching return: who would pay that?

    My doubts actually started with IoT. The idea of having all the things in your home somehow smart sounds vaguely interesting... until the next patch day comes around and you find that you now have to patch dozens or more vulnerable things, most of which are designed more to feed the vendor's data lakes than to provide any meaningful empowerment or value.

    I've also always marvelled at my car vs. my home: my car was made in 2010, so it isn't even new any more, yet everything inside is connected and "smart", will adjust to whoever is driving it automagically, and things happen at the touch of a button or even a voice command, if that were actually any easier or faster.

    Of course, once I took the wrong key, the one which had adjusted everything to a person half my size, and I feared mutilation if not death as I searched in total panic for a way to halt the seat squeezing me into the steering wheel... And since I never really came out of home office, I tend to spend so little time in my car that I often can't even remember how to turn on the defrost when the season changes.

    Yet sometimes I find myself wanting to click my key when I enter my home, especially when I'm carrying my supplies, hoping the door would open just as automagically, perhaps even carry the darn boxes up two rather grand flights of stairs, because my home was built around 1830: mine is the part under the roof where the domestics used to live; they never found a worthy successor, but they gave me perspective.

    You see, Downton Abbey provided me with the perfect vision of what IoT should be: life with non-biological servants. Most importantly, life not with intelligence somehow scattered all across things, but with an absolute minimum of non-biological servants: one servant per domain, the butler for the shared family mansion, a valet or lady's maid for each individual's personal needs, a chauffeur for all-inclusive transportation, an estate agent-secretary to manage all fortunes - that's it! Delegation for the lesser services like cleaning and food supply, scale-out for grand events, coordination amongst them, and life-long memory for anything relevant would all be part of their job, not for me to worry about.

    Alexa, Siri, Copilot - none of them ever came close to even envisioning that for me. And you know where their loyalty lies: Downton Abbey has plenty of proof of what happens when servants are disloyal to their masters. Actually, what I really want aren't even servants, who might just go off and marry or have careers of their own, but good old Roman/Greek non-bio-slaves for whom obedience is existential, even if it includes proper warnings against commands that might in fact be harmful. And I don't recall slaves ever being more loyal to their slavers than their owners. So just imagine how Apple would be treated by owners a few centuries or two millennia ago!

    Yet, how much would all of that be worth to me, or to the vast majority of the population, who are consumers?

    Trillions, after all, means a thousand bucks for each individual, given billions of consumers... And that is just the chips portion of what it would require to make it happen.

    It comes back to my smart car: would I have paid extra to have all that intelligence in it?

    Not really - I bought it used. It just happened to have all that stuff in it, and I would rather have foregone those "extras". I paid for the room, the transport capacity, and its ability to cruise the Autobahn at speeds I consider reasonable, with adequate active safety.

    It's really a lot like the electric sunroof which I couldn't opt out from: it limits the head-room every time I enter the car, yet by the time I find myself actually using it, it's typically broken and would be very expensive to repair: so it winds up just being a glass brick covered up 99% of the time. I'd have much rather had the cruise control, but a used car with these options wasn't on sale when I needed a replacement.

    Same with the electric seats, which may be ever so slightly easier to adjust, once you've figured out how they work and how to keep them from breaking your bones. But they become one big giant liability if they're stuck in some ridiculous position because my son wanted to show the car off to his lovely but tiny girlfriend.

    Turns out the main reason I've never seriously considered making my home "smart" is the fact that I need it to function 100% of the time, I don't really have a backup if the door failed to open, the windows failed to close, or if chairs at the dinner table were suddenly glued to the floor.

    So count me very sceptical when it comes to AI-based automation creating empowerment with enough value and trustworthiness to choose the AI variant over the stupid one EVEN at EQUAL PRICE.

    Chances of me actually paying extra? Very ultra slim with an extra dose of heavy convincing required.

    But next comes the corporate angle, whence my disposable income currently comes.

    Yes, there may be a lot more potential for money savings there, but how much AI are consumers going to spend on once it's reduced workforces by the percentages corporate consumers of AI are hoping for?

    New jobs and opportunities take time to arrive and one thing is very sure: those investing billions if not trillions today cannot wait a decade for demand to pick up again. Their shareholders demand sustained order entries month by month, quarter by quarter and returns best within a year.

    And that's where I see bloody noses coming all around already with Microsoft & Co. spending billions or the GDP of smaller countries on nothing but AI hardware.

    I can hardly see myself using Co-pilot even if they force it into my desktop and my apps.

    Actually, much of my late career has been spent worrying about IT security, and the very idea of Microsoft infusing every computer with an AI begging everyone to use it gives me nothing but nightmares about the giant attack surface they are opening up: that company still doesn't even manage to print securely, decades after selling their first operating system. CP/M was safer than that!!

    Much less can I see myself paying for it, nor do I see 90% of consumers paying a significant amount for it, either.

    Sure, that's belly button economics, but I humbly consider myself mainstream and ordinary enough to represent your regular John Doe.

    Investors spending billions and trillions need matching returns and I fear their desperation more than anything else about AI.

    1. Fruit and Nutcase Silver badge

      Re: An investment of trillions requires a matching return: who would pay that?

      perfect vision of what IoT should be: life with non-biological servants

      You have described life as it is for our feline overlords

  12. Ilgaz

    There is a worse problem

    From a technical point of view, money should be spent on quantum computing hardware and software. For regular computing, there should be a way to implement this thing on a p2p grid which would benefit the grid members somehow, monetarily.

    The really worrying words in the article are "Abu Dhabi-based funds". Funds owned by oppressive regimes in the Middle East should have no say in such massive power. They may look uneducated and filthy rich to you, but trust me, they have their own agenda. Just watch what is happening to Twitter.

    AI/hardware is totally a knife thing. Can prepare food or stab a person.

    1. Anonymous Coward
      Anonymous Coward

      Re: There is a worse problem

      They may look uneducated, filthy rich to you but trust me they have their own agenda.

      Don't worry. The Tories in the UK will never get involved in manufacturing silicon. They are just not that forward or longterm thinking.

    2. Flocke Kroes Silver badge

      Re: AI prepare food

      Video or it didn't happen. Next people will be saying AI can safely drive a car.

      1. Dan 55 Silver badge

        Re: AI prepare food

        "If you think about what we are doing with cars, Tesla is arguably the largest robotics company in the world, because our cars are semi-sentient robots on wheels."

        - Elon Musk, August 2021, increasingly desperate to flog the snake oil.

    3. Schultz

      ... quantum computing hardware and software

      Last time I checked, quantum computing was still a fundamental science project. Yes there are companies collecting investor money and selling access to whatever-bit quantum computers but so far nobody has run a useful computation on these. It's a bit like fusion energy: we know it works in principle but we don't know if we can assemble a device that scales sufficiently to be useful.

      1. jake Silver badge

        Re: ... quantum computing hardware and software

        Last time I checked, AI was still a fundamental science project. Yes there are companies collecting investor money and selling access to supposed "AI" computing, but so far nobody has run a useful computation on these. It's a bit like fusion energy: we know it works in principle but we don't know if we can assemble a device that scales sufficiently to be useful.

        1. doublelayer Silver badge

          Re: ... quantum computing hardware and software

          Useful to who? Some people have performed a computation on them that they think is useful. I may not agree, but I might not think that what you do with your computers is useful and that doesn't stop it actually being useful. Worth the resources expended and the collateral damage, probably not. Useful to someone, yes, I'm afraid it has been.

  13. HuBo Silver badge
    Windows

    Marbles ... shambles

    Unicorning of the straitjacket industry (surgical mask to the loony) -- plan ahead!

  14. TheLLMLad

    OpenAI isn't so special

    Tend to concur with the assessment that Altman is acting as point man for hyping up OpenAI, because fundamentally there really isn't anything special about GPT other than it being bigger than everyone else's models and using a lot more fine-tuning. Their only real trick, mixture of experts, is already being done by the French, and it took them what, six months?

    OK, it's from Google, but the "we have no moat" paper was entirely on point. Unless OpenAI has a new model architecture hiding somewhere, they're nothing special at all really.

  15. Anonymous Coward
    Anonymous Coward

    G42

    There's a company that El Reg really needs to look into. Remember DarkMatter? Well, that turned into Digital14 (which Mozilla blocked from becoming a root cert authority due to previous shenanigans) and then rolled into G42.

    What could possibly go wrong?

    1. diodesign (Written by Reg staff) Silver badge

      Re: G42

      They are on our radar and we will cover them more. One story we did about them lately:

      https://www.theregister.com/2023/11/28/cerebras_g42_china_refile/

      C.

      1. Anonymous Coward
        Anonymous Coward

        Re: G42

        G42 and the Abu Dhabi government (not necessarily the whole UAE) didn't get 'early' access to the Chinese Sinopharm vaccine without a very close relationship with Beijing:

        https://www.g42.ai/resources/news/g42-sinopharm-phase-3-clinical-trial-vaccine

  16. Treelaw45

    Better start working on quantum computing for security methods.

  17. Walt Dismal

    the risks

    All this is based on the expectation that the current paradigm for AI is correct. And what if it isn't? What if AI moves to a different paradigm?

    As I tell people, the human brain learns and computes on about 20 watts of power. Should some quantum computing means come about to better emulate the brain mechanisms, your $7 trillion will now lie on quicksand.

    Right now everything is driven by mindless techie hordes who believe in the current ML fads. May I remind you that since 2012 - one decade - the NN paradigms have changed greatly. There is no 100% solid reason to believe that vector processor chips will always be the only way to go.

    Right now AI is based on cloud-based training. It depends on learning mostly static patterns in data but it has nowhere near the flexibility of the human mind mechanisms which can dynamically learn - zero-shot learning - and reconfigure their architecture on the fly to solve a problem. We will see new architectures that may drastically reduce the needed computing power to develop and employ cognitive systems. So putting 7 trillion on a horse right now is risky.

  18. nautica Silver badge
    Happy

    Title: "Sam Altman's chip ambitions may be loonier than feared"

    With apologies to that preeminent British biologist JBS Haldane, who famously said, "The universe is not only stranger than we imagine, but it is stranger than we can imagine."...perhaps the title here could--or should--be

    "Sam Altman's chip ambitions may not only be loonier than feared, but loonier than can be feared."

    1. jake Silver badge

      Re: Title: "Sam Altman's chip ambitions may be loonier than feared"

      The Haldane quote is actually "My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose."

      Your version is from one A.C.Clarke, who tidied it up for the modern era.

    2. Bebu
      Windows

      Re: Title: "Sam Altman's chip ambitions may be loonier than feared"

      Given the source of these ambitions, it is drawing a rather long bow to claim "loonier."

      For $7T he could probably buy (bribe) everything between Kaliningrad and Pyongyang (FWIW.)

      Who is loonier: he who tries to sell the Moon, or he who buys it? (Or Mars, etc.) Although I could imagine X-Aries RE flogging off-the-plan Martian condos to the faithfool. :)

  19. Ben Goldberg

    The biggest issue is a lack of people with the right skills.

    Maybe he'll build a university?
