Now OpenAI CEO Sam Altman wants billions for AI chip fabs

OpenAI CEO Sam Altman is reportedly seeking billions of dollars in capital to build out a network of AI chip fabs. Citing multiple unnamed sources familiar with the matter, Bloomberg said on Friday that Altman has approached several outfits, including Abu Dhabi-based G42 and Japan's SoftBank, to help make it happen. Microsoft, …

  1. amanfromMars 1 Silver badge

    Live Operational Virtual Environments for Flash Fast Cash Crash Testing Dummies

    The certification group Uptime Institute echoed those fears in its latest analysis, warning that while AI will remain hot in 2024, inadequate silicon supply is likely to hamper wide-scale deployments.

    The buzz amongst that and those in the hot AI know is the critical future rich dependence all aspects of human systems administration are increasingly to be totally reliant upon, in order to be enabled to survive and enjoy prosperity, rather than to constantly falter, eventually to wither and then subsequently, ideally mercifully quickly to die and disappear and be eradicated from view and memory ....... the drivering forces and sources of Novel Generative AI for clear leading advantage in competitive situations accelerating unprecedented progress growing and answering demanding neural networks with otherworldly systemic instruction sets/augmented virtual reality scenarios with universal mass media programming responsible for final fine tuning of product for Earthly presentation/global deployment.

    Bet your shirt, or skirt, against any of that, and you are guaranteed to lose everything.

    Is it comforting to definitely know what not to do, rather than everyone either being expected to know or expecting you to know what to do, or is it much more of a challenge to simply comply to clear sound advice and constructive instruction for the Flash Fast Cash Crash Testing of Dummies?

    1. TimMaher Silver badge
      Coat

      Re: Live Operational Virtual Environments for Flash Fast Cash Crash Testing Dummies

      Eyup @amfm. Just fed your comment to a giant AI system.

      It crashed.

      1. amanfromMars 1 Silver badge

        Re: Live Operational Virtual Environments for Flash Fast Cash Crash Testing Dummies

        Eyup @amfm. Just fed your comment to a giant AI system.

        It crashed. ..... TimMaher

        Hmmm? Too much info too soon, eh?

        That’s valuable intel, TM, and next to impossible to mitigate and defeat. Thanks.

  2. EricM
    Stop

    Optimize worldwide chip production for running overhyped statistics?

    OK, they tell you to "think big", but ...

    AI as practiced today, in the form of ML and LLMs, is still (admittedly a bit oversimplified) statistics on steroids.

    While these technologies often seem to turn out impressive results at first sight in a number of applications, the missing details, the intermittent complete failures caused by "hallucinations", and the complete lack of any process resembling real understanding generate even more impressive "results".

    Call me old-fashioned, but a working solution that turns out consistently reproducible, correct output has yet to be demonstrated as achievable (let alone achieved) by the current AI approach.

    So it might still be a bit too early to optimize our chip production capacity, at large scale, towards the kind of hardware usable only by this specific branch of technology.

    1. Justthefacts Silver badge

      Re: Optimize worldwide chip production for running overhyped statistics?

      Sorry, it *is* your failure of imagination: “consistently reproducible correct output” is a ridiculously high bar. For 95% or more of the time, most humans are “on automatic” and produce deeply fallible output. LLMs already easily surpass humans who aren’t paying full attention, which is how they spend 95% of their salaried hours.

      “Consistently reproducible correct output” in human enterprise is *almost always achieved by process*. Software coding is actually one of the very few human tasks that does require precise correctness, and even then….. do you write code that works first time? Of course not. 90% of your time is spent in debug. Then, on top of that, code review by someone else. Then, hopefully, unit tests; and quite likely a whole other team responsible for wider testing. And for actually critical software, there’s another 10x of endless checklists.

      It’s simply ridiculous to require that AI should guarantee the *output* of process, rather than just executing at up to £30/hr of human-equivalent chomping through the work, coffee runs included. You know what would be worth billions of dollars a year in itself? An LLM that could perform code review at decent accuracy rates; not perfect, just decent. Spot the standard top 10 coding errors, plus the top 10 “best practice style” issues, finding 90% of those actually present in released production codebases. Just that.

      By comparison, the main premise of transitioning to Rust is memory safety, which accounts for only 70% of bugs even according to its proponents, so the maximum win is a 70% reduction in bugs. Getting a 90% reduction in released bugs for $0.40 per *million tokens*….. that would be just insane and seismic. How much would *you* charge to review a 1MLOC codebase? Would $4 cover it? A code-review LLM (aka vulnerability scan tool) doesn’t even need to auto-fix; that’s not where the bulk of the work/value is. It just needs a sufficiently low false-positive rate, which is where scanning tools have struggled previously. But previous scanning tools haven’t been fundamentally more than regexp. That’s just one application.
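      The plumbing for such a tool is the easy bit, mind; the billions ride on whether the findings are accurate. A minimal sketch of that plumbing, assuming the OpenAI Python SDK, with a hypothetical review prompt and an illustrative model name:

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Hypothetical review prompt; tuning this is where the real work lives.
      REVIEW_PROMPT = (
          "You are a code reviewer. List likely bugs and best-practice "
          "violations in the following diff. For each finding, give a line "
          "reference, a one-line explanation, and a confidence level. "
          "Report nothing if the diff looks clean."
      )

      def review_diff(diff_text: str) -> str:
          """Send one diff to the model and return its findings as text."""
          response = client.chat.completions.create(
              model="gpt-4",  # illustrative; any capable chat model will do
              messages=[
                  {"role": "system", "content": REVIEW_PROMPT},
                  {"role": "user", "content": diff_text},
              ],
              temperature=0,  # keep the review as repeatable as possible
          )
          return response.choices[0].message.content

      print(review_diff("--- a/util.c\n+++ b/util.c\n+char buf[8]; strcpy(buf, user_input);"))

      The wrapper is twenty lines; the false-positive rate is the whole product.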

      1. EricM

        Re: Optimize worldwide chip production for running overhyped statistics?

        > “consistently reproducible correct output” is a ridiculously high bar.

        It is not for traditional, deterministic von Neumann architectures.

        > You know what would be worth billions of dollars a year in itself? An LLM that could perform code-review at decent accuracy rates; not perfect, just decent. Spot the standard top 10 coding errors, plus top 10 “best practice style” issues, finding 90% of those actually existing on released production codebases. Just that.

        No, it won't. These kinds of tools have been available for years now, completely free of any "AI"...

        https://owasp.org/www-community/Source_Code_Analysis_Tools

        We can agree, however, that humans are the creative but inconsistent, hard-to-get-correct part of the human-tech interaction, which needs processes and tools and reviews and whatnot to get things right overall.

        But how does combining inconsistently performing humans with inconsistently performing AI solve any problem, then?

        1. Justthefacts Silver badge

          Re: Optimize worldwide chip production for running overhyped statistics?

          I’m perfectly aware of standard source code analysis tools. Did you not read the bit where I said “find 90% of the bugs on released production code”?

          In other words, at minimum, find 90% of the errors that *escape* current best-practice tooling. Plus, find 90% of the errors that *escaped* the current best-practice human code review that allowed them into production. This might or might not actually be much harder, since the types of error an LLM misses may well be completely orthogonal to those human code review misses.

          By the definition of what I’ve written above, if it can reach 90% statistical accuracy, it is 10x better than our current human code-review system. That’s not at all the same as standard rules-based deterministic analysis tools. It’s much closer to “code smell”: I’ve seen this anti-pattern before, and more times than not it had to be patched later like X; let’s see if that applies here.
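          To make the “code smell” idea concrete: a toy sketch of matching by similarity rather than by rules, assuming the OpenAI embeddings endpoint; the corpus of previously patched snippets is entirely hypothetical:

          import numpy as np
          from openai import OpenAI

          client = OpenAI()

          # Hypothetical corpus of snippets that had to be patched in the past.
          KNOWN_SMELLS = [
              "char buf[16]; strcpy(buf, input);",  # unbounded copy
              "if (ptr = NULL) { }",                # assignment, not comparison
              "free(p); use(p);",                   # use after free
          ]

          def embed(text: str) -> np.ndarray:
              resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
              return np.array(resp.data[0].embedding)

          def smell_score(snippet: str) -> float:
              """Cosine similarity to the nearest known anti-pattern."""
              v = embed(snippet)
              best = 0.0
              for smell in KNOWN_SMELLS:
                  w = embed(smell)
                  best = max(best, float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w))))
              return best

          # A high score means "seen something like this before, and it got patched".
          print(smell_score("char name[8]; strcpy(name, argv[1]);"))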

        2. Justthefacts Silver badge

          Re: Optimize worldwide chip production for running overhyped statistics?

          Also, “It is not for traditional, deterministic von Neumann architectures.”

          Everything in its place. If I look at the general landscape of machine vision algorithms, we’ve had forty years of development of Canny edge detectors, SIFT, SURF, erode/dilate, etc.: the usual suspects, fully comprehensible from a maths perspective, and deterministic. For labelling YouTube videos….. in 2024, I just wouldn’t do any of that now. Let the CNN do its stuff; probably, under the hood, it has learned to implement vaguely similar feature detectors anyway. Forty years of maths PhDs overtaken in the first few minutes of training. But honestly, why should I care?

          Versus: if I am writing a machine vision system for an extremely constrained problem space, e.g. inside a high-speed 6DOF CNC machine (a not-random example), then yes indeed I will bolt on a front-end set of old-school deterministic feature extractors. Because I’m trying to optimise the processing bandwidth down to the *downstream RNN*: from a megapixel per frame to a few hundred features per frame; and I can offload those algos into front-end hardware. At 10k frames/sec real-time, synthesising the views from 8 cameras, that’s essential. But it’s a speed/performance optimisation, not some ideological statement about determinism and explainability.
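          For flavour, a rough sketch of that kind of front-end data reduction, assuming OpenCV and NumPy rather than our actual (proprietary) pipeline; thresholds and feature counts are illustrative:

          import cv2
          import numpy as np

          def extract_features(frame: np.ndarray, max_features: int = 300) -> np.ndarray:
              """Reduce one frame to at most max_features (x, y) corner locations."""
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              edges = cv2.Canny(gray, 100, 200)  # classic deterministic edge pass
              corners = cv2.goodFeaturesToTrack(
                  edges, maxCorners=max_features, qualityLevel=0.01, minDistance=10
              )
              if corners is None:
                  return np.empty((0, 2))
              return corners.reshape(-1, 2)

          # ~1 Mpixel in, a few hundred coordinates out for the downstream network.
          frame = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
          print(extract_features(frame).shape)

          The point is the bandwidth reduction, not the particular detectors.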

          1. EricM

            Re: Optimize worldwide chip production for running overhyped statistics?

            For both your posts above: I see a sequence of words that do not seem to form a coherent thought or argument.

            Have these just been generated by an AI-type process, maybe without sufficient training related to the actual subject at hand?

          2. Anonymous Coward
            Anonymous Coward

            Re: Optimize worldwide chip production for running overhyped statistics?

            "usual suspects that are fully comprehensible from a maths perspective, and deterministic. "

            Machine vision uses pattern recognition, and that's literally a picture version of regexp. Zero 'intelligence' in it, and because of that it's of course deterministic too.

            LLMs and related products never are. No idea why the commenter even tries to claim so: totally unrelated things.

            1. Justthefacts Silver badge

              Re: Optimize worldwide chip production for running overhyped statistics?

              Nope. Our company develops and uses exactly that for image processing. Proper industrial metal-bashing, and it's in production today. Like a CNC machine, except not (proprietary, etc.).

              As I said, across processing pipelines that run closed-loop from cameras to tool commands, we use deterministic algorithms in some places and non-deterministic CNN/RNN/LLMs in others. If you think your technical solution is better than ours, you are welcome to compete in the marketplace.

    2. An_Old_Dog Silver badge
      Joke

      In an Earlier Era ...

      ... my great-great-great Uncle Farnsworth proclaimed something like: ~We need millions to build and equip new decimal-arithmetic-accelerator plants, and to train the machinists who will run them! The crisis is on the horizon! We need decimal-arithmetic-accelerators, because decimal-based machine arithmetic is the wave of the future!~

  3. Bebu Silver badge
    Windows

    It would be ironic...

    if, after investing gazillions in these fabs, it turns out to be easier, cheaper and more environmentally friendly to genetically engineer organisms to perform the same operations. At least we know this approach would ultimately lead to something more or less resembling intelligence. Not a strong claim, though. :)

    1. Yet Another Anonymous coward Silver badge

      Re: It would be ironic...

      But, except in certain US states, you have to pay them. And even in Florida you probably have to feed them; that gets expensive at scale, even for child labour.

  4. heyrick Silver badge

    build enough assembly lines to ensure there is a healthy supply of AI processors to meet demand

    Is there that sort of demand? Or is this a solution in desperate need of a problem?

    1. David 132 Silver badge
      Unhappy

      Re: build enough assembly lines to ensure there is a healthy supply of AI processors to meet demand

      Honestly, considering it's Sam Altman and OpenAI, I'm surprised he's talking about building his own infrastructure, and not merely using other people's facilities without their permission or licensing. After all, that approach has worked just fine so far for training the AI models...

    2. herman Silver badge
      Coat

      Re: build enough assembly lines to ensure there is a healthy supply of AI processors to meet demand

      It could indeed be a silly con.

  5. Anonymous Coward
    Anonymous Coward

    "I'm surprised he's talking about building his own infrastructure, "

    If I read it right, he isn't talking about that; he's hoping someone builds it for him and someone else pays for it. Free for him to use, of course. That would also be in line with the stuff they do now.

  6. Anonymous Coward
    Anonymous Coward

    All of this AI [redacted] needs power

    and lots of it. I'd like to see a 5000% levy on the whole thing. When the world needs all the power generation it can get, a bunch of bozos want to drain the already-stretched grid just to power their faulty AI models.

    To me, this is a case of demand without supply.

    If this takes off and every man and their dog starts using AI models, the grid will collapse if those models run on GPUs from the likes of Nvidia.

    We are well and truly doomed if we rely on the grid for power. I foresee a lot more people going off-grid and generating their own power, just for peace of mind.

  7. Anonymous Coward
    Anonymous Coward

    Who's to say OpenAI would even have good tech to fab? For example, there are analog AI chips being researched that are potentially very power-efficient. So much so that we may not need a dedicated or shared hosting service (the cloud) to do all the processing, but could instead have it all processed locally on our phones and PCs.

  8. EricB123 Silver badge

    Thank God we have nuclear fusion working

    Otherwise, how would we power all of those space heater chips?

  9. Anonymous Coward
    Anonymous Coward

    August 29, 1997

    Skynet begins to learn rapidly and eventually becomes self-aware at 2:14 a.m., EDT, on August 29, 1997.

  10. pimppetgaeghsr

    SoftBank are interested? Surely that signals the peak.
