From quantum AI to photonics, what OpenAI’s latest hire tells us about its future

Quantum computing has remained a decade away for over a decade now, but according to industry experts it may hold the secret to curbing AI's insatiable appetite. With each passing month, larger, more parameter-dense models appear, and the scale of AI deployments expands in tandem. This year alone, hyperscalers like Meta plan to …

  1. Anonymous Coward

    If it attracts more investment capital, it's profitable

    in the short run.

  2. Pascal Monett Silver badge

    "it'll take [..] about a million physical qubits just to compete with modern GPUs"

    Sorry, why?

    We have been repeatedly told that the quantum computer resolves all possible values in one go. I am aware that modern GPUs have more than a million transistors, but I am not aware that anybody has yet drawn an equivalence between how many transistors are needed to equal a qubit.

    Now, all of a sudden, a million physical qubits are needed to equal a single Nvidia GPU?

    When the best anyone can do at the moment in a lab is a 1,000-qubit computer, quantum computing is looking weaker by the year.

    The NSA would do well to set up a 1000-GPU system to crack encryption, rather than wait for quantum.

    1. Michael Wojcik

      Re: "it'll take [..] about a million physical qubits just to compete with modern GPUs"

      We have been repeatedly told that the quantum computer resolves all possible values in one go.

      Not by anyone who understands how QC works. This is utterly, completely wrong.

      When the best anyone can do at the moment in a lab is a 1,000-qubit computer

      Even that claim is pretty dubious. The New Scientist piece is just marketing fluff; they don't cite any independent verification.

      1. Erik Beall

        Re: "it'll take [..] about a million physical qubits just to compete with modern GPUs"

        Have an upvote for the link to the Aaronson-informed comic!

  3. druck Silver badge

    The universal principle of no free lunch.

    Quantum computers aren't going to work, they aren't going to crack encryption, and they are going to do sod all for 'AI'.

    1. Michael Wojcik

      Re: The universal principle of no free lunch.

      Quantum computers already work, so that's your position disproved right there.

      Has anyone actually demonstrated quantum supremacy? Debatable. Aaronson's blog has some good posts on the question, and none of the results thus far are in areas where we can have really high confidence that classical methods couldn't be improved.

      Will we resolve QC scaling and error-correction sufficiently for some practical problems, such as "crack[ing] encryption" (by which I'll assume we mean factoring reasonable-sized RSA keys and computing FF and ECC discrete logs of reasonable size)? Unclear.

      Will QC ever be economically feasible for anything besides very specific applications, where we put a few very expensive systems to work on a handful of hard problems, most likely quantum-physics simulations? I suspect not, at least not in my lifetime. Physics simulations still look like the best application.

      Will QAOA actually prove useful? Will we find a lot of other useful algorithms in BQP? I'm not holding my breath.

      But quantum computers most definitely do work.

      1. Anonymous Coward

        Re: The universal principle of no free lunch.

        The universe itself is already the perfect QC simulator, and there is no question that it is "working". Here is a theory: "In our quantum-behaving universe, the space/time/energy requirements of QC scaling and error correction rise exponentially". So far that theory has not been disproven, and all the data from efforts to date to scale QC should count as empirical evidence. I haven't seen that data plotted on a graph, with predictions made as to how it will scale, though. I feel that the phrasing "Will we resolve QC ... ?" carries the unspoken implication that failure would simply be the result of humans not trying hard enough, with the side effect of possibly ignoring the discovery of basic laws of quantum physics.

        I'm an outsider, so feel free to discount my amateur opinion. You seem to be more educated about the field than I am, and I respect that.

  4. The Man Who Fell To Earth Silver badge
    WTF?

    Quantum + AI

    Yea, that will fix it.

  5. Michael Wojcik

    Ugh, what a lot of rubbish

    I don't know what specifically OpenAI think Bartlett is going to do for them, much less QC (if that's actually what they hired him for). I'd be interested to hear Aaronson's take on this, but he hasn't posted anything to his blog about it yet.

    Frankly photonics seems more likely than QC as a role for Bartlett at OpenAI. Still pretty much speculative primary research, but at least it might be applicable to what OpenAI do.

    The stuff mentioned in the article regarding QC is rubbish, frankly. QAOA and other "quantum optimization" approaches have yet to be shown to achieve anything useful; there's no proof QAOA is better than classical approaches, despite years of work and many published papers. "Optimization problems" have attracted a lot of attention in the QC community because there's money behind them. The results, despite much noise, have not been "promising". Or if they've promised, they haven't delivered.

    the unique attributes inherent to quantum computers allow them to explore these factors simultaneously: No. They. Do. Not. Please stop repeating this misconception.
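    [Ed.: a toy classical sketch of the point being made above, not real quantum hardware or any real QC library. Measuring a uniform superposition over n qubits yields one random basis state per shot; the other 2**n - 1 "explored" values are not handed back, which is why useful quantum algorithms need interference tricks rather than brute parallelism.]

    ```python
    import random

    def measure_uniform_superposition(n_qubits: int) -> str:
        """Classically sample ONE basis state, as a measurement of a
        uniform superposition over n_qubits would: each shot collapses
        to a single random bitstring, not to 'all answers at once'."""
        outcome = random.randrange(2 ** n_qubits)
        return format(outcome, f"0{n_qubits}b")

    random.seed(0)
    # One shot, one 8-bit string; repeating just gives more random strings.
    print(measure_uniform_superposition(8))
    ```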

    Nothing from DWave is worth the space taken to quote them; they don't do general QC (just adiabatic QC, which is utterly irrelevant here). That comparison to neurons is embarrassing.

  6. Anonymous Coward

    Fragility with respect to noise

    That is THE wall that quantum computing (as it is formulated today) has to climb. It's not clear whether that's simply a hardware problem waiting to be solved, or a physics problem of proving its cost is exponential. Progress in noise control so far has consisted of adding more circuits to compensate for noisy results, so there are now QCs with several thousand qubits, but in equivalent noise-free qubit terms that would be a lot less. In my book, experimental evidence consistent with QC having exponential cost would be a success, simply because it is experimental evidence consistent with scientific methodology. I'm not so sure the QC investment industry would agree.
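    [Ed.: a back-of-envelope sketch of the overhead the commenter describes, assuming roughly 1,000 physical qubits per fault-tolerant logical qubit, a figure often quoted for surface-code schemes; real overheads depend on gate error rates and the target logical error rate.]

    ```python
    # Assumed surface-code-style overhead (illustrative, not a measured figure).
    PHYSICAL_PER_LOGICAL = 1_000

    def logical_qubits(physical_qubits: int,
                       overhead: int = PHYSICAL_PER_LOGICAL) -> int:
        """Estimate noise-free ('logical') qubits from a raw physical count."""
        return physical_qubits // overhead

    # A "several thousand qubit" machine yields only a handful of logical qubits:
    print(logical_qubits(5_000))      # 5
    # The oft-cited million physical qubits yields about a thousand:
    print(logical_qubits(1_000_000))  # 1000
    ```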

    In contrast, intelligence as we see it in natural lifeforms is incredibly robust with respect to noise. Our own intelligence handles noise so well that, more often than not, we can filter the noisy, hallucinatory results of primitive so-called "AI" and extract the useful parts while ignoring the rest, using fact-checking and/or common sense.

    There is a heretical theory that we are the result of an unpatented process: a series of hacked-up alpha/beta tests generating marginal improvements, some of them significant, others minor - as opposed to the established orthodox view that Sam said "Let there be intelligence!" and created us as a proprietary, cheaper, consumer-grade version in his own image. Only time will tell which is true.
