Really?
"a fault-tolerant system capable of generating reliable results was likely a decade or more away"
I've got the feeling that we'll be hearing the same soundbite in a decade.
Japan's government scientific research institute Riken is hedging its bets on quantum computing with the deployment of Quantinuum's trapped-ion H1 systems at its facility in Wako, Saitama. Riken aims to harness quantum computing as an accelerator for traditional high performance computing (HPC) applications. To do this the …
In the case of QC, significant progress has been made. It's very likely we have "quantum supremacy" — QC systems that have solved problems infeasible for conventional computers — albeit only for a few experimental runs, on problems like quantum-circuit sampling which are not, perhaps, wildly exciting outside the field of QC itself.
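A back-of-the-envelope sketch (my arithmetic, not the commenter's) of why those circuit-sampling problems are infeasible classically: an exact classical simulation of an n-qubit state stores 2**n complex amplitudes, so memory doubles with every qubit added.

```python
# Hedged illustration only: memory for a brute-force classical statevector
# simulation of n qubits, assuming 16 bytes per complex amplitude.
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Bytes needed to hold 2**n complex128 (16-byte) amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(30) // 2**30, "GiB")  # 30 qubits: 16 GiB
print(statevector_bytes(53) // 2**50, "PiB")  # ~Sycamore scale: 128 PiB
```

Thirty qubits already want more RAM than a laptop has; fifty-odd qubits want more than any machine on Earth, which is the whole point of the supremacy experiments.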
Since the most plausible actually-useful application of QC is in physics simulations, we may not need perfectly error-corrected QC systems: relatively few error-corrected qubits (relative to the number needed for Hollywood applications like the OMG BREAK ALL THE CRYPTO nonsense) are enough to perform simulations we can't reasonably do any other way.
But regardless, QC is likely to remain niche for quite a long time, if not forever. Google Sycamore, for example, is useful — but mostly it's useful for developing other QC systems.
As usual, Scott Aaronson's blog is the best-known source for information in this area for non-specialists, and is worth reading if you're actually interested in the subject. While Scott's word isn't the last on any subject (and he'd be the first to admit that), I think he does a good job of explaining what's happening.
This could be one of those circumstances where the other big hype train, AI, actually bears fruit: if quantum researchers can train a model to analyze their quantum machines, perhaps it will help them to iterate faster and create a workable, scalable quantum computer.
Building on that idea, what happens when an AI built on quantum computing arrives? I doubt I'll see it in my lifetime, and I'm not sure I want to.
You might train an LLM on all the QC research papers and, with a lot of prompting and exploration, find some subtle ideas worth pursuing. Beyond that I don't immediately see an application of DNNs to QC research. Well, maybe on the materials side; there's been some good work using DNNs to explore the space of possible innovations in materials.
The other suggestion — building "AI" on QC — is, at this point, nonsensical, as far as I can see. They're diametrically opposed information technologies. The sort of thing that gets called "AI" (ugh) these days is large ANN stacks using either gradient descent or diffusion as their particular trick to provide (ostensibly) impressive features. They require vast amounts of conventional computational resources. QC requires lots of physical resources to implement a very small number of noisy qubits, which are useful only for 1) simulating quantum processes (well, "simulating" in the sense of actually implementing, in a controlled way), and 2) implementing algorithms in complexity class BQP with fewer operations than would be required on a conventional machine.
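For a concrete sense of point 2, here's a hedged query-count comparison (the numbers are mine, not the commenter's) using Grover search, a textbook BQP-speedup example: finding a marked item among N unsorted items takes roughly (π/4)·√N oracle queries on a quantum machine versus about N/2 expected classically.

```python
import math

# Illustrative query counts only -- not an implementation of Grover's algorithm.
def classical_queries(n_items: int) -> int:
    """Expected oracle queries for unstructured classical search (~N/2)."""
    return n_items // 2

def grover_queries(n_items: int) -> int:
    """Approximate Grover oracle queries: ceil((pi/4) * sqrt(N))."""
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

N = 2 ** 20
print(classical_queries(N))  # 524288
print(grover_queries(N))     # 805
```

Note the speedup here is quadratic, not exponential, and running it at any interesting scale would need exactly the error-corrected qubits nobody has yet — which is rather the point of the thread.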
I'm not aware of any ANN algorithms in BQP. Seems like someone would have mentioned it if any were known. Like, say, Aaronson, who's a major QC researcher but is currently doing a year at OpenAI.
Having said that, the general consensus is that practical applications of quantum computing are still years away from reality. In October, Fujitsu said as much, cautioning that a fault-tolerant system capable of generating reliable results was likely a decade or more away.
Yes, well, they would say that, wouldn't they? And so would you, I'd wager, in the same position, when a fault-tolerant system capable of generating reliable results is likely to end up a private utility rather than a public facility.
Bravo, Fujitsu. Bravely and stealthily sticking it to the man, and biting the hands that feed IT and AI fodder.
In October, Fujitsu said as much, cautioning that a fault-tolerant system capable of generating reliable results was likely a decade or more away.
Makes you wonder which October, and which system Fujitsu were referring to. Perhaps all Fujitsu systems are designed and coded to the same high standards they're currently getting press coverage for here in the UK.