Google claims milestone in quantum error correction

Google is claiming a new milestone on the road to fault-tolerant quantum computers: a demonstration that a key error-correction method, which groups multiple physical qubits into a single logical qubit, can deliver lower error rates, paving the way for quantum systems that scale reliably. A team at Google Quantum AI said it has …

  1. A Non e-mouse Silver badge

    Why are Google shooting for a 1,000,000 qubit system when only 4,000 qubits are needed by Shor's algorithm for breaking RSA?

    1. Claptrap314 Silver badge

      Try reading the article again.

      1. Michael Wojcik Silver badge

        Or, indeed, anything about practical quantum computing.

    2. mpi Silver badge

      "That million qubit machine is intended to have roughly a thousand to one overhead, resulting in 1,000 reliable logical qubits available for quantum calculations"

      So in short, 1E6 physical qubits provide 1E3 practically usable logical qubits ... so even this ambitious goal would still fall short of realising Shor's algorithm in practice. That is, IF the three-orders-of-magnitude overhead is enough.
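
      As a back-of-the-envelope check, a minimal Python sketch (the machine size, overhead ratio, and the 4,000-logical-qubit figure for Shor's algorithm are just the numbers quoted in this thread, not authoritative estimates):

        PHYSICAL_QUBITS = 1_000_000  # Google's stated target machine size
        OVERHEAD = 1_000             # physical qubits per logical qubit, as quoted
        SHOR_LOGICAL = 4_000         # logical-qubit figure cited above for breaking RSA

        logical = PHYSICAL_QUBITS // OVERHEAD
        print(f"usable logical qubits: {logical}")              # 1000
        print(f"shortfall for Shor: {SHOR_LOGICAL - logical}")  # 3000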

      1. Michael Wojcik Silver badge

        With surface codes, the 1000:1 ratio is exceeded at a 23x23 code. Since the move from 3x3 to 5x5 only decreased the error rate by ~0.1 percentage points, I'm not wildly optimistic that even 1000:1 will yield a low enough error rate for some of the possible applications – including breaking cryptographic keys. Perhaps for the occasional high-value target, but not in bulk.
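
        That 23x23 figure follows if you assume the usual qubit count for a rotated surface code, 2*d*d - 1 physical qubits per logical qubit (a sketch under that assumption; real layouts vary):

          def surface_code_qubits(d: int) -> int:
              # Rotated surface code of distance d: d*d data qubits plus
              # d*d - 1 measurement qubits = 2*d*d - 1 physical per logical.
              return 2 * d * d - 1

          for d in (3, 5, 21, 23):
              print(d, surface_code_qubits(d))
          # d=21 -> 881, still under 1000:1; d=23 -> 1057, the first odd
          # distance to exceed it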

        Note that breaking an ECC key of N bits requires ~6N error-corrected qubits. So a 1000-ECQ machine would only be effective for ECC keys around 167 bits, which isn't a threat for any commonly-used or NIST-approved curves. (See the just-published SP 800-186, which doesn't recommend any curve smaller than 223 bits.) You'd need a machine about half again as large even to attack the ECC curves that NIST assigns to the "128-bit" security equivalence level (aka "stuff that's not really important").
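
        Putting numbers on that ~6N rule of thumb (a sketch; 6N is the estimate quoted above, not an exact resource count):

          def ecq_needed(curve_bits: int) -> int:
              # ~6N error-corrected qubits to attack an N-bit ECC key
              return 6 * curve_bits

          print(1_000 // 6)  # 166: roughly the largest attackable curve size
          for bits in (223, 256):
              print(bits, ecq_needed(bits))
          # 223 -> 1338; 256 -> 1536, i.e. ~1.5x a 1000-ECQ machine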

        Quantum computers are going to be a lot more interesting for other purposes – such as physical simulations – than for breaking cryptography, for the foreseeable future.

  2. Long John Silver

    Silly questions?

    Suppose a quantum computer deemed suitable for practical use nevertheless throws up occasional uncorrected errors. How do these manifest to the computer's operators and/or to people running software on the system?

    Unless a program collapses into spewing obvious nonsense, how might subtle, yet important, miscalculations be recognised? Does one have to resort to repeating a calculation many times and somehow constructing an unbiased estimate of the correct answer?

    1. Brewster's Angle Grinder Silver badge

      Peircean reasoning

      Disclaimer: I don't know.

      But, naively, many of the things quantum computers will be employed to do can be checked classically - e.g. if you're factorising a number, it's easy to check the result.

      Failing that, you could run it multiple times and see if the answer is the same. If it is, then you can be confident that an undetected error hasn't crept in.
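
      A toy illustration of both checks, using factoring as the example (quantum_factor here is a hypothetical stand-in for a noisy quantum subroutine, not a real API):

        import random

        def quantum_factor(n: int) -> int:
            """Hypothetical noisy oracle: returns a candidate factor of n,
            which may well be wrong."""
            return random.randrange(2, n)

        def verified_factor(n: int, attempts: int = 50) -> int | None:
            # The cheap classical check: accept only a candidate that
            # actually divides n.
            for _ in range(attempts):
                candidate = quantum_factor(n)
                if 1 < candidate < n and n % candidate == 0:
                    return candidate
            return None

        print(verified_factor(35))  # 5 or 7 once a run verifies; None if every run fails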

      1. Michael Wojcik Silver badge

        Re: Peircean reasoning

        Right. When we talk about quantum computing, we're most often talking about algorithms in complexity class BQP (bounded-error quantum polynomial time) applied to problems whose answers can be verified classically in polynomial time. Getting the answer is hard; checking it is relatively easy.
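
        When the answer isn't cheaply verifiable, repetition still helps: for a bounded-error routine that's right with probability p > 1/2 per run, a majority vote over k runs fails with probability at most exp(-2k(p - 1/2)^2), the standard Chernoff bound (a sketch, using BQP's conventional 2/3 success threshold):

          import math

          def majority_failure_bound(p: float, k: int) -> float:
              # Chernoff bound on the majority of k independent runs,
              # each correct with probability p > 1/2.
              return math.exp(-2 * k * (p - 0.5) ** 2)

          for k in (1, 11, 101):
              print(k, majority_failure_bound(2 / 3, k))
          # k=101 already pushes the bound below 0.4%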
