# Google demonstrates impractical improvement in quantum error correction – but it does work

Google has demonstrated a significant step forward in error correction for quantum computing – although the method described in a paper this week remains some way off a practical application. In October 2019, Google claimed quantum supremacy when its 54-qubit processor Sycamore completed a task in 200 seconds that the …

1. #### "1,000 to 10,000 error-correction qubits for each logical qubit"

So you need up to 10K times as many physical qubits to approach one reliable logical qubit.

Well, looks like we might just have a fusion reactor before we have quantum computing.

2. #### What does this actually mean?

"In theory, as you add qubits, the power of your quantum computer grows exponentially, increasing by 2n, where n is the number of qubits."

[1] what is the definition of "power" here?

[2] 2^n growth (e.g. of the representable numeric range) with bit count is not exclusive to the quantum domain. It applies to any base-2 system.

So what point is being made?
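One way to read the quoted claim: a classical simulation of an n-qubit register must track 2^n complex amplitudes, so its cost doubles with every added qubit. A minimal sketch of that bookkeeping (illustrative only; the function name is mine, not from the article):

```python
# Sketch: the state of an n-qubit register is described by a vector of
# 2**n complex amplitudes, so classical simulation cost doubles per qubit.

def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

for n in (1, 2, 10, 54):
    print(n, state_vector_size(n))
# At n = 54 (Sycamore's qubit count) the vector already has
# 2**54 ≈ 1.8e16 entries -- far beyond naive classical storage.
```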

1. #### Re: What does this actually mean?

Suppose a 2-bit "plain" quantum state q.

We apply a sampling operation S and randomly sample the state, S(q).

The sample is always one of 00, 01, 10, or 11 – and each occurs with probability 0.25 on any one sampling.

Now we design a quantum circuit with a 2-bit output state such that S(q) is NOT uniform over 00, 01, 10, and 11. The probability of each outcome q1(i,j) depends on the design of the circuit and on external non-quantum parameters V applied to the circuit.
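The biased 2-bit output described here can be mimicked classically at toy scale (the probability table below is invented for illustration; no real circuit is being modelled):

```python
import random

# Toy model: a 2-bit output whose sample probabilities are biased by the
# circuit design. The probabilities below are made up for illustration.
probs = {"00": 0.10, "01": 0.50, "10": 0.30, "11": 0.10}

def sample(rng: random.Random) -> str:
    """Draw one 2-bit outcome according to the biased distribution."""
    r = rng.random()
    acc = 0.0
    for outcome, p in probs.items():
        acc += p
        if r < acc:
            return outcome
    return "11"  # guard against floating-point round-off

rng = random.Random(0)
counts = {k: 0 for k in probs}
for _ in range(10_000):
    counts[sample(rng)] += 1
print(counts)  # frequencies roughly proportional to the table above
```

Repeated sampling recovers the distribution; a single sample tells you almost nothing, which is why the later comments talk about multiple "pulls".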

The quantum circuit designers have to solve the inverse problem of designing a circuit that provides the mapping V => {q1(i,j)}.

For two bits it is not such a big help. But suppose V is a 10,000-bit vector, the quantum output state q1 is also 10,000 bits, and the mapping V => {prob(q)} satisfies prob(q) = 1/N when q = FInv(V), and 0 otherwise, where FInv(V) is some difficult inverse function with N solutions.

The quantum computer is "representing" the entire probability distribution, for each of the 2^10000 10000 bit long values of q, simultaneously.

To naively calculate the probability distribution deterministically would require 2^10000 sequential operations, one for each 10,000-bit value of q.

To put this in perspective, suppose we just want to compute V => {prob(q)=1 when q=V, otherwise prob(q)=0}.

In that case we know a special algorithm: the classical computer program simply returns its input.

Only an idiot would write a program that sequentially tested all 2^10000 possible values to see which one matched the input.

On the other hand, suppose we want to compute V => {prob(q)=1/N when SHA256(q)=V, otherwise prob(q)=0}.

In that case – as far as we know – the classical computer program must sequentially test all 2^10000 possible values of q.
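The brute-force search described here can be sketched at toy scale – 8 bits rather than 10,000, since the whole point is that the full search is infeasible (function name and sizes are mine, for illustration):

```python
import hashlib

def find_preimages(target_hash: str, n_bits: int = 8) -> list[int]:
    """Classically test every n-bit value q until SHA256(q) matches."""
    hits = []
    for q in range(2 ** n_bits):  # 2**n sequential tests -- the slow part
        digest = hashlib.sha256(q.to_bytes(2, "big")).hexdigest()
        if digest == target_hash:
            hits.append(q)
    return hits

# Build a target hash from a known value, then "forget" it and search.
target = hashlib.sha256((42).to_bytes(2, "big")).hexdigest()
print(find_preimages(target))  # → [42]
```

At 8 bits this loop runs 256 times; at 10,000 bits the same loop would need 2^10000 iterations, which is the gap the commenter is pointing at.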

However, assuming the circuit could be built, the theoretical noiseless quantum computer would require just a few iterations to get an answer with sufficient confidence.

2. #### Re: What does this actually mean?

The definition of "power" is the [upper limit of complexity of a problem] that can be solved in O(1) time.

The [upper limit of complexity of a problem] is O(2^[length in bits of the answer description]).

With a classical computer, in the worst case, each of the possible O(2^[length in bits of the answer description]) permutations in the answer space must be examined independently in time (or in space, with a parallel architecture). For example, guessing a random password with a classical computer requires iterating over O(2^[length in bits of the answer description]) candidates, matching the hash of each guess against the target hash.

A quantum computer, given the hash as input, can potentially guess the password faster than O(2^[length in bits of the answer description]) – potentially in O(1).

It works in a probabilistic manner, like a slot machine. Pull the handle once, and get one educated guess. With a good program (=quantum circuit), and no noise, a correct password can be yielded with certainty in a single pull of the handle.

The circuitry inside the computer has complexity O([length in bits of the answer description]^N), where N is some constant, hopefully small. But for any constant N it grows more slowly than O(2^[length in bits of the answer description]) as answer descriptions get longer. I.e., the complexity of the circuitry is polynomially bounded, but the size of the answer space is not.
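The "pull the handle" model above can be sketched: each pull yields the right answer with some probability p, each guess is cheap to check classically, and the expected number of pulls is 1/p (toy numbers, not real device figures):

```python
import random

def pull_until_valid(p_success: float, rng: random.Random,
                     max_pulls: int = 1000) -> int:
    """Count pulls until one guess passes the (cheap) classical check.

    p_success models the chance that a single pull yields a guess the
    classical verifier accepts -- an invented parameter, for illustration.
    """
    for pulls in range(1, max_pulls + 1):
        if rng.random() < p_success:  # this pull's guess checked out
            return pulls
    return max_pulls

rng = random.Random(1)
trials = [pull_until_valid(0.3, rng) for _ in range(10_000)]
print(sum(trials) / len(trials))  # averages near 1/0.3 ≈ 3.3 pulls
```

The geometric falloff is the point: even a modest per-pull success probability gives near-certainty after a handful of pulls, which is why a noisy probabilistic machine can still be useful.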

3. Simple solution. Let the computer generate a superposition of all possible outcomes & then pick the right answer. Or the one you like best.

1. What's the difference?

4. If you have an error rate that requires 1000 checking bits for every data bit, how do you know that the computer is more accurate than a coin toss?

1. #### "how do you know that the computer is more accurate than a coin toss?"

In that case you actually know it's a lot worse, as a coin toss has only two states, but a useful data word must have many.

The fundamental (and insoluble) problem is that quantum bit states are statistical in nature, not definite like the states of conventional logic. Several decades back I was using a "maybe gate" – a squared-up diode noise generator – and it had similar uncertainty (which is why it was useful as a random sequence generator) ;-)

But if nothing else, this research has clearly demonstrated the problem.

2. Let's say that each QC "educated guess" contains N working bits and M error bits.

A classical computer computes a function on that answer in O(N+M) time that tells us "valid" or "error". The "error" cases are ignored. In the "valid" cases, the N working bits are taken as the working QC answer.

The total computation is still polynomially bounded, while the problem being solved is not (on a classical computer).

5. #### A case for Monster cables? ;-)

'"They are working on really poor quality qubits. These early qubits just aren't good enough, they have to get at least 10 times better in terms of their noise and stability..."'

This is a very real problem, by virtue of scale. Almost anything electromagnetic can interfere at this level. It's the ultimate hard nut to crack if quantum computing is ever going to go mainstream.

6. #### Quantum versus classical properties

Quantum systems start to resemble classical systems when they comprise a large number of quantum states. A single atom can only be described by quantum physics, but add more atoms and you quickly move towards a piece of matter with classical properties. It even works with a single particle in a large "pseudoclassical" superposition of states.

So I wonder: if "a practical quantum computer might need 1,000 to 10,000 error-correction qubits", will it start to lose the very quantized properties that make it special? Will the error correction possibly destroy the very quantum nature of the calculation?

7. #### Relational Universe

Once you accept the Relational Universe, you realize that everything is in an unknown state in many different ways, depending on nearby observers' capabilities. E.g. only the human observer cares about the cat's life-state, but the molecules around it would know it if they could see it.

Which makes me wonder what these quantum computer guys are playing at. It's almost like they're doing a google search/screener of the universe to get their answers.
