Schrödinger? Oh, goody
Can we literally put Google and IBM in a box with poison and a decaying isotope???
Just before Christmas, Google claimed quantum supremacy. The company had configured a quantum computer to produce results that would take conventional computers some 10,000 years to replicate - a landmark event. Bollocks, said IBM - which also has big investments both in quantum computing and not letting Google get away with …
Good to see the private sector investing heavily in the research for this technology. If it pays off it could be a big win. Of course the money will all be wasted.
Kudos to both -- at least they are putting their money where their mouths are. You never know, we may all benefit big time if either makes a breakthrough.
Their money, their choice!
I read not long ago that the largest number of functional qubits was 16. Well, that's been blown out of the water.
Progress in this field seems to be going strong. One day they might actually be able to do something useful with it.
Now the question is: are 56 qubits enough for everyone?
56? Pretty sure 53.
Not that that's a significant objection to what Google have achieved, by the way. I've been something of a QC skeptic myself - not that I didn't believe in the published results, just that I've felt it was likely that 1) it would take a long time to achieve quantum computational supremacy, if 2) it was feasible at all. (Interested parties can hunt down some of my earlier posts for links to arguments from physicists and others supporting the latter position. I wasn't convinced, but I thought some of the arguments were at least plausible.)
And we still have a long way to go before we have a machine with enough "logical qubits" (that is, error-corrected qubits, which may well mean orders of magnitude more physical qubits) to compute arbitrary problems in BQP. The random-circuit problem Google demonstrated on Sycamore essentially returns a distribution, not an exact answer; you couldn't use a scaled-up-by-100 Sycamore to factor a 2048-bit RSA product, because it doesn't have the error correction (i.e. it has 53 physical qubits, but not 53 logical ones).
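To put very rough numbers on that "orders of magnitude" point, here's a back-of-the-envelope sketch in Python. The ~1,000 physical-per-logical figure is just a commonly quoted surface-code ballpark I'm assuming for illustration; it's not from the Google paper.

```python
# Back-of-the-envelope sketch of error-correction overhead.
# ASSUMPTION: roughly 1,000 physical qubits per error-corrected logical
# qubit -- a commonly quoted surface-code ballpark, used purely for scale.
PHYSICAL_PER_LOGICAL = 1_000

def physical_qubits_needed(logical_qubits: int) -> int:
    """Rough count of physical qubits for a given number of logical ones."""
    return logical_qubits * PHYSICAL_PER_LOGICAL

# Sycamore has 53 physical qubits and zero error-corrected logical ones.
# Even a modest 53-logical-qubit machine would want something like:
print(physical_qubits_needed(53))   # ~53,000 physical qubits
# and Shor's on a 2048-bit modulus needs thousands of logical qubits,
# i.e. millions of physical ones -- a very different class of machine.
```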
But, yeah, I have to say my predictions were wrong. It still looks like we have plenty of time to roll out post-quantum asymmetric crypto before anyone has a machine that can practically crack RSA or ECDHE, but we might have useful QC machines for things like simulating quantum processes in a few years. That's cool. (And it's nice that the Google approach is not particularly resource-intensive.)
As usual, Aaronson has some good posts. He wrote this on the IBM rebuttal, and it mentions some other critiques and links to the "quantum supremacy FAQ" he wrote when the Google paper was leaked.
Word. I'll be impressed when somebody makes money out of it.
The marketing peeps will likely be taking zeros, superimposing a single quantum of utility on them and calling it a success for a while yet.
Unless Google can use it to sell ads, it won't help the business.
Or the sub header, to be precise. I hope the day El Reg stops producing such ledes is the day they open Erwin's kitty box, to this music, of course
The article covers this nicely.
Google’s ability to do anything with a quantum computer is impressive but the achievement itself is so specific as to be practically useless, while IBM demonstrated the flexibility of monstrous conventional computing power by undertaking a task not of their choosing.
If the IBM machine set a challenge and the Google machine stepped up we’d be in a whole different discussion.
"Regardless of what IBM says, I think Google's claim still stands - you can hardly call a supercomputer as being "conventional"
You need to include price as well for both sides - QCs aren't cheap or readily available at the moment, and doing "just enough" to demonstrate they are faster than conventional, general-purpose systems may be setting the bar too low.
Given that Google will likely exceed their current QC compute levels, the exact system that counts as the point of equivalence is likely to change when we look back - I think it will be set higher than the current estimates until QCs can handle a wide range of problems and become practical replacements for large, conventional facilities like the one IBM has highlighted. However, this is likely a matter of time, and those with functioning, high-qubit equipment are the groups most likely to succeed at this task.
The problem for IBM is that history will forget the doubters and the losers...
"conventional" in this context referrers to the basic operating structure of the computer, more or less as defined by Alan Turing. So digital super computers are included as conventional. As I understand it there is a class of problems that in which no "conventional" computer could beat a fully functioning quantum computer.
To prove this in practice you need a problem which has been studied enough that we know (or at least believe we know) the fastest algorithm for solving it on a conventional computer. Part of IBM's objection was that the problem solved was not one that has been studied enough to have figured out the fastest algorithm, so there may well be an algorithm that allows an existing conventional computer to beat Google's quantum computer.
Part of IBM's objection was that the problem solved was not one that has been studied enough to have figured out the fastest algorithm, so there may well be an algorithm that allows an existing conventional computer to beat Google's quantum computer.
It's difficult to prove an algorithm is "the fastest", and impossible, in general, to prove that an implementation is optimal (from AIT; see Chaitin or Kolmogorov).
But I'm with Scott Aaronson on this: It seems very unlikely that there's a sub-exponential classical algorithm for the random-QC problem. He proposed offhand (and probably wouldn't be held to this, but it's suggestive) that P=PSPACE is about as likely.
Objections that hold more water are that what Google have shown is either near quantum computational supremacy ("we can do that on a really big conventional machine"), or that it's "baby" QCS, since without error correction (which would require a whole lot more physical qubits) the hardware is only suitable for a small subset of problems, without a lot of practical applications.
Random-QC does have at least one application, though - Aaronson notes it can be used to implement his protocol for generating a stream of computationally-provably-random bits. And as I noted in another post, things like quantum physics simulations have requirements that may be more achievable than those of things like Shor's and Grover's algorithms.
it MIGHT be useful in applications such as trying to determine protein structures based on the sequence alone. Don't know if it's still running but you used to be able to dedicate your box's off time to do this a la SETI at home.
The other application which suggests itself is drawing likelihood-based species relatedness trees. What happens at the moment is the papers give you all the highest-likelihood trees, then let you choose which one you think fits best while they defend their choice.
Delighted to see Rupert writing for El Reg, but I think "a frozen lunchbox consuming virtually no power in its core can outperform a computer chewing through enough wattage to keep a small town going" is carefully drawing the lines to ignore the power consumption of cooling the whole lot to 15 millikelvins.
is carefully drawing the lines to ignore the power consumption of cooling the whole lot to 15 millikelvins.
Oh I don't know, Maggie Thatcher was rumoured to be able to get the subject of her disapproving glacial stare down to a few microkelvins, and the only energy input required would be someone standing next to her repeatedly saying the phrases "Arthur Scargill" and "poll tax".
Maybe Theresa May would make a suitable present day substitute now she's got more time on her hands, all you'd need would be someone standing behind her whispering incessantly into her ear "strong and stable", "hung Parliament"...
Not that I claim any ability to really understand the low down scientific details, but as a middle aged and somewhat jaded IT professional, QC is about the only area of computing science I still have any form of interest in following outside of daily work. ..... Aristotles slow and dimwitted horse
The present question to power the future is simply whether one can lead anything, and therefore also everything, with QC in Absolute Command and Overall Control? To realise yes, certainly, has one incredibly advantaged with many abilities much sought after to exercise and prepare for employment and deployment of largesse. Is that not akin to a Holding of Right Royal Privilege?
:-) Seems like Harry and Meghan are keen to explore the more Antony and Cleopatra Root for Service and Servering rather than suffer being mirrored in an Edward and Mrs Simpson concoction of bitter confections.
Aristotles slow and dimwitted horse, there is a great deal more following your interest yet to come, and if it stops coming, it has only been temporarily stopped by spooky forces beyond conventional control.
So, in the near future, your code and data will simultaneously be running/stored on the quantum processor cloud, whilst, at the same time, not being either accessible to you nor owned by you. The results, if any, that you get back both will and won't be the results that you wanted to see and both will and won't have been mangled with/duplicated during the whole process. But fear not, for there are two certainties, you will be using this and you will be paying for it (through the nose with a lot of hard ca$h).
Actually, all of the things I personally use a computer for, and all the things my customers do with my software, either aren't applicable to QC or wouldn't derive enough benefit from it to bother.
Most computer applications need QC the way most transportation applications need a Saturn V rocket. It's kind of a specialized use case.
A huge question which I see is: OK, the present Google machine has 56 qubits = a 2^56-size solution set.
How do you verify that an algorithm is working correctly over this range of solution set? Existing systems can't seem to do a very good job of testing/quality control - will quantum magically change this situation?
I wonder because once you start going into the 2^100+ range - this is literally the million monkeys on typewriters for a million years scenario. Makes hash collisions really interesting - in cryptography, for example.
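For a feel of the scale (taking my 2^100 figure purely as an example), here's a quick birthday-bound sketch in Python - nothing quantum about it, just the classical arithmetic for when collisions become likely:

```python
# Birthday-bound sketch: with k uniform draws from a space of size N = 2^n,
# the chance of at least one collision is roughly 1 - exp(-k^2 / (2N)).
import math

def collision_probability(k: float, n_bits: int) -> float:
    """Approximate chance of a collision after k draws from a 2^n_bits space."""
    N = 2.0 ** n_bits
    return 1.0 - math.exp(-(k * k) / (2.0 * N))

# In a 2^100 space you need around 2^50 samples before a collision is even plausible:
print(collision_probability(2.0 ** 50, 100))   # ~0.39
print(collision_probability(2.0 ** 51, 100))   # ~0.86
```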
There is a class of problems where checking the answer is trivially easy but finding the answer is very very hard. That is the mathematical foundation of data encryption. It is also the kind of problem quantum computers are supposed to be good at.
A basic example would be finding the prime factors of an arbitrary very large number. In most cases finding the answer is much harder than checking whether a proposed answer is correct.
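A toy Python sketch of that asymmetry - the numbers here are minuscule by crypto standards, just big enough to make the point:

```python
# Toy illustration of "hard to find, easy to check".
# Finding a factor of n by trial division takes ~sqrt(n) steps;
# checking a proposed answer is a single multiplication.

def find_factor(n: int) -> int:
    """Brute-force search for a non-trivial factor -- the slow direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def check_factors(n: int, p: int, q: int) -> bool:
    """Verifying a claimed factorisation -- the easy direction."""
    return p * q == n and p > 1 and q > 1

n = 1000003 * 999983          # tiny by crypto standards, slow enough to make the point
p = find_factor(n)            # takes on the order of a million divisions
print(check_factors(n, p, n // p))   # instant: one multiplication
```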
It is also the kind of problem quantum computers are supposed to be good at.
"It" - that is, problems in BQP - are one class of problem general quantum computers would be good at, for some value of "good".1
In the case of the Google paper we're talking about here, the specific problem they're using the Sycamore chip to solve is random-quantum-circuit, which isn't one of those. It's one of the "we have reason to believe this output stream looks right" sort of problems.
Another QC application that doesn't have poly-time confirmation of the results is quantum-physics simulation. Some of those experiments can probably be cleverly confirmed, but at a certain point we're likely to move to "yes, the QC gave the right result for this very small simulation that we can also do on a conventional computer, so let's assume it will get this bigger simulation right too".
Or, similarly, we might use QC to model some protein interactions, then test those empirically, and if they look good decide we can trust the QC on others.
1The specific problem would have to be large enough to take longer on a conventional system than it takes to do the setup and post-processing on the QC; but not so large that it exceeds the QC's capacity. And it would have to be valuable enough to justify using the QC, both in terms of absolute value and relative to other problem instances. And while solving it on the QC might be faster than on a conventional system, or feasible on the former but not the latter, it wouldn't necessarily be especially "fast" in human terms.
53 qubits, actually - see the paper (or its abstract, which you can read for free on the Nature site). So 2^53, and not 2^56, which obviously would require 8 times as much conventional computing power.
The problem they're computing (random-QC) has a solution which can be tested probabilistically in polynomial time. It's probabilistic because the output of the QC is a series of bit strings which should fit a particular distribution, and the shape of that distribution can be approximated on a sufficiently powerful conventional system. Then it's just a matter of collecting a lot of samples from the Sycamore output and seeing if they converge on the right distribution.
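For anyone curious what that kind of check looks like in practice, here's a heavily simplified Python sketch of a linear cross-entropy-style test (the sort of statistic used in the Google paper, as I understand it). The "ideal" distribution below is made up; in the real experiment it comes from classically simulating the circuit:

```python
# Sketch of a cross-entropy-style check on sampled bitstrings.
# ASSUMPTION: p_ideal[s] (the ideal probability of bitstring s) has already
# been computed by classically simulating the circuit.
import numpy as np

def xeb_fidelity(samples, p_ideal, n_qubits):
    """Linear cross-entropy benchmark: 2^n * mean ideal probability of the
    sampled strings, minus 1.  ~1 for a faithful device, ~0 for pure noise."""
    return (2 ** n_qubits) * np.mean(p_ideal[samples]) - 1.0

rng = np.random.default_rng(0)
n = 10                                         # toy size; Sycamore used 53
p = rng.exponential(size=2 ** n)               # Porter-Thomas-like shape
p /= p.sum()
good = rng.choice(2 ** n, size=100_000, p=p)   # "device" sampling the ideal distribution
noise = rng.integers(0, 2 ** n, size=100_000)  # device outputting uniform junk
print(xeb_fidelity(good, p, n))    # close to 1
print(xeb_fidelity(noise, p, n))   # close to 0
```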
There's been some quibbling about how sure you want to be, and Google are pushing the limit of what they themselves can check. I think I saw a comment somewhere about how it's not clear they've really satisfactorily checked the biggest problem (the depth-20 one) they threw at Sycamore.
Way back in 2018, though, they did this with a smaller circuit that they could simulate at length on a conventional machine, and there the output of their QC looked good. Also, for these latest results, they archived all the data (the whole set of outputs) so if it becomes feasible to check them exhaustively in the future, someone can do that.
If Google had, say, a 53-logical-qubit system, one that provided the equivalent of 53 error-corrected qubits, then it'd be dead easy to test, because you could use anything in BQP, such as Shor's and Grover's. You could, say, multiply a 26-bit prime and a 27-bit prime, then have the machine factor the product. You could do that all day. But no one has anything like that sort of quantum-computing machinery yet.
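To spell out how easy that testing would be, a hedged sketch in Python - quantum_factor() below is entirely imaginary (I cheat with a classical factoriser purely so the demo runs); the point is the verification loop around it:

```python
# Sketch of the "multiply two primes, have the machine factor the product,
# verify by multiplication" loop.  quantum_factor() is HYPOTHETICAL -- a
# stand-in for a machine with enough logical qubits to run Shor's algorithm.
from sympy import randprime, factorint

def quantum_factor(n: int) -> int:
    """Imaginary error-corrected QC; a real one would run Shor's here."""
    return min(factorint(n))          # classical cheat, for demonstration only

def test_once() -> bool:
    p = randprime(2 ** 25, 2 ** 26)   # a 26-bit prime
    q = randprime(2 ** 26, 2 ** 27)   # a 27-bit prime
    n = p * q
    f = quantum_factor(n)             # hard direction (for the hypothetical QC)
    return n % f == 0 and f in (p, q) # easy direction: checking is instant

print(all(test_once() for _ in range(10)))   # True -- you could do this all day
```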
(Someone will probably leap in with the speculation, if not outright claim, that the NSA or some other nation-state intelligence service has such a machine. Well, I can't prove they don't; but I think it unlikely.)