Schrödinger? Oh, goody
Can we literally put Google and IBM in a box with poison and a decaying isotope???
Just before Christmas, Google claimed quantum supremacy. The company had configured a quantum computer to produce results that would take conventional computers some 10,000 years to replicate - a landmark event. Bollocks, said IBM - which also has big investments both in quantum computing and not letting Google get away with …
Good to see the private sector investing heavily in the research for this technology. If it pays off it could be a big win. Of course the money will all be wasted.
Kudos to both -- at least they are putting their money where their mouths are. You never know, we may all benefit big time if either makes a breakthrough.
Their money, their choice!
I read not long ago that the largest number of functional qubits was 16. Well, that's been blown out of the water.
Progress in this field seems to be going strong. One day they might actually be able to do something useful with it.
Now the question is: are 56 qubits enough for everyone?
56? Pretty sure 53.
Not that that's a significant objection to what Google have achieved, by the way. I've been something of a QC skeptic myself - not that I didn't believe in the published results, just that I've felt it was likely that 1) it would take a long time to achieve quantum computational supremacy, if 2) it was feasible at all. (Interested parties can hunt down some of my earlier posts for links to arguments from physicists and others supporting the latter position. I wasn't convinced, but I thought some of the arguments were at least plausible.)
And we still have a long way to go before we have a machine with enough "logical qubits" (that is, error-corrected qubits, which may well mean orders of magnitude more physical qubits) to compute arbitrary problems in BQP. The random-circuit problem Google demonstrated on Sycamore essentially returns a distribution, not an exact answer; you couldn't use a scaled-up-by-100 Sycamore to factor a 2048-bit RSA product, because it doesn't have the error correction (i.e. it has 53 physical qubits, but not 53 logical ones).
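To get a feel for why error correction "may well mean orders of magnitude more physical qubits", here's a back-of-the-envelope sketch. All the constants are my illustrative assumptions (a surface-code-style layout with roughly 2d² physical qubits per logical qubit at code distance d, and logical error suppressed like (p/p_th)^((d+1)/2)), not figures from the Google paper:

```python
# Back-of-the-envelope surface-code overhead. All constants here are
# illustrative assumptions, not figures from the Sycamore paper:
# physical error rate p, threshold p_th, logical error rate scaling
# roughly (p/p_th)**((d+1)/2) at code distance d, and roughly
# 2*d**2 physical qubits per logical qubit.

def distance_needed(target_logical_error, p=1e-3, p_th=1e-2):
    """Smallest odd distance d whose estimated logical error rate
    falls at or below the target."""
    d = 3
    while (p / p_th) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

def physical_qubits(n_logical, target_logical_error):
    """Estimated physical qubits needed to host n_logical
    error-corrected logical qubits."""
    d = distance_needed(target_logical_error)
    return n_logical * 2 * d * d

# 53 *logical* qubits at various target error rates: the count grows
# only quadratically in d, but the constant factor is punishing --
# tens of thousands of physical qubits, not 53.
for target in (1e-6, 1e-12, 1e-15):
    print(target, distance_needed(target), physical_qubits(53, target))
```

The exact numbers depend entirely on the assumed error rates, but the shape of the conclusion doesn't: 53 physical qubits is a long way from 53 logical ones.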
But, yeah, I have to say my predictions were wrong. It still looks like we have plenty of time to roll out post-quantum asymmetric crypto before anyone has a machine that can practically crack RSA or ECDHE, but we might have useful QC machines for things like simulating quantum processes in a few years. That's cool. (And it's nice that the Google approach is not particularly resource-intensive.)
As usual, Aaronson has some good posts. He wrote this on the IBM rebuttal, and it mentions some other critiques and links to the "quantum supremacy FAQ" he wrote when the Google paper was leaked.
Word. I'll be impressed when somebody makes money out of it.
The marketing peeps will likely be taking zeros, superimposing a single quantum of utility on them and calling it a success for a while yet.
Unless Google can use it to sell ads, it won't help the business.
The article covers this nicely.
Google’s ability to do anything with a quantum computer is impressive but the achievement itself is so specific as to be practically useless, while IBM demonstrated the flexibility of monstrous conventional computing power by undertaking a task not of their choosing.
If the IBM machine set a challenge and the Google machine stepped up we’d be in a whole different discussion.
Regardless of what IBM says, I think Google's claim still stands - you can hardly call a supercomputer "conventional".
You need to include price as well for both sides - QCs aren't cheap or readily available at the moment, and doing "just enough" to demonstrate they are faster than conventional, general-purpose systems may be setting the bar too low.
Given that Google will likely exceed their current QC compute levels, the exact system considered the conventional equivalent is likely to change when we look back - I think the bar will sit higher than current estimates until QCs can handle a wide range of problems and become practical replacements for large conventional facilities like the one IBM has highlighted. However, this is likely a matter of time, and those with functioning, high-qubit equipment are the groups likely to succeed at this task.
The problem for IBM is that history will forget the doubters and the losers...
"Conventional" in this context refers to the basic operating structure of the computer, more or less as defined by Alan Turing. So digital supercomputers are included as conventional. As I understand it, there is a class of problems in which no "conventional" computer could beat a fully functioning quantum computer.
To prove this in practice you need a problem which has been studied enough that we know (or at least believe we know) the fastest algorithm for solving it on a conventional computer. Part of IBM's objection was that the problem solved was not one that has been studied enough to have figured out the fastest algorithm, so there may well be an algorithm that allows an existing conventional computer to beat Google's quantum computer.
Part of IBM's objection was that the problem solved was not one that has been studied enough to have figured out the fastest algorithm, so there may well be an algorithm that allows an existing conventional computer to beat Google's quantum computer.
It's difficult to prove an algorithm is "the fastest", and impossible, in general, to prove that an implementation is optimal (from AIT; see Chaitin or Kolmogorov).
But I'm with Scott Aaronson on this: It seems very unlikely that there's a sub-exponential classical algorithm for the random-QC problem. He proposed offhand (and probably wouldn't be held to this, but it's suggestive) that P=PSPACE is about as likely.
Objections that hold more water are that what Google have shown is either near quantum computational supremacy ("we can do that on a really big conventional machine"), or that it's "baby" QCS, since without error correction (which would require a whole lot more physical qubits) the hardware is only suitable for a small subset of problems, without a lot of practical applications.
Random-QC does have at least one application, though - Aaronson notes it can be used to implement his protocol for generating a stream of computationally-provably-random bits. And as I noted in another post, things like quantum physics simulations have requirements that may be more achievable than those of things like Shor's and Grover's algorithms.
It MIGHT be useful in applications such as trying to determine protein structures based on the sequence alone. Don't know if it's still running, but you used to be able to dedicate your box's off time to do this, à la SETI@home.
The other application this suggests is drawing likelihood trees of species relatedness. What happens at the moment is the papers give you all the highest-likelihood trees, then let you choose which one you think fits best while they defend their choice.
Delighted to see Rupert writing for El Reg, but I think "a frozen lunchbox consuming virtually no power in its core can outperform a computer chewing through enough wattage to keep a small town going" is carefully drawing the lines to ignore the power consumption of cooling the whole lot to 15 millikelvins.
is carefully drawing the lines to ignore the power consumption of cooling the whole lot to 15 millikelvins.
Oh I don't know, Maggie Thatcher was rumoured to be able to get the subject of her disapproving glacial stare down to a few microkelvins, and the only energy input required would be someone standing next to her repeatedly saying the phrases "Arthur Scargill" and "poll tax".
Maybe Theresa May would make a suitable present day substitute now she's got more time on her hands, all you'd need would be someone standing behind her whispering incessantly into her ear "strong and stable", "hung Parliament"...
Not that I claim any ability to really understand the low down scientific details, but as a middle aged and somewhat jaded IT professional, QC is about the only area of computing science I still have any form of interest in following outside of daily work. ..... Aristotles slow and dimwitted horse
The present question to power the future is simply whether one can lead anything, and therefore also everything, with QC in Absolute Command and Overall Control? To realise yes, certainly, has one incredibly advantaged with many abilities much sought after to exercise and prepare for employment and deployment of largesse. Is that not akin to a Holding of Right Royal Privilege?
:-) Seems like Harry and Meghan are keen to explore the more Antony and Cleopatra Root for Service and Servering rather than suffer being mirrored in an Edward and Mrs Simpson concoction of bitter confections.
Aristotles slow and dimwitted horse, there is a great deal more following your interest yet to come, and if it stops coming, it has only been temporarily stopped by spooky forces beyond conventional control.
So, in the near future, your code and data will simultaneously be running/stored on the quantum processor cloud, whilst, at the same time, not being either accessible to you nor owned by you. The results, if any, that you get back both will and won't be the results that you wanted to see and both will and won't have been mangled with/duplicated during the whole process. But fear not, for there are two certainties, you will be using this and you will be paying for it (through the nose with a lot of hard ca$h).
Actually, all of the things I personally use a computer for, and all the things my customers do with my software, either aren't applicable to QC or wouldn't derive enough benefit from it to bother.
Most computer applications need QC the way most transportation applications need a Saturn V rocket. It's kind of a specialized use case.
A huge question which I see is: OK, the present Google machine has 56 qubits = a 2^56-sized solution set.
How do you verify that an algorithm is working correctly over this range of solution set? Existing systems can't seem to do a very good job of testing/quality control - will quantum magically change this situation?
I wonder because once you start going into the 2^100+ range - this is literally the million monkeys on typewriters for a million years scenario. Makes hash collisions really interesting - in cryptography, for example.
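The hash-collision point can be made concrete with the standard birthday bound. A quick sketch (the sample counts are arbitrary, chosen for illustration):

```python
import math

def collision_probability(num_samples, space_bits):
    """Birthday-bound approximation: probability that num_samples
    uniform random draws from a space of 2**space_bits values contain
    at least one repeat, p ~= 1 - exp(-k*(k-1) / 2**(space_bits+1))."""
    k = float(num_samples)
    return 1.0 - math.exp(-k * (k - 1) / (2.0 ** (space_bits + 1)))

# A 2**56-point space sounds vast, but about 2**28 random draws
# already give roughly a 40% chance of a repeat...
print(collision_probability(2 ** 28, 56))
# ...while a 256-bit space (the size modern hash functions use)
# remains effectively collision-free for any feasible number of draws.
print(collision_probability(2 ** 28, 256))
```

This is why 2^56 is a big number for exhaustive verification but a small one by cryptographic standards.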
There is a class of problems where checking the answer is trivially easy but finding the answer is very very hard. That is the mathematical foundation of data encryption. It is also the kind of problem quantum computers are supposed to be good at.
A basic example would be finding the prime factors of an arbitrary very large number. In most cases finding the answer is much harder than testing whether it is correct.
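A quick sketch of that asymmetry using factoring (the primes here are arbitrary toy values I picked for illustration):

```python
def find_factor(n):
    """The search direction: find the smallest nontrivial factor of n
    by trial division. Work grows exponentially with the bit length
    of n when its smallest factor is large."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

def check_factors(n, p, q):
    """The verify direction: a single multiplication and comparison."""
    return 1 < p < n and p * q == n

# Toy RSA-style modulus: the product of two known primes.
n = 999983 * 1000003
p = find_factor(n)                  # slow: ~10**6 trial divisions here
print(check_factors(n, p, n // p))  # fast: one multiply
```

Scale those primes up to hundreds of digits and the search side becomes infeasible classically while the check stays instant - which is exactly the gap Shor's algorithm attacks.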
It is also the kind of problem quantum computers are supposed to be good at.
"It" - that is, the class of problems in BQP - is one class of problem general quantum computers would be good at, for some value of "good".1
In the case of the Google paper we're talking about here, the specific problem they're using the Sycamore chip to solve is random-quantum-circuit, which isn't one of those. It's one of the "we have reason to believe this output stream looks right" sort of problems.
Another QC application that doesn't have poly-time confirmation of the results is quantum-physics simulation. Some of those experiments can probably be cleverly confirmed, but at a certain point we're likely to move to "yes, the QC gave the right result for this very small simulation that we can also do on a conventional computer, so let's assume it will get this bigger simulation right too".
Or, similarly, we might use QC to model some protein interactions, then test those empirically, and if they look good decide we can trust the QC on others.
1 The specific problem would have to be large enough to take longer on a conventional system than it takes to do the setup and post-processing on the QC; but not so large that it exceeds the QC's capacity. And it would have to be valuable enough to justify using the QC, both in terms of absolute value and relative to other problem instances. And while solving it on the QC might be faster than on a conventional system, or feasible on the former but not the latter, it wouldn't necessarily be especially "fast" in human terms.
53 qubits, actually - see the paper (or its abstract, which you can read for free on the Nature site). So 2^53, and not 2^56, which obviously would require 8 times as much conventional computing power.
The problem they're computing (random-QC) has a solution which can be tested probabilistically in polynomial time. It's probabilistic because the output of the QC is a series of bit strings which should fit a particular distribution, and the shape of that distribution can be approximated on a sufficiently powerful conventional system. Then it's just a matter of collecting a lot of samples from the Sycamore output and seeing if they converge on the right distribution.
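That sampling check is essentially the linear cross-entropy benchmark. Here's a toy sketch where I stand in for the real circuit simulation with an exponentially distributed (Porter-Thomas-like) probability vector - an assumption for illustration only; the real check uses amplitudes from simulating the actual Sycamore circuit:

```python
import random

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy fidelity: F = 2**n * mean(p_ideal(x)) - 1
    over the sampled bitstrings x. Roughly 1 for a device matching the
    ideal distribution, roughly 0 for uniform noise."""
    mean_p = sum(ideal_probs[x] for x in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1

random.seed(0)
n = 10            # toy size; Sycamore used 53 qubits
dim = 2 ** n

# Stand-in "ideal" circuit output: exponentially distributed weights
# give the Porter-Thomas-like shape expected from a random circuit.
weights = [random.expovariate(1.0) for _ in range(dim)]
total = sum(weights)
ideal = [w / total for w in weights]

good = random.choices(range(dim), weights=ideal, k=20000)  # "working" QC
noise = random.choices(range(dim), k=20000)                # pure noise

print(linear_xeb(ideal, good, n))   # close to 1
print(linear_xeb(ideal, noise, n))  # close to 0
```

The hard part in the real experiment is computing `ideal_probs` at all - at 53 qubits that's the job that strains Summit, which is the whole point.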
There's been some quibbling about how sure you want to be, and Google are pushing the limit of what they themselves can check. I think I saw a comment somewhere about how it's not clear they've really satisfactorily checked the biggest problem (the depth-20 one) they threw at Sycamore.
Way back in 2018, though, they did this with a smaller circuit that they could simulate at length on a conventional machine, and there the output of their QC looked good. Also, for these latest results, they archived all the data (the whole set of outputs) so if it becomes feasible to check them exhaustively in the future, someone can do that.
If Google had, say, a 53-logical-qubit system, one that provided the equivalent of 53 error-corrected qubits, then it'd be dead easy to test, because you could use anything in BQP, such as Shor's and Grover's. You could, say, multiply a 26-bit prime and a 27-bit prime, then have the machine factor the product. You could do that all day. But no one has anything like that sort of quantum-computing machinery yet.
(Someone will probably leap in with the speculation, if not outright claim, that the NSA or some other nation-state intelligence service has such a machine. Well, I can't prove they don't; but I think it unlikely.)
Nothing claiming the term "Supremacy" should be based on a calculation specifically rigged for the target platform, one that does nothing that can't be done on another platform and also doesn't do anything useful. IBM's complaint that the answer is solvable in a reasonable amount of time seems valid enough, but the bigger concern is calling out the Google team's "creative" math. It's not even scare-quotes "Quantum Supremacy" or "Quantum Supremacy Lite"*** where the *** points to a 50-page list of weasel words.
The minimum we should accept for a claim of limited Quantum Supremacy would be the quantum computer reliably solving a useful, real world calculation faster and more efficiently than conventional computing resources. Full Quantum Supremacy should be declared when a full quantum or hybrid system is faster and more efficient than all other general purpose computers. The rest of these claims are marketing frippery.
Marketeers will not be meaningfully harmed if they are prevented from lying to the rest of the world by making these false and overhyped claims.
Marketeers will not be meaningfully harmed if they are prevented from lying to the rest of the world by making these false and overhyped claims. ..... Anonymous Coward
Crikey, AC, that sounds like a call to violence against false and over hyped claims marketeers. Such will decimate the incontinent political classes ...... you know, that and those in the all show and no go crowd.
And the chances are, such would be extremely popular and almightily well supported.
the term "Supremacy"
Is a term of art in this field, so your objections should be addressed to those who coined and popularized it, not to the Google researchers.
a calcification [sic] specifically rigged for the target platform
It's not "rigged". It's a problem that had been proposed, by other researchers, years ago, for testing QCs with uncorrected qubits.
does nothing that can't be done on another platform
Quantum computational supremacy has nothing to do with problems that can't be solved in principle by a conventional machine (the principle in question being the Extended Church-Turing thesis). It has to do with problems that are infeasible to solve on conventional systems. And the biggest problem Google threw at Sycamore (the depth-20 one) is right at the edge of what Summit can handle (at the moment - with some algorithmic improvements they might be able to squeeze a slightly larger one on there).
doesn't do anything useful
There's at least one known application. Read Aaronson's blog.
The minimum we should accept for a claim of limited Quantum Supremacy would be the blah blah I know more than actual researchers in this field
Yes, you're very smart. Everyone working in QC should immediately bow before the wisdom of Anonymous Coward and change their definitions.
Yes, very interesting ... but why exactly is there something rather than nothing?...... Frumious Bandersnatch
Surely that is easily answered, FB. There are new audiences to entertain and try capture, and they aint nobody's fools. Such requires things to be different with practically nothing the same as it was before ..... which you might like to consider is perfectly normal in all that is called progress and something that has been conspicuous by its absence in recent times in earthed spaces.
Weak anthropic principle: Observing that there is something requires a priori something to observe it. Thus in any world where we can observe whether there is something, there must be something.
Of course there is nothing, too. I have a jar of it right here on my desk. Damn, I just spilled iaskjdff8^*&NO CARRIER
Quantum supremacy - must be like racist or something. NOT. .... NanoMeter
'Tis surely much more akin to a fascist thing, NanoMeter, which presents itself with a whole host of, if not actually new, certainly novel attractions to excite and exploit/ignite and explode amongst the ruling classes.
It is quaint to think that Google and IBM might think, in any race against AI Titans and/or effective stealthy foreign competition, that they lead alone rather than follow together, however it is to be fully expected if one is minded to remember this observation attributed to Einstein ....... Two things are infinite: the universe and human stupidity; and I'm not sure about the universe.
I was enamoured of this doozy, NanoMeter ...... https://www.telegraph.co.uk/technology/2020/01/11/welcome-dominic-cummings-dream-factory-failure-can-pave-way/
Seems like a Brilliant Plan ...... from/for Other Sources of Energy and Power, Command and Control.
What does Comrade Boris think of the move/initiative/alternative root? And has the question also been asked of those most likely to be personally effected with radically different presentations with newly acquired advanced intelligence ...... . Extremely Hot Source with All Cores NEUKlearer HyperRadioProACTive.
And Beautifully Promiscuous in Applied Imaginative Fields ........ Practical Theatres of/for Virtually Real Engagement, Deployment, Employment and Enjoyment with Exploits Exercised to XSSXXXX Streams of Satisfaction and Gratitude, Lust and Desire.
:-) You can easily imagine why that is all so overwhelmingly appealing and seductively attractive to boot and root. No wonder it is So Universally Popular :-)
Are you worried that you are no longer the only one who posts quantumly, amanfromMars1?
You've been superseded by a more accurate and faster alternative.
Is this what it sounds like when doves cry for you and IBM? ..... Anonymous Coward
What is there to be worried about whenever company can be so engaging and enlightening, AC.
And quite whether a more accurate and faster alternative is to be recognised as the result of earlier past relatively solo efforts or later latent joint future endeavours is another thing which is not at all worrying here. We just look forward to seeing and hearing and feeling the evidence of such a bold assertion.
Naturally. The Michael Wojcik device simply has a high enough number of qubits to provide parallel outputs. ...... batfink
That begs the question, batfink, parallel similar and/or dissimilar outputs to aggregate into something altogether quite different and disruptive/revolutionary and Great Game Changing?
That would be quite a dangerous device to be responsible for ..... and as an invisible export for import, worth more than just many huge fortunes.
Nice one, Michael Wojcik, is that be yours to Fly with AIMaster Pilot Command and Control.
"Do you have a question Dave?"
"Yes Google. Using your super dooper new quantum computing power, can you tell me, when is the peak of a man's life?"
>Time passes ...
"Yes Dave. The answer is '42' ... I think."
"Well according to the output distribution, my best guess is '42' but it could also be 'chilli massala', 'octopus testicles' or 'the inevitably rubbish episodes of Doctor Who at Christmas'."
"How did you work that out Google?"
>Time passes ... a dwarf throws an axe ... It misses when it mysteriously turns into a not-actually-veggie plant burger and auto-deposits into the nearest bin ...
"My answer was based on the QC output which showed the demonstrable probability that consuming a portion of Octopus Testicle Chilli Massala with your mates whilst watching a Christmas episode of Doctor Who in the Pub and everyone winding up with fatal food poisoning actually decreases after the age of 42."
"Yes Dave, they're dead Dave"
"Octopus testicles ..."
"No it's true. They're all dead Dave"
"Thank you Google. Now compute the likelihood that QC is a load of octopus testicles."
That is tantamount to providing for the future, sweet stealthy virtual collusion with guaranteed mutually beneficial positively reinforcing solutions generously made available for every possible situation from schcats stores, Tail Up.
That opens up an Almighty Pandora's Box where when the cats are away, mice play and create in Havoc and Mayhem, Madness and Confusion, CHAOS ..... Clouds Hosting Advanced Operating Systems.
That be good vodka and expensive champagne territory. Of that there is no need to be in doubt.
Biting the hand that feeds IT © 1998–2020