A huge question I see is: OK, the present Google machine has 56 qubits, which means a solution space of size 2^56.
How do you verify that an algorithm is working correctly over a solution space that large? Existing systems can't seem to do a very good job of testing/quality control even at far smaller scales - will quantum magically change this situation?
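Just to put the scale in perspective - here's a back-of-the-envelope sketch (my own numbers, assuming a hypothetical classical verifier that checks one candidate solution per nanosecond) of how long exhaustive checking would take:

```python
def brute_force_years(n_qubits, checks_per_second=1e9):
    """Rough time (in years) to enumerate all 2^n basis states,
    assuming an idealized checker at `checks_per_second` (my assumption)."""
    seconds = 2 ** n_qubits / checks_per_second
    return seconds / (3600 * 24 * 365)

print(f"56 qubits:  {brute_force_years(56):.1f} years")
print(f"100 qubits: {brute_force_years(100):.2e} years")
```

At a billion checks per second, 2^56 already takes on the order of a couple of years, and 2^100 is around 10^13 years - far beyond the age of the universe. Exhaustive verification simply isn't on the table.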
I wonder because once you start going into the 2^100+ range, this is literally the million-monkeys-on-typewriters-for-a-million-years scenario. It also makes hash collisions really interesting - in cryptography, for example.
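The collision point is the birthday bound: for an n-bit hash you expect a collision after only about 2^(n/2) tries, not 2^n. A quick sketch (my own illustration, truncating SHA-256 to 32 bits so the collision is cheap to find) shows this in action:

```python
import hashlib

def find_collision(bits):
    """Hash successive integers with SHA-256, keep only the top
    `bits` bits of each digest, and return the first colliding pair.
    By the birthday bound this takes roughly 2^(bits/2) hashes."""
    seen = {}
    i = 0
    while True:
        digest = hashlib.sha256(str(i).encode()).digest()
        tag = int.from_bytes(digest, "big") >> (256 - bits)
        if tag in seen:
            return seen[tag], i  # two distinct inputs, same truncated hash
        seen[tag] = i
        i += 1

a, b = find_collision(32)  # ~2^16 = 65536 hashes expected, runs in well under a second
print(f"collision: {a} and {b}")
```

Scaling the same logic up: a 256-bit hash needs ~2^128 classical tries, which is in monkeys-on-typewriters territory - and quantum collision-finding algorithms only improve that to roughly 2^(n/3), which is still astronomically out of reach.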