Re: What is "deterministic", who cares anyway
Beat me to it, and *exactly* my point. Gödel has a few pithy things to say about it, too. As, in fact, does Plato, albeit in a different context; see _Phædo_, in which (among other things) he points out that language is an insufficient construct to truly convey thoughts.
Along those lines, to the person who accused me of failing first-year comp sci: I was not making the full logical argument in CS terms. Because, frankly, it can't be made (again, see Gödel), at least not without fully including Gödel's proof*S* (yes, there was more than one, and in some ways the later proofs were more important). If you're only aware of the incompleteness theorem (as I suspect you are, and probably only in the _Gödel, Escher, Bach_ form, which is itself incomplete), I suggest a few remedial logic classes. Finally, if you do not understand the truly gigantic implications of the difference between deterministic and non-deterministic Turing machines, you are the one who needs some remedial CS. Hint: it is not merely computability. That is only *one* of the implications, and one of the least interesting.
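A finite-state echo of that determinism gap, for the peanut gallery (the machine and state names below are my own toy example, not anything standard): a nondeterministic automaton can "guess," while a deterministic simulation must carry the whole set of states it might be in, which is exactly why determinizing an n-state NFA can cost up to 2**n states.

```python
# Toy NFA accepting binary strings whose 3rd symbol from the end is '1'.
# Nondeterminism: on reading '1' in q0, the machine may stay or "guess"
# that this is the 3rd-from-last symbol and move to q1.
delta = {
    ('q0', '0'): {'q0'},
    ('q0', '1'): {'q0', 'q1'},
    ('q1', '0'): {'q2'}, ('q1', '1'): {'q2'},
    ('q2', '0'): {'q3'}, ('q2', '1'): {'q3'},
}
accepting = {'q3'}

def nfa_accepts(s):
    # Deterministic simulation: track the SET of all reachable states.
    states = {'q0'}
    for ch in s:
        states = set().union(*(delta.get((q, ch), set()) for q in states))
    return bool(states & accepting)

print(nfa_accepts('0100'))  # True: 3rd-from-end symbol is '1'
print(nfa_accepts('0010'))  # False
```

The set-of-states trick is the subset construction in miniature; for Turing machines the analogous deterministic simulation of nondeterminism is where the P-versus-NP question lives.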
Now, the obligatory _ad hominem_ out of the way, the point is that analog computers are in fact infinitely better than digital, simply because of Cantor's proof: the reals are uncountable, while any digital state space is merely countable. That there is a discrete quantum underlay might one day become an issue, but I suspect it would be overwhelmed at that point by the probabilistic (analog!) nature of the choice of those discrete states.

Further, don't confuse the map with the territory. Quantum mechanics is a *model*, it is not the *reality*. For instance, the fact that it currently requires us to treat EM radiation (e.g., light) as both a wave and a particle means that our map is not sufficiently accurate; the radiation is neither a wave nor a particle, both of which are models, but something else that we don't actually understand very well. We can use the "particle" model or the "wave" model at certain times to predict certain behaviors, but that does not actually mean that light is changing between being a particle and being a wave. It's always light. Someday we might have a more accurate model of something that seems to behave as both models (and, in fact, maybe we do; I'm not a theoretical physicist, though I play one on the Internet), but for the time being switching maps when appropriate is sufficient. It's kind of like relativity versus Newton's laws. Newton's laws are sufficient for predicting how your car behaves. They're not sufficient for predicting how a neutrino behaves. You choose the map that fits the resolution you're using. Even relativity is only a model; at some point, *it* will almost certainly prove to not have enough fidelity, and need to be modified to better model the strange, strange thing we call reality.
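For anyone who skipped that lecture, Cantor's diagonal argument can be sketched finitely (the function names here are mine, purely for illustration): hand me any claimed enumeration of infinite binary sequences, and flipping the diagonal builds a sequence your list missed.

```python
# Cantor's diagonal argument, finitely approximated: given any claimed
# enumeration of infinite binary sequences, build one that differs from
# the i-th listed sequence at position i, so it appears nowhere in the list.
def diagonal_escape(enumeration, n):
    """enumeration(i, j) -> j-th bit of the i-th listed sequence.
    Returns the first n bits of a sequence not among the first n listed."""
    return [1 - enumeration(i, i) for i in range(n)]

# A hypothetical enumeration: the i-th sequence is the binary expansion
# of i, padded with zeros -- it covers only countably many sequences,
# which is the most any "digital" scheme can ever list.
def listed(i, j):
    return (i >> j) & 1

escaped = diagonal_escape(listed, 4)
# escaped disagrees with sequence i at bit i, for every i -- no matter
# which enumeration you plug in.
```

Since the diagonal trick defeats *every* enumeration, the continuum of analog values can never be captured by a countable digital state space; that is the whole force of the claim above.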
How does this get back to AI? Well, we have a model of how the brain works. It's nice and all, and can do some fairly amazing things. But it's still only a model. Neuroscientists are still trying to figure out some of the grosser ways that the brain works; we're a long way from understanding it at more fundamental levels. The complexity is staggering: hundreds or thousands of different chemicals interacting, modified by a non-deterministic network of interconnections, all operating in an analog fashion. Hell, we don't even have a Newton's-laws level of model fidelity for the brain, much less relativistic models. How can we hope to replace the human brain when we are not even in the Stone Age of modeling it? In our understanding and modeling of the human brain, we're probably still some small animal scurrying around under the feet of T-Rex, trying not to get squashed. Hell, probably even of the *flatworm* brain.
So, no, we're nowhere near true machine intelligence, much less machine consciousness. And probably won't be for hundreds of years.