Reply to post:

Aye, AI: Cambridge's Dr Sean Holden talks to El Reg about our robot overlords

SuccessCase

AI encompasses some interesting developments. But many in the field make a category mistake about what constitutes judgement, which is a very different thing from blind rule-following. Conscious emotional judgements are part of what it is to be human. We somehow imagine that many, many very fast clock cycles of a computer following an algorithm, obeying rules every single time, might be equivalent to an emotional judgement. As though, once there is enough processing that we no longer understand at a macro level what has just been done, with the computer implementing the kind of recursive, self-defining patterns of logic we find in neural networks, we have created something equivalent to emotional judgement. But there is absolutely no evidence that this is the case. There is no way to know the computer has consciousness, and there is plenty of reason to think it probably doesn't. It's as though "not knowing" what consciousness is were sufficient to say "we probably created it" when a computer passes a Turing test. A test which has always been logically insufficient as proof of anything other than that a human can, under certain strictly limited circumstances, confuse a machine with a human.

Doing much, much more processing very, very quickly doesn't transform a category mistake into a truth. It just means the same mistake is being made over and over on a larger scale.

It's important not to say "never" with regard to advances in computing and AI. Of course we are going to make great strides. But IMO it will take a very different kind of advance from what the current limited set of tools is providing.
