Re: The Real Problem
One big issue, as some above have expressed, is how you define AI in the first place.
As humans, composed of meat, our intelligence is inescapably meat-based. (I am not talking about bacon as brain-food here.) What it means to be intelligent is therefore naturally tied up in the physical processes that give rise to it. How, then, do we even go about defining intelligence where such processes do not exist?
Most would agree that intelligence is not simply knowing that 2 + 2 = 4 but in some way understanding it. What does it mean to say that a computer understands something?
One would expect there to be limits to such an intelligence, even if its computational power significantly exceeded our own. Could an AI actually think and theorise about the world? Take the theory of General Relativity, which by all accounts was a 'thought' of the most amazing order and a leap of quite remarkable genius; it was not based on the input of some data that pointed to it.
Or what about any of the other theories proposed before there was any ability to verify or even test them? By that I mean those hypotheses that came about, in some measure, simply by thinking about the problem at hand.