Re: So, if I understand things correctly...
To 'mimic' what a human does in a repetitive physical task, with some defined scope for correction, is one thing. To 'understand' why the task is being performed in the first place, and the implications of that task, is a totally different thing. Can the algorithm truly 'feel' dissatisfied and fatigued from carrying out the task, ponder why it's being done in the first place, show instinct and make the task more pleasurable? Or simply stop from the futility of it all?
Myna birds can 'mimic' human words - but it's near impossible to say whether they 'understand' the meaning of a word (or what a word is) and the context it should be squawked in. This is more of a task/reward scenario: squawk something that sounds like a word to make the human happy, and the outcome is a tasty treat. When you train a neural network, should it demand a treat?
Can an AI chess program/algorithm that repeatedly beats a human player (its ultimate goal) sense despair from the human's moves, show instinct, empathise with the human and occasionally let them win for a better shared experience? Sure - an algorithm can track win counts and target a threshold of balance, but there is no feeling or emotion involved - just pure logic.
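To be clear about what "tracking win counts and targeting a threshold of balance" amounts to, here's a minimal sketch - the function name and the 60% target are my own hypothetical choices, not from any real engine. The point is that the decision to 'let the human win' reduces to one comparison, with no empathy anywhere in it:

```python
# Hypothetical sketch: a "balance" heuristic is just bookkeeping, not feeling.
# The engine eases off once its observed win rate drifts above a target.

TARGET_WIN_RATE = 0.6  # assumed tunable threshold, chosen for illustration

def should_soften_play(engine_wins: int, total_games: int,
                       target: float = TARGET_WIN_RATE) -> bool:
    """Return True if the engine should pick a weaker move this game."""
    if total_games == 0:
        return False  # no history yet, play normally
    return engine_wins / total_games > target

# 8 wins in 10 games exceeds the 60% target, so ease off;
# 5 in 10 does not, so play at full strength.
print(should_soften_play(8, 10))   # True
print(should_soften_play(5, 10))   # False
```

All the "empathy" lives in that single greater-than sign - which is rather the point.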
Can AI exhibit a 'sense' of morality in a situation (to be fair, a lot of humans struggle with this one)? The moral dilemma of crashing an autonomous vehicle with 3 occupants to spare a 30-car pileup with a higher death toll? Are the occupants more important given they are the vehicle's owners?
In the meantime, warehouse staff are being taught to behave more like robots, which is somewhat ironic.
If Watson hits the streets with legs - it's time to get to the panic room LOL.