
"Artificial intelligence is never a threat of itself [...]"
If an AI's decisions can inflict damage in any way, then there is always a chance of unintended outcomes or collateral damage. That is why Asimov's Laws stipulate overriding contingencies as a catch-all.
There is the classic dilemma of the runaway train heading for a broken bridge and certain destruction. If the train is diverted onto a spur, the passengers will be saved; however, diverting it guarantees that a man standing on the spur will be killed. Does the machine choose the greater good, or does it avoid a direct action that would deliberately kill the man on the spur?
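To make the dilemma concrete, here is a minimal sketch in Python of how the two competing policies could be encoded for the same scenario. Everything here is invented for illustration: the passenger count, the `Outcome` type, and both policy functions are hypothetical, not part of any real system.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    deaths: int               # lives lost if this action is taken (assumed numbers)
    deliberate_kill: bool     # does the action itself directly kill someone?

# Hypothetical encoding of the runaway-train scenario:
# doing nothing lets the train reach the broken bridge and kill the
# (assumed) five passengers; diverting kills the one man on the spur.
DO_NOTHING = Outcome("stay on main line", deaths=5, deliberate_kill=False)
DIVERT     = Outcome("divert onto spur",  deaths=1, deliberate_kill=True)

def utilitarian_choice(a: Outcome, b: Outcome) -> Outcome:
    """Greater-good policy: minimize total deaths, however they are caused."""
    return a if a.deaths <= b.deaths else b

def deontological_choice(a: Outcome, b: Outcome) -> Outcome:
    """Avoid any action that deliberately kills, even at higher total cost."""
    safe = [o for o in (a, b) if not o.deliberate_kill]
    # If both options involve a deliberate kill, fall back to fewer deaths.
    return safe[0] if safe else min((a, b), key=lambda o: o.deaths)

print(utilitarian_choice(DO_NOTHING, DIVERT).label)    # -> divert onto spur
print(deontological_choice(DO_NOTHING, DIVERT).label)  # -> stay on main line
```

The point of the sketch is that whichever answer the program gives was fixed in advance by the policy its designers chose to encode, which is exactly the sense in which the machine itself is never the source of the threat.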