AI may not be able to directly cause any harm, but indirectly it can cause a LOT of harm. Imagine for a second an intelligent virus (yes, I know, such a thing is well beyond our current capabilities, but this is a thought experiment) that manages to infect air traffic control workstations with the intent of causing as many deaths as possible. Or traffic light control systems. Or the emergency alert system.
And that doesn't even get into the nightmare scenarios of hospital systems and infrastructure control systems. How many people do you think would die if medical equipment started putting out inaccurate data and all the lights went out? Heck, just shutting off gas pumps would result in millions of deaths in the US inside of a month, because food, medicine, and nearly everything else moves by truck.
True, we don't have to worry about AI triggering a nuclear apocalypse directly, but what about sending falsified communications to all the world's nuclear powers, making it appear they were under attack?
Unfortunately, there are LOTS of ways strong AI could harm humanity if it chose to. But, on the plus side, the kind of strong AI that could choose to do that would probably have little reason to make an enemy of humanity. Personally, I think we have a lot more to fear from the paperclip maximizer than from terminators.