A sneaking fear that the machines might turn on us is just not good enough - we need to be able to quantify that risk if we want to avoid it, or at least manage it. Or we could just push on regardless and see how things work out.
At the moment, the threat is massively overhyped. Even if we ignore the fact that AI simply isn't capable of developing a desire for world domination (in truth it's more akin to a complicated Excel macro than an intelligence), what could an AI actually do to us?
On the desktop side, machinery has no ability to harm the operator (with the odd exception: https://www.theregister.co.uk/2006/07/07/usb_missle_war_breaks_out/).
At a high level, nuclear weapons are very, very offline and rather thoroughly airgapped. They are also controlled via 1970s floppy disks and elaborate man-in-the-loop security procedures, so nothing is going to happen there. That leaves industrial accidents caused by companies putting too much online and securing it very poorly, but that's not going to wipe out humanity, and its ability to harm any significant number of people is questionable.
The only thing likely to change that is self-driving vehicles, if they are insufficiently secured: a few million self-driving EVs roaming around under computer control, trying to run anybody over on sight, would be a mite unpleasant. But simply requiring a physical key in the circuit (doing it in software would create a risk of the safety measure being bypassed) would allow humans to remain in control there, eliminating that as a threat.
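To make the distinction concrete, here is a minimal sketch of why the key belongs in the circuit rather than in software. Everything here is hypothetical (the function names and the `key_switch_closed` input are illustrative, standing in for sampling a hardware pin wired through the key): the point is that software can only veto motion, never grant it, because the physical key is ANDed in by the wiring itself.

```python
def drive_enabled(key_switch_closed: bool) -> bool:
    """Hypothetical drive-enable check for an illustrative vehicle controller.

    The key switch is a read-only hardware input: even a fully
    compromised controller that forces this function to return True
    cannot move the vehicle, because the wiring interrupts the drive
    circuit whenever the key is out.
    """
    software_checks_ok = True  # placeholder for the controller's normal self-tests
    # Software can deny motion (checks fail) but cannot supply the key.
    return key_switch_closed and software_checks_ok


# With the key removed, no software state can enable the drive:
print(drive_enabled(key_switch_closed=False))  # False
print(drive_enabled(key_switch_closed=True))   # True
```

A software-only equivalent (say, a config flag checked at startup) has no such guarantee: whatever code reads the flag can be patched to ignore it.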
In short, AI can't seriously affect the RealWorld™ unless we allow it to. I'm all in favour of making sure that anything connected to the internet is, by design, physically incapable of causing serious damage, as the more realistic threat is human attackers who would try to cause serious harm for "teh lolz".