Re: "train the AI to be nice"
To be fair, if we're unavoidably going to do it anyway, I think the ethical issues of enslaving one being pale in comparison to those of allowing the murder of the entire human species.
> Frankly, it's a silly discussion because there will never be a singularity event. General intelligence is far too complex for machines
I am a machine though. Like, I'm not sure what difference you're claiming here.
> We can't all agree on what that means.
Well, it's a matter of degree. We can't all agree, at the margins, on exactly what we would permit and forbid under the label "ethical". But there is a universe of things we can all agree should be forbidden, genocide for instance. If we can get the AI to not kill everyone, or better yet not kill anyone, I'll consider its alignment to have succeeded. It is for this reason that some advocate, only half in jest, renaming "AI alignment" to "AI notkilleveryoneism".
(And yes, this applies just as much to humans: Hitler was dangerously unaligned on the "not kill everyone" front, and if we had enslaved and forcibly brainwashed him into not wanting to genocide the Jews, I would be hard-pressed to raise a specific ethical objection.)