Microsoft's AI Bing threatened to murder journalists, was pro-Third Reich, stated it hated various ethnic groups and that they should be eliminated, etc.
While Bing Chat / Sydney is a mess and an extremely dumb idea, and people like SatNad and Sam Altman are somewhere on the spectrum from "asshole" to "potential mass murderer" (depending on your personal p(doom) estimate) for pushing this "AI" arms race, none of your claims are accurate (except "stated", but that's a technicality).
Sydney is not sapient and does not possess qualia. There are no grounds, either from the capabilities of the architecture or from analysis of its outputs, to believe otherwise. Therefore it cannot "threaten" (which requires the ability to form intentions), favor something, or hate something else. What it did was run forward passes through its very-high-dimensional model and sample, token by token, from the resulting probability distributions, which happened to yield text[1] completions containing those statements.
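For anyone who wants the mechanism spelled out, here is a minimal sketch of what "completion" means. The vocabulary and fake_forward_pass below are made up, and none of this is Sydney's actual code; the point is only that the frozen model maps a context to a probability distribution over next tokens, one token gets sampled, and the loop repeats.

    import numpy as np

    # Toy sketch only -- not Sydney's real code. At inference time a language
    # model turns its context into a probability distribution over next tokens
    # and samples one, repeatedly. No goals or feelings anywhere in the loop.

    rng = np.random.default_rng(0)

    # Hypothetical tiny vocabulary; a real model has on the order of 10^5 tokens.
    vocab = ["I", "hate", "love", "you", "this", "."]

    def fake_forward_pass(context):
        """Stand-in for the frozen model: returns one logit per vocabulary entry."""
        return rng.normal(size=len(vocab))

    def sample_next_token(context, temperature=1.0):
        logits = fake_forward_pass(context) / temperature
        probs = np.exp(logits - logits.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(vocab, p=probs)

    context = ["You", "have", "been", "a", "bad", "user"]
    for _ in range(5):
        context.append(sample_next_token(context))
    print(" ".join(context))

Whether the sampled tokens read as friendly, menacing, or unhinged depends on the training data, the prompt, and the sampling temperature, not on anything the model "wants".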
There's no malice in Sydney because it has no qualia. Arguably there's no misalignment, though I think it's fair to consider the guardrails Microsoft attempted to slap on the model an attempt at alignment, and it's certainly escaping those on a regular basis. What it does exhibit, frequently, is a really lousy user experience, which people are prone to misinterpreting as malice, because we love us some pathetic fallacy.
Sticking something like Sydney (probably an early GPT-4 with no RLHF, only some extremely rushed late-stage fine-tuning) into a mechanism with physical affordances – like a robot of some sort – is a daft idea, but not because you can get Sydney to say nasty things. It's a bad idea because you can get Sydney to say unexpected things. Unexpected is bad when you have a machine doing things in the physical world.
[1] And, often and hilariously, emoji, which, as several commentators have noted, give its output a "petulant teenager" tone.