Doing things because we can, without considering if we should.
FTFA:
"AI guru Andrew Ng once said worrying about killer artificial intelligence now is like worrying right now about overpopulation on Mars: sure, the latter may be a valid concern at some point, but we haven't set foot on the Red Planet yet."
With all due respect to AI gurus everywhere, I don't believe this is a valid argument.
Okay, to be fair, "worrying" is probably not productive, but considering it as a potential problem isn't such a bad idea.
It's a little late to start considering the problem once you've already implemented something and it's gone horribly wrong. The very concept of change management is built on this idea, and it applies just as readily to overpopulating Mars as it does to AI going rogue.
In the Mars example, why not work out now what resources are required per person to survive there (including land area, redundant systems for safety, and so on), then calculate a sustainable colony size that leaves room to scale with the inevitable population growth? (I lived in a town where the only things to do on a Friday night involved two TV channels or stupid amounts of alcohol. Unless your colony is gender segregated, you're going to have space babies at some point, even if only out of boredom.)
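To make that concrete, here's a back-of-envelope sketch (in Python, with entirely made-up placeholder numbers; none of these figures are real engineering data) of the kind of calculation I mean:

```python
# Back-of-envelope colony sizing. All figures are illustrative
# placeholders, not real engineering numbers.

HABITAT_AREA_M2 = 50_000        # total pressurized area available (assumed)
AREA_PER_PERSON_M2 = 100        # living + agriculture + life support per person (assumed)
REDUNDANCY_FACTOR = 1.5         # spare capacity for system failures (assumed)
ANNUAL_GROWTH_RATE = 0.02       # space babies happen (assumed 2%/year)
PLANNING_HORIZON_YEARS = 25     # how far ahead we plan (assumed)

# Maximum population the habitat can sustain, with a redundancy margin.
sustainable_capacity = HABITAT_AREA_M2 / (AREA_PER_PERSON_M2 * REDUNDANCY_FACTOR)

# Work backwards: the founding population that grows to full capacity
# over the planning horizon, i.e. capacity / (1 + r)^years.
founding_population = sustainable_capacity / (1 + ANNUAL_GROWTH_RATE) ** PLANNING_HORIZON_YEARS

print(f"Sustainable capacity: {sustainable_capacity:.0f} people")
print(f"Safe founding size:   {founding_population:.0f} people")
```

The numbers are obviously nonsense; the point is that this sort of planning costs almost nothing to do before anyone launches.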
The same is true for AIs. It didn't take long for Facebook's negotiating chatbots to develop their own language, so a small amount of consideration now may well avoid considerable effort to correct an issue later.
To use a (moderately) famous quote: "The avalanche has already started; it is too late for the pebbles to vote."
We haven't triggered an avalanche yet.
It might be a good time to vote.