Re: Evolution and power efficiency
I agree that the arguments are somewhat implausible, if only because actually getting anywhere close to those outcomes is so difficult. I set a few preconditions for the discussion:
"A program capable of having its own goals, understanding the world enough to have a chance at pursuing those goals, and capable of acting on the world enough to be a threat"
Even getting to that point will take a rather long time, and the first condition (having goals of its own) is probably the most difficult. Programs could eventually be connected to a great many systems, but it will be difficult for them to arrive at goals of their own when they have no reason to do so. Sci-fi authors sometimes get around this by having the machines misinterpret goals that humans gave them, but I don't find that particularly likely.