That won't go anywhere, as everybody will be arguing about who gets to lead and why.
Personally, the only rule I want to see burned into the lawbooks is this: if you are "judged" in any way by an AI that can have a potentially adverse effect on your life, you have the right to all of the data that fed into that decision, and your legal representation has the right to inspect the AI's source code and its "environment", that is to say, what it has gleaned from the training data.
My fear isn't that AI will rule the world, or turn up shoehorned into a load of IoT shit. My fear is that AI will be used to make important decisions (from bank loan approvals through to police identity matching) via an opaque system whose output people treat as some sort of gospel truth, with zero accountability. Basically passing the buck to a machine that can't be reasoned with, nor made to explain how exactly it arrives at any particular result. "Computer says NO...".