Re: *A* Brit Expert
The whole premise of the theory is bonkers. A machine is not going to be held "liable" for anything. The police are not going to arrest your car and put it in jail.
Who is held accountable for how the software performs will be determined the same way as who is held accountable for how the hardware performs. There are loads of safety-critical software systems in operation today, and there have been for decades. There is plenty of established legal precedent for deciding liability. Putting the letters "AI" into the description isn't going to change that.
The company that designed and built the system and sold it to the public is 100% responsible for whatever is in their self-driving car (or whatever). They may in turn sue their suppliers to try to recover some of that money, but that's their problem. Individual employees may be held criminally liable, but only if they acted in an obviously negligent manner or tried to cover up problems. The VW diesel scandal is a good analogy here, even if it wasn't a safety issue.
There are genuine legal problems to be solved with respect to self-driving cars, but these revolve more around defining globally accepted safety standards and consumer protection (e.g. who pays for software updates past the warranty period).
The people who have an interest in pushing liability off themselves are dodgy California start-ups: the ones that push out crap that only half-works, are here today and gone tomorrow, and don't have the capital or cash flow to back up what they sell. They might try to buy insurance coverage, but the insurers may get a serious case of cold feet when they see their development practices. Uber's in-house self-driving ambitions are going to run into a serious roadblock from this perspective.