1) AI just isn't that clever. It's snake-oil and statistics, which is why these things roll into ponds.
2) The liability for anything it does - until it's literally self-aware - still sits with the manufacturer.
3) "I, Robot"-esque philosophy aside, if the product takes action that harms, it was in the wrong or designed badly.
4) Though the "through inaction" sub-clause of Asimov's First Law is a well-thought-out literary device, in practical terms it's stupid, impractical, and impossible to implement, and it leads to only one logical conclusion - protecting humans from themselves (hence the Zeroth Law of Robotics!).
Sorry, guys, but this discussion is 50 years too early. At least. And you can't escape liability while you're selling a product that injures someone. You don't even escape liability if you put a real human in a school, say, who then hurts a child. Product manufacturers would love to throw all liability out the window, but if they ever managed it, you'd quite literally have "corporate manslaughter as a service".
The question is moot even with prototype technology.
If you hurt someone, you're responsible - whether you're human or not. Until the devices themselves are self-aware and declared legally independent entities, they cannot take responsibility, so they are just devices produced by a corporation - and that corporation has full liability for anything they do while being operated correctly (as judged by a court).
You can't get away with "Well, the lorry shouldn't have pulled out in front of our car; we don't guarantee we can avoid every collision, even with no driver's hand on the wheel", so the law as it stands is correct.