Re: There must be clarity on who is responsible:
Oh dear.
The whole point of automating systems that make decisions is that no one is to blame, no one gets fired, and no one gets punished (apart from us underlings, and we were always expendable anyway, so not much of a loss). 'AI' is just another way of confusing anyone who tries to find a causal link, or a decision-making chain, between a disaster and an identifiable living person.
I once worked for a company that was very badly managed. The two owner-directors had told one person to do a piece of work that was literally essential to the continuing existence of the company. They had said that if he failed they would 'cut his head off'.
I then went to this actually very good and competent person and asked "do they understand the difference between delegation of authority and abrogation of responsibility?" Before I had finished he had already replied "No." (His work was fine and the company survived another few years.)
Just look at the astonishing number of corporations that settle out of court, without accepting liability, for an undisclosed sum, having forced a non-disclosure agreement on the victims. When Harry Stanley was shot by two armed police officers who thought he had a shotgun (it was a wooden chair leg he had been restoring, in a plastic bag), the verdict was that the officers were not to blame. There was no consideration of what would happen when the armed police officers told someone who did not have a gun to "put the gun down NOW!" Similarly, Jean Charles de Menezes was shot by accident, but the police officer who emptied a magazine of seven bullets from his automatic pistol into the back of his head while sitting on top of him was in full control of himself.
Basically, if you follow the rules and there is a disaster, you will be OK. It, whatever 'it' is, will not have been your fault. The people who wrote the rules will have moved on and there will be no one left to take the blame in person. And if the rules are incomplete, contradictory, unclear, ambiguous, or merely 'commercially confidential' and so cannot be released for general review (like search engine results or social media 'recommendations'), so much the better.
SO: "The committee said there should be clear lines of accountability when AI systems produce harmful or unfair outcomes" is pointless, and it is never going to happen.