Re: Err, wot?
Absolutely this.
There's a lot of hand-wringing going on along the lines of "think of the lawyers!", calls for new laws, etc. However, for the most part it's already covered by existing law. GDPR is just one example, and (so far as I know) there's nothing in GDPR that explicitly restricts it to ordinary IT systems.
My favourite pet hate is the debate about the ethics of self-driving cars that find themselves in a kill-person-A-or-kill-person-B situation. The question that gets asked is: how should the car choose? The actual answer, of course, is that if the car had got itself into a situation in which a fatality was inevitable, then it had already failed by not anticipating that such a situation could arise. At least here in the UK, a human involved in such an accident is likely to be found to have been driving inappropriately for the conditions of the road, and guilty of causing death by dangerous driving.
We've already seen articles about racist recruitment AIs used by hiring departments. Well, there are laws against racial discrimination in most civilised countries, and a company using a racist AI to hire staff is just as guilty under those laws as if the discrimination had been the work of its staff. I think what's interesting is that there is actually a useful role for a racist hiring AI: instead of using it to make decisions, use it to review decisions. If the decisions of the human hiring staff turn out to match those of a known-racist AI, that's a useful warning sign to the company that it's getting things wrong and needs to fix them.
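To make the "review, don't decide" idea concrete, here's a minimal sketch in plain Python. Everything in it is hypothetical (the record fields, the 90% threshold, the idea that you can get yes/no decisions out of both the humans and the biased model); it just shows the shape of the audit: compare the two sets of decisions and flag groups where agreement is suspiciously high.

    from collections import defaultdict

    def audit_against_biased_model(records, threshold=0.9):
        """Compare human hiring decisions with a known-biased model's decisions.

        `records` is a list of dicts with hypothetical keys:
          'group'           - protected attribute category
          'human_decision'  - True if the human reviewers advanced the candidate
          'biased_decision' - True if the known-biased model would have done so

        Returns per-group agreement rates; a rate at or above `threshold` is a
        warning sign that the humans are tracking the biased model's behaviour.
        """
        agree = defaultdict(int)
        total = defaultdict(int)
        for r in records:
            total[r['group']] += 1
            if r['human_decision'] == r['biased_decision']:
                agree[r['group']] += 1

        report = {}
        for group, n in total.items():
            rate = agree[group] / n
            report[group] = {'agreement': rate, 'flagged': rate >= threshold}
        return report

    if __name__ == '__main__':
        # Toy, made-up data purely for illustration.
        sample = [
            {'group': 'A', 'human_decision': True,  'biased_decision': True},
            {'group': 'A', 'human_decision': False, 'biased_decision': False},
            {'group': 'B', 'human_decision': False, 'biased_decision': False},
            {'group': 'B', 'human_decision': True,  'biased_decision': False},
        ]
        for group, result in audit_against_biased_model(sample).items():
            print(group, result)

The point is that the biased model never touches a live decision; it's only used after the fact, as a tripwire.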
It's going to be the same with things like copyright law. If ChatGPT plagiarised someone else's copyrighted material, then the company running that instance of ChatGPT has broken copyright law in some way or other and should face the consequences. Though this is perhaps an example of where a specific AI-regulating law might serve a useful role. Plagiarism conducted by humans is a problem, but generally one knows who the human is, and in principle a court case can be brought and a decision made. The problem with something like ChatGPT is that this could happen on a far larger, swifter and more opaque scale. So a service like this could be regulated by law and required to always cite its sources, making it a lot easier for copyright holders to assert breaches of copyright. It would also alert other users of the service that, if they proceed, they might themselves be unwittingly breaching someone else's copyright. It should not be possible to sustain a defence of "ChatGPT told me" in a copyright case...
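Purely as an illustration of what "must always cite its sources" could look like on the consuming side (a hypothetical response structure, not anything any current service actually returns): the service attaches a list of sources to every generated passage, and the client simply refuses to use anything that arrives without one, which also gives copyright holders something concrete to check.

    from dataclasses import dataclass, field

    @dataclass
    class GeneratedPassage:
        # Hypothetical shape of a response from a regulated text-generation service.
        text: str
        sources: list = field(default_factory=list)  # URLs or citations

    def usable(passage: GeneratedPassage) -> bool:
        """Client-side rule: reject any generated text with no cited sources."""
        return len(passage.sources) > 0

    if __name__ == '__main__':
        ok = GeneratedPassage("A summary of the article...",
                              ["https://example.com/original-article"])
        bad = GeneratedPassage("Suspiciously polished prose with no provenance.")
        print(usable(ok), usable(bad))  # True False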