"Now you can argue that they are, in fact, accomplice to the criminal actions."
You can argue that, and there are arguments that would probably work. The one you chose, however, isn't a great one. The mere fact that something is paid for and a criminal uses it doesn't automatically make the provider an accomplice. If I buy a car, a criminal steals it, and they use it to commit a crime, neither I nor the manufacturer is an accomplice. If I buy a server and a criminal breaks into it, then neither I nor the facility where the server is located is an accomplice. If, on the other hand, I bought the server and arranged for the criminals to use it, I would be an accomplice. OpenAI did not do that with GPT accounts.
If you want to make them an accomplice, it would be easier to argue on the basis of what queries their system will fulfill. It will cheerfully write malware when told that it is malware, for example. Whether that counts as fulfilling criminal requests, or is just a computer doing something that happens to be malicious, is a recipe for endless definitional debates, but many people, including me, would conclude that OpenAI is liable for the things it chose to allow its tool to do.