Someone should nail this to the forehead of anyone working with ML:
"Your model WILL do something unpredictable, and we can't predict when"
Not some wishy-washy line like:
> it's not outside the realm of possibility that they could act in ways that are difficult to predict.
As ML these days is pretty much all neural nets[1] (GANs included), you'll end up with a magic box that is all but guaranteed to do something unpredictable[2]. Why do you think we have so many cautionary tales about Djinn?
[1] Is *anyone* doing anything different, like running some form of rule-inference learning and generating explainable systems? Not that the learned rules are guaranteed sane, but you can at least read them out during the court trial.
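To make that concrete, here's a minimal sketch of the "readable rules" property, using a scikit-learn decision tree as a stand-in for rule inference proper (the toy dataset and model depth are my choices, not anyone's actual system):

```python
# Train a shallow decision tree and dump its learned rules as plain text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

# Every decision path the model can take, as human-readable if/else rules --
# not guaranteed sane, but at least something you can read into the record.
print(export_text(clf, feature_names=iris.feature_names))
```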
[2] Are you absolutely sure your training and data conditioning will ensure the model never sees an input outside its expected range? It's not like feeding numbers into a simple equation; a model can go totally Bursar on you. Look up "glitch tokens", especially if you are offering a cheap and quick - sorry, "affordable" - model-building service ("save money by not starting totally from scratch this time").
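A toy demonstration of that failure mode (assumed setup, not any real pipeline): train a small MLP on inputs drawn only from [0, 1], then ask it about an input a thousand times larger.

```python
# Fit sin(2*pi*x) on x in [0, 1], then probe far outside the training range.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 1))   # training inputs: [0, 1] only
y = np.sin(2 * np.pi * X).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

print(model.predict([[0.5]]))     # in range: close to sin(pi) = 0
print(model.predict([[1000.0]]))  # out of range: a confident, arbitrary
                                  # extrapolation, typically nowhere in [-1, 1]
```

The out-of-range call doesn't fail or complain; the model just answers.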