Very true; "AI" is fundamentally different from traditional software engineering and should never be treated as a drop-in replacement for it. You can never rely on the answer of an "AI" system the way you can rely on the output of a classical algorithm. And this is not something that can be fixed; it is a fundamental property of how such systems work.
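A toy sketch of that contrast (plain Python, nothing here is a real ML system; `heuristic_max` is a made-up stand-in for a learned shortcut):

```python
def classical_max(xs):
    """A classical algorithm: provably correct for any non-empty input."""
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def heuristic_max(xs):
    """Stand-in for a trained model: a shortcut that happened to work on
    the 'training data' (descending lists) but carries no guarantee
    on inputs outside that distribution."""
    return xs[0]
```

On in-distribution input both agree; on anything else the heuristic can silently return garbage, and no amount of bug-fixing turns "usually right" into "guaranteed right".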
That said, the "AI" system should at least not crash outright or allow arbitrary code execution when encountering weird input. Those are traditional bugs and should be fixed as such.
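For instance, a minimal sketch of that kind of ordinary defensive fix, assuming the weird input arrives as an untrusted byte blob (the function name and size limit are hypothetical):

```python
import json

MAX_INPUT_BYTES = 1 << 20  # hypothetical upper bound on accepted input

def load_untrusted_config(raw: bytes) -> dict:
    """Parse an untrusted blob without crashing or executing it.

    Using pickle.loads(raw) here would be exactly the arbitrary-code-execution
    bug described above, since unpickling can run attacker-chosen code;
    JSON parsing cannot. Malformed input becomes a clean ValueError
    instead of a crash.
    """
    if len(raw) > MAX_INPUT_BYTES:
        raise ValueError("input too large")
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError as e:
        raise ValueError("input is not valid UTF-8") from e
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError("input is not valid JSON") from e
    if not isinstance(cfg, dict):
        raise ValueError("top-level value must be an object")
    return cfg
```

None of this makes the model's answers trustworthy; it just makes the surrounding software behave like software, failing predictably on bad input.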