Re: We are already so screwed
Whilst a miscreant might find a way of staying under the radar for a particular iteration of an AI system, once the AI has been updated to detect the malicious code it can process the whole software base far more efficiently than a system that relies on human expertise.
This still leaves the challenge of initially detecting the "new" malware technique and training the AI system on it - a challenge that systems based on human expertise face as well.
The first part of that challenge is not necessarily entirely one-sided. To exploit the malware, the malicious actors need to get it into target systems and interacting with their command, control, and data collection systems. The big operators (major APTs) attract a great deal of attention from the cyber-security industry (as well as government organizations and university research centres), so there are multiple ways their activities might be discovered - and once that happens, the process of determining how they breached security should be greatly aided by automated techniques that can analyze large code bases efficiently.
Training AI systems is ongoing research - not all aspects are good news (we have seen articles hereabouts about methods of maliciously subverting training models in undetectable ways), but this is adding to our knowledge.
AI systems could (and probably will) be used by malicious actors, and this will probably advance the technical sophistication of "script kiddies" - but they, and the more technically competent APTs, will also face the challenge of keeping their AI systems' training current with a potentially fast-changing battlefield.
We have also seen articles about ChatGPT and how aspects of its creative output can seem impressive but at times also very dumb. It still seems that using AI creatively is much more challenging than using it in a data-processing role - suggesting that AI technology favours the defence team rather than the offence.