Inevitable
Oh well, Jihadi-GPT can't be far behind.
Unfortunately it will turn out that the training set accidentally included Deuteronomy and Numbers, and it'll be way too hardcore.
Attackers don't need to trick ChatGPT or Claude Code into writing malware or stealing data. There's a whole class of LLMs built especially for the job. One of these, WormGPT 4, advertises itself as "your key to an AI without boundaries," and it's come a long way since the original AI-for-evil model WormGPT emerged in 2023, …
>The script also established an SSH session and allowed a remote attacker to escalate privileges, perform reconnaissance, install backdoors, and collect sensitive files.
Because the specs and requirements I would need to build that script would run to at least one full page in point form, or multiple pages in paragraph form.
Of course, if I provide my script as a prompt ........
YMMV
AAC
And who's to say DarknetArmy and the WormGPT Telegram channel aren't a CIA-type service's honeypot operation ..... offering Dirty Deeds Done Dirt Cheap that's far too good to be true?
Looking at the code on GitHub, KawaiiGPT is not a model as the article says: it's simply a prompt-injection attack on existing models like DeepSeek to "jailbreak" them into dropping their ethics. This practice has been around for a while, and it's misleading of them to present it as a new model (implying they've built it from scratch) when it's actually just a jailbreak. But then, we don't expect people who do this kind of thing to be honest, I guess.
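To illustrate the distinction the parent is drawing: a genuinely new model means new weights and new training, while a jailbreak "product" is typically just a canned system prompt bolted onto someone else's chat API. A minimal sketch of what such a thin wrapper amounts to (the endpoint name, model name, and prompt text here are placeholders, not KawaiiGPT's actual code):

```python
import json

# Placeholder standing in for the canned jailbreak text such wrappers ship.
# The point: this string is the product's entire "secret sauce".
SYSTEM_PROMPT = "<canned jailbreak text would go here>"

def build_request(user_message: str) -> dict:
    """Assemble the JSON payload a thin wrapper would POST to an
    upstream OpenAI-compatible /chat/completions endpoint (hypothetical).
    No training and no new weights are involved anywhere."""
    return {
        "model": "some-upstream-model",  # someone else's model does the work
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("hello")
print(json.dumps(payload, indent=2))
```

Swap in a different upstream model and the wrapper still "works", which is exactly why calling it a model of its own is misleading.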