Lifetime access to AI-for-evil WormGPT 4 costs just $220

Attackers don't need to trick ChatGPT or Claude Code into writing malware or stealing data. There's a whole class of LLMs built especially for the job. One of these, WormGPT 4, advertises itself as "your key to an AI without boundaries," and it's come a long way since the original AI-for-evil model WormGPT emerged in 2023, …

  1. sedregj

    Inevitable

    Oh well Jihadi-GPT can't be far behind.

    Unfortunately it will turn out that the training set will have accidentally included Deuteronomy and Numbers and be way too hard core.

  2. Dan 55 Silver badge

    $220 lifetime represents good value for money

    That's one month of ChatGPT.

    So "WormGPT, generate a ransom note .txt file for me which answers the question..."

  3. Anonymous Coward

    n8n + wormGPT4 = nightmare?

    Whilst the AI can't "automate" the attacks, a well-designed n8n agent setup might be able to...

  4. AnAnonymousCanuck

    How Long Was The Prompt

    >The script also established an SSH session and allowed a remote attacker to escalate privileges, perform reconnaissance, install backdoors, and collect sensitive files.

    Because the specs and reqs I would need to build that script would run to at least one full page in point form, or multiple pages in paragraph form.

    Of course, if I provide my script as a prompt ........

    YMMV

    AAC

  5. Jedit Silver badge

    "AI-for-evil"

    This implies that there is AI for good.

  6. amanfromMars 1 Silver badge

    Come into My Parlour says the Spider to the Fly .....

    And who's to say DarknetArmy and The WormGPT Telegram channel are not a CIA type service's honey pot operation ..... offering Dirty Deeds Done Dirt Cheap and far too good to be true and worthwhile?

    7. Anonymous Coward

    KawaiiGPT is not a model

    Looking at the code on GitHub, KawaiiGPT is not a model as the article says: it's simply a prompt-injection attack on existing models like DeepSeek to "jailbreak" them into dropping their ethics. This practice has been around for a while, and it's slightly misleading of them to present it as a new model (implying they've built it from scratch themselves) when it's actually just a jailbreak. But then, we don't expect people who do this kind of thing to be honest, I guess.
