ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist

Fake IT workers possibly linked to North Korea, Beijing-backed cyber operatives, and Russian malware slingers are among the baddies using ChatGPT for evil, according to OpenAI's latest threat report. The AI giant said it quashed 10 operations using its chatbot to conduct social engineering and cyber snooping campaigns, …

  1. MOH

    Most of this seems to be the primary use case for ChatGPT?

    1. Like a badger Silver badge

      Perhaps we're lucky that Copilot is such an ineffectual pile of shite?

  2. This post has been deleted by its author

    1. Sandtitz Silver badge

      Re: We urgently need a 'DOGE ministry' when the AfD finally takes office,

      The Ruskies are talking about Ukrainian Nazis, yet they continue to support Nazi sympathisers around the world to create internal havoc in other countries. Divide and conquer, of sorts.

      Of course, Russia has its own neo-Nazi groups, but Putin sees them as a useful tool.

      1. Anonymous Coward

        Re: We urgently need a 'DOGE ministry' when the AfD finally takes office,

        Who is not a Nazi in your book?

        It's truly a pathetic human being that uses that word for anything other than its true meaning. Do you have no respect whatsoever for those who died during the war? Or is it just a question of me, me, me...

        Please, I implore you: before deploying that word, think about what the Nazis truly were, and think of all the freedoms you currently have thanks to your forefathers, who fought and died against them so that you could benefit.

        1. Sandtitz Silver badge
          WTF?

          Nazi?

          "Who is not a Nazi in your book ?"

          The Ukrainian government, the Ukrainian population, the Ukrainian military. For a start.

          "Its truly a pathetic human being that uses that word for anything other than its true meaning. Do you have no respect whatsover for those that died during the war. Or is it just a question of me, me ,me ...

          What the hell are you talking about? I wrote about 'neo-Nazi' groups and 'Nazi sympathisers'. You don't seem to understand the difference between the words 'neo-Nazi' and 'Nazi'. Educate thyself.

          "Please, I implore you, before deploying that word think about what Nazis truly were and think of all the freedoms that you currently have due to your forefathers who fought and died against them so that you could benefit."

          Why did you write this important request of yours anonymously? Do you not stand by your words?

        2. Anonymous Coward

          Re: We urgently need a 'DOGE ministry' when the AfD finally takes office,

          Point taken.

          However, the threat posed by fascist regimes needs a strong and meaningful label.

          The best legacy is to never forget and to point out unacceptable behaviour immediately.

          If it quacks like a duck…

          1. Anonymous Coward

            Re: We urgently need a 'DOGE ministry' when the AfD finally takes office,

            "However, the warning posed by fascist regimes needs a strong and meaningful label.

            The best legacy is to never forget and to point out unacceptable behaviour immediately."

            I wholeheartedly agree, but let's give it its own name rather than cause any confusion with existing regimes.

            If we use this as an example, then 'Antifa' should also be held up to the neo-Nazi moniker. I don't see the difference between them and the far right; they both appear to be equally dangerous. Strangely enough, Antifa seems to be the more violent of the two.

  3. elsergiovolador Silver badge

    Blindspot

    OpenAI’s report is a magician’s trick: point at North Korean coders and Russian trolls while the real con happens in plain sight.

    Corporations use the same tools to run fake job ads, harvest CVs, and micro-target insecurities for profit. Political campaigns generate synthetic outrage at scale, flooding the discourse with weaponised noise. None of this is flagged - because it’s lucrative, legal, and dressed in a suit.

    You’re not watching a threat report. You’re watching PR - covering for the fact that AI isn’t disrupting power. It’s entrenching it.

    "Look over there!" the article screams - while your data, agency, and attention span vanish.

  4. Grindslow_knoll

    Unlikely

    Why would someone with harm in mind risk exposure by using a traceable online tool with zero confidentiality?

    Instead they'd run it locally and air-gapped; a decent quantized model is cheaper and more confidential to run offline.
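
    Something like this would do the job (a rough sketch, assuming llama-cpp-python is installed and you already have a quantized GGUF file on disk; the model filename and prompt are just examples):

      # Rough sketch only: a quantized model running entirely offline
      # via llama-cpp-python. Model file and prompt are illustrative.
      from llama_cpp import Llama

      llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # local GGUF file
                  n_ctx=2048)                                      # modest context window
      out = llm("Draft a polite follow-up email to a supplier.", max_tokens=256)
      print(out["choices"][0]["text"])

    No network, no account, no prompt logs leaving the box - which is rather the point.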

    So to me this reads like PR messaging, not actually mitigating anything meaningful.

    1. Anonymous Coward

      Re: Unlikely

      The reason is that it's cheap and easy, and there are zero consequences. Granted, running a model offline/locally is also reasonably cheap and easy, but it's not cheap and easy at scale.

      1. Sandtitz Silver badge

        Re: Unlikely

        Right. They can easily post misinfo which many people will read and some will believe. Even if the misinformation post is totally debunked later on, it won't reach everyone.

        Death by a thousand cuts.

        It is much cheaper for Russia to try to bring other countries down to its level than to invest in its own population. The elite excepted, of course.

        1. TheMaskedMan Silver badge

          Re: Unlikely

          "Even if the misinformation post is totally debunked later on, it won't reach everyone."

          And even if it did reach everyone, many will simply refuse to accept the debunking. It will be brushed off as fake news, deep state propaganda and the like.

          Many, many people believe only what they want to believe; little details like truth, logic and evidence are completely lost on them. See flat-earthers, moon-landing deniers, believers in various sky fairies, etc.

  5. nobody who matters Silver badge
    Facepalm

    So, the supposed 'AI' is being used by bad people to do bad things.

    I would never have guessed.

  6. Anonymous Coward

    A tool is just a tool, but...

    ... maybe it's just a tool that's really good at doing bad things?

    1. Anonymous Coward

      Re: A tool is just a tool, but...

      A tool that produces large quantities of relatively convincing shite is only ever going to be useful for nefarious purposes.

  7. JimmyPage Silver badge
    Mushroom

    You know what ?

    If someone can knock up a fake cv ("resume" to the interlopers) and get a job with it, best of luck.

    Coz I know several folk with real experience on a real cv who can't get an interview.

    1. Like a badger Silver badge

      Re: You know what ?

      Having real, high-quality experience is not enough; it needs to be tailored to what the hiring company is asking for.

      In my experience most CVs are written purely from the author's perspective, and place great emphasis on what matters to the candidate, rather than thinking "what skills and experience does the job ad ask for, and how do I make it easy for the poor sod reading 200 CVs to pick out mine?"

      The people who get the interviews aren't the best candidates, they're the ones who did the least bad job of showing they met the essential criteria.

      1. Anonymous Coward

        Re: What essential criteria?

        "showing they met the essential criteria."

        I suspect the AI is better trained to read between the lines of the job requirements, because what's in the job description and the ad is never the real story.

        Or do companies really find and hire 20-year-olds with a decade of experience in last year's techno hype?

        1. Anonymous Coward

          Re: What essential criteria?

          All too often the "essential criteria" are impossible, like 20 years' experience with Swift, or a decade's experience of using Windows 12.

  8. Lost in Cyberspace

    Also seeing it used to scope out businesses

    I've noticed a trend of more scammers using ChatGPT to target businesses, rather than just going for a quick buck.

    Lengthier enquiries, more convincing documents, and it's harder to convince the business that their 'potential customer' never existed.

    By the time they notice something is wrong, they've been phished and many of their colleagues and customers have already been duped or compromised too. Sales, invoice payments and wages have been diverted without raising suspicion. And some still refuse to believe it was a scam at all.

  9. Anonymous Coward

    Maybe AI is a bit crap

    Answers on a postcard…

  10. happyuk

    These are genuine issues because they lower the barrier for people with limited technical skill to engage in harmful activities.

    ChatGPT can be used by dunces for evil, just like any other technology.

    However, let's never lose sight of its massive upside. It enables most users - that's you and me - to work smarter and be more productive.

    The tool itself is neutral; its impact depends on how people use it.

    1. Khaptain Silver badge

      You don't become any smarter by using ChatGPT. I would in fact say the contrary.
