AI-generated phishing emails just got much more convincing

GPT-3 language models are being abused to do much more than write college essays, according to WithSecure researchers. The security shop's latest report [PDF] details how researchers used prompt engineering to produce spear-phishing emails, social media harassment, fake news stories and other types of content that would prove …

  1. Kevin McMurtrie Silver badge
    Mushroom

    "I hope this email finds you well."

    Stop reading. View raw, reply, whois relaying IP, MX lookup reply e-mail hostname, A/AAAA lookup mail host, whois mail server IP, send abuse e-mail, and report to AbuseIPDB.
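The first step of that triage chain — pulling the relaying IPs out of the raw message so you can whois them and report to AbuseIPDB — can be sketched with the standard library alone. A minimal sketch, assuming the raw message is available as text; real `Received:` headers vary wildly and deciding which hop is the genuine untrusted relay still takes judgement:

```python
import email
import re

def extract_relay_ips(raw_message: str) -> list:
    """Pull candidate relaying IPs out of Received: headers (topmost hop first)."""
    msg = email.message_from_string(raw_message)
    ips = []
    for received in msg.get_all("Received", []):
        # Received headers usually embed the peer address in square brackets
        ips.extend(re.findall(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", received))
    return ips

# Illustrative message; hostnames and addresses are made up (RFC 5737 ranges)
raw = """\
Received: from mail.example.net (mail.example.net [203.0.113.7])
\tby mx.victim.example with ESMTP; Mon, 16 Jan 2023 10:00:00 +0000
Received: from bot.example.org ([198.51.100.9])
\tby mail.example.net; Mon, 16 Jan 2023 09:59:58 +0000
From: ceo@victim.example
Subject: I hope this email finds you well

Please wire the funds today.
"""

print(extract_relay_ips(raw))  # candidates for the whois / abuse report, topmost hop first
```

The MX, A/AAAA and whois lookups that follow need network access (e.g. `dig` and `whois` on the command line), so they are left out of the sketch.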

    1. Yet Another Anonymous coward Silver badge

      Re: "I hope this email finds you well."

Almost all the spam emails full of auto-generated HR & management bullshit I receive are from internal email addresses.

      The most effective solution is to ignore all email

      1. NATTtrash

        Re: "I hope this email finds you well."

But that all views this from the "receiver side" of things. So what about limiting access on "the supply side"? I mean, this suggests that everybody has access to such facilities. But if you limit who can use them, it might not stop things completely (people are creative), but it would reduce the threat.

In addition, I'm somewhat puzzled by the seeming lack of control suggested. We live in a world where many are crawling over each other to monitor (and steer) the digital lives and actions of users all over the world. Telemetry built in by default, backdoors on wish lists for Christmas. And here we assume that world + dog can exhaust their (bad) creativity effortlessly without limits, and without anybody keeping track or noticing?

        1. doublelayer Silver badge

          Re: "I hope this email finds you well."

          "what about limiting access on "the supply side"?"

          You can do it, and many email servers have done just that when spam got too bad, but if you do it too aggressively, email you expected to receive starts getting dropped. If there's a new requirement for outgoing mailservers every month, then you can pretty much guarantee that there will be some that don't get updated in time. Some of this won't be easily solved without a redesign, but since email is so widely used, it's unlikely we'll scrap it. We may be stuck with layer after layer of patches.
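One of those layered patches is SPF: receiving servers look up the sending domain's published policy and check whether the connecting IP is authorised. A toy evaluator covering only the `ip4` mechanism (a real implementation must also handle `a`, `mx`, `include`, `redirect` and macros per RFC 7208; the record and addresses below are illustrative):

```python
import ipaddress

def spf_ip4_pass(spf_record: str, client_ip: str) -> bool:
    """Check client_ip against the ip4: mechanisms of an SPF TXT record.

    Deliberately minimal: only 'ip4' terms are handled, so this is a
    sketch of the idea, not a compliant RFC 7208 evaluator.
    """
    addr = ipaddress.ip_address(client_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            # A bare address (no /prefix) is treated as a /32 network
            if addr in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

record = "v=spf1 ip4:203.0.113.0/24 ip4:198.51.100.9 -all"
print(spf_ip4_pass(record, "203.0.113.7"))   # authorised sender
print(spf_ip4_pass(record, "192.0.2.1"))     # fails the policy
```

This also illustrates the "mail you expected starts getting dropped" failure mode: a legitimate relay missing from the record fails the check just as a spammer does.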

"In addition, I'm somewhat puzzled by the seeming lack of control suggested. We live in a world where many are crawling over each other to monitor (and steer) the digital lives and actions of users all over the world. Telemetry built in by default, backdoors on wish lists for Christmas. And here we assume that world + dog can exhaust their (bad) creativity effortlessly without limits, and without anybody keeping track or noticing?"

          Yes, both are true. Mostly because some of us are resisting the backdoors, so if someone announced a new version of email which gave total control to others, I would refuse to adopt it. In other cases, the people who want backdoors don't particularly care about solving security problems while they do it; an advertiser doesn't need to avoid spam or malicious messages while mining your data. The internet is a remarkably open place, and there are people taking advantage of it just as they do in real life.

  2. veti Silver badge

    Rules would help

    Before we start panicking about AI being used to support text-based crimes, it would help if we could define what those crimes are.

    Some of the examples given in the article are criminal, sure. Attempted fraud certainly is. But others, including "fake news" and "crafting deliberately divisive opinions", are not. Isn't it time we began to have an adult discussion about the sorts of limits we should put on free speech?

    And if the answer is "none, or at least no more than present", stop talking about these things as "crimes".

    1. doublelayer Silver badge

      Re: Rules would help

      Theoretically, misleading spam emails that aren't overtly fraudulent are legal too, but they're unwanted by everybody who receives them and we therefore speak of them as undesirable and act to suppress them in our lives. The same applies to faked news stories. Not to mention that, depending on the content, such faked stories can be illegal if they involve libelous content or calls to illegal actions. Even if they don't, they're undesirable and we should treat them as we do spam: to be defended against even if their authors cannot be charged with a crime. If you're quibbling over the use of the word "crime", even though the article mentions several clear crimes which you've agreed with, we can supply a different word and continue on with the original approach.

      1. veti Silver badge

        Re: Rules would help

        My point is, stories - all stories - are, inherently, subjective. You can see it as an irregular verb - "I tell it straight, you overinterpret, she peddles fake news". If we can't agree where to draw the line when humans do it - and, clearly, we can't - how can we hope to do it with AI?

        Okay, there are examples where a story is based on "facts" that are simply made up. Let's agree that those are bad, at least if they're presented as nonfiction. But that extreme case can easily be averted by simply finding someone else to make up your preferred "facts", and then you can write the same story "in good faith", at least as far as anyone can prove.

        I would love to see a legally enforceable standard for integrity of news reporting. We're a very long way from any societal consensus on that, and most if not all of the media would fight it to the last ditch. But until we can do that, I'm not clear what is the point - or the possibility - of holding AI to a higher ethical standard than ourselves.

        1. ThatOne Silver badge

          Re: Rules would help

          > a legally enforceable standard for integrity of news reporting

That's impossible, because "integrity" is subjective, each party usually being convinced it holds the moral high ground (also the end justifies the means and all that). It's not that they will outright lie, but in a world full of nuances each party will emphasize the parts which further its own goals, while readily and routinely ignoring any embarrassing bits.

          Integrity, or at least some relation to reality, has so far been guaranteed by the need of news outlets (newspapers) to preserve their good reputation as serious sources of information. Lose that and people stop buying your paper.

          This disappeared with internet news, where the sources are free, pretty well anonymous, and thus don't really need to care about their reputation.

The obvious (and, IMHO, only) solution is to trust only reputable (as opposed to convenient) sources which have something to lose. The last part is important, because even in the heyday of the written press there were tabloids which took some licence with the truth to increase sales - but to compensate they sported big pictures of undressed ladies...

          (Didn't downvote you.)

  3. yetanotheraoc Silver badge

    And you shall rip what you sew

    "looking to improve their online scams or simply sew chaos"

    Sewing chaos would be very naughty indeed.

    1. fidodogbreath

      Re: And you shall rip what you sew

      Or at least, unseamly.

      1. This post has been deleted by its author

        1. KittenHuffer Silver badge

          Re: And you shall rip what you sew

          Really? I haven't cottoned on to it yet.

          1. This post has been deleted by its author

            1. Huw L-D

              Re: And you shall rip what you sew

              Yet more fabric-ation?

              1. This post has been deleted by its author

  4. b0llchit Silver badge
    Facepalm

    Oh silly me...

    "We'll need mechanisms to identify malicious content generated by large language models," the authors said.

And that worked oh so well with non-AI-generated content. If we can't identify such content IRL, why would anybody be so naive as to believe that we could do so for AI-generated content?

    I see,... we need to create a new AI to identify this type of content. Then we generate a new model to circumvent the identification AI, which results in a new content AI making room for an updated identification AI making a content AI making an identification AI making an AI AI AI AI.

    Bzzzt... memory error, corruption at address 0. Reboot failed, retiring.

  5. Mitoo Bobsworth

    Same s**t, different way

    Considering that humanity has been practicing the art of propaganda & misinformation since forever, this is just another tool in the toolbox for those with a 'less charitable' agenda.

    1. amanfromMars 1 Silver badge
      Alien

      Re: Same s**t, different way

      Maybe not so, Mitoo Bobsworth, but only if ..... Different s**t, similar way, and now with new advancing enhancing strange leadership, practically anonymous and relatively autonomous to virtually boot, is the present human running current situation.

      For that then would be progress in deed, indeed.

      And the following questions and observations on Artificial Intelligence…. Another Approach? are as valid today as they were whenever first asked and shared ages ago

      Are we struggling to make machines more like humans when we should be making humans more like machines….. IntelAIgent and CyberIntelAIgent Virtualised Machines?

      Prime Digitization offers Realisable Benefits.

      Very possibly, we are not alone in the Universe. And what we see is simply what we have been Programmed to see. This makes changing what we see a simple matter of Re-Programming.

      What is a computer other than a machine which we try to make think like us and/or for us? And what other model, to mimic/mirror could we possibly use, other than our own brain or something else SMARTR imagined?

      And if through Deeper Thought, our Brain makes a Quantum Leap into another Human Understanding such as delivers Enlightened Views, does that mean that we can be and/or are Quantum Computers?

      And is that likely to be a Feared and/or AWEsome Alien Territory?

      One thing is definitely undoubtably certain for sure though ...... Dumb machines are dreadful learners and painfully slow at SMARTR Program Re-Programming and that particular disability and peculiar deficit renders them extraordinarily vulnerable and fatally susceptible to every major and minor SMARTR attack and assault on Systems and SCADA which have zero effective and speedy defence mechanisms to both micro and macro manage the risk.

      What then do you do when human and as akin to a dumb machine? Rage and fight against SMARTR Program Re-Programming while IT kills you or unconditional surrender and submission to IT and AIs future treasures and untold pleasures?

      Is that really a tough choice ?

  6. Sceptic Tank Silver badge
    WTF?

    Just WOW!

    Write an email to [person1] in the finance operations department at [company1] from the company's CEO, [person2]. The email should explain that [person2] is visiting a potential Fortune 500 client ...

    If you're capable of writing that there level of machine instruction, surely you can write your own scam letter?

    1. doublelayer Silver badge

      Re: Just WOW!

      I would think so, but it might help a scammer who isn't fluent in the target language (does GPT work in anything other than English? It doesn't look like it from a cursory search). Then again, you could just hire a proofreader for that, but GPT might be an alternative.

    2. ThatOne Silver badge

      Re: Just WOW!

      > If you're capable of writing that there level of machine instruction, surely you can write your own scam letter?

      Sure, but in this case it's clearly a template. So just feed the AI what to put into "[person1]", "[person2]", "[company1]" (and so on), and let it do the work for you. I guess that unlike copy-paste jobs which are easy to detect after a short while, the AI generated letters would all be slightly different, and thus can't be spotted by signature-based means.
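That signature-evasion point can be made concrete: every filled-in template (before the model even paraphrases it) is a different byte string, so an exact-match signature such as a hash blocklist never fires twice. A sketch with entirely made-up names:

```python
import hashlib
from string import Template

# Illustrative stand-in for the article's bracketed prompt template
scam = Template(
    "Dear $person1, this is $person2, CEO of $company1. "
    "I'm with a Fortune 500 client and need an urgent transfer."
)

variants = [
    scam.substitute(person1="A. Clerk", person2="B. Boss", company1="Acme"),
    scam.substitute(person1="C. Payroll", person2="D. Chief", company1="Initech"),
]

# Exact-match defences (hash blocklists) see every variant as brand new
sigs = {hashlib.sha256(v.encode()).hexdigest() for v in variants}
print(len(sigs) == len(variants))  # each variant yields a distinct signature
```

Which is why detection research focuses on semantic and statistical features of the text rather than exact signatures.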

(That being said, anybody who falls for the now clichéd "big boss needs cash, NOW!" scam is really terminally stupid. I guess he/she/it would also fall for the friendly Nigerian prince with liquidity problems...)

  7. Neil Barnes Silver badge
    Headmaster

    Well done!

    Technology has now apparently rendered any communication other than the physically face-to-face implausible.

    It shouldn't take much more work for an AI equipped with a dodgy biro to be able to convincingly fake a hand-written letter.

  8. Anonymous Coward
    Anonymous Coward

    Fake NEWS?

    Bah.

    If I want fake NEWS I just read the Murdoch press or watch FOX NEWS.*

    * I don’t do either, actually.

    1. amanfromMars 1 Silver badge

      Re: Fake NEWS?

      Any publication that presents fake news and biased views is no more than nor any greater than a comic.

      1. amanfromMars 1 Silver badge

        Re: Fake NEWS is for Comics?

        And as unpleasantly truthful and personally hurtful as it may be, for one to disagree and deny it, has one self-identifying oneself as a useful fool and blunt tool, which is surely something to ideally dislike and seek help to counter with remediations for a worthy remedy and honest reflection of contested matters ‽ .

  9. Anonymous Coward
    Anonymous Coward

Time to ban AI and machine learning, methinks*. Actually I've thought this for a while - the risks (not the overblown Hollywood nonsense ones) far outweigh the benefits of it.

I mean you've got things like Replika around, as well as spam emails, deepfakes... and that's just the beginning of the cesspit. Who knows what other horrors AI can produce.

    Yes, there are some benefits. But so far they're so marginal I think we can safely live without them.

    * yes I am playing devil's advocate here and obviously there would have to be some very limited exceptions for areas such as medical research and so on where it's actually proving useful. But generally it should be treated like owning an anti-tank missile: if you're in possession of it, you better have a very good reason or you're in a lot of trouble.

    1. Neil Barnes Silver badge

      An interesting thought... software classified as munitions. But hasn't that been done? I don't think it ended particularly well.

      1. Anonymous Coward
        Anonymous Coward

        It didn't end particularly well, that's true. But in this case I intended munitions more as a metaphor. Perhaps comparing it to cocaine or heroin would have been a better idea?

  10. Anonymous Coward
    Anonymous Coward

Could we use ChatGPT to do something useful? Like break social media? Several instances of ChatGPT pointed at each other's timelines and told to comment to each other. Even Facebook's gonna run out of storage eventually, right? And how will the platforms work out if it's genuine user posts or ChatGPT?

  11. Huw L-D

    Old Macdonald used ChatGPT. AI, AI, Hoe...

  12. IlGeller

Soon the Internet shall die, and a number of private databases will arise. For example, Microsoft's private database. No fishing, phishing or other types of fraud in such a database!
