It's only a matter of time before LLMs jump start supply-chain attacks

Now that criminals have realized there's no need to train their own LLMs for any nefarious purposes - it's much cheaper and easier to steal credentials and then jailbreak existing ones - the threat of a large-scale supply chain attack using generative AI becomes more real. No, we're not talking about a fully AI-generated …

  1. anthonyhegedus Silver badge

    Protection must improve first

    It would seem that the only way to combat ever-better crafted-by-AI attacks is to use AI-powered defences.

    - better EDR and antimalware based on AI

    - AI-based email screening

    - better secure caller ID verification

    - AI-based software (on phones) to screen incoming calls (and text messages)

    This is going to become such a serious issue that maybe the above things need to start to be developed right away. And not just for businesses, but for consumers. Unfortunately, at the moment, these things aren't powerful enough - and of course they cost money. I can't see a scenario where the cost of services won't increase, as service providers slowly up their game in response to this threat.

    Security technology has always played behind the curve against threat technology, but the danger and risk of AI-powered attacks is so great now that we can't really afford to be behind the curve any more.

    We're in for a rough ride!

    1. cyberdemon Silver badge
      Mushroom

      Re: Protection must improve first

      Er, perhaps, but if you think AI is coming to your rescue on the defensive side, you are sorely mistaken.

      AI-based defences are stochastic and will fail a certain percentage of the time. But there are so many exposed attack surfaces to defend, that even if you are good at blocking randomised repeat attacks, chances are that one will get through, and one is enough.

      Basically, the inherent randomness/unreliability of the bullshit machines is bad for the defenders but good for the attackers.
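
      The arithmetic behind that claim is easy to sketch (a toy illustration, not anyone's actual threat model): if a defence blocks each attempt independently with probability p, the chance that at least one of n attempts slips through is 1 - p^n, which races towards certainty as n grows.

```python
# Back-of-the-envelope version of the point above: even a defence that
# blocks 99.9% of attempts will almost certainly let one through at scale.
def breach_probability(p_block: float, n_attempts: int) -> float:
    """Chance that at least one of n independent attempts evades a
    defence that blocks each attempt with probability p_block."""
    return 1 - p_block ** n_attempts

print(breach_probability(0.999, 10_000))  # ~0.99995: near-certain breach
```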

      So we are entering a world where the only defence is attack, and we all know that this path leads to Mutually Assured Destruction.

      The makers of Battlestar Galactica never knew how poignant Bill Adama's message would be.

      1. HuBo Silver badge
        Windows

        Re: Protection must improve first

        Yeah, President Joseph Robinette Biden Jr. should probably warn us all about the dangers of this emerging AI-industrial complex (AIiC) in his upcoming farewell address (Eisenhower-style), and possibly include a paragraph about Mutual assured Cyber-Destruction (MaC-D) as well. The upcoming no-holds-barred Trump-Musk wrestlemania pantomime is just about guaranteed to bring us so much closer to cyber-thermonuclear armageddon and massive crypto-currency induced market crashing (and skyrocketing all-collar criminality) that it's not even funny.

        After all, couldn't TikTok represent a countdown to our core cyber meltdown and China syndrome? As the ole sayin' goes: "whoever bytedances in your cyber-AI infrastructure will eventually spit in your water supply pool, healthcare, and grave too". And as Morin opined (nearly), "there's a will, there's a way", and we're laying down the AI foundation model superhighway for wonton delivery of some perty bad gastro as we speak ...

        Better safe than sorry in this, imho.

        1. anthonyhegedus Silver badge

          Re: Protection must improve first

          It was unintelligible to me, but then I had the brilliant idea just to use ChatGPT 4o1 to translate that into a somewhat more usable set of words.

          Here’s a clearer, more straightforward interpretation of that passage:

          The writer suggests President Biden should warn everyone about the dangers of a new “AI-industrial complex,” much like President Eisenhower once warned about the military-industrial complex. They also mention the idea of “Mutual Assured Cyber-Destruction,” meaning our reliance on interconnected technology could lead to catastrophic digital conflict.

          They argue that the hyped-up “Trump–Musk” spectacle could push us towards a kind of “cyber-thermonuclear meltdown,” causing major cryptocurrency market crashes and boosting criminal activity. TikTok is portrayed as a ticking time bomb that could lead to a severe cyber meltdown—what they call a “China syndrome” scenario—potentially contaminating critical infrastructure, water supplies, healthcare systems, and more.

          They also suggest that by building powerful AI “foundation models,” we may be unintentionally laying the groundwork for something harmful. The overall message is that it’s wiser to be safe and cautious about these developments than to risk a disastrous outcome.

          1. HuBo Silver badge
            Windows

            Re: Protection must improve first

            Hot damn! I couldn't have said it better myself (or not)!

            1. Doctor Syntax Silver badge

              Re: Protection must improve first

              Yes, we know that.

          2. Anonymous Coward
            Anonymous Coward

            Re: Protection must improve first

            LOL. There's a reason Biden has been practically invisible since the election. The only way he could possibly give such an address is if it's AI-generated.

    2. Anonymous Coward
      Anonymous Coward

      Re: Protection must improve first

      It’s easy. Like CAPTCHA before it, we simply need a test LLMs will struggle to pass.

      Examples:

      How many “R”s in raspberry?

      How to keep cheese from falling off a pizza?

      Are LLMs totally over-hyped, and is there a collapse coming that will make the dot-com bubble burst look like a non-event?
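
      The joke with the first test is that it is trivial for ordinary code, which deals in characters, while LLMs see subword tokens rather than letters. A minimal sketch of the deterministic version:

```python
def count_letter(word: str, letter: str) -> int:
    """Deterministic letter count: trivial for code, historically tricky
    for LLMs, which process subword tokens rather than characters."""
    return word.lower().count(letter.lower())

print(count_letter("raspberry", "r"))  # 3
```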

      1. anthonyhegedus Silver badge

        Re: Protection must improve first

        I just tried and had no particular problem with these:

        1) How many “R”s in “raspberry”?

        There are 3: r-a-s-p-b-e-r-r-y.

        2) How do you keep cheese from sliding off a pizza?

        • Use less watery sauce: Excess liquid can make cheese slip.

        • Choose cheese with lower moisture: For instance, part-skim mozzarella rather than very oily cheese.

        • Avoid overloading: Too much cheese or too many toppings can cause sliding.

        • Pre-bake or par-bake: A slightly firmer crust can help the toppings adhere.

        3) Are Large Language Models (LLMs) over-hyped, and could there be a collapse worse than the dot-com bubble?

        While LLMs are certainly attracting a lot of attention and investment—some of which may be excessive—it’s not clear that a complete industry-wide collapse is inevitable. There could be corrections or failures for companies relying on unsustainable business models, but many experts believe the underlying technology is here to stay and will keep evolving. It’s best to treat the hype with healthy scepticism but also recognise the genuine value LLMs can deliver in various domains.

        1. Anonymous Coward
          Anonymous Coward

          Re: Protection must improve first

          Someone has been drinking the Kool-aid.

          The first two were examples of mistakes this technology has made previously; I cannot be bothered to find current ones, but there are current ones.

          If you are going to defend LLMs (predictive texting) then you cannot deny the existence of hallucination, which is what these are: the technology comes up with a string of words that meets the criteria but is factually incorrect. Things like "LLMs are AI" are an example of hallucination (or drug-taking, or just plain stupidity)

          As for the third one I hope you didn't invest all your craptocurrency in this technology over a web3 platform, or you will be struggling to pay for the electricity to connect to ANY technology in the very near future. It is going to collapse, and badly

          It does not work properly today and it is contaminating the data it will be trained on in the future so it ain't going to get any better!!

      2. Snowy Silver badge
        Joke

        Re: Protection must improve first

        How many “R”s in raspberry?

        There are no R's in raspberry, pirates use R's

        How to keep cheese from falling off a pizza?

        Nails, or if you must, tacks.

        Are LLMs totally over-hyped, and is there a collapse coming that will make the dot-com bubble burst look like a non-event?

        They will be replaced with VLLMs, and those with ELLMs

      3. JimmyPage
        Pirate

        Re: a test Llms will struggle to pass

        Here's one:

        "Remove all the crud from this webpage".

        No "AI" model I have used has come anywhere near that.

        1. cyberdemon Silver badge
          Terminator

          Re: a test Llms will struggle to pass

          It would respond in a similar way as if you had asked "Remove all the crud from this planet"

          Or possibly, it would return the same webpage, but with your AJAX REST API removed... (I'll get my coat)

    3. Doctor Syntax Silver badge

      Re: Protection must improve first

      "We're in for a rough ride!"

      We certainly are if we're relying on AI for protection.

  2. DS999 Silver badge

    The more expensive allowing your LLM resources to be compromised is

    The more incentive there will be for companies to invest in better securing them. Because when you can throw out numbers like $46K and $100K per day, that's something quantifiable to bosses. An immediate cost that will be incurred no matter what.

    One of the reasons why companies have such poor security in general is that all the costs are potential, or fall onto others. If someone breaks into the company web server and steals customer login credentials or their credit card numbers, that's a cost they don't have to bear. Yeah maybe there's reputational damage but that's mostly theoretical. You can't put a dollar figure on it to plug into a beancounter's spreadsheet and determine what level of investment in protecting against it is justified in ROI terms. And importantly, the more other companies suffer similar attacks, the less your company's reputation is hurt when you're attacked. It happens to everyone, there's nothing to be done, customers just have to suffer the consequences of our shortcomings!

    So long as you can put a nice round figure on it, one high enough that you're talking about such large losses per day PER ATTACKER (because there's no reason to believe you'll have just one such attack), there will be more investment in securing their LLM resources to prevent such losses. They still won't invest in preventing attacks that compromise customer data, our data. Because so long as they don't bear the cost of the consequences, they don't have any incentive to invest anything to prevent it. Even if they are ransomed they mostly don't care, because the law stupidly allows them to buy ransomware insurance, making THAT a fixed cost (with only a theoretical and unknown increase in the future if they are successfully ransomed).

    1. Like a badger

      Re: The more expensive allowing your LLM resources to be compromised is

      If big quantifiable risks motivate corporations, then they'd already have implemented proper security. However, despite the multi-billion costs of digital attacks, companies keep getting hit, and the authorities show a complete incapacity to stem the flow of attacks.

      So I don't think that a new attack surface for the bad guys will result in any change in corporate attitudes to ITsec.

      1. MachDiamond Silver badge

        Re: The more expensive allowing your LLM resources to be compromised is

        "However, despite the multi-billion costs of digital attacks, companies keep getting hit, and the authorities show a complete incapacity to stem the flow of attacks."

        If the attack is targeted at PII, there doesn't seem to be much downside to that as credit monitoring purchased in bulk is dirt cheap. A B to C business losing a customer list isn't a big deal to them. A B to B company that loses their customer, vendor and supplier list might be in a pickle. The same could be said for a media production company. Having that information accessible online in arbitrary quantities is the issue. Any true need to analyze a company's data from an outside location should really be locked down so even an AI-assisted attack isn't an issue.

        One of my favorite stories is "A Logic Named Joe" by Murray Leinster. It's an amazing piece of work considering when it was written. Joe, a computer in modern parlance, wakes up and just wants to be helpful. Many people Joe helps are just looking to help themselves. The parallels to this article are thought provoking.

        1. Emir Al Weeq

          Re: The more expensive allowing your LLM resources to be compromised is

          Just read "A Logic Named Joe" on your recommendation. Thank you, it was an excellent story.

          To anyone who's not read it: it only takes about 15 minutes and it's time well spent.

          1. munnoch Silver badge

            Re: The more expensive allowing your LLM resources to be compromised is

            Very enjoyable read.

            "If Joe could be tamed, somehow" ... but until then best leave him turned off.

            1. MachDiamond Silver badge

              Re: The more expensive allowing your LLM resources to be compromised is

              "... but until then best leave him turned off,"

              Especially if you are married and have any attractive exes who might try to re-connect.

      2. DS999 Silver badge

        Re: The more expensive allowing your LLM resources to be compromised is

        But companies aren't bearing the costs of those attacks, they are externalities. It is like piping toxic waste into the river by your plant or coal ash out your smokestack. It's the problem of whoever is downriver and downwind, not yours.

    2. elDog

      Re: The more expensive allowing your LLM resources to be compromised is

      That means that having a high price tag on defending your resources makes them somehow more defensible. Pardon me, but that's rather silly.

      A very cheap social-media hack or trusted-employee misstep can cause a whole world of hurt.

      The people in business suits that roam the upper floors only know about things that cost a large percentage of the gross for the corporation. They don't care/invest in something that runs under 0.1% of the total spend.

      Until SECURITY is the number one priority for these companies, they will be attacked and penetrated and damaged.

      1. Richard 12 Silver badge

        Re: The more expensive allowing your LLM resources to be compromised is

        The OP is arguing for the inverse.

        When not defending your LLMs has an extreme, even existential cost, then businesses are likely to spend more on defending them - or shut them down entirely as the 'business case' no longer exists.

        The difficulty lies in the fact that most executives have absolutely no understanding of the risk. Or indeed what "risk" even means.

    3. Anonymous Coward
      Anonymous Coward

      Re: The more expensive allowing your LLM resources to be compromised is

      Agree with your logic but you picked a bad example with this phrase: "steals customer login credentials or their credit card numbers"

      The results of a card data breach can be very expensive:

      The contract with the acquirer and on up the chain to the card brands (Visa et al) could incur significant costs for:

      - Forced remediation

      - Forced forensic investigation

      - "Fines" under the contract for every card impacted

      - "Fines" under the contract for being non-compliant with PCI

      Then there are the legal implications as a card number is classified as PII, meaning DP / GDPR implications

      Plus, if the card brands or the acquirer are not happy and they pull the plug on card processing that's probably the business gone altogether

      I do wish people would realise that as card data protection is - mainly - contractual, the consequences of breaches are not theoretical, they are not democratic, they are extremely onerous.

      1. DS999 Silver badge

        Re: The more expensive allowing your LLM resources to be compromised is

        Yes theoretically it can do all that, but how does e.g. Visa link misused credit card numbers to a specific compromise, when compromises that leak credit card numbers are happening all the time all over the place? I think it is near certain my credit card numbers have been leaked multiple times, but criminals can only use so many and the credit card company's fraud prevention is probably blocking some attempted charges before they even hit my card.

        What Visa and Mastercard ACTUALLY do is just roll up all the cost of fraud into the merchant fees that affect every business. You don't get a discount because you haven't been compromised and another guy has. You basically pay an industry-wide rate that is only adjusted based on IN-PERSON fraud - sort of a modern version of redlining where they get to charge more for businesses in bad neighborhoods because that's where people are more likely to use stolen credit cards to buy gas or groceries.

        1. Anonymous Coward
          Anonymous Coward

          Re: The more expensive allowing your LLM resources to be compromised is

          Without wishing to be rude, you clearly do not know how this works. For example, in the case of Visa the initial report to an acquirer regarding a suspected merchant card data breach is called a CPP, a Common Point of Purchase report: all these cards were used at the same merchant and subsequently there was fraudulent activity. There are some extremely sophisticated systems involved.

          Your second comment is, sadly, total nonsense. The fraudulent losses are incurred by the issuing bank, as are the costs of issuing replacement cards. They are not losses incurred by the card brand. The days when the card brands were member organisations are long gone. They are not Visa’s losses.

          Transaction processing fees are set by the acquirer, not the card brands. The card brands can only levy interchange fees.

          A merchant does not get a discount for not having been compromised, a merchant who has been compromised gets charged more - if he can even get an acquiring deal

          E-commerce acquiring rates are not based on in-person fraud.

          Your comments about where stolen card data is used are also offensive as well as wrong

          I work in card payments and security and I could explain in more detail why every comment you made is wrong but suffice to say it’s wrong

  3. elDog

    They are coming for your CoPilot!

    And any other embedded AI tools that your software manufacturers shove into your private places.

    I can't imagine how easy it would be to take over a fleet of zombie PCs and make them submit the nefarious queries on behalf of the PC "owner" (better known as the tenant.)

    I'm sure Micro$oft, Apple, Google, etc. have hardened their systems where no foreign/external keystrokes can be logged and acted upon by these always-in-your-face AI assistants.

  4. Anonymous Coward
    Anonymous Coward

    The AI pitch is coming back to bite you !!!

    AI is right 'some' of the time BUT as an attack agent that does not matter, as you are running your attacks 100s or 1000s of times per second, so even though the failures are in the majority you still get through !!!

    AI as your defense needs to be right ALL of the time because when it is wrong the 'Baddies' get through and in all probability the breakthrough is used to amplify the attack.

    The same AI cannot be both these things !!!???

    Please tell me now HOW AI is so good that it works to save the 'Crown Jewels' and HOW it always works !!!

    Are you hiding some 'Better' AI that can save us all from the 'Baddies' or are you over egging the pitch on just how 'Good' the AI really is !!!

    Do tell !!!

    :)

  5. Brave Coward Bronze badge

    So let's recapitulate...

    ... what tech has been about these last twenty years:

    a) social media. Crap.

    b) crypto-money. More crap.

    c) LLM, sold as "artefactual intelligence". Still more crap.

    Brilliant. Thank you very much, tech bros.

    1. heyrick Silver badge

      Re: So let's recapitulate...

      They're not there to help you. They're there to enrich themselves by selling delusions to the gullible.

      1. ecofeco Silver badge

        Re: So let's recapitulate...

        NEVER forget this.

        While I have been reading about niche success in science research, for anyone outside of science research, and especially us peons, it's all a con.

  6. Paul Crawford Silver badge
    Facepalm

    "They're going to send you a message from this restaurant that's right down the street, or popular in your town, hoping that you'll click on it,"

    The simplicity of a link being able to screw over an organisation seems to be a far deeper problem than the tricks used to get that click.

    1. Richard 12 Silver badge

      Local code execution can do whatever the user is authorised to do.

      So if the user can edit a document, so can malware.

      "Security" as implemented often means preventing the user from doing their actual job, which then means the security is removed.

      Really, a huge amount of the problem stems from utterly shit developer documentation from Apple and Microsoft.

      Apple are particularly heinous, because they add new "security" every year but refuse point blank to document that it even exists, let alone how to use it correctly. Even to the point where Apple's own developers don't know. So everyone bypasses it.

      1. DS999 Silver badge

        You were saying?

        Apple are particularly heinous, because they add new "security" every year but refuse point blank to document that it even exists, let alone how to use it correctly

        Apple provides plenty of info and updates it regularly (notice the update time on the iOS Security document, for example):

        https://www.apple.com/ae/business/resources/docs/iOS_Security_Overview.pdf

        1. Richard 12 Silver badge

          Re: You were saying?

          You didn't actually read it though, did you?

          A four-sheet PDF with three links - one of which is a generic privacy policy - is not developer documentation.

          Though to be fair, I'm mostly talking about macOS. Their iOS documentation is somewhat better, though still poor compared to the old Win32 MSDN.

          1. DS999 Silver badge

            Re: You were saying?

            Maybe you should click on the links; the iOS security document is huge.

            I don't know about the macOS side; I assume they have something similar (and a lot of the iOS stuff now applies to macOS since they use hardware and software that are both about 98% the same under the hood), though obviously macOS is more "open" than iOS as far as installation sources and the ability to control the OS as an admin.

  7. spold Silver badge

    Remember AI guard rails are the new attack target for pleasure, fun, and mischievousness.

    1. Anonymous Coward
      Anonymous Coward

      Smoke and mirrors are always just smoke and mirrors no matter what you call them.

      AI guardrails are as effective as shoji screens in Japanese architecture vs the atomic bomb.

      Guardrails are there to give the impression that there are 'limits' to the data that can be retrieved or accessed, BUT as is becoming known they can be quite easily worked around or through.

      The methods being used are under constant evolution and as per usual it is a 'Tit for Tat' war between the people creating the AI models and the miscreants trying to break them.

      The miscreants are going to win as they have the time and motivation to keep trying !!!

      There is little motivation to highlight the weaknesses of the current AI models by their own creators !!!

      You can even use so called AI against AI to break the guardrails !!!???

      [An example of 'Game Keeper turned Poacher' to reverse the popular saying !!!]

      :)

  8. vogon00

    So. What do we do?

    "Protection must improve first."

    "But one thing LLMs are getting very good at is assisting in social engineering campaigns."

    Full disclosure : I am NOT a fan of AI. Here's my two-penneth:-

    As for 'protection', there isn't any as the things we need to be 'protected' against stand to turn a profit for someone, somewhere...

    Social engineering campaigns can only work where people don't really know one another. Stuff email and social networking: take the time to actually speak and converse with people you work with* so you have knowledge of the *person*, not the machine. That's the best form of authentication - not anonymity, not an email address, not a password: listen to the voice.

    Ende.

    *Work is not your private life. Separate your professional and private lives, and take extra care to keep in touch with *friends*. Ask me how I know.

  9. Phones Sheridan
    Holmes

    Hey, LLM. Analyse all commentard posts on El Reg and identify Anonymous Cowards!

    1. Anonymous Coward
      Anonymous Coward

      [Bestest LLM in the world at this femtosecond]:

      Sure, they're the ones that use an icon of a mask depicting Robin Hood in their posts.

      1. Anonymous Coward
        Anonymous Coward

        Actually, for your information .....

        The mask is a reference to Guy Fawkes & the Gunpowder plot of 1605...

        [This design is from the film 'V for Vendetta' and now used by many protesters etc. Guy Fawkes masks existed in various forms prior to this design.]

        See ... https://en.wikipedia.org/wiki/Guy_Fawkes_mask

        :)

        1. Anonymous Coward
          Anonymous Coward

          Re: Actually, for your information .....

          You're right; thanks for pointing out my mistake. The mask is from the movie V for Vendetta, directed by Joel Schumacher. It tells the story of a ghost that terrorizes the French Opera House cast and crew while tutoring a young soprano.

  10. tehtariqk
    Alert

    People have been the weakest link? Always have been.

    It doesn't matter how good the body of the email might be. Did you look at the email address? It's some crazy string of characters, or some weird address like name@gmail that says it's coming from Verizon. That doesn't make sense.

    People have been spoofing email headers for decades, now. You telling me that they can't do that? They do it all the time. They just did it this month.
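
    A toy sketch of the sender sanity check described above (a hypothetical helper, not any mail provider's actual filter): flag mail whose display name claims a brand while the actual address sits on an unrelated domain.

```python
from email.utils import parseaddr

def display_name_mismatch(from_header: str, expected_domain: str) -> bool:
    """Flag mail like 'Verizon <name@gmail.com>': the display name claims
    a brand but the address is on a different, unrelated domain."""
    _name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    expected = expected_domain.lower()
    # A subdomain of the expected domain (mail.verizon.com) still counts.
    return not (domain == expected or domain.endswith("." + expected))

print(display_name_mismatch("Verizon Support <name@gmail.com>", "verizon.com"))  # True
```

Of course this only catches the lazy case; properly spoofed headers need SPF/DKIM/DMARC checks done by the receiving server, not string matching.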

    “…most of the time when people answer your phone, especially if you're driving or something, you're not actively listening, or you're multitasking, and you might not catch that this is a voice clone - especially if it sounds like someone that's familiar, or what they're saying is believable, and they really do sound like they're from your bank.”

    I had a friend fall for a money transfer scam because they were rushing, they were multitasking, and they were stressed out and not exactly listening to what the phone scammer was doing. The most interesting thing about the interaction was how they got got — the phone scammer just pretended to be very stupid, and unable to properly understand English, so my friend, who thought themselves savvy, got impatient and even more stressed out with the scammer, and got careless. The scammer managed several transfers before my friend figured out what was happening. Turns out scammers will use obfuscating stupidity to get people. It's not just idiots and elderly people any more. It may never have been.

    Anyway, the whole focus on technology obscures the fact that, as it was before Kevin Mitnick's day, the way to get people isn't via sCaRy tEchNoLoGy, but with good old confidence scams and misdirection. It's also telling that, despite all the technological advancement and widgets available to us, this is the only advice that can be given to us:

    Just be careful what you click

    The “git gud, scrub” of modern cybersecurity. What a time to be alive.

    1. Anonymous Coward
      Anonymous Coward

      Re: People have been the weakest link? Always have been.

      Good point ... a single human phone scammer could do this to one target/mark at a time, but with LLMs prompted to "use obfuscating stupidity" hundreds or thousands of marks could potentially be hit simultaneously ... lowering the effort level and multiplying the "rewards"! (that seems to be the evolving threatscape that TFA points to imho).

      1. tehtariqk

        Re: People have been the weakest link? Always have been.

        Honestly, I don't see LLMs adding much value to that yet. You still can pull this kind of crap using people you can underpay or coerce. There's still that lag for speech synthesis and voice recognition that might improve, and you could probably use obfuscating stupidity to make scam calls at scale, especially to cover that lag time to generate responses.

  11. John Smith 19 Gold badge
    Coat

    Terrific, LLM --> Nigerian-Prince-in-a-box

    What an amazing achievement to begin 2025 on.

    Some of my UK chums could play a quite good Hugh Grant.

    Their brand of good-natured can-you-help-me apparent stupidity could get them to Group IT Director level offices while telemarketing for an agency.

  12. Anonymous Coward
    Anonymous Coward

    There was something in the AI tonight

    The fish were bright

    Fandango…

  13. Ken Moorhouse Silver badge

    Full circle

    In the old days, if you got an email from a Big Corp with spelling mistakes and/or grammatical errors in it, you would identify it straight away as fraudulent in some way.

    Now, if the prose is too perfect, then we're getting to the stage where it is put in the dodgy category.

    The ocassional spilling misteak nowadays is good to prove some text was written by a meatsack... Until such errors start to find their way into AI systems, and round the loop we go again.

    1. anthonyhegedus Silver badge

      Re: Full circle

      I've just asked an AI to comment on whether your comment was written by an AI. The answer is maybe not.

      It’s difficult to say with absolute certainty whether an AI wrote that specific comment. It reads like a person’s casual, somewhat tongue-in-cheek observation, including intentional errors like “spilling misteak,” presumably to underscore their point about authenticity and human fallibility. An AI could mimic these errors, but there’s also nothing overtly “mechanical” about the comment. Ultimately, there isn’t enough evidence to definitively conclude it was written by an AI.

    2. sitta_europea Silver badge

      Re: Full circle

      "... Now, if the prose is too perfect, then we're getting to the stage where it is put in the dodgy category. ..."

      Came here to say the same thing.

      Lately I've been seeing obvious scams which are very well written.

      So well written, in fact, that they immediately make me suspicious because they're way over the top for any email.

      In any case I only know four or five people who would be capable of writing so well.

      Not one of them sports long blond hair and a 40DD bra.

      1. Ken Moorhouse Silver badge

        Re: Not one of them sports long blond hair and a 40DD bra.

        Hello, hello, I think that comment was generated by an LLM in beta.

        Back to the drawing board.

    3. MachDiamond Silver badge

      Re: Full circle

      "The ocassional spilling misteak nowadays is good to prove some text was written by a meatsack... Until such errors start to find their way into AI systems, and round the loop we go again."

      With built-in spell checkers, mistakes are somebody being really lazy. What gives away the fraud are the grammatical errors and odd phrasing. Those and a deal too good to be true. An email telling me that my internet provider is going to 10x my bandwidth for free and I need to call to "confirm" my information rings bells. They don't need anything from me to do that so I'd never call or write back. A Gmail account is also a strong indicator of fraud. Real enterprises have real URLs. I have to counsel people I know to lever open their wallets and invest in their own domain name if they want to look legit.

      Another fraud detector I use in the US is using Whitepages.com to look up the carrier for a telephone number. They get assigned in blocks and when I don't recognize the company name, a red flag goes up since many spam facilitators use phone services that cater to volume calling. AT&T might flag an account if it scores high on a spam detection profile. I also infer how legitimate a caller might be given their story and the carrier information. I'm pretty sure I know just about all of the mobile carriers (and MVNO's) in the US as they all suck and I've needed to change a few times. If the story is they are calling me from their mobile and the carrier is some sort of company that supports large organizations, I can tell I'm being fed a tale.

      1. Doctor Syntax Silver badge

        Re: Full circle

        "Those and a deal too good to be true."

        This is the basis for a con, always has been, always will be, whatever the technology - or its absence. The sad fact is that however many warnings you issue, there'll always be somebody whose reaction will be to see the "deal" part, not the "too good to be true".

  14. Eclectic Man Silver badge
    Unhappy

    Maybe ...

    ... we should just try to enjoy what little time we have left.

  15. Doctor Syntax Silver badge

    "And this is why Crystal Morin, former intelligence analyst for the US Air Force and cybersecurity strategist at Sysdig, anticipates seeing highly successful supply chain attacks in 2025 that originated with an LLM-generated spear phish"

    Is it too cynical to ask what, having achieved a big scary headline, Morin and Sysdig are flogging?

    1. neilg

      Is it too cynical to ask what, having achieved a big scary headline, Morin and Sysdig are flogging?

      Simple: LLMs

  16. O'Reg Inalsin

    Be everyone you can be

    My bank is still asking me, every time I phone them, if I want to enroll in voice validation. I always reply no in a squeaky voice - but it is probably no worse than the last 4 of my social, so ....
