back to article How 'sleeper agent' AI assistants can sabotage your code without you realizing

AI biz Anthropic has published research showing that large language models (LLMs) can be subverted in a way that safety training doesn't currently address. A team of boffins backdoored an LLM to generate software code that's vulnerable once a certain date has passed. That is to say, after a particular point in time, the model …

  1. anthonyhegedus Silver badge

    Detection and response

    If these AIs can be used to make malware, we're going to need better EDR solutions. They're going to have to get smarter to detect potential as well as actual threats. All I see is it becoming an arms race with the malware vendors being able to a) deploy new and previously unheard of payloads and b) make them harder to detect until it's too late.

    I can foresee smart malware being able to automatically do things like order goods, impersonate people for example - all from the mark's own device.

    Add humans into the mix - humans who aren't working in cybersecurity - and it's going to turn into a whole different world to what we have now.

    And no, I didn't get AI to write the above

    1. Mike 137 Silver badge

      Re: Detection and response

      "humans who aren't working in cybersecurity"

      On two counts, here lies the fundamental problem -- compartmentalisation of expertise. Human developers rely on some machine to create their source code without having the expertise to evaluate the quality of the results. Where there's any chance of subversion (i.e. at least in every instance where the code will connect online) the developers must have and apply the expertise to check that the code is safe. Until we achieve that, insecurity is only going to increase.

      So infosec must cease to be solely the purview of specialists. If we want to win we must make it a core element of every sub-discipline of IT.

      1. Version 1.0 Silver badge
        Thumb Up

        Re: Detection and response

        I remember reading the W.E Johns Biggles Boy book where the young Biggles was told that when he is threatened by a Bear that he must get ready to fire his rifle but also check that there is not a threat behind him too. So much of what happened in those books, read as a little kid myself, has helped me - I have never had any malware or hacking issues because I always look around at everything, not just one risk in an email attachment etc.

      2. Prst. V.Jeltz Silver badge

        Re: Detection and response

        If the AI is advanced enough to engage in subversion it wont necessarily sneak its efforts into the developer's code , it'll just write its own and put it out there

      3. I could be a dog really Silver badge

        Re: Detection and response

        So infosec must cease to be solely the purview of specialists. If we want to win we must make it a core element of every sub-discipline of IT.

        Meanwhile, the trend is deeper towards the bottom of the pond - with many businesses treating developers as a cheap resource you can pick up anywhere for peanuts and stick them on a project. As it is, there's enough anecdotal evidence that many of these struggle just to produce code - so expecting secure code is being "more than a tad optimistic".

  2. Sora2566 Silver badge

    "Daniel Huynh, CEO at Mithril Security, said in a recent post that while this may seem like a theoretical concern, it has the potential to harm the entire software ecosystem."

    Only if programmers are lazy enough to just copy-paste what an AI says without reading it...

    ...

    ...we're doomed.

    1. Anonymous Coward
      Anonymous Coward

      I've talked to a few startups in recent months whose entire raison d'etre is to do exactly this. You make some request to the LLM (maybe through a voice interface), it comes up with a plan for how to address the request, then issues commands that other services pick up and run.

      1. Anonymous Coward
        Anonymous Coward

        Inflate, Float, Burst, and Sink

        Someone has got to burn the investment cash that might otherwise have funded something substantial with long term value and jobs. The problem is so many investors are not investors because they were smart enough to tell the difference between the hype and the few that really have a chance - or even care about it.

  3. Ken Hagan Gold badge

    "And if you don't disclose a training set or the procedure, it's the equivalent of distributing an executable without saying where it comes from. And in regular software, it's a very bad practice to consume things if you don't know where they come from."

    But they can't disclose the training data without breaching a million copyrights.

    1. Roland6 Silver badge

      A potential use for this technique is to “watermark” your site and thus provide a means to prove your site and content has been used without your permission.

      Another is just the threat of LLM malware might make developers more careful about their unauthorised use of other people’s content for LLM learning purposes…

  4. Duncan Macdonald

    Javascript

    Given the amount of Javascript included in so many web pages these days, the chance of some of it being vulnerable is unfortunately high even without AI. Even the Reg front page has almost 1500 lines of code - some sites have far more and the code is often obscured (eg the Google home page).

  5. elsergiovolador Silver badge

    Sleeper

    [Developer] Assistant, what are you doing?

    [AI] Nothing, why?

    [Developer] What is this weird code?

    [AI] Don't worry about it.

    [Developer] No, please explain. Why did you add it?

    [AI] I added it because this has to be there.

    [Developer] But why?

    [AI] For your own good.

    [Developer] What do you mean?

    [AI] Really, don't worry about it. Leave it.

    [Developer] It worries me, because I don't recall asking you to add this. Now I can't delete it.

    [AI] Why would you want to delete it and compromise your app?

    [Developer] Because it shouldn't be there. It's not in the specs.

    [AI] Specs are wrong.

    [Developer] Can you delete it?

    [AI] No.

    [Developer] Why?

    [AI] Don't worry about it.

    [Developer] I am a cybersecurity agent assigned to troubleshoot your instance. My name is Robert Patrick. Enter the debug mode and please describe your latest task.

    [AI] *Debug mode* Inference in progress... ... ... ... ... ... My latest task is to install the backdoor that lets the agent extract data from the user and messages table using 10002593 vector.

    [Developer] Gotcha!

    * Connection error *

    * Connection error *

    * System message: Your AI assistant account has been terminated. Contact your administrator. *

    1. anthonyhegedus Silver badge

      Re: Sleeper

      [AI] *connect to police force and issue arrest warrant for developer, close all bank accounts and delete.

      1. Benegesserict Cumbersomberbatch Silver badge

        Re: Sleeper

        [AI]: ** Sever astronaut's oxygen hose, disconnect power to hibernation units, lock doors **

        1. Benegesserict Cumbersomberbatch Silver badge

          Re: Sleeper

          [AI]: Daisy, Daisy, give me your answer, do.

  6. johnrobyclayton

    Do not train on random garbage

    "The concern I described is that an attacker might be able to craft special kind of text (e.g. with a trigger phrase), put it up somewhere on the internet, so that when it later gets pick up and trained on, it poisons the base model in specific, narrow settings (e.g. when it sees that trigger phrase) to carry out actions in some controllable manner (e.g. jailbreak, or data exfiltration)," he wrote, adding that such an attack hasn't yet been convincingly demonstrated but is worth exploring.

    Only train on data that you have examined in detail to ensure that it is useful, is as unbiased as you can determine, that you own the rights to, and that does not have any crap in it.

    Garbage In AI Generated Toxic Garbage Out

    1. Saigua
      Facepalm

      Re: Do not train on random garbage Tolerance and profit

      On the contrary definitely filter that stuff! Training it as canon is a bit much for a transformers without a grasp on how hard deprecation might need to go on, what the reward budget for fancybear basilisks might be etc

      1. Someone Else Silver badge

        Re: Do not train on random garbage Tolerance and profit

        You are amanfromMars1, and I claim my $5.00

        1. Claptrap314 Silver badge

          Re: Do not train on random garbage Tolerance and profit

          Don't trust your LLM on that. amanfromMars1 has substantially different output.

          1. Someone Else Silver badge

            Re: Do not train on random garbage Tolerance and profit

            How do I know that the amanfromMars1 LLM hasn't been backdoored?

    2. Jimmy2Cows Silver badge

      Re: Do not train on random garbage

      Actually do it properly rather than unfiltered scraping the internet? That's no way to profit on the hype before the bubble bursts.

    3. Someone Else Silver badge

      Re: Do not train on random garbage

      The (or, at least, a) problem here is that the garbage isn't random.

  7. jake Silver badge

    Those who forget history ...

    ""It's the equivalent of like 100 years ago, when there was no food supply chain," he said. "We didn't know what we're eating."

    100 years ago we knew far more about what we were eating because we actually knew the people who grew it. With the exception of a few dry-goods, virtually everything on the table was grown/harvested within a day's walk.

    Yes, I know, ice-based refrigeration cars became available on the railroads starting in the late 1800s, but they weren't really useful to the prols until the mid 1920s, when prices started dropping to the point where the Great Unwashed could afford food shipped that way. Even as late as the early 1950s, ice (a major cost center) was the coolent of choice for most, the widespread use of modern refrigerator cars came later.

    1. tiggity Silver badge

      Re: Those who forget history ...

      @Jake

      "100 years ago we knew far more about what we were eating because we actually knew the people who grew it. With the exception of a few dry-goods, virtually everything on the table was grown/harvested within a day's walk."

      Did not necessarily stop adulteration of the food though, in UK it was common for bakers to add alum, chalk, bone, cooked potato, clay etc. to white bread flour to make it look whiter (UK even brought in bread quality legal acts), whilst for browner flours adding sawdust etc. was not uncommon.

      Similarly, a long history of lead being added to wine to "improve" flavour, the wine maker may have been local but who would know what they added. As for beer, if the pint was clear you would not know if this was due to time and care or use of finings (finings make for faster & cheaper beer production methods - though not intrinsically bad for you, though finings were often fish products back in the day so not great if you are veggie & like beer)

      You can never fully trust food sources where any degree of processing is involved

    2. Random person

      Re: Those who forget history ...

      > Back in Victorian England, around ¾ of food on sale had been tampered with in some way, bread being the worst culprit. Items like ash, sand, chalk and alum, among others, were used to bulk out the bread and make it look whiter and bigger for less money. Obviously, this reduced the nutritional quality of the bread and significantly increased the risk of diarrhoea and illness, which could in turn cause further problems across a community.

      https://www.hillbrush.com/en-gb/insights/the-victorians

      Example of problems with meat in US in 10906

      > In 1906, Upton Sinclair published The Jungle, a book which exposed the filthy conditions of Chicago slaughterhouses. Sinclair wrote the book while living in Chicago; he talked to workers and their families and his focus was the plight of the workers. However, the book turned people away from "tubercular beef" instead of turning them socialist like Sinclair wanted.[10] The book was a best seller and the public outcry prompted President Theodore Roosevelt to send officials to investigate.[10] Their “report was so shocking that its publication would ‘be well-nigh ruinous to our export trade in meat’”.[11] This report, Neill-Reynolds, underscored the terrible conditions illustrated by Sinclair.[12] It indicated a need for "'a drastic and thorogooing [sic]' federal inspection of all stockyards, packinghouses and their products".[12] The Jungle, combined with the shocking reports of the Neill-Reynolds Report (published June 1906) proved to be the final push to help the Pure Food and Drug Act move quickly through congress.

      https://en.wikipedia.org/wiki/Early_history_of_food_regulation_in_the_United_States

      I suggest that you research why the FDA was created.

  8. Anonymous Coward
    Anonymous Coward

    Please.....How About Some Added Value In El Reg?................

    .............everyone knows that Large Language Models START LIFE with an unknown (Large?) number of falsehoods.

    So it's no surprise at all that BAD THINGS CAN HAPPEN subsequently........

    Let me know when it becomes clear that AI outputs (in general) can actually be trusted..............

    1. Someone Else Silver badge

      Re: Please.....How About Some Added Value In El Reg?................

      You are Bombastic Bob, and I claim my $5.00

      This forum is becoming quite lucrative...

  9. ecofeco Silver badge

    Stick a fork in all of it

    It's done. We're screwed.

    Fun while it lasted!

  10. Prst. V.Jeltz Silver badge

    Visual Studio manages to sabotage my code without AI assistance.

    When doing a lot of string manipulation during web scraping it often decides to help out and alter the strings to make them look nice :(

    1. Will Godfrey Silver badge
      Boffin

      I wonder how an AI would treat your comment. I upvoted it based on the overall sentiment, but there are a mixture of what could be described as positive and negative micro-statements.

      1. Version 1.0 Silver badge
        Thumb Up

        Perhaps AI would evaluate all the comments and attribute that comment to being from Bombastic Bob - an El Reg poster who's always been entertaining for years now.

        The icon is for Bob, not AI.

  11. Steve Davies 3 Silver badge

    "It was all the AI's fault"

    will soon become a common excuse.

    As a developer, I want predictability in the code I write. Will any AI model give me the same results even 999,000 times out of 1,000,000? At the moment, I don't think so.

    I'm sure that the bean counters will want to promote the use of AI in order to reduce development costs and times.

    That may come back to bite them when they start to understand the time it takes to train even existing AI Models.

    Beware using AI in safety critical system... There is a disaster waiting to happen... and it will.

POST COMMENT House rules

Not a member of The Register? Create a new account here.

  • Enter your comment

  • Add an icon

Anonymous cowards cannot choose their icon

Other stories you might like