back to article Davos discussion mulls how to keep AI agents from running wild

AI agents arrived in Davos this week with the question of how to secure them - and prevent agents from becoming the ultimate insider threat - taking center stage during a panel discussion on cyber threats. "We have enough difficulty getting the humans trained to be effective at preventing cyberattacks. Now I've got to do it …

  1. Sorry that handle is already taken. Silver badge

    As I understand it, the problem with "AI agents" in their current guise is that to an LLM all inputs are prompts, i.e. they can't (because they simply don't) distinguish between data and instructions. I suspect the more guardrails you put in place to try to limit prompt attacks, the less flexible the system becomes.

    Ultimately I think the only way to keep them from "running wild" is to simply not use them.

    1. cyberdemon Silver badge
      Holmes

      > i.e. they can't (because they simply don't) distinguish between data and instructions

      Well, as statistical token predictors without logic, reasoning, programming, never mind intelligence, they simply guess wot a human the training data might do in a given context. So they indeed can't.

      > Ultimately I think the only way to keep them from "running wild" is to simply not use them.

      And not to build and invest the world's finite resources in them.

      Too late for that though, sadly.

      1. This post has been deleted by its author

  2. jake Silver badge

    AI agents are already running wild.

    Have you seen the salaries of these scammers?

  3. Pulled Tea
    Mushroom

    It's actually very simple.

    In order for you to absolutely reduce the risk of AI agents running rampant in an organization, here's a recommendation, boiled down to one sentence:

    Don't use AI agents.

  4. timrosu

    What we actually need are micro models

    I think we should focus on small purpose built AI/ML models. General ones are too sloppy. We don't need more than billion parameters in a model if we break them down and stick to unix philosophy. Companies already have kubernetes clusters full of microservices, add micromodels to that and a few gpus to accelerate the process if necessary and you are golden.

    Some use cases of small models today: facial recognition, advanced ocr, voice recognition.

    I don't know why hyperscalers care so much about big models if the end result is so consistent: slop.

  5. Pascal Monett Silver badge
    Mushroom

    Davos

    This is rich (no pun intended).

    The very people who are responsible for pouring billions into this "AI" scam are now pretending to be actually worried about the consequences.

    Pull the other one, it has bells on it.

  6. amanfromMars 1 Silver badge

    The Future According to AI is a Human Mind Clusterfuck ‽ ‽ ‽

    Jessica, Hi,

    AI agents arrived in Davos this week with the question of how to secure them - and prevent agents from becoming the ultimate insider threat - taking center stage during a panel discussion on cyber threats.

    That opening sentence in the first paragraph is much better started and more accurately stated .......Proxy human agents of AI agents arrived in Davos this week with the question of how to secure them - ....

  7. Fonant Silver badge

    LLMs are not AI

    LLMs are not intelligent, they're statistical pattern matching systems that generate bullshit that may or may not be correct or true.

    Here's an idea to make LOTS of money: invent a complicated but impressive bullshit generator, make $$$ from building and improving it, then make more $$$ from selling add-ons that make it more accurate, more correct, or less dangerous.

    In general, the best solution to a problem is the simplest solution to the problem. The rush to use AI is just making everything more and more complicated and impossible to understand.

    1. amanfromMars 1 Silver badge

      Re: LLMs are not Simple AI nor Simply AI either.

      LLMs are not intelligent, they're statistical pattern matching systems that generate bullshit that may or may not be correct or true. .... Fonant

      That surely then makes them virtually human, Fonant, .... and into programming humans with ideas to make lots of money for the complicated but impressive bullshit generators that they server with the sale of add-ons that make acceptance and resolution of the myriad problems causing their countless difficulties more accurate, more correct, or less dangerous?

  8. ITMA Silver badge
    Devil

    They've failed!

    "Davos discussion mulls how to keep AI agents from running wild"

    Well they've failed utterly.

    How else can they explain allowing Trump on stage, in front of a microphone, spouting his rambling, moroning hallucinations for over and hour.

POST COMMENT House rules

Not a member of The Register? Create a new account here.

  • Enter your comment

  • Add an icon

Anonymous cowards cannot choose their icon