Ex-NSA chief warns AI devs: Don’t repeat infosec’s early-day screwups

AI engineers should take a lesson from the early days of cybersecurity and bake safety and security into their models during development, rather than trying to bolt it on after the fact, according to former NSA boss Mike Rogers. "So when we created this hyper-connected, highly networked world, which we all live in, which data …

  1. Anonymous Coward

    Baked in or half baked

    NSA chief says we should bake in security - but if we do that, then how will the NSA be able to spy on us?

    1. Anonymous Coward

      Re: Baked in or half baked

      "how will the NSA be able to spy on us?"

      NSA's only fear is that other nations will become more capable at spying than the NSA itself.

      Playbook Tip:

      1/ Detect the spying, use it for counter intelligence and make some money

      2/ Do real intelligence and politics

  2. nijam Silver badge

    > Don’t repeat infosec’s early-day screwups.

    Too late, I need scarcely add.

  3. amanfromMars 1 Silver badge

    Thanks for the unnecessary warning, ex-NSA chief ..... how very kind ....

    ..... and AI devs give further thanks to the likes of thee for all of the phishes now freely available as they ...... all of infosec’s early-day screwups ...... are relentlessly exploited and expanded upon by forces and sources way beyond any mere human and conventional historical and hysterical command and control.

  4. Tron Silver badge

    This may not be possible.

    Tech could have had a lot of security baked in from day one: better coding practices, better development tools, much more use of underlying layers of firmware, siloing of permissions, and protecting areas of memory so that legit software or malware could only ever be the froth on the top of the tech pint. In part this didn't happen because a lot of these security tweaks had been patented and couldn't be used. That is down to the abuse of patents by big tech as much as to laziness. Modern 'SbD' (secure-by-design) stuff often isn't. MFA just makes stuff more fragile, entirely dependent upon the most stolen and most easily dropped piece of kit in history. Or, worse, biometrics. You can change a password more easily than your eyes.

    And of course Microsoft could properly test their bloody updates before they flush them into the wild.

    But gAI is a completely different type of tech. It isn't code- or hardware-based but data-based, and user-generated data at that. Almost all of it is obsolete, irrelevant, or dodgy. There isn't enough quality data out there to build an LLM. The best AI will simply copy human prejudices. You can tweak it to remove them, but you are playing whack-a-mole.

    The whole 'guardrails' BS is just politicians pretending to be able to sanitise it in language the proles can understand.

    As for other AI systems, I'm not sure they will ever be more than trending towards reliable, the way a graph can trend towards zero without ever getting there. In enterprise systems and anything important, AI should never be used. If it is, and it goes wrong at wide scale, whoever authorised its use should be held liable, with very large fines and very long prison sentences.

    As it says on the tin of all AI: this is experimental. Use it at your peril. It will never work at the level to which it has been oversold: reliable magic. And it should never be relied upon for anything important without human oversight.

  5. sarusa Silver badge

    Bwahahaha

    No, the cattle are already long out of the barn. They Just Didn't Care, this time either.
