Ex-CISA head thinks AI might fix code so fast we won't need security teams

Ex-CISA head Jen Easterly claims AI could spell the end of the cybersecurity industry, as the sloppy software and vulnerabilities that criminals rely on will be tracked down faster than ever. Speaking at AuditBoard's user conference in San Diego, Easterly said the threat landscape has never stopped evolving. The proliferation …

  1. Anonymous Coward
    Anonymous Coward

    Obvious flaw in the argument..

    Who, exactly, produced the code for AI?

    1. Anonymous Coward
      Anonymous Coward

      Re: Obvious flaw in the argument..

      It's not AI all the way down?

    2. Anonymous Coward
      Anonymous Coward

      Re: Obvious flaw in the argument..

      Obvious flaw in the argument is that CISA is now being administered/wrecked by DHS Overlord and Moron ICE Barbie.

      1. SundogUK Silver badge

        Re: Obvious flaw in the argument..

        She's a Biden era DEI appointee.

  2. Aladdin Sane Silver badge

    SQL injection

    Little Bobby Tables is alive and well.

    1. b0llchit Silver badge

      Re: SQL injection

      Humans might survive; it is the databases that will be going extinct-by-injection. Now that I think of it... humans can no longer live without the databases and will go extinct too.
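Parameterized queries have been the standard fix for the Bobby Tables class of bug since the strip was drawn. A minimal sketch using Python's stdlib sqlite3 (the `students` table is made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"  # Little Bobby himself

# Vulnerable pattern: string interpolation lets the input rewrite the query.
#   conn.execute(f"INSERT INTO students (name) VALUES ('{name}')")

# Safe pattern: a ? placeholder treats the input strictly as data.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

row = conn.execute("SELECT name FROM students").fetchone()
print(row[0])  # the literal string is stored; the table survives intact
```

The placeholder binding means the driver never parses the malicious string as SQL, which is exactly the mitigation MITRE flagged decades ago.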

  3. Joe W Silver badge
    Happy

    This...

    .... has to be the funniest thing I've read all day so far, and that includes some brilliant posts on Mastodon.

    1. Anonymous Coward
      Anonymous Coward

      Re: This...

      The whooshing sound was her credibility leaving the building, and a posse of flim-flam men entering it.

      1. drankinatty

        Re: This...

        The orange man promised the "Best People" -- I guess this is what he was talking about. Imagine a government led by the clueless, the corrupt, the mentally challenged, or all three of the above... If you are here, worry; if you are not, be thankful.

        1. Pascal Monett Silver badge

          How is that different from what's happening now?

          The White House is being governed by the merger of all three of your points.

          Democracy is dead in the USA. The only question left is: can it be revived? I fear that the paddles have been lost.

          1. Anonymous Coward
            Anonymous Coward

            "I fear that the paddles have been lost."

            And AI is gobbling up all the electricity required to defibrillate, but I suspect only a trillion-volt ECT jolt would have even the remotest chance of knocking any semblance of sanity into the US body politic.

          2. SundogUK Silver badge

            Trump won the electoral college and the popular vote. Democracy is doing fine.

            1. Anonymous Coward
              Anonymous Coward

              Democracy: A political system that delivers the government you deserve

        2. Anonymous Coward
          Anonymous Coward

          Re: This...

          Best people like ICE Barbie running/wrecking CISA and its parent organisation DHS.

        3. SundogUK Silver badge

          Re: This...

          Hired by Biden.

      2. nijam Silver badge

        Re: This...

        The 1951 novel "The Marching Morons" by Cyril Kornbluth describes all this perfectly, except for the petty - albeit malicious - vindictiveness of the man currently demolishing the US government. And, as an aside, the White House, though that's just collateral damage.

  4. BBRush

    Does not really solve the problem for companies...

    I could see this approach working if everyone ran everything in the cloud and build pipelines could update continuously with fixes as the AI DAST/SAST tooling found vulnerabilities and fixed them.

    BUTT...

    This does not fix the problem with operating systems being vulnerable to things (as they are not 'cloud') nor will it help with locally deployed apps (unless there is near constant updating of the apps), nor will it help with testing compatibility for clients that consume the updates, or the changing user experience.

    I'm torn here between marvelling at the vision of people who think AI can save the world (even when it seems like the use cases are scraping the bottom of the barrel with a plan to throw it against the wall and see what sticks) and the short-sightedness of the same people's understanding of how normal enterprise IT works.

    1. David Hicklin Silver badge

      Re: Does not really solve the problem for companies...

      Yeah, it's bad enough that M$ etc. seemingly have to change something on a monthly basis without AI automating it even more.

      That feature you used 30 minutes ago? It's gone now!

  5. Smeagolberg

    "I believe if we get this right..."

    So-called AI (LLMs) has produced more true believers than anything else for a long time.

  6. An_Old_Dog Silver badge

    IF A IS TRUE, THEN A IS TRUE

    Ultimately, she said, "if we're able to build and deploy and govern these incredibly powerful technologies in a secure way, I believe it will lead to the end of cybersecurity."

    In other words, if we are able to build software securely, we will have software security.

    1. Like a badger Silver badge

      Re: IF A IS TRUE, THEN A IS TRUE

      FFS, does she have no common sense?

      For starters, AI isn't going to fix crap software reliably anytime soon (if ever), and then there's the minor problem that human greed and lawlessness are constants. The software industry has been insecure since forever, and there's NOTHING going on that persuades me that its products are becoming any more resistant to the malcontents.

      We've already seen AI used for cyber attacks, impersonation fraud, and simple malicious spam, and the crims have barely got started on the opportunities of AI.

      1. nijam Silver badge

        Re: IF A IS TRUE, THEN A IS TRUE

        > ... AI isn't going to fix crap software ...

        It will fix the flaws in the various strains of malware long before it delivers any benefit in cybersecurity, of course.

    2. Blazde Silver badge

      Re: IF A IS TRUE, THEN A IS TRUE

      It's not even true, though. It's a profoundly over-simplified view of computer security, naively ignoring the adversarial nature of the endeavour. Even if, somehow, miraculously, software bugs ceased to exist, humans and other systems still need to use that software, and that use itself represents one of the broadest categories of vulnerability.

      'The end of cybersecurity' will come whenever computers cease to exist and not before.

  7. alain williams Silver badge

    How delightfully naive

    to think that a little AI magic pixie dust will solve all security problems.

    The truth is good, old fashioned software engineering practice that starts with a secure design and ends with quality assurance testing.

    Yes: AI might help with this but AI must not be used as an excuse to cut s/ware development costs - which only results in enshittification.

    1. David Hicklin Silver badge

      Re: How delightfully naive

      > excuse to cut s/ware development costs

      but... but... where are the next quarterly bonuses for the board going to come from?

  8. may_i Silver badge
    FAIL

    What an idiot

    It's always great to see that people chosen to lead these kinds of agencies have absolutely zero understanding of what the agency does and how technology actually works.

    1. tfewster Silver badge

      Re: What an idiot

      I read it as Easterly suggesting that vendors use AI vulnerability scanning before releasing systems for hackers to try - not trusting AI to write secure code.

      Jen Easterly did a lot of good work at CISA; she was pushed out because of politics over the role of the agency.

      1. may_i Silver badge

        Re: What an idiot

        Expressing the opinion that all security incidents are caused by poor quality software and that LLMs can solve the problem indicates a fundamental lack of understanding.

        Whether Easterly did good work at CISA before she was pushed out for not sucking up to the mad orange king does not change my opinion of her lack of understanding.

        1. trackerbelowground

          Re: What an idiot

          Exactly. I mean, poor network configuration is a significant vector. Not to mention that most companies are unable to fix vulnerabilities in a timely manner once they are found. More often than not, businesses are incentivized to ship software rather than perpetually fix bugs to perfection before release.

      2. O'Reg Inalsin Silver badge

        Re: What an idiot

        "We don't have a cybersecurity problem. We have a software quality problem," she said. The main reason for this was software vendors' prioritization of speed to market and reducing cost over safety.

        ... the real focus should be on the fact that the common factors uncovered by MITRE nearly 20 years ago – cross-site scripting, memory unsafe coding, SQL injection, directory traversal – remain part and parcel of shipped software. "It's not jaw dropping innovation… They were the golden oldies."

        So far so good. However ...

        This is because software companies insisted customers bear all risk and convinced government and regulators that this was acceptable.

        That's far too broad a statement. Suppose instead a NIST public standard such as "this software contains no listed CVEs as of MM/DD/YY". If a realistic standard existed, public companies would effectively have to require it to meet their own standards. But any such realistic standard(s) would not be a cure-all.

        Finally, the kicker:

        AI offers a way to address this, she claimed, as it is far better at tracking and identifying flaws in code. And it would be possible to tackle the mountain of technical debt left by a "rickety mess of overly patched, flawed infrastructure."

        Here Easterly is paraphrased as saying "it [AI] is far better at tracking and identifying flaws in code", which is at the very least vague - better than what? AI is a tool that can assist humans, not an intelligence that can replace humans, unless you want to introduce even more convoluted vulns. Perhaps the reporter's paraphrasing of what Easterly said was unfair? It's this last paraphrased statement that has really ticked off Reg readers. Probably she deserves a chance to respond to this paraphrasing before being condemned in the court of El Reg.

        1. may_i Silver badge

          Re: What an idiot

          "We don't have a cybersecurity problem. We have a software quality problem,"

          The biggest problem with this statement is the fact that she blindly assumes that all break-ins are down to poor software quality.

          Many break-ins are actually accomplished through social engineering or stolen credentials. No amount of appealing to LLM wow-wow is going to solve those two classes of problem.

          The LLM wow-wow won't solve software quality problems either - it's more likely to create new ones.

          She's simply making money at speaking engagements by implying her previous employment gives her pronouncements credibility. That might work with middle-management types but it's obvious nonsense when anyone with technical competence looks at what she's saying.

      3. doublelayer Silver badge

        Re: What an idiot

        Based on previous things she's said which I thought were on point, I'm inclined to try to parse her statements as logical. The problem is that they aren't. If you use AI scanning before release, you need people to read and respond to those scans. You can't eliminate the industry if you need a lot of people to read scans no matter how good they are.

        No, I think she's really under the impression that LLMs can do things they can't. It's not entirely on her. Lots of companies say they can automatically detect and then fix security issues without human oversight. They're usually wrong, but they exist. Some of them are companies that existed before LLMs, making exactly those scanners that can speed up the remediation process. She probably hasn't used these herself, and doesn't know that, while there are indeed times when they detect something real, write a fix for it, and don't break anything, there are many more situations where one of those three things doesn't happen: the tool tries to automatically patch something that wasn't a problem, writes a patch that leaves the problem in, or breaks the software in the process. If you assume that all these security software companies wouldn't lie, then things are looking up. Unfortunately, they often are lying, whether or not they know it, and most of them do know it, which is why, instead of automatically applying fixes by default, they queue them for human review. They still advertise the automatic part, though.

    2. Dwarf Silver badge

      Re: What an idiot

      Clearly they are of the mindset that if you just wish for a solution, then it will materialise.

      If only technology worked like that. Perhaps they have seen other people chanting the wish words from marketing - Low Code, No Code, Blockchain, AI.

  9. vtcodger Silver badge

    Perhaps a bit overoptimistic

    Reading stuff like this always reminds me of Richard Feynman's appendix F to the report of the presidential commission on the Challenger disaster. https://www.nasa.gov/history/rogersrep/v2appf.htm. In his appendix, Feynman describes a three order of magnitude gap between the reliability estimates of the working engineers (estimate 1 failure per 100 launches) and the project management (1 per 100,000 launches).

    Let's just say that I suspect Ms Easterly probably wasn't the best possible choice for CISA head and that Trump's nominee for the job Sean Plankey doesn't look to be that much of an improvement. Could be wrong about that. Hope I am.

    And, Oh Yes, Trump wants to cut the CISA budget and reduce staffing by a third. Will that make CISA 33% less ineffectual?

    1. Fr. Ted Crilly Silver badge

      Re: Perhaps a bit overoptimistic

      Nominative determinism at work again - Trump's nominee for the job, Sean Plankey.

    2. Anonymous Coward
      Anonymous Coward

      "cut the CISA budget and reduce staffing by a third"

      That sounds like a job for GenAI!

      1. ChrisElvidge Silver badge

        Re: "cut the CISA budget and reduce staffing by a third"

        Or even JenAI (see the IT crowd).

    3. JoeCool Silver badge

      This is going exactly according to plan

      AI has no need for human oversight and verification.

      Just like government, capitalism and criminals.

      This "automated virtuous cycle" of software deployment does benefit some segments ...

      1. David Hicklin Silver badge

        Re: This is going exactly according to plan

        > AI has no need for human oversight and verification.

        So will AI quietly start creating SkyNet ?

        1. JoeCool Silver badge

          Re: This is going exactly according to plan

          What is this "quietly"? People are being told: stop hearing, nothing going on here, all OK.

  10. Filippo Silver badge

    >"We don't have a cybersecurity problem. We have a software quality problem," she said. The main reason for this was software vendors' prioritization of speed to market and reducing cost over safety.

    That's actually not wrong.

    Where she goes wrong is with the solution. Software vendors put security at a very low priority not because they're dumb or evil (though some are), but because all the economic incentives are extremely in favor of speed to market and cost reduction, and security costs a lot of time and money. As long as the incentives are the same, shifting the problem to AI won't solve it.

  11. heyrick Silver badge

    I had to look up what CISA is

    And now I know, I'd confidently say that I wouldn't trust this person to competently operate a microwave oven, never mind any sort of "computer".

    No, AI isn't the magical unicorn pissing rainbows and sparkles. And one needs only look at the quality of GenAI pictures, stories, discussions, and code to know that it may well fix the problem it identifies but create a dozen different problems in the process. There's no "intelligence", no "understanding", and very little "memory" (as in remembering context). That's not something I'd let anywhere near actual executable code without plenty of human oversight, and full unit testing.

  12. Anonymous Coward
    Anonymous Coward

    Wow

    Spoken like a true corporate shill.

    Hilarious at best, terrifying otherwise.

    She's clearly not spotted that the introduction of AI coding has coincided with a massive dip in the quality of software being produced.

    It seems to be daily that I'm exposed to absolute mounds of crap proudly released by many a major company. The only elements of these junkware apps that seem to work well are their unlawful levels of data collection and their continued persistence in pushing some form of AI-labelled chatbot dungheap.

  13. Matt Collins

    Money

    It'll all boil down to costs. The price of secure engineering is still going to be high with "AI" solutions because the billions invested have to be repaid and a poor sod will still have to be paid to verify and, crucially, be capable of understanding the output and consequences. My bet is nothing much will change once the true cost becomes apparent.

  14. Will Godfrey Silver badge
    Facepalm

    She really doesn't have a clue

    Nuff Sed!

  15. breakfast Silver badge
    Holmes

    AI fixing code so fast we don't need a security team?

    Sounds great except who is going to validate that the AI code solves the problem and doesn't introduce any new ones?

    Especially given that compromising AI systems is trivially easy and there appears to be no way to make LLMs secure.

    Either you have a security team to second-guess your security-team-replacing AI or you don't have security.

    1. The Organ Grinder's Monkey

      Re: AI fixing code so fast we don't need a security team?

      "Sounds great except who is going to validate that the AI code solves the problem and doesn't introduce any new ones?"

      That'd be the end users or the criminals, so very much business as usual?

  16. Primus Secundus Tertius

    Bring back flowcharts

    I always argued that flowcharts were the way to design software. They show logic in two dimensions, making many errors and omissions much easier to see. But the industry has chosen the path of 'foolproof' programming languages, so the flow of disasters has continued apace.

    I would like to see AI creating and analysing flowcharts to find and fix flaws.

    1. An_Old_Dog Silver badge

      Re: Bring back flowcharts

      @Primus Secundus Tertius:

      Are you insane? Flowcharts are horrible!

      I learned computer programming making flowcharts of my programs before I wrote them, and I did not mind doing so at that time, because at that time, my programs were beginner-level-simple.

      When I got to college, the complexity of my programs went way up, and my willingness to use/produce flowcharts plummeted accordingly. Fortunately, we were not required to design or document our programs with flow charts.

      Flowcharts are "stuck" at the lowest-possible level of detail, and can give no overview or understanding of why all the thunder and motion are happening as they are within a program.

      Flowcharts are unwieldy, spread across many pages, with many "connectors", because you can't show much of a flowchart on a single page. So there are all the physical/concentration breaks where you flip forward and backward to find the relevant connector. Further, these breaks are arbitrary, and have absolutely nothing to do with the program's organization or control flow.

      I used/use pseudocode and data flow diagrams for high-level design and documentation.

      I don't know what the modern, cool kids use these days.

      Probably nothing, 'cause docs aren't "Agile".

    2. doublelayer Silver badge

      Re: Bring back flowcharts

      Even if I accept that logic, how do you propose compiling a flowchart to a deterministic program? Programming languages have the advantage that you can run and model with the same thing which helps a lot because, as experience has shown me, any two attempts to write the same thing will have weird differences if they're big enough.

      The other problem with flowcharts is that they only kind of work for a simple type of program which takes one input, runs for a while, and produces some output. If it runs multiple things in parallel, collecting some information while running other things, handling failures and potential problems without direct user interaction, a diagram of what it's doing gets a lot more complicated. You have two choices, neither of which is good. You can split it into lots of little flowcharts including arrows that come in from nowhere, or you can build a massive flowchart which covers an area the size of Wales (although I propose you use a flatter place) and still has arrows coming out of the Irish Sea to describe new events or data that weren't present when the program started.

      1. Primus Secundus Tertius

        Re: Bring back flowcharts

        There are 'subroutine' boxes that summarise a lot of logic and allow a global view of the program, e.g.:

        Init

        Doit

        Exit

        Boxes have little boxes, item by item; little boxes have lesser boxes, and so ad infinitum.

        1. doublelayer Silver badge

          Re: Bring back flowcharts

          Yes, that's the lots of little flowcharts option I already mentioned. The problem being that you now have to deal with all the various ways control flow can be modified. I write software that has scheduled jobs, event-activated jobs where something notifies us to start one, pull event jobs where we need to search for triggers to start one, user-called jobs, and job pipelines. These all run in parallel with mechanisms to prevent them from stepping on each other and to keep them in the proper order of data flow. Flowcharts to define that have lots of problems representing the external sources of information that can cause the jobs, and the flowchart describing how each function operates would be very complicated if you need to get all the concurrency data in there. If you don't get all the concurrency data in there, the system is guaranteed to hopefully crash but unfortunately quite likely deliver false results in a matter of minutes which will deliver an angry person to your desk. This system is not functionally described in a form you can easily break into little self-contained units.

  17. zeos

    The AI company that is paying her to shill is not getting their money's worth

  18. Blackjack Silver badge

    No wonder she is no longer CISA chief; so far AI makes more mistakes than it fixes.

  19. Boris the Cockroach Silver badge
    Boffin

    AI won't

    help until software engineering is treated exactly like real-world mechanical, electrical or civil engineering.

    Well, that's a bold statement to start with, coming from a lowly industrial robot programmer, but consider the following: your M$-powered computer suddenly decides to bluescreen for some reason and reboot, and the software industry has somehow managed to convince everyone that "that happens sometimes" and "it's not our fault".

    Now think about an aircraft flying along at 5,000 feet suddenly going rudder hard over and plunging into the ground at 500 mph, and Boeing putting out a statement to the effect of "well, 737s do that sometimes". How long before Boeing would go out of business with that attitude? Or saying "well, we had no proper engineering of the rudder, as it looked OK and passed our depleted QA department, so you can't sue us as we supply every 737 with a disclaimer that any crashes are not our fault"?

    Or a building company saying "not our fault the building fell over... must have been one of the techs altering the rivet temperature when they were hammered in". Even in my line of work, we have to be so careful to get the machinery to coordinate and to check we're not going to do something stupid such as drilling a hole 5 metres into a chuck (that makes a very loud noise that wakes the boss up).

    Until you force the software industry to have professional standards enforced by law, the addition of AI to any software creation process will not improve the security or stability of any software products as the very design of such products can be flawed from the start.

  20. btrower

    Somebody else here said "Hilarious at best, terrifying otherwise." It is clear from other comments that people do not think correcting obvious bugs in software constitutes a fix for security. Things are not secure, and for many of the reasons why, corrections are already known. As the power and sophistication of attacking systems grows, AI will be aiding attackers, and the cure will have to include AI for defense.

    1. khjohansen
      Headmaster

      Quality training

      Where will we find the "best practice" software on which to train AI ?

  21. DS999 Silver badge

    I'll believe that

    When we see the first completely AI-developed exploit chain being found by researchers (or, more likely, used by NSO Group).

    AI can maybe find some stuff that's very similar to stuff already out there, but it isn't going to find anything novel, which is usually required for at least one step in the chain. Just look at this, in particular the step where they turn an obsolete compression format designed for fax - still present in open source PDF libraries - into a circuit emulator to implement a 64-bit CPU that executes one stage of the exploit! Anyone think that AI would EVER come up with that, suggest a patch to fix it before it was found, or be able to determine what happened when examining evidence of an exploit like Google Project Zero's team did? So no, AI is not going to render CISA obsolete, not even close!

    https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html

  22. scrubber
    Black Helicopters

    Not sure all of govt. agrees

    The Intelligence Agencies love these exploits and jealously hoard their zero day exploits. If this was in danger from LLMs (not really AI) then it would pose an interesting dilemma for them. But I suppose they could always poison the LLM well and use it to add more scurrilous backdoors that they have developed.

  23. RedGreen925

    Yet another clueless moron heard from.

  24. Anonymous Coward
    Anonymous Coward

    AI designed with security?

    “The need for AI systems that are created, designed, developed, tested, and delivered with security as the top priority.” And what would those be? A lot of these “AI” code generators seem to be regurgitating stuff they scraped off Stack Overflow or just making things up.

  25. This post has been deleted by its author

  26. Anonymous Coward
    Anonymous Coward

    Probably not even theoretically possible.

    I would punt that, in general, determining whether code contains a security vulnerability can be transformed into the Halting Problem.
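The reduction the comment gestures at can be sketched. Assume a hypothetical perfect static analyser `is_vulnerable(program)` exists (no such function is real; that impossibility is the point):

```python
def program_for(f):
    """Return a program that is vulnerable if and only if f() halts."""
    def program(user_input):
        f()                      # if f() never halts, the next line is unreachable
        return eval(user_input)  # a textbook injection vulnerability
    return program

# is_vulnerable(program_for(f)) would decide whether f halts, which
# Turing proved undecidable in general. So a perfect vulnerability
# detector cannot exist; real tools are necessarily approximate.

# Sanity check with a trivially halting f: the vulnerability is live.
demo = program_for(lambda: None)
print(demo("2 + 2"))
```

This only rules out a *perfect* general-purpose detector; it says nothing against useful approximate scanners, which is where the practical argument lives.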

    In any case, none of this would directly address the exploitation of hardware vulnerabilities like Spectre etc.

    All in all a spectacular example of American daft bintery.

  27. TangoDelta72
    Facepalm

    It's not the SW development time that's the problem...

    ...it's the remediation time. Discovery of a SW vulnerability is actually pretty quick, and there are lots of methods of detection: white hats, bounty teams, AV/malware subscriptions, RCAs, to name but a few. When there's a discovery, it takes time and money to remediate, and not everyone has an infinite budget or man-hours to throw at the problem.

    "AI offers a way to address this, she claimed, as it is far better at tracking and identifying flaws in code. And it would be possible to tackle the mountain of technical debt left by a "rickety mess of overly patched, flawed infrastructure."

    AI is *not* going to re-code or magically fix decades of old code. System Owners (i.e. humans) still need to accept and approve the changes. As I just mentioned, discovery is quick, remediation is slow. AI (or whatever you want to call it these days) may develop or improve *new* code. That's yet to be seen as well, given many other nice articles published in El Reg. All I read here is a pipe dream that's about as close as nuclear fusion is to 2030.

    And "no security teams"? That's nuts. So many things wrong with that blanket statement, but that's already been shared in the comments.

    CISA used to have teeth, at least for government systems. Flaws can and do get bubbled up to congressional oversight if the risk and the affected system is important enough. To wit: "if you don't fix this, we pull your funding." I say "used to...".

  28. Anonymous Coward
    Anonymous Coward

    Nope

    Spoken by a person with little practical experience in software systems, and no commercial experience running software systems. It's all been consultancies and working at spy agencies.
