OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories

AI agents, which combine large language models with automation software, can successfully exploit real-world security vulnerabilities by reading security advisories, academics have claimed. In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists – Richard Fang, Rohan Bindu, Akul Gupta …

  1. rafff
    Happy

    Cost of an exploit

    " $8.80 per exploit, which they say is about 2.8x less than it would cost to hire a human penetration tester for 30 minutes."

    Pen testers seem to be quite cheap, though.

    1. elsergiovolador Silver badge

      Re: Cost of an exploit

      They are relatively cheap for an agency to hire, but not cheap for a client to hire from the agency.

      An agency would charge £2000 a day for a pen tester, and the pen tester will get like 25-30% of that (before tax).
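
      (Back-of-envelope sketch, assuming an eight-hour billable day: £2000/day is roughly £250/hour to the client, of which the tester sees perhaps £62-£75/hour before tax. And if "2.8x less" means the human half-hour costs about 2.8 × $8.80 ≈ $25, the paper is pricing human testing at roughly $50/hour.)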

  2. Brewster's Angle Grinder Silver badge
    Joke

    Where's hope? Or haven't we got to the bottom of the box yet?

    1. Yorick Hunt Silver badge
      Trollface

      Relax. In a couple of decades' time we'll send Arnie back to kill the people who came up with the concept of AI ;-)

      1. lamp

        Who? Alan Turing?

        1. Snowy Silver badge
          Headmaster

          No

          The term "AI" is usually attributed to John McCarthy, who coined it for the 1956 Dartmouth workshop. Marvin Minsky (MIT) defined it as "the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning."
