AI mishaps are surging – and now they're being tracked like software bugs

False images of Donald Trump supported by made-up Black voters, middle-schoolers creating pornographic deepfakes of their female classmates, and Google's Gemini chatbot failing to generate pictures of White people accurately. These are some of the latest disasters listed on the AI Incident Database – a website keeping tabs on …

  1. HuBo Silver badge
    Thumb Up

    ML Watch

    Great initiative! And, Incident 135 was particularly well reported on, expertly (IMHO). Also:

    "once we have generative robotics, I think physical harm will go up a lot"

    The current database's Incidents 24, 69, and 241 are nicely illustrative of what might be expected. However, I couldn't (cursorily) find Robot mistakes man for box of peppers, kills him ... that should probably be added in. A great start nonetheless!

    1. Michael Wojcik Silver badge

      Re: ML Watch

      Site does rely too much on scripting, though. A resource like this should be completely usable with scripts disabled. There's no reason for it not to degrade gracefully.

  2. Mike 137 Silver badge

    Maybe

    @HuBo It would seem that Incident 69 ("when Lal reached behind the machine to dislodge a piece of metal stuck in the machine") is a typical human error industrial accident that could have happened without any actual AI involved. There have been numerous similar accidents relating to non-"intelligent" telefactor and programmed robots (and indeed entirely "unintelligent" machine tools) over the years. I accept that "AI" introduces an element of uncertainty into the equation, but there's nevertheless a danger of ascribing causality to it just because it's present.

    This is not a defence of "AI" - it's an appeal for clarity in determining root causes in order that the record doesn't get contaminated.

    1. HuBo Silver badge
      Windows

      Re: Maybe

      Good point! I expect the AI-Incident-DB folks have criteria and debate on what to include, and a process for assessing the corresponding RotM events. It seems sensible for example that their DB doesn't include the revolting attempts at an uprising performed by the more primitive man-meat-grinding and man-tuna-canning machinery (IMHO).

      The oldest included incident seems to be number 27, from 1983, where thermonuclear end-of-the-world was averted by ignoring the output of a computer whose RotM AI Tech is stated as "Oko satellites, image recognition". It would be great if (all) other RotM entries in the DB had a similar "AI Tech" field (eg. for SQL searching) -- though self-aware LLMs might then be able to use it to inspire themselves in fomenting further mayhem ... (eh-eh-eh)

  3. steelpillow Silver badge
    Coat

    I'll get my coat

    "ChatGPT, deepfake some AI incidents and upload them to the AI Incident Database."

  4. Badgerfruit

    Raise a new ticket

    Computer "mouses"?

    1. Michael Wojcik Silver badge

      Re: Raise a new ticket

      Yes, sometimes it does.

  5. Tron Silver badge

    A shocking invasion of AI's privacy.

    No different from telling the world of Taylor Swift's tailored, swift movements around the world on her private jet. As a form of intelligence, shouldn't AI have a right to privacy? Especially in the EU. Come on EU people, think of the fines you can levy for this one. The European Parliament Christmas party will last until June.
