Tough Euro crackdown on AI use passes key vote

A sweeping European Union-wide AI regulatory bill is one step closer to adoption, with the European Parliament's Internal Market and Civil Liberties Committees voicing their approval by an overwhelming majority. Should the bill become law, it could lead to tough times for AI operators in the economic bloc. Passed 84 to 7 (with …

  1. Anonymous Coward

    Risk / safety is subjective

    "AI systems the EU decides come with "an unacceptable level of risk to people's safety" would be banned outright"...

    Yep, safety is completely subjective, so no worries about setting the bar on this - those with deep pockets can bribe <bsp><bsp> lobby for their systems to be deemed safe, or at least for the 'benefits' to outweigh the risks.

    Considering we're not really in the true AI space at this time, just getting some language models to wobble out vaguely comprehensible sequences of words, I'm at a loss to understand why the EU is kicking this around whilst ignoring all the damage social media has been doing for decades.

    1. Yet Another Anonymous coward Silver badge

      Re: Risk / safety is subjective

      > I'm at a loss to understand why the EU is kicking this around

      There is no such thing as the EU, just politicians / civil servants working in their own interests (so unlike our own dear government).

      So elected officials can go to their voters (both of them) and say "I'm working to protect you from the evil AI"

      The civil servants can work on cool new AI stuff rather than the 32nd draft of the "Nuts (unground), (other than ground nuts) Order"

  2. Anonymous Coward

    Won't somebody think of the children!!

  3. Peter Prof Fox

    Jack the Ripper

    There have been many attempts at discovering who Jack the Ripper was and writing a book about it. Names (of dead people) have been named. The method seems to be: think of a possible perpetrator, then collect/invent/suggest evidence. AI machines are clever guessers with encyclopedias. It's statistics.

    Insurance companies and mortgage lenders have been using statistics and data for a long time. They statistically infer certain risks from you being married or single. We know sometimes the data these systems use is perverse when applied to individuals. (Good at paying off debt? That's 10 points. Never had any debt? Ohh! Dodgy.) Then (still not yet in artificial territory) the fact that you visited a website about drug rehabilitation or went on a stop-something protest gets added, without your knowledge and without any real reason, to your background checks. Now we're into bad-lands.

    It's not the clever-guessing AI that's the problem but the data-hoovering and the ability to correlate it and then come to some judgement. (With plenty of opportunities for circular 'reasoning'.) "Which of these five people pictured is most likely to ..." is a matter of prejudice when done by a human, and a matter of clutching at opaque statistics when done by a computer. It's easy to see that this sort of thing is desperation.

    But there's an everyday use which is just as pernicious, and that's red flags for organisations without any intelligence. Hello Social Services, I'm looking at you. A red flag should be a prompt for investigation, not a one-size-fits-all response. When the staff have no time, skills, or trust, the 'safest' or cheapest option will be rubber-stamped.

    Conclusion: Opaque and unjustified data collection is the main danger. Letting 'the system' decide is cheap and 'not my responsibility'... But not fit for purpose.

    1. Elongated Muskrat

      Re: Jack the Ripper

      Reading the article, though, it sounds like this piece of legislation is more about stopping those sorts of things (the one that stands out is "social scoring systems" à la China's surveillance state). Mention of AI like this seems almost like a bit of a gimmick, much as a lot of supposed "AI" is. We're certainly still years away from an actual artificial general intelligence, if such a thing is even possible. As you say, what is getting called "AI" and "Machine learning" is better described as a bunch of techniques for data analysis and pattern matching. Those techniques are certainly getting more refined, and it is potentially worrying that they can produce results that look like they were done by a human, but at the same time, a robot can produce a car that looks like it was welded together by a human. The issues only arise when people attempt to pass one thing off as another. Oh, and "AI" can't do fingers properly...

      1. TimMaher Silver badge
        Headmaster

        Re: Fingers

        That’s because it hasn’t got an opposable thumb.
