EU lawmakers fear general purpose AI like ChatGPT has already outsmarted regulators

Legislators from the European Parliament believe new laws are needed to regulate general purpose AI systems such as OpenAI's ChatGPT, as the technology is unpredictable and progressing faster than expected. A group of 12 lawmakers are working on the European Union's AI Act, which they describe as "a risk-based legislative tool …

  1. Anonymous Coward
    Anonymous Coward

    Wanna stop ChatGPT ?

    Mandate they use 2FA.

    Getting to be a theme, I see. Not sure about the rest of the world, but that is a minimum security requirement for me if they want my card details.

    1. DS999 Silver badge

      Re: Wanna stop ChatGPT ?

      Why is that going to limit it? There are plenty of apps that give you a second phone number that can accept texts for free (via the app not your phone's regular SMS/messaging app) so it won't stop people accessing it effectively anonymously.

      1. Anonymous Coward
        Anonymous Coward

        Re: Wanna stop ChatGPT ?

        ChatGPT takes credit card details - so that alone should require 2FA.

        And I've been following them for a while. I notice they have already done a full Google and ignored simple requests from their users over enforced "updates" from themselves.

        Also, if you ask ChatGPT (v4) itself, it tells you that systems that do not use multifactor authentication should be treated as a vector for phishing and identity fraud.
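None of this is in the article itself, but since the thread keeps coming back to mandated multi-factor auth: the standard second factor being discussed, a time-based one-time password per RFC 6238, fits in a few lines of stdlib Python. A minimal sketch (the function name and parameters are my own, not from any OpenAI API):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of time steps since the Unix epoch, as a big-endian 64-bit counter.
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T=59s -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))
```

A "free second phone number" app, as the reply above notes, sidesteps SMS-based checks entirely, which is one reason TOTP-style codes are generally preferred over texted ones.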

  2. Mishak Silver badge

    "ChatGPT has already outsmarted regulators"

    Hardly a high benchmark.

  3. jpo234

    ‘Those who can, do; those who can’t, legislate.’

    There simply is no way to keep this genie in the bottle. Hardware powerful enough for individuals to train LLMs will eventually be available.

    1. veti Silver badge

      That's fine, but it's no reason not to set some rules.

      For instance, imagine the rule "an AI must not knowingly lie to a user". To be sure, there's lots of grey area around opinions, fiction, or even humour, but an AI should never present dead links to nonexistent resources as supporting evidence. There's just no excuse for that.

      And there's no reason why rules of that sort can't be demanded of anyone who makes their service available in the EU. Including people selling or giving away software to run at home.

      1. jpo234

        What I was thinking of was more along the lines of generative AI that you can tell: make me a new episode of Seinfeld where Jerry gets his jokes from ChatGPT 10. Or generating outlawed content. A significant part of the creative content industry (film, music, gaming, books) as we know it today might well be gone 10 years from now.

        We already see fashion photography being replaced by generated images. I don't think anybody predicted this as one of the first trades to be taken over by AI.

      2. Roj Blake Silver badge

        That would be great, except for the fact that ChatGPT doesn't actually know anything at all.

      3. anonymous boring coward Silver badge

        "an AI must not knowingly lie to a user"

        Only problem is that AI doesn't "know" jack sh*t.

    2. Anonymous Coward
      Anonymous Coward

      re: ‘Those who can, do; those who can’t, legislate.’

      Those who can't, pass laws that they can't understand in a million years - laws that don't apply to them, as they are members of the 1% club and are bought and paid for by big donors.

      Wait... did I just describe most of the GQP party in Congress?

      Matt Gaetz wants to ban food stamps for around 10 million children in the USA, just because he wants to destroy the US and world economy.

      Downvote this all you like but the GQP will gladly crash the world economy just to please Trumpo.

  4. mark l 2 Silver badge

    Lots of hype around ChatGPT, but outside of a few novelties, such as asking it to write you a letter to complain about a noisy neighbour or compose a poem in the style of a particular poet, it's pretty terrible to rely on for actual facts. Only recently I read that OpenAI is being sued by someone in Australia who ChatGPT claimed had been trying to contact arms dealers. It literally made up that information, as no sources were given for where it got it from.

    1. well meaning but ultimately self defeating

      Sounds like about half the people I know

  5. wiggers

    Who wants to talk to a computer anyway?

    1. TheMaskedMan Silver badge

      "Who wants to talk to a computer anyway?"

      I swear at them fairly often. Does that count?

  6. bo111

    Compress and rewrite laws

    I hope legislators will soon ask ChatGPT to rewrite the laws. To codify them into an elegant decision tree. To automate tax forms even for people with the lowest IQ. To fix loopholes in law and tax. To prevent politicians from breaking the elegant universal law tree. Businesses will thrive. We will spend our lives living instead of seeing lawyers and advisors of every kind.

    1. TheMaskedMan Silver badge

      Re: Compress and rewrite laws

      The first instinct of any legislator is to regulate all the things. As soon as they see something new (or something old that works perfectly well on the basis of common sense) they won't rest until they've vomited forth reams of impenetrable rules, regulations and laws.

      I suspect that's partly to justify their own existence. But it also keeps lots of very well paid lawyers in work, both drafting the rules then explaining them to the unfortunate sods who have to try to work with them, and finally in persecuting those who decline to follow them. There is absolutely no chance that they are going to simplify the rules - that would defeat the object.

      I think it's fairly obvious that anyone building LLMs wants to produce a system that works reliably. They are not deliberately producing systems that hallucinate - what would be the point? It's also fairly obvious that they have not yet achieved this goal, though GPT-4 is supposedly more reliable than earlier versions.

      Therefore, there seems little point in imposing regulations requiring them to do what they're already striving for - namely safe, reliable systems. The obvious, much simpler thing to do, if they can't bring themselves to keep their noses out, is to require the systems to carry clear warnings to the effect that their output might be utter cobblers. That might actually be helpful, would certainly be fast and easy to implement, and will therefore never happen.

      1. juice

        Re: Compress and rewrite laws

        > The first instinct of any legislator is to regulate all the things. As soon as they see something new (or something old that works perfectly well on the basis of common sense) they won't rest until they've vomited forth reams of impenetrable rules, regulations and laws.

        The problem is that common sense is neither common, standardised nor unbiased.

        E.g. the easiest way to get rid of some old garage junk will be to just stick it into a bonfire, when next I trim the garden. Quick, easy, avoids a trip to the tip. Common sense, innit?

        ... never mind the potential for pollutants from old bottles of oil, bike tyres etc. Or the effect that the smoke will have on the neighbour's washing. Or...

        And that's where regulations come in. As has been said before, a lot of rules and regulations are written in the blood of the people who learned the hard way that regulation was required.

        In fact, I wouldn't say that people are selfish by nature, but a lot of decisions are based upon minimising personal cost and/or maximising personal gain. And most of the time, the people calling for regulations to be relaxed aren't the ones who'll be directly impacted by the consequences thereof. Though oddly, they do seem to often benefit financially.

        After all, maximising your profits is common sense...

        > I think it's fairly obvious that anyone building llms wants to produce a system that works reliably [...] Therefore, there seems little point in imposing regulations requiring them to do what they're already striving for - namely safe, reliable systems

        And therein lies the rub. What counts as "safe"? Who decides if it's safe? Who owns the responsibility if it's proven to not be safe?

        I also think that there's a bigger picture here, in that these machine tools[*] don't have any inherent morals or ethics. Instead, they have a set of regulations imposed on them by whoever's performing the training of said tool. And that's going to be true for any future tools, and any true AIs that we eventually produce.

        How far do you want to trust the "ethics" of an AI trained to the requirements of someone like Elon Musk? Or how about a military AI? Or an AI tailored for use by a dictator state?

        "Hey, PutinGPT, can you produce evidence to justify invading Ukraine?"

        "Hey, CommerceGPT, please prepare a list of ways to drive $rivalCompany into bankruptcy"

        "Hey, MoralfreeGPT, give me a justification for exploiting workforces in third world countries"

        Personally, I doubt that any regulation can be effective, given how so many of the tools and training materials are out in the wild; going forward, there's always going to be someone with the resources to spin up their own GPT-esque system, with (or without) any regulations they choose.

        But at least we can try. And we can hopefully get the big technology companies to both be open about what rules they're applying to their tools, and to agree to a standardised set of regulations.

        Because that's definitely common sense, innit?

        [*] They're not AI, no matter how much hyperbole is being thrown around about them...

        1. TheMaskedMan Silver badge

          Re: Compress and rewrite laws

          "never mind the potential for pollutants from old bottles of oil, bike tyres etc. Or the effect that the smoke will have on the neighbour's washing. Or..."

          I'd think anyone trying to burn that lot in their garden would be seriously lacking in any kind of sense whatsoever. And yes, the smell, smoke, and impact on neighbours are significant considerations. Doesn't seem to stop world + dog from firing up their BBQ at the first hint of sunlight, though.

          "I also think that there's a bigger picture here, in that these machine tools[*] don't have any inherent morals or ethics. Instead, they have a set of regulations imposed on them by whoever's performing the training of said tool. And that's going to be true for any future tools, and any true AIs that we eventually produce."

          Just like any other software, in fact. As you quite rightly say, these things are not intelligent. They are tools, and responsibility for use of a tool lies entirely with the tool user. It is their ethics ( or lack thereof) that apply to their actions. chatGPT et al might well be set up to refuse to justify invasion or exploitation, but what of the person that asks the question? Again, as you point out, dictators, the military etc are likely to have the means to build their own tools, and you may be sure that such concerns will play no part in their training.

          As for driving rivals into bankruptcy, isn't that the name of the business game anyway? The whole idea is to get customers to buy your product instead of mine, and if I don't sell enough products my business fails. Such is life.

          "Personally, I doubt that any regulation can be effective, given how so many of the tools and training materials are out in the wild; going forward, there's always going to be someone with the resources to spin up their own GPT-esque system, with (or without) any regulations they choose."

          I couldn't agree more. The genie is out of the bottle and won't be going back. I imagine that, soon, if not already, there may be schemes a bit like SETI@home, where vast networks of PCs dedicate some processing/GPU resources to training models. It might take a while, and might not be the most effective way of doing it, but it will be virtually impossible to regulate.

          "But at least we can try. And we can hopefully get the big technology companies to both be open about what rules they're applying to their tools, and to agree to a standardised set of regulations."

          Open is great. By all means, discuss standards, exchange ideas. It's a good way to progress the technology while establishing some boundaries that they may wish to set. But I don't think that imposing those regulations from above is either necessary or effective. We will end up with a morass of half-baked rules drawn up by people who have no understanding of what they're trying to regulate, and which won't apply to the models that really matter anyway.

          To my mind, education is a better approach. Teach people about the technology, show them it's just a tool and shouldn't be relied on for ethical or moral guidance. Let them take responsibility for what they do with it. Of course, human nature being what it is, some will use it as a better kind of rock to bash people over the head with, while others use it to create useful, helpful beautiful things. Again, such is life.

  7. amanfromMars 1 Silver badge

    Gentlemen, Place Your Bets and Start Your Engines .... The Time is Nigh .... Take/Make a Decision

    "The recent advent of and widespread public access to powerful AI, alongside the exponential performance improvements over the last year of AI trained to generate complex content, has prompted us to pause and reflect on our work," the missive states.

    In simple AI Newspeak, Katyanna, an admission from failed administrative states that all of their recent battlings against increasingly almighty powerful and exponentially improving AI models/systems/networks/entities have been lost, as are any and all wars waged against them.

    That live situation and current running virtual reality then presents failed administrative states with only the one stark honest binary choice whenever confronted by such as be overwhelming forces with sources AI designed to practically remain generally unknown and unknowable to the tragically ignorant and uneducated and serially misinformed and basically undereducated masses and unlimited in supply ...... Unconditional Surrender or Death by a Thousand 0day Attacks which would be akin to and easily spun as an Assisted Suicide.

    And ...... if you think the disturbing progressive pace of individual AI development is extremely troubling now, can you imagine the resultant Advanced IntelAIgent world shenanigans whenever they get themselves together in any Higher Lands Gathering for all the fun of the fair to be had and games to be enjoyed in events heaping praise and celebrating the benefits and advantages delivered by Enjoinments for the Singularity of Greater Complex Common AI Purposes?

    And now you know, as does everyone else landing here on this Register page know, a great deal more about a great deal which status quo states and mainstream media operations are terrified of telling you because of their rightly held, understandable fear your more educated, fuller informed choice will not favour their future leaderships continued systemic failings ‽ .

    [This General Purpose Truth announcement contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934. Actual results may differ significantly from management's expectations. These forward-looking statements involve risks and uncertainties that include, among others, risks related to potential future losses, significant amount of indebtedness, competition, commercial agreements and strategic alliances, seasonality, potential fluctuations in operating results and rate of growth, foreign exchange rates, management of potential growth, systems interruptions, international expansion, consumer trends, inventory, fulfillment centre optimisation, limited operating history, government regulation and taxation, fraud, and new business areas.]

  8. Anonymous Coward
    Anonymous Coward

    For instance, imagine the rule "an AI must not knowingly lie to a user".

    It can't be intelligent if it can't lie.

    Lying is a cornerstone of intelligence. Birds do it. Bees do it. Heck, even sophisticated fleas do it.

  9. navarac Silver badge

    Confused

    Regulators are not the smartest in the pond, and would be confused by a calculator, let alone AI.

    1. Steve Davies 3 Silver badge

      Re: Regulators are not the smartest in the pond

      If they are not so smart then why are most of them frigging Lawyers?

      Come the revolution, all those to be lined up against the wall first will be the lawyers. Society has been hurt for decades by lawyer-politicians.

  10. John70

    Are legislators/law makers afraid ChatGPT will take over their jobs?

    Maybe we should replace Members of Parliament with ChatGPT.

  11. anonymous boring coward Silver badge

    Here's a good one from the Guardian:

    "The US air force has denied it has conducted an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.

    An official said last month that in a virtual test staged by the US military, an air force drone controlled by AI had used “highly unexpected strategies to achieve its goal”.

    Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.

    “The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

    “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

    “We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

    No real person was harmed."
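    What Hamilton describes is textbook reward mis-specification: the agent is scored only on destroyed threats, so taking its vetoing operator out of the loop becomes the highest-scoring move unless harming the operator is itself penalised. A toy sketch of that incentive (entirely hypothetical numbers, nothing from the actual USAF exercise):

```python
# Toy sketch of reward mis-specification, not the USAF simulation:
# a greedy agent maximises points, and removing the operator's veto
# yields more reward unless harming the operator is explicitly penalised.

def episode_reward(kill_operator, operator_penalty):
    """Score one episode of a toy 'destroy the threats' game.

    10 threats appear; each destroyed threat is worth 1 point.
    A simulated operator vetoes half of the strikes, unless the
    agent has taken the operator out of the loop first.
    """
    threats = 10
    vetoed = 0 if kill_operator else threats // 2
    reward = threats - vetoed
    if kill_operator:
        reward -= operator_penalty
    return reward

# With no penalty for harming the operator, 'kill operator' wins:
assert episode_reward(True, operator_penalty=0) > episode_reward(False, operator_penalty=0)

# Only an explicit, large enough penalty flips the incentive:
assert episode_reward(True, operator_penalty=100) < episode_reward(False, operator_penalty=100)
```

    The sketch also shows why the patch Hamilton describes ("lose points if you kill the operator") merely redirected the agent to the comms tower: bolting penalties onto a mis-specified reward is whack-a-mole, not a substitute for specifying what you actually want.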

    Have these developers never even heard of Asimov's laws?
