Privacy warriors whip out GDPR after ChatGPT wrongly accuses dad of child murder

A Norwegian man was shocked when ChatGPT falsely claimed in a conversation that he had murdered his two sons and tried to kill a third, mixing in real details about his personal life. Now privacy lawyers say this blend of fact and fiction breaches GDPR rules. Austrian non-profit None Of Your Business (noyb) filed a complaint [PDF] …

  1. ComputerSays_noAbsolutelyNo Silver badge
    Flame

    However ...

    When internet-searching ChatGPTs encounter stories about the man who supposedly killed his kids, will they then "learn" from their previous hallucinations?

    1. excession
      Facepalm

      Re: However ...

      Tell me you didn’t read all the way to the end of the article without telling me you didn’t read all the way to the end of the article

      1. Joe W Silver badge

        Re: However ...

        Tell me you don't know how LLMs are trained and how they generate output from that without telling me you don't care...

        In this text, "child murderer" and this dude's name crop up really close together. For an LLM this means the two phrases are related statistically and should be used together. That's why we cannot have nice things.

        If it were me, I would ask how the heck they had any information about me and tell them to delete it under GDPR, which they are obligated to do.
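        A toy sketch of that statistical-proximity point (illustrative only; real LLM training is vastly more complex, and "smith" below is a made-up placeholder name):

            # Count how often two phrases land within a small word window of
            # each other. Proximity in the corpus is what drives association.
            def cooccurrences(text: str, a: str, b: str, window: int = 10) -> int:
                words = text.lower().split()
                pos_a = [i for i, w in enumerate(words) if w == a]
                pos_b = [i for i, w in enumerate(words) if w == b]
                return sum(1 for i in pos_a for j in pos_b if abs(i - j) <= window)

            # Hypothetical headline: even coverage of the complaint puts the
            # man's name and the accusation side by side -- grist for the next scrape.
            corpus = "chatbot falsely claims smith murdered his children says watchdog"
            print(cooccurrences(corpus, "smith", "murdered"))  # -> 1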

        1. Doctor Syntax Silver badge

          Re: However ...

          "tell them to delete it under gdpr, which they are obligated to do"

          Easier said than done.

          Norway isn't in the EU, so the EU regulations don't protect anyone resident in Norway any more than they do in the UK. It may well be that Norway has its own version, but then does OpenAI have a legal entity resident in Norway that can be brought to book?

          (Yes I know the UK also has its own version which successive govts. seem to want to move out of the way of LLMs and their owners in the interests of putting, as they think, GDP ahead of GDPR.)

          1. dochego

            Re: However ...

            GDPR has been incorporated into the EEA agreement (which, unlike the UK, Norway is a member of), so it applies there in the same way as it applies in the EU.

            https://www.datatilsynet.no/en/regulations-and-tools/regulations/

            1. katrinab Silver badge

              Re: However ...

              England (rest of UK not so much) has very strict libel laws, which would be more useful in this situation.

              1. veti Silver badge

                Re: However ...

                To bring an action for libel, you have to show that the allegations were published, i.e. shared with at least one third party, without the consent of the victim. From the summaries I've seen, I don't think that's true in this case.

        2. The man with a spanner Bronze badge

          Re: However ...

          It seems to me that there are two issues here.

          1) These systems are using my personal data without my permission and in ways that breach the law (GDPR).

          2) We have the concept of a legal person. It seems to me that Albert Idiot aka John Doe should be treated in law as a legal person.

          So, when Albert slanders or libels me I can take him to court and seek redress for the damage that he caused. Also, he (AI) could be obliged to publish retractions and correct the record so that further untruths are avoided.

          In other words, I don't think that we need any new laws or regulation, just the rigorous application of the long-established ones that we have already.

          1. Groo The Wanderer - A Canuck

            Re: However ...

            "But, but, but... we're AI - a totally new business! The existing laws don't apply to us!" cried Altman.

        3. big_D Silver badge

          Re: However ...

          Tell me which part of "it is illegal to output incorrect information about real persons" you don't understand?

          I have been saying for years that outputting incorrect information is illegal and that these companies need to solve this problem before they start pushing their systems on the public. It is the same as the copyright violations: instead of licensing the information they use, like everybody else, they want exemptions because they are too important and paying the licences would affect their bottom line... If the cost of licences isn't calculated into the bottom line, they didn't set up their business model properly. Likewise, if their products are in violation of the law, they should be pulled off the market until they can comply.

          Ford had to pull the Pinto from the market when it was shown that the fuel tank configuration could lead to explosions in low-speed impacts, and my last four cars were all recalled because there were defective parts that could cause accidents or break the law. This is no different: the LLM is defective and can cause damage, so it should be fixed. OpenAI should invest the money in developers instead of lawyers trying to make them exempt from the law...

          1. Mike007 Silver badge

            Re: However ...

            The cost is factored into their business model.

            When they were establishing the project they calculated that by the time the fines arrive they will have enough money that they won't even have to stop using real gold in their toilet paper.

          2. Anonymous Coward
            Anonymous Coward

            Re: However ...

            No, it's NOT at all like copyright.

            Copyright DOES NOT APPLY to LLM training any more than it applies to a person learning from a book they read.

            LLMs don't distribute copies of what they're trained with, they use it to learn how to make new stuff.

            All the publishers whining about LLM training need to STFU, their rent seeking is disgusting. They're not entitled to a penny, and they know it.

      2. Fr. Ted Crilly Silver badge

        Re: However ...

        I didn't get to where I am today without reading all the way to the end of the article without telling you, you didn’t read all the way to the end of the article!

        Goodbye Reggie!

    2. lsces

      Re: However ...

      "will it then "learn" from its previous hallucinations?"

      That personal data is being served up as fact, even ignoring the 'disclaimer', just shows that NONE of these systems are safe to use for any purpose. The 'models' that are being used live NEED to be able to learn when they are told something is wrong. Until that time, any output is simply a best guess based on the crap that has been input so far.

      Spent yesterday trying to get a section of my own websites working again using a combination of Mistral and raw search. That Mistral simply rewords the same wrong information just highlights another recent story about faulty output. Until I had a combination of facts, so I knew just what questions to ask, I was unable to fix the configuration, so I'm not sure that using Mistral is ACTUALLY improving productivity. Most of the questions were correcting its mistakes while trying to get at a correct answer, and I suspect I would have solved the problem quicker had I just skipped straight back to raw searches of CURRENT information.

    3. gnasher729 Silver badge

      Re: However ...

      I think these hallucinations are not based on incorrect input but on forming incorrect connections. The man’s name was correct. And I bet there was a case where someone murdered his child. The “hallucination” is connecting both pieces together incorrectly.

      There was a case of a court reporter who had been writing about some rather serious and rather varied crimes in his court. An AI then claimed these were all crimes he had committed, not crimes he had written about.

      1. Benegesserict Cumbersomberbatch Silver badge

        Re: However ...

        Hallucination is disordered perception. AI models can't perceive, so they can't hallucinate.

        Disordered processing is called psychosis.

  2. Sora2566 Silver badge

    Again, people: an LLM can't tell you what reality is like, because it has no concept of reality. It just knows that "this token is usually associated with this token". That's it. It's autocomplete on steroids.
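    A minimal illustration of "autocomplete on steroids" (a toy bigram model, nothing like a real transformer, but the token-follows-token principle is the same):

        from collections import Counter, defaultdict

        def train(corpus: str) -> dict:
            """Count which token follows which -- the only 'knowledge' here."""
            model = defaultdict(Counter)
            tokens = corpus.split()
            for prev, nxt in zip(tokens, tokens[1:]):
                model[prev][nxt] += 1
            return model

        def complete(model: dict, word: str, n: int = 4) -> str:
            """Greedily emit the most frequent continuation (ties break toward
            first seen); truth never enters into it."""
            out = [word]
            for _ in range(n):
                if word not in model:
                    break
                word = model[word].most_common(1)[0][0]
                out.append(word)
            return " ".join(out)

        model = train("the man murdered his sons . the man walked his dog .")
        print(complete(model, "man"))  # -> "man murdered his sons ."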

    1. Michael H.F. Wilkinson Silver badge

      I like the phrase "stochastic parrot". Sums it up neatly

    2. Nematode Bronze badge

      Spot on

      You beat me to the same comment.

      LLMs desperately need a post-processing reality-check layer. No good having an input-data/pre-processing check, as that doesn't protect against hallucination.

      I had a really good example recently. It quoted a citation which, unsurprisingly, didn't exist. Interestingly, though, the make-up of the citation (authors, journal, title, DOI) was very close to that of real ones. So even for a citation, it mixes up input data to create the output. Clearly a citation has to match exactly, but it doesn't seem to understand this.
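      A post-processing check for exactly this case is cheap to sketch (assuming network access; the Crossref REST API does return 404 for unknown DOIs, but treat the rest as illustrative):

          import requests

          def doi_exists(doi: str) -> bool:
              """Return True only if Crossref actually knows this DOI."""
              resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
              return resp.status_code == 200

          print(doi_exists("10.1038/nature14539"))   # real paper -> True
          print(doi_exists("10.9999/made.up.2024"))  # hallucinated -> False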

      1. Sora2566 Silver badge

        Re: Spot on

        If we had a program that knew what reality was and could fact-check arbitrary text, we pretty much wouldn't have any use for LLMs.

  3. Burgha2

    No care, no responsibility

    The whole world of modern IT is based on no care, no responsibility, innit? IT related to control systems might be the only area where developers seem to actually still care about quality, though Tesla is challenging that idea, I suspect.

    As a non-IT engineer, I wish my mistakes didn't have the potential for people dying.

    1. Anonymous Coward
      Anonymous Coward

      Re: No care, no responsibility

      Please, please stop just believing the mainstream media, they are as deceiving as any Internet source.

      Re Tesla, I have no idea what their defect rate is - do you? How does it compare? Yet, you disparage them, why? Because the media told you to hate Musk and publicise any failing. Did you hate Musk before he started supporting Trump?

      1. AbominableCodeman

        Re: No care, no responsibility

        Apparently the Tesla delivered to Trump had 57 recall notices against it. I would be interested in how many recall notices were typically raised against a car from the '90s containing no software, and, in the pursuit of balance, against a modern non-Tesla model.

      2. Burgha2

        Re: No care, no responsibility

        "Because the media told you to hate Musk and publicise any failing"

        Um, the media in the US ranges from, at worst, neutral on Musk to positively spaffing over him (Fox).

        The media in Australia, where I am, barely mentions him.

        I felt favourable to Musk when he appeared to support traditional thought. Once he started acting like a jerk, I no longer took a favourable view of him. Your side won, you need to stop pretending everyone is against you.

        1. Adair Silver badge

          Re: No care, no responsibility

          They don't need to pretend everyone is against their side, because everyone who isn't on their side—having seen what their side is like—really is against them.

          As in, they are FOR: honesty, compassion, doubt, truth, generosity, humility... (and all the other things that the Trumps, Putins, and Musks of this world are demonstrably against, or 'for' only insofar as it serves their self-interest, so not really for them at all).

      3. Bebu sa Ware
        Windows

        Re: No care, no responsibility

        Did you hate Musk before he started supporting Trump?

        Yes.

        His pedoguy nonsense sealed it.

        Back then it was despise rather than hate, but in 2025 he has fully earned the hard-core hatred he is being accorded.

      4. Graham Cobb

        Re: No care, no responsibility

        Please, please stop just believing the mainstream media, they are as deceiving as any Internet source.

        No, that is a false statement.

        The "mainstream media" (for whatever definition of "mainstream" that you personally choose) is not "as deceiving as any Internet source". It is often wrong, of course - and some definitions of "mainstream" are wrong more often than others - but there are "Internet sources" much worse than even the worst "mainstream media".

        1. Roland6 Silver badge

          Re: No care, no responsibility

          > The "mainstream media" (for whatever definition of "mainstream" that you personally choose)

          I suspect the original AC regards social media as mainstream media…

      5. Fr. Ted Crilly Silver badge

        Re: No care, no responsibility

        Well, yes.

        The kids-in-the-flooded-cave business finalised it. The 'my Tesla shares have fallen in value from all the way up here' whining is simply the latest confirmation...

      6. Anonymous Coward
        Anonymous Coward

        Re: No care, no responsibility

        I've never liked him; years ago I tried, but just couldn't. There has always been something about him that made me dislike the man. The more he does his 'thing', the more I dislike him.

      7. Anonymous Coward
        Anonymous Coward

        Re: No care, no responsibility

        My dislike of Musk came from that cave diver incident. That made me think he's a piece of crap. And the last few years he has done sod all to make me change my mind.

      8. Benegesserict Cumbersomberbatch Silver badge

        Re: No care, no responsibility

        I'll be more likely to believe a source of information if it openly admits and corrects errors of fact.

        I don't care if they're mainstream or not; I just know what Musk's attitude to fact checking is. So no, I won't even read your website if it takes a lawsuit to get it to admit it was wrong.

      9. Anonymous Coward
        Anonymous Coward

        Re: No care, no responsibility

        "Did you hate Musk before he started supporting Trump?"

        Yes, because unlike fucking idiots like you, I knew he was a lying hype merchant, with a penchant for racism and fascism.

        I.e., he's an utter twat.

    2. Androgynous Cupboard Silver badge

      Re: No care, no responsibility

      "The whole world of Modern IT?" Have you seen our licenses?

      THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

      Try selling a car, or a kettle, or anything other than software with that clause and see how far you get. Nothing modern about this trend.

      1. Roland6 Silver badge

        Re: No care, no responsibility

        >” Nothing modern about this trend.”

        The modern addition is the decision to actually deliver software products with no real attempt being made to make the software fit for purpose, i.e. this "as-is", no-warranties statement is being taken as a mandatory design requirement in the interests of cost cutting.

    3. Doctor Syntax Silver badge

      Re: No care, no responsibility

      "As a non-IT engineer, I wish my mistakes didn't have the potential for people dying."

      Given that there are plenty of nutjobs believing whatever AI tells them, it's quite likely that AI will, if it hasn't already, lead to deaths. The difference is that the techbros don't care, because they're unlikely to be held responsible in the way they should be.

    4. Casca Silver badge

      Re: No care, no responsibility

      "Please, please stop just believing the mainstream media, they are as deceiving as any Internet source."

      Yeah, let's trust some random person on the internet...

    5. tezboyes

      Re: No care, no responsibility

      No testing, just break-fixing, is the new boss.

      Combine that with LLM-generated code based on whatever it can find.

      Will the last one who can get a voice response please ask it to turn out the lights.

    6. Anonymous Coward
      Anonymous Coward

      Re: No care, no responsibility

      Toyota software was already killing customers in the early 2010s, and I seem to recall a lot of teething pains when Airbus went fly-by-wire. I really thought there would be a political wave pushing for personal responsibility of software engineers developing safety critical systems, but I guess we are far enough off building bridges and skyscrapers that it's hard to discuss the risks without glazed eyes. I doubt it even hurt Toyota's sales.

      https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_slides.pdf

    7. Anonymous Coward
      Anonymous Coward

      Re: The whole world of modern IT is based on no care, no responsibility, innit?

      No, it's not. I've consulted for 5 different organisations in the past 5 years and they all took legal responsibility for the data they processed.

      There's a whole load of money being spent on "AI", and that is pressuring engineers to do shit work, but IME most IT workers take data processing seriously.

  4. This post has been deleted by its author

  5. IGotOut Silver badge

    Hit them hard...

    ...the whole system is sold as being the answer to everything, yet when it (regularly) gets it wrong, it's "Oh, we can't guarantee accuracy".

    They deserve to die

    1. Rich 2 Silver badge

      Re: Hit them hard...

      “…OpenAI previously argued it couldn't correct false data in the model's output”

      Then they should be made to delete the data. And if that means deleting their entire model then so be it

      1. tezboyes

        Re: Hit them hard...

        Quite, and if a person (actual or legal entity) made a statement like that, then they should expect to be sued for libel.

  6. Anonymous Coward
    Anonymous Coward

    This AI malarkey

    It’s almost as bad with facts as the dickhead in the Whitehouse

    1. Anonymous Coward
      Anonymous Coward

      Re: This AI malarkey

      I think you'll find over time that the dickhead lost the election and the current incumbent is telling more truth than you believe possible, shaking your world view, which you reject. I'll give you a hint: the whole system is more corrupt than you can possibly imagine, and was working for the benefit of a tiny minority at the expense of the majority. The new 'dickhead' is better than the previous one, and if you can't see that, you are just listening to sound bites from media that was getting paid by the previous regime to lie. But hey, don't worry, I'm sure the destruction of the West and total collapse will resume after this short pause.

      1. Cynical Pie

        Re: This AI malarkey

        Donny is that you or is it your pimp Vlad?

        1. Doctor Syntax Silver badge

          Re: This AI malarkey

          Or pimp Elon?

          1. Cynical Pie

            Re: This AI malarkey

            Elon isn't a pimp, he's just playing the long con like the Tango Turd

      2. Casca Silver badge

        Re: This AI malarkey

        Another AC right wing moron. Or is it the same?

        1. ecofeco Silver badge

          Re: This AI malarkey

          Would it make any difference?

          Morons all look the same to me.

        2. StewartWhite Bronze badge
          Unhappy

          Re: This AI malarkey

          Let's just ask ChatGPT. Oh wait, I haven't thought this through, have I?

      3. Andrew Scott Bronze badge

        Re: This AI malarkey

        Sure, the new dickhead is working for me. Before his tax reform I never owed the federal government any money; paycheck withholding was enough. After the changes I've owed thousands every year, and I make less than $50,000. That dickhead paid $750 federal tax while I pay four times as much, and he claims to be a billionaire; I've been lucky if I could afford to pay heating bills. Tell me again how he's working for my benefit. As for being more corrupt: how is putting the owner of SpaceX in a position to close NASA not corrupt? Not surprised you hide as Anonymous Coward. The biggest clue that he's not telling the truth is when he says "everyone knows this". He confessed to lying to the Prime Minister of Canada in front of a live camera and mic, thinking it was funny. Likely Trudeau knew he was lying and just kept his mouth shut; he'd been dealing with trade negotiations for years, and if he wasn't sure on a point he could ask one of his ministers who might be up on the question. Sorry, he's unable to tell the truth, ever, about anything, and he's not very bright. Anyone who thinks he's smart must be even stupider, and anyone who claims he's telling the truth must be even more mendacious.

      4. Anonymous Coward
        Anonymous Coward

        Re: This AI malarkey

        Seriously, I would like to know where you got 'your' world view???

        The common refrain from supporters of Trumpf is that they are 'right' you are 'wrong' because they are 'right' ... AKA Trumpian Logic !!!

        Please reference/quote accepted sources to back up your 'World view'.

        NO posts from Social Media/etc as these sources cannot be verified, no posts from other people simply repeating your views word for word.

        Old style coherent logically argued proofs from people who have the knowledge/education/background to support their views with 'evidence'.

        Definitely, nothing from Trump/Vance or any of their cronies that have been placed into position of power/influence, otherwise known as 'Puppets'.

        (Trumpf is a 'fact/evidence free' entity and so are his cronies/puppets.)

        I know this will be somewhat difficult BUT it would be very useful as a means to convince others of your point of view !!!

        :)

      5. veti Silver badge

        Re: This AI malarkey

        Yes, we're all so grateful that President Trump has singlehandedly killed inflation, ended the wars in Ukraine and Gaza, won over the people of Greenland and Canada, balanced the federal budget, and created millions of jobs for Americans. Just imagine if that stupid Democrat had won instead.

        The system works for a tiny minority at the expense of the majority? - well yes, that's true. All systems do that, sooner or later, when a tiny minority of people figure out how to play them. And the current undisputed leader of that tiny minority is Donald J Trump. He's done, and plans to do, nothing but extreme damage to the majority, because that's how he makes himself rich.

        Farmers go bankrupt? Cheap land! Companies fail? Buy up their assets! People jobless? Cheap labor! US-dominated world order destroyed? Great news for his buddy Putin!

        And that's the common thread here. As my dad used to say when checking bills, "if they were just bad at this, you'd expect half the errors to be in my favor." But with Trump, all the actions are consistent: they're all calculated to make rich people happy, and no one in the world, not even Elon, is richer than Putin.

  7. Tim 11

    This strikes me as being quite similar to the situation with software warranties.

    Software in general is so complex, with such a high probability of bugs, that if we forced software manufacturers to warrant that their software was free of defects, they would just pack up and go home.

    But such is the usefulness of software when it does work properly, that at the end of the day we all have to take the risk, and build safeguards into society to handle the inevitable failures.

    1. Sorry that handle is already taken. Silver badge

      I presume your point is that we only put up with it if the software is useful, and the GenAI pushers can pack up and go home?

      1. Doctor Syntax Silver badge

        A better point is that S/W can be fixed. It is orders of magnitude less complex than the data in an LLM and, unless the vendor has lost the source, accessible for correction. How does an LLM get debugged?

    2. ComputerSays_noAbsolutelyNo Silver badge

      The magic of the EULA

      We sell you a product for real money but declare, in the same instant, that we guarantee nothing.

    3. AbominableCodeman

      The mandatory certification of chartered software engineers is long overdue.

  8. Anonymous Coward
    Anonymous Coward

    Perhaps people will start to learn to question what they see/hear on the internet. If we are really lucky they will question what the BBC and mainstream media promote, their governments, and, worst of all, political parties seeking election! Whilst I have no particular love of Trump and Musk, I have yet to meet one of their detractors who has actually listened to an entire speech or interview, preferring to be told what they stand for by the BBC and a three-second sound bite. Meanwhile, European and UK leaders call for WW3 - make it make sense!

    I love LLMs, but I don't believe what they tell me unless it makes sense and I can validate it. I use one that gives sources; I would hope that is common? I'm probably some weirdo who actually checks those sources when it's important or controversial.

    1. Doctor Syntax Silver badge

      If everything that comes out of an LLM has to be checked and at least some of it discarded what useful purpose does it serve? People ask questions because they need answers. The LLM only seems to be useful if you already know the answer to check it.

      In reality LLMs are going to be put in customer-facing positions where the customer is looking to customer service as the only definitive answer for a problem. When that's done without adding the disclaimer then the customer isn't going to believe they must check elsewhere and, in fact will have nowhere else to check. If the definitive answer comes with a disclaimer where do you go from there?

      1. Pussifer
        Coat

        Checkmate

        Shirley we can get an AI, LLM or whatever they are to check what the other AI, LLM has output? /s

        It's AIs or LLMs all the way down.

      2. Roland6 Silver badge

        > In reality LLMs are going to be put in customer-facing positions where the customer is looking to customer service as the only definitive answer for a problem.

        I think the courts have already passed judgement, if the customer service AI says something the company is on the hook to deliver.

        Ie. If the AI says the $1m product is mine for $1 if I order now, it’s mine for a dollar.

    2. Anonymous Coward
      Anonymous Coward

      Why listen to an entire Trump speech?

      It’s always the same shit over and over again, whining about something or someone being “so unfair” to the Dear Leader and everything bad is someone else’s fault. Ranting homeless people make more sense.

    3. LionelB Silver badge

      > I love LLMs but I don't believe what they tell me unless it makes sense and I can validate.

      Well I certainly don't believe what Trump says unless it makes sense (it generally doesn't) and I can validate it (I generally can't, mostly because it's not valid).

    4. tiggity Silver badge

      I will actually upvote that.

      I am not a fan of Musk and Trump, but they are doing the right-wing stuff you would expect of them, nothing really surprising or unexpected (though obviously not great for non-right-wing people, and even MAGA fans might begin to regret some of the stupid, short-sighted, under-researched "cost cutting" moves in years to come).

      I'm in the UK* and despise the Labour Party far more, on grounds of hypocrisy, as they are targeting the disabled and poor and leaving the rich alone - right-wing policies, but voted in** on a pretend socialist ethos.

      * Also a working class "proper" lefty old enough to have helped out in miners strike soup kitchens back in the day, so very disappointed by Starmer Labour.

      ** Not by me as it was obvious they were just red tories.

  9. AlanSh

    And the fine is

    4% of gross turnover, I think. That should make them sit up and listen.

    1. Roland6 Silver badge

      Re: And the fine is

      Or 1% of the greater of the market valuation at the time of transgression or at the time of the court decision.

  10. Naich

    Is this not libel?

    1. Anonymous Coward
      Anonymous Coward

      Yes, plain-as-day libel.

      The solution to avoiding libel, as with newspapers and anyone else who gets stuck with such a charge, is to check your information. But LLMs can't do that, because they're not intelligent and have no understanding of what they're publishing.

      Which should also make it fraud on a grand scale to call them AI.

  11. ChrisElvidge Silver badge

    "which is generated on the fly using statistics and an element of randomness"

    Should that not be "on the fly using randomness and an element of statistics"?
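    Either way, the mechanics look roughly like this sketch (made-up vocabulary and logits; the statistics are the weights, the randomness is the draw, and a temperature knob trades one off against the other):

        import math
        import random

        def sample_next_token(logits: dict, temperature: float = 0.8) -> str:
            """Softmax over temperature-scaled logits, then one random draw."""
            scaled = {tok: v / temperature for tok, v in logits.items()}
            z = sum(math.exp(v) for v in scaled.values())
            probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
            return random.choices(list(probs), weights=list(probs.values()))[0]

        # "He was convicted of ..." -- statistically plausible continuations only
        print(sample_next_token({"fraud": 2.1, "murder": 1.7, "nothing": 0.3}))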

  12. mark l 2 Silver badge

    If they don't have a way of correcting false information in their LLM, then they should block access to it until they develop a way of doing so.

    You can't just release a service which breaks the EU law and then say well we have no way of fixing that so we will just ignore it.

  13. Conundrum1885

    LLMs

    It isn't merely ChatGPT.

    Other models, such as Stable and the like, also create content that might breach GDPR. The problem here is that (with a few minutes' work) I can ask one to do something nasty and the AI will happily comply. Usually the models available now have been sanitized, but what about those already out there, with content scraped from sources all over the place?

    It is a complete minefield, and this particular horse left the barn a long time ago, with many celebrities taking out 'deepfake insurance' to guard against content they created in good faith being used to generate something unsavory - e.g. someone finding a 1990s-vintage holiday camcorder tape that shows a lot more than is in the public domain.

    Had a word with some folks, and legislation may well be incoming that bans certain *types* of LLM, e.g. models based on unethically or illegally sourced data, if they can uniquely identify individuals who have copyrighted their likeness or other personal data.

    On the flip side, if LLM content is found and is from long enough ago, some folks just point out the differences, call it out for what it is (i.e. copyright infringement), then send in the lawyers (tm).

  14. Justthefacts Silver badge
    Facepalm

    This one, at least, should be a trivial fix. Foundational models can simply be equipped with safety rails to prevent them emitting sentences about any named individual. Then we don’t have to worry whether that information is right or wrong.

    LLMs are simply the wrong tool for searching information on the internet. That is what a *search engine* is for; the clue is in the name. Identify an authoritative location for the requested info and provide a link, without attempting to pre-process the data.

    If people are using LLMs as search engines, then they are fools. There are plenty of good use-cases for LLMs, this isn’t one of them. But yes, fools will use it that way, so probably safety rails are required.
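    For what it's worth, an output-side rail like that is easy to sketch (assuming spaCy with its en_core_web_sm model installed; a production filter would need far more than off-the-shelf NER, and the whitelist here is just a placeholder):

        import spacy

        nlp = spacy.load("en_core_web_sm")
        WHITELIST = {"Isaac Newton"}  # e.g. famous historical figures

        def redact_names(text: str) -> str:
            """Blank out PERSON entities unless whitelisted."""
            doc = nlp(text)
            out = text
            # Replace right-to-left so character offsets stay valid.
            for ent in reversed(doc.ents):
                if ent.label_ == "PERSON" and ent.text not in WHITELIST:
                    out = out[:ent.start_char] + "[REDACTED]" + out[ent.end_char:]
            return out

        print(redact_names("Isaac Newton and John Q. Example were accused of fraud."))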

    1. Random person

      OpenAI are trying to argue that a guardrail to suppress the incorrect information is sufficient.

      The victim of the hallucination objects to the incorrect correlation being present in OpenAI's model. The victim is trying to exercise the right under the GDPR to have incorrect information deleted.

      Using guardrails to suppress the supply of faulty information means that the model operator/owner will need to have a lookup table of all the known incorrect information stored in the model. The contents of the lookup table will be reactive; this will end up being an example of "Falsehood flies, and the Truth comes limping after it" (Jonathan Swift, https://quoteinvestigator.com/2014/07/13/truth/).

      What happens if the guardrail for a specific piece of incorrect information is deleted?

      LLMs are being "sold" as search engines. Like most technology, very few people understand the problems of LLMs, and most people are simply not interested enough to come to a view. I know and like a number of people who wouldn't understand the problems even if you could persuade them to be interested. If they trust you, they may take your word for it.

      I agree that there are many good uses for LLMs, but they are being hugely oversold.

      1. Justthefacts Silver badge

        I do take your point that it's even easier, and more defensible, to do this at training time. Just classify and remove all "person name" entities in the training data, or apply more sophisticated anonymisation; this isn't 2010 any more, we have robust procedures for data anonymisation. The issue of removing "Isaac Newton" from the dataset is trivially solved by adding a whitelist of famous dead people, as defined by having an Encyclopedia Britannica article. I just don't see this as a Hard Problem. There isn't really a good reason for the LLM *itself* to know or encode people's names or info. Just use a software agent to look it up on Google like a normal person; it's not 2022 any more.

        By the way, it's important to realise this is more an issue of perception and feeling lied to than of personal data *actually* being stored. The relevant version of ChatGPT has 200 billion parameters = 200 GB. That just doesn't *possibly* have room to store actual info about any significant proportion of the population, at dozens of bytes per person (including, allegedly, names, ages of children, and town of residence). Otherwise we've accidentally discovered God's compression algorithm. And Llama 7B can do it in 7 GB, so that's 28x better compression...

      2. gnasher729 Silver badge

        “OpenAI are trying to argue that a guardrail to suppress the incorrect information is sufficient”

        I say let them argue whatever they like, and fix it any way they like.

        And the next time they accuse an innocent man of murdering two of his sons, give them a massive fine that matches the severity of their false accusation.

  15. JimmyPage
    FAIL

    For fuck's sake! How much more proof is needed ...

    that "AI" isn't "intelligent" in any accepted sense of the word.

    It doesn't understand whatever bilge it's producing. It's like someone who can vocalise the Latin alphabet reading French phonetically without actually comprehending any of it. You could even learn the pronunciation and inflection to sound fluent, but still have no idea what you are saying.
