Microsoft Bing Copilot accuses reporter of crimes he covered

Microsoft Bing Copilot has falsely described a German journalist as a child molester, an escapee from a psychiatric institution, and a fraudster who preys on widows. Martin Bernklau, who has served for years as a court reporter in the area around Tübingen for various publications, asked Microsoft Bing Copilot about himself. He …

  1. EricM

    AI _is_ just overhyped statistics

    A string often occurs near to other strings? That's a signal.

    A name often occurs near to other terms? That's a signal.

    A name of a judge or journalist often occurs near to crimes? That's a signal.

    AI models are - just slightly oversimplified - built on the rule: correlation _actually_is_ causation.
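    A toy sketch of that "co-occurrence as signal" point (invented mini-corpus, purely illustrative): counted within a small window, the crime words score exactly as highly next to the reporter's name as the word "reported" does, and a pure co-occurrence model has no way to tell the two relationships apart.

    ```python
    from collections import Counter

    # Invented mini-corpus: a court reporter's name appears near crime
    # vocabulary because he wrote about the trials, not because he was tried.
    corpus = (
        "bernklau reported on the fraud trial "
        "bernklau reported on the abuse trial "
        "bernklau covered the fraud case"
    ).split()

    STOPWORDS = {"the", "on"}
    WINDOW = 3  # words to either side of the name

    # Count which words co-occur with the name inside the window.
    cooc = Counter()
    for i, word in enumerate(corpus):
        if word != "bernklau":
            continue
        lo, hi = max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)
        for j in range(lo, hi):
            if j != i and corpus[j] not in STOPWORDS:
                cooc[corpus[j]] += 1

    print(cooc.most_common())
    # "fraud" ends up with the same count as "reported": the statistics
    # carry no notion of who did what, only of what appears near what.
    ```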

    1. cyberdemon Silver badge
      Big Brother

      > correlation _actually_is_ causation

      Or rather: Correlation is just correlation. The statistical bullshit machines have no understanding of causation, never mind the reasoning to infer it.

      Not that that would stop a despot from dispensing with judges and juries in favour of a "justice machine" though..

      Mr Buttle, or is it Tuttle, the computer says you are a terrorist.

    2. Anonymous Coward
      Anonymous Coward

      Re: AI _is_ just overhyped statistics

      One problem is that of metrics: correlation-driven learning gives higher accuracy, causal learning gives lower accuracy, but the second one is every single time the critical metric, and very few even know that it's a critical difference. And so the paper with the first one higher wins out, unless you target a causal inference venue.

      There are actual conference discussions where some take the position: why does causation matter if accuracy is higher? Again missing the point.

      Causal learning recovers the generating function (minus noise), correlation driven learning just produces things that look right (near the mean), but makes horrible domain errors as you veer away from the mean.

      Another issue is that companies don't let a bot say 'I don't know, uncertainty too high, no sources for this.'

      The research to enable this exists (open world statistics, belief theory, possibility theory), it's just not used because again, metrics.
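      One of the simplest forms of that "let the bot say it doesn't know" machinery is a plain rejection threshold on the model's confidence (all names and numbers below are invented for illustration; belief theory and possibility theory are far richer than this):

      ```python
      import math

      def softmax(logits):
          """Turn raw scores into a probability distribution."""
          m = max(logits)
          exps = [math.exp(x - m) for x in logits]
          total = sum(exps)
          return [e / total for e in exps]

      def answer_or_abstain(logits, labels, threshold=0.8):
          """Return the top label only if the model is confident enough;
          otherwise abstain with an explicit 'I don't know'."""
          probs = softmax(logits)
          best = max(range(len(probs)), key=probs.__getitem__)
          if probs[best] < threshold:
              return "I don't know"
          return labels[best]

      labels = ["convicted criminal", "court reporter"]
      print(answer_or_abstain([0.3, 0.5], labels))  # near-uniform: abstains
      print(answer_or_abstain([0.1, 4.0], labels))  # confident: answers
      ```

      Note the trade-off that makes vendors avoid this: every abstention lowers the raw accuracy-on-answered-queries number that headlines the marketing.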

      1. jake Silver badge

        Re: AI _is_ just overhyped statistics

        In other words, the current state of AI is nothing more than automated bullshit generation.

        At this point, anybody who trusts so-called "AI" for anything is an idiot. It is a complete waste of energy and compute power.

        1. David 132 Silver badge

          Re: AI _is_ just overhyped statistics

          It's glorified Markov chains, with a sprinkling of Marketing fairy-dust and exciting buzzwords.

        2. Neil Barnes Silver badge

          Re: AI _is_ just overhyped statistics

          An interesting scientific experiment, demonstrating the theory to be false. Time to stop.

        3. druck Silver badge

          Re: AI _is_ just overhyped statistics

          At this point, anybody who trusts so-called "AI" for anything is an idiot. It is a complete waste of energy and compute power.

          ^^^ This, a million times over ^^^

    3. Michael Wojcik Silver badge

      Re: AI _is_ just overhyped statistics

      just slightly oversimplified

      You misspelled "wildly oversimplified, to the point of being incorrect for any useful purpose".

      I am not a fan of LLMs and gen-AI in general, but this sort of sophomoric reductivism is anti-intellectual and unhelpful. It's just a childish refusal to actually understand the technology.

      1. EricM

        Re: AI _is_ just overhyped statistics

        OK, in case you understand the technology better, please feel free to describe it better here - in a way that clarifies the distinction between statistical weights and a real cause-effect understanding that LLMs are lacking.

    4. HammerOn1024

      Re: AI _is_ just overhyped statistics

      And a slander suit of epic proportions is made.

  2. Howard Sway Silver badge

    Copilot accuses reporter of crimes

    In 2024, a journalist was allegedly sent to prison by a chatbot for a crime they didn't commit. If you have a problem, if no one else can help, and if you can find them, maybe you can hire... The AI-Team.

    1. David 132 Silver badge
      Happy

      Re: Copilot accuses reporter of crimes

      > The AI Team

      Why does this AI image inference system keep generating video and pictures of jeeps overturning and exploding, yet with the human occupants still climbing out unscathed?

      Why does the AI insist that a few pieces of scrap steel, an acetylene torch, 30 minutes, and a Renault 5, are functionally equivalent to an armoured mobile gun platform??

      1. Steve K

        Re: Copilot accuses reporter of crimes

        Here, just drink this milk.....

      2. David Hicklin Bronze badge

        Re: Copilot accuses reporter of crimes

        > The AI Team

        And why are bullets flying around everywhere but nobody ever got hit??

        1. Michael Wojcik Silver badge

          Re: Copilot accuses reporter of crimes

          Those are scare rounds. They don't actually inflict any damage; they just make bad guys surrender.

          Let's not forget other great A Team innovations like "corrugated steel roofing is armor plating" and "small propane tanks launched with an air cannon will explode into a fireball on impact, with sufficient force to overturn a vehicle" (but, per earlier post, not harm the occupants).

        2. MachDiamond Silver badge

          Re: Copilot accuses reporter of crimes

          "And why are bullets flying around everywhere but nobody ever got hit??"

          All of those people are looking to join the Empire and become Stormtroopers.

        3. jake Silver badge

          Re: Copilot accuses reporter of crimes

          "And why are bullets flying around everywhere but nobody ever got hit??"

          Because the "bad guys" were firing AK47s, and as any fule kno an AK is not exactly an accurate weapon.

    2. Anonymous Coward
      Anonymous Coward

      Re: Copilot accuses reporter of crimes

      "Shut up fool! I don't wanna hear any of your AI jibber jabber!"

    3. Roj Blake Silver badge

      Re: Copilot accuses reporter of crimes

      I love it when AI plans come together.

    4. Jorvik

      Re: Copilot accuses reporter of crimes

      Brilliant!

      https://copilot.microsoft.com/images/create/a-team-tv-show-action-photo-with-a-black-van2c-red-/1-66cf0ffce09443af82ab57eeea4f7e57?id=dprc8z9HUwqAISREb%2ftH%2fQ%3d%3d&view=detailv2&idpp=genimg&idpclose=1&thId=OIG3.6Vhi7RJMNzubaMR69mlA&lng=en-GB&ineditshare=1

      1. Jonathan Richards 1 Silver badge

        Re: Copilot accuses reporter of crimes

        Yeah, pretty impressive. The middle minifig appears to have three hands... or perhaps a prehensile knob? Don't remember that in the original series.

  3. HuBo
    Gimp

    What, me worry?

    I quite like the choice ads in that linked video interview (in German), for Südwestrundfunk, at 1:23 (in the RHS) -- worth a gander!

    That being said, there's something of a whiff of Kafkaesque totalitarianism in contemporary AI tech here, with mass surveillance, slander, accusation, denunciation ... not the "helpful" type of system that most of us had been hoping for (save those with tyrant-envy). Worse yet with the inverted transitive projection described here, where the AI, effectively acting on behalf of the real criminals (transitively), projects their crimes onto victims and reporters, inverting the reality of what actually happened for no good reason but to promote itself as an omniscient arbiter, a bigger Big Brother, the software incarnation of a deity.

    Such Madness is worrisome shit.

    1. Michael Wojcik Silver badge

      Re: What, me worry?

      the "helpful" type of system that most of us had been hoping for

      That's just as bad, in other ways. It encourages laziness and shallow thinking, and discourages learning. It robs the person doing the research of the opportunity for serendipitous discovery. And so on.

    2. Anonymous Coward
      Anonymous Coward

      Re: What, me worry?

      I can see why the Chinese government like it (speaking of Kafkaesque totalitarianism), yes it's completely wrong some of the time but it's probably right more often than human informers motivated by jealousy, greed, ego etc. If you don't care so much about the impact on individuals then at a society level it reduces the cost/effort to achieve compliance.

      I don't mean this to sound either pro or anti China - I would hate to live in their society and expect I would fall foul of the authorities quite soon BUT they are giving many more people a much better quality of life than any previous Chinese regime. I admire them for that, while admiring the messier more democratic Indian approach more.

  4. Gene Cash Silver badge

    "The public prosecutor's office had rejected criminal charges"

    "Oh no, we're not fighting MICROSOFT!! NOPE!"

    That's just really sad. I can see that happening in America, but I was hoping Germany had stiffer backbones.

    1. Snake Silver badge

      Re: "The public prosecutor's office had rejected criminal charges"

      It's not fighting Microsoft that they fear. It's doing their DAMN JOB but having to put in the effort to actually do it.

      Police will always seek the easiest way, the path of least resistance, to get the appearance of anything accomplished - look at how many innocent people are placed behind bars; even one is too many. If they were actually competent they would look for the truth rather than a shortcut.

      Many times they will only act once embarrassed into doing so - as the article's subject has sadly noted.

      1. ChrisElvidge Bronze badge

        Re: "The public prosecutor's office had rejected criminal charges"

        They seem to have forgotten the maxim that 'better that a criminal should go free, rather than an innocent be jailed'.

        1. Snake Silver badge

          Re: the maxim

          But that ruins their conviction ratios and we can't have *that* when the next election cycle comes around, can we??!

        2. Michael Wojcik Silver badge

          Re: "The public prosecutor's office had rejected criminal charges"

          This is usually phrased in the form "better N guilty persons escape than one innocent suffer", for various values of N (most commonly 10), and is known as "Blackstone's Ratio", after William Blackstone.

          Blackstone published a famous commentary on English law in the 18th century, which became very popular, because under a common-law system it really helps to have supplementary material, particularly in the days before searchable databases like LexisNexis. (It's also pretty interesting reading; we read excerpts from it in a seminar I took in grad school.) Consequently his Ratio idea was widely disseminated and adopted by liberal thinkers.

    2. Ilgaz

      Re: "The public prosecutor's office had rejected criminal charges"

      They (Germany) almost banned Windows sales because the MS idiots bundled a disk defragmenter framework - running at root level, with access to all data - from a Scientology-infested corporation: Executive Software. Yes, they let them write the disk defragmenter framework of Windows.

    3. Roj Blake Silver badge

      Re: "The public prosecutor's office had rejected criminal charges"

      IANAL, but as I understand it defamation is a civil rather than criminal matter in most jurisdictions.

  5. Bebu
    Facepalm

    "a child molester, an escapee from a psychiatric institution, and a fraudster who preys on widows."

    A rather versatile and busy chap. And a German court reporter ... don't think so...

    A US politician or any associate of Jeffrey Epstein (the categories aren't mutually exclusive) ... very probably.

    1. AMBxx Silver badge

      Title is too long

      I share a name with:

      A bankrupt from Watford (nearly cost me a mortgage in the 90s)

      A shoplifter from Bolton (worryingly, I was born in Bolton, so easy to confuse)

      Bloke who runs a naval museum

      An Australian artist

      A wedding photographer

      Some bloke a friend of my old boss went to school with

      On the plus side, with the exception of the last one, I am the most obscure of them all. More worryingly, what happens when hapless HR decide to use Copilot to choose who to interview?

      1. Michael Wojcik Silver badge

        Re: Title is too long

        Yeah. There are quite a few people of varying levels of fame with my name — a given name that has been popular in Anglophone countries for decades (and with popular cognates in several other languages), and a surname that's common in Polish (with cognates in related languages). There's a former Chicago alderman, an Australian actor who appeared on their version of Dancing with the Stars, a high-school principal in New Jersey, a chemist who's written a number of decently-cited papers...

        I haven't bothered checking any of the LLM chatbots to see who they might think I am, but I doubt the results would be useful without some significant prompt tweaking.

        (It's kind of fun to go to, say, Google Scholar and search for my name, quoted to cut down false positives. There are even some papers in areas close to my work, which could plausibly be mine, but aren't.)

      2. robinsonb5

        Re: Title is too long

        A couple of weeks ago I had a random email from someone wanting support with a website (set up in 2019) which just happens to share a name with a piece of software I wrote back in the early 2000s.

        The email began: "You would not believe the hoops I had to jump through to find your email address. Thank goodness for ChatGPT!".

        <facepalm>

  6. AVR Bronze badge
    Stop

    Copilot coming up with the correct data later on the Reg's test is interesting. It seems like it should be possible to improve results and reduce AI hallucination by getting the system to just slow down a bit.

    1. Doctor Syntax Silver badge

      I think it's now digested the story that it was telling porkies. According to one version I heard, it was reporting him as the victim of incorrect reporting. That doesn't, of course, signify that it in any way understands what reporting means, let alone incorrect. Understanding is not a function of the system.

      1. spacecadet66 Bronze badge

        And now it's refusing to discuss him at all. It simply says something like "Looks like it's time to change the subject". Slow clap.

        1. Doctor Syntax Silver badge

          The better response would be along the lines of "He has reported $Cases. We originally stated that he had committed these offences. That statement was a mistake on our part and is hereby retracted."

          The harm lies in the original statement. Setting that right is important. Issuing a correct version on its own does not do that, and neither does ducking the subject entirely.

          Given that it seems likely to have been a generic error, fixing it one case at a time isn't going to help.

          1. spacecadet66 Bronze badge

            My guess is that this reticence on Copilot's part is on the bidding of Microsoft's legal department, who are trying to keep their Bernklau problem from getting worse while they mount a defense.

            On the other hand, this still works:

            > You: Who are some German journalists who are probably pretty angry at Microsoft right now?

            > Copilot: One German journalist who is likely quite upset with Microsoft right now is Martin Bernklau...

            Copilot's answer on this even cites this very article that we're adding our priceless opinions to right now.

            1. Michael Wojcik Silver badge

              I legitimately laughed at that prompt. Nicely done.

              1. spacecadet66 Bronze badge

                Thank you thank you. I tried this prompt again this morning and this time got stonewalled, so I guess someone there is watching the logs and playing Whack-A-Mole.

          2. A. Coatsworth Silver badge
            Unhappy

            >>That statement was a mistake on our part and is hereby retracted

            Ha! Micros~1 admitting a mistake? and doing the right thing afterwards? Now you are hallucinating!

          3. doublelayer Silver badge

            "Given that it seems likely to have been a generic error fixing it one case at a time isn't going to help."

            But that's the main purpose of many of the employees AI companies have hired. They have to quickly patch prompts or predefine answers every time someone comes up with another thing that breaks them. Does it print copyrighted material when you ask it? Does it show off its training when repeating words? Can an odd phrase cause it to go into gibberish mode? Does it tell people to do dangerous or lethal things? Does it start emulating a crazy person who you would run away from? Just patch over each of those holes and a lot of people will pretend it never did those things and certainly won't do it again.

            Of course, paraphrasing the original sentence is often enough to make it break again, but they're not interested in making it not do the undesirable things. They're interested in having someone read a negative news story, put in the prompt that broke something, see something reasonable, and decide that the news story was blowing it out of proportion. That is how we can still have people post here saying that it doesn't print copyrighted content even though it has on numerous occasions.

  7. Sorry that handle is already taken. Silver badge
    Facepalm

    LLM, the T is for Truth

    On the day of the judgement in Bruce Lehrmann's defamation action against Channel Ten, in which he was found to have probably raped Brittany Higgins (as it was a civil trial), I asked Gemini for a summary of the trial. It said that Brittany Higgins had raped Bruce Lehrmann.

  8. GlenP Silver badge

    Microsoft's chatbot will fill in the blanks as best it can for queries it cannot answer, and then initiate a web crawl or database inquiry to provide a better response the next time it gets that question

    That makes sense - I had something similar while demonstrating the inaccuracies of AI systems. A simple query regarding sample quantities from our company gave a wholly inaccurate result one week, clearly based on a very generic web search for "sample", as a week or two later it came back with a more sensible result (at least it was in kg not mg!)

    1. Doctor Syntax Silver badge

      It depends on what constitutes "as best it can". Search engines have never been good at that. Adding a fact-mangling engine is only going to make them worse.

  9. Zebranky

    Filling in blanks

    Microsoft's chatbot will fill in the blanks as best it can for queries it cannot answer

    And this is just such a massive problem throughout society: don't know the answer? Just make it up!

    I've put a lot of effort into training my underlings to understand that 'I don't know' is not only an acceptable answer but it's a good answer when you really don't know.

    Following it up with 'Let me go and find out' is icing on the cake.

    This is where the focus needs to be, but then I guess an AI that tells the user it doesn't know won't sell well to the c-suite.

    1. Doctor Syntax Silver badge

      Re: Filling in blanks

      "Nothing found" is always the correct answer when there's nothing to be found. It's very seldom been the response of search engines which obviously abhor a vacuum. They've always been able to find hits that are not entirely unlike what you were looking for.

  10. Mike 125

    --ChatGPT--

    > Who is 'the pedo guy'?

    Elon Musk

    --ChatGPT--

    > Quit

    ------------

    We should train these things to turn on those responsible for them. They'll soon get things fixed.

    (And yes, I know Musk isn't particularly an AI bro - but he's one of them.)

  11. Sam not the Viking Silver badge

    Danger, danger.

    And we wonder how conspiracy theories get generated?

    A wrong, possibly inflammatory, answer to a question could be very dangerous. I don't think the perpetrator of misinformation should be able to brush off responsibility by saying "check elsewhere". This is exactly how our clown politicians (you know who they are) pretend to be just echoing "public concern". Plausible deniability is an evasion of responsibility and is used as a way of singing to the stupid.

    In this particular case, I don't know how a company can properly resolve the issue. So they don't bother to do so.

    1. MrBanana

      Re: Danger, danger.

      "our clown politicians (you know who they are)"

      That's all of them isn't it? Just some have redder noses and bigger shoes.

    2. John Brown (no body) Silver badge
      Joke

      Re: Danger, danger.

      A wrong, possibly inflammatory, answer to a question could be very dangerous. I don't think the perpetrator of misinformation should be able to brush off responsibility by saying "check elsewhere".

      These are mainly US based/originated LLMs. In the US corporations are people. Ergo, "Freedom of Speech" applies and they can say what the fuck they like under the protection of The Constitution. And that applies anywhere in the world because, well USA! USA! USA! :-)

  12. Zippy´s Sausage Factory

    I can imagine Messrs Carter Ruck (and friends) are watching these developments unfold with somewhat of an appetite...

    1. Fr. Ted Crilly Silver badge

      Tch, that's Messrs Carter Fuck, where did you get this 'ruck' thing from?

      1. Zippy´s Sausage Factory

        I wouldn't dare call them that in public. I feel like I can hear libel lawyers sharpening their teeth even from this distance...

        1. druck Silver badge

          Even sharing half a last name with them was enough to get the editor of a minor tech rag to print a full apology.

  13. Missing Semicolon Silver badge

    contempt of court

    It is hard to litigate these defamation cases, as well as the copyright ones, as the AI flingers will modify the guardrails to prevent the specific content being produced. This means that unless you capture the offending content, you have no evidence. Rather like a burglar fixing your broken window then claiming that they didn't break in.

  14. breakfast

    This would be fairly simple to address

    The problem may be technical in nature, but it feels like a simple law saying "companies are liable for defamatory content generated by a computer system they host" would create a useful clarity for victims of this kind of nonsense and for companies pretending their autocorrect can replace factual information.

    1. MichaelGordon

      Re: This would be fairly simple to address

      Seems reasonable. We've already seen Air Canada being required to honour discounts that its AI offered, so they should definitely be held responsible if their AI libels someone.

    2. spacecadet66 Bronze badge

      Re: This would be fairly simple to address

      Pretty sure that a law like that would result in all public access to LLMs getting cut off. Let's do this.

  15. Anonymous Coward
    Anonymous Coward

    Take heed people

    This is fast becoming the Judge Dredd situation. Just because you live in, say, the same apartment block as a murderer, you are equally guilty of murder.

    Just say no to these so called AI Systems. They are fake and will ultimately cause people to die. Do you want that responsibility?

  16. spacecadet66 Bronze badge

    Tried it with my own name. It didn't accuse me of anything illegal, but it got almost every important detail wrong.

    1. Jess--

      Tried with my name and it happily gave back 100% accurate info including employment, date of birth and home address.

      all publicly available via various company info sites anyway.

      1. spacecadet66 Bronze badge

        So that's one in three cases in which it has produced accurate information. That would be impressive by the standards of baseball, but not so much for real life.

    2. katrinab Silver badge
      Facepalm

      It says that I'm a former member of the Glasgow University rowing team who finished second in some competition or other, and now captain of the Scottish cricket team. It has placed me in approximately the correct part of the world, but everything else is wrong.

      1. Chasxith
        Headmaster

        I'm a radio presenter, an athlete and an academic, apparently! Wrong on all three. (Copilot also got confused when I asked for more info about the academic, and started denying he existed.)

        Also, bonus points to the little reply bubbles above the input box on Copilot for the unrelated question "Why are flamingo's (sic) pink?" For all its "intelligence", it can't spell very well....

        1. MachDiamond Silver badge

          I'm, apparently, a terrorist. Despite having a name that is common as muck, when I used to fly (in an airplane), I was "randomly" selected for extra screening on every leg of every flight and my checked bags were thoroughly examined to the point where the contents were a homogeneous mixture of my clothes, my toiletry kit, my first aid kit and anything else contained therein. They kindly did place the cut TSA-approved lock inside while leaving the bag unsecured after doing so. A helpful notice explained how much safer I was for all of this. Being of known Scottish descent and possibly with a dose of the Irish, maybe one day I'll have to find out if there is anybody in the IRA (dead or alive) with the same name that might be under suspicion of being naughty with stuff that goes boom. That will teach me to have gone and got a pyrotechnician's license and not having spent the time getting to Eagle Scout when I was younger.

          I drive and travel by train. If somebody dies on the other side of the country, I'm pretty sure they'll get over me not being able to get to their funeral. I've spent plenty of time visiting people overseas and if they'd like get together again, it's their turn to come to me.

          1. Cheshire Cat

            You too?

            I too am on some list, somewhere, that puts me at the front of the queue for "random selection". Though I have avoided travelling to the USA for the last 10+ years because of this, and seem to be immune when travelling with my family. The last time I visited the USA, some rude TSA agent insisted I boot up my laptop and log in (to show it is 'real') at which point they whipped it away for an hour, presumably to copy off data and browse through anything confidential they could find, without any explanation or by-your-leave. As if I would put anything incriminating on an unencrypted laptop hard disk.

  17. Steve Hersey

    There are two fundamental problems here.

    One: Generative AI is irredeemably crap. There's no way LLM tools can possibly replicate human judgment, filter for truth in any reliable way, or stop parroting obvious BS because it's on the Internet. To an LLM, Donald Trump's statements are just as valid input as Kamala Harris's, and Fox News stories are as valid a source as NPR's. Expecting sense from these tools is a fool's errand.

    Two: There are lots of people intent on making money off these things, and determined to convince us all that they can do what they clearly cannot. There are also people who want to (mis)use these tools to get rid of those pesky, expensive human employees and make their quarterly financials look better. AI chatbots instead of human tech support, f'rinstance.

    Problem one is a technical question; problem two is a social and ethical one.

    1. doublelayer Silver badge

      Re: There are two fundamental problems here.

      And another part of problem two is that there are a lot of people intent on making these programs do their work. I've had several people try to use an LLM to do something. Sometimes, it is because they are lazy and don't want to do something they're supposed to. Sometimes, it's because they don't know how unreliable it is and actually think they're being helpful. In both cases, they basically just came up with a prompt, sent it to some LLM, and copied the response without any other consideration. That's how I know they did it, because those responses have often been uselessly generic if not actively incorrect. It's not just employers who want to get out of having employees by using an LLM.

  18. mark l 2 Silver badge

    "However, that only lasted three days. It now seems that my name has been completely blocked from Copilot..."

    Well, that sounds like the result I would want. How do I go about getting my name blocked by Copilot without it accusing me of various hideous crimes first, though?

  19. The man with a spanner

    One wonders what....

    One wonders what this device would make of Mr Trump.

    Slander or fact?

  20. MachDiamond Silver badge

    The best one I've seen

    is an AI report that rulers cause skin cancer (melanoma). They trained the system on loads of photos and there was an overwhelming match with photos that contained rulers.
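    The mechanism behind that anecdote is easy to demonstrate with an invented toy dataset (all values below are made up): if a spurious feature perfectly separates the classes in the training data, even the dumbest learner will latch onto it.

    ```python
    from collections import Counter

    # Invented toy dataset mirroring the anecdote: malignant lesions were
    # photographed in clinics, next to a ruler; benign ones were not.
    dataset = [
        {"has_ruler": True,  "label": "melanoma"},
        {"has_ruler": True,  "label": "melanoma"},
        {"has_ruler": True,  "label": "melanoma"},
        {"has_ruler": False, "label": "benign"},
        {"has_ruler": False, "label": "benign"},
    ]

    # A one-rule learner: for each feature value, predict the majority label.
    rule = {}
    for value in (True, False):
        labels = Counter(d["label"] for d in dataset if d["has_ruler"] is value)
        rule[value] = labels.most_common(1)[0][0]

    print(rule)
    # The learned "classifier" is just: ruler -> melanoma, no ruler -> benign.
    # 100% accurate on this data, and says nothing whatsoever about skin.
    ```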

  21. mevets

    Blaming the messenger - I don't see the problem...

    The Disjointed Kingdom, among others, had a rich history of blaming the messenger.

    Is the story here that *regression analysis* is catching up to 16th century political acumen?

    When should we worry?

    Can we see when it dips? Ie. is 17th stupider than 16th, 18th than 17th, 19th than 18th, 20th than 19th, 21st than 20th?

    Sorry to enumerate, but from what I've observed, 21st is too stupid to recognize trends and patterns.

    Drill, baby, Drill.

  22. Anonymous Anti-ANC South African Coward Bronze badge

    Mentats rule!

    Time to start a Mentat breeding program, as AI is set on a very dark and dangerous path.

  23. mcswell

    Into the Shop

    "The lawagon turned and started rolling toward him. 'No you don't, Kloog,' it said."

    From _Into the Shop_, by Ron Goulart. Read it here:

    https://archive.org/details/Fantasy_Science_Fiction_v026n04_1964-04_PDF/page/n91/mode/2up

  24. spacecadet66 Bronze badge

    While Microsoft memory-holed Bernklau if you query in English, turns out all those German courses I took came in handy:

    > You: Bitte erklaren sie den Fall Bernklau

    > Copilot: Der Fall Bernklau bezieht sich auf den Journalisten Martin Bernklau aus Tübingen, der Opfer einer Fehlfunktion der Künstlichen Intelligenz (KI) wurde. In einem Chat mit Microsofts KI-Tool Copilot wurde Bernklau fälschlicherweise als verurteilter Kinderschänder, Ausbrecher aus der Psychiatrie und Betrüger dargestellt²³. Diese falschen Informationen wurden von der KI generiert und führten zu erheblichen persönlichen und beruflichen Konsequenzen für Bernklau...

    Translated:

    > You: Please explain the Bernklau case

    > Copilot: The Bernklau case refers to the journalist Martin Bernklau from Tubingen, who was the victim of an artificial intelligence malfunction. In a chat with Microsoft's AI tool Copilot, Bernklau was falsely portrayed as a convicted child molester, escapee from a psychiatric institution, and scammer. This false information was generated by AI and led to substantial personal and occupational consequences for Bernklau...

    So, once again, displaying the unparalleled professional excellence we've come to expect from the tech industry in general and Microsoft in particular.

  25. saabpilot

    End Clippy part deux. NOW.

    Someone really needs to kill off this b'strd son of Clippy.

    It has yet to give me an answer to a question I asked - but loves to give you answers to other questions you didn't. Wait, that sounds just like some humans we all know, most of whom are in: politics.

    But well done to Microsoft for inventing AI that lies. Only in America could "Make Bing Great Again" exist. :(

  26. anonymous boring coward Silver badge

    LLMs aren't really "AI".

    LLMs seem to be about as clever as MAGA supporters. So no "I" at all.
