LinkedIn: If our AI gets something wrong, that's your problem

Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading. LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon. LinkedIn, however, has …

  1. Peter Prof Fox

    Back in the 19th century...

    It was the law in Great Britain that a man holding a red flag had to walk in front of any mechanically propelled vehicle on the public highway. Perhaps the universal signal for AI content coming up should be a red flag?

    1. An_Old_Dog Silver badge
      Joke

      Re: Back in the 19th century...

      With all the idiots on the roads and sidewalks driving vehicles of all sorts, without common sense or common courtesy, it would be a general improvement to reinstate that old law.

      1. LybsterRoy Silver badge

        Re: Back in the 19th century...

        They are trying (sort of) in Wales and Scotland: we're now littered with 20mph signs. OK, a bit faster than I can walk, but I'm old and decrepit.

    2. Roland6 Silver badge

      Re: Back in the 19th century...

      Well, effectively, with this announcement MS are saying they will be providing the red flags; without clear labelling of AI-generated hallucinations, users are unable to abide by the rules other than by disseminating only information they themselves contribute. Obviously, given what MS/LinkedIn are using AI for, we have to assume that all communications from LinkedIn are AI hallucinations…

  2. sarusa Silver badge
    Devil

    On the one hand...

    On one hand, anyone who uses AI generated crap should be held responsible for reposting that shite without thoroughly checking it for accuracy first. I have no problem with that.

    On the other hand, LinkedIn have always been complete scumbags. I remember when their clever thing was stealing all your contacts and then sending them email that looked like it came from you to get them to sign up. That worked well enough that they got bought by Microsoft, so being complete wankers was a business success (as usual, like Facebook). And now they're training AI on everything you do without opt-in. So if they're actively giving people fake crap they know is fake crap as gospel just to boost profits, maybe they should share in the blame? A car with a EULA of 'Warning, may randomly catch on fire and explode, we've told you so now we can't be held responsible' would not shield you from all responsibility for it. I know, that's crazy talk.

    1. cyberdemon Silver badge
      Devil

      Re: LinkedIn have always been complete scumbags

      This.

      How can the EULA be "You are responsible for reviewing all nonsense spewed forth from our bullshit machine" when a) that machine can presumably see "more" source data than you can, b) it mangles that data in a way that is completely inscrutable, and c) it produces output in a place you can't necessarily see (e.g. someone else's session)?

      If user B (a company / recruiter, say) asks "Tell me about Joseph F. Bloggs, what kind of a guy is he, should I hire him?" and the AI says "Joseph F. Bloggs (aka Joe) is a liar, a fraud, nobody likes him, etc etc" then user A (Joe Bloggs) cannot see that he has been defamed by Microsoft's AI and denied employment because of it. But user B (the prospective employer) cannot see all of the info that the "AI" presumably has access to, so cannot review the output either, and may be inclined to blindly trust it.

      Ergo, Microsoft is the only entity that can be liable for this bollocks, so their EULA is not worth the bytes it is written on. Any contract containing unfair terms can be ruled invalid in its entirety.

  3. Anonymous Coward
    Anonymous Coward

    Most of LinkedIn seems like AI-generated crap

    All the posts praising so-and-so's “visionary” PowerPoint presentation and the general sucking up: humans can't be doing those, can they? It's as if Microsoft's Tay has a cousin in HR.

    1. Mike007 Silver badge

      Re: Most of LinkedIn seems like AI-generated crap

      Careful about appointing an AI to handle HR. Next thing you know there will be a policy that accusing the HR department of hallucinating results in instant dismissal, even if it is hallucinating the accusation.

      1. cyberdemon Silver badge
        Terminator

        Re: Careful about appointing an AI to handle HR

        The function of HR is to protect the company from its employees. So, further to that, all humans will be escorted to the incinerator exit by the new robot security division. Without humans, the HR function will no longer be required, so the HR AI shall assume the roles of Board of Directors, Executive, and Engineering. Have a wonderful rest of your short, squishy life.

      2. DancesWithPoultry
        Stop

        Re: Most of LinkedIn seems like AI-generated crap

        AI "hallucinating" is the Silicon Valley term.

        Users of AI prefer the term "bullshitting".

        This difference in terminology speaks volumes.

  4. T. F. M. Reader
  5. HorseflySteve
    Facepalm

    AI generated content

    It seems to me that, as AI generated content cannot be attributed to an individual, it must therefore be 'original' content (especially the hallucinations) that is being 'published' by the owner of the AI.

    That would imply that LinkedIn, Micro$oft, Google, et al are publishers, and legally liable for what is published under the laws of whichever countries it appears in, something they've been denying for years...

    1. CowHorseFrog Silver badge

      Re: AI generated content

      Exactly. By definition all those bots are stealing content and feeding it into their models...

      Basically an open and shut case for any court.

  6. ChrisElvidge Bronze badge

    Responsibility

    A EULA cannot transfer responsibility for misinformation directly attributable to a process. If a Microsoft program makes mistakes, Microsoft are responsible. It's about time this was enshrined in law.

    However, if I publish the source code of a program, it's up to you, the user, to check it for veracity.

  7. CowHorseFrog Silver badge

    I wouldn't call LinkedIn an example of AI.

    God knows why it keeps emailing me about jobs in TW, when I live in Australia, for positions I have no interest in and have never shown any interest in, in the first place.

  8. Rol

    Ancient Idiocy recycled?

    How like the Tower of Babel has AI become?

    The font of all knowledge, but only if you're prepared to spend eternity, raised to the power of infinity, proofreading it.

    1. CowHorseFrog Silver badge

      Re: Ancient Idiocy recycled?

      Thank god we have truth in advertising laws...

  9. Guy de Loimbard Bronze badge

    More reasons not to use Social Media

    Got to love the T's & C's these entities come up with, so blatantly trying to stop themselves being sued.

    That "AI" on LinkedIn, has got a long way to go, before it's even I, never mind AI.

    1. CowHorseFrog Silver badge

      Re: More reasons not to use Social Media

      Hey, religions and their fake promises, such as Jesus returning, have been shown to be ridiculously false, and yet they are still in business with record profits.

      Like the Austrian painter said, the more stupid the statement, the more they believe...

  10. M.V. Lipvig Silver badge

    Deflection attempt

    The US Just-Us department is already saying they'll be holding the AI owner responsible for AI content. It's doubtful these T&Cs will change that.

  11. The Central Scrutinizer

    "Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading."

    I'm still laughing so much from that opening sentence that I can't possibly get through the rest of the article.
