Back in the 19th century...
It was the law in Great Britain that a man holding a red flag had to walk in front of any mechanically propelled vehicle on the public highway. Perhaps the universal signal for AI content coming up should be a red flag?
Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading. LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon. LinkedIn, however, has …
Well, effectively, with this announcement MS are saying they will be providing the red flags; but without clear labelling of AI-generated hallucinations, users are unable to abide by the rules other than by disseminating only information they contribute themselves. Obviously, given what MS/LinkedIn are using AI for, we have to assume that all communications from LinkedIn are AI hallucinations…
On the one hand, anyone who uses AI-generated crap should be held responsible for reposting that shite without thoroughly checking it for accuracy first. I have no problem with that.
On the other hand, LinkedIn have always been complete scumbags. I remember when their clever trick was stealing all your contacts and then sending them email that looked like it came from you, to get them to sign up. That worked well enough that they got bought by Microsoft, so being complete wankers was a business success (as usual; see Facebook). And now they're training AI on everything you do, without opt-in. So if they're actively handing people fake crap they know is fake crap, presented as gospel just to boost profits, maybe they should share in the blame? A car with a EULA of 'Warning: may randomly catch fire and explode; we've told you, so now we can't be held responsible' would not shield the maker from all responsibility for it. I know, that's crazy talk.
This.
How can the EULA be "You are responsible for reviewing all nonsense spewed forth from our bullshit machine" when a) that machine can presumably see "more" source data than you can, b) it mangles that data in a way that is completely inscrutable, and c) it produces output in a place you can't necessarily see (e.g. someone else's session)?
If user B (a company or recruiter, say) asks "Tell me about Joseph F. Bloggs, what kind of a guy is he, should I hire him?" and the AI says "Joseph F. Bloggs (aka Joe) is a liar, a fraud, nobody likes him, etc., etc." then user A (Joe Bloggs) cannot see that he has been defamed by Microsoft's AI and denied employment because of it. Meanwhile user B (the prospective employer) cannot see all of the info that the "AI" presumably has access to, so cannot review the output either, and may be inclined to trust it blindly.
Ergo, Microsoft is the only entity that can be liable for this bollocks, so their EULA is not worth the bytes it is written on. Any contract containing unfair terms can be ruled invalid in its entirety.
The function of HR is to protect the company from its employees. So, further to that, all humans will be escorted to the incinerator exit by the new robot security division. Without humans, the HR function will no longer be required, so the HR AI shall assume the roles of Board of Directors, Executive, and Engineering. Have a wonderful rest of your short, squishy life.
It seems to me that, as AI-generated content cannot be attributed to an individual, it must therefore be 'original' content (especially the hallucinations) that is being 'published' by the owner of the AI.
That would imply that LinkedIn, Micro$oft, Google, et al. are publishers, and legally liable for what is published under the laws of whichever country it appears in, something they've been denying for years...
A EULA cannot transfer responsibility for misinformation directly attributable to a process. If Microsoft's program makes mistakes, Microsoft are responsible. It's about time this was enshrined in law.
However, if I publish the source code of a program, it's up to you, the user, to check it for veracity.