John Deere'ing
Considering the public at large has trained all the AI models of these corporations, you'd think you'd be more appreciated.
Microsoft prohibits users from reverse engineering or harvesting data from its AI software to train or improve other models, and will store inputs passed into its products as well as any output generated. The details emerged as companies face fresh challenges with the rise of generative AI. People want to know what …
But to be the source of training data for Micros~1 is yet another thing entirely. Although this should really not surprise anyone (and would have been assumed by savvy users anyhow).
And users actually pay for the "privilege" of being grist for the mill. Ain't Corporate America grand?
I wonder how many of those new T&Cs MS have "broken" themselves by scraping world+dog to train their own AI. And are they still doing it, despite trying to lock down their own toy box? When will we see them add a new condition banning scraping of the published results of their AI spewings, and how will any web scraper be able to tell what is MS AI-generated and what isn't?
I'd argue this should be a feature you turn on (positive opt-in), so you are making an informed decision to agree with the T&Cs.
So opting out would only be needed if you'd previously decided to try the AI and have now decided it's not worth the time taken to check the results.
However, if they're just hooking up their AI to any and every product they sell, and pushing it out to everyone via updates, then it needs to be disabled by default, with a very clear and separate tab where you choose to go and enable it. Otherwise they're potentially in breach of GDPR (if in the EU/UK), and probably quite a few other laws too.
Hmm... it's MS: They'll push this out and not tell anyone it's on by default, and by opening Word or whatever you've agreed to the updated T&Cs, even if you didn't get a pop-up warning you of this.
Some of us hate Google & Facebook more than we hate Micros~1. Not much more, I will grant you that, but still more than Micros~1.
That being said, I'm not one of the three people mentioned although I do like Bing as a search engine.
And, as for Google, in the end, I will surely love Big Brother, but at the moment I am still savouring the new cream paper.
The most famous of all the Monsanto patent infringement cases involves Canadian canola farmer Percy Schmeiser. Monsanto’s genetically engineered canola was found on Schmeiser’s land, but it is undisputed that he neither purchased nor planted the company’s seed. For seven years Schmeiser fought to prove that the seed arrived on his land through genetic drift or from trucks carrying seed to grain elevators. Unfortunately, the lower courts were not concerned as to how the seed wound up on the land, only that Schmeiser knew he possessed Monsanto’s intellectual property and had not paid for it. As Schmeiser’s attorney Terry Zakreski, explained: “Monsanto has a problem. It’s trying to own a piece of Mother Nature that naturally spreads itself around.” Even the vice president for Monsanto Canada, Ray Mowling, concurs: “[Monsanto] acknowledges that some cross-pollination occurs, and acknowledges the awkwardness of prosecuting farmers who may be inadvertently growing Monsanto seed through cross-pollination or via innocent trades with patent-violating neighbors.” The Supreme Court of Canada heard Schmeiser’s appeal of the lower courts’ decisions on January 20, 2004, and on May 21, 2004 publicly announced its decision. Schmeiser was found guilty of patent infringement yet not liable to pay Monsanto any damages. We can assume that Schmeiser is just one of many farmers who has been targeted for possessing a technology he neither bought nor planted.
If it is shown that LLMs make "transformative" change to data scraped from the web, books, etc., therefore making it legal, then shouldn't using the output of existing LLMs to train new LLMs also be transformative, and therefore legal?
Or is it just the rule that the biggest and the baddest get to own it?
Overall though, training an LLM with another LLM sounds like a recipe for inbreeding weaknesses.