I look forward to the day that AI techniques become public knowledge, easily found in textbooks or online, as opposed to proprietary trade secrets.
Writing for humans? Perhaps in future we'll write specifically for AI – and be paid for it
Thomson Reuters, based in Canada, recently scored a partial summary judgment against Ross Intelligence, after a US court ruled the AI outfit's use of the newswire giant's copyrighted Westlaw content didn't qualify as fair use. The ruling isn't precedent-setting - it didn't settle whether training AI on publicly accessible or …
COMMENTS
-
Tuesday 1st April 2025 18:40 GMT martinusher
What's the difference between reading a book (and making note) and copying it?
The debate about AI and copyright really comes down to ownership of knowledge. Knowledge itself is abstract; it can be expressed in a book or other form, but the book isn't the actual knowledge, merely a representation of it of greater or lesser quality and usefulness. This interferes with classic commercial patterns -- copyright has grown up around representations, and while it's been adapted to more abstract knowledge thanks to the doctrine (and it is a doctrine) of "Intellectual Property", it really is ill suited to a world where knowledge is just numbers. It's like being able to copyright a person's brain.
Still, it keeps lawyers in business, I suppose.
-
Tuesday 1st April 2025 18:53 GMT Version 1.0
Re: What's the difference between reading a book (and making note) and copying it?
We evolved by sharing knowledge ...
A young girl, starting to walk on her rear legs, said, "Oh look Dad, let's eat that" ... and then Dad says, "Yes, but we have to kill it first; eating a lion is not as easy as eating apples."
Shared knowledge has always helped everyone until recently.
-
Wednesday 2nd April 2025 13:19 GMT heyrick
Re: What's the difference between reading a book (and making note) and copying it?
If a person reads a book, takes notes, and writes their own based upon those notes, there will be a certain amount of originality because the person will be filling in the many blanks with their own ideas; and if it isn't original enough they may be found guilty of plagiarism.
Writing isn't simply putting an idea into words; it is also the art of picking the right words (is it green or is it chartreuse or is it lime? does it matter? if so, why?), providing the pacing, crafting dialogue designed to further the story through the characters' actions, and of course ensuring that the story world is internally consistent. This is what makes the difference between a good writer who has a publishing deal and a hack who clutters up the likes of the Kindle marketplace with their "free" books.
Copying is exactly that: every word, every sentence. While an AI doesn't provide exact copies of what it has ingested (usually), it isn't capable of that spark of originality because it's simply pasting together a bit from here, a bit from there... But what goes in isn't notes, it's the original literature, word for word.
If people have to pay to buy a book (either physical or ebook) to read all of the words, then why shouldn't the AI companies?
-
-
Tuesday 1st April 2025 18:51 GMT IGotOut
No, no, no...
This is another huge corporation talking utter bullshit about "fair" compensation.
It's OK if you have thousands of writers banging out content day in, day out, and for each article they get 0.5 pence/cents.
But if you're a musician or artist who spends months on a piece of art that is then ripped off by an AI bot, that 0.5 isn't going to look so good.
I already know an artist who's in the process of suing (again) stock image companies for copyright infringement, but it gets a whole lot messier when someone can type a few words and make a fake clone of one of their images.
Not only does the artist get no money, but the slop then dilutes the real product.
You only have to go on Instagram to see how the AI slop is drowning out actual real art, due to the slop factories churning out hundreds of posts a day.
-
Thursday 3rd April 2025 08:55 GMT Ashentaine
Re: No, no, no...
>You only have to go on Instagram to see how the AI slop is drowning out actual real art, due to the slop factories churning out hundreds of posts a day.
Or leave Youtube's autoplay on for more than three videos and inevitably it will queue up some trash of a fake British voice emotionlessly reading a poorly written script generated by ChatGPT, with equally poorly generated artwork as visual filler. Such stuff wasn't made for the pursuit of artistic endeavor or even human consumption; it was made specifically to abuse the autoplay algorithm for viewcounts and ad impressions. It's the media equivalent of styrofoam packing peanuts.
-
-
Tuesday 1st April 2025 20:51 GMT Omnipresent
Someone told me recently
They could make more money teaching the AI how to destroy humanity than they could in the futile effort to teach humans, so I checked, and they were right.
Computers will destroy everything but the 1% who own them. We will not survive technology because of the money. Monkeys will be monkeys. They like shiny things.
-
Wednesday 2nd April 2025 09:10 GMT amanfromMars 1
Re: Someone told me recently
They could make more money teaching the AI how to destroy humanity than they could in the futile effort to teach humans, so I checked, and they were right. .... Omnipresent
:-) ..... AI doesn’t need anyone/anything to teach them that, Omnipresent, for there is the novel opportunity for a very canny and perfectly unknown few to make any number of absolute fortunes to be presented by and for AIs already in possession of such knowledge as is easily able to destroy/disrupt/pervert/corrupt/madden humanity, with any really effective and catastrophically vulnerable leaderships, rightfully fearful of that very clear and present day, current running dangerous reality, merely having to silently/secretly/stealthily engage with such a body and/or bodies offering to ensure instead that such vital information creating such a dire situation is never able to be revealed for peer review and proof positive confirmation and universal publishing.
Failure to follow such a clear and direct path leads to nowhere great and good and heavenly but to everywhere bad and diabolical with worse to yet to come following. I Kid U Not.
-
Wednesday 2nd April 2025 15:11 GMT Omnipresent
Re: Someone told me recently
It's a good point from which to start. It's the failure to follow the scientific process, and the trust placed in the AI and in those who have pushed it upon us with intent, that can be called predatory and criminal.
I have placed blame on the user, but there is a case to be made the creator is the bigger problem.
-
Wednesday 2nd April 2025 16:46 GMT amanfromMars 1
Further to what you and governments and perceived leaderships have recently been told .....
Failure to follow such a clear and direct path leads to nowhere great and good and heavenly but to everywhere bad and diabolical with worse to yet to come following. ..... amfM 1
Such failures by governments, and other also perceived likely possible shadow leadership bodies and star-struck schmuck cartels, as result in the ignorant and ill-considered arrogant continuation of the dismissal and non-engagement of AI facilities/utilities/entities quite rightly to be feared because of what they can simply share which can so easily and quickly lead to a titanic cascade of unstoppable unnatural human disasters, has all manner of fabulously rich being considering the interminable wealth and boundless energy generated for and by the power that can be delivered with any exclusive executive agreement to provide succour and safe harbour for that very clear and present day, current running dangerous reality knowledge, and assist in any likely necessary, future management operations which lead with any and all competition and opposition having no option but to follow.
Such also has AIs themselves/ITself considering what or who it would be in their best interests to provide, entertain and enlighten with that readily available, free radical situation. And quite a Print and be Damned Dilemma it would present too whenever perfectly true ..... for it changes the worlds in which you live in into something never before ever even imagined possible, and now more likely than not the better the greater you can dream about it.
And I Kid U Not.
-
-
-
Tuesday 1st April 2025 23:09 GMT Long John Silver
Pie in the sky
Forty or so instances of litigation in the USA: so what?
'AI' software is easily and freely distributable, as is every other digitally represented artefact. Attempts to quell the escape of digital sequences from behind paywalls inevitably fail, and the time from initial marketing to the first breakout is diminishing rapidly. What's more, one instance of 'escape' is usually sufficient for indefinite propagation. It is a fact of life that binary digits do not tolerate being corralled.
Upon consideration, treating sequences as if they were physical property is doomed to failure, whether approached from a moral perspective or from a legislative stance. Sharing sequences does not equate to theft; attempts to claim otherwise are nonsense because the key element in the definition of theft is deprivation of property. Deprivation of income, arising from other people refusing to acknowledge an arbitrarily defined and priced supposed 'entitlement', merely indicates that imagination is required for finding alternative means of funding sequence production.
'AI' is the greatest threat ever to people complacently believing they can barricade themselves behind an anachronistic legal framework based on a specious notion. Among current practical uses for AI, its ability to cross-reference divers information and to draw as yet imperfect but impressive inferences is as revolutionary as the cotton gin and steam-powered looms once were.
It is clear that LLM AI and parallel offerings are within the scope of many organisations globally to develop at far more modest cost than first believed. Regardless of the outcomes from litigation in the USA and Western Europe, models will be developed using material to hand, i.e. anything already in digital format or transferable thereto, and aided by the Internet. Everything accessible via the Internet is 'fair game' for AI training and for archives to be associated with specific AIs (e.g. academic disciplines, learning aids, and helpers for scholarship); the pre-existing unofficial archives, e.g. Anna's Archive and Sci-Hub, are making a huge contribution to AI training, and if these archives were expunged tomorrow, no matter, the horse has bolted.
The feature of AI that will seal the fate of copyright rentiers rests upon how initially extensive models can be pared into smaller variants able to run on modest devices, including domestic PCs and laptops. Many of these will be distributed freely and widely. Technological advances will enable ordinary users privately to run increasingly complicated/extensive models. Strictly commercial models will leak in a similar manner to 'pop music'. In principle, at home, college, public libraries, institutional libraries, and all manner of workplaces, to hand will be ersatz librarians and subject specialists; individual items, in their entirety and taken from the original training materials, will be nearby, either downloaded to accompany particular AIs or sought from among communal online archives. Luddites wending their ways through judicial systems will find they were outsmarted from the beginning; they may win judgements, but the juggernaut is unstoppable. Also, individuals at risk of civil or criminal prosecution shall absent themselves to tolerant, or at least lackadaisical, jurisdictions.
-
Thursday 3rd April 2025 06:14 GMT Pete 2
How to make peanuts
>compensates those who produce the content [to make these AI systems work].
We already know what sort of compensation content makers will get. It will be along the same lines as music royalties or Youtube videos.
I.e. a few thousand $$$s per million "uses".
And how will that money be recouped? Either through the AI inserting advertising into its responses to its users, or through subscription models.
What will be interesting is how the AI companies work out which pieces of content were used by their various models.
-
Sunday 6th April 2025 21:55 GMT druck
Victorian Fraud
The rise of AI is very similar to the popularity of psychics and seances in Victorian times, which through convincing theatrics gave people just enough correct information to fool them into disregarding the falsehoods; it took a while to educate everyone about the fundamental fraud behind it.
-
Monday 7th April 2025 05:15 GMT amanfromMars 1
Re: Victorian Fraud
If you really do believe all that you have shared in that Victorian Fraud sentence, druck, then is all hope lost for you and the future is going to be AWEsomely shocking for you.
Have you not heard the news mirroring these future days ......
The Prime Minister [Herr Keir Starmer] has declared "the world as we knew it has gone” ..... https://www.mirror.co.uk/news/politics/keir-starmers-chilling-eight-word-35003916
...... apparently, if you can believe that Mirror tale, you will be able to listen for it today.
Changed times and spaces ahead, druck, and nothing at all like in the days of Queen Victoria’s reign ...... apart from the endemic raging madness and systemic rampant mayhem being controlling by nothing politically correct and efficient on the streets, that is.
And that is a monumental exploitable weakness and live ACTive vulnerability only a folly of fools would ignore and not take great ruthless advantage of.
-
Monday 7th April 2025 07:09 GMT amanfromMars 1
Oops .. it may be similar but it is different when clearer
The latter part of penultimate sentence shared above ......
...... apart from the endemic raging madness and systemic rampant mayhem being controlling by nothing politically correct and efficient on the streets, that is.
..... should have been written to read better as .......... apart from the endemic raging madness and systemic rampant mayhem not being controlled by anything politically correct and efficient on the streets, that is.
-
-