There's actually something the USPTO won't patent? Who knew?
Now extend that to all S/W patents.
Future AI could be a challenge for US Patent and Trademark Office (USPTO) officials, who need to wrap their heads around complex technology that's perhaps not quite compatible with today's laws. Under the Department of Commerce, the USPTO's core mission is to protect intellectual property, or IP. Creators file patent …
In ye olden days a patent application had to include a working model based on the (correct) assumption that you can't patent an idea unless you can also make it work. The idea of an internal combustion engine wasn't enough, you had to demonstrate that you could make one that worked.
Software should have the same requirement.
As to the patent holder, it needs to be a person, actual or legal (e.g. a corporation), who can attest to the patent, negotiate with other parties regarding it, and appear in court, which is something no software can do (and may never be able to do).
As to the output of neural networks, of course it can't be patented, any more than the output of a camera.
I've got a patent whose application makes no reference to a working model. There is a working model; it just isn't mentioned in the patent.
If you insist that something can only be patented if you have a working model, then you skew the patent system in favour of organisations with the big resources to build those models, and against the individuals who actually have the ideas.
If you create an algorithm for something and can show the mathematics and a proof of it, why should you be excluded from protecting that idea simply because you lack, say, the processing power required to demonstrate it?
"AI and emerging technologies have the potential to dramatically improve our day-to-day lives. They will provide countless and unpredictable benefits to our social well-being not just here in the United States, but around the world. But the bottom line is, we need to get this right.
"We need to make sure we're setting laws, policies and practices that benefit the US and the world."
Quite so. However, you need not concern yourselves with getting things right, setting laws, policies and practices that benefit the US and the world regarding what AI and emerging IT technologies are doing, can do and will be doing. They'll take care of all of that themselves thank you very much.
You might like to consider and accept as an indisputable present day fact that Future AI is far too much of a challenge for US Patent and Trademark Office (USPTO) officials whenever unable or disenabled to wrap their heads around complex technology that's clearly not compatible nor understandable with their limited knowledge of the intelligence required and methodologies used to have dramatic improvements delivered to future lives via stealthy networking in 0day vulnerability channels.
To explain how that is surprisingly simply done is something which would divulge top secrets for extremely rewarding trading and proprietary intellectual property which is dangerous to know if one is not prepared to ensure and guarantee it cannot be badly misused and crazily abused ...... and thus might it be deemed an illogical and unnecessary step to take.
Some things are best discovered/recovered/uncovered by oneself with the hindsight of experience in successful prior practice with almighty results providing the foresight for the enlightened path to future endeavours/realities, both virtualised and base practical metadata physical.
And, of course, that sort of ongoing systemically embedded Almighty Agile HyperRadioProACTive AIdDevelopment registers the status quo establishment command and control hierarchy/patriarchy/oligarchy future redundant and surplus to present requirements for mass media entertainment.
Realise, regarding IT and AI, it is so much more than just Your Brain, Our Choice, rather than imagining any alternative My BrAIn, My Choice possibility as a valid and viable scenario.
Such is made so much easier for y’all, now that you have no need to search for it with no prior knowledge of what is readily available and widely deployed in all necessary domains virtually guaranteeing every possible success as an absolute certainty.
Capiche? Do you understand?
"...Neural networks aren't easily explainable. The number-crunching process that seemingly magically transforms input data into an output is often opaque and not interpretable. Experts often don't know why a model behaves the way it does, making it difficult for patent examiners to assess the nitty-gritty details of an application..."
"Artificial Intelligence has the same relation to intelligence as artificial flowers have to flowers."---D.L. Parnas
Hopefully no one.
The current system is already fucked when it comes to music. We've had cases where they have successfully sued for "the general sound and feel" of a track.
With AI you could simply generate every single possible combination of notes in 8 bars and then sue every single musician on the planet.
You could easily do the same with artwork. Just bang out tens of millions of images and then sue over similar-looking pieces.
The photographer Elliott Erwitt took some photographs of people sitting on deckchairs and one later when the chairs were empty, as if the people had been blown off them (http://www.catherinecouturier.com/artists/elliott-erwitt/gallery/cannes,-france,-1975-beach-chairs/)
As I recall, later an advertising company used very similar images in a campaign and Erwitt sued them as he claimed the IP for the images. (I cannot find a link to the story, sorry.)
There have been many claims in the music industry about bands and composers copying musical motifs, riffs etc from others, some successful, others not.
Wasn't that what the whole Ed Sheeran thing was over, a "phrase"? Like, this bunch of notes written by this guy sounds like this bunch of notes written by another guy. Well, you can probably discount many potential patterns of notes because they don't go well together. What's left? Things that sound good? Major chords? Perfect fifths? Certain chord progressions? They might have had a point if Sheeran's entire song, or at least large chunks of it, sounded alike. But a mere phrase? That's tenuous at best. Think how often you've heard something and thought you knew what song it was, only it wasn't: that bit you heard just sounded similar to something else. With hundreds and hundreds of songs every year, some bits will sound alike.
> With AI you could simply generate every single possible combination of notes in 8 bars and then sue every single musician on the planet.
It might be difficult to sue any musician ever again, since someone already owns the copyright. You do not need AI; it has already happened (2020-02): "Riehl and Rubin developed an algorithm that recorded every possible 8-note, 12-beat melody combo".
The full dataset is available from the Internet Archive (~1.17 TiB), and the tools used to create and manipulate the dataset are on GitHub. The licence used by "All The Music, LLC" was Creative Commons Zero: all works under CC0 are dedicated to the public domain, and the author waives all rights to the work worldwide under copyright law, including all related and neighbouring rights, to the extent allowed by law.
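The enumeration itself is conceptually trivial; a minimal sketch of the brute-force idea (the 12-pitch, 8-note parameters here are illustrative, not the project's exact scheme):

```python
from itertools import islice, product

PITCHES = range(12)   # one chromatic octave, as semitone offsets
MELODY_LEN = 8        # notes per melody

def all_melodies():
    """Lazily yield every possible 8-note melody over 12 pitches."""
    return product(PITCHES, repeat=MELODY_LEN)

# The search space is small by computing standards:
total = len(PITCHES) ** MELODY_LEN
print(total)  # 429981696, i.e. 12**8

# First few melodies in enumeration order:
print(list(islice(all_melodies(), 3)))
# [(0, 0, 0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0, 0, 1), (0, 0, 0, 0, 0, 0, 0, 2)]
```

Generating the melodies is a one-liner; it's writing every one of them to disk as MIDI that pushes the dataset into the terabyte range.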
Music on YouTube is especially fucked. FranLab posted a video from the 1970s, and someone took it down with a copyright strike because the wind noise - yes, wind noise - sounded similar to some wind noise that had been recorded decades *later*. All these IP copyright systems are basically a scam.
The only patentable bit is the untrained HW/SW (except in jurisdictions where the abomination of SW patents is not valid).
Real-world training data are not patentable; they are not an invention. Nor is any resulting learned model used to configure the HW/SW, which is just a representation of the data produced by the manipulation of the patentable system.
The user(s) of the data could potentially own the copyright to the resulting model, but only if they had the copyright to the input data in the first place, i.e. permission from every person or owner of any property represented. If they don't have proven permission for every single piece of data, they can't copyright the result of combining that data.
That means no mass harvesting of photos/whatever from the interweb, surveillance cameras, patient records etc. and then claiming ownership of the derivative work. If it was free data going in, the output is free for anyone to use as they wish.
Applying stock DNN training algorithms to a non-copyrightable set of data does not constitute the sort of significant effort needed for sui generis protection.
It's just an automated transform.
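The "automated transform" view is easy to demonstrate with the simplest possible learned model: given fixed input data and a fixed algorithm, the resulting parameters are fully determined by the data, with no creative step in between. A minimal sketch using ordinary least squares (all names here are illustrative):

```python
import numpy as np

def fit_model(X, y):
    # Closed-form least squares: the learned "model" (the weight
    # vector w) is a pure function of the training data.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Toy dataset: y = 1 + x, expressed with a bias column in X.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])

w1 = fit_model(X, y)
w2 = fit_model(X, y)

# Same data in, same model out: the transform is deterministic.
assert np.allclose(w1, w2)
print(w1)  # approximately [1. 1.]
```

Stochastic DNN training adds randomness in initialisation and batch ordering, but fixing the seed restores the same property: data plus procedure fully determine the resulting model.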
As for that sui generis crap the EU dreamed up, they imagined the EU would be some sort of owner of the data, creating value by assembling the data of others into unique collections. See the world buying all their unique datasets of non-unique data? No? Of course not; only the EU recognises that right, so they sell it to themselves. It's just an internal cost barrier that stops them competing in the world.
There's a huge variety even of "stock" training algorithms, and training methods are only one part of developing a successful model. Model architecture is often more complex and arguably more important than training. You're oversimplifying the development process to the point of parody.
I'm not a fan of DL and enormous inexplicable models with billions of parameters and hundreds or thousands of layers. I think it's a bad direction to choose for engineering production solutions, and while much of the research is interesting, it's interesting in the way that stumbling around a building in the dark is interesting. It's a poor way to produce usable technology.
But developing non-trivial DL applications is by no means "an automated transform".
"The only patentable bit is the untrained HW/SW..."
...except that in 2022 the idea of throwing a lot of data at ML and seeing if it comes up with anything is hardly an innovative step, in the same way that "doing X on a computer" should not have been regarded as patentable at any point from about 1980 onwards.
I don’t see why AI should be more of a problem for the patent office than an invention created by a person. As for “fast evolving technologies, such as deep learning”: that problem of not keeping up with technology already exists with the technology of today.
AIs cannot be legally listed as inventors, and I can’t see that changing in the near future*. So patent applications for AI inventions are written by a person and, as far as the patent office is concerned, created by a person.
If the patent applicant can’t meet this bar (“Not only do they have to demonstrate their invention is novel, non-obvious, and useful, they have to describe their work in a way that someone skilled in the same field can understand and reproduce it.”), then the application should be rejected.
“Neural networks aren't easily explainable. The number-crunching process that seemingly magically transforms input data into an output is often opaque and not interpretable. Experts often don't know why a model behaves the way it does, making it difficult for patent examiners to assess the nitty-gritty details of an application.”.
That's no excuse: if you don’t know how your invention works, how can you patent it? And how can you know whether a product that does the same thing is infringing on your patent? You can't tell whether it works the same way as your product, because you don’t know how your own product works.
Also USPTO could get rid of a lot of problems with AI/ML if software could not be patented.
*Well, not until an AI knows that it has invented something, at which point it had better apply to the patent office itself, with an application that explains what the invention does and, more importantly, how it works.
Can't explain how it works, no patent for you.
Doesn't stop you asserting copyright over the model or outputs.
A trained model is not an invention. A way of training or implementing a model might be an invention, but the model itself is a book, painting or a piece of music. Copyright, yes, patent, absolutely not.
I agree with this distinction with one additional aspect: the product of an AI might itself be patentable, for example a design for a component that the program has created to suit the goals programmed into it by the inventor. That design can be patented, just as one created using manual design software could, but by specifying the unique and invented components of the design, not by attempting to reproduce the training process. The idea of assigning the tool as the inventor or of considering any file such a tool produces as an invention is just going to lead to lawsuits with no beneficial purpose.
"Future AI could be a challenge for US Patent and Trademark Office (USPTO) officials, who need to wrap their heads around complex technology that's perhaps not quite compatible with today's laws."
This assumes the USPTO actually do anything more than simply take the money and rubberstamp the application. Wrapping heads around complex technology? Isn't that what the courts are for?
> Creators file patent applications in hope of keeping competitors from copying their inventions without permission
All that having a patent granted does is acknowledge the person named as being the inventor. It doesn't guarantee that person any other privileges. Specifically, they still have to personally fight any patent infringement cases themselves. That means paying for the legal defence and (if they lose) the accused's costs, too.
Having a patent does not mean the state granting it will defend your rights.