Sounds like a recipe for...
Kaos. It seems a bit unControllable in my opinion.
Generative adversarial networks (GANs) are a brilliant idea: take two neural networks and pit them against each other to get a machine to generate completely new, realistic-looking images. But in practice they are notoriously difficult to train and deploy, as one engineer told El Reg. Jason Antic, a deep learning engineer, runs …
This could be an indicator that the adversaries have actually learned something by their adversarial nature. When it happens the discriminator may have realised it’s been fooled by the generator, by spotting some new feature by which to discriminate. The generator is then forced to try new random tactics to fool the discriminator, resulting in a temporary reduction in output quality.
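The tug-of-war described above (discriminator spots a tell, generator scrambles to compensate) can be seen even in a toy setup. Below is a minimal sketch, nothing like a production GAN: a one-parameter-per-piece generator and discriminator playing the adversarial game on 1-D data. The "real" data distribution, learning rate, and all variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip logits so exp() can't overflow.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# "Real" data: samples from N(4, 0.5).
# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0   # discriminator parameters
a, b = 1.0, 0.0   # generator parameters
lr = 0.01

for step in range(5000):
    real = rng.normal(4.0, 0.5, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b

    # Discriminator: gradient *ascent* on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient descent on -log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    gx = -(1 - d_fake) * w        # d(loss) / d(fake sample)
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

With this seed the generated mean typically drifts towards the real mean of 4; crank the learning rate up and you can watch the two players overshoot each other, which is the temporary quality dip described above, in miniature.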
There is a lot of truth in this.
Unless the machine learning algorithm, or inappropriately called "AI" system, can understand anatomy, has a good grasp of clothing, and can differentiate between the two (particularly as some clothing is semi-transparent or holey, lace for example), any attempt to generalise and brute-force this kind of solution is going to fail, to varying measures of fail of course. The failure rate can be reduced, but unless the dataset is so exhaustive that pretty much all possible images have been sampled, success will require the number of training samples to tend towards infinity.
To be fair though, it's not as dumb and short-sighted as training a machine learning system on single-object images, where the only question asked and trained on is "what single object is in this image?", and then wondering why the system fails spectacularly when there are two objects in the image.
Hmm, I broadly agree, but there are instances where it is worthwhile. Peter Jackson and his team did a pretty good job with the old B&W war footage when making “They Shall Not Grow Old”, and it really brings the images to life. If you’ve not seen the film yet btw, do it. Now. It’s incredible.
It probably has improved over the last few years. I bought a DVD box set of Laurel and Hardy some time ago; some of the films have been colourised and it does look rubbish, so much so that when I converted the films from DVD to the computer so I could watch them on my tablet, I just converted the b&w versions.
The example linked in the article, Psycho with the zombie hand, does look much better (apart from the hand!), but that could be down to the quality of the source material?
This basically describes every AI I've ever heard of.
As far as I can tell, any kind of machine learning:
- Starts off useless.
- After computer-years of training gets average-to-good results for basic categorisations or tasks.
- Plateaus just after it becomes useful.
- Cannot be "untrained", and retraining to include extra parameters, data, etc. requires far, far, far more effort and time than just throwing it out, starting clean, and retraining from scratch (it's like a stubborn old middle-ager who won't learn the new computer system, so you have to replace him and start all over again with new staff). The reason for this is clear: its entire life, it's been told "this is right", and as soon as you introduce another subtlety or borderline data, it has to be retrained enough to overwhelm ALL of the lifetime training it had been specifically selected for. It's like the old guy who learned Novell and now needs to learn something new, except that he literally cannot leave his Novell training behind and his entire career is based solely on how good he was at Novell and not anything else.
- Gets written up / deployed / sold off while it's still useful but before anyone can actually do anything about improving it, so the new owner / the person lumbered with taking it on basically has a read-only system that will never improve.
- Distances itself from all claims of being mere statistics, despite being basically an entirely statistical model.
- Cannot be modified knowingly. The same way you can't just stick a knife into someone's brain to extract a memory, especially without knowing exactly where that particular memory is. You can't correct behaviour, you can only try to train it out (overwhelm it), and you have no idea where that behaviour arose from, what metric it's actually operating on, or why it's doing what it's doing. For all you know, it identified that banana in the image because of the Getty Images copyright that other photos didn't have, or something equally ridiculous (e.g. the average background colour of the central left-third of the image).
- Are far too complex and random to analyse post-training.
- Are not even necessarily reproducible between two identical trainings.
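The "it identified the banana because of the Getty Images copyright" point above is easy to reproduce in miniature. Here is a hypothetical sketch (the "banana detector", the watermark flag, and every number in it are invented for illustration): a plain logistic regression is given a weak genuine signal plus a spurious flag that happens to accompany every positive example in training. It latches onto the shortcut, looks great in training, and falls over the moment the shortcut disappears.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# 500 training "images": a bias term, a weak genuine signal, and a
# watermark flag that just happens to accompany every banana photo.
n = 500
y = rng.integers(0, 2, n).astype(float)
signal = y + rng.normal(0.0, 1.0, n)    # genuinely informative, but noisy
watermark = y.copy()                    # spurious, perfectly correlated
X = np.column_stack([np.ones(n), signal, watermark])

# Plain logistic regression by full-batch gradient descent.
wgt = np.zeros(3)
for _ in range(5000):
    wgt -= 0.2 * X.T @ (sigmoid(X @ wgt) - y) / n

train_acc = np.mean((sigmoid(X @ wgt) > 0.5) == (y == 1))

# Deploy on bananas that lack the watermark: same signal, no shortcut.
m = 1000
bananas = np.column_stack(
    [np.ones(m), 1.0 + rng.normal(0.0, 1.0, m), np.zeros(m)])
deploy_acc = np.mean(sigmoid(bananas @ wgt) > 0.5)
```

Inspecting `wgt` shows the watermark weight dwarfing the signal weight, and as the commenter says, nothing in the trained numbers tells you *why* it decided that, you only find out when deployment accuracy craters.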
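The non-reproducibility point is also demonstrable with a toy: train the same small network twice on identical data, changing nothing but the random initialisation seed, and you get two different sets of internal weights that happen to solve the task similarly. The two-layer net, the XOR-of-signs task, and all the hyperparameters below are invented for illustration.

```python
import numpy as np

def train(seed, X, y):
    """One hidden-layer net, full-batch gradient descent, seeded init."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (2, 16))   # only the init depends on the seed
    w2 = rng.normal(0.0, 0.5, 16)
    for _ in range(3000):
        h = np.maximum(X @ W1, 0.0)                       # ReLU hidden layer
        p = 1.0 / (1.0 + np.exp(-np.clip(h @ w2, -30.0, 30.0)))
        d = (p - y) / len(y)
        gW1 = X.T @ (d[:, None] * w2[None, :] * (h > 0))  # backprop through ReLU
        W1 -= 0.5 * gW1
        w2 -= 0.5 * (h.T @ d)
    return W1, w2, np.mean((p > 0.5) == (y == 1))

# Identical data both times: XOR-of-signs, which needs the hidden layer.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)

W1a, w2a, acc_a = train(0, X, y)
W1b, w2b, acc_b = train(1, X, y)
```

Both runs typically end up usable, yet `W1a` and `W1b` are nowhere near each other, so "identical training, different model" in exactly the sense complained about above.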
AI is bunk. It's "sufficiently advanced magic" to fool the casual onlooker (idiot in a hurry), but I have real trouble positing it as something to be trusted in any manner. So it helped you upscale some old movie to 4K. Whoopee. No harm done, time saved. But it could probably have been done with just a few filters anyway. For anything serious... get out of here. It's throwing dice into a canyon until you've thrown enough to make a vague outline of Jesus's head, and then selling it as an art-creating computer.
Biting the hand that feeds IT © 1998–2020