Good luck using generative adversarial networks in real life – they're difficult to train and finicky to fix

Generative adversarial networks (GANs) are a brilliant idea: take two neural networks and pit them against each other to get a machine to generate completely new, realistic-looking images. But in practice they are notoriously difficult to train and deploy, as one engineer told El Reg. Jason Antic, a deep learning engineer, runs …

  1. BigE

    Sounds like a recipe for...

    Kaos. It seems a bit unControllable in my opinion.

  2. oresme

    "it randomly gets worse and then better again". This sounds to me like a statistical algorithm that just isn't converging. Well, good luck with that one.

    1. YARR

      Randomly gets worse then better again

      This could be an indicator that the adversaries have actually learned something by their adversarial nature. When it happens the discriminator may have realised it’s been fooled by the generator, by spotting some new feature by which to discriminate. The generator is then forced to try new random tactics to fool the discriminator, resulting in a temporary reduction in output quality.
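The dip-and-recover pattern described above shows up even in the simplest adversarial setup. As a toy sketch (nothing from the article's actual model – the game f(g, d) = g·d, the starting point and the learning rate are all illustrative assumptions), simultaneous gradient updates in a two-player minimax game orbit the equilibrium rather than settling on it, so the quality measure repeatedly gets worse and then better again:

```python
# Toy two-player game: the "generator" g minimises f(g, d) = g * d
# while the "discriminator" d maximises it. The equilibrium is (0, 0),
# but plain simultaneous gradient steps circle it instead of converging.
g, d = 1.0, 1.0
lr = 0.1
quality = []  # |g * d| as a crude stand-in for "distance from equilibrium"

for step in range(200):
    grad_g, grad_d = d, g  # partial derivatives of f(g, d) = g * d
    # Simultaneous update: g descends, d ascends, both using old values.
    g, d = g - lr * grad_g, d + lr * grad_d
    quality.append(abs(g * d))
```

Plotting `quality` shows it dipping and rising repeatedly, and the parameter norm slowly spirals outwards – one reason practical GAN recipes add tricks like alternating update schedules, gradient penalties, or different learning rates for the two players.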

  3. Teiwaz Silver badge
    Coat

    What about a third Neural network

    Trained on the glitchy vs correct images

    Can't really call it adversarial then though, what about 'Love Triangle Black Hole'*

    * 'cause of the obscure nature of neural networks and 'cause I'm an Urusei Yatsura fan.

  4. JimmyPage Silver badge
    Stop

    You can't have artificial *anything*

    until you understand the natural version first.

    Pretty much does for "AI" then.

    1. Nick Ryan Silver badge

      Re: You can't have artificial *anything*

      There is a lot of truth in this.

      Unless the machine learning algorithm, or inappropriately named "AI" system, can understand anatomy, along with having a good understanding of clothing, and can then differentiate between the two – particularly as some clothing is semi-transparent or holey (lace, for example) – any attempt to generalise and brute-force this kind of solution is going to fail, to varying degrees of course. The failure rate can be reduced, but unless the dataset is so exhaustive that pretty much all possible images have been sampled, success will require training on a number of samples tending towards infinity.

      To be fair though, it's not as dumb and short-sighted as training a machine learning system on single-object images, where the only question asked and trained on is "what single object is in this image", and then wondering why the system fails spectacularly when there are two objects in the image.
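That particular failure mode is easy to reproduce on paper. In this sketch (the class names and logit values are made up for illustration), a classifier trained with a single-label softmax head must split its probability mass when two objects appear at once, while a multi-label sigmoid head can score both high independently:

```python
import math

def softmax(logits):
    # Single-label head: scores are forced to sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    # Multi-label head: each class is scored independently in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Suppose the network sees both a cat AND a dog, so both logits are high.
logits = [4.0, 4.0, -2.0]  # hypothetical scores for: cat, dog, other

single_label = softmax(logits)              # mass split between cat and dog
multi_label = [sigmoid(x) for x in logits]  # both cat and dog score ~0.98
```

With softmax, neither "cat" nor "dog" clears a 0.5 threshold even though both are plainly in the picture; the sigmoid head has no such constraint, which is why multi-object detection needs a different training objective, not just more of the same data.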

    2. Roland6 Silver badge

      Re: You can't have artificial *anything*

      AI should probably more correctly be called MI = Magic Intelligence: we don't actually know how it works, but it does – sort of by magic.

  5. Anonymous Coward
    Anonymous Coward

    "Don't chase the trends, chase the results."

    And that's just the kind of heresy which keeps him from working at a proper company with more than two people.

  6. Tom 7 Silver badge

    An intelligent network

    would just leave a B&W photo as it is. Colorizing old stuff is an utter waste of time and invariably looks false.

    1. Mathman
      Holmes

      Re: An intelligent network

      Looks false? Do you think that means you could train up an AI system to decide if it really is true or false? OK, so can you train up another AI system that can beat your first system? Well maybe it can also defeat the human eye? Worth a try.

    2. David 132 Silver badge

      Re: An intelligent network

      Hmm, I broadly agree, but there are instances where it is worthwhile. Peter Jackson and his team did a pretty good job with the old B&W war footage when making “They Shall Not Grow Old”, and it really brings the images to life. If you’ve not seen the film yet btw, do it. Now. It’s incredible.

    3. Anonymous Coward
      Anonymous Coward

      Re: An intelligent network

      It probably has improved over the last few years. I bought a DVD box set of Laurel and Hardy some time ago; some of the films have been colourised and it does look rubbish – so much so that when I converted the films from DVD to the computer so I could watch them on my tablet, I just converted the b&w versions.

      The example linked in the article, Psycho with the zombie hand, does look much better (apart from the hand!), but that could be down to the quality of the source material?

      1. Charlie Clark Silver badge

        Re: An intelligent network

        Apart from the fingers, which have become the colour of the door frame – edge and object detection in the discriminator have probably assigned them to the doorframe: it's about the height of a lock – that picture is fantastic.

  7. Lee D Silver badge

    This basically describes every AI I've ever heard of.

    As far as I can tell, any kind of machine learning:

    - Starts off useless.

    - After computer-years of training gets average-to-good results for basic categorisations or tasks.

    - Plateaus just after it becomes useful.

    - Cannot be "untrained", and retraining to include extra parameters, data, etc. requires far, far more effort and time than just throwing it out, starting clean, and retraining from scratch (like a stubborn old middle-ager who won't learn the new computer system, so you have to replace him and start all over again with new staff). The reason for this is clear: its entire life, it's been told "this is right", and as soon as you introduce another subtlety or borderline data, it has to be trained enough to overwhelm ALL of the lifetime training it was specifically selected for. It's like the old guy who learned Novell and now needs to learn something new, except that he literally cannot leave his Novell training behind and his entire career is based solely on how good he was at Novell and nothing else.

    - Gets written up / deployed / sold off while it's still useful but before anyone can actually do anything about improving it, so the new owner / the person lumbered with taking it on basically has a read-only system that will never improve.

    - Distances itself from all claims of being mere statistics, despite being basically an entirely statistical model.

    - Cannot be modified knowingly. The same way you can't just stick a knife into someone's brain to extract a memory, especially without knowing exactly where that particular memory is. You can't correct behaviour, you can only try to train it out (overwhelm it), and you have no idea where that behaviour arose from, what metric it's actually operating on, or why it's doing what it's doing. For all you know, it identified that banana in the image because of the Getty Images copyright that other photos didn't have, or something equally ridiculous (e.g. the average background colour of the central left-third of the image).

    - Is far too complex and random to analyse post-training.

    - Is not even necessarily reproducible between two identical trainings.
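The last point is easy to demonstrate. In this sketch (`train_toy` is a made-up stand-in, not a real training routine), the "trained" result is just a function of random state, so two nominally identical runs disagree unless every seed is pinned:

```python
import random

def train_toy(seed=None):
    # Stand-in for a training run: the final "weights" are simply a
    # function of the random initialisation and shuffled data order.
    rng = random.Random(seed)
    weights = [rng.gauss(0.0, 1.0) for _ in range(8)]
    rng.shuffle(weights)  # mimic dependence on data-presentation order
    return weights

run_a = train_toy()          # unseeded: different on every call
run_b = train_toy()
run_c = train_toy(seed=42)   # seeded: repeatable
run_d = train_toy(seed=42)
```

In a real framework you would also have to pin the library and GPU seeds and disable non-deterministic kernels to get the `run_c == run_d` behaviour, which is exactly why "two identical trainings" so rarely are.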

    AI is bunk. It's "sufficiently advanced magic" to fool the casual onlooker (idiot in a hurry) but I have real trouble positing it as something to be trusted in any manner. So it helped you upscale some old movie to 4K. Whoopie. No harm done, time saved. But could probably have been done with just a few filters anyway. But for anything serious... get out of here. It's throwing dice into a canyon until you've thrown enough to make a vague outline of Jesus's head and then selling it as an art-creating computer.


Biting the hand that feeds IT © 1998–2020