Reply to post:

Good luck using generative adversarial networks in real life – they're difficult to train and finicky to fix

Lee D

This basically describes every AI I've ever heard of.

As far as I can tell, any kind of machine learning:

- Starts off useless.

- After computer-years of training gets average-to-good results for basic categorisations or tasks.

- Plateaus just after it becomes useful.

- Cannot be "untrained", and retraining it to include extra parameters, data, etc. takes far, far more effort and time than throwing it out, starting clean, and retraining from scratch (like a stubborn middle-ager who won't learn the new computer system, so you replace him and start over with new staff). The reason is clear: its entire life it's been told "this is right", and as soon as you introduce another subtlety, or data that's borderline, the new training has to overwhelm ALL of the lifetime training it was specifically selected for. It's like the old guy who learned Novell and now needs to learn something new, except that he literally cannot leave his Novell training behind, and his entire career rests solely on how good he was at Novell and nothing else.

- Gets written up / deployed / sold off while it's still useful but before anyone can actually do anything about improving it, so the new owner / the person lumbered with taking it on basically has a read-only system that will never improve.

- Distances itself from all claims of being mere statistics, despite being basically an entirely statistical model.

- Cannot be modified knowingly, in the same way that you can't stick a knife into someone's brain to extract a memory, especially without knowing exactly where that particular memory is. You can't correct behaviour; you can only try to train it out (overwhelm it), and you have no idea where that behaviour arose from, what metric it's actually operating on, or why it's doing what it's doing. For all you know, it identified the banana in the image because of the Getty Images copyright notice that other photos didn't have, or something equally ridiculous (e.g. the average background colour of the centre-left third of the image).

- Is far too complex and random to analyse post-training.

- Is not even necessarily reproducible between two identical training runs.
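The "overwhelm its lifetime training" point above can be sketched in a few lines. This is my own toy illustration, not any real system: a tiny logistic-regression model trained hard on one labelling, then retrained on contradictory data. The new training overwrites the old behaviour rather than merging with it.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow warnings for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train(w, X, y, steps=500, lr=0.5):
    # Plain gradient descent on the logistic loss
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y_old = (X[:, 0] > 0).astype(float)   # original task: is feature 0 positive?

w = train(np.zeros(2), X, y_old)
acc_before = accuracy(w, X, y_old)    # near-perfect on the original task

# "Retrain" on flipped labels: the old behaviour isn't amended, it's erased
y_new = 1.0 - y_old
w = train(w, X, y_new)
acc_after = accuracy(w, X, y_old)     # collapses on the original task

print(acc_before, acc_after)
```

Real networks are vastly bigger, but the failure mode ("catastrophic forgetting" in the literature) is the same shape: there's no mechanism to add the new rule alongside the old one, only weight updates that push against everything learned before.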
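The reproducibility point is also easy to demonstrate. A minimal sketch, assuming nothing beyond NumPy: two "identical" training runs with the same data, same code, and same step count, where the only difference is the unseeded random initialisation, standing in for the shuffle order, GPU non-determinism, etc. of a real pipeline.

```python
import numpy as np

def train_once():
    rng = np.random.default_rng()        # no seed: fresh OS entropy each run
    w = rng.normal(size=8)               # random starting weights
    target = np.linspace(-1.0, 1.0, 8)   # fixed toy objective
    for _ in range(10):                  # too few steps to fully converge
        w -= 0.1 * (w - target)          # gradient step toward the target
    return w

w1 = train_once()
w2 = train_once()
identical = bool(np.allclose(w1, w2))
print(identical)   # almost certainly False
```

Seeding every random source makes this toy run repeatable, but in production training there are enough unseeded or non-deterministic sources that two runs rarely land on the same weights.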

AI is bunk. It's "sufficiently advanced magic" that fools the casual onlooker (the idiot in a hurry), but I have real trouble seeing it as something to be trusted in any manner. So it helped you upscale some old movie to 4K. Whoopee. No harm done, time saved. It could probably have been done with just a few filters anyway. But for anything serious... get out of here. It's throwing dice into a canyon until you've thrown enough to make a vague outline of Jesus's head, then selling it as an art-creating computer.
