Machine-learning boffins 'summon demons' in AI to find exploitable bugs

Lee D Silver badge

Re: Over the years people have done AI projects in software development.

Do you mean you don't understand why they don't do that?

Because the results are generally slow and meaningless. There's no "AI" as you might think of it. It just doesn't exist.

Take genetic algorithms as an example: you literally pitch a load of algorithms against each other, see which one comes closest to what you want, and then "breed" from it, feeding similar code into another generation of algorithms that you pitch against each other, and so on.
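
In rough Python, the whole trick is a loop like this (toy problem, and every name in it is mine, purely for illustration):

```python
import random

TARGET = "hello world"                      # toy goal: evolve this exact string
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # "Closest to what you want": count characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # "Breed": copy the parent, occasionally flipping a character at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(pop_size=100, generations=1000):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        # Pitch them against each other: rank the population by fitness...
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return gen, population[0]
        # ...then breed the next generation from the top ten.
        population = [mutate(random.choice(population[:10]))
                      for _ in range(pop_size)]
    return generations, population[0]

print(evolve())
```

Note that it takes hundreds of generations of a hundred candidates each just to spell an 11-character string it was explicitly told to aim for.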

Thousands of generations later, you get something that can do something really quite basic, with a very basic level of repeatability. Generally speaking it gets *better* with each generation (not entirely true; sometimes it quite clearly goes backwards!), but it never gets to a point where it's in any way infallible, reliable, or quicker than a human steering it (heuristics).

The big thing in GA is the selection criteria: how do you know who did best, how many of those do you breed from, what crossover size do you allow when breeding, and so on. If you apply a GA to choosing those, it gets even worse. It's basically a blind, random search. Sure, given a few million years of execution it might end up somewhere, but all you've done is add complexity and increase the time it takes to do anything by an order of magnitude.
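
To be concrete about what those knobs are: in the toy sketch above they'd be nothing more than hand-picked parameters. For instance (names mine, standard textbook operators):

```python
import random

def tournament_select(population, fitness, k=3):
    # One arbitrary "who did best" rule: pick k at random, keep the fittest.
    # The tournament size k is a knob someone has to choose by hand.
    return max(random.sample(population, k), key=fitness)

def crossover(parent_a, parent_b):
    # Single-point crossover: splice two parents at a random cut.
    # Where (and whether) to cut is yet another knob.
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]
```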

Though GA != AI, all the machine-learning things you see have the same problem. You know what the criteria for success are, you know what leads there, you can measure all kinds of things (in fact, the more you can measure, the worse it gets!), but apply them to themselves and you end up in a "blind-leading-the-blind" situation that just makes everything even worse. And in the end, just tweaking the criteria for success directly achieves the same result quicker (i.e. the criteria for success of the "master" GA get folded into the criteria for success of each "underling" GA anyway). Except "quicker" is by no means bounded or guaranteed in human terms.
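
To see why the nesting buys you nothing, here's a sketch of a GA tuning a GA (again, a toy problem and all names mine): every single fitness evaluation in the "master" loop is a complete run of the "underling" GA, so the cost multiplies while the objective stays exactly the same.

```python
import random

def inner_ga(mutation_rate, pop_size=50, generations=100):
    # A complete GA run on a toy problem (maximise the ones in a bit-string),
    # returning the best score reached with this mutation rate.
    population = [[random.randint(0, 1) for _ in range(32)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=sum, reverse=True)
        parents = population[:10]
        population = [[1 - bit if random.random() < mutation_rate else bit
                       for bit in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(sum(individual) for individual in population)

def meta_ga(meta_pop=10, meta_generations=10):
    # The "master" GA: each genome is just a mutation rate for the inner GA,
    # and its fitness IS a full inner run. That's meta_pop * meta_generations
    # complete GA runs, all to learn one number the inner success criterion
    # already implied.
    rates = [random.uniform(0.001, 0.5) for _ in range(meta_pop)]
    for _ in range(meta_generations):
        rates.sort(key=inner_ga, reverse=True)  # each key = one full inner run
        elite = rates[:3]
        rates = elite + [max(0.001,
                             random.choice(elite) * random.uniform(0.8, 1.25))
                         for _ in range(meta_pop - len(elite))]
    return rates[0]

print(meta_ga())
```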

The problem is that people THINK we have AI. We don't. They think we have machines that can learn. They don't. They think that taking the basics of a "learning" machine and scaling it up will just work. It doesn't. They think that once something starts to "learn", we can train it into a HAL-9000 by just throwing more resources at it periodically. We can't.

Like compressing an already-compressed file, setting one machine to teach another isn't going to achieve anything any quicker than you could by just focusing on and "nurturing" the target directly. It's like an educated person training an uneducated nanny to then educate their child: you could do a better job by just teaching the child yourself.

But the biggest problem: machines STILL DO NOT LEARN. Even in the most impressive of demos and achievements (Google's AlphaGo is unbelievably amazing - I know, I studied Maths and Computer Science under a professor who spent his entire career on algorithms for playing Go... you have no idea of the leaps AlphaGo has made), it STILL DOES NOT LEARN.
