
Machine-learning boffins 'summon demons' in AI to find exploitable bugs

Lee D

Re: Over the years people have done AI projects in software development.

"For clarification, specify what you mean by "learn" and perhaps give a specific example."

No problem.

Is your child learning by being told to memorise all the exam answers? They will certainly pass tests, but are they "learning"? Will they be able to apply that knowledge, acquire or infer related facts, or step outside the boundaries of their rote-taught curriculum? Most people would argue "No". That's not "learning"; it's memorisation. Computers are perfect at memorising. Feeding a computer a billion games, telling it "this is a good position", "this is a bad position", and making it memorise those is not learning.
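To make the point concrete, here's a toy sketch (entirely hypothetical code, nothing to do with how any real engine is built) of what pure memorisation looks like - perfect recall of anything stored, and no opinion at all on a position one move outside the table:

    # Toy sketch of pure memorisation: a position-to-verdict lookup table.
    # Everything here is a hypothetical illustration, not any real engine's API.
    memorised = {
        "e4 e5 Nf3 Nc6": "good",   # positions fed in with human-assigned labels
        "f3 e5 g4 Qh4#": "bad",
    }

    def evaluate(position):
        # Perfect recall on anything already stored...
        if position in memorised:
            return memorised[position]
        # ...but nothing to say about a position one move away from a known one.
        return "no idea"

    print(evaluate("f3 e5 g4 Qh4#"))  # "bad" - memorised, not understood
    print(evaluate("f3 e5 g4 Nc6"))   # "no idea" - nothing to infer with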

Even simple transforms are not learning - the equivalent of switching the order of the answers on a multiple-choice exam, so that the child has to memorise the ANSWER itself, not just the letter assigned to it. The computer equivalent? The same position seen as a rotation, reflection, translation or change of colour, or even a miniature part of it reproduced on a larger board. Is it "learning" to memorise all the positions and then use similarity tests to assign a value? Most people would argue "No".
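Again as a purely illustrative sketch (hypothetical, assuming a tiny square board): handling rotations and reflections is just a canonicalisation step bolted onto the same lookup table - still memorisation plus a similarity test, not understanding:

    # Toy sketch: a rotated or mirrored position hits the same table entry,
    # but only because a human coded the symmetry group in. Hypothetical code.
    def symmetries(board):
        # board is a tuple of row-tuples; yield its 4 rotations and their mirrors
        for _ in range(4):
            board = tuple(zip(*board[::-1]))          # rotate 90 degrees
            yield board
            yield tuple(row[::-1] for row in board)   # mirror image

    def canonical(board):
        # Pick one fixed representative of the 8 symmetric variants
        return min(symmetries(board))

    memorised = {canonical((("X", "."), (".", "."))): "good"}

    # The same position rotated 180 degrees finds the memorised verdict:
    print(memorised[canonical(((".", "."), (".", "X")))])  # "good"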

I'm using the inferred, standard human definition of learning, the one most people will not argue with and which is reflected in the dictionaries:

"become aware of (something) by information or from observation."

and "gain or acquire knowledge of or skill in (something) by *study*, *experience*, or being taught."

To "become aware" that you're about to lose a chess game that you've NEVER seen before - never played that position, hold no perfectly memorised table of losing positions for it - but which you can *infer* you will lose without that specific knowledge? That's learning.

Current "AI" does not learn. It adds to a massive database of experience, yes. That database is keyed by "desirability", yes. But outside of that, the computer is unable to infer. Feeding it a billion games from masters might give it a big enough database to win, but humans quite clearly do not require that to learn the game.

"AI" is also almost entirely heuristical. Humans have told it "this will be a winning position", "this will not", "this is X times more desirable an outcome". Whether by rules implicit in the system, input into the data, or programming which contains such assignment. Though you can feed in a massive games database and it can automatically form an association between "moving into the top-left corner" and "winning 0.184% of matches", that is singularly useless from an "AI player" point of view.

AlphaGo starts down the route of finding patterns: this is the data, these are the winning games, the pattern I've formed from those winning games can be described as a board position looking like X, and in other games that I've never seen before, games including board position X result in a win 12.749% of the time. It's pattern-forming and pattern-matching. But the patterns it can possibly find are still described by humans, whether coded or parameterised.
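A toy sketch of that pattern-forming and pattern-matching (all hypothetical; note that the pattern vocabulary itself is fixed in advance by the programmer, which is exactly the point):

    # Toy sketch: extract fixed-vocabulary patterns from past games, then score
    # never-seen positions by how often those patterns co-occurred with a win.
    from collections import Counter

    def patterns(position):
        # The pattern vocabulary is chosen by a human - here, just the squares
        # occupied by black. The machine cannot invent a new kind of pattern.
        return [sq for sq, stone in position.items() if stone == "black"]

    wins, seen = Counter(), Counter()
    training = [
        ({"a1": "black", "b2": "white"}, True),
        ({"a1": "black", "c3": "black"}, False),
    ]
    for position, won in training:
        for p in patterns(position):
            seen[p] += 1
            wins[p] += won

    def score(position):
        # A position never seen before still scores if its patterns are known
        ps = [p for p in patterns(position) if seen[p]]
        return sum(wins[p] / seen[p] for p in ps) / len(ps) if ps else 0.0

    print(score({"a1": "black", "d4": "black"}))  # matches "a1" -> 0.5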

At no point are machines such as Deep Blue or AlphaGo inferring, hypothesising or doing anything unexpected. They have no understanding of the position, and no ability to form similar patterns that might improve that association - it all has to be coded specifically. AlphaGo is leaps and bounds ahead in this, beating predictions for computer Go by decades, but it's still not inferring in the way you would need to in order to "learn".

Left to its own devices, it wouldn't be able to formulate a strategy. It can only take a HUGE database of games and form correlations between their properties.

I like to think of it as the Fosbury Flop principle. Take this as an analogy, not a strict example! Put into a high-jump contest, even the best of today's AI wouldn't have the insight to invent a different way of jumping that was still within the rules but not present in the database of all existing high-jumps before it. Similarly, cricket has its own example: for a period there was no rule specifying a maximum width of bat - until one player turned up with a bat wider than the stumps.

"AI" isn't capable of that "within the rules, but outside of their own experience" thinking, true learning. They cannot infer. They cannot build a pattern outside of certain set criteria. And they are still, at the end of the day, expert systems and statistical analysers on large databases.
