
Large language models' surprise emergent behavior written off as 'a mirage'

mpi Silver badge

Re: Intelligence

> From what I've seen these language models seem to have some understanding of our physical world,

I can trick an LLM into explaining to me why a tractor fits in a teacup. Where is that understanding of the physical world?

No, they don't have an understanding. They don't even have concepts of the physical world. The entirety of an LLM's capability is sequence prediction; the entirety of its universe is tokens, period. That enables them to *mimic* an understanding, because given the sequence "A tractor does ___ fit into a teacup", the tokens forming "not" are simply more likely in the place of the blank than the tokens forming "indeed".
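To illustrate what "more likely in the place of the blank" means, here is a toy sketch in C with entirely made-up probabilities. A real LLM derives its distribution from billions of learned weights, but the selection step is the same in spirit: score candidate tokens in context and pick the likeliest.

    /* Toy sketch only: the probabilities below are invented for illustration.
     * A real LLM derives them from its weights; this just shows the
     * "pick the likeliest token in context" step. */
    #include <stdio.h>

    struct candidate { const char *token; double probability; };

    int main(void) {
        const char *context = "A tractor does ___ fit into a teacup";
        struct candidate candidates[] = {
            { "not",    0.92 },   /* made-up numbers */
            { "indeed", 0.03 },
            { "often",  0.05 },
        };
        int best = 0;
        for (int i = 1; i < 3; i++)
            if (candidates[i].probability > candidates[best].probability)
                best = i;
        printf("%s -> predicted token: \"%s\"\n", context, candidates[best].token);
        return 0;
    }

Swap the numbers around and the very same code will cheerfully put "indeed" in the blank, which is the whole problem.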

The trouble with this mimicry of understanding: if I give it a sequence to nibble on that makes the word "indeed" more likely in that place than "not", that's what it will predict.

The other trouble with this mimicry is that humans are prone to anthropomorphization: we naturally jump to the hasty conclusion that things have human-like intellect and agency behind them. For the same reason, people once believed that thunder is a man in a chariot beating his hammer against the clouds, or that rain is the tears of angels.

> I find it difficult to believe their output is merely a random construct of words and letters.

That's because it isn't random. It is stochastically determined to be likely in the context provided by the training data's influence on the weights and the sequence that precedes it, aka the "prompt".
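As a rough sketch of "stochastic but not random", here is a toy weighted sampler in C. The distribution is invented for illustration, standing in for what the weights produce given the preceding sequence: the pick varies from run to run, but it is heavily biased towards the likely token, not a uniform coin toss.

    /* Toy sketch only: weighted sampling from an invented distribution,
     * standing in for what the weights produce given the prompt. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const char *tokens[] = { "not", "indeed", "often" };
        const double probs[] = { 0.92, 0.03, 0.05 };  /* made-up numbers */
        srand((unsigned)time(NULL));
        double r = (double)rand() / RAND_MAX;  /* uniform draw in [0, 1] */
        int chosen = 2;          /* fall back to the last token if rounding leaves r uncovered */
        double cumulative = 0.0;
        for (int i = 0; i < 3; i++) {
            cumulative += probs[i];
            if (r <= cumulative) { chosen = i; break; }
        }
        printf("sampled token: \"%s\"\n", tokens[chosen]);
        return 0;
    }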

Things not being random doesn't mean they are intelligent, however. int count(void) { int i = 0; while (1) { printf("%d\n", i); i++; } } isn't producing a random sequence either.
