I think you're confusing machine learning (ML) with artificial intelligence (AI). The latter can extrapolate/interpolate, whereas the former cannot. Both are based on neural networks, but their purpose is different.
An AI can explore the space in and around its training set: because it behaves like an almost-but-not-quite perfectly fitted curve in a very high-dimensional space, it can produce novel combinations of the information it was trained on.
With ML, the same input produces the same (trained) output, which is what you want. With AI, the same input can yield multiple different outputs, because generation involves sampling from a probability distribution rather than always picking the single highest-scoring answer (not to be confused with "hallucinating").
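To make that sampling point concrete, here's a minimal sketch in Python. The token names, scores, and the softmax-with-temperature decoding are illustrative assumptions, not any particular model's internals:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Lower temperature sharpens it; higher temperature flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for one fixed prompt.
tokens = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.5]

# Deterministic ("ML-style") decoding: argmax always picks the same token.
greedy = tokens[logits.index(max(logits))]
print(greedy)  # always "cat"

# Stochastic decoding: sample from the distribution, so repeated calls
# with the exact same input can return different tokens.
probs = softmax(logits, temperature=1.0)
sampled = random.choices(tokens, weights=probs, k=5)
print(sampled)  # varies between runs
```

The same input gives the same distribution every time; it's the draw from that distribution that makes the visible output differ from run to run.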
With ML, if you input an item from the training set, you get the output that corresponds to the trained (and expected) result, assuming the model has been adequately trained. With AI, it is extremely hard (though not completely impossible) to get the model to reproduce an item from the training set verbatim, even given a carefully chosen prompt.
For some reason, the popular view is that an AI is nothing more than a database and all it does is regurgitate items from that database. A moment's thought shows this cannot be true: otherwise, everyone who entered exactly the same prompt would get back exactly the same response, and they don't (unless they're asking for factual information, and even then the presentation will almost certainly differ each time).
In addition, the volume of text on which recent LLMs have been trained runs into the tens of terabytes, whereas the trained model itself is measured in tens (or at most a few hundred) of gigabytes: far too small to store the training data verbatim. HTH.
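As a back-of-envelope sanity check on that size gap, all figures below are illustrative assumptions rather than the specs of any particular model:

```python
# Rough check that a model cannot be a verbatim database of its
# training data. Every number here is an assumed round figure.

train_tokens = 10e12      # assume ~10 trillion training tokens
bytes_per_token = 4       # assume ~4 bytes of text per token on average
train_bytes = train_tokens * bytes_per_token

params = 70e9             # assume a 70-billion-parameter model
bytes_per_param = 2       # 16-bit weights
model_bytes = params * bytes_per_param

ratio = train_bytes / model_bytes
print(f"training data: {train_bytes / 1e12:.0f} TB")   # training data: 40 TB
print(f"model weights: {model_bytes / 1e9:.0f} GB")    # model weights: 140 GB
print(f"compression factor: ~{ratio:.0f}x")            # compression factor: ~286x
```

Even with generous assumptions, the weights are hundreds of times smaller than the text they were trained on, so lossless storage of the training set is arithmetically impossible.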