To assert that intelligence and learning are human traits (implied by the definition of anthropomorphism) is to reject the notion that nonhuman intelligent or learning animals exist.
Unless you're literally arguing with yourself, you've split the compound nouns in my original post and have acted as though the definitions shouldn't have changed (see the tin can for why that doesn't work).
The "statistical dice-rolling games" formulation is insufficient to exclude text generation processes that aren't enjoying a phase of popularity at the moment. Markov chain text generators exist, after all. Saying "Those are just words used to anthropomorphize complicated maths" is equally useful, but with a much more obvious scoping failure.
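To make the scoping failure concrete: a word-level Markov chain text generator is a few lines of Python, and it is *literally* a statistical dice-rolling game. This is a throwaway sketch (corpus, `order`, and function names are mine, not from any library):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each n-gram to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, start, length=10):
    """Roll weighted dice over observed successors -- no 'intelligence' required."""
    out = list(start)
    for _ in range(length):
        key = tuple(out[-len(start):])
        successors = chain.get(key)
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)
```

If "statistical dice-rolling" excludes something from being text generation, it excludes this too, which is the point.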
The term "artificial intelligence" was meant to be among "artificial sweetener" and "artificial heart", not "human intelligence" and "extraterrestrial intelligence" (hi amanfromMars 1!).
A "model" is an algorithm with a set of internal parameters that can be changed to alter its behavior.
"Machine learning" is using data to automatically adjust a model's parameters to improve its performance.
"Inference" is running the algorithm on inputs that may not have been in the original training data.
"Sampling" is taking a weighted set of candidate outputs and selecting one, usually via statistical techniques such as temperature scaling and top-k or top-p filtering.
The closest thing to a statistical dice-rolling game in current LLM architectures is sampling: inference is just running a really big function, and machine learning is an error-reduction process that iteratively improves a model's predictive ability.
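And even the dice-rolling part is unglamorous. A sketch of temperature/top-k sampling, assuming a hypothetical `{token: score}` dict of raw logits rather than any real model's API:

```python
import math
import random

def sample(logits, temperature=1.0, top_k=None):
    """Turn raw scores into probabilities, then roll the weighted die."""
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k:
        items = items[:top_k]  # keep only the k highest-scoring tokens
    # softmax with temperature: lower T sharpens the distribution, higher T flattens it
    m = max(score for _, score in items)  # subtract max for numerical stability
    weights = [math.exp((score - m) / temperature) for _, score in items]
    tokens = [token for token, _ in items]
    return random.choices(tokens, weights=weights, k=1)[0]
```

That's the whole "game": a softmax and a weighted draw.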
>"Intelligence" and "learning" used in this currently fashionable context are just marketing bullshit words.
"Artificial intelligence" and "machine learning" are only bullshit marketing words inasmuch as people who actually know what they mean are willing to cede ground to people like you who don't. Learn something or get out of the way.