Stop calling them "parameters", call them, I dunno, "nadans" or just plain old "numbers"
> Meta's LLaMA, a seven-billion-parameter LLM
Aaargh - it is a seven-billion-near-as-dammit-arbitrary-number LLM!
A seven-billion-nadan LLM! There you are, "nadan" - doesn't that even sound more exotic and intriguing than "parameter"?
A "parameter" is an input that has understood meaning - if you "know the parameters of the problem" it means that you can identify and *describe* each parameter, *explaining* how it affects the outcome. That is why we say that a function has parameters - which we give meaningful names to - and we pass in values that we know to be sensible (well, when things are going right).
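To make that concrete (a trivial, made-up example - the function and its numbers are mine, not from any article): each parameter of a real function has a name and a describable effect you can explain to someone.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula. Every parameter is explicable:
    raise annual_rate and the payment rises, and you can say exactly why."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # number of payments
    return principal * r / (1 - (1 + r) ** -n)
```

You can reason about it: `monthly_payment(100000, 0.06, 30)` must cost more per month than the same loan at 5%, because the rate parameter *means* something.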
The LLM - and any of its relations, the varieties of Neural Nets - tweaks weights between connections as it is trained. Looking at the result, this huge pile of simple numbers is totally incomprehensible: you won't gain any useful insight by asking "Why is this 7.01?" - it just is. You can't say "I want to make this change to the outputs, so I will change that 7.01 to 7.02"; there is (as yet) no way to determine whether any given number in the model even *has* an effect on the outcome (it may be blocked at any level in the layers preceding or succeeding it). Unlike a Markov Chain, the layering makes these large stochastic models totally opaque. There is ongoing research to see if this situation can be improved upon, without "damaging" the "usefulness" of these models, but we are not there yet.
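Here is a toy sketch of that blocking effect (a hypothetical two-unit net I've invented for illustration, nothing like a real model): tweak the 7.01 to 7.02 and the output doesn't move at all, because a ReLU in the preceding layer has already zeroed out the unit it multiplies.

```python
def relu(v):
    return max(0.0, v)

def tiny_net(x, w1, w2):
    # one input -> two hidden units (ReLU) -> one output
    hidden = [relu(w * x) for w in w1]
    return sum(w * h for w, h in zip(w2, hidden))

x = 1.0
w1 = [0.7, -0.3]    # second unit's pre-activation is negative: ReLU kills it
w2 = [0.5, 7.01]

before = tiny_net(x, w1, w2)
w2[1] = 7.02        # tweak the "7.01" - no effect: that unit is dead
after = tiny_net(x, w1, w2)
# before == after: this number is blocked by the layer preceding it
```

And that is with two hidden units you can inspect by hand; with billions of them, stacked in dozens of layers, there is no asking which numbers matter.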
Rather obviously, the LLM pushers call them "parameters" - and expect everyone else to follow suit - because it makes them (both LLM and peddler) sound cleverer than they are (an old advertising trick - even when people say "oh, just call them parameters, what does it matter? Don't you know language changes all the time!" the mind still attaches some of the gravitas of the word to its lesser usage).
> The researchers noted that they cannot explain why a search result is trustworthy or not.
Because they are just flinging nadans, not parameters! These nets have *no* explanatory power, unlike other approaches to "AI".
> They hope to come up with another strategy to increase accuracy and reliability in the future
But note: not to increase (well, start providing) any ability to *explain* why a result is trustworthy (or not).