Intelligence
Let's think about intelligence more broadly than we usually do: we tend to treat it as something unique to humans, something that allows us to find novel, never-before-seen solutions to new problems.
Not only have we learned that intelligence is not a uniquely human trait (many animals possess it to varying degrees), but it's probably not about "solutions" per se either.
Let's look at what most life forms do: they adapt in order to survive better (admittedly an oversimplification; many biologists would object and say instead that the part of a population that happened to acquire a new trait often survived better, purely by chance, while the rest went extinct). What adaptation really amounts to is taking in certain sensory inputs, building a model of the world and of yourself in that world, and shaping your behavior in a way that gives you an advantage over other life forms in terms of reproductive chances.
It looks like the hardest part is filtering out the least important inputs and not wasting a disproportionate amount of time on speculation that turns out to be wrong, because competing life forms have learned to do exactly that better. It's really a balancing act: you need to predict the world and yourself better than the competition, but without spending too much time or energy doing it.
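To make that picture a little more concrete, here is a deliberately crude Python sketch (my own illustration, not anything drawn from biology or from the argument above): an "organism" that senses its surroundings, filters out low-salience inputs, updates a rough world model under a fixed energy budget, and then acts on that imperfect model. The class name, the salience threshold, and the energy budget are all invented for the example.

    import random

    class ToyOrganism:
        """A crude 'adapting organism': sense, filter, model, act, under a budget."""

        def __init__(self, salience_threshold=0.5, energy_budget=2):
            self.world_model = {}                      # crude beliefs about the world
            self.salience_threshold = salience_threshold
            self.energy_budget = energy_budget         # how much modeling we can afford

        def sense(self, environment):
            # Raw sensory inputs: (name, salience, value); salience is noisy.
            return [(name, random.random(), value) for name, value in environment.items()]

        def filter_inputs(self, inputs):
            # "Filtering out the least important inputs": drop low-salience signals.
            return [(name, value) for name, salience, value in inputs
                    if salience >= self.salience_threshold]

        def update_model(self, filtered):
            # Modeling is costly: each update spends energy, and we stop when the
            # budget runs out rather than modeling everything perfectly.
            for name, value in filtered:
                if self.energy_budget <= 0:
                    break
                self.world_model[name] = value
                self.energy_budget -= 1

        def act(self):
            # Behave according to the current (imperfect, incomplete) model.
            if not self.world_model:
                return "explore"
            best = max(self.world_model, key=self.world_model.get)
            return f"approach {best}"

    if __name__ == "__main__":
        env = {"food": 0.9, "shade": 0.4, "predator_scent": -0.8}
        organism = ToyOrganism()
        organism.update_model(organism.filter_inputs(organism.sense(env)))
        print(organism.act())

The only point is the shape of the loop: sense, filter, model cheaply within a budget, then act on whatever imperfect model you ended up with.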
And of course the modeling itself is extremely difficult (that's why it takes billions of neurons). When scientists think about problems, they often arrive at solutions or conclusions unconsciously, so there is some "processing" (modeling) going on in deeper layers of our brains that we're not consciously aware of.
Now the question is: does any LLM do any of this sensory-input/modeling work? To some extent they do (as anyone who has used one can attest), but they're still a long way from us. LLMs are excellent at mimicking and combining what's already known, but that alone is unlikely to lead us to AGI.
Finally, ever since the advent of computers, too many people have assumed that what our brains do is "computing," and I'm far from convinced of that.