Re: Stop misusing that term
Yeah, it's definitely not intelligence.
ChatGPT was famous for producing maths such as "2+2=5", along with the usual bland yet verbose "explanation" of why it was correct. It was all gibberish, of course. Why does it make this mistake? Because it doesn't know what "2" is, or what "+" is, or what "=" is, or what "5" is. It doesn't know what numbers are. It doesn't know any of the rules of mathematics at all. It has no idea what right or wrong are either, so it can't know that it is in error, even if told as much - knowing that would require some means of understanding what being wrong means and why it was wrong, which it doesn't have. That's why it'll often argue back: it's just stats-matching training-set text from occasions when some people told other people they were wrong. Ever seen online "discussions"? When someone says "you're wrong", someone else pretty much always argues back.
The reason it might assert 2+2=<anything> is that it's a maths-y thing which looks statistically like other maths-y things, and most of the maths-y things that had "2+2" in them said "4". But sometimes people say stuff like, "hey, that's nonsense, it's as wrong as saying 2+2=5". And thus we have "2+2=5" in the training data too, so there's a small stats-based chance (shaped by billions of other bits of input, with nuances beyond our own ability to reason about simply because of the vastness of the data set) that the ML system might indeed state "2+2=5".
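To make that concrete, here's a toy sketch in Python - purely my own illustration, real LLMs work on tokens and learned weights, not string lookups. Treat "completing" 2+2= as sampling from the frequencies of whatever followed that context in a tiny pretend training set. Because "2+2=5" appears in the data at all, even as sarcasm, it gets a nonzero chance of being emitted:

    import random

    # Toy corpus: mostly correct sums, plus one sarcastic counter-example.
    training_corpus = [
        "2+2=4", "2+2=4", "2+2=4", "2+2=4", "2+2=4",
        "2+2=4", "2+2=4", "2+2=4", "2+2=4",
        "hey that's nonsense, it's as wrong as saying 2+2=5",
    ]

    # Count which character follows the context "2+2=" anywhere in the
    # corpus. There's no concept of "number" here, only of what tends to
    # follow what.
    context = "2+2="
    counts = {}
    for text in training_corpus:
        idx = text.find(context)
        if idx != -1 and idx + len(context) < len(text):
            nxt = text[idx + len(context)]
            counts[nxt] = counts.get(nxt, 0) + 1

    # Sample continuations in proportion to observed frequency.
    tokens, weights = zip(*counts.items())
    print(random.choices(tokens, weights=weights, k=10))
    # Mostly '4', occasionally '5' - because '5' followed "2+2=" in the
    # data, not because anything was understood or computed.

Scale that up to billions of contexts and you get far more fluent output, but the principle is the same: probability mass borrowed from the data, not understanding.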
It's a stochastic parrot, full stop. No matter how many times people hand-wave and say "we don't know what intelligence is", that's just deflection. We certainly do know that part of our intelligence is based around knowing and understanding rules, and indeed earlier AGI studies (1970s-90s era or thereabouts, back when it was just called "AI") were often built around teaching rules and drawing inferences from them. A person knows what an integer is, the rules governing integers, and what addition means, and so knows without a shadow of a doubt that 2+2=4, because the person understands the governing rules and the nature of every part of that statement... once taught those rules, that is! The trouble is, a lifetime of learning rules turns out to be very, *VERY* hard to reproduce even with modern computing power - the biggest problem, I think, is assembling a machine-readable training set of such accuracy and detail in the first place, rather than building a computer system capable of processing that data.
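For contrast, here's what that rules-based style looks like in miniature - again just an illustrative sketch of mine, using Peano-style successor arithmetic rather than any particular 70s-90s system. Addition is *defined* by rules, so 2+2=4 is derived, and 2+2=5 can't come out by construction:

    # Numbers built from rules: zero, and "successor of n" (i.e. n + 1).
    ZERO = ()

    def succ(n):
        return (n,)  # successor: wrap n once more

    def add(a, b):
        # The two rules of addition: a + 0 = a ; a + succ(b) = succ(a + b)
        return a if b == ZERO else succ(add(a, b[0]))

    def to_int(n):
        return 0 if n == ZERO else 1 + to_int(n[0])

    two = succ(succ(ZERO))
    print(to_int(add(two, two)))  # 4, every time - derived, never guessed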
But, good news! We discovered a party trick. Enter generative AI, AKA ML.
Even OpenAI themselves acknowledge that ChatGPT is, in effect, a party trick - that it gives right answers by accident rather than by understanding, readily makes up nonsense, and should never be used for anything that requires correct answers. But never let a product's limitations get in the way of marketing's lies and the holy grail of sweet, sweet profit. Microsoft have a whopping great stake in OpenAI, so - surprise! - suddenly ChatGPT sits in front of Bing, a search engine that's supposed to give accurate answers. The early tsunami of stories about Bing frequently returning rubbish was the inevitable outcome. It'll still be doing it, helping to spread and worsen misinformation globally, but it's all old news now, so you don't hear about it.
We can carry on refining this junk, at least for as long as there's ever-more *human*-generated content online to train on, but it'll still be lipstick on a pig. Like the fun artificial landscape generators of the past such as Terragen, or entertaining old-school "human-like chat" bots such as Eliza way back, it'll still hit its limits. Interestingly, with ML-generated material now spewing out over the web like a broken sewer main over a highway, actually finding new human-authored material to add to existing ML training datasets has become an awful lot harder than it was. We might already be quite close to the peak of these systems' capabilities as a result.