It makes sense that Google are looking to integrate LLMs, and I assumed that they (Apple, Amazon, Microsoft, Google & Meta) were all already working on it in the background.
The problem is, ChatGPT is a bit of fun, and if it throws up the wrong answers, well, it is still an experiment; it isn't a product that is good enough or consistent enough for daily use. But if you have a voice assistant, you want it to do what you tell it and to give you correct answers. (Yes, I know, the current ones aren't very good at that, either.)
If you are sitting at a keyboard and ChatGPT & Co. give you some wild answer, you can go, "wait, what?" and look into it. If you are out and about, using voice, you generally don't have the time to stop & think about the answer, so a "wait, what?" moment will just leave you frustrated & you won't use it again.
This is where additional testing is really needed before you can expose the product to the world. The problem is, you get half-finished products like ChatGPT that provide some good answers, but it is a coin toss whether you get a sensible answer or a hallucination.
They can't afford that with a "real" assistant; it is better that it remains half-way usable in its current form than that it gives wrong answers half the time or fails to do what you have told it to do. An AI overhaul for beta users? Yes. An AI overhaul, without sufficient testing & proven accuracy, for the general population? No way.