Just as far away as before
For most of my life, experts have been saying that we are "about 20 years away from AI". In that time, we've seen a number of goals achieved: "play chess as well as a person", "play go as well as a person", "recognize a picture of a cat", and in every single case, it turns out that AI (or AGI, as it is now called) remains as far away as ever.
There was lots of excitement last year when it looked like ChatGPT and its cohort of LLMs could finally pass the Turing test. And yet, AI looks as far away as ever. The biggest advance would appear to be that we don't understand how the new models work any better than we understand how our own cognition works. So, yay, the experts have built systems they don't understand and can't predict.
The biggest clue that LLMs don't replicate human intelligence is the wild disparity in power consumption, both for training and for running the models. The human brain works its miracles on less than 100W. Good luck doing anything with an LLM on that kind of power budget.
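To get a feel for the size of that disparity, here's a back-of-envelope sketch. The figures are assumptions for illustration only: the brain is commonly estimated at around 20W of continuous draw, and one published estimate put GPT-3's training run at roughly 1,287 MWh; neither number is precise, but the orders of magnitude are the point.

```python
# Back-of-envelope energy comparison. Both figures below are rough,
# commonly cited estimates, used here purely for illustration.
BRAIN_WATTS = 20              # assumed continuous power draw of a human brain
GPT3_TRAINING_MWH = 1287      # one published estimate for GPT-3's training run

HOURS_PER_YEAR = 24 * 365
brain_mwh_per_year = BRAIN_WATTS * HOURS_PER_YEAR / 1e6  # watt-hours -> MWh

brain_years = GPT3_TRAINING_MWH / brain_mwh_per_year
print(f"Brain energy budget: {brain_mwh_per_year:.3f} MWh/year")
print(f"One training run is about {brain_years:,.0f} brain-years of energy")
```

On those assumptions, a single training run burns the energy a brain would use over several thousand years, and that's before anyone actually queries the model.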
With the confidence of someone who is not an expert, I predict that LLMs will turn out to be another blind alley in the search for human-level intelligence, although they look like they will have a number of useful applications.