It will take more than that
While there is no question that new layers are needed, the real issue is the need for a better architectural approach. I can't say everything, for proprietary / IP reasons, but LLM-style technology has serious flaws that will limit it even with clumsy architectural patches bolted on. You have to start from a different base concept, because the LLM analysis model falls short at understanding true meaning. No matter how you extend it, it will break at some point: mere pattern and statistical analysis cannot grasp cultural meaning, implication, philosophy, or the other things central to human intelligence. No matter how many hidden layers you add to neural network technology, that will not overcome the problem. The architecture needs a hybrid approach that combines overt symbolic processing with NN / vector classifier engines.

What I see is that the AI field will have to learn the hard way and correct course, and all the hype will be embarrassing to look back on once we change our ways.
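To make the hybrid idea concrete in a generic way (to be clear, this is only a toy sketch, not the design I'm alluding to): a vector / classifier stage makes a statistical guess, and an overt symbolic stage then reasons over that guess with explicit facts and rules. Every name, rule, and number below is made up for illustration.

```python
# Minimal illustrative sketch of a neuro-symbolic hybrid -- NOT a real or
# proprietary design. PrototypeClassifier, RuleEngine, and the toy vectors
# and rules are all hypothetical; they only show the general shape:
# a statistical/vector stage proposes a label, and an overt symbolic stage
# checks it against explicit knowledge and can refine or reject it.

from dataclasses import dataclass
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


class PrototypeClassifier:
    """Stand-in for the NN / vector stage: nearest class prototype by cosine."""

    def __init__(self, prototypes):
        self.prototypes = prototypes  # label -> embedding vector

    def classify(self, embedding):
        scores = {label: cosine(embedding, proto)
                  for label, proto in self.prototypes.items()}
        label = max(scores, key=scores.get)
        return label, scores[label]


@dataclass
class Rule:
    """Symbolic rule: if all premises hold, assert the conclusion."""
    premises: tuple
    conclusion: str


class RuleEngine:
    """Stand-in for the overt symbolic stage: explicit facts plus naive forward chaining."""

    def __init__(self, facts, rules):
        self.facts = set(facts)
        self.rules = rules

    def infer(self, asserted):
        known = self.facts | set(asserted)
        changed = True
        while changed:  # keep firing rules until nothing new is derived
            changed = False
            for rule in self.rules:
                if all(p in known for p in rule.premises) and rule.conclusion not in known:
                    known.add(rule.conclusion)
                    changed = True
        return known


def hybrid_interpret(embedding, classifier, engine, confidence_floor=0.6):
    """Statistical guess first, then explicit symbolic reasoning over that guess."""
    label, score = classifier.classify(embedding)
    if score < confidence_floor:
        return "undetermined", set()  # don't run symbolic reasoning on a noisy guess
    conclusions = engine.infer({f"is_{label}"})
    return label, conclusions


if __name__ == "__main__":
    # Toy 3-d "embeddings" and a toy context rule, purely illustrative.
    clf = PrototypeClassifier({"greeting": [0.9, 0.1, 0.0], "insult": [0.1, 0.9, 0.0]})
    kb = RuleEngine(
        facts={"context_formal"},
        rules=[Rule(("is_greeting", "context_formal"), "respond_politely")],
    )
    label, conclusions = hybrid_interpret([0.85, 0.2, 0.05], clf, kb)
    print(label, conclusions)
```

The point of the split is that the symbolic layer carries things the statistical layer cannot: explicit facts, implications, and context that can refine or veto a pattern-level guess, rather than hoping more hidden layers will absorb them.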