A Good Idea
A larger cache improves CPU performance because the pathway to main memory is a bottleneck, so widening that pathway attacks the same problem directly and would obviously bring benefits. Look at the innards of the NEC SX-9 supercomputer, which ties each of its CPUs directly to sixteen DRAM modules.
Not having to worry about pin count, and not having to drive an external interface, would be greatly beneficial.
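A quick roofline-style calculation illustrates why memory bandwidth matters so much. All of the figures below are hypothetical, chosen only to show the shape of the effect, not to describe any real machine:

```python
# Roofline-style sketch: achievable throughput is capped by the lesser of
# the compute peak and what the memory system can feed the cores.
# Every number here is illustrative, not a real CPU's specification.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Attainable throughput = min(compute peak, bandwidth x arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# A streaming kernel doing 0.25 floating-point operations per byte moved:
narrow = attainable_gflops(peak_gflops=500, bandwidth_gbs=50,  flops_per_byte=0.25)
wide   = attainable_gflops(peak_gflops=500, bandwidth_gbs=400, flops_per_byte=0.25)
print(narrow, wide)  # 12.5 vs. 100.0 GFLOPS
```

For a bandwidth-bound workload like this, an eightfold-wider path to memory yields an eightfold speedup, while the compute peak never comes into play at all.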
The problem with going all the way and putting everything on a single die, rather than on some type of module as HBM already does, is that die sizes are limited. Putting, say, eight cores and 16 gigabytes of DRAM on one die isn't likely to be possible for some time.
Of course, one thing chipmakers are looking for is a way to eliminate die size as a constraint. If you could build a multichip module whose connections between dies were essentially indistinguishable from on-die connections, imposing no additional delays or drive requirements, then major units like CPU cores would still have to fit within a single die, but cache and memory on other dies would be as good as on the same die.