Crossed fingers, but...
I am an AMD supporter from way back, but recent times have been pretty grim. Unless something dramatic happens I will be switching to Intel CPUs for my own workstations, or maybe even to something like ARM.
What I would like to see is a Lego-style system that allows natural additions of whatever you need, be it CPU, GPU, RAM, disk, etc.
In the 21st century we should not be stuck upgrading entire systems, or scrapping perfectly good parts of a system, because one component is not up to snuff.
We keep bashing our heads against artificial architectural limitations because of the single moronic meme: "That should be enough. I can't see why we would need more." I understand that for practical reasons it is difficult to produce 6-bit over 4-bit, 8-bit over 6-bit, 16-bit over 8-bit, 32-bit over 16-bit, 64-bit over 32-bit, and so on, but why are architects unable to see the pattern there? The fact that a chip designer just flat out cannot imagine why I would want a 16K-bit register should not be my problem. Their lack of imagination should not be permanently baked into the architecture. Perhaps the implementations need to be crippled due to practical considerations, but the architectures and the APIs should not be dragged down with them.
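To make that concrete, here is a toy Python sketch (purely illustrative; the widths and function names are mine, not any real ISA or API): a fixed-width operation bakes the designer's limit into the contract, while an arbitrary-width one does not.

```python
def fixed_width_add(a: int, b: int, width_bits: int = 64) -> int:
    """Add modulo 2**width_bits: the width limit is part of the contract."""
    mask = (1 << width_bits) - 1
    return (a + b) & mask

def unbounded_add(a: int, b: int) -> int:
    """Python ints are arbitrary precision: no width baked into the API."""
    return a + b

huge = 1 << 16_384                         # a "16K-bit" value, perfectly specifiable
print(fixed_width_add(huge, 1))            # 1: the 64-bit ceiling silently discarded the operand
print(unbounded_add(huge, 1) == huge + 1)  # True: no architectural ceiling in sight
```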
Yes, there are timing and EM considerations, heat dissipation problems, fundamental limitations due to the speed of light, and so on. We may *never* be able to physically build some systems, but our architectural designs should still not assume failure in advance.
I have seen people argue strenuously in favor of the GoF 'singleton' as a valid pattern rather than a corrupt extension of the global variable. It is a bad pattern from which significant evil flows: it breaks scope by definition and bakes in a specific cardinality (typically 1) that creates havoc in future designs. Examples: mouse, keyboard, window, screen, desktop, CPU, thread of execution, directory, disk, etc. All of those have been crafted so deeply into architectures that the breakage continues to this day. If something is architecturally designed with a perfectly artificial limit due to the lack of imagination of the architect, it will eventually break.
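A hedged illustration of the cardinality problem, in Python with invented class names: the singleton hard-codes "exactly one," which is exactly what breaks the day a second monitor, disk, or CPU shows up.

```python
class ScreenSingleton:
    """GoF-style singleton: cardinality of one is baked into the design."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance             # every "new" screen is the same screen

class Screen:
    """Plain class: cardinality is the caller's decision, not the author's."""
    def __init__(self, name: str):
        self.name = name

a = ScreenSingleton()
b = ScreenSingleton()
print(a is b)                            # True: a second display simply cannot exist

monitors = [Screen("primary"), Screen("secondary")]
print(len(monitors))                     # 2: no baked-in limit to outgrow
```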
We should have something akin to an architectural cloud, whereby implementation and architecture are so deeply separated that scaling out to address spaces in the trillions of yottabytes, and well beyond, is no problem.
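Back-of-the-envelope, assuming 1 yottabyte = 10**24 bytes: a trillion yottabytes only takes about 120 address bits to specify. Specifying it costs the architecture nothing, even if no implementation can build it yet.

```python
target_bytes = 10**12 * 10**24           # a trillion yottabytes (assuming 1 YB = 10**24 bytes)
bits_needed = target_bytes.bit_length()  # bits required to give every byte its own address
print(bits_needed)                       # 120: trivial to specify, regardless of what can be built
```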
There has been, in the past, a frighteningly moronic argument that we won't ever need addressing beyond a certain point because it would exceed the number of particles in the universe. That made sense to too many people for my comfort. That which we can specify is effectively without bound. If your mathematical microscope is powerful enough, you can count more points than there are particles in the universe between 0 and 1, between 0 and 0.1, between 0 and 0.000000000001 ... carve it as fine as you please and there are always more points there. There is a simple counting argument for things like this: if all the particles in the universe number X and I wish to specify X+1, I need an address space larger than X, and the argument can be repeated ad infinitum. We don't have a rule that the counting numbers stop at a googolplex, because such a rule would make no sense. All the artificial limits in the computing architecture universe make no more sense than declaring a particular 'top number' beyond which you have to stop counting.
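Both counting arguments are trivial to demonstrate with unbounded integers and exact fractions; the numbers below are arbitrary stand-ins, not measurements.

```python
from fractions import Fraction

# 1) The X+1 argument: for any claimed ceiling X, X + 1 is still specifiable.
X = 10**80                               # rough "particles in the universe" figure often quoted
print(X + 1 > X)                         # True, and you can repeat this forever

# 2) Carving (0, 1) as fine as you please: bisection never runs out of points.
point = Fraction(1, 2)
for _ in range(200):
    point = point / 2
print(point)                             # 1/2**201, still an exact, nameable point above 0
```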
</rant>