200 ps switching time is NOT equivalent to 5 GHz
A processor clocked at 5 GHz switches a whole chain of gates within each cycle, not just one. The "3nm"-class transistors in the latest CPUs switch at well over 100 GHz.
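Back-of-the-envelope, with an assumed figure of roughly 20 gate delays of logic per pipeline stage (my number for illustration, not the article's):

# Why a 200 ps gate is nowhere near enough for a 5 GHz clock.
# Assumption: ~20 gate delays of logic per pipeline stage (varies by design).
clock_hz = 5e9
cycle_s = 1 / clock_hz                       # 200 ps cycle at 5 GHz
gate_delays_per_stage = 20                   # illustrative figure
budget_s = cycle_s / gate_delays_per_stage
print(f"cycle time:      {cycle_s * 1e12:.0f} ps")    # 200 ps
print(f"per-gate budget: {budget_s * 1e12:.0f} ps")   # 10 ps
print(f"implied gate rate: {1 / budget_s / 1e9:.0f} GHz")  # 100 GHz

So a 200 ps gate gets you nowhere near a 5 GHz machine; the gates need to be an order of magnitude or two faster than the cycle time.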
Gaze into the temporal distance and you might spot the end of the age of silicon looming somewhere out there, as a research team at Penn State University claims to have built the first working CMOS computer entirely from two-dimensional materials. The team, led by Pennsylvania State University engineering science professor …
So, firstly, this is not 2D. The tracks pass over each other, so the equivalent macroscopic topology is a double-sided circuit board of the kind which was ubiquitous in the 1980s. The first CPUs used only single-sided boards, genuinely 2D and not mere marketing-speak 2D, with the first microprocessors being similarly restricted on-chip.
Group III-V (silicon-free) semiconductors were used to create high-speed devices for specialist applications.
So here we are, back to 1970/80s technology. I am not entirely clear how a selenium compound is more commercially viable than, say, gallium-aluminium-arsenide. But then, I seem to have lost my coloured pencils.
P.S. But if it can run Moon Lander, I'm in!
The current speed is rather immaterial. A totally new design with new materials will not be as fast or efficient as the chips produced by the fully optimized silicon tool chain.
There seems to be ample room for improvement and optimization. And one-atom-thick channels already sound like a major achievement.
The single instruction that makes up this computer's instruction set turns out to be reverse subtract and skip if borrow (RSSB).
Not something I would be eager to program in. It makes assembler look positively user-friendly, high-level, and abstract. You can always build a C compiler for it, but I suspect it would be easy to find more CMOS-efficient instruction sets.
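For anyone curious what that actually does, here is a minimal emulator sketch based on my reading of the esolangs description; RSSB write-ups differ on details such as the borrow test and special addresses, so treat the conventions here as assumptions:

# Minimal RSSB sketch: each "instruction" is just a memory address x.
# Convention assumed here: acc = mem[x] - acc, the result is stored to
# both acc and mem[x], and the next instruction is skipped on borrow
# (i.e. when the subtraction goes negative). No bounds checking on x.
def rssb_run(mem, max_steps=10_000):
    acc, ip = 0, 0
    for _ in range(max_steps):
        if not 0 <= ip < len(mem):
            break
        x = mem[ip]
        ip += 1
        result = mem[x] - acc        # "reverse" subtract: operand minus acc
        acc = mem[x] = result        # result lands in both acc and memory
        if result < 0:               # borrow: skip the next instruction
            ip += 1
    return mem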
Cool website (esolangs.org)! I had no idea there were so many OISCs, or that there was a notion of a Turing tarpit involving lambda calculus and Turing machines ... Yesterday's JSF*ck does fit right in.
I have long had an interest in OISC and minimal-RISC architectures, and even have a relay-only SUBLEQ design gathering dust somewhere. It's unlikely it would reach even 25 kHz, and I have never tried to build it.
The great thing about OISCs is their simplicity, particularly if you are coding an emulator (see the sketch below) or taking your first steps into FPGA, real silicon, or, as here, non-silicon. Some OISCs have quite good compilers, so getting bogged down in the insanity of machine coding can be left to those who choose it, or who are fascinated by compilers, code generation, and such things.
There is plenty of fun to be had for those who have interests in CPU architectures and their ecosystems.
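If anyone fancies dipping a toe in, a SUBLEQ emulator really is just a few lines. A minimal sketch using the classic convention (mem[b] -= mem[a]; branch to c if the result is <= 0; a negative branch target halts):

# Minimal SUBLEQ emulator: one instruction, three operands per step.
def subleq_run(mem):
    ip = 0
    while 0 <= ip <= len(mem) - 3:
        a, b, c = mem[ip], mem[ip + 1], mem[ip + 2]
        mem[b] -= mem[a]                      # the one and only operation
        ip = c if mem[b] <= 0 else ip + 3     # branch if result non-positive
        if ip < 0:                            # negative target = halt
            break
    return mem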
Unfortunately, the downside of having one (or few) instructions is that it takes more of them to do anything, so you need more memory and higher clock speeds; moving towards CISC is the more practical approach.
I always think of hardware as a kind of function/subroutine implementation.
Just as you implement some functions in "lower level" languages, some in ASM, and some in microcode, you can implement some functions directly in hardware.
Where the right cut-off lies between instructions and electronics depends on the speed/cost trade-offs of the available hardware.
So, you could take a single-instruction computer and implement a layer in hardware. But then you end up with a CISC, and where is the fun in that?
So I leave that to the actual computer builders.
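To make the layering point concrete: you can treat SUBLEQ triples as microcode and synthesise "instructions" as macros above them. A sketch, using the emulator convention from the earlier post, where Z is the address of a scratch cell assumed to hold zero and pc is where the macro will sit in memory:

# ADD as a three-triple SUBLEQ macro: mem[b] += mem[a].
# Each triple's branch target is simply the next triple, so execution
# falls through whether or not the branch condition fires.
def add_macro(a, b, Z, pc):
    return [a, Z, pc + 3,    # mem[Z] -= mem[a]  (Z was 0, becomes -mem[a])
            Z, b, pc + 6,    # mem[b] -= mem[Z]  (i.e. mem[b] += mem[a])
            Z, Z, pc + 9]    # mem[Z] -= mem[Z]  (restore Z to zero)

Stack enough macros like that and you have, in effect, microcoded your way back to a conventional ISA, which is exactly the OISC-to-CISC slope described above.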
The post-silicon age won't happen for a long while, as silicon has the great advantage of being super-abundant (the beaches are full of it) and, even with the refinement and semiconductor doping involved, I'd bet it'll still be cheaper than the materials used in this.
However, all credit to the researchers as there are use cases where silicon is sub-optimal and alternative materials are required.
Good question. The answer I have found is that unlike silicon, which loses efficiency at nanoscale sizes, 2D materials maintain excellent electronic properties at atomic thickness, offering potential for smaller, more energy-efficient, and flexible electronics.
The problem that tends to kill these types of advances is that, as they progress towards viability, the huge R&D budget allocated to improving existing 3D silicon-based chips means the incumbents get better faster, so the moment for the new kid on the block never arrives.
Vapor Deposition. They've spray-painted this onto a passive substrate.
Personally, I think that the main advantage is that a university with access to a vapor deposition machine can run this as a (CPU / RF / Radiation hardened / High temperature) hybrid circuit design exercise, but nothing wrong with that: it may have an application someday.
You ask if the OP is a troll, whilst spelling sulfur/sulphur* as "sulpher", and silicon as "Silicone". Also, the Latin word for gold, from which its symbol Au is derived, is aurum. "Auric" was the name of the villain in the third James Bond film.
If you're going to nitpick, it would probably bear fruit to proofread your own post.
*The "tradiational" spelling being with a "ph", and the IUPAC recognised one with an "f". We'll allow our US cousins that one, if only they'll start spelling "aluminium" correctly.
Must be the ultimate RISC processor.
Well yes, exactly that. It's why they appeal to some of us.
Deciding which One Instruction Computer is best (which has the best one instruction) is left as an exercise for anyone who wants to go down that rabbit hole.
My pet fascination is higher-level: how best to encode a RISC ISA to support immediate loads of up to 32-bit integers (or larger), striking the best balance of speed, memory, and implementation complexity.
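For what it's worth, the answer I keep coming back to is the RISC-V one: split a 32-bit constant across two instructions, LUI for the upper 20 bits and ADDI for the low 12. The fiddly part is that ADDI sign-extends its immediate, so the upper half needs a +1 adjustment whenever bit 11 of the constant is set. A sketch of just the split (illustrative, not a full assembler):

# Split a 32-bit constant for a lui/addi pair, RISC-V style.
def split_imm32(value):
    value &= 0xFFFFFFFF
    lo = ((value & 0xFFF) ^ 0x800) - 0x800   # sign-extend the low 12 bits
    hi = ((value - lo) >> 12) & 0xFFFFF      # upper 20 bits, compensated
    assert ((hi << 12) + lo) & 0xFFFFFFFF == value
    return hi, lo                            # lui rd, hi ; addi rd, rd, lo

print(split_imm32(0xDEADBEEF))   # (912092, -273): hi=0xDEADC, lo=-0x111

Two fixed-width instructions for any 32-bit immediate is one point on the speed/memory/complexity curve; ARM-style rotated immediates and literal pools are others.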