Carbon nanotube processors in a stack with carbon nanotube nvram would be a great start.
The International Symposium on Computer Architecture has revealed the five architectural challenges it thinks computer science needs to solve to meet the demands of the year 2030. Its recommendations, distilled from the Architecture 2030 Workshop at June's ISCA in Korea and available here, draw on the contributions of …
Wednesday 14th December 2016 11:28 GMT Anonymous Coward
3D integration in silicon
“shortening interconnects by routing in three dimensions..."
I think that there may be a couple of potential problems with this idea.
The first problem is intrinsic to highly 3D silicon: as the number of layers increases to build up the depth, the device becomes harder to cool, because the heat produced by the inner layers has to be routed out through the outer layers.
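The cooling problem above can be sketched with a toy series-resistance model (illustrative numbers and function name, not from the comment): each die layer adds thermal resistance between the inner layers and the heatsink, and the heat of every deeper layer must cross it.

```python
# Toy model: each layer is a thermal resistance in series toward the
# heatsink. The temperature rise of a layer is the sum, over every layer
# between it and the heatsink, of (heat crossing that layer) * resistance.
def junction_temps(n_layers, power_per_layer_w=10.0, r_layer_k_per_w=0.2):
    """Temperature rise (K) above ambient per layer; index 0 = outermost."""
    temps = []
    t = 0.0
    for i in range(n_layers):
        # heat from this layer plus all deeper layers crosses this boundary
        heat_crossing_w = power_per_layer_w * (n_layers - i)
        t += heat_crossing_w * r_layer_k_per_w
        temps.append(t)
    return temps

print(junction_temps(2)[-1])  # innermost rise for a 2-layer stack: 6.0 K
print(junction_temps(8)[-1])  # 8-layer stack: 72.0 K - roughly quadratic growth
```

Even this crude model shows why stacking is hard: the innermost layer's temperature rise grows roughly with the square of the layer count, not linearly.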
The second problem is that the interconnects can only be shortened if they take a direct, straight-line route between layers. Such a route passes obliquely between and through layers, and an oblique interconnect occupies significantly more area, both on the layers it joins and on every layer it passes through.
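The trade-off can be made concrete with a small geometric sketch (hypothetical function and numbers, assumed for illustration): the oblique run is shorter than Manhattan routing through a vertical via, but it sweeps laterally across every intermediate layer, consuming routing track on each one.

```python
import math

def route_tradeoff(dx, dy, dz):
    """Compare an oblique straight-line interconnect with Manhattan routing.

    dx, dy: in-plane offsets between endpoints; dz: layers crossed.
    Returns (oblique length, Manhattan length, lateral sweep per layer).
    """
    oblique = math.sqrt(dx * dx + dy * dy + dz * dz)  # direct 3D run
    manhattan = dx + dy + dz                          # in-plane wires + vertical via
    lateral_per_layer = math.hypot(dx, dy) / dz       # track consumed on each layer
    return oblique, manhattan, lateral_per_layer

o, m, step = route_tradeoff(3, 4, 12)
print(o, m, step)  # 13.0 19 0.4166... : shorter wire, but every layer pays
```

The oblique wire here is about a third shorter, yet unlike the via it blocks a nonzero strip of routing area on all twelve layers it crosses, which is the commenter's point.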
Wednesday 14th December 2016 12:17 GMT Doctor Syntax
Wednesday 14th December 2016 19:36 GMT Anonymous Coward
Virtualised architectures spanning the clouds
As someone who is pleasantly surprised every time his 'computer' successfully boots, I would say this: wouldn't a virtualised solution spanning architectures in the clouds be intrinsically unstable? Also, is a 'workshop' like a conference, only with no one in charge imposing their autocratic opinions on the attendees?
Wednesday 14th December 2016 19:56 GMT rjf
Future of Computer Science is thus Electronic Engineering
Sounds like the future is going to hark back to the past. In recent years computer scientists have tended to ignore hardware issues, saying that everything can be abstracted away from the hardware, but the days of copious, effectively-free processing cycles are ending.
Wednesday 14th December 2016 21:43 GMT Ken Hagan
I could have written that list 20 years ago. Come to think of it, I'm pretty sure I *read* that list 20 years ago. Curiously enough, mere demand does not conjure supply out of a unicorn's butt, so we're still waiting for them. Yes, they'd be nice, but unless you have some evidence that these old problems are newly solvable, there's no news here.
Thursday 15th December 2016 00:27 GMT Mike 16
As reliable as software?
Over my career, I have _so_ often had to deal with faulty hardware that came down to faulty Verilog (or VHDL, or SystemC) that I shudder to think how this turns out. Even when the "compile" (from text to physical silicon) is slow and expensive, and one would expect a modicum of care to be taken, somehow the "Heck, it's just software" attitude seems to percolate into the physical device. Couple that with simulation/verification that emphasises "if we give it the right inputs, it produces the expected output" to the near exclusion of "what could go wrong if we send it slightly odd inputs?", and that's just the frosting on the cake.
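The two verification styles the commenter contrasts can be sketched in a few lines (the saturating adder and its name are a made-up stand-in for a small hardware block, not from the comment):

```python
def saturating_add_u8(a, b):
    """8-bit saturating adder: clamps the sum at 255 instead of wrapping."""
    s = a + b
    return 255 if s > 255 else s

# Directed, happy-path check - "right inputs, expected output":
assert saturating_add_u8(1, 2) == 3

# Odd-input sweep over the corner values where hardware bugs actually hide:
for a in (0, 1, 127, 128, 254, 255):
    for b in (0, 1, 127, 128, 254, 255):
        out = saturating_add_u8(a, b)
        assert 0 <= out <= 255            # never overflows its width
        assert out == min(a + b, 255)     # matches the reference model
```

The first assert is the kind of test that passes easily; the sweep is the "slightly odd inputs" discipline, checking every result against an independent reference model rather than a single expected value.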
Thursday 15th December 2016 20:08 GMT swm
Software Designers Designing Hardware?
As a hardware designer (among other things), I've noticed that software people, when designing hardware, can't think in parallel. They sequence operations that could be done in parallel, and add extra logic rather than simply ignoring the results (calculated in parallel) of some calculations.
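The hardware idiom being described can be sketched in software terms (hypothetical example, not from the comment): evaluate both candidate results unconditionally, as two separate logic cones would in silicon, and let a mux select one, discarding the other for free.

```python
def mux_style_abs_diff(a, b):
    """Absolute difference, written the way a hardware designer thinks."""
    # Both subtractions are computed regardless of the comparison, like two
    # adders whose outputs feed a multiplexer.
    forward = a - b
    backward = b - a
    select = a >= b          # the mux control signal
    return forward if select else backward

# A sequential, software-minded version would branch first and compute only
# one side; in hardware that ordering buys nothing, since both logic cones
# exist anyway and the unused result is simply ignored.
print(mux_style_abs_diff(7, 3))  # 4
print(mux_style_abs_diff(3, 7))  # 4
```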
Another problem is the continual triumph of the 8080 architecture through the decades. Given a fast core with many registers, appropriate caching and register renaming, and a clever multi-core architecture, a hardware system could be developed to run specialized firmware at blinding speed.
Adding various modules for floating-point, Galois field arithmetic etc. would be a plus.
Designing good hardware at the gate level is hard.