Top CompSci boffins name the architectures we'll need in 2030

The International Symposium on Computer Architecture has revealed the five architectural challenges it thinks computer science needs to solve to meet the demands of the year 2030. Its recommendations, distilled from the Architecture 2030 Workshop at June's ISCA in Korea and available here, draw on the contributions of …

  1. Anonymous Coward
    Anonymous Coward

    Carbon nanotube processors in a stack with carbon nanotube NVRAM would be a great start.

    1. Anonymous Coward
      Joke

      Do you want an AI that will kill us all?

      Because that's how you get an AI that will kill us all!

  2. Anonymous Coward
    Anonymous Coward

    3D integration in silicon

    “shortening interconnects by routing in three dimensions…”

    I think there may be a couple of potential problems with this idea.

    The first is intrinsic to highly 3D silicon: as the number of layers increases to build up the depth, the device becomes harder to cool, because the heat produced by the inner layers has to be routed out through the outer layers.

    The second is that interconnects can only be shortened if they take a direct, straight-line route between layers. Such a route passes obliquely between and through layers, and any oblique interconnect occupies a significantly greater area on both the layers it joins and every layer it passes through.
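
    A toy model makes the trade-off concrete: folding a die into stacked tiers shrinks the horizontal corner-to-corner wire roughly with the square root of the layer count, at the cost of one vertical hop per tier crossed. The function and its `layer_pitch_mm` figure below are illustrative assumptions, not numbers from the workshop report.

```python
import math

def worst_case_wire(area_mm2: float, layers: int,
                    layer_pitch_mm: float = 0.01) -> float:
    """Rough worst-case Manhattan wirelength (mm) for a die of the given
    active area folded into `layers` stacked tiers (toy model only).
    Horizontal part: corner-to-corner on a square tier of area/layers.
    Vertical part: one inter-tier hop per layer crossed."""
    side = math.sqrt(area_mm2 / layers)
    return 2 * side + (layers - 1) * layer_pitch_mm
```

    On this model a 100 mm² die goes from a 20 mm worst-case wire in one layer to about 10 mm in four, which is the attraction; the comment above is pointing out that real oblique routes and cooling eat into that win.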

  3. Doctor Syntax Silver badge
    Devil

    "Make designing special-purpose hardware as easy as writing software"

    Patch Tuesday for hardware.

    1. Preston Munchensonton
      Terminator

      And worse (for some), it provides an opportunity to outsource. No need to pay those high wages for hardware designers if you can get a code jockey to put something together on the cheap.

  4. Anonymous Coward
    IT Angle

    Virtualised architectures spanning the clouds

    As someone who is pleasantly surprised every time his 'computer' successfully boots, I would have this to say: wouldn't a virtualised solution spanning architectures in the clouds be intrinsically unstable? Also, is a 'workshop' like a conference, only with no one in charge imposing their autocratic opinions on the attendees?

  5. rjf

    Future of Computer Science is thus Electronic Engineering

    Sounds like the future is going to hark back to the past. In recent years computer scientists have tended to ignore hardware issues, saying that everything can be abstracted away from the hardware, but the days of copious, effectively-free processing cycles are ending.

  6. Ken Hagan Gold badge

    Yawn

    I could have written that list 20 years ago. Come to think of it, I'm pretty sure I *read* that list 20 years ago. Curiously enough, mere demand does not conjure supply out of a unicorn's butt, so we're still waiting for them. Yes, they'd be nice, but unless you have some evidence that these old problems are newly solvable, there's no news here.

  7. Mike 16 Silver badge

    As reliable as software?

    Over my career, I have _so_ often had to deal with faulty hardware that came down to faulty Verilog (or VHDL, or SystemC) that I shudder to think how this turns out. Even when the "compile" (from text to physical silicon) is slow and expensive, and one would expect a modicum of care to be taken, somehow the "Heck, it's just software" attitude seems to percolate into the physical device. Couple that with simulation/verification that emphasises "if we give it the right inputs, it produces the expected output" to the near exclusion of "what could go wrong if we send it slightly odd inputs?", and that's just the frosting on the cake.

  8. SealTeam6

    Cloud is sexy

    Cloud is sexy right now but no one mentions the terrible latency. I would not put a CPU architecture in the cloud because of that.
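
    For a sense of scale, here is a back-of-the-envelope sketch of how many cycles a core idles away per network round trip; the propagation speed and 3 GHz clock are assumptions of mine, not figures from the comment.

```python
def cycles_lost_per_rtt(distance_km: float, clock_ghz: float = 3.0) -> float:
    """Toy estimate of CPU cycles spent waiting for one network round trip.
    Assumes signal propagation at ~2/3 c in fibre and ignores switching,
    queueing and protocol overheads, so it is a lower bound."""
    c_fibre_km_per_s = 200_000.0          # roughly 2e5 km/s in glass
    rtt_s = 2 * distance_km / c_fibre_km_per_s
    return rtt_s * clock_ghz * 1e9
```

    At 100 km that is already about three million cycles per round trip, before any real-world overheads, which is the latency problem in a nutshell.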

  9. swm Silver badge

    Software Designers Designing Hardware?

    As a hardware designer (among other things) I've noticed that software people, when designing hardware, can't think in parallel. They sequence operations that could be done in parallel, and they add extra logic rather than simply ignoring the unwanted results of calculations performed in parallel.
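
    The hardware idiom being described — evaluate every candidate result concurrently and let a mux discard the losers — can be sketched in software, with names of my own choosing purely for illustration:

```python
def alu_slice(a: int, b: int, sel: bool) -> int:
    """Hardware-style thinking: both candidate results exist 'in parallel'
    (in silicon, the adder and subtractor both settle every cycle); a mux
    (the conditional) selects one and the other is simply ignored, with
    no extra sequencing logic to avoid computing it."""
    added = a + b     # adder output, always produced
    subbed = a - b    # subtractor output, always produced
    return added if sel else subbed
```

    The software reflex is to branch first and compute only one path; in hardware the gates are there either way, so computing both and selecting is the cheaper design.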

    Another problem is the continual triumph of the 8080 architecture through the decades. Given a fast core computer with many registers and some appropriate caching and register renaming and a clever multi-core architecture, a hardware system could be developed that could run specialized firmware at blinding speed.

    Adding various modules for floating-point, Galois field arithmetic etc. would be a plus.
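
    As an example of what such a Galois-field module would compute, here is GF(2^8) multiplication with the AES reduction polynomial (the choice of polynomial is mine, purely illustrative) — exactly the sort of bit-level operation that is a handful of XOR gates in hardware but a shift-and-reduce loop in software:

```python
def gf256_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B), via carry-less shift-and-add."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a                 # "add" is XOR in GF(2^8)
        carry = a & 0x80
        a = (a << 1) & 0xFF        # multiply a by x
        if carry:
            a ^= 0x1B              # reduce modulo the field polynomial
        b >>= 1
    return p
```

    For instance, gf256_mul(0x53, 0xCA) yields 0x01 — the classic AES multiplicative-inverse pair.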

    Designing good hardware at the gate level is hard.
