How many Rhode Islands to a Wales?
Extreme ultraviolet litho: Extremely late and can't even save Moore's Law
To save Moore's Law, not only will the semiconductor industry need to move to not-yet-ready-for-prime-time extreme ultraviolet lithography (EUV), it will also have to make a costly switch from the current 300mm wafer manufacturing standard to 450mm. So said Mark Thirsk, managing partner of semiconductor industry consultancy …
-
Friday 18th October 2013 09:48 GMT Phil O'Sophical
Moore's law
During an interview last week one of Intel's senior fellows, Justin Rattner, reckoned that Moore's law was already dead as far as simple silicon improvements go. He suggested that we need to look at the performance of the CPU/system as a whole, where things like multicore and multi-thread architectures can keep the same level of improvements coming.
-
Sunday 20th October 2013 20:33 GMT Anonymous Coward
Re: Moore's law
Or perhaps put more effort into language design so that programs don't have so much bloat. Moore's Law has enabled us to do the equivalent of use a 44 tonne truck to haul a matchbox, because 44 tonne trucks are so cheap. It would be interesting to have a handle on just how many CPU cycles nowadays basically accomplish nothing but move data from one abstraction to another.
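To put a crude number on that, here's a toy Python measurement - the Envelope wrapper layers are invented purely for illustration, not taken from any real codebase - of how much time goes into unwrapping abstractions rather than doing the actual work:

    import timeit

    class Envelope:
        """A wrapper that does nothing but hold a payload - purely illustrative."""
        def __init__(self, payload):
            self.payload = payload

    raw = 42
    wrapped = Envelope(Envelope(Envelope(raw)))   # three layers of indirection

    # a million repetitions of the same trivial addition, with and without unwrapping
    direct = timeit.timeit("raw + 1", globals=globals(), number=1_000_000)
    layered = timeit.timeit("wrapped.payload.payload.payload + 1",
                            globals=globals(), number=1_000_000)
    print(f"direct: {direct:.3f}s   through 3 wrappers: {layered:.3f}s")

The absolute numbers depend on the machine; the point is simply that the layered version spends most of its cycles ferrying the same value from one box to another.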
-
Wednesday 23rd October 2013 14:01 GMT Michael Wojcik
Re: Moore's law
things like multicore and multi-thread architectures can keep the same level of improvements coming
Except they can't. See the sophisticated study on "dark silicon" by Esmaeilzadeh et al., "Power Challenges May End the Multicore Era", in CACM 56.2 (2013). Even under optimistic assumptions about the combination of device scaling (traditional process shrinkage), core scaling (performance per core), and multicore scaling (the authors consider eight different multicore approaches), even highly parallel workloads show less than an 8x performance improvement over the next 10 years.
Of course predictions in the IT business are notoriously unreliable, and "Moore's Law" is one of the few that has held up well. But I haven't seen any study with anywhere close to this sophisticated a methodology that suggests we'll do better over the next 10 years. Even for highly-parallel workloads we're not going to see all that much more improvement from multicore and multithread.
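For a rough feel for why power, not core count, becomes the limiter, here is an illustrative back-of-the-envelope sketch in Python. The per-generation factors are generic post-Dennard assumptions of mine, not numbers taken from the paper:

    transistors = 1.0   # relative transistor count on the chip
    power_each = 1.0    # relative switching power per transistor
    budget = 1.0        # fixed chip-level power budget
    for node in range(1, 6):                # five hypothetical process generations
        transistors *= 2.0                  # ~2x devices per node (Moore's Law)
        power_each *= 0.65                  # per-device power falls, but slower than 2x
        active = min(1.0, budget / (transistors * power_each))
        print(f"node {node}: about {active:.0%} of the chip can switch at full speed")

Even with made-up numbers the trend is the same as in the study: each generation, a smaller fraction of the silicon can actually be powered at once.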
-
-
Friday 18th October 2013 09:52 GMT Steven Jones
Slipped place
Rhode Island is 3,140sq km in area, so 1.99 nano-Rhode Islands would be 6.25sq m. A 450mm diameter wafer, though, is only about 0.16sq m, which works out at roughly 0.05 nano-Rhode Islands...
I'm inclined to think that if the inhabitants of Rhode Island were only to reclaim another 2sq km from the sea, they could rename their Island kilo-Pi. That would then become a transcendental place to live.
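For anyone who wants to check the arithmetic, a quick Python sanity check (assuming Rhode Island at 3,140 sq km and a full 450mm circular wafer):

    import math

    rhode_island_m2 = 3_140 * 1_000_000          # 3,140 sq km in square metres
    print(1.99e-9 * rhode_island_m2)             # ~6.25 sq m, as stated above
    wafer_m2 = math.pi * (0.450 / 2) ** 2        # area of a 450mm-diameter circle
    print(wafer_m2)                              # ~0.16 sq m
    print(wafer_m2 / rhode_island_m2 / 1e-9)     # ~0.05 nano-Rhode Islands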
-
This post has been deleted by its author
-
-
Monday 21st October 2013 09:29 GMT DropBear
Re: So wafer sizes moving up from "carpet tile" to "family pizza" size?
Then again, there's the teeny tiny inconvenience that you CAN'T make an octa-core processor from eight discrete ones and expect it to work like the integrated one - nuisances like "signal propagation time" and such see to that.
-
Monday 21st October 2013 11:36 GMT John 172
Re: So wafer sizes moving up from "carpet tile" to "family pizza" size?
@DropBear "Then again, there's the teeny tiny inconvenience that you CAN'T make an octa-core processor from eight discrete ones and expect it to work like the integrated one - nuisances like "signal propagation time" and such see to that."
Actually you can - look at the latest high-end Xilinx FPGAs; they're composed of multiple die mounted on a silicon interconnect layer that permits many thousands of etched interconnects to be formed between the die. They've done it to reduce costs and increase yields of the high-end devices. The same technology could be applied to CPUs as they stand now, for the same reasons.
-
Monday 21st October 2013 12:07 GMT Squeezer
Re: So wafer sizes moving up from "carpet tile" to "family pizza" size?
@John172 -- you might also want to look at the price of these Xilinx devices; they've gone this way so they can reduce NRE costs by building a whole family of devices out of a few predefined "slice" chips, and get acceptable yield on what would otherwise be enormous, almost-zero-yield chips.
Silicon interposers are not cheap yet, especially with the extra processing costs and issues like how to test the die and interposer before assembly, which involves things like bonding/debonding to sacrificial carriers.
In other words, just because McLaren can afford to use a technology in a road car, don't assume Ford can...
-
-
-
-
Friday 18th October 2013 23:27 GMT asdf
450mm
Having been in the industry for the quite traumatic transition from 200mm to 300mm, which took nearly a decade to finish, I'd say 2018 may even be optimistic. The transition to 450mm won't be as bad, but it will still require new fab tools. The issue is that the toolmaker companies have a large outlay to make to convert their tools to 450mm or make new models. The ROI for the conversion to 300mm took longer than anyone expected, if I remember right (the dot-com crash didn't help). Once bitten and all that.
-
Saturday 19th October 2013 10:04 GMT Tom 7
It's not a problem - it really isn't.
We have more computing power than we need - we can easily sidestep any hiatus in Moore's law with simple multiprocessing. Though a better approach might be not to put a little bit of data in a letter, then stick the letter in an envelope, then put that envelope in a filing cabinet, then put that filing cabinet in a container, then put the container on a container ship to transport anywhere - and then expect to be able to manage that little bit of data efficiently by waving container ships around.
-
Monday 21st October 2013 08:45 GMT Pascal Monett
"Simple" multiprocessing ?
I doubt very much that anything about multiprocessing can be "simple".
But I do believe that, in terms of computing power, we are benefitting much more today from enhanced bus speeds and architecture, increased CPU cache size and better north bridge concepts.
Not to mention that SATA disks have removed the ever-annoying IDE bottleneck and brought improved data throughput as well, which is probably the major element in improving all-round system performance.
So yes, we have "enough" computing power at the moment. But I always welcome more power under the hood.
-
-
Sunday 20th October 2013 12:46 GMT Anomalous Cowshed
A bit of history
The year is 2113 and Earth is still desperately trying to cling to Moore's law.
Intel yesterday unveiled its new wafer, far out in deep space, where the 100 km2 monster, which weighs 30,000 tons, was apparently grown from a single crystal of silicon. A big one, admittedly. "The things we do so that people can continue playing Crysis on their Win 877.5 computers," sighed the Chairman of the company, wearing a space suit and wielding a pair of scissors with which he was about to start cutting the wafer into tiny silicon chips...
-
Monday 21st October 2013 00:00 GMT HippyFreetard
Even Moore Than Befoore
Obviously, there's a limit to how few atoms wide we can make a wire, or a transistor. It'll be fun to see what happens, and whether it fits the Tomorrow's World-inspired visions I have of the near future.
Bloat may not be a coder's problem. Sure, there's tons of bloated software out there, but all it would take is for someone to come up with a cleverer compiler that could scan for bloat. That's a much simpler solution than teaching everyone C and memory handling (though they should learn that stuff), and it will happen anyway. Also, there are so many virtual machines these days - something on something on something. Maybe we need a sort of total system recompiler? Install your stuff, and the recompiler runs through it, blobbing together all the interpreted stuff and repeated transfers.
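As a tiny illustration of a compiler already quietly removing bloat - nothing as grand as a total system recompiler - CPython's own byte-code compiler folds constant expressions away before the program ever runs:

    import dis

    def bloaty():
        # written as an "expensive" expression...
        return 2 * 3 + 4

    # ...but the disassembly shows the compiler has already folded it down to the
    # constant 10, so the redundant arithmetic never reaches the CPU at runtime.
    dis.dis(bloaty)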
Multicore is already standard on PCs, but when will we see the first commercial supercomputer-on-a-chip? Parallella.org looks quite exciting. I think Intel made a special one a couple of years back, so we may see that sort of stuff everywhere.
Also, there's clockless computing (I think it's called asynchronous, too), where a calculation takes only as long as the signals need to reach the end of the circuit. It needs extra wiring for the "done" signal that has to be sent in place of the clock tick, but you might be able to get more calculations done per second, so it could work out better. It'll start as a small patch inside the ALU and GPU, and will spread from there, like a silicon mould. Couple it with flash or e-ink style memory (where only a state change requires power) and booting a PC might not be much slower than switching on a light bulb. Extreme overclocking can be achieved with liquid nitrogen - more for datacentres and extreme hobbyists than home users, maybe.
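As a very loose software analogy of that handshake idea (threads standing in for circuit stages - this is not real asynchronous logic, just a sketch of the "done" signal replacing the clock tick):

    import threading

    done = threading.Event()   # stands in for the "done" wire
    result = {}

    def adder_stage(a, b):
        # the "combinational logic" finishes whenever it finishes - no clock involved
        result["sum"] = a + b
        done.set()             # raise the completion signal

    stage = threading.Thread(target=adder_stage, args=(2, 3))
    stage.start()
    done.wait()                # the next stage proceeds as soon as the data is ready,
    print(result["sum"])       # not on the next clock edge
    stage.join()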
I'm still hoping for the time when I can download the latest chip, and "burn" it to a piece of blank silicon...
-
Monday 21st October 2013 08:49 GMT Pascal Monett
Re: a cleverer compiler that could scan for bloat
I have a better solution: it's called a cluebat. It is made of very dense indignation and it is used to bash lazy coders over the head until they understand that it is the programmer who is supposed to work hard and clean up his code, not the compiler.
-
Monday 21st October 2013 09:38 GMT Tom 7
Re: a cleverer compiler that could scan for bloat
Or it could be called experience. "Lazy coders" is not the correct description - "time-poor" would be a better one.
But with a proper 20-year software engineering training, most programmers would write efficient code with a very small amount of redundant repetition in it. I've heard that someone sat down and took the time to replace all the functionally identical duplicated parts in Windows 95 and got it down to less than 1MByte of code. Now that would be easy to microcode into a processor, which would speed things up a tad, and if the coders had useful knowledge of the 'microcode', things would hum along nicely.
Fortunately we have a thousand different languages and a million inexperienced programmers hacking out code that works 'well enough for the average user', so we probably won't ever see it happen, and all those managers can keep making money passing the buck.
-
Monday 21st October 2013 09:50 GMT DropBear
Re: a cleverer compiler that could scan for bloat
Based on what truly hand-optimized code can do (or, alternatively, on what ye olde hardware was able to accomplish with but a few megahertz as a clock), you're either suggesting programmers today create such sloppy code that it is hundreds- or thousands-fold over-bloated, or you make no sense at all, sorry. I definitely do admit programmers often create unnecessarily sloppy code, but I suggest the true cause of the problem is the programming paradigm in use - and YES, the compilers used too - instead.
Ideally, a programmer's job is to express the concepts that make up the desired piece of software in a form as concise and minimalistic as possible. If they fail to do that because of laziness or lack of training, that's their own fault. If they fail because their deadline allows for nothing but the fastest and least efficient way to do things, it's their bosses' (and the whole damn IT industry's) fault. But if those high-level concepts can only be expressed in modern languages in an inefficient and bloated way, then it's the current programming paradigm's - and, by extension, the compiler's - fault, period. Doubly so when concepts expressed in a fairly elegant way get implemented poorly because that's all the underlying technologies (of which the compiler is a part) are capable of.
-
Monday 21st October 2013 13:04 GMT Tom 7
Re: a cleverer compiler that could scan for bloat
Have you looked at the code around your place of work - I mean really looked into it? As a programmer I'm used to converting ideas to code; I tend to think of designs as constructions of coding methods I'm familiar with. Often you are asked to build something by someone who does not understand coding at all - managers or customers, or a combination of the two - and by the time you've discovered what they really wanted you have a few tens of thousands of lines of code which does the job - and you know you could do it in a few thousand if you were allowed that extra week - but the code is working, and the only way you get to tidy up is on paid gardening leave.
It's not the language's fault, it's not the compiler's fault; it's part of the job that the customer/manager starts off the GIGO process, and they don't understand a simple do_what? loop. It doesn't matter to the customer how elegantly a new language can express a 60-year-old concept (that the 10-year-old language developer thinks is a new concept because they haven't come across it before) when the design process requires 100-year-old design concepts (that have been rehashed by the management consultant equivalent of the 10-year-old language designer) and is managed by someone who can count up to budget.
Coders will write elegant, compact code when they are given the resources to do so - the main one being experience. And by the time they have the experience to do that, they will invariably have got so pissed off with bright young things who know it all and yet know nothing, earning an order of magnitude more, that they will have retired early if they can, or gone off to write a new language that does 90% of what is needed well and fucks up on the 10% you need on Friday afternoon.
-
-
-