Been hearing this...
... for almost 15 years.
Wake me when it really happens.
Moore's Law, which promises exponentially increasing transistor counts due to chip-manufacturing process shrinkage, is about to hit the wall. As Intel Fellow Mark Bohr once told The Reg, "We just plain ran out of atoms." But there's one industry veteran who looks at the reason for the repeal of the semiconductor …
There are such things as atoms, you know - making a silicon transistor smaller than an atom is tricky.
There are also a few other limits, like the speed of light, the uncertainty principle and the second law of thermodynamics. It might be very cool, out-of-the-box thinking to ignore them - but the universe isn't that easily fooled.
Arguably Moore's Law has already ended. The original phrasing was that the most economic number of gates to put on a single chip increases exponentially. That is no longer true: the fab needed to make 14nm parts is so mind-bogglingly expensive that it is cheaper to make 22nm wafers with fewer gates.
I don't know if you are aware, but computing doesn't have to be binary. The universe will not be fooled - but it doesn't have to be binary, the results just have to be consistent. I remember in the early '80s people were playing with multilevel logic, which gave considerably higher computing densities but was a bit error-prone. We now have the technology to fix the errors, and we may finally have the impetus to look further into non-binary computing. It may or may not lead to significant improvements, but if there is demand for it, there may just be the research that finds them.
Unless we just settle into another bloody patent war.
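As an illustration of the idea (a hypothetical sketch, nothing to do with the actual '80s multilevel hardware): balanced ternary is one well-studied non-binary encoding, where each trit carries log2(3) ≈ 1.58 bits, so fewer digits cover the same range than with bits:

```python
# Balanced ternary uses digits -1, 0, +1. Purely illustrative sketch.

def to_balanced_ternary(n):
    """Encode an integer as balanced-ternary digits, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:              # a remainder of 2 becomes -1 plus a carry
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

def from_balanced_ternary(digits):
    """Decode least-significant-first balanced-ternary digits."""
    return sum(d * 3**i for i, d in enumerate(digits))

# 100 decimal needs 7 bits but only 5 trits: [1, 0, -1, 1, 1]
assert from_balanced_ternary(to_balanced_ternary(100)) == 100
```

Note that negation is free (flip every trit) and no sign bit is needed, which is part of why multilevel schemes looked attractive for density.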
When I was designing 5um chips in the early '80s, 2um processes were 'mind-bogglingly expensive', but they happened. Gaining more functionality from a slab of silicon (or whatever) will happen - though using it to process data, rather than to write paper-shaped documents on how you would like to colour that data blue for the next release, might help.
Crying wolf, very true, but... I know it's off topic, but it's just like IPv4 allocation running out: it has run out, yet it still keeps giving with no obvious end in sight?
What do they say about necessity?
Just sayin', I'll get me coat. Off to look for 8 groups of 4 hex digits and some colons (that's not meant to be rude either).
"IPv4" is not "still giving". It's hacked together with NAT to keep it going, which is poor and breaks some required specs for communications (but is OK for most).
Same goes for the comment that computing "does not need to be binary". You could use ternary signalling etc., but I'm not sure it reduces the density problem.
Wake me when it really happens.
Yup, agreed. Actually, only wake me if such improvements actually result in a computer that is faster, instead of being absorbed by UI gimmicks and software frameworks that don't reduce my workload. The fact is that I still have to wait for a computer, instead of enjoying the fact that the machine now has a clock speed measured in GIGAhertz instead of the 4.77 MHz we started with.
The computation power of the human brain - at least a million times more powerful than any desktop computer, running on a fraction of the energy per computation operation - shows that computers can be improved by at least six orders of magnitude, without even trying.
This decade will see a serious ramp-up of efforts to reverse engineer the brain. Such efforts should produce a spectacular array of computing improvements, more than enough to keep Moore's Law alive and kicking.
As to people who think that only a brain can produce brain like computation - if all else fails, that is exactly what will happen. The next generation of computers may need nutrient solution as well as electricity.
A six-orders-of-magnitude increase in anything requires quite a bit of trying. Just increasing assholish behaviour by six orders of magnitude is difficult.
Maybe someday computers will act more like human brains, but not only are they not directly comparable, medical science still has a long way to go to even understand how the brain works. Right now they are still poking it with a sharpened stick, and that's nearly the state of the art in neuroscience.
Until science actually figures out the secrets of the brain, it will be impossible to build an accurate calculation model around one. It would just be a guess and likely not a good one at that.
This post has been deleted by its author
More powerful? Well, that all depends on the measurement.
'Speed' and 'power' are well-defined terms. The problem is that to use such terms in such a way that they are useful, you have to be very careful to define exactly what you are measuring.
If you said that a Ferrari was faster than a bus you'd think you stood on pretty solid ground. And you'd be correct if you were measuring how quickly each vehicle could carry the driver to a given destination. But, you'd almost certainly be INCORRECT if you were measuring how quickly you can get 50 people to a given destination.
That ambiguity is why Moore didn't talk about power or speed; he talked about transistors. So saying the human brain is "1,000,000x more powerful than a computer" has nothing to do with Moore's Law.
Of course, I don't believe that you were implying it _did_ but, again, the question of what is more 'powerful' - the brain or a CPU - cannot be answered unless you define exactly what qualities you are assessing and what measurements you are using to come to a conclusion.
At any rate, the brain is not comparable to a CPU (which is the subject of Moore's law and this article) but to an entire computer; hardware and software. The brain has specialised components for processing different senses, short term and long term memory, speech, etc... It also has very sophisticated 'firmware' to pull it all together.
The brain does use a fraction of the power of a modern HIGH-POWER processor, but then 1/1.1 is a fraction, so that's not really helpful! Average figures are about 20W for the brain. That sure is less than modern Intel desktop and server CPUs, but it's more than the latest Atoms.
You might argue the brain is 'more powerful' than an Atom CPU but again, it really depends on what you are measuring.
As for questions about orchids etc., that's not a CPU problem - that's a software problem. Anyway, you have to feed all the data points into the CPU. Saying a CPU can't understand an orchid is irrelevant, because a brain can't either if it's removed from sensory input!
I don't know where you get your "at least a million times more powerful" from. It very much depends on what you want to do. If, for instance, it is 80 bit floating point division, a microprocessor is more than a million times faster than the human brain and uses an awful lot less energy to do the calculation.
The human brain does very different things from computers and they cannot directly be compared. For instance, it is a very good pseudo real time controller that integrates inputs from a range of rather sophisticated sensors and controls a complex range of outputs. But our main problem there is that we have not yet designed an architecture for electronic sensors and actuators, along with the necessary computation. For instance, the eye is effectively a system on a chip that carries out an awful lot of signal processing and sends a very low bandwidth signal (tens of kilobits per second) to the brain. There are, however, a number of optimisations that it carries out that don't always work. (Stare at a blue sky for a while and until you change your point of view you won't notice any clouds moving over, because the rate of change of signal is too low to register). Currently the main use for optical sensing is to create images that work for the human eye. If there was a compelling use case that requires eye-like behaviour, I suspect it would get developed, and that the computing power needed would be fairly low.
"Reverse engineering" the brain has been attempted by the AI community for a long time with little success. You have a system which is optimised for controlling a primate. It can carry out a wide range of signal processing, storage and retrieval processes in a low-bandwidth way. So far computers have been designed to compensate for the weaknesses of this system. It isn't clear to me what benefits you would get from a synthetic human brain. A financial system that could make its own screwups through a desire to impress other financial systems, or through having its software screwed up by a desire to have sex with another financial system? An automated car that gets drunk and drives into road signs and other cars?
Plucking numbers out of the air with lots of zeroes should be left to politicians and Ponzi scheme operators.
The human brain is a million or a billion times faster than any computer at reasoning as a human, thanks to four billion years of evolution as a living being and a million years of evolution as Homo.
But I'll not bet on the human for factoring large numbers, or for indexing the web.
That is the downside of benchmarks: how do you define "power"?
How do you define intelligence?
A dog is far more "powerful" than me at processing odours and at path finding, a bee is a far better citizen than me, a tree may be far more intelligent than me in ways I cannot imagine (think of chemical message passing...), and a computer definitely computes faster.
How do you compare apples and oranges?
I don't buy the "a million times smarter" argument at all.
Almost every time a claim about a good measure of intelligence has been made computers have eventually done a better job (with a few notable exceptions).
Computers are now better than the top humans at chess, jeopardy, chip layout, optimization and path planning, mechanical assembly, specialized vision applications (spot the tanks), library science (index the web), weather prediction, stock market prediction, and probably more I'm not thinking of.
Where they aren't yet even are things like natural language parsing, artistic endeavors, and general vision applications.
Also worth noting that the human brain is about 14,000 times the volume of a single die. So a fairer comparison would be an average human against Oak Ridge's Titan. Transistors are pretty frickin' good already; it's the power, cooling, and interconnect on the large scale that needs work.
it's the power, cooling, and interconnect on the large scale that needs work.
Very well said. When Moore's law finally runs out, the next major breakthrough will have to be in parallel coding.
A (human) brain has ~10^11 processing elements clocked at maybe 10Hz. A CPU has ~10^9 transistors clocked at ~10^9 Hz. By that measure a humble desktop CPU should be ahead of us by a few orders of magnitude. So what gives?
Well OK, a neuron is a much more complex device than a transistor, but a million times more complex? Unless it's in some sense a quantum computational element rather than a classical one (which cannot be ruled out), I don't think there's a difference of complexity of order 10^6. Surely not 10^9 (comparing against a 1000-CPU cluster).
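The element x rate comparison above, as plain arithmetic (all figures are the rough estimates from the comment, not measurements):

```python
# Naive "elementary events per second" comparison from the comment above.
brain_elements = 1e11     # ~10^11 neurons
brain_rate_hz = 10        # clocked at maybe 10 Hz
cpu_transistors = 1e9     # ~10^9 transistors
cpu_clock_hz = 1e9        # ~10^9 Hz

brain_events = brain_elements * brain_rate_hz   # ~1e12 events/s
cpu_events = cpu_transistors * cpu_clock_hz     # ~1e18 events/s
print(cpu_events / brain_events)                # ~1e6: six orders of magnitude
```

By this crude measure the CPU is a million times ahead, which is exactly the puzzle the comment poses: whatever the brain is doing, it is not captured by counting elements times clock rate.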
Which leaves the dense-packed 3D interconnect in a brain, and even more so the way it is able to utilise its hardware in parallel for a new problem without any (or much) conscious organisation of how that hardware will be arranged or what algorithms it will deploy.
The next huge breakthrough will have to be in problem definition technology. Something that fulfils the same role as a compiler, but working across a much greater range between the statement of the problem and the hardware on which the code will run. There are some scary possibilities. Success on this front may reveal that a decent scientific computing cluster is actually many orders of magnitude superior to ourselves as a general-purpose problem solver, and might also reveal intelligence and consciousness as emergent properties.
Jill (*) or Skynet?
(*) Jill is the most benign first AI I know of in SF. Greg Bear, "Slant".
Subjectivity, or the lack thereof, is where computers suck, and always will. A non-faulty computer or calculation design will always return the same answer, assuming other variables remain constant. A human brain is completely different, in that it is likely to return a different answer each time, even if the variables remain constant.
In a human brain, each different answer to a subjective challenge is equally as valid as the previous or the next, unlike a computer, where different answers mean it is broken. Those equally valid answers are what allow humans to assess and act wholly illogically, but still correctly.
"A non-faulty computer or calculation design will always return the same answer, assuming other variables remain constant. A Human brain is completely different in that it is likely to return a different answer each time, even if the variables remain constant."
Straw man! "Assuming other variables remain constant" - if it's a realtime, event-driven system with unpredictable and unrepeatable inputs, that is never the case. A brain is clearly such a system. One may speculate that this is a large part of its superiority over a computer (though of course, an operating system is also of that nature).
A decently educated human brain carries software that was built by accumulating knowledge, in written form, for well over 6,000 years (in one way or another). On top of that, you're improving it yourself to fit your needs every day, and you get daily sensory input from some highly capable devices.
Since I first wrote about Dimension Z in these forums, Samsung has proven 24-layer integrated circuits and says the concept should scale to thousands of layers. That takes us to 2035 at least from what we already know, and more will certainly be discovered by then.
I promised you more Moore. There it is.
"Manhattan" chip building has been the holy grail for a long time (it was being talked about in the late 1970s) but it's always encountered fundamental problems which make it unsuitable for computation devices - such as "how do you get rid of the heat?"
The resulting answer always tends to be low power equipment which is more easily replicated in 2D.
Layers are great for flash, but I'm not so sure how applicable they are to anything involving non-trivial amounts of energy dissipation.
I think dimension Z is indeed the answer.
Power is increasingly less transistor-dominated. Leakage is getting under control [tri-gate, FinFET etc.].
Interconnect -- RC wire delay and energy is the issue.
I say this having spent many painful months walking gate to gate closing timing at 45nm and 1.6 GHz, and even there wire delay was half the problem. Going finer it will be worse, because wire delay depends only on the aspect ratio of the wires, while transistor delay shrinks with process.
Why would 3D help? Even with the same number of gates, the average distance between gates shrinks if you arrange them in a ball instead of spreading them out in an urban sprawl. You take less energy to wire the chip, and this is also becoming the bulk of the energy.
So... chips will consume less silicon area for the same number of latches, and the interconnect will consume less energy. A 100x increase in layer count is thinkable as a long-term goal.
Suppose we have 2D and a 1mm^2 core. Reducing the maximum wire length from O(1mm) to O(0.1mm) is possible with 100 layers.
Good description of the physics:
This would naively give another 10x-100x in clock frequency, assuming wire-load dominance. In power this is only a 10x, as the 1/2 CV^2 charge energy is proportional to wire length. Given the increase in frequency, the energy per chip still goes up, but of course not as much as if we had taken the same gain in frequency in 2D.
Engineering challenges will involve tolerating and alleviating the increase in energy density:
i) heat removal
ii) dynamic clock gating
After the capital of process development has gone, cost per chip should fall, as the silicon wafer area is 100x less.
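A back-of-envelope sketch of the folding argument above (illustrative numbers only; real routing is far messier than a square footprint):

```python
# Folding N gates into L layers: each layer holds N/L gates, so the
# in-plane footprint side (and hence the longest wire) shrinks by sqrt(L).
import math

def max_wire_length_mm(area_mm2, layers):
    """Longest in-plane wire ~ the side of one layer's footprint."""
    return math.sqrt(area_mm2 / layers)

core_2d = max_wire_length_mm(1.0, 1)     # ~1 mm, the 2D case above
core_3d = max_wire_length_mm(1.0, 100)   # ~0.1 mm with 100 layers
print(core_2d / core_3d)                 # 10x shorter wires

# Unrepeated RC wire delay scales ~length^2 -> up to 100x in frequency;
# charge energy (1/2)CV^2 scales ~length -> ~10x, matching the text.
```

The 10x wire-length reduction is where both the "10x-100x in clock" and the "only 10x in power" figures in the comment come from.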
At the current level we are down to counting individual atoms. Obviously any sort of optical circuit requires a physical conduit made of matter for the photons to channel through, which will be made of atoms - so it's not relevant to Moore's Law, which is about transistor density. Going optical can increase operational frequency to terahertz and petahertz, where transistors can't go, but that is a different topic. The upper limit of a photon switch is high enough to buy us a fourth dimension to this problem if they figure it out. It is also either impossible or at least very hard, as they have been working on it since the 1960s and haven't had a big product win yet.
How do you get output from a photonic calculation? A photon in transit doesn't really "exist" - in fact, because it travels at c, from the point of view of the photon its life is precisely zero. It is only detected when it hits something and ceases to exist, thus having a detectable effect like bumping an electron into a conducting energy level. Your input and output, in fact, is still going to be electronic unless you can beam the photons straight into a human eye, in which case the bandwidth of your system is going to be that of the eye/brain system, which is quite small.
I beg to disagree. What I believe we'll be seeing is the development of modular and standardized chip designs and design tools, and a better process from the design table to the fab. That would lower the cost of new designs and also allow lots of customization for special tasks. I think that's what is happening right now in the ARM ecosystem.
If Intel - or AMD, or whoever - follows this route, they could double their performance every, say, 48 months instead of the current 18, and the new design-and-fabrication processes would allow these new designs to be made, relatively speaking, on the cheap. And the market will always demand more processing power, so instead of taking newer and faster chips to market every year it will make sense, from an economic standpoint, to do it every 4 years.
Of course this status quo will also end one day. And then, at last, coders and IT companies will have economic reasons for optimizing their code. :0)
Even if it proceeds exactly as you say, that still breaks Moore's Law, which essentially says that the cost per transistor will effectively halve every two years.
The reason Moore's law has been so unerring is at least partly due to Intel baking it into their road-maps and business plans. And the reason they have done that is because process shrinkage leads to bigger profits. This is important as increasing the 'speed' or 'power' of their chips does not necessarily make good business sense whereas shrinking the die very much does.
That's why Moore's law has continued - economics. Once the economics of reducing die size looks unfavourable, Moore's law fails.
That's not to say that CPUs won't get more powerful, just that they won't double the "complexity for minimum component costs" every 18-24 months, as is currently the case.
"Even if it proceeds exactly as you say, that still breaks Moore's law..."
I'm well aware of that fact. I was only addressing the parts of the article that put a cap - due to economics - on performance improvements through design optimisation. My point is that if a chip maker takes 4 years to double their design's performance, it will still be profitable to design and market the new chip, as long as they can keep the design costs reasonable. Creating a new chip foundry for a new shrinkage level is terribly expensive, and is responsible for most of a new chip's cost. In my opinion, there are lots of potential improvements in that area that, due precisely to economic reasons, haven't been tried yet, as any improvement made at the current shrinkage level has a big chance of not being usable at the next shrinkage level, and provides a smaller return in terms of processing power.
There's a second economic force at work: the demand for more power is all but over in significant parts of the market. Not only is it getting more expensive to ramp up performance, the value of that extra performance to buyers is falling - once your PC is fast enough 95% of the time, that next 5% isn't worth paying a lot for.
It's the same effect that's driving the move to lower-power tablet and mobile devices, laptops before that, and the massive slowdown in PC replacement. Concentrating computing resources in data centres can keep the quest for more performance alive a while, but we're heading for a world without a driving force behind extremely costly measures to keep Moore's Law going.
There is a big difference between flash memory and logic gates though (density, manufacturing process, clock speeds, etc). A flash die puts out a lot less heat than a CPU die.
Thermals provide the end constraint of these systems - how do you get heat out of the middle layers? Sure, you could drop the frequency to reduce the heat output, but that kind of defeats the point of adding more transistors in the first place, and there are limits to how well you can parallelise, so slow-and-wide isn't always a win.
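That parallelisation limit is Amdahl's law; a quick sketch (the 5% serial fraction is an arbitrary illustration, not a measured workload):

```python
# Amdahl's law: with a serial fraction s, no number of cores can ever
# speed a program up past 1/s, so slow-and-wide eventually stops paying.
def amdahl_speedup(serial_fraction, n_cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

print(amdahl_speedup(0.05, 8))      # ~5.9x on 8 cores
print(amdahl_speedup(0.05, 1024))   # ~19.6x even on 1024 cores (cap: 20x)
```

With even a modest 5% of the work stuck serial, dropping clocks and adding layers of cores buys less and less.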
"Thermals provide the end constraint of these systems - how do you get heat out of the middle layers?"
Well, there is lots of room for improvement there. Adding a Peltier cooling layer every few 'processing layers' seems doable, and could effectively raise the number of layers to really big numbers; there are other technologies that could also help.
I want to see how you do that. A Peltier cooler has a temperature difference between faces. Adding one every few layers would result in removing exactly the same amount of heat, except that the dies at the bottom of the stack would be cooler and the ones at the top would be much hotter. Unless, of course, you added a thermal block at each Peltier layer with a heat pipe to transfer the heat sideways out of the stack. This adds at least a few millimetres at each layer, and requires big heat sinks and fans to carry away the heat, as the hotter the heat sink gets the more power is needed to drive the Peltier device - and so eventually you run out of overhead. Meanwhile, you can't get signals through your Peltier device because it's a big semiconductor with conducting sheets top and bottom.
Do you see how ridiculous this is getting?
The whole point of 3D is to improve speed by reducing path lengths, and the Peltier solution would actually increase path lengths as the signals have to route around it. It would take up more space than just using a planar circuit board with conventional cooling.
Being downvoted has ceased to worry me, so I'll just add snarkily that as well as all the armchair CEOs on the Ballmer thread who have obviously never even tried to run a market stall, there's an awful lot of armchair engineers on this one who have never actually tried to understand and use some of the technologies they are espousing. Just because technologies A, B and C exist does not mean that they will work in combination to provide synergy. They may well, in fact, be mutually exclusive.
> as well as all the armchair CEOs on the Ballmer thread who have obviously never even tried to run a market stall
According to Leonard Mlodinow (and almost all other statisticians), most of the good and bad performance attributed to CEOs is actually due more to chance. People greatly overestimate how much the CEO affects the success of a company, especially when it comes to pay.
@ribosome - "Being downvoted has ceased to worry me..."
Saying this almost always helps not to attract downvotes, which is why I suspect you said it. If you really were not worried about downvotes, you wouldn't have mentioned it. As for the rest of your 'snarky' paragraph, it makes me think you are trying to persuade us you are an expert and/or experienced in being a CEO and/or a CPU maker. Maybe you are, maybe you're not. Just being 'right' should be enough without berating others.
This post has been deleted by its author
Both of you seem to be making too many assumptions about the structure of such cooling layers. Just to clarify: no, I'm not proposing to just stick a standard, contiguous Peltier device between two layers; that would be daft. There are other options that should work, e.g. using said Peltier devices to transfer heat to internal cooling channels. I've read several discussions of such channels, and several solutions were proposed, e.g. thermally conductive channels made of copper or even graphene.
As for Ribosome's objection on path lengths... if you need to add a cooling layer every -say- four processing layers, path length only increases by 25%, independently of the number of processing layers in the chip. Which is still a big improvement over single layer designs.
Disclaimer: Circa 2005 I used a Peltier plate in an overclocked system I built, so yes, I'm perfectly aware of the way said devices work.
You still have the problem that you have to route round those cooling channels. They are going to take up a lot of space.
I'm surprised that a Peltier device made a lot of sense in 2005.
Looking at the thermal curves for a typical "high performance" Peltier with a 40W throughput at a temperature difference of 20C, the waste heat is around 120W. That means that with, say, an AMD64 laptop chip of that era with a 35W TDP, you would be getting maybe a 22C temperature reduction in exchange for needing a heatsink capable of removing nearly 160W. Three quarters of your heatsink capacity just goes to removing waste heat from the Peltier. If you could just plonk that stonking great heatsink down on the CPU, with efficient heat transfer, it would need to remove only 35W, and so would obviously be running rather colder.
It made a lot of sense when I was cooling CCD devices in the 1980s, though the reason for that was mainly noise reduction, since the actual power involved wasn't that high. Even so, the cooling load on the system increases considerably as a result of the power needed to drive the Peltier, see above. Our first attempt was a miserable failure because the technician entrusted to the thermal design put the Peltier heatsink INSIDE the box with the CCD, thinking that stirring the air round it would be enough. The box got hotter and largely negated the cooling effect. Only when there was a copper block from the back of the Peltier through the box to the heatsink did we see the expected benefits.
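For scale, the heatsink budget in the laptop example above works out like this (using the commenter's own figures):

```python
# Heat budget for the 2005-era laptop Peltier example above.
cpu_tdp_w = 35            # heat pumped away from the CPU side
peltier_waste_w = 120     # electrical power dissipated by the Peltier itself

heatsink_load_w = cpu_tdp_w + peltier_waste_w
print(heatsink_load_w)                     # ~155 W the heatsink must shed
print(peltier_waste_w / heatsink_load_w)   # ~3/4 of the load is Peltier overhead
```

Without the Peltier, the same heatsink only has to shed the 35W TDP, which is the whole point of the objection.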
The next step is better heat transmission, lower heat generation and 3D. Layers turn x,y into x,y,z. A single added layer is one more step in Moore's Law; 4 layers is another, 8 another, and so on. How closely can those layers be spaced? 16? 64? 128? 1024?
We are nowhere near the end of Moore's Law. We still have one full dimension to grow in.
Adding another layer to a CPU will almost certainly INCREASE the cost per transistor and therefore, regardless of performance, will instantly break Moore's Law.
The crucial 'at minimum cost' part of Moore's observation really relies on process shrinkage. That's not to say some other mechanism won't take up an exponential growth again, but it will be after Moore's law has been broken.
Such an exponential growth would be far more reasonably measured by performance (e.g. FLOPS) than 'complexity' or, more simply: "the number of components per integrated circuit".
Remember - Moore's law is not about performance.
I'm grateful to have lived through the Moore's law years, it has been fascinating. It has also been expensive buying a new PC every year or two. I started out with a Commodore PET which had, if memory serves, a 1MHz clock and 8k RAM. It cost about £800 in 1978. I haven't seen significant performance improvements for a decade. I recently used a five year old laptop and it was fine. Apart from having Windows Vista, obviously.
Agreed - as per the article, Intel is really interested in volume, not the bleeding edge. Until there's a software requirement for more processing power in a mass-market piece of software (I can't think of anything), today's speeds remain more than adequate. Improvements in code seem to have reduced the need for speed (again, in the mass market).
I completely agree. I thought the whole "Moore's Law" had been put to bed years ago too.
If they had been keeping up with Moore's Law we'd have 14 GHz chips by now. We don't; instead we have the same basic speed as 7 years ago, just spread across 8 cores. Eight cores - who cares? Unless the program I'm running is designed to use those cores, it will run the same as if I had only a single-core 3 GHz CPU, and that is 8 years old. Seriously, we really stopped increasing speed in 2006 when CPUs hit 3 GHz - did no one else notice?
The new trend has been to go up in speed by 100 MHz every 2 years, and to double the cores every 5.
And the gate oxide is about 1/10 of that. So roughly 2^7 width halvings gets you to transistors one atom wide. At that point you've just about run out of atoms.
But what about multiple layers? Well now you've n x 130W per chip to get rid of. You are probably looking at chip packages with internal fluid heat-pipe cooling to do this. Or you could go with the very low-power neural simulation architecture started by Carver Mead more than a decade ago.
"Well now you've n x 130W per chip to get rid of."
Only if you are over-clocking the thing and squeezing those transistors as close as they'll go. The first is not something we do so much these days (post Pentium 4) and that trend will surely continue. The second is something that 2D layout encourages you to do.
Drop the clock rate, increase insulator spacing (to eliminate leakage current) and you might find that you can now put so many more transistors onto the chip that you get the raw performance back.
I think you're off a bit there...
The technology node size is commonly understood to be the size of a DRAM cell, not the gate width of a transistor. The gate width is smaller, but the exact details are highly proprietary. An educated guess is that at 16nm the gate width is ~7nm. Now, what's interesting: the crystal lattice spacing of untreated silicon is 0.543nm, plus or minus a bit for temperature, doping, etc. At 7nm wide, that's ~13-15 atoms, which all by itself is already tricky to manufacture.
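The atom count is easy to sanity-check, taking the 0.543nm lattice constant quoted above (a rough count of lattice spacings, ignoring crystal orientation and doping):

```python
# How many silicon lattice spacings fit across an estimated 7 nm gate?
gate_width_nm = 7.0
lattice_nm = 0.543       # lattice constant of untreated silicon

atoms = gate_width_nm / lattice_nm
print(round(atoms))      # ~13 lattice spacings across the gate
```

Thirteen-ish atoms across a gate, before you even get to the ~1/10-thickness oxide, is why "we just plain ran out of atoms" is more than a soundbite.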
Yes, the exponential increase in the cost of fabs means that Moore's Law is close to the end, if not already past it. At some point we will still be able to build smaller transistors, but there just won't be any point.
We will have to get used to a minimum cost per transistor just as we have got used to a maximum practical clock speed. However there are plenty of worthwhile ways of improving computers to explore other than just blindly throwing more transistors at the problem. None of these are going to give us decades of exponential improvement but they are worth pursuing. The good news is that once the transistor process (with its huge fab costs) stops taking centre stage then it becomes possible for smaller companies to innovate and compete.
The GPGPU market is an example of this. Floating-point performance is increased over conventional multi-core by using smaller compute units and devoting a greater fraction of the transistors to floating-point units.
Chip stacking won't reduce the cost per transistor. Each layer needs to be manufactured and you may ruin some good layers by bonding them to flawed ones. However it may reduce energy consumption and drastically improve the communications between different components.
The future is going to be interesting.
But capitalism falls down if you don't assume infinite exponential growth. It can't be (can it?) that economists are that bad at maths, but they build the impossible right into their predictions and policies anyway.
Woe unto you if you point out the obvious failings with the way things 'work' now. You'll be accused of either hating freedom or of being a dirty communist.
"But Capitalism falls down if you don't assume infinite exponential growth."
Even Adam Smith said that growth cannot continue forever. The current crop of economists suffer from a bad case of short-termism.
Single-digit economic growth is unsustainable for more than about a century, which is why there are horrific crashes at regular intervals. No economists plan for a level market (even in Japan, where it's been flat for 20 years) because there's a herd mentality that growth always happens.
Not so much a herd mentality, but the day a bank economist admits that real economic growth is pretty much at an end due to energy and food constraints, and that the only way to improve living standards is to waste less and start to reduce the population - where are the next generation of bank bonuses coming from? Why, in fact, should bankers be paid so much at all? All that money should be going to engineers and scientists to improve the efficiency of what we already have.
(I know there are flaws in this argument, but not as big as the flaws in the "eternal economic growth" argument.)
Isn't the growth capitalism depends on measured in money? Which is subject to inflation? I've always assumed that capitalism works just fine on somewhat illusory growth. In boom times growth is ahead of inflation, in slumps inflation is ahead of growth, and if there's a fundamental reason this cannot continue for the foreseeable future I don't know of it.
"capitalism works just fine on somewhat illusory growth. In boom times growth is ahead of inflation, in slumps inflation is ahead of growth, and if there's a fundamental reason this cannot continue for the foreseeable future I don't know of it."
You need a little bit more history. But in the meantime, try Rory Bremner's new series on BBC Radio 4, the episode about "Where did all the money go?"
Readers of a sensitive disposition be warned: includes Max Keiser.
Money isn't real (you said that yourself above).
But its effects are. Monstrosities like zombie banks, and their inevitable counterparty, austerity.
From time to time you need to reboot the system. Historically, that's either been "wiping the slate clean" (drop the debt, permanently), or revolution.
Historically, stuff hasn't been as global as the Too Big To Fail banks, insurers, etc are today. So it's likely to be more interesting than previously, this time round.
@John G Imrie Off topic but I have to bite. Where on earth did you read that monetary and fiscal policy is based on infinite growth? Stop reading that publication pronto.
You seem to prefer the previous crop of economists, like the ones who advised monetary restraint in the great depression? That worked out well.
Or maybe you prefer no one studied the subject at all, so we could have some "real", but truly clueless, people giving advice?
There are many issues involved in continued die shrinkage. Just to list a few:

There's the problem of making masks. Currently there are large sets of design rules so that masks with tiny dimensions can be created using light of a much larger wavelength. People have talked about moving to shorter wavelengths, but again there is a big economic barrier.

Next there is the issue of what exactly scales. Back in the old days you had 5V, and you could just shrink the dimensions and nothing else. But then the electric field (voltage/distance) started getting too high, so the voltage had to start dropping. The catch is that it couldn't drop as fast as dimensions shrank. There is a speed/power tradeoff here, but basically silicon transistors don't work below about 0.6 to 0.7 volts (the threshold voltage where a transistor switches between off and on). High-k dielectrics were introduced to help with the electric field breakdown issues.

Finally there is the problem of variability. One of the important aspects of IC design is that while it isn't easy to control transistor parameters exactly (e.g., to get precise resistances and gains), it used to be the case that transistors physically close on the die would be very closely matched in characteristics. When you scale down to small numbers of atoms, each transistor has much more statistical variation, and that makes design much harder.
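The voltage-scaling point is just the field equation voltage/distance. A quick sketch with illustrative numbers (these are not figures from any real process) shows why the supply voltage was forced down as dimensions shrank:

```python
# The field across the gate oxide is roughly voltage / thickness.
# Shrink the oxide tenfold while keeping the old 5 V supply and the
# field grows tenfold, toward breakdown; dropping the voltage instead
# keeps the field sane. All numbers here are illustrative only.
def oxide_field(v_supply, t_ox_nm):
    """Field in volts per nanometre across an oxide of given thickness."""
    return v_supply / t_ox_nm

old = oxide_field(5.0, 50.0)    # old process: 5 V across a thick oxide
shrunk = oxide_field(5.0, 5.0)  # shrink 10x, keep 5 V: field is 10x higher
scaled = oxide_field(1.0, 5.0)  # drop the supply voltage instead

print(old, shrunk, scaled)      # 0.1 1.0 0.2 (V/nm)
```

The ~0.6-0.7V threshold mentioned above is why this trick eventually runs out: the supply can't scale down with the dimensions forever.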
"it used to be the case that transistors physically close on the die would be very closely matched in characteristics. "
It used to be the case that batches of transistors were cooked up and then assigned part numbers based on their characteristics - and those characteristics would vary widely across the batch.
This was back in the days of 97% reject rates on TTL/CMOS chip manufacture - and most failed for the same reason (widely varying characteristics)
If you can get hold of a copy of Baum's "A little less witchery, a little more craft", it describes at great length the bucket-chemistry approach to semiconductor manufacture in the 1970s and early 1980s.
The holy grail for masks is X-ray lithography, but even that has had issues with finding a stable monochromatic source dating back to the 1980s. You have to start wondering if we're going to see "pick'n'place" atom shifters used instead at some future date (that's the logical end of IBM's atom-placing experiments).
I think he is correct about economics being the factor at play. With fewer desktop PCs and laptops being sold, and most phones and tablets running ARM, Intel is reliant on selling its top-of-the-range chips to datacentres and the hardcore gamers who want the latest and greatest, which is a much smaller market. Sure, big enterprises will still be buying desktop PCs and laptops, but the current generation of low-end Intel chips (Pentium and Celeron) are more than capable of running Windows 7, Office, email and the internet, so Intel will be making less money yet still needs to invest huge amounts in R&D to reduce die sizes. Maybe the only way Intel can continue to invest as much as it does is to begin fabbing chips for other companies.
as far as I am concerned.
Went to the local PC shop and enquired about an upgrade.
The latest boards would give almost no performance increase over my three-year-old board for similar money.
I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)
"I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"
And/or better assemblers. There are amazing variations in what's produced for the same input.
Those of us with long memories may recall the wee experiment with recompiling the Atari ST/STe ROMs using a more modern compiler instead of Lattice C - and finding that the new code was one third to one half the original size.
Good luck with the handcrafted assembler. You'd get better results from exposing the native RISC internals of Intel/AMD x86 chips and programming for those directly, rather than having to pass through the x86 decode and micro-op translation layer inside the bloody things.
"I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"
There was someone who managed to tidy up Win95 so it fitted in 1MB. It's not that C++ is bloated, it's that software engineers write nearly the same bits again and again and again. There is a good case for training software engineers properly - i.e. not letting them write code until they are 35 or so, have learned Knuth and Boost by heart, and can break down almost any task they are given into efficient blocks of it without thinking.
And I'd try a different OS or PC shop - my machines are still getting a lot faster. Well not my Pi obviously...
"I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"
Go look at some open source projects. Open source isn't a magic bullet (if you open source a project and almost no-one looks at the code, there won't magically be improvements made to it), and I've seen a few that are, well, not good, but in general the code quality is a lot higher than you might expect.
1) Speed critical sections *are* in hand-optimized assembly (see every video player, I think the font library, crypto, even some bits of glibc that don't need to be in assembly but are for speed.) 2) Certain projects are blooooooated, but in general on these projects people get called out for bloat and the worst programming practices get kicked right out of the code. 3) The compilers now can do tricky stuff that wouldn't occur to a human trying to write optimized code.*
*Amusing bug report for gcc-4.8, when building glibc some optimization flag has to be turned off, otherwise the compiler recognizes the code for memcpy is trying to copy a block of memory, and optimizes it into a call to the memcpy function 8-).
>"I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"
Heck, I would be happy if we could get people to quit believing managed code is the only way to go. Microsoft figured this out, which is why they are hardly betting the company on .Net any more (not that they ever really did; they certainly didn't use it to develop their own products). A Hello World program should not need multiple megabytes of memory. As for the hand-crafted assembler comment: it's all fun and games until you have to maintain and extend hand-crafted code somebody else wrote, especially if they were into esoteric ways of doing things. It has its place, though, and luckily, as you imply, the people who do it usually know where that is.
Moore's law is the observation that transistor counts will double every two years - breaking this law means that over a two year period, the counts don't double.
It doesn't mean the end of the road for increases in chip performance, but does mean a dramatic change in the economics of faster chips.
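Moore's law as stated here is a simple compounding rule, which makes the "breaking" point easy to see in a few lines (the Intel 4004 starting figure is a well-known historical data point; the projection is just the rule applied mechanically):

```python
# Moore's law as stated above: transistor counts double every two years.
def projected_transistors(count0, years, doubling_period=2.0):
    """Project a starting transistor count forward under pure doubling."""
    return count0 * 2 ** (years / doubling_period)

# Sanity check against history: the Intel 4004 (1971) had ~2,300 transistors.
# Forty years is twenty doublings, which the rule puts at ~2.4 billion -
# the right ballpark for 2011-era CPUs.
print(projected_transistors(2300, 40))  # ~2.4 billion
```

Breaking the law simply means the real counts start falling below that curve over a two-year window, even while they keep rising in absolute terms.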
If you look at the released costs of the 14nm Intel fabs (>US$5b) and the investments in ASML (>US$4b), the costs for this look to be in the region of US$15-20b once chips start to be produced. This appears to be around double the investment of producing 22nm chips. Note these costs are my estimates - if anyone has better numbers, feel free to add them.
Assuming the costs are accurate (or even vaguely accurate to the nearest US$5b, with costs roughly doubling at each new process node/half-node), and that the traditional volume markets (desktops/laptops) where Intel makes the majority of its profit are shrinking, at some point it won't be worth rushing to the next process step.
I can't see Intel's high profit margins surviving with this pattern. As pointed out in other stories today, buyers are shifting from quality (faster chips) to quantity (cheaper chips). CPUs are becoming a commodity. We see this both in the server market, where Google/Amazon/Facebook's vast data centres rely on huge numbers of cheap commodity chips, and in the consumer market, where buyers are snapping up cheap new ARM tablets and clinging on to their old Intel laptops for as long as possible.
Developers are complicit in this: they are coding for low-spec computers, rather than the old habit of coding for tomorrow's desktops and forcing the user to upgrade. A website designed for iPad users will run very smoothly on even a four year-old PC. The need to upgrade is weaker than it has ever been in the past three decades.
Faced with cheap ARM chips entering the server market, and ARM-powered Chromebooks in the consumer market, how can Intel's high margins possibly survive?
For years, boffins have been beavering away making ever faster processors, and now they've run out of atoms or whatever. (Please don't bore me with the details!)
Perhaps it's time for someone from an art college to help out.
Can we build an effective processor which, though not so quick, is more amusing to watch doing its calculations, and therefore a more touchy-feely experience than boring old a+b=c?
I'm thinking of rats with red paintbrushes tied to their tails being let loose in a big perspex box.
Could that someday replace the brains in computers? Surely it's worth investigating the possibilities of art-informed technology, and it would fit neatly into the new 'thinking' of the 21st century, where anyone (e.g. me) can spout utter bollocks and be taken seriously.
Nah, we did that back in the day. Old PDP-11s had LEDs all over the front and you could watch the program running. You could see when (say) the wages program was in the individual employee loop, and when it was printing out the summary, or when the floating point library was being accessed, just by familiarity with the light patterns. Training systems for electronic engineers had LEDs all over the place and we had one that could be clocked down so slow you could watch the memory being accessed, the address latched, the data read, and be latched into the accumulator.
They were also groaning slow.
Did you know that IBM salesdrones now actually give presentations in which they quote performance increases as 3 ECKS, 5 ECKS, etc?
By scrupulously avoiding saying 3 TIMES, 5 TIMES etc they have a weasel clause when the promised "performance increase" (performance is not defined either) fails to show up in real life.
I wouldn't latch on to stupid stuff like this, but IBM have a dictionary of terms you and I throw about freely but which have contractual impact if you buy their kit. The use of the nicely woolly and undefined (but with an expectation built in that need not be fulfilled) ECKS has me suspecting it can be found in said corporate dictionary.
"Colwell postulated a future chip designer who accepted the fact that Moore's Law had run its course, but who used a variety of clever architectural innovations to push the envelope. "
ARM? They've tended to ignore Moore's law to some extent, in favor of having *much* lower-cost chips that are also lower power. Not the exciting answer Intel's looking for (since they're chasing maximum speed, I assume).
Anyway, I don't know if he's right, but he has a point -- these foundries cost billions of dollars these days, and occasionally the next shrink is cheap (some tweak like changing the wavelength used to etch wafers), but then the one after *that* involves basically starting from scratch. Each costly shrink has cost more than the last one. I can see a point where the next die shrink is physically possible but not even close to economically viable -- at which point it just won't happen.
First off, Moore's Law was as much a self-fulfilling prophecy as any real "law". He essentially told Intel's sales and marketing department what to expect and plan for while simultaneously firing a shot at competitors.
Quite frankly, transistors *could* have doubled faster. But there wasn't any real reason to do that. It was fast enough to outpace most competitors but slow enough to ensure a high rate of return on their investment. Once Intel set out their product update cycle the engineers took on the mantle to make sure it happened that way; and they did.
Sure, at some point we won't be able to fit a transistor into those tiny spaces. However, if humanity is anything it is certainly ingenious. Instead of on/off maybe we'll move into tri-state or beyond such that a single transistor (or the equivalent) will be able to hold and pass several different values. Heck, maybe we do finally figure out how to build that computer in "hyperspace".
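The density gain from moving beyond on/off is just information theory: a cell with n distinguishable levels carries log2(n) bits, which is exactly the trade MLC flash already makes (more bits per cell, smaller noise margins). A quick sketch:

```python
import math

# Information stored per cell with n distinguishable levels is log2(n) bits.
# A binary cell holds 1 bit; multi-level cells trade noise margin for
# density, which is what MLC flash does in practice.
def bits_per_cell(levels):
    return math.log2(levels)

print(bits_per_cell(2))  # 1.0 bit  (binary)
print(bits_per_cell(4))  # 2.0 bits (four-level cell, as in MLC flash)
print(bits_per_cell(3))  # ~1.58 bits (the tri-state idea above)
```

The error-proneness mentioned in an earlier comment is the flip side: more levels in the same voltage range means less margin per level, so you spend some of the density win on error correction.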
Point is, someone will have the right idea to keep moving forward. For all I know that idea is already percolating in the back of an engineer's head, just waiting for the right moment to be revealed in exchange for an appropriately ludicrous amount of venture capital.
Improved coding has already been mentioned, but I don't understand why so many computer operations insist on moving vast amounts of data from place to place, e.g. loading a programme from HD to RAM. Why not have only non-volatile RAM, so the programme or data is always ready and just needs to be pointed at? This is only one tiny example of how the architecture of computers could be improved - and I am not talking about today's magnetic HD or Flash or DRAM, but something which is all three - I've just got to design it.....
You're asking for something with the performance of DRAM but nonvolatile.
They've been working on that stuff for... about three decades at least. Tech up to now, like bubble memory and Flash, has always had strings attached. Bubble memory was slow and had to be heated up to work, while Flash is known to be slow to write and prone to lifecycle issues.
There are several candidates for the position: MRAM, RRAM, Racetrack memory (inspired by bubble memory), PCM, and so on. Thing is, none of them have reached wide-scale commercial release at this point. And while some are getting close, achieving the same size and scale as current DRAM tech is still going to take time, plus the tech has to survive the transition process AND be economical. Then the memory has to undergo a paradigm shift as it becomes more affordable, first replacing the RAM and THEN replacing the mass storage (which has its own level of economy of scale and will be more difficult to reach).
How fast can computers go? It is not well known that there are computational architectures, using other materials, that can operate in the several-hundred-gigahertz range. They exist as functional and available parts used in very specialised technologies such as signal processing. There is no technical reason these cannot be used for ordinary computing systems, only cost. We are a very long way from the physical limits when looking at the hardware sitting on your desk, even if it is the very latest you can buy. Even the CPU I have has been clocked to over 8 GHz with extreme liquid cooling. With more attention paid to heat generation and intra-chip-scale cooling, the potential exists for next-generation chips to run at speeds far higher than they do now.