* Posts by mattaw

7 publicly visible posts • joined 12 Sep 2009

Careless tweets cost lives, warns MoD

mattaw
Thumb Up

... surprisingly good, actually

When I arrived at the article I had the preconceived notion that this was going to be another god-awful PR campaign.

Actually, it is pretty good. Having been part of the Royal Navy, I can tell you it is incredible what useful information you can pick up these days simply by listening, either electronically or in real life: complete force deployments, readiness states, etc.

Consider how quickly the lessons of fighting in Northern Ireland have been thrown aside (NB: I am not judging who was in the right - just noting that if you want to fight smart guerrillas, beware!)

M

Amazon wraps up Kindle crashes

mattaw

@Liam Go back to the bit about scratches...

I always thought a case was supposed to stop the scratches?

Intel CEO: 'We will win in the tablet market'

mattaw

Dr

Intel has been encroaching on ARM's mobile territory for years, and now ARM has returned fire:

The Eagle Has Landed!

http://arstechnica.com/business/news/2010/09/arms-eagle-has-landed-meet-the-a15.ars

Apart from games, video and graphics, what could you do on your 200MHz, 32-bit Pentium I machine that you can't do on this beastie: an up-to-eight-core, 2.5GHz, 32-bit, out-of-order ARM part? With all the embedded operating systems running on ARM, as well as a vast amount of software in the mobile space, I think we are seeing the beginning of a fight.

LiMo rumoured set for recall to Linux mothership

mattaw
FAIL

that "high level API" would be QT wouldn't it?

Heck, it even ports to desktops correctly.

But no, it is going to be something even worse....

ARM wrestling: Apple iPad chip to overpower rivals?

mattaw

Please get a grip: ARM SoCs are not full custom hardware...

They are unique, yes, but you get a standard processor, a standard bus system (AMBA, ARM's on-chip answer to PCIe), a recognised programming model with very good compilers, hardware debuggers, etc. You also get access to a huge library of third-party AMBA modules, rather like PCI peripherals. It is just like assembling a custom PC, only much harder.

You get TSMC, Chartered, STMicro or one of the other foundries to make them for you cheaply. There are lots of bulk CMOS manufacturers out there; only Intel and IBM have their own CMOS foundries now.

What you get are the huge power advantages of having everything on one chip (no pad ring, no unnecessary extra hardware, and no pointless charging of PCB traces). You do need to engineer it for low power, but Apple has a team hugely experienced in successful designs - PA Semi.
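To put rough numbers on the "no pointless charging of PCB traces" point, here is a minimal back-of-the-envelope sketch of dynamic switching power (P = C * V^2 * f); the capacitance, voltage and frequency values are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope dynamic switching power: P = activity * C * V^2 * f.
# All capacitance, voltage and frequency values below are illustrative
# assumptions, not figures from the article.

def switching_power_watts(capacitance_f, voltage_v, frequency_hz, activity=1.0):
    """Dynamic power burned charging/discharging a capacitive load."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

# Driving an off-chip PCB trace: pad + trace + input pin, roughly picofarads.
pcb_trace = switching_power_watts(10e-12, 1.8, 200e6)   # ~10 pF at 1.8 V, 200 MHz

# Driving an on-die net between SoC blocks: tens of femtofarads, lower swing.
on_die = switching_power_watts(50e-15, 1.0, 200e6)      # ~50 fF at 1.0 V, 200 MHz

print(f"PCB trace : {pcb_trace * 1e3:.2f} mW per signal")   # ~6.5 mW
print(f"On-die net: {on_die * 1e6:.2f} uW per signal")      # ~10 uW
```

Roughly two orders of magnitude per signal, which is why keeping traffic on-die pays off so handsomely.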

You also get the ultimate lock-in: no third-party OSes, and no one else can use your software.

There are lots of programmers in the embedded space who know what an ARM looks like.

Google makes its own Ethernet switches and PCs for the same reason - done carefully and smartly, it can be of huge value.

Apple's PowerPC processors were a failure because they relied on Motorola and IBM to design them. Motorola gave up on PowerPC, and IBM doesn't do mobile - only big iron. Apple didn't want to design a desktop and laptop processor and chipset itself; it takes Intel teams of hundreds to do that. ARM SoCs are much easier.

Idle wild: how Intel's mobile Core i7 speeds up to slow down

mattaw
Go

It's all in the leakage...

Modern silicon processes (90nm and below) no longer lose most of their energy switching gates (more's the pity) but in simple leakage across the transistors. Leakage increases exponentially as features shrink, and exponentially with temperature, and the energy is lost even while the gates are "idle"!

Fast logic transistors == horrible leakage

So to solve this problem, engineers add header and footer power-switch transistors (which would make bad, slow logic transistors, but leak far less) to turn whole sections of the circuit "hard off". This strategy means Intel can have its cake and eat it too.

Running some cores faster while turning others "hard off" saves all that lovely "idle" leakage, and the thermal mass of the packaging and die prevents excessive temperature rises in the hard-working core, avoiding the exponential thermal effect on leakage.
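For the curious, here is a minimal sketch of why the "hard off" switches pay for themselves, using the standard subthreshold-leakage approximation I_off ~ exp(-Vth / (n * kT/q)); the threshold voltages, subthreshold slope factor and temperatures are illustrative assumptions, not Intel's process numbers.

```python
import math

# Subthreshold ("off-state") leakage model: I_off ~ I0 * exp(-Vth / (n * kT/q)).
# Threshold voltages and n below are illustrative assumptions, not process data.

K_OVER_Q = 8.617e-5  # Boltzmann constant / electron charge, in volts per kelvin

def relative_leakage(vth_volts, temp_c, n=1.5):
    """Off-state leakage relative to an arbitrary reference current I0."""
    thermal_voltage = K_OVER_Q * (temp_c + 273.15)  # kT/q, ~26 mV at 25 C
    return math.exp(-vth_volts / (n * thermal_voltage))

FAST_LOGIC_VTH = 0.30   # low-Vth "fast" logic transistor (assumed)
POWER_GATE_VTH = 0.55   # high-Vth header/footer power switch (assumed)

# 1) Leakage grows exponentially with temperature, even when the gates are idle.
cold = relative_leakage(FAST_LOGIC_VTH, 25)
hot = relative_leakage(FAST_LOGIC_VTH, 85)
print(f"Fast logic leaks ~{hot / cold:.0f}x more at 85 C than at 25 C")

# 2) With the power switch off, the block's leakage path is roughly limited by
#    the switch's own (much smaller) off current.
ratio = relative_leakage(FAST_LOGIC_VTH, 85) / relative_leakage(POWER_GATE_VTH, 85)
print(f"At 85 C a gated-off block leaks ~{ratio:.0f}x less than raw fast logic")
```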

Awesome,

M

Sun's Sparc server roadmap revealed

mattaw

Die shrinks are now very painful

Seriously, 90nm was annoying to work with (the first technology where stuff didn't "just work"). 65nm is significantly unpleasant. 45nm and below are downright bizarre.

This is where teams of back-end place-and-route (P&R) engineers have to push structural changes back onto the front-end design team, who then have to re-engineer bits of the system to cope. It's not a few guys and some retiming issues any more - you have to rebuild the blasted things.

This is why most companies avoid sub-90nm processes like the plague. Too expensive.

Note I am not talking about Intel, Nvidia, AMD, etc., who can run their own foundries and have hundreds of engineers to get things working.