Twice as fast as QEMU?
Twice as fast as QEMU shouldn't really be that much of an achievement, and for $10,000 I'd rather have some Xeons.
The crafty engineers at embedded software development house Codethink assembled an ARM-application build server for their own use this June, and have now decided that you might want one, as well – so starting this week they'll sell you a commercial version of that box called the Baserock Slab. The reason for Codethink's entry …
"$10,000 I'd rather have some xeons"
Horses for courses. According to their compilation tests, to match the throughput of the Slab you'd need a 16-socket Xeon server if using quad cores (16 x 4 = 64 cores, to match the 32 cores per unit each running twice as fast natively as an i7 core under emulation). Get one for $10,000 and I'll have a couple too.
If you need maximum TPS, ARM probably isn't your thing.
The tech is also quite new, so economies of scale haven't really kicked in yet. At this stage it's a question of selling tech to get people skilled up.
I'd like to see some costings when you compare lots of ARM CPUs to x86 hardware + virtualisation costs.
"... we've found that native compilation on each core of the Armada XP at 1.33GHz is almost exactly twice as fast as a Core i7 core running at 3.4GHz and doing the same compilation under QEMU"
That i7 Core wouldn't have been using a SATA II spinning disk with really slow read times, would it?
I really like the creativity here, but people have to face the fact that ARM and Xeon are two different animals. There are trade-offs for use, and a LAMP stack that smokes on a Xeon under load probably won't do the same on ARM. But for keeping energy consumption low while dishing up a static home page, ARM might be a great solution.
There is a reason the tests don't get published and peer reviewed. ARM isn't anywhere near on par with Xeons on traditional workloads. The informal "tests" seem to have serious shortcomings and don't hold up to scrutiny, although they play well in the press. It really isn't that hard to run these tests, the technology to do it is available, and I would bet there are a lot of people who have done them and don't want to disparage ARM at this particular fledgling juncture.
Which i7 at 3.4GHz is the question, and are we talking about it being forced down to a 3.4GHz maximum, or with SpeedStep and Turbo Boost enabled?
I'd like to see some hard evidence of what they are comparing it to and how, not to mention that different workloads are not going to fare the same across the two platforms. I'd also like to know exactly which workloads they have optimized their CPUs for, since that is obviously what they are going to tout as being what they are best at.
From my research, PXE booting on ARM (or any non-x86 hardware) doesn't seem to have gained much traction, and that seems to be down to the many and varied ways in which the SoCs implement booting.
In the x86 world, because a BIOS (or BIOS-compatible EFI) is the standard, it's a fairly straightforward affair to inject the boot code into a known location, but each ARM implementation seems to go about it differently. It's certainly not impossible, but the clever people don't seem to have made many inroads into the whole problem.
My idea is to have the bootloader load something like GRUB from local storage, which could then boot from the network, but I don't know whether that is feasible.
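Whatever the front end ends up being (a PXE ROM on x86, U-Boot or GRUB on ARM), the fetch underneath is usually plain TFTP, which is pleasingly simple. Here's a minimal sketch of the read request that kicks it all off, per RFC 1350; the server address and filename are made up for illustration, and retries and error handling are elided:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    unsigned char pkt[512];
    const char *file = "vmlinuz";   /* hypothetical kernel image name */
    const char *mode = "octet";
    size_t len = 0;

    /* RRQ packet: 2-byte opcode (1 = read request), then the filename
     * and transfer mode as NUL-terminated strings. */
    pkt[len++] = 0; pkt[len++] = 1;
    memcpy(pkt + len, file, strlen(file) + 1); len += strlen(file) + 1;
    memcpy(pkt + len, mode, strlen(mode) + 1); len += strlen(mode) + 1;

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = { .sin_family = AF_INET,
                               .sin_port = htons(69) }; /* TFTP port */
    inet_pton(AF_INET, "192.168.0.1", &srv.sin_addr);   /* assumed server */

    if (sendto(s, pkt, len, 0, (struct sockaddr *)&srv, sizeof srv) < 0)
        perror("sendto");
    /* A real loader would now loop over DATA/ACK packets, 512 bytes at
     * a time, until a short final block arrives. */
    close(s);
    return 0;
}
```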
If you're building software for ARM and you want to do it natively (lots of software can't be cross-built without significant misery), one of these things is going to be a hell of a lot quicker than using QEMU on an x86 box, not to mention smaller, with less cabling, etc., and most likely cheaper, too.
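For the avoidance of doubt about the "significant misery": the classic cross-build killer is a package whose build compiles a little helper program and then runs it to generate source. A toy example of the pattern, purely illustrative:

```c
/* gen_tables.c: the kind of build-time helper that breaks cross builds.
 * The build compiles this and then executes it to emit a header; under
 * a straight cross build it's an ARM binary the x86 build host can't
 * run, which is exactly where qemu-user (or a native ARM box like the
 * Slab) comes in. */
#include <stdio.h>

int main(void)
{
    puts("/* generated at build time; do not edit */");
    printf("static const int squares[] = { ");
    for (int i = 0; i < 8; i++)
        printf("%d, ", i * i);
    puts("};");
    return 0;
}
```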
It's ARM, so PXE doesn't make sense. But the docs on the Baserock website suggest you can "netboot" them from the management node: "The system management node manages the compute nodes, e.g. power up/down, reset, overall system monitoring and provisioning of images to the compute nodes"
You essentially need to develop your own Linux distribution for such nodes, as there is no standardized way to do hardware enumeration on ARM. So you need to maintain your own kernel for each of those boards. That will be a nightmare for people used to rolling out updates with apt-get.
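To put some flesh on that: in the pre-device-tree world, "hardware enumeration" on ARM means a hand-written board file compiled into the kernel. A sketch of the pattern; the driver name, addresses and IRQ number are invented placeholders, not from any real board:

```c
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>

/* No firmware tells the kernel what exists or where, so a per-board C
 * file hard-codes it. Change the board and this file changes with it,
 * hence "your own kernel for each of those boards". */
static struct resource board_uart_resources[] = {
    {
        .start = 0xd0012000,          /* hypothetical UART base address */
        .end   = 0xd0012fff,
        .flags = IORESOURCE_MEM,
    },
    {
        .start = 41,                  /* hypothetical interrupt line */
        .end   = 41,
        .flags = IORESOURCE_IRQ,
    },
};

static struct platform_device board_uart_device = {
    .name          = "board-uart",    /* must match a driver's name */
    .id            = 0,
    .resource      = board_uart_resources,
    .num_resources = ARRAY_SIZE(board_uart_resources),
};

/* In a real board file this would be called from the machine's
 * init_machine hook. */
static void __init board_add_devices(void)
{
    platform_device_register(&board_uart_device);
}
```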
Plus, an STM32F103 as a management controller might seem sensible at first: they are cheap and powerful, with the largest having half a meg of flash and 64k of RAM, far more than what is typically needed in such an application. However, the STM32 controllers have one big problem: unnecessary complexity. You'll spend much of your time fiddling about with clocks and setting your port pins to alternate modes. And no, despite them being ARM, you cannot run Linux on them.
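To illustrate the "fiddling": on an STM32F103 nothing happens until you've gated the right peripheral clocks on and programmed each pin's mode bits by hand. A minimal bare-metal sketch (register addresses per ST's F103 reference manual; no error handling, not production code) of getting PA9 ready to be USART1's TX pin:

```c
#include <stdint.h>

/* Raw register access; CMSIS headers omitted to keep this
 * self-contained. Addresses per the STM32F103 reference manual. */
#define RCC_APB2ENR (*(volatile uint32_t *)0x40021018)
#define GPIOA_CRH   (*(volatile uint32_t *)0x40010804)

static void uart1_pin_setup(void)
{
    /* Step 1: nothing works until each peripheral's clock is enabled.
     * Bit 0 = AFIO, bit 2 = GPIOA, bit 14 = USART1. */
    RCC_APB2ENR |= (1u << 0) | (1u << 2) | (1u << 14);

    /* Step 2: every pin takes 4 config bits; PA9 lives in CRH bits 7:4.
     * 0xB = alternate-function push-pull output, 50 MHz. */
    GPIOA_CRH = (GPIOA_CRH & ~(0xFu << 4)) | (0xBu << 4);

    /* ...and that's before touching the USART's own baud-rate and
     * control registers. Hence "unnecessary complexity". */
}
```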
...To my beady little eyes....
?? @ Christian Berger - you can't run Linux on them? Sorry, Christian, but "Karnt" isn't a word I'm familiar with. Do please elucidate.
As the Dalai Lama once said, "If science proves some belief of Buddhism is wrong, then Buddhism has to change".
*http://www.theregister.co.uk/2012/09/12/raspberry_pi_supercomputer/
" you can't run Linux on them?"
Mr Berger was attempting to impress with his personal knowledge of the system management controller on this product; this is a microcontroller that nobody cares about, which indeed cannot run Linux because it's... a very small microcontroller. It's about as interesting as describing the fan blades or the screws in the chassis.
"You need to essentially develop your own linux distribution for such nodes, as there is no standardized way to do hardware enumeration on ARM. "
I don't know if this is exactly correct or not, though surely you wouldn't need to build a complete distro to fix this anyway, just those parts that relate to populating the device framework, which I thought was a smallish part of a kernel these days? And in a hardware-constrained environment where the hardware will not (cannot) change, surely the challenge is even simpler? Kernel, busybox, a few other bits of userland. Job done.
Either way, this limitation hasn't stopped pretty much every consumer and professional electronics manufacturer in the known Universe from building their products around ARM and Linux, and doing very nicely out of it, thank you. Yes, the quality and usability of the end result, in the consumer market at least, is often dire, with products that don't do anything like what they say on the box or in the badly-translated manual, but usually that's the fault of the vendor, not of Linux itself.
If you want pure grunt, and want to maximise for that, then I don't think ARM is the right solution ... yet (grin!) ... but if you are running a farm of processors, each of which is handling lots of "small" jobs (serving webpages, doing parallel transactions etc.), and possibly a farm that has variable loads (lots of requests during US business hours, quieter in the evenings), then there are a bunch of variables ... one of the important ones is how much electrical power (and so, how much air conditioning, how much UPS etc.) you need ... a major advantage of ARM chips is that they were originally designed for scenarios where using the least power possible to complete a task was a high priority (mobile phones, battery-powered embedded devices, USB-powered devices etc.) ... this should mean that if you compare the number of transactions per kilowatt between an ARM server and, say, a Xeon server, then the ARM server becomes a lot more desirable.
In any business, whether you are selling cloud computing out to external customers or just running your own in-house build servers, cost is going to be important. Anything that brings down the cost per transaction is going to be desirable ...
... and if less power is used, then that's probably good for the planet too :-) (YMMV!)
And if you need less cooling, then your server room can be smaller and cheaper too!
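A back-of-envelope way to frame the transactions-per-kilowatt argument above; every number here is a made-up placeholder, to be swapped for measured figures, not a benchmark of anything:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical illustrative figures only. */
    double arm_tps  = 4000.0,  arm_watts  = 120.0;
    double xeon_tps = 12000.0, xeon_watts = 600.0;

    printf("ARM : %.1f transactions per watt\n", arm_tps / arm_watts);
    printf("Xeon: %.1f transactions per watt\n", xeon_tps / xeon_watts);

    /* The raw-TPS winner and the TPS-per-watt winner need not be the
     * same box, which is the whole case for ARM farms. */
    return 0;
}
```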
"a major advantage of ARM chips is that they were originally designed for scenarios where using the least power possible to complete a task was a high priority"
Nit-pick: ARM was *originally* developed for the second BBC micro, the Archimedes, and power consumption was not really a huge concern for what was a mains-powered desktop machine. Modern ARM chips are designed for low-power applications, but the fact that the architecture supports this is more serendipitous than anything else. There was an article on El Reg a while ago which told of how surprised they were when they powered up the first ARM prototype and discovered it used almost no power. It might seem obvious in retrospect, but it was a surprise even to the geniuses at Acorn, because they didn't design for power consumption.
The Baserock Slab ARM-application build server is a huge boost for development work. But as others have pointed out, the proof of the pudding comes when we see how fast the applications developed on it run on the same hardware. If cross compiling is less than 10% efficient, a native ARM at 40% speed compiling twice as fast as a Core i7 might be >50% as effective. Would a 32-core ARM server function as well as a 16-core i7, and what would the difference in power draw, etc. ... TCO be for the customer?
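Working those numbers through, with the caveat that the 40%-of-an-i7-core figure is the comment's own assumption, not a benchmark:

```c
#include <stdio.h>

int main(void)
{
    double arm_vs_i7_native = 0.40; /* assumed: one ARM core = 40% of one native i7 core */
    double arm_vs_i7_qemu   = 2.0;  /* the article's claim: ARM core is 2x an i7 core under QEMU */

    /* Implied emulation efficiency: the i7 under QEMU must be running
     * at 0.40 / 2.0 = 20% of its native speed. */
    printf("implied QEMU efficiency: %.0f%%\n",
           100.0 * arm_vs_i7_native / arm_vs_i7_qemu);

    /* The 32 ARM cores vs a 16-core native i7 box, in i7-core equivalents. */
    printf("32 ARM cores ~= %.1f native i7 cores\n", 32 * arm_vs_i7_native);
    return 0;
}
```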