It's SO CUTE... I want to hug it :)
Not sure if I'd put it in a datacenter though... kinda like I would do anything important on a netbook
Motherboard and whitebox server maker Super Micro Computer has launched a rack-mounted server that's based on Intel's Atom processor. This doesn't mean that low-powered Atom and similar processors - like VIA Technologies' Nano and C7 chips - are suddenly going to take over the data centers of the world. But as some vendors …
... that we are forced to buy from a stupid list of IBM stuff. We are buying massively overpowered hardware for specific jobs that we don't want to virtualise, where this sort of server would be perfect (e.g. DHCP for a few thousand people who are not permitted to connect to any part of the core network). The energy wastage of our, and no doubt most, data centres is frankly scary.
That system pictured looks like a pretty stupid idea to me. I'd be much more interested in something like a 2U full-depth system that has, say, ten Atom-based servers sharing two redundant power supplies. You could probably get away with using something like 500W PSUs with 2.5" disk drives.
Sort of like the Rackable MicroSlice (6 single-CPU servers per 1U) or the Supermicro 2U Twin systems (4 dual-CPU servers per 2U).
I can see the point of this: there's no point running huge machines that aren't doing very much, and as much as people like to bleat on and on about VMs being the best thing since sliced bread, I'd say there's very much a place for small machines doing small tasks.
The problem is they went and whacked two gigabit NICs on there when there's no way in hell the Atom/chipset can deliver that level of throughput.
"kinda like I would['nt] do anything important on a netbook"
What? Like run the (intelligent) lights in a £3.5m nightclub? My NC10 doesn't have a problem with that (it's been used a couple of times as a back-up when the main PC decides to have one of its periodic fits!). It's also quite fond of being used to DJ with in said nightclub, with a dedicated hardware soundcard/controller. Not that any of that is slightly important...
First thing I notice in this one is the fan working against the top of the case instead of having a blower. Bad design. There is plenty of space to put a low-rev, big blower forward of this to take in air from the front. Similarly, what exactly prevented them from putting the power supply into a separate compartment to ensure it gets correct airflow? The disk has no cooling airflow and is left to stew...
Bad... Absolute hodge-podge...
I would rather use one of the much better mini-ITX rackmount cases already available out there instead of using this.
A lot of servers sit there doing almost no processing - they exist to serve and backup files from their large hard disks onto the network, and do nothing else.
Simply serving files doesn't take much processing power - the bottleneck is in the ethernet cable (gigabit or no).
This is never going to replace your SQL back-end and the like, and those take decent iron, but if they get the price right it's a killer for the pure fileserver applications.
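As a rough sanity check on the "bottleneck is in the ethernet cable" claim, the theoretical ceiling of gigabit ethernet works out to about 125MB/s before protocol overhead (the figures below are just the unit conversion, not a benchmark):

```shell
# Gigabit ethernet wire speed converted to MB/s (decimal units):
# 1 Gbit/s = 1,000,000,000 bits/s; divide by 8 for bytes, by 1,000,000 for MB.
echo $(( 1000000000 / 8 / 1000000 ))   # prints 125
```

Real-world CIFS/NFS throughput lands well below that once protocol and disk overheads are counted, as a later comment's measurements show.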
I've been hoping a big-name manufacturer would get round to some Atom blades for a while. I wanted them a lot more dense, say eight cores on a blade, but then you'd probably need fans. The idea of a tower with several fanless Atom blades as a branch office server is very interesting.
Now, let's see what HP and IBM do in response!
This is the perfect machine for those firewall / email / proxy / NAT boxes that I've been using in small and medium businesses, to be switched on and forgotten.
Just install Debian on them, set up an apt-get update && apt-get upgrade in crontab, and connect to them once a year to get a status report and do whatever maintenance can't be automated.
Perhaps it's time to replace those power-hungry ageing computers.
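A minimal sketch of the hands-off setup described above; the schedule, log path, and flags are assumptions rather than anything from the original comment. The line goes in root's crontab (edit with `crontab -e`):

```shell
# Hypothetical crontab entry: every Sunday at 03:00, refresh the package
# index and apply upgrades. -qq keeps output quiet, -y answers yes to prompts.
0 3 * * 0  apt-get -qq update && apt-get -qq -y upgrade >> /var/log/auto-upgrade.log 2>&1
```

The log file gives you something to read on that once-a-year visit.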
Where is the market for these servers? For single servers, who will invest in a 'non-standard' form-factor rack? For small numbers of servers it makes more sense, but is there really any great advantage over normal servers and racks? For larger numbers, the standard kit becomes quite cost-effective anyway. For specific deployments such as HPC, is the connectivity up to scratch? If it's about cramming computing into a small space, many workloads lend themselves to virtualisation. Some don't, but then what size deployment would it need to be to make these servers pay off? Basically the question is: who is going to benefit from the smaller size who cannot already benefit from standard servers with specific setups tailored to the work?
I already have some Atom based servers I'm co-locating. Great little boxes I put together myself. And for the purpose they're serving - not a hugely CPU intensive function vs. the power they consume, they're doing just fine.
Personally I think we need to get back to the bad old days when we really were CPU-limited (and memory-limited) and start to optimise things to eke every last ounce out of the hardware, rather than blindly pouring more CPU, memory and electricity into a problem... More and more data centres are now charging for electricity - which is better: one hulking server sucking 350 watts, or a few smaller ones sucking 40 watts each?
"""Simply serving files doesn't take much processing power - the bottleneck is in the ethernet cable (gigabit or no)."""
Actually I've found that on a single-core Atom, the CPU is the limiting factor in network speed. That's with a 3-disk software RAID 5 which can read at about 120MB/s, but which yields only 20-25MB/s using CIFS (Windows networking). NFS gets between 30 and 40, and AoE can manage somewhere in the neighbourhood of 55MB/s. The CPU is, of course, pegged the whole time.
I know it doesn't sound like a network file protocol should take much cpu time, which is mostly true. The issue is that between issuing the syscall to read or write and having the action complete, the kernel is locked and nothing else gets to use the CPU. That means the network file server process has a vanishingly small amount of CPU time to work with. And the problem isn't helped when two processes are reading at the same time, causing lots of seeking, and thus more io-wait cpu time loss. In Linux I've seen that a dual core helps, since only one core is totally blocked at a time, and the other is free to actually process things. I haven't tried that with an Atom system though.
Of course I have no idea how io locking works in the Windows kernel, but evidence suggests it doesn't do as well as Linux, since I could only get 7-9 MB/s running WinXP, and 6-8 with Server 2008.
I really don't think these systems are all about economy, since once you lump in a 300W PSU and some drives, the difference between an Atom system and, say, something based on a mobile Core 2 (Duo) isn't terribly significant. The Core 2 would cost quite a bit more, but it'd give you a seriously more modern platform to play with.
Biting the hand that feeds IT © 1998–2021