What's really depressing is that most people there have better home setups than I have deployed at work in production. (It's only an SME, but still...)
Then again, I'm near the same point at home so I can't really comment.
A few weeks back VMware’s Mike Laverick told this hack about his home lab and how he used it to further his career. Plenty of you revealed your own home labs in response, so we launched the servers’n’sofas challenge to report on the very best domestic data centres. We’ve sifted through the responses and can now bring you the …
Your home rig has the following:
- No dust
- You worry that even though you've got dual power lines, they're i) 13A, not three-phase, and ii) coming out of the same socket
- Your patch cables are bought by length (or, better still, custom made), to ensure no unnecessary slack at the bottom of the cabinet. Naturally, they are all colour-coded from the same reel. You notice when one is not the right shade of blue, so you replace it.
- The patch panel itself is also obsessively tidy
- Far too many cable ties in use... but damn, it's so tidy and perfectly routed that you'd be diagnosed as OCD by a non-IT person. You actually look forward to adding a new U, because it means you can redo the routing, and perhaps improve on it.
- You worry about airflow, and wonder how you'll get cold air retro-fitted into the garage floor
- You used a spirit level to set up the cabinet, and a ruler to measure the height from the floor
- You actually DR test your cabinet, and actively monitor your UPSes for capacity. Even if you only have 4U's worth of kit.
- You keep the 1U blanks in place for empty slots, rationalising this decision for "airflow" reasons
- Your server fronts are aligned to the millimetre in all 3 dimensions. And the plastic blanks too. That plastic front panel that sits at a 0.5° angle because of a poor fit actually has you considering replacing it.
I see that the photo of my set-top box seems to be missing; probably a conspiracy of some kind, you know how it goes. Here it is:
(Served from the machine, naturally.)
The writeup also neglects to mention the 100% organic hydrocarbon biodegradable backplane. In other words, it's nailed to a plank.
End-to-end carbon fibre with over one million transfer microchannels per square centimetre via its unique Xylem™ architecture. It's capable of having dozens of cores, although if you put too many in it gets a little unstable and might have a tendency to collapse under heavy load. It has ultralow power consumption and the temperature is usually barely above ambient even when working hard.
I never did get a visit from the police, although I think they might have been confused - I used to refer to it as "the farm" (as was the custom with SETI clusters at the time).
"Yes officer the farm is up in the loft..."
/me hears click of handcuffs, despite the carpet suddenly muffling my ear.
The greatest thing with Fleabay is being able to pick up ex-corporate stock that would have cost four or five figures five years ago for sometimes less than three. And a lot of that stuff still has plenty of grunt for the money. Where it does fall down is the power consumption and heat output. I managed to reduce several HP G3 and G4 servers to a single AMD 6100-based VM server, saving over 500 watts with no noticeable performance loss. So you have to be careful with your purchases and compare old with new.
But good to see I'm not the only one with a bad ex-corp electronics habit.
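If you want to put a price on that sort of saving, here's a rough sketch of the sums; the 15p/kWh tariff is purely an illustrative assumption, not a quoted figure:

```python
# Rough annual cost of a continuous load, e.g. the ~500 W saved by
# consolidating several HP G3/G4 boxes onto one VM host.
# The tariff below is an illustrative assumption.

def annual_cost(watts: float, pence_per_kwh: float = 15.0) -> float:
    """Return the yearly electricity cost in pounds for a 24/7 load."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * pence_per_kwh / 100

if __name__ == "__main__":
    saving_watts = 500
    print(f"{saving_watts} W around the clock is ~{saving_watts / 1000 * 24 * 365:.0f} kWh/year")
    print(f"Roughly £{annual_cost(saving_watts):.0f}/year at 15p/kWh")
```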
"Where it does fall down is the power consumption and heat output"
And the noise. I picked up a Dell PowerEdge 4400 at a garage sale a couple of years ago for just a fiver! A nice machine but, with three plug-in PSUs plus large internal fans, it's like a jet aircraft when running.
I've got 1U servers which are useful, being easy to carry around, but again they aren't quiet.
"I've got 1U servers which are useful, being easy to carry around, but again they aren't quiet."
As I understand it, it's because of the fan sizes. Lots of little ones are far noisier than a few large ones shifting the same volume of air.
A DL360 is way louder than a DL580, even if both are idling.
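If anyone wants to put rough numbers on that, here's a back-of-envelope sketch using the common scaling assumption that airflow is roughly proportional to rpm times diameter cubed; the figures are illustrative, not measured:

```python
# Why a 1U box full of 40 mm fans screams while a bigger chassis with
# 120 mm fans purrs. Assumes airflow ~ rpm * diameter^3 (rough fan
# scaling law); purely illustrative numbers.

def rpm_for_same_airflow(ref_diam_mm, ref_rpm, small_diam_mm):
    """RPM a smaller fan needs to move roughly the same air as the reference fan."""
    return ref_rpm * (ref_diam_mm / small_diam_mm) ** 3

if __name__ == "__main__":
    big = (120, 1000)        # 120 mm fan at 1000 rpm (typical tower/4U chassis)
    for d in (80, 60, 40):   # progressively smaller server fans
        rpm = rpm_for_same_airflow(*big, d)
        print(f"{d} mm fan needs ~{rpm:,.0f} rpm to match a {big[0]} mm fan at {big[1]} rpm")
```

Noise rises steeply with fan speed, which is why the small, fast fans end up so much louder for the same airflow.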
"And the noise."
That was my main problem when I bought a pair of DL380G4s to replace a couple of home servers based on desktop hardware. One of the old desktops (which happened to be my mail server, causing great distress to my other half) had blown its PSU while I was on holiday, which prompted the move to real server hardware.
When I got them home (and installed in my new rack), I found the noise level... unacceptable. Or rather, SWMBO found the noise level unacceptable. I wasn't really too surprised, as they would drown out the noise from our vacuum cleaner from the next room. I did, however, find a fix.
The first part was to retire one to the duty of parts repository, and keep any non-essential services on a desktop-based server. The second involved a soldering iron, some heatshrink sleeving, some resistors and every fan in the system (8, I think). Then, after a few days of careful monitoring in mid summer, I determined that the heat levels were acceptable so long as I left the window latched open.
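For anyone tempted by the same mod, the sums behind the series-resistor trick are just Ohm's law. A rough sketch with illustrative figures (not the actual values I used):

```python
# Series-resistor fan mod: dropping a 12 V fan to a lower voltage to slow
# (and quieten) it. Illustrative figures only; a real fan's current varies
# with voltage, so treat this as a first cut before measuring.

def series_resistor(v_supply, v_fan_target, fan_current_a):
    """Resistance and dissipation needed to drop the fan to the target voltage."""
    v_drop = v_supply - v_fan_target
    r_ohms = v_drop / fan_current_a          # Ohm's law: R = V / I
    p_watts = v_drop * fan_current_a         # power burned in the resistor
    return r_ohms, p_watts

if __name__ == "__main__":
    r, p = series_resistor(v_supply=12.0, v_fan_target=7.0, fan_current_a=0.5)
    print(f"~{r:.0f} ohm resistor, dissipating ~{p:.1f} W (use a generously rated part)")
```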
I'm actually in the process of heading the other way, now. All my external services are hosted elsewhere, and I am in the middle of consolidating all my systems into one desktop based server. At least it's now in a rack mount enclosure, with custom dual PSUs and two UPSs. The spare hardware, well... It will probably join all my other spare hardware in the cupboard, ready to fire into life when something else goes wrong.
As a side note, I'm not looking forward to the other half noticing the drop in our leccy bill when I switch off the 380. She noticed the rather large increase when it was installed, but didn't figure out what was causing it. I'm pretty sure she will go mad when she realises how much juice that thing was drinking...
A single modern PC can't reliably give you a testbed upon which to develop datacentre-scale applications.
Well. It could, but I think after the first few VM instances things would start resembling treacle flowing uphill in Winter.
Sometimes, to test how a highly-parallel application will work on a lot of nodes, you need.. a lot of nodes. Preferably nodes that you own, that can't be sniffed out by Amazon and your next big idea ripped off and rolled into an Amazon service.
That was my attitude to driving lessons. It was cheaper to just buy a train ticket and the train was much faster than a car - so I would be much better trained for my test than all those people who had driven at 20mph in an old donated car.
We did have a server, an extra external SCSI drive case, a firewall/router PC, and a PC acting as a packet radio node controller and router. A visitor to the bathroom was sorry he asked about the strange noises.
Down to a single server with mirrored SATA. Still in the attic. Power cord to a UPS on a kitchen shelf via the airing cupboard.
For what it's worth, I have signed letters from a director authorising the removal from the building of the donated kit I have.
The letters - which I wrote for him - include the model and serial numbers. There's also a clause that says if they ever want them back (very unlikely), they're welcome to them.
My home lab: a couple of laptops, one for storage and a newer one for a small ESX server, and a desktop as a router. I would hate to see the power bills of some of these setups!
I tend to spend most of my lab time on the work machines, since I am lucky enough to work in an underutilised office and so can use the existing ESX infrastructure to play with.
I used to have something similar, but now I have lots of VMware images on one quad-core, 24GB workstation.
Disk share on the host, and all kinds of Linux (openSUSE 11.0, 11.4, 12.1, 12.2, 12.3; CentOS 5 and 6), WinXP for all kinds of work VPNs, a Win7 for gaming, and different work images of our software so I can test at home. All in all I have 23 images on 2x2TB disks, all running on VMware Workstation 8.0.6. Fortunately I never need to run more than two or three VMs at once.
I used to hate the frequent upgrades with openSUSE, because I have a rather tricky MediaWiki setup with some heavily configured plugins, and getting it to work on a new system takes a lot of time. Now I just clone the image and start upgrading while the old one is still running, until the new one is ready. If it stalls, I just start on a new clone.
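For the curious, the clone-first trick is nothing fancier than copying the powered-off VM's directory and upgrading the copy. A minimal sketch, assuming the guest lives in its own directory (the paths and names are made up, and this is a plain cold copy rather than VMware's own clone mechanism):

```python
# "Clone before upgrade" for a VMware Workstation guest: shut the VM down,
# copy its whole directory, then upgrade the copy while the original keeps
# its known-good state. Paths and names are illustrative assumptions.
import shutil
from datetime import date
from pathlib import Path

def cold_clone(vm_dir: Path) -> Path:
    """Copy a powered-off VM's directory to a dated clone directory."""
    clone_dir = vm_dir.with_name(f"{vm_dir.name}-clone-{date.today():%Y%m%d}")
    shutil.copytree(vm_dir, clone_dir)   # vmdk, vmx, nvram, logs, the lot
    return clone_dir

if __name__ == "__main__":
    clone = cold_clone(Path.home() / "vmware" / "opensuse-wiki")
    print(f"Upgrade the copy in {clone}; the original stays untouched")
```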
Gaming in a Win7 VM on a Linux host works fine. The most demanding game I've played has been Skyrim.
I played to level 67 and put 135 hours into it, so it was a fine experience.
But you need an Nvidia card. I tried an AMD card and it just froze; VMware complains that there are issues with that card/driver on a Linux host when you start the VM in accelerated mode.
I consider the $179 VMware licence money well spent, and well within a home budget.
I've also used Xen and VirtualBox, but they are not in the same ballpark, especially considering performance and hardware-accelerated graphics on guest VMs.
... to remember how your home network is set up, then you know you're in trouble.
I dread the power bills these people must get.
I appreciate there is value in having lots of gear at home to play with, because I have a small but growing lab of Juniper and Cisco routers, but it's only on as long as I need it. Plus it's just too damn noisy to leave running.
Over the past 6 or so years my home network has significantly shrunk. I used to have two routers, two switches, and two servers. Now all I have is a single combo router/firewall/switch device (a Juniper SRX110) and a single box running ESXi. When I go to work all day supporting this stuff in "the real world", the last thing I want to do is come home and support more of it.
And on that note I leave you with: https://www.youtube.com/watch?v=LcOTcJUh8gA
I have now downloaded all the current big hypervisors (Xen, ESXi, Hyper-V) and am pricing up smart switches and cheap CPU/mobo combos that can handle 16GB+ RAM.
I have an old Phenom II 920 (quad-core, 8GB of RAM) but I'm not sure that'll suffice these days. Maybe as a backup node... although it'll need an Intel server NIC for VMware to see it (damn you, Nvidia, and your poorly supported NICs).
I think I might suggest work pick up a nice 48-port smart switch with PoE/L3 management (should suffice, and not too pricey these days) and then move my lab in there.
In these days of tightening belts, I'm going through the process of replacing the servers in my cellar with Raspberry Pis!!! Do I need a multi-processor uber-server to run BIND, or DHCP, or even the mail/web servers? Nope.
According to my reckoning, when I'm done I'll have 4 x Pis doing all of the work that 2 x IBM dual quad-core Opteron servers are currently doing. Yeah, they'll be slower, but they'll be muchos quieter, and instead of sitting there drawing 300W each, they'll be using a total of about 30W, and silently. In a much smaller enclosure.
That's not to mention the 2 x P4 boxes providing firewall, Asterisk, and other services, which will also be covered by the 4 x Pis.
If I'm careful, I may be able to reduce it to 2 x Pis!!
All that money I'm going to save!! Bigger beer budget!! Has to be good.
AND I could reduce the size of the UPS - or just live in the smugness that my servers will keep running for a couple of days on battery, instead of about 4 hours for the existing setup.
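Rough sums behind both claims, with the UPS battery capacity back-calculated as an assumption from the "about 4 hours" figure above rather than taken from a datasheet:

```python
# Rough numbers behind the Pi migration: power saved, and how long a UPS
# lasts at the new load. Battery capacity is an assumed illustrative figure.

OLD_LOAD_W = 2 * 300      # two IBM dual quad-core Opteron boxes at ~300 W each
NEW_LOAD_W = 30           # four Pis plus supporting bits, ~30 W total
UPS_CAPACITY_WH = 2400    # assumption: roughly matches ~4 h at the old load

def runtime_hours(capacity_wh, load_w):
    """Very rough runtime estimate, ignoring inverter losses and battery curves."""
    return capacity_wh / load_w

if __name__ == "__main__":
    print(f"Power saved: ~{OLD_LOAD_W - NEW_LOAD_W} W continuous")
    print(f"Old setup runtime: ~{runtime_hours(UPS_CAPACITY_WH, OLD_LOAD_W):.1f} h")
    print(f"Pi setup runtime:  ~{runtime_hours(UPS_CAPACITY_WH, NEW_LOAD_W):.0f} h")
```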
I have a Sun T2000, 32 CPUs and 32GB RAM (Solaris 10), with a 2TB iSCSI NAS sitting on top, in the loft above the dining room. I have 3 wireless routers and four hubs, including a 64-port 1Gb/s one. An RPi sits under my main desk, powered by a home-made UPS, running Xymon, monitoring everything.