It's the price of the network cards that's surprising me
I remember when you had to pay that to get 10Mb/s...
In part one of The Register's Build A Bonkers Test Lab feature, I showed you how to build a test lab on the cheap; great for a home or SMB setup, but what if we need to test 10GbE? Part two is a wander through my current test lab to see how I've managed to pull together enough testing capability to give enterprise-class …
Since you like SuperMicro, I suggest:
http://www.supermicro.nl/products/accessories/mobilerack/CSE-M28E2.cfm
If you don't need the dual-port capability, the basic model is:
http://www.supermicro.nl/products/accessories/mobilerack/CSE-M28E1.cfm
4 bay version:
http://www.supermicro.nl/products/accessories/mobilerack/CSE-M14.cfm
(don't ask me why supermicro.com redirects me to .nl!)
Budget range:
http://intrl.startech.com/HDD/Mobile-Racks?filter_DRIVESIZE=2.5in&page=all
Seriously; we had the Icy Docks on order at the local computer retailer. They were all set to arrive... then didn't. Oh, but they'll arrive! They'll be here in time for your review! Nope. Really should have bought from an etailer instead of waiting for the local retailer to get kit in, but ya can't win 'em all...
"... still supported, a decade later." Not the Small Business edition. Making the outcomes even when the inputs are not even isn't fair judgement or open-mindedness by any stretch of the imagination. It only makes you feel like you are being fair or open-minded, especially when the majority are giving you that look that says you don't know what you are talking about. So go on and keep being fair.
Hahahaha. I feel for you though. My test lab is a pile of Pentium 4s, that I got for free, and it's only for home testing. A hobby. Not even paid. I've got a P2 in the cupboard, but I used that to test my drilling, hammering and Frisbee skills.
Mine is the one with a multi tool in each pocket for taking apart the obsolete tech to see "how it works". Only after it's depreciated to a penny or two though.
Just how many readers do you think have 60 grand to throw at a "test lab" that don't already know how to do the same thing cheaper and better?
Honestly, Reg, that's the sort of thing I see on gamer forums with idiots throwing thousands of pounds at Microsoft in order to run a storage array. I don't expect it on here.
I'd have been infinitely more interested in the article if it wasn't just "let's throw money at something that's not a problem if you have that kind of money", and even more so if it wasn't focused solely on MS (for which, the writer admits at the start, he doesn't have the correct licensing for any further upgrade - not surprising after the amount he's already spent on it).
Give us something "real world" that isn't aimed at people who could do a better job and write a better article than your own writers.
"Just how many readers do you think have 60 grand to throw at a 'test lab' that don't already know how to do the same thing cheaper and better?"
Just a couple of things. In case the headline didn't make it obvious enough, this is a bonkers setup: it's obviously an everything-including-the-kitchen-sink "dream rig", as the sub-headline states. Also, you don't have to buy the whole lot; the headline figure is tongue-in-cheek.
We write technical stuff and people complain that it's boring; we write fun stuff and people call it fluff. You can't please all of the people all of the time.
C.
Thanks Diodesign for the article. It's insightful even for novices like me who cannot network two PCs successfully or set up a Linux distro, let alone Server 2003 or a Linux-based one.
I would ask though, does it run Crysis? Oh... I mean, is there a reason the test system cannot cheat with disk arrays? Do you need to test all the disk setups in the test bed? Or can you get away with "virtualising" that bit to save some cash? Set up half as many disks, but in software pretend it's a duplicate? Or max out a 10Gb line by sending duplicate data from RAM instead of trying to feed it all from disk?
I guess that is no help, as the test is if the disks can feed the ram/cache that can feed the network? So virtualising the disks means you could hit an unforeseen problem when switching to real hardware?
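For what it's worth, the RAM-feeding trick is easy to sketch. Here's a toy Python example (all names and sizes are my own choices, and loopback only exercises the software stack, not a real 10GbE link) that serves a buffer straight from memory and times the receive side:

```python
import socket
import threading
import time

CHUNK = 1 << 20          # 1 MiB buffer, served straight from RAM
TOTAL = 64 * CHUNK       # 64 MiB per run; bump this for longer tests

def serve(srv):
    """Accept one connection and stream TOTAL bytes from memory."""
    conn, _ = srv.accept()
    buf = b"\x00" * CHUNK
    sent = 0
    while sent < TOTAL:
        conn.sendall(buf)
        sent += CHUNK
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
received, start = 0, time.monotonic()
while received < TOTAL:
    data = cli.recv(CHUNK)
    if not data:
        break
    received += len(data)
elapsed = time.monotonic() - start
cli.close()
print(f"{received / elapsed / 1e6:.0f} MB/s over loopback")
```

Swap the loopback address for a host on the far end of the 10GbE link and you've got a crude line-rate check with zero disks involved. iperf does this properly, of course; the point is just that saturating the wire doesn't need real storage behind it.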
Maybe I am off here, but the problem with this is it's a bit of a fantasy. Trevor seems to have access to a level of funding that is beyond the norm. While it is fun to talk 10Gbps and 60K, the harsh reality is that this is all fantasy beyond reality for most of us. We are in the real world, having to make do and mend and build at very minimal cost. I suppose it's nice to look in on this, but what fun is a project you can't actually do yourself, or partake in?
Also, I have to say, MS and Windows are in terrible trouble. The OS licensing, when in many cases you can do it for free, is a real-world challenge. What future for AD when you can't even hook a Surface up to it and group policy? What are you paying all these huge licensing costs for? For joined-up cross-device design? What with Intune and Azure?
Yes, it is a lot of fantasy. It is a Bonkers test lab, after all. But the stuff I detailed in Part 1 is realistic and achievable. The Kingston HyperX array should also be realistically achievable for most, if the "high speed storage" part of the equation appeals to you.
The 10Gbit network with added WTF was built as much to see "can it be done" as anything. My hope is that having such a test lab lying around will allow me to do better reviews on more relevant equipment for The Register than I would otherwise be able to do.
Do we want to limit ourselves to reviews of the latest iPhone or consumer home NAS? Or do we want to occasionally tear apart some bit of midsize gear or even enterprise kit? If we do want to be able to throw that more powerful equipment on the bench and give it a run for its money, someone is going to have to build a bonkers test lab. So I did.
While 60K is out of "most of our pockets", it's nothing to a business AFAIK. I've seen people sink more pocket money into a house buy/sell/rent project. If someone wants to make a server/system/service and it's not property but internet-based, well, hopefully this article lets them know it's possible.
Now... where to get funding for my Bonkers Server business for downloading cats eating burgers website... ;)
TechNet not enough to cover this lot (instead of spending $$$$ on Windows licences)? Where does TechNet run out of steam? To set up a test bed for AD/Exchange/System Center 2012 you'd be spinning up a fair few VMs - so where's the problem? Especially if you use the free Hyper-V for the physical boxes.
If you need to licence it properly, it's not really a test bed is it? Better to build a smaller local test bed and run the rest on Azure (some free time for that is included in an MSDN licence!)
More money than sense this man!
Paul
It's a testbed that needs licensing the instant I have to maintain "test" (or as we often refer to them "sandbox") copies of running instances. For example, my largest client has 250 VMs in production, among them there are 23 different "classes" of VMs. Each of these classes needs to exist in my testlab environment so that I can do things like test patches, the latest version upgrades to software and more.
In fact, this testlab just received its last components in the mail last night and they have already been pressed into service. That said, I use a single datacenter license to achieve this, and the rest of my lab runs Linux, as this is now the bulk of what I have deployed, and thus the bulk of what I have to test.
As for running the rest on Azure: no. For one thing, the cost of storage is too much, and my test labs often require the ability to access a significant subset of the live data for testing. For another, the laws of my nation do not allow me to store personally identifiable information in countries without robust civil liberties and privacy protections. That means the US is out, and trans-Atlantic data flinging in order to store in the EU is expensive.
I'll build my own private "cloud" thanks, and run my testlab requirements – and those I need to test the builds my clients have – on it. It's far, far cheaper over the expected 6 year life of this equipment.
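To put rough numbers on the "cheaper over 6 years" claim - every figure below is hypothetical, purely to show the shape of the amortisation sum, not Trevor's actual costs:

```python
# All figures hypothetical, just to illustrate build-vs-rent amortisation.
capex = 60_000            # one-off bonkers-lab build cost
lifetime_years = 6        # expected service life of the kit
running_per_year = 3_000  # assumed power, spares, misc per year

cloud_monthly = 2_500     # assumed cloud bill for equivalent VMs + storage

own_total = capex + running_per_year * lifetime_years
cloud_total = cloud_monthly * 12 * lifetime_years

print(f"own kit over {lifetime_years} years:  ${own_total:,}")
print(f"rented over {lifetime_years} years:   ${cloud_total:,}")
```

With those (made-up) inputs the owned kit comes out well ahead; the crossover obviously moves around with the monthly cloud figure, which is the number to argue about.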
"up to 32 gigs of RAM; the maximum currently supported by VMware's free hypervisor." - Use KVM on RHEL or Ubuntu - it scales better. If you want to pay for support, use RHEV and buy commonly available, enterprise grade x86 hardware.
A RAID card is not necessary, software RAID is all you need.
"Unfortunately moving virtual machines from node to node in this configuration is slow and frustrating." - Trevor, you show the example of using 10GbE; you can set up EtherChannel and/or use Gluster for additional resilience - that's the beauty of OSV. See the advantage of a Linux/DAS storage grid: the data is closest to where it is needed and you get resilience built in - not possible in a traditional SAN setup unless you spend a pile of cash.
"Being a recycled server" - you hit it on the head there. Using off-the-shelf hardware you can easily take one or more storage nodes out of the grid, replace motherboard/PCI cards/disks and put it back in again. This reduces TCO and allows you to plan your storage requirements more granularly.
http://www.linux-kvm.org/page/How_To_Migrate_From_Vmware_To_KVM
http://doc.opensuse.org/products/draft/SLES/SLES-kvm_sd_draft/cha.kvm.limits.html
Gluster is on my list for later in the year. And RAID cards have some distinct advantages over software RAID, specifically when you start pushing 1,000 megaBYTES per second or higher through them. Software RAID is fine if you RAIN (spread the load across a redundant array of independent nodes). It isn't so fine if you only have the equipment to build a single, reliable and eye-bleedingly fast storage node.
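A quick illustration of why the card matters at that speed: parity RAID in software means XORing every stripe in the CPU. A toy Python micro-benchmark of just that inner loop (stripe size, disk count and the pure-bignum XOR are all my assumptions; real md RAID uses SIMD kernels and is far faster, but the work still lands on the host CPU rather than a dedicated ASIC):

```python
import time

STRIPE = 1 << 20   # 1 MiB stripe per data disk (assumed)
DISKS = 4          # hypothetical 4-data-disk RAID 5 set
RUNS = 64

# Represent each stripe as one big integer so Python XORs it in C speed.
stripes = [int.from_bytes(bytes([i + 1]) * STRIPE, "little")
           for i in range(DISKS)]

start = time.monotonic()
for _ in range(RUNS):
    parity = 0
    for s in stripes:
        parity ^= s    # the XOR a hardware RAID card does in silicon
elapsed = time.monotonic() - start

data_mb = RUNS * DISKS * STRIPE / 1e6
print(f"XORed {data_mb:.0f} MB of stripe data in {elapsed:.3f}s "
      f"({data_mb / elapsed:.0f} MB/s of parity throughput)")
```

At the gigabyte-per-second rates Trevor is talking about, that XOR (plus the read-modify-write traffic on partial stripe writes) is CPU time stolen from the VMs, which is the case for offloading it.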
*shrug*
I did, but I couldn't get anyone to send me InfiniBand gear, nor are there local suppliers that offer it cheap. So if I did invest, it would a) be stupid expensive and b) leave me in deep ca-ca-poo-poo if anything went splort and I needed a spare ASAP.
I call this my "test lab," but I should point out that my "live" corporate VMs occupy 1/5th of this particular setup at any given time. (Actually, they fit just fine in a single Eris 1 node, but that's a whole other story...)
I would love to test, review and otherwise learn about InfiniBand. With luck, some will show up on my doorstep one day.
Are you asking about the UPS from the dark ages, or the terrible IBM rack? Or the built-like-a-tank-will-never-ever-ever-ever-ever-die orange chair of doom?
Don't question the chair. The chair is indestructible. (And it has an equipment seatbelt.)
For the record though, we spent all night racking stuff last night...
Surely if the end game is to build a test lab, then you will need to install an OS appropriate to what you are testing. Depending on what you are testing, this could be all Linux (various flavours), all Windows or, in most cases, a mixture. In my experience (20+ years), a corporate network is Windows-based, webfarms are Linux, and databases are MSSQL if you are using relatively small amounts of data and Oracle if you are using more.
Linux is also the weapon of choice when it comes to firewalls and proxies.
Like somebody has already mentioned - Windows is used in corporate environments because it is easy to install, use and maintain. There are a lot of people out there who know it, and therefore it is relatively cheap for companies to hire people to maintain it.
Imagine 64 Opteron cores and 512GB of memory creating almost any VM guest imaginable, for the cost of the hardware only. This is the modern-day reality of a Dell R815 and CentOS 6.3. I will not argue the religion of which operating system is better for this or that. I will share the most cost-effective way we have found to implement virtual guests, and then we can talk about guest management and hardened externally facing systems (ie. administration from the virtual machine console and no remote access to the guest over the network). Now we are starting to talk enterprise class!
Another option for cheap 10GbE would be the Brocade 6450-24, which is a 24-port 10/100/1000 switch with an optional 4x 10G SFP+ (I think there is a software licence requirement here). The base switch seems to run under $2k at the low end and has basic layer 3 abilities; the licence add-on seems to be in the ~$700 range (2 ports). I'm not sure if the switch includes 2 ports of 10Gb licensing or none.
I haven't personally used them though a co-worker has several deployed for our internal corp IT network, he has no complaints.
For my needs Juniper is far too complicated to work with, same for Cisco (Brocade is similar to Cisco). But I suppose if you're just configuring it once and not touching it after that, then it's not too terrible.
Eadon I understood what you were saying and agree
Yes, it is a very nice test lab if you happen to have surplus kit lying around, have some arrangement with the suppliers, or have some company profits that need to disappear.
If I was getting it all for free I wouldn't kick it out of bed, but then again I wouldn't spend money on this setup when there are better solutions that are cheaper.
The headline is a bit misleading as the hardware alone is much cheaper.
I'd love to see how several different on-premises cloud solutions perform on this kind of setup, notably:
- OpenStack
- Proxmox
- SmartOS with the "cloud" GUI from this guy: http://blog.smartcore.net.au
Certainly not a #FAIL article. True, I'm not the least interested in how well HyperV runs on that - but that does not diminish the value of the article or the information contained within it.
It's amazing how often commenters get bent out of shape by a title, instead of the comment. (Or by two paragraphs of an article, ignoring the entire rest of it.) *shrug*
That said... I now have a sexy testbed. I have requests from folks to test OpenStack and CloudStack. I already have plans to test Hyper-V and VMware. I will add your recommendations of Proxmox and SmartOS to my list. What's the point of putting such a lab together if I can't test the things on it that matter to our readers?
The FatTwin arrived. I was expecting it to come with a variety of configurations; apparently that didn't quite happen. Instead, I have 4 identical nodes: 2x Xeon E5 2680 w/ 128GB RAM and 2x 480GB SSD. Should be good enough to give any of the virty stacks a run for their money, no?
When the petty cash refills, I'll fill the other 4 nodes.
It's true. Every build/lab/howto I've read on the Reg is useless to me. I don't want to belittle those writing, but honestly, it does sound like a bit of filler. I mean, I could write you an article about how to set up the jack under my car and it might contain more useful tips than this. The best part about it was pointing out the switch. I'm looking for good 10GBASE-T hardware. Perhaps do a review of that. I'd read that.
As soon as I get good 10GBASE-T hardware, I'll review it. I should point out that a review of the Supermicro and Dell switches is coming up here soon (I am just putting it into the CMS now) and that the Dell switch in question does have a 10GBASE-T variant. (Albeit slightly more expensive.)
That said, if and when you have requests for things to review/do a how-to on etc...ask! I am (naturally) limited by what I can get my hands on...but I've been working hard to build a lab that will allow me the flexibility to do reviews on damned near anything. Maybe I can meet the request, maybe I can't...but I promise you, if readers ask for it, I'll do my level best to get hold of it and put it to the test.
You can also help by providing suggestions as to what tests you would like to see run. Contrary to popular opinion – especially that of the berate, denigrate and wail-like-spoilt-children crowd – I do this "reviewing products" thing mostly to try to help. Not every article will be thought-provoking or insightful to the totality of the readership, but I do hope that each one provides some benefit to at least some of them.
In the meantime, I'll poke some 10GBASE-T vendors and see if any are willing to have their switchen wrung.
I dunno. Blowing a lot of money on something outlandish just doesn't seem that interesting. On the other hand, it is pretty trivial to max out the more common tech; all it takes is a single spindle, really. What would be more interesting is seeing how easy it would be to "take it up a notch": how accessible is another level of performance above and beyond what's cheap and readily available?
There seems to be a big gap here and the interesting story I think is how you could make that gap smaller. My own SSD-free setup could probably benefit greatly from such an incremental improvement.
I have two 2.5-inch laptop drives mounted with Lego in my rig. One is a spare I had, so it's used for backup; the other is "about to die", so I'm testing it to see when it dies and using it as a kind of scratch/test disk for now. I don't mind if I trash the broken disk, so it's a good one to experiment on. :D
I'm not a hardware techie, but it would seem obvious to me that, unless all of the IT shops that you deal with are 100% non-Windows, your TEST LAB would need Windows in it at some point.
After all, TESTING would seem to be the point of a TEST LAB, whether or not you agree with the decision to use Windows.
Keep up the articles like this - while my professional sphere will never get this techy, it's an interesting read all the same.
For the large part, our test lab is a strange mix of recycled kit and the odd new bit here and there. We've an old PowerEdge R805 or two that are out of warranty doing the grunt work of running the virtual servers in the lab. For storage, they back onto a NetApp FAS2050 that used to be in our DR bunker. For one of the line-of-business apps, we have a pair of wheezing old IBM POWER5-based AIX boxen, and some associated support hardware. One of our plans is to eventually bring the various business apps up in there so that we can have a tiny, sandboxed version of the company's network that vendors can play around with and break instead of the production environment. :)
Anon to protect my paycheck.