DIY virtual machines: Rigging up at home
A brief look at virtual machines for home use resulted in several requests for system specifications and configuration details. It seems some of you would like to have a go at replicating my setup. The hardware is simple. The motherboard is an ASUS P8H67-I Deluxe, with an Intel Core i5 2500 CPU, two 8GB Corsair SODIMMs and an …
-
Wednesday 11th January 2012 14:48 GMT KSNAP!
My server used to be a netbook with a 1TB USB drive, used primarily for streaming movies, storage and downloads (legal of course :D). But I got a little fed up with how slow and limiting it was, so I came up with the idea of building my own vSphere 5 server. Here's the spec:
CFI-A7879 case (4 hot-swap drive bays + 1 internal only)
ASUS P8H67-I
Intel i3
8GB RAM
4x 1.5TB Samsung HD154UI HDs
Though be aware vSphere 5 does not recognise the RAID on this motherboard, so rather than spend more on another RAID card, I installed Windows 7 and run VMware Workstation and VirtualBox. Currently only running 3 VMs, but it runs a treat and I think it could easily cope with another 4 or 5.
-
Wednesday 11th January 2012 15:04 GMT 1Rafayal
I use HP ProLiant servers for my VMs at home - a bit cheaper and a bit noisier. I do a lot of blogging on various tools and systems, and without a VM handy for some situations it would be very hard for me to achieve my aims. I also use a VM as a VPN server, so that I can get my media on the go. That vSphere is free in some configurations is an added bonus.
-
Wednesday 11th January 2012 15:22 GMT Zebulebu
Nod here for Proliants as well
Got three ML115s with another one running openfiler as shared storage. Works an absolute dream running ESXi. Cheap as chips too - under a grand for the lot (though the array card is an absolute pig due to tiny amount of cache) Why anyone would ever run Windoze virtualisation at home when XenServer and ESXi are free and infinite orders of magnitude better is beyond me.
-
-
Wednesday 11th January 2012 15:04 GMT fiddley
RemoteFX
Anyone else running RemoteFX? How are you finding your performance? I've got a well-specced thin client hooked into a VDI instance of Win 7 Ultimate over gigabit, and the RemoteFX performance sucks ass. The display drivers keep crashing and recovering, and there's major lag on simple things like dragging a window. Launching Zune drags everything almost to a halt, and Silverlight web pages have a similar effect. This is on an otherwise clean install... Is this normal, or have I screwed something up? I did cut corners on the Hyper-V box processor, only an Athlon II X2 @ 3GHz, but it's otherwise mostly idle.
Thursday 12th January 2012 13:28 GMT NogginTheNog
Athlon X2
Have a look at maybe unlocking any extra cores on that: I got an X2 Black Edition, and found out that with a BIOS tweak I could unlock the two disabled cores (X2s are X4s nobbled for economies of scale). One of them kept crashing the machine, but one was fine, so I've now got an Athlon X3! 8-)
-
-
-
-
-
Thursday 12th January 2012 13:45 GMT Mark 65
@Ken
My point is more that, depending on the OS installed, it may not necessarily come up cleanly without prompting. I have certainly had issues with a Linux variant dropping its remote login permissions after an update, which meant I needed to attach it back to a screen to re-enable it. It is therefore perhaps better to use something like ESXi (or equivalent), which expects to be remotely administered.
-
-
-
Thursday 12th January 2012 12:16 GMT BobChip
Absolutely! Homebuilt i5 with 8GB RAM running Ubuntu Linux. It hosts 2 VMs (VirtualBox) running legal instances of Win XP: one for the Win-only CorelDRAW and TurboCAD, and the other for old Win games. Fast to boot and fast running. A third is on the way, to have a good look at Linux Mint. What more could you want?
-
-
-
-
This post has been deleted by its author
-
-
Wednesday 11th January 2012 16:43 GMT Trevor_Pott
Machine count
Household machine list: 2x HTC Desire, 1x Galaxy S II, 1x Moto Droid, 1x Galaxy Tab Classic, 2x Transformers, 2x Samsung NF210, 1x Alienware MX18, 1x virtual server, 1x ridiculous over-the-top Core 2 Quad gaming desktop, 1x holy [bleep] dual-CPU 8-core 64GB RAM server --> Gaming Rig Of Doom, 2x NAS, 2x WiFi routers, 3x consoles.
As much as that may sound like a bit of a list... it covers 3 people's worth of stuff. Considering that I probably maintain upwards of 1,000 endpoints and 5,000 servers in the field - about 200e/50s of them with the help of another sysadmin - the home setup is pretty small potatoes. Still, it's nice that it all "just works."
Wednesday 11th January 2012 19:13 GMT Anonymous Coward
The Old Days
When I /worked/ with computers, I wouldn't have one in the house. At all. Ever!
Now I spend twice as many hours in front of one as I did before I retired.
Where virtualisation is concerned, I begin with the attitude that a good, multi-tasking operating system should be able to do it all anyway, without the further overhead of virtual machines, so, for me, the ideal machine would have zero virtual babies.
But life is not ideal. My complete resignation from all such things requires, at least, a monthly exception, because there is something about my Ubuntu/Firefox/Java combination that doesn't work with my online banking site. This calls for the XP virtual machine. Trial versions of Linux also call for virtual machines. Virtual machines are superb for sandboxing, or for running different versions of whatever OS needs to be given a whirl. All of those, however, get run from time to time, as and when needed. None of them run concurrently.
Having "grown up" with the power of Unix, I /expect/ that my Linux machine will do all that is required of it in terms of file/print/etc serving, and it does (but hey, the home network is, err... a whole two machines!). It runs Samba for one other machine to file-serve (just as my RS/6000 ran Samba for 40 machines to share, as well as an accounting package and two database engines).
My 'downsides' of virtual machines (VM VirtualBox), at least when the host is Ubuntu and the guest is XP: poor file access performance; sound goes through a virtualised device and the quality is poor; no access to PCI devices (they say it is coming)... that is all I can think of just now. Otherwise: much quicker to boot virtual XP than reboot into the actual WinXP partition, and pretty much everything, in so far as this can be said of Windows, does /just work/.
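Since Samba file serving for a one- or two-machine home network comes up a few times in this thread, here is what a minimal share definition looks like; the share name, path and user below are illustrative placeholders, not anyone's actual configuration:

```ini
; /etc/samba/smb.conf (fragment) - a single writable share for one user
[shared]
   path = /srv/shared
   valid users = fred
   read only = no
```

The user still needs a Samba password set (`smbpasswd -a fred`), and smbd needs a restart after edits.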
-
Thursday 12th January 2012 11:07 GMT Jim 59
Sys Admin overhead
Even a device as humble as a Rockboxed MP3 player requires some sysadmin overhead. And just a handful of servers can fill your time if peopled by a busy and demanding user base. This is perhaps less so in the Windows world, where things are more off-the-shelf, I guess.
The sysadmin burden generated by a server landscape depends on many things - if it is homogeneous, if it is non-production, that helps. But I have yet to see a sizeable landscape, real or virtual, that does real work 24x7, without generating a maintenance overhead. Even "lab" systems that nobody really cares about need some love.
Virtualising systems can reduce the overhead in some ways, but increases it in others. Sure, you can bump the CPU count with a single mouse click. But you end up with an awful lot of servers depending on the same kit, and sometimes on each other. E.g. cloned VMs often have an enduring dependency on the source object, as can be the case with Solaris LDOMs cloned with ZFS - they all depend on one snapshot.
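That snapshot dependency is easy to demonstrate with the standard ZFS commands; the pool and dataset names here are made up for illustration:

```shell
# Clone a golden VM image; the clone shares blocks with the origin snapshot
zfs snapshot tank/goldvm@deploy
zfs clone tank/goldvm@deploy tank/newvm

# 'zfs destroy tank/goldvm@deploy' now fails: the clone depends on it.
# 'zfs promote' reverses the parent/child relationship so the origin
# dataset (and eventually the snapshot) can be removed:
zfs promote tank/newvm
```

Promoting every clone removes the shared dependency, but also gives up the space savings that made cloning attractive in the first place.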
-
-
-
-
This post has been deleted by its author
-
Thursday 12th January 2012 04:52 GMT Anonymous Coward
I want to cheer for AMD
...but for an always-on machine I'd go with the i5 2500 Trevor is using. It is much more power-efficient for a use case like this, and if you're in the States, Microcenter has the unlocked version on sale for $179.
Personally, I'm about to build a file server / HTPC with an E350. I like to see how cheap and efficient a solution I can put together for a given problem, and am looking at that i5 2500's successor (and a new AMD GPU) for when I finally upgrade my workstation later this year.
-
-
Wednesday 11th January 2012 15:41 GMT Anonymous Coward
Personally
I use a Gigabyte board (I can't remember which); it's got an AMD 64-bit chip of some sort and has been ticking along for about three years. The PSU is a similar 80+ high-efficiency jobbie. The disks are four Western Digital Greens (2x 500GB, 1x 1TB and a 1.5TB for the backup VM only). The software is ESXi 4.1, which was frigged slightly to see the disk controller and actually runs from a memory stick, which should make upgrading the hypervisor a bit easier.
My main peeve and word of caution is that after about three years my motherboard is coming up for replacement. The sole reason is that it can't take more than 8GB of RAM and I really need to upgrade to about 16GB. This would be my main caution for people looking at designing this sort of system: make sure it can take about four times as much RAM as you think you'll ever need. The processor really isn't an issue for most installations, but the RAM isn't really negotiable for modern OSes.
Wednesday 11th January 2012 15:45 GMT Amonynous
Old Asus A8N mobo with Athlon 64 X2 processor, 4GB RAM, recycled from a retired games machine, plus a 500GB hard disk and a 20 quid case. Have run various incarnations of ESXi and latterly vSphere on it quite happily. The only wrinkle was that I wanted to use it as a firewall, and VMware supports one of the two onboard NICs but not the other (different chipsets powering each), so I had to spend 20 quid on an Intel NIC card that was supported.
It runs three Linux server VMs happily - no GUI, just console: 1x Ubuntu LTS as a mail server, 1x Ubuntu LTS as a file store for backups and 1x SmoothWall VM appliance. Also has a couple of archived Windows XP VMs in case old files are needed, but the host struggles for RAM and performance when either is powered up - lack of RAM and processor horsepower, I think.
There are only two issues:
1. Had to buy a new PSU when the old one blew last year, but then it did come pre-installed in the 20 quid case, so I'd had my money's worth out of it.
2. In older versions of ESXi, the upgrade process for us freetards was painless, as you could do it all from the GUI in the VMware client software. Since the move to vSphere, you can only do GUI-based upgrades in the full-blown (paid) management suite; otherwise you have to install the command-line tools on your desktop and try to remember the semi-arcane process from the last time you did it (which I never can).
(On a side note, I got a new Windows 7 laptop recently, but then needed to use some hardware with drivers that were only available for XP. I'm using Home Premium, so no free XP Mode like you get in Win 7 Professional and Ultimate. Never mind; I had a spare non-OEM Win XP licence, so I downloaded Microsoft Virtual PC and installed it with no problem. Then it was time to run Windows Update - THREE DAYS LATER I had finally got the thing up to date with every patch. And that was by manually checking for updates over and over and over again. Dread to think how many weeks it would have taken if I had just left Windows Update to do its thing.)
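For what it's worth, on ESXi 5 the free-licence upgrade can also be driven from the host's own shell with esxcli rather than the remote CLI tools; the datastore path and profile name below are placeholders you would substitute from your downloaded offline bundle:

```shell
# See which image profiles the offline bundle contains
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-update.zip

# Apply one ('update' keeps existing third-party VIBs, unlike 'install'),
# then reboot the host
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-update.zip \
    -p ESXi-5.0.0-standard
reboot
```

This needs SSH or local shell access enabled on the host, and the VMs powered off or the host in maintenance mode first.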
-
-
Wednesday 11th January 2012 16:47 GMT Trevor_Pott
@Audrey S. Thackeray
Identical specs, runs 2x PVMs, and a CentOS LDAP DC + Samba DFS for 10 people. (Front-ends about 25TB of storage.) It's not enough. The RAM requirements for the Samba DFS really should be about 16GB on its own. If you upped it to a Micro-ATX board with 32GB of DDR3, you'd be gold. The CPU/network isn't the limitation. The two SODIMM slots are.
-
-
-
Wednesday 11th January 2012 16:48 GMT Anonymous Coward
This is all making me envious
FreeBSD doesn't really handle many VM technologies (well, not as host), and I went with FreeBSD for ZFS. My home server has 12x 1.5TB drives in it for ~16TB of raidz storage: 6 off the onboard SATA ports, 6 from 3x 2-port PCI-e cards. All built with commodity/consumer parts; excluding the disks I think I spent about £450, largely on the case.
I record a lot of TV content and archive it - I don't like deleting media; I have an archive of about 5 years of MOTD. The jump to HD makes disk space disappear at an ever-increasing rate. I think I will need another 6x 3TB disk expansion soon to add another 15TB, although prices need to come down a bit before I'll bite!
I do export an iSCSI target on which I have installed Windows 7, which allows my desktop to boot and run over the network; I can then backup/archive/snapshot it using standard ZFS tools, which is pretty nifty.
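A sketch of how a layout like that might be built on FreeBSD; the pool name, device names and zvol size are illustrative, not the poster's actual configuration:

```shell
# 12 drives in a single raidz vdev (one disk's worth of parity across the set)
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5 \
                        ada6 ada7 ada8 ada9 ada10 ada11

# A zvol to export over iSCSI as the desktop's network boot disk
zfs create -V 120G tank/windesk

# Standard ZFS tooling then covers backup/rollback of the Windows disk
zfs snapshot tank/windesk@nightly
```

The iSCSI export itself is configured separately in the target daemon (istgt on FreeBSD of that era), pointing at the `/dev/zvol/tank/windesk` device.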
Wednesday 11th January 2012 17:59 GMT Chris_Maresca
You don't need big hardware
I have 2 Linux VMs running on an old laptop with 4GB RAM under VirtualBox. And because the laptop has a high-end Nvidia card, I can also run XBMC on it.
I paid $110 for the laptop, plus another ~$40 for 2GB of RAM. Works great, even using Windows 7 as the host OS. I'm not entirely sure why you'd need to spend almost a grand on hardware, but to each his own.
And if you are using enterprise hardware, watch your electricity costs. I found that a 3U ProLiant server was costing $50/month in electricity alone; switching to a NAS plus a low-powered Atom machine was worthwhile.
-
Wednesday 11th January 2012 18:42 GMT charles blackburn
I use an HP desktop machine I got at Best Buy for 600 bucks... it has (IIRC) a 2.8GHz AMD CPU, 4GB of RAM and 1TB of HDD space. It runs 3 Linux VMs (web, mail, dev/MySQL) and a Win 2k3 server for some other media and Windows server stuff I work on. Runs pretty well, even though a lot of the time I'm hammering the MySQL database, which currently stands at about 5GB over 6 databases. The database itself is hung off an NFS share from the host machine; that way it's easier to connect to it from my laptops and other desktops. I could do with more RAM and a bigger drive, but all in all it does what I want it to do.
-
Wednesday 11th January 2012 18:42 GMT Dave Robinson
I use Proxmox with a mix of KVM and OpenVZ machines for various purposes, including a couple of gaming servers for my son. Intrigued where the writer got a 250W 80+ PSU from; most PSUs seem to be about 3 gigawatts these days. I'm using an external brick and a PicoPSU, driving an i3-530, 8GB of RAM and a 1.5TB green disk. The disk is the weak point, with much rattling when two VMs are accessing it. Next upgrade will be to a RAID array.
-
Thursday 12th January 2012 05:12 GMT Anonymous Coward
250W 80+ PSU
I saw one on NewEgg today - it wasn't the standard ATX form factor but it was a desktop power supply. I wanted to go Pico but the spinup draw for multiple 3.5" hard drives killed that idea, although I will be back for my next HTPC. There is a 400 watt 80-Plus Gold for $69 that I have my eye on for my next build.
-
-
Wednesday 11th January 2012 18:43 GMT Anonymous Coward
3x DL140 G3s with HP P400 BBWC RAID (1 incomplete): 1 with VMware ESX, the others will be Linux KVM hosts, distro as yet undecided
1x DL385 with a P400 internal and an additional P212 BBWC RAID, and quad LAN; again a Linux KVM host when decided
1x IBM x3550 with quad LAN and BBWC, again waiting for the version of Linux KVM to be decided
The VMware machine will host 2-3 Windows 2008 R2 machines. Currently one is set up, bar a few niggles, and will have just a few websites of someone living here switched over in the near future. The other machines will run a more capable edition of Windows 2008 R2: one for building software and one for additional functionality beyond basic web servers, when and if needed.
This machine also has 3 Linux VMs; currently I'm just looking at web-GUI-based KVM management software.
-
-
-
-
Thursday 12th January 2012 13:34 GMT NogginTheNog
Windows 98
A blocker for running older OSes under VMware vSphere is the lack of IDE drive support. For desktop products like VirtualBox (a personal favourite), VMware Player, or Virtual PC, this isn't a problem.
A few years ago I filled a week between contracts happily building VMs of all the old OSes I still had lying around. I even got Win 3.11 going (downloaded from TechNet!) before common sense kicked in...
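For anyone repeating the exercise, a retro guest in VirtualBox is only a few commands; this assumes the 4.x-era CLI, and the VM name, memory and disk size are arbitrary examples:

```shell
# Create and register a Win98 VM with an IDE controller (no SATA back then)
VBoxManage createvm --name Win98 --ostype Windows98 --register
VBoxManage modifyvm Win98 --memory 256
VBoxManage createhd --filename Win98.vdi --size 2048
VBoxManage storagectl Win98 --name IDE --add ide
VBoxManage storageattach Win98 --storagectl IDE --port 0 --device 0 \
    --type hdd --medium Win98.vdi
```

Attach the install media as a DVD on the same IDE controller and boot from there.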
-
-
Thursday 12th January 2012 13:55 GMT Anonymous Coward
re: Win '98
I don't know if this is strictly applicable here, but way back when, I had to remove all but one of my sticks of RAM to get '98 completely installed, let alone running well. Anything over 512MB would give it conniption fits. Once I installed the unofficial 98 SE SP roll-up (it had all the official post-'98 SE SP1 patches rolled in, as well as some neat hacks/fixes), I could go to the full 1GB or more. YMMV.
-
-
Wednesday 11th January 2012 19:03 GMT jon 68
Home VM's
I have several VMs I use. I have an ESXi host on a Dell 2850 with 8GB, and then a Core 2 Quad box with 16GB of RAM that runs Server 2008 R2. I've got VirtualBox VMs and ESX VMs that I float around the six-odd workstations in the house, and yes, don't laugh, but I have a sub-300-dollar Compaq netbook powering my home theater system. A USB 1TB drive makes it a lot more usable.
Have to say I do love the articles; glad to see I'm not the only nut who likes to use VMs to keep their workstations clean.
-
Wednesday 11th January 2012 20:34 GMT Anonymous Coward
Q re multiple VMs under ESX(i), and disk IO
"multiple VMs access the same HDD, say when applying updates, then it can slow right down to a crawl. "
Something a bit similar observed here at work, but the conclusion was rather surprising.
Reasonable spec Dell server (details irrelevant) running ESX.
With only 1 VM active, an app on the guest gets roughly the disk IO performance (MB/s, IO/s) you'd expect if it were native. With 2 VMs, the disk IO performance seen by the app is halved, ***even if there is only one of the VMs doing disk IO***. Try it with N VMs active (N-1 idling), and the sole VM actually doing disk IO gets around 1/N of the disk IO Megabytes/s and IOs per second you'd expect on the raw hardware.
Is this really expected behaviour for modern enterprise-class IT-department-authorised software?
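One crude way to check this yourself: run a synchronous sequential write inside a single guest and note the rate dd reports, then power on more idle VMs and repeat. The file location and size here are arbitrary:

```shell
# dd prints throughput on completion; conv=fdatasync forces a flush so the
# number reflects the disk, not the page cache (GNU dd assumed)
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm /tmp/ddtest
```

If the reported MB/s really does drop to roughly 1/N with N VMs registered, that points at the scheduler/storage stack rather than genuine contention, since the idle VMs aren't issuing IO.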
-
Wednesday 11th January 2012 21:04 GMT Michael Sage
I am running a whitebox VMware ESXi server with 8GB RAM, connected to a Netgear ReadyNAS for NFS & iSCSI datastores. This runs my Windows ThinPC machine, Zimbra and a core DC for my testing.
Another whitebox runs whatever virtualisation platform I am playing with; at the moment that is Proxmox. (This machine isn't a 24/7 machine.)
I also have an HP MicroServer (also with 8GB) which is running 2008 R2 as a DC / fileserver. I would have moved this over too, but I like it for outputting films and things.
I will also be adding a raspberry pi when they launch! :)
-
Wednesday 11th January 2012 21:10 GMT Christian Berger
I currently have several virtual systems running on a tiny Atom computer. However, that is _slow_. It's alright, though, if you just want to have a mailserver.
I'm going to switch that over to an AMD Athlon II with 16 gigs of ECC RAM, which will also hold my main RAID.
I used to have my virtual machines running on my VDR machine, but that isn't quite stable enough, and after one crash while I was several hundred kilometres away, I decided to get the Atom.
-
Wednesday 11th January 2012 22:30 GMT Anonymous Coward
Virtual on virtual
MacBook Pro with 8GB and an SSD... runs VMware Fusion with multiple VMs, including multiple ESXi VMs (yes, VMs) and an NFS/iSCSI appliance as a shared virtual SAN. Allows me to play with, break and document lots of new tech when designing/demoing/learning. The disk was/is the KEY bottleneck - that is beautifully removed via the SSD.
-
Thursday 12th January 2012 00:20 GMT Mark 65
Never mind
I was originally looking to re-purpose an old Shuttle XPC box I have. It's a nice size for hiding somewhere, gigabit LAN etc, but it has the unfortunate issue of being a late Pentium IV machine. This comes with multiple pain points, including: too much heat (with associated fan noise); too much power use, even on standby; and no VT on the chip. I'm guessing this will end up as a charitable dump to a relative or an actual charity, as I can't replace the custom Shuttle motherboard because they won't sell them.
-
Thursday 12th January 2012 01:50 GMT Jacqui
AMD64 multicore and 8TB of green disk
running roughly 30 virtual servers as well as serving HD video and music files via samba etc.
Current status is...
top - 01:46:33 up 46 days, 13:36, 5 users, load average: 0.05, 0.08, 0.02
Tasks: 346 total, 1 running, 345 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.6%us, 3.0%sy, 1.3%ni, 93.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 4033108k total, 4004636k used, 28472k free, 684948k buffers
Swap: 3866616k total, 224k used, 3866392k free, 1924488k cached
I telework and replicate VZ images from work systems (16+ core servers), run them on this box, amend, then replicate the changes back via rsync.
OpenVZ is cheap and simply brilliant now you have checkpoint replication and backup/restore.
Simply ideal for replicable test platforms.
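A rough outline of that checkpoint-and-replicate cycle with vzctl; the container ID, paths and hostname are examples, not the poster's actual setup:

```shell
# On the work box: checkpoint container 101, then sync its private area
# and the dump file to the home machine
vzctl chkpnt 101 --dumpfile /tmp/ct101.dump
rsync -az /vz/private/101/ homebox:/vz/private/101/
rsync -az /tmp/ct101.dump homebox:/tmp/ct101.dump

# On the home box: restore the container exactly where it left off
vzctl restore 101 --dumpfile /tmp/ct101.dump
```

Because only changed files cross the wire on subsequent rsync runs, round-tripping amendments back to the work systems is cheap.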
-
Thursday 12th January 2012 11:45 GMT phuzz
Licensing workaround
This is a bit above most people's home level, but for a small/medium business it's quite handy.
If you buy Win 2008R2 Datacenter for a machine, then you are licensed to run an unlimited number of 2008R2 VMs.
Of course, Datacenter is much more expensive than the standard version, but it means we can create servers without worrying about licensing.
Quite how the licensing works when we migrate a VM between the two hosts (each of which has their own Datacenter license) I'm not sure though.
Not running any VMs at home, although I am using my old gaming rig as a file server. Why yes, that is a watercooled Semperon :)
-
Thursday 12th January 2012 11:56 GMT Werner McGoole
Still to be persuaded
I use VirtualBox on Linux and run VMs for a few tasks that require a degree of isolation or experimentation with the OS (the ability to roll back to earlier states is very useful). I have considered a home virtualisation setup several times, but a number of problems always deter me:
1) Windows licences: why would I want to pay for these to run in a VM when I can use Wine to run the few legacy apps that are still Windows-specific? When you buy a new machine you get a "free" Windows licence (well, not free, but it's usually hard not to pay for it), so why not use that on the machine it was bought with, if you really need Windows?
2) You still need to keep all your VM-installed software up to date. At a basic level this is for security, but in practice also for compatibility with external stuff that keeps on changing. The more VMs you have, the more hassle that entails (except possibly if they all use identical software). Eventually, the OS installed on a VM needs upgrading because support stops. That's a whole new load of hassle too. If you installed it to run legacy apps, you may find they don't run after the upgrade, negating the whole point of using the VM in the first place.
3) The guest tools that provide integration with the host often don't run on old or obscure guest OSes so, again, VMs aren't an ideal solution for running really old software.
4) Historically, virtualisation software has had its fair share of bugs. Compatibility between old VMs and new versions of the virtualisation server hasn't always been 100%. Some older OSes won't run on certain versions of certain VMs. Integration between host and guest can break when versions change. Sometimes you can live with the consequences (although they're less acceptable to family members who aren't so computer-savvy), but sometimes they are show-stoppers too.
5) Access to hardware from VMs, as others have noted, is usually fairly pitiful, typically with serious performance issues. Even if your chosen VM allows direct access to devices, this will remove the isolation that the VM provides, so you'll have issues when the hardware changes (and that was what the VM was supposed to prevent).
I think the goal of separating the OS, the personal work environment and the hardware is a laudable one, but I'm not sure that virtualisation really pays off in a home environment.
-
Saturday 14th January 2012 01:03 GMT NogginTheNog
Virtualising helps with maintenance
For me the main benefit of virtualising my home setup was separating out the different functions. I originally had a single server running everything (DC, email, proxy, file server, etc) and as more and more things got added, the performance and stability took a nose-dive. I found that putting ESXi onto the same box, and then moving all those functions onto individual VMs, made everything more resilient, and allowed me to work on, upgrade, patch and even replace them individually without disturbing the others.
-
-
Thursday 12th January 2012 12:59 GMT Joe Montana
Proxmox...
If you want a home VM setup, try Proxmox...
Installation is trivial; pretty much choose your HDD and hit go (but do consider that it's designed as a standalone install, not for dual boot, so it will remove anything else already on that drive)...
You can run fully virtualised images of pretty much any OS - Linux, BSD, Windows, Solaris etc - and there's also the option of paravirtualised Linux images with considerably less overhead via OpenVZ.
Proxmox has considerably lower overheads than VMware or Hyper-V (especially when running OpenVZ instances), and can be managed using a standard browser, SSH or VNC client (some other systems require proprietary clients, thus limiting what you can use to manage them)...
You also get advanced features like live migration, shared storage, clustering etc, although I doubt these will be used much in a home environment.
Also, funny you should mention RemoteFX; X11 has been able to do remote OpenGL for many years... Still, if you want to play games remotely you can install a Windows image within Proxmox and then allocate a physical video card to it.
And Proxmox is free, including things like live migration that VMware charge a fortune for.
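As an illustration of how little ceremony Proxmox needs, a KVM guest can be created from the command line as well as the web GUI; the VM ID, sizes and storage name here are examples:

```shell
# Create VM 101 with 1GB RAM and a bridged NIC, add a 32GB disk on the
# 'local' storage, then boot it
qm create 101 --name testbox --memory 1024 --net0 e1000,bridge=vmbr0
qm set 101 --virtio0 local:32
qm start 101
```

OpenVZ containers get the equivalent treatment via `vzctl create` with a template cache.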
-
Thursday 12th January 2012 13:42 GMT NogginTheNog
My setup
A generic tower PC in the cupboard under the stairs: Athlon X3, 8GB RAM, 2 x 250GB 2.5" hard drives, VMware ESXi 4.1. It runs almost silently.
On this I run five Windows VMs: a domain controller, a file/proxy/Tor server, Exchange 2010, and a BES to feed my BlackBerry, plus the occasional extra box or two for trying stuff out. None of them will break any speed records, but it all ticks along nicely and gives me the freedom to run my own email MY way. Plus it's a good practice environment for the day job!
-
Thursday 12th January 2012 13:57 GMT Anonymous Coward
I've got an i3 530, an Asus mobo, 12GB of RAM and an LSI 8708ELP (3x 2TB RAID 5 & 4x 1TB RAID 5) running ESXi 5.0 from a USB stick. It runs a firewall VM, an AD DC, Exchange 2010, a Linux VM and another Win 08R2 box which does MSSQL and is my NAS box.
I did have several different boxes, but they were taking up too much space, so I managed to consolidate down to one, thanks to ESXi 5 increasing RDM/VMDK sizes above 2TB. The NAS has the 3x 2TB RAID attached as an RDM drive. Performance is great, but I do need to watch out for disk I/O; even with a proper RAID, the SATA drives don't really cut it if I reboot all the VMs at once.
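For reference, mapping a large array into a VM like that is one command on the ESXi host; the naa identifier and datastore path below are placeholders (the real device ID comes from `ls /vmfs/devices/disks/`):

```shell
# Physical-mode RDM (-z): on ESXi 5 this is what allows pass-through
# LUNs larger than 2TB; the .vmdk is just a pointer file
vmkfstools -z /vmfs/devices/disks/naa.600XXXXXXXXXXXXX \
    /vmfs/volumes/datastore1/nasvm/array-rdm.vmdk
```

The resulting pointer vmdk is then attached to the NAS VM like any other disk.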
-
Thursday 12th January 2012 14:03 GMT Kirbini
For the love of god why?!?
My head hurts from reading all of these posts. Not from the specs and stuff, because at work I have built and operate 3 ESX/vSphere servers running 25 guests, 3 Hyper-V boxes running who knows how many guests (it's the dev team's junk), two XenCloud hosts and one KVM host. In addition, we are building an OpenStack cloud with KVM and Xen hypervisors.
And yet I have absolutely ZERO desire to do anything like this at home. My MacBook Pro (albeit with VirtualBox and VMware Fusion installed, but rarely used), my wife's MacBook, the den's super-old lampshade iMac and the AppleTVs do everything I'll ever need. I guess I like to do other things when I'm not at work: gardening, taking the kids to the park, scouting (boy and girl), drinking beer with the neighbours around the fire pit, shooting off fireworks, hunting and fishing, etc. But that's just me, apparently.
-
Thursday 12th January 2012 16:38 GMT geekclick
Some of us are just BOFHs by day and geeks by night...
I enjoy sitting at home on an RDP session dicking around with 2k8 or whatever the mood takes... even after a day spent dealing with silly requests like "can someone install Angry Birds" or people wanting the WiFi key for their iPhone, even though they signed the corporate IT policy agreement stating that this is strictly prohibited, even if you are a Director or whatever...
I do, however, write for a motorsport club and do press/PR for a race series as well, which means that 6 weekends of the year I am off out and about camping and listening to race cars...
As an avid F1 fan I also lose 20 Saturday and Sunday mornings a year to that...
I was a Scout as a young'un, and once me and the wife decide to procreate I will ensure they are too, and I will take it up with them... (it really is a worthwhile thing in my opinion and taught me a great deal as a young'un; my uncle is the head honcho for his local Sea Scout group and I take a week out once a year to take them sailing and kayaking)
In short, I do this stuff at home because I enjoy it, I can, and, perhaps just as importantly, I have the time. If it means I can extend my career and earn a better living to provide a better life for my spouse and offspring, then it's almost critical that I do!
Plus, as I never learned a trade and skipped uni (in order to do more drinking than the average student), the wife's and my dream of emigrating from the UK to the US or Canada will never come to fruition unless I can qualify myself up to the hilt with MS certs etc to make me a viable candidate for a foreign employer, so that's an extra reason and a bonus point too :)
I do however totally get where you are coming from and there are days when it just isn't worth gnawing through the straps!
-
-
Thursday 12th January 2012 16:25 GMT geekclick
I do something similar
Shuttle XPC something or other (about 6 months old; an XPC SG41... I think)
Core i5
16GB DDR3
320GB WD Raptor
Win 2k8 R2 with Hyper-V
The HV host does all the file-server-type stuff using partitioned NAS storage; there's a Win 7 VM for all my streaming/extender needs for the Xbox etc, 1 VM for "tinkering/dicking around", be that loading Win 8, Linux etc, and 2 2k8 R2 VMs for working through my MCITP in Server 2k8 Enterprise Administration!
Jubbly! It does exactly what I need it to, with Dropbox on them all for backing up docs etc...
The only thing I am yet to do is put together a decent backup schedule that pushes it all to the NAS... Recs welcome on that...
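One low-effort recommendation for that, assuming the NAS is reachable as a Windows share (the paths and task name here are examples, not your setup): mirror with robocopy and drive it from Task Scheduler.

```bat
:: Mirror documents to the NAS; /MIR deletes on the target what was deleted
:: on the source, so point it at a dedicated backup folder
robocopy C:\Users\geek\Documents \\nas\backup\docs /MIR /R:2 /W:5 /LOG:C:\backup.log

:: Register it as a nightly 02:00 scheduled task
schtasks /Create /TN NightlyBackup /SC DAILY /ST 02:00 ^
    /TR "robocopy C:\Users\geek\Documents \\nas\backup\docs /MIR"
```

Run one folder per VM (or per Dropbox root) and the NAS ends up with a browsable mirror rather than an opaque backup blob.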
-
Friday 13th January 2012 00:09 GMT b166er
Disinformation?
Joe Montana, from what I can see, Proxmox uses KVM, which, according to this page:
http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM
doesn't do physical video adapters, so how have you managed it?
Sometimes it's just easier to accept that Microsoft have done a good job. Horses for courses.
My favourite configuration is Hyper-V core (though I quite like ESXi too) with an R2 guest running AD, an iSCSI target and Asterisk, plus Windows 7, Arch (I'm getting ready for RasPi), VortexBox, Mythbuntu and whatever other guest I might want to tinker with at the time.
-
Sunday 15th January 2012 20:31 GMT David Halko
SmartOS and KVM
The author writes, "If you don't have the money – but have the time for something a bit more fiddly – check out CentOS and KVM. This combo is free, but lacks RemoteFX."
If you don't have the money – but have the time for something a bit more fiddly – check out SmartOS and KVM. This combo is free, and allows you to move VM's into the cloud.
http://www.joyent.com/products/smartos/
No one does cloud analytics better. Best part about it: it works for multiple operating systems.
-
Sunday 22nd January 2012 15:07 GMT the wolf
IT for salary and enjoyment
I think that the author's combo would work great. I am lucky enough to also have the time and need to setup a small host system to provide different services to my family.
I'm running an older Intel DQ965 desktop rig with a Core 2 Quad and 8GB RAM. The mATX motherboard is equipped with the Intel host-assisted RAID controller (ICH8 DO), which gives decent performance. The machine is equipped with 6x 2TB WD Green drives boasting 64MB cache, arrayed into a RAID 5 resource pool. There is also an orphaned-from-work Intel SAS card and two 500GB 10k SAS drives in a RAID 1 configuration for the host OS. This config sips power from an APC 1000XL UPS; it averages a low power draw with underclocking, HDD power saving and an efficient 88% PSU. I'm lucky enough to possess a separate shed to house this away from living areas, linked to a gigabit switch and AP in the home, which is cabled in CAT6.
The host OS is Server 2008 R2 SP1, which runs a few services, VPN, some programs for me, and backup. The Hyper-V guests include a Server 2008 R2 SP1 DC which runs DNS, DHCP, DFS, AD GP, WDS, WSUS, MSSQL, IIS7, NPS and, recently, DPM. There are two each of XP and 7 client machines to test GP changes, SQL apps and WDS imaging etc. They may not all operate at the same time, but they have done a few times with good performance - unless large file transfers are running.
This setup allows me to host a very efficient, small domain for my userbase of 15 or so, some remote. The licensing allows the one extra copy of Server 2008R2 to be run in Hyper-V. Client machines include 3 gaming rigs, a netbook, 3 laptops, phones, guest devices, 2 x HTPC (one remote) and 3 workstations (two remote). My machines can be imaged from PXE booting to WDS (locally), apps are deployed and updated by the DC, users have offline files for laptops and redirected profiles/homefolders and I have a familiar administrative interface to tie it all together.
On my gaming machine, I have ample space to accommodate further VMs if needed for trying out new distros and betas. This is an Intel i7 950 on an EVGA 4-way SLI Classified, 12GB of 2000MHz RAM and 3x nVidia GTX 480 cards. There is room for VMs and other data on a fast array of 5x 1TB 7200rpm WD Black disks in RAID 5, along with the OS on a single 120GB SSD. VMware Workstation provides excellent performance on a Win 7 64-bit host OS.