An interesting, yet common-sense article...
But the more common sense is mentioned, the less of it there seems to be about.
So you know your way around a data center just fine, but you've been told to roll out VDI – aka that highly riveting technology, virtual desktop infrastructure. This is your first time juggling virtualization on this scale. What do you need to worry about? How will you test and benchmark it? Our sysadmin blogger Trevor Pott …
First: yes, I work for a reseller and implementer of Citrix... we also resell and implement VMware.
Second: I just finished a small pilot implementation of XenDesktop... and the ease of implementation and the ability to extend its capabilities were simply impressive.
The latest XenDesktop 7.5 software can be freely downloaded for a basic test... following a couple of the online walkthrough guides for XenDesktop 7.x will let you install, configure and deploy desktops in not much more than an hour, if you have an available Windows server and one or more desktop images to point clients at (physical or virtual desktops will work for this test!).
Citrix has built-in capability not dissimilar to AppSense to provide some great mobility options, and the Citrix Receiver/StoreFront web capability is simply reliable and functional.
While VMware is the "Hypervisor King"... Citrix has a very deep heritage in remote delivery, and the latest XenDesktop product shows it.
I was impressed by the product... and think that if you are considering VDI, it would be a disservice not to download the software and give it a once-over as part of your VDI product evaluation.
My $0.02.... and NOT a paid opinion... :)
Well researched and well written. Definitely worth staying to the end.
For those of us about to embark on the journey that is VDI this provides some clear direction and cuts through the bullshit marketing that every single company seems to be spouting when it comes to VDI.
On a side note, there's been a decided lack of articles of this calibre on El Reg lately. Welcome back to the intelligent and classy side of the tracks.
I remember when we were all looking forward to having our own PCs with their own resources and local storage to free us from the mainframe. No more bottlenecks or costly efforts to overcome them. No more struggling to add another user without breaking the camel's back as each new user could simply be given their own PC.
Having the OS and app software close to the user seems the right approach to me. You won't get much better performance than a local OS and a local app running from an SSD, it'll be easier to manage, cause no bottlenecks, and keep things a lot simpler.
So, while this is a decent article on how to do it if you absolutely have to, it doesn't cover why you should. Or shouldn't.
The problem with that approach is that even with remote management tools and processes, you end up with what is effectively a single point of failure: the PC. And with the best will in the world, and the best discipline in the world, you will end up with a failed PC that can't just be swapped out. You'll need to image a replacement PC, probably apply updates, then install the knick-knacks the user has to have that aren't in the image. Not so bad in a small organisation, but if you have hundreds of users across multiple sites ...
Much better to give the user a terminal, and concentrate on a reliable datacentre.
That said, it does bring a smile to my face ... one of the first uses of PCs was as smart terminals to mainframes, as they were cheaper. (Can't recall how much the fancy DEC and Wyse terminals were in the 80s, but you could probably have bought a car for a pair).
But the failure of one PC is a failure of just one PC, and in the vast majority of cases it can be swapped out without major overhead. With the VDI approach, your risks are lumped together and the failure of that system is several orders of magnitude worse.
The trick to this is automation. Re-imaging a PC should be fully automated, only requiring interaction at the start to kick the process off, should automatically handle updates, and should ideally be capable of self-service. Likewise software distribution.
PCs can be treated as largely interchangeable units, without resulting in huge overheads in dealing with failed PCs. (This is in an organisation with many thousands of users and multiple sites.)
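For what it's worth, the "kick it off and walk away" part can be a tiny script that tells your deployment server to re-image a box on next boot and queues the user's extra bits for software distribution. A minimal sketch, assuming a purely hypothetical REST API on the deployment server (the endpoint names and fields here are made up; substitute whatever your SCCM/FOG/MDT tooling actually exposes):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: queue a PC for automated re-imaging plus its
extra software, against an imaginary deployment-server REST API."""
import sys
import requests

DEPLOY_SERVER = "https://deploy.example.internal/api"  # assumption: your own tooling

def queue_reimage(hostname: str, image: str, extra_packages: list[str]) -> None:
    # 1. Tell the deployment server to PXE-boot this machine into the imaging task.
    r = requests.post(f"{DEPLOY_SERVER}/reimage",
                      json={"host": hostname, "image": image}, timeout=10)
    r.raise_for_status()
    # 2. Queue the per-user odds and ends that aren't baked into the golden image,
    #    so software distribution picks them up after first boot.
    r = requests.post(f"{DEPLOY_SERVER}/postinstall",
                      json={"host": hostname, "packages": extra_packages}, timeout=10)
    r.raise_for_status()
    print(f"{hostname}: re-image with '{image}' queued, "
          f"{len(extra_packages)} post-install package(s) scheduled")

if __name__ == "__main__":
    # e.g. ./reimage.py PC-0421 win10-std vpn-client cad-plugin
    host, image, *packages = sys.argv[1:]
    queue_reimage(host, image, packages)
```

Point being: once that exists, a failed PC really is just a swap plus one command, even across thousands of seats.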
I liked the article too; it turns out I've been doing VDI for over 15 years, primarily the RDS flavour. That's a relief — sometimes you worry that you're not up with the play.
I learnt about VMware vSphere Flash, and that might be handy. Various incarnations of this kind of cache, budget and expensive, have been available for many years, for standalone boxes anyway. I worked with some engineers ~20 years ago who came from Anamartic (Clive Sinclair); they had designed a wafer stack consisting of several 6" or 8" silicon wafers, totalling 40MB to 160MB, and they cost the same as a house. Hasn't memory got cheaper. http://www.computinghistory.org.uk/det/3043/anamartic-wafer-scale-160mb-solid-state-disk/
I remember Ivor Catt visiting us at work and saying how academia was 20 years behind, and how they pooh-poohed his ideas despite them being used in real life — despite the fact that he had a B.A. from Cambridge University and had won two major product awards for his innovative computer chip designs.
I live at the other end of the spectrum: a small company, with barely enough employees to warrant some basic level of centralized IT, where most of the employees are techies who prefer to select their own PCs for purchase and manage them themselves... It's a pretty retro environment: the centralized services are nineties-level file serving and printing, plus some VPN remote login, plus a Windows terminal server set up to cater for our single application that runs best in an RDP console on a remote server (a databasey thingy).

A major PITA is how to back up the OS on notebooks with preinstalled Windows in a somewhat orderly fashion. With the demise of DOS-based Ghost and with the recent generations of Windows, the amount of work required is staggering — the work of massaging the preinstalled ball of crud into a manageable, lean and clean original image suitable for a system restore, should the need arise, with a separate partition for data, for instance. But it's less pain than trying to force a company of 20 geeks into mil-grade centralized IT.
To me, as a part-time admin and general-purpose HW/OS troubleshooter, Mr. Pott's article made for fascinating reading. There's a broad spectrum of IT users among our customers, and it certainly helps to "be in the picture" towards the upper end of centralized IT, even if it's not our daily bread and butter.
"With the demise of DOS-based Ghost"
Sad indeed. It had fewer quirks than other imaging tools. With all those "smart" tools you just can't be sure whether your image is any good.
A few years back, I finally had enough and decided to take the brute-force path: dd + gzip. Storage space is cheaper than time. With the time not spent troubleshooting mysterious restore errors (like 0x80070057), I can buy a fucking crate of disks. Or tapes.
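In case anyone wants the same hammer: the whole approach fits in a few lines. A rough sketch of the dd + gzip equivalent (the device path and block size are just examples — run it as root, and make very sure the source disk isn't mounted read-write):

```python
#!/usr/bin/env python3
"""Rough dd + gzip equivalent: stream a whole block device into a
compressed image file, and restore it again. Paths are examples only."""
import gzip
import shutil

CHUNK = 4 * 1024 * 1024  # 4 MiB reads, roughly dd bs=4M

def backup(device: str, image: str) -> None:
    # Equivalent of: dd if=<device> bs=4M | gzip > <image>
    with open(device, "rb") as src, gzip.open(image, "wb") as dst:
        shutil.copyfileobj(src, dst, CHUNK)

def restore(image: str, device: str) -> None:
    # Equivalent of: gzip -dc <image> | dd of=<device> bs=4M
    with gzip.open(image, "rb") as src, open(device, "wb") as dst:
        shutil.copyfileobj(src, dst, CHUNK)

if __name__ == "__main__":
    # Example: image the laptop disk before handing it to the user.
    backup("/dev/sda", "/mnt/backup/laptop-sda.img.gz")
```

Dumb as a bag of rocks, which is exactly why the restores work.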
</offtopic>
<brownie mode>
Excellent article. Well-rounded. Which, sadly, does not leave much room for commenting. How are we supposed to live without flamebait?
</brownie mode>