*Sigh* I guess I'll have to repeat myself... Another round of thin client roulette?
As recently posted to another 'cloud' article.
Does this idea come around every 10-15 years?
Diskless workstations suck. End of story.
To provide enough compute power on the desk to handle the presentation layer of Windows, you have to put what is essentially a PC on the desk, however you cut it. Whatever sits on the desk has a keyboard, mouse, monitor, networking, graphics processing and some local CPU to run the thin client. The only differences between that and a PC are the HDD and perhaps the amount of memory. The user training is the same, the hardware costs are not significantly different, and the software costs are not significantly different.

The flip side is that you now depend on those centralized servers. Some will say "cloud, whatever" - it's a cluster of servers, and whether distributed or not, it's conceptually the same thing, because the user's client connects via a network to the server for everything. Just like VT100s and VAX systems. Just like 3270s and IBM AS/400 or mainframe systems. It's the same old crap again.

Except now that all the application processing power and data storage have been centralized, you need some big-assed servers to handle the load. Not only that, but now that your enterprise runs on a virtual desktop, your network and server cloud have to be far more resilient, because your entire operation depends on them. So you need hot-standby servers, much more expensive SAN storage, ridiculous backup requirements, and a damn good disaster recovery plan. All of that costs $$$ and has to be managed, administered and supported by a larger team than your ordinary app servers require.
All this to save perhaps $200 per desk in hardware costs? Total and complete BS. That doesn't even begin to cover the issues this kind of centralization brings. Pretty soon you have disk quotas, because that SAN storage is fantastically more expensive than the 1TB drives shipping in desktop PCs today. So people get pissed that they can't have everything they want on 'their' desktop. Organizations soon find that many, many virtual desktops all running WeatherBug and the rest of the innumerable taskbar trash soak up CPU time, as does Farmville. So those are summarily banned, causing more user unrest. Then the mainframe cycle repeats: end-user groups get tired of the lack of freedom and flexibility and decide to get a few real workstations for their own use, and pretty soon you have lost control all over again as departments invest in more special workstations and users migrate to the personal workstations instead of the shared desktop.
I've been through this three times now: once transitioning away from mainframe, once experimenting with diskless workstations in a pre-Windows environment, and once dealing with Windows Terminal Server in a predominantly XP environment. I also dabbled with this under Windows NT, but fortunately sanity prevailed and we went with PCs on the desktop. It's the same schtick every time: overblown reports about the TCO of PCs, and overly optimistic estimates of the TCO for the cloud/virtual desktop/diskless workstation solution. No one ever counts the additional costs on the server side, or the lack of any real savings on the client side. It all comes down to a bid for control by centralized IT admin, which is a poor reason to make a fiscal decision.
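The arithmetic behind that argument can be sketched in a few lines. Every figure below is an assumed, illustrative number (seat count, server and SAN prices, admin overhead) - none of them come from the article - but the point survives any reasonable choice: the per-desk saving is small and fixed, while the server-side additions are large and recurring.

```python
# Back-of-envelope TCO comparison: thin clients vs. ordinary desktop PCs.
# All figures are ASSUMED illustrative numbers, not measured costs.

SEATS = 500
YEARS = 3

# Per-seat client hardware (assumed)
pc_client = 700      # desktop PC with local disk
thin_client = 500    # thin client device: the famous ~$200-per-desk saving

# Server-side additions the centralized model requires (all assumed)
vdi_servers = 150_000          # hot-standby virtualization hosts
san_storage = 120_000          # SAN capacity replacing cheap local disks
backup_and_dr = 80_000         # beefed-up backup and disaster recovery
extra_admin_per_year = 90_000  # larger support/admin team

pc_total = SEATS * pc_client
thin_total = (SEATS * thin_client
              + vdi_servers
              + san_storage
              + backup_and_dr
              + extra_admin_per_year * YEARS)

client_saving = SEATS * (pc_client - thin_client)

print(f"PC fleet:              ${pc_total:,}")        # $350,000
print(f"Thin-client solution:  ${thin_total:,}")      # $870,000
print(f"Client-side 'saving':  ${client_saving:,}")   # $100,000
print(f"Net extra cost:        ${thin_total - pc_total:,}")  # $520,000
```

With these assumptions, the $100,000 shaved off the desktops is swamped more than five times over by the server-side spend - which is the gap the optimistic TCO reports tend to leave out.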
The only addition I'd like to make at this point is that this article talks of cloud providers and of monopolies. To me there are two aspects of that which are definitely not plus points from the point of view of business data owners: the first is that a third party is now hosting my invaluable business data, and the second is that in a monopoly or near-monopoly situation, even more of the freedom I thought I had has been subsumed by the cloud.
Once again, this is all a poor basis for a fiscal decision.