Eggs in one basket
= single point of failure?
Dell, along with partners Nvidia and Intel, has opened a new center at its Austin, Texas, headquarters to make it easier for customers to deploy virtualized-workstation installations, and for ISVs to certify their software running on such setups. "We know this all comes down to dollars and cents," Dell's general manager for …
Well just like when your fileserver fails. :)
But seriously, if you have it in-house, the chance of it failing is pretty low. And once you have an in-house failure, it's likely to be something like power, which would knock out your local workstations too.
This won't work for "the cloud" anyhow, since there you typically don't have the multi-megabit bandwidths you'd need for that.
A server with the requisite power is easy, but there's nothing here on the hard part:
How does that remotely rendered stuff get displayed locally quickly enough not to wipe out all productivity?
There is nothing more irritating than a machine that's a little bit behind on you, causing you to overshoot your adjustments all the time.
There's only about a 40 ms latency window to do the entire round trip before it feels "slow", and while getting a ping that low isn't too hard, this needs an entire 1080p or higher (4K?) screenful of rendered content to be delivered to lots of workers in various locations at the same time.
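For a sense of scale, here's a rough back-of-the-envelope sketch in Python (assuming 24-bit colour and a 60 Hz refresh, which are my own guesses, not anything from Dell's actual protocol):

```python
# Rough bandwidth needed to push whole screenfuls around -- assumes 24-bit
# colour and 60 Hz refresh; these are my assumptions, not published specs.

def raw_mbps(width, height, bytes_per_pixel=3, fps=60):
    """Uncompressed bandwidth for a full-screen stream, in Mbit/s."""
    return width * height * bytes_per_pixel * 8 * fps / 1e6

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    raw = raw_mbps(w, h)
    print(f"{name}: ~{raw:,.0f} Mbit/s raw, ~{raw / 10:,.0f} Mbit/s at 10:1 compression")

# 1080p: ~2,986 Mbit/s raw, ~299 Mbit/s at 10:1 compression
# 4K: ~11,944 Mbit/s raw, ~1,194 Mbit/s at 10:1 compression
```

Even with modest compression you're looking at hundreds of megabits per seat, which is why the network matters more than the server.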
RDP doesn't really cope with that in my experience - at least, not on commodity networks.
It's perfectly fine for "do this series of commands" but horrible for "up a bit, down a bit, no, looks like too far nudge it back" types of adjustments that all visual (and presumably audio) creatives do all the time.
If they do not solve the latency problem, it simply will not be used.
That's the thing though, isn't it? Your network infrastructure should be built to provide the services you need to use. If your commodity network doesn't perform suitably, it needs upgrading to something that does.
RDP can cope with that, if you use the latest version with RemoteFX etc., but there are other options such as Citrix ICA.
Your best bet is to see a system in action. It can work, but it certainly isn't cheap to buy. It could, in the right circumstances, reduce TCO though.
> That's the thing though, isn't it? Your network infrastructure should be built to provide the services you need to use. If your commodity network doesn't perform suitably, it needs upgrading to something that does.
...which is an entirely new level of pain and cost and complexity for a gain that is dubious at best.
Well, something like MJPEG can work with less than a single frame of latency (<20 ms). Even if you don't transmit areas which haven't changed, you won't get any higher latency.
Of course you'd want to use specialized algorithms and maybe a slight bit of intra-frame prediction, but it's a far cry from the multi-second latency of broadcast-quality encoding. You can probably process everything you need with single-frame latency.
The downside is of course that the encoding is not highly efficient. A 100 Mbit connection will however still get you around 10 full-frame updates per second (at JPEG 10:1 compression) of a 3840×1080 screen. Considering that typically only a small part of your screen changes, and your network connection is probably gigabit Ethernet (allowing a theoretical 100 full-frame updates), this will most likely work.
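Quick sanity check of those numbers in Python (assuming 24-bit colour and a flat 10:1 JPEG ratio, both rough assumptions on my part):

```python
# Sanity check of the full-frame-update figures above.
# Assumes 24-bit colour and roughly 10:1 JPEG compression -- rough guesses.

WIDTH, HEIGHT = 3840, 1080            # the dual-1080p desktop from above
RAW_BITS = WIDTH * HEIGHT * 3 * 8     # one uncompressed frame, in bits
JPEG_BITS = RAW_BITS / 10             # ~10 Mbit per compressed full frame

for link_mbps in (100, 1000):
    frames_per_sec = link_mbps * 1e6 / JPEG_BITS
    print(f"{link_mbps} Mbit/s link: ~{frames_per_sec:.0f} full-frame updates/s")

# 100 Mbit/s link: ~10 full-frame updates/s
# 1000 Mbit/s link: ~100 full-frame updates/s
```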
So, yes, this can work on the LAN and with more thought than I put into it for this comment, you can probably even make it run well enough to be practically indistinguishable from local usage.
Well, there's a view from over a year ago.
Nvidia have been working on VGX technology for a while now and it's publicly available. Internet connection does play a part in its usability, but it's usable for a few users on a decent (FTTC/cable) residential broadband connection. On-premise virtualised workstations are now easily as good as their under-desk counterparts, since you don't have to run 10GbE down to the workstation to shift multi-gig files out to your edge network, and it's cheaper in the majority of cases.
Everything old is new again.
When we finally get enough power on the desktop, we start moving the processing back into the data centre. I am still trying to figure out how this will not just recreate the problems of the dumb terminal/mainframe architecture of yesteryear.
Any bets on how long until we start moving everything back out of the data centre and back on to the desktop?
Just finished a Linux build of a laptop w/ workstation graphics and a 17-inch screen & SSD...
My local SAN / printer-scanner farm will stay... and my 6 hr UPS...
IMHO: just banished my workstation from under my desk... to a laptop, with benefits...
Q: why would I want to jeopardise my IP by moving it somewhere else??
Security does NOT = money savings... RS
"having all of a workgroup's valuable files in a central location — architectural models, feature films, automotive designs, whatever – can significantly reduce that aforementioned IT admin's anxiety."
I suspect the only admin whose anxiety is reduced will be the HR admin.
The "aformentioned IT admin" will be very anxious indeed, right before he ceases to be an IT admin @:
HP has offered a solution for that (RGS, aka Remote Graphics, with either rack-mounted workstations or ProLiant servers with GPUs in the data center) for many years now (I remember first seeing it when the xw8400 workstations were still current, which was around 2006). And according to a client who's using it, it works pretty well.
It seems the receiver license is free, and if you own an HP workstation, so is the sender license.
Back then many universities had X-Terminals along with normal text-based ones. You could simply go to one of those terminals and log into a larger Unix computer. Alternatively you could log into the Unix workstation somewhere in your office.
Of course eventually local processing became more economical as local workstations became cheaper than X-terminals while networking didn't progress as fast as the rest of computing did.
What we have now is enough power to just do video encoding on the fly. It's trivial to do some sort of H.264 with cutback motion compensation on a graphics card... without slowing it down much for other tasks.
What could make this a booming industry would be standards. With open standards you could just access any server with any device. Just like you could use any X-terminal with just about any unixoid computer.
The difference is that back then the graphics hardware was actually in the X-terminal (you could not use graphics hardware on the host). That's not the same as what Dell (and others) are doing here, where the idea is that all computing power (CPU + GPU) sits in the host server in the data center and the client is really primitive.
Back in the day I had to work on a Tektronix X-terminal whose graphics hardware was the size of a deskside drawer and as noisy as a server.
Users putting their desktop PCs on the floor is pretty ghastly anyway.
Firstly, they hoover all the c**p out of the carpet and store it neatly inside.
Then they block up their air filters with it and overheat.
If that doesn't work, users then use them as foot rests.
Then they pour coffee on them.
Then they bury them in various items of discarded clothing.
Finally, they stack lots of things round them to help them overheat again.
So when I set up a new office, I get all the computers off the side of the nice big desks. I go in there the next day and get squawked at by someone complaining that they had to spend 2 hours moving the computers to the "proper" place. Where's that? Lying flat in the middle of the desks to take up the maximum space.
Better than the floor anyway.
This sounds like a good idea if we can deal with performance and find a big enough room for our 3,000 PCs.
What on earth are you doing when you're running 3,000 PCs that need workstation-class graphics (of the order that these boxes are designed for)? I'm not aware of any deployments at that sort of scale - but I'd be happy to be wrong.
I suspect what's happened is actually that you've managed to completely miss the various developments of VDI of the last 5+ years.
Interesting, but you can get up to 8 end users on a single HP WS460g8 using VMware or Citrix and dedicated GPUs. That WS460 and its expansion blade can hold 8x Q1000M or 6x K3100M; the Q1000M would get you 64 users in a 10U space, while the K3100M nets you 48 users in 10U of space.
On top of that, you could put additional users on each blade that are not using GPUs at all. That would help get your cost per user down a bit. But that is dependent on your use case.
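A quick sketch of the arithmetic behind those density figures (the blade count per 10U is inferred from the numbers above, not something I'm quoting from an HP datasheet):

```python
# Back-of-the-envelope density maths -- 8 blades per 10U is inferred from
# the 64-users-in-10U figure above, not taken from an HP spec sheet.

BLADES_PER_10U = 8  # implied by 64 users / 8 users per blade

configs = {"8x Q1000M": 8, "6x K3100M": 6}  # GPU-backed users per blade

for gpus, users_per_blade in configs.items():
    print(f"{gpus}: {users_per_blade * BLADES_PER_10U} GPU-backed users per 10U")

# 8x Q1000M: 64 GPU-backed users per 10U
# 6x K3100M: 48 GPU-backed users per 10U
```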
Disclaimer: I'm an HP employee.