Re: Missing "not"?
Yes, it should - good find. Let me know if you've found any good AI proof-reading algos.
Not so! My Windows 10 PC seems to act on impulse all the time.
Behavioural scientists I know argue that people are programmed to do everything that they do, and that impulse really isn't impulse at all. It just seems that way because we don't yet understand the parameters that made them act. There may be no whimsy at all in a human being, just enough complexity that we can't understand the programming. None of us can really know, because we don't know exactly how people work.
You seem to be arguing for hard AI, and suggesting that soft AI has no value. I'd argue the opposite. Soft AI has significant value, and from an ethical perspective it's probably safer.
Let's not also forget that someone could subvert the entire system by gaining physical access to the server room and recovering the data in person! Horrors!
You can attempt to invalidate any article by simply saying that someone can step outside the operating parameters to subvert the system. We shouldn't install SAP, because, someone might, you know, hack it. So let's not have any articles about ERP in future without mentioning that it isn't actually built on mathematically verified code. Actually, we should all really be using abacuses, because they're more secure.
If you're going to raise this argument, which *wasn't* the subject of a short introduction to DV, then you need to properly explore the ramifications, including ways to *prevent* someone spoofing a terminal. You're taking one of a series of about 15 articles exploring the subject and attacking me for not covering every single, possible, eventual outcome in 750 words.
This is disingenuous. At best. Outright sophistry, at worst.
In my overmatter file (the bits that I chopped out of the article), I called zero clients "VT100s with big biceps. And tattoos." :-)
But there is a difference. The VT100s just sucked down character streams. These are providing you with access to an entire desktop operating system at the back-end. I sometimes think people gloss over this. Sure, what's old is new - that's the industry we live in. But what's new is not the same as the old. The basic concepts have developed a lot in 30 years.
It's a bitch trying to play Minesweeper on a System 360, ain't it?
They've commissioned me for an article that discusses what the network needs to actually make desktop virtualisation work. That point about single points of network failure is a good one. I'll address that, thanks.
I think your point about people not being able to support virtualised desktops has legs, but the parameters are moveable. It depends how mature your operations are at the back end, and how well you've automated everything. I do think it's fair to say that unless you've achieved a certain level of competency in back-end management, you're going to run into trouble.
No, I've not run across an environment where you'd want to do video editing in a virtualised environment. There are other types of applications, too - particularly high-end visualisation apps, for example - where you'd want to keep the logic local. I wouldn't want to see CAD apps running virtually, because they often need some chunky dedicated processing at the client end. But that's why some of the users I've spoken to have kept a small number of dedicated local PCs operating, for specialised tasks.
But your language gives something away for me, AC:
>involved in a bunch of projects with DV where it has been forced on me from on high
Having an attitude like that is never a good way to begin a project, and will pretty much doom it to failure. If you don't buy in, and do your best to lend your support and enthusiasm for a project, then you're going to end up as a part of the problem.
1) Why are you using Norton AV? It's a notorious resource hog.
2) Make use of offline security scanning and patching of the VMs overnight, on the server, when no-one's using them. There are a couple of products that manage that now. I think VMware does one. I can probably find the details if it would help.
I guess there's also the option of making the VMs non-persistent and just instantiating a new one from a central image whenever the user logs on. Then you could pull in their data from a redirected folder.
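For what it's worth, here's a rough sketch of that flow (Python, purely illustrative - names like clone_from_golden_image and mount_redirected_folder are stand-ins of my own, not any particular broker's or hypervisor's API):

# Illustrative sketch only: a non-persistent VDI flow where each logon
# clones a fresh VM from a central golden image and the user's data
# lives in a redirected folder on the file server, never in the VM.
from dataclasses import dataclass
import uuid

@dataclass
class DesktopVM:
    vm_id: str
    image_version: str
    user: str

def clone_from_golden_image(user: str, image_version: str = "desktop-gold-2011-02") -> DesktopVM:
    # Stand-in for whatever the connection broker / hypervisor actually does.
    return DesktopVM(vm_id=str(uuid.uuid4()), image_version=image_version, user=user)

def mount_redirected_folder(vm: DesktopVM, file_server: str = r"\\fileserver\profiles") -> str:
    # Point the user's documents at the central share, not the VM's disk.
    return f"{file_server}\\{vm.user}"

def on_logon(user: str) -> DesktopVM:
    vm = clone_from_golden_image(user)           # always a clean, patched image
    profile_path = mount_redirected_folder(vm)   # user data pulled in from the share
    print(f"{user} -> VM {vm.vm_id[:8]} (image {vm.image_version}), data at {profile_path}")
    return vm

def on_logoff(vm: DesktopVM) -> None:
    # Non-persistent: the clone is simply discarded; nothing to patch or back up.
    print(f"Destroying VM {vm.vm_id[:8]} for {vm.user}")

if __name__ == "__main__":
    session = on_logon("jsmith")
    on_logoff(session)

The attraction of doing it that way is that you only ever maintain the golden image, and the AV/patching problem largely disappears with the clones.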
I see two models. Classic DR, where everything stops breathing and falls over, and then you have to get your playbook out to get it up and running again. Your physical facility goes down, and you have to move to a hot site, transfer all your data to a new set of PCs, and get it rolling once more (that is, if you can afford to have a hot site, rather than a warm or cold facility). Seems to me that in this model, operations and recovery are two separate processes.
The other model is business continuity, which brings the idea of operations and recovery together. So, instead of having your desktops running locally, you have them virtualised centrally, which makes them more manageable. Maybe 40% of your employees are in the office at any one time, with the others working from home or on the road. Or maybe in telecottages (which I still think are a great idea). Your site goes down. Only some on-site desktop clients are affected, but none of the data is stored locally anyway. Your desktop VMs are on a central server, which is replicated, and the data is stored in a SAN which is also replicated. So your recovery period is far shorter.
I don't think it's fair to call that bullshit. I think it deserves at least a reasoned discussion, because it could offer some benefits, if done right.
That's not what I meant. Most organisations *that* are virtualising the desktop are doing so because they yearn for some of the old mainframe efficiencies. We're still at a relatively early stage in the game, and I'm not suggesting that everyone has jumped on the bandwagon yet. But yes, as it happens, I do think that's coming.
There most certainly are some significant differences between desktop virtualisation and the old client/server software model. You're oversimplifying what was commissioned as an introductory piece to start with. As I recall, client/server as it pertained to computing in the early 90s was about applications distributed between the client and the server. It was about taking some of the more compute-intensive and data-intensive work and backing it off to the server, while delivering the results back to a supporting application on the client. It didn't replicate an entire desktop, per user, at the server level. VDI does.
What he's saying is that the data is stored centrally. In a VDI session you're accessing a screen image delivered over an efficient protocol. The data itself isn't beamed to the PC, and isn't stored there (which is why, for example, you can get away with using a zero client with no operating system to access an entirely virtualised desktop).
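If it helps to picture it, here's a conceptual sketch (Python, purely illustrative - loosely how an RDP/ICA-style display protocol behaves, not a real implementation of any of them). The only things on the wire are rendered screen regions going one way and keyboard/mouse events coming back:

from dataclasses import dataclass

@dataclass
class ScreenUpdate:
    x: int
    y: int
    width: int
    height: int
    pixels: bytes          # a compressed bitmap region - pixels, not files

@dataclass
class InputEvent:
    kind: str              # "key" or "mouse"
    payload: tuple

def server_render(document_text: str) -> ScreenUpdate:
    # The document itself lives and stays here, on the server;
    # it is rendered server-side and never serialised onto the wire.
    _ = document_text
    compressed = b"<compressed bitmap of the rendered desktop>"
    return ScreenUpdate(x=0, y=0, width=1024, height=768, pixels=compressed)

def zero_client(update: ScreenUpdate) -> InputEvent:
    # The zero client just paints pixels and sends the user's input back.
    print(f"painting {update.width}x{update.height} region ({len(update.pixels)} bytes)")
    return InputEvent(kind="key", payload=("F2",))

if __name__ == "__main__":
    update = server_render("Q3 forecast - commercially sensitive")
    event = zero_client(update)
    print(f"sent back to server: {event.kind} {event.payload}")

Which is also why a zero client with no local OS or storage can front an entire desktop: there's nothing for it to hold on to.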