Surely no one would have production systems automatically installing updates like this, would they?
VMware to customers: STOP INSTALLING OUR SOFTWARE! NOW!
In a move likely timed to coincide with the opening of Oracle OpenWorld, VMware has made vSphere 5.5 available for download here - but the release of the latest version of its flagship product has been marred by a warning to users of vSphere Replication 5.1 NOT to install 5.5. vSphere Replication is a business …
-
Monday 23rd September 2013 09:03 GMT Ambivalous Crowboard
Surely
Of course not. Even though the option exists and nothing bad has ever happened in the past and it's a timesaving feature and "VMware should know what they're doing and wouldn't release dodgy patches" and "what could go wrong?", nobody would ever, *ever* select it.
Additional reading: see Windows NT, Service Pack 6
-
Monday 23rd September 2013 08:53 GMT Number6
Automatic Windows updates over the years should have provided a salutary lesson in why you don't let the machine do updates unsupervised. I always set systems, whatever the OS, to tell me when they've got updates available, but never, ever to install them without asking.
If it's a big production system, you don't do updates until you've tried it on your test system and demonstrated that it's going to work with your set-up.
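A minimal sketch of that "tell me, don't install" policy, assuming a Debian/Ubuntu box (other package managers have equivalents): run apt's simulated upgrade from cron and report what's pending instead of applying anything.

```python
# Sketch: report pending updates without installing them.
# Assumes a Debian/Ubuntu system; -s makes apt-get simulate the upgrade,
# so nothing is changed on disk.
import subprocess

def pending_updates() -> list[str]:
    """Return the packages apt would upgrade, without installing anything."""
    out = subprocess.run(
        ["apt-get", "-s", "upgrade"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Simulated actions are printed as lines like: "Inst openssl [old] (new ...)"
    return [line.split()[1] for line in out.splitlines() if line.startswith("Inst ")]

if __name__ == "__main__":
    updates = pending_updates()
    if updates:
        # In practice you'd mail this to the admin; printing keeps the sketch simple.
        print(f"{len(updates)} update(s) awaiting review: {', '.join(updates)}")
    else:
        print("Nothing pending.")
```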
-
Monday 23rd September 2013 18:02 GMT chris lively
Whether it's better to turn on auto-updates or not is actually a tough question.
On the one hand, you might want security updates to go on as soon as they are available (unless it's a virus scanner). Why? Probably because you've been bitten by sysadmins that wait two years (or more) to apply patches, and your systems have been overrun.
On the other, you will want product enhancements to wait. Why? Because there is always some type of breaking change.
Sysadmins, on the other hand, are quite busy keeping things going. Most don't have an actual environment available to test updates, and those that do don't necessarily have the capability to run real-world traffic through those systems, so it's a crapshoot anyway.
Compounding the problem is that most updates give very, very little information on what changed, making it near impossible to make an intelligent decision.
For those saying it's dumb to turn on auto-updates: when was the last time you bothered to actually run an update? More to the point: how many of your systems are currently running on software proven to be either buggy or hackable? Likely most of you.
There is no right answer. Get bitten by hackers, or by engineers letting a bad release go out.
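The middle ground this comment circles around can at least be stated as a policy: auto-apply anything tagged as a security fix, hold everything else for a human. A rough sketch, with an entirely hypothetical update feed and classification field (real metadata would come from your distro's security feed or the vendor's release notes):

```python
# Sketch of a "security goes now, enhancements wait" triage policy.
# The Update type and its `kind` field are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Update:
    name: str
    kind: str  # "security" or "enhancement" (hypothetical labels)

def triage(updates: list[Update]) -> tuple[list[Update], list[Update]]:
    """Split updates into (auto-apply, hold-for-review)."""
    auto = [u for u in updates if u.kind == "security"]
    hold = [u for u in updates if u.kind != "security"]
    return auto, hold

feed = [Update("openssl", "security"), Update("vsphere-replication", "enhancement")]
apply_now, review_later = triage(feed)
print("apply now:   ", [u.name for u in apply_now])      # ['openssl']
print("review later:", [u.name for u in review_later])   # ['vsphere-replication']
```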
-
Wednesday 25th September 2013 04:21 GMT dan1980
TL;DR: saying "test all updates in a non-production environment" is easy, but actually DOING it is a massive task requiring a good deal of time, money, coordination and staff that not all businesses can afford.
---------------------
There is a certain snobbishness with some IT people that shows up a lot in these comment sections. It's typified by the feigned incredulity that anyone would ever apply an update without full, rigorous testing in a parallel environment.
While I don't doubt that many people here really do adhere to every best practice and follow every white paper to the letter, please accept that not everyone CAN do that. There may be people here who finished uni and went straight into a Fortune 500 company, supporting business critical applications and able to budget proper solutions. Good for them. BUT, in this modern world, IT is not the sole domain of large multinationals with turnovers in the hundreds of millions. IT systems are at the heart of almost every business - regardless of size or budget - and not every business can afford the kind of systems that those best practices and white papers call for.
The expense of implementing a REAL test environment is enormous. You must have identical servers, with identical interfaces, processors and RAM, identical network switches and routers, identical SAN infrastructure including representation of all tiers, likely with multiple units for each tier to replicate any redundant/striped configuration. If you have replication - as is the case here - you need test versions of all that hardware too. Of course you have to keep all the firmware up-to-date as well - servers, HBAs, NICs, RAID, HDDs, switches, SAN, backup appliances, etc....
Ditto for the software on the servers themselves, though of course virtual machines can be copied over to ensure up-to-date versions. VM or not, though, you will likely need to LICENSE much of the test environment if you will be running it for any length of time (which you should). Some software licensing allows for this but much doesn't.
You also need the space in your data centre/server room for this equipment and to power it and cool it.
Don't forget internet connectivity or private links either. After all, how will you test that a router firmware/OS update (not to mention a reconfiguration) won't introduce some bug, such as a memory leak (it has happened with Ciscos before), that degrades router performance to the point where inter-site replication latency creeps past some invisible threshold and the process fails? You might never see this in a back-to-back router configuration, because there the base latency is negligible: an increase of, say, 30ms might not push it over the edge on its own, but combined with a WAN connection already at 10 or 20ms, it will.
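To make that arithmetic concrete, here is a toy illustration. All numbers are hypothetical, including the 45ms "invisible threshold" the replication process is assumed to tolerate:

```python
# Toy illustration of the latency point above; every number is hypothetical.
LATENCY_BUDGET_MS = 45  # assumed invisible threshold in the replication product

def replication_survives(base_link_ms: float, regression_ms: float) -> bool:
    """True if total latency stays inside the assumed replication budget."""
    return base_link_ms + regression_ms <= LATENCY_BUDGET_MS

# Back-to-back routers in the lab: ~1 ms base, so a 30 ms firmware regression
# still passes and the bug stays invisible to your test environment.
print(replication_survives(base_link_ms=1, regression_ms=30))    # True
# The same regression on a real 20 ms WAN link blows the budget in production.
print(replication_survives(base_link_ms=20, regression_ms=30))   # False
```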
THEN, after all that is done, you have to actually run a representative workload on it, and for long enough that any problems become apparent, because some bugs only occur in very specific, unlikely scenarios but can be critical when they do hit. That means you somehow need live, realistic data in your test system, because you can't meaningfully have users working on the test system against dead data.
Best practice is also to test updates in isolation. Great, but how many bits of software and firmware need updates? You've got firmware, hypervisors, management consoles, OSs, support software, user applications, databases, LOB apps, monitoring and deployment software, backup and replication software, etc... That's just on the servers of course; you still have to look after the client machines, keeping them patched and updated but ensuring they can still access the systems. To do that, you need representative samples of all client machines too because you never know when some Windows update will conflict with the NIC driver and effectively render a portion of your machines useless.
Also, anti-virus . . . so much pain.
But of course, best practice is also to keep systems fully up-to-date and more than a few software vendors will offer only limited support (if any) unless your system is running all the latest patches and hotfixes right down the chain.
Oh, and you must have enough staff to cover the extra workload, but all companies have plenty of IT bods anyway, so that's likely not a problem for anyone . . . right?
So that's what's involved in testing updates before applying them, and it will cost a significant fraction of the IT budget to implement and maintain.
Something like the update in question should of course be tested, but the question is: how much do you test? It might be simple to test replication, but the point is that you don't KNOW what a given update might break, so you have to test everything. What if an update breaks USB redirection, but only with large (e.g. 2TB) drives, or those formatted with FAT32? (Granted, that's not likely to be too critical, but you never know.)
And, as a last note, perhaps some people who installed this update actually DID have a test system but it was just a simple host and so they were not able to test more advanced features like replication as they did not have the budget for the extra hardware.
-
Tuesday 24th September 2013 08:59 GMT drexciya
That was already so in 5.1
There are some strong reasons to get rid of the classic client, and VMware has already made some functionality web-client-only in version 5.1. The main gripe I have with the Web Client is that it uses Flash, which doesn't get much love for various reasons. Also, there seems to be a lack of Linux support for it, so the platform-independence argument doesn't hold any more.
If you really hate the Web Client, you can access certain functionality using PowerCLI or another comparable tool as well.
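PowerCLI is PowerShell-based; for the cross-platform case the comment raises, VMware's pyVmomi Python SDK covers similar ground. A minimal sketch that lists VMs without going near the Web Client - host and credentials here are placeholders:

```python
# Sketch: list VMs via VMware's pyVmomi SDK instead of the Web Client.
# Host and credentials are placeholders; the unverified-SSL context is for
# lab use only - verify certificates properly in production.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    # A container view walks the inventory tree for all VirtualMachine objects.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```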
-
Monday 23rd September 2013 21:55 GMT ecofeco
Cloud + Bad Updates = Godzilla
One day, bad updates, like the cloud, are going to seriously bite us all in the ass (previous examples notwithstanding, you ain't seen nothing yet).
Shipping software that is not fully tested should be a criminal offense. There are far too many critical bits of civilization now running on software.