* Posts by Elmars

7 publicly visible posts • joined 5 Jun 2013

This won’t hurt a bit, says Veeam, as it flags end of socket-based licensing


Pure greed, sadly

Depending on the workload, a two-socket machine with 1.5TB of RAM can host a huge number of VMs - in my case about 120. Moving from per-socket to per-VM licensing would blow my backup licensing costs out of the water. Fortunately, I have perpetual socket licenses, so I will hold on to them as long as possible. All of the alternatives are already on either per-core or per-VM licenses.

Funny that hardware costs used to be the limiting factor for server deployments; now the limiting factor is license costs. Backup, monitoring, AV, OS, DB, and the rest are all pushing costs up, each asking for $5-$50 per month per VM. If you don't have grandfathered socket agreements in place, you can easily end up paying more than $1,000 per year per VM just in license fees.
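To see how quickly the per-VM fees stack up, here is a rough arithmetic sketch. The $5-$50/month range is from the comment above; the individual product fees are made-up illustrative numbers, not real quotes:

```python
# Hypothetical per-VM monthly fees for a typical software stack.
# The individual amounts are illustrative only; each falls inside
# the $5-$50/month range mentioned above.
per_vm_monthly_fees = {
    "backup": 10,
    "monitoring": 8,
    "antivirus": 5,
    "os": 30,
    "database": 40,
}

monthly_total = sum(per_vm_monthly_fees.values())   # $93/month
annual_per_vm = monthly_total * 12                  # $1,116/year

print(f"${annual_per_vm}/year per VM")

# Scaled to the ~120 VMs a two-socket host can run:
print(f"${annual_per_vm * 120:,}/year for one host")
```

Even with mid-range fees, a single well-packed host crosses the $100k/year mark in licensing alone, which is why losing a grandfathered per-socket agreement hurts so much.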

Tech firms reel from Leave's Brexit win


There goes the EU data Safe Harbor business

This seriously sucks for foreign companies using hosting services in the UK to meet EU data residency requirements. The UK was a great place to park the apps, and it came with English-language support at no extra charge. With the UK out of the EU, it will become much harder to explain to continental customers that their data is safe. The even harder part comes first - explaining this to the sales team - and you can never be sure they won't mess up the message.

So... no choice but to move. Ireland? Germany? Sigh.

Vulture 2 takes a battering in 100km/h test run


Re: Unideal

Rather than adding the weight of a second power source, have you considered placing a supercap in front of the servos? Although expensive, they can deliver the instant power needs of the servo, while keeping weight to a minimum.

'Fan docks' are about to become a thing


right... 3d gesture recognition

Pretty much the only gestures I make to my computer are a fist and the birdie. I just wonder what the computer should do when it sees a birdie...

a) bsod?

b) reboot?

c) launch a porn site?

d) do a Max Headroom and Skype your mom to complain?

IBM rides nightmarish hardware landscape on OpenPOWER Consortium raft


No surprise Apple missing from the party

Had to check my calendar. About 25 years ago, IBM tried the same thing with its POWER chips. That produced the PowerPC architecture, which was supposed to be an industry standard but ended up as an Apple-only chip - the last time you saw it, it was called the G5. It then came back in the game console market, but that market is dying off right now.

So aside from all of the optimism, what is to keep IBM from losing interest again?

Dell's new Compellent will make you break down in tiers... of flash


Re: Ai... Sick of their promises


Thanks for the response - interesting to see that you are reading this thread. It is quite amusing to see senior Dell executives researching my background in other channels. It did take a day or so to figure out... and you thought I wouldn't notice. :)

We are working with your representatives and (maligned) business partner to resolve these issues. Please take a read through our recent cases to understand the full history of the relationship, and the scope of our frustration.

Please also be aware that our patience is wearing thin. Four years of promises and problems - and oddly, the only fixes offered come with at least five-figure price tags.


p.s. So is the fix for the abandoned VMware blocks also two years out? This is burning 2TB/month...


Ai... Sick of their promises

We have been struggling with their product for the past three years, and have twice upgraded controllers just to resolve issues with the system. Currently running the 6.2 release. We bought into the SSD-tiering Kool-Aid, only to find that it does not really work. Data progression only runs once a day, so when the SSD tier fills up, all writes go straight to disk - <SPLAT> goes performance. When data progression does run, it does not move all of the day's writes down to the next tier, so a bigger SSD tier is needed to absorb the leftover data. And if tier 3 is undersized, tier 2 does not work right either.
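The failure mode above (a once-a-day demotion run that moves only part of the day's writes) can be sketched with back-of-the-envelope numbers. All figures here are hypothetical - Compellent's actual data-progression policy is more complex - only the mechanism mirrors what I described:

```python
# Toy model: daily writes land in the SSD tier; the nightly data
# progression run demotes only a fraction of the day's writes, so a
# residue accumulates until the tier fills and writes spill to disk.
# All numbers are made up for illustration.

ssd_tier_gb = 2000        # usable SSD tier capacity
daily_writes_gb = 800     # new writes landing in the SSD tier per day
demotion_fraction = 0.6   # share of the day's writes demoted per run

occupied = 0.0
for day in range(1, 11):
    occupied += daily_writes_gb
    if occupied > ssd_tier_gb:
        print(f"day {day}: SSD tier full, writes spill to spinning disk")
        occupied = ssd_tier_gb
    # nightly run demotes only part of the day's writes,
    # carrying a growing residue into the next day
    occupied -= demotion_fraction * daily_writes_gb
```

With these numbers the residue grows by 320GB a day, so the tier fills on day 5 even though it is 2.5x the daily write volume - which is exactly why "just make the SSD tier bigger" keeps coming up as the fix.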

Then there is another issue we continually battle. When a disk fails, most SANs just mark it as bad, restripe to a new disk, and carry on. Not Compellent: it tries to keep a flaky disk running as long as possible. So when the disk starts burping, the performance of the entire array stalls until it settles down. In the logs you can sometimes see which disk is having the problem... Then, when it comes time to replace the disk, it takes about a day for the SAN to release it: it moves data off the bad disk onto the hot spare - yes, I typed that correctly. And when the bad disk burps under the intense load of having its data copied off, the SAN can freeze up again.

Of course, we are running over 500 VMs against this SAN, so when it stalls, bad things happen. One side effect is that the iSCSI stack on ESXi locks up, bringing the hypervisor to a standstill. 80% of the time we can recover safely - with performance like a newbie learning to drive a car with a manual transmission; it takes about two hours until the jerking stops. The other 20% of the time, ESXi locks up so badly that we have to start rebooting hosts... and writing letters to our customers. Joy.