* Posts by joejones

11 posts • joined 9 Jan 2012

SOLD: Emulex – for 34% less than shareholders were offered 6 years ago


FC will be around for a long time to come, but I can't imagine anyone starting with a blank slate for a storage environment would pick it over NFS or iSCSI. We use FC mostly because we already have it, and it's nice to have a dedicated storage network that doesn't get stomped on by some network nonsense. Those issues don't seem to occur much anymore, especially with the advent of cheap 10Gbps gear.

We have started to use NFS over 10Gb for VMware, and it has been excellent. The speed is great, and the operational benefits of NFS over VMFS are a considerable bonus on top of that.

We are also always several steps behind in the technology. We are only now installing 8Gbps switches (used) and cards (also used) in our production environment, and we will likely keep 4Gbps in the dev/staging/DR environment until 8Gbps becomes as cheap as 4Gbps is now. 4Gbps is still fast enough, but we had to replace our prod switches anyway, and it was cheap enough to upgrade.

We haven't purchased any new FC cards or switches in over 6 years, and we don't pay Brocade, Emulex or QLogic for support either. I don't see how they are going to make much money except from the banks/hospitals/megacorps that are so very entrenched in FC.

Who won all-flash sales sash, sucked up all the cash? – IDC report


These are always questionable metrics

I am not sure why anyone is surprised to see EMC leading any chart that has to do with storage revenues. Their systems are ALWAYS more expensive than similarly configured systems from competitors (HDS might be the exception, and with good reason), and they are the least efficient systems out there: they require the most raw disk/flash for whatever you are going to load onto them, so of course they ship more raw capacity than anyone else. This is on top of them being the largest storage vendor in the world for other dubious reasons.

That said - who really cares about AFAs? What percentage of companies out there really uses or needs them in the first place?

NetApp recently started publishing storage limits for FAS boxes with SSDs in them, since some of their customers were configuring and buying them as AFAs anyway. This was probably also done to shut up anyone whining about them not having an all-flash array.

It is a small portion of their customer base, because few companies require AFAs for their business, and those that do don't need it for everything.

EMC's flash caching solutions are notoriously bad and difficult to configure, and they are all about shaking more money out of customers, so it makes sense for them to push their current customers toward AFAs for specific purposes (reporting databases, highly transactional workloads, etc.). Not that these wouldn't perform well on a hybrid system from another vendor; it's just that EMC doesn't have a good hybrid system, and they are already in the position of having locked-in customers who buy whatever they are told to buy, so it's an easy sale.

Xen security bug, you say? Amazon readies GLORIOUS GLOBAL CLOUD REBOOT


Re: First BASHing

AWS and Azure (and I assume Google) cloud services are not like a well-built internal private virtualized environment. There is no vMotion or Storage vMotion (or whatever the Citrix and M$ equivalents are called). There is redundancy in networking (so we are told), but there is no separate path for storage or management.

The old AWS server class has a single, supposedly redundant, 1Gb connection going out of the server for everything - management, storage and regular network traffic - shared by all the VMs on that host. Few people in their right mind would set up a vSphere or Hyper-V environment with these sorts of limitations. There are ways to pay for more bandwidth and IOPS with a 10Gb pipe, but those are significantly more expensive.
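
As a back-of-envelope illustration of why that hurts (the 1Gbps uplink and the VM counts below are my own assumptions, not published AWS specs), the per-VM share of one shared host pipe shrinks fast:

```python
# Rough per-VM share of a single shared host uplink.
# HOST_UPLINK_GBPS and the VM counts are illustrative assumptions,
# not published AWS figures.

HOST_UPLINK_GBPS = 1.0  # one pipe for management, storage AND guest traffic

def per_vm_mbps(vm_count: int) -> float:
    """Even split of the host uplink across all VMs on the box, in Mbps."""
    return HOST_UPLINK_GBPS * 1000 / vm_count

for vms in (4, 8, 16):
    print(f"{vms} VMs -> ~{per_vm_mbps(vms):.0f} Mbps each, for everything")
```

And that even split is the optimistic case - one noisy neighbour doing a storage-heavy job skews it further.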

Lots of people make assumptions about AWS based on what their own internal virtualization environment looks like, and they are fools to do so. It is nothing like what any responsible person would set up - unless you were trying to be the cheapest cloud vendor in the world, in which case it's whatever you can do to make the setup as cheap as possible.

You know what Cisco needs? A server SAN strategy


Off topic - misuse of the acronym SAN

Off topic - but I am now officially blaming the media for the misuse of the term SAN - that means you, El Reg!

I am exhausted with this battle in the lexicon of IT - and clearly I have lost it.

Why have we destroyed the meaning of the acronym SAN? What does SAN stand for? Storage Area Network. A network.

A server with disks attached to it is not a SAN; it's a server with disks that can be shared out - a NAS/iSCSI/FC/FCoE shared storage device or array. It is not a network.

Somehow we have turned what was previously called a Storage Array into a Network. I am not entirely sure how this happened. I think it was some lazy marketing person from Dell who wanted to say SAN a lot because they had no idea what they were talking about.

Now we have turned this acronym into a catch-all for anything storage related that isn't DAS (direct attached storage) - an acronym whose name, note, has not been changed.

Please, PLEASE, PLEASE for the love of the universe, will you journalists at the Reg stop using the word SAN in this way!

Call Network Attached Storage a NAS!

Call a Storage Array a Storage Array!

If you want to refer to an entire set of network/fibre switches and the storage devices attached to them as a SAN, that would be G@#$#@$d Darn FU#@$#@ING acceptable.


Death by 1,000 cuts: Mainstream storage array suppliers are bleeding


I guess my comment wasn't clear enough.

I understand that cloud storage is a good tool for some companies to look at. If I had an SMB shop, cloud backups would be a superb solution for offsite storage and DR purposes.

The cloud industry (and its salespeople) has been trying to sell the notion of cloud to everyone for everything. My point is that it often isn't the right fit.

We had a cloud backup company trying to convince us - via a nontechnical director at my company who just sees cloud = job promotion and relevance - to look at their product. This company has its own datacenter with storage to hold the backups, and customers connect to it over the internet to store and retrieve their backup data.

My point: Let's say the northeast region of the US was down due to a power grid problem. This has happened, and is likely to happen again.

If all of their customers affected by this outage started restoring their data to their DR site in Nebraska or wherever, would they have enough bandwidth to accommodate it without all of their customers complaining that the restores are too slow? The people I spoke with never thought about that, or didn't want to discuss it. I think that's a pretty substantial issue, and a good reason not to use their service.
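
A rough way to frame that question (all figures below are hypothetical - nothing this vendor actually quoted us) is to divide the provider's egress pipe by the number of customers restoring at once:

```python
def restore_hours(data_tb: float, provider_gbps: float,
                  concurrent_customers: int) -> float:
    """Hours to pull data_tb down when the provider's pipe is split evenly.

    All inputs are hypothetical what-if numbers, not vendor specs.
    """
    share_bps = provider_gbps * 1e9 / concurrent_customers
    return data_tb * 8e12 / share_bps / 3600  # TB -> bits, then bits/bps

# Hypothetical: 10 TB per customer, a 40 Gbps pipe, 200 customers at once.
print(f"~{restore_hours(10, 40, 200):.0f} hours per customer")
```

Even with generous assumptions, a regional-outage restore lands in the multi-day range - which is exactly the scenario the salespeople wouldn't discuss.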

Our company has contracts with customers that pay us based on availability. I think anyone else in this situation needs to look long and hard at cloud solutions before jumping into one - especially if they already have systems in place that meet their needs.


Re: Bring on the Storinator!

The Storinator looks very cool. You would still need to buy a bunch of disks for it and arrange some level of 24x7 hardware support.

What virtual SAN (I hate how the world has changed the meaning of the acronym SAN) software would you suggest placing on something like this?

One would assume you would need to buy a few of these Storinators with redundant storage on both so there is no downtime, and the virtual SAN (Sideshow Bob grimacing and moaning) software would need to take care of that redundancy.


Re: Options are good, but cloud storage ...

We have looked into object storage as well, since we have a large number of files stuck in file shares on filers that are pretty unwieldy - tens of millions of files per volume on several volumes for one app alone.

Vendor lock-in is always the biggest concern with object storage. EMC's offerings with their RAIN products are notorious: customers move all of their data to the object storage (and it's not cheap), then realize how awful EMC is and want to move it to another product. And how do you do that? There is a company in the Boston area that specializes in migrating people from one object storage appliance to something else. They make a killing because it's so frickin' complicated.

Switching to an old-guard NAS vendor with dedupe and compression has saved us tremendous amounts of space (75% in most cases), and while it's no fun to back up, it's possible, and if we decide we want to switch to another NAS vendor, we can with tools like Richcopy and some downtime, without having to rewrite the application because there is a new API.


Re: ODMs will enter the fray, too!

Mainframes still run the majority of financial business in the world. They never really went anywhere; they just aren't the only option anymore. They aren't cool, but they are still pretty widely used, and there is a real shortage of programmers for them.


Options are good, but cloud storage has a lot of issues that never seem to come up in these pieces

I agree. Little storage vendors are making it harder for the big ones to make as much money as they used to. For someone like me that works and purchases gear from these vendors, we are spending less on traditional storage than we used to. Excellent stuff.

The cloud is another issue...

There is a similar but less comprehensive and not-as-well-reasoned piece on TechCrunch today that speaks of cloud destroying the old storage guard, but what people seem to keep missing about cloud storage is how you get to it.

We had a director tell us to look into getting rid of our expensive mainstream backup solution (they hate paying for anything, which I understand) and use this newfangled cloud-based backup some salesperson called him about, and I obliged.

They sound like they know what they are doing. They support lots of products (but not Oracle, so it's already a no). Then we asked about their bandwidth, and they claim to have OODLES of it. Then we asked whether they had enough bandwidth to satisfy demand from, say, 20% of their customers at once. Let's say the US northeastern seaboard power grid is down, and now we are all scrambling to restore our data at the same time. They didn't have a good answer for that one.

Then we calculated the costs required to upgrade our internet pipes to accommodate this - and the whole thing didn't make any sense, even if they did support Oracle. Even an incredibly fast internet connection would never come close to a local 10Gb connection - and let's not even get into latency over the internet compared to a local network.
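
For illustration, here is the kind of back-of-envelope math we did (the restore size, link speeds and efficiency factor below are assumptions for the example, not our actual numbers):

```python
def transfer_hours(data_tb: float, link_gbps: float,
                   efficiency: float = 0.8) -> float:
    """Hours to move data_tb over a link, with a rough allowance
    for protocol overhead (efficiency is an assumed factor)."""
    return data_tb * 8e12 / (link_gbps * 1e9 * efficiency) / 3600

DATA_TB = 20  # assumed restore size
print(f"local 10Gb LAN : {transfer_hours(DATA_TB, 10):.1f} h")
print(f"500Mb internet : {transfer_hours(DATA_TB, 0.5):.0f} h")
```

Hours on the LAN versus days over the WAN - before you even pay for the bigger internet pipe.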

And the rah-rah cloud storage piece is written by a VP of a cloud storage company. I guess it's really paid editorial advertising, but they never mention that on their site.

My point is that there are all these other costs around cloud storage beyond the $/GB a month, but people just seem to accept that they aren't a big deal until they have to pay for them.

Who will be David to EMC's external disk Goliath?


No, but I bet they would at least apologize for taking 10 days to replace a disk at a production data center (with very expensive production support costs).

But I can only imagine what Oracle has done to destroy Sun/STK. STK support was pretty fantastic. When Oracle purchased them, they started calling immediately about renewing our support contract in a very threatening manner, and we got so turned off we just skipped it and looked at buying a Quantum library. BTW, I have only had excellent support experiences with Quantum. I am sure they will get snatched up by EMC or Oracle and ruined as well...


But how long can it last

I am still baffled by the continued success of EMC. I don't think their products have any clear advantage over their competitors, and of the major storage firms their tech support is the worst in the business (I have used them all, and I have yet to have a satisfactory, let alone good, support experience with EMC).

