Posts by russtystorage

11 posts • joined 8 Nov 2013

IBM FlashSystem chief architect Andy Walls is a two-pools kind of guy

russtystorage

FC-NVMe is one of several viable choices

On the topic of access to next-generation storage, I completely agree with Andy. FC is well established, works very well and can support NVMe access. This isn't to say that NVMe-oF over Ethernet won't also have a place, but rather that customers will have a choice. If they are invested in Ethernet, they can use that infrastructure for storage access; if they are invested in IB (there are a few), that will be a choice; and certainly FC-NVMe will have a place. I completely agree that for the many large enterprises that have invested in FC access to storage, there is no reason to rip and replace; FC will support NVMe as well as other choices, thank you very much.

Evaluate this: A VM benchmark that uses 'wrong' price and config data

russtystorage

Chris Mellor took the time to speak with me about this testing and accurately quoted our conversations, which is greatly appreciated. No doubt the reporting is somewhat complicated, but the bottom line is the following:

1) This is NOT a vendor benchmark; it is an audited benchmark with published results from many large vendors (IBM, HDS, HPE, Intel, VMware) as well as others such as Tintri and Datrium.

2) Datrium achieved the published results with their storage gear as tested and reported

3) For IOmark-VM, CPU and memory are viewed as fungible, hence reporting the servers for HCI configs becomes a pricing exercise

Happy to discuss with anyone who isn't an anonymous coward.

Listen up, VMware admins: If the array won't support it, VVOL won't help you...

russtystorage

Re: If the Array Won't Support it, VVOL with Atlantis USX Can Help You

Yes, this is (and has been) the promise of storage virtualization since its inception: transparent pooling of back-end storage resources. There are certainly benefits to storage virtualization, and it is something quite a few customers have adopted. However, there are caveats, which is also why storage virtualization isn't used in every situation.

However, this is a great angle to play while many vendors are still playing VVol catch-up... Best of luck, Seth.

russtystorage

VVols will help VMware admins

Back to the original story.

Chris, your title is accurate, but not entirely correct. Here is why: one design point of VVols is to abstract features and capabilities and allow them to be instantiated in a common manner.

Without sounding too much like a former programmer, what this really means is that being able to create a snapshot from the usual VMware vCenter menu, regardless of the array underneath, is on purpose. If you are using VVols and your storage system supports snapshots, that command is executed on the array. If you are using VVols and your system doesn't support snapshots, VMware will create one itself.

Again, this is by design and is transparent. So, I would argue that using VVols does help the hypervisor admin, even when their storage system is lacking, because it enables them to do things the same way every time, without having to go find vendor X's Java vCenter plugin and figure out how to create a snapshot there, or try to remember exactly where the storage GUI is installed.
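As a conceptual sketch of that design point (the class and function names here are hypothetical illustrations, not VMware's actual VASA or vCenter API):

```python
# Conceptual sketch only: the admin always issues the same snapshot request,
# and the platform decides whether the array or the hypervisor carries it out.
# Names are hypothetical, not the real VASA/vCenter interfaces.

class StorageArray:
    def __init__(self, supports_snapshots: bool):
        self.supports_snapshots = supports_snapshots

    def create_snapshot(self, vm_name: str) -> str:
        return f"array-offloaded snapshot of {vm_name}"


def create_vm_snapshot(vm_name: str, array: StorageArray) -> str:
    """The single entry point the admin sees, regardless of the array underneath."""
    if array.supports_snapshots:
        # Capability advertised by the array: the command executes on the array.
        return array.create_snapshot(vm_name)
    # No array support: the hypervisor creates the snapshot in software instead.
    return f"hypervisor-side snapshot of {vm_name}"


print(create_vm_snapshot("web01", StorageArray(supports_snapshots=True)))
print(create_vm_snapshot("web01", StorageArray(supports_snapshots=False)))
```

Either way, the admin's workflow is identical; only where the work happens changes.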

Six-starved storage bods rush to support vSphere and VVOLs

russtystorage

I believe you are misunderstanding either VVols or this article. Unfortunately, VMware is complicating VVols by conflating its VSAN offering with them. While VMware's VSAN storage is a 2.0 product and does also support VVols, VVols are completely independent of VSAN.

That is, most people will have an external storage system and use VVols, all without a hint of VSAN to be found. Some will use VSAN and will use VVols with their VSAN, but these are all separate choices.

So yes, there remain plenty of reasons to purchase external storage: spinning disk, hybrid and all-flash.

VVOL update: Are any vendors NOT leaping into bed with VMware?

russtystorage

I see vVols and SMI-S as a "new" battle

Agreed that everyone has nice things to say about VMware and vVol support. The "official" beta includes only two vendors: HP and NetApp. Yes, others were in early on the program, including Dell, HDS and certainly EMC.

The interesting thing is how this will impact storage management for most administrators over the coming two to five years. VMware's vision with VASA and vVols is still evolving. However, Microsoft's vision with SCVMM and SMI-S is already a shipping, working product. Granted, MS is at least a generation behind VMware and has a fraction of the hypervisor market, but it is growing.

We are currently doing evaluations of both SCVMM-based storage management and the VMware vVol approach. As a prelude, I discuss some of the issues in a blog post:

http://www.evaluatorgroup.com/the-next-storage-management-war-russ-fellows/

Brocade-funded study says Fibre Channel faster than FCoE

russtystorage

Beyond performance

There are several issues being evaluated here: performance, management, and power and cooling. If we set aside the performance questions for now, the other issues are still quite relevant.

One of the main premises behind FCoE is that it is less expensive to operate and simpler to configure. If an FCoE environment requires 50% more cables and consumes 50% more power and cooling, then the question becomes: is it really less expensive?

For applications that are not highly performance-critical, the cabling, power and cooling aspects are still very relevant. For performance-sensitive applications, FCoE did have more latency under high load. These facts are hard to argue with.
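To make the "is it really less expensive?" question concrete, here is a toy comparison using only the 50% ratios above; the baseline cable count, power draw and unit costs are made-up placeholders, not measured values:

```python
# Toy operating-cost comparison driven only by the ratios discussed above
# (50% more cables, 50% more power/cooling for FCoE in this test). All
# baseline quantities and unit costs are placeholders for illustration.

fc_cables, fc_watts = 100, 1000.0        # arbitrary FC baseline
fcoe_cables, fcoe_watts = fc_cables * 1.5, fc_watts * 1.5

COST_PER_CABLE = 50.0            # placeholder USD per cable
COST_PER_WATT_YEAR = 1.5         # placeholder USD per watt-year (power + cooling)

def yearly_equivalent(cables: float, watts: float) -> float:
    return cables * COST_PER_CABLE + watts * COST_PER_WATT_YEAR

print(f"FC:   ${yearly_equivalent(fc_cables, fc_watts):,.0f}")
print(f"FCoE: ${yearly_equivalent(fcoe_cables, fcoe_watts):,.0f}")
# With identical unit costs on both sides, the 1.5x cable and power ratios
# alone make the FCoE configuration about 50% more expensive to operate.
```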

russtystorage

Real world testing

I was heavily involved in setting up and running this test. It was designed to test real-world configurations that we see Fortune 2000 companies (our clients) evaluating and deploying. The evaluation criteria were to determine whether there were differences in performance, cabling, or power and cooling between FC and FCoE.

We found that 16 Gb FC provided better performance under load than did twice as many 10 Gb FCoE connections. The target was 100% solid-state storage with a 16 Gb FC attach. The target was not FCoE, since very few companies are contemplating FCoE targets. As a result, a bridge from FCoE to FC was required, which again is quite common in actual deployments.

We were somewhat surprised to find that power and cooling were roughly 50% lower for FC, while it used fewer cables and delivered higher performance than FCoE.

Reg snaps moment when Facebook turned air Blu: 1PB box for unloved pics

russtystorage

1 PB, as if that is somehow impressive? Also, "zero power"? Really? Ever heard of tape? Google and Amazon both use tape for several reasons: it works. Nearly every company developing optical technologies for the enterprise has exited the market. There are consumer optical technologies, such as DVD/Blu-ray, but they do not have enterprise reliability or price/performance characteristics.

Tape has outlived its effectiveness as a primary backup/retention medium. However, for long-term retention, with high density and proven reliability, it is hard to beat. Look up the specs for SpectraLogic, Oracle/STK, HP, IBM or other tape libraries. It is quite possible to get 25 PB in one 40U rack with current tape technologies. Optical has never had better density or price/performance than competing magnetic technologies. Perhaps someday, but 2014 is not that year.
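A rough sanity check on that 25 PB per rack figure; the per-cartridge capacity below is an assumption for illustration (roughly an enterprise cartridge of that era), not a number taken from any vendor's spec sheet:

```python
# Order-of-magnitude check on rack-level tape density. The cartridge
# capacity is an illustrative assumption, not a quoted specification.

rack_target_pb = 25        # capacity cited as achievable in one 40U rack
cartridge_tb = 8.5         # assumed native capacity per cartridge, in TB

cartridges = rack_target_pb * 1000 / cartridge_tb
print(f"{rack_target_pb} PB needs about {cartridges:,.0f} cartridges at {cartridge_tb} TB each")
# Roughly 2,900 cartridges; whether one 40U frame holds that many slots
# depends on the specific library, so treat this purely as a scale check.
```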

Solidfire unfurls MAP to corporate wallets, large-scale VDI rollouts

russtystorage

What about certified results?

These are certainly good results. The biggest issue is the lack of an audit/certification of the results. Since their method of constructing the workload is non-standard, there is no independent way to certify the results.

On a related note, Brocade and Nimbus Data have jointly released a certified and audited report running a standard VDI benchmark, IOmark-VDI, which supported over 4,000 users in a 2U system at a cost of under $40 per user for storage.
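Taking those published figures at face value, the implied ceiling on the storage cost is straightforward to work out:

```python
# Simple arithmetic on the figures quoted above (over 4,000 users, under
# $40 per user for storage); this is just the implied upper bound.

users = 4000
cost_per_user = 40.0          # USD, upper bound quoted for storage

print(f"Implied storage cost ceiling: ${users * cost_per_user:,.0f} for {users} users in 2U")
# -> $160,000 at most for the 2U configuration described.
```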

SolidFire should consider publishing audited results if they want to play in this space.
