* Posts by Dave@SolidFire

14 publicly visible posts • joined 27 Jul 2011

Burying its head in the NAND: Samsung boosts 64-layer 3D flash chip production

Dave@SolidFire

Re: so how many write cycles is it good for?

The idea that write cycles are a major limitation for flash, or that disk is somehow better in that regard, is a badly outdated if not outright ignorant argument at this point, and it needs to die.

Disk has unlimited writes? What a joke! Take a 10TB spinning disk. What's its expected life? About 5 years? Now hammer it with 100MB/sec of writes continuously. You'll fill it up about once a day. What's the effective number of write cycles over 5 years? About 1900! That's it! Not "unlimited". Plenty of flash has more endurance than that!
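Here's that back-of-the-envelope arithmetic spelled out (a rough sketch with the same illustrative numbers, not a vendor endurance spec):

```python
# Back-of-the-envelope: effective full-drive write cycles for a 10TB disk
# hammered with ~100MB/s of continuous writes over a 5-year life.
capacity_tb = 10
write_rate_mb_s = 100
lifetime_days = 5 * 365

written_per_day_tb = write_rate_mb_s * 86_400 / 1_000_000   # ~8.6 TB/day
fills_per_day = written_per_day_tb / capacity_tb             # ~0.86 full writes/day

print(f"~{fills_per_day * lifetime_days:.0f} full-drive write cycles over 5 years")
# Roughly 1,600-1,900 depending on how you round "fill it up once a day".
```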

SolidFire CEO Dave Wright spins down at NetApp – for now

Dave@SolidFire

Not true!

Not sure why TheReg didn't try to confirm this with me, but I haven't left NetApp (try my NetApp email, it works!).

I am taking some time off this summer to travel with family but plan to be back in the fall.

We wish Val the best on his future endeavors.

Shrinkage!? But it's sooo big! More data won't leave storage biz proud

Dave@SolidFire

Beyond misleading

Funny that El Reg seems to be pushing NVMe-oF more than any vendor at this point :)

Pluck-filled platter-stuff: Bold disk drive makers fatten up

Dave@SolidFire

Re: Huh?

It's cheaper. You get to amortize the shared components (controller board, case, drive motor) over more platters / capacity.
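A purely hypothetical cost split (made-up numbers, just to illustrate the amortization effect):

```python
# Hypothetical bill-of-materials: shared components are a fixed cost, so
# adding platters spreads that cost over more capacity and lowers $/GB.
fixed_cost = 60.0        # controller board, case, motor (made-up figure)
cost_per_platter = 15.0  # platter + heads (made-up figure)
tb_per_platter = 1.5     # illustrative capacity per platter

for platters in (4, 6, 9):
    cost = fixed_cost + platters * cost_per_platter
    capacity_gb = platters * tb_per_platter * 1000
    print(f"{platters} platters: ${cost / capacity_gb:.4f}/GB")
# More platters -> lower $/GB, even though the total drive cost goes up.
```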

The only thing disk is good for at this point is low $/GB. So every change from now on will 100% focus on that.

Intel punches out data centre flash cardlet

Dave@SolidFire

Beyond misleading

It's kind of worrisome that whoever wrote this spec sheet for Intel doesn't know that TLC doesn't stand for "tri-level". TLC is more accurately called 3BPC (3 bits per cell), because it stores 3 bits, which any CS major would tell you requires... 8 different states (levels).

I see TLC commonly referred to as "three-level cells" in the press, but you would hope Intel themselves could get it right.
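The bits-to-states relationship is just levels = 2^bits:

```python
# Bits stored per cell vs. the number of distinct voltage states required.
for name, bits in (("SLC", 1), ("MLC", 2), ("TLC / 3BPC", 3), ("QLC", 4)):
    print(f"{name}: {bits} bit(s) per cell -> {2 ** bits} states (levels)")
```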

VCs: Can't see an IPO or acquisition for your startup? Don't throw in the towel

Dave@SolidFire

PE works well.... if you're profitable

The only issue with long-term holders (like PE) buying VCs out is that they typically only buy profitable companies. They aren't set up to keep injecting capital into a money-losing business - in fact, their model is typically to leverage their investment (via debt) to buy the company with the minimal capital investment needed.

So if you can get your company solidly profitable, a PE buyout is a very real possibility. If you're still in a loss-making phase, you need to find growth equity dollars.

SolidFire not a shiny object chaser

Dave@SolidFire

The question wasn't about flash in hyperconverged architectures, or flash in hybrid arrays - it was about server-based flash caching in front of traditional disk arrays, something that has had modest adoption but is unlikely to be a long-term architecture of choice.

On-prem storage peeps. Come here. It's time for real talk. About Google

Dave@SolidFire

Restore time is for the birds

It's worth noting that the restore performance of Google Nearline is pretty weak - 4MB/s per TB stored. That's ~3 days to restore your data, regardless of how much you have stored (and assuming you have enough bandwidth not to be the limiting factor yourself).
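The arithmetic behind that figure (the stored capacity cancels out, which is why the ~3 days holds regardless of size):

```python
# Nearline restore throughput scales at ~4MB/s per TB stored, so restore
# time is roughly constant no matter how much data you have.
tb_stored = 50                            # pick any value; it cancels out
throughput_mb_s = 4 * tb_stored
restore_seconds = tb_stored * 1_000_000 / throughput_mb_s
print(f"~{restore_seconds / 86_400:.1f} days to restore")   # ~2.9 days
```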

Nexenta beats off rivals as Citrix testlab rates its VDI offering 'cheapest'

Dave@SolidFire

Beyond misleading

(from my post on the Gartner blog...)

For a Gartner analyst to take these results and simply put them in a table as "comparable" is borderline irresponsible.

As Rob stated, this wasn't apples-to-apples, or even apples-to-pears. In SolidFire's case, the capacity used for this workload was 0.2% of the storage system, and even PEAK performance was only 10% of guaranteed IOPS.

You could easily put 7500 users on the same configuration for the same price, or, more likely, put dozens of other workloads on the same system with Guaranteed QoS, rather than add "yet another island" of storage for VDI.
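The headroom math, using the figures above:

```python
# The 750-user test peaked at ~10% of guaranteed IOPS and used ~0.2% of
# capacity, so the same configuration has roughly 10x performance headroom.
tested_users = 750
peak_iops_fraction = 0.10
capacity_fraction = 0.002

print(f"Users within guaranteed IOPS: ~{tested_users / peak_iops_fraction:.0f}")  # ~7,500
print(f"Capacity used at 10x scale: ~{capacity_fraction * 10:.0%}")               # ~2%
```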

The test situation may not have been ideal, but I don't fault Citrix. They aren't the ones putting all the results in a table that implies they are comparable - because they aren't. They simply post the whitepapers so that you get the full context of each test.

Also, fwiw, in addition to the questionable pricing, the "lowest priced" result also isn't an HA configuration. Anyone recommending a non-HA storage system for a 750-user VDI workload needs to get their head examined.

SolidFire brings out new Carbon, says it'll make data centres more like clouds

Dave@SolidFire

Re: "Real time" replication?

Generally, it would be considered async - it's real-time, but without waiting for remote confirmation. We don't use "async" as that term is far from standardized - many vendors still call snapshot-based replication "async" (e.g. Compellent), and other async schemes end up with large RPOs, which isn't the case here. "Semi-synchronous" is sometimes used for this type of replication, but that is also not standardized and is even more confusing. Real-time replication was the best description.
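A minimal sketch of the distinction, assuming a generic write path (illustrative only, not SolidFire's actual implementation):

```python
import queue

# "Real-time async": commit and ack the write locally, stream it to the
# remote side continuously. No waiting on remote acks (sync), no periodic
# snapshot batches (large RPO) - the replication lag stays near zero.
replication_queue = queue.Queue()

def write(block, data, local_store):
    local_store[block] = data              # commit locally
    replication_queue.put((block, data))   # hand off for immediate replication
    return "ack"                           # ack without remote confirmation

def drain_to_remote(remote_store):
    # Runs continuously on the replication link in a real system.
    while not replication_queue.empty():
        block, data = replication_queue.get()
        remote_store[block] = data
```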

Why storage needs Quality of Service

Dave@SolidFire

Re: Indeed, delivering good QoS isn't easy

What I said was that only SolidFire had built its architecture from the ground up to support Guaranteed QoS. 3par has bolted QoS functionality onto a 15-year-old ASIC-based controller -- a good architecture, but not one designed with QoS from the start.

I expect you'll see every other major vendor follow suit, just as they bolted on thin provisioning after 3par innovated in that area. If I've learned anything from 3par's marketing over the years, it's that a bolt-on is never as good as designing it in from the start :)

Dave@SolidFire

Indeed, delivering good QoS isn't easy

Good to see some of the incumbent vendors acknowledging that QoS is essential in large-scale, multi-application & multi-tenant environments. But as the article alludes to at the end, it's not such a simple task on most systems today. Between juggling tiers, RAID levels, and noisy neighbors, it's nearly impossible on most systems to guarantee a minimum level of performance... which is really the key.

Despite claims in the article otherwise, Netapp's QoS features today are just rate limiting ( http://www.ntapgeek.com/2013/06/storage-qos-for-clustered-data-ontap-82.html ).

Fujitsu has made a few references to "automating" QoS, but there doesn't appear to be any real detail on what that entails.

Only SolidFire has built its architecture from the ground up for Guaranteed QoS, including the ability to easily specify and deliver minimum performance guarantees, and adjust performance in real-time without data movement. ( http://solidfire.com/technology/qos-benchmark-architecture/ )
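As an illustration of the difference between a rate limit and a guarantee (field names here are illustrative, not an exact API):

```python
# A guarantee is a performance floor per volume, not just a cap. Pure rate
# limiting only gives you the max; a QoS scheduler has to hold the min
# under contention and allow up to max (or burst, briefly) when there's headroom.
volumes = {
    "prod-db":  {"min_iops": 15_000, "max_iops": 50_000, "burst_iops": 75_000},
    "vdi-pool": {"min_iops": 5_000,  "max_iops": 20_000, "burst_iops": 30_000},
    "dev-test": {"min_iops": 500,    "max_iops": 2_000,  "burst_iops": 4_000},
}

total_guaranteed = sum(v["min_iops"] for v in volumes.values())
print(f"System must be able to deliver at least {total_guaranteed} IOPS at all times")
```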

Going forward, high-quality QoS, including guaranteed performance, is going to be essential in enterprise-class storage systems.

How to tell if your biz will do a Kodak

Dave@SolidFire

The media yes.. but the storage systems.. no

While the physical media may be the same locally or in the cloud, it would be a mistake to think that the storage systems Amazon uses look anything like the EMC & Netapp arrays in most enterprises, or the Drobo and Netgear NAS boxes in SMB and home deployments.

The reality is that the migration to cloud will drive significant change in storage systems architecture, if not the media itself (and media is actually a small portion of storage $ spent).

Why should storage arrays manage server flash?

Dave@SolidFire

It may work, but it's just a stopgap

There are certainly advantages to server-side SSD caching, the biggest of which is that it reduces load on storage arrays that are these days taxed far beyond what they were originally designed for. But in the long run, I think we'll see server-side SSD caching as nothing but a complex stopgap making up for deficiencies in current array designs.

If you look at "why" it's claimed server-side cache is necessary, it basically boils down to:

-The array can't handle all the IO load from the servers, particularly when flash is used with advanced features like dedupe

-The reduction in latency from a local flash cache

The first is a clear indication that current array designs aren't going to scale to cloud workloads and the performance levels that all (or mostly) solid-state storage makes possible. Scale-out architectures are going to be required to deliver the controller performance needed to really benefit from flash.

The second is based on the assumption that the network or network stack itself is responsible for the 5-10ms of latency that he's reporting. The reality is that a 10G or FC storage network and network stack will introduce well under 1ms of latency - the bulk of the latency is coming from the controller and the media. Fix the controller issues and put in all-SSD media, and suddenly network storage doesn't seem so "slow". Architectures designed for SSD, like TMS, Violin, and SolidFire, have proven this. Local flash, particularly PCIe-attached, will still be lower latency, but that microsecond-level performance is really only needed for a small number of applications.
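Rough numbers for the network portion of that latency (illustrative, for a 4KB I/O over 10GbE):

```python
# Illustrative latency budget for a 4KB I/O crossing a 10GbE storage network.
# The point: the wire, switches, and stack contribute microseconds, not milliseconds.
serialization_s = (4096 * 8) / 10e9   # ~3.3us to put 4KB on a 10Gb/s wire
switch_hops_s   = 2 * 5e-6            # a couple of switch hops at ~5us each
host_stack_s    = 100e-6              # generous allowance for the host network stack

network_ms = (serialization_s + switch_hops_s + host_stack_s) * 1000
print(f"Network portion: ~{network_ms:.2f} ms")   # ~0.11ms, well under 1ms
# So the 5-10ms being reported has to be coming from the controller and media.
```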

EMC and Netapp have huge investments in their current architectures, and are going to try every trick they can to keep them relevant as flash becomes more and more dominant in primary storage, but eventually architectures designed for flash from the start will win out.