* Posts by Nick Triantos

13 publicly visible posts • joined 4 Oct 2008

FlashRay bang, wallop: NetApp bigwigs play musical chairs

Nick Triantos

Re: Single controller.....WTF?

FlashRay is under targeted availability today. That means it is not available to anyone and everyone, so we're fully aware of the single-controller ramifications. The thing to keep in mind when you compare an established company like NetApp to a startup is that NetApp's existence is not solely dependent on FlashRay.

Cheers

Nick Triantos

Re: 6U

We had two choices with FlashRay:

1) Use completely different HW and develop/debug/QA/support new diag packages, or

2) Use existing HW we're familiar with, leverage years of existing work, knowledge, and QA cycles, and therefore be able to bypass dev/test cycles and concentrate on things higher up in the stack.

#2 made sense to us. It's our HW, we know it well, and we already have the diag packages in production at thousands of customer sites, so why not use it? Made perfect sense to us.

The 6U package is a dual-controller 8060 enclosure, but with the internals of the 8080. So when FlashRay becomes a dual-controller system, the rack units will not change.

This is as far as I'll go before I step into hard-core NDA territory. That said, we will continue to develop ONTAP in the same manner that HP, EMC, and HDS develop their own architectures, all of which pre-date Clustered ONTAP and some of which pre-date ONTAP (7-mode).

Finally, FlashRay was developed from scratch under a different design center than ONTAP was.

Cheers

Nick Triantos

Disclosure: NetApp Employee

What benchmarks can tell you about your solid-state drives

Nick Triantos

Inline <name your efficiency> and Benchmarks

As always, a well-written article. One important point when it comes to benchmarking: most of these benchmarks predate arrays that use inline efficiencies (compression, dedupe, zero detection).

Furthermore, a lot of these popular benchmarks tend to write either repeating patterns or zeros, which constitute a highly compressible workload. Needless to say, the performance values and latencies produced are highly unrealistic, with very little work actually happening on the back end.

Personally, I like vdbench because it allows testing with specific compression and deduplication ratios that are more realistic than 1000:1. fio has some of that capability as well, but it's not as granular as vdbench.
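
For what it's worth, and purely from memory (check the current docs before trusting the exact parameter names), the knobs I have in mind look roughly like this:

    vdbench:  compratio=2  dedupratio=3  dedupunit=4k
    fio:      buffer_compress_percentage=50  dedupe_percentage=30

Either way the point is the same: if a tool only writes zeros or a repeating pattern, you're mostly benchmarking the array's zero-detect/compression path, not its back end.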

Cheers

Nick

Disclosure: NetApp employee

NetApp gives its FAS range a 4 MILLION IOPS dose of spit'n'polish

Nick Triantos

Mental Investment Required

Don't compare the SW architecture of a VNX to ONTAP; if there were a storage police, you'd be in jail by now.

However, everybody needs to make a mental investment beyond the 140 Twitter characters and the social media noise, and I'm willing to help with that. Here's a good starting point for understanding a few things about "real" AFAs vs ONTAP, plus a few other things I'm quite sure you had no idea about.

http://storageviews.com/

Disclosure: NetApp Employee

Hang on, lads. I've got a great idea, says NetApp as it teeters on the edge

Nick Triantos

Contrarian

This is my 12th year at NetApp, and if I've learned one thing over the years, it's that going with the flow and subscribing to various (in some cases) well-intentioned pundit beliefs is usually the wrong thing to do, because in the end the masses are generally on the wrong side of the fence. I've also learned that hype is not necessarily where true success lies. Over the years at NetApp we have been told that:

1) Our larger competitors would bury us. That, of course, has yet to materialize, and the company has grown from 1,200 employees when I joined to 12,000+ now.

2) We were told that we couldn't implement and find success with block protocols because we started with NAS, and it was a naughty thing. Today NetApp's block protocol business is almost on par with NAS.

3) We were told that deduplication of primary data is a terrible thing and nobody in their right mind would want to do it. Today almost every new array on the market offers dedupe or compression. In fact, for some it's their value-add, the secret sauce, and the cure for all data center ills.

4) We were told, in 2009, that without Data Domain, NetApp wouldn't be able to grow at all. NetApp continued growing for the next 4 years.

5) We were told that server virtualization was bad for NetApp, especially since EMC had bought VMware. We embraced server virtualization and have continued to reap the rewards for the last 10 years.

6) We were told that rejecting the notion of Automated Storage Tiering, in the sense of what it means to some of our competitors, and embracing caching instead would severely hurt us. Needless to say, these days we see more cache-based approaches than AST in the market.

7) We were told that our converged stack with Cisco (FlexPod) would be a #fail because it was a "reference architecture," not a product with a SKU, etc. FlexPod has been breaking revenue records for us, and you can see the growth numbers published by IDC.

We are now saying that we embrace the cloud. We don't want to become a cloud provider; we would much rather enable customers who want to leverage it, by making it very easy for them to get data ON and OFF our platforms while letting them keep the same processes and procedures they are used to in their private clouds. What's so crazy about this?

We also hear a lot of things about flash and how ONTAP is "naughty" for flash. Some of the comments are due to a lack of education, and that's our fault. Other comments are ill-intentioned. Usually you can tell which is which. So let's level-set here. Most, if not all, flash arrays have some fundamental things in common:

1) They usually have a file system under the covers. So does ONTAP. It's called WAFL.

2) Some write in 4K blocks. So does WAFL.

3) They don't do in-place writes. Neither does ONTAP.

4) They all do write coalescing... ONTAP has done this since 1992 (rough sketch below).

5) Some use RoW (redirect-on-write) snapshots... ONTAP has been doing this since 1992!

6) Most offer storage efficiency techniques...so does ONTAP
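
Since "write coalescing" and "no in-place writes" mean little without a picture, here's a rough, hypothetical sketch in plain Python (illustrative only, not ONTAP/WAFL code): incoming 4K writes are buffered, then written together to fresh locations in one large back-end write, and a block map is updated to point at the new copies.

    import collections

    BLOCK_SIZE = 4096        # 4K logical blocks, as in point 2 above
    STRIPE_BLOCKS = 64       # coalesce this many blocks per back-end write (arbitrary number)

    class RedirectOnWriteStore:
        """Toy redirect-on-write allocator, for illustration only."""

        def __init__(self):
            self.block_map = {}                       # logical block -> physical location
            self.pending = collections.OrderedDict()  # dirty blocks waiting to be coalesced
            self.next_free = 0                        # trivially simple free-space allocator
            self.media = {}                           # stands in for the SSDs/disks

        def write(self, lba, data):
            assert len(data) == BLOCK_SIZE
            self.pending[lba] = data                  # overwrite in memory, never on media
            if len(self.pending) >= STRIPE_BLOCKS:
                self.flush()

        def flush(self):
            # One large sequential back-end write instead of many small random ones.
            for lba, data in self.pending.items():
                new_location = self.next_free         # always a fresh location: no in-place write
                self.next_free += 1
                self.media[new_location] = data
                self.block_map[lba] = new_location    # the old copy can back a snapshot or be freed
            self.pending.clear()

        def read(self, lba):
            if lba in self.pending:
                return self.pending[lba]
            return self.media[self.block_map[lba]]

A redirect-on-write snapshot in this toy model is just a saved copy of block_map, which is also why point 5 above is cheap to do.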

So, if these new systems leverage fundamentally similar architectural concepts, why is it that they are called "innovative" while ONTAP, which has leveraged these techniques since 1992, is not? Is it because of inline compression (hold that thought on ONTAP) and dedupe? Is that the value-add? The notion that ONTAP can't handle SSDs is absurd, and we are showing this to our customers.

For customers reading this, please talk to your NetApp Flash specialists. For all others keep FUDing. It's a great opportunity for us to start conversations with those who are really interested.

Sorry for the long post, but I feel that while there are some well-intentioned folks on these boards who truly want to learn, there's also a lot of "poison," to the point that if we were to give out gold we'd still get blamed for it.

Cheers

Disclosure: A long time NetApp employee working at a truly great and ethical company

Life in the FAS lane: We reveal NetApp's four new flash-disk arrays

Nick Triantos

Virtualized files, smirtualized files. I don't see you complaining about VMDKs and VHDs; those are virtualized disk files as well. You may not have noticed, but the industry is transitioning to leveraging file systems under the covers, and even those competitors that threw rocks at WAFL have quietly deployed a file system. Virtualization is a GOOD thing because it enables new features, more flexibility, and data mobility. And yes, LUNs in ONTAP are virtualized, but not in the sense of what a file is or means. LUNs have completely different attributes than files and follow completely separate code paths. LUNs have streams; some streams contain data, some metadata. LUNs also have a VTOC, which is a volume table of contents: no VTOC, no LUN.

As far as performance goes, we've been consistently publishing SPC-1 benchmark numbers over the years for what is a very demanding workload.

Nick Triantos

The differences between Unified Storage and a Unified Architecture (NetApp) are not solely addressed by a management console. There are distinct processes that take place: different processes to upgrade, to install, to maintain, to snap, to replicate, to provision, to recover, and so on. Certainly having a single console is helpful, but it does not eliminate the inherent operational overhead of these types of architectures.

Disclosure: NetApp employee

Psst, keep this under your hat, but NetApp's new flash wunderkind will be unveiled on...

Nick Triantos

Re: Top of the world, Ma!

How did you arrive at this conclusion? We're making multiple bets in the same manner other companies made multiple bets, because in the end you'd rather have some overlap than gaps. Besides, no one in the industry really knows how all these trends are going to pan out, and those who claim they do are mostly startups hoping the roulette ball lands on their number...

Disclosure: NetApp Employee

Nick Triantos

Re: Late

I went to my 5th grader's parent-teacher conference a couple of weeks ago, and the young math teacher said, "Well, Alex is a very nice boy, very polite, and finishes his math tests quickly." To which my response was, "If he finishes quickly, then how come he gets Cs?" The moral of the story: while admittedly the folks who move first do have an advantage in the short term, they are not necessarily the ones who will succeed in the end. This is not a 100-yard dash but a marathon.

The enterprise storage market is north of $30bln. If you add in the entry level and the service providers, it is close to $50bln. Of that, the AFA segment is almost $1bln. In the scheme of things this is a very, very small percentage, although it's growing quickly.

The existence of the established vendors is not solely dependent on a single product. The existence of startups IS. While startups certainly tend to move quickly, they also tend to cut corners to get their product out; they have to, in order to get revenue in as quickly as possible. We're developing a system that will be able to adapt easily to different technologies as they become mainstream, and we're doing it in a manner that will not endanger our customers' data.

Cheers

Disclosure: NetApp Employee

NetApp dumps Filerview for new model

Nick Triantos

FilerView NOT going away

The headline saying that NetApp dumps FilerView is not accurate and is rather misleading. FilerView will not be going away in the foreseeable future. It will still be there, fully supported, for those who choose to use it. Eventually it may be replaced, but not until a suitable replacement is found.

NetApp's 50 per cent guarantee

Nick Triantos

response

This particular SPC-1 test didn't affect just EMC. It also affected Dell, which resells the particular Clariion array used for the benchmark. So EMC could still have challenged the result via its Dell relationship, because Dell IS a member of the SPC. In fact, the timeframe was not 45 days as I had previously stated, but 60.

Furthermore, while the rules were written assuming that challenges would come from members, the SPC was aware of the unique nature of that particular event and would have permitted EMC to challenge even if they chose to maintain their non-member status. Like I said before, the SPC auditor notified EMC of the SPC's intention to allow them to respond. In fact, the auditor said the following in a TechTarget interview:

"Baker said EMC has not challenged yet. “Absolutely not–and they have been notified, because I spoke with them myself,” he said. He added, “as the auditor I feel the result produced by NetApp is representative.”

Of the things listed, following best practices according to the published TRs, snapshots, dedup, AutoSupport, TP, and grouping VMs of the same OS type to get good dedup ratios are all things that NetApp has been doing for years. They were not invented for the purposes of this program.

Furthermore, since we're on the hook for this, I see no problem having our PS organization participate in the design and deployment process. Sounds like a no-brainer to me. It should also be evident to those who understand dedup that certain types of *primary* data just don't dedup well at all (images, encrypted data, etc.), so you can't guarantee dedup savings for these types of data no matter what the dedup algorithm is. This is about deduping *primary* data, not just backups and archives.

Nick Triantos

Guarantee

Rob,

NetApp's RAID-DP best practices recommend 14+2 RAID groups, not 8+2. Furthermore, there is no such recommendation for 60% utilization. In fact, for dedup we recommend keeping 3-5% available space in the volume, because we need that space to store the fingerprint file and some temp files used for sorting the contents of the fingerprint file before comparing the generated MD5 hashes that reside in it (a byte-to-byte comparison occurs if two MD5 hashes match). Now, if we recommended 60% maximum volume utilization, it would be pretty silly to also recommend 3-5% available space in the volume, wouldn't it?
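
If anyone wants to picture that post-process dedup flow, here's a rough sketch in hypothetical Python (not NetApp code): fingerprint each 4K block, sort the fingerprint file so candidate duplicates end up adjacent, and only do the byte-for-byte comparison when two fingerprints match.

    import hashlib

    BLOCK_SIZE = 4096  # 4K blocks

    def fingerprints(blocks):
        # Phase 1: build the fingerprint file (hash, block id) and sort it;
        # this sorting is what the temp files mentioned above are used for.
        return sorted((hashlib.md5(data).digest(), blk_id) for blk_id, data in enumerate(blocks))

    def dedupe(blocks):
        # Phase 2: after sorting, duplicates are adjacent, so one pass finds them.
        # A byte-to-byte compare confirms before any blocks are actually shared.
        fps = fingerprints(blocks)
        shared = {}  # duplicate block id -> block id it now references
        for (h1, id1), (h2, id2) in zip(fps, fps[1:]):
            if h1 == h2 and blocks[id1] == blocks[id2]:
                shared[id2] = shared.get(id1, id1)
        return shared

    blocks = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE, b"A" * BLOCK_SIZE]
    print(dedupe(blocks))  # {2: 0, 3: 0} -- blocks 2 and 3 now reference block 0

Keeping only small digests in the fingerprint file, rather than copies of the data, is why the space overhead stays in the few-percent range described above.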

As far as your "60% before degrading performance" comment goes, the burden is on those making the claim to prove it. At least that's how the legal system works. I can easily claim that anyone who buys a system other than NetApp will get sick and suffer from severe diarrhea. I can argue that until I'm blue in the face, but I can't prove it. Neither can you.

Just because NetApp's dedup is post-process does not mean you have to have ALL the space up front. There are two ways to skin this cat. One is to shotgun it and move everything at once. However, that's not how people deploy server virtualization; it's an evolution, not a revolution. So what you do is allocate some space up front, deploy, dedup, and then use the freed-up blocks to deploy more. If you follow a staged approach, you don't need the entire space up front.

Now the questions here are these:

What does everybody else guarantee, on their own dime?

What do customers who participate in the program have to lose, when dedup is FREE and provisions are in place to address those who may not benefit from it?

Why don't you pay a visit to StorageMojo's blog and take a look in the comments section at the space-savings output posted by a NetApp customer already doing it? He's not the only one.

http://storagemojo.com/2008/09/30/de-duplicating-primary-storage/#comments

Nick Triantos

Guarantee

Anonymous, you make it sound as if RAID10 provides triple-disk protection. It doesn't. It provides protection against a double failure, although only selectively, depending on which pair fails.

RAID4 = Tolerant of Up to 1 disk loss

RAID-DP = Tolerant of up to 2 disk losses

SyncMirror + RAID4 = Tolerant of up to 3 disk losses (any 3)

SyncMirror + RAID-DP = Tolerant of up to 5 disk losses (any 5; see the quick sketch below)
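
If the "any 5" arithmetic isn't obvious, here's a trivial way to see it (hypothetical Python, just counting, not a NetApp tool): SyncMirror keeps two full copies (plexes), each plex is RAID-DP, and the data survives as long as either plex does. Any 5 failures split across two plexes at worst 3+2, so one plex always has at most 2 failures and survives; 6 failures can split 3+3 and lose both.

    def survives(failed_in_plex_a, failed_in_plex_b, per_plex_tolerance=2):
        # SyncMirror = two full plexes; each plex is RAID-DP, tolerating up to 2 lost disks.
        # The aggregate survives if at least one plex survives.
        return failed_in_plex_a <= per_plex_tolerance or failed_in_plex_b <= per_plex_tolerance

    assert all(survives(a, 5 - a) for a in range(6))  # every split of 5 failures is survivable
    assert not survives(3, 3)                         # 6 failures can land 3+3 and lose both plexes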

Show me another RAID configuration that provides this level of protection, and at what cost. Btw, people do deploy this stuff.

Of course we'd compare RAID-DP against RAID 1. RAID-DP is more resilient than RAID1 or RAID10. And from a performance standpoint, we proved we can beat RAID10 using an industry-accepted, industry-written, cache-hostile, audited benchmark. Anyone who had questions or qualms about it had 45 days to respond and didn't... In fact, they still haven't responded.