* Posts by Val Bercovici

19 publicly visible posts • joined 15 Jan 2009

It's a Wright off: NetApp confirms SolidFire boss on hiatus, 'will be back' working on other stuff

Val Bercovici

NetApp HCI

FYI - While I had considerable input into, and influence on, the MRD and PRD for this upcoming product, it has been in the very capable hands of the dedicated engineering, product management and GTM teams in Boulder for quite some time now. In fact, I felt it was a great time to move on precisely because the project is in such good shape.

I'm really looking forward to seeing it ship, but also to working with everyone else in the hypercompetitive HCI space to help them better serve their customers via faster and more satisfying support resolution through Peritus.ai

-Val.

Between you and NVMe: NetApp dishes on drives and fabric access

Val Bercovici

Nate - thanks for your feedback.

Yes, FlashCache now has several generations of evolution under its belt and continues to optimize the performance of disk-based Unified FAS arrays. Our All-Flash I/O pipeline is so advanced at the moment that we don't need to double-buffer reads or writes via FlashCache to our SSDs.

OTOH - The main theme NetApp is conveying to the market now is that NVMe shows great promise once fully end-to-end systems are deployed. Being ready at some of the App / OS / Hypervisor / Host / SAN / Controller / Shelf / SSD layers is helpful and enhances customer investment protection - BUT...

... It's very important not to overhype the technology today and then underdeliver on customer expectations.

If performance is a priority - independently audited and peer-reviewed benchmarks are the only way to go! :)

http://www.storageperformance.org/results/benchmark_results_spc1/spc1_v1_results_netapp/spc1_v1_results_netapp_a02002/A02002_ES.pdf

Don’t get in a cluster fluster, Wikibon tells NetApp users

Val Bercovici

Re: Where's the Data Fabric?

OK - let's feed the troll...

NetApp's Data Fabric helps real Cloud users on AWS and, soon, Azure:

- Accelerate their development workflows via faster thin clones for rapid CD and broader CI testing

- Simply and efficiently replicate across regions

- Minimize their storage costs via dedupe, compression, thin provisioning and rapid snapshots

- Minimize their network costs via deduped & compressed replication that doesn't rehydrate (a back-of-envelope sketch follows this list)

- Stay compliant with international data residency requirements

- Arbitrage between supported Cloud providers (for NPS also including GCE, SoftLayer & Oracle Cloud)

- Add enormous strategic business value to almost every industry vertical.
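
As a back-of-envelope illustration of the network-cost point above (my own arithmetic; the ratios below are hypothetical, chosen only to make the math concrete, and this is not NetApp tooling):

```python
# Back-of-envelope only (hypothetical ratios, not NetApp tooling):
# estimate bytes on the wire when replicating a volume whose blocks
# stay deduped and compressed in flight, i.e. no rehydration.

def wire_gb(logical_gb: float, dedupe: float, compression: float) -> float:
    """GB actually transferred for a given logical dataset size.

    dedupe and compression are ratios, e.g. 2.0 means 2:1.
    """
    return logical_gb / (dedupe * compression)

logical = 10_000.0  # a hypothetical 10 TB logical dataset
print(f"rehydrated transfer: {logical:,.0f} GB")
print(f"deduped+compressed:  {wire_gb(logical, 2.0, 1.5):,.0f} GB")  # ~3,333 GB
```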

For a hands-on demo, see this excellent post: https://www.linkedin.com/pulse/building-data-fabric-aws-cloud-ontap-tony-lee?trk=hp-feed-article-title-share

Val Bercovici

Where's the Data Fabric?

To publish a report on NetApp in 2016 with no mention of our Data Fabric portfolio is .... "interesting" ;)

Former NetApp chief chap boards board at storage upstart Cohesity

Val Bercovici

$50M to $5B

I would not lightly dismiss the business instincts and operational skills of a leader who took NetApp from $50 million to over $5 billion in revenue. That was a truly rare and remarkable feat, in both absolute and relative terms.

I will always be a huge, unapologetic fan of Dan's.

-Val.

Fireworks expected from Oracle at Flash Memory Summit

Val Bercovici

Hi Chris,

Quick update: Lee Caswell will be on this panel - Session 201-F (Hybrid vs. All Flash Arrays - Which is Better for Your Data Center?):

http://www.flashmemorysummit.com/English/Conference/Details_Sessions_Tutorials.html

... while I will be giving the NetApp Keynote #11 Thursday morning @11:05 am:

http://www.flashmemorysummit.com/English/Conference/Keynotes.html

With all the strong Flash momentum @ NetApp lately, you'll just have to come and attend to see whether I discuss FlashRay or not ;)

-Val.

Who ate all the flash pie: Samsung, 'course, but hang on... GOOGLE?

Val Bercovici

valb@netapp.com

Disclosure - I work for NetApp.

Not sure how Gartner calculated their numbers, but as of mid-2013 NetApp has shipped over 50PB of FlashCache (formerly known as PAM II) alone. More impressive: real-time, data-driven (vs. policy-driven) auto-tiering technology is accelerating > 5 Exabytes of disk.
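
For scale, some quick arithmetic using only the figures above (my own back-of-envelope, not Gartner's or NetApp's numbers):

```python
# Quick ratio check using only the figures quoted above.
flashcache_pb = 50           # PB of FlashCache shipped
accelerated_pb = 5 * 1000    # 5 EB of disk accelerated, in PB

print(f"cache-to-capacity ratio: {flashcache_pb / accelerated_pb:.1%}")  # 1.0%
```

Roughly 1% of flash accelerating the other 99% of disk capacity captures the capacity-vs-value point in a single number.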

Once again, viewing capacity shipped relative to revenue reveals much more about the storage industry and the value delivered to customers!

NetApp's FlashRay to zap Symmetrix with fibre channel

Val Bercovici

All-SSD Systems

Hi Op #2,

I would always defer to your sales team regarding specific configuration recommendations. However, all-SSD FAS systems are supported and available with the appropriate disk shelf. Your SE would know whether they are recommended for a given workload vs. the new EF540. Larger SSDs for FAS are coming soon as well, although I don't want to pre-empt any upcoming releases with specific pre-announcements here.

-Val.

Val Bercovici

FlashRay Interest Abounds! :)

Great to see all the FlashRay interest on this forum! Let me address some of the points above.

1. EF540 vs Clustered Data ONTAP FAS | V-Series | EDGE

Application expectations of performance at the solid-state level are fundamentally different from performance expectations of disk or hybrid flash+disk storage systems. The latter went from response times of tens of milliseconds for disk-based systems to single milliseconds for hybrid systems. Clustered ONTAP-based storage systems do very well in both regards.

However, for the former, the EF540 treats 1ms as the response-time ceiling for most apps and delivers a consistent 300K 4K I/O operations per second (IOPS) at that level, with enterprise Reliability, Availability & Serviceability (RAS). That performance level is something Data ONTAP, and all competitors' popular disk-based arrays today, were never designed to deliver. Hence the need for a different architecture at the <1ms response-time level.
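
Putting those two figures together, a quick sanity check (my arithmetic only, derived from the numbers quoted above, not additional vendor data):

```python
# Derived arithmetic from the figures above; nothing vendor-specific.
iops = 300_000           # consistent 4K IOPS
block_bytes = 4 * 1024   # 4 KiB per operation
latency_s = 0.001        # 1 ms response-time ceiling

print(f"throughput: ~{iops * block_bytes / 1e9:.2f} GB/s")        # ~1.23 GB/s
print(f"in-flight I/Os (Little's Law): ~{iops * latency_s:.0f}")  # ~300
```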

Also, with regard to SSD capacity: WAFL's log-structured nature (especially during the consistency-point process) benefits from more SSD spindles of relatively small capacity, whereas the E-Series controller's pipelined I/O architecture is the opposite and can therefore leverage larger SSD capacities. You can expect to see that relative difference continue across both product lines.
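
To make the contrast concrete, here is a toy sketch of my own (it reflects nothing of WAFL's or E-Series' actual internals): a log-structured design buffers random writes and flushes them as one wide sequential stripe at a consistency point, so it benefits from having more devices to stripe across.

```python
# Toy illustration only - not WAFL or E-Series internals.
class ToyLogStructuredArray:
    """Buffers random writes, then flushes them sequentially across
    all devices at once ("consistency point"-style)."""

    def __init__(self, num_devices: int, cp_threshold: int = 8):
        self.devices = [[] for _ in range(num_devices)]
        self.buffer = []  # pending (block_id, data) writes
        self.cp_threshold = cp_threshold

    def write(self, block_id: int, data: bytes) -> None:
        self.buffer.append((block_id, data))
        if len(self.buffer) >= self.cp_threshold:
            self.consistency_point()

    def consistency_point(self) -> None:
        # Round-robin the batch across devices: each device sees a short
        # sequential append instead of scattered random writes. More
        # devices -> wider stripe -> faster flush.
        for i, write in enumerate(self.buffer):
            self.devices[i % len(self.devices)].append(write)
        self.buffer.clear()

array = ToyLogStructuredArray(num_devices=4)
for blk in range(16):
    array.write(blk, b"x")
print([len(d) for d in array.devices])  # [4, 4, 4, 4]
```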

2. EMC Symmetrix Performance vs Reliability

Brevity betrayed me in my original comment. I fully appreciate that today's EMC customers don't have to choose between these two. Despite their lack of flexibility, today's VMAX and yesterday's DMX are highly mature and reliable Tier 1 storage platforms which deliver good performance when configured for the task, along with excellent RAS.

My comments were made in a historical context. In the early 1990s, EMC's ICDA (Integrated Cache Disk Array) architecture used performance to disrupt the IBM DASD market - NOT reliability. EMC encouraged IBM customers to make the move based on PRICE/PERFORMANCE. Full stop. Period. :)

As customers entrusted more and more of their mission-critical data to EMC during the back half of the 1990s, reliability capabilities and supportability features were gradually added to make it the platform people appreciate today.

So when it comes to FlashRay, NetApp recognizes that solid-state media offers us a once-in-a-lifetime opportunity to leverage a tectonic industry disruption in Tier 1 storage. Early adopters will move to us for the superior price/performance NetApp FlashRay will deliver, especially relative to entrenched and start-up vendors alike, who will lack our rich feature set (N+1 scale-out, QoS/multi-tenancy, variable-block inline dedupe, compression, snaps, clones and, of course, powerful data replication).

Late adopters will move once we have proven our reliability over time in FlashRay 1.x & 2.x releases.

I hope this helps answer most of the questions above.

Val Bercovici
Go

Why NetApp moved from Single Architecture to the Portfolio Model

Hi Simon,

Great chat today! Let me elaborate a bit on that last point :)

Data ONTAP has served NetApp customers extremely well over the years, enabling file-server consolidation, Unified NAS & SAN arrays and, lately, the most efficient storage foundation for Server & Desktop Virtualization environments. Entry-level, Mid-Range and High-End FAS & V-Series models running Data ONTAP continue to be fully interoperable from an upgrade/downgrade and data-replication perspective. The new Clustered Data ONTAP even enables any combination of those to form a single cluster. Data ONTAP EDGE adds yet more virtual storage configuration options to this powerful mix. FlashCache/Pools/Accel accelerate it all.

However, every once in a while tectonic shifts occur in a marketplace, opening up complementary new segments which necessitate new platforms optimized for the new requirements. NAND Flash media (raw or via SSD) is a perfect catalyst for such change. Additional shifts include Big Data and extreme capacity- or performance-sensitive apps, which often drive separate infrastructure decisions, including dedicated storage silos. Satisfying this complementary new market demand is best done via complementary new products - hence the NetApp Open Storage for Hadoop, StorageGrid, EF540 and upcoming FlashRay product lines.

So the new NetApp "Portfolio" can be summarized as: Clustered Data ONTAP arrays for Shared Virtual Infrastructure; the EF540 (and eventually FlashRay) for sub-millisecond I/O; and E-Series-based NetApp Open Storage for Hadoop or HPC, plus StorageGrid, to address Big Data.

-Val.

NetApp's Cloud Czar predicts the death of VMAX

Val Bercovici

Thanks for picking up my blog Chris.

NetApp's Virtual Storage Tier (VST) architecture spans a continuum from the (solid-state or spinning) disk, through the array (cache), all the way up to the server hosting the apps in question.

Today the Goldilocks scenario reported by Andrew Nowinski of Piper Jaffray is meant to emphasize that Data ONTAP 8.1 Cluster-Mode is the "just right" Unified Scale-out storage array for the sweet spot of the Shared Virtual Infrastructure market, which also happens to be at the center of our VST architecture.

We will also soon fill out the edges of the VST continuum with real-time, granular, self-managed and de-duped tiering at the disk and server/host layers. Stay tuned to my blog for further updates later this summer! :)

-Val.

NetApp dumps Filerview for new model

Val Bercovici

Survey Says ...

Our customer surveys indicated that the overwhelming majority managed their storage from a Windows admin workstation. They also indicated a strong preference for a responsive UI - hence the MMC approach for NSM.

Linux/Unix customers also preferred the CLI to any GUI. Nevertheless, FilerView will not go away with the release of NSM; we will monitor customer feedback to set the pace of any eventual FilerView phase-out.

Finally, NSM is supported under most Virtual or Remote Desktop configurations, offering GUI access from almost any modern client OS platform.

Val Bercovici

NetApp Office of the CTO

What's next for NetApp hardware?

Val Bercovici

SpinNP is the answer for mainstream storage scale-out

Hi Chris,

Really interesting speculation here. Without raining on VirtenSys' parade too much: our Data ONTAP 8 scalability is based on a transport-independent protocol called SpinNP. While it can accommodate PCIe, I would urge you to look at Cisco's Data Center Class and Brocade's Enhanced (lossless) Ethernet technologies as our interconnects of choice when we ship the first phase of our mainstream scale-out storage family later this year.

Your conclusion is spot-on, though! With DOT8, NetApp customers will be able to aggregate any number of FAS controllers (in pairs) as nodes in a single system image, with atomic management properties and linear scalability of both performance and capacity.
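
For readers wondering what "transport-independent" means in practice, here is a deliberately generic sketch (my illustration only; it does not reflect SpinNP's actual design or wire format): the cluster protocol is coded against an abstract transport, so the same messages can ride over lossless Ethernet, PCIe, or whatever comes next.

```python
# Generic illustration of transport independence - NOT SpinNP's design.
from abc import ABC, abstractmethod

class Transport(ABC):
    """Anything that can move bytes between nodes: Ethernet, PCIe, ..."""
    @abstractmethod
    def send(self, payload: bytes) -> bytes: ...

class LoopbackTransport(Transport):
    """Test stand-in that just echoes the request back."""
    def send(self, payload: bytes) -> bytes:
        return payload

class ClusterProtocol:
    """Protocol layer: identical message framing over any Transport."""
    def __init__(self, transport: Transport):
        self.transport = transport

    def call(self, op: str, body: bytes) -> bytes:
        return self.transport.send(op.encode() + b"\x00" + body)

rpc = ClusterProtocol(LoopbackTransport())
print(rpc.call("read_block", b"vol7/block42"))
```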

Val Bercovici

Office of the CTO

NetApp

Storage vendor bloggers - losing data or losing the plot?

Val Bercovici

The plot thickens indeed!

Vinanti (or should I call you FemmeFatale?) - thanks for chiming in here (and on my blog) with relevant, objective technical detail!

This is precisely the kind of background info that explains my position against EMC's opaque stance on this issue. True to form, EMC's bloggers are now busy shutting down comments on their related blogs, just as EMC's PR people did years ago when this Centera silent data corruption issue was first exposed - and then covered up by the IT media.

Unfortunately, it's the innocent EMC Centera customers and archive software partners (like Symantec) who now have to live with this archive Russian-roulette scenario. They'll never know which data is gone forever until they try to retrieve it.

For all those who used the default EMC Centera configuration of collision detection OFF with SIS (Single Instance Storage), I strongly recommend following the "Next Steps" listed on my blog -

http://blogs.netapp.com/exposed/2009/01/emc-centera-cus.html

Val Bercovici

The Exposure Continues

Hello Coward and other commenters,

Please do keep the comments coming! My goal is to add exposure to the key topic of compliance archive data integrity, not to win tête-à-tête battles over third-party knowledgebase semantics.

Transparency on this topic is very important to me, and I've decided that putting up with online abuse is a small price to pay for the increased customer trust this exercise will yield once the disturbing veils of secrecy around EMC Centera data integrity are finally lifted.

-Val.

http://blogs.netapp.com/exposed/2009/02/its-never-the-u.html

Val Bercovici

Straight from the EMC Playbook

Hi marc / Barry,

Classic maneuver, right from the EMC playbook. Personalize the discussion and attack the whistleblower to distract from the facts.

Thanks for playing along:

http://blogs.netapp.com/exposed/2009/02/will-the-real-s.html

-Val.

Val Bercovici

Does the Reg use Centera? 2nd attempt at comment :)

Hi Chris,

First of all - thanks for bringing MUCH needed attention to this whole issue. I also commend your attempts at objectivity.

However, as they say, the plot THICKENS! :-)

As you may be aware, the original title of the Symantec KB Article is:

"Archiving items in Enterprise Vault to an EMC Centera may result in data loss."

I'll leave it as an exercise for the reader to deduce what "behind the scenes" activity inspired the change once I highlighted it on my blog :)

Regardless, the simple fact remains that this newly revised KB article still shows one, and only one, archiving platform vulnerable to data loss - EMC Centera.

There is no Symantec (or any other popular archiving ISV, for that matter) KB article warning of potential risks when archiving to NetApp SnapLock. Will there ever be? Who knows, but I like NetApp's odds due to one key difference - SIMPLICITY.

Call it what you will, but the EMC Centera API is a huge and complex beast to work with. NetApp's (optional) SnapLock API is the model of simplicity by comparison, and is often unnecessary since nearly all archiving vendors support direct, standards-based filesystem access anyway.
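
To illustrate that simplicity, here is a minimal sketch of the commonly documented filesystem-level WORM commit pattern (the mount point and retention period below are hypothetical, and NetApp's SnapLock documentation remains the authoritative reference): write the file over a standard protocol, encode the retention date in the file's atime, then drop write permissions.

```python
# Minimal sketch of a standards-based WORM commit (hypothetical paths;
# consult the SnapLock documentation for the authoritative procedure).
import os
import stat
import time

def archive_file(path: str, data: bytes, retain_until: float) -> None:
    with open(path, "wb") as f:
        f.write(data)
    # Record the retention date in atime, then remove write permission;
    # on a SnapLock volume this commits the file to WORM state.
    os.utime(path, (retain_until, os.path.getmtime(path)))
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

seven_years = 7 * 365 * 24 * 3600
archive_file("/mnt/snaplock/invoice-001.pdf", b"...", time.time() + seven_years)
```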

One can correctly label both Data ONTAP and EMC CentraStar as complex pieces of software, yet it is a gross oversimplification to conclude that their resulting levels of data integrity are therefore similar - especially when an empirical Google search yields many examples of data loss with one and none with the other.

I personally find it fascinating that some companies (such as Procedo) have built entire practices (i.e. PAMM) around migrating data away from EMC Centera onto safer platforms.

I guess where there's smoke...

-Val Bercovici

Office of the CTO, NetApp

Pillar towers over rivals in best value storage

Val Bercovici

Welcome to the club

I congratulate Pillar on publishing their first SPC-1 result.

I'd like to encourage them (and others) to add even more value to their results by publishing next time with rich functionality enabled - Thin Provisioning, Snapshots, Clones, etc. That will help customers get an even better approximation of the scalability of their desired solution.

I'd also encourage all the SPC-1 skeptics to do some research and review the elaborate and transparent policies & procedures SPC members must follow to publish. Of note are the independent audit of every report and the right of any SPC member (usually a competitor) to force the publisher of a technically flawed or invalid report to revoke it.

In light of all that, has anyone ever wondered why SPC member Dell never requested that the CLARiiON report published by NetApp be revoked?

-Val.

Office of the CTO, NetApp

http://blogs.netapp.com/exposed