* Posts by bitpushr

44 publicly visible posts • joined 11 Sep 2013

Infinidat benchmarking beatdown: Glugging slimfastq? Not us!

bitpushr

The article says the Infinidat box has "about 660MB of DRAM". I imagine you mean 660GB? :-)

Kaminario pumps up K2 all-flash array processor speed and SSD capacity

bitpushr

Re: More detail please Mr Kaminario

Agreed. In my experience, the average block size is usually strongly bimodal -- you may get a lot of I/O at, say, 4-8KB, and then you may get a lot of I/O at, say, 32-40KB, with relatively little I/O of other sizes.

Looking at these and saying "Well, the average is somewhere between 8 and 40" is, in my mind, not accurate.
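
A quick back-of-the-envelope sketch of that point (plain Python, with an entirely made-up histogram, so illustrative only):

# Illustrative only: a made-up bimodal histogram of I/O sizes (KB -> operation count).
histogram = {4: 40000, 8: 35000, 32: 20000, 40: 15000, 16: 500, 24: 500}

total_ops = sum(histogram.values())
weighted_avg = sum(size * count for size, count in histogram.items()) / total_ops

print(f"Weighted average I/O size: {weighted_avg:.1f} KB")   # ~15.3 KB
print(f"Ops actually issued near that size (16-24 KB): {histogram[16] + histogram[24]} of {total_ops}")

The "average" lands almost exactly where the array sees the least I/O.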

Fresh and fast little flashers from NetApp

bitpushr

Re: Confused

Which Flash players have a single product beating all of NetApp's product lines? If that were true, NetApp would have been out of business yesterday and all the other players would be posting profits. And yet...

bitpushr

If you have a benchmark that all (more or less) other storage vendors will agree to run, I'm sure NetApp would be all ears.

bitpushr

Re: Confused

Clustered ONTAP (by definition scale-out) has had QoS since 8.2, including regular/hybrid arrays and AFF.

In terms of your question, SolidFire and AFF do different things in different ways. SolidFire is SAN-only and was designed for service providers, insofar as it is QoS-first (including *minimum* guarantees) and API-driven throughout. It scales up to 100 nodes, as opposed to ONTAP's 24 (or 12 for SAN).

There's some overlap, but they're different horses for different courses.

Disclaimer: NetApp employee

Pure unsheathes the FlashBlade, cuts out NetApp legacy system

bitpushr

How do you get to 1,000 cables?

Assuming 1,300 disks and our least-dense shelves, which we haven't sold for a few years now, i.e. the DS14 shelf (14 disks in 3RU), you'd need 93 shelves. Each shelf has two power cables, so that's 93*2 = 186 cables. Each shelf has four FCAL cables, so that's 93*4 = 372 cables.

Add those together and you get 558 cables. Where are the other 440-odd?
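
Worked out in a couple of lines of Python (same assumptions as above: 14 disks per DS14 shelf, two power and four FCAL cables per shelf):

import math

disks = 1300
disks_per_shelf = 14                         # DS14: 14 disks in 3RU
shelves = math.ceil(disks / disks_per_shelf)

power_cables = shelves * 2                   # two power cables per shelf
fcal_cables = shelves * 4                    # four FCAL cables per shelf

print(shelves, power_cables + fcal_cables)   # 93 shelves, 558 cables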

Disclaimer: NetApp employee

Between you and NVMe: NetApp dishes on drives and fabric access

bitpushr

Re: netapp flash cache

Flash Cache can cache writes as of Data ONTAP 8.3, which is a couple years old now. Flash Pool has always been able to cache writes.

Disclaimer: NetApp employee

HPE 3PAR storage SNAFU takes Australian Tax Office offline

bitpushr

Re: I'm familiar with this deal.

NetApp deploys its E-Series platform for video surveillance. It uses RAID-DDP, not RAID-6, and DDP sets aside a portion of each disk as spare space. Therefore disks are not dedicated for spares, and rebuild speeds are dramatically improved.

http://www.netapp.com/us/technology/dynamic-disk-pools.aspx
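
A toy model of why spreading the spare capacity across every disk speeds up a rebuild (illustrative numbers only, not E-Series specifics):

# Toy model, illustrative numbers only -- not E-Series specifics.
data_to_rebuild_gb = 4000          # contents of the failed disk
per_disk_write_mb_s = 150          # sustained write rate of a single disk
surviving_disks = 59               # e.g. a 60-disk pool minus the failed disk

# Dedicated hot spare: every rebuild write funnels into one disk.
dedicated_spare_hours = data_to_rebuild_gb * 1024 / per_disk_write_mb_s / 3600

# Distributed spare space: rebuild writes are spread across all surviving disks.
distributed_hours = dedicated_spare_hours / surviving_disks

print(f"dedicated spare: ~{dedicated_spare_hours:.1f} h, distributed: ~{distributed_hours:.2f} h")

The single spare disk is the bottleneck in the first case; DDP removes it.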

Disclaimer: NetApp employee.

Dell EMC's Pure-crushing benchmarks are flawed, says, er, Pure

bitpushr

Re: Seriously, that's ridiculous

I work for NetApp, so neither EMC nor Pure, and even I don't believe the results. :)

NetApp facelift: FAS hardware refresh and a little nip ONTAP

bitpushr

In the quiet words of the Virgin Mary, "Come again?"

Who has a faster CIFS box than NetApp? Hint: no one does.

Nimbus Data CEO shoots from the hip at NetApp's SolidFire buy

bitpushr

Re: Snapshot company

I would argue that being the first multiprotocol NAS platform was innovative. And that being the first multiprotocol NAS + SAN platform was innovative. And that being the first platform to offer dedupe on primary storage was innovative. And that being the first platform to use SATA drives on primary storage was innovative. And...

I work for NetApp, and I'll be one of the first to criticize something, but we've hardly rested on our collective laurels since coming up with snapshots.

You're only young but you're going to die: Farewell, all-flash startups

bitpushr

Re: Tintri

Yeah, that's demonstrably not true. I'm sure the formatting will get screwed up, but:

dot83cm::> version
NetApp Release 8.3.2: Wed Feb 24 03:29:11 UTC 2016

dot83cm::> qos statistics latency show
Policy Group            Latency    Network    Cluster       Data       Disk        QoS      NVRAM
-------------------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
-total-                430.00us    87.00us   113.00us    83.00us    98.00us        0ms    49.00us
User-Best-Effort       430.00us    87.00us   113.00us    83.00us    98.00us        0ms    49.00us

That's my lab system, running cDOT 8.3.2. QoS is free and always on. Disclaimer: NetApp employee.

NetApp ain't all that: Flashy figures show HPE left 'em for dust

bitpushr

Re: Meaningless Fiction by Joe Unsworth

ASICS? I'm more of a NIKE man myself.

Pure pleasure with stonking final fiscal 2016 quarter

bitpushr

Re: Uh yeah...

If that $1 and its subsequent $12 customer purchase are costing Pure $20 to fulfill, well, I'm sure you can see where this is going.

Don’t get in a cluster fluster, Wikibon tells NetApp users

bitpushr

The idea that it takes a full-time employee 16 to 18 months to transition a single $300k storage array is so ridiculous that it defies belief.

NetApp hits back at Wikibon in cluster fluster bunfight

bitpushr

Re: My opinion as a customer

The maximum LUN size for Data ONTAP is 16TB, not 6TB.

Could NetApp buy SolidFire? It would be outside its comfort zone

bitpushr

Please tell me more about the supposed "block inefficiencies" of NetApp. #rolleyes

Where will storage go over the next 15 years? We rub our crystal ball

bitpushr

Re: Snapshots have never been a paid feature from NetApp

The same.

bitpushr

Snapshots have never been a paid feature from NetApp. They've been free since 1994...

Nice try, though.

VMAX flashes its virtues for all to see

bitpushr

Re: all results?

1TB of addressable storage is a joke. The SPC should tell them to sod off.

bitpushr

Re: all results?

The nice thing about the SPC-1 submissions is that, when you read the full report, the vendors are required to list the configurations that they use in their test scenarios. Seeing how the sausage gets made is always interesting reading...

Why the USS NetApp is a doomed ship

bitpushr

Re: where will other vendors be?

So the two big problems are:

1. NetApp tries too hard to sell its flagship product

2. NetApp is not particularly good at mergers & acquisitions

And this is enough to doom the company?

NetApp cackles as cheaper FlashRay lurches out of the door

bitpushr

Re: Too much Culture Kool Aid ?

SPC-1 is relevant because it's the only SAN benchmark we can all* agree to run.

Is its workload realistic to what most customers run? Probably not. Are the prices and configurations realistic to what most customers buy? Probably not.

Does it show which architectures can and cannot scale? Yes. Does it show which architectures can and cannot run at speed when scaled out? Yes. Could it be improved? Yes.

* With the exception of EMC...

Disclaimer: NetApp employee

bitpushr

Re: It's about time too

The FAS8060 has 32 CPU cores and the FAS8080EX has 40 CPU cores. With apologies to Mr. Gates, that ought to be enough for anybody ;-)

Disclaimer: NetApp employee

bitpushr

Re: Hmmmm

Which fundamental underlying architectural problems are we referring to?

Facebook SSD failure study pinpoints mid-life burnout rate trough

bitpushr

If you have 685 SSD drives, and they've all been running for 4 years, that's 685*4*365 days' worth of data you've gathered.

685*4*365 = 1,000,100 days' worth of SSD drive service.

Cinnamon 2.6 – a Linux desktop for Windows XP refugees

bitpushr

People love cinnamon. It should be on tables at restaurants along with salt and pepper. Anytime anyone says, "Oh this is so good, what's in it?" The answer invariably comes back, "Cinnamon." "Cinnamon." Again and again!

NetApp's customers resisting Clustered ONTAP transition

bitpushr

Re: CDOT fail

cDOT's big feature isn't "federated move and manage". Rather, I would argue that the big feature is "non-disruptive operations at scale". You can move volumes while serving data. You can move aggregates while serving data. You can add & remove nodes while serving data. You can add & remove network interfaces while serving data. You can upgrade node hardware while serving data. Hell, you can even upgrade & downgrade Data ONTAP while serving data!

Nobody else can do this today. And certainly nobody could do this 5 years ago, either.

Disclaimer: I am a NetApp employee

NetApp embiggens E-Series flashbox: Gee, a benchmark... thanks

bitpushr

Re: The low latency is the star here

There is no such thing as "block emulation on file". There is also no such thing as "file systems [creating] additional latency". Good grief.

Disclaimer: I am a NetApp employee.

EMC: Kerr-ching! $430m XtremIO gulp's paying off... Hello, $1bn a year

bitpushr

Re: Only EMC's View

NetApp's flash offerings also offer caveat-free warranties.

Disclaimer: I am a NetApp employee.

Server SANs: Don't throw the baby out with the bathwater

bitpushr

"Some people, when encountering a storage problem, think "I know -- I'll use iSCSI". Now they have two problems."

(With apologies to Jamie Zawinski)

SDI Wars: EMC must FORGET ARRAYS, adapt or disappear

bitpushr

Re: why compete against yourself?

You had me at "periodic orbital bombardment".

'I went from a two-hour commute to a 10-min scooter ride by the sea'

bitpushr

That depends. Are you now, or have you ever been, a Postman?

Oracle crashes all-flash bash: Behold, our hybrid FS1 arrays

bitpushr

Re: Strange...

The mind boggles at such ignorance.

No biggie: EMC's XtremIO firmware upgrade 'will wipe data'

bitpushr

Re: Customer success stories - Due diligence

If working for a storage vendor has taught me one thing... it's that benchmarking is hard. Someone here (I think Chad?) said, "there's more to benchmarking than making a LUN and pointing iometer at it", and that is 100% accurate.

I have lost count of the number of times a customer has approached a benchmark of their new NetApp kit by saying "I mounted a filesystem and then ran "dd" against it."

If your business makes money by using dd, this is probably a fine way to do it. But I haven't seen one of those businesses yet...

bitpushr

Re: Glass House

I agree a little and disagree a little more. By your rationale, VNX to VNX2 is a forced transition, because they're not developing VNX any more.

The differentiator in this case is that cDOT is so comprehensively different from 7-mode. Much different architecture, much different capabilities, different feature set, different shortcomings, different points of focus, etc.

The closest analogy I can come up with is imagining that you *had* to wipe your system to go from DOT 7.3 7-mode to 8.0 7-mode. Once done, you got better features, better efficiency, etc. It was a change, but not a fundamental one. (In reality, you didn't wipe your system to do this; it was just an HA takeover/giveback.)

bitpushr

Re: Glass House

As I mentioned on Chad's blog, this is comparing apples to oranges. (And, as I also said there, I work for NetApp.)

Upgrading NetApp 7-mode from 7.2 to 7.3 to 8.0 to 8.1 to 8.2 has not required you move the data off, blow everything away, and move it back.

Upgrading NetApp cluster-mode from 8.0 to 8.1 to 8.2 (to 8.3) has not required you move the data off, blow everything away, and move it back.

Going from 7-mode to cluster-mode is not an upgrade -- it is a transition. It *is* disruptive, as you correctly point out. But it is, for all intents and purposes, a different OS compared to 7-mode. Comparing NetApp's (disruptive) _transition_ to XtremIO's (disruptive) _upgrade_ is disingenuous.

bitpushr

Re: this isn't disruptive

"Downtime" is fine. If you have a Windows box, every Tuesday morning it experiences "downtime" when Microsoft patches this, that and the other thing.

What EMC is talking about is a wee bit more disruptive. Imagine if, on Patch Tuesday, you had to *migrate* all of your files to another computer *and* re-install Windows in order to apply the patch. Which is what seems to be the case with this XtremIO upgrade.

Disclaimer: I am a NetApp employee

NSA leaker Thomas Drake says Oz security reforms are 'scary'

bitpushr

Re: unfortunate how many useful idiots justify serfdom

You had me at "public serpents".

Dimension Data cloud goes TITSUP down under... after EMC storage fail

bitpushr

Re: this is what happens...

EMC *has* presented about its reliability in the past. Publicly.

Not everything is a competitive FUD-throwing conspiracy.

Disclaimer: I am a NetApp employee.

Life in the FAS lane: We reveal NetApp's four new flash-disk arrays

bitpushr

Two things:

1) NetApp does not do "block on file". Blocks in LUNs map to 4KB blocks *of disk* in WAFL.

2) All SAN is virtualized. Without virtualization, the only option for FC-attached storage would be JBOD, where the value of "bunch" equals 1.

My first computer had, I think, a 20MB HDD. Files in MS-DOS were mapped to blocks on disk via CHS, the cylinder-head-sector addressing scheme. But eventually drives got too big for CHS, and we switched to LBA -- the logical block addressing scheme.

Which is a form of virtualization.
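
For what it's worth, the CHS-to-LBA mapping is itself just arithmetic -- here's the standard conversion in a few lines of Python (the drive geometry is only an example, roughly a 20MB-class drive):

def chs_to_lba(c, h, s, heads_per_cylinder, sectors_per_track):
    # Standard conversion: sectors are 1-based; cylinders and heads are 0-based.
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

# Example geometry only: 615 cylinders, 4 heads, 17 sectors per track (~20MB).
print(chs_to_lba(c=2, h=1, s=5, heads_per_cylinder=4, sectors_per_track=17))   # 157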

Enterprise storage will die just like tape did, say chaps with graphs

bitpushr

Re: Partly stating the obvious - SANs are I/O bound

Disclaimer: I am a NetApp employee.

I can't speak for other vendors, but NetApp uses stacks of SAS - not loops. While the speed of SAS is 6Gbit/sec., it is important to remember that this is per lane. Our copper SAS cables feature four independent lanes which are automatically load-balanced, meaning we get 6*4=24Gbit/sec. bandwidth per SAS cable.

Twitteratti at NetApp event spill guts over FlashRay's innards

bitpushr

Re: 0 overhead?

In Data ONTAP, writes are acknowledged to the client when the data is committed *to NVRAM*. We don't have to wait for the data to go to disk; that happens at the next Consistency Point (CP).

FlashRay runs a different O/S, but the fundamentals would be the same in this case: so long as the data is somewhere non-volatile, you can happily acknowledge to the client that it's been written.
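
A conceptual sketch of that write path (generic journalling logic in Python -- not ONTAP or FlashRay code):

import threading

class JournalledStore:
    """Sketch only: acknowledge writes once they reach stable NVRAM, flush to disk later."""

    def __init__(self):
        self.nvram_log = []        # stands in for battery-backed NVRAM
        self.disk = {}             # stands in for the on-disk layout
        self.lock = threading.Lock()

    def write(self, key, value):
        with self.lock:
            self.nvram_log.append((key, value))   # commit to the non-volatile log
        return "ack"                              # safe to answer the client now

    def consistency_point(self):
        # Later, in the background: push logged writes to disk and clear the log.
        with self.lock:
            for key, value in self.nvram_log:
                self.disk[key] = value
            self.nvram_log.clear()

store = JournalledStore()
print(store.write("block0", b"payload"))   # client sees the ack before any disk I/O
store.consistency_point()                  # data lands on disk at the next CP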

Disclaimer: I am a NetApp employee.

Don't bother competing with ViPR, NetApp - it's not actually that relevant

bitpushr

Re: Benefits and Tradeoffs to tightly coupled scale-out vs Distributed archs

Disclaimer: I work for NetApp, but these are my opinions.

EMC needs ViPR because their customers need it. When you have as broad and immiscible a product portfolio as EMC does, there needs to be at least some common ground for management and administration. Frankly, I think ViPR's use cases would be a lot clearer had EMC not chosen to go with the industry's latest buzzword -- i.e., "software-defined" -- but they're certainly not the only storage vendor out there using that moniker.

Your quote that "[at] the end of the day, the primary need it serves is data services enablement and management across several different platforms" is the most concise description I've read for ViPR's existence.