* Posts by DLow

8 publicly visible posts • joined 29 Jan 2015

HPE brags its latest 3PAR OS shrinkwrapper better protects data


Re: Inline dedupe and compression/compaction?

Not arguing anything, dear ACoward, I'm simply asking. :)



Inline dedupe and compression/compaction?

So, I'm surprised there wasn't more noise/detail about that. I expected HP to dance all over the bar once it was announced.

That 75% thing being tied to the "get thin" thing makes me wonder if this is really the efficiency we've been looking for, though? ;)

Happy to be proven wrong, so please share details.


Daniel - NetApp

IBM's FlashSystem looks flashy enough, but peek under the hood...


Re: Only 3PAR?

Exactly, it's quite the opposite. Every one of the existing enterprise storage vendors (NetApp, HP, IBM, EMC) has taken existing systems, enhanced them for flash with added features and performance, and is currently going full speed ahead with great results. :)

We can argue who has done a better job at it but I'll leave that for another day.



EMC’s DSSD all-flash array hits the streets, boasting 10m IOPS


Re: More on DSSD

Hi, NetApp here! *cue the boos, fire and pitchforks*

100 NANOseconds. Really? I can't let you get away with that, sorry. ;)



NetApp cackles as cheaper FlashRay lurches out of the door


Re: It's about time too

Hi CoreInfMan! :)

I apologize this will be a little lengthy.

Great that you like it, and I hope you’ll like the rest of the stuff that comes with this as well. I won’t go into the Pure offering here (not saying it doesn’t look nice, it’s a smart message) but I will drop the old quote of “there are no free lunches”. ;)

If you would like our view on their offer and what we’re thinking going forward, I would encourage you to hook up with your NetApp rep and ask.

In terms of cost, I dare stick my neck out and say that you will be pleasantly surprised once you go and ask for a price indication; we are positioned more than well compared to our competition.

And if you haven't done it already and want to take the AFF for a spin, ask for a T&B/POC system (we have made that a lot easier and, more importantly, risk free for pretty much anyone involved).

Now, in regard to the AFF, CPU, compression etc., the new TRs (Oracle and MS SQL) and other performance reports are run with efficiencies enabled. 8.3.1 with compression is faster than 8.3 without.

I have not personally had a chance to try massive amounts of snaps and mirrors on a fully loaded system, so I can't say what the impact could be, but since almost no one will drive an AFF8060/8080 box to its limits there will most likely be plenty of CPU left.

EF-Series will still be around and expanded; it fills a gap where consistent, super-low latency is the name of the game, for example. It just can't be beat. [brag]I watched it do close to 100k IOPS, 100% write, at 132us the other day with a customer-supplied workload, not an engineered lab test, and none of the competition could do that.[/brag]

For monitoring, the new 8.3.1 System Manager will have a new dashboard to help out.

Will it at first release be as good as some of our competition? No.

Is it better than earlier versions? Very much so, imho.

That said, we are doing some pretty neat stuff that will allow for very high-detail monitoring, visualization and reporting; our CPOC labs are using it to great effect right now and it will be available for customers too.

Sorry again for the tl;dr-sized reply.




Re: It's about time too

Hi Daniel from NetApp here!

Actually, and I can't believe we missed stating this (derp), we have made some nice changes around maintenance and warranties as well: lots included along with the FlashEssential package, plus up to a 7-year warranty, flat fees when extending, set installation costs etc.

Are we perfect? Of course not, but I'd be lying if I said I didn't think we're doing a fine job towards it right now, and there is more to come. Have a chat with your closest NetApp rep or partner to find out more.

Game on! :)

HDS blogger names HDS flash array as latency winner


Hi! Daniel from NetApp here.

Actually, the (big/sharp) hockey stick does say a few things, mainly that the system will not gracefully come “out of performance”; it will dump it on you. But that discussion is for another day.

IOPS is cool, but latency is what’s important. The goal has always been the lowest possible latency. Or as many IOPS as possible at X amount of latency, if you like. It was never a max-IOPS race anyhow.

HDS look very good here and they should get some cred, so I won't dig into it; maybe someone else will do the detailed work.

In regard to controlling the 100% load results: you can. Sort of. But it’s a bit more complicated than that. The easiest way of putting it is that you, as the testing vendor, set a limit. A latency limit.

For HDD that limit is usually 20ms (30ms is the SPC limit, as noted), as that is what most transactional applications consider max latency before getting angry, hence you want to have some headroom.

For SSD that limit should be 1ms, as SSD and flash were introduced to drop latency far, far below what HDD can deliver, and 1ms is also what all vendors of SSD/flash systems have as a starting point.

So, setting 1ms as the latency ceiling, you get X number of IOPS out of the system being tested. The system might be able to deliver 10x the IOPS, but not at 1ms or less.

It's not avoiding or cheating the hockey stick, IMHO; it's showing what a system can do up to a certain latency point. Simple as that.
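If it helps, here is a minimal sketch of the ceiling idea in Python. The function and the sample sweep are entirely made up for illustration; no real benchmark data is implied. Given (IOPS, latency) measurements swept from light to heavy load, you report the highest IOPS figure whose latency stays at or under the chosen ceiling:

```python
# Hypothetical illustration of the latency-ceiling idea described above.
# curve: list of (iops, latency_ms) measurements; values are invented.

def iops_at_ceiling(curve, ceiling_ms):
    """Return the max IOPS achieved without exceeding ceiling_ms, or 0."""
    ok = [iops for iops, lat in curve if lat <= ceiling_ms]
    return max(ok, default=0)

# Made-up sample sweep: the "hockey stick" kicks in past 300k IOPS.
sweep = [(100_000, 0.4), (200_000, 0.7), (300_000, 0.9),
         (400_000, 2.5), (500_000, 12.0)]

print(iops_at_ceiling(sweep, 1.0))   # SSD-style 1ms ceiling -> 300000
print(iops_at_ceiling(sweep, 20.0))  # HDD-style 20ms ceiling -> 500000
```

Same system, same curve; only the ceiling you pick changes the headline number.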

I know this is a simplified view and explanation, and this could turn into a long discussion which I sadly don’t have time for (I have flash to sell ;) ), so I'll end it here.

As for other types of tests, the only other one I know of with a real-life connection is the “VMware mixed workload test” that ESG did. I have not seen them publish that for a while, though.

I thought that was a pretty good test: a VMware platform, then simulating Oracle, Exchange, web server and backup/table scans/indexing etc. at the same time, to show how a system would cope with multiple workloads while staying (well) below 20ms for all apps.



NetApp embiggens E-Series flashbox: Gee, a benchmark... thanks


Re: The low latency is the star here

Hi, Daniel from NetApp here (with the usual disclaimer that this is me speaking, not the company).

In regard to spares in SPC tests (and in general): it's up for debate/depends, imho.

Many of the SPC1 tests are with RAID10 and a smaller number of disks involved (per group/total), hence the need for a spare is not critical. And as SPC1 is a write-heavy test, using RAID10 is of course a performance play too; no raised eyebrows there, I think.

Here we are also talking about SSDs, where the failure rate is much lower from the start compared to HDD. And on top of that, it's small disk sizes too: 400GB in our test, and even today's enterprise SSDs are no bigger than 1.6TB.

So a smaller number of disks, smaller capacities and much faster transfer rates make for shorter rebuild times and less time exposed.
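A quick back-of-envelope sketch of that point, with invented throughput figures (not measured or vendor data), shows why the exposure window shrinks so much:

```python
# Idealized single-drive rebuild time: capacity / sustainable rebuild rate.
# All numbers below are assumptions for illustration only.

def rebuild_hours(capacity_gb, rebuild_mb_per_s):
    """Rough rebuild time in hours, ignoring RAID overhead and host load."""
    return (capacity_gb * 1024) / rebuild_mb_per_s / 3600

hdd = rebuild_hours(4000, 50)   # 4TB HDD at an assumed 50 MB/s effective rate
ssd = rebuild_hours(400, 400)   # 400GB SSD at an assumed 400 MB/s rate

print(f"HDD: {hdd:.1f} h, SSD: {ssd:.1f} h")
```

With those assumed rates the HDD rebuild runs close to a day while the SSD finishes in well under an hour, which is the whole argument in miniature.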

Now, if you want spares, close-to-zero exposure, and can live with a bit less performance, I would suggest looking at our DDP technology. 800GB SSDs can be back into “safe mode” in as little as 15 minutes.

And I know I said DDP gives less performance, but in the flash world this is all relative; ~175,000 IOPS at 400us (75/25 8k OLTP) when using DDP isn't half bad.

Anyway, to spare or not to spare? Maybe it comes down to (customer) preference and how much you trust your systems. Personally, I have not suggested or designed E/EF-based solutions with dedicated spares for some time now, as I, and more importantly our collected sensor data, say we don’t really have to.