* Posts by Nick Dyer

48 publicly visible posts • joined 20 Sep 2007

Sons of Sun DriveScale tempt cloud-lovers with composable infrastructure rig

Nick Dyer

Re: Why ethernet?

Mostly because Ethernet is a common, widely adopted and sticky interface that many network users and DCs around the world have standardised on and are very comfortable with - and refuse to move from.

IB, whilst superior in many ways, is always pushing water uphill, as it's another standard and cable/connectivity type for someone to manage.

SK Vinod was one of the three founding brothers of Xsigo, and knows full well the political challenges that IB has in the datacentre.

HPE's Nimble Secondary Flash Array uses... disk?

Nick Dyer

It's 99.999928% actually, polled constantly across >23,000 arrays worldwide... but who's counting the .000028%, right? :)

And not every flash array has to be all-flash. Certainly not for secondary storage.

Troll harder.

Oh, 3PAR. One moment you're gliding along. The next, you're in the rain as HPE woos Nimble

Nick Dyer

Re: more than 8 nodes

...funnily enough, one of Nimble's strengths is that it can perform at 100% writes with small-block random or large-block sequential IO without performance issues, regardless of all-flash or hybrid, thanks to the file system.

NVMe SSD? Not yet, says Pure, but promises to deliver it

Nick Dyer

Re: Upgrade

*Nimble employee*

Let's put this FUD to rest... a Nimble controller upgrade CAN be done live.

We can also do firmware upgrades live in the middle of the day, without needing support intervention/handholding/dictation, on an enterprise 6x9 primary storage system. All thanks to InfoSight.

I'll just leave this here...

https://www.nimblestorage.com/blog/go-ahead-update-your-storage-operating-system-in-the-middle-of-the-day/

By the way - that's not to say either is a good idea. Maintenance windows should ALWAYS be taken.

Nimble shows that its all-flashers start small – and grow bigger

Nick Dyer

Re: All employees respond - quick...

I'll take the bait...

99.99 vs 99.999 is a pretty serious error in a publication as broadly seen, read and digested in IT as El Reg - whether right or wrong.

Availability and reliability are probably (and should be) the two main criteria for any enterprise-class storage solution - so yes, it's highly important. You can have as many bells and whistles in a system as you like, but if the thing keeps falling over or corrupting data then it's not worth the fag packet the design was done on.

Fun fact: Nimble systems are actually running at OVER 99.999% availability. This is measured through 5-minute heartbeats from every single array deployed in the field, which currently stands at somewhere over 16,000 systems. We will happily provide the data for any customer that's interested. It's called InfoSight Labs - and it's very cool.
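As a back-of-the-envelope illustration (my own arithmetic, not the InfoSight methodology), here's what those availability percentages translate to in allowed downtime per year:

```python
# Convert an availability percentage into implied downtime per year.
# Illustrative arithmetic only - not how InfoSight measures anything.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes per year of downtime implied by an availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.99, 99.999, 99.9999):
    print(f"{pct}% available -> {downtime_minutes_per_year(pct):.2f} min/year down")
```

Which is why the 99.99 vs 99.999 distinction matters: it's the difference between roughly 53 minutes and 5 minutes of downtime a year.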

By the way, there aren't many other vendors that show true >5x9 availability through actually monitored data from real-world systems, rather than 'built for' or 'lab tested' figures. Most - such as 3PAR, Infinidat and others - just throw an abundance of overspec'd hardware at the problem to ensure >5x9 in lieu of being able to monitor and proactively fix issues.

Performance figures are driven by 4K blocks for IOPS and 256K blocks for GB/sec. However, because NimbleOS is defined by application rather than by LUNs, the array is able to drive variable block alignment, compression, dedupe and performance for each app. Here's a good read as to why we do that: https://www.nimblestorage.com/blog/storage-performance-benchmarks-are-useful-if-you-read-them-carefully/
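The reason for the two block sizes is simple arithmetic: throughput = IOPS x block size, so small blocks showcase IOPS and large blocks showcase GB/sec. A quick sketch, with made-up figures:

```python
# Throughput = IOPS x block size. The numbers below are made up
# purely to show why IOPS are quoted at 4K and GB/sec at 256K.
def throughput_gb_per_sec(iops: float, block_bytes: int) -> float:
    return iops * block_bytes / 1e9

# High IOPS at 4K blocks still yields modest bandwidth...
print(throughput_gb_per_sec(300_000, 4 * 1024))   # ~1.23 GB/sec
# ...while far fewer IOPS at 256K blocks yields more bandwidth.
print(throughput_gb_per_sec(20_000, 256 * 1024))  # ~5.24 GB/sec
```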

Nick Dyer

Re: Dedupe

Nimble's dedupe can be global or per-application. The benefit of it being per-application is that it will dedupe like block sizes together, without burning CPU cycles trying to dedupe things that will never be matched.
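As a toy illustration of the idea (not Nimble's actual implementation), per-application dedupe amounts to keeping a separate fingerprint table per application profile, so blocks are only ever compared against blocks that could plausibly match:

```python
# Toy sketch of per-application dedupe (NOT Nimble's implementation):
# blocks are hashed and matched only within their application's own
# dedupe domain, so e.g. database blocks never waste CPU being
# compared against VDI blocks that could never match.
import hashlib
from collections import defaultdict

class PerAppDedupe:
    def __init__(self):
        # one fingerprint table per application profile
        self.domains = defaultdict(dict)  # app -> {digest: block}

    def write(self, app: str, block: bytes) -> bool:
        """Store a block; return True if it deduped within its app domain."""
        digest = hashlib.sha256(block).hexdigest()
        table = self.domains[app]
        if digest in table:
            return True           # duplicate within this app: dedupe hit
        table[digest] = block     # unique so far: store it
        return False

store = PerAppDedupe()
store.write("sql", b"A" * 4096)
print(store.write("sql", b"A" * 4096))  # True: dedupes within 'sql'
print(store.write("vdi", b"A" * 4096))  # False: 'vdi' is a separate domain
```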

Nimble whips out fat boxen: We're here for the all-flash array market

Nick Dyer

Customer POC - AF7000 vs XtremIO

Hello everyone. Me again. I work for Nimble, by the way.

This popped up on Reddit yesterday: a customer (yes, an actual customer, not legacy vendor trolls like in this thread) POC'd a shiny new Nimble AF7000 vs EMC XtremIO. Interesting, wouldn't you say?

https://www.reddit.com/r/storage/comments/47epoh/observations_on_extremio_and_beta_nimble_afa/

ACs - You can carry on with the FUD throwing... but remember - it's all about the customer and applications, not about speeds, feeds and BS.

Nick Dyer

Re: Well they're good at marketing at least...

If you speak to your local Nimble SE, they can provide you with the true figures. More importantly, we're happy to put our money where our mouth is with true, production workload POCs. Like I said, we can have a p1ssing contest over 4k block IOPS charts, but in the real world, those figures don't really matter.

Nick Dyer

Re: Well they're good at marketing at least...

"So are you going to publish a workload based on real customer workloads?"

That's what SmartStacks are for. Sadly, the market is driven by 4K marketing benchmarks.

"Seems like the whole "we're bound by CPU not disk" falls apart when you finally admit flash is faster"

Can't say I agree - Nimble systems are still CPU- and memory-bound, by design. Even the AFA platforms are CPU-bound. For example, look at the performance difference between an AF3000 and an AF9000. Same SSDs, same chassis...

Important to note, the AFA platform uses a newer-generation CPU than the HFAs... now I wonder what the performance delta would be if the HFAs were fitted with the newer CPUs... I'll just leave that here...

Nick Dyer

Re: Well they're good at marketing at least...

All figures published are with compression switched on (every figure ever published has always included compression, as it's inline and has minimal performance impact). I'm pretty sure the figures include dedupe, too.

App Profiles are QoS-tuned and isolated in the backend file system today to ensure sequential IO does not compromise random IO (or that a heavy write workload does not compromise a read workload, for example), but there are no user-tunable settings just yet.

Nick Dyer

Re: Encryption

Encryption at rest on a Nimble platform can be enabled either on the whole array or on a per-volume basis, the reason being that there may only be a subset of data that actually needs encryption - why penalise the whole system with encryption overhead when it isn't needed?

There is a ~5% top-end performance impact on encrypted volumes... a feasible tradeoff for an important feature. Also, there's no need for expensive Self-Encrypting Drives or SSDs... or any software license fees.

Nick Dyer

Re: Hmm

Absolutely correct; but thanks to our CASL log-structured file system, this IO and writing effort is minimised drastically. It was one of the original design principles of the OS back in 2008, when cost-effective all-flash storage was mostly a marketing and dedupe-riding pipedream.

It's worthwhile digging out a CASL deep-dive video from YouTube to check out how it works. The beauty is in the detail.

Nick Dyer

Re: Well they're good at marketing at least...

BTW: IOPS statements and p!ssing contests are fun and all, but they don't answer the real questions, which are how a system performs with the production workloads required, and how data analytics can give insight and visibility into what needs to change over time.

That's the important part.

Which is why our AFA and HFA are custom-tuned and engineered with "application profiles", designed to perform exactly how we believe each workload wants to be treated. That allows noisy-neighbour avoidance and block-size alignment for the app, but also customisation of compression/dedupe for said application, to avoid burning CPU cycles for the fun of it (our inline dedupe is not global, by design).

We've tuned our systems based on a series of data mining exercises we performed last year (see here for the blog post: http://www.nimblestorage.com/blog/storage-performance-benchmarks-are-useful-if-you-read-them-carefully/)

Cheers

@nick_dyer_

Nick Dyer

Re: Well they're good at marketing at least...

Hi. Nick from Nimble here.

The great news is that Nimble's systems draw IOPS from (and are throttled by) Intel CPUs, so IOPS on our arrays (whether AFA or HFA) are not constrained by the disk media behind them - be it all flash or a mixture of flash and NL-SAS - OR by the read/write ratio of the IO pattern. Therefore, the IO performance of any Nimble system at 100% read is pretty much the same as at 70/30 r/w, 50/50 r/w, 30/70 r/w or 100% write. Being constrained by the CPU in a controller, rather than by the amount of flash or disk in RAID groups, makes things a lot easier, especially when scaling.

After all the sound and fury, when will VVOL start to rock?

Nick Dyer

Re: Customer in a sea of SEs

Expect replication and/or SRM support to be incoming shortly... I agree with you that this is a major sticking point at the moment.

Nick Dyer

Slow Burn

VVOLs certainly are very exciting and have the potential to really shake up the storage world for vSphere and storage admins/designs; however, in my opinion it will be a slow burn. Not everything will warrant VVOL deployment either - there's still room and need for traditional VMFS, especially for low-end VMs.

Also, VVOLs really just expose the storage platform's capabilities to the vSphere admin... so if the backend is a "legacy" storage system based on controller "SP" mappings, aggregates, tiers, RAID sets, disk types or segregated landing areas, then that is the capability exposed within VMware to the vSphere admin... potentially not something that makes a lot of sense, and it could be quite dangerous.

If anyone's interested, I recorded and published a deep-dive presentation and live demonstration of Nimble Storage's VVOL integration beta, available on YouTube here: https://www.youtube.com/watch?v=iBS_MZlpYtk

The network: Your next big storage problem

Nick Dyer
Linux

Re: 7/10

"Andy Warfield (who he?)"

An incredibly smart dude. http://www.cs.ubc.ca/~andy/

Tintri adds tincture of all-flash array to its range

Nick Dyer

Re: Why no performance stats?

Most likely because SPC doesn't allow participation from systems that have always-on data reduction features, as the workload used for SPC is dedupable/compressible and thus could return unfair results vs systems that don't have said features.

Nutanix digs itself into a hole ... and refuses to drop the shovel

Nick Dyer

The shady truth of the storage industry

<Storage industry rant>

This whole charade, whilst bad for Nutanix (and a little for VMware), is actually exposing a truth that has gone little noticed in the storage industry for many years: pretty much every vendor has set testing criteria of synthetic tests to make its product shine against competitors, and these are effectively pushed onto unsuspecting customers who believe the vendors have their best interests at heart (spoiler: they don't).

Examples range from EMC XtremIO & Pure, who have manipulated IDC's "Flash Storage Testing Guide" (a truly independent testing guide) in order to make everyone but them look horrendous, to people like Tintri, who deploy a VM full of synthetic tests which will fully dedupe in their flash tier and give unrealistic performance expectations.

Another great misleading example is Pure with their "average 32k block is best for performance testing" BS. If you don't know you're being misled, then sadly it's taken as gospel from revered industry giants - so we at Nimble used real customer data to debunk that particular myth: http://www.nimblestorage.com/blog/technology/storage-performance-benchmarks-are-useful-if-you-read-them-carefully/.

It's about time the industry as a whole standardised on real-world tools to give customers experiences at 0%, 20%, 50%, 80% capacity, and with variable, mixed workloads... but it's up to customers to demand that requirement, rather than accepting enforced test plans from a vendor.

</Storage industry rant>

Helpful Nimble enhances NUKE THE LOT option on array control panel

Nick Dyer

Roughly 12 new features in total, for free

Nick from Nimble here.

We've released about 12 new user-facing features in NimbleOS 2.3 (which is free of charge and compatible with all 10,000+ arrays in the field), with another 4-6 backend enhancements to benefit day-to-day usage.

For anyone who's interested (or competitive vendors wondering 'how did they do that'), those 12 user facing features are being blogged technically each day on our forum, available here: https://community.nimblestorage.com/docs/DOC-1494

It pays to fake it: Test your flash SAN with a good simulation

Nick Dyer

Nick Slater makes the most important point of the whole article

"Flash storage has great performance benefits for random workloads over spinning disks, but it’s still relatively expensive and has no performance value when it comes to sequential write workloads".

Well said that man.

Whilst there's a huge amount of hype around flash/SSD storage silos for accelerating performance, the biggest misconception is that any workload will be lightning(!) fast on flash... whereas any sequential workload (writes or reads) can actually be outperformed by higher-capacity spinning rust at a fraction of the price.

But of course, any good marketing veep will never mention that if the desire is to shift as much flash as possible!

Why are enterprises being irresistibly drawn towards SSDs?

Nick Dyer

Workload profiles also matter

Nice article, Trevor. It's also worth mentioning that whilst SSDs/flash are fantastic, they're not the answer for certain workloads. Random IO of course screams on flash, and that's well documented... but running sequential workloads (especially reads) will typically reap very disappointing results outside of synthetic benchmarks.

This is by far the most common misconception about flash in the enterprise today, and is why careful consideration needs to be taken when designing storage deployments that will be fit for purpose for the workloads they're serving. The notion of an all-flash datacenter for all workloads is a marketing step too far... for now.

Disclaimer - I work for Nimble Storage, but this subject is not a marketing message for our tech.

Let's talk about the (real) price of flash and spinning disks

Nick Dyer

Re: IOPS

Great point - this is usually the unspoken Achilles' heel of all-flash propositions; sticking a sequential workload on it really won't make it go that fast!

Golden age of invention or hyped-up age of overblown marketing?

Nick Dyer

I actually respectfully disagree with Storagebod on this one - IMO the storage industry is the most buoyant it's been for years, and finally there's a wave of genuine innovation taking place outside of "The Big Guys" which is seriously turning the industry on its head.

Nimble's latest mutants GORGE themselves on unlucky forerunners

Nick Dyer

Re: Was hoping...

Disclosure: Nimble Employee

We've publicly stated that Fibre Channel will be generally available for our customers by the end of this calendar year, and I believe we're on course to meet that schedule.

These new models also allow us to go beyond 3 disk shelves (plus an all-flash shelf as a fourth), which will be enabled by a software update in the near future.

Dell mashes up EqualLogic and Compellent: Eat up kids, it's Dell Storage

Nick Dyer

Re: Our perspective

Hello Bob,

Thanks for calling me out directly. However, I think you'll find that I was posting factual information about your new product rather than competitive FUD. In fact, I'm an ex-Dell engineer of multiple years (2 of them spent directly with the EqualLogic range). No matter which way marketing teams try to spin it, introducing a new platform does create an inflection point for customers. The great thing for us all in the market is that innovation is rife and gives customers more choice than the usual EMC & Netapp, which is only good for the industry.

Nick Dyer

Re: what does that mean

Negative - this is a Compellent product rebranded as "Dell Storage" and parachuted into the EQL price range. It doesn't scale out, only up, with another shelf (I think?).

Great news if you love Compellent. Otherwise it's time to evaluate the market, as this new product presents an inflection point of change for any current EQL customer (just like Netapp did by forcing cDOT adoption on their customer base).

Pure Storage's latest arrays cost DOUBLE what it claimed earlier

Nick Dyer

Re: Thinking outloud $/GB coupled with Pooled vs. Persistent VDI Desktops

I'd seriously argue against those $/GB figures vs the new breed of systems - would you be able to present them please, AC? Sounds like you may be a Dell rep in disguise.

To the OP - if you choose the right solution, you can get AFA performance out of the newer breed of storage vendors. For example, at Nimble we can deliver 60K IOPS (reads AND writes) out of 12x NL-SAS drives & 4x MLC SSDs, at a much lower price. We also have stacks of references from people who have done exactly that - and who also run SQL, Exchange, server virtualisation & file on the same drives without any problem... with no troublesome tiering involved.

Disclosure - happy Nimble employee.

Waterfront flats plan for IBM's UK HQ as housing market goes bananas

Nick Dyer

Re: I thought you meant the South Bank office

South Bank is owned & rented to IBM by Sir Alan Sugar, IIRC

Nexenta beats off rivals as Citrix testlab rates its VDI offering 'cheapest'

Nick Dyer
Trollface

Re: Misleading

Maybe so, but that's what customers are doing... Blame the marketing of certain all-flash vendors pretending to be the same $/GB as disk...

Enterprise storage will die just like tape did, say chaps with graphs

Nick Dyer

Re: PCI is the new network…..

What you just described was a failed startup called Virtensys, purchased by Micron a few years ago...

http://www.theregister.co.uk/2012/01/23/virtensys_crash/

The major problem here is if a PCI-E bus fails in a server, the whole server bluescreens. So it was always a massive SPOF.

Cisco reps flog Whiptail's Invicta arrays against EMC and Pure

Nick Dyer

Re: Troll elsewhere...

Nope - and your final sentence just shows how little you understand about the architecture & the value proposition of the technology...

Nick Dyer

We must be having some success...

You know what? With all the pandemonium in this thread from various ACs and disclosed competitors, it sure shows Nimble is making an impact in the marketplace and on incumbents' customers & deals.

FINALLY storage has become fun again - rather than "who should I buy next; HP? EMC? Netapp? No budget, hmmm Dell?"

Disclosure - Nimble employee and very proud of that fact :)

Nick Dyer

Re: Troll elsewhere...

Yo, troller. You are missing what I said: on a DUAL drive failure, services are halted - not on a single drive failure. As soon as a single failed drive is replaced, everything runs as expected. Keep up.

So whilst you may be enjoying throwing your out-of-date and easy to combat FUD - real customers (of which we have lots now) understand (and are thankful for) the data protection services we've built into our systems.

Final comment - I hate to keep banging on about it... but we've proven we have >5x9 availability across all of our installed systems in the field. All of your FUD bombs are meaningless when we've had <15 minutes of cumulative unplanned downtime across all installed systems in the last 18 months.

Nick Dyer

Re: Troll elsewhere...

Nimble's architecture has one spare per RAID set, which is per shelf... today. I'll let your competitive intelligence team update your FUD in a few months when you learn what's going to change...

And correct, the interface is different - in my experience at other companies, the interposers converting SAS to SATA have been known to cause quite a few of the overall failures in 7.2K drives.

Finally, yes, we use Supermicro chassis (as do the majority of newer vendors, and VMware recommends the Twin2 themselves for VSAN) - but as said above, having continuously measured >5x9 availability across the entire user base, this is a non-issue. I wonder how many other vendors can honestly say that...

Nick Dyer

Troll elsewhere...

Nice troll, AC. Like to correct you on a few things though...

As any storage professional will tell you, if you have a double drive failure at THE SAME TIME in a RAID set, then chances are your system is suffering from something outside of your control rather than just random drive failures. It could be rack/drive vibration problems, air conditioning failure, or even a rogue employee pulling drives maliciously. If that's the case, the chance of a THIRD drive failing is exponentially higher - which, in RAID 6, means your data is toast.
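To put rough (entirely hypothetical) numbers on that reasoning: under an independent-failure model, a simultaneous double failure is vanishingly rare, so observing one is strong evidence of a shared cause - under which every remaining drive's risk jumps:

```python
# Entirely hypothetical rates, purely to illustrate the reasoning above.
p_fail = 0.001  # assumed chance a given drive fails in a short window

# If drives failed independently, two at once would be ~1-in-a-million:
p_two_independent = p_fail ** 2
print(f"independent double failure: {p_two_independent:.1e}")

# Seeing two at once suggests a shared cause (vibration, cooling,
# sabotage), under which each remaining drive's risk is elevated:
p_fail_shared = 0.2  # assumed elevated per-drive rate under a shared cause
print(f"third drive is ~{p_fail_shared / p_fail:.0f}x more likely to fail")
```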

Gracefully stopping array data services is a pre-emptive measure to ensure NO data is lost, and is the default behaviour - which can be changed should a customer wish to. But any customer who values data protection would rather err on the side of caution and protect their mission-critical data than run the gauntlet of having no safety net should another drive fail. This idea comes from Nimble's Data Domain heritage and over 25 years of experience in the field of enterprise storage & backup.

PS - Nimble systems are running at >5x9 in the field right now, and have been since early 2013 (proven from autosupport data), which means your trolling remark means nothing in reality: the problem you describe would have a huge effect on these figures, as it would be unplanned downtime.

PPS - all drives are NL-SAS, not SATA.

People really need to stop using this sort of stuff as FUD. It makes you look a tad silly.

Tell it to me straight, vendors: Are you cheap and easy?

Nick Dyer

Define cheap?

The real question here is - what's cheap? If you're used to paying $500k for EMC Symms, then is $200k classed as cheap? Or are we in the ball park of $100k? $50k?

What's cheap to one person in one sector, is wildly overpriced in another sector.

Backers fatten up flash-disk mutant array, sic it on storage giants

Nick Dyer

PS

One last thing - marketing claims can only take you so far... I think you'll find the majority of storage/virtualisation admins look past the fanfare very quickly to understand the key differences between systems, which in storage arrays come down to the file system & codebase that deliver the feature sets.

If your system is built on a "house of cards" file system, no amount of claims is going to hide that fact - regardless of how great "dedupe" and "unified storage" sound.

Nick Dyer

Oh Rob....

Hi Rob! Thanks for your reply. It's great that the VP of Product Marketing has time to find this post and reply to it on a public forum...(!)

You're right! Customer value is more important. This is why Nimble has been so successful for the last 3 years of selling technology in the marketplace, and even placed on the Gartner Magic Quadrant for Storage this year in the “Visionary” category. I don’t recall seeing Tegile here (or anywhere in the Quadrant) at all. 1500 customers and 3000 deployments in under 3 years of selling is a huge achievement, so we must be doing something right.

Just want to pull you up on a few of your marketing claims if you don't mind...

1. Dedupe vs Compression

We all know the operational overheads that deduplication of data requires vs inline compression. Heck, a lot of our founding engineers (ex-Netapp and Data Domain, no less) do. This is partly the reason we chose NOT to do it; it allows us to run more important tasks on the system, like garbage collection & backend performance and cache-hit optimisations, meaning we can fill an array up to 95% of capacity before any performance overhead. ZFS, as you know, has huge problems in this area - performance tanks to the floor once past 60% capacity on the box (and degradation starts at 30%). The age-old problem of using a hole-filling file system, eh?

However, the figures you quote seem wrong. Our customers (all 1500+ of them) see compression of 40-60%+ on their production environments, not 20-30%. And it seems your customer would agree - this one in particular sees far better compression than dedupe ratios on their system. PS - NICE GUI(!): http://www.iphouse.com/blog/mike/wp-content/uploads/2012/01/20120110-zebi-1-volume.png

2. Unified Access

Sure, having all the protocols under the sun is great. But if your delivery of said protocols sucks, then you end up underachieving in everything and excellent at nothing. That's something I hear a lot out in the field, where I am every day. We, by contrast, chose a protocol we could optimise and build a solid foundation on, aiming to be the best in the field with that protocol. Which we are.

3. Active/Active Controllers

C'mon Rob... really? Any storage engineer/vendor worth their salt knows that running active/active controllers is a lot more complex to manage, with protocol and volume distribution across the system (ever heard of LUN TRESPASS?). It also means customers only ever run their controllers at 50% load, to ensure that if a controller failure occurs the system doesn't blow up when everything's running on one controller. It also means storage firmware updates are FAR more complex, require lots of downtime, and may even need engineers onsite to do them...

Whilst we run active/hot-standby controllers in our system (yes, data is mirrored from controller to controller in real time in NVRAM), a Nimble firmware update will take 5-6 minutes in total and will cause 4 PACKETS of downtime in the whole process. 4 PACKETS! That's insane. Also, Nimble can upgrade a storage array from 20K to 75K IOPS by live-upgrading controllers on the fly, without adding any further disks or SSDs to the system. Can Tegile do that? Didn't think so.

4. ZFS

ZFS has HUGE problems - lots of people/customers in the industry know this. You're trying to say that a legacy code base with a hole-filling file system, maintained by an open-source community with a few contractor band-aids, is better than engineering and architecting a file system from scratch with full in-house engineers and support?! And you're trying to sell these arrays to enterprise accounts?!

By the way, Nimble has been writing CASL (its patented and proven file system) for over 5 years now, with amazing success... so I'd say we've "got it right". Our large customer base (and world-class 24/7 support team) would agree.

Nick Dyer

The above poster gets it - Nimble (my employer) and Tintri are unique in that they've developed systems with no hangups from previous file systems, internal/external requirements or politics - so they've had the ability to truly make something new which beats the hybrid offerings from Dell, Netapp, EMC etc in performance and usable capacity (and potentially other things such as cost, management etc).

Tegile is an odd one, as it's really just a ZFS storage array with the metadata placed on SSD. It can be compared to (and has all the same problems as) Nexenta, Greenbytes and other half-baked systems. However, their cost is drastically low (so much so that margins can't be good).

At Nimble, we've officially surpassed 1500 customers and 3000 systems - a very impressive statistic for a young upstart that has only been selling product officially since mid-2010... although what's even more impressive is this recent story detailing a non-disruptive upgrade of our first ever customer-installed array from a 20K to a 75K IOPS system... this is truly what makes us different from (and more appealing than) the Tier 1 guys. http://www.nimblestorage.com/blog/customers/non-disruptive-upgrade-to-nimble-storages-first-deployed-array/

Dell's new Compellent will make you break down in tiers... of flash

Nick Dyer
FAIL

Typical Dell announcement - missing the point and the timing of the technology, and where the market is moving at a very rapid pace. Flash (or SSDs, if you're working with legacy storage vendors) works best as a cache rather than a tier. And their "data progression" software is not a selling feature; I know lots of customers moving off CML purely because of how poorly that code works.

Also agree with the above poster about the FAIL of not having inline compression or dedupe. However, the latter isn't so important for primary storage (unless you work in marketing for Netapp...).

Mutant array upstarts feast on EMC, NetApp's leavings

Nick Dyer
Flame

LOL @ The AC's

I must chuckle at all the ACs posting. I wonder how many work for the "mainstream vendors", as Mr Mellor called them...

Re the FUD about EMC working with VASA/VAAI and so being able to do the same as Tintri - there is no way that any of the Tier 1s would be able to come anywhere near what Tintri have produced within the next 18 months without a complete ditch-and-shift of their core underlying OS and file system, and thus their install base (something called The Innovator's Dilemma - exactly what Netapp hit EMC with in the early 2000s). Much like how neither EMC nor Netapp would be able to match the IOPS, capacity, protection & scaling that Nimble Storage can reach using the same core hardware.

PS - I have no problem stating that I work for (and love working for) Nimble Storage, as more deals vs the "mainstream vendors" are being won than lost right now.

Nick Dyer

Re: Tentri based on ZFS?

Correct - Tegile is a ZFS-based storage system, so it has all the pitfalls of ZFS, most notably the performance cliff as capacity utilisation climbs (starting at 30% used).

There's an outstanding blog by Delphix which details this exact problem, which is available for viewing here:

http://blog.delphix.com/uday/2013/02/19/78/

Note: Neither Tintri or Nimble Storage is built on ZFS - both have their own purpose built & designed file system.

Intel delays quad-core Penryns to pummel Phenom?

Nick Dyer
Dead Vulture

Marketing spin....

It's probably more down to the fact that they can't make the chips fast enough.

In the server market they can't meet demand for the new 45-nanometer quad-core chips, and won't be able to flood the market until late Q1 (all tier 1 vendors are suffering major supply shortages right now!), no doubt having a knock-on effect on their desktop production line.

Apple Macs

Nick Dyer

why bother

....when you can run OSX x86!

Microsoft counters VMware insanity with optimistic frown

Nick Dyer

I'm not holding my breath....

Anyone remember the "state" of Virtual Server 2005?

That could only be described as a car crash, at best.