* Posts by DeepStorage

42 publicly visible posts • joined 6 Dec 2012

File software-flinger Elastifile stretches funding further to $65m

DeepStorage

Smaller round, not down round.

A down round isn't, as you seem to believe, a funding round that raises less money than a previous round, but a funding round that raises money at a lower VALUATION.

This is a strategic investment from WD more than a fundraise by Elastifile because it needed the money. As long as WD paid the same, or more, for each 1% of equity as the investors in the last round, it's not a down round.

To make the math easy: if the $35 million last round bought 35% of the company, then $16 million from WD buying 16% of the company would be the same valuation, and if it bought only 12% that's an up round.
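As a back-of-the-envelope sketch of that math (the percentages are the hypothetical ones above, not Elastifile's actual cap table):

    # Implied post-money valuation = dollars raised / fraction of equity sold
    def post_money_valuation(amount_raised, equity_pct):
        return amount_raised / (equity_pct / 100.0)

    prior_round = post_money_valuation(35_000_000, 35)  # $100M
    wd_flat     = post_money_valuation(16_000_000, 16)  # $100M - same valuation, not a down round
    wd_up       = post_money_valuation(16_000_000, 12)  # ~$133M - an up round
    print(prior_round, wd_flat, wd_up)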

Hyperconverged leapfrog: Dell EMC borg overtakes Nutanix

DeepStorage

Re: Do some homework

Oh, so it's time for insults? No, independent analyst isn't code for unemployed. I have never been an employee of any storage vendor (I did own a company that made disk subsystems in the '80s). Companies, including your employer, pay me for my opinions and services.

I put my real name on everything I post. I have a reputation to maintain. If you haven't heard of me please ask Dheeraj who I am, and apologize to him again for me for my brain-fart at not recognizing him at Dell EMC World. He knows who I am, and what I do.

Since your last post I've texted or spoken to several vSAN customers running multiple clusters of over eight nodes, proving your statement that it's not possible wrong.

Spreading falsehoods and FUD by folks like you is not good for Nutanix. Cut it out and behave.

DeepStorage

Do some homework

You say you're a Nutanix employee and therefore have a responsibility to check your facts about other vendors.

You say "Nutanix is the only HCI player that allows more than 8 nodes in a cluster: All other solutions run exclusively on VMware and are, for some odd reason. limited to 8 nodes. Unsure why. " The spec sheets say different. That you haven't seen a 16 node VSAN cluster is very different than you can't build one.

You then claim "Oh, all those products referenced from EMC as being HCI are not; they are all converged, but not hyperconverged" as if IDC had included Vblock and VxBlock in their numbers. The article clearly lists VxRail (for sure HCI), VxRack (arguably disaggregated SDS for some customers) and vSAN Ready Nodes (also for sure HCI).

Blow your FUD elsewhere.

I'm an independent analyst who's worked for both Dell/EMC and Nutanix. I've got no dog in this fight.

Silver Lake and Broadcom bid $18bn for Tosh memory biz

DeepStorage

Toshiba gets the 2 trillion yen

Toshiba, the conglomerate, is in a deep hole mostly because of losses at Westinghouse. Silver Lake would pay Toshiba (conglomerate) 2 trillion yen cash to plug that hole.

Silver Lake and Broadcom would create a joint venture and that joint venture would borrow money to pay Toshiba and get the memory division in return. It's that joint venture, not Toshiba, that would end up with the debt.

SSDs in the enterprise: It's about more than just speed

DeepStorage

Power is small beans

A 10TB hard drive (cost $200) uses <200kWh of power per year. At $0.20/kWh (the high end of US cost) and a PUE of 2 (1kWh of AC for each kWh of gear) that's $80/year of power. The equivalent (capacity, enterprise feature set) SSD, Samsung's PM863 3.84TB, costs $2,000. Even if it uses no power at all the payback is MANY years.
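A rough payback sketch using those numbers (assumed street prices, not quotes):

    hdd_kwh_per_year = 200            # 10TB hard drive, upper bound
    cost_per_kwh = 0.20               # high end of US power cost
    pue = 2.0                         # 1kWh of AC for each kWh of gear
    hdd_power_cost = hdd_kwh_per_year * cost_per_kwh * pue   # $80/year

    ssd_premium = 2000 - 200          # PM863 3.84TB vs 10TB HDD purchase price
    print(ssd_premium / hdd_power_cost)   # 22.5 years to pay back, even if the SSD drew zero power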

OpenIO wants to turn your spinning rust into object storage nodes

DeepStorage

Well, yes, 10,000 little controllers can be cheaper.

A Kinetic drive is a key-value store. Disk drives already have ARM processors to run the ECC, LBA remapping, etc. The incremental cost of a little more memory and a couple more processors is a few dollars, compared to $1,000 for a server/controller for every 12-24 drives.

There's no reason you couldn't use a small KEY (say 20 bytes for SHA-1) and a 4KB value, and build a SolidFire-like CAS back end for any application.
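A minimal sketch of that idea, with a dict standing in for the drive's key-value interface (illustrative only, not SolidFire's actual layout):

    import hashlib

    BLOCK_SIZE = 4096
    drive = {}   # stand-in for a Kinetic-style key-value drive

    def put_block(data):
        assert len(data) == BLOCK_SIZE
        key = hashlib.sha1(data).digest()   # 20-byte content-derived key
        drive.setdefault(key, data)         # identical blocks dedupe to one copy
        return key                          # the caller keeps the key in its metadata

    def get_block(key):
        return drive[key]

    key = put_block(b"A" * BLOCK_SIZE)
    assert get_block(key) == b"A" * BLOCK_SIZE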

The TPC-C/SPC-1 storage benchmarks are screwed. You know what we need?

DeepStorage

Re: Better Vendors?

The better vendors hire DeepStorage to test their gear and write reports.

I couldn't resist; it was a great straight line.

- Howard

DeepStorage

Re: The problem is that there's a moving target

Agreed, and that's the reason I decided to take on this project. We have to build benchmarks for current systems and we have to continue to advance the state of the art in storage testing to keep up with the state of the art in storage.

The problem is that we created incredibly simplified workloads (4K, 60/40 read/write, 100% random) which worked well enough (±25-50%) in the days of disk arrays with small RAM caches but really broke down as we added flash caches and data reduction.

It's also important to note that The Other Other Operation isn't about vendor reports with hero numbers, although I'm sure we'll provide a way to do that and then vendors will find a way to game it, but about the cookbook for users to run their own POCs.

The cookbook not only includes the code, and instructions on how to run the code, but also instructions on how to set things up realistically and how to interpret the results. All designed to make gaming the system harder and less valuable.

- Howard

DeepStorage

Re: More nonsense trash-talking on the SPC/TPC benchmarks...

When I said "Vendors publish test reports using data sets smaller than the cache" I was neither referring specifically to SPC nor to a RAM cache.

It's common for vendors to publish reports, or even "How to run a POC" manuals for hybrid flash/HDD systems that test with workloads smaller than the flash cache in the system. Those are the shenanigans I'm calling out.

- Howard

DeepStorage

Sadly true

But if there are 14 bad attempts we still have to create number 15.

DeepStorage

Nothing beats real applications - BUT

Testing with your real applications is harder to do than it is to say. Sure, the F500 folks have HPE LoadRunner scripts that pretend to be their users and customers accessing the application via their web interfaces, BUT in 30 years of consulting to mid-market companies (my clients were typically $500 million-1.5 billion in revenue) I've never seen a client that could generate 125% of their peak day's load against a dev/test copy of their application, let alone an application they were only planning to install.

If you're a mid-market customer a vendor will be happy to lend you kit for 30-60 days, but since you're busy you can only spend 10 person-days on the POC. Even though you're planning on spending $100,000 or more on a storage system you just don't have the time, or skills, to make your production applications (all of them if this is your storage for VMs) generate the load they do when flesh and blood users are running them.

So we're aiming at giving those people a way to test storage. Our synthetic workloads will be a decent first approximation of real applications with mixed I/O sizes, realistic hotspot sizes and realistic data reducibility.

To address your other concerns, the cookbook will include measuring how performance is affected when faults are introduced. We're even planning to force a drive/node rebuild.

This is very much NOT an SPC-like org where the goal of the whole project is to create a hero number.

- Howard

PS: If you want to be kept up to date on our progress please leave your contact info at: http://www.theotherotherop.org/wordpress/contact-us/

PPS: HPE is a charter member of The Other Other Operation so someone there sees some value.

HPE StoreVirtual gets low-cost ARM-powered variant

DeepStorage

Re: Defeats the purpose of StoreVirtual

Scale-out doesn't require a single controller model. EqualLogic, Nimble and XtremIO all scale out using a dual-controller brick. Using 2 controllers per brick makes the system much more media efficient. To be able to survive a node/controller failure and either a drive failure or a read error on the rebuild, a dual-controller brick can use RAID-6 with 20% overhead. By comparison a shared-nothing system like LeftHand or any of the HCI guys needs 3-way replication or network RAID-6-like erasure coding.

Mirroring SSDs 3 ways is very expensive, while distributed EC is CPU intensive, creates a lot of network I/O and has a negative impact on latency (the system must wait for a minimum of 5 ACKs from nodes (4D+1P) before it can ACK to the application).
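To put some rough numbers on that trade-off (an idealized sketch that ignores spares, metadata and rebuild reservations):

    # Usable fraction of raw capacity for the protection schemes discussed above
    def usable(data_units, total_units):
        return data_units / total_units

    three_way_mirror = usable(1, 3)    # ~33% usable capacity
    raid6_brick      = usable(8, 10)   # e.g. 8D+2P inside a dual-controller brick, 80% usable
    network_4d1p     = usable(4, 5)    # 4D+1P spread across nodes: 80% usable,
                                       # but every write waits on 5 network ACKs
    print(three_way_mirror, raid6_brick, network_4d1p)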

The 3200 can be a member of a scale-out cluster with other 3200s, 4335s or VSAs (in design; the current software version may have limitations) so why is it not scale-out?

Seagate's flash advice: Don't buy the foundry when you can get the chips for cheap

DeepStorage

Good to buy in a buyer's market, when there's a shortage...

Given that flash demand has failed to materialize as fast as expected, and that the increase in capacity from 3D is just coming to market, Seagate will be in the driver's seat with the flash foundries for the next 18-30 months. However, when demand catches up to supply and the next shortage hits I'd rather be WD, with assured supply via partner Toshiba, than Seagate chasing Micron and SK Hynix while Samsung and Intel take all the spare chips for in-house SSDs.

I know you were joking, but the Kinetic SSD is very interesting. Remember XtremIO and SolidFire have an object back end; 60TB SSDs as key-value stores could be interesting.

- Howard

DataCore drops SPC-1 bombshell

DeepStorage

No huge RAM cache with HCI

HCI systems most definitely don't have huge RAM caches for two reasons:

1 - RAM is expensive in HCI. The Nutanix storage VM runs in 16-32GB of RAM. If they demanded 128GB for a big cache it would mean they could run fewer VMs on each host. Most VMware environments run out of RAM before CPU. Since VMware and Windows Datacenter Edition (to license Windows guests) cost $15K or so per host MSRP, fewer VMs per host makes the whole solution much more expensive.

2 - Power outages. If an HCI system had even 4GB of write cache per node when power failed to the system (and yes, I've seen several data-center-wide power failures; generators don't always start like they're supposed to), all that data would be lost, leaving the remaining data corrupt. After all, if you've written 100GB of data but haven't updated the file system metadata, when the system comes back the 100GB of data will be inaccessible.

Storage appliances address this by using NVRAM that detects the power loss and flushes the data to flash using a little battery or capacitor power embedded in the system. If VMware had a UPS monitoring service an external UPS could tell the HCI system that power was failing and have it flush the write cache like an XtremIO does. Unfortunately, unlike Windows or Linux, vSphere has no way to do this.

Tintri is great. But is VM-aware storage still what customers want?

DeepStorage

It's about granularity

Tintri's secret sauce is that they provide the data management services one VM at a time. vROPS is just an automation platform, it doesn't change how the underlying storage works.

If you have multiple VMs in a single datastore the storage sees one volume. It can therefore only snapshot or replicate ALL the VMs in that volume or NONE of the VMs in that volume and since the timing on application consistent snapshots is very tight those snapshots will be at best crash consistent.

You could tell vROPS to create a new datastore for each VM but you'll soon hit the 255 SAN logins limit.

vVols is a solution but it's still evolving and doesn't support replication among other things.

Storage vendors that don’t look like storage vendors any more

DeepStorage

SolidFire Capacity

No, you pay for storage PROVISIONED to hosts, and once you've bought say 100TB of capacity you can provision 100TB to various hosts. It doesn't matter how efficient the data protection or data reduction is, you're paying for capacity your hosts can write to. If you don't thin-provision 2TB for every VM 'cause you're lazy, it's simple.

When it comes time to upgrade the hardware you buy hardware and transfer the license to the new hardware. SF says they'll sell the hardware at a minimal markup over their cost (they were discussing 10%). Since the hardware is standard servers and SSDs (plus the NVRAM device), if you can buy Dell servers cheaper than SolidFire wants to sell them to you, go ahead.

Second, you only have to pay for capacity that's visible for hosts to use. If you want long snapshot retention, the capacity the snaps use doesn't count against the capacity you're paying for, so you just need to buy more hardware at about 1/2 the price of a complete appliance, since the other 1/2 is software cost.

HPE adds power-fail-protected NVDIMM tech to servers

DeepStorage

Re: huh?

It's the write cache in a storage system. If you're using these servers to build a scale-out storage system, hyper-converged or not, you can receive new data, write it to the NVDIMM, replicate to the NVDIMM in a second server and ACK. There are APIs you can use to ensure the data's been flushed from CPU cache to the NVDIMM, and is therefore safe from power loss. Total time 250us or so.

If you only have standard DIMMs you can't ACK until the data is someplace safe like a flash drive. Writing data to an SSD, even an NVMe SSD, will push overall latency up to 1-2ms.
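A crude latency-budget comparison of the two write paths (the figures are the rough ones above, not measurements of any particular system):

    nvdimm_path_us = 250    # write to local NVDIMM + replicate to peer NVDIMM + ACK
    nvme_path_us   = 1500   # data must land on an NVMe SSD before the ACK (~1-2ms)
    print(nvme_path_us / nvdimm_path_us)   # roughly 6x higher write-ACK latency without NVDIMMs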

NetApp ain't all that: Flashy figures show HPE left 'em for dust

DeepStorage

Re: Wait!

I don't wonder why Gartner set up a separate AFA MQ. It was in their interest. As long as there is a vendor in the Leaders quadrant who isn't in the Leaders quadrant for "General SAN Arrays" that will pay for reprint rights, Gartner makes an extra buck.

I object to Mr. Unsworth's artificial and customer-hostile definition of an AFA. If the line was "a system shipped without spinning disks" it would be arbitrary, but I'd say within Gartner's powers to define product segments (that is part of an analyst's job). My objection is to the line that says the vendor must refuse to sell a customer spinning disks later, even if the controllers and software could handle them. Analysts shouldn't get in between vendors and customers.

DeepStorage

Meaningless Fiction by Joe Unsworth

Chris,

I don't know why you even report the Gartner numbers and reinforce the stupid, customer-hostile definition of AFA Joe Unsworth has perpetrated on our industry. If you feel it newsworthy, start the article with "as we've previously reported, the Gartner definition of all-flash is distorted by excluding any system that could possibly hold a spinning disk, and some vendors sell most of their all-flash in models that accept that ancient device." Therefore comparisons are meaningless.

I would never use Gartner numbers to say: "Violin Memory is doing better than Fujitsu, HDS and Huawei" knowing that HDS only announced Unsworth-worthy models in the last quarter.

- Howard

Don't take this the wrong way, Pure Storage – are you the next NetApp?

DeepStorage

NetApp has NOTHING like XtremIO

Whiptail as a product was a failure. Cisco killed it off. Whiptail as a company was a success, Dan Crain convinced the folks at Cisco to pay more than $400 million for it and the investors all went away happy.

Dell-EMC merger could leave Lenovo out in the cold – analysts

DeepStorage

S2200/3200=Dot Hill

Chris,

The S2200/3200 are Dot Hill (now Seagate) OEM units, much like HP's low-end MSA for SAN. Once they're finished absorbing the IBM server division, Lenovo could be a buyer in the storage biz.

Nutanix digs itself into a hole ... and refuses to drop the shovel

DeepStorage

Independence comes at a cost

It's not just EULAs. Early in my career I wrote a scathing review of a really lousy modem in PC Magazine. The company sued for libel. Of course the truth is an absolute defense in libel cases, as long as you have $50,000 US to pay your lawyer.

The good news is that Bill Ziff paid the lawyers and indemnified me, but it was a wake-up call.

We've had projects go south, usually because the vendor asked for more than their system can do. What happens then depends on the vendor. Usually it means redefining the project to be a consulting job on how they can fix their gear, and nothing gets published.

So what exactly sits behind Google’s Nearline storage service?

DeepStorage

Old gear won't do

While the shift-the-old-servers-to-the-DR-site model works for SMBs that have a fixed IT staff, and therefore fixed operational costs, it doesn't work at scale. Just the extra data center space, power and cooling for servers with 1TB hard drives of 2011 vs 8TB hard drives today would make buying new kit worthwhile. Add in that those higher failure rates mean more guys in the data center doing break/fix and occasionally breaking something else, and old gear isn't cost-effective.

The space/power/cooling is why they turn over gear faster than average in the first place.

NetApp cackles as cheaper FlashRay lurches out of the door

DeepStorage

Can't Run SPC-1 with Dedupe

Not everyone agrees with SPC-1. The current version of the benchmark writes data in the places and I/O sizes that simulate a SQL OLTP application pretty well, but it always writes the same data, and SPC won't certify a test run if the storage system has inline deduplication turned on because the data would dedupe down to nothing.

The problem is that dedupe is built deep into the architecture of, say, SolidFire and XtremIO; they can't turn dedupe off because they use the hash to determine where the data is stored.
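A minimal sketch of why that is, using content-derived placement (illustrative only, not SolidFire's or XtremIO's actual layout):

    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]

    def place(block):
        digest = hashlib.sha256(block).digest()
        return NODES[digest[0] % len(NODES)], digest   # the content hash IS the placement key

    # A benchmark that writes the same data everywhere maps to one key in one place,
    # so it dedupes down to nothing.
    print(place(b"\x00" * 4096))
    print(place(b"\x00" * 4096))   # same node, same key as the first write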

- Howard Marks

No, not the dope dealer; the storage analyst at DeepStorage.

Disk is dead, screeches Violin – and here's how it might happen

DeepStorage

Re: The RIGHT format won out. Unlike the VHS/Betamax argument

No, the reason VHS won is that Sony's smaller Betamax tape held 90 minutes at top quality while a VHS tape held 120 minutes. Most feature films fit on one VHS tape, a much better deal for the studios and video stores, vs 2 for Beta.

Sony was also quite tight with Betamax licenses while JVC let anyone that paid the fee make VHS so the RCA VHS deck was cheaper than the Sony Betamax at Macy's.

People incorrectly use VHS/Beta as a victory of marketing over the best product. It's actually a lesson that different constituencies have different definitions of best. The studios valued run time over picture quality; the customers valued low price and familiar brands.

Sony bought up studios in the 80s and 90s for the same reason Comcast bought NBC more recently, controlling both content and distribution is profitable.

All-flash array reports aren't all about all-flash arrays, rages Gartner

DeepStorage

Re: Call them what you want

First for those that don't know I'm the Howard Marks that Chris quoted in the article.

The real point is the "no hard drives ever" rule does nothing but screw the customer and complicate the price list. If you have a 7450 you CAN'T ever put a shelf of hard drives on it, even if you buy a new AFA and want to use it for some other purpose. Joe Unsworth has demanded that HP screw you over.

No one should have that power.

- Howard

DeepStorage

Call them what you want

The problem that Mr. Unsworth did not address is how his arbitrary separation between a solid state array and an array using solid state memory for storage screws over customers. Because of his rules, if I buy a 3Par 7450 and 2 years later decide I want to add a shelf or two of spinning disks, HP has to refuse to sell them to me or risk having the 3Par thrown out of the SSA class.

Flash dead end is deferred by TLC and 3D

DeepStorage

It's more complicated, and better than you think

TLC has two major problems, one of which is described in the article: reading the 8 voltage levels can be difficult, especially if any electron leakage occurs. The other problem with TLC is the extended write time. SLC has two states, charged and empty. To program it you force some charge in and voilà, the state is changed.

MLC, and TLC, use a multi-pass programming process, they force in some charge, check the level and then force in more charge to get the state they want. Since TLC is so picky the process may take more than 2 passes and therefore take longer.

Today's flash chips can take some of their capacity and use it as SLC and the rest as MLC or TLC. A flash controller can use the SLC portion to receive new writes and then transfer the data to the TLC portion during garbage collection. This reduces the wear on the TLC significantly.
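A toy model of that SLC-as-write-cache scheme (purely illustrative; real controllers do this in firmware with wear-leveling and mapping tables):

    slc_cache = []    # small region programmed in a single fast pass
    tlc_store = []    # dense region with slow multi-pass programming

    def host_write(page):
        slc_cache.append(page)     # new writes land in SLC, off the slow TLC path

    def garbage_collect():
        # During GC the controller folds data from SLC into TLC in the background,
        # keeping the multi-pass TLC programming out of the host's write latency.
        while slc_cache:
            tlc_store.append(slc_cache.pop(0))

    host_write(b"new data")
    garbage_collect()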

Traditional enterprise workloads on an all-flash array? WHY WOULD I BOTHER?

DeepStorage

Budgets are binary

Martin,

I've noticed customers buying AFAs simply because an AFA fits within their budget. They're pretty sure a hybrid could meet their needs but since they can afford the shiny new AFA they buy it so they don't have to worry if the hybrid would have been fast enough.

After all a storage guy could get fired for poor performance, but not for staying within budget, even if they could have saved money.

Listen up, VMware admins: If the array won't support it, VVOL won't help you...

DeepStorage

Or go Hyper, converged that is

While ironbush might be the only storage array to do per-VM services, some of the HC kit, like GridStore for Hyper-V, can provide something similar.

Chief marketeer walks at Violin Memory

DeepStorage

Eric just tweeted it's IBM

Frankly, I was shocked when he left the astounding amount EMC was paying him for Violin, which at the time looked like it was past its sell-by date and damaged by Don Basile's running the company as if the IPO was the finish line. At Violin he got to live and work in California as opposed to commuting to MA every week, and of course if they had turned Violin around I'm sure Eric would have gotten a sizeable reward of some type.

Now he's off to IBM for an "amazing" amount of money.

Howard

EMC: Kerr-ching! $430m XtremIO gulp's paying off... Hello, $1bn a year

DeepStorage

No way to know

Unfortunately, Chris, there's no way to know how many XtremIO deals fall into each of the following:

1 - We bleed EMC and would buy whatever they are selling. VNX AFA or VMAX full of flash if XIO not available.

2 - We use a lot of EMC kit and were looking at Pure/SF but EMC sold us XIO at deep discount to keep us from seeing how good alternatives are.

3 - We wouldn't have bought EMC if XIO wasn't there to buy

LSI driver bug is breaking VSANs, endangering data

DeepStorage

It sure is VMware's fault. They certified the driver

The strength of EVO:RAIL is supposed to be that there's only one hardware config and that VMware's tested the crap out of that config.

Note that the only SAS HBAs, as opposed to RAID controllers, on the VSAN HCL are LSI-based, even if, like the HP H200, they're OEM versions.

If you tell customers not to buy RAID controllers, only have one non-RAID controller driver supported, and then have problems, it's as much your fault as the vendor that wrote the controller driver. VMware's testing needs a major upgrade.

Dell PowerEdge hooks up with SanDisk, stuffs servers with DAS Cache

DeepStorage

No Compellent

It's the SanDisk/FlashSoft stuff. Any back end. The Dell software, which also does write-back caching, requires, and interacts with, a Compellent.

IT blokes: would you say that lewd comment to a man? Then don't say it to a woman

DeepStorage

We can stop this. Gentlemen say "I think you need to apologize to the lady"

I do occasionally. I promise myself, the women in my life, and our industry to say "I think you need to apologize to the lady" more often.

BTW: In a small step forward the ladies working the BlueCat booth weren't in their previous scanty costumes but real clothes.

Storage faithful tremble as Gartner mages prep flashy array quadrant

DeepStorage

Why AFAs are small

Yes, Pure dials back their dedupe (they don't really turn it off but limit how deep they search the hash tables) when the system is heavily loaded. Why care? They post-process it later. The only drawback is if you have a system that's 90+% full and 90+% loaded in terms of performance. In that edge case you could run out of space as the system isn't deduping efficiently. Note that edge case is also really bad planning: having the system full and busy for an extended period.

Today's AFAs are relatively small for two reasons:

1 - Almost all of these systems use a pair of 2-socket Xeon servers as the controllers. Processing a million IOPS is the limit of the controller. A vendor could hang another shelf of SSDs on the system but that would add capacity without performance, which the AFA customer doesn't want.

2 - Limited customer demand. At the current cost of AFA capacity there just aren't that many users that need petabytes of storage and are willing to pay that price. Most data is cold enough to store on disk, and that is of course a lot cheaper.

- Howard

Note: Pure is a client.

Aw, SNAP. It's too late, you've already PAID for your storage array

DeepStorage

Test Labs are Expensive to Run/Maintain

Back in the 20th Century I wrote for PC Magazine and Network Computing when they could afford to spend well into six figures of dollars a year on maintaining their labs. I also spent 25 years consulting to midmarket companies that regularly spent $50,000 a year on storage. None could afford a lab at all.

I run my lab on a shoestring, getting donations of gear from vendors and buying other gear used from liquidators on eBay. The lab still costs over $50,000 a year to keep up and running: rent on space for 5 racks' worth of gear, $15,000 for air conditioning installation and electricians, and of course a constant flow of new gear.

I'm currently running a 250 user VDI benchmark, that takes a half dozen servers with 96GB of RAM each.

Yes, I do reviews for vendors, and yes, when it's turned out their product didn't work as expected the project was canceled. I count myself lucky that DeepStorage is small enough that I can decide I generally like a product before I even pitch the vendor on testing it.

You can believe everything a reputable analyst, like those Chris mentioned (thanks, Chris), tells you. You can't trust what you only think we say.

I've been lucky that my clients realize that "We'd really like to see feature X that's currently lacking in a future version" adds to the credibility when we say the product does Y and Z well.

- Howard

Fujitsu punts all-flash Eternus box... just before NetApp pulls a FASt one

DeepStorage

SPC won't certify arrays with Dedupe

which is why Pure, SolidFire and XtremIO don't have SPC results. The data they use is too compressible/dedupable, so those systems would report results better than they would deliver with real data.

Violin Memory axes 100 staff, mulls fate of PCIe flash card biz

DeepStorage

Now that Toshiba owns OCZ they don't need Violin's PCIe SSDs.

Even if OCZ's current Z-Drive cards aren't up to Toshiba's standards for enterprise PCIe SSDs, a new model from OCZ, fully in house, will be at least as good for Toshiba as buying from Violin, which they own just part of.

How did hybrid flashy bods Nimble Storage's IPO go so smoothly?

DeepStorage

Violin is the leader in AFA only because of old data

The problem with calling Violin the leader in the AFA market is the data that statement is based on is 2012 sales. Most of the competitors including Pure and Solidfire shipped under 10 units in 2012 and didn't start really selling their products until this year.

Now that there are products that deliver 90% of the performance of a Violin AND have data services like snapshots and replication, from not only Pure but EMC, HP, HDS and Dell, why would the vast majority of customers buy Violins?

It's a BYO-slingshot party in the Silicon Valley of Elah

DeepStorage

Great job of multitasking Chris

Chris,

I think this is a great job of writing something useful while in church. Unfortunately, as we're not allowed to use computers on the Sabbath, my Rabbi would smack me (after sundown of course) were I to attempt such a thing.

I think I'm going to have to have a go at the all flash array as disruptive technology myself soon.

See you at EMC World.

- Howard

New Tosh drive can wipe out 4TB 'near instantaneously'

DeepStorage

Re: 5 better then 3 ?

In the 1990s drive vendors switched from a dedicated servo surface to servo information embedded between the data sectors on each surface. As a result the drive adjusts to put the active head in the middle of each track based on the servo positioning data embedded in the data surface for that head.

This allowed tighter track spacing but eliminated the ability to use heads in parallel. When the drive switches heads it has to reposition to align with that head's data. On many drives moving track to track is actually faster than head to head.