* Posts by Rob Isrob

76 publicly visible posts • joined 7 Sep 2008


This flash is too slow. This DRAM is too small. This bowl of data is just right. It's SCM flavored

Rob Isrob

Re: This is ridiculous vendor pimping nonsense

I don't know... I'll take a stab at it. First up is cost. Keep in mind "hot" data makes up less than 10% of data, so in large Enterprise environments with 300 or 1,000 servers it's impractical to stick Optane on the ones that "need" it. Secondly, latency over NVMe is less than 40 us in shared storage. Not shabby. Then there's the whole aspect of failure domain. Move that out of the Enterprise SAN to multiple hosts and... congrats, you just increased management and points of failure. Those are probably some of the reasons Pure and others realize Optane is a good Enterprise SAN fit.

Shingled-minded Western Digital insists its latest hard drive sets disk capacity record

Rob Isrob

Re: Reliability

You need a modern RAID scheme a la IBM's DRAID, among others: slices of disks make up a raidset, not the entire disk. That ship has sailed.

Hybrid cloud's no fun, can't get nothing done – will Rubrik make you whole again?

Rob Isrob

Let me know when this is a thing

"Alta also supports IBM AIX and Oracle Solaris environments and the company claimed this makes it easier for enterprises using these mature, old-style server operating systems to standardise on Rubrik and move their workloads to the cloud."

Just exactly which "cloud" does AIX get moved to, Skytap? I recall IBM will run your AIX instances in their cloud, but it isn't very cloud-like. Plus the whole Power thing... last I checked, the cloud was a bunch of Intel boxes. Wow... and just looking at the Rubrik wording on their site, they make it seem like you are going to move an AIX instance to AWS, Azure or GCP. That can't be right.

Thanks for being so kind, btw... "mature, old-style" is somewhat more palatable than other monikers.

Dell results: Well done, ice cream for everyone! Er, not you, storage

Rob Isrob

Re: Servers and VMware up Storage down

Not just VMware/vSAN. Others are up in external storage; see 3rd quarter results from IDC here: https://www.idc.com/getdoc.jsp?containerId=prUS43274017. What we've seen is their heritage VMAX/VNX/Unity segment being punished... you can ferret out that info here and there. Notice in 3Q17 NetApp and Others are up quite a bit in External Storage. I expect continued growth in Others. Ponder why that might be, and why you might see a bunch of drag on heritage for a number of players.

In 2018, how are the big storage 8 handling the industry's challenges?

Rob Isrob

Re: Arghhhh!

They'll IPO. They are actually making money and will go public at some point. But compare/contrast with Pure. They are still losing money and yet viewed favorably from a Wall Street perspective. Go figure. I guess the idea is they eventually make money. Their current raw material cost, R&D, and marketing don't leave any headroom, or they would be making a boatload. Likewise, look at Tintri. They apparently are losing money on sales. I guess their strategy is to make it up in volume.

Bit price drop + compute-storage closeness = enterprise flash use boom

Rob Isrob

Charts as a test

We have seen that second chart on numerous occasions. I hate it. Who in their right mind uses such a color scheme? It looks like a test for color blindness... I mean seriously, you can hardly tell the difference between some of those shades of blue AND you have to blow it up, drop it in a tool and zoom in to make sure you see which box lines up with which bar. As me ol' friend Tony would say: Pity-full..

Array-pusher Infinidat boasts of sales hike, says it has petabyte-pushin pay pals

Rob Isrob


"other high-end array providers Dell, HDS and HPE and, maybe, NetApp cast covetous eyes over Yanai’s latest storage project?"

Eh? Dell aka EMC? Skate rentals will be flying off the shelf in Hades when that happens. Is it beer 430 already in Blighty? What you been smokin'? Etc.

Salesforce sacks two top security engineers for their DEF CON talk

Rob Isrob

That's what she said . . .

Bada-bing... Meanwhile, Google Trumpets: "You're fired!"

Apeiron demos 'rocket ship' Intel Optane array tech

Rob Isrob

Re: The storage benchmark is dead. Long live the benchmark!

Meanwhile... where folks buy systems with a set budget and are careful with RFP reviews, etc., I would think once we get to this point (and it is coming), there will still be demarcations. You won't be putting petabyte-level configs on Optane for a while. The next-gen Optane + NVMe IOPS per $ will be high, but you won't get enough of it, etc. But yeah, SPC-1 and SPC-2 are pretty well trashed, eh? That, or they will have to sort results based on tech, and that would be ugly.

Infinidat puts array to the test, says it 'wrecks' Pure and EMC systems

Rob Isrob

Storage Benchmarks - in general

Back in the day, some of us remember TPC-C... and I fondly recall the perf debates with our nemesis "The British Champion"... but I digress. TPC-C (yeah, more than storage, but with a huge storage component) was pretty much gamed, and the follow-on TPC-D benchmark tried (did?) to get around that. Later SPC came along... same thing. Much to their credit, they keep tweaking and now we have SPC-1 v3, maybe a decent benchmark? I don't know. I gave up following all that - as others point out, you really want to POC this stuff. I think that's the whole point of muddying the waters on purpose here... marketing. POC it, it will do decent, but the shocking thing will be that it is vastly more affordable. Shoot, Infinibox lists for about a buck a GB... there is a reason a lot of AFA folks are running in circles and shouting "Squirrel!"

Product marketing veep leaves Infinidat

Rob Isrob

Re: Customers

I trust they are profitable based on Moshe's reputation alone. We might guess they are already at 3 and sustaining profitability.


The start-up is "profitable since last quarter," according to the CEO.

It's COTS, and it's not hard to come pretty close to pricing out all the hardware components and see they are selling at a substantial profit. Yeah, they have to run a business so this isn't an easy thing, and that's why they pay Wall Street analysts a lot of money to figure these things out - that ain't me.

Rob Isrob

Re: Slowly Disappearing

They've barely gotten started. They've recently added (and will soon ship) compression. That brings their "effective" cost per GB down into the $0.50/GB range. They are working on dedupe (google it). With an ingest rate of 10 GB/sec it will make for an even better primary backup target. If nothing else, they should dominate that space... never mind the Enterprise space you intimate is "going away." They could hang their hat on backup growth alone. And yes, there are a number of larger players that started using them as backup targets as they kick the tires. Large Enterprises don't often swap out their storage vendor of choice on a whim. The big-player groupthink, trinkety trinkets obtained, etc. slow that process down considerably, even with the CFOs beating on them. B^)

Rob Isrob

Re: Customers

Quite a few. Are you being snarky or just lazy? Asking for a friend, thanks! But yeah... they have a number of large accounts and use a few of them as references. It's always good to know a Fortune 100 is using your kit and is more than happy. Punishing for rivals, too. I mean, when push comes to shove and pencils are sharpened and the Infinidat solution is half the cost, there is a lot of screaming and shouting from fanboys of the incumbent, followed by toys tossed from the pram...

Seagate plays disk cricket with a 12TB Enterprise Capacity drive spinner

Rob Isrob

Here ... let me google that for you.


AMD does an Italian job on Intel, unveils 32-core, 64-thread 'Naples' CPU

Rob Isrob

Re: @Titter

I've eaten several times at that original Pizzeria Uno. Other than the 45 minute wait (more reason to get another cool one), it is great. Deep dish pizza isn't for everyone, especially those with weak constitutions - so to speak. I can see indigestion being the result.

Tuesday's AWS S3-izure exposes Amazon-sized internet bottleneck

Rob Isrob

The value prop takes a hit

The problem is in the SMB space. If they have to live in two "replicated" regions, suddenly the reason for going to the cloud in the first place takes a financial hit. Might as well stay local (in many cases, certainly not all) and continue to send those tapes offsite and hope for the best. The smarmy fella who posted in another El Reg comment, "We just flipped from Northern VA to Frankfurt, took us all of 3 minutes" - well, goody for you all. They are large Enterprise and designed appropriately - duh. Same for Netflix: after an embarrassing AWS-induced outage years ago, they spread the love across regions, no problem. The AWS sphincters tightened up quite a bit in the SMB space after yesterday.

Australian Tax Office's HPE SAN failed twice in slightly different ways

Rob Isrob

Re: Web based training....

Maybe my all time favorite "Zinc Whiskers!"

Remember, kids, when eBay would go titsup on occasion and Sun tried to blame it on Zinc Whiskers when it wasn't? http://www.forbes.com/forbes/2000/1113/6613068a.html

"When the crashes began over a year ago, Sun believed the problem was caused not by its boxes but by some flaw in customer data centers." [Zinc Whiskers]

You'll have to dig elsewhere to see Sun peddling Zinc Whiskers. All part of ancient history now, isn't it?

XIV goes way of the dinosaurs as IBM nixes fourth-gen storage array

Rob Isrob


ROW = redirect on write

Makes sense. COW is kinda stuffed nowadays when you open up the floodgates with NVMe, all-flash, etc. Most will have to go to redirect-on-write at some point, eh (obviously... barring architectures that are newer or get around it)? You run out of internal resources if you continue down the COW path as IO throughput balloons:


"Consider a copy-on-write system, which copies any blocks before they are overwritten with new information (i.e. it copies on writes). In other words, if a block in a protected entity is to be modified, the system will copy that block to a separate snapshot area before it is overwritten with the new information. This approach requires three I/O operations for each write: one read and two writes"
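The three-I/Os-per-write point from the quote can be sketched with a toy I/O counter. This is illustrative only - block names and the snapshot layout are made up, not any vendor's actual implementation:

```python
# Toy I/O counter contrasting copy-on-write (COW) with redirect-on-write (ROW)
# snapshot schemes, per the quote above. Purely illustrative.

def cow_overwrite(io_log, block):
    """COW: preserve the old block before overwriting it in place."""
    io_log.append(("read", block))             # 1. read the current contents
    io_log.append(("write", "snapshot_area"))  # 2. copy them to the snapshot area
    io_log.append(("write", block))            # 3. overwrite with the new data

def row_overwrite(io_log, block):
    """ROW: write the new data elsewhere and repoint metadata;
    the untouched old block *is* the snapshot."""
    io_log.append(("write", "new_location"))   # single write per overwrite

cow_log, row_log = [], []
cow_overwrite(cow_log, "block42")
row_overwrite(row_log, "block42")
print(len(cow_log), len(row_log))  # 3 1 -> 3x the back-end I/O per overwritten block
```

As throughput balloons, that 3x back-end amplification is exactly the internal-resource drain the quote describes.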

Well, FC-NVMe. Did this lightning-fast protocol just get faster?

Rob Isrob

FCoE is the future. . .

And always will be. . .

Leaner, meaner screamer from Infinidat dreamer

Rob Isrob

shelves and height

In fairness, I think the transfer of info must have been rushed, with not a lot of follow-up questions. I'll bet the 720 drives is accurate, so working back from there is how they got 20 drives per tray (which is incorrect). It appears obvious that there will be 360 disks out front and 360 disks out back in those 1U trays, each of which holds 12 disks. Regarding the 5 or 7 controller heads... with the drives taking up 30U, the UPSes taking 3U, the controllers taking 3U and a 1U service drawer, they have 37U accounted for. The current box is 3U of UPS, 6U of controllers, 32U of disk and a 1U service drawer, 42U in total. The new box appears to have 5U for additional controllers (and maybe other things). Imagine one of the goals would be N+2 on controller heads. The POC is an even better story: "Go ahead and pull cables to two controller shelves." Customer: "We lost a controller." ... "Okay, we'll be out next month to fix it."
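For what it's worth, the rack-unit arithmetic above checks out. This is my reading of the numbers in the article, not vendor specs:

```python
# Back-of-envelope check of the shelf/rack-unit accounting.
# Assumption: 1U trays holding 12 disks in front and 12 in back.

disks_per_tray = 12
tray_us = 30                           # 30U of disk trays
disks = tray_us * disks_per_tray * 2   # front + back
print(disks)                           # 720

new_box = tray_us + 3 + 3 + 1          # disks + UPS + controllers + service drawer
current_box = 32 + 3 + 6 + 1           # disk + UPS + controllers + service drawer
print(new_box, current_box)            # 37 42 -> 5U left over for extra controllers
```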

Rob Isrob

disk trays

Those trays show 3 lights for each row. Each tray looks like it holds 12 disks, so where do you get 20? Are they doing front and back? Those trays would be fairly short. That would be 24 disks x 30U to do 720, which makes more sense; the UPS, management and interface(?) nodes plus controllers would bring them to a standard height or less.

The shoemaker, the array refresh and the VMworld smackdown

Rob Isrob

Re: Its still just a Cache box...again

"re-architect with HPE?"


They state they are COTS. You understand what that allows? This, for example:


"New technologies, such as Intel® 3D XPoint™ and Supermicro NVMe* connectivity solutions, provide new advancements and opportunities. "

They are already kicking, or about to kick, Dell out the door. Wouldn't anyone else in the storage space with a lick of sense that embedded OEM'd Dell be looking to Supermicro or another low-cost tier-1 whitebox vendor?

Infinidat's big iron array gets data scrunching, no-footprint iSCSI

Rob Isrob

Re: sure it's faster

"but at what cost?"

Why not do some digging and find out? You can read that it's about a dollar a GByte:


If their effective capacity goes from 2.8 to 5 PB, that comes close to halving that $1/Gbyte.
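A back-of-envelope check on that halving claim. Holding the list price constant while compression raises effective capacity is my assumption here, and the $1/GB figure is the rough one from above:

```python
# If price stays put while effective capacity grows from 2.8PB to 5PB,
# effective $/GB scales by the capacity ratio. Illustrative numbers only.

list_price = 2.8e6                       # ~$1/GB on 2.8PB usable, i.e. ~$2.8M
per_gb_before = list_price / 2.8e6       # $/GB at 2.8PB effective
per_gb_after = list_price / 5.0e6        # $/GB at 5PB effective
print(round(per_gb_before, 2), round(per_gb_after, 2))  # 1.0 0.56
```

0.56 is indeed "close to halving" the dollar a GByte.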

"Don't doubt that for a minute for many workloads especially ones that fit in the cache."

They have DRAM for the hot data and up to 200 TB for warm data. Listen here if interested, it's mentioned: https://www.youtube.com/watch?v=nTYFF56qdvA

How they promote and demote that is pretty involved, with patents all around it. The 200 TB is just a copy of what lives on disk. And you can bet it isn't any coincidence that random talk about XPoint keeps showing up. It would be pretty slick to speed those warm cache hits up to much faster than 180 microseconds B^)

Rob Isrob

Much better writeup than that other one

Chris - I can see you either listened to the youtube or paid attention. There's another article out there today that isn't even close on some of the details.

Nutanix, yesssss: Fist-pumping breaks out everywhere storage IPO wannabes are found

Rob Isrob

I'll have what he's having

Apologies to Meg Ryan

Upstart big iron storage supplier maintains monolithic momentum

Rob Isrob

Not everyone has Office or Ad revs as Cash Cows

"Eventually we think these three will come out with revised systems that have Infinidat-class seven nines reliability and comparable pricing. Until then, Infinidat CEO Moshe Yanai's forward progress could continue for a good few quarters yet"

Microsoft can have a number of loss leaders, along with Google. There are several other examples about. I've spoken on just what you've written in the above section to several folks. It's not that they can't. The development effort is huge. Think about EMC/Dell re-writing Enginuity again to do what? Create the same patented methods Infinidat is using? Do N+2 on components, maybe... okay. So they hit the 7 9s. What about the nasty cost curves Infinidat introduced? You see IDC and how VMAX is doing - not good. So how do they make it up by selling cheaper systems? I'm not getting THAT idea. The other big iron players are in the same boat. These guys aren't Google/MS and will eventually whack or sell off severely under-performing divisions. It appears to me that Enterprise big iron is in a very nasty spot because of Infinidat. Maybe the hope is that, like the hoped-for demise of Pure, Infinidat goes away and the others can return to the good old days of fatter margins. Maybe I'm missing the obvious.

EMC XtremIO has its quirks but rumours of its death are overblown

Rob Isrob

Time she's a wastin

"Every month that a new XtremIO upgrade isn’t announced will add more grist to the rumour mill – though EMC may announce a major upgrade in the next 30 days and we'll all have to eat some humble pie. Which way would you bet?"

It's mid-July; does anything get announced in August? I don't think so. The EMC/Dell merger is set to close in October. Do you or anyone see an EMC product announcement in September? I don't think so. As commentators on that "XtremIO heading for the bin?" piece point out, the silence is deafening. But okay... XtremIO refresh announced in the next 30 days. Very curious timing, as all of Europe is on holiday, and most of the USA.

AWS Sydney's outage shows the value of a walk in the cloud

Rob Isrob

Re: control so much of what goes on outside their fences?

You can only control so much that goes on outside your fence. There's probably more than one case where DC access went titsup even with multiple telecom providers - which just so happened to be running fibre through the same trench - that the backhoe cut through. I'm thinking Northwest Airlines a number of years ago as one example where just that happened. Google is your friend...

Storage greybeard: DevOps, plagiarism and horrible wrongness

Rob Isrob

Re: Key Differences

> Downvoted for your use of "leverage"

Whatever. Grow up and separate the wheat from the chaff. If I only kept or referred to sources that were spot-on in everything they say, I wouldn't be able to leverage them for much at all.

Rob Isrob

Key Differences

Not sure how far you've come in your reading. But to communicate differences, I've had to do an initial "sell" to help folks understand why they should be doing CM and what it gains them. There are many better explanations of "why" than I could pen (nor care to waste time penning). Here is what I leveraged, and I'm including just a small portion, but you get the drift:


Key pieces:

• Idempotence

• Ease of Dependency Management (versioning)

• Standardized Organization (accepted at an industry level)

• Abstraction to separate server configuration tasks from system level details

• Ability to leverage community knowledge (that is guaranteed to embrace all the above principles)

More on Convergence and Idempotence:


Convergence and idempotence are not Chef-specific. They're generally attributed to configuration management theory, though have use in other fields, notably mathematics.


But tools like this have to exist if you are managing hundreds or thousands of machines/instances.

Scripts don't cut it (beyond a certain point, of course) and we see folks that are tipping: scripts that are no longer sustainable, scaling that fell down.
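For anyone new to the idempotence bullet above, here's a minimal sketch: an operation you can run any number of times and still converge on the same end state. The "ensure line present" resource is hypothetical, written in the spirit of CM tools like Chef or Puppet, not any tool's actual API:

```python
# Idempotence in miniature: re-running the operation changes nothing
# once the system is already in the desired state.

def ensure_line(config: list, line: str) -> list:
    """Converge the config toward containing `line`; no-op if already present."""
    if line not in config:
        config.append(line)
    return config

cfg = ["PermitRootLogin no"]
ensure_line(cfg, "PasswordAuthentication no")
ensure_line(cfg, "PasswordAuthentication no")  # second run is a no-op
print(cfg)  # ['PermitRootLogin no', 'PasswordAuthentication no']
```

Contrast with a naive script that blindly appends: run it twice and you've got duplicate lines, run it across a thousand boxes on different schedules and you've got drift. That's the tipping point where scripts stop scaling.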

EMC Unity or VNX3? You tell me

Rob Isrob

Versus other AFA

I'm missing something, or quite confused, or both. How or why doesn't this totally mess up XtremIO uptake, or where does XtremIO positioning end up? I see how it would be the product they need to compete against Pure and Nimble; this is good. Over-under on XtremIO run-rate pre- and post-Unity, any takers? Percent growth/shrink, thanks. But the EMC marketing for all these products that somewhat or mostly overlap must require holding more than a dozen contradictory and competing lines of thought at the same time. Very painful.

Super-slow RAID rebuilds: Gone in a flash?

Rob Isrob

Re: What about post-RAID?

It is a challenge to manage RAID for many. In fact, I saw an Enterprise storage vendor that sells a 2- or 3-day service to configure the RAID levels on that large whiz-bang frame you just bought. What are you suggesting? Instead of creating RAID at the frame, ZFS RAID? All sorts of options, including triple parity. But you are still "managing RAID." I'll do you one better: now and going forward, either there's no more managing of RAID or you have a handicapped storage offering. A number of solutions just work; you don't have to fiddle with RAID arrays. The aforementioned XtremIO and Pure. Infinidat comes to mind. Does that work for you? I guess a ZFS admin would be rather bummed, because they would want JBODs so they could do the RAIDing. There's no such thing as a JBOD with those three. But back to your post-RAID. Sure... two or three years from now, RAID discussions will be fewer and fewer as most vendors move on from that, except for certain Enterprise storage vendors that face a daunting task of re-writing, or an all-new code base, to move in this same direction.

Rob Isrob

This hardly applies to many AFA

Maybe Nimble, who moved/are moving to triple parity? But several top AFAs have their own schemes; RAID5/6 aren't even part of the discussion. Pure has RAID-3D, "better than dual-parity," and XtremIO has XDP (a ppt you can google and toggle through, 'cause it ain't simple to grasp), able to handle "up to 5 failed SSD per brick." A much better direction on SSD rebuild might have been comparing and contrasting all the major AFAs and how they line up.

Intel's XPoint emperor has no clothes, only soiled diapers

Rob Isrob

Re: Dig a little deeper

"The reported latency of 7us is likely due to PCI overhead and the current controller and might be avoidable in DIMM form factor."

See the SFD9 Intel presentation referenced below. The XPoint media contributes 1 us; PCI and the controller account for the rest.

"Re IOPS: note that the reported IOPS of 78,500 is for queue depth of 1 "

Hard telling how that came about. But if you peruse the SFD9 presentation, you can see where the presenter shows 96,800 IO/sec with a queue depth of 1: https://vimeo.com/159589810

"It is plausible that XPoint has the most advantage over flash for low queue depth applications and in DIMM form factor, and that that advantage dimishes at high queue depth."

That's right. Elsewhere in that presentation he speaks to that; it doesn't pay to go beyond a queue depth of 8, or something like that. You can see in the demo a reference where they are doing nearly 160K 70/30 random read/write IOPS (iirc). They must have cranked the queues all the way to 8... So I'm not sure what your point is about diminishing advantage. Are you envisioning architectural design issues?

Rob Isrob

Roj is spot-on

I had a long-winded reply but realized, why do all the digging? What Intel did was a bit disingenuous. If you go back to the link Chris mentions, they are comparing XPoint to flash - no doubt comparing XPoint DIMMs to "flash" SSD. It should have been comparing 250 ns latency to 80 microsecond latency, which is a 300x or so speed-up. But if you look hither and yon, you see SSDs that deliver read streams at 250 microseconds; there is your 1000x speed-up. Marketing (in my opinion) should have spoken about a 300x speed-up and explicitly mentioned they are comparing XPoint DIMMs to SSD flash.
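The arithmetic, for anyone checking my homework. The latency figures are the rough ones from the post, not Intel's official numbers:

```python
# XPoint DIMM media latency vs SSD flash read latency:
# where the 300x and 1000x speed-up figures come from.

xpoint = 250e-9      # ~250 nanoseconds, XPoint media
fast_ssd = 80e-6     # ~80 microseconds, a decent NVMe flash SSD
slow_ssd = 250e-6    # ~250 microseconds, slower SSDs found hither and yon

print(round(fast_ssd / xpoint))  # 320  -> the "300x or so" speed-up
print(round(slow_ssd / xpoint))  # 1000 -> "there is your 1000x"
```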

Infinidat adds predictive analytics to Infinibox OS. But what's it mean?

Rob Isrob

QoS for ingest over-runs?

They have superior ingest rates. The problem (from my view) is disobedient upstream customers or TAs that think "that's your problem." You've got large backup servers and DB warehouse servers that will and can over-run ports, etc. I would *think* the only reason they want to intro QoS is very large customers with disobedient internal clients, and very large customers asking for this feature. Most folks will look at the QoS radial and be like: "Okay, whatever... don't need this."

Rob Isrob

Re: @Rob

We know about these things. Let's define the delays at each point and how much each contributes. Want to play along with real numbers, or just do hand-waves like we read in this article? My point is "FC delays" is a canard and no one puts actual numbers to it. It's rather silly... really. FC-NVMe has been demoed and is coming, with apparently a 30-40% reduction in transport delays: http://www.theregister.co.uk/2015/12/08/old_school_fibre_channel_gets_new_school_nvme_treatment/. It's good old FC that originates the "FC delay" chatter. But it isn't as if they are ignoring NVMe.

Rob Isrob

FC delays

I get a giggle when I see this, quite often lately:

"Fibre Channel/iSCSI-type network transit delays and provide very much faster access to data by servers."

Riddle me this, Batman: what is the delay in 16 gbit FC (more common now and going forward), and what will the delay in 32 gbit FC be? Google is your friend. The point here is that somebody is passing around a really bad batch of koolaid, in that folks are so concerned with the big-bad FC delays. Yes... you can cheat for this quiz and look at each other's answers; open book, zero-point quiz.

So you wanna build whopping pools of PCIe flash? Say no more, whisper Intel, Facebook

Rob Isrob

Re: Commoditization of everything

"There are days in which I think the next 20 years will see a massive consolidation of power in a very few corporations (the afore-mentioned four being right up there), while everyone else becomes a small supplier, trying to win favor from the goliaths."

Yep... we'll get to the Standard Oil / AT&T stage. Every now and then the gov stepped in and busted things up. With corporate lobbying, not so much. But all is not lost; every now and then a Google comes along to change the landscape. Not everything has been invented. VC gamblers lay down their bets and some are winners, and boy, when they hit do they hit big.

This storage startup dedupes what to do what? How?

Rob Isrob

Re: A question on hashing

And collisions are a non-starter on SHA-256/512, and aren't most newer implementations using something other than SHA-1? The article you link to is a circa-2007 SHA-1 walk-thru.
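A quick birthday-bound estimate of why collisions are a non-starter with SHA-256. The store size below is an illustrative assumption, picked to be generously large:

```python
# Birthday approximation for hash collisions: among n items hashed into a
# b-bit space, P(collision) ~= n^2 / 2^(b+1). Illustrative numbers only.

blocks = 10**15            # a petascale dedupe store of unique blocks
hash_bits = 256            # SHA-256 digest size

p_collision = blocks**2 / 2**(hash_bits + 1)
print(p_collision < 1e-45)  # True: astronomically unlikely
```

Even at 10^15 unique blocks you're somewhere around 10^-48, which is far below the odds of an undetected disk error corrupting the same data.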

Pure next in line to put boot into Dell

Rob Isrob

Pure and others meet Kettle Black!

I've been having internal discussions similar to what is in this post... prior to the big news.

Storage Ed in this other thread says this:


"You dismiss EMC dropping their pants. Their long datacenter dingdong will steamroll Pure and all these Flash startups. They can drop their trousers all day cus Xtremio like Pure is cheap commoditty hardware. The materials cost for a 50TB atray from either vendor is around $80k.. Tops. And EMC can operate with a dingleberry of margin while Pure starves."

What I mention internally is that the same hammer is aimed at EMC. I'm sure EMC just loves the fact that XtremIO wins the VMAX takeout, not "the dreaded competition." A pyrrhic victory. Not sure what it means, other than that being dependent on hardware margins (and software and maintenance thereof) is not a good place to be... now and going forward.

Third time lucky? Plucky upstart Infinidat plonks down monolithic array

Rob Isrob

Re: Calling Nimble Nick and Noisy Nutanix to comment & more

Read the article and elsewhere, Infinidat is doing log structured writes also.

And, as they describe it, virtual RAID groups, with each disk participating in numerous RAID groups; more here:


And no, I wasn't implying that mainframe support is rocket science. It's just that the two have different target audiences. Without mainframe support, is it Enterprise? (Yes, Enterprises run a whole bunch of kit; maybe a fairer description would be that it isn't targeted at the high-end Enterprise.) But yeah, Nimble is interesting. Do you think they'll ever turn that standby controller into an active controller so it too can participate in serving IO?

Rob Isrob

Re: Calling Nimble Nick and Noisy Nutanix to comment & more

Nimble went with a traditional RAID under-pinning, and even went to triple parity in 2.1. The RAID arrays in Infinibox are 14+2, with 64k chunks dispersed as described. Caching appears similar, but there are a number of similar caching schemes with SSD at this point. Nimble has standby controllers. Seriously? That's kind of lame. Infinibox is geared towards the Enterprise, with mainframe support coming soon. Yeah, many Fortune 500s still run mainframes and some have quite a few of them. Regarding fragmentation, I don't think they fear it but embrace it. Writes hit "idle" disks (idle being relative, but with 480 disks, some must be more idle than others) and reads are mostly cache hits.

Data centre disk use is spinning down – Wikibon report

Rob Isrob

What about total PB?

Wait a second...

Some of us are paying attention out here, by the way.

What about this: doesn't HD growth still outpace SSD going forward (see the chart in the link below)?


It's no accident the Wikibon study doesn't touch on total PB on the floor and the SSD<->HD ratio.

Google chips at Amazon's Glacier with Cloud Storage Nearline

Rob Isrob

Nearline vs Glacier

If one of these guys is using tape, we could hope for a price war. At price parity, and as a consumer end-user here, I'm not leaving Glacier. Photos and work docs for when I leave are what I have in the big ice cube. If someone writes a nice interface like Fast Glacier and Google cuts their prices in half, I might consider switching. As for taking data out, small users like me can trickle it out, so I wouldn't pay to do that.

HDS, HP leapfrog NetApp in the land of the MAGIC QUADRANT

Rob Isrob

Automated Tiering

> IBM was the first to introduce sub LUN automated tiering,

Soran and crew created it and introduced it at Compellent in 2005. I'm not seeing anything that talks about Easy Tier existing in 2005 or 2006. It's hard to tell when Easy Tier was introduced via dated google searches, but it looks later than 2006 - is that right?

Kryder's law craps out: Race to uber-cheap storage is over

Rob Isrob

Re: Service Life of HD

As far as I can tell, the ceiling on the service life is mechanical. Everything I read, everything I've been able to google. If you have some hidden knowledge that shows otherwise, please share; I'm much interested, and it would have me re-work my presentation B^).. heh. If there were a 7-year-warrantied hard drive, the big players would use it in the 1000+ drive frames and perhaps extend their bundled warranties out further, making their long-term costs less, ROI better, etc. There are so many reasons a much longer-warrantied drive makes sense, but it isn't here yet - if ever. I believe it is a hard physics/mechanical problem. Again, Charles 9, if you can show us otherwise, much appreciated.

Rob Isrob

Re: Service Life of HD

> But note your own words: "other factors being equal".

The only thing I was trying to do there is "apples to apples" - in other words, a 7200 RPM 2 TB versus the same, a 15K 2.5" 900 GB SAS versus the same, etc. - so some twit didn't run off in a "what about this and that" direction, which invariably happens.

Rob Isrob

Re: Service Life of HD

> IOW, they didn't have a good reason to build a seven-year drive.

Well... this is like the 100-miles-per-gallon carburetor: it doesn't exist, but it's fun to speculate nonetheless. I'd speculate that if vendor A were to deliver a 7-year-warrantied hard drive, they would capture a large segment of the marketplace (other factors being equal). The reason to go beyond 5 years is there and always has been.

Rob Isrob

Re: Service Life of HD

Well, actually, the service life (or warranty - how long it is under warranty, take your pick) is 5 years. The reason manufacturers don't go beyond that is they can't: spinning parts, and the failure rate greatly increases. There's plenty in the GooglePlex that speaks to this. These guys think that 50% of their drives will still be running in year 6:


"Going forward, we'll want our 3/4/5/6TB drives to last longer. My current server box is running 2TB drives, and most of the drives have 4-5 years of spin time already, with no urgency to replace them any time soon."

Maybe because you don't understand the risk? Is everything you have RAID6? Because at some point a UBE/URE may bite you in the ass and a RAID5 rebuild will go belly-up.