Bah!
Thanks for explaining ODM.
Now: WTF is a "hyperscale" data centre? I assume it is people like Google and Amazon, but when I assume ...
Hyperscale data centre spending is driving disk storage system spending higher, but mainstream vendors aren't benefiting because the hyperscalers are buying direct. It's a big change in buyer behaviour. This is the picture shown by IDC's latest Worldwide Quarterly Disk Storage Systems Tracker for the fourth quarter of 2014. The market …
Flash storage is still at least 10x as expensive as same-size disc storage, and it's also fundamentally flawed as a technology because, by comparison, it has a very low limit on write cycles.
SSD has four advantages over disc: power consumption, size, shock/vibration robustness and I/O performance. Those make it great for a laptop, but in a datacenter environment most of them are pretty much irrelevant. The advantages of SSD are not enough to outweigh its disadvantages, and probably never will be until SSD technology changes again.
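For a sense of scale on that cost gap, here's a back-of-envelope $/GB comparison; the drive prices and capacities below are purely illustrative assumptions, not quotes:

```python
# Back-of-envelope $/GB comparison; prices and capacities are illustrative assumptions only.
disk_price_usd, disk_capacity_gb = 150.0, 4000   # hypothetical 4 TB nearline SATA drive
ssd_price_usd, ssd_capacity_gb = 450.0, 1000     # hypothetical 1 TB enterprise SSD

disk_per_gb = disk_price_usd / disk_capacity_gb
ssd_per_gb = ssd_price_usd / ssd_capacity_gb

print(f"disk: ${disk_per_gb:.3f}/GB  ssd: ${ssd_per_gb:.3f}/GB  "
      f"ratio: {ssd_per_gb / disk_per_gb:.0f}x")  # ~12x with these made-up numbers
```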
Something like 98% of Fortune 500 companies are now using flash in the datacenter very successfully. I've worked with some. While you're sitting around waiting for flash to be 'perfected', your DBs and other high-I/O applications are suffering on slow-performing hard drives.
It isn't as though flash drives wear out in a matter of months, and even if they did, the drive would alert you well in advance so it could be replaced. You will never see a flash drive surprise you when it reaches its write limit, unless you don't monitor your servers.
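For what it's worth, here's a minimal sketch of that kind of monitoring, assuming a Linux host with smartmontools installed; the wear attribute names vary by SSD vendor, so the ones listed here are just common examples:

```python
import subprocess

# Wear-related SMART attribute names differ by vendor; these are common examples.
WEAR_ATTRS = ("Media_Wearout_Indicator", "Wear_Leveling_Count", "Percent_Lifetime_Remain")

def check_ssd_wear(device="/dev/sda", threshold=10):
    """Warn when a wear-related SMART attribute's normalised value drops below threshold."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] in WEAR_ATTRS:
            value = int(fields[3])  # normalised value: 100 when new, falls as the drive wears
            if value < threshold:
                print(f"{device}: {fields[1]} down to {value} -- plan a replacement")
            else:
                print(f"{device}: {fields[1]} = {value}, plenty of life left")

if __name__ == "__main__":
    check_ssd_wear("/dev/sda")
```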
Agree completely, but I have to add that spinning rust has considerably more capacity for the price. No problem: DBs use flash and long-term 'slower' storage uses rust. Obviously, flash (and newer tech) will win out, probably sooner rather than later.
From long experience, there's one thing I'm certain of: there's no such thing as too much storage; our demand for it is insatiable.
Yes, exactly! I've seen a bigger increase in DB performance from switching to SSDs than from the previous decade of multicore CPUs, since compute was rarely the bottleneck for DB servers. Writes have always been easy to optimize, since you can segregate logs onto drives striped as wide as necessary for the required throughput, and rely on an array's write cache to deal with the rest.
Reads were always the problem for physical media, so DBAs sometimes went to great lengths to optimize indexes to minimize the number of reads at the cost of storage, CPU and write complexity.
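A toy example of that read-for-write trade-off, using SQLite purely so it runs anywhere (the table and index names are made up): a covering index wide enough to answer the query from the index alone, at the cost of extra storage and extra work on every insert/update.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, "
             "order_date TEXT, total REAL, notes TEXT)")

# A "covering" index: the query below never has to touch the base table,
# trading extra storage and per-write overhead for fewer reads.
conn.execute("CREATE INDEX idx_orders_cust_date_total "
             "ON orders (customer_id, order_date, total)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT order_date, total FROM orders WHERE customer_id = ? ORDER BY order_date",
    (42,),
).fetchall()
for row in plan:
    print(row)  # SQLite reports '... USING COVERING INDEX idx_orders_cust_date_total ...'
```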
I've been seeing articles predicting the death of tape for probably over 20 years now.
I fully expect in 20 years time I'll be seeing articles predicting the death of hard drives.
As long as there's a niche for these technologies to exploit, they will never die. They both have the advantage of being bit-for-bit cheaper than their successors and, in the case of tape, of being very stable in storage for very long periods.
Long live spinning rust!