"not all dead"
Where are those disks and why aren't they in our preferred white box maker's selection!
DO WANT!!
The market for small and fast disk drives is actually growing – rather than shrinking, as flash array vendors are enthusiastically implying. At Western Digital's executive forum event in Vienna today, a person close to WD said: "In our business 15,000rpm drives are not all dead. In the 2.5-inch, 15K enterprise disk drive area …
Flash will undoubtedly feature in data storage solutions in some form going forward, whether it's in the server, in the storage cache, or as part of the storage array itself.
Flash, however, is not going to replace disk, simply because data is being created faster than flash capacity is. So where does it go? Disk and tape. The CMO of Seagate (who of course has an interest) recently commented that in order to store 50% of the data being produced today, flash manufacturers would have to invest $195bn in new factories. That is not going to happen.
So however pretty and attractive flash is right now, and whatever the hysteria surrounding it, disk is here to stay, and so is tape for that matter.
Simple supply and demand logistics, irrespective of price point claims and marketing hype.
Can't see it myself. Denser-packed shelves of 15K FC disk, yes. Flash memory on the shelf, yes.
A shelf of SSD? No way. Not at those prices.
Now, if NetApp supported a tiering system so that key data could be kept on SSD, the next level of data on FC, and the backup data on SATA, I'd be more interested; at the moment, though, it's a juggling act by hand.
The physics of fast drives is better if you keep them small. In particular it's much easier to seek faster if the heads have to span only the width of a 2.5" disk compared to a 3.5" disk. The arms on which the heads are mounted are smaller and so their moment of inertia decreases.
I'm slightly surprised that we haven't seen even faster 2.5" drives yet. They can do 15K in the 3.5" format, so 22K in the 2.5" format should be straightforward: the stress on the disk is no greater, ditto the velocity of the disk at its edge, so the head-flight physics are the same. A bearing technology issue? Seems unlikely; gyros can be spun *much* faster.
My understanding is that current mechanical hard drives actually NEED air, because they rely on air pressure to support the heads at the required height above the platter. As such, I think they would require a pretty major redesign to work in a vacuum, as per the gyroscopes.
The gyro I was thinking of wasn't in an evacuated enclosure. Old tech! If you don't like that example, I could have said that a Dremel tool can do 35K rpm.
A disk couldn't work in a vacuum because the head uses aerodynamic effects to "fly" just above the rotating disk. I read once that they could make them go a lot faster if they were filled with helium rather than air. The seal is probably the problem on that front.
Anyway, the velocity at the rim of a 2.5" disk doing 22K rpm is roughly the same as that at the rim of a 3.5" disk doing 15K rpm. You can buy the latter, so why not the former?
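For what it's worth, a quick back-of-envelope check of that rim-speed comparison (a sketch that assumes full-width platters, which production drives don't always use):

import math

def rim_speed_m_per_s(diameter_inches, rpm):
    """Linear velocity at the platter's outer edge."""
    circumference_m = math.pi * diameter_inches * 0.0254
    return circumference_m * rpm / 60

print(round(rim_speed_m_per_s(3.5, 15000), 1))  # ~69.8 m/s
print(round(rim_speed_m_per_s(2.5, 22000), 1))  # ~73.1 m/s, within ~5% of the 3.5"/15K figure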
I imagine that an SSD will still be called the hard drive. The distinction being made was, and is, between that and removable-media drives. Though perhaps the time isn't far away when it won't be a separately replaceable module at all; it'll be soldered onto the motherboard.
My experience is that many non-technical users are vague about the distinction between memory and disk storage anyway.
It looks like their view may turn out to be the correct one in the long run. No disk, just a range of memory from fast-but-volatile to slower-but-persistent - essentially a grown-up version of the smartphone architecture.
In an environment like that, there's little need for shutting down and booting. Still, I suppose Microsoft (and, in my experience, Android) can be relied on to keep the reboot alive.
I'll second that. Every time my dad thinks he's found a deal on a PC he calls me up to ask about it. He always gives me the drive storage capacity as "memory." Fortunately with the difference in sizes these days, I never have to ask for clarification.
So ... what *IS* this number?
Using SSD as cache in a storage tier pretty much assures it of continual write activity, which could be close to theoretical write speed limits. At that speed, the published manufacturer write limits can be reached in months, not years. I can imagine that replacing an active SSD cache component (whether it's array cache, SSD, PCIe or something else) is going to involve a reduction in IOPS at the very least, if not intrusive failover/downtime.
Does anyone have a reliable source of MTBF for EMC, HDS, >anything else< using FLASH?
SLC is OK. It's rated at a million-plus write cycles. 1M x 256G drive size / 1G bytes per second = 256M seconds to wear it out. Given a decent wear-levelling technology, that's about eight years at a rate somewhat in excess of current drive tech.
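A minimal worked version of that arithmetic, using only the figures quoted above (ideal wear levelling assumed; real drives also suffer write amplification):

CYCLES = 1_000_000           # rated P/E cycles per cell (SLC)
CAPACITY_BYTES = 256e9       # 256 GB drive
WRITE_RATE_BYTES_S = 1e9     # 1 GB/s of sustained writes

total_writable_bytes = CYCLES * CAPACITY_BYTES                     # ~2.56e17 bytes before wear-out
seconds_to_wear_out = total_writable_bytes / WRITE_RATE_BYTES_S    # ~2.56e8 s, i.e. 256M seconds
print(round(seconds_to_wear_out / (365 * 24 * 3600), 1))           # ~8.1 years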
Another thing is that flash blocks fail on write. Provided what has been written is tested while it can still be re-written, data-reliability should not be compromised even when a significant fraction of the device has failed.
If it's cache you're using it for, then even a total bricking doesn't hurt much. Just toss it. Plug a new one in and let the cache refill.
"espoused by NetApp a couple of years ago "
NetApp's almost the last one to arrive at the SSD dance (and certainly the last of the big market share owners). In fact, they've been derisive enough of the idea of SSD that they have only started supporting it in the last year. But don't worry, NetApp will use its PAM cards to up its performance claims, get sold into a customer, then drop the bomb that PAM cards only accelerate reads (SIGH) and that more disk will be required to actually allow the array to perform.
Violin saying it's now cheaper per GB is interesting, but I don't believe it. It's certainly cheaper per IOP.
As a database professional I need to quantify two things: how much data I need to store, and how many IOPS the IO subsystem needs to be capable of at a realistic latency of, say, < 5ms per IO.
As a raw comparison, say I want a 900 GiB database and 10K IOPS at < 5ms per IO; I can get a single PCIe flash card to do that (easily, and for less than £5K), and if I want redundancy I buy another (£10K in total).
What about hard drives? At 2.5" and 15K, the maximum is 300 GiB per drive and realistically around 300 IOPS per drive, so how many drives do I need to buy in order to get the IOPS and redundancy I need? A significant number; so many, in fact, that I'd need an external array for a start, which means more cost, more controllers, etc.
In the real world, this rubbish about SSDs being more expensive per GB is just wrong; we require IOPS in the real world, and the two go together. Remember, I need 900 GiB of storage space, but I'd probably need around 30 drives to get 10K IOPS with RAID 1+0 at a latency of < 5ms per IO. That is about £9K just for the disks themselves, before the two additional storage arrays to hold them, the dual controllers required, and the ongoing power and cooling...
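A rough sketch of that spindle arithmetic, using only the per-drive figures from the comment above (read-heavy case; the RAID 1+0 write penalty is ignored, so treat it as a lower bound — it lands in the same ballpark as the ~30 drives quoted):

import math

REQUIRED_IOPS = 10_000
IOPS_PER_15K_DRIVE = 300        # realistic per-drive figure quoted above
USABLE_CAPACITY_GIB = 900
DRIVE_CAPACITY_GIB = 300

# Spindles needed to hit the IOPS target vs. merely the capacity target
drives_for_iops = math.ceil(REQUIRED_IOPS / IOPS_PER_15K_DRIVE)                 # ~34
drives_for_capacity = 2 * math.ceil(USABLE_CAPACITY_GIB / DRIVE_CAPACITY_GIB)   # 6 (mirrored)

print(max(drives_for_iops, drives_for_capacity))  # IOPS, not capacity, drives the drive count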
T
Have to compromise to compete with flash. Shortstroking is common even in the 2.5" format.
As soon as you do that you lose most of the advantage over a PCIe flash card. The differences in power consumption will make sure of that even if the spinny stuff is slightly cheaper.
A lot of the time people just completely miss the point about PCIe-connected flash: a single card can easily operate at 1.9 GiB/s (ref: the ocz revo3 maxiops in the machine I'm writing this on), whereas a single SAS 600 channel can only cope with about 500 MiB/s, and in practice the "interface" delivers dramatically less. So if you want to achieve 1.9 GiB/s from a SAN, how do you go about it? With complexity, cost, and the hope that the latency will be realistic.
[written with the context of database storage]
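To put the gap in perspective, a rough count of conventional links needed to match one such card (nominal link rates plus the 500 MiB/s SAS figure from the comment above; real-world overheads would push the numbers up):

import math

CARD_MIB_S = 1.9 * 1024                    # one PCIe flash card, per the comment above
SAS600_MIB_S = 500                         # usable per 6 Gb/s SAS channel, per the comment
FC8_MIB_S = 8e9 * (8 / 10) / 8 / 2**20     # 8 Gb/s FC with 8b/10b encoding -> ~763 MiB/s

print(math.ceil(CARD_MIB_S / SAS600_MIB_S))  # 4 SAS channels to match one card
print(math.ceil(CARD_MIB_S / FC8_MIB_S))     # 3 FC links to match one card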
Anyway, I wonder if this WD guy was talking about growth over the past six months, which I'd expect, because their factories were flooded; it only stands to reason that now they are manufacturing again there is a shortfall to fill :)
T
If you want raw throughput then you're probably streaming and don't need SSDs, except as buffering (I can get upwards of 700Mb/s from our large, slow, spinny arrays).
15K arrays are chosen for their IOPS rating. Shortstroking gives higher IOPS by having the arm move shorter distances (end-to-end seeks take about 10 times longer than adjacent-track seeks), but there's still the issue of rotational latency, etc.
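A quick illustration of why rotational latency caps things however hard you shortstroke (the seek times below are illustrative assumptions, not a particular drive's spec):

RPM = 15_000
avg_rotational_ms = 60_000 / RPM / 2          # half a revolution on average = 2.0 ms

for seek_ms in (3.5, 1.5, 0.5):               # roughly full-stroke, shortstroked, extreme
    service_ms = seek_ms + avg_rotational_ms
    print(seek_ms, "ms seek ->", round(1000 / service_ms), "IOPS")  # ~182, ~286, ~400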
For the last four years, if I've needed IOPS then the medium of choice has been SSD. I don't particularly trust it, so the setup is always highly redundant. Even doing that, the costs have been lower than trying to get the same IOPS out of rotating-media arrays.
Yes, SAS600 is a limitation, but so is SAN - I have 4 * 8Gb/s interfaces in my current crop of fileservers as a f'instance (now approaching end of life) and the next generation will be faster.
SAN activity can be scattered across all the interfaces in and out of a piece of kit, so individual port limitations aren't much of an issue as long as there are enough ports in play.
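As a rough idea of what "enough ports" buys you, the aggregate of the 4 * 8Gb/s setup mentioned above (nominal rates with 8b/10b encoding; protocol overhead ignored):

PORTS = 4
LINK_GBIT_S = 8
usable_bytes_per_s = PORTS * LINK_GBIT_S * 1e9 * (8 / 10) / 8   # encoding overhead removed
print(round(usable_bytes_per_s / 2**30, 1))                     # ~3.0 GiB/s across the fabric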
If you want high-speed SAN-connected SSD then you either slap a bunch of SSD drives into existing kit (which is wasteful and inefficient on a number of levels but gives higher IOPS immediately and cheaply), or you use a design which isn't limited to the speed of SAS interfaces; it's not that difficult to build a Linux-based SAN target with a bunch of PCIe SSDs for cheapish, fast access, as a f'instance.
At the high end, SAN-attached solid state storage is incredibly fast - there's no way even a rack of shortstroked 15krpm drives can keep up - but also bloody expensive.
The era of enterprise spinning media is pretty much closing out. Enterprise solid state storage has already eaten the high end of the market, and this article is basically showing a push to move 15Krpm stuff downmarket before SSDs get there.
Low-speed rotating media will stay dominant for a while, but it's only a matter of time before 2TB flash is cheaper than the same capacity in HDDs.
Not sure you are agreeing with me, but my point is this: why go to the trouble of a SAN when you can easily and more cheaply achieve the goal with PCIe-connected flash?
You don't need to worry about switches, multiple HBAs and controllers to get your throughput up.
You don't need to worry about latency or IOPS.
Redundancy is easy.
Remember, as I said, I write within the context of the database space. We need raw throughput because in BI we may be processing hundreds of GiB of data in a single query; that data is spread across the storage, which is where disk geometry starts to have an effect on latency per IO.
I don't believe the answer is SSD (SAS- or FC-connected) but flash-based PCIe; however, I don't believe that will easily become a reality until commoditised distributed database platforms take hold, something we are already seeing with Hadoop to help deal with "Big Data".
T