NVRAM cache and flush to disk - all sounds very NetApp to me...
Resistive RAM cache to make Flash fly, say boffins
SSDs appeal to ordinary computer users because of their speed and silence. Data centre folk appreciate those qualities too, but also like the SSD's very low power consumption. Energy is no small cost for a data centre, where there can be tens of thousands of drives all slurping electricity at once. But it's not a free ride. …
Monday 25th June 2012 16:16 GMT Steven Jones
There are solutions...
There's an obvious fix for optimising write cycles on SSDs in enterprise arrays, and that is to make use of the massive amounts of NVRAM (battery backed) together with "write anywhere" file systems such as NetApp's WAFL or Sun's ZFS. (Files can be used to emulate block-mode systems.) That way, there's no need to scrub and rewrite a whole SSD page just because a few KB need updating. As SSDs don't suffer from seek times, fragmentation is not a performance issue (speak it softly, but a NetApp at high space utilisation can suffer rather badly from that - it won't happen with SSDs).
Of course, there's still a write-cycle limit, but as it's possible to hot-swap HDDs anyway, there's surely no fundamental barrier to hot-swapping write-exhausted SSDs. All that's required is a financial model that includes write activity in the cost of ownership, rather than capacity charges based on GB alone.
Enterprise arrays already use these very large NVRAMs to optimise write-back to HDDs (as well as for pointers etc.) by decoupling the server-facing write from the back-end activity; that's why such arrays often offer sub-millisecond random writes alongside 7-10 ms random reads - the former are buffered. Only if the back-end I/O load saturates the back-end capacity and swamps the NVRAM do random-write times suffer badly (although people might be amazed how many enterprise arrays hit internal processing and data-path limits before the back-end disks - let alone SSDs - are saturated).
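The write-back scheme described above - acknowledge the write as soon as it hits NVRAM, coalesce repeated updates to the same block, flush to the back end later - can be sketched roughly as follows. This is a toy illustration, not any vendor's actual controller logic; all names are hypothetical.

```python
# Toy sketch of NVRAM write-back caching: writes are acknowledged once
# they land in (battery-backed) NVRAM, and updates to the same back-end
# block are coalesced before the flush, so ten rewrites of a few KB
# cost one back-end write rather than ten.

class NVRAMWriteCache:
    def __init__(self, flush_threshold=4):
        self.buffer = {}                  # block_id -> latest data (coalesced)
        self.flushed = []                 # record of back-end writes
        self.flush_threshold = flush_threshold

    def write(self, block_id, data):
        """Server-facing write: 'completes' as soon as data is in NVRAM."""
        self.buffer[block_id] = data      # overwriting the key coalesces rewrites
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        """Decoupled back-end activity: write out whole dirty blocks."""
        for block_id, data in sorted(self.buffer.items()):
            self.flushed.append((block_id, data))
        self.buffer.clear()

cache = NVRAMWriteCache()
for i in range(10):
    cache.write(1, f"rev{i}")             # ten rewrites of the same block
cache.write(2, "a"); cache.write(3, "b"); cache.write(4, "c")
# Only four back-end writes happen: block 1's ten updates were coalesced.
```

The same decoupling is why the random-write latency seen by the server stays low until the buffer itself is swamped.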
Monday 25th June 2012 16:21 GMT Buzzword
Monday 25th June 2012 20:30 GMT Fred Flintstone
Maybe avoiding batteries is the whole point? Batteries are always a failure point, whereas non-volatile memory simply doesn't have that problem. In addition, scaling this stuff up to data-centre volumes would mean an awful lot of batteries being charged (I reckon it's not going to be based on a boatload of Duracells).
Having said that, it's not my area of expertise so maybe there is a simple way to do this. Anyone?
Monday 25th June 2012 16:25 GMT Gideon 1
"Write information on them too many times and you'll only be able to read them after that."
No - the drive's usable capacity will progressively shrink as the flash blocks wear out. When the number of usable spare blocks drops to zero, you will need to delete (erase) data from the SSD before you can store anything new.
The bigger problem is that the drive's indexes (the mapping tables) are also stored in flash, and when you run out of spare blocks to store them, the entire SSD can fail. Vendors tend to over-provision spare blocks to mitigate this, but how successful has that strategy been?
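The wear-out behaviour described above - a spare pool that absorbs retired blocks until it runs dry - can be modelled in a few lines. A toy model only, with made-up numbers; real flash translation layers are far more involved.

```python
# Toy model of over-provisioning: worn-out blocks are retired and
# remapped to spares; once the spare pool is exhausted, the drive
# stops accepting new writes.

class ToySSD:
    def __init__(self, user_blocks=8, spare_blocks=2, endurance=3):
        self.endurance = endurance                        # writes a block survives
        self.wear = {}                                    # physical block -> write count
        self.mapping = {i: i for i in range(user_blocks)} # logical -> physical
        self.spares = list(range(user_blocks, user_blocks + spare_blocks))
        self.read_only = False

    def write(self, logical):
        if self.read_only:
            return False                                  # no new data accepted
        phys = self.mapping[logical]
        self.wear[phys] = self.wear.get(phys, 0) + 1
        if self.wear[phys] >= self.endurance:             # block worn out
            if self.spares:
                self.mapping[logical] = self.spares.pop() # remap to a spare
            else:
                self.read_only = True                     # spare pool exhausted
        return True

# One logical block, two spares, three writes per block: 9 writes total.
ssd = ToySSD(user_blocks=1, spare_blocks=2, endurance=3)
writes = 0
while ssd.write(0):
    writes += 1
```

Over-provisioning simply moves the cliff further out; it doesn't remove it.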
Monday 25th June 2012 16:46 GMT ArmanX
How much does RRAM cost?
Flash drives are very slow to write.
Volatile RAM is fast, so add it as a buffer - oh, but it loses data on power loss.
RRAM doesn't lose data on power loss, so replace the RAM. Success!
Wait. If RRAM is so much faster to write (and read, too, I'd imagine) than flash, why not replace flash altogether? RRAM is faster, and has more write cycles than flash (or it should, if it's being used as a buffer)... is it really that much more expensive?
Monday 25th June 2012 17:22 GMT Anonymous Coward
Monday 25th June 2012 19:54 GMT Charles 9
Re: How much does RRAM cost?
If you read the article thoroughly, you'll note that RRAM, like most post-flash tech, hasn't yet reached the economies of scale that NAND flash enjoys. There just isn't enough RRAM to go around to use it in quantity. Furthermore, today's chips don't hold a whole lot, especially compared with a same-sized NAND chip. The article states it correctly: it's a technological bridge - a means of bringing a nascent tech into the mainstream to take advantage of its benefits, even in small amounts, while economies of scale continue to build.
Monday 25th June 2012 21:49 GMT bazza
Re: How much does RRAM cost?
Yes indeed, I quite agree.
HP's up-and-coming memristor technology (which shares many characteristics with RRAM) apparently scales to 1 petabit / cm^2. Yes, that really is massively huge. Obviously their very first device won't be anything like that capacious, but the writing is on the wall for FLASH. I've not read much about RRAM, but clearly that's only going to get better too.
When such devices become available it will be very refreshing: no more wear levelling, decent write/read times, no need to block-erase (at least for the memristor). That will make using these solid-state storage technologies far simpler than FLASH, which in turn will bring even greater performance.
I think that these advantages will place huge pressure on the manufacturers to build bulk storage devices. Using them merely as cache for FLASH bulk storage is just making things more complicated again; he who can sidestep all the complexity by losing the FLASH altogether will have a truly awesome product to sell.
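The block-erase overhead mentioned above is easy to illustrate: NAND can only clear bits an erase block at a time, so changing a few bytes in place costs a whole-block rewrite, while a byte-addressable technology writes only what changed. A toy sketch, not any real controller's logic:

```python
# Why small updates are expensive on NAND but cheap on something
# byte-addressable like a memristor: NAND must read-modify-erase-program
# a whole erase block to change a few bytes.

ERASE_BLOCK = 4096  # illustrative block size in bytes

def nand_update(block, offset, new_bytes):
    """Update a few bytes on NAND: the whole erase block is rewritten."""
    buf = bytearray(block)                             # 1. read whole block
    buf[offset:offset + len(new_bytes)] = new_bytes    # 2. modify in RAM
    bytes_programmed = ERASE_BLOCK                     # 3. erase + 4. program all of it
    return bytes(buf), bytes_programmed

def byte_addressable_update(mem, offset, new_bytes):
    """Byte-addressable memory: only the changed bytes are written."""
    buf = bytearray(mem)
    buf[offset:offset + len(new_bytes)] = new_bytes
    return bytes(buf), len(new_bytes)

block = bytes(ERASE_BLOCK)
_, nand_cost = nand_update(block, 100, b"hello")       # programs 4096 bytes
_, ba_cost = byte_addressable_update(block, 100, b"hello")  # writes 5 bytes
```

A roughly 800:1 difference for a 5-byte update here - which is exactly the amplification that wear levelling and NVRAM buffering exist to hide.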
Slight diversion: a chum did a quick sum, and reckoned that if HP really did do a 1 petabit / cm^2 device you'd need only 100 m^2 of them to have enough memory to fully describe every atom in a human body.
Tuesday 26th June 2012 12:36 GMT John H Woods
Re: How much does RRAM cost?
There are a million square cm in 100 square m, so your chum reckons you need a zettabit to "fully describe every atom in a human body".
ISTR there are about 10^26 atoms in a kilo of meat, so around 7e27 in a body. A zettabit gets you well under a millionth of what you need for one bit per atom, which is hardly enough for a "full description" (say, which element it is, plus its 3D coordinates at sufficient resolution). Even 100m x 100m wouldn't be enough - I reckon you'd need about 7e12 cm^2 - 700 square kilometres - just to get a bit per atom.
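The arithmetic above checks out in a few lines, taking the quoted ~1e26 atoms/kg and a 70 kg body as rough inputs (small differences from the figures in the comment are just rounding):

```python
# Quick sanity check of the atoms-vs-storage arithmetic above.
atoms_per_kg = 1e26
body_kg = 70
atoms = atoms_per_kg * body_kg                 # ~7e27 atoms in a body

bits_per_cm2 = 1e15                            # 1 petabit / cm^2
area_100m2_cm2 = 100 * 1e4                     # 100 m^2 = 1e6 cm^2
bits_in_100m2 = bits_per_cm2 * area_100m2_cm2  # 1e21 bits = 1 zettabit

fraction = bits_in_100m2 / atoms               # well under a millionth
area_needed_cm2 = atoms / bits_per_cm2         # cm^2 for one bit per atom
area_needed_km2 = area_needed_cm2 / 1e10       # 1 km^2 = 1e10 cm^2 -> ~700 km^2
```

And that's at one bit per atom; a "full description" would need several orders of magnitude more.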
Monday 25th June 2012 20:43 GMT Stuart Halliday
So put a battery in it?
If a RAM/flash hybrid drive needs to keep its internal power long enough to flush the RAM, then surely it's as simple as fitting a lithium-ion battery and a tiny UPS in the drive?
The battery only has to power the drive for a few seconds in the event of a power failure, so it doesn't need to be very big - plenty of mobile phone batteries would do, for example. There's also plenty of free space going spare in a 3.5" drive case if a 2.5" SSD is used.
Is this too obvious an idea?
Tuesday 26th June 2012 08:52 GMT pip25
Tuesday 26th June 2012 22:06 GMT Anonymous Coward
The concept of caching writes in big honking NVRAM is used in quite a few places, including HDS, VMAX etc. What to do with the data after it's cached is the $1B question. RRAM seems like a stepping stone to a proper flash replacement - whatever that turns out to be (MRAM, MeRAM, PCM). Vendors should stop teasing us with these stories and deliver the replacement tech already!
WAFL and ZFS collect writes and flush them to disk; I think Nimble's CASL does something similar. There aren't that many redirect-on-write vendors out there, so: any NetApp'ers care to comment on the frag issue discussed earlier? Is there some sort of defragger running in the background to clean up the disks? ZFS'ers, Nimblers?