Pictures
If you're going to put a vulture logo in the pictures to protect yourselves from theft, at least make the photos worthy of stealing? Did you take it on an old camera phone, print it out on a dot matrix and then re-scan it?
To build storage capable of running tests that can challenge 10Gbit network cards and switches, a flash array is required. I chose eight Kingston Hyper X 3K 240 SSDs to provide this high-speed storage layer. Before I built my VMware cluster of ultimate doom, however, the SSDs needed some torture testing. The drives advertise …
I put the vulture logo in because I had some fun burning things with frikkin laser beams and thought it might be fun to toss in. The camera used was that of a Samsung Galaxy S II. Someone nicked the Canon 5D I normally keep around.
Apologies for hurting your feelers by adding things involving frikking laser beams; I don't care who copies the images, and have several more crappy camera shots if you want.
This tech hasn't been around long enough to be "time tested". It needs to be handled accordingly. Thus, I always check the end-user reviews to see which products may be bad eggs (and avoid those).
At this point, reliability seems to be a bigger problem than speed.
So speed-focused reviews are less interesting. At this point I don't care so much whether or not one of these can keep up with one of my spinny disk arrays.
me too..
Reviews that are performance-based aren't much use; reliability has been their weak point since day 1, and continues to be.
It is interesting to see how far below the "paper" numbers most drives fall under real-world conditions (it seems not uncommon to see only 10-20% of advertised performance, especially at higher utilization).
@13:07 I have to wonder if that was due to some of the more infamous southbridge bugs and BIOS-level incompatibilities with SSDs that were common back in the day. I could read you textbooks' worth of complaints against OCZ's consumer SSDs, but I've yet to encounter an error with Intel or Kingston.
"The first and last SSD drive I bought was a Kingston, even with the latest firmware (which was supposed to stop the corruption) it still corrupted after a few months."
Let me guess, it was a V100? They were indeed horrible, but other SSDs using the same chipsets were the same. At least Kingston has exchanged many of them for the (much better) V+200 variant if the user complained enough.
The V100 is still offered and, while the firmware has improved, is still not worth buying. The V+200 and the Hyper-X SSDs however belong to the most reliable SSDs on the market.
My first (and probably last) SSD is the Crucial M4-128: you know, the one that spontaneously stops working after 5,000 hours of use. The firmware update didn't work under Windows, as the BIOS needs to be set to IDE mode, after which Windows would blue-screen at reboot. With no CD drive, I had to image a bootable flash disk. Sooo much trouble and headache. Any other type of hardware in a computer can be updated automatically, so why not SSDs?
I think I've mentioned this before. SSD drives are the single most effective upgrade for any system. I've got single-core 1GB RAM machines out-performing quad-core 8GB beasts that have cost me over £1K. Just sticking a £60 SSD into any machine brings it back off the scrap heap and into meaningful use again. There is one caveat tho. I've installed over 300 SSD upgrades now onsite, and I can't stress how important backup is. When an SSD goes, it goes down in flames! There is no gradual decline or tell-tale clicking, like you get from plain old rusty HDs, that gives you time to perform that backup you really should have been performing but couldn't be bothered to. Your data is simply there one minute, and gone the next!
Having learned my lesson from the first dozen or so failed SSDs, I now enforce scripted backup on my domain, and it's paid off. My failure rate on SSDs is close to 20%, but the demonstrated cost savings through eking out a couple more years on existing kit has raised several eyebrows in board meetings, because previously it was the norm to simply replace PCs every 2 years with £1K machines.
@Aqua Marina
Totally 200% agree. On the SSD-equipped devices, I've got Crashplan running to the local SAN first and then remote to make sure I've got them backed up. I've had 2 go south so far, and unlike a "spinning platter of rust", there is no time to work around a failure. SSDs either work awesomely, or awesomely fail...
I also agree that in the 'real-world use-case' scenario, SSDs make such a vast improvement that motherboard & processor upgrades can be put off a year or two, and time spent waiting for the computer to finish doing things is reduced noticeably. I haven't had the unit failures you describe (just lucky I guess?) but all daily work is uploaded to cloud storage and NAS. Belt and braces.
In fairness, I was an early adopter, and a lot of the issues that caused me to return a drive, were probably resolved by firmware updates 6 months later. As it is now, I won't buy an SSD unless the firmware revision is in the double digits. Even so, SSD failure is still total failure.
On the other hand, our developers all use Netbeans. Even the fastest machine we had took 20-30 minutes from power-up until it was ready to be used. Since I've replaced the drives with SSDs, even the oldest machine we have takes 3 minutes tops from hitting the power switch till the bod can actually start to type. That's a 10-fold speed increase. Something the boss noticed very quickly when the devs were sat around for an hour each morning drinking coffee and reading newspapers while their PCs warmed up.
Well, then you will probably never buy an SSD, as hardly any model is on the market long enough to reach double-digit firmware versions.
Judging a product mainly by the number of digits in its firmware version is silly. If a product is good and stable right from the beginning it won't require many updates, so the version numbers will remain low; and if a product is put to market prematurely but the vendor doesn't give a shit, then the version numbers will remain low as well.
A canny developer would turn up for work half an hour early every day, flick the switch on everyone else's machines for them so they're all warmed up in time for their arrival, spend half an hour chilling and drinking coffee himself (with no-one around to notice), then bugger off half an hour early in the afternoon! Missed a trick there...
@Aqua Marina
Are you finding any consistency to the failures? While catastrophic failure in an SSD is a given, in theory their solid-state nature should mean that they fail much more predictably. An HDD can go at any time because of its mechanical nature, but your SSDs *should* be failing at much more similar, predictable times. Have you found that to be the case?
The only consistency I find is that they simply stop. The first I hear of it tends to be the 9am "My PC won't come on" phone call. I've only managed to pull data from 4 SSDs in 2 years: 3 had bad sectors, and one allowed me to copy all but one set of files, at which point it would simply hang and time out. I've settled on Crucial M4s now, as I found OCZ's RMA procedure to be unreliable (i.e. discs went missing, I had to produce PODs for every return, and then the return would take weeks from the Netherlands).
With a 20% failure rate, you've either sourced some pretty bad SSDs or your workloads must be of the quite insane type.
I have no idea what type of drives you buy, but for consumer-level drives I've found that, for the gnarly workloads where TRIM is generally ineffective for one reason or another, overprovisioning your SSD (buying one size bigger and then locking the extra space away from the OS using HPA, or just not partitioning all of it for the lazy bums) gets you significantly better lifetime and long-term write performance. This in effect gives the SSD much more spare area to organize its writes in a flash-friendly way, resulting in a significant reduction in write amplification, an order of magnitude in some cases.
This little trick can give you quite close to "enterprise grade" write endurance at consumer price points. A typical consumer drive has 0-7% spare configured at the factory; pushing this up to around 20% or maybe even 25% helps a lot. Intel had a paper on this, but I can't find it right now.
If you want to do this on already-used drives, keep in mind you'll have to TRIM the entire drive or "secure erase" it before doing this, or else the sectors will generally not be considered free and this entire exercise will be rendered futile.
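A rough back-of-the-envelope Python sketch of those spare-area sums; the 256GiB-of-raw-NAND figure for a "240GB" drive is an assumption about a typical consumer model, not a spec for any particular drive:

# Rough overprovisioning sums, illustrative only. Assumes a nominal "240GB" consumer
# SSD actually carries 256GiB of raw NAND, which is common but not guaranteed.
GIB = 2**30
GB = 10**9

raw_nand = 256 * GIB        # assumed raw flash on the drive
advertised = 240 * GB       # what the label says and what the OS can address

factory_spare = raw_nand - advertised
print(f"Factory spare area: {100.0 * factory_spare / raw_nand:.1f}% of raw NAND")

# To hit ~20-25% total spare, leave part of the advertised capacity unpartitioned
# (or lock it away with an HPA) so the controller can use it freely.
for target in (0.20, 0.25):
    usable = raw_nand * (1.0 - target)          # bytes we let the OS address
    leave_unused = advertised - usable
    print(f"For ~{target:.0%} spare: partition only {usable / GB:.0f} GB, "
          f"leave ~{leave_unused / GB:.0f} GB untouched")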
Or you could just let the vendor do it for you and charge you 10x the price (for that and a few other features that may or may not be important to you)
Or I could be the kind of guy that buys a 64GB drive, where I know the drive won't be filled beyond 50GB.
I like your idea. I'm surprised I hadn't thought of it, sort of obvious now I've thought about it. Well done, virtual beers are on me!
Like I said earlier tho, I was an early adopter, and I'm sure that many of my original RMAs have been fixed by firmware updates since.
Still a bloody good idea tho, I intend to give it a try!
Seriously, the screwdriver is awesome. It's hard to explain unless you've tinkered with it, but in terms of "portable screwdrivers that clip into your pocket" I've not encountered better. I think more thought went into that damned screwdriver than OCZ put into their entire line of consumer SSDs. I have no idea where Kingston sourced that screwdriver, but I want to find out so I can buy many.
Good bit of marketing/end-user satisfaction. Whoever thought that up deserves a raise.
Well, yes - it's a good screwdriver if it works. I've installed several of these same SSD kits and some of those screwdrivers failed - the small magnet behind the head has fallen out (and doesn't like to stay in anymore), and without it the whole screwdriver is useless.
The same or similar screwdriver is available from DX for a couple of $/£/€ with more heads (Torx etc.) included.
Shove a little bluetack up there?
It just might render the thing not entirely useless.
(I bought a neat interchangeable-bit miniature screwdriver a few months ago. Managed to lose the tiny ball bearing that holds the bits in place the first time I used it. Sad screwdriver stories of our time!)
@sandtitz
Well, we're a month into abusing 8 of the beasties, and no failures yet. Here's hoping!
I doubt they cost as little as $5. The refined aluminium that makes up the screwdriver would have cost more than $5 in electricity to deoxidise. They are bloody bullets. That said, I could see $10 or $15 a unit; solid little widgets. I will investigate similar devices with more heads.
The screwdriver looks a lot like a WorkZone branded one I got from Aldi a few months back. I think they might be back in, I think I remember seeing it in an email a little while ago. Cost about £5 I think, definitely less than £10. I've seen a very similar one at Maplin too.
Mine came with about 8 ends (including some Torx ones) which can be fiddly to get back into the storage compartment, but it works well.
And I've got a 64GB OCZ Vertex 2 as my boot drive. 2 years old, and SSDLife still reckons it has 100% of its life left, good until mid-2018.
I picked up a screwdriver that looks suspiciously like those in the shots from my local auto parts store for $5 about 5 years ago. If the similarity is more than skin deep, I can vouch for its value - it replaced the standard "jewelers" sets I would keep getting from the local Radio Shack and then lose almost immediately. I don't lose this one, and the bits don't show a bit of wear after 5 years of use.
$159 for a 240GB SSD plus a conversion kit including that screwdriver definitely sounds like a deal. Maybe if I spend at least as much on a tablet for the wife, she'll let me buy one...
...because if you want to build something really fast, you would go with PCIe storage, skipping the stupid SATA controller bottleneck entirely and getting 1GB/s PER CARD. And yes, you can get small-capacity PCIe storage cards for cheap: for your $1400 or so you can get THREE OCZ RevoDrive 3 X2 240GB (200k IOPS per card) or TWO 480GB (240k IOPS per card) and run them in RAID. The downside is that you'll probably get a lot wobblier test result lines, but even so, your lowest result will likely still be higher than your SATA-limited RAID running on a single PCIe bus.
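Roughly, the aggregate numbers quoted there work out like this (Python sketch; the IOPS figures and the ~$1400 budget are just the ones quoted above, not verified specs):

# Rough totals for the PCIe options quoted above (figures as quoted, not verified).
budget = 1400  # USD, approximately

options = {
    "3x RevoDrive 3 X2 240GB": {"cards": 3, "gb": 240, "iops_per_card": 200_000},
    "2x RevoDrive 3 X2 480GB": {"cards": 2, "gb": 480, "iops_per_card": 240_000},
}

for name, o in options.items():
    total_gb = o["cards"] * o["gb"]
    total_iops = o["cards"] * o["iops_per_card"]
    print(f"{name}: ~{total_gb} GB total, ~{total_iops:,} IOPS aggregate, "
          f"~${budget / total_gb:.2f}/GB at that budget")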
Your solution requires that I spend money out of my own pocket that I worked hard for on something made by OCZ.
No fucking way..
Beyond that, the PCIe flash solutions offer limited amounts of storage for the money; when I look at the truly negligible IOPS gains versus my Hyper-X solution and its far larger capacity, I think I made the right call.
Plus, I'm not counting on OCZ. *spit*
In my case I am thinking of using it as the journalling device for my ext4-formatted HDD RAID array; that might help a bit in dealing with write speed on big-ish files while keeping the redundancy of the RAID.
Might also try it in due course as a ZFS intent-log device, if I ever get round to re-purposing some of the HDDs I have accumulated into a high-integrity data store/backup thing.
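A rough Python sketch of the commands involved; it only prints them, the device names and pool name are placeholders, and the flags are from memory, so check the mke2fs/tune2fs/zpool man pages before trying any of it:

# Sketch only: prints the commands rather than running them. Device names and the
# pool name are hypothetical; verify the flags against the man pages first.

ssd_journal = "/dev/sdX1"   # hypothetical SSD partition for an external ext4 journal
ssd_slog = "/dev/sdX2"      # hypothetical SSD partition for a ZFS intent log (SLOG)
hdd_array = "/dev/md0"      # hypothetical ext4-formatted HDD RAID array
pool = "tank"               # hypothetical ZFS pool name

commands = [
    # ext4: format the SSD partition as a journal device, drop the array's internal
    # journal, then attach the external one (the array filesystem must be unmounted).
    f"mke2fs -O journal_dev {ssd_journal}",
    f"tune2fs -O ^has_journal {hdd_array}",
    f"tune2fs -J device={ssd_journal} {hdd_array}",
    # ZFS alternative: add the SSD partition as a separate intent-log device.
    f"zpool add {pool} log {ssd_slog}",
]

for cmd in commands:
    print(cmd)   # run by hand once you're sure about the devices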
Windows RAID 5. Yes, yes, I know, Windows RAID - esp RAID 5 - is poo. I just don't have the money for a hardware RAID card right now. If I did, it would be an LSI MegaRAID SAS9280-16i4e no questions asked. Best damned RAID card I've ever touched, does RAID "bloody everything" and could use the 8x Hyper-X drives as a block-level cache array to front end a whole lot of spinning rust.
When I get more money...
When I was doing a bit of research earlier this year on SSDs, I noted that you could RAID them, you could have TRIM on them (which is important to stop performance falling off over time), but you couldn't have both RAID and TRIM. Has this situation finally been fixed?
You might like to know that I used a (non-RAIDed) Intel 520 and a Corsair Force 3 GT (both 240GB) in separate PCs to record all of the Olympics channels (had 8 sat tuners, 4 terrestrial tuners and 24TB of HDD). At one point, I'd be recording 20+ channels at once on a PC! Almost all the hard drives were Seagate 3TBs, which are still the only brand of HDD worth buying at the moment.
I even had to write my own C software to handle the transfer of completed recordings from SSD to HDD (no, rsync wouldn't be suitable) because I couldn't find something that did exactly what I wanted. I really should release that as open source, because it's pretty nifty and worked very well on the multiple parallel recordings I was doing to SSD.
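A rough Python sketch of the kind of thing involved (the paths are placeholders, and the "quiet for five minutes means finished" rule is a stand-in, not the real completion logic in the C tool described above):

# Hypothetical sketch of an SSD-to-HDD recording mover; paths and the "quiet for a
# while means the recording is finished" rule are assumptions, not the original's logic.
import os
import shutil
import time

SSD_DIR = "/mnt/ssd/recordings"    # hypothetical fast staging area
HDD_DIR = "/mnt/hdd/recordings"    # hypothetical bulk storage
QUIET_SECONDS = 300                # untouched for 5 minutes => assume complete

def move_completed_recordings():
    now = time.time()
    for name in sorted(os.listdir(SSD_DIR)):
        src = os.path.join(SSD_DIR, name)
        if not os.path.isfile(src):
            continue
        if now - os.path.getmtime(src) < QUIET_SECONDS:
            continue                       # probably still being written by a tuner
        tmp = os.path.join(HDD_DIR, name + ".partial")
        dst = os.path.join(HDD_DIR, name)
        shutil.copyfile(src, tmp)          # copy across filesystems first...
        os.replace(tmp, dst)               # ...rename into place on the HDD...
        os.remove(src)                     # ...and only then free the SSD space
        print(f"moved {name}")

if __name__ == "__main__":
    move_completed_recordings()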
If you use Windows RAID you get RAID and TRIM. Modern RAID cards will either do TRIM, or they will handle the data internally in a manner such that the TRIM command is actually not required - it's all about how big the data blobs are that get sent to each drive and so forth.
The new LSI controllers are SSD-aware and deal with things in an appropriate manner. I think (?) the new Adaptecs do so as well...