Why solid-state disks are winning the argument

Perhaps the most perplexing question I have been posed this year is: "Why should I use SSDs?" On the face of it, it is a reasonable question. When it was put to me, however, I just sat there staring at the wall, trying to form a coherent thought. Where to begin? As it was late at night, I decided that starting with a brief …

  1. Anonymous Coward

    "As you can see, there aren't many reasons to buy traditional magnetic disk"

    Hmm, you give a list of arguments for buying traditional hard disks and then say there aren't many reasons?

    "SSDs are faster. They have way lower latency. They consume less power. They take up less space.

    Most importantly, so long as you follow the instructions on the tin when selecting the right SSD for the job, there is absolutely no reason not to buy one"

    Apart from the main one of price and/or price per GB? If I want a 48TB SAN then filling it with enterprise SSDs is going to cost me significantly more than HDDs!

    1. Nigel 11

      Reasons for traditional HD

      1. Cost per Terabyte is still much lower for HDs.

      2. I have more faith in mirroring applied to hard disks than to SSDs.

      In my experience, a majority of HD problems show up in the SMART statistics well before the drive fails. Then I replace the drive proactively. I also try to pair drives from different manufacturers to reduce the risk of a common-mode dual-drive failure.

      Will SSDs warn in advance of failure? CAN SSDs warn in advance of failure? They're a new technology (more accurately, several new technologies), and I think it'll be a few years before we know. I'm not even certain that running a mirrored pair of SSDs is useful (but given that I'm talking about multiple TB of data, it'll be a few years before I can afford to find out!)
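      Proactive SMART checks like these can be scripted. A minimal sketch, assuming smartmontools' smartctl is installed; the watched attribute names are vendor-specific assumptions (Media_Wearout_Indicator on Intel drives, Wear_Leveling_Count on Samsung), not a standard:

```python
import re
import subprocess

# Attributes worth watching; the names are vendor-specific, so treat
# this table as an assumption rather than a standard.
WATCHED_FLOORS = {
    "Media_Wearout_Indicator": 10,  # Intel SSDs: counts down from 100
    "Wear_Leveling_Count": 10,      # Samsung SSDs: also counts down
}

def parse_smart_attributes(smartctl_output: str) -> dict:
    """Map attribute name -> (normalized value, raw value) from `smartctl -A` text."""
    attrs = {}
    for line in smartctl_output.splitlines():
        # Attribute rows look like:
        #   5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
        m = re.match(
            r"\s*\d+\s+(\S+)\s+0x[0-9a-fA-F]+\s+(\d+)\s+\d+\s+\d+\s+\S+\s+\S+\s+\S+\s+(\d+)",
            line,
        )
        if m:
            attrs[m.group(1)] = (int(m.group(2)), int(m.group(3)))
    return attrs

def warnings_for(attrs: dict) -> list:
    """Flag attributes that suggest replacing the drive soon."""
    out = []
    realloc = attrs.get("Reallocated_Sector_Ct")
    if realloc and realloc[1] > 0:
        out.append(f"Reallocated_Sector_Ct raw={realloc[1]}: replace proactively")
    for name, floor in WATCHED_FLOORS.items():
        if name in attrs and attrs[name][0] <= floor:
            out.append(f"{name} normalized={attrs[name][0]}: wear floor reached")
    return out

def check_drive(device: str) -> list:
    """Run smartctl against a device (needs root) and return any warnings."""
    res = subprocess.run(["smartctl", "-A", device], capture_output=True, text=True)
    return warnings_for(parse_smart_attributes(res.stdout))
```

      So SSDs do expose wear counters through SMART, and wear-related failure can be anticipated this way; sudden controller deaths are another matter.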

      Will multi-TB SSDs compete with multi-TB HDs? If 3D flash can expand further into the 3rd dimension, that may happen sooner than we think.

      BTW, why is putting Flash memory (an SSD) on the PCI bus still regarded as exotic, expensive server technology? 6Gbit SATA is now the bottleneck for even consumer-grade 240GB SSDs. Give us a small and cheap but very fast PCI card to boot and run our O/Ses from!

      1. Anonymous Coward

        Re: Reasons for traditional HD

        SSD for OS and Applications.

        Mechanical drive for big data storage.

        Win win.

        1. JEDIDIAH

          Re: Reasons for traditional HD

          It's funny you should mention that because enterprise storage vendors have had "tiered storage" for many years now. This is not a new problem. These dynamics exist even within the same disk technology (namely magnetic HDD). There's pretty much always been a cost versus speed tradeoff.

          Just add various "grades" of SSD into the mix using the tech that's already there.

          ...again: not news.
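          The cost-versus-speed tradeoff behind tiered storage can be sketched as a toy placement policy; the tier names, latencies and prices below are illustrative assumptions, not vendor figures:

```python
# Toy tiering policy: place data on the cheapest tier that still meets
# its latency requirement. Tier specs below are illustrative, not quotes.
TIERS = [
    # (name, typical_latency_ms, rough_cost_per_tb_usd)
    ("nvme-ssd", 0.1, 400),
    ("sata-ssd", 0.5, 250),
    ("15k-sas", 5.0, 120),
    ("7.2k-sata", 12.0, 40),
]

def place(required_latency_ms: float) -> str:
    """Pick the cheapest tier fast enough for the workload."""
    fast_enough = [t for t in TIERS if t[1] <= required_latency_ms]
    if not fast_enough:
        return TIERS[0][0]  # nothing qualifies: fall back to the fastest tier
    return min(fast_enough, key=lambda t: t[2])[0]

print(place(1))    # latency-sensitive data lands on flash
print(place(20))   # cold data lands on spinning rust
```

          Adding SSD grades to the mix only adds rows to the table; the policy is the same one storage arrays have applied to disk tiers for years.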

      2. richardcox13

        Re: Reasons for traditional HD

        > Give us a small and cheap but very fast PCI card to boot and run our O/Ses from!

        Have you looked at M.2 flash cards (on motherboards with the appropriate support)?

        Of course they have capacities that mean you can skip a SATA flash disk and just back it up with the large spinning rust for bulk storage if required.

        1. Sgt. Pinback

          Re: Reasons for traditional HD

          yup, m.2 is the current answer

          Samsung have a ~$500 512GB M.2 SSD out that does 1000MB/s read and write; you will need a $15-30 adaptor to fit it in a regular PCIe slot though (it needs PCIe v2, 4 lanes).

          More are coming, wait until the end of next year and you'll get your economical PCIe blazing storage fix.

      3. Boothy Silver badge

        Re: Reasons for traditional HD

        Quote: 'Give us a small and cheap but very fast PCI card to boot and run our O/Ses from!'

        Amen to that.

        1. Mikel

          Re: Reasons for traditional HD

          http://www.newegg.com/Product/ProductList.aspx?Description=mini%20pcie%20ssd&Submit=ENE

      4. Uncle Ron

        Re: Reasons for traditional HD

        Anything on a traditional I/O bus is bottlenecked. The traditional, legacy I/O subsystem, dating back 40 years, is such a kludge of wiring and instructions as to be, IMHO, the most backward, outdated thing we can see in current information processing.

        Not until we implement Storage Class Memory, and Storage Class Memory controllers, directly into the fabric of the processor silicon, and thence into the OSes and even into the apps themselves, will any of the article's points really matter. The minor distinctions between HDDs and SSDs are only marginally interesting.

      5. Tom 13

        Re: Will SSDs warn in advance of failure?

        The answer to that is definitely Yes. In fact it is built into the SSD to achieve a usable lifespan. IIRC the standard is that they build the SSD with 4 times the memory of its rated capacity and as potential failures are detected it shunts data to a new location and marks off the suspect block so it isn't accessed again.

        Oh wait, you meant will it warn you as the admin of the system in advance of a failure? Erm, ah, ... Yeah, they really should do that.

        1. Charles 9 Silver badge

          Re: Will SSDs warn in advance of failure?

          "Oh wait, you meant will it warn you as the admin of the system in advance of a failure? Erm, ah, ... Yeah, they really should do that."

          Correct me if I'm wrong, but isn't the most frequent point of failure in an SSD less the memory chips and more the controller that herds them all (which makes any redundant chips moot)?

          1. Tom 13

            Re: more the controller that herds them all

            Probably.

            But the same is probably also true of spinning metal disks so it's sort of moot to the warning question. In fact, if you're talking about absolute best of class systems, based on my experience if you don't have a tape drive* in the mix somewhere you aren't fully covered. I once had the privilege of working with a group of people who had mirrored desktop drives. Of the six systems in two years that we had to replace for drive failures, I think the mirrors helped with two. I think we actually only had one drive failure. Each of the other instances was a case of data corruption on the drive. So the mirror just dutifully copied the corruption to the second drive and both drives were useless from a data recovery perspective. After we put fresh images on them, everything worked fine.

            *There are some over the wire systems that work sufficiently like tape to qualify as a tape drive, but if you can't go back at least a full year in the archive to restore a file, I wouldn't count it. That's the bit that most drive redundancy systems (whether SSD or spinning metal) don't address.

        2. bdg2

          Re: Will SSDs warn in advance of failure?

          They do NOT put anywhere near 4 times the rated capacity of Flash memory in an SSD.

          A 240GB drive probably has 256GB of flash in it, a 256GB drive maybe 272 or 288GB.

          Certain hotspots in the drive get written to very often (directories, allocation tables etc.) and most other parts are rarely written to. The drive will move the hotspots around to even out the wear but it will move them before any damage is done.
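          The arithmetic behind those figures is worth spelling out: capacities are marketed in decimal gigabytes while flash chips come in binary sizes, so a "240GB" drive built from 256GiB of flash has more spare area than a naive 256 - 240 = 16GB suggests. A quick sketch (chip sizes illustrative):

```python
def overprovisioning(marketed_gb: float, flash_gib: float) -> float:
    """Spare flash as a fraction of user-visible capacity.

    Marketed capacity is decimal (1 GB = 10**9 bytes); the flash
    inside comes in binary-sized chips (1 GiB = 2**30 bytes)."""
    user_bytes = marketed_gb * 10**9
    flash_bytes = flash_gib * 2**30
    return (flash_bytes - user_bytes) / user_bytes

# A "240GB" drive built from 256GiB of flash has ~14.5% spare area;
# a "256GB" drive from the same chips still has ~7.4%.
print(f"{overprovisioning(240, 256):.1%}")
print(f"{overprovisioning(256, 256):.1%}")
```

          Either way, the spare area is a few percent to ~15%, nowhere near the 4x claimed above.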

          1. Vic

            Re: Will SSDs warn in advance of failure?

            "Certain hotspots in the drive get written to very often (directories, allocation tables etc.)"

            They shouldn't do - the wear-levelling system on the controller is supposed to prevent that.

            And this is why it is essential to use an OS that properly implements TRIM; the alternative is to watch the SSD eating itself trying to reallocate data that you've already thrown away...

            Vic.

      6. Anonymous Coward

        @Nigel 11

        Why in the world do you have more faith in mirroring for hard drives? Both hard drives and SSDs can have two types of failures: controller failure and media failure. Mirroring protects you well against both.

        SSDs add a new wrinkle in that they have a limited number of erase cycles, thus a limited write lifetime, but this isn't a problem in practice. SMART tells you about the write lifetime, and knows when you're starting to hit it. But if you ignore it, and keep writing, it isn't a bad thing. The controller will get errors trying to erase a block to be able to write a new block, and thus won't be able to write anymore, so the OS will get write errors thrown back at it. You will still be able to read your data just fine, so you don't lose anything. If downtime is your primary concern, well, make sure your OS knows how to interpret the SMART data and warns you in advance, so you can replace that drive before it reaches the point where it can't write any more data!

        BTW, both drives in a mirror won't hit their write lifetime at the same point, even if they've been mirrored since they were new. There isn't a counter in the flash chips that, when reached, causes writes to stop working. The limit is reached when erases fail, and two drives won't have the exact same number of erase cycles before failure... there will always be a little variation between chips.
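        The write-lifetime arithmetic is simple enough to sketch. Endurance is typically rated as total bytes written (TBW); dividing that by daily write volume gives a naive life estimate (the figures below are hypothetical, and real life is shorter because of write amplification):

```python
def years_of_life(tbw_terabytes: float, writes_gb_per_day: float) -> float:
    """Naive endurance estimate: rated TBW divided by daily writes.

    Ignores write amplification, which shortens this in practice."""
    days = (tbw_terabytes * 1000) / writes_gb_per_day
    return days / 365

# Hypothetical figures: a drive rated 150 TBW written at 20GB/day
# outlives any sane replacement cycle.
print(f"{years_of_life(150, 20):.1f} years")
```

        Which is why, for typical desktop write loads, hitting the erase-cycle limit is a theoretical worry rather than a practical one.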

        1. bdg2

          Re: @Nigel 11

          I believe there is a third type of failure for SSDs. It seems to be rare now, but I'm pretty sure that in the early years of SSDs it used to happen, and was responsible for the stories of SSDs suddenly, totally and completely catastrophically failing with virtually no possibility of any data recovery. Let me explain. While in use, an SSD keeps the mapping of logical to physical addresses, and some usage counts, in RAM -- in order to allow wear levelling. When the power goes off, the controller in the SSD has to rapidly save that RAM into flash. If something goes wrong and the RAM gets saved corrupt, it effectively scrambles large sections of the drive.

  2. Shrimpling

    Why I use SSD

    It means I don't break the Hard Disk when I drop my laptop which happens more than it should do!

    1. silent_count

      Why I use SSD #2

      A SSD equipped laptop doesn't have any issues when I carry it around on my motorbike.

      1. John Arthur

        Re: Why I use SSD #2

        And I have a Lenovo X60 laptop that has survived several tens of thousands of miles in my motorcycle panniers without a spinning rust disk failure so SSD and traditional are equal on that.

        1. cons piracy

          Re: Why I use SSD #2

          Much like your crotch rocket, the disk's head is parked up nice and safe whilst not in use, which means it can withstand more G's in that state... I wouldn't advise defragging the drive or allowing updates to finish whilst on the move though ;)

        2. Nick Pettefar

          Re: Why I use SSD #2

          My missus strapped her work's laptop to the rack of her (non-suspension) bicycle and rode to work and back for at least a week before it failed.

    2. JEDIDIAH

      Re: Why I use SSD

      My kid has been abusing my spinny disk based Archos for years. The thing refuses to die. If anything, it's the battery that's the problem.

      Hard drive tech moved beyond the 80s style fragility you are talking about a long time ago.

  3. Sykobee

    So it depends on the use case; if you're an enterprise, you should assess these as part of any expensive procurement.

    The business case for fast writes depends on the business user. Developers need SSDs to speed up build times, reducing developer downtime. I guess the same goes for graphic designers, video editors, and so on.

    Home users can probably survive with a hybrid drive, although current drives seem to offer only a tiny amount of SSD, whereas Apple's Fusion drive has 128GB on a PCIe link and thus speeds up most operations. SSDs can be stupidly fast, and amazingly better at seeks, IOPS, etc.

    HD is great for rarely accessed stuff, or stuff that doesn't need high speeds (media, for example). But it's getting more painful to deal with a system that only has a HDD these days, especially once you've experienced a system running off a decent SSD.

    As for the guy who wants a 48TB SAN: that's going to be limited by the network anyway. Spinning media is obviously the solution there.

    1. Anonymous Coward

      Developers need what now?

      "Developers need SSDs to speed up build times"

      No no no NO. Developers, like management, should be given the crappiest POS you can find. The world would be a better place if developers were not given faster hardware. If a developer is building often enough that it becomes a bottleneck, then sack the incompetent so-and-so, because they are probably iteratively trying to work out how to make the code work and are therefore not really a developer. If the code they produce needs faster hardware, then sack the incompetent so-and-so (unless the application is genuinely heavy on hardware, of course) and hire someone who can write efficient code.

      Management get the crap because they control the budget, so it flows more freely if they feel the pain :)

      1. Infernoz Bronze badge

        Re: Developers need what now?

        AC,

        I hope you always get crappy code from developers, very, very late, because that's what your moronic attitude will lead to.

        I've been a Software Developer for decades, so am competent to tell you to STFU!

        Developer tools, especially IDEs, can be surprisingly heavy users of disks, as can other software we use like database servers!

        Developers /always/ need much better-specified machines with ample CPU and RAM, because we are not just running the end product: we also run IDEs (which all proper developers use), other debugging and monitoring tools, database tools, source control clients, servers, Virtual Machines etc., often concurrently, and we damned well need to be able to do lots of build cycles, including automated testing, to release usable code! Modern development is often not Waterfall; it is often deliberately iterative, to reduce total development time for a /useable/ product.

        1. Nigel 11

          Re: Developers need what now?

          Depends what sort of developers.

          People who are coding and building, should have machines that can do it fast. Some sorts of debugging, likewise.

          But people who are testing for release should, at least some of the time, be testing using the crappiest PC that a customer might still be using.

          My favorite peeve is websites that were never tested other than on a Gigabit internal net. You do NOT need a gigabit link to develop HTML and JavaScript. You should be exiting the building on a crappy ADSL service from a crappy ISP, and looping back in via the big bad internet. That's what some of your customers are seeing, stuck on the end of too many miles of corroded aluminium POTS cable that's somehow managing to support ADSL at a few Mbit/s (when it's not raining).

          1. Charles 9 Silver badge

            Re: Developers need what now?

            "You should be exiting the building on a crappy ADSL service from a crappy ISP, and looping back in via the big bad internet."

            It would be better still to set up a small intranet backed by a modem. Some people are LUCKY to have dialup access (it can happen: middle of nowhere with view of the south sky blocked somehow--no satellite), so they still need to be considered.

            1. Anonymous Coward

              Re: Middle of nowhere?

              Try down-town regional service centre, pop 20 000, Australia.

              ADSL - we've heard of it.

          2. Vic

            Re: Developers need what now?

            "My favorite peeve is websites that were never tested other than on a Gigabit internal net."

            I once had a customer whose (third party) developers told them that the new site was slow because it was running on a server on the internal LAN, and all would be well once it was in the datacentre. And kept a straight face whilst saying it...

            Needless to say, the entire project was so slow and laggy it was effectively unusable. The developers blamed the hardware, the network, the colour of the sky. Two of us re-wrote the slowest bit[1] in a couple of hours[2], demonstrating that it was indeed their crap "design"[3].

            Vic.

            [1] They were passing the entire dataset to the client in XML, then parsing that XML in the worst piece of javascript you have ever seen. Some users were giving up after 10 minutes...

            [2] I had to teach the other guy the rudiments of Javascript. Over the phone.

            [3] I use the word quite wrongly...

        2. This post has been deleted by its author

      2. Hans 1

        Re: Developers need what now?

        Dear anon, I regret to write that I downvoted you ... then I thought, anon? Must be a window cleaner. So yes, when you develop Windows software (clock.exe, calc.exe, browser extension, toolbar, adware, or malware) then you should not be allowed to have an SSD ... but when you build ENTERPRISE software, you ought to have an SSD.

        As for wear ... I believe in the theory of this and after 5 years must say ... it is utter bullshit. There is no wear. I build, every workday, multiple times, a multi-GB code base; the doc alone is a 2000+ page PDF, hundreds of files per build, many < 4kb ... you get it ... I used to build just the doc on spinning laptop rust ... 45 minutes, 15 on Samsung F1's, back in the day on a Core2Quad ... now on SSD? More like 5. Everybody in our team has SSDs, none have worn out, even after 5 years of builds ... AND I am the only one who has toggled off swapping, hibernation etc ...

        Besides, say the SSD breaks after 7 years, could happen; how big were spinning rust drives 7 years ago? Just about 1TB, iirc ... now, imagine ... forget spinning rust, SSDs will very shortly kill spinning rust price-wise and capacity-wise ... easy, not even competition ... first multi-terabyte drives out already ...

      3. JeffyPoooh

        Re: Developers need what now?

        AC - you are so exactly correct.

        Coder drones with high-end PCs produce bloatware that barely runs for the rest of us with normal hardware. They should be assigned normal mainstream hardware at least two days a week. And dog food for lunch if they whine.

  4. Anonymous Coward

    The exception that proved the rule

    I bought 15 OCZ SSDs of various types (Vertex 4, Vertex 3.20, Agility 3) and capacities 18 months or so ago, for Windoze PC & Linux laptop builds. Not long before their takeover by Toshiba.

    I had to return one after a few months; the rest have been fine since.

    In the same time period, I had to return 2 sets of RAM that had failed in those PCs.

    Now Tosh is in charge, I'd like to think their QA is more in line with the rest of the industry.

    1. Tom 38 Silver badge

      Re: The exception that proved the rule

      Right before OCZ went bust and were bought by Toshiba, and after they garnered the worst reputation in the business, they started flogging off factory refurbs of their most problematic drives - Vertex 3 and 4 - for basically nothing. I think I paid £30 for a 128GB Vertex 3 and £60 for a 240GB Vertex 4.

      The Vertex 3 I use as an adaptive read cache for a ZFS array - if it fails, the system doesn't care one jot; I can even un-plug it and plug it back in without applications noticing. This one has never failed.

      The Vertex 4 I used as the OS drive on my desktop. It worked fine for three months, and then the firmware wedged if you tried to do random access - sequential access was fine, so I could move all my data off it with a simple "dd". By this point, OCZ no longer existed, and besides which, the 3-month warranty was up. I asked Toshiba if I could RMA it, they said yes, and they sent me a brand new Toshiba-branded Vertex 460, which thankfully has not failed even once.

      SSDs are much more complex beasties than mechanical disks, their firmware does a lot more work than the firmware in a HDD. I have no evidence, but I think the OCZ problems were mainly down to crappy firmware. Hopefully now Toshiba are on board, things are a little better.

  5. Fenwick

    "it is a really dumb idea to take the cheapest desktop hard drives you can find"

    Loads of people say this, and often have anecdotes about a life being destroyed, business failing, etc.

    But the only actual evidence I have managed to find says the opposite (Backblaze, Google, https://www.cs.cmu.edu/~bianca/fast07.pdf).

    Of course no study is perfect. I'm sure that many people think that all of this "research" is wrong. So can anyone please come up with a logical or evidence-based counter-argument for why "it is a really dumb idea to take the cheapest desktop hard drives you can find"?

    1. Nigel 11

      Re: "it is a really dumb idea to take the cheapest desktop hard drives you can find"

      It's in their interests to sell you "server grade" drives at twice the price. So of course they would say that.

      What they will never tell you is that it is very much in your interest to buy half your drives from one of their competitors. This is true even if it's provable that the competitor's drive is half as reliable.

      That's because a drive from a different manufacturer is far less likely to contain one of the same batch of defective components. Two drives with near-identical serial numbers will likely contain the same faulty components and therefore are likely to fail at or near the same time. Mirroring won't save you if this happens.

      Give me a manufacturer X desktop drive and a manufacturer Y desktop drive any day, over any two identical Server grade drives with near-consecutive serial numbers.

    2. Matt_payne666

      Re: "it is a really dumb idea to take the cheapest desktop hard drives you can find"

      Cheap drives...

      This is only anecdotal, but here is my experience with cheap disks...

      Seagate ST3000 3TB: I have 4 of those in my home array, 2 have died in 2 years, and a third is making some horrible noise... only after the second failure did I investigate and discover the lifespan is measured at 2000 hours, which is about 2 years in a server...

      Backblaze can get away with colossal disk failure by using disks in such quantities that they are disposable.

  6. Oliver Mayes

    I made the switch a few months ago, replaced my primary HDD with a Kingston HyperX 3k.

    It was fantastic, really fast and simple to install. It lasted 4 months before completely failing and taking my data with it.

    Now restored from backups I'm back to using a HDD. Think I'll wait another few years before risking it again.

    1. bdg2

      Re the Kingston HyperX 3K -- were you monitoring it with Kingston Toolbox?

  7. Fearitude

    I remember the pain of owning an OCZ SSD!

    Got myself a great looking, and cheap, OCZ Octane 128GB last time I rebuilt my gaming/development box.

    Disk 1, lasted 3 weeks. (got a replacement)

    Disk 2, lasted 5 weeks. (replaced again)

    Disk 3, lasted an impressive 2 weeks!

    This time I opted for a refund and upgraded to a 240GB Corsair which has worked flawlessly ever since.

    I won't be buying another OCZ product any time soon!

  8. GitMeMyShootinIrons

    Why solid-state disks are winning the argument?

    Well, for me, my battery life on my power-hungry monster Lenovo improved markedly when I switched out to an SSD, and it also ran quite a bit cooler too.

    And that's before I start looking at performance.

    For enterprise applications, there's a place for both. It's a bit like shiny 15K SAS/FC disks versus 7.2K SATA disks - performance quality balanced against capacity quantity, only with shinier SSDs at the performance end.

    1. Flocke Kroes Silver badge

      SSD can increase power use of a laptop

      The CPU spends less time waiting for the disk to spin and more time doing something useful.

      1. AndrueC Silver badge

        Re: SSD can increase power use of a laptop

        "The CPU spends less time waiting for the disk to spin and more time doing something useful."

        If the CPU is waiting for a peripheral there is something wrong with the hardware or the OS. It's true that at the highest level most applications still use synchronous I/O for disks but that's just laziness or ignorance on the part of the programmer.

        On Windows, asynchronous reads are done with overlapped I/O: open the file with FILE_FLAG_OVERLAPPED and pass an OVERLAPPED structure to ReadFile.

        Underneath the covers Windows will be doing everything asynchronously regardless. If a thread asks for a synchronous read Windows just blocks it and gets on with other threads until the read actually completes. The ability to block a thread while waiting for I/O has been a cornerstone of multi-tasking on PCs since the 1990s.

        Now it might be that a given application has nothing better to do (e.g. a text editor can't do anything until the disk has served up the text), but while that application is blocked the CPU will be doing work for other applications, services, or whatever housekeeping tasks it's got queued up for just such moments. In fact it's possible for I/O to be too fast: if the OS never gets time to do the housekeeping because of short-turnaround I/O, overall performance could suffer.
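        This isn't the Win32 overlapped-I/O mechanism itself, but the same idea can be sketched portably with Python's asyncio: hand the blocking read to a worker thread and watch the event loop keep doing other work instead of waiting.

```python
import asyncio
import os
import pathlib
import tempfile

async def busy_counter(stop: asyncio.Event) -> int:
    """Stand-in for 'other work': runs whenever the event loop is free."""
    ticks = 0
    while not stop.is_set():
        ticks += 1
        await asyncio.sleep(0)  # yield back to the event loop
    return ticks

async def main():
    # Write a small file to read back.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.write("hello from the disk")

    stop = asyncio.Event()
    counter = asyncio.create_task(busy_counter(stop))

    loop = asyncio.get_running_loop()
    # The blocking read runs on a worker thread; the event loop (our
    # stand-in for the CPU) is NOT waiting on it, so busy_counter
    # keeps getting scheduled in the meantime.
    data = await loop.run_in_executor(None, pathlib.Path(path).read_text)

    stop.set()
    ticks = await counter
    os.unlink(path)
    return data, ticks

data, ticks = asyncio.run(main())
print(f"read {len(data)} bytes; loop ticked {ticks} times meanwhile")
```

        The counter keeps ticking while the read is in flight, which is the point: the loop was never waiting on the disk, only the one task that asked for the data was.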

        1. Jaybus

          Re: SSD can increase power use of a laptop

          If it uses more power because the CPU is idle less, then it is of course doing more work with the same amount of power. So that is a lame argument, comparing apples to oranges. If we are not going to compare power for the same workload, then switching off the power supply will reduce power consumption to zero.

        2. Anonymous Coward

          @AndrueC

          What you say is only true if the CPU has something useful to do. If (as an example) I'm doing a text search on one million small files, it will be very slow on a hard drive because of all that seeking: the CPU sits around with nothing useful to do other than go into an idle state for a moment while it waits for the drive to seek to the right place and for the sector it needs to come under the read head. You might read a hundred files a second (or worse, if your disk is fragmented). That search will probably take three hours best case, so I may as well watch a long movie while I'm waiting. On a good SSD you can read tens of thousands of small files a second, and that search will be so quick I'll barely have time to switch to my browser window and read one Reg article.

          With the hard drive, my CPU utilization will be well under 1%, because it has nothing to do: it is spending all its time waiting on I/O. With the SSD, I'll have very high CPU utilization (ideally 100%, meaning the SSD would be delivering data faster than the CPU can search it).

          That's why upgrading to a SSD is such a massive performance improvement. If you gave me a choice of a laptop with a hard drive and a 4 GHz quad core CPU, or one with a SSD and a 1 GHz dual core CPU, I'll take the latter and run circles around the poor slob who is saddled with the other one in just about any task.

          1. AndrueC Silver badge

            Re: @AndrueC

            "What you say is only true if the CPU has something useful to do."

            No, it's always true. You just have to read my reply more carefully. I said that a given application might be waiting for the HDD (the text editor blocked because the file hadn't loaded) which is what you're talking about. But the main thrust of my reply was that the CPU is never waiting for the HDD. And it never is - not in a modern computer with a modern OS. The CPU sends the I/O request then it 'forgets' about it and looks for something else to do.

            Quite often as you say there is nothing else to do but the CPU is not waiting for the HDD. It's not waiting for anything really. It's an important distinction. If you are waiting for the postman it implies that you are looking out of the window. It implies anticipation on your part. That doesn't apply in any sense for the CPU. It isn't checking up on the HDD. It isn't pacing the metaphorical floor wondering where the data is. It's kicking back on the patio with a cool beer and when the doorbell goes it has to put the beer down and go and find out who it is.

            That's a big deal in computing. The ability for the CPU to go idle and do nothing has a significant impact on power consumption. It can also mean that the machine as a whole can do much more if you can keep enough CPU intensive tasks queued up.

  9. Pete 2 Silver badge

    Too many words

    > "Why should I use SSDs?"

    Ans: because they're faster. Next question please.

    Seriously, the reason people buy SSDs is the need for speed. Since they passed the threshold price (which is different for everyone: and we're talking home users here) it became apparent that unless you have a burning desire to record and keep for posterior every single episode of East Enders or you have a porn collection of willy-withering proportions, then the need for terabytes of storage or home NAS's is largely driven by marketing (and the fact that the disk manufacturers have to keep the unit price high, hence increased capacities).

    And even if you do need the odd 50 Gig for some purpose, it's a trivial matter to whip out a 64GB thumb drive and put your big stuff on that. Who knows, some strange people might even use them for backups. That way you can lose your entire life's work by accidentally dropping a USB drive down the lav'.

    Even Windows 8.1 leaves oodles of free space, even on a 40GB SSD, and with most people leaving their email in the cloud, those loving missives from Aunty Flo, replete with humungous videos of her pussycat, can be viewed with no hit on the home front. And if you do need more storage: USB drives are frighteningly large these days.

    1. Nigel 11

      Re: Too many words

      "because they're faster. Next question please."

      Also more shockproof

      Also quieter (silent)

      Also less heavy

      Also longer battery life, or even less weight by using a smaller battery.

      edit:

      Also, with dense-packed equipment in a server farm, less electricity eaten and less expensive air-con needed.

      Don't know, but I also suspect SSDs are happier at high ambient temperatures than HDs (industrial/embedded PCs)

      1. razorfishsl Silver badge

        Re: Too many words

        ER no…

        NAND flash starts to act 'strangely' with temperature variation or increase, as do all semiconductors.

        And you should consider more about what goes wrong, rather than what goes right.

        Go read some of the forensic papers about what a nightmare these drives are to recover data from, then imagine something goes wrong with your setup.

    2. JEDIDIAH

      Re: Too many words

      > it became apparent that unless you have a burning desire to record and keep for posterior every single episode of East Enders

      Even a machine that's used for light gaming and the occasional bit of web surfing is still going to need a significant amount of drive space. Significant meaning an amount that is EXPENSIVE if you are only considering SSDs. It really doesn't take much in terms of personal media files or just GAMES to fill up a smaller drive.

      Going strictly SSD only makes sense if you're made of money or the device is only intended to be a terminal connecting to some other machine with a decent amount of storage.

  10. Anonymous Coward
    Anonymous Coward

    wrong audience

    me. I'm a happy (hapless?) home user, with a higher-than-average knowledge of ehm... computing (all relative of course, I'd be a bottom-feeder here ;)

    anyway, for me and for the rest of the population, the argument presented here is completely irrelevant. What matters to me (and other pond life) is price, reliability and capacity. Those three factors, never mind their sequence, leave only one benefit of an SSD for me, i.e. as a system disk, where it offers enough improvement to justify the cost.

    ...

    hell, even IF the SSD prices were to match those of the HDDs (which I've seen hailed to happen "very soon" - for the last 5 - 7 years) - I would still prefer to go with an HDD. Because folks like me, even if we've heard of the word "backup", can't be arsed to do it all the time, every time. With the obvious result that once in a blue moon you DO regret this cavalier attitude, spend several hours (days, weeks) trying to recover your precious files, and are actually pleased you got back 67%. No can do with an SSD ;)

    1. Flocke Kroes Silver badge

      SSDs became cheaper over a year ago

      Put your DVD collection on mirrored spinning rust so any Pi in the house can deliver a film without you having to find the DVD, insert it into a player and wait through five minutes of unskippable adverts. Now that the bulk of your data is dealt with, take a look at what is left: I found 36GB (mostly cruft) on my laptop. Choices:

      Store's own brand 160GB 5400rpm drive for £23.99

      Intel 40GB SSD for £24.98

      Store's own brand gets me 124GB of wasted space. I thought 160GB spinning disks ceased manufacture years ago. If I am being optimistic, I would expect this drive has spent over a year gathering dust on a shelf. The pessimist in me thinks it is second hand, refurbished and then spent a year on the shelf gathering dust.

      An extra £0.99 gets me 4GB of wasted space on an SSD. 40GB sounds sufficiently old that I would wonder about this being a second hand drive. I bet Intel would send a bus full of lawyers to any retailer trying to sell second hand Intel SSDs as new.

      The cheapest spinning disk that a manufacturer would put his name on was £34.98 with 464GB of wasted space. I have a choice of 60+GB SSDs for less money leaving me plenty of space for a sack full of new kitten pictures.

      1. Anonymous Coward
        Anonymous Coward

        Re: SSDs became cheaper over a year ago

        What a pointless argument. So if you have more than 40 GB worth of data, what do you do?

        1. Flocke Kroes Silver badge

          Like I said

          Put the cat videos on a NAS spinning disk.

          What is left is usually tiny. I keep seeing computers with ½TB drives that are at least 90% empty space. Give it a year, and I will see more 1TB drives that are 95% empty. Perhaps you really do need to carry the complete Debian archive around with you (source code and binaries for 16 architectures is 1TB). That makes you unusual. Last time I looked at laptops, SSDs were not even an option. It would be nice to have the choice.

          BTW - I bet half the reliability problems people experience with USB and SDHC cards come from buying from a supermarket. The buyers there can get you a crate of fish with a good sell-by date and evidence that the fish have been stored and transported at the right temperature. The same people are less good at spotting the difference between real branded flash and flash made by the same people after hours with recycled half-capacity components and lying firmware. It is worth waiting a couple of days for delivery from a computing specialist - and cheaper.
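          Lying-firmware fakes report a full-size capacity but silently wrap or drop writes past the real flash. The same idea behind tools like H2testw can be sketched in a few lines of Python: write a distinct, deterministic block at regular offsets across the claimed capacity, then read them all back. A rough, destructive sketch - the device path and stride are hypothetical, and running this wipes whatever is on the target:

```python
import hashlib

BLOCK = 1024 * 1024  # verify one 1 MiB block at each tested offset

def block_for(i):
    """Deterministic, distinct 1 MiB pattern for slot i."""
    seed = hashlib.sha256(str(i).encode()).digest()
    return (seed * (BLOCK // len(seed) + 1))[:BLOCK]

def check_capacity(dev, claimed_bytes, stride=256 * 1024 * 1024):
    """Write a unique block every `stride` bytes, then verify each one.
    Returns the number of bytes that verified before the first mismatch."""
    slots = claimed_bytes // stride
    with open(dev, "r+b") as f:
        for i in range(slots):       # full write pass first: a wrapping fake
            f.seek(i * stride)       # overwrites early blocks with later ones,
            f.write(block_for(i))    # which the verify pass then catches
        for i in range(slots):
            f.seek(i * stride)
            if f.read(BLOCK) != block_for(i):
                return i * stride
    return slots * stride

# Hypothetical usage (destructive!):
# verified = check_capacity("/dev/sdX", claimed_bytes=64 * 10**9)
```

          A genuine drive verifies to its full claimed size; a half-capacity fake fails roughly halfway through.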

          1. eldakka Silver badge

            Re: Like I said

            The games I play at least once a week take up about 150GB.

            If you throw in the games I play at least once a month, we're talking about 500GB.

            Having game files on a SSD improves performance significantly. Most games have large numbers of very large textures (why do you think most gaming graphics cards have 1GB+ of video memory?) that are frequently loaded, unloaded and re-loaded.

            Game 'world' data can be huge.

            In games like Diablo there was a noticeable lag - a jerkiness - when the game had to load the next 'room' or set of tiles into memory, which could last a few seconds. While that doesn't sound long, frequent 2-3 second pauses are jarring when playing an immersive game.

            Autosaves can be large, and can cause pauses in games, sometimes 15-20 seconds, which again is jarring.

            Moving all this to an SSD pretty much eliminates the 2-3 second loads, and reduces autosaves to 'hiccups' rather than tens of seconds.

  11. Anonymous Coward
    Anonymous Coward

    Fail and fail hard

    Failure rates may be equivalent; failure modes, however, are not. In my experience hard disks give you warning when they are starting to fail, so you get a chance to take remedial steps. SSDs just stop working.

    If speed isn't an issue then I would always go hard disk (or maybe hybrid). The best strategy is to combine the two: an SSD for the OS and a hard disk for data storage.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fail and fail hard

      I have one SSD on my home PC, and all I use it for is the OS and Steam games. If it dies, I rebuild.

      It's a good SSD. I benchmarked it and found it outperforming the HDDs in my PC massively. In real life, however, it's not that much faster.

      On my laptop I'll stick with the HDD. Everyone I know who has installed an SSD in their laptop has always justified it by saying "It boots so much faster". Why do you need to regularly boot a laptop? I boot mine every few weeks tops when I have to install a patch that needs it. The applications I use sit in RAM and the only time the HDD gets used is if I need to open or save a file.

      Anyone who is installing an SSD without first maxing out their RAM is doing it wrong.

    2. Mikel

      Re: Fail and fail hard

      If you are relying on HDD soft failure modes to preserve your precious snowflake pictures, it is you who has hard failed. Redundancy.

      1. JEDIDIAH
        Devil

        Re: Fail and fail hard

        > If you are relying on HDD soft failure modes to preserve your precious snowflake pictures, it is you who has hard failed. Redundancy.

        ...which is much more feasible if you are not paying 4x the price you really need to.

  12. chivo243 Silver badge
    Joke

    Are they Round?

    Thought not, probably shouldn't call them disks!

    1. Nigel 11

      Re: Are they Round?

      Solid-state Storage Devices. How many people above are calling them disks, as opposed to SSDs? I'd have voted for SSSDs but never mind.

  13. Ben Liddicott

    Long-term deep storage

    SSDs require power to be connected every few months or they start to fade. Here, we are competing with tape though.

    1. Peter Gathercole Silver badge

      Re: Long-term deep storage

      I've noticed this. My old EEEPC 701, which is not used much now, has needed to be reinstalled each time I've left it a few months without being powered on.

    2. Anonymous Coward
      Anonymous Coward

      Re: Long-term deep storage

      Flash is non-volatile, there is no difference in data retention when powered or unpowered. There is no "refresh" mechanism by which having a SSD plugged in would cause it to better retain its stored data! If you have a SSD that is losing data after three months unplugged, it is defective and should be replaced.

      We've all got USB sticks sitting in a drawer that are years old that still work. I just dug up an old 16MB one that I probably haven't used for nearly a decade, and it can still be read just fine....

      1. Sandtitz Silver badge
        Boffin

        Re: Long-term deep storage @DougS

        "Flash is non-volatile, there is no difference in data retention when powered or unpowered."

        That statement is misleading. You could argue that DRAM doesn't need power (for a VERY short time).

        JEDEC SSD standard document JESD218.PDF (google it and read the cached version or use the bugmenot credentials) requires consumer SSDs to retain data for at least 1 year and enterprise SSDs to retain data for 3 months when powered off.

        Dell has an SSD FAQ answering the question "6. How long can I expect the drive to retain my data without needing to plug the drive back in?", and the answer lies between 3 months and 10 years depending on many factors.

        Page 17 of Fujitsu's SSD FAQ gives a minimum of 3 months for MLC and 6 months for SLC. Environmental and other factors affect the data retention period, of course.

        The Intel white paper (page 3) states: "There is a trade-off though. A standard MLC NAND SSD can retain data for 12 months without power. An MLC NAND SSD with HET can retain data for only three months without power."

        1. Anonymous Coward
          Anonymous Coward

          Re: Long-term deep storage @DougS

          DRAM is volatile; it needs to be refreshed. NAND does not have an equivalent refresh cycle, so there is NO DIFFERENCE in how long a flash cell will retain its contents whether the drive it is in is sitting in a closet or active in a server.

          The JEDEC standard is a worst case, and that's what manufacturers are quoting there because they don't want to guarantee more than the requirements since there is no market opportunity for doing so.

  14. HamsterNet

    Life

    Still waiting on my Kingston 60GB SSD to fail, so I can get a bigger one. It's just running the OS and a few choice apps, but it's been going for 4 years now.

    The speed difference between SSD and rust is akin to that between walking on crutches and a fighter jet.

  15. Rabbit80

    I have a nice mix of SSD and spinning rust..

    2x 512GB Corsair SSD (Striped) - OS, Large Applications

    2x 1TB Seagate SSHD (Striped) - Small applications, documents, downloads etc

    1x 3TB WD Red - Long term storage / backup

    1x 30GB Kingston mSATA SSD - Temp files and paging file

  16. Bryan Hall

    Sequential Writes != SSD

    For databases, that means logs. You don't want these anywhere near SSDs unless you want unpredictable, and often horrible (as in several-second), write waits.

    SSDs are great at random I/O, both read and write, and at sequential reads. But at least for now, they are horrible at large sequential writes due to the way they erase/write blocks. Large caches in SANs can't avoid that.

    1. Anonymous Coward
      Anonymous Coward

      Re: Sequential Writes != SSD

      You've got some horrible SSDs if you see horrible write waits. Many of the early SSDs had very low-quality controllers that could barely sustain more IOPS than hard drives can, but only a fool would use those for a database - ANY part of a database.

      Any decent SSD these days can sustain more than enough write bandwidth for all but the largest DBs. If they can't, get a better SSD. Hell, the Crucial MX100 in my laptop can sustain over 100 MB/sec, and that laptop is five years old!

      You are of course completely correct that you should put your redo and archive logs on hard drives (and disable any SSD tiering/caching). But not because that'll speed things up - if it does, you have crappy SSDs. No, the reason is that you don't want to waste valuable SSD space on them; you want it all used for the random I/O where SSDs truly excel.

      I'm not sure why you think caches in SANs wouldn't fix this if it actually were a problem (which it isn't): when you write to an array and the data lands in cache, the write is reported back to the server's OS as complete. It doesn't wait to be written to the SSD or hard drive. So even if the SSD really did sometimes take a couple of seconds to write something, the database would never know.

    2. Vic

      Re: Sequential Writes != SSD

      SSD's are great at random I/O both read and write, and sequential reads. But at least for now, they are horrible at large sequential writes

      I was doing large transport-stream captures onto SSDs a little while ago. Sequential writes were just fine (and *dramatically* faster than the RAID0 spinning rust I was replacing).

      If your SSDs behave differently - you might have broken devices...

      Vic.
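      A claim like Vic's is easy to sanity-check on your own hardware. A minimal Python sketch (sizes are arbitrary; the path determines which device gets measured) that times a sequential write - note the fsync, without which you would mostly be measuring the page cache rather than the drive:

```python
import os
import tempfile
import time

def seq_write_mb_per_s(path, total_mb=64, chunk_mb=4):
    """Sequentially write total_mb of random data in chunk_mb pieces,
    fsync at the end, and return the achieved throughput in MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the cache
    return total_mb / (time.perf_counter() - start)

# Benchmark whichever device backs the temp directory
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
try:
    print(f"{seq_write_mb_per_s(target):.0f} MB/s sequential write")
finally:
    os.remove(target)
```

      Larger totals give steadier numbers; small runs flatter the result.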

  17. nny4lenore

    I had the 150GB VelociRaptor; good drive, till it started booting only intermittently - exactly after the warranty expired.

    I've stuck with an SSD since and have had no issues.

  18. Anonymous Coward
    Anonymous Coward

    No contest

    Any home user who argues against SSDs has simply never experienced the (massive) improvement.

    However, on a desktop they should only really be used for the OS and applications, with an HDD for storage. Windows 7 takes up to 30 GB and other apps maybe another 30 GB. Since SSDs operate best with significant amounts of free space, 128 GB is a sensible minimum size.

    Reliability wise, with this kind of setup, the mechanical HDD is likely to fail first (unless there's a design fault, cough, OCZ).

    SSDs aren't just about boot-up times; they make the whole PC far more responsive, every time you do anything. In a laptop with at least SATA2, it'll feel like a new machine.

  19. cons piracy

    You say tomato, I say...

    Remember the Hitachi 'deathstar' 3.5" and the 'travel sick' 2.5" ?... well those were bomb proof compared with the OCdeadZ

    P.S. Only works if you're from our side of the pond, because we say 'zed'... you know, a bit like how the Yanks forget there's a second 'i' in aluminium

  20. JEDIDIAH
    Linux

    Old news.

    This is old news. I was working with SSD storage back in 2001. The same issues and limitations existed then as they do now. People and companies are not made of money. Some big-talking amateurs like to claim that money doesn't matter, but it does. It's an inescapable part of engineering.

    Any solution you pose for any problem needs to be worth the cost.

    That doesn't change just because it's 2014 and some blogger can get his hands on SSD tech now.

    1. Anonymous Coward
      Anonymous Coward

      Re: Old news.

      "Any solution you pose for any problem needs to be worth the cost."

      Correct, though not in the way you meant.

      In 2010, the PCs at my company of 300 employees took 10-15 minutes from switch-on to reach a usable state, due to the corporate security software causing an HDD bottleneck. Replacing the drives with SSDs would have reduced this to under 1 minute. Using the average manhour rate, this would have saved over £250K over 3 years, for a £40K outlay. That assumes each PC is switched on daily - it ignores the hundreds of manhours accruing from the repeated delays of a few tens of seconds to open applications throughout each and every day. It also ignores the fact that the initial software installation time (Windows and apps) for each PC dropped from 2 hours to 30 minutes.

      Needless to say, the company ignored the recommendation (due to cost!?!) and is currently upgrading the machines with HDDs (since there was no "perceivable cost benefit").

      They really don't understand that time is money...
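      The shape of that calculation is simple to sketch in Python. The inputs below are illustrative only - the commenter's actual manhour rate and working-day count aren't given - but even conservative guesses dwarf the quoted £40K outlay:

```python
def boot_wait_cost(employees, minutes_saved_per_day, hourly_rate,
                   working_days_per_year=230, years=3):
    """Rough value of staff time lost waiting for slow boots."""
    hours_lost = employees * minutes_saved_per_day * working_days_per_year * years / 60
    return hours_lost * hourly_rate

# Hypothetical inputs: 300 staff, ~11 minutes saved per boot, £20/hour
saving = boot_wait_cost(employees=300, minutes_saved_per_day=11, hourly_rate=20)
outlay = 40_000  # quoted cost of fitting SSDs
print(f"~£{saving:,.0f} saved over 3 years vs £{outlay:,} outlay")
```

      With these inputs the saving comes out well above even the commenter's £250K figure, which presumably used a lower effective rate or counted only part of the wasted wait.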

  21. Phil Koenig

    Data Recovery

    A monkey could tell you that performance on most things improves with an SSD. That's the easy part.

    The less obvious part is what happens when an SSD fails. As others here have mentioned, with HDDs you often get some kind of warning, whether in the form of SMART statistics, slower performance or just good ol' "funny noises" emanating from the vicinity.

    Whereas not only is it less likely you'll get any advance warning of an impending SSD failure, when it does fail, what do you do? There is likely no expensive practitioner to send it off to who can replace the controller board or swap the platters into another HDA and read the imperiled data that your hardware suddenly became incapable of interacting with. If the NAND goes bad, I'd imagine in most cases you are simply SOL.

    In the course of my work I have had many cases of data on failed HDDs being recovered in precisely that way, so this is not a theoretical question for some of us in the biz.

    At the very least, I think it is more critical than ever to have an effective and tested backup strategy in place, if one is storing important data on SSDs.

    1. Anonymous Coward
      Anonymous Coward

      Re: Data Recovery

      Solution: OS + Apps only on SSD, everything else on HDD.

      If the SSD fails, you haven't lost anything important. And reinstalling Windows on the replacement SSD takes 10 minutes.

      And with 4 TB, USB 3 external HDDs at around £100 these days, backing stuff up really isn't a problem.

  22. Infernoz Bronze badge
    Boffin

    SSDs are great for wide random access, seq speed, portable use and low power use.

    The main reason I got SSDs is not sequential speed; it is that they don't have the several-millisecond head-seek delay which spinning disks have for wide random access. That removes loads of delays and removes the need to defragment disks.

    Anyone who uses SSDs for bulk storage either has more money than sense, or really has a good use case for no seek time and/or faster sequential speed.

    I have separate Samsung SSDs: a 128GB disk for OS/apps and a 256GB for fast data use (including random-access maniac Calibre); everything else is on a local WD Black, or several WD Reds in a NAS. All these disks are in either RAID1 or RAIDZ5 (6), because I expect one disk to fail at just the wrong moment, given experience, including with WD Reds; I regard this as especially important for SSDs!

  23. sianderson

    I've been using an OCZ Vertex 3 for a couple of years with no issues at all, so it's definitely not a 100% failure rate

  24. sawatts

    as cheap memory

    One recommendation I found was to treat SSDs as cheap memory, not expensive HDDs.

    Although this was in the context of big data systems, and it is all back to tiering storage in the end.

    1. Anonymous Coward
      Anonymous Coward

      Re: as cheap memory

      Which takes me to a client who, no matter how much we insist, will not put the SQL Server tempdb database on SSDs, so we'll have to watch the whole system wait for tempdb....

  25. Anonymous Coward
    Anonymous Coward

    Enterprise perspective

    From an enterprise perspective, it's not the IOPS or throughput that is appealing with flash, it's the latency (or lack of it).

    Pushing <nnnnn> IOPS or <x>GBps from SAS drives isn't a problem, just add spindles. But you will never get below 4-7ms for a random read IO, no matter how sophisticated the caching algorithms in <insert your array here> are.

    With flash, and good, optimized array code, you can get random read IOs down to 600-1200µs. That is a huge improvement for, let's say, an OLTP database doing 3000 read IOPS. VM farms are another example, where the latency dramatically improves the overall responsiveness of the VMs.

    Any application doing a lot of small reads will benefit from flash, and that's where we try to implement it. Any other storage engineers here who have different opinions/experience?
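    The latency point can be made concrete with a little arithmetic: a single synchronous I/O stream completes at most one operation per round-trip, so per-thread IOPS is just the reciprocal of latency. A quick Python sketch, using round mid-range figures of ~5 ms for a spinning-disk random read and ~900 µs for a flash array:

```python
def per_thread_iops(latency_us):
    """One synchronous I/O stream completes at most 1/latency ops per second."""
    return 1_000_000 / latency_us

disk = per_thread_iops(5_000)  # ~5 ms spinning-disk random read
flash = per_thread_iops(900)   # ~900 us flash-array random read
print(f"disk: ~{disk:.0f} IOPS/thread, flash: ~{flash:.0f} IOPS/thread")
```

    Adding spindles raises aggregate IOPS but not this per-thread ceiling, which is why latency, not throughput, is what flash really buys you.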

    1. Aitor 1

      Re: Enterprise perspective

      IOPS per thread is what you get if you go to SSD.

      I prefer to have the flash inside the server, way faster, but a pain to manage.
