No story here
That may well be the case, but this "product" has no name, no price, no availability, and no detailed IOPS and endurance information, making this Register story totally premature.
Samsung has started mass production of the world's first QLC (quad-level cell) consumer SSD. QLC is 4 bits/cell flash technology and a next step in cell bit capacity from the current TLC (3 bits/cell). Foundries belonging to Intel/Micron, SK Hynix and WD/Toshiba have started producing QLC chips and, in some cases, SSDs already …
Calling it a consumer device, then failing to release the pricing, is telling for me. I pretty much follow the saying "If you need to ask the price, then you can't afford it". And given Samsung's ambition to be a "premium" manufacturer, I reckon this is gonna come in at about £750-1,000.
But wait, what would TheReg be without premature clickbait? It's the entire reason we come here instead of some reputable site, isn't it? Here I am, for example.
TheReg is partially right, even with its trademark suspect reasoning: hard drives are already mostly dead in PCs. Well, except for the really cheap ones and that island is steadily shrinking. And except for media packrats who rely on hard drives to archive their porn collection.
Hard drives are alive and well in server farms. While SSD continues to nibble at the edges, the economics of mass storage that doesn't mind 10ms latency are compelling. That amounts to the vast majority of data storage in the world. With the likelihood of further improvements in areal density, it will be many years before hard drives are as dead as tape.
"Tape is dead?"
It is in the consumer sphere. When was the last time you saw a tape drive at the local Best Buy? At least when QIC drives were around, consumers with a bit of cash could use them. No such analogue exists today, much as I wish there was, as we could really use some reliable way to archive a few TB at a time of stuff. As of now, the closest solution out there is rotating external hard drives.
Each time you add another level, you add complexity to the drive circuitry plus you reduce endurance and so you need more and more complex error correction algorithms to safeguard the data. And all for less and less gain in capacity. From 3 levels to 4 only gives you a one third increase, and (heaven forbid) from 4 to 5 will only give you one quarter. Is it really worth it?
" Is it really worth it?"
Not in my opinion. These cells store multiple bits by using multiple voltage levels instead of just on/off, high/low or +ve/-ve, i.e. it's an analogue system with all its inherent problems. There's a reason we switched from analogue to digital computers 70 years ago, and those reasons haven't gone away. One of them is that as components age, the charge they can store drops. That's not a problem if it's binary, since the charge generally has to drop a long way before a 1 becomes a 0. However, if you have multiple analogue voltage levels, it won't take much degradation to flip a 4 to a 3, or a 3 to a 2, etc.
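That failure mode can be sketched numerically (hypothetical numbers, not any real cell's thresholds): model a QLC cell as 16 evenly spaced voltage levels and read it back by rounding to the nearest level. A drift of just over half a level's spacing (about 3.3% of full scale here) silently changes the stored value.

```python
# Hypothetical sketch: a 16-level (4 bits/cell) flash cell read by
# rounding the sensed voltage to the nearest of 16 evenly spaced levels.
# Level spacing is 1/15 of full scale, so a drift of just over half
# that (~3.3% of full scale) flips the stored value to its neighbour.

LEVELS = 16  # QLC: 2**4 distinct charge states

def read_cell(voltage):
    """Quantise a sensed voltage (0.0-1.0) back to a level index."""
    return round(voltage * (LEVELS - 1))

stored = 4                       # the value we programmed
ideal_v = stored / (LEVELS - 1)  # ideal cell voltage for that value

print(read_cell(ideal_v))         # 4 - reads back correctly
print(read_cell(ideal_v - 0.02))  # 4 - small drift still within margin
print(read_cell(ideal_v - 0.04))  # 3 - drift past half a level: bit error
```

With only two levels (SLC) the spacing would be the whole voltage range, which is why binary tolerates far more drift.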
And yet 'digital' data transmission for cable Internet and digital TV gained massively from using QAM16, QAM64, QAM256 and so on.
This works by using analog levels to cram more bits into the signal, which itself is carried on a carrier wave.
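For the record, the relationship is the same one being argued about for flash: each doubling of constellation points buys exactly one more bit per symbol. A plain illustration:

```python
# Bits per symbol for a QAM constellation = log2(number of points);
# doubling the points adds exactly one bit, same as adding a bit to a
# multi-level flash cell doubles its charge states.
import math

for points in (16, 64, 256):
    print(f"QAM{points}: {int(math.log2(points))} bits/symbol")
# QAM16 carries 4 bits, QAM64 carries 6, QAM256 carries 8
```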
All we care about as consumers is the end result, price, reliability, performance etc. If they make it work and it keeps working, the magic required, be it error correction or redundancy will not matter.
After all, the thought of the head-to-platter distance for spinning rust, and of giant magnetoresistive heads, had all the naysayers telling us it could never work / be reliable - I think modern hard drives are a miracle to this day.
"This works by using analog levels to cram more bits into the signal, which itself is carried on a carrier wave."
Everything is analogue if you go down far enough - the point is where the analogue values come from. The relative levels of amplitude, phasing or frequency compared to the carrier signal strength of a radio signal will always be the same, as it's actively generated. The hardware in an SSD however is fixed, and so maximum voltage levels in the cells will decline as they age - but more to the point, not uniformly between them. So the firmware can't simply adjust its voltage level parameters to account for it.
> Everything is analogue if you go down far enough
...and if you go down further than that, it's all digital/quantised again! (^_^)
(Disclaimer: yes, I know some physicist will probably come along and point out that this is misleading, inaccurate or oversimplified.)
>Michael Strorm
>> Everything is analogue if you go down far enough
>...and if you go down further than that, it's all digital/quantised again! (^_^)
As a physicist, I know that once it gets small enough, everything gets uncertain.
How many electrons have to jump how far for the state to change in a QLC "gate" is the big question, and how long it takes. If the electrons take an average of 50 years to tunnel, that's going to lead to a high error rate across 4 trillion of them.
However, I suspect Samsung will have thought of that.
> The hardware in an SSD however is fixed
More or less, but it changes areas dynamically from QLC to SLC or somewhere in between as the drive decides it needs caching or not - and it's constantly moving things around to keep everything healthy, plus the level of error correction being applied is mind-blowing and adjacently addressed bits aren't necessarily stored physically adjacently.
> and so maximum voltage levels in the cells will decline as they age
These aren't electrolytic capacitors with leaky insulators. They're silicon electron wells - about the best insulated form of FET you can devise. It's about coulombs, not volts.
> but more to the point - not uniformly between them.
They already do.
> So the firmware can't simply adjust its voltage level parameters to account for it.
The firmware already does and is already dynamically recalibrating itself over areas of the die to account for ion drift, else large chunks would become unusable very quickly. That's the point of having all that processing power onboard to actively keep track of and manage the health of the NAND.
Samsung wouldn't be shipping QLC without a large level of confidence in their product - and whilst they put a 3-year warranty on their _consumer_ drives, WD and Seagate have so much confidence in their consumer devices that the best they'll offer is 2 years(*), but more usually 12 months - and I've had to replace far too many drives under warranty in that 2-year period for purchases made since 2011.
(*) One of their fabulous weasel antics is to refuse to honour warranties on anything sold via an OEM and point the customer back to that supplier - meaning if they gave you a 6 month warranty or went toes up, that's what you got. Samsung have zero quibbles about directly honouring warranties.
"Everything is analogue if you go down far enough - the point is where the analogue values come from. The relative levels of amplitude, phasing or frequency compared to the carrier signal strength of a radio signal will always be the same, as it's actively generated. The hardware in an SSD however is fixed, and so maximum voltage levels in the cells will decline as they age - but more to the point, not uniformly between them. So the firmware can't simply adjust its voltage level parameters to account for it."
So maybe they'll do a really slow refresh of cells, like DRAM with a really long refresh cycle?
We were still seeing mainframe computers with analogue/digital architecture in the 1960s. Indeed, they were popular enough to have two flavours depending on which technology "drove" the beast.
I'm 63 and I remember them from when I was about 11. So the "70 years ago" figure isn't anywhere near right.
I believe it also mis-states, albeit contextually, what an analogue computer is and does. Analogue computers, which were still available from Heathkit and other suppliers in the 1970s, are spectacular for modelling continuous solutions to calculus problems. They don't do arithmetic, at least not well, and the ones I've seen and used are not programmed using a high-level computer language, but with a series of patch cables linking the various integrator circuits - rather like the way the old DX7 used patches (albeit digitally executed) to cross-connect the six operators that made the noises. The analogue computer at Coventry Tech was used to model n-body motion problems and on open days was used to display a snooker game.
The analogue computer was thought to be important when digital computers had low clock rates and no memory to speak of. Now the discontinuous nature of the calculation can be hand-waved away as too small to matter, and the results can be smoothed using mathematics anyway, now that there is memory available for the functions involved.
But years ago that wasn't the case.
I'm not sure why you feel the issue of bits flipping can't be mitigated the way it is for "traditional" storage techniques (which can also fail in this way) by use of a checksum. I believe SSD storage has other on-chip mitigation stuff too that deals silently with cell failures, though I'm not clear on the details.
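As a toy illustration of the on-chip mitigation being alluded to: a Hamming(7,4) code (far simpler than the BCH/LDPC codes real SSD controllers actually use, but the same principle) adds three parity bits to four data bits and can locate and silently repair any single flipped bit.

```python
# Minimal Hamming(7,4) sketch: 4 data bits gain 3 parity bits, letting
# the reader locate and correct any single flipped bit - the principle
# behind (far more sophisticated) ECC inside SSD controllers.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(code):
    """Return a repaired copy of a codeword with at most one bad bit."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 * 1 + s2 * 2 + s3 * 4  # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1
    return c

word = encode([1, 0, 1, 1])
damaged = word[:]
damaged[4] ^= 1                  # flip one bit in "storage"
assert correct(damaged) == word  # error located and repaired silently
```

A plain checksum can only detect the flip; the extra structure here is what lets the reader work out *which* bit to flip back.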
I did some work on a huge machine that converted movie film into video. It was a complex digital and analogue computer. The machine was decades old but still in use because the job could not be handled digitally in 1998 (according to the machine's owner).
Digital is a huge waste of power and bandwidth. It takes about 7 parallel digital circuits to match the precision of one analog circuit. When it comes to mathematics, digital needs massive gate arrays and microcode to perform the same task as a handful of analog components. Propagation delay hits big digital circuits pretty hard, and workarounds further increase complexity. Analog computers are still alive and well whenever speed and efficiency are more important than precision.
I suspect the AI singularity will happen when analog and digital processors are efficiently merged together. Last time I read about it, flash cells were going to be the parameter buffers between the two.
You are a bit confused about the distinction between analog and digital. At the nanoscale it's all analog (and at the femtoscale it's all digital again, but that's a bit deep for this post - just google quantum numbers). Those rather alarmingly analog-looking traces are coerced into representing digital values via thresholding and latching effects. Ever seen an ether bird? It's a bit like that. A wind-up clock is like that too: the spring is analog but the tick is digital.
Multilevel cells likewise rely on thresholding and latching effects; there is just more than a single threshold value. Still digital by nature, just like single-level cells. Maybe the threshold voltages or whatever get closer together in multi-level and risk more errors, or maybe not. Single-level logic gets shrunk and packed together as tightly as possible, which also increases the risk of error. Given the same number of bits stored in the same area, it is far from clear that multi-level cells have the higher risk of error; it may be just the opposite.
"Each time you add another level, you add complexity to the drive circuitry plus you reduce endurance and so you need more and more complex error correction algorithms to safeguard the data."
And conversely, HDD technology is doing fancy complex stuff like HAMR and shingling, increased track density. So what if the SSD needs advanced software to make it work - we're all pro-technology here, aren't we? And you're reliant on error correction to read this web site and post here, or in almost any form of digital audio operation, from making a phone call to listening to music.
The other thing is that all the studies I've seen (ignoring those from HDD and SSD makers and suppliers) suggests that notwithstanding the known SSD endurance limits, the service life of SSDs is comparable to enterprise HDDs, and the in-service failure rate is considerably lower for SSD than HDD. Whilst it is reasonable to remain sceptical about QLC as with any new technology, I would expect it to do what it says on the tin. I will cheer on the early adopters.
I would say it's less about familiarity and more about cost. HDD is still cheap enough (relatively) that you can have multiple copies for the price of one piece of replacement SSD media.
Tech continues to improve on both fronts. I can almost put my entire hoard on a single piece of spinning rust now. I don't even want to contemplate the cost of duplicating all of my data with SSD.
Other entities have WAY more data than I do.
re. "all the studies I've seen .. the service life of SSDs is comparable to enterprise HDDs"
Well, those studies must be based on SSDs that have been in use for 2-3+ years, which probably means they are SLC or MLC (2 bits/cell), not TLC/QLC. Even the manufacturers of TLC/QLC admit their endurance is less, which is why they are targeted at consumers rather than production systems. The other worrying trend is that these short-lifespan SSDs are increasingly being integrated into devices, so when they fail the whole device is a write-off.
Using voltage levels to cram more data on a comms link doesn't justify doing the same for storage devices. A comms link only has to get the right data once, if an error is detected the data is re-sent. A storage device needs to store the right value indefinitely, if an error is detected, an algorithm must guess what the correct data was.
Really_adf correctly noted, "Adding a bit doubles the number of different values a cell can store."
It works the other way too.
One must unfortunately double the number of different voltage levels the cell measuring subsystem can reliably distinguish to add just one more bit.
If they're at 16 levels for 4 bits per cell, the next whole step would be 32 levels for 5 bits per cell.
I expect that they'll invent fractional (smeared) bits first. Maybe about 24 levels and about 4.5 bits per cell.
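The arithmetic behind both points can be checked in a couple of lines (a plain illustration, nothing drive-specific):

```python
# Two sides of the same relationship: n bits per cell needs 2**n levels,
# and L distinguishable levels recover log2(L) bits.
import math

for bits in (1, 2, 3, 4, 5):
    print(f"{bits} bits/cell needs {2 ** bits} levels")

# the fractional-bit speculation above: 24 levels would hold log2(24) bits
print(round(math.log2(24), 3))  # 4.585 bits per cell
```

So the guessed "about 24 levels" would actually land slightly above 4.5 bits per cell.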
> Every level adds one bit of information per cell
Hmm is that so? To encode 4 bits you need 16 distinct voltage levels, no?
In that sense the L in QLC is not a "distinct voltage level" but more than that. If you only had 4, say 0, 0.33, 0.66 and 1, you could encode 2 bits. Just like the old NeXT monochrome looked good with 2-bit graphics, showing black, white, light grey, and dark grey.
"To encode 4 bits you need 16 distinct voltage levels, no?"
Right, and so L)evel has always been a gross misnomer, it should be B)it. A multi-bit flash cell has 2**L voltage levels, using the highly misleading L)evel terminology.
Like this: https://www.cactus-tech.com/assets/images/e/MLC-NAND-Cell---4-States-of-Electrons-9a049c6e.png
"thus doubling the capacity".....
....and ironically, exponentially increasing the ignorance.
Kudos to the guy who explains correctly that it doubles the charge states, which doesn't double the bits per cell.
For people who have missed the point, I'm just off to practice my two times table according to LeoP:
1 (SLC), 2 (MLC), 3 (TLC), 4 (QLC),......
"QLC (quad-level cell)....is 4 bits/cell"
Then it's 16 Levels (not "Quad" = 4 Levels).
Sixteen levels can define 4 bits. And 4 bits can define 16 levels.
They should find the originator of this "Level" misnomer, and give them a good slap. It doesn't even reach the lofty heights of being a dumb error, it's worse than that.
I'd not looked in a while, so I just totted up the prices and capacities of some SSDs and HDDs.
SSDs are now around the 16p/GB level (512GB drives are the sweet spot), up to about 22p/GB for the biggest ones (small ones are also poor value).
HDDs are down to 2p/GB (for 3-4TB drives) or about 3p/GB for very small/large drives.
So if Samsung think their new SSD is going to compete on capacity with hard drives, they're going to have to sell it for about 8-10 times less than their current generation of SSDs. That's unlikely, so the age of hard drives for bulk storage is still with us, and probably will be for a few years yet.
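A back-of-envelope check of that comparison (using the per-gigabyte reading of the figures above - 2018 prices, which will date quickly):

```python
# Rough cost comparison from the pence-per-gigabyte figures quoted above.
ssd_p_per_gb = 16   # ~16p/GB at the 512GB sweet spot
hdd_p_per_gb = 2    # ~2p/GB for 3-4TB drives

ratio = ssd_p_per_gb / hdd_p_per_gb
print(f"SSD is {ratio:.0f}x the cost per GB of HDD")   # 8x
print(f"1TB SSD ~ £{1000 * ssd_p_per_gb / 100:.0f}")   # £160
print(f"4TB HDD ~ £{4000 * hdd_p_per_gb / 100:.0f}")   # £80
```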
Luxury!
When we got a fancy new 286-based PC with removable drive technology so we could all use it as ourselves, the drives had a ridiculously large 5MB on each one!
Given that we used floppy net, often with the Royal Mail as the (literal) transport layer, the speed increase was something else.
And 'git' was done by saying 'Now press delete four times and type this instead...'
The first computer I worked on that was graced with a hard drive was a Burroughs B21. It had 256K RAM and a massive 5MB drive which contained the OS (BTOS), the Pascal compiler (the C compiler was still a year or two away) and all the programs and data that I was developing. One summer, which wasn't quite as hot as this one has been, it spent several weeks with the case off as the heat was making the hard drive power down!
Ignoring for the moment the IBM 360 and the IBM 1620 at college, the first drive I ever worked with professionally was a CDC Hawk drive with 5MB fixed and 5MB removable.
When I described that to a colleague the following year he exclaimed, "five megabytes? You'd NEVER fill that up!"
All the engineers where I was working got dual-floppy IBM PCs - managers got an XT (mainly for the same reason they had offices to themselves and speakerphones and us engineering grunts didn't). My co-workers thought I was nuts for getting an XT clone (Columbia) with the whopping 10MB Tandon HD. That was soon outgrown, as was its 20MB replacement. My next computer had a 40MB HD - more storage than DOS could even handle w/o help. I soon got tired of buying replacement disks, so when my 486 arrived I tossed in a pair of 200MB SCSI disks. Now we don't think twice about running to the store for TB drives. For comparison, up until ~10 years ago I managed a cluster at an F500 company running their sales database applications that only had access to about 1.5TB. We've become the Everett Dirksens of home computing: a TB here, a TB there, and pretty soon you're talking about real disk space.
My first hard disk was 20MB, the size of a planet, and took about 90 seconds to spin up. It was in an ICL / Three Rivers PERQ 1 from 1979/1980 - lovely machine: 1024x768 raster (black and white), running a graphical window manager, 1MB of RAM, 1MB 8" floppies... the best graphics tablet I've ever used...
Bear in mind this machine predated the 1K ZX81 ... And no, I didn't get it until 1991 - Manchester Uni flogging off old kit, for £50 - I do believe they were over £100,000 new!
I remember my first hard disk experience.
A local emporium had acquired a bunch of 10MB MFM drives, and I wanted one badly because swapping 5 1/4" floppies was such a pain.
I saved my pocket money and did odd jobs for months and months. Then, imagine my surprise when fitting it I discovered that my Amstrad 1640 didn't have a controller card and that would be another £450, or about four times the cost of the 'bargain price' hard disk drive itself...
I bawled my bloody eyes out!
Now we are wringing our hands about this development. I remember being told that 200MB was the theoretical maximum for IDE. They'll make this work if it is worth a gain of 1/3rd. They'll make it work well, too. Eventually SSDs will be 'the right price' and spinning drives will disappear.
You could get an 8-bit ISA card with a 30MB hard drive on it to fit in one of the Amstrad 1640's ISA slots - similar to Intel's Optane 900p PCIe drive. Well, when I say similar: it had a spinning metal hard drive mounted on an ISA hard drive controller card and it took up an expansion slot - so not totally dissimilar, but IOPS were not in the same range...
C15? C15? Wow, you lucky guy!
Try finding your program when it's somewhere on a C90 and the tape counter's broken.
Not to mention the pain of discovering that even after upgrading to a whopping 3K of RAM you don't have the space to implement a high-score table as well as use colour graphics.
Youngsters these days...
Ahhh the Amstrad 1640 - to get round the controller issue I bought a 20MB ‘Hard Card’ which was not quite the right size (bit too tall on its edge) to fit in the compartment at the back and then close up the cover but I could live with that for the wooooo 20 MEGABYTES - unheard of capacity.
I also remember creating RAMdrives with the 1640 to run WordPerfect etc in RAM which felt out there on the edge at the time (for me anyway!).
And ditched GEM for a shell program (Brown Bag software rings a bell) to run various programs I had at that time.
Wow, 20 Megs and a CGA screen - I was living it big!
> eighty whole megabytes, that's room for almost eighty floppy disks!
Look at Mister I'm-So-Fancy-With-My-Megabyte-Floppy-Disks! I bet those were cutting edge 3.5" ones as well?
The drive I had for my Atari 800XL held 120KB per side of a 5.25" disk (for a total of 240KB if you were prepared to flip them). (#)
I'd have killed for a 20MB hard drive, let alone 80MB! (No, really I would have and they actually made one for the system. Unfortunately it cost £750, the equivalent of around £2000 today, which I didn't have when I was eleven years old....)
(#) And that was still hugely better than loading via the excruciatingly slow cassette drive...
Ah, Atari 800XL floppy drives - huge things, and I had a pair of them; one would only work after it had warmed up for 10 minutes.
Loved my 800XL; even had a 256KB RAM pack hanging off the back and got it to print to a Brother thermal typewriter... those were the days. Sold it for a good price just before the market crashed for home PCs, to get an IBM PS/2 Model 30 via the supplier at work for a decent discounted price.
> my first harddrive was eighty whole megabytes, that's room for almost eighty floppy disks
Well lah-de-dar. Look at me and my multi megabyte scale storage nodes. We had it tough. We had to store our data on a tape using an unwound paperclip and a steady hand, magnetised by rubbing your feet on the back of a cat. But we were 'appy back then.
"So If Samsung think their new SSD is going to compete on capacity with hard drives, they're going to have to sell it for about 8-10 times less than their current generation of SSDs. "
If they sell for 4 times the price of HDDs, most buyers will bite their arms off. That 3 year warranty is a good indicator of expected lifespan for starters.
Then there's the vastly reduced seek times, power consumption and size, and massively increased bandwidth (mechanical drives top out at about 105-120MB/s sequential and drop as low as 5MB/s at 120-180 IOPS random - and at that rate of sustained random IOPS, large enterprise drives shake themselves to death in 6 months, let alone consumer ones).
The introductory price of the 4TB QLC drives in "evo" format is unlikely to be above £600 inc VAT - which puts them at about 4-6 times the price of NAS drives such as WD Reds, and I'd happily drop them into my 32TB ZFS NAS rig knowing that they'd save me about £75/year apiece in power bills alone.
Bear in mind that 2TB SM863s have come down from £2k to £1k, whilst you can get old-stock 860 Evo 2TB for £435 (M.2) / £535 (2.5") and 970 Evo 2TB M.2 for £630, with the 860 Evo 4TB SATA being £880 (all inc VAT) - these are all about to face runout discounts.
For some use cases, yes. But SSD greatly drops latency and offers higher IOPS, higher density, lower power usage and better reliability. In addition it potentially allows dedup, compression and encryption in situations where spinning disk doesn't.
So cost per GB can be very misleading and not always the same as VFM.
On the other hand, all of my bulk storage sits on the network to be accessed by the rest of the house. So my top speed is really the speed of my network. So SSD being faster really doesn't buy me anything for my bulk storage.
All of the "bells and whistles" you are fixating on are not likely relevant to most consumers.
I agree exactly as you wrote it: “if....compete on capacity....the age of hard drives as bulk storage”
But.... how much bulk storage do we need, really?
For consumers, probably 1TB is enough for everything but video and photo *archives*.
What’s wrong with home NAS or cloud? *Somebody* has to have the HDDs, but it doesn’t have to be in the iPad.....
What’s wrong with home NAS or cloud? *Somebody* has to have the HDDs, but it doesn’t have to be in the iPad.....
Sure, but as a consumer (the article is specifically not about enterprise gear), when you're looking for a storage medium for your archives, where speed and IOPs don't matter much, are you going to pick up some 1TB spinners for ~£35 or a 1TB SSD for £180?
Once the SSD price is down to maybe £70 for 1TB (ie only twice the spinning rust), then people will start using them more for bulk storage.
(Or take the middle road and combine harddrives with an SSD cache, but we're getting above a normal consumer level then)
" The future is not bright for desktop and notebook disk drives."
That has been true since SSD became "consumerised", but still, here we are buying hundreds of millions of traditional HDs each year. In the field of computing that's not bad for a '60s technology!
SSD will exceed HD shipment volumes soon, but it will take a lot longer to match price/storage. The new paradigm will be (if not already) SSD for boot / apps / working data and HD for storage / archive.
Which could be on a single hybrid drive.
Which will be just as expensive, if not more so, than the straight-up SSD.
Of course, my 10-yr old Dell laptop has 2 drive bays, so I could just get a smaller SSD for OS & home dir, and put media and documents on the HDD it has now. But so many other things are wrong with it I probably won't bother.
And thus a single point of failure if not backed up to an external device.
I am still on spinning rust but my next machine* will be all SSD for speed and physical shock resistance.
Apart from the big-iron exchangeable disc packs (8MB) my first hard drive was a 5MB Winchester on a CPM machine with 8" floppies, the HD replaced one of them. 1980 IIRC.
*assuming I outlive this one!
My view is that HDDs die when SSDs get their production cost lower than an HDD's. While companies will probably exit the HDD market, this just props up the others, until only one is left. When that happens, that manufacturer will probably just ditch all of the R&D (no point in future development after a certain point) and just churn out cheap drives on their existing equipment.
Taking tape drives as a point of reference, I'd expect that HDDs will be with us for the foreseeable future, certainly the next 10 years, probably the next 20, although likely dropping into niche markets like SANs, which have a good use for low-cost high-volume storage. Of course, if the cost of making an SSD suddenly drops by half (which I can't see happening) then all bets are off and the HDD probably dies within 6 months.
Of course, if the cost of making an SSD suddenly drops by half
Currently SSDs are eight to ten times more expensive per GB than harddrives, so the cost of making an SSD is going to have to drop by more than half.
Of course, SSDs are fast, quieter, smaller, and use less power, so I don't think the price will have to reach parity with hardrives for them to totally take over. Perhaps when an SSD is only twice the price of an equivalently sized harddrive?
Currently SSDs are eight to ten times more expensive per GB than harddrives, so the cost of making an SSD is going to have to drop by more than half.
Not necessarily.
The lower the price of SSDs go, the more people will use them instead of a HDD. This should lead to better economies of scale, reducing the price of SSDs further (unless we hit a problem with supply, real or manufactured).
Conversely, as demand for SSDs increases, demand for HDDs drops. Initially this would result in reduced prices, but it will lead to fewer and fewer people making them, and the price eventually rising.
So, we are likely to hit a critical point where SDDs wipe out HDD sales before they hit the crossover point, and even that crossover point could well be at a higher price than we currently pay for HDDs.
First of all, pretty near all new systems are going to ship with a 250-500GB SSD, first in laptops and then soon after desktop systems too, because people just won't put up with sub-par performance once they've got a taste for the speed of an SSD.
Spinning rust will survive for a little while for media storage, and possibly ultra-high bandwidth applications.
But ultimately, I foresee some sort of ultra-high density write-once media being developed for long term archival storage.
"Already has with MDISC - a bit pricey but worth it imo."
Meh...pricey AND the capacity sucks. We need something like M-DISC but with capacities in the multi-TB range. I don't mind if it's slow (I once used a floppy-bus QIC tape drive), just to be able to reliably archive lots of stuff, and there isn't one in the consumer sphere at this time.
"When that happens, that manufacturer will probably just ditch all of the R&D (no point in future development after a certain point) and just churn out cheap drives on their existing equipment."
Which is what happened a few years ago at both Seagate and WD. HAMR was the last development to come out of the R&D labs before they closed. It's been in the engineering labs trying to be turned into a commercial product ever since.
"In the field of computing that's not bad for a 60's technology!"
Well... not exactly; most technologies from that time (the 1960s) are still in use. Semiconductor DRAM is one example, and is still the most common form of RAM in computers which use more than 32 megabytes of it. In a way, even flash memory borrows its core idea from DRAM.
The same goes for operating systems. Unix lives on in the form of the BSDs and Linux. Multics lives on in Windows and Systemd. People still use Maxima on a daily basis.
...is right, I'll still be stuck buying old skool platter drives. Still cheaper than large SSDs. Having said that, where I can afford it, I do replace old drives with SSDs. Like I did in my Lenovo: a cheap 1TB SSD has brought new life into my old war horse. Boots into Windows 7 in around a minute now, whereas with the old skool drive it was taking about 5 minutes.
> Still cheaper than large SSDs
Sure, but the SSDs are such a metric assload faster, and so noticeably improved the response of my PC, I coughed up the extra dosh. The performance was worth the price.
At least Samsung isn't doing a Kodak and ignoring SSDs, hoping they'll go away.
>Sure, but the SSDs are such a metric assload faster, and so noticeably improved the response of my PC, I coughed up the extra dosh. The performance was worth the price.
Would you cough up the dosh to store all your photos and videos though? What if you were a commercial operation. Would you cough up 10x to store data that barely changes?
For the consumer it depends on how much physical space there is and how savvy they are. Few laptops have space for even one SSD let alone an SSD and an HDD (they use M.2). I have two of them, and neither have SSDs. Many people don't have desktops any more and will buy a NAS device to store things, or if they're daft, hand it over to Google, Apple etc.
For those of us who still use desktops (or NAS) HDDs are still the best way to go. My main desktop PC has two SSDs and two HDDs in. I probably didn't need the second SSD.
In the enterprise, ideally you want frequently accessed data somewhere quick and infrequently accessed data somewhere cheap and well-protected. The ability to sort data and place it properly keeps the industry going.
"For those of us who still use desktops (or NAS) HDDs are still the best way to go."
If you want to make best use of large HDDs, then you need to front them with SSD caching (read caching and write intent cache) to mitigate the seek penalties.
The size of that cache depends on the kinds of loads you're generating. The way you do it depends on what you have available. I prefer ZFS for large arrays as it's got zero downtime for fsck(*), but you can cache bsd/linux LVM and Windows servers have their own implementations.
"In the enterprise, ideally you want frequently accessed data somewhere quick and infrequently accessed data somewhere cheap and well-protected. The ability to sort data and place it properly keeps the industry going."
In the enterprise old style, that was the case. When you have large scale automatic tiering/caching then this kind of balancing act becomes much easier. That's why ZFS is a godsend when the "infrequently accessed data" suddenly becomes "hot" for whatever reason.
(*) Some of my older installations have 300-400TB of storage onboard. If they decide they need fsck at startup, that makes for a long delay.
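For anyone curious what that SSD front-end looks like in practice, here's a minimal ZFS sketch. It assumes an existing pool of spinning disks named `tank` and spare SSD partitions; the device names are hypothetical and would need adjusting for your system:

```shell
# Add an SSD read cache (L2ARC) to an existing pool of spinning disks.
zpool add tank cache /dev/nvme0n1p1

# Add a mirrored SSD write-intent log (SLOG) to absorb synchronous writes.
zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1p2

# Confirm the cache and log devices are attached.
zpool status tank
```

The L2ARC handles the read caching and the SLOG the write-intent caching mentioned above; size the L2ARC to your working set rather than the pool.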
The fun part is getting a PATA-to-mSATA adaptor and then popping a suitable mSATA card into it.
It's cheaper than getting a PATA ssd and _much_ faster (the pata SSDs tend to be crap). You'll find your old workhorses start moving at unbelievable speeds.
This is also a good way of keeping various scada kit and things like ancient CNC equipment alive.
Your file is always being moved around by the wear-levelling algorithms. That's why ensuring TRIM is functioning with whatever OS and drive-encryption product you use is so important: it ensures deleted space is indeed deleted and available for wear levelling by the drive's internal gubbins.
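A quick way to sanity-check that TRIM is actually working on a Linux box (a sketch, assuming a recent distro with util-linux and systemd; `fstrim` needs root):

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the device advertises discard (TRIM).
lsblk --discard

# One-off manual TRIM of a mounted filesystem, with verbose output.
fstrim -v /

# Or schedule a weekly TRIM instead of using the 'discard' mount option.
systemctl enable --now fstrim.timer
```

Note that with drive encryption in the stack (e.g. LUKS/dm-crypt), discards also have to be explicitly allowed through each layer or the drive never sees them.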
I do wonder how adding more and more bits to a single cell affects its lifespan though. Anyone?
I do wonder how adding more and more bits to a single cell affects its lifespan though.
Hugely.
However, amortised across an entire 4TB SSD, lower write endurance should be acceptable for anything outside very high write-I/O loads (e.g. database logs, or caches in front of large arrays), which is what SLC, Optane and future technologies like MRAM are for.
Depending on source, typical max program-erase cycles are:
SLC 50k-100k > eMLC 20k-30k > MLC 5k-10k > 3D TLC 1k-10k > pTLC 1k-5k > QLC 0.1k-0.5k (i.e. hundreds, not thousands).
SLC: Single-level cell, 1 bit/cell
eMLC: enterprise-class Multi-level cell, 2 bits/cell
MLC: Multi-level cell (consumer-class), 2 bits/cell
3D TLC: 3D (stacked) Triple-level cell, 3 bits/cell
pTLC: planar (2D, non-stacked) Triple-level cell, 3 bits/cell
QLC: Quad-level cell, 4 bits/cell
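To put those P/E numbers in context, a rough back-of-envelope total-bytes-written figure is capacity × P/E cycles ÷ write amplification. A sketch, assuming a hypothetical 4TB QLC drive, 300 P/E cycles (mid-range of the estimate above) and a guessed write-amplification factor of 2:

```shell
capacity_gb=4000   # hypothetical 4TB QLC drive, in GB
pe_cycles=300      # mid-range of the QLC P/E estimate above
waf=2              # assumed write-amplification factor
tbw=$(( capacity_gb * pe_cycles / waf / 1000 ))
echo "approx ${tbw} TB written over the drive's life"   # approx 600 TB written
```

Even at only hundreds of P/E cycles, spread across 4TB of cells that's hundreds of terabytes of host writes, which most desktop workloads never come close to.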
Speaking as a person who uses a variety of 6-8 TB drives in my media server... and as I've said many times before.
Until consumers can buy a 4TB SSD for the same price as a regular mechanical drive... spinning rust will live on.
HDD prices have been kept artificially high for years, and the EU is supposed to be investigating this... but I've heard nothing for some time on the matter.
I've been able to pick up my drives in sales for a lot less than they normally go for: an 8TB Purple drive for £150 instead of well over £200, and 6TB drives for an average of £128 instead of around the £180 mark.
My media server currently has a 3TB & a 4TB drive and those I'll be replacing next... So I'll be needing more 6-8TB models and there's no way SSD will be at that capacity and price range for many, many years to come... If ever.
I think I'm on the same page as everyone else as far as price goes, though I don't think we have to see absolute parity between the two: perhaps a 10% premium against the middle of the pricing band would be acceptable to most.
But what I'm waiting for is an SSD that can actually replace my NAS's drives. In that instance, I'm looking for price and longevity mostly; speed isn't particularly important, as the dozen drives can saturate a gigabit link without working terribly hard. So why hasn't a manufacturer come out with a medium-format (3.5-inch rather than 2.5-inch or 5.25-inch) SSD that can be stuffed chock to the gills with chips from the previous generation, where the fab is paid for and failure rates are low, so it's easy to make reliable profits, and designed as a NAS drive? Start showing me 2-4TB units in a larger case with good endurance within 10% of the price of something like a WD Red, and I suspect there's a vast amount of money to be made.
Never mind that the same drive could be jammed into a normal desktop by an OEM for significantly less than a 'fast' SSD of the more current designs, allowing the OEM to put a big sticker on the box boasting about it having "not just an SSD but a bigger one than the competition", and I suspect they'd be onto a winner.
But then I'm the cheapskate that's still using a dozen used rusty 2TB drives in my NAS, because that was the sweet spot for cost:space, and I let ZFS pick up the reliability (which, honestly, has been excellent). Makes you wonder just how fat the profit margins are, and how much of that is pushing for bragging rights in the speed arena. That said, I just put a fancy M.2 into my main gaming rig, and the speed is really noticeable, but the pricing makes it dumb for storing ...uh... cat videos.
> So why hasn't a manufacturer come out with a medium format (3.5" rather than 2.5" or 5.25) SSD that can be stuffed chock to the gills with chips from the previous generation
Because demand is mostly still outstripping supply for production of those chips, and in the larger case formats getting rid of heat becomes a little problematic, especially with older (hotter) generations of chips, which in turn kills reliability. Heat is one of the reasons that M.2 is becoming popular: getting rid of the case makes cooling much easier. 2.5" is a legacy case format. Anything larger is from the dark ages.
If your motherboard can't directly take M.2 devices, there's a legion of add-in cards. I've seen up to 4 mSATAs supported on one card, and StarTech sell a neat wee PCIe card that takes an NVMe drive on one side, with 2 mSATA carriers on the other that plug back to the motherboard ports.
There's talk of NAND oversupply, but it's more catchup than anything else. In any case SSD prices _are_ falling whilst HDD prices are relatively static.
Fat lot of good when your laptop ONLY takes SATA (M.2 support pretty much has to be built into the laptop). And desktops will have a hard time using an add-in card when the only slot that can carry it runs the GPU.
Then you aren't buying the right laptops or desktops.
While that's the case now, I would expect things to change in the future as trends change, as always happens.
One of the motherboards for the new Threadripper CPU (admittedly not exactly in the standard consumer class) comes with 6 M.2 x4 connectors, 2 onboard and the other 4 from an included PCIe add-in card.
Expect this sort of thing to creep down into the consumer space over the next few years. Some consumer motherboards right now have 2 M.2 slots onboard. If M.2 becomes the consumer standard such that it displaces 2.5"/3.5" form factors, then expect motherboards to ship with more.
Yes, if you want 4 M.2 right now on a consumer mITX board that only has one long PCIe slot for a GPU you are SoL. But if you want that many M.2, then you need to purchase a motherboard that can support that many, just like if you want 10 SATA drives you need to purchase a motherboard that can support 10 SATA or has additional PCIe expansion slots to add more SATA ports from add-in cards (or purchase one that supports SATA expanders, which is pretty rare in the consumer space).
"Fat lot of good when your laptop ONLY takes SATA"
I'll leave you with THIS: https://www.startech.com/HDD/Adapters/m2-sata-adapter~S322M225R
Or if you have a truly ancient laptop, THIS: https://www.lindy.co.uk/components-tools-c7/drive-caddies-raid-c321/msata-to-2-5-ide-ssd-drive-7mm-p8706
You can get them considerably cheaper than the figures above if you look around and they both work fairly well.
Of course, in a desktop, space isn't so much of an issue anyway.
You could always use an ExpressCard SSD, but it's no faster than the SATA bus, and they've pretty much gone from the market.
> If your motherboard can't directly take M.2 devices, there's a legion of add-in cards. I've seen up to 4 mSATAs supported on one card and StarTech sell a neat wee PCIe card that takes a NVMe drive on one side, with 2 mSATA carriers on the other that plug back to the motherboard ports.
The new MSI X399 Creation motherboard as reported on Anandtech comes with a PCIe x16 card that has 4 (!) M.2 x4 connectors.
"and 100mbit fibre to the home is a reality"
In some areas.
And specifically to the home.
Meanwhile, SATA 3.2 is also a reality, works both ways, is 160 times faster, and is on an uncontended link. This consumer will be keeping his stuff in the PC as a matter of course for the time being, thanks.
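The 160x figure checks out, assuming SATA 3.2's 16Gbit/s line rate against the 100Mbit link:

```shell
sata_mbit=16000   # SATA 3.2 line rate, Mbit/s
fibre_mbit=100    # 100Mbit fibre-to-the-home
echo $(( sata_mbit / fibre_mbit ))   # prints 160
```

And that's before contention on the fibre link is taken into account.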
makes about as much sense as driving a supercar around with the hand brake on.
The headline may claim "The age of hard drives is over as Samsung cranks out consumer QLC SSDs", but it will not happen while you use such a slow interface!
The device uses 32 x 1Tbit (128GB) 64-layer V-NAND dies and does 540MB/sec sequential reads and 520MB/sec sequential writes; it looks like it is hitting the limit of SATA.
If we look at an HDD, say the Seagate BarraCuda (1TB for around £40 and 2TB for about £52) (https://hddmag.com/seagate-barracuda-review/), we see it does 210.9MB/sec sequential reads and 205.2MB/sec sequential writes. Yes, quite a lot slower, but at a massively lower price, which in the mass market is key.
For speed you would want something like the 970 EVO Polaris 2TB M.2 2280 PCIe 3.0 x4 NVMe solid-state drive (3500MB/sec sequential reads and 2500MB/sec sequential writes).
TL;DR: if you're going to buy a still relatively expensive SSD over an HDD for increased performance, at least ditch the handbrake that is the SATA III interface.
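A rough sense of the handbrake, using the sequential-read figures quoted above and a hypothetical 100GiB transfer (ignoring the MB-vs-MiB rounding in the vendor specs):

```shell
size_mib=$(( 100 * 1024 ))                            # 100 GiB expressed in MiB
echo "HDD      (~210MB/s): $(( size_mib / 210 )) s"   # ~487 s
echo "SATA SSD (540MB/s):  $(( size_mib / 540 )) s"   # ~189 s
echo "NVMe SSD (3500MB/s): $(( size_mib / 3500 )) s"  # ~29 s
```

So the SATA SSD is already well ahead of the HDD, but the NVMe drive does the same job in a sixth of the time again.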
"makes about as much sense as driving a supercar around with the hand brake on"
For bulk SSD that trades off access time against density, SATA still makes perfect sense. 600 Mbytes/sec is a beefy transfer rate compared to physical disk, but the big win is zero seek time. Use M.2 for your system disk.
> makes about as much sense as driving a supercar around with the hand brake on.
Nope. SATA's still useful. These big SSDs aren't particularly fast: they can max out a SATA bus, but they're not much faster than that, even though they blow HDDs out of the water on latency. The tradeoff is heat.
Think of these as a large box van. The increased performance is only part of the equation when buying them. Reduced power consumption and noise, vastly faster startup (which means they can sleep sooner, dropping power consumption even more) and massively longer lifespans than spinning drives are where these win out.
The IBM RAMAC was introduced in 1956: 5 million characters on fifty 24-inch platters. It's really amazing that a mechanical technology from the 1950s has managed to hang on this long. Vacuum tubes were gone within a couple of years of the introduction of the RAMAC, core memory was replaced by DRAMs 40 years ago, and tape is long gone. My guess is that SSDs will still be more expensive than hard drives for the next few years, so the hard drive may be able to hold on until it's eligible for full social security.
Do spinning HDDs generally all spin in the same direction?
The Earth might suddenly accelerate into a Leap Second (of unexpectedly opposite to most-common direction) as a hundred billion spinning platters are all turned off for the last time.
;-)
I think this is a huge leap at best and a total propaganda piece at worst.
It will be a VERY long time before the durability and price of SSDs even come close to those of a mechanical hard drive. At larger sizes you can replace your mechanical hard drive several times before it costs as much as just one SSD of the same capacity would have done, and given the reliability and endurance involved, chances are the combination of several HDDs bought for the price of that one SSD would have handled more of a workload between them too.
The Register should archive this story (on an SSD if they wish) for QUITE a few years, until 4TB SSDs are WELL under £100, making them a decent choice for CCTV systems, home NAS boxes, PVRs and the bulk of storage in computers.
Whereas I suspect that for the cost of these 4TB SSDs you will be able to fill a 5 bay nas with 4TB mechanical hard drives and still have some change
Firstly, virtually all SSD failures aren't due to write-endurance limits, and overall SSDs are an order of magnitude more reliable than HDDs on a normalised timeframe. Google showed that in their 2016 paper.
Secondly, people are still beating the endurance boogeyman for consumer drives intended, you know, for consumers, with inane arguments like "how would I trust my $10000+ intensive-I/O critical operation to a <$200 consumer SSD with relatively low write endurance?".
Thirdly, I'm not interested in any Samsung consumer SSDs, QLC or not, because they are always overpriced compared to their competitors.
"Whereas I suspect that for the cost of these 4TB SSDs you will be able to fill a 5 bay nas with 4TB mechanical hard drives and still have some change"
Hmm, I'd like to see some numbers. 4TB rust drives run about $100 or so each depending on the specs, and then there's the NAS box itself (which has about a $100 baseline too), where price and quality vary considerably from device to device. So that's a minimum of $600 right there.