What do you mean...
you accidentally dropped the entire British Library down the toilet?
It's not a good week to be in the disk-drive business, as solid-state tech news signals the industry's imminent death – all at the hands of a murderous group formed of Intel/Micron, Samsung, SK Hynix and Toshiba/WDC. How so? There are two disk drive formats – 2.5 and 3.5-inch – and three disk drive manufacturers – Seagate, Toshiba and Western …
An SSD can work fine right up to the moment it dies. Then it's a messy surprise, because your computer is bricked and you have to find another machine just to order a replacement part.
Disks die as well, but generally you have a bit of warning with corrupt clusters or such, so you can order a new disk, make a full backup, and get the replacement disk formatted and ready before your PC is bricked. Generally. Of course, bad surprises are possible as well.
For businesses, I don't think there is any question that SSDs are the future. With their backup procedures and IT support, if an SSD dies, there will be spares and personnel to get everything set up and running again quickly.
For individuals, I think a hybrid situation is best. I have an SSD for booting Windows, and all of my data is on HDDs. If my SSD disk dies, I order a new one, re-image that and I'm working again. If one of my disks starts showing trouble, I'll back it up on my NAS and have a replacement delivered in time to not be bothered.
I like the idea of TB-size SSDs, but I have to admit that, until SSDs can show a bit more durability, I'm holding off from putting my data on something that can die from one minute to the next. That said, my Windows SSD has been chugging away valiantly since 2011. Might be time for me to order a spare . . .
I respectfully disagree.
If you're only backing up when you think your HDD is about to pack it in, then the data isn't worth backing up in the first place.
Whilst you are right that a HDD can often show signs it's on its way out, it's far from a certain method. The analogy that strikes me: I have a burglar alarm which should tell me if I'm about to be robbed, but I'd still make sure I had contents insurance regardless.
If your data is worth backing up at all, it's worth backing up regularly, regardless of whether you have a shiny new HDD, a mid-life SSD, or an old HDD making noises. The media used shouldn't make a difference to your backup strategy.
And with the cost per gig dropping like a stone for consumer-capacity drives, unless you need whopping amounts of space it just makes more sense to use SSDs for anything under 500GB these days.
What makes you think I don't back up regularly? Saying I'll make a full backup before reimaging has no bearing on what my other backup procedures are.
For your information, there is nothing I consider important that is not copied onto my NAS for one, and onto optical storage for two. But I hardly see why I should bother stitching things back together when I can just back up the current state and re-copy it when the new disk arrives.
As for my NAS, I took the precaution of staging my HDD purchases. Four disks, one per month. That way I am convinced that there will be no mass failure all at once. And when one fails, I'll start buying four more, one per month. That way, I'll have backups if the rebuild process is too stressful on another one.
I know HDDs fail. It's happened to me like it has happened to everyone else. That is why I included the term "generally". I have never personally had a disk die on me from one minute to the next. There was always time to recognize the issue and get a replacement. But that's just me.
On the other hand, I witnessed an SSD in a colleague's laptop go from working to dead in the space of ten seconds. One blue screen, and no reboot possible. There was no SMART warning. That's why I am a bit wary of the things, even though my personal SSD experience has been totally reliable up to now.
SSDs also have early warning of failure, similar to SMART disk-failure prediction. However, if you have a NAS, just back up to it regularly anyway and you'll be covered in the event of a failure (even a continuous sync with versioning, in case of ransomware).
It's quite reasonable, given the price and size of SSDs, to run them in RAID1 even as a home user; in my experience even laptops can generally take two drives. If you are clever you can also use one drive for a year before adding a second, so that wear-out failures are unlikely to coincide on the two devices (or swap the second drive out to a new machine after a year or two).
Sorry, hard drives give no more warning than SSDs. I have a bunch of Seagates supplied as part of a RAID array to attest to that. One day, they just stop working, no SMART warning, just dead.
And neither kind of failure should affect even semi-professional storage. Maybe your laptop dies at home, but you have backups, right? Backups are not the domain of business alone, if you value anything on your computer. The reinstall of Windows alone would cost £50-100, so spend £50 on a complete backup of everything and press the button on it once a month.
SSDs also have good lives on them. I've put them in schools, in heavy-use clients, and no, I didn't even bother to turn off swap or turn on all the RAM-cache nonsense for them. Straight swap, HDD for SSD, with the same image, as an alternative to trying to upgrade RAM in machines locked to 4GB max (not an OS limit, a 32-bit motherboard limit).
They are all ticking along nicely, nowhere near their write limits (every time I run the numbers again using the real-world usage, it comes out to 10+ years still, and we replace every 4 anyway), and we've not had a single failure. P.S. I buy the cheapest, unheard-of brands. ADATA, anyone?
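That "run the numbers" check is simple enough to sketch. Everything below (the TBW rating, data written so far, and daily write volume) is an illustrative assumption, not vendor data:

```python
# Rough SSD lifetime estimate from a drive's rated write endurance
# (TBW, terabytes written) and observed daily host writes.
# All numbers below are illustrative assumptions.

def ssd_years_remaining(tbw_rating_tb, written_tb, gb_per_day):
    """Years until the rated write endurance is exhausted."""
    remaining_tb = tbw_rating_tb - written_tb
    tb_per_year = gb_per_day * 365 / 1000
    return remaining_tb / tb_per_year

# Example: a cheap 240GB drive rated for 80 TBW, with 3 TB written so
# far and averaging ~20 GB/day of host writes (light client use).
years = ssd_years_remaining(tbw_rating_tb=80, written_tb=3, gb_per_day=20)
print(f"~{years:.0f} years of write endurance left")  # → ~11 years
```

Even with pessimistic inputs, a light client workload tends to land well past a 4-year replacement cycle, which matches the experience described above.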
The only place that worries me is exactly what this article states - high-end servers and write-heavy tasks, where SSDs are not well suited and everything has to be sent over a Gbit connection anyway (so why rush?). In clients, they are perfect, and provide ENORMOUS speed boosts for a good price. A price that, if they fail, who cares.
All I want are larger ones. For network clients they are fine, but storage grows all the time and it won't be long before the 128GB cheapies are no longer viable for clients. They don't even need to exceed SATA speeds (hell, some of our machines are limited to the old SATA speeds and an SSD still makes them FLY). Just storage matters.
I want the £100 1TB SSD that does 500MB/s read/write. I would literally buy them by the dozen for my workplace, and for myself.
And at that price, they are no more likely to fail or cost me money than an £85 decent Western Digital boring basic HDD version of the same size.
To be honest, HDD is dead except for high-end write loads. Everything else should be SSD already. I am frustrated by how long it's taken to get to the point where computers are actually being supplied with SSDs, and even more so by the lack of storage capacity while we focus on "but we're now 20 times faster than SATA if you use this interface that nobody has in their home machine and that needs all kinds of adaptors to be backwards-compatible".
Gimme a cheap, large, SATA SSD. Hell, make it 3.5". I really don't care. When it dies, I'll buy another to replace it unless it's still in warranty, like I do with hard drives. I'd expect at least 5 years out of them; I'd be happy if it's warrantied for 2 years, which I don't think is at all unreasonable.
"The only place that worries me is exactly what this article states - high-end servers and write-heavy tasks, where SSDs are not well suited and everything has to be sent over a Gbit connection anyway (so why rush?)."
What about things like databases where although the traffic isn't high, latency is critically important?
Write limits can be hit much faster than you expect if you don't ensure that TRIM is supported and enabled in hardware/firmware/OS/drivers, especially if you leave the default swap and defrag settings configured as-is; eventually the SSD gets premature block-wear failures. I discovered this the hard way, no thanks to f'ing Intel's refusal to support TRIM in software RAID1 drivers for older X* desktop chipsets!
I only use SSDs for boot disks; everything larger uses magnetic disks because they are still _much cheaper_. And I disable swap when possible, because SSD swap causes faster SSD wear and is still several orders of magnitude slower than RAM; you _will_ notice this when I/O bottlenecks, especially on crippled laptop chipsets/CPUs!
SSDs are still several times as costly as magnetic disks, so nowhere near as affordable for high-capacity uses like a proper NAS RAID, e.g. ZFS RAIDZ2+ with a many-GB RAM cache, which is a lot faster than any SSD!
"Disks die as well, but generally you have a bit of warning..."
I'd have to disagree with that. The last couple of rusty spinners I've lost went down without so much as a "So long, and thanks for all the data". Tried to fire them up and they make a pitiful, weak clicking noise and just refuse to wake up.
No storage, be it disk, SSD or tape, is infallible. Always back up without fail and, if you can afford it, mirror your disks or at the very least RAID5 them. So when I'm cleaning up old crap I always ask myself: if the data is worth losing, is there any point in keeping it? If I need to keep it, I buy bigger pairs of disks and mirror up.

My really precious stuff is triple-mirrored and double cloud-backed, just to be sure, like the photos I've shot over the last 10 years. My wife's family photo archive covers 150 years' worth of images; she has 6 individual duplicated copies, 4 in the house and 2 out on cloud storage. I lost it once many moons ago; let's just say she would be collecting a widow's pension right now if I hadn't recovered it and put mirroring and backups in place!
If anything, this conversation thread and the contents of the article show that what isn't happening is the introduction of consumer-priced RAID disks.
With SSDs it should be possible to replace my laptop's single 2.5" HDD with an equivalent-sized (or larger) SSD RAID drive.
Also isn't it about time remote sync etc. is a normal part of the OS? MS are you listening?
The major point is that we are very close to the cost-benefit favoring SSD for most enterprise workloads today. By 2018, the needle firmly favors SSD. If spin sellers must rely on the occasional home user running out to their local grocer for a half-off special on 1-2TB drives, they are toast.
In most enterprises I deal with, the recovery, replacement and administrivia for spinners is already a full-time job. Changes in SSD reliability and durability, especially for performance tasks, are moving the needle faster and higher into the SSD camp. Advanced storage-server paradigms like the NGSFF proposed by Samsung (576TB in a single 1U storage server "module") improve reliability, scale and performance while drastically reducing the footprint in a costly data center.
I agree, but the cost of capacity SSDs is too high for SMEs like us and our clients.
However, using 3.5" 7,200RPM 2TB-4TB drives for capacity, with some decent enterprise SSDs sitting as a write cache, has turned some spare Dell 730xds into a 2-node hyperconverged SAN with performance that literally kicks the shit out of our HP LeftHand P4500 SAN, which uses 15k RPM drives if memory serves.
(Literally just some 730xds, 10Gig NICs with RDMA, and Windows Server 2016 using its Storage Spaces Direct software-SAN thing. It's actually incredible.)
Cost: 1/3 of the price for twice the performance.
Another few years, when SSD becomes the capacity tier and you can use NVMe as the cache if you need faster, will be the end of spinning rust in my opinion. Certainly at scale and for new deployments.
"A 128TB SSD will hold ten times more data than today's highest-capacity disk drive, deliver access to it around 100 times faster, and probably need less power and cooling."
Problem 1: this is comparing a product which doesn't exist today, with a product which does.
Standard SSDs top out at 4TB. You might be able to buy an 11TB drive soon. However you can definitely buy 10TB hard drives right now.
Problem 2: today, HD capacity is still 1/10th of the price of SSD per TB. (e.g. figures from ebuyer.com: 10TB HD costs £275+VAT; 4TB SSD £1050+VAT)
This factor has remained constant over the last few years, and there is little evidence at the moment that the SSD cost-per-TB is going to start falling dramatically faster than the HD cost-per-TB, given that things like 3D-NAND have been with us for a while now.
SSD will save you some power (but at 5W versus 1W per drive, that's unlikely to be significant). When the high-density SSDs arrive, they will also save you server and rack space, which is more significant.
Let's do the math(s). Imagine a rack containing 10 servers, each 4U with 36 x 10TB drives, giving 3600TB of storage.
Given the current drive price of around £275 each, that gives you a capital cost of £100K just for the drives, plus say £50K for the 10 servers to host them.
Let's say you pay £20K per rack per year because of the high power consumption; the 5-year cost of rackspace and power will then be £100K. TCO: £250K.
Now you swap these for mythical 100TB SSD drives (in the same form factor). The capital cost goes up to £1m. Since you now only need 1/10th of a rack, the 5-year hosting cost goes down to £10K, but TCO is over £1m.
The break-even comes when those mythical 100TB SSD drives carry a x2.4 cost-per-TB overhead: you pay £240K for drives, £5K for the server, and £10K for hosting (1/10th of a rack over 5 years).
Note that this requires manufacturing improvements which make them track the ongoing reductions in HD prices *and* a factor of four on top of that.
Furthermore, once we have 20TB hard drives, half the SSD benefit is lost. You need 200TB SSDs to compete, with the same per-TB price ratio.
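That arithmetic is easy to sanity-check in a few lines. Everything here uses the thread's own example prices (£275 per 10TB drive, £20K/rack/year hosting), not current market figures:

```python
# Sanity-check the rack TCO comparison above, using the example
# prices quoted in the thread (not current market figures).

DRIVES = 10 * 36                     # 10 servers x 36 drives each
HD_TB, HD_PRICE = 10, 275            # 10TB drive at £275
TOTAL_TB = DRIVES * HD_TB            # 3600 TB of raw storage

hd_capex = DRIVES * HD_PRICE + 50_000   # ~£100K drives + £50K servers
hd_hosting = 20_000 * 5                 # one full rack for 5 years
hd_tco = hd_capex + hd_hosting          # ~£250K

# Mythical 100TB SSDs at today's ~10x cost-per-TB: ~£1m in drives
# alone, even though hosting drops to 1/10th of a rack.
ssd_drives_at_10x = TOTAL_TB * (HD_PRICE / HD_TB) * 10

# Break-even multiplier x over HD cost-per-TB, solving:
#   x * 3600TB * £27.50/TB + £5K server + £10K hosting = HD TCO
x = (hd_tco - 5_000 - 10_000) / (TOTAL_TB * HD_PRICE / HD_TB)

print(f"HD 5-year TCO: £{hd_tco:,}")            # → £249,000
print(f"SSD drives at 10x £/TB: £{ssd_drives_at_10x:,.0f}")
print(f"Break-even SSD premium: x{x:.1f}")      # → x2.4
```

Re-running with `HD_TB = 20` shows the point made above: once 20TB hard drives exist, the SSD's density advantage, and therefore the hosting saving it relies on, is halved.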
Very true, but the thing I never see mentioned: (R&D+NRE)/(market size)
As disks get bigger you do not need as many. Making disks bigger costs R&D and probably NRE. While this is happening, flash is eating the small end of the market. At some point, the cost of disks is going to rise and the cost of flash will rise to match until the profit margins tempt new manufacturers.
> SSD will save you some power (but at 5W to 1W that's unlikely to be significant).
What about heat generation? For that matter, in a petabyte-scale data center, does the heat-generation difference between 12TB and 10TB HDDs matter that much (e.g. 5 x 12TB versus 6 x 10TB for a total of 60TB)?
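A rough sketch of that comparison (the 5W/1W per-drive figures are the ones quoted earlier in the thread, not measured values):

```python
import math

# Compare drive count and rough power draw for 60TB of storage built
# three ways. Per-drive wattages (5W HDD, 1W SSD) are the figures
# quoted earlier in the thread, not measured values.

def drives_needed(total_tb, drive_tb):
    # Round up: you can't buy a fraction of a drive.
    return math.ceil(total_tb / drive_tb)

TOTAL_TB = 60
for drive_tb, watts, label in [(12, 5, "12TB HDD"),
                               (10, 5, "10TB HDD"),
                               (10, 1, "10TB SSD")]:
    n = drives_needed(TOTAL_TB, drive_tb)
    print(f"{label}: {n} drives, ~{n * watts} W")

# 12TB vs 10TB HDDs: 25W vs 30W, a 5W difference.
# HDD vs SSD at the same capacity point: 30W vs 6W.
# The capacity-point difference is small next to the media difference,
# and both scale linearly up to petabyte deployments.
```

So at petabyte scale, moving between adjacent HDD capacity points saves far less power (and heat) than changing media does.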
(Off-direct-topic, maybe) I read somewhere that some recent HDD (I think it was a WD 12TB) had 8 platters in an inch (25.4 mm). That's barely 3 mm for a platter *and* a head! How on Earth do they make that work (even with helium), and protect against external vibration? And how small is a physical head if the platter density is - what is it up to now - 1GB/in^2? (1.5 MB/mm^2 if my maths are right)
(Completely off topic) Would "maths" in BrEng be pluralized in the previous - where it is talking about only one "type" of mathematics - arithmetic? Or is the "singular math" seeping back across the pond by now?
If only HPE didn't charge extortionate rates for the SSDs for their servers (2 to 3x the price of the exact same drive direct from the manufacturer, but without the official firmware change to get it recognised by the server), we would have SSDs in our servers by now. However, the cost to use them over HDDs was just not justifiable, and the lack of support for using third-party drives meant that wasn't an option.
Don't buy them from HP. Buy the caddies separately and buy your SSDs from somewhere else.
We have whole racks of DB servers running Samsung 840 and 850 Pros and they're about the same price as an equivalently sized hard drive from HP, while of course being way faster.
Of course, if you buy from HP you get a warranty and replacement service, but the cost difference allows you to buy some cold spare SSDs and still come in under budget.
If you buy third-party drives with caddies then you will get a constant warning light showing that your server has degraded status, and that can't be cleared. This means that real degraded issues can be missed.
The latest firmware also no longer supports drive lights on unsupported drives, so normal drive-light operation may not be available. This is on top of needing to remove any third-party kit for HPE support to fix your server, even if the problem seems unrelated (although it does depend on the support tech you get). This can make your support agreement a hassle.
HPE put up as many barriers as they can get away with if you won't pay their massive premium for their drives. A 10~20% HPE tax I could live with, but the amount extra, and the fact that what HPE require will usually be an older model, really irks me.
SSDs are just as reliable as disk. In fact recent SMR drives have started to be quoted with "Terabytes written" figures, making it obvious that manufacturers know that latest recording techniques are becoming less reliable.
The only problem for flash is the price. End users will suck up all the capacity available, so as more manufacturing capacity comes onstream, HDDs will gradually be replaced. Costs will come down. Just look at DRAM prices from the 1990s to now to see how this will play out.
What the HDD manufacturers should have been focused on was fixing some of the underlying problems in the technology: single interfaces for data access, for example; single I/O streams; allowing drives to continue working with failed sectors. Most important, making drives that spin much slower, for use as low-power, high-capacity storage.
> single interfaces for example for data access
Whenever I see a picture of multi-platter drive innards, all the heads in the head-stack are controlled as one unit. Are there drives built to have independent control of each head (since they all use voice coil actuators now)? And would that *really* speed things up?
Don't know about you, but as a humble desktop user current multi-TB HDD prices aren't on the cheap end for my tastes. So why would I buy an SSD with similar capacity that costs over twice as much? (Supposedly, 4 years from now, if everything goes according to plan.) I can see the advantages in data centers, but most of those are not worth twice the price for your average Joe, especially if you can use a much smaller SSD as a cache or system drive and get comparable performance.
Fine, so then you still buy a hard drive for capacity and a smaller SSD for boot and commonly used files.
However, I think once the price premium gets much below 2x that the market for hard drives will begin to shrink so fast that production lines will be halted and the only drives you'll be able to buy will be ones remaining in the channel. It isn't going to be a very smooth transition, so if you need hard drives you'll need to plan carefully once you start reading about production lines being shut down.
Provided the manufacturer has implemented it correctly, it will wipe every single block in the SSD (or if it uses encryption, it'll erase the key).
Is that command certified to work on SSDs that have their own internal logic, meaning you can't be sure an erase aimed at a sector isn't redirected to another part of the drive, including parts that may have been set aside for redundancy? Now, granted, drives with internal encryption are very easy to securely erase, but what about the rest of the lot?
If you are worried about that (i.e. someone opening up the drive and removing the chips to read the raw flash and get at the contents of spared sectors) you need to be sure you are using one of the drives that is automatically encrypted and generates a new key as part of the secure erasure process.
> SSDs in the same format reach 11TB, in excess of five times more, and have far shorter data access times and higher rack space, power and cooling needs.
SSDs have higher power and cooling needs? I hope that's not right.
> Disk technology has little chance of reaching 100TB+ capacity levels in the next few years.
> 128TB SSD coming from Samsung
> New 1U server SSD format (NGSFF) from Samsung to create 576TB server storage
Can you please talk more about why you expect these products won't make it to market in the next few years?
I was about to say the same about the Power etc.
I'd like more info on the costs of these "pie in the sky" SSDs.
Many of us would like that Ferrari (other petrolhead cars are available) but can't afford one. The same goes for these SSDs. Do I really have to sell a kidney to get an SSD with a capacity greater than 2TB?
> Do I really have to sell a kidney to get an SSD with a capacity greater than 2TB?
The Sammy EVO 4TB are now about the same £/TB as the 2TB and 1TB. But still an expensive upgrade unless you definitely need high-speed access to all that data.
A 500GB SSD + 4TB HD combo will be a fraction of the price.
"Can you please talk more about why you expect these products won't make it to market in the next few years?"
They will, that was the entire point - 100TB+ SSDs will be with us soon, disks with comparable size will likely not ever be with us at all. And while a previous AC makes a good point about just how much prices will need to drop to make them directly compete on cost even with that size discrepancy, needing 1/10 the number of drives can have benefits that go well beyond just cost.
Tape drive - too expensive, especially if your tape drive burns in a fire and you're just left holding your backup tape.
Blu-Ray - too small. Stupendously small, in fact. Too slow. Restoring from a bunch of them will take forever.
Cloud - better have a REALLY good Internet connection and be happy for everything to be offline at random (and the most inconvenient) times.
Go with drives: a small home NAS and a USB drive.
Literally, one of the backup tiers in my workplace is a bunch of cheap NAS boxes (they can do iSCSI etc., but we just use them as a file dump for the secondary backups). Cycle one off-site every now and then, and you're done. Restore times at 2Gbit/sec (LACP), or use the files/VMs direct from the storage with the iSCSI functionality if you want.
I'd have a cheap NAS for day-to-day backup, dumping photos, running services, remote access on your smartphone. And then a bunch of cheap external drives as "backups" at regular intervals, stuck in the loft, at a friend's house, etc.
Just as Lee D said: get yourself a small NAS for fast local backup (some sort of RAID for reliability, and ideally one that takes regular snapshots in case of crypto-malware, as FreeNAS supports) and then have some way of making an off-site copy for a major disaster.
That could be an encrypted copy synced to some cloud provider, or the odd external HDD kept away from home. One nice thing about having file-system snapshots is that you can sync a consistent copy from a snapshot over a long period, even while new data is being written to the NAS.
Agreed, FreeNAS OpenZFS (scheduled) dataset snapshots have saved me several times for accidentally deleted files. The Windows 10 versioning seems far more awkward!
FreeNAS offers superior data protection with OpenZFS, and usually doesn't need SSDs for speed, because of its large RAM cache (the ARC).
I was referring to the fact that it is not possible to securely erase a single file (e.g. 'myinternetbankingpasswords.doc') from an SSD, whereas it is trivially easy on a conventional HDD. Thus, before using a big SSD to store all your data, make sure you will never want to securely erase a file.
Wrong on just about every count.
SSDs have a concept of trashing individual sectors (it's called TRIM), which most hard drives never had. If you TRIM a given sector (which a modern OS will do automatically when it's no longer needed because you deleted files), then it gets erased, which means it's gone forever. (Yes, forever... yes, even on hard disks. No, magnetic history doesn't exist. There's £1m waiting for you if you can prove it. Nobody has claimed it.)
However, everything from background sector reallocation on error, to automatic sector refreshing, to copies still present in temporary locations after an abrupt power-off, to Shadow Copies and even temporary files in your OS, means that you cannot "securely erase a single file" on any modern hard disk or SSD. Ever. Not without literally being the people who made the drive.
It's beyond the scope of a drive to know the filesystem format it's holding (just as tricky on an HDD as an SSD), and it's not its job, so it has no idea where that file went or which bits were left somewhere.
That's up to the OS and the OS alone. And most OS are not built with this in mind at all.
Solution: Don't try to securely erase single files, it's not trivially possible at all, never has been. Encrypt the whole drive and don't give out the password. Trash the whole drive if you can't afford for someone to read a file on it.
P.S. Almost all drives have "Secure Erase" commands on their drive interfaces. They work per-drive but cannot reliably remove data from damaged or reallocated sectors.
P.P.S. If you want to trash a drive, any drive, just throw it in a big fire until it's just ash. Anything else is really just snake oil and messing about, no matter what the technology. No, overwriting it a billion times doesn't guarantee anything if the firmware decided to keep a copy of an old broken sector around, transparently replacing it with one from its stock of spare sectors.
Where is this "£1m waiting for you" you're talking about? If you are claiming that it is impossible to read the old data when a hard drive has new data written over it in the same sector, you're wrong. There are research papers where they succeeded in reading data that had been overwritten 4 or 5 times.
Maybe it is more difficult with modern drives that are so much denser than back in the 90s when this was being done. MIL SPEC erasure did not come about to protect against a theoretical attack.
"Then explain MIL SPEC erasure standards"
They were a response to military practice of "blow it up" - some drives were able to be reassembled and read and it was in the days when drives were horrifically expensive/reusable.
If you're that concerned about someone accessing the data on your (ex-)drive, take a blowtorch to the platters.
From a pragmatic point of view, on a 10GB+ drive your desirable data is a very small needle in a very large haystack, and voice-coil servo-tracking technology is quite different to the stepper-motor imprecision of the drives that were tested by Peter Gutmann more than 25 years ago (which is what the multiple-overwrite strategy is intended to counter).
For DECADES, there was a data recovery firm offering rewards for EXACTLY what you state. They were never claimed.
Magnetic history is absolute, 100%, anti-science, tosh. There's no way that a piece of magnetic material remembers where it used to be, or that you can recover - statistically or probabilistically - overwritten material on a magnetic medium.
This is EXACTLY the kind of tosh I'm talking about. Military specs were based on "making sure", not on the bare minimum necessary. It's cheaper to write over it ten times "just in case" than take the risk that someone finds a way. Nobody ever has. Seriously. Go look. It's tosh.
Lee D: "Magnetic history is absolute, 100%, anti-science, tosh. There's no way that a piece of magnetic material remembers where it used to be, or that you can recover - statistically or probabilistically - overwritten material on a magnetic medium."
But if I understand it correctly... if you dissolve the magnetic material in water, then take a drop of that solution and dilute it again, then repeatedly dilute it a dozen more times - then that water will contain a memory of the data.
Er, or maybe I've got my bunkum, codswallop, pseudosciences muddled up.
" There are research papers where they succeeded in reading data that had been overwritten 4 or 5 times."
Yes, on 10 and 20MB MFM drives of the early 90s using stepper motor head positioning
Peter Gutmann did a followup paper a few years later stating that the difficulty of extracting information from more modern voice coil HDDs (of the mid to late 1990s) rendered it effectively impossible. He also had words to say about people believing various voodoo relating to erasure procedures.
Disk density has increased by a factor of 1000 since that followup paper was written. If you want to ensure your hard drive is truly erased, take the platters out and heat them past their Curie point; otherwise ATA Secure Erase is sufficient unless you're facing 3-letter agencies with 9-figure budgets - and they're more likely to use "monkey wrench" decryption when pressed anyway.
"Disk density has increased by a factor of 1000 since that followup paper was written. If you want to ensure your hard drive is truly erased, take the platters out and heat them past their Curie point; otherwise ATA Secure Erase is sufficient unless you're facing 3-letter agencies with 9-figure budgets - and they're more likely to use "monkey wrench" decryption when pressed anyway."
And if they're up against a masochist (HARDER! I'm so close...!)?
The whole HDD vs flash thing is just a matter of return on investment.
The companies who built the HDD manufacturing plants are squeezing them for maximum profit, and as long as the profit they can get out of the old manufacturing outperforms what they could get from new (costly) flash manufacturing plants, they will continue to produce HDDs (at cheap prices).
The moment the beancounters calculate it has become less profitable to make HDDs, then yes, they will finally die (as in, the price will go up).
What I wonder is whether, at that same moment, flash prices will go down - I mean down to the same pricing as HDD for equivalent storage space.
I'm pretty sure we are entering a turbulent period, and I don't think it will exactly be in the interest of world+dog as customers.
"What I wonder is whether, at that same moment, flash prices will go down - I mean down to the same pricing as HDD for equivalent storage space."
In short: "no" - flash pricing is largely(*) dictated by supply and demand, and right now demand is vastly outstripping supply. This isn't a rerun of the days of the DRAMurai cartel.
Until there is surplus manufacturing capacity there won't be a price war.
(*) There are minimum costs set by needing to pay back the investment.
"an SSD can have more parallel read/write access streams inside it than a disk drive with six or seven read/write heads feeding a SAS or SATA interface"
Erm, hard drives only use one head at a time. Trying to run them in parallel leads to all sorts of problems thanks to track-alignment issues, and manufacturers gave up trying decades ago. (It also means only one set of head-drive logic is needed, switched between heads as required.)
Biting the hand that feeds IT © 1998–2020