Is this new?
I don't get this. When I had an HP disk storage system to worry about, all of the (rather expensive) drives claimed to have dual actuators. What's different about these Seagates?
For about three years, disk-making giant Seagate has been talking up tech called “MACH.2” – a conventional disk drive that offers considerable speed improvements. And now the disk giant has found that the tech also cuts its costs, raising the prospect that big, fast, hard disk drives might emerge at keen prices. MACH.2 gets …
No, not new at all. I had to think back to my NCR days, but Conner Peripherals' "Chinook" drives did this. 3.5" SCSI, geared more toward high throughput than capacity, though.
It seems Seagate has resurrected/perfected the technology, as they eventually bought Conner. It will be interesting to see what arrangement they have for interleaving the data to get the additional capacity in the same surface area.
https://upload.wikimedia.org/wikipedia/commons/3/33/Conner_Peripherals_%22Chinook%22_dual-actuator_drive.jpg
Chinook had 2 sets of heads on each platter
This sticks with one head per platter, and is essentially stacking 2 HDDs in one case
Heads in an assembly are accessed individually, not in parallel, because thermal (and other) effects mean that calibration will almost always be out if parallel access is attempted. The only way to allow two heads in the stack to be active at once is to have them pivoting independently (i.e. two voice coil actuators, etc.), and the complexity is hideous whether you do it with two independent head assemblies on each side of the platters (Chinook) or with two mechanically separated head assemblies on the same pivot.
Whilst Seagate et al are putting a brave face on it, SSDs have been taking their lunchboxes away for quite a while. They shut down their main research labs a decade back, and HAMR/MAMR is taking a long time to deliver the holy grail of reliably increased density. Meanwhile, the shenanigans in the wake of the 2011 Thai floods convinced a lot of buyers that getting away from mechanical drives (supply choke points and opportunistic profiteering vendors) was a worthwhile pursuit, and last year's shenanigans with submarined prosumer drives being slipped into NAS channels didn't help their cause.
As much as they talk up these capacities, solid state beat them on longevity/endurance a long time ago and is closing rapidly in on mechanical drive cost at all densities, and a lot of buyers are minded not to reward past bad behaviour, actively avoiding giving HDD vendors their SSD purchases.
I was wondering why moving from "1" to "2" took so long, or what else was involved that made that step suddenly worthwhile. Reading Tom's Hardware's analysis, it seems that "As hard drive capacity grew further ... random read/write IOPS-per-TB performance dropped beyond comfortable levels for data centers".
So two actuators allow for higher-capacity drives without sacrificing random read/write IOPS performance.
Here is the relevant section, quoted:
> Historically, HDD makers focused on capacity and performance: every new generation brought higher capacity and slightly increased performance. When the nearline HDD category emerged a little more than a decade ago, hard drive makers added power consumption to their focus as tens of thousands of HDDs per data center consumed loads of power, and it became an important factor for companies like AWS, Google, and Facebook.
> As hard drive capacity grew further, it turned out that while normal performance increments brought by each new generation were still there, random read/write IOPS-per-TB performance dropped beyond comfortable levels for data centers and their quality-of-service (QoS) requirements. That's when data centers started mitigating HDD random IOPS-per-TB performance with various caching mechanisms and even limiting HDD capacities.
> In a bid to keep hard drives competitive, their manufacturers have to continuously increase capacity, increase or maintain sequential read/write performance, increase or maintain random read/write IOPS-per-TB performance, and keep power consumption in check. A relatively straightforward way to improve the performance of an HDD is to use more than one actuator with read/write heads, as this can instantly double both sequential and random read/write speeds of a drive.
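To put rough numbers on that IOPS-per-TB decay (illustrative figures of my own, not from the article):

```python
# Rough illustration of why IOPS-per-TB decays as capacity grows
# (illustrative numbers, not Seagate specs): a 7200 rpm drive manages
# on the order of ~200 random IOPS regardless of its capacity.
random_iops = 200

for capacity_tb in (4, 10, 20, 30):
    print(f"{capacity_tb:2d} TB: {random_iops / capacity_tb:5.1f} IOPS/TB")

# A second actuator roughly doubles the drive's IOPS, restoring the
# IOPS-per-TB of a drive half its capacity.
for capacity_tb in (20, 30):
    print(f"{capacity_tb:2d} TB dual-actuator: "
          f"{2 * random_iops / capacity_tb:5.1f} IOPS/TB")
```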
Using ordinary drives in a RAID enclosure can also do that, at least for READ operations. The controller parallelises read requests, distributing them around the disks and using whichever disk has its heads in the best place for a particular request; that's been around since the 1980s at least. Looks like the Seagate solution is just to put this in one drive box and call it a single disk.
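A minimal sketch of that controller trick, assuming mirrored copies and tracking only head positions (the function and numbers are mine):

```python
def pick_mirror(disks, target_cylinder):
    """Pick the mirror whose heads are closest to the requested data.
    'disks' is a list of current head positions, one per mirror copy;
    a real controller would also weigh rotational position and queue depth."""
    return min(range(len(disks)), key=lambda i: abs(disks[i] - target_cylinder))

# Two mirrored disks with heads parked at different cylinders:
heads = [100, 900]
print(pick_mirror(heads, 150))   # -> 0 (disk 0's heads are nearer)
print(pick_mirror(heads, 800))   # -> 1
```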
ICL were doing something similar back in 1973 when a 60MB removable disk was a big deal, but it was a software, rather than a hardware, solution.
Their improvement was to maintain two ordered request queues per drive plus a vector showing where the heads were and which direction they were moving. Requests that could be satisfied by keeping the heads moving in the same direction went on one queue and the rest went on the other. The queue pointers were swapped when the 'ahead' queue was emptied.
This made the heads float gently in and out rather than banging madly back and forth across the platter. The result was roughly doubled throughput and reduced wear and tear on the drive mechanics. It was a standard feature of the George 3 operating system from, IIRC, Mk 6.4 onward.
I can't see any reason why something like this couldn't be implemented within the disk drive: the extra queue storage would be relatively minimal and modern disks already contain a microcontroller.
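A minimal sketch of that two-queue scheme as I read it (my own reconstruction in Python, not ICL's actual implementation; requests are just cylinder numbers):

```python
import heapq

class ElevatorScheduler:
    """Two ordered queues plus head position and sweep direction.
    Requests reachable without reversing go on the 'ahead' queue;
    the rest wait on the 'behind' queue for the next sweep."""

    def __init__(self, start_cylinder=0):
        self.head = start_cylinder
        self.direction = +1          # +1 = sweeping outward, -1 = sweeping inward
        self.ahead = []              # reachable while continuing in this direction
        self.behind = []             # served after the queues are swapped

    def submit(self, cylinder):
        # Multiplying by direction lets one min-heap serve both sweep directions.
        if (cylinder - self.head) * self.direction >= 0:
            heapq.heappush(self.ahead, self.direction * cylinder)
        else:
            heapq.heappush(self.behind, -self.direction * cylinder)

    def next_request(self):
        if not self.ahead:
            # 'Ahead' queue drained: swap the queue pointers and reverse the sweep.
            self.ahead, self.behind = self.behind, self.ahead
            self.direction = -self.direction
        if not self.ahead:
            return None              # nothing pending at all
        cylinder = self.direction * heapq.heappop(self.ahead)
        self.head = cylinder
        return cylinder

sched = ElevatorScheduler()
for cyl in (50, 30, 10):
    sched.submit(cyl)
print(sched.next_request(), sched.next_request())  # 10, 30: sweeping outward
sched.submit(20)                                   # behind the head: next sweep
print(sched.next_request(), sched.next_request())  # 50, then reverse to 20
```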
All modern hard drives do this.
You have a queue of outstanding requests on the bus (SATA/SAS etc). The drive optimises its seek path across the platters, using its knowledge of the rotational positioning of sectors as well as elevator seeking.
The more parallel I/O you're doing - i.e. the deeper the queue - the more opportunity it has to improve the total throughput.
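A toy model of why deeper queues help (a sketch; it ignores rotational position and treats seek cost as pure cylinder distance):

```python
import random

def total_seek(requests, depth):
    """Total seek distance when the drive may reorder up to 'depth'
    outstanding requests, always serving the nearest one first."""
    pending, head, travelled = list(requests), 0, 0
    while pending:
        window = pending[:depth]                       # what the drive can see
        nearest = min(window, key=lambda c: abs(c - head))
        travelled += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return travelled

random.seed(1)
workload = [random.randrange(10_000) for _ in range(200)]
for depth in (1, 4, 32):
    print(f"queue depth {depth:2d}: {total_seek(workload, depth):7d} cylinders")
```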
Didn't ICL also do CAFS (Content Addressable Filestore) where the disk head had enough "intelligence" that the main CPUs could offload search criteria so the disk only returned a more interesting portion of the data?
I think we had to reformat the IDMSX database for it to work but it was pretty useful in its time. I wonder if something similar in spirit could be done today.
Maybe I'm more paranoid about data storage than Seagate. I would have thought that testing the drive would mean testing that arm1 can read what arm0 wrote, and all the other three combinations. You might be able to do write0-read0-read1, then write1-read0-read1, but it still means the test takes twice as long as on a single-arm disk.
Or is there no ability for one arm to read the other's data? In which case a 30 TB dual-arm drive is really two 15 TB conventional disks jammed into one package in a RAID 0 format. If that is the case then the only advantage is space and power saving, with no speed advantage.
It reads like it's the second one you state, i.e. two sets of platters, each with their own actuator, sharing the same spindle, case and controller.
Also if it's RAID 0, aka striping, then you'd be looking at up to double the performance, as you can be reading or writing to both sets of platters at the same time, i.e. half the data via one actuator, the other half via the other.
Although I'd assume any performance gain is going to be dependent on the structure of your data?
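If it really is striped rather than simply split down the middle of the LBA range (both are assumptions on my part; the actual firmware mapping isn't public), the routing would look something like this, with a made-up stripe size:

```python
STRIPE_SECTORS = 256  # hypothetical stripe size; the real firmware value isn't public

def route(lba):
    """Map a logical block address to (actuator, lba-within-that-half),
    striping alternate chunks across the two actuators."""
    stripe = lba // STRIPE_SECTORS
    actuator = stripe % 2
    internal = (stripe // 2) * STRIPE_SECTORS + lba % STRIPE_SECTORS
    return actuator, internal

print(route(0))      # -> (0, 0)
print(route(256))    # -> (1, 0)
print(route(512))    # -> (0, 256)
```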
The pictures that I saw showed 10 platters, with the read/write heads for the top five platters connected to one actuator and the heads for the bottom five connected to the other, instead of all ten heads connected together. Like having two hands with five fingers each instead of one hand with ten fingers.
It's a very significant space and power saving compared to a RAID enclosure, and a speed advantage compared to a single disk drive.
From the article: "In 1996, Conner Peripherals was acquired by Seagate."
Maybe the multi-head ideas came from there. Or at least the related patents, so they don't have to worry about some troll coming after them because someone had patented the obvious idea of speeding up drives with multiple heads.
"Chinooks had two heads per surface. "
I must admit, from the initial headline, this is what I was expecting. But considering the tolerances on modern HDDs, I suspect two heads accessing the same platter for read/write operations might be a bit trickier than the lower density data tracks of the past.
I wonder if anyone is looking at one head per track? E.g. a bar across the platter with "heads" stacked along it, similar to old drum storage or flatbed scanner technology.
When I heard about this a few days ago I assumed the same. When I saw the claim in the Reg article about reducing cost I was thinking "how in the world can having two actuators and sets of heads reduce cost?" It makes sense, though: if each actuator can access only half the surfaces, you can test in half the time, and presumably that makes up for the cost of having the second actuator.
And the next version would be the Monty Python triple-headed giant that argues with itself so long the files run away? =-)p
I just want a 30TB HDD of my own so I can move all my Youtube Hentai porn off my main storage array...
*COUGH*
I, uhhhh, I mean move all my copies of *nix ISOs to better places! Yeah, that's what I meant. Absolutely no porn at all, nope a nope a nope.
*Wanders away whistling innocently*
If I am reading the article correctly (and it appears from the existing comments that I am not the only one noticing this), the two head assemblies serve distinct areas (platter sets?) of the drive. So random IOPS are only boosted to the extent they are evenly distributed between the two "logical" (virtual? semi-conjoined?) drives. Enforcing that means they are not exactly "random".
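Back-of-envelope on that: if a fraction p of the random requests lands on one half, the busier actuator is the bottleneck, so the speedup over a single actuator is only 1/max(p, 1-p):

```python
# Speedup of a dual-actuator drive over a single actuator when a
# fraction p of random requests lands on one half: the busier
# actuator bottlenecks, so speedup = 1 / max(p, 1 - p).
for p in (0.5, 0.6, 0.7, 0.9, 1.0):
    print(f"p = {p:.1f}: speedup = {1 / max(p, 1 - p):.2f}x")
# 0.5 -> 2.00x (perfectly even), 1.0 -> 1.00x (all traffic on one half)
```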
Compare and contrast to the IBM 350 (RAMAC)
https://www.ibm.com/ibm/history/exhibits/storage/storage_350.html
which could be had with two access mechanisms, each capable of reading/writing any sector on the disk. This option was introduced in 1958, and the last shipments were in 1961 (from the article cited, so you don't _have_ to read it).
Note this was for "one of the last vacuum tube based systems" from IBM.
The follow-on 1405 also (IIRC) had optional dual access mechanisms. Wikipedia
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_1405
says "one to three" access arms.
Hmm, I'm sure I can recall one manufacturer selling 3 1/2" drives back in the 90s with two sets of heads - because they'd reached the point where moving one set of heads about meant that they couldn't saturate the SCSI bus on random I/O loads.
Not that I could afford the drives, or a system capable of fully loading it :-(
Clearly a case of right, it's been long enough now, no one will remember the last time it was done - we can get away with calling this a new idea now !
This was an obvious idea since hard disk drives were created. Why didn't the massive performance improvement from having two complete sets of heads, i.e. zero track-to-track delay (taking rotational latency into account) for larger files, plus better performance generally, ever become a thing? Or did it, and it just never made it to user-level components?
Because having two COMPLETE sets of heads doubled the cost of the most expensive component, and increased power draw. It was tried, but never succeeded.
It probably only became feasible now due to the increasingly long time it takes to read and write a full drive's worth of data, due to the continual increase in the number of tracks per platter. That meant testing times got longer and longer, and it sounds like Seagate decided that with a modification that each actuator carries only half the heads (keeping the cost of heads per drive constant) the cost of the extra actuator was less than the money saved by halving testing time. i.e. they didn't do it for performance reasons, they did it for cost reasons.
The drive has four corners, so there should be room for four actuators. Maybe the "decreased cost" thing would no longer hold there, as adding one actuator saves 50% of the original testing time, while adding two more saves only another 25% of it. But even if it cost more, another doubling of IOPS and throughput would probably be worth it at a modest premium.
Such a large percentage of the HDD market is used in RAID parity configurations these days, where rebuild times get longer and longer as capacity increases, that rebuilding 4x faster will be necessary down the road, when we get to the promised 50 to 100 TB drives, just to keep rebuild time relatively constant. Single parity is already untenable in most situations due to the length of rebuilds; even double parity is problematic in some situations.
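Rough rebuild arithmetic (the throughput figures are my own illustrative guesses, not vendor specs): rebuild time is roughly capacity divided by sustained write rate.

```python
def rebuild_hours(capacity_tb, mb_per_s):
    """Hours to write a full drive's worth of data at a sustained rate
    (illustrative figures, not vendor specs)."""
    return capacity_tb * 1e6 / mb_per_s / 3600

print(f"30 TB, single actuator  @  250 MB/s: {rebuild_hours(30, 250):5.1f} h")
print(f"30 TB, dual actuators   @  500 MB/s: {rebuild_hours(30, 500):5.1f} h")
print(f"100 TB, four actuators  @ 1000 MB/s: {rebuild_hours(100, 1000):5.1f} h")
```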
I would imagine you probably get more than double the IOPS from MACH.2, because with each actuator carrying only half the heads, the assemblies are lighter, which reduces seek time.