Seagate demos hard disk drive with an NVMe interface. Yup, one with spinning platters

At last week's Open Compute Project global summit, Seagate demonstrated a mechanical hard disk drive with an NVMe interface – an interface normally reserved for SSDs. The clue is right there in the name: NVM, Non-Volatile Memory. So the first question is... why? Well, one purported reason is speed. While Seagate has been …

  1. corestore

    "Intel's 3D Xpoint persistent-memory technology has immense promise, even if currently it's struggling in the market. This grade of non-volatile memory can be used literally as memory – it can be fitted into a server's DIMM slots, rendering off-board interfaces such as NVMe obsolete."

    Ye Ghods, they've invented the core store!

    1. bazza Silver badge

      Core store memory is what it is, and I expect there's a lot of software ideas from back then that still apply. Though Intel's thing probably isn't nuclear hardened like core store was.

      There was quite a lot of thought given to the idea of non-volatile main memory when HP's memristor was being heavily researched 10 years ago. It promised to be bigger than HDD, have no wear-life issues, and be faster than SDRAM, so it was a natural replacement for everything. Last I heard it was all ready to go, just waiting for Flash to run out of steam.

      One thing that would be very cool is instant-on. You'd not shut down a PC, you'd simply turn it off. Provided that CPU caches could be flushed as the last few electrons trickled out of the PSU's capacitors, the PC would be as-was when power was restored. On the other hand, a full power-on reset could take ages whilst RAM is zeroed...

      Encrypted main memory probably becomes important too, a bit like what AMD CPUs can now do.

      1. DuncanLarge Silver badge

        > On the other hand a full power on reset could take ages whilst Ram is zeroed

        All you would need to do is allocate an area of memory as a "working area" and encrypt the entire memory.

        Then when you want to start afresh, just change the keys.
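
        A minimal sketch of that crypto-erase idea, assuming AES-CTR via Python's cryptography package (the function names and buffer contents here are illustrative, not anything from the post):

        ```python
        # Sketch of "wipe by re-keying": once the old key is discarded, the old
        # ciphertext is indistinguishable from noise and never needs overwriting.
        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def new_key() -> bytes:
            return os.urandom(32)                      # fresh 256-bit AES key

        def encrypt(key: bytes, plaintext: bytes) -> bytes:
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            return nonce + enc.update(plaintext) + enc.finalize()

        key = new_key()
        working_area = encrypt(key, b"contents of the working area")

        # "Start afresh": forget the old key and mint a new one. The bytes already
        # sitting in working_area are now unrecoverable without zeroing anything.
        key = new_key()
        ```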

        1. Pascal Monett Silver badge

          But you'd still need to overwrite the now-obsolete encrypted data.

          I'm not sure there's any gain there.

          1. John Robson Silver badge

            Why overwrite it en masse, why not just overwrite on use?

            1. John Brown (no body) Silver badge

              "Why overwrite it en masse, why not just overwrite on use?"

              Because buffer and stack overflows. They won't magically go away. Programmers will be programmers and we just KNOW that sometime very soon, a program will make an assumption about some previously unused memory content, or read back a buffer longer than it should be, pulling in whatever random data was left there 6 months ago.

            2. This post has been deleted by its author

          2. DuncanLarge Silver badge

            > But you'd still need to overwrite the now-obsolete encrypted data

            Nope, why would you need to?

            The encrypted data is now useless noise as you changed the key. Thus there's no need to overwrite; it's just noise.

  2. Annihilator
    Windows

    "The clue is right there in the name: NVM, Non-Volatile Memory. So the first question is... why?"

    Well, in old-skool definitions, magnetic storage is a form of non-volatile memory.

    1. DuncanLarge Silver badge

      > Well, in old-skool definitions

      That's still the case.

  3. Version 1.0 Silver badge
    Happy

    Hardware vs Software

    "While Seagate has been promising multi-actuator hard disks for about four years now, you still can't buy them."

    This is how hardware engineering companies try to work; had they been a software company, the software would have been released and we'd all have been busy installing "updates" for the last four years. Hardware engineers like to get things working reliably before applying their devices to users.

    1. Alan Brown Silver badge

      Re: Hardware vs Software

      That's because it's hard to apply a hardware update in the field.

      This isn't just a case of "working reliably" either. The problem isn't on the head side of the arm but the coil side. Trying to have two moving electromagnet assemblies between very strong magnets that won't interact with each other - even with shielding in place - is likely an economically unsolvable problem, meaning that yes, it can be solved, but by the time they nail it SSDs will have taken over and surpassed the density/cost of HDDs.

      In the same way, MAMR/HAMR implementation delays have doomed that technology: SSDs have already overtaken it.

      1. Pirate Dave Silver badge

        Re: Hardware vs Software

        "that's because it's hard to apply a hardware update in the field"

        Don't we call that "planned obsolescence" these days?

        Your shiny tat's gone bad? Yeah, that was a bad design, but we've fixed it in the new model. Go ahead and send that to the landfill and buy this new model with more extra *Shiny*.

        (I say this as an American, since we don't have the "fit for purpose" protections like you Brits do)

      2. Version 1.0 Silver badge
        Thumb Up

        Re: Hardware vs Software

        I don't disagree, but I can remember pulling the CPU out of the computer, getting access to the main bus with my handheld wire-wrap tool and fixing an interrupt issue with a customer's PDP-11/34 that was processing video images back in the old days. Hardware fixes used to be reasonably easy, because back then most hardware was designed to be fixed if it stopped working.

    2. John Brown (no body) Silver badge

      Re: Hardware vs Software

      "Hardware engineers like to get things working reliably before applying their devices to users."

      IBM DeathStar drives.

      Iomega Zip, click of death.

      Xbox, red ring of death.

      ...and let's not even go near the automotive industry!

  4. Fazal Majid

    Server simplification, really?

    SATA ports are cheaper to provide and more plentiful than PCIe or NVMe ones (M.2 or U.2), which is why this obsolete interface has survived so long.

    1. Pascal Monett Silver badge

      I have recently upgraded my home PC. My new motherboard has an NVMe slot (if you care to check the image, it is behind the RAM and underneath the PSU connector). I have an NVMe drive waiting for it.

      Except that there is nothing that holds the card in the slot. There is no retaining device, so the card just slots in and, if the case is ever moved or jostled in any way, the card is liable to pop out.

      Very poor design in my opinion. SATA connectors have blocking pins like Ethernet, so where is the blocker on NVMe cards?

      I'm not slotting that card in until someone does something so that it can't just fall out.

      1. pLu

        No vertical M.2 bracket set in the box? But you've got two normal NVMe M.2 slots too.

        https://rog.asus.com/wp-content/uploads/2014/08/X99-Deluxe-M2-Bracket-set.jpg

      2. DuncanLarge Silver badge

        That's good to know.

        Same with the M.2 "standard" that isn't. Full of incompatibilities, uncertainties and confusion. Then you have the god-awful multiple-length design! I have some machines that could take a full-length M.2 SSD, but the standoff that some genius thinks is a nice way to take a screw to screw the SSD down was PERMANENTLY fixed in a position that was incorrect for the replacement SSD.

        I had to DREMEL the thing off!

        Yet the SATA connector is standard, compatible, well known and secure. It also tends to connect to devices that have two, yes two form factors (2.5" or 3.5") which are again well known, secure etc.

        PCIe is just as good, allowing NVMe to naturally find a home.

        Yet M.2 can't replace either. Some designer went all crazy adding wank features without actually designing a standard that can REPLACE what was before. Oh, in specs it can, but not in design.

        1. Anonymous Coward
          Anonymous Coward

          "Some designer went all crazy adding wank features without actually designing a standard that can REPLACE what was before."

          So this designer was originally on the IPv6 committee, then?

          (anon because I know I'll get roasted for DARING to speak against that sacred, over-engineered cow...)

        2. John Brown (no body) Silver badge

          "Yet the SATA connector is standard, compatible, well known and secure."

          Well, it's secure now. It didn't used to be. The first iteration of SATA plugs didn't have locking clips. They tended to work loose and even fall off over time if the plastic wasn't quite moulded right to grip. It's a mature technology. NVMe is still a stroppy teenager, as alluded to above, with weird and odd "standards" and incompatibilities.

          1. DuncanLarge Silver badge

            Yes, I know; I was betting on the chance that people will buy the better locking plugs.

            Although in practice I have never seen a non-locking SATA plug work its way loose. Ever. Ignoring cables that are too short here.

            > NVMe is still a stroppy teenager, as alluded to above

            I'm not talking about NVMe here. In fact, you can't compare NVMe and SATA, as one is a socket, the other a protocol. NVMe and AHCI are comparable, and both do an excellent job. I like NVMe; it's a very good update over AHCI, as it does what AHCI did, that is, provide a standard communication interface between a storage device and the operating system. The connector can be M.2, U.2 or PCIe. NVMe, like AHCI, OHCI etc., eliminates having specific drivers for controllers: if the controller talks NVMe, then a standard driver that talks NVMe will do.

            I'm talking about the M.2 connector vs the SATA connector. M.2 is too flexible, allowing for confusion between devices and motherboards over what is and isn't supported. For example, it is possible to have a motherboard with M-keyed M.2 sockets that will accept both a SATA and a PCIe M.2 SSD, yet that motherboard DOES NOT PROVIDE SATA to that socket. There is no key position in use that allows the MB to prevent a SATA M.2 SSD from being inserted when it isn't supported, leading to me scratching my head and delving into spec sheets and manuals to try and diagnose what looks to be a hardware fault.

            1. John Brown (no body) Silver badge

              "Although in practice I have never seen a non-locking SATA plug work its way loose. Ever. Ignoring cables that are too short here."

              I was a field engineer for a number of years and yes, reseating SATA connectors was "a thing" to fix some issues of data read/write errors or HDD not detected. I can't say I saw any that had fallen off, but quite a few where just pushing it back in properly was the "fix". Then the lockable connectors came along and those faults mostly went away. Some cheap ones tended to corrode, probably some chemical reaction between the contacts on the drive, the nasty cheap alloys used in the cheap connectors, and humidity or contaminants getting in.

              And yes, I constantly mis-remember that M.2 and NVMe are not the same thing. Thanks for clarifying that. I did, of course, mean M.2 :-)

      3. This post has been deleted by its author

      4. Sgt_Oddball

        Normally

        There should be a small screw fixed to one of the 3 or 4 different holes by the NVMe drive slot. Just remove it, insert the card, then replace the screw where the half screw hole/notch is, which should align with one of the screw holes in the motherboard.

        1. John Brown (no body) Silver badge

          Re: Normally

          Unless the screw position assumes a full length card and you want to fit a half length card and it didn't come with an adaptor plate.

      5. G40

        On my elderly Maximus VII the M.2 card is secured with Blu-Tack.

      6. John Brown (no body) Silver badge

        "I have recently upgraded my home PC. My new motherboard has an NVMe slot if you care to check the image, it is behind the RAM and underneath the PSU connector)."

        I've not bought a motherboard in years. Do current ones have multiple NVMe slots? Or do you just have to replace the SSD when you want a bigger one rather than adding to it? My current motherboards have 4 to 6 SATA connectors and all have at least 2 drives fitted.

        1. DuncanLarge Silver badge

          Many MBs now offer multiple slots. One Asus board I looked up had 2x M-keyed M.2 slots, both supporting different connection types, so you need to carefully select the slot for your SSD.

          1. John Brown (no body) Silver badge

            Thanks. I see I'll need to be especially careful if and when I next buy a board. It's either going to have to have "traditional" SATA as well as M.2 for SSD or provide at least two SSD usable M.2 slots.

    2. DuncanLarge Silver badge

      Re: Server simplification, really?

      How do I plug my BDXL writer into M.2?

      Bear in mind M.2 is a badly designed "interface". Many times at work we have had to divine why we can't replace an SSD in a Dell PC, only to realise that that specific Dell has an M.2 slot that does not support the SSD's interface.

      M.2 provides keys that are supposed to help prevent this but if you read the spec you find that whoever designed the M.2 slot had no idea what they were doing as the keys are not standard and totally open to "future use".

      M.2 slots that ONLY support PCIe, for example, are UNABLE to key themselves to prevent a SATA M.2 SSD from being inserted, leaving the user scratching their head wondering why the machine won't boot. However, a SATA-only M.2 slot will prevent a PCIe-only SSD from being inserted. Why? Why doesn't it work correctly?

      Also keep in mind that sourcing a replacement M.2 SSD and successfully determining the interface used (SATA, PCIe) is a game in itself. That M.2 SSD you have spare in the IT store cupboard, is it suitable? Good luck trying to find out. Unless it's NVMe, most product listings seem to completely avoid any accurate description of the SSD's interface! Trust me, I had to deal with this, at zero hour where "it must be up" is the mantra over the phone.

      I saw these issues with M.2 simply when reading the damn Wikipedia page, then I learned first-hand exactly how crap the spec is. I work in IT and even I am having problems with compatibility and usability of M.2, so god knows how an ordinary user upgrading a home PC will fare.

      I'd rather use SATA and NVMe plugged into PCIe. There you know exactly where you stand.

      1. pLu

        Re: Server simplification, really?

        https://www.mouser.com/pdfdocs/M2ConnectorBrochure201412181.PDF

        1. DuncanLarge Silver badge

          Re: Server simplification, really?

          I rest my case

        2. Gene Cash Silver badge

          Re: Server simplification, really?

          Page 8 of that PDF is the thing of nightmares with no fewer than 11 variations.

          And WTF is this doing marked "Company Confidential"?

      2. keith_w

        Re: Server simplification, really?

        We recently went through this with a Dell 5480 at work in order to double the storage. We looked up the specs on the Dell website, and then checked the model of the SSD that was installed. We then ordered the correct replacement and it worked well. We even managed to image the old drive to the new one successfully.

      3. J. Cook Silver badge

        Re: Server simplification, really?

        Heh. During the Netbook craze of 2008-2009, I bought a Dell Mini-9; it was the right form factor for what I wanted, did what I needed it to... until the SSD blew out on it.

        Turns out, they used a mechanical SO-DIMM style slot, and the SSD was essentially an IDE interfaced storage device. Completely closed, and I was more or less unable to get a reliable replacement for it. Shame, because the rest of it worked just fine...

      4. DS999 Silver badge

        Re: Server simplification, really?

        This is targeted at servers or arrays with hot swap drive bays, not a flimsy slot on the motherboard.

    3. Anonymous Coward
      Anonymous Coward

      Re: Server simplification, really?

      Nonsense. If you look at how (since we're talking about servers) AMD's Epyc SoCs do I/O, you will see that there are for our purposes two collections of high-ish speed I/O lanes: those that can be a SATA lane or a PCIe lane and those that can be only PCIe lanes (I'm ignoring xGMI because either you need xGMI and will have it or you don't need it). That is, there is a 1-for-1 option between creating a lane of SATA and a lane of PCIe, which can be used as an NVMe transport or to connect a chip-down end device or to provide a standard (or nonstandard) card-edge connector. There is no difference in the cost of the SoC, the socket, or the traces.

      That leaves the end device connectors, of which there are many different kinds, each with its own performance, cost, service life, lane count, mechanical attachment, and board space consumption tradeoffs. It's true that connectors can be fairly expensive, but if you want the most apples-to-apples comparison you might look at the M.2 connector you mention, because it comes in several different flavours that can support up to x4 PCIe, or x1 SATA, or either of those in the same connector (keys B and M are what you're after here). Those connectors are literally the same and there is no price difference between keying types. As a system designer building around an SP3 socket, you can provide an M.2 connector that supports only SATA, an M.2 connector that supports only PCIe, or an M.2 connector that supports either one for the exact same price. Note that supporting both requires either that you consume both PCIe and SATA lanes at the SoC or do some very unusual contortions to route the same lane(s) to both sets of pins and then let the operator choose how the serdes are configured.

      The reasons PCIe is better than SATA are numerous: the author of this piece has stated that it's faster, but has implied that PCIe is limited to 20 Gb/s, which it certainly is not; assuming he's thinking about a 4-lane interface, that has not been true for over a decade. Currently shipping devices support PCIe4, which is 16 GT/s each direction *per lane*, and typical NVMe devices on the market support 4 lanes, meaning any gen4 x4 end device (NVMe or otherwise) can, after the encoding overhead, handle about 63 Gbit/s or just shy of 8 GB/s. PCIe isn't 4x as fast as SATA; it's more than 12x faster, and will be 24x faster once PCIe gen5 devices become available in the next couple of years. This ignores the overhead of HBAs and software drivers, which only add to PCIe's advantages...
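
      A quick back-of-the-envelope check of those figures; the 128b/130b (PCIe 3.0+) and 8b/10b (SATA) encoding factors are standard numbers, not something taken from the article:

      ```python
      # Usable per-direction throughput for a PCIe link vs SATA-3.
      def pcie_gbps(gt_per_s: float, lanes: int) -> float:
          return gt_per_s * (128 / 130) * lanes      # 128b/130b encoding overhead

      gen4_x4 = pcie_gbps(16, 4)                     # ~63.0 Gbit/s
      gen5_x4 = pcie_gbps(32, 4)                     # ~126.0 Gbit/s
      sata3   = 6 * (8 / 10)                         # 6 Gb/s link, 8b/10b -> 4.8 Gbit/s

      print(f"PCIe 4.0 x4: {gen4_x4:.1f} Gbit/s (~{gen4_x4 / 8:.1f} GB/s)")
      print(f"PCIe 5.0 x4: {gen5_x4:.1f} Gbit/s")
      print(f"PCIe 4.0 x4 vs SATA-3: {gen4_x4 / sata3:.0f}x")
      ```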

      But there's much more to this than performance, especially since even multi-actuator spinning disks are still, compared with SSDs, sloths wallowing in molasses. PCIe doesn't require an additional HBA (embedded in Epyc and most current PCHs/SBs, but it still requires transistors and software to drive it), it can scale up to 16 lanes per port, and it provides a mostly reasonable collection of error-handling and hotplug mechanisms. SATA is basically still just plain old IDE from 1990, designed from the start for low-cost desktops. It has never belonged in servers and still doesn't. The real competing interface for rotating media isn't SATA, it's SAS. SAS, like SATA, requires an HBA and software to drive it (and, sadly, firmware which is invariably buggy), but it provides far more throughput than SATA and a much more robust and flexible collection of data protocols, including support for PCIe-like switched fabrics, reliable hotplug, dual-porting (like NVMe), and much more. For a lot of good reasons, PCIe is winning that battle, but SATA isn't fit to carry water for either one.

  5. Alan Brown Silver badge

    Price isn't everything

    ". In other words, comparing price per unit storage, spinning rust is still under a quarter of the price of flash media."

    Even the _CHEAPEST_ SSD media has durability(*) well in excess of spinning media at these capacities, whilst drawing 5-10% of the power when writing (1-5% when reading) and having around 400 times lower latency

    (*) When you compare SSD DWPD with HDD endurance figures you quickly realise that 0.2 DWPD is slightly better than prosumer storage drives such as Ironwolf/WD Red, and 1 DWPD essentially beats the snot out of anything with moving heads
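
    For anyone wanting to sanity-check that footnote, here is a rough worked comparison; the 8 TB capacity and the ~300 TB/year HDD workload rating are illustrative assumptions, not figures from the post:

    ```python
    # Compare SSD DWPD ratings with an HDD annual workload rating.
    capacity_tb = 8                      # assumed drive capacity
    hdd_workload_tb_per_year = 300       # assumed prosumer/NAS HDD workload rating

    def ssd_tb_per_year(dwpd: float, capacity_tb: float) -> float:
        # DWPD = full drive writes per day, sustained over the warranty period
        return dwpd * capacity_tb * 365

    print(f"HDD rating:      {hdd_workload_tb_per_year} TB/year")
    print(f"SSD at 0.2 DWPD: {ssd_tb_per_year(0.2, capacity_tb):.0f} TB/year")
    print(f"SSD at 1.0 DWPD: {ssd_tb_per_year(1.0, capacity_tb):.0f} TB/year")
    ```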

    Warranties are longer too

    The jumping-off point for SSD vs HDD is around a price factor of 5. Once you hit that point, HDD's other disadvantages outweigh the cost multiple - in simple energy+capital cost terms alone, the TCO of a system will be around the same over a 5-year period, for starters

    This is a last-ditch "Hallelujah" move by spinning media. SSD has been taking their lunch money in smaller capacities for years and waltzed into 4-8TB ranges recently. The only reason it hasn't outpaced HDD is the recent shortages of silicon keeping prices up

    1. DuncanLarge Silver badge

      Re: Price isn't everything

      > Even the _CHEAPEST_ SSD media has durability(*)

      No, you really get what you pay for.

      Trust me, we have tills with some of this "cheap" SSD stuff you mention.

      These SSDs are based on a Phison controller known for committing suicide, which is exactly what they did months after being put into the field.

      1. Alan Brown Silver badge

        Re: Price isn't everything

        "Trust me, we have tills with some of this "cheap" SSD stuff you mention."

        Not 4-8TB enterprise SSDs you don't - which is the context for this story and my comment

        Toy drives in cheap SMB toys are another matter altogether, and a good example of beancounters costing companies dearly through having few clues about what they're specifying (hint: the drives you speak of aren't rated at even 0.2 DWPD, and the controller in question isn't intended to have 24*7 uptime - it shits itself at 2^32-1 milliseconds of uptime, the infamous "49.7 days" problem)
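
        For reference, the rollover arithmetic behind that "49.7 days" figure (a 32-bit counter of millisecond uptime):

        ```python
        # A 32-bit millisecond uptime counter wraps after 2**32 ms.
        ms_per_day = 24 * 60 * 60 * 1000
        print(f"{2**32 / ms_per_day:.2f} days")   # ~49.71 days
        ```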

  6. TeeCee Gold badge

    Slight snag.

    ...ship servers with just that (NVMe) interface...

    As it requires four PCIe lanes per interface, a revolutionary development in chipsets and CPUs is a prerequisite here.

    1. Anonymous Coward
      Anonymous Coward

      Re: Slight snag.

      I'm literally about to push go on a server that only has NVMe interfaces - here's the baseboard.

    2. Anonymous Coward
      Anonymous Coward

      Re: Slight snag.

      "As it requires four PCIe lanes per interface, a revolutionary development in chipsets and CPUs is a prerequisite here."

      Where to start?

      First, it doesn't require 4 lanes. You could implement a x1 NVMe port if you wanted to and it will link up at x1 just like any other PCIe device. Doing this would be silly, but you could and any compliant device will work at reduced performance (that's still much faster and more robust than SATA while requiring less hardware and software).

      Second, that "revolutionary development" happened when AMD shipped Naples in 2017 with 128 PCIe lanes per socket, then added more flexibility with Rome to allow 2-socket platforms up to 160 lanes if one is willing to sacrifice inter-socket xGMI throughput and allowed single-socket platforms a few spare slower lanes for things like extra USB controllers or onboard NICs so they wouldn't have to use up the fast ports for that stuff. You can, and system manufacturers do, build single-image servers that support 32 surprise-hotpluggable NVMe targets each. You can even put 2 such boards in a single chassis and wire them up to the storage midplane to allow both servers to access the storage through independent NVMe ports. It is possible to do even more with PCIe switches, but they're complex and expensive and that kind of scale doesn't seem to be in demand by the mainstream server market right now.
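
      A trivial lane-budget check against the numbers in that post, assuming the typical (not mandatory) x4 link per NVMe device:

      ```python
      # How many x4 NVMe devices fit in a given PCIe lane budget?
      LANES_PER_NVME = 4
      for socket_lanes in (128, 160):
          print(f"{socket_lanes} lanes -> up to {socket_lanes // LANES_PER_NVME} x4 NVMe devices")
      ```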

      There is no reason to ship servers with SATA interfaces, never has been unless you were targeting the absolute bottom of the market where customers really just want a 1U form factor at the lowest possible price and will sacrifice literally every other attribute to get it.

  7. Roj Blake Silver badge

    "It seems these devices are due to ship to select customers in 2022, it seems."

    So it would seem.

    1. TRT Silver badge

      Re: "It seems these devices are due to ship to select customers in 2022, it seems."

      Using Seemens chips again.

  8. Old Used Programmer

    About that 3D Xpoint....

    I'd like to see them put it into a micro-SD card.

    1. Alan Brown Silver badge

      Re: About that 3D Xpoint....

      "Many a true word is spoken in jest"

      a lot of microSD storage is SSD NAND that failed QC testing (too many defects, which are mapped out/accommodated by the SD controller)

      It's entirely possible that this could yet happen

  9. Anonymous Coward
    Anonymous Coward

    I wonder if these will work in an Xbox Series X instead of the overpriced 1TB SSD drive...

  10. cjcox

    Maybe

    As newer CPUs support more direct lanes to them, the idea of many NVMe "sockets" might make some sense.

    But maybe we're not quite there yet (?)

    Having seen many things cycle, I would not be surprised to see us cycle back to a bus bridge chip again in the future... but we'll see.
