Disks with Ethernet ports? Throw in some flash and you've got yourself a HGST p-a-r-t-y

Western Digital Corp subsidiary HGST is developing Ethernet-connected drives for OpenStack users – and they won't require any application software changes, apparently. The architecture of such a product will be demonstrated by HGST at the OpenStack Summit, to take place between 12 and 16 May in Atlanta, GA. The presentation …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    What is needed...

    Is a SATA standard that actually comes close to 6GB/s instead of the 450-500MB/s seen in the best systems. I'd like to see a 5.5 GB throughput on a 6GB standard and am unaware of anyone who gets 1/6th of this on current SATA drives.

    That would beat any GB Ethernet connection.

    1. Piro Silver badge

      Re: What is needed...

      Uh, it's 6 Gigabits per second, not 6 Gibibytes per second.

      You've made a fatal mistake: you put a capital B when it's actually lower case.

      B = Byte

      b = bit

      You're overstating the numbers by a factor of eight.

  2. Anonymous Coward

    6Gb does not equal 6GB. Max theoretical speed is just shy of 600MB/s
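
    To make the correction concrete, here is a back-of-the-envelope check (illustrative arithmetic only): SATA III signals at 6Gbit/s but uses 8b/10b line coding, so only 8 of every 10 bits on the wire carry data, which lands right at the "just shy of 600MB/s" figure once protocol overhead is subtracted.

```python
# Back-of-the-envelope check (illustrative): SATA III payload bandwidth.
line_rate_bps = 6_000_000_000  # 6 Gbit/s line rate
encoding_efficiency = 8 / 10   # 8b/10b coding: 10 wire bits carry 8 data bits

payload_bits_per_s = line_rate_bps * encoding_efficiency
payload_mb_per_s = payload_bits_per_s / 8 / 1_000_000  # bits -> bytes -> MB

print(payload_mb_per_s)  # 600.0, before protocol overhead
```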

  3. This post has been deleted by its author

  4. karlp

    An Interesting Future-SAN

    I wonder if this, with some improvements, could act as an acceptable standardized SAN platform.

    If these disks were made with dual ports, and preferably PoE-powered, you could then add a few bog-standard servers with 10Gb cards to act as the "controllers" for the iSCSI/FC fabric, or even as a direct NFS/SMB filer frontend.

    Theoretically you could build a reasonable SAN with a couple of standard Ethernet switches, some servers and these Ethernet drives, which, if they all spoke a common standard, would let you stay vendor-agnostic at each individual stage.

    Nonetheless, I believe we will see some creative uses for a directly attached ethernet drive in the next few years.

    1. @hansdeleenheer

      Re: An Interesting Future-SAN

      You are missing the point of Ethernet-connected drives: the appliances that house masses of these drives would basically be switches. There is no need for a block or file header anymore, as there is straight IP connectivity to the drive. These drives are perfect for object-based backends.

      1. Roo
        Windows

        Re: An Interesting Future-SAN

        "there is no need for a block or file header anymore as there is a straight IP connectivity to the drive. these drives are perfect for object based backends."

        There is nothing stopping you from doing that with SATA already... Those drives will look just the same as the other block devices in *NIX (I've used ST01, ESDI, SCSI, iSCSI, IDE, SATA, they all look the same at the shell prompt). ;)

        The thing that bothers me about Ethernet connected drives is the power consumption aspect - modern Ethernet is designed to run over 100+m of cable/fibre. Consequently I would expect Ethernet to burn more juice to get the same job done as SATA...

        1. Roo
          Windows

          Re: An Interesting Future-SAN

          Genuinely curious: why the downvotes for pointing out that these drives will look just the same from the PoV of the user-land code talking to them?

          Or is it the speculation on power consumption that has driven you to the effort of downvoting?

          Or is it simply that you can't tolerate disagreement and wish that my post would disappear off the bottom of a very low-traffic comment section? :)

          1. Roo
            FAIL

            Re: An Interesting Future-SAN

            Hmm, no technical reason given and another downvote, so it looks like "you can't tolerate disagreement and wish that my post would disappear off the bottom of a very low-traffic comment section? :)" is the answer. Thanks for responding.

        2. Alan Brown Silver badge

          Re: An Interesting Future-SAN

          "The thing that bothers me about Ethernet connected drives is the power consumption aspect - modern Ethernet is designed to run over 100+m of cable/fibre."

          Your task for the afternoon: search Google for 802.3az, aka "Energy Efficient Ethernet".

          Power consumption at 10Gb/s is less than 1W for very short runs in any case.
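
The object-backend idea floated above can be sketched in code. This is a hypothetical illustration of the kind of key-value interface such a drive might expose instead of block addressing; the class and method names are invented, since the article does not describe HGST's actual protocol.

```python
# Hypothetical sketch of an object-style drive interface.
# "ObjectDrive", "put", "get" and "delete" are invented names for
# illustration; the article does not specify HGST's real protocol.

class ObjectDrive:
    """Toy in-memory stand-in for a key-value Ethernet drive."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        # No LBAs or sectors: the caller names the object, the drive
        # decides where the bytes physically live.
        self._store[key] = bytes(value)

    def get(self, key):
        return self._store[key]

    def delete(self, key):
        del self._store[key]


drive = ObjectDrive()
drive.put("bucket/obj-001", b"payload bytes")
print(drive.get("bucket/obj-001"))  # b'payload bytes'
```

An object store built on such drives would then address each one by IP address and object key, with replication handled by the application layer rather than a RAID controller.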

  5. Anonymous Coward

    How long...

    before one of these drives gets infected by a virus and starts sending spam,

    or encrypting the entire contents of the disk and asking for a ransom in Bitcoin?

    1. Fatman

      Re: How long...

      before one of these drives gets infected by a virus and starts sending spam,

      or encrypting the entire contents of the disk and asking for a ransom in Bitcoin?

      A sobering thought.

      I wonder what kind of security is going to be baked in?

    2. Roo

      Re: How long...

      ... about a couple of days after some prat has hooked them up to a general purpose network instead of a SAN...

      1. Nigel 11

        Re: How long...

        Which maybe points to why full migration to IPv6 isn't likely any time soon?

  6. Wallsy

    Is it cheaper in the long run?

    It sounds great in theory, but what's the cost in terms of switch ports, IP management and cabling? One SAN/NAS can host hundreds of drives and use only a handful of ports to provide access. There's also a bit of resilience built in, whereas an Ethernet-connected drive will need to be protected by application logic.

    I suppose these are aimed squarely at the bold new world of cloud ready apps, not my old-fashioned internal data centre ways.

    1. Anonymous Coward
      Anonymous Coward

      Re: Is it cheaper in the long run?

      If you have 1,000 drives, sure, that's 1,000 Ethernet ports, but you would also have needed 1,000 SAS/SATA ports inside the SAN/NAS. So while the SAN/NAS exposes fewer front-end ports, you still need ports to provide access to the other clients in the mix. Any way you put it, the drive-side port count stays the same.
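
      To put illustrative numbers on that (the figures below are invented for the example; the article gives none):

```python
# Illustrative port accounting for 1,000 drives; all figures invented.
drives = 1000

# Traditional SAN/NAS: every drive consumes a back-end SAS/SATA port
# inside the array, plus a handful of front-end ports for clients.
san_backend_ports = drives
san_frontend_ports = 8
san_total_ports = san_backend_ports + san_frontend_ports

# Ethernet-connected drives: every drive takes a switch port directly.
ethernet_ports = drives

print(san_total_ports, ethernet_ports)  # 1008 1000
```

Either way, the per-drive port count dominates; the difference is whether those ports live inside an array enclosure or on general-purpose Ethernet switches.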

  7. -tim
    Coat

    It has been heading this way all along.

    At the last Breakpoint security conference someone installed Linux on his hard drive's controller. It only crashed when it couldn't find a storage device. I think it was a cheap modern HP drive that had a dual ARM-based CPU as well as another very low-powered one. The demonstration started off by showing how easy it is to hack a drive's firmware to look for a string in a written block and then return a different sector in place of another request (as in: log that the user wanted /xyzzy.html, then return "hacker:abcdef" in place of a sector that looks like a shadow file).

    I've wondered when flash memory sticks would go to eSATA, but it looks like USB3 stole that thunder(bolt).

    Mine's the one with the unfiled patent application for adding a video controller and USB hub to a hard drive controller and calling it a PC.

  8. mvrx

    Another reason the standard should have been 10gbit/sec long ago

    It is really too bad the industry didn't move forward with 10Gbit/s Ethernet long ago; the ASICs would be so cheap by now. It is pretty sad that I can have an SSD on my main PC and an SSD on my laptop and barely get 250MB/sec after overhead, just as I was disappointed that SATA3 was a meagre 600MB/sec when it should have been 1TB/sec. If you look at the latest SSDs that include onboard RAM, your SSD can peak way beyond 600MB/sec.

    Think if we were already at SATA 3.2 speeds: SSD vendors could be loading up a couple of GB of memory on an SSD and flying burst data off like crazy. For many years I've dreamed of a hard drive with a memory controller and a SO-DIMM slot on it. An 8 or 16GB memory stick for around $150 and your drive would fly.

    1. Nigel 11

      Re: Another reason the standard should have been 10gbit/sec long ago

      10Gb/s copper as a viable upgrade from 1Gb/s would be a serious power drain, if it could be done at all.

      10Gb Ethernet over copper as standardised today is limited to 10 metres. That's enough for some server-room applications (including this one?) but not far enough for premises networking. If 100 metres at 10Gb over copper is possible at all, it would eat considerably more power than 1Gb/s (which in turn eats significantly more power than 100Mb/s; IIRC a good fraction of a watt per link).

      BTW for performance, Flash-SSD memory shouldn't be on a disk bus at all. It should be a card on the system's PCIe lanes. It's often made to look like a disk drive because that way it can supply a performance boost to existing disk-based infrastructures, but it's hardly the best way to use the flash memory.

      1. Alan Brown Silver badge

        Re: Another reason the standard should have been 10gbit/sec long ago

        "10Gb Ethernet over copper as standardised today is limited to 10 metres. "

        Que? Are you timewarping in from 2006?

        If you're using direct-attach cables then yes, but those are very low power. Beyond that you use direct-attach fibre or move to 10GBase-T.

        10GBase-T will do 55m on Cat6 or 100m on Cat6a/7, with a maximum power consumption of 5W/port (but more typically 1W/port these days).

    2. Roo

      Re: Another reason the standard should have been 10gbit/sec long ago

      "For many years I've dreamed of a hard drive with a memory controller and a SO-DIMM slot on it. 8 or 16GB memory stick for around $150 and your drive would fly."

      There have been plenty of RAID controllers that do exactly that for OSes that fail at efficient disk I/O; they are probably still on sale.

  9. Alex McDonald 1
    WTF?

    CTO Office, NetApp

    These sound remarkably like what we already have in a different form factor; CPU/network/storage that looks like a disk drive (or SSD) brick rather than a server brick or a switch brick. Or am I missing something?

    I want an impressive set of arguments before I'm convinced this is a flyer: a solid business use case, a viable technology solution, and a workable roadmap and plan. Cheaper, faster and more reliable would help too. I'm not getting any of that from the article.

    As I'm in Atlanta for the launch of these, hopefully I'll find out directly what the value/seekret soss/hype is about.

