Pimp my racks: Scale-out filer startup Qumulo bangs up its boxen, er, '4U'

Whacking up high-end capacity by 71 per cent takes scale-out filer startup Qumulo into 3PB per rack territory, in its bid to win enterprise business. Qumulo has taken its top-end QC260 box, exchanged nine of its SSDs for ten disk drives, and so upped raw capacity to 360TB. The box is a 4U enclosure and, in QC260 form, it has …
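A quick sanity check of the per-rack arithmetic (assuming a standard 42U rack; usable capacity after protection overhead will be lower than raw):

```python
# Back-of-envelope check: raw capacity per rack from 4U boxes of 360TB each.
# The 42U rack height is an assumption; the article quotes raw, not usable, TB.
RACK_U = 42
ENCLOSURE_U = 4
RAW_TB_PER_BOX = 360

boxes_per_rack = RACK_U // ENCLOSURE_U            # whole enclosures that fit
raw_pb_per_rack = boxes_per_rack * RAW_TB_PER_BOX / 1000

print(boxes_per_rack, raw_pb_per_rack)
```

Ten enclosures of 360TB gives 3.6PB raw, which is where the "3PB per rack territory" claim lands once you allow some slack for switches and overhead.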

  1. ColonelClaw

    About 10Gb networking

    I have a question:

    The whole world already has RJ-45 based network cabling installed. When we move to 10Gb, why go with SFP+ over 10GBase-T?

    What am I missing? Lack of Cat 6/7 cabling? SFP+ is better? Something else?

    I'm asking because we are about to buy some 10Gb kit.

    1. jdarville

      Re: About 10Gb networking

      The whole world has RJ45 cabling, but very few places have the CAT6a or CAT7 cabling and patch panels needed for 10GBase-T; Cat6 alone is not sufficient for 10Gb.

      Nearly every datacentre is already wired with OM3 fibre for SFP+, which is why nearly everyone will use it.

      For servers I've never seen anyone using 10GBase-T; the only use I see for it is cabling to workstations.

      If you have not already moved to 10Gb in your datacentre (how have you managed with only 1Gb?), I would not even bother now; just go straight to 100Gb for a bit more money.

      1. calmeilles

        Re: About 10Gb networking

        More practically: in a datacentre which did have CAT6a* structured cabling but did not have similar OM3 ten years ago, even five years ago SFP+ switches and NICs were expensive but readily available, while 10GBase-T switches and NICs were excruciatingly expensive and as rare as hen's teeth.

        Whatever the other benefits, fibre was the only realistic choice for a number of years.

        [ * I'm aware the CAT6a standard wasn't formally defined until 2009 ]

    2. Anonymous Coward

      Re: About 10Gb networking

      SFP+ has a power and latency advantage. Depending on your network topology and application it might not make a difference.

      Here are a couple of charts comparing the two:

    3. stephanielee

      Re: About 10Gb networking

      Pros of 10GBASE-T

      - Cheap twisted-pair cables.

      - Patch panels can be used without messing around with transceivers.

      Cons of 10GBASE-T

      - Higher power consumption.

      - People may be tempted to use substandard cabling, which hurts speed.

      - No good way to extend runs beyond 100m (though this can be somewhat mitigated by choosing switches with mostly 10GBASE-T ports plus a handful of SFP+ ports).

      - Limited choice of equipment.

      Pros of SFP+

      - Lower latency

      - Lower power consumption

      - Cheaper NICs and switches

      - More choice of connected equipment.

      - With transceivers and fiber basically any run length can be covered.

      Cons of SFP+

      - For longer runs, or runs that need to go through patch panels, you need transceivers and optical fibre. The fibre itself is cheap, but transceivers, termination, patch panels, etc. add up (apparently this is not a big deal over short distances).
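To put a rough number on the power point above, a minimal sketch (the per-port wattages are assumed ballpark figures, not vendor specs):

```python
# Rough, hedged comparison of switch-port power draw for a 48-port deployment.
# Per-port wattages below are assumptions for illustration only.
PORTS = 48
W_10GBASE_T = 3.0    # assumed ~3 W per 10GBASE-T port
W_SFP_PLUS = 1.0     # assumed ~1 W per SFP+ (DAC) port
HOURS_PER_YEAR = 24 * 365

def kwh_per_year(watts_per_port: float) -> float:
    """Annual energy for all ports, in kWh."""
    return PORTS * watts_per_port * HOURS_PER_YEAR / 1000

saving = kwh_per_year(W_10GBASE_T) - kwh_per_year(W_SFP_PLUS)
print(round(saving))  # kWh/year saved by SFP+ under these assumptions
```

Under these assumed figures the difference is several hundred kWh a year per 48-port switch, which is real money at scale but a rounding error for a handful of ports.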

  2. GlenP

    Feeling Old

    Whenever I read about these modern storage devices it makes me feel old.

    When I started we'd just got some Digital RA-81 drives, 456MB each, three to a cabinet. Now we're getting hundreds of TB in a single 4U-high enclosure.

    1. 2+2=5

      Re: Feeling Old

      > When I started we'd just got some Digital RA-81 drives, 456MB each

      Remember DECTape?

      Someone once worked out how many could fit onto a DVD. And the answer was... all of them. Every single one ever manufactured.
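For what it's worth, the arithmetic is at least plausible. A sketch, assuming roughly 300KB per DECtape (real capacity varied by format) and a 4.7GB single-layer DVD:

```python
# Back-of-envelope: how many DECtapes fit on one single-layer DVD?
# ~300 KB per DECtape is an assumed round figure, not an exact spec.
DVD_BYTES = 4.7e9
DECTAPE_BYTES = 300e3

tapes_per_dvd = int(DVD_BYTES // DECTAPE_BYTES)
print(tapes_per_dvd)
```

Fifteen thousand or so tapes per disc, so the "every single one ever manufactured" claim only needs total production to have been in the low tens of thousands.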

  3. Storage Guy

    Taking the Reg Bait

    Disclosure: I work for Dell EMC

    @Chris, what’s up with the infographic?

    Just so I understand . . . You spoke to the Qumulo Dir. of Product Management and came to the conclusion that, with the inclusion of 10TB drives, Qumulo's QSFS file system is now magically equal in scalability to the Ceph, GPFS (Spectrum Scale) and Gluster file systems, and offers far greater scale than NetApp and Isilon? (My assumption is you meant NetApp ONTAP and DDN with GPFS.)

    Currently, NetApp’s ONTAP can scale to 88PBs (or 20PBs with FlexGroup). Isilon OneFS is just shy of 100PBs. And Qumulo QSFS . . . Qumulo doesn’t publicly state their FS scalability. Nope, graph can’t be depicting file system scalability.

    Maybe you meant scale in the sense of 4U enclosure density. That metric doesn’t fit either as Isilon supports 60 drive enclosures and DDN offers an 84 drive enclosure. Qumulo’s QC360 is a 40 drive enclosure (4 SSD, 36 HDD).

    Ceph and Gluster are software, so it can't be the number of nodes you're graphing.

    I give up on trying to make sense of the scalability axis.

    Let’s shift our gaze to the relative positioning of Qumulo in Enterprise use cases – right there with NetApp and Isilon. The general assumption is the Enterprise requires features like replication, DR/HA, multi-tenancy, audit, WORM compliance, encryption, mirroring, quotas, snapshots, etc. Apparently, your idea of features required by the Enterprise is limited to snapshots as that’s the only available feature of QSFS.

    Maybe it’s an info-free graphic?

    1. BingoBilly

      Re: Taking the Reg Bait

      No dog in this fight, but I did hit the Qumulo booth once... The thinking seemed to be that because QSFS was written specifically for modern server architecture, it could theoretically scale to multiple exabytes.

      Seems spurious... could that be true?
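As a rough feel for what "multiple exabytes" would mean in practice, a sketch assuming 360TB raw per node (the top-end box from the article):

```python
# Hedged sketch: how many nodes would 1 EB raw take at 360 TB raw per node?
EB_IN_TB = 1_000_000   # decimal units: 1 EB = 1,000,000 TB
TB_PER_NODE = 360      # raw capacity of the article's top-end 4U box

nodes_for_one_eb = -(-EB_IN_TB // TB_PER_NODE)   # ceiling division
print(nodes_for_one_eb)
```

Call it roughly 2,800 nodes per exabyte of raw capacity; whether any cluster software coordinates that many nodes gracefully is the real question, not the address space of the file system.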
