Witness the future of Brocade

At a presentation to analysts this week, Brocade talked about a growing Fibre Channel market, an expanded HBA product line, new fabric core capabilities such as encryption, replication and deduplication, and FCoE and services. There was no mention of FANs (File Area Networks), but file virtualization was discussed. The …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Stop

    FCoE - the biggest con of all

    I've yet to hear anyone explain the value prop around FCoE.

    1. Cost? Just as expensive per port as Fibre Channel.

    2. Aggregation? Nope - you'll still need dedicated FCoE ports/networks, which cannot be combined with TCP/IP ports.

    3. Simplicity? Definitely not - this will always be more complex than iSCSI.

    And the competition?

    FC8 - definitely the choice for the existing FC4 crowd.

    10Gb iSCSI - definitely the choice for those looking at iSCSI and wanting a bit more oomph.

    Anyone want to refute this and argue to the contrary?

  2. Jeff
    Unhappy

    iSCSI - the second biggest con...

    iSCSI barely delivers half the performance of 1Gb Fibre Channel unless you're running it on 10G Ethernet, which is just as expensive as most FC solutions, if not more so.

  3. Chris Cox

    Con?

    Sure.. I'll dispute a bit.

    You're taking a pro-iSCSI stand. That's pretty obvious.

    FCoE is a natural extension of FC, unlike iSCSI. iSCSI is a competing technology, not a migration or upgrade technology. So while it's nice to SAY that FCoE is a "con" if you have a bias towards iSCSI (which is something else entirely), the fact is that FCoE is a better path for FC users should they choose to migrate away from dedicated fibre and use a converged network in the future.

    10Gb iSCSI... now that's somewhat of a con job isn't it? Do you even get full gigabit potential out of today's iSCSI? Sure, you can do aggregation and talk in terms of multiple hosts and such to prove that iSCSI works today. But AFAIK iSCSI is pretty poor today host-to-SAN, and I don't expect to see 10Gb performance when folks move to 10Gb either. And while we're on the topic of 10Gb, you do understand the run-length limitations of 10Gb copper, right? All I'm saying is that a fairly large number of folks could find 10Gb expensive to deploy, because the 15m run length of copper CX4 means moving to fibre anyway.

    My current SAN units SATURATE a 4Gb line... that's a SINGLE host talking to the SAN. Good SAN arrays can easily take full advantage of 4Gb and 8Gb today... something that iSCSI simply doesn't allow even at the lowly 1Gb speeds. Is that worth something? I think so. You can knock Fibre Channel today because of its cost... certainly true. You do pay for the ability to get full performance (with or without aggregation). If 100MB/sec floats your boat and you don't need more than that, I say iSCSI is going to work fine for you. If you need 400MB/sec+, then I KNOW you'll be better off with FC.
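    For what it's worth, those round numbers fall straight out of the link encodings: FC up to 8GFC and gigabit Ethernet both use 8b/10b, so usable payload is roughly line rate x 0.8 per direction, before protocol overhead. A quick back-of-the-envelope sketch in Python (nominal figures, not measurements):

        # Payload bandwidth from line rate: 8b/10b means 8 usable bits per 10
        # transmitted, then divide by 8 bits per byte. Per direction, nominal.
        RATES_GBAUD = {"1GFC": 1.0625, "4GFC": 4.25, "8GFC": 8.5, "GbE": 1.25}

        for name, gbaud in RATES_GBAUD.items():
            mb_s = gbaud * 1e9 * (8 / 10) / 8 / 1e6
            print(f"{name}: ~{mb_s:.0f} MB/sec")
        # -> 1GFC ~106, 4GFC ~425, 8GFC ~850, GbE ~125 (less TCP/IP + iSCSI
        #    overhead in practice, hence the ~100MB/sec figure above)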

    Do you know how much 10Gb Ethernet ports cost? 10Gb Ethernet network cards? Sure, it's early on... but at least today, if 1Gb is a joke and you believe 10Gb to be better than FC, I think you'll find that 10Gb iSCSI likely performs comparably to 4Gb Fibre Channel... and I think you'll find the cost favors FC in that case. And... if you HAVE to move to 10Gb fibre ANYHOW, well... clearly FC holds the advantage in that case.

    There are other considerations, though. The ability to route iSCSI can be a very interesting thing, so that also needs to be kept in mind. However, SANs, whether iSCSI or FC, are fragile; I'm not sure adding routing to the picture is necessarily a wise idea.
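    To make the routing point concrete: iSCSI is just SCSI over ordinary TCP (the IANA-registered target port is 3260), so it crosses IP subnets and WAN links like any other TCP traffic, which FC and FCoE cannot. A minimal reachability sketch in Python; the portal address is a placeholder:

        # An iSCSI target portal is reachable anywhere IP routes. 192.0.2.10
        # is a documentation/placeholder address, not a real target.
        import socket

        PORTAL = ("192.0.2.10", 3260)  # hypothetical portal, standard iSCSI port

        with socket.create_connection(PORTAL, timeout=3):
            print("iSCSI portal reachable over routed IP")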

  4. Anonymous Coward
    Happy

    @ Con? - Chris Cox

    Chris,

    My understanding is that FCoE is not a converged network - it still requires dedicated adapters and dedicated networking that cannot be shared with non-storage traffic.

    Secondly, although I think that FC is here to stay, I also see growth in iSCSI from companies such as NetApp, Dell-EqualLogic, LeftHand and EMC at the low-to-medium end.

    For many customers who are file serving, doing small VMware consolidations, or running an entry-level Exchange environment or a few small SQL databases, iSCSI is a really good fit. Nice, too, for quick and remote implementations, dev/test, etc.

    My argument was for those companies that need the sustained performance and throughput of Fibre Channel today. They are currently on FC4. Why would they choose to move to FCoE instead of FC8?

    Performance is going to be very similar, but one's new and the other has wide industry acceptance and experience.

    You won't hear an argument from me against FC - I have nine years' experience of the reliability of FC. However, I'm not sure I'm prepared to move from this to an untested topology when I see so little performance gain and no cost reduction.

  5. Max

    iSCSI performance is "good enough"

    "10Gb iSCSI... now that's somewhat of a con job isn't it? Do you even get full gigabit potential out of today's iSCSI?"

    Yes, I do. Actually, with link aggregation I can get 2Gbps+ throughput with MPxIO and *SOFTWARE* drivers using ghetto onboard NICs. With the amount of juice available in today's multicore procs, you've typically got the headroom to spare for most jobs. Sure, you need a kick-ass array, but that's applicable to any storage setup.
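    If you want to sanity-check numbers like that yourself, a crude sequential-read gauge is easy to write. A minimal sketch, assuming a Linux host where the gigabit sessions are already bound into a dm-multipath device (/dev/mapper/mpatha is a placeholder name); for real numbers use fio or dd with direct I/O so the page cache doesn't flatter the result:

        # Crude sequential-read throughput check against a multipathed LUN.
        # Reads 1 GiB in 1 MiB chunks; run as root. The page cache is NOT
        # bypassed here, so treat the output as a rough gauge only.
        import os
        import time

        DEV = "/dev/mapper/mpatha"  # placeholder dm-multipath device node
        CHUNK = 1 << 20             # 1 MiB per read
        TOTAL = 1 << 30             # 1 GiB in total

        fd = os.open(DEV, os.O_RDONLY)
        t0 = time.monotonic()
        done = 0
        while done < TOTAL:
            data = os.read(fd, CHUNK)
            if not data:
                break               # end of device
            done += len(data)
        elapsed = time.monotonic() - t0
        os.close(fd)
        print(f"{done / elapsed / 1e6:.0f} MB/s "
              f"({done * 8 / elapsed / 1e9:.2f} Gbps)")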

    Frankly, I rarely see applications that require that much throughput or "performance"... If you actually look at the IO requirements of most mid-size shops, any of these technologies is slightly overkill.

    "My current SAN units SATURATE a 4Gb line... that's a SINGLE host talking to the SAN. Good SAN arrays can easily take full advantage of 4Gb and 8Gb today... something that iSCSI simply doesn't allow even at the lowly 1Gb speeds. Is that worth something? I think so. You can knock fibre channel tody because of its cost... certainly true. You do pay for the ability to get full performance (with or without aggregation). If 100MB/sec floats your boat, and you don't need more than that, I say iSCSI is going to work fine for you. If you need 400MB/sec+, then I KNOW, you'll be better off with FC."

    It's been my observation that the performance of a storage network is typically array- or disk-bound; the interconnect is rarely the problem.
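    A rough spindle-count calculation shows why. Assuming a shelf of fourteen 15k RPM drives at ~180 random IOPS each and 8KB I/Os (all illustrative rules of thumb), the array tops out far below what even one gigabit link carries:

        # Random-I/O ceiling of a small disk shelf vs. a gigabit link's
        # ~100 MB/s payload. All figures are illustrative rules of thumb.
        SPINDLES = 14          # assumed drive count in one shelf
        IOPS_PER_DISK = 180    # rough figure for a 15k RPM drive
        IO_SIZE_KB = 8         # typical database-style random read

        iops = SPINDLES * IOPS_PER_DISK
        mb_s = iops * IO_SIZE_KB / 1024
        print(f"~{iops} IOPS -> ~{mb_s:.0f} MB/s")  # ~2520 IOPS, ~20 MB/s

    Sequential streaming is a different story, but for random workloads the disks give out long before the wire does.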

    For VMware, NFS over 10GbE is the ideal solution for scalability.

    ~Max

