Brocade undone: Broadcom's acquisition completes

Broadcom has completed its acquisition of Brocade. The US$5.9bn deal was announced in November 2016 but the United States' Committee on Foreign Investment took a long, hard look at the deal because Broadcom supplies Cisco with key components for its Fibre Channel products. Buying Brocade therefore represented a chance for …

  1. Mage Silver badge

    Unrelated, really?

    "In an unrelated move, Broadcom's also announced it will move its global headquarters from Singapore to the United States"

    Well, that would certainly help lubricate the deal past the United States' Committee on Foreign Investment.

    I wonder what activities are in or will be in the HQ?

    Next up: Broadcom & Qualcomm as well as other mergers/takeovers.

  2. MJB7

    Check your irony meter

    I think the overload protection has blown.

    (You do have overload protection on the irony meter you use on El Reg, don't you?)

  3. CheesyTheClown

    Was buying FibreChannel a good deal?

    1) FC doesn’t work for hyper-converged setups: adapter firmware supports either initiator or target mode, not both. As such, you cannot host FC storage in the same chassis that consumes it.

    2) Scale-out storage, which is far more capable than a SAN, requires multicast to replicate requests across all sharded nodes. FC (even with MPIO) does not support this. As such, FC bandwidth is always limited to a single storage node. With MPIO, it is possible to run two separate SANs for improved performance, but the return on investment is very low.

    3) FC carries SCSI (or in some cases NVMe) over a fibre protocol. These are block protocols which require a huge amount of processing on multiple nodes to perform block address translations and rely on long-latency operations. In addition, by centralizing block storage, controllers have to perform massive hashing and lookups for possibly hundreds of other nodes. This is a huge bottleneck which even ASICs can’t cope with. Given the massive limitations in the underlying architecture of FC SANs, distribution of deduplication tasks is not possible.

    4) FC (even using Cisco’s MDS series) has severe distance limitations. These are controlled by the credit system, which is tied to the size of the receive buffers. Additional distance adds latency, which requires additional buffering to avoid bottlenecks. 32Gb/s over 30km of fibre probably requires 512MB of fast cache to avoid too many bottlenecks. At 50km, the link is probably mostly unused. Using FCIP can reduce the problem slightly, but iSCSI would have been better and SMB or NFS would have been infinitely better.
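    [Editor's note: the distance/credit relationship in point 4 is essentially a bandwidth-delay-product calculation — the sender needs enough buffer-to-buffer credits to keep the link full for one round trip. A rough sketch of that arithmetic follows; the ~5 µs/km fibre propagation figure and the 2112-byte full-size FC frame are illustrative assumptions, and protocol overhead is ignored.]

```python
# Rough bandwidth-delay-product estimate for a long-haul FC link.
# Assumptions (illustrative, not vendor specs): ~5 us/km one-way
# propagation in fibre, 2112-byte full-size FC frames.

def fc_buffer_estimate(gbits_per_sec: float, km: float,
                       frame_bytes: int = 2112):
    """Return (bytes in flight, buffer credits) to keep the link full."""
    rtt_s = 2 * km * 5e-6                          # round trip at ~5 us/km
    bytes_in_flight = gbits_per_sec * 1e9 / 8 * rtt_s
    credits = -(-int(bytes_in_flight) // frame_bytes)  # ceiling division
    return bytes_in_flight, credits

in_flight, credits = fc_buffer_estimate(32, 30)
print(f"{in_flight / 1e6:.1f} MB in flight, ~{credits} buffer credits")
# → 1.2 MB in flight, ~569 buffer credits
```

    Note that switch ASICs provision per-frame credits rather than raw megabytes, so the credit count is what actually limits distance on a given port.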

    I can go on, but to be fair, unless you have incompetent storage admins, FC has to look like a dog with fleas by now. We use it mostly because network engineers are horrible at supporting SCSI storage protocols. If we dump SCSI and NVMe as long-range protocols, the problems don’t exist.

    I would however say that FC will last as long as storage admins last. Since they are basically irrelevant in a modern data center, there is no fear that people will stop using FC. After all, you can still find disk packs in some banks.

    1. SuperFrog

      Re: Was buying FibreChannel a good deal?

      Incompetent is way overboard in my mind.

      I do architect storage networks but there is a reason for FC to exist.

      Right now I see a lot of growth with NVMe-oF. Not just replacement installs but additions to hyper-converged platforms like Nutanix and SimpliVity, as these companies are finding that internal storage is not enough and external storage is more cost-effective and more flexible.

      Not every business wants, needs or is in a position to do hyper-convergence. At a hospital I recently worked with, for example, there were regulatory challenges with respect to where data is stored. In a large bank, with zettabytes of storage, having a dedicated FC network makes sense. Some OSes don't work on hyper-converged platforms. If you're a typical small to mid-size enterprise, it may or may not make sense, depending on the business.

      To be fair, the cloud is eating a lot of storage growth, and in my mind that is hyper-convergence's problem. Workloads that a company would usually put on hyper-converged platforms are good candidates for the public cloud. For other workloads you would keep on premises, most companies are doing some sort of array.


Biting the hand that feeds IT © 1998–2021