Microsoft changes the way it certifies network cards for Windows Server

Microsoft’s networking team has made a change to the way it certifies network interface cards (NICs) for use in Windows Server. As explained by team member Dan Cuomo, since the time of Windows Server 2008 Microsoft has certified NICs “based on adapter link speed which meant that any adapter of 10Gbps or higher had additional …

  1. ChoHag Silver badge


    How about we just change "your configuration is not supported, bugger off" to "it looks like your configuration might not be supported but I'm just a dumb computer so what do I know? Would you like to continue anyway although performance might be degraded?"

  2. Anonymous Coward
    Anonymous Coward


    People still run Windows Server on bare metal?

    1. Dvon of Edzore

      Re: Huh?

      No. According to the New Microsoft they are supposed to run the Hypervisor on the bare metal and the OS & Applications on Virtual Machines in Clusters so you pay for the infrastructure three times.

      Exotic use cases like actually getting shit done will be left to supercomputers running Linux (or devices running Apple iOS) with no Redmondware in sight.

  3. Anonymous Coward

    A network card is a packet shifter. Yes there are some very funky new ones that are basically full on computers.

    This smacks of wankery: "nice NIC you have there, shame if we don't rate it (lol)"

    Funky NICs generally should not have anything fancier than an iSCSI initiator built in, and you avoid even that because a decent userspace implementation closer to the upper levels of the stack works better anyway. NICs that do Ethernet frame checksums, or even encroach a little further up the stack to, say, the TCP level, can add value.

    "Microsoft’s own Storage Spaces Direct" - imagine if acceleration for this nonsense starts to get built into your NIC. That's a monopoly bingo call.

    1. HighTension

      What shocked me about Storage Spaces Direct is that they will not support a mirrored cluster if the two storage hosts are not in the same room (within 5m). We asked if they would support a cluster with a 15m separation (2 buildings on the same site with 2 dedicated fibres for cluster and a resilient topology for other VLANs) and they said "no".

  4. HighTension

    If a NIC can't push and pull packets at its rated speed across all possible workloads (not counting any kind of acceleration), surely either a) it's garbage or b) the OS is garbage and unable to use it optimally?

    Yes, there are some bad 10Gb+ NICs out there, but generally they make themselves evident pretty quickly on any workload. Intel, Mellanox and Broadcom enterprise adapters all work well enough in Linux in my experience. Or is this only about SR-IOV or PCI passthrough to Hyper-V usage?

    1. C 7

      There are pretty significant differences between the 3 vendors you mention, particularly where things like RDMA are concerned. Mellanox has done a good job at RoCEv2 for some time; Intel focused on iWARP, which no one really used much, and only recently came out with a RoCEv2 NIC (the E810). Broadcom completely sucks when it comes to RDMA, but has broad driver support across a lot of platforms (and claims to have a new NIC that can compete with Mellanox/Intel on RoCEv2 support).

      Then there are things like crypto offload engines, TCP offload, ability to channelize a QSFP28 port into 2x50 or 4x25 (Intel can do this, Mellanox can't), ability to support 100Gb in 2x50Gb-PAM4 in addition to 4x25Gb-NRZ, ability to support different types and power levels of optics, etc.
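A rough arithmetic sketch of the lane math behind that channelization (the nominal signaling rates are assumptions based on the common 100GbE lane configurations, not figures from the comment):

```python
# Rough lane arithmetic for channelizing a QSFP28 port. Rates are raw
# line rates including 64b/66b / FEC overhead -- nominal figures, assumed.

def lane_gbps(baud_gbd: float, bits_per_symbol: int) -> float:
    """Raw lane rate: symbol rate (GBd) times bits carried per symbol."""
    return baud_gbd * bits_per_symbol

# 100GbE as 4 x 25G lanes, NRZ signaling (1 bit per symbol)
nrz_total = 4 * lane_gbps(25.78125, 1)     # ~103 Gbps raw

# 100GbE as 2 x 50G lanes, PAM4 signaling (2 bits per symbol)
pam4_total = 2 * lane_gbps(26.5625, 2)     # ~106 Gbps raw

print(f"4x25G-NRZ : {nrz_total:.3f} Gbps raw")
print(f"2x50G-PAM4: {pam4_total:.3f} Gbps raw")
```

Both land at roughly 100 Gbps of payload after coding overhead; the practical difference is which signaling the NIC's SerDes and the attached optics actually support, which is exactly the compatibility trap described above.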

      And then, a lot of the 2x 100Gb NICs are only 100Gb NICs, but with 2 ports for redundancy. That's not a failure of design, it's written right in the spec sheets for the NIC. So it can push 100Gb on one port or the other, or 50Gb (maybe 60+, but not 100) on both ports simultaneously (even though the link speed is 100Gb). So you have to understand the use case and the hardware when choosing a NIC, and a lot of people don't. A lot of people just say "I need a 100Gb NIC, let me grab this Broadcom because it's cheaper" and it really might not work well for their use case.
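The aggregate limit usually comes down to host-bus arithmetic. A back-of-the-envelope sketch (assuming a PCIe 3.0 x16 slot, which many dual-port 100Gb cards of that generation use; figures are nominal):

```python
# Why a dual-port 100GbE NIC in a PCIe 3.0 x16 slot cannot run both
# ports at line rate simultaneously. Nominal figures; real NICs vary.

PCIE3_PER_LANE_GTPS = 8.0          # GT/s per lane, PCIe 3.0
ENCODING_EFFICIENCY = 128 / 130    # 128b/130b line coding
LANES = 16

slot_gbps = PCIE3_PER_LANE_GTPS * ENCODING_EFFICIENCY * LANES
print(f"Usable PCIe 3.0 x16 bandwidth: ~{slot_gbps:.0f} Gbps")

ports, port_speed_gbps = 2, 100
demand = ports * port_speed_gbps
verdict = "fits" if demand <= slot_gbps else "exceeds the slot"
print(f"Two ports at line rate need {demand} Gbps -> {verdict}")
```

The slot tops out around 126 Gbps, so 2x100Gb of demand can never reach the host regardless of what the ports negotiate on the wire, which is why the spec sheets are honest about it.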

      The fact Microsoft is acknowledging this and addressing it should probably be considered a good thing.
