AWS creates EC2 instance types tailored for demanding on-prem workloads

Amazon Web Services has created new Elastic Compute Cloud (EC2) instance types for its on-prem Outposts racks, the second generation of which was announced on Tuesday. Outposts are racks full of the same hardware AWS uses in its own datacenters and can run some of the instance types offered in the Amazonian cloud. Outposts launched …

  1. Anonymous Coward

    400G

    > ConnectX-7 400G

    A 400Gbit NIC, on one single port. (Or you can have 2x 200Gbit instead.)

    Remember SCSI? It used to be the epitome of peripheral connectivity (not management, not ease of use, but raw connectivity). SCSI grew up into SAS (Serial Attached SCSI). We're at SAS-3 now: 12Gbit per lane, 4 lanes per port, up to 16 lanes per card. That gives it 192Gbit total per card, or 48Gbit per port. SAS-4 is specified at 22.5Gbit per lane, but hasn't really arrived yet. Even that isn't enough to get full bandwidth out of a tray of NVMe disks (imagine 48 drives throwing data at 4GB/s each: 32Gbit * 48 == ~1.5Tbit/s). OTOH, you'd need more than one SAS HBA to connect that to the host anyway - you'd oversaturate a single PCIe 5.0 x16 connection (~50GB/s) trying to do so. (You'd nearly saturate four of them.)
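
    A quick back-of-the-envelope check of those numbers, as a minimal Python sketch; it uses the figures quoted above (raw line rates, not real-world throughput after protocol overhead):

```python
# Back-of-the-envelope bandwidth comparison, using the figures quoted above.
# Raw line rates only; protocol overhead is ignored.

GBIT = 1e9  # bits per second

# SAS-3: 12 Gbit/s per lane, 4 lanes per port, up to 16 lanes per card
sas3_lane = 12 * GBIT
sas3_port = 4 * sas3_lane           # 48 Gbit/s per port
sas3_card = 16 * sas3_lane          # 192 Gbit/s per card

# A tray of 48 NVMe drives, each pushing 4 GB/s (= 32 Gbit/s)
nvme_tray = 48 * 4 * 8 * GBIT       # ~1.5 Tbit/s aggregate

# PCIe 5.0 x16, taken at the ~50 GB/s figure quoted above
pcie5_x16 = 50 * 8 * GBIT           # 400 Gbit/s

print(f"SAS-3 per port:     {sas3_port / GBIT:7.0f} Gbit/s")
print(f"SAS-3 per card:     {sas3_card / GBIT:7.0f} Gbit/s")
print(f"48-drive NVMe tray: {nvme_tray / GBIT:7.0f} Gbit/s")
print(f"PCIe 5.0 x16 slots needed for the tray: {nvme_tray / pcie5_x16:.1f}")
```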

    Networking now beats locally-attached storage, with rather thick cables that give you up to 2m of reach. Wow. I'm kind of surprised that disk shelves don't use Ethernet(-like) interfaces for connectivity -- smaller, simpler, and potentially faster. Maybe SAS is lower latency, or more redundant.

    Crazy. The world is really starting to go big-iron again. Mainframes will make a return because individual, disparate servers just can't keep up.

    One fun thought: you can kind of do whatever with SAS: set up a target and an initiator as actual computers with HBA cards (not just external devices) and you really can run a network between them. It's not for the faint of heart, but it can be done - so you could get minimal-latency, high-throughput connectivity from one host to another via a SAS port, say 48Gbit, today, for the cost of a couple of cards on eBay and some time (lots...) setting it up. TBH I thought that's how InfiniBand et al. got their networking done - over something like SAS - but it seems to be another protocol.
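
    The SAS version of that trick needs specific HBAs and firmware, but the target/initiator idea itself is easy to sketch in software. Here's a minimal, hypothetical toy in Python (plain TCP rather than SAS or iSCSI, purely to show the shape of it): one process exports reads of a backing file, the other requests blocks by offset and length.

```python
# Toy illustration of the target/initiator split described above, over plain TCP.
# A sketch of the concept only: not SAS, not iSCSI, no real error handling.
import socket
import struct
import threading
import time

HOST, PORT = "127.0.0.1", 5555
BACKING_FILE = "backing.img"   # hypothetical backing store on the "target" side

def run_target():
    """Serve block reads: request = (offset, length) as two u64s, reply = raw bytes."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, open(BACKING_FILE, "rb") as disk:
            while True:
                hdr = conn.recv(16)
                if len(hdr) < 16:
                    break                      # initiator hung up
                offset, length = struct.unpack("!QQ", hdr)
                disk.seek(offset)
                conn.sendall(disk.read(length))

def read_block(offset, length):
    """Initiator side: ask the target for `length` bytes starting at `offset`."""
    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall(struct.pack("!QQ", offset, length))
        data = b""
        while len(data) < length:
            chunk = conn.recv(length - len(data))
            if not chunk:
                break
            data += chunk
        return data

if __name__ == "__main__":
    # Make a small backing file, start the target, then read a block back.
    with open(BACKING_FILE, "wb") as f:
        f.write(b"0123456789abcdef" * 256)     # 4 KiB of dummy data
    threading.Thread(target=run_target, daemon=True).start()
    time.sleep(0.2)                            # crude wait for the listener
    print(read_block(16, 16))                  # b'0123456789abcdef'
```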

    1. JohnSheeran

      Re: 400G

      You're talking apples and oranges. While Ethernet may have that kind of bandwidth, it doesn't mean that a single point on the network can use that bandwidth efficiently. Ethernet is also not very efficient, and there is a lot of overhead involved in handling Ethernet devices. That doesn't mean Ethernet is inferior, but you're comparing a network interface to a block device interface. Block device interfaces running a SCSI command set (think SAS, Fibre Channel, and even InfiniBand) usually have much higher functional performance than Ethernet, though they share some similar limitations, such as latency. Also, NVMe devices don't require a bus interface like SAS in order to be used. NVMe devices can be connected as direct PCIe devices and perform at that bus speed. They do often get connected to array controllers so they can be grouped together for other efficiencies such as RAID sets, etc., but that's not always the case.
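
      The "direct PCIe device" point is easy to see on a Linux box: each NVMe controller in sysfs is just a symlink into the PCI tree, complete with the negotiated link speed and width. A small sketch (Linux-only; assumes sysfs is mounted at /sys and the drives are PCIe-attached rather than NVMe-oF):

```python
# List NVMe controllers and the PCIe link each one negotiated, straight from sysfs.
# Linux-only sketch: assumes sysfs at /sys and PCIe-attached (not fabrics) NVMe.
import os

SYSFS_NVME = "/sys/class/nvme"

def read_attr(dev_path, name):
    try:
        with open(os.path.join(dev_path, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

if not os.path.isdir(SYSFS_NVME):
    raise SystemExit("No NVMe controllers found (or not running on Linux).")

for ctrl in sorted(os.listdir(SYSFS_NVME)):
    # Each controller's 'device' link points into the PCI device tree.
    pci_dev = os.path.realpath(os.path.join(SYSFS_NVME, ctrl, "device"))
    print(f"{ctrl}: PCI {os.path.basename(pci_dev)}, "
          f"link {read_attr(pci_dev, 'current_link_speed')} "
          f"x{read_attr(pci_dev, 'current_link_width')}")
```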

      Mainframes aren't what you think they are. Fundamentally, they use most of the same technologies in use by x86 platforms today. In fact, stop using the word "mainframe" like it means some abstract idea that is super powerful. The current "mainframe" is the IBM z16. The z16 runs processors based on the Telum platform, which operate at around 5GHz. That clock speed is meaningless, however, because as it relates to workloads it's just a pile of clock cycles that the scheduler uses. In the end it doesn't matter, because your "mainframes" use the s390x instruction set, a parallel processing platform. It's big-endian. Porting your workloads to a proprietary IBM platform makes no sense in the modern world.

      Even if it did, widely scalable networks using that amazing Ethernet bandwidth you just touted make it obsolete and unnecessary, because scale-out architectures tend to offer better performance than strict scale-up architectures unless you have a very special use case that needs massive vertical performance. Oh, wait, the mainframe sucks at that too.
