Bridgeworks reveals VMware-like tech for TCP/IP cable virty'ing

Bridgeworks has found a way to improve wide-area TCP/IP transmission of large, streamed files that compensates for packet loss, radically improving the performance of NetApp SnapMirror and SnapVault, both of which are hobbled by packet loss. It’s been working on the technology to virtualise a TCP/IP connection for several years. Here’s …

  1. Anonymous Coward

    Does the tactic take into account the way Acks are sometimes delayed?

    Some stacks only send an Ack every so often - not for every packet. How often they do is determined by the window size. If they don't receive enough data to reach that threshold, the Ack only gets generated on a time-out.

    Seen that happen on concurrent links where the counter-intuitive solution to not getting enough throughput was to reduce the receive window size - not increase it.
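    The delayed-Ack behaviour described above can be sketched as a toy model (the ACK-every-second-segment rule and the 40 ms timer are typical of RFC 1122-style stacks; real values vary by OS):

```python
# Toy model of a delayed-ACK receiver: it ACKs every second segment
# immediately; a lone segment is only ACKed when a timer expires.
# Times are in milliseconds.
def ack_times(arrivals_ms, delayed_ack_ms=40):
    """Return the times (ms) at which ACKs are emitted."""
    acks = []
    pending = 0
    timer = None
    for t in arrivals_ms:
        # A pending delayed ACK fires before this segment arrives.
        if timer is not None and timer <= t:
            acks.append(timer)
            pending, timer = 0, None
        pending += 1
        if pending == 2:              # ACK every other segment
            acks.append(t)
            pending, timer = 0, None
        else:
            timer = t + delayed_ack_ms
    if timer is not None:             # trailing lone segment
        acks.append(timer)
    return acks

# Two back-to-back segments get an immediate ACK at t=1; the lone
# third segment has to wait out the 40 ms delayed-ACK timer.
print(ack_times([0, 1, 10]))          # [1, 50]
```

    If the sender's window is too large relative to what the receiver is draining, segments bunch into these lone-segment stalls, which is one way shrinking the receive window can counter-intuitively help.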

    1. Warm Braw

      From the meagre technical information on the company's website, it seems you would need one of their appliances at each WAN connection point, so you do have to wonder why they would employ TCP at all. TCP's design didn't really envisage high-speed connections with significant latency, and if you have what is effectively a bridged solution, you might want to use an entirely different protocol, since you're under no obvious constraint to use TCP between two proprietary boxes.

      The only reason to hack around with TCP that I can imagine would be if you had corporate environments where firewalls couldn't cope with other transport protocols (including UDP), but presumably there oughtn't to be too many of those.

      1. ClaireB

        You are correct: a node (appliance or virtual instance) is required at either end of the connection. This can be point-to-point, one-to-many, many-to-one or mesh. You don't need to use TCP, but it is understood and trusted. If you use a proprietary solution you have to manage all your jitter, congestion and packet loss yourself, and that takes up CPU power and memory. By using TCP/IP we can make use of the various TCP/IP stacks and the offload capabilities of the NIC. That way we can transmit over a 1Gb Ethernet link across the US at line speed with less power and memory than you have in your average smartphone. Scalability is no issue; we already have a large telco running 4 x 10Gb lines with a modest server.

        1. Sir Runcible Spoon

          If all the file transfers across a noisy link were implemented with this technology, what are the projected bandwidth savings and would they reduce the overall packet loss on the network through reduction of re-transmission of large packets?

          I can see some people getting the benefit of this technology without having to pay for it :)

    2. ClaireB

      Yes, the technology takes delayed Acks and jitter into account. We allow TCP/IP to try to resolve the issue; if we do get a TCP/IP-level timeout, we can accommodate it in our jitter function.

  2. Preston Munchensonton

    @Warm Braw, definitely agree on the stupid design choice. Right tool for the job, etc. There's little doubt that opening and maintaining a TCP connection for every packet is the definition of overkill.

    On the plus side, it's clear that they didn't just rip off the work of Riverbed, Citrix, Cisco, Silver Peak, etc.

    1. ClaireB

      I am sorry; I think you may have misunderstood. We do not open a TCP connection for every packet. If you would like to know more, please fill in the contact form on the website and we would be delighted to explain in more detail.

  3. Crazy Operations Guy

    Solving the wrong problem

    Any rational file transfer system isn't going to send one packet at a time; rather, it would send a large blob of packets at once, wait for an ACK back on the full blob (i.e. "received packets 1-28,30-56,60-63"), and then send a new blob of packets with the missing ones from the previous batch thrown in. It's criminally inefficient when the system knows it's going to send millions of packets, but waits for an individual ACK from every single one of them...

    Almost like protocol authors have completely forgotten that UDP was a thing.
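    The blob-and-range-Ack scheme described above can be sketched as plain bookkeeping (the function names, Ack string format and batch size are illustrative, not from any real protocol):

```python
def parse_ack_ranges(ack):
    """Parse an Ack like 'received packets 1-28,30-56,60-63' into a set of packet numbers."""
    got = set()
    for part in ack.split("packets", 1)[1].split(","):
        lo, _, hi = part.strip().partition("-")
        got.update(range(int(lo), int(hi or lo) + 1))
    return got

def next_batch(total, acked, batch_size):
    """Put missing packets from earlier batches at the front of the next blob."""
    missing = [n for n in range(1, total + 1) if n not in acked]
    return missing[:batch_size]

acked = parse_ack_ranges("received packets 1-28,30-56,60-63")
print(next_batch(63, acked, 8))   # [29, 57, 58, 59]
```

    A real sender would append fresh packet numbers after the retransmits and run this over UDP with its own congestion control; this only shows the range bookkeeping.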

  4. Anonymous Coward

    Off the top of my head the classic principle for full utilisation of a link using TCP used to be quite simple - in theory. Much harder to achieve in practice.

    1) An assumption is made that any packet losses are blips that are detected immediately by the receiving end. Also that there are no underlying load-sharing links that might arbitrarily change the order of the IP packets at the receiver.

    2) The transmit data has to keep the outgoing link full for at least as long as it takes for a Selective Ack to return for any missing packet. The solicited packets are re-inserted into the outgoing data stream immediately.

    3) It has to have enough transmit buffering to hold the data until a positive Ack returns - even if several retransmits of the same packet are solicited. This is several times larger than in 2) above.

    4) It has to have enough receive buffering to hold the data until that section of data is complete. Size as per 3) above.

    5) Parallel connections are required if the bandwidth and latency of the link means that 3 & 4 approach the maximum size of the TCP receive window parameter.

    Anything else?
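    Points 2 to 5 all come down to the bandwidth-delay product. A quick back-of-the-envelope sketch (the 1 Gbit/s and 70 ms figures are example numbers, not from the article):

```python
import math

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to keep the pipe full."""
    return int(bandwidth_bps * rtt_s / 8)

link = 1_000_000_000              # 1 Gbit/s link (example figure)
rtt = 0.070                       # 70 ms round trip (example figure)
bdp = bdp_bytes(link, rtt)        # bytes in flight needed to fill the link

# Without window scaling the TCP receive window tops out at 64 KiB - 1,
# which is why point 5 ends up requiring parallel connections:
conns = math.ceil(bdp / 65535)
print(bdp, conns)                 # 8750000 134
```

    The transmit buffer in point 3 and the receive buffer in point 4 each need to be some multiple of that 8.75 MB figure to ride out retransmits.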

  5. benpiper

    The device takes a TCP connection and splits it into multiple TCP connections to avoid slow start and congestion avoidance. How does it avoid global synchronization?

    Also, why haven't the vendors (NetApp et al.) solved this problem in software?
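    On why splitting one connection into many helps at all: the steady-state throughput of a single loss-limited TCP flow is roughly MSS / (RTT * sqrt(p)) (the Mathis et al. approximation), so N parallel flows get roughly N times that before they start competing. A rough sketch with example figures:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation for steady-state TCP throughput under random loss."""
    return 8 * mss_bytes / (rtt_s * math.sqrt(loss_rate))

# Example figures, not vendor numbers: 1460-byte MSS, 70 ms RTT, 0.01% loss.
one_flow = mathis_throughput_bps(1460, 0.070, 1e-4)   # roughly 16.7 Mbit/s
flows = math.ceil(1_000_000_000 / one_flow)           # flows to fill 1 Gbit/s
print(f"{one_flow / 1e6:.1f} Mbit/s per flow; {flows} flows needed")
```

    Whether the synchronized sawtooth of that many flows causes the global synchronization the comment asks about depends on the queueing discipline in the path, which the sketch doesn't model.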
