How bizarre
I can download with a 32KB window size and 100ms RTT at well over 400kB/s. And I'm on a 4Mb link.
Maybe that's because the claim that "every packet series transmission is followed by 64ms of waiting" is utter tripe.
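A quick back-of-envelope check (my arithmetic, not from the article):

    # Throughput cap for a fixed window sent once per round trip, plus
    # the article's alleged extra 64 ms of waiting per window.
    window = 32 * 1024        # bytes
    rtt = 0.100               # seconds

    print(f"window/RTT cap:    {window / rtt / 1000:.0f} kB/s")                # ~328 kB/s
    print(f"with 64 ms extra:  {window / (rtt + 0.064) / 1000:.0f} kB/s")      # ~200 kB/s

Seeing well over 400kB/s means neither limit applies - most likely the stack negotiated window scaling, so the effective window is larger than 32KB.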
Network latency is a fact of life. There is nothing you can do about it, except join the network queue and wait. But Bridgeworks thinks it has solved the latency problem with pipelining and artificial intelligence. Can it be true? SANSlide is the product that does this and it aims to increase the speed of storage backup …
I had to check the date! TCP flow control and manipulation technologies have been around for more than a decade - remember Packeteer? Likewise, WAN optimisation via multiple TCP sessions is well understood and practised by many application technologies. WAN technologies such as MPLS, ATM and Frame Relay can be oversubscribed, which generates retransmissions; the worst case is packets arriving out of sequence and needing reassembly. So in the context of storage replication, and specifically storage Consistency Groups, the boffins have no clue.
I don't entirely see the point. If you have, for example, an FTP client and server, both with suitably configured IP stacks, they will quite happily fill a link with enormous latency. It all depends on how long you are prepared to continue sending data before seeing an ACK, and whether you have the memory to support this.
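For the curious, a minimal sketch of what "suitably configured" means here (host, port and link figures are placeholders of mine):

    import socket

    LINK_BPS = 4_000_000 / 8      # a 4 Mbit/s link, in bytes per second
    RTT = 0.1                     # 100 ms round trip
    BDP = int(LINK_BPS * RTT)     # ~50 KB must stay in flight to fill the pipe

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask for send/receive buffers of at least one bandwidth-delay product,
    # before connecting, so the window scale is negotiated on the handshake.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP)
    s.connect(("example.com", 21))   # hypothetical FTP server

The "memory to support this" is exactly those buffers: one bandwidth-delay product's worth of unacknowledged data per connection.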
Having multiple connections doesn't appear to be of benefit - unless you are trying to get around "per connection" traffic management, along the lines of ed2k and BitTorrent. On some firewalls, opening too many connections per unit time between the same pair of IP addresses might trigger some constraint mechanism.
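For illustration, the multi-connection trick amounts to something like this - split one transfer across parallel TCP streams so each gets its own window and its own share of any per-connection shaping (host and path are entirely made up):

    import http.client
    import threading

    HOST = "example.com"     # hypothetical server
    PATH = "/big.bin"        # hypothetical file
    N = 4                    # parallel TCP connections

    def fetch_range(start, end, parts, i):
        # Each thread opens its own connection and fetches one byte range.
        conn = http.client.HTTPConnection(HOST)
        conn.request("GET", PATH, headers={"Range": f"bytes={start}-{end}"})
        parts[i] = conn.getresponse().read()
        conn.close()

    head = http.client.HTTPConnection(HOST)
    head.request("HEAD", PATH)
    size = int(head.getresponse().getheader("Content-Length"))
    head.close()

    chunk = size // N
    parts = [None] * N
    threads = [threading.Thread(
                   target=fetch_range,
                   args=(i * chunk,
                         size - 1 if i == N - 1 else (i + 1) * chunk - 1,
                         parts, i))
               for i in range(N)]
    for t in threads: t.start()
    for t in threads: t.join()
    data = b"".join(parts)

Whether that wins anything over one properly sized window depends entirely on what the middleboxes along the path do per connection.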
I remember ISDN. I also remember the ISDN routers that used to "fake ACKs", rather than waiting for real TCP ACKs, to increase throughput by giving the impression that the traffic had arrived safely and could therefore be discarded from the sender's buffer. Which most of the time was safe. Most, but not all.
Isn't this the same concept twenty years later?
What happens if the fake ACK turns out to have been premature and the data really was lost in transit? The fake ACK means the data has been discarded from the sender's buffer and cannot be retransmitted. Which is fine when all you're doing is a P2P download of the latest episode of House or whatever, but may not be so fine if the data in question is important for split-site disaster-tolerant data storage.
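To make that concrete, here's a toy simulation of the failure mode (my own sketch, nothing to do with any vendor's actual design): once the spoofed ACK frees the sender's buffer, a downstream loss is unrecoverable.

    import random
    random.seed(1)

    LOSS = 0.3               # chance a segment dies after the middlebox
    N = 10                   # segments to transfer

    def transfer(spoof_acks):
        delivered = {}
        for seq in range(N):
            payload = f"block-{seq}"
            # A spoofed ACK means the sender frees its copy immediately.
            in_sender_buffer = not spoof_acks
            arrived = random.random() >= LOSS
            while not arrived and in_sender_buffer:
                # No end-to-end ACK yet, copy still held: retransmit.
                arrived = random.random() >= LOSS
            if arrived:
                delivered[seq] = payload
        return [s for s in range(N) if s not in delivered]

    print("lost with real ACKs:   ", transfer(spoof_acks=False))  # always []
    print("lost with spoofed ACKs:", transfer(spoof_acks=True))   # typically a few gone for good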
Or have I misunderstood, in my semi-senility?
Also, isn't Bridgeworks a trademark of HP (and before that, Compaq, and before that, DEC/Digital)?
http://h71000.www7.hp.com/commercial/bridgeworks/bridgeworks_index.html
It doesn't decrease latency at all - it just uses multiple TCP connections to get the effect of a larger TCP window? You know there's a setting for that... In fact, I'd be concerned about any company whose networking staff think the way to increase the TCP window is to use multiple connections - if they don't know how to configure their TCP window size properly, what other standard settings don't they know how to configure properly?
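The arithmetic behind that point, for what it's worth (figures are mine, purely illustrative):

    RTT = 0.1                       # 100 ms
    LINK = 4_000_000 / 8            # 4 Mbit/s link in bytes/s

    bdp = LINK * RTT                # bytes in flight needed to fill the link
    print(f"bandwidth-delay product: {bdp / 1024:.0f} KB")       # ~49 KB

    window = 32 * 1024              # one un-scaled 32 KB window...
    one = min(LINK, window / RTT)
    print(f"1 connection:  {one / 1024:.0f} KB/s")               # window-limited, ~320

    n = 4                           # ...versus four connections in parallel
    many = min(LINK, n * window / RTT)
    print(f"{n} connections: {many / 1024:.0f} KB/s")            # link-limited, ~488

Four 32KB windows are just a 128KB window by another name - which is exactly what window scaling gives you on a single connection.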
"The AI manager varies TCP/IP network parameters, such as window scaling, maximum concurrent session number, compression level and transmit buffer sizes, continuously. If they improve performance the changes are kept and if they don't they are not. When the product is first installed on a customer's network it is switched on and given an IP address. A self-learn mode period follows in which initial optimisation parameters are set. Then it constantly monitors and adjusts all parameters to optimise data transmission performance. Trossell says this means there is no user set-up and no user maintenance." = Knock, Knock ..... http://amanfrommars.baywords.com/2009/10/14/091014/
An INDECT Gift from PSNIRobotIQs with Squire ProgramMING ...... in a Colossal Civil CyberSpace Programs with AI Palatial Barracks Loughside.
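Stripped of the AI marketing, the description quoted above is a plain hill-climbing loop: perturb one tuning knob at a time, keep the change if measured throughput improves, revert it if not. A sketch of my reading of it (measure_throughput() is a stand-in, not Bridgeworks' code):

    import random

    params = {                      # the knobs the article mentions
        "window_scale": 4,
        "max_sessions": 8,
        "compression_level": 1,
        "send_buffer_kb": 256,
    }
    steps = {"window_scale": 1, "max_sessions": 2,
             "compression_level": 1, "send_buffer_kb": 64}

    def measure_throughput(p):
        # Stand-in for probing the real link: a made-up score that peaks
        # at an arbitrary "best" configuration.
        best = {"window_scale": 7, "max_sessions": 16,
                "compression_level": 3, "send_buffer_kb": 512}
        return -sum(abs(p[k] - best[k]) / steps[k] for k in p)

    score = measure_throughput(params)
    for _ in range(500):            # "constantly monitors and adjusts"
        knob = random.choice(list(params))
        old = params[knob]
        params[knob] = max(1, old + random.choice((-1, 1)) * steps[knob])
        new = measure_throughput(params)
        if new > score:
            score = new             # "if they improve performance the changes are kept"
        else:
            params[knob] = old      # "and if they don't they are not"

    print(params)                   # drifts towards the "best" settings

Nothing wrong with that as an approach - but it's decades-old optimisation, not artificial intelligence.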
This could improve usage efficiency, but I can see it potentially causing a headache for network operators, who almost certainly include latency as part of their overall bandwidth calculations.
Especially if there is widespread adoption of this technique by say, p2p technologies.
I'd also be interested in knowing what the processing overhead is.