RoCE or roll in the general, er, Vcinity: Thousand mile-plus RDMA makes remote editing seem local

California networking startup Vcinity said this week that its networking products shunted a petabyte of data 7,000km in under 24 hours, providing access to remote files thousands of kilometres away with a 250ms round trip time. How does Vcinity explain its hardware and software driving a link so fast? The firm said it used a …
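
For scale, a petabyte in under 24 hours implies a sustained rate of roughly 93 Gbit/s. A quick sanity check in Python (assuming a decimal petabyte and the full 24-hour window):

    # Sustained rate implied by moving 1 PB in 24 hours.
    # Assumes a decimal petabyte (10**15 bytes) and the full day.
    data_bytes = 10**15
    seconds = 24 * 60 * 60                    # 86,400 s
    rate = data_bytes / seconds
    print(f"{rate / 1e9:.1f} GB/s")           # ~11.6 GB/s
    print(f"{rate * 8 / 1e9:.1f} Gbit/s")     # ~92.6 Gbit/s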

  1. Norman Nescio

    Latency?

    the ... solution allows video producers to edit remote content across any distance as if it were local to their desktop.

    "Editors are no longer required to replicate content at multiple locations, avoiding redundant copies of the content, leading to dramatic storage efficiencies as well as improved control and security."

    Rant mode on...

    Sigh. Let's put the remote content on Mars, shall we? Or even 'just' the Moon. Round-trip delay kills anything interactive. I think the cut-off is around 100 ms, but this Stack Overflow question and its responses go into detail: stackoverflow: What is the shortest perceivable application response delay?

    What this means is that if the RTD needs to be about 100 ms, then the speed of signal propagation (usually light in optical fibre) limits the maximum distance from client to server. If you ignore any processing delays, the signal in an optical fibre travels at approximately two-thirds the speed of light in a vacuum (it depends on wavelength and type of glass, but 2/3 is close enough for a rule of thumb*), so the maximum distance between client and server before the lag is noticeable is the distance travelled by the signal in 50 ms. Light travels at near enough 3×10^8 metres per second, so 2/3 of that is 2×10^8 metres per second, or 2×10^5 metres per millisecond, which means we are looking at 50×2×10^5 metres, or 1×10^7 metres, which is 10^4 kilometres. 10,000 kilometres seems a lot, and is fine for editing within (say) the continental USA, but if your data centre is in, say, Houston, and your video editor is in, say, Soho, the great-circle distance between those two is roughly 7,800 km - and optical fibres don't follow great-circle routes, and we have not accounted for application processing delays.

    If the editor is in Seoul and the data centre is in Houston - the great-circle distance is just over 11,000 km, so your latency budget is already overspent. The application will be noticeably laggy.
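
    (If you want to poke at the numbers, here's a minimal Python sketch of the same budget arithmetic - the 2/3-of-c figure is the rule of thumb above, not an exact fibre constant, and processing delay is ignored entirely.)

        # How far away can the server be before a 100 ms round-trip
        # budget is blown? Rule of thumb: light in fibre does ~2/3 of c.
        C_FIBRE_KM_S = 2e5          # ~2e8 m/s, expressed in km/s
        RTT_BUDGET_MS = 100         # rough threshold for "feels local"

        max_distance_km = C_FIBRE_KM_S * (RTT_BUDGET_MS / 2) / 1000
        print(f"Budget: {max_distance_km:,.0f} km one way")   # 10,000 km

        # Great-circle distances quoted above; real fibre runs longer.
        for route, km in [("Houston-Soho", 7800), ("Houston-Seoul", 11000)]:
            rtt_ms = 2 * km / C_FIBRE_KM_S * 1000
            verdict = "within" if rtt_ms <= RTT_BUDGET_MS else "over"
            print(f"{route}: best-case RTT {rtt_ms:.0f} ms ({verdict} budget)")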

    So yeah. Impressive throughput. Full marks for that. Overblown marketing - 'any' distance. Only sad geeks like me take notice, and are generally not listened to when the big boss decides they want to spend big money on this wunnerful noo system. Video editors get dumped on, having to use a crap new application. My understanding is that bad things happen when you bugger up a creative's workflow.

    And breathe.

    *Financial market traders are willing to pay a lot of money to have their data routed by microwave instead of by optical fibre, as the signal propagation of microwaves through air is slightly faster than light through optical fibre, giving them a few milliseconds advantage on long connections.
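
    (Back-of-the-envelope in Python, with round numbers - roughly 1,200 km straight-line Chicago to New York, the classic trading route; real paths are longer:)

        # Microwave through air (~c) vs light in fibre (~2/3 c).
        DISTANCE_KM = 1200          # straight-line Chicago-New York, roughly
        C_AIR_KM_S = 3e5            # microwave, close to c
        C_FIBRE_KM_S = 2e5          # ~2/3 c, rule of thumb

        t_air_ms = DISTANCE_KM / C_AIR_KM_S * 1000      # ~4 ms one way
        t_fibre_ms = DISTANCE_KM / C_FIBRE_KM_S * 1000  # ~6 ms one way
        print(f"{t_fibre_ms - t_air_ms:.1f} ms saved each way, "
              f"{2 * (t_fibre_ms - t_air_ms):.1f} ms per round trip")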

  2. c1ue

    Looks offhand like a compression + UDP + error correction protocol - TCP's poor throughput over high-latency links is the big problem with long-distance internet transmission.
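
    If that's the approach, the underlying issue is the bandwidth-delay product: a plain TCP sender can keep only one window of unacknowledged data in flight, so throughput tops out at window/RTT. A rough Python illustration (figures are illustrative, not Vcinity's):

        # TCP throughput ceiling: window size / round-trip time.
        RTT_S = 0.250               # the article's 250 ms RTT

        for label, window_bytes in (("64 KiB", 64 * 2**10),
                                    ("16 MiB", 16 * 2**20)):
            max_bps = window_bytes * 8 / RTT_S
            print(f"window {label}: {max_bps / 1e6:.1f} Mbit/s max")

        # Filling ~93 Gbit/s at 250 ms RTT needs bandwidth x delay
        # bytes in flight:
        bdp_bytes = 93e9 / 8 * RTT_S
        print(f"BDP: {bdp_bytes / 2**30:.2f} GiB in flight")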

  3. Anonymous Coward

    Weird latency figures...

    So, their "leading edge" super special gear presents files stored 7,000km away as a local NAS, with 250ms latency.

    Testing just now from Australia, our low-end VM-hosted file server in France (using SSHFS for this test) is responding with a latency of ~310ms.

    Guess that 60ms less latency must be important to their audience, to cost so much more money than our $5/month VM. ;)
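
    (For anyone wanting to repeat that sort of test, a minimal Python sketch - the mount point is hypothetical, and SSHFS attribute caching will flatter the numbers unless it's disabled, e.g. with -o cache=no:)

        # Crude per-operation latency probe on a remote mount.
        import os
        import time

        MOUNT_POINT = "/mnt/remote"     # hypothetical SSHFS/NFS mount
        SAMPLES = 10

        timings_ms = []
        for _ in range(SAMPLES):
            start = time.perf_counter()
            os.stat(MOUNT_POINT)        # one metadata round trip, if uncached
            timings_ms.append((time.perf_counter() - start) * 1000)

        print(f"median stat(): {sorted(timings_ms)[SAMPLES // 2]:.0f} ms")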
