Let the compute see the data... to smash storage networking bottlenecks

Bringing compute to data sounds like a great way to bypass storage access bottlenecks, but progress is difficult because of software issues, the need to develop special hardware, and a non-x86 environment. As data volumes grow from terabytes to petabytes and beyond, the time it takes to bring data to the processors is becoming an …

  1. Androgynous Cow Herd

    Solutions desperately seeking problems

    Sub-millisecond latencies from a SAN have been easily and commercially available for close to a decade. There are corner-case applications that benefit from lower latencies, and that's where the in-memory solutions come in. Latency is solved for our generation, until the rest of the application stack catches up... CPU speeds are no higher now than they were 10 years ago, so trimming a few microseconds off data access will make precisely phuckol difference to most applications and users.
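
    As a rough back-of-the-envelope (the figures below are illustrative assumptions, not measurements), here is why shaving storage latency barely registers once the rest of the stack dominates:

        # Illustrative latency budget; all figures are assumptions
        app_stack_ms = 50.0    # time per request in the application stack
        san_ms = 0.5           # sub-millisecond SAN access
        shaved_ms = 0.1        # hypothetical ultra-low-latency local access

        before = app_stack_ms + san_ms     # 50.5 ms end to end
        after = app_stack_ms + shaved_ms   # 50.1 ms end to end
        print(f"improvement: {100 * (before - after) / before:.1f}%")  # ~0.8%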

    The marketecture fluff of claiming that a processor embedded in a disk controller can solve complex computational problems like facial recognition is smoke and mirrors - such an app would also need access to other network assets such as databases, so you have just moved a tiny bit of latency/bandwidth to another part of the stack, at the expense of reinventing the wheel and raising the acquisition cost of the endpoint devices.

    DataGravity went bust with a similar approach. It was a great solution, but it didn't really solve any problems.

    Latency isn’t a thing at the moment.

  2. MrHorizontal

    With Intel's 3D XPoint and its promise of non-volatile RAM, you will be able to use an 'in-memory database' safe in the knowledge that the data is persisted. Then, with just the equivalent of a caching mechanism, you page it out to a cheaper, bigger disk. This actually simplifies computing dramatically and is a massive paradigm shift (the right place to use that term) in computing.
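
    For the curious, a minimal sketch of the idea in Python, assuming a DAX-mounted persistent-memory file at /mnt/pmem/store.bin (the path and sizes here are hypothetical):

        import mmap
        import os

        # Hypothetical DAX-mounted persistent-memory file; path is an assumption
        PMEM_PATH = "/mnt/pmem/store.bin"
        SIZE = 64 * 1024 * 1024  # 64 MiB "in-memory" working set

        fd = os.open(PMEM_PATH, os.O_RDWR | os.O_CREAT, 0o600)
        os.ftruncate(fd, SIZE)
        buf = mmap.mmap(fd, SIZE)    # loads/stores hit persistent media directly

        buf[0:5] = b"hello"          # an ordinary memory write...
        buf.flush(0, mmap.PAGESIZE)  # ...durable once the dirty page is flushed

        # Cold ranges can then be paged out to a cheaper, bigger disk
        # (the "caching mechanism equivalent") by copying slices of buf
        # to an ordinary file.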

  3. PortlandBill

    We support NVMe on servers; the future will be HCI clusters based on these high-performance NVMe servers. The bottleneck moves to the back-end network, but that is all part of the bottleneck merry-go-round.

  4. Nimby
    FAIL

    Niche at best.

    The cost and complexity of tackling a problem that almost no one is suffering from ensures that any company that does try will fail when no one bothers to write any software to support the hardware.

    At best there will be one company that produces both the hardware and the software for each individual niche use as a single product. And even then, the cost of the niche product will outweigh the cost of inefficient COTS.

    Not all ideas can be great. If / when there is a real need / demand, the market will adapt to create the solution. As of right now, this is still a solution in (desperate) search of a problem (that does not exist).

    Frankly, you'd be better off bolting an M.2 card onto a GPU "video" card with a micro UPS (battery backup) to give it time to dump volatile memory to a reserved section of the permanent storage on power failure. That *might* be generic enough to attract enough niches to share it as a platform that it could almost survive as profitable, if you get lucky. But even then, I doubt it.
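
    A toy sketch of that dump-on-power-failure idea (Linux-only; the signal choice, buffer, and reserved path are all assumptions, not a real product's behaviour):

        import os
        import signal
        import sys

        VOLATILE = bytearray(16 * 1024 * 1024)  # stand-in for the volatile memory
        RESERVED = "/var/reserve/dump.bin"      # hypothetical reserved region

        def on_power_fail(signum, frame):
            # The micro UPS buys just enough time to make the dump durable
            with open(RESERVED, "wb") as f:
                f.write(VOLATILE)
                f.flush()
                os.fsync(f.fileno())
            sys.exit(0)

        # Many Linux UPS daemons signal impending power loss with SIGPWR
        signal.signal(signal.SIGPWR, on_power_fail)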

  5. JohnMartin

    Network latencies are rarely the bottleneck

    I'll avoid my usual war-and-peace posts here, but honestly, outside of a really small number of use cases, the network should never be a significant part of your latency. DAS approaches are generally much less efficient overall, and often slower than networked/external storage; case in point, most HDFS implementations get a fraction of the available performance out of locally attached HDD/SSD resources. The whole "move the compute to the data" approach was genius when you were dealing with 100Mbit Ethernet, but the compromises and overheads involved simply don't have a good payoff when you're dealing with 100Gbit Ethernet.
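
    A quick back-of-the-envelope makes the point (the drive figures are assumed, not benchmarks):

        # Wire speed vs. local drive speed, then and now (GB/s)
        eth_100mbit = 0.1 / 8   # ~0.0125 GB/s on the wire
        hdd_2005 = 0.1          # ~100 MB/s spinning disk, assumed
        eth_100gbit = 100 / 8   # 12.5 GB/s on the wire
        nvme_ssd = 3.5          # typical local NVMe SSD, assumed

        print(eth_100mbit / hdd_2005)   # ~0.125: locality was an 8x win
        print(eth_100gbit / nvme_ssd)   # ~3.6: the wire now outruns the drive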

    Even with storage-class memory (Optane / Z-NAND etc), the difference between local and remote access on a 100Gbit RDMA network is about 2-3 microseconds, with most of that being due to having to run through the software stack twice (once on the local side and once on the remote).
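
    Putting numbers on that (the device latency is an assumed illustrative figure; the 2-3 microsecond penalty is from the comment above):

        local_read_us = 10.0    # storage-class memory read, assumed figure
        rdma_extra_us = 2.5     # roughly 2-3 us added by the remote hop
        remote_read_us = local_read_us + rdma_extra_us

        print(remote_read_us / local_read_us)  # ~1.25x local latency
        # Most of the extra cost is running the software stack twice,
        # once on the local node and once on the remote node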

    Sometimes I wish people would stop being "forward thinking" by using approaches that solved a problem from a decade or more ago.
