1. Chris Mellor 1

    In-array compute ....

    .... is not going to fly. If networked storage takes too long to access, then bring it closer to the servers. Don't start putting servers in storage arrays; that's like putting a car's engine inside its petrol tank. If fuel takes too long to get to the engine, bring the petrol tank closer to the engine. Simple.

    In-array compute is a dead duck, a dodo, an ostrich, a bird that isn't going to fly except in small market niches where buyers are array-centric in their approach rather than server-centric. To mix the metaphors, in-storage processing is for the birds.

    Is this view pot-smoking or realistic?

    Chris.

    1. Matt Bryant Silver badge
      Happy

      Re: In-array compute ....

      Agreed, running applications on the array is not a good idea. And putting more storage in servers (such as flash) takes us back to the bad old days of DAS and its many problems - lack of redundancy; poor utilisation of disk; individual backups or swamping the network with a centralised backup; lack of centralised storage management; the DR risk of what happens when your one server (even if it is a grid across multiple servers) suffers as part of a site-wide outage; and bottlenecks in client access as they're all hitting one server. Whilst big monolithic arrays may seem dated, they still offer advantages in centralised data management, efficiencies in disk usage, ease of backup and (usually) redundancy and replication to DR sites. With the move to grid-like arrays with multiple controllers (and multiple protocol mixes such as FC, FCoE and iSCSI) the front-end bandwidth is improving, so rather than moving the engine or the petrol tank, why not just increase the number and size of the pipes between them?

      1. Denarius
        Thumb Up

        Re: In-array compute ....

        Matt, Chris,

        in general I agree for the following reasons

        (a) commoditisation of hardware does not seem to be slowing down. General purpose hardware performance is steadily improving to keep up with the more resource hungry tasks being thrown at it. Specialist kit is steadily shrinking outside of extreme high performance situations, once fads have passed.

        (b) stored data management is important for regulatory reasons as well as business continuity. SANs make this easier as Matt explained.

        (c) finally, racks and servers are now so thin that there is no room to put much disk in.

        A counter-argument of merit is that there is not much evidence that IT buyers and consumers are rational about the long-term implications of any decision, IMNSHO. This implies a significant spend on kit for a year or three before it is superseded by a rational hardware layout.

    2. PaulHavs
      Go

      But what's the difference? Servers --> Storage; or Storage --> Servers...

      "..... then bring it closer to the servers. ...."

      It will happen, not this year or next, but 5-10 years out we will have converged systems which are modern-day, open-systems-looking 'mainframes'. The technical argument is whether "servers will move into storage arrays" or whether "storage will return to servers" ... and either way the internal networking will be totally transparent.

      It's happening today in the form of the converged system stacks from all mainstream vendors - storage + servers + networks in a rack or a few racks - sized as small, medium, large for a range of guest VMs in a hypervisor of choice (choice only from some vendors!).

      Look in your lounge room at home. How many of us still have 'best of breed' component hi-fi systems? Not many, I think. Most have smart TVs with integrated DVD / BluRay / DVR / PVR connected to a surround sound system... maybe an external set-top box, but that's a transition issue!! It's all about integration and elimination of 'best of breed' components.

      Why otherwise would EMC, NetApp and Cisco be so hell-bent on getting into the converged systems market? Early days, yes, but we will have enterprises and service providers running "open systems mainframes" in the foreseeable future, before I start pushing up daisies (well, I hope so anyway!)

      Paul Haverfield

      Storage Principal Technologist, HP APJ region.

  2. Joe 48

    Buy a bigger fuel pipe or a better pump

    Why not put the servers in the storage? We do it with CIFS: rather than using a traditional file server we dish it all out direct from the NetApp, and it works well. That said, this is an example of a niche rather than the standard approach we use. It shows that flexibility is really what clients need - the option to use multiple methods depending on what the requirements are at the application layer.

    Development to allow faster connectivity has to be the way forward though. Until then I'll stick with throwing some flash cache at it. Or, to put it another way, until I can buy a bigger fuel pipe or a better pump, for now I'll just stick a small amount of extra fuel in the engine bay, ready for when I need it.

    1. Nate Amsden

      Re: Buy a bigger fuel pipe or a better pump

      The CIFS services on NetApp (not really to be confused with a server), as well as the operating systems on all of the other storage systems, aren't what EMC was talking about. EMC wanted to have a hypervisor on their system so that customers could install their own virtual machines on top of their controllers to run whatever needed low latency - databases etc.

      I'm not sure whether those VNX/Clariion systems still run embedded Windows these days. It was really strange seeing a Windows desktop on a CX700 many years ago.

      1. Joe 48

        Re: Buy a bigger fuel pipe or a better pump

        I appreciate it's not the same as putting a hypervisor on a controller, but I personally still think it's acting as a server. The controllers are basically a server themselves and are handling the file sharing just as a virtual or physical file server would. I guess my interpretation might be off a little, or I'm looking at it from too simple a view.

        Generally I'm all for new ideas - it might fit someone's needs, and choice and flexible technology are never a bad thing. I do agree that, from my perspective, it doesn't fit our needs.

  3. Nate Amsden

    I agree

    I thought I pooped on the idea in the original article's comments, but instead I wrote a blog post about it (9/2/11). I never thought servers in the array were a good idea; the CPU/memory real estate is FAR too valuable/expensive. The latency of a point-to-point Fibre Channel connection is really small - though not as small as a PCIe link. Most enterprise storage arrays don't have PCIe flash anyway; they're still likely going to talk to their SSDs through FC or SAS links.

    I speculated at the time that if EMC wanted in on the server flash market they should release some products of their own in that space -- it seems they have done that since then.

    This obviously isn't rocket science; the concept was dumb, dumb, dumb from the very beginning.

  4. Hcobb
    Thumb Down

    In memory computation

    When will your DRAM chips come with a "free" ARM processor? You need to run refresh cycles anyway, so you might as well crunch away while you do so. We know this isn't going to happen because of silicon process issues and the management difficulties.

    The same applies to "in array" processing. While it would be very helpful to have a "database accelerator" built into the array to pre-filter the results before the data gets pumped over the interface for the big join, it is not going to happen. Or if it does happen, the resulting hodge-podge will be poorly supported and quickly outrun by standard server units that simply keep delivering better value for money.
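
    To make the pre-filter idea concrete, here is a rough sketch in plain Python (my own made-up rows, predicate and function names, not any vendor's interface): the array-side step applies the predicate and column projection where the data sits, and only the reduced rows cross the interface for the host-side join.

    ```python
    # Sketch of "pre-filter before the big join": an in-array step pushes the
    # predicate down to where the data lives, so only matching rows cross the
    # interconnect; the host then joins far less data. Names are hypothetical.

    def array_side_scan(rows, predicate, columns):
        """Hypothetical in-array step: scan locally, keep only needed rows/columns."""
        return [{c: r[c] for c in columns} for r in rows if predicate(r)]

    def host_side_join(left, right, key):
        """Ordinary host-side hash join over the already-reduced result sets."""
        index = {}
        for r in right:
            index.setdefault(r[key], []).append(r)
        return [{**l, **r} for l in left for r in index.get(l[key], [])]

    # Example: ship only the 2013 orders (id, customer) instead of the whole table.
    orders = [{"id": 1, "customer": "acme", "year": 2013},
              {"id": 2, "customer": "initech", "year": 2011}]
    customers = [{"customer": "acme", "region": "EMEA"}]

    reduced = array_side_scan(orders, lambda r: r["year"] == 2013, ["id", "customer"])
    print(host_side_join(reduced, customers, "customer"))
    ```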

  5. Jim O'Reilly
    Holmes

    It's not about the computing!

    The only rationale I can see for moving servers inside storage boxes is that the storage companies want to be in the server business, and need a way to justify that.

    Otherwise, it's good old DAS, and it's been around a long while.

    It seems a bit desperate!

  6. Henry Wertz 1 Gold badge

    Is 1ms latency really a problem?

    Really, I get about 1ms latency out of plain gigabit Ethernet. I assume these use faster links, which would have even less latency. I would think the main source of slowdowns would be high utilization of the storage array, at which point having the source of utilization be inside the array instead of outside it will not help these delays at all.
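
    Some back-of-envelope arithmetic to go with that (numbers are my own assumptions, nothing measured): with a simple M/M/1-style approximation, the queueing delay inside a busy array dwarfs the fixed network hop, so removing the hop buys you very little.

    ```python
    # Rough queueing arithmetic with assumed numbers: as the array gets busy,
    # its own response time balloons and the fixed network round trip becomes
    # a rounding error.

    service_time_ms = 0.5    # assumed time for the array to serve one I/O
    network_rtt_ms = 0.1     # assumed round trip on a fast storage network

    for utilization in (0.5, 0.8, 0.95):
        # Simple M/M/1 approximation: average response time grows as 1/(1 - rho).
        array_response_ms = service_time_ms / (1.0 - utilization)
        total_ms = array_response_ms + network_rtt_ms
        share = network_rtt_ms / total_ms * 100
        print(f"utilization {utilization:.0%}: array {array_response_ms:.2f} ms, "
              f"network adds {network_rtt_ms} ms ({share:.0f}% of the total)")
    ```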

    For me, the savings of having some computations done in the storage array would be cost savings. In theory, of course, if you already have these CPUs sitting there doing nothing then using them would just put to use stuff you already have, saving money. Perhaps it's my cynicism, but I think EMC and their ilk will manage to get more money for this, not less, compared to just hooking a regular ol' server up to the storage array.

  7. cphollis
    Go

    The Other Side Of The Discussion

    So, my view is somewhat different. Chris wanted to get the pot boiling, so I'm going to help him here. But I'm going to come at it a bit differently.

    If we think about "workloads" instead of "applications" (just for a moment!), a different picture emerges. Let's start with a familiar workload: storage replication. You can run it at the server layer, or you can run it within the array. Having it "close to the data" provides some advantages, and you see plenty of array-based replication in the market.

    Now for another example: anti-virus scanning. You're free to run it on the desktop, or all of your servers -- but many people prefer to run it as close to the data as possible.

    So if we think in terms of which workloads don't use a lot of CPU, but use a lot of data, perhaps a different picture emerges? Let's see: backup, search, some forms of big data analytics, etc.

    I don't think anyone is proposing using today's storage arrays as a general-purpose compute platform. But it's not hard to see that there are certain workloads that can benefit from being very close to the data and not having to traverse a network to get their work done.

    Now for the neck-twister: in a near-term world of commodity components and software stacks -- what *exactly* is a server, and what is a storage array? They're starting to look awfully similar in many regards ...

    So the discussion morphs a bit: when is it appropriate to have dedicated roles (compute nodes, storage nodes), and when does it make sense to have nodes with combined roles (server and storage)? A familiar Hadoop cluster is an example of the latter -- it runs applications right on the storage.
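
    For readers who want the combined-role idea made concrete, here is a toy sketch (illustrative only, not Hadoop's actual API): each node stores its share of the blocks and runs the work against them locally, shipping back only a small summary rather than the raw data.

    ```python
    # Toy sketch of combined compute/storage roles: the computation runs where
    # the blocks already live, and only small partial results travel.

    class CombinedNode:
        def __init__(self, local_blocks):
            self.local_blocks = local_blocks      # the data this node stores

        def map_local(self, func):
            """Run the work on locally stored blocks; return small results."""
            return [func(block) for block in self.local_blocks]

    def reduce_results(partials):
        return sum(partials)

    # Example: count words across blocks without moving the blocks themselves.
    nodes = [CombinedNode(["the quick brown fox", "jumps over"]),
             CombinedNode(["the lazy dog"])]
    partials = [p for node in nodes for p in node.map_local(lambda b: len(b.split()))]
    print(reduce_results(partials))               # 9
    ```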

    My two cents

    -- Chuck

    chucksblog.emc.com

    1. Anonymous Coward
      Anonymous Coward

      Re: The Other Side Of The Discussion

      Don't EMC kind of already do this on VNX, or is that the next version?

      Clariion-Windows-FLARE and Celerra-Unix-DART running on commodity server hardware via a lightweight hypervisor (WindRiver), or .........

  8. Anonymous Coward
    Anonymous Coward

    It seems a bit desperate!

    What, like ViPR?

  9. FlashBod
    Happy

    Hyperconverged?

    Hello all,

    The concept is not a popular one, I notice, but I wanted to make my point anyway. I look forward to everyone's comments on this.

    With converged stacks like FlexPod, Vblock and HDS's UCP slowly becoming more popular with customers, due to severe pushes from vendors, the idea of combining compute, storage and networking has become somewhat mainstream already. The next step is already being undertaken by the likes of Nutanix, SimpliVity and Scale's HC3, who decided not to wait for mainstream vendors and are breaking open this segment for themselves. Though these guys are more or less niche players, the concept of having a balanced building block containing all components must be attractive to at least managed service providers (Nutanix's growth figures seem to show this anyway).

    For mainstream vendors to hop on this bus, would they not cut costly development cycles by adding, for starters, a hypervisor to the array? A product like HDS VSP, with its modular build and switched backend, might benefit. And I remember Violin adding ESXi to the array, although I haven't heard any more about that.

    Personally I don't think it's a smoker's reality, and it could have some mileage.

    Marcel.

  10. josh.krischer

    Currently most storage Control Units (CUs) are based on the same technology as the servers: multi-core Intel chips. In fact, the multiple cores are used much more effectively in a storage CU than in servers. Multicore technology and server virtualization bring some other developments to watch, such as Virtual SAN Appliances (VSAs)* or VNAS**, and embedded applications on storage control units. The first emulates a storage CU on a server; the second uses the storage CU to run applications, in a virtual partition or natively. Examples:

    Remote replication (RecoverPoint on VMAX 40K, 20K); drive encryption on EMC, HDS and IBM high-end subsystems; Real-time Compression on IBM Storwize V7000 and SVC.

    Emulating a CU in a server partition may be a viable solution for SMBs with limited budgets.

    Future usage of integrated applications:

    De-duplication

    Server-less, LAN-less backup

    E-discovery, searches

    Analysis

    * Pioneered by LeftHand Networks, which evolved into HP's StoreVirtual VSA; also NetApp, OnApp, Nexenta, StorMagic, and Mellanox's Storage Accelerator, a VSA product accessed over Ethernet or InfiniBand that supports DAS & SAN.

    ** SoftNAS, a Houston, Texas-based start-up

  11. Storage_Person

    Actually, yes...

    I think that servers-in-arrays does have a place, and that place is where there are relatively low requirements in terms of numbers of servers but relatively high requirements in terms of availability.

    Take a simple case of a small business that wants to run a couple of servers. Say Exchange and SQLServer, for example. Any highly available implementation of this is going to involve a fair amount of complexity in both physical and logical configuration. But storage arrays have a lot of high availability functions built in to them and so handling the 'server' side of things (which would basically be a pre-configured VM running on a lightweight hypervisor inside the storage array) would be relatively painless.

    Given the complexity (and cost) of building out some sort of SAN configuration with multiple servers and storage arrays, I can see this being a viable option for smaller companies, certainly. As to whether larger shops or service providers would be interested in them, I think it would come down very simply to whether they proved to be cheaper to manage and more reliable than the build-your-own variety.

    1. Chris Mellor 1

      Re: Actually, yes...

      This is all getting interesting and I think I'm getting polite wrist slaps from people who say the demand for in-array processing is more complex and more widespread than my simple little article says.

      Yes, okay, it is. And, yes, okay, the idea of a storage array with spare server engines running VMs looks good and sensible .... but will Cisco, Dell, HP and IBM, who own most of the server market between them, do this?

      It's okay for EMC and DDN and Violin Memory to push the in-array processing idea because it marks out their arrays as having more value, but general adoption needs a big system vendor to jump on board and, so far, none of them has.

      Chris.

      1. Storage_Person

        Re: Actually, yes...

        I think you might be looking at the wrong people to move this forward. My guess would be that you're more likely to see it being pushed by the software vendors. Microsoft and Oracle spring to mind, but for any software vendor who can build a VM which contains an image based on the storage vendor's specification, and gain high availability/clustering/failover pretty much for free, it's a very interesting proposition.

        Quite surprised that no storage vendor has taken that first step, to be honest. And I doubt that it's the type of thing that a startup would want to take on, as they would need to work pretty hard to get the larger software vendors on board. So perhaps it will just not happen through plain lack of drive from the bigger storage vendors; wouldn't be the first time...

  12. DFellinger

    Another POV

    One can always count on Chris for exciting headlines, and he certainly was quite inspired on this one!

    As the leader of the DataDirect Networks team that pioneered in-storage processing in early 2010, I would like to contribute to the discussion with these few comments.

    First and foremost, I agree that embedding processes in the storage is not meant to solve general compute problems. It is, however, meant to reduce latency for iterative processes. Just as GPGPUs are utilized to reduce cycle count, embedding applications that execute continuous data operations increases system efficiency. A file server is essentially a data-intensive process that must move data to and from a storage system to satisfy requests from network-connected clients. This process of moving data generally entails the use of a serial bus and a cache or socket layer. Regardless of the bus rate or type, there must be a protocol which dictates the bus state, the bidirectional data destination, and the error or retry management. Of course, data is generally copied to the socket before any bus transaction can be called, resulting in additional latency. If the file server is in the same memory space with the storage system, a simple page mapping can provide a parallel transfer that is effectively cached, without a copy, for migration to slower media.
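
    For anyone who wants the copy-versus-map distinction spelled out, here is a crude host-side illustration in Python (purely illustrative, with an assumed scratch file; the real in-array path is obviously not Python): an explicit read() pulls the bytes through an extra buffer copy, while mapping the pages exposes them in place.

    ```python
    # Crude copy-vs-map illustration: read() copies the data into a new buffer,
    # mmap() makes the same pages addressable directly via a page-table mapping.

    import mmap

    with open("example.dat", "wb") as f:          # assumed scratch file
        f.write(b"x" * 4096)

    # Path 1: explicit read -> the data is copied into process memory.
    with open("example.dat", "rb") as f:
        copied = f.read()

    # Path 2: map the pages -> access them in place, no intermediate copy.
    with open("example.dat", "rb") as f:
        view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        first_byte = view[0]                      # touches the mapped page directly
        view.close()

    print(len(copied), first_byte)
    ```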

    Filter processes are also iterative and good candidates for embedding. Would I rather download raw data and filter it locally, or upload the process and filter the data before network transmission? Filters like FFTs with complex time-domain functions could benefit from running without iterative network activity. Why not accommodate the entire service structure in page-mapped memory? Managing swap space may not be ideal, but compare it to a SCSI bus transaction. A great example of that is the work we do with in-storage processing for the Square Kilometre Array.
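
    And a similarly rough sketch of the filter case (toy data and a made-up threshold, not our actual code): the full sample set never leaves the storage side; only the values of interest are transmitted.

    ```python
    # "Filter before you transmit": the raw samples stay put; only the
    # interesting ones cross the network for downstream analysis.

    samples = [0.01 * (i % 7) for i in range(1_000_000)]     # assumed raw data

    def in_storage_filter(data, threshold):
        """Hypothetical embedded filter: keep only samples above the threshold."""
        return [(i, v) for i, v in enumerate(data) if v > threshold]

    to_transmit = in_storage_filter(samples, 0.055)
    print(f"raw samples: {len(samples)}, transmitted after filtering: {len(to_transmit)}")
    ```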

    Again, embedding does not solve latency in all compute applications but it does go a long way in reducing data transmission latency for data intensive applications.

    Dave Fellinger, Chief Scientist, DataDirect Networks

  13. Joe Landman

    Processing in memory by any other name, would provide as much bandwidth and scalability

    I appreciate that Chris is trying to get some comments going, and this is an important topic.

    First off, my company has been talking about and developing tightly coupled computing and storage for a long time. Far longer than has been fashionable. So we are biased in this regard, but we explain our biases. We are building some of the fastest tightly coupled systems around (pay attention to our web site soon for an example).

    Second off, no shared pipe will ever scale with capacity, unless you scale the pipe. Which you largely cannot do after installation of the pipe. So once you lay down your SAN, you are at FC8 or FC16 until you fork-lift upgrade it.

    Third off, as you get more requesters hitting that single shared pipe, guess what happens to your bandwidth. It's not pretty. You add more storage because you need the capacity. If your bandwidth doesn't scale with capacity, your data gets colder and colder. In the era of big data, streaming analytics, and massive data flows, this is not a plan that will end well for you.

    Tight coupling between storage and computing is absolutely essential for most large-scale computing and analytics ... you simply cannot get more bandwidth out of shared pipes of fixed size, compared to a distributed computation among machines with collections of massive pipes to their local storage. Whether you call this putting computing in arrays or arrays in computing doesn't matter. What matters is that designs that cannot scale, won't scale.

    Considering how fast storage and computing requirements are growing, and the ever-expanding height of the storage bandwidth wall (the size of storage divided by the bandwidth to read/write it ... it's a measure of how much time is required to read/write your capacity once), designs which don't allow you to scale bandwidth and computing power at the same time as you scale your capacity are rapidly falling by the wayside for many users, as they need to process that data in realistic periods of time. Placing the storage firmly at the network's edge exacerbates the problem, thanks to shared pipes and increasing numbers of requesters for those pipes.
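
    To put some illustrative numbers on that bandwidth wall (my own assumptions, not benchmarks): the time to read the full capacity once is just capacity divided by bandwidth, and a fixed shared pipe loses badly to local pipes that scale with the node count.

    ```python
    # Bandwidth wall = capacity / bandwidth. Assumed figures purely to show the shape.

    def hours_to_read(capacity_tb, bandwidth_gb_per_s):
        return capacity_tb * 1000 / bandwidth_gb_per_s / 3600

    capacity_tb = 500                    # assumed total capacity
    fc16_shared = 1.6                    # ~GB/s for one 16Gb FC link, shared by everyone
    nodes, per_node_local = 20, 2.0      # assumed node count and local GB/s per node

    print(f"shared pipe : {hours_to_read(capacity_tb, fc16_shared):.0f} hours to read it all")
    print(f"local pipes : {hours_to_read(capacity_tb, nodes * per_node_local):.1f} hours in aggregate")
    ```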

    We only see the problem growing worse with the rapid growth of storage capacity and the relatively fixed size of network and fabric pipes. The only way you are going to be able to process all the data you need to process is to put the processing adjacent to massive data pipes. This is what Google, Yahoo, Facebook, Linkedin, ... are doing. And most of the other folks, not at their scale, are looking to do smaller versions of this. Hadoop and other key value processing engines are implicitly this.

    Call it processing in disk or disks in processing, but it's not going away. If anything, it's accelerating.

    Joe Landman

    CEO

    Scalable Informatics

    http://scalableinformatics.com

    1. JohnMartin

      Re: Processing in memory by any other name, would provide as much bandwidth and scalability

      Joe,

      I think your comments may hold true for HPC-style applications, but in typical corporate data centers most servers utilize less than 5% of the available bandwidth of their storage network, and outside of backup events the connections to the arrays themselves are rarely above 50% - and this was back in the days of 4Gbit FC ...

      I also believe the amount of low-latency east-west bandwidth that's coming over the next few years will change a lot of things, and from a manageability point of view I suspect that storage-centric nodes and compute-centric nodes will still be popular for a variety of reasons.

      Then again, maybe I'm stuck in the corporate data-center world, where multiple 100Gbit RDMA-capable links with microsecond latencies seem like effectively limitless bandwidth.

      John Martin

      Principal Technologist - NetApp ANZ
