OpenFlow controller design killing SDN, say network boffins

OpenFlow's architecture is inefficient, and caps performance while sucking unnecessary power. That's the conclusion of a bunch of Comp Sci boffins at Australian brain box Data61 and Sydney University, who assessed four major OpenFlow controllers – NOX, Maestro, Floodlight and Beacon. Their paper is at Arxiv …

  1. Duncan Macdonald Silver badge

    Object Oriented

    It is not surprising that Object Oriented designs are less efficient - just like C++ is less efficient than Fortran when it comes to heavy duty mathematical processing.

    Using a preallocated array is going to be faster than allocating space with NEW for each packet, but current programmers have been brought up on C++ and other object oriented languages rather than the speed-oriented FORTRAN, and do not realize that the elegance of object orientation comes at a cost in processing time.

    1. P. Lee Silver badge

      Re: Object Oriented


      OO is for developer efficiency, not run-time efficiency and as later comments suggest, keeping up with ASIC-based line processing was never going to happen.

      However, I think we probably want to make sure the concepts actually work before committing them to ASICs.

      I didn't know network stacks were written in FORTRAN. Hmm.

  2. Charles 9 Silver badge

    The authors' proposal is for a new SDN controller design: "treat arriving packets with pre-allocated buffers rather than new objects".

    But the problem with pre-allocation is that you set a limit for yourself and there's always a chance the bugger gets overflowed. What then?

    1. Duncan Macdonald Silver badge

      Pre-allocated or dynamically allocated - same overflow problem

      Heaps and pre-allocated buffers both overflow - the software MUST cater for the no-buffer-available condition in either case.

      There is always a buffering limit set by the amount of memory in the system.
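      The point above can be sketched in a few lines. This is a hypothetical illustration (the `BufferPool` class and its names are mine, not from the paper): a fixed-size pool allocates every buffer up front, and the caller still has to handle exhaustion, just as it would a failed heap allocation.

      ```python
      # Hypothetical sketch: a fixed-size, pre-allocated buffer pool.
      # Both a heap and a pool can run out, so the caller must handle
      # the "no buffer available" case either way.

      class BufferPool:
          def __init__(self, slots, size):
              # all buffers allocated up front, once, instead of per packet
              self._free = [bytearray(size) for _ in range(slots)]

          def acquire(self):
              # returns None on exhaustion; the caller must drop the
              # packet or apply backpressure rather than assume success
              return self._free.pop() if self._free else None

          def release(self, buf):
              # hand the buffer back for reuse -- no new allocation occurs
              self._free.append(buf)

      pool = BufferPool(slots=2, size=1500)
      a, b = pool.acquire(), pool.acquire()
      assert pool.acquire() is None   # pool exhausted: must be handled
      pool.release(a)
      assert pool.acquire() is a      # same buffer reused, nothing new allocated
      ```

      The win over per-packet `new` is that the steady state does no allocation at all; the cost, as noted, is that the limit is fixed when the pool is sized.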

  3. SeymourHolz

    There was never any chance that general-purpose processor designs were going to compete with ASICs in switching-performance. These results should surprise no one.

    1. fibrefool

      software switching

      This isn't about software switching, it's about the control-plane. The article talks about line speed. But that's line speed relative to a server with 2 x 10GE ports. Nobody is suggesting using an SDN controller for the forwarding plane.

      If you want to look at the state of the art for general-purpose processor forwarding then take a look at … Though sure, I don't think anyone's suggesting replacing Ethernet switches with x86 servers with large numbers of 10GigE ports.

  4. Anonymous Coward
    Anonymous Coward

    SDN with a sprinkle of HW

    If SDN needs to have HW awareness then it seems to me we're right back to square one.

    Hope it was a good experiment.

  5. The Count

    Don't you just love

    Watching the new kids re-learn the lessons of the past? Ah, youth....

  6. Preston Munchensonton


    "Since SDN controllers have to deal with traffic as flows (meaning they have to remember MAC addresses so as to track conversations, compared to an Ethernet switch that only has to know which port it's forwarding traffic to), network scalability is also a big problem."

    First, Ethernet switches (bridges) inherently remember MAC addresses, or they wouldn't know which ports to forward traffic to.

    Second, can we really assume that these SDN controllers are using MAC addresses to track conversations? I would expect it to be far more likely to use source/destination IP addresses and ports, same as every other L4 traffic controller (firewall, load balancer, proxy, etc). No idea how the SDN controller could function for Internet scenarios, since it won't have MAC info from at least one of the ends.
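    That L4 approach can be sketched in a few lines. This is a hypothetical illustration of the general five-tuple technique (the names here are mine; no claim about how any particular SDN controller does it):

    ```python
    # Hypothetical sketch: tracking conversations by the L3/L4 five-tuple,
    # as firewalls and load balancers do, rather than by MAC address.
    from collections import namedtuple

    FlowKey = namedtuple("FlowKey", "proto src_ip src_port dst_ip dst_port")

    conversations = {}  # FlowKey -> packet count

    def track(proto, src_ip, src_port, dst_ip, dst_port):
        # packets sharing the same five-tuple belong to one conversation
        key = FlowKey(proto, src_ip, src_port, dst_ip, dst_port)
        conversations[key] = conversations.get(key, 0) + 1
        return conversations[key]

    track("tcp", "192.0.2.1", 49152, "198.51.100.7", 80)
    track("tcp", "192.0.2.1", 49152, "198.51.100.7", 80)  # same conversation
    track("tcp", "192.0.2.1", 49153, "198.51.100.7", 80)  # new source port, new flow
    assert len(conversations) == 2
    ```

    Note the key contains no MAC address at all, which is why this style survives crossing an L3 boundary where MAC info from the far end is unavailable.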

  7. raphaelamorim

    Those guys have only tested using toy controllers

    Could you please test with a real OpenFlow controller, like the HPE VAN SDN Controller?

    1. mr_tyu

      Re: Those guys have only tested using toy controllers

      Well, not sure that would have helped as HP advertises a performance of:

      - Maximum new flows per second (cbench): 2.3 million (single controller)

      - Maximum new flows per second (cbench): 6.5 million (3 controller team)

    2. Anonymous Coward
      Anonymous Coward

      Re: Those guys have only tested using toy controllers

      You're right. They tested toys. In our company we use the Aruba/HPE VAN SDN controller (27 switches) with SDN Narmox Spear Application and the network works fantastic.

  8. capveg

    As the author of the cbench tool used in this paper and as CTO of a company that sells commercially supported SDN controllers, I felt like I should jump in here and clarify a few things.

    First, the paper's authors are looking at a very specific subset of SDN/OpenFlow architecture called 'reactive flow control' and their results do not apply to controllers that use other techniques, e.g., proactive flow control. It's been well known for a long time that reactive flow control is a bad idea and doesn't scale - that's why none of the commercially supported SDN solutions implement it. The authors should be careful to qualify the limitations of their work so that people don't get the wrong idea. Certainly, plenty of SDN controllers have rock-solid production use and scale-out design -- happy to provide customer success stories if folks have questions about this.

    Second, when I wrote the cbench tool, I never intended it as a realistic data center workload generator, which this paper seems to believe it is. Cbench was intended to test various I/O subroutines of a controller (like the marshalling/unmarshalling speed that the authors identify). No one should ever assume that one controller design is better/more robust/more scalable than another because it comes up with a higher cbench score. Particularly once you move to the more robust proactive controller design, the cbench score becomes an irrelevant measure of controller scale.

    If the authors of the paper would like to chat more about this or in general improve the quality of their research, please feel free to reach out to me directly.

    - Rob Sherwood
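    For readers unfamiliar with the distinction Rob draws, here is a toy sketch of the mechanics (my own simplification, not any vendor's implementation): a reactive controller pays one controller round trip per new flow, while a proactive one pushes the rules ahead of the traffic.

    ```python
    # Hypothetical sketch of reactive vs. proactive flow control.
    controller_hits = 0   # counts controller round trips (packet-in events)
    flow_table = {}       # match -> action, as installed on a switch

    def reactive_forward(flow_id):
        """Reactive: a table miss punts to the controller, which
        computes and installs a rule on demand."""
        global controller_hits
        if flow_id not in flow_table:
            controller_hits += 1          # one round trip per new flow
            flow_table[flow_id] = f"port{flow_id % 4}"
        return flow_table[flow_id]

    def proactive_install(flow_ids):
        """Proactive: rules are pushed before traffic arrives,
        so no packet is ever punted to the controller."""
        for fid in flow_ids:
            flow_table[fid] = f"port{fid % 4}"

    # 1000 packets spread across 100 flows, handled reactively:
    for pkt in range(1000):
        reactive_forward(pkt % 100)
    assert controller_hits == 100   # one controller round trip per flow
    ```

    The per-new-flow round trip is exactly the code path cbench exercises, which is why its score says little about a proactive deployment where that path is never taken.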


    1. gramoli

      Thank you for your interest in our work, below is a brief answer to the comments.

      Regarding our choice of controllers, we wish we had access to the code of more SDN controllers.

      To properly conduct our evaluation we had to port the controllers to different architectures, which required us to obtain their source code. Unfortunately, many SDN controllers cannot be compared to the state of the art because their source code is not disclosed. We chose to focus on NOX (NOX-MT), Floodlight, Maestro and Beacon because they have had a significant impact on the research: for example, NOX has been cited more than a thousand times according to Google Scholar.

      The HPE VAN SDN controller was mentioned to achieve a maximum of 2.3 million flows per second on cbench for a single controller, almost an order of magnitude slower than the highest performance we observed. While there might exist other interesting controller designs, many controllers are known to be outperformed by some of the controllers we tested, or their source code is not available and their description is too high level to be correctly re-implemented.

      Regarding the choice of benchmarking tools, we focused on using cbench and profiling tools, and we agree that our resulting workloads may not reflect the behavior observed in a realistic large-scale software defined network. To our knowledge, however, there are no alternative tools available to test the performance of SDN controllers. We also believe that the benchmarking of SDN controllers is a topic of interest in itself, and we are interested in any form of collaboration with industry and academics who are working on this problem.

      Finally, we want to clarify that we acknowledged in two places in the paper that "reactive configurations are known to be very difficult to deploy in carrier grade networks", which makes this an interesting research challenge for us. This is mentioned in the introduction and Section III.D. Evaluating proactive controllers with corresponding benchmarking tools on multicore and manycore platforms could be insightful; however, existing non-OpenFlow reactive controllers, like Fastpass, that achieve high utilization and low latencies are also of interest to us.

      This paper has just been accepted for publication in the proceedings of the 41st IEEE Conference on Local Computer Networks (LCN). We hope to see you at the conference in Dubai in November to continue this discussion.

    2. fibrefool

      SDN != Reactive OpenFlow

      Agreed Rob. We've known that reactive flow control is a bad idea since long before SDN controllers were invented. Anyone remember Ipsilon Networks?

      It's also worth remembering that SDN isn't just about OpenFlow (reactive or proactive). OpenDaylight, for example, also supports OF-Config, OVSDB, NETCONF/YANG, LISP, BGP, BMP, PCE-P, CAPWAP, OPFLEX, SXP, SNMP, USC (whatever that is), SNBI, HTTP, CoAP, LACP and PCMM/COPS. So expecting it to be the fastest reactive OpenFlow controller in the world is a bit like expecting your Swiss Army knife to be a better saw than your hacksaw. The real value in "SDN" (IMHO) is in writing apps that use multiple southbound protocols (e.g. learn topology from BGP-LS and program paths using PCE-P).

  9. fakington1

    Wait a minute, now you're telling me the SiDeN ain't the future?

    I thought SDN was all Ritz and Glamour, and would nail my wife for me while making me a cuppa; now you're telling me it won't?

    Sort it out, Boffins! Do we need another "Big Talk" episode to sort this?
