WTF is... NFV: All your basestations are belong to us

Mobile network operators would have had an easier life if it wasn’t for smartphones and the flood of data traffic they initiated. Apps have led to a massive increase in the volume of data moving back and forth over phone networks - not just from users; the ads in free apps helped too - and operators are struggling to cope. And …

COMMENTS

This topic is closed for new posts.
  1. nematoad Silver badge

    If it walks like a duck...

    "That’s needed because the whole point is not to develop standards, it’s to get the whole industry lined up behind a common endeavour to innovate and solve challenges, and that includes being able to talk the same language.”"

    Sounds like they are setting standards to me.

    1. Anonymous Coward
      Anonymous Coward

      Re: If it walks like a duck...

      The telcos of all people should know how essential standardisation is - they just don't like the balls-achingly slow process involved. Just waving a magic wand and saying "we're not developing standards" isn't going to change that. If they don't make this process as rigorous as a proper standard then they'll just end up locked into the first vendor's "innovative" new product they buy into. It won't help that the telcos have in the past shown themselves to be singularly bad at specifying innovative products and services, or co-operating between themselves, so I wouldn't put a lot of faith in the specifications and use cases they're agreeing on.

      1. Tom 7

        Re: If it walks like a duck...

        Standards are piss easy to work out. I would bet the useful ones are mostly available already, if not all.

        It's the ownership of the same that causes the problem - or more accurately the ability to sue someone else claiming you own the bleeding obvious.

        As I said - standards are piss easy to work out - the hard bit is getting the lawyers out of the way.

        1. Khaptain Silver badge

          Re: If it walks like a duck...

          Lawyers and anyone even remotely connected to Finance.

      2. DonaldC

        Re: If it walks like a duck...

        Just to be clear, when I said that the NFV ISG is not developing standards, I did not mean to imply that standards are not important. To the contrary, standards are vitally important for telecommunications networks (and just about everything else) to be interoperable. In our charter we are very clear that we will re-use existing standards where they are applicable, and work with the relevant standards bodies to address gaps.

        Our role is to accelerate progress on implementation and standards by bringing the whole industry together to agree what needs to be done and which organizations (e.g. SDOs) are best placed to do it, and then direct collaborative industry efforts there. Neither vendors nor operators can afford for any essential standards effort to be fragmented, and we want to encourage implementation to build momentum on innovation (readers have correctly identified management and orchestration as a key challenge) and encourage early efforts towards interoperability.

        It will not be lost on the readers of this article that while standards remain vitally important, there are significant shifts underway in how standards are developed that cannot be ignored, and it is not an easy problem. But this is a discussion for another day. Don Clarke.

    2. JetSetJim

      Re: If it walks like a duck...

      Indeed - most mobile phone core networks run on standard boxes anyway - be they Sun servers, ATCA chassis, or bog standard routers. Just not Intel boxes, AFAIK.

      The key differentiator here is what they want to do at the edge of the network. With traffic density going up and up, you will need more and more masts, and this is quite possibly the most expensive part of the whole network. Some masts can be dumber than others, but LTE ones need a fair few smarts. Perhaps that intelligence can be pooled in the cloud - who knows. But then, if you dimension your "cloud" incorrectly and a few unexpected conditions start placing large demands on the resources, you're still going to have blocking due to congestion. Perhaps Intel are on to something, but UMTS had an intermediate "clever" box (the RNC) and they got rid of it as it added too much latency to handovers and call setups. You'd need a moderately major shift in the 3GPP standards to put it back in.

  2. Zebidee

    All your Babestations are belong to us?

    Rooney will not be pleased....

  3. James 51

    Someone is going to go for vendor lock in at some point or plant a patent land mine. It's just a matter of time.

    Still, if they can get their act together perhaps the roll-out of 5G or 6G might actually go fairly smoothly.

  4. Adam 1

    I bet various 3 and 4 letter organisations would agree this is the way to go.

    1. Roland6 Silver badge

      RE: I bet various 3 and 4 letter organisations would agree this is the way to go.

      I bet various 5 letter organisations already have a pile of essential patents with further patent applications in the pipeline...

  5. Anonymous Coward
    Anonymous Coward

    " a Wind River embedded software stack "

    “Two-and-a-half years ago, we started a research programme to build a proof-of-concept platform to test network-type workflows on standard industry servers,” he says. BT took hardware from HP, loaded it with a Wind River embedded software stack and began seeing what network hardware functionality it could replicate in software.

    Er, that would be "BT took Intel x86 hardware and loaded it with an Intel embedded software stack".

    (Intel bought Wind River in 2009)

    Can you tell Intel are feeling threatened? And so they should be.

    1. Anonymous Coward
      Anonymous Coward

      Re: " a Wind River embedded software stack "

      AFAIK Wind River's VxWorks has not been updated for >15 years.

      At the time I assumed Intel bought Wind River to close out non-x86 CPUs.

      Shame most new developments dropped VxWorks a good 10 years before the Intel buyout.

      Intel Capital - the gift that keeps giving.

  6. Pazl

    Ah inevitable eh

    3 and 4 letter orgs like NSA, CIA etc. A great opportunity to get in on the ground floor, eh?

  7. Anonymous Coward
    Anonymous Coward

    The operators just need to ensure that every RFP, RFQ and RFI they put out clearly asks the responder to state how their proposal aligns with the overall principles of NFV.

    They then need to start awarding contracts to those they feel are making the most progress. The operators need to be ready to take risks on this and go with emerging stuff. If they don't buy anything from the vendors until ALL their requirements are met then it's too big a risk for the vendors to develop for years.

  8. Stevie

    Bah!

    Bespoke equipment out, easily massively hacked infrastructure in.

    Job done, baby in the drain.

    Next?

    1. Anonymous Coward
      Anonymous Coward

      Re: Bah!

      And the beauty is that you can leverage the very hardware that _is_ the NFV to go after the rest of a data center. Much easier if the hardware stacks aren't heterogeneous.

  9. Anonymous Coward
    Anonymous Coward

    TL;DR me if you like, but this is important

    I'm not convinced the previous commenters really get it. It's nothing to do with Intel; in fact, they stand to gain a lot.

    <screed>

    Telcos spent years building robust networks in a highly regulated domain. In exchange for the pain and complexity, they got to have virtual monopolies on their services, and a whole raft of behemoth suppliers arose to feed them: Alcatel-Lucent, Ericsson, NSN, Cisco etc. These guys sold custom gear at high prices and it worked - five nines, six nines etc.

    Now, the telcos face a truly existential crisis: over-the-top providers have the money, lack of regulation and smarts to utterly destroy them, and they know it. When you have ubiquitous internet, why would you use a phone at all, compared to gtalk/whatsapp/skype and so on? The OTT guys have leveraged the cloud's economies of scale and the web's development practices to spur product innovation that is orders of magnitude faster than telcos can move. All built around cheapo x86 boxes from white-box vendors.

    So one day the telcos get together and say, "we want some of that". The question is not, "why are they doing NFV?" but more "what took them so long?"

    However, before NFV hype takes off, virtualizing a function (in the lingo, building a VNF as part of an NFV deployment) is only one small step. The hardest challenges are yet to come, and to be honest, it is too early to say whether the industry can handle them. A couple of examples:

    - To deploy VNFs, scale them, and manage their lifecycle, you need an orchestrator. That toy-like GUI you get on Amazon EC2 is fine if you only have one app (function), but not if you have hundreds and you need to chain them together to build your service. There is currently a yawning chasm in the market for orchestrators - everyone agrees you need one, but no one has one. (There's a rough sketch of the idea after this list.)

    - virtualization is just dandy when you are operating in the signaling domain: if your host doesn't schedule your VM for a few milliseconds, you will be OK. In the data plane, for things like transporting voice/video media, or transcoding, this sort of delay is disastrous. So you need to find a way to drastically improve networking performance on virtual machines. So far, the leading candidates are SR-IOV and Intel's DPDK, and neither is a slam dunk. (Some back-of-envelope arithmetic on why, after this list.)

    - OpenStack, which the industry is converging on as the standard cloud stack, has weak networking performance, and it's going to take quite some time to improve it. Icehouse in 2014 is just the start.

    - sticking an app on a VM is not NFV, although it is a starting point. True NFV requires that your app be able to take advantage of the cloud and do things like scale elastically, store state in a cloud-friendly manner (e.g. Cassandra-type storage), etc. There are very few such apps. I think Nominum have a DNS one, and there's an open-source IMS core ("Project Clearwater").
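
    To make the orchestration point concrete, here's a minimal sketch of what "chain VNFs and manage their lifecycle" boils down to. All names are hypothetical - this is not any real orchestrator's API, just the shape of the problem: a service is an ordered list of functions, and the orchestrator has to instantiate each one, wire it to the next, and add instances when load demands it.

        # Hypothetical toy orchestrator - illustrative only, not a real product's API.
        from dataclasses import dataclass, field

        @dataclass
        class VNF:
            name: str                       # e.g. "firewall", "nat", "transcoder"
            image: str                      # VM/container image the function boots from
            max_load: float = 0.7           # scale out above this utilisation
            instances: list = field(default_factory=list)

        @dataclass
        class ServiceChain:
            name: str
            vnfs: list                      # ordered VNFs; traffic flows left to right

        class ToyOrchestrator:
            def __init__(self, cloud):
                self.cloud = cloud          # anything offering boot(image) and link(a, b)

            def deploy(self, chain):
                previous = None
                for vnf in chain.vnfs:
                    instance = self.cloud.boot(vnf.image)    # 1. instantiate the function
                    vnf.instances.append(instance)
                    if previous is not None:
                        self.cloud.link(previous, instance)  # 2. chain it to the previous hop
                    previous = instance

            def autoscale(self, chain, load_of):
                for vnf in chain.vnfs:
                    if load_of(vnf) > vnf.max_load:          # 3. elastic scaling
                        vnf.instances.append(self.cloud.boot(vnf.image))

    The toy loop is the easy bit; the hard bits are that step 2 means reprogramming the network on the fly, step 3 only works if the VNF itself copes with instances appearing and disappearing, and every vendor currently does all of this differently.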
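
    And to put a number on the data-plane point (my own back-of-envelope arithmetic, not from the article): at 10GbE line rate with minimum-size frames you only get about 67 nanoseconds per packet, which is why per-packet trips through the kernel stack or the hypervisor's vSwitch hurt so much, and why SR-IOV (hand the VM the NIC directly) and DPDK (poll from user space, skip the kernel) exist at all.

        # Back-of-envelope packet budget at 10GbE line rate - illustrative arithmetic.
        link_bps      = 10e9     # 10 Gbit/s
        frame_bytes   = 64       # minimum Ethernet frame
        overhead      = 20       # preamble (8) + inter-frame gap (12) bytes on the wire

        wire_bits     = (frame_bytes + overhead) * 8
        packets_per_s = link_bps / wire_bits    # ~14.88 million packets/second
        ns_per_packet = 1e9 / packets_per_s     # ~67 ns to do everything with each packet

        print(f"{packets_per_s / 1e6:.2f} Mpps, {ns_per_packet:.0f} ns per packet")
        # A single trip through a kernel network stack or a software vSwitch can cost
        # microseconds - i.e. tens of packet-times - hence the bypass approaches.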

    </screed>

    1. Anonymous Coward
      Anonymous Coward

      Re: TL;DR me if you like, but this is important

      Nice writeup (no tl;dr here thank you).

      "When you have ubiqitous internet, why would you use a phone at all, compared to gtalk/whatsapp/skype and so on?"

      This "ubiquitous internet" of which you speak, when and where is it due to arrive?

      Also, UK air traffic control found out a few days ago what happens when your "phone system" doesn't perform as historically expected from a phone system.

      BBC Radio 4 news programmes are finding out too, also rather in the public eye: so many of their live interviews are now failing because, somewhere between the studio and the interviewee, a link frequently provisioned over retail internet and some flavour of retail VoIP doesn't work as well as the carrier-class systems they've been used to in the past (contention? congestion? what are they?).

      1. Anonymous Coward
        Anonymous Coward

        Re: TL;DR me if you like, but this is important

        Ha ha, yes, it's not the ubiquity as much as the half-decentness of the Internet. But, weak-and-flaky as the 'net is, telcos are seeing their revenues from every business unit plummet as customers take their services to it. Landlines underwent attrition to cellphones. Cell calling underwent attrition to SMS. SMS is going to lose out to OTT messaging. Each one of the "losing" services was superior in many ways to the "winner", but the latter was good enough, and cheap enough, for it to win. One day, perhaps after I've retired to a place with no Internet, there will be decent, ubiquitous Internet (damn!)

        NFV provides a way for telcos to take what they know about communications and bolt on a big dose of Internet thinking. In that regard, it's a huge step for them.

    2. Destroy All Monsters Silver badge
      Holmes

      Re: TL;DR me if you like, but this is important

      Yes, yes but:

      Telcos spent years building robust networks in a highly regulated domain. In exchange for the pain and complexity, they got to have virtual monopolies on their services,

      I think that's ass (the animal) backwards. This is a highly regulated domain because the Telcos and their suppliers like the cartelization and actively lobby for it. It is not a gift of monopoly from the state in return for slavishly following regulation written by bureaucrats.

    3. Anonymous Coward
      Anonymous Coward

      Re: TL;DR me if you like, but this is important

      Well worth the read. Thank you. These VNFs are going to have to be fairly autonomous if flows are to avoid being routed to an absolute centre, which in turn requires serious orchestration. The maddening thing about orchestration is that nobody I've come across who has even the nub of a solution plays well with others - and in many cases not even with that same vendor's own stuff. That's what I've been researching this week, and I need a multidimensional matrix (tensor) just to sort what works with what.

      I have to wonder if what they end up needing on their lines will be somewhat equivalent to FCoE or such, because of packet dropping. There's a huge difference between what consumers will tolerate on their systems and systems that demand (require) near-perfect delivery. Surgery using telepresence doesn't work very well with high latency and packet drops.

      This will be one to watch.

  10. This post has been deleted by its author

  11. Hurn

    Seems like a key factor wasn't mentioned: ASICs

    I thought the whole idea behind creating, using, and evolving ASICs was to do jobs that General Purpose CPUs + buses & dumb interfaces + software stack couldn't handle.

    Yeah, ASICs are proprietary and expensive, but well designed ones do the job faster and use less energy.

    Prediction: this entire trend will end up being a bait and switch:

    At the same time people are getting used to crappy performance and poor reliability of CPU + virtual stack, the vendors will be designing/debugging their next generation of ASIC / custom hardware based solutions.

    Just when people are getting sick of the whole "gotta wait until the next version of software comes out before a usable implementation of feature <blah> becomes available," the vendors will de-cloak their next gen gear, driving their sales, and stock prices, through the roof.

    1. Anonymous Coward
      Anonymous Coward

      "ASICs are proprietary and expensive"

      "ASICs are proprietary and expensive, "

      Are they? If you design in an HDL with multi-vendor support, what really stops you changing ASIC vendor? It won't be zero cost, for sure, but...

      In fact, for medium-volume products, why not just pick an appropriate FPGA and stick with it? High volume will always be cheaper overall in custom silicon, low volume will be cheaper overall in field-programmable stuff, and in the middle is a big (and, last time I looked, growing) area where the choice between FPGA and custom silicon is no longer obvious (that's what Xilinx told me anyway). There's a crude break-even sketch below.
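
      A crude way to see where that middle ground sits - entirely made-up numbers, just to show the shape of the trade-off: the ASIC carries a big one-off NRE cost but cheap units, the FPGA is the other way round, so the break-even volume is simply the NRE divided by the per-unit saving.

          # Crude ASIC-vs-FPGA break-even sketch - all figures invented for illustration.
          asic_nre       = 2_000_000   # one-off design/mask/verification cost ($)
          asic_unit_cost = 15          # per-device cost once in production ($)
          fpga_unit_cost = 120         # per-device cost, with no meaningful NRE ($)

          # Total cost of each option at volume n:
          #   asic(n) = asic_nre + asic_unit_cost * n
          #   fpga(n) = fpga_unit_cost * n
          break_even = asic_nre / (fpga_unit_cost - asic_unit_cost)
          print(f"ASIC wins above ~{break_even:,.0f} units")   # ~19,048 with these figures

      Move any of those figures around (and in real life the FPGA unit price falls over the product's lifetime while ASIC NRE keeps growing with each process node) and the crossover shifts by an order of magnitude either way - which is presumably that "no longer obvious" middle ground.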

      "this entire trend will end up being a bait and switch:"

      One way of looking at it.

      Another way: Intel are desperate to get access to some of this formerly high-margin business, as their high-volume x86 desktop and datacentre stuff is looking increasingly threatened, which may leave the high-margin Xeon stuff at risk too. And if they have to reduce prices across the x86 range, their whole x86 business is toast. Which doesn't leave much.

    2. Anonymous Coward
      Anonymous Coward

      At the very start of the article, ASICs were at the top of my mind. They supposedly found overall power savings, but I'd really like to see what the criteria for the experiment were. And given that redundancy is going to be a serious requirement, I would have thought, was that factored into the comparison? On the flip side, an awful lot more money is tossed into the R&D of general-purpose and GPU processors than is tossed in the direction of ASICs, unless you are Cisco or one of their direct competitors, I would imagine. But that's all that is, imagination. I don't "know" the resources tossed at that. And when I don't, I admit it.

  12. Gareth Gouldstone

    Software as a (dis)service...

    It seems that these days it is considered acceptable if things work 'most of the time', even if the service is worse than what we had before. Think digital TV dropouts, DAB car radios and VoIP reliability. They all 'work' more often than not, but when there are problems it tends to be worse or more annoying than the old dedicated hardware's issues or analogue interference. Ditto web-based apps vs full-fat local apps.

  13. Chris Beach

    Software Issues

    Sounds like they're also going from well-tested hardware (which is easier to test as it has a single task to do) to shedloads of software (on top of varying numbers of hardware bits, which are only really tested in isolation).

    This seems very risky, as apart from a few computer-science projects, very few bits of software are anywhere near bug/defect free. Modern development techniques do try to make this easier, but still, the complexity of the software components is going to far outweigh any complexity the original hardware had.
