WTF is … Routing Protocol for Low-Power and Lossy Networks?

Whether you consider the Internet of Things (all the way up to the Internet of Everything) to be the Way of The Future™ or just This Year's Buzzword® and something of an exaggeration, there's a good chance some of you will run into some of its real-world manifestations in the near future. After all, the building you work in is …


This topic is closed for new posts.
  1. Duncan Macdonald Silver badge

    Neat idea BUT

    This design has a nasty single point of failure: the IPv6 access point. If the sensors are important then more than one IPv6 access point needs to be provided - which requires a more complex design (at least in the level 2 nodes).

    1. Return To Sender

      Re: Neat idea BUT

      Agreed, the IPv6 gateway (DODAG root) is a SPoF. But then it's a logical entity; presumably all the usual tricks apply to make the platform as robust as possible physically (redundant hardware, diverse paths etc.), but then we all know that guys with big yellow diggers and backhoes are ingenious when it comes to cutting off buildings.

      A *very* quick scan (I have a short attention span) of the RFC suggests that the DODAGID is based on the IPv6 address of the root, so maybe the root node can be changed just by moving the address to a different location? I'm no expert, so that could be complete bollocks of course. Hopefully somebody with more curiosity and expertise can comment.

    2. Roland6 Silver badge

      Re: Neat idea BUT

      The design also does not resolve one of the fundamental design considerations: not adding routing load to sensors. From the simplified RPL diagram in the article it is obvious that some sensors are carrying a significant routing/relay overhead, which will cause those sensors to draw significantly more power than required for their intended task.

      From a practical perspective, i.e. getting products onto shelves, the only real needs are for a sensor-to-root service interface and protocol to be defined, and similarly for the application functionality of the root device to be defined - for which there are several options either already available or being defined by various standards committees.

      Why do I say this? Because we only need to look at wireless networking, where mesh and auto-path configuration have been talked about for decades, but the lack of real implementations hasn't prevented non-mesh WiFi etc. from being an outstandingly successful technology.

  2. Charles 9 Silver badge

    What about security?

    Security is one thing that really needs to be baked in to get it right, since it's more of a way of thinking than a way of doing.

    Sensor swapping and sensor spoofing came to mind when I looked at this new sensor network. There would need to be a way for the sensor to positively identify itself, such as with an asymmetric key. But encryption takes time, resources, and (most critically) power. And now we run into some of the tradeoffs systems like POS terminals faced - although in their case, it was mostly CPU limitations rather than electrical power limitations.

    In other words, the next problem I see for them is making the network secure while STILL low-powered.
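    For illustration only: where full asymmetric crypto is too heavy for the sensor, a common lighter-weight compromise is an HMAC challenge-response against a pre-shared key. This sketch is hypothetical (the names `respond`/`verify` are invented, and key provisioning is hand-waved entirely), but it shows the shape of the exchange that defeats both swapping and replay:

```python
import hmac
import hashlib
import os

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Sensor side: prove knowledge of the key without revealing it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, tag: bytes) -> bool:
    """Gateway side: recompute the tag and compare in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# One round of the exchange: the gateway issues a fresh random challenge,
# the sensor answers, the gateway checks. A replayed old answer fails
# because the challenge is new each time.
key = os.urandom(16)
challenge = os.urandom(16)
tag = respond(key, challenge)
assert verify(key, challenge, tag)
assert not verify(key, os.urandom(16), tag)  # stale answer rejected
```

    A SHA-256 HMAC is still cheaper than public-key operations by orders of magnitude, which is why constrained devices so often end up with some variant of this rather than certificates.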

    1. Roland6 Silver badge

      Re: What about security?

      There is another security problem - new sensors! If a new sensor can automatically add itself to a network then I can just scatter my eavesdropping sensors around and let the existing network connect them to my listening post...

    2. itzman

      Re: What about security?

      well yes, and what is wrong with everybody knowing everything about anything?

      Or you could encrypt....

    3. Badvok

      Re: What about security?

      This is talking about Layer 3, I'd expect security to be dealt with at a lower layer.

      1. Charles 9 Silver badge

        Re: What about security?

        The problem is that the sensor is an originator of information. If it doesn't want the information tampered with, it needs to encrypt the data from the point it enters the system. That puts the onus on the sensor to encrypt before transmitting. There's just one issue: good encryption is resource- and power-intensive. It's a physical limitation; otherwise the encryption is too easy to break. So you end up having to encrypt in a resource-constrained environment.

        The best bet right now looks like TEA-based algorithms. They're designed for simplicity, but they've been shown to have chinks.
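        For the curious, the appeal of TEA on a tiny sensor MCU is just how small the cipher is. Here's a sketch of the classic TEA block cipher in Python, for illustration only - as noted above it has known weaknesses (related-key attacks among them), so a real deployment would look at successors like XTEA, or hardware AES where available:

```python
DELTA = 0x9E3779B9   # TEA key-schedule constant (derived from the golden ratio)
MASK = 0xFFFFFFFF    # keep all arithmetic in 32 bits, as C's uint32_t would

def tea_encrypt(block, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    total = 0
    for _ in range(32):  # 32 rounds, as in the original design
        total = (total + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt(block, key):
    """Invert tea_encrypt by running the rounds backwards."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    total = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & MASK
        total = (total - DELTA) & MASK
    return v0, v1

# Round-trip check
key = (0x11111111, 0x22222222, 0x33333333, 0x44444444)
plaintext = (0x12345678, 0x9ABCDEF0)
ciphertext = tea_encrypt(plaintext, key)
assert tea_decrypt(ciphertext, key) == plaintext
```

        The whole round function is shifts, adds and XORs - no multiplies, no table lookups - which is exactly the power/resource tradeoff being discussed here.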

  3. Tom Cooke

    Sci-Fi

    Vernor Vinge. Localisers. That's all. (Google it :-)

    1. Charles 9 Silver badge

      Re: Sci-Fi

      Those depend on radio transmissions, correct? And electromagnetic waves do not travel at a uniform speed through all media - that's why there's some inherent inaccuracy in GPS (atmospheric interference). That, and the low power means the signal has trouble penetrating solid objects. I don't think localizers could overcome those physical limitations, especially if they use time-of-flight to measure distances in a medium where the speed of electromagnetic waves can vary (GPS doesn't strictly rely on time-of-flight, so it is less vulnerable).

  4. Pascal Monett Silver badge

    And one day these sensors and connections will be "baked" right into construction rules - as in, every building will need to include one sensor brick on each wall of each level, or something like that.

    In a society like that, Hollywood will have a lot more trouble having people buy into scenarios like the recent film The Call - or even Shooter.

    On the other hand, the NSA is going to go nuts keeping track of all that data. I predict that, in such a society, the NSA will take over the entire state of Iowa as its center of operations and storage center.

    1. Anonymous Coward
      Anonymous Coward

      You mean they haven't already?

  5. Ruairi

    Why do people need to reinvent the wheel? The B.A.T.M.A.N. protocol exists to solve this exact problem in IPv4, and can easily be ported to IPv6.

    Nothing new here....

    1. theblackhand

      Re: BATMAN

      Doesn't BATMAN fail to scale in reliable networks? It relies on packet loss to work and will flood reliable networks.

  6. itzman

    Isn't this what USENET was designed to do?

    ...multiple-path peer-to-peer routing and trickle-through? The only requirement being to maintain a history of all packets on every node for a few hours, which flash memory could do.

    You could simply broadcast any packet you received, and other nodes could take it and retransmit it if they hadn't seen it, or discard it if they already had.
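    That flood-and-deduplicate scheme fits in a few lines. This is a toy sketch with invented names (`FloodNode`, `link`) - in a real radio network the `seen` set would be the bounded few-hours history mentioned above rather than an unbounded set - but it shows why the flood terminates even when the mesh contains loops:

```python
class FloodNode:
    """A node that rebroadcasts each packet exactly once, USENET-style."""

    def __init__(self, name):
        self.name = name
        self.seen = set()       # packet IDs already relayed (the 'history')
        self.neighbours = []    # nodes in radio range

    def receive(self, pkt_id, payload):
        if pkt_id in self.seen:
            return              # already had it: discard, so the flood dies out
        self.seen.add(pkt_id)   # take it...
        for n in self.neighbours:
            n.receive(pkt_id, payload)  # ...and retransmit to everyone in range

def link(a, b):
    """Model two nodes being in radio range of each other."""
    a.neighbours.append(b)
    b.neighbours.append(a)

# A small mesh containing a loop: the dedup set stops infinite rebroadcast.
nodes = [FloodNode(i) for i in range(4)]
link(nodes[0], nodes[1]); link(nodes[1], nodes[2])
link(nodes[2], nodes[3]); link(nodes[3], nodes[0])
nodes[0].receive("pkt-1", b"temp=21C")
assert all("pkt-1" in n.seen for n in nodes)  # everyone got it, exactly once
```

    The cost, as the replies below point out, is that every node must store (some digest of) every packet it has ever relayed within the expiry window - which is where the storage objection comes from.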

    1. Badvok

      Re: Isn't this what USENET was designed to do?

      "The only requirement being to maintain a history of all packets on every node for a few hours, which flash memory could do."

      Yep, that will work, and the days of low-cost room thermostats will be numbered - they'll all have to come with 1TB of flash memory.

      1. User McUser

        Re: Isn't this what USENET was designed to do?

        [...] they'll all have to come with 1TB of flash memory.

        Sure, but by the time this sort of thing is in widespread use, 1TB of flash (or equivalent) will probably cost $10, so who cares?

  7. Number6

    Really low power?

    If you're expecting a node to receive packets at random intervals then it has to keep its receiver on, or at least poll a lot more often. This obviously causes more power drain than if the system can let the receiver sleep when the node itself doesn't need to communicate.

    You're also going to suffer packet collisions: if you've got a node at level #2 and a couple of nodes at level #3 that can't hear each other, it's possible that the level #3 nodes will transmit at the same time and, depending on their backoff algorithm, may fail to get a packet through.
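    The usual answer to that hidden-terminal collision is binary exponential backoff: after each collision both nodes double their contention window, so the odds of picking the same slot again shrink fast. A toy simulation (the function name and parameters are hypothetical, and this deliberately ignores carrier sensing and capture effects):

```python
import random

def send_with_backoff(rng, max_attempts=6):
    """Two hidden level-3 nodes each pick a slot in the current window;
    picking the same slot models a collision at the level-2 parent.
    Returns the attempt number on success, or None if both give up."""
    window = 2                      # initial contention window, in slots
    for attempt in range(max_attempts):
        a = rng.randrange(window)   # slot chosen by node A
        b = rng.randrange(window)   # slot chosen by node B
        if a != b:
            return attempt          # no collision: a packet gets through
        window *= 2                 # binary exponential backoff
    return None                     # retry budget exhausted

rng = random.Random(42)
results = [send_with_backoff(rng) for _ in range(1000)]
successes = [r for r in results if r is not None]
# Total failure needs six collisions in a row (p = 1/2 * 1/4 * ... * 1/64),
# so virtually every trial succeeds within the retry budget.
assert len(successes) >= 990
```

    The flip side for low-power nodes is that every collision-and-retry cycle is wasted transmit energy, which ties this back to the receiver duty-cycling point above.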

