* Posts by tonyw2016

8 publicly visible posts • joined 28 Jun 2016

Good news, bad news, weird news – it's the week in networking


Re: Oh, great: civil aviation wants to route messages over the Internet Protocol

"What could possibly go wrong."

Terry 6, if you want an answer to your question, I suggest you read Eurocae ED-228A. The simple answer is "an awful lot can go wrong". ED-228A provides a hazard analysis listing out in all its glory all the possible nasty things that can happen. It also identifies the requirements that the implementations must satisfy to ensure that there is a safe outcome to each hazard when and if it occurs.

If only banks etc. took a similar approach.


Some Background

Perhaps the most interesting question is why did the journalist pick on this internet draft and not draft-haindl-lisp-gb-atn-01 - which is an alternative proposal for the same thing.

The truth is that there are several competing candidates for the next generation of ATS communications services and various authors are keen to get something published as part of a very competitive process. The IETF documents need to be read as early drafts staking out a territory rather than as final solutions.

The 25-year-old ATN/OSI is in day-to-day use and delivering ATC benefits. It delivers high availability and data integrity, but only to the level required by routine ATC messaging.

However, there are new applications that need to be deployed in the foreseeable future if we are to keep growing the number of air movements while making air travel even safer. These applications demand communication services with very high availability - which means a very high probability of successful packet delivery with no corruption - and this has to be demonstrated by both testing and software assurance. You are looking at 99.99% and better. And yes, it has to be demonstrably secure, given that DoS attacks would reduce the achieved level of availability.
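To put figures like 99.99% in perspective, a quick back-of-envelope calculation (a sketch, not any official ATS requirement) shows how little outage each extra nine permits per year:

```python
def downtime_minutes_per_year(availability):
    """Minutes of outage permitted per year at a given availability level."""
    return (1.0 - availability) * 365 * 24 * 60

# Each extra "nine" cuts the permitted yearly outage by a factor of ten.
for a in (0.999, 0.9999, 0.99999):
    print(f"{a} -> {downtime_minutes_per_year(a):.1f} minutes/year")
```

At 99.99% that is under an hour of total outage in a year - which is why DoS resilience counts directly against the availability budget.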

To achieve this requires a re-engineering of the communications service. There is nothing intrinsically better about IPv6 when you compare it with the CLNP used in the ATN/OSI. It's just that when you are re-engineering a system, it is good practice to bring it up to date with current practice.

As for the public internet! Commercially delivered VPNs maybe, IPsec overlays possibly - but that's the limit.


Re: Oh, great: civil aviation wants to route messages over the Internet Protocol

BanburyBill - fake news is the only response to your post.

If you care to glance at the ICAO standard (ICAO Doc 9880 makes for good bedtime reading), you will note that CPDLC (controller-pilot data link communication) includes a high-quality 32-bit end-to-end checksum to ensure that any corruption is detected on receipt. The Safety Case for the operational deployments demands that corrupt messages are always discarded, and extensive testing and software assurance is used to certify that the systems really do work and meet the safety requirements.
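The discard-on-corruption rule is simple to sketch. The Python toy below uses CRC-32 purely as a stand-in - the actual CPDLC checksum algorithm is the one defined in the ICAO specifications, and the message content is invented:

```python
import zlib

def seal(payload):
    """Append a 32-bit checksum so the receiver can detect corruption.
    CRC-32 is a stand-in here for the ICAO-specified algorithm."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(message):
    """Return the payload, or None if the check fails (message discarded)."""
    payload, check = message[:-4], message[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != check:
        return None  # the Safety Case demands corrupt messages are discarded
    return payload

msg = seal(b"CLIMB FL350")               # invented example uplink
assert receive(msg) == b"CLIMB FL350"
assert receive(b"X" + msg[1:]) is None   # corrupted in transit -> discarded
```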

Time to dump dual-stack networks and get on the IPv6 train – with LW4o6


This is only part of a proper transition plan. However, the good news part is that it represents a move away from the "NAT is evil" mindset that has bedevilled the development of a proper transition plan.

RFC 7596 (LW4o6) deals with the part of the problem where you have a local IPv4 network (i.e. almost everyone) and you need to communicate with another IPv4 network (hosting some server) over an IPv6 network.

The other big bit of the transition problem is (hopefully) solved by NAT46 and DNS46, which should allow an IPv4 home network to use an IPv6 Internet with IPv6-native servers. The reverse, NAT64 and DNS64, also exists for anyone who has an IPv6 home network. RFC 6144 "Framework for IPv4/IPv6 Translation" is a good starting point for further reading.
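The address-mapping trick behind NAT64/DNS64 is easy to see with the well-known prefix from RFC 6052: the 32-bit IPv4 address is simply embedded in the low bits of a /96 IPv6 prefix. A minimal sketch, using documentation addresses:

```python
import ipaddress

WKP = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known NAT64 prefix

def embed(v4):
    """Synthesise an IPv6 address with the IPv4 address in the low 32 bits."""
    return ipaddress.IPv6Address(
        int(WKP.network_address) | int(ipaddress.IPv4Address(v4))
    )

def extract(v6):
    """Recover the embedded IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

addr = embed("192.0.2.33")
print(addr)  # 64:ff9b::c000:221
assert str(extract(addr)) == "192.0.2.33"
```

This embedding is what a DNS64 server does when it synthesises AAAA records for IPv4-only destinations, and what the NAT64 box undoes on the way out.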

All it needs is for ISPs to offer IPv6 - and to make legacy IPv4 a chargeable option...

IPv6 growth is slowing and no one knows why. Let's see if El Reg can address what's going on


Perhaps we are starting to see an impact from LISP

IPv6 always suffered from three problems:

1. The benefit of switching is largely to the community rather than to the adopter.

2. The lack of a migration plan.

3. Clever engineers thinking up more ways of improving IPv4 address space utilisation.

The "running out of IPv4 addresses" problem was always over-stated, because it assumed that sparse utilisation of the address space would always be the norm - after all, the allocation strategy has to be dominated by routing efficiency, doesn't it?

Technologies like MPLS have greatly increased the efficiency of address allocation, and now LISP (RFC 6830) is providing a generalised model that allows global IP addresses to be densely allocated to hosts or Autonomous Systems while, at the same time, allowing a separate address space to be used for the underlying network - i.e. allocated with topological efficiency in mind.

LISP is IPv4/IPv6 agnostic and works with both. It's actually a good way of running IPv6 end to end over a corporate network that's still IPv4 based. Maybe some are starting to do that - keeping all their IPv4 kit (with a private address space) - but allowing for IPv6 externally - and hence distorting the stats.

However, that may just be the optimistic view. The point is that IPv4's 32-bit address space could always number over four billion hosts - it just couldn't also route efficiently to them when densely allocated. Now, with LISP, it is possible to densely allocate the 32-bit address space while still having efficient routing.
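The EID/RLOC split at the heart of LISP can be sketched in a few lines. The mapping entries below are invented (documentation addresses), and a real deployment queries a distributed mapping system rather than a local dict:

```python
import ipaddress

# Invented entries: EID prefix (densely allocated) -> RLOC (topologically allocated).
mapping_system = {
    "10.1.0.0/16": "203.0.113.7",   # hypothetical site A's tunnel router
    "10.2.0.0/16": "198.51.100.9",  # hypothetical site B's tunnel router
}

def rloc_for(eid):
    """Look up the routing locator covering a destination EID; None if unmapped."""
    dest = ipaddress.IPv4Address(eid)
    for prefix, rloc in mapping_system.items():
        if dest in ipaddress.IPv4Network(prefix):
            return rloc
    return None

assert rloc_for("10.2.33.1") == "198.51.100.9"  # encapsulate towards site B
```

The ingress tunnel router then encapsulates the EID-addressed packet inside an RLOC-addressed one, so the core only ever routes on the topological space.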

If LISP had been around 25 years ago when IPv6 was proposed, then I doubt whether IPv6 would have got enough support to get off the ground.

Of course, we are where we are with a mixed equipage. However, it is now the case that if an organisation can't be bothered to move to IPv6 and already has enough IPv4 addresses for its own use (which most do if you can allocate them densely) then LISP gives a very good technical reason to avoid the move for anything other than externally facing systems.

LISP is now part of (e.g.) Cisco's product line, and just maybe we are starting to see an impact from this.

Do we need Windows patch legislation?


Safety Related Systems

The long-term availability of vendor support is a basic problem for Safety Critical and Safety Related Systems, and many systems operated by the NHS will be of that nature. Due to the fully justifiable need for design assurance and lengthy pre-service testing, it can often take up to 10 years to get this type of (software) system from initial conception into service - and you often want to get 20 years or so of use out of it in order to justify the investment. However, timescales like these just don't fit with commercial product lifetimes for a vendor such as Microsoft.

It is no accident that Linux is now widely used in areas such as ATC. It is not because it is free, and not just because of its reputation for stability and security, but because it is Open Source - and ultimately this means that the end user can take control, applying security patches to ancient versions of the Linux kernel rather than having to pay (a ransom) to the original vendor for support.

In practice this allows commercial opportunities for specialist support companies to provide long term support for those users that need to have very long in service lifetimes - even beyond those for Red Hat Enterprise.

The bottom line is that if you are happy for your vendor to dictate the upgrade lifecycle then a product such as Windows may be suitable. If this is not acceptable then Open Source is where you need to go.

Global IPv4 address drought: Seriously, we're done now. We're done


The bad decision that keeps on biting back

It's interesting how bad decisions made 25 years ago are still screwing up the Internet.

When the IPv4 addressing problem first came up, it was a choice between adopting the variable-length addresses of the DECnet/OSI approach of CLNP, and a new protocol with bigger fixed-length addresses. The former was an evolutionary approach, while the latter was a step change with no obvious migration path. The second one was chosen largely because the IETF at the time was dominated by academics who distrusted the "commercial" attitudes of the OSI camp. They also favoured the class-based routing approach then used. I recall being told by one of those pushing IPv6 that he supported it because his VAX- and PDP11-based routers worked with 16-bit routing tables and he just could not contemplate the idea of variable-length prefix matching algorithms.

Of course, not long afterwards, BGP-4 and prefix-based routing became the norm, but no-one could bring themselves to reverse what had been a terrible decision: introducing both a new addressing plan and a new protocol at once, rather than keeping the existing address plan, introducing the new protocol (with simple old-to-new gateways), and only extending the address plan once the new protocol had been fully adopted.
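For what it's worth, the variable-length prefix matching that so worried him is a routine lookup today. A toy longest-prefix-match in Python, with an invented forwarding table:

```python
import ipaddress

# Invented forwarding table: prefix -> next-hop label.
table = {
    "10.0.0.0/8": "A",
    "10.1.0.0/16": "B",     # more specific route wins over 10.0.0.0/8
    "0.0.0.0/0": "default",
}

def lookup(dest):
    """Return the next hop for the longest (most specific) matching prefix."""
    addr = ipaddress.IPv4Address(dest)
    matches = [p for p in table if addr in ipaddress.IPv4Network(p)]
    best = max(matches, key=lambda p: ipaddress.IPv4Network(p).prefixlen)
    return table[best]

assert lookup("10.1.2.3") == "B"      # /16 beats /8
assert lookup("8.8.8.8") == "default"
```

Real routers use tries or TCAM rather than a linear scan, but the semantics are exactly this.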

Let's hope that future network engineers learn from the mistakes of the past.

Looking good, Gnome: Digesting the Delhi in our belly


And finally...

It's hard to add to the above. Gnome has become a byword for arrogance, ignoring users and the worst in Open Source. But for Red Hat's support, the project would surely have ended long ago. Regrettably, even outside of Gnome there are still some examples of this arrogance.

When I moved to Mint 13 MATE it was great: not just the Gnome 2 desktop environment, but MDM was GDM as it used to be, with support for multiple seats, remote logins and so on. Then came Mint 17, and some bozo got to work on MDM and removed the remote login code. OK, there are security weaknesses without firewalls, but this also broke Xvnc, which depends on the XDMCP protocol. At least the venerable XDM is still around to support remote login and can run in parallel with MDM.

The moral of the story: don't remove features you don't like or deprecate - just make them configurable options.