IPv6 has been an ‘emerging technology’ for more than two decades. It was born at a time when architects and standards bodies knew there was an inevitable end to the de facto standard deployment model and communication medium.
IPv4 was never intended to provide what it is used for today. It has limitations that have been patched with temporary add-ons and structural changes that have come to be seen as ‘normal’ over the span of 25 years. The most notorious of these add-ons is Network Address Translation (NAT).
Today, IPv6 provides a number of much-needed mechanisms and abilities that are so often taken for granted. Here are three of what I believe are the ‘killer apps’ and reasons for small-to-medium-sized ISPs to deploy it.
End-to-end connectivity

End-to-end connectivity has become an increasingly important requirement in this day and age of interconnectivity. Apart from the security benefits, end-to-end connectivity, or rather the lack of it, is also a cause of much discussion (and angst) among the gaming community — an ever-growing and vocal customer base for ISPs.
Many systems, such as Xbox Live, actually tunnel IPv6 in IPv4 to work around the lack of end-to-end connectivity. A quick glance through NetFlow or sFlow data for protocol 41 (6to4) or UDP port 3544 (Teredo) will show just how much of the traffic is actually tunnelled.
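To make that glance concrete, here is a minimal sketch of classifying flow records by those two signatures. The record layout and sample data are hypothetical; real NetFlow/sFlow exports carry many more fields.

```python
# Sketch: estimate how much traffic is tunnelled IPv6, assuming flow
# records reduced to (protocol, dst_port, bytes) tuples. Field layout
# and sample data are illustrative, not a real collector export.

PROTO_IPV6_ENCAP = 41   # 6to4 / 6in4: IPv6 carried directly in IPv4
PROTO_UDP = 17
TEREDO_PORT = 3544      # Teredo: IPv6 tunnelled over UDP

def tunnelled_share(flows):
    """Return the fraction of bytes carried by 6to4/6in4 or Teredo."""
    total = tunnelled = 0
    for proto, dst_port, nbytes in flows:
        total += nbytes
        if proto == PROTO_IPV6_ENCAP:
            tunnelled += nbytes
        elif proto == PROTO_UDP and dst_port == TEREDO_PORT:
            tunnelled += nbytes
    return tunnelled / total if total else 0.0

# Hypothetical sample: one 6in4 flow, one Teredo flow, one plain HTTPS flow.
sample = [(41, 0, 5000), (17, 3544, 3000), (6, 443, 2000)]
print(tunnelled_share(sample))  # 0.8
```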
Enabling native IPv6 is the only viable workaround; it provides a much better experience for end users.
Reduced network complexity
While NAT may have given IPv4 a vastly extended lifespan, it was only meant to act as a transition mechanism. Unfortunately, it has become commonplace; so much so that emerging engineers expect and plan for it as a natural part of any deployment, certainly at the Customer Premises Equipment (CPE), and, in many cases, within the ISP itself.
With the proliferation of Network Address Port Translation (NAPT), we have created a world where the de facto expectation is that end users have, at most, a single public IPv4 address and a translation device in their path. At worst, there are no public IPv4 addresses at the end user location; instead there is a series of NAPT and Carrier Grade NAT (CGN) devices translating over and over.
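To illustrate why layered translation complicates tracing, here is a toy model of NAPT state. Every name and address below is illustrative (RFC 5737/6598 documentation and shared ranges), not taken from any real deployment.

```python
# Minimal sketch of NAPT state: each outbound flow consumes a
# (public_ip, public_port) slot, and mapping a session back to a
# subscriber requires this table. Stack a CGN in front of a subscriber
# NAT and the operator must join two such tables (with timestamps) to
# identify a single end host. Addresses here are documentation ranges.
import itertools

class Napt:
    def __init__(self, public_ip, first_port=1024):
        self.public_ip = public_ip
        self._ports = itertools.count(first_port)
        self.table = {}  # (inside_ip, inside_port) -> (public_ip, public_port)

    def translate(self, inside_ip, inside_port):
        key = (inside_ip, inside_port)
        if key not in self.table:
            self.table[key] = (self.public_ip, next(self._ports))
        return self.table[key]

cpe = Napt('100.64.0.5')    # subscriber NAT: private -> shared space
cgn = Napt('203.0.113.9')   # carrier-grade NAT: shared -> public

hop1 = cpe.translate('192.168.1.10', 49152)  # ('100.64.0.5', 1024)
hop2 = cgn.translate(*hop1)                  # ('203.0.113.9', 1024)
```

Even in this two-layer toy, identifying the host behind `hop2` means walking both tables in reverse; production CGNs add timeouts, port blocks, and logging requirements on top.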
Where this leaves the end user (and the operators in the middle) is inside a web of complicated state and translation tables that is very hard to parse. Forget about being able to easily identify end hosts, user patterns, or security events — the layers of translation make this a much more onerous task, all for the simple, and arguably misguided, goal of prolonging a legacy protocol.
And despite widespread belief to the contrary, NAT was never intended to be, and is not, a security mechanism.
Support and content have become commonplace
IPv6 has far more support in hardware, software, and content than was available even five years ago.
Modern equipment, much of which is Linux-based, has had good IPv6 support for a very long time. Common CPE management protocols are able to leverage IPv6 for day-to-day operational needs. And core Internet hardware has had both software and hardware capability for quite some time.
Content is also more widely available over IPv6. Major content providers, including the two largest, Google and Facebook, provide dual-stack content at scale. When deployed in a controlled, methodical way, the end user rarely notices that they’re using IPv6 unless they specifically look.
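One reason dual stack is invisible to users is client-side address selection in the spirit of Happy Eyeballs (RFC 8305): candidates are ordered IPv6-first with IPv4 interleaved as a fast fallback. Below is a simplified sketch of just the ordering step, using documentation-range addresses rather than real endpoints.

```python
# Sketch of Happy Eyeballs-style candidate ordering: alternate address
# families, IPv6 first, so a working IPv6 path is preferred while IPv4
# stays a quick fallback. This covers only the sorting step, not the
# staggered connection attempts the full algorithm performs.

def interleave_by_family(addrs):
    """Order candidate addresses IPv6-first, alternating families."""
    v6 = [a for a in addrs if ':' in a]
    v4 = [a for a in addrs if ':' not in a]
    ordered = []
    for six, four in zip(v6, v4):
        ordered.extend([six, four])
    # Append whatever is left of the longer family list.
    leftover = v6 if len(v6) > len(v4) else v4
    ordered.extend(leftover[min(len(v6), len(v4)):])
    return ordered

candidates = ['192.0.2.1', '2001:db8::1', '198.51.100.7', '2001:db8::2']
print(interleave_by_family(candidates))
# ['2001:db8::1', '192.0.2.1', '2001:db8::2', '198.51.100.7']
```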
There are also more and more examples of successful IPv6 deployments. The majority of large carriers in the USA have been deploying dual-stacked CGN networks for quite some time with very good support and profitability. And perhaps the most successful IPv6 deployment thus far has connected almost half a billion people in India.
IPv6 is not the future, it’s the present. The content is available, the growth has exploded, and both the hardware and software are well travelled. What are you waiting for?
Adapted from the original post that appeared on The Forwarding Plane.
Nick Buraglio has worked in the network provider industry for more than 15 years, holding network engineering positions at regional Internet providers as well as at the National Center for Supercomputing Applications.