NNSquad - Network Neutrality Squad
[ NNSquad ] Purpose vs discovery and the Internet as a dynamic
I’m writing this in response to the myriad discussions
about how to make sure that the Internet continues to “work”
despite P2P or whatever the current threat seems to be. Behind much of the
discussion is the presumption that the Internet has a purpose in the sense of
making some applications like video games and VoIP work. Yesterday we feared
modems, today we fear P2P. This confusion arises from the success of the Internet. People
see what works and assume that the purpose of the Internet is to support those
applications. The fact that applications that require low latency (delays) and low
jitter (variability) work in some cases is taken to mean that we must now promise to make
sure they work everywhere. The very innovation that made the application work
so well at such a low cost is now seen as a threat to the new status quo.

We “prove” this by example – but that’s
just the opposite of science. Science is about testing ideas rather than
looking for confirmation of our presumptions. We then add more confusion by
building in the mechanisms we presume are necessary, which, of course, “proves”
that they are the reason the applications work. In fact the applications started to “just work”
because the mechanisms are not built in – it’s this seeming paradox
that keeps us arguing in circles.

As I explain in http://frankston.com/?n=InternetDynamic,
VoIP did not work in the 1980’s. Or, to be more precise, it might’ve
indeed worked over local networks but you couldn’t presume that it would
work between LANs, especially when dialup connections were involved. Instead we
used the Internet for more tolerant applications such as email and file
transfers. If we had had to make voice work we could’ve built it into
the network by making ourselves dependent upon the network giving us a dedicated path.
This is the basic design choice made in SS7. Both the Internet and SS7 were
done by CS people but with different assumptions. SS7 achieved its goal of
supporting high quality (56Kbps in the US) voice, but at the price of being
dependent upon high priced gear (as per 1970’s prices).

By eschewing dependence on such gear, the Internet (as a
thing) couldn’t make such promises. Instead we had to find out what
worked and go with it. Email worked because it was very tolerant and typical
messages were a few hundred characters with people flagging messages over a
thousand words or so as large.

Experience suggested that applications such as voice and
video couldn’t work without special help because they were too sensitive
to latency and jitter.

Then the Web happened. The web itself was initially about
text so the latency and jitter weren’t issues but it did generate a lot
of traffic. And people predicted the Internet would collapse. Instead the
opposite happened – demand created supply. One reason is that we’re
able to take advantage of any available bits with congestion being an annoyance
but not fatal. For example, a lost packet used to cause an audible click, but we’ve
learned how to smooth over such problems.
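By “smooth over” I mean things the endpoints do for themselves – a playout buffer to absorb jitter plus trivial loss concealment to paper over a missing packet. A rough sketch, in Python and purely illustrative (the names and numbers are mine, not taken from any particular VoIP stack):

    class PlayoutBuffer:
        # Toy jitter buffer with trivial loss concealment: if the expected
        # packet hasn't arrived by playout time, repeat the previous frame
        # rather than letting the gap turn into a click.

        def __init__(self, prefill=3):
            self.prefill = prefill       # frames to hold before starting playback
            self.pending = {}            # sequence number -> audio frame
            self.last = b"\x00" * 160    # placeholder frame (~20 ms at 8 kHz)
            self.next_seq = 0
            self.started = False

        def receive(self, seq, frame):
            if seq >= self.next_seq:     # packets that arrive too late are dropped
                self.pending[seq] = frame

        def play(self):                  # called every 20 ms by the audio clock
            if not self.started:
                if len(self.pending) < self.prefill:
                    return self.last     # still filling the buffer
                self.started = True
            frame = self.pending.pop(self.next_seq, self.last)  # conceal a loss
            self.last = frame
            self.next_seq += 1
            return frame

The details don’t matter; what matters is that the tolerance lives in the endpoints rather than being promised by the network.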

It turned out that the capacity increased quickly where there were no
disincentives. The increase came in many forms which can be
loosely called Moore’s law effects. But as I wrote in http://rmf.vc/?n=BL the physics aspect of
Moore’s law is secondary to my marketplace formulation. If you decouple
systems and embrace any opportunity you get hypergrowth. It didn’t matter
whether you made bits run faster or you had more paths; the net effect was more (no
puns intended, but you can find them if you wish).

Thus fiber in the ground was able to carry more bits thanks
to improved gear at the end points. Yet DSL didn’t improve much beyond
the initial 1980’s implementation of ADSL because the carriers had no
incentive beyond the original purpose of Interactive TV, and because improvements
threatened their ability to charge high prices for T1 lines and for bits
themselves. The same thing happened with fiber when the improvements led to a
glut in capacity around 2000 and the carriers reacted by purposefully limiting
capacity and then pretending the limits were inherent.

Meeting the demand for “web bits” had a
side-effect of giving us copious capacity and the low latency, low jitter
applications started to “just work”. But we mustn’t forget
that this is a process of discovery not one of promise. We might argue against
the asymmetric “broadband” connections but we also discovered that
they are indeed useful for video, especially since that was the original
purpose of the underlying architecture.

To make this even more confusing, once VoIP started
working over high performance links, clever folks started making it work over seemingly
unsuitable paths and then voice-grams started to blur the distinctions between
conversations and messaging. On the surface it all looks like telephony but it’s
not – and trying to preserve telephony becomes counter-productive.

The problem today is that observers who see these
applications “just work” confuse discovery with purpose and want to
bake in the applications in the same sense that some people supposedly wanted
to close the patent office in the 1800’s because everything had already
been discovered. (A useful story even if not true.)

I do think that we need new protocols less dependent upon
network managers tilting the playing field and protocols which limit the
ability of “bad players” to prevent others from discovering new
possibilities. This is one reason I’m wary of “proper network
management” since “proper” can reflect a presumption of
purpose.

The fears of network collapse have a basis in reality just
as the warnings that modems would destroy the phone network were real modulo
their assumed architectural limits. But the solution was not better network
management; it was increasing capacity so that such problems became
moot.

As with my comments on DSL, the issues of incentives and
funding are fundamental. Today’s network, in which network operators are
threatened by abundance, is the new “modem crisis”, with concerns
about neutrality being a countervailing force.

The real danger in the purpose-driven network is that we
focus on how to manage scarcity by favoring applications on the presumption that
the “network” is making promises rather than providing opportunity.
We should be addressing the root (and route?) causes of the problem – the
very idea that carriers have to monetize the traffic and thus have every
incentive to limit capacity and make promises that they can charge for.
Indeed carriers providing bandwidth and video are making promises rather than
just providing opportunity.

Alas, the discussion of “neutrality” gets lost in
this controversy over purpose. I’d like a simple formulation in which
providers have no stake in making promises but the discussion may be too
polluted by the presumption of purpose.

These considerations are very much on my mind as I
write about Ambient Connectivity. It’s
about creating opportunity and decoupling the applications from providers who
need to monetize each path.

Understanding AC requires a nuanced understanding of the
Internet’s dynamic and the success of the experiment in decoupling the
applications from the accidental properties of the transport and its
“owners”.