The sentence was: "Let network engineers do their jobs, with
appropriate oversight, and consumers will benefit." I don't
see why that's a problem.
To reiterate the essence of my observation: Real-time apps,
especially ones with high bandwidth requirements, will always be
problematic without QoS on a network in which most flows are
governed by TCP. TCP's congestion avoidance behavior creates the
need for QoS.
RB
On 8/14/2010 1:57 PM, Bob Frankston wrote:
Bennett's letter is a warning against trusting network management.
Layer 2 is not real -- the ISO stack is a model. Just one possible
decomposition. I remember a presentation in 1965 -- the speaker had to rush
along as the slides started melting after a few seconds. Perhaps it was a hint
of things to come. Treating a model as hard reality is a classic newbie error.
"Layer 3 people are consumers making a buy-or-don't-buy decision" misses the
entire point! If the networks are not for people then what are they for? I
have to pay for network services but don't make buy decisions? Huh?
"Let network engineers do their jobs"? What are their jobs -- just to take
orders?
As I've said, a pedestrian engineer does what he's told. A great engineer also
checks back against reality and learns.
I could go on, but these examples should be enough to make it clear that we
shouldn't trust network engineers who know they know what the users want even
if the users disagree.
-----Original Message-----
From: nnsquad-bounces+nnsquad=bobf.frankston.com@nnsquad.org
[mailto:nnsquad-bounces+nnsquad=bobf.frankston.com@nnsquad.org] On Behalf Of
Richard Bennett
Sent: Saturday, August 14, 2010 15:59
To: Lauren Weinstein
Cc: George Ou; nnsquad@nnsquad.org; 'Vint Cerf'
Subject: [ NNSquad ] Re: Irish Times: "A modest proposal on internet
neutrality"
RFC 2475 is an architecture document that was written to explain the
thinking behind the standards-track RFC 2474, the RFC that defined the use of
the DSCP field. Architecture documents are always informational, so there's no
knock on it for that.
QoS is implemented at Layers 1 and 2; the mechanisms above Layer 2 that relate
to QoS tend to deal with means by which the applications specify their desired
QoS from Layer 2 rather than with the implementation. QoS for Internet
protocols is simply a question of whether IP should utilize the QoS that Layer
2 implements. IP engineers therefore don't need to know much about QoS; IP is
essentially a consumer of the network services provided by Layer 2.
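As a concrete (if simplified) illustration of that relationship, here is a
minimal Python sketch of the one piece an application typically controls:
asking for a DSCP marking as defined in RFC 2474. The codepoint (46, Expedited
Forwarding) is standard, but the destination address and port below are just
placeholders, and whether any Layer 2 gear actually honors the mark is entirely
up to the network the traffic crosses.

    import socket

    # Sketch: mark outgoing datagrams with DSCP 46 (Expedited Forwarding,
    # RFC 2474), the codepoint commonly requested for real-time media.
    # The DSCP occupies the upper six bits of the old IP TOS byte.
    EF_DSCP = 46
    TOS_BYTE = EF_DSCP << 2          # 46 << 2 == 184 == 0xB8

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

    # Placeholder destination (TEST-NET-1 documentation address). From here
    # the Layer 2 network may map the mark onto one of its priority queues,
    # or simply ignore it; IP has only expressed a preference.
    sock.sendto(b"probe", ("192.0.2.1", 5004))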
Having worked in Layer 2 networking since before the Internet was designed, I
can tell you that QoS is not a controversial feature in that field, and that
there's always been a lot of head-scratching among network engineers about the
reluctance of internetwork engineers to use the features that networks
provide. The discussion of QoS among internetwork engineers, especially those
whose familiarity goes back to the ARPANET days, tends to be of a more hand-wavy
nature than it is among the people who design, model, and implement the QoS
control mechanisms in Layer 2 networks, but it's a different discussion. Layer
2 QoS engineers are solving an engineering problem, while Layer 3 people are
consumers making a buy-or-don't-buy decision.
Layer 2 deals with issues on a very different time scale than Layer 3 does.
Prioritization as a form of QoS is typically managed across congestion periods
of less than 1 second. It doesn't address the problem of chronic
under-provisioning, but it does address TCP's aggressive behavior of
constantly seeking to consume all available bandwidth. Regardless of the
amount of bandwidth provisioned, when most flows are governed by TCP, the
network will oscillate between periods of light use and overload. This is
baked into the TCP metrics, and DiffServ is simply a means by which real-time
applications can succeed in the face of it.
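A toy additive-increase/multiplicative-decrease loop makes that oscillation
easy to see; the capacity and starting rates below are arbitrary numbers chosen
for illustration, not a model of any real link or of TCP's actual window
arithmetic.

    # Each "flow" probes for more bandwidth every round trip until the link
    # overflows, then halves its rate -- so aggregate demand sawtooths around
    # capacity, producing the brief congestion episodes that per-hop
    # prioritization is meant to ride out.
    LINK_CAPACITY = 100.0            # arbitrary units per round-trip time
    flows = [10.0, 20.0, 30.0]       # current per-flow sending rates

    for rtt in range(20):
        total = sum(flows)
        if total > LINK_CAPACITY:
            flows = [rate / 2 for rate in flows]   # loss: multiplicative decrease
            event = "overload, all flows halve"
        else:
            flows = [rate + 1 for rate in flows]   # no loss: additive increase
            event = "underload, each flow adds 1"
        print(f"rtt {rtt:2d}: demand {total:6.1f}  ({event})")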
Fred Baker tells a story about a network segment that Bell South used to
operate. The admins noticed 25% packet loss on the segment, so they upgraded
it from OC-3 to OC-12. That's a 4x increase in bandwidth. What do you suppose
happened to the network? Packet loss declined, but not all the way to zero: they
still saw about 5 percent. That's TCP for you: it wants to get all the bandwidth
that's available, even though its applications are flexible in terms of their
completion time.
Internets of the future need to accommodate diverse and heterogeneous
applications. Layer 2 networks are constantly tweaked, tuned, and provisioned
for the application mix. This isn't evil, it's network engineering.
Let network engineers do their jobs, with appropriate oversight, and consumers
will benefit.
RB
On 8/14/2010 9:05 AM, Lauren Weinstein wrote:
1) Confusing RFCs that are explicitly "informational only" with IETF
standards is sloppy and not recommended.
2) Dan Bricklin's essay: "Why We Don't Need QOS: Trains, Cars, and
Internet Quality of Service" is still very good reading:
http://bit.ly/bL1W1J (Dan Bricklin's Web Site)
My own view is that there may be some role for carefully crafted QoS
-- but that a) it's critical that it not be capable of being unfairly
"gamed" - b) its use should be as limited as possible - c) if used at
all, it should generally apply equally to all traffic of the same
class - d) you should not be able to "buy" higher priority for
arbitrary data across the public Internet - and e) I'd much prefer to
see bandwidth capacity increases avoid the need for QoS at all.
Note that QoS under these terms does not make it impossible (in
theory, anyway) to associate a higher class of service for designated
real-time public safety data, but would (thankfully) make it difficult
to buy high priority for spam. But again, the Internet user community
overall is best served by increases in bandwidth that potentially
benefit everyone.
--Lauren--
NNSquad Moderator
- - -
On 08/14 00:42, George Ou wrote:
Acceptance by who? RFC 2475 says:
"Service differentiation is desired to accommodate heterogeneous
application requirements and user expectations, and to permit
differentiated pricing of Internet service."
Furthermore, this is already an accepted practice on the Internet.
ISPs like TeliaSonera already sell access to Blizzard with enhanced
priority.
Business connections routinely have enhanced priority. Global
Crossing sells enhanced priority to business customers, and it even
extends that priority to partner networks in Asia; this has been
happening for a while now. Who is Google or anyone else to say this is wrong?
The FCC's NPRM proposal bans charges for "enhanced or prioritized"
access to content/application/service providers, and that is a pretty
broad brush. That potentially outlaws a number of beneficial
models I outlined here:
http://www.digitalsociety.org/2010/01/preserving-the-open-and-competitive-bandwidth-market/.
If you're a content provider, why are you no longer a "business"?
Furthermore, a ban on Paid Peering harms smaller websites that can't
build their own infrastructure and negotiate free peering. Is it a
coincidence that this harms Google's competitors? Wait, I thought
Google cared about the "two guys in a garage"? Oh wait, that was
just lip service and Google actually doesn't care.
http://www.digitalsociety.org/2009/11/the-hypocrisy-of-google-and-skype/.
Lastly, Net Neutrality doesn't even allow for user-approved prioritization.
If a user explicitly gives an ISP permission to prioritize a
particular website or a general class of applications, who are you or
anyone else to say no? Would you suggest that user isn't smart
enough to know what's good for himself or herself? As far as I'm
concerned, a user should be allowed to discriminate in favor of
content they like or against content they don't care about when it
comes to their own broadband service. They should be allowed to
implement this discrimination themselves or authorize someone else (like
the ISP) to do it for them.
George Ou
-----Original Message-----
From: Vint Cerf [mailto:vint@google.com]
Sent: Friday, August 13, 2010 6:59 PM
To: George Ou
Cc: Lauren Weinstein; nnsquad@nnsquad.org
Subject: Re: [ NNSquad ] Re: Irish Times: "A modest proposal on
internet neutrality"
George,
I think that there is acceptance that charging more for more capacity
(bits/sec) is reasonable, but that differential charging for priority,
regardless of the type of traffic (e.g. real time, low delay, or file transfer, or
...), could lead to anti-competitive consequences in which
established competitors might prevent new competitors from gaining
adequate access simply by consuming available capacity at high
priority to squeeze out the competition.
vint
On Fri, Aug 13, 2010 at 1:47 PM, George Ou <george_ou@lanarchitect.net> wrote:
"You pay your service provider a fixed charge, and it mostly keeps
no eye
on
who you connect to, or who connects to you. In a non-neutral world,
the
ISP
could block your access to a popular website until you paid an extra
fee (like extra satellite or cable channels)"
That is clearly a clueless and misleading statement for anyone
that's even semi up to date on the actual policy debate. �The
FCC's net neutrality proposal actually doesn't prohibit broadband
providers for charging customers for higher priority; it prohibits
broadband providers from offering "enhanced or prioritized" services
to content/app/service
providers
on a truly voluntary basis. �That's the real sticking point that
many reasonable people have a problem with.
George
-----Original Message-----
From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org
[mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On Behalf Of
Lauren Weinstein
Sent: Friday, August 13, 2010 8:56 AM
To: nnsquad@nnsquad.org
Subject: [ NNSquad ] Irish Times: "A modest proposal on internet
neutrality"
Irish Times: "A modest proposal on internet neutrality"
http://bit.ly/bm2rw7 (Irish Times)
--Lauren--
NNSquad Moderator
--
Richard Bennett
Senior Research Fellow
Information Technology and Innovation Foundation
Washington, DC