
[ NNSquad ] Re: [IP] BitTorrent uTorrent 2.0 uTP will self-throttle to protect networks


I don't think the proper argument is about credentials.  The proper argument is about design and architecture.   Bennett keeps trying (on this and other lists) to impute motivations to people, to make claims about their deviousness, etc.

Vint's original point is about putting the user in control of priority, since the users (and the services the user uses on the other side of the network) know what's important and what's not.  Reading tea leaves by inspecting the contents of packets for clues is a piss-poor way to judge what the user wants.  Let the user say "this stream of packets is life and death" even if it looks like a music download, and "this one is not terribly important" even if it looks like a telephone call.
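To make "let the user say" concrete, here is a minimal sketch of an application labeling its own traffic instead of leaving the guess to packet inspection. It sets the DSCP bits on a UDP socket via the standard IP_TOS socket option (available on platforms that expose it); the Expedited Forwarding code point and the test address are illustrative only, and whether any ISP honors the mark is exactly the policy question under debate.

import socket

# DSCP Expedited Forwarding (46) shifted into the upper six bits of the
# IP TOS byte; the choice of EF here is purely illustrative.
DSCP_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to mark outgoing packets as high priority.  Whether the
# network honors the mark is up to the ISP -- which is the policy
# question being argued in this thread.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
sock.sendto(b"this stream of packets is life and death", ("192.0.2.1", 5060))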

It's not up to the ISP to decide for the user what services are important and what protocols are important.

Trying to claim that Vint's view on this is a "fringe" idea is just nonsensical.

On 11/02/2009 06:34 PM, George Ou wrote:

Vint Cerf says: “I don't understand why a low bandwidth application is necessarily higher priority than a high bandwidth one for example.”

 

With all due respect to your credentials, it seems like you’re taking the end-to-end argument too literally, more so even than most of the authors of that paper.

I think you have a fringe position that a lot of great engineers and academics would vehemently disagree with.  In fact, it violates the most fundamental concepts of fairness for low-bandwidth, non-jitter-inducing applications such as VoIP or online gaming (sub-100 Kbps) not to have their packets forwarded first.  Despite the lower priority state, the high-bandwidth applications WILL STILL receive the highest average bandwidth, and the overall file transfer speed of a P2P application would be unchanged.  So the P2P application will experience zero degradation (I’d argue it would improve in performance, because fewer people would shut P2P off if it is less toxic) and VoIP or online gaming would experience close to zero jitter.  Doing round robin on the transmit queue is fundamentally more fair than a First In First Out (FIFO) system.
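To illustrate the FIFO-versus-round-robin claim, here is a minimal, self-contained sketch (link speed, packet sizes, and flow mix are all invented for illustration): a single saturated bottleneck drains packets either FIFO or round-robin across per-class queues, and the measured VoIP queueing delay collapses under round robin while the bulk flow still receives essentially the entire link in both cases.

import collections, statistics

LINK_BYTES_PER_MS = 250      # ~2 Mbps of usable uplink, purely illustrative
SIM_MS = 2000                # simulate two seconds of saturated uplink

def arrivals(now):
    # One 200-byte VoIP packet every 20 ms, plus a steady flood of
    # 1500-byte bulk packets -- far more than the link can carry.
    pkts = []
    if now % 20 == 0:
        pkts.append(("voip", now, 200))
    pkts += [("p2p", now, 1500)] * 2
    return pkts

def simulate(discipline):
    queues = {"voip": collections.deque(), "p2p": collections.deque()}
    single = collections.deque()          # used only by the FIFO discipline
    rr_order = ["voip", "p2p"]
    voip_delays, budget = [], 0
    for now in range(SIM_MS):
        for pkt in arrivals(now):
            (single if discipline == "fifo" else queues[pkt[0]]).append(pkt)
        budget += LINK_BYTES_PER_MS       # this millisecond's byte budget
        while True:
            if discipline == "fifo":
                q = single
            else:                         # round robin between the classes
                cls_next = next((c for c in rr_order if queues[c]), None)
                q = queues[cls_next] if cls_next else None
            if not q or q[0][2] > budget:
                break                     # nothing sendable this millisecond
            cls, arrived, size = q.popleft()
            budget -= size
            if discipline != "fifo":
                rr_order = [c for c in rr_order if c != cls] + [cls]
            if cls == "voip":
                voip_delays.append(now - arrived)
    return statistics.mean(voip_delays)

for disc in ("fifo", "round_robin"):
    print(disc, "mean VoIP queueing delay:", round(simulate(disc), 1), "ms")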

 

If the system is implemented accurately and measures protocols based on data patterns rather than simple port-number identification, it would prevent protocol-masquerading abuse and avoid misclassifying some “P2P” protocols, such as Skype, as “background” applications.
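As a toy illustration of classifying by behaviour rather than by port number, here is a crude sketch; the features and thresholds are invented and are far simpler than any real classifier.

from statistics import mean, pstdev

def classify_flow(pkt_sizes, pkt_intervals_ms):
    # Toy behavioural classifier: looks at packet sizes and timing rather
    # than port numbers.  Thresholds are invented for illustration only.
    avg_size = mean(pkt_sizes)
    pacing_jitter = pstdev(pkt_intervals_ms)
    if avg_size < 300 and pacing_jitter < 10:
        return "interactive"   # small, regularly paced: VoIP/gaming-like
    if avg_size > 1000:
        return "bulk"          # MTU-sized packets back to back: transfer-like
    return "default"

# A Skype-like flow is judged by its behaviour, not by whether it "looks P2P":
print(classify_flow([160] * 50, [20] * 49))    # -> interactive
print(classify_flow([1500] * 50, [1] * 49))    # -> bulk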

 

 

Vint Cerf says: “Nor do I see that low duration should necessarily have precedence over high duration (regardless of bandwidth).”

 

I have explained in figure 4 of my article why this makes sense (concept thanks to Bob Briscoe).  Assuming that both applications are bursty, i.e., they take whatever bandwidth the network can feed them, the lower-duration application with much smaller transfers should always get higher priority.  Again, this would result in no performance degradation for a large bulk transfer, since the low-duration application would simply get out of the way sooner.  The difference is that the low-duration application (web surfing) would run MUCH better than before, which would allow users to leave their P2P or any other file transfer application running 24x7 without fear of degrading their network.  The result is that web browsing and other low-duration applications run much better, and P2P would run faster due to the increase in available seeders.
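As one concrete reading of the "lower duration gets out of the way sooner" argument, here is a minimal least-attained-service sketch; the flow names, packet sizes, and tie-breaking details are invented for illustration and are not Briscoe's or the article's actual mechanism.

class LeastAttainedServiceScheduler:
    # Toy link scheduler: among backlogged flows, always send next from
    # the flow that has received the least service so far.
    def __init__(self):
        self.sent_bytes = {}      # flow id -> bytes already served
        self.queues = {}          # flow id -> list of queued packet sizes

    def enqueue(self, flow, size):
        self.queues.setdefault(flow, []).append(size)
        self.sent_bytes.setdefault(flow, 0)

    def dequeue(self):
        backlogged = [f for f, q in self.queues.items() if q]
        if not backlogged:
            return None
        flow = min(backlogged, key=lambda f: self.sent_bytes[f])
        size = self.queues[flow].pop(0)
        self.sent_bytes[flow] += size
        return flow, size

sched = LeastAttainedServiceScheduler()
for _ in range(100):
    sched.enqueue("p2p", 1500)    # long-running bulk transfer
for _ in range(10):
    sched.dequeue()               # the bulk flow has been running a while
for _ in range(3):
    sched.enqueue("web", 1500)    # a short web page fetch arrives
# The three web packets go out ahead of the remaining bulk backlog; the
# bulk transfer still finishes, just three packet slots later.
print([sched.dequeue()[0] for _ in range(5)])   # -> web, web, web, p2p, p2p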

 

 

Vint Cerf says: “I can readily understand, for example, the shaping of the overall traffic envelope for a given user, based on the service class to which that user belongs (here I am thinking of maximum burst capacity as a measure of "class").”

 

As I’ve mentioned before, you are severely limiting the tools available to engineers and services by suggesting that the only permissible differentiator between classes should be maximum bandwidth.  Moreover, there’s no reason users shouldn’t be allowed to purchase different levels of fractional ownership (in the form of usage caps), or even multiple usage caps, e.g., one each for low, medium, and high priority, where the lower priorities get the most generous caps (possibly no caps at all).
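In accounting terms, "multiple usage caps, one per priority level" might look something like the sketch below; the cap sizes and the demote-on-exhaustion rule are assumptions for illustration, not a described ISP product.

# Hypothetical per-priority monthly caps for one subscriber.
GB = 1024 ** 3
caps = {"high": 5 * GB, "medium": 50 * GB, "low": None}   # None = uncapped
used = {c: 0 for c in caps}
ORDER = ["high", "medium", "low"]

def account(requested_class, nbytes):
    # Charge nbytes to the requested class, demoting to the next lower
    # class whenever that class's cap is already exhausted.
    start = ORDER.index(requested_class)
    for cls in ORDER[start:]:
        cap = caps[cls]
        if cap is None or used[cls] + nbytes <= cap:
            used[cls] += nbytes
            return cls            # traffic actually sent at this priority
    return "low"                  # unreachable here: "low" is uncapped

print(account("high", 2 * GB))    # -> high
print(account("high", 4 * GB))    # -> medium (the high-priority cap is spent)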

 

 

George Ou

 

From: Vint Cerf [mailto:vint@google.com]
Sent: Monday, November 02, 2009 3:06 PM
To: George Ou
Cc: 'David P. Reed'; 'NNSquad'; 'Lauren Weinstein'; 'Richard Bennett'
Subject: Re: [ NNSquad ] Re: [IP] BitTorrent uTorrent 2.0 uTP will self-throttle to protect networks

 

george,

 

Yes I do have problems with the default choices as I am not sure it is clear that these defaults make sense. I don't understand why a low bandwidth application is necessarily higher priority than a high bandwidth one for example. Nor do I see that low duration should necessarily have precedence over high duration (regardless of bandwidth). These choices seem bereft of clear rationale.  I think one area that may drive our differences is whether there is an overall workable way to allocate capacity among users, independent of priority within that capacity. I can readily understand, for example, the shaping of the overall traffic envelope for a given user, based on the service class to which that user belongs (here I am thinking of maximum burst capacity as a measure of "class"). In times of congestion, I think I would be inclined to argue for user prioritization within a "fair share" of the available capacity for that user.
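Vint's "prioritization within a fair share" framing can be read as a two-level scheduler. Here is a minimal sketch under that reading (the quantum size, user names, and priority numbering are invented for illustration): the outer level gives each user the same byte quantum per round, and the inner level applies that user's own priority ordering within the quantum.

import collections

class TwoLevelScheduler:
    # Outer level: round-robin a byte quantum across users (the fair share).
    # Inner level: obey each user's own priority ordering within the quantum.
    QUANTUM = 1500                       # bytes per user per outer round

    def __init__(self, users):
        # user -> priority (0 = highest) -> deque of packet sizes
        self.q = {u: collections.defaultdict(collections.deque) for u in users}

    def enqueue(self, user, priority, size):
        self.q[user][priority].append(size)

    def one_round(self):
        # Serve one quantum for each user; return what was sent.
        sent = []
        for user, prios in self.q.items():
            budget = self.QUANTUM
            for prio in sorted(prios):   # the user's own ordering
                while prios[prio] and prios[prio][0] <= budget:
                    size = prios[prio].popleft()
                    budget -= size
                    sent.append((user, prio, size))
        return sent

s = TwoLevelScheduler(["alice", "bob"])
s.enqueue("alice", 0, 200)     # alice marks her VoIP as top priority
s.enqueue("alice", 1, 1200)    # ...and her download as lower priority
s.enqueue("bob", 1, 1500)      # bob only has bulk traffic queued
print(s.one_round())
# alice's 200-byte packet goes first within her own share; bob still gets
# his full quantum, so neither user can crowd out the other.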

 

vint

 

 

On Nov 2, 2009, at 5:58 PM, George Ou wrote:



Dr. Cerf,

 

I understand that Google and others like to tout “user preference prioritization”, but you haven’t addressed some of the key limitations to that system.  I am fine with user-labeled priority so long as it operates with reasonable priority quotas and budgets, but what do you do about the vast majority of users and applications that fail to label accurately or fail to label at all?  So my question to you is this:

 

·         Do you have a problem with a default priority mechanism implemented by the ISP (one that would cede control to user or application preference so long as it is within quota) which always gives low-bandwidth applications priority over high-bandwidth applications, and low-duration applications priority over high-duration applications?  Do you have a problem with this type of good discrimination?  (A toy sketch of such a decision rule follows these questions.)

·         If you do have a problem with a default ISP priority, please explain your reasoning.  Is the objection based on a concern that a default prioritization scheme would inaccurately classify traffic (even though we can classify based on packet patterns rather than simple port identification), or do you have a philosophical problem with it?  And if so, how would this be any different from Comcast’s “Fair Share” system, which prioritizes low-bandwidth users (averaged over 15 minutes) over high-bandwidth users, a system the FCC reviewed and considers fair?
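For concreteness, the decision rule being asked about might look something like the toy function below; every threshold, field name, and the quota figure are invented for illustration.

# Toy reading of "default priority that cedes to user labels within a quota".
PRIORITY_QUOTA_BYTES = 100 * 1024 * 1024     # high-priority bytes per month

def effective_priority(user_label, quota_used, flow_kbps, flow_age_s):
    # 1. Honour an explicit user/application label while quota remains.
    if user_label is not None and quota_used < PRIORITY_QUOTA_BYTES:
        return user_label
    # 2. Otherwise fall back to the default argued for in the post:
    #    low-bandwidth, short-lived flows ahead of big, long-running ones.
    if flow_kbps < 100 and flow_age_s < 60:
        return "high"
    if flow_kbps < 1000:
        return "normal"
    return "background"

print(effective_priority("high", 0, 5000, 3600))   # user label wins
print(effective_priority(None, 0, 64, 10))         # unlabeled VoIP-like flow
print(effective_priority(None, 0, 8000, 7200))     # unlabeled bulk transfer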

 

 

 

George Ou

 

From: Vint Cerf [mailto:vint@google.com] 
Sent: Monday, November 02, 2009 2:33 PM
To: George Ou
Cc: 'David P. Reed'; 'NNSquad'; 'Lauren Weinstein'
Subject: Re: [ NNSquad ] Re: [IP] BitTorrent uTorrent 2.0 uTP will self-throttle to protect networks

 

George,

 

This discussion suggests that users should have something to say about the priority of packet flows WITHIN the capacity they are paying for (capital letters just in lieu of italics; I am not shouting). If the access ISP can do traffic shaping to keep users within their pro-rata envelopes and also respond to user-specified priority, I would think we would be moving toward a balance that seems useful.

 

vint

 

 

On Nov 2, 2009, at 1:54 PM, George Ou wrote:




I’ve published my results here.

 

Dr. Reed, your use of the words “rhetoric” and “tricks” isn’t very useful to this discussion, and I would take issue with your comments.

 

1.       BitTorrent still hogs over 90% of my broadband connection over HTTP.  This has significant ramifications beyond just real-time applications like VoIP and online gaming.

2.      You shouldn’t be so quick to discount VoIP and online gamers.  A very large number of BitTorrent (or any P2P app) users also do online gaming and VoIP, and they’re forced to shut down their P2P application when they use VoIP or game.  That actually hurts P2P upload and download throughput for the entire P2P community, since there are fewer seeders.

3.      Don’t conflate wireless with wired broadband.  Just because a 150 ms ping is the best case for wireless doesn’t make an additional 70 ms on a wired network bearable for online gaming.  Maybe you’re different, but I don’t know any gamer who will put up with an additional 70 ms if they can help it.  I thought it would be tolerable for VoIP, but my Lingo VoIP phone service drops a significant amount of audio even when I merely upload with BitTorrent.

 

 

 

George Ou

 

From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org [mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On Behalf Of David P. Reed
Sent: Monday, November 02, 2009 8:43 AM
To: 'NNSquad'
Cc: Lauren Weinstein
Subject: [ NNSquad ] Re: [IP] BitTorrent uTorrent 2.0 uTP will self-throttle to protect networks

 

I find the word games/rhetorical tricks that Ou and Bennett use fascinating.  We'll see whether Farber posts my response below.

George Ou wrote:

Subject: RE: [ NNSquad ] BitTorrent uTorrent 2.0 uTP will self-throttle to protect networks

Too bad nobody ever bothered to test if these claims actually hold water before repeating them endlessly.  I just tested uTorrent version 2 build 16850 today and it still grabs all the bandwidth and jacks up the ping to unbearable levels for online gaming and VoIP.  It certainly does NOT protect my network.

I will do some testing myself, because I am curious about the mechanism in uTorrent 2.0.   I do note that "unbearable levels for online gaming and VoIP" is an interesting statement.

If true, that means ping times might be 100 msec or more.  Now, since I have recently been measuring ping times on networks where no "uTorrent" or other P2P services are running, I can tell you that on a variety of commercial providers, 150 msec ping times are common, and on AT&T 3G in several cities there are stable, measurable ping times on the order of 2000-5000 msec.

So the "data" presented by Mr. Ou comes down to a very, very interesting choice of phrase: say that it is "unbearable" for two of the most latency-sensitive applications (only).

I would, myself, stick to scientific measurements: how many milliseconds?   Clearly he has measured that data.   But I presume the hope of a talented columnist is to get the word "unbearable" to stick in the mind, and leave the "bumper sticker" impression without the qualifying information.

Rhetorical trickery?  You be the judge.   I'm gonna report numbers.
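For anyone who wants to report numbers of that kind, a crude way to get them (the host, port, and sample count below are arbitrary) is to time TCP handshakes repeatedly, once with the torrent client idle and once while it is uploading, and compare the milliseconds.

import socket, statistics, time

def rtt_samples(host="example.com", port=80, n=20):
    # Crude RTT probe: time n TCP handshakes and report the results in
    # milliseconds.  Failed attempts are skipped here; count them
    # separately if loss matters for the comparison.
    samples = []
    for _ in range(n):
        start = time.monotonic()
        try:
            socket.create_connection((host, port), timeout=5).close()
            samples.append((time.monotonic() - start) * 1000.0)
        except OSError:
            pass
        time.sleep(0.5)
    return samples

if __name__ == "__main__":
    ms = rtt_samples()
    if ms:
        print(f"n={len(ms)}  median={statistics.median(ms):.1f} ms  "
              f"max={max(ms):.1f} ms")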