NNSquad - Network Neutrality Squad


[ NNSquad ] Re: [IP] BitTorrent uTorrent 2.0 uTP will self-throttle to protect networks


On 11/02/2009 08:40 PM, George Ou wrote:
>
> It's not just Richard Bennett that disagrees with Vint Cerf on Net 
> Neutrality.  Dr. Robert Kahn and Dr. David Farber are on that list as 
> well and they do not have an aversion to intelligence on the Internet 
> and they don't believe that intelligence needs to be regulated out of 
> existence.
>
I don't understand what point you are trying to make here.   Is this 
an allusion to some weird democratic voting institution?   Are all the 
"Dr." mentions supposed to be an appeal to credentials?

I haven't spoken to Bob Kahn recently, but I have spoken to Dave 
Farber.  His view as explained to me about Net Neutrality is that he 
doesn't trust the government to implement any kind of rule without 
screwing it up royally (I'm happy for Dave to correct me).   I could ask 
Bob - we're good friends  and have been since he recruited me into the 
Internet project.   However, knowing Bob, I suspect his concerns about 
Net Neutrality are similar: he doesn't think regulators should design 
outcomes.   I personally tend to agree with that sentiment - but I 
believe regulators are needed, not for design, but to insure against 
misbehavior by entities that have too much market power.  And we see 
lots of claims being made by companies' lobbyists that are similar to 
the old Hush-A-Phone and Carterfone claims: if users and entrepreneurs 
were to run their own applications and define their own priorities, the 
whole telecommunications system would fall apart, so the operators 
should be free to screw around with user-paid-for applications like 
BitTorrent, calling them "stealing".

No one has ever suggested that "intelligence should be *regulated* out 
of existence".  Those are your words, a "straw man" of sorts - how you 
would characterize someone else's views.   I don't use the word 
"intelligence" because I think (with Weizenbaum and others) that it 
cheapens the term to apply it to powerful technical methods of whatever 
sort.   However, I would argue that having the network elements involved 
in transport try to "optimize" functions other than efficiency of bit 
transport and flexibility of switching (routing included) is not a good 
choice.   Blurring the terms together to argue bizarre constructions 
about "intelligence" and "regulation" makes for great political 
speeches.  It's a lousy approach to technical design and architecture.
>
> You are mischaracterizing the problem by suggesting that a network has 
> to inspect the contents of the packets (not that there is anything 
> wrong with content inspection and DPI 
> <http://www.digitalsociety.org/2009/10/understanding-deep-packet-inspection-technology/>) 
> to classify priority.  There are very accurate ways to classify 
> traffic and it has nothing to do with inspecting the content of 
> packets and Richard Bennett described one of them on a comment to my 
> site where he stated:
>
> "There's actually a simpler way to do this that doesn't require the 
> ISP to examine the traffic to determine what's what at all: divide the 
> time into small sampling intervals (sub-second) and give the first few 
> packets in each interval highest priority; then lower the priority of 
> each following packet linearly. During periods of inactivity, allow 
> credits to accumulate that increase the number of high-priority packets.
>
> That's not the whole story, of course, but it's a good start."
>
Perhaps I miss the point, but how does this algorithm give priority to 
whatever the user wants prioritized?  It sounds like a scheduling 
discipline from a time-sharing system, one that assumes light users 
should get better performance than users who run complex algorithms but 
need real-time response.   It doesn't tell you "what's what" - all it 
does is define a sharing discipline in which all packets are treated 
the same, except for their rate of arrival.   When an engineer says "no 
one should ever want to do that," I hear "I know what's good for every 
user."  That's arrogance.
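To make sure I'm reading Bennett's description right, here is a minimal 
sketch of the mechanism as I understand it.  The interval callback, slot 
count, slope, and credit cap are hypothetical parameters of my own, 
since the quote gives none:

```python
# Sketch of the quoted interval/credit scheme (my reading of Bennett's
# description; all numeric parameters here are hypothetical).

class IntervalPriority:
    def __init__(self, high_slots=4, slope=1, max_credit=16):
        self.base_slots = high_slots   # high-priority packets per interval
        self.slope = slope             # linear priority decrease per packet
        self.max_credit = max_credit   # cap on banked idle credits
        self.credit = 0                # earned during idle intervals
        self.seen = 0                  # packets seen this interval

    def new_interval(self):
        """Called at each sub-second sampling-interval boundary."""
        if self.seen == 0:                       # idle interval: bank a credit
            self.credit = min(self.credit + 1, self.max_credit)
        self.seen = 0

    def priority(self):
        """Priority assigned to the next arriving packet (higher = better)."""
        slots = self.base_slots + self.credit    # credits widen the burst
        self.seen += 1
        if self.seen <= slots:
            return 100                           # highest priority
        return max(0, 100 - self.slope * (self.seen - slots))
```

Note what the sketch makes plain: nothing here looks at what the user is 
doing; it only rewards arrival patterns.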
>
> There are very accurate ways to simply analyze the traffic pattern to 
> correctly identify the needs of applications and to fairly allocate 
> bandwidth and queue management.  Just alternating the queue between 
> different applications and different users would be infinitely more 
> fair than a dumb FIFO system.
>
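For concreteness, the alternation described - rotating service among 
per-user or per-application queues instead of one shared queue - is 
plain round-robin; a minimal sketch, with hypothetical flow keys:

```python
from collections import deque, OrderedDict

# Round-robin over per-flow queues, in contrast to one shared FIFO.
# Flow keys ("A", "B", ...) and packet payloads are hypothetical.

class RoundRobinQueue:
    def __init__(self):
        self.queues = OrderedDict()   # flow id -> deque of packets, in ring order

    def enqueue(self, flow, packet):
        self.queues.setdefault(flow, deque()).append(packet)

    def dequeue(self):
        """Serve flows in rotation; flows that empty leave the ring."""
        if not self.queues:
            return None
        flow, q = next(iter(self.queues.items()))
        packet = q.popleft()
        del self.queues[flow]
        if q:
            self.queues[flow] = q     # re-insert at the back of the ring
        return packet
```

With packets A1, A2 from one flow and B1 from another already queued, a 
FIFO serves A1, A2, B1; the round-robin above serves A1, B1, A2.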
Who proposed a "dumb FIFO system"?  Again, just a strawman argument - is 
this anything other than mischaracterizing what others are saying so 
that you can be right?  The Internet is full of pretty damn 
sophisticated technology that works smoothly under the IP abstraction.  
IP is not "FIFO" - the "in-order delivery" requirement of *some* 
applications is provided by TCP or by (timestamp ordering in) RTP.  "At 
most once" delivery is achieved by packet labeling and discarding at the 
receiver.  "At least once" delivery is achieved by source retransmission 
until acknowledgement.   The underlying transport networks can work by 
ESP if someone figures out how ESP works.  No rule against using 
intelligence in the underlying networks.  The issue is just that the 
end-to-end protocols are not *dependent* on the underlying network.
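These end-to-end mechanisms are easy to sketch in miniature.  The lossy 
channel and its parameters below are hypothetical stand-ins, not any 
real network API:

```python
import random

# "At least once" from sender retransmission until acknowledgement;
# "at most once" from sequence labeling and duplicate discard at the
# receiver.  The loss model is a made-up stand-in for a flaky network.

class Receiver:
    def __init__(self):
        self.delivered = []   # what the application actually sees
        self.seen = set()     # sequence numbers already accepted

    def accept(self, seq, data):
        # "At most once": label packets, discard duplicates at the receiver.
        if seq not in self.seen:
            self.seen.add(seq)
            self.delivered.append(data)

def send_reliably(receiver, seq, data, loss=0.5, rng=random.Random(42)):
    # "At least once": retransmit until an acknowledgement gets back.
    while True:
        if rng.random() > loss:         # data packet survived the channel
            receiver.accept(seq, data)  # may arrive more than once
            if rng.random() > loss:     # ack survived the return path
                return

rx = Receiver()
for i, msg in enumerate(["a", "b", "c"]):
    send_reliably(rx, i, msg)
# Despite losses and duplicate arrivals: rx.delivered == ["a", "b", "c"]
```

The point of the sketch is that both guarantees live entirely at the 
endpoints; the channel in the middle is free to lose or duplicate 
packets without breaking them.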

> Lastly, you keep ignoring my statement that user or application 
> preference *SHOULD TAKE PRECEDENCE* over the ISP's default settings 
> so long as it is within quota.  Sorry to yell, but you seem to be 
> ignoring this very important point and you're misrepresenting my 
> position by doing so.
>
There are no quotas today.   I cannot respond to your point in the 
absence of the quotas you imagine to be real.   In what language is 
this "default setting" expressed?

I take it you are proposing a network architecture other than the 
Internet, because you refer to a very different approach.   Feel free to 
construct a worldwide network other than the Internet, get lots of 
applications to use it, finance its deployment.  Fine.  Don't call it 
the Internet, and you've got a potential winner.  Perhaps it should be 
called "The Bell System".   It didn't deliver web pages sourced in 
pieces from lots of different servers within milliseconds, but it was 
damned good at delivering isochronous single fixed rate streams from one 
point on the globe to another.    Maybe it would have been a better 
direction.  The ITU thought so.  We could write an "alternate history" 
science fiction story, just like the idea of digital steam-powered 
computers created "steampunk".

But the Internet has done pretty well.   Why break it?  To prove that 
Metcalfe was right that the Internet couldn't work at scale?
>
> I believe that it is fair to say that the belief that low bandwidth 
> applications (especially real-time) don't deserve to be prioritized 
> over high bandwidth applications is fringe and I think many good 
> engineers would share that position.
>
Many people are more flexible in their thinking - they are capable of 
thinking that priority has no inherent connection to bitrate.  But if 
you cannot imagine that, I guess you think that more flexible thinkers 
are fringe thinkers.   I don't know what makes a "good engineer".  
However, since my father was an engineer, and I was trained by 
engineers, I suspect that the tradition of engineering focuses on what 
*users* want and not what *engineers* want them to want.


    [ The amount of quoted text on the messages in this
      thread has gotten out of hand and has been triggering
      extra digest transmissions due to size.  I have removed
      the quoted text beyond this point in this message.  Please
      try to keep use of quoted text in replies to the minimum
      necessary.  Thanks.

           -- Lauren Weinstein
              NNSquad Moderator ]