NNSquad - Network Neutrality Squad
[ NNSquad ] Re: [IP] Re: a wise word from a long time network person -- Mercury News report on Stanford hearing
- To: NNSquad <nnsquad@nnsquad.org>
- Subject: [ NNSquad ] Re: [IP] Re: a wise word from a long time network person -- Mercury News report on Stanford hearing
- From: Barry Gold <bgold@matrix-consultants.com>
- Date: Wed, 23 Apr 2008 13:45:41 -0700
Vint Cerf wrote:
In oversold conditions, the upper bound per subscriber appears to have
to be variable if some form of "fairness" is to be achieved. Moreover,
one might have to consider different upper bounds per subscriber,
depending on the capacity offered to the subscriber. One can imagine
differential pricing for higher (theoretical) capacity upper bounds.
Yes, I've been thinking about this a little. The current tools we are
using are too blunt.
We have:
. RST injection: terminates the connection, and may force the entire
file segment to be sent again. If applied at random, it can result in
one P2P user experiencing resource starvation while another gets nearly
full speed.
. Source Quench: Many TCP/IP implementations don't support/respect this
ICMP message. Some firewalls block ICMP, so the other end won't see it.
And IIRC it is currently deprecated.
. ICMP Destination Unreachable: when it works, it has the same effect as
RST: it will tear down the connection because the TCP/IP stack will
think the other end is "permanently" unreachable. It also suffers from
one of the problems of Source Quench: an ICMP message may not make it
through a firewall.
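To see just how blunt RST injection is, here is a sketch (in Python, purely for illustration) of the forged segment itself: an on-path box needs only the connection's port pair and an in-window sequence number to kill someone else's transfer. Field layout follows RFC 793; the checksum is left zero since a real injector's kernel or NIC would fill it in.

```python
import struct

def build_rst_segment(src_port: int, dst_port: int, seq: int) -> bytes:
    """Pack a minimal 20-byte TCP header with only the RST flag set."""
    offset_flags = (5 << 12) | 0x004   # data offset = 5 words, RST bit
    return struct.pack(
        "!HHIIHHHH",
        src_port, dst_port,
        seq,            # sequence number (must fall in receiver's window)
        0,              # ack number (unused; ACK bit clear)
        offset_flags,
        0,              # window size
        0,              # checksum (left 0 here for illustration)
        0,              # urgent pointer
    )

seg = build_rst_segment(443, 51512, 0x12345678)
assert len(seg) == 20
assert seg[13] == 0x04   # flags byte: RST only
```

Twenty bytes, no negotiation, no nuance: the whole connection dies, which is exactly the bluntness at issue.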
What we need is a less blunt instrument. Designing this will be a
little tricky, as it will probably be generated at the IP level (OSI
layer 3) but must be processed at the TCP level (OSI layer 4, or by an
application using UDP).
Basically, we need a way for any node in the network to tell a host
elsewhere, "you need to send less data". Included with that message
would be the following parameters:
1. How much data (octets/second) the node is willing to accept
2. The destination the throttling is to apply to:
a. This subnet (IP address + mask)
b. Specified endpoint (IP address _or_ IP address + port)
c. Specified connection (IP address, port quartet)
3. The source the throttling is to apply to:
a. Specified host
b. Entire network
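To make this concrete, here is one strawman encoding of such a message -- every field name, scope code, and the wire layout are my invention, not any standard:

```python
import ipaddress
import struct

# Hypothetical "throttle request" layout -- nothing here is standardized;
# it is just one way to encode the three parameters listed above.
THROTTLE_FMT = "!IB4sBHB"   # rate, dest scope, dest addr, prefix len, port, src scope

SCOPE_SUBNET, SCOPE_HOST, SCOPE_CONNECTION = 1, 2, 3
SRC_HOST, SRC_NETWORK = 1, 2

def pack_throttle(rate_bps: int, scope: int, dest: str,
                  prefix_len: int, port: int, src_scope: int) -> bytes:
    """Encode: max octets/second, the destination the throttling
    applies to (subnet / host / connection), and the source."""
    addr = ipaddress.IPv4Address(dest).packed
    return struct.pack(THROTTLE_FMT, rate_bps, scope, addr,
                       prefix_len, port, src_scope)

msg = pack_throttle(125_000, SCOPE_HOST, "137.222.33.11", 32, 7107, SRC_HOST)
assert len(msg) == 13   # comfortably short on the wire
```

Even with all three parameters, the payload fits in 13 bytes, which speaks to the point below about keeping the message small.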
This can't be layered on top of ICMP, because ICMP is frequently
blocked; we need a new kind of datagram that won't be filtered out by
existing firewalls (and some ISPs).
So, a network that is being overwhelmed(*) with data from a given host
can specify a variety of things:
1. Throttle traffic to a given destination down to X bps -- destination
= entire subnet, specific host, specific host/port, specific connection.
2. Throttle traffic from *all* hosts on your subnet aimed at my subnet
(or a specified host, IP address, or host/port)
The message should be quite short, so it consumes as little bandwidth as
possible. (Although I suspect that in today's networks a maximum-length
packet consumes not much more in resources than an ICMP echo with 1 byte
of payload.)
(*) Overwhelmed can mean a variety of things. Traditionally, it has
meant that the network itself is at capacity, or that a router has
filled up its memory with packets waiting to be forwarded, and is about
to start dropping some. But the realities of modern networks include cost
considerations, so "this is costing us too much money" is also a valid
reason to send a throttle message.
Also needed: a specification of _how often_ a given node may send such a
request -- we don't want the network overloaded by, say, generating a
new throttle packet every time it receives a packet from the source. It
should generate one packet, then wait "a while" and retransmit only if
it hasn't had the desired effect (possibly lost in transit).
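That "send once, wait a while, retransmit only if needed" logic might look like the following sketch; the 30-second quiet period is an arbitrary illustrative value, not a proposal:

```python
class ThrottleSender:
    """Rate-limit how often we emit a throttle request toward one remote
    source: send once, then stay quiet for MIN_INTERVAL seconds and
    retransmit only if traffic is still above the requested rate
    (the first request may have been lost in transit)."""
    MIN_INTERVAL = 30.0   # illustrative, not a recommended constant

    def __init__(self):
        self.last_sent = {}   # source address -> timestamp of last request

    def should_send(self, source: str, observed_bps: float,
                    requested_bps: float, now: float) -> bool:
        if observed_bps <= requested_bps:
            return False                      # throttle took effect; stay quiet
        last = self.last_sent.get(source)
        if last is not None and now - last < self.MIN_INTERVAL:
            return False                      # still inside the quiet period
        self.last_sent[source] = now
        return True

s = ThrottleSender()
assert s.should_send("10.0.0.1", 2e6, 1e6, now=0.0)       # first request goes out
assert not s.should_send("10.0.0.1", 2e6, 1e6, now=10.0)  # suppressed: too soon
assert s.should_send("10.0.0.1", 2e6, 1e6, now=40.0)      # still over limit: retransmit
```

The key property is that a flood of incoming packets generates at most one outgoing throttle request per quiet period, so the mechanism can't amplify the very congestion it is meant to relieve.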
And of course we need mechanisms to protect against malicious use of
these packets. It should be legitimate for an ISP's router to tell a
host connected to it, "hey, slow down with the traffic already," but not
for a host (or router) in Bangladesh to tell a host in NJ, "hey, send
less data to New York."
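A minimal version of that legitimacy check is topological: only honor a throttle request whose sender sits inside a network we actually attach to. This sketch (prefixes are example addresses; a real design would also want cryptographic proof that the sender is on the forwarding path) shows the idea:

```python
import ipaddress

def accept_throttle(sender: str, attached_nets: list[str]) -> bool:
    """Honor a throttle request only if its sender is inside one of the
    networks we directly attach to -- a crude on-path check, so a random
    off-path host can't slow our traffic to some third party."""
    addr = ipaddress.ip_address(sender)
    return any(addr in ipaddress.ip_network(net) for net in attached_nets)

nets = ["198.51.100.0/24"]                        # our ISP's access net (example prefix)
assert accept_throttle("198.51.100.7", nets)      # our own router: legitimate
assert not accept_throttle("203.0.113.9", nets)   # off-path host: ignore
```

This alone doesn't stop a spoofed source address, which is why some stronger authentication would still be needed in practice.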
This would provide a _legitimate_ way for Lariat, Comcast, and other
ISPs to control their costs and network congestion, slowing down the
hosts that are generating the most traffic while allowing traffic from
other hosts to proceed at full speed. The router could even maintain a
running average of traffic and notice that, e.g. port 7107 on host
137.222.33.11 is receiving a lot of traffic, and tell the remote host to
slow down traffic to 137.222.33.11/7107, while allowing other packets
(e.g., ports 80, 23, 25) to run at full speed or nearly so. This would
probably be desirable for most users: they would continue to see zippy
performance on their mail, HTTP, etc. connections, and their P2P -- or
other resource-heavy -- applications would slow down but not stop entirely.
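The running-average bookkeeping the router would need is simple; here is a sketch using an exponentially weighted average per (address, port) flow. The smoothing factor and the 1 Mbps "heavy" threshold are illustrative choices of mine, not part of any proposal:

```python
class FlowMeter:
    """Track a smoothed bytes/sec average per (dest addr, dest port) flow,
    so a router can single out the heavy P2P flow for throttling while
    leaving mail/web traffic alone."""
    ALPHA = 0.2               # illustrative smoothing factor
    HEAVY_BPS = 1_000_000     # illustrative "heavy flow" threshold

    def __init__(self):
        self.rates = {}   # (addr, port) -> smoothed bytes/sec

    def observe(self, addr: str, port: int, bps_this_interval: float):
        key = (addr, port)
        old = self.rates.get(key, 0.0)
        self.rates[key] = old + self.ALPHA * (bps_this_interval - old)

    def heavy_flows(self):
        return [k for k, v in self.rates.items() if v > self.HEAVY_BPS]

m = FlowMeter()
for _ in range(20):
    m.observe("137.222.33.11", 7107, 5_000_000)   # sustained P2P transfer
    m.observe("137.222.33.11", 80, 50_000)        # ordinary web traffic
assert m.heavy_flows() == [("137.222.33.11", 7107)]
```

Only the P2P flow crosses the threshold, so only it would be named in an outgoing throttle request; the web traffic on port 80 proceeds untouched.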