NNSquad - Network Neutrality Squad
[ NNSquad ] Re: [IP] a wise word from a long time network person -- Mercury News report on Stanford hearing
- To: NNSquad <nnsquad@nnsquad.org>
- Subject: [ NNSquad ] Re: [IP] a wise word from a long time network person -- Mercury News report on Stanford hearing
- From: Barry Gold <bgold@matrix-consultants.com>
- Date: Tue, 22 Apr 2008 12:25:06 -0700
From: Tony Lauck [tlauck@madriver.com]
Sent: Saturday, April 19, 2008 1:48 PM
To: David Farber
Subject: Re: [IP] a wise word from a long time network person -- Mercury News report on Stanford hearing
> I have no objection to Comcast's managing its network performance. My
> objection has been to the *form* of Comcast's management, namely the
> forging of RST packets.
Yes. This is important. I was thinking about this last night, and
realized that a protocol, like the "public" interface of an OO class, is
a _contract_ between the provider and user of a service (or between two
communicating peers).
If you violate the "contract" -- changing your behavior in ways that
don't conform to the protocol/interface -- things are likely to stop
working. Often in mysterious ways.
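To make the analogy concrete, here is a minimal Python sketch (the
interface, class names, and the "registered user" check are my own
invention for illustration, not anything from the DNS world): a caller
written against the documented contract works with a conforming
implementation and quietly misbehaves with one that "helpfully"
substitutes an answer instead of signaling failure.

from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    # The "contract": get() returns the stored value, or raises
    # KeyError if the key was never stored.
    @abstractmethod
    def get(self, key):
        ...

class ConformingStore(KeyValueStore):
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]          # raises KeyError, as promised

class HelpfulStore(ConformingStore):
    def get(self, key):
        try:
            return super().get(key)
        except KeyError:
            return ""                   # substitutes an answer instead of failing

def is_registered(store, user):
    # Caller written against the contract, not any one implementation.
    try:
        store.get(user)
        return True
    except KeyError:
        return False

print(is_registered(ConformingStore(), "alice"))   # False, correctly
print(is_registered(HelpfulStore(), "alice"))      # True -- quietly wrong

Nothing crashes; the second caller just gets a wrong answer. That is
exactly the mysterious failure mode: each piece looks reasonable in
isolation, and the breakage only shows up in whoever relied on the
contract.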
An example floated by here recently. Some ISPs have been "capturing"
unsuccessful DNS lookups. When the remote name server returns an
NXDOMAIN response (no such domain), the ISP replaces it with an A
record pointing at its own server(*), which presents advertising, some
sort of search service, or whatever else the ISP thinks will make it
some money.
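A quick way to check whether the resolver you're sitting behind does
this kind of substitution (just a sketch, not a diagnostic tool; the
randomly generated .com name is simply assumed not to exist anywhere):

import socket
import uuid

# A hostname that almost certainly does not exist anywhere.
bogus = "nx-" + uuid.uuid4().hex + ".com"

try:
    addr = socket.gethostbyname(bogus)
    print(bogus, "'resolved' to", addr, "-- someone is substituting answers")
except socket.gaierror:
    print(bogus, "returned a name error (NXDOMAIN), as it should")

Note that this goes through whatever resolver the operating system is
configured to use, so it only exercises the ISP's DNS servers if those
are the ones configured.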
This is not a big deal when the query comes from a web browser. The
user gets a (possibly useful) page instead of the default Server Not
Found (Firefox) or The Page Cannot be Displayed (IE). This is either
a) an improved experience for the user (who can use the lookup page to
find what he's really looking for)
b) non-harmful (the user doesn't find the replacement page useful, but
doesn't care), or
c) mildly irritating/puzzling (I was looking for foo.bar, why did I get
this page?)
But when you see a DNS query, you don't know what application generated
it. It might be a web browser. It might be an FTP client. It might be
WAIS or Gopher or IRC. Is the ISP's substitute server prepared to
respond in some sensible way to connections on those ports?
And it might be a mail server checking whether the HELO domain, the
From: address, or some other part of the email header is legitimate.
If you intercept NXDOMAIN and return a substitute server's address,
you break that anti-spam filtering.
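For instance, a crude sender-domain check of the kind some filters
perform (a sketch with made-up addresses; real filters typically check
MX records, SPF, and more) stops rejecting anything once every lookup
appears to succeed:

import socket

def sender_domain_resolves(mail_from):
    # Crude anti-spam heuristic: does the domain after the "@" resolve?
    domain = mail_from.rsplit("@", 1)[-1]
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False      # NXDOMAIN: very likely a forged sender

# With an honest resolver, a made-up domain fails this check and the
# message can be rejected or scored as spam.  Behind a resolver that
# rewrites NXDOMAIN into an advertising server's address, the lookup
# "succeeds" and the forged sender passes the check.
print(sender_domain_resolves("someone@no-such-domain-xyzzy-12345.com"))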
(*) Or the server of some partner that paid the ISP for the privilege of
having failed DNS queries routed to them.
[much snipped]
> I am encouraged by Comcast's newly stated intention to cooperate with
> BitTorrent. There are significant economies to be realized if all the
> players cooperate. Unfortunately, there are other factors that may come
> into play, for example copyright issues that may prevent ISPs from
> running their own P2P caching clients.
I don't see why. Copyright violations can come from HTTP connections
just as easily as from BitTorrent. Is the ISP at risk if it caches
videos from YouTube and photos from Flickr? ISTM that the DMCA
immunizes them just like any other server that stores content it didn't
originate -- they must respond to takedown notices, but are immune from
suit for copyright violation, defamation, etc. as long as they respond
in a timely manner.
Granted, *at the moment*(2) P2P has a higher percentage of copyright
violations than HTTP traffic. That might mean that managing a P2P cache
would be more labor intensive -- and hence more expensive -- than
managing a similar HTTP cache. (More takedown notices to comply with
implies more people to receive, process, and respond to them.) That's
in theory. So far, I haven't heard of any copyright owners serving
takedown notices on ISPs for cached copies of infringing material. It
looks to me like they are serving the hosting sites directly (YouTube,
RapidShare, etc.) and relying on the fact that copies will vanish from
the cache within a short time.
So I don't see any reason why an ISP shouldn't cache P2P traffic. And
of course they can change their mind and turn off caching if it turns
out to cost more to manage than it saves in backbone bandwidth costs.
(2) Subject to change as some major corporations discover that P2P is a
useful way to distribute legitimate content.
[ Apart from other issues, it's not clear to me that the DMCA
protects ISPs from copyright violations that *they themselves*
commit. That is, they are not held responsible for potentially
infringing material that their subscribers send through their
networks, but I don't believe that the intent of the DMCA is to
insulate ISPs from their *own* actions that might be actionable
as copyright violations, however those are defined. The DMCA
rests on the assumption that ISPs can't reasonably control, a
priori, what their subscribers might post, but presumably ISPs
are supposed to be able to control themselves.
-- Lauren Weinstein
NNSquad Moderator ]