NNSquad - Network Neutrality Squad


[ NNSquad ] Re: Economics of P2P (was Re: Re: Net Neutrality vs.Illegal Acts)


On Sat, Mar 22, 2008 at 1:32 PM, Robb Topolski <robb@funchords.com> wrote:

>  Less than ... An FTP transfer from a single host? no.

If the HTTP or FTP server is in a data center, it IS far less
economically efficient.  The cost of bandwidth to a major data center
is significantly less than the cost of bandwidth to Laramie, Wyoming.

>  Less than ... A localized data center (such as Limelight or Akamai)?  no --
>  BitTorrent's optimistic-unchoking method is designed to ensure efficient
>  peering.

Again, yes, it is economically inefficient.  With a localized data
center, if the content is not in the cache, you are back to case 1 (a
fetch from a remote data center).  If it IS in the cache, the transfer
is cheap precisely because there is no uplink traffic involved.

Remember, P2P requires the nodes to send as well as receive, and only
in rare cases does the sending stay within the local loop.

Unless and until ISP uplink bandwidth costs the same as data-center
bandwidth, or P2P traffic remains almost entirely in the local loop
(and unlike HTTP, the ISP can't cache P2P traffic to keep it there,
nor, as Akamai does, place a node in the local loop), P2P bulk
transfer is economically inefficient in aggregate.

Since neither of those two cases seems likely (except in some rare
events/application models), P2P bulk transfer is economically
inefficient in the aggregate.


>   > IF said P2P protocols were "super friendly", that is, friendlier than
>   > TCP,
>
>  ALL P2P networks transport data over TCP.  Therefore, by definition, they
>  can't make TCP more aggressive than it is.  Furthermore, all P2P clients
>  have additional congestion controls built in -- and even more options that
>  are available.

Just because an application uses TCP doesn't mean it is actually TCP friendly:

It is well known that an aggregate flow of many TCP streams will
outcompete a single TCP stream by a wide margin.

A simple cartoon example:  Alice and Bob share the same 1 Mbps point
of congestion.

Alice is running P2P software, which has 9 simultaneously active TCP
flows for that file.

Bob is running an HTTP bulk-data transfer, with a single TCP flow.

Assume that RTT for all flows is identical, and the point of
congestion does not favor or disfavor any TCP stream.

As a result, each flow will average out to 0.1 Mbps (more or less).

Thus Alice's P2P application will grab 0.9 Mbps of bandwidth, while
Bob will only get 0.1 Mbps.

And that's just the incoming link.  In ADDITION, Alice's application
will also want to grab 0.9 Mbps on the uplink, bandwidth Bob doesn't
need at all.
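The cartoon above is easy to check with a few lines of arithmetic.
This is a minimal sketch (the function and its name are mine, not
from any real tool), assuming ideal per-flow fairness at the
bottleneck, equal RTTs, and long-lived flows:

```python
# Toy model: with equal RTTs and a fair bottleneck, long-lived TCP
# flows converge to roughly equal per-flow shares, so a user's
# aggregate share scales with how many flows they open.

def fair_share(link_mbps, flows_per_user):
    """Approximate per-user throughput assuming per-flow fairness.

    flows_per_user: dict mapping user name -> number of TCP flows.
    Returns a dict mapping user name -> Mbps (rounded for display).
    """
    total_flows = sum(flows_per_user.values())
    per_flow = link_mbps / total_flows
    return {user: round(n * per_flow, 3) for user, n in flows_per_user.items()}

# Alice runs 9 P2P flows, Bob a single HTTP flow, on a shared 1 Mbps link.
shares = fair_share(1.0, {"alice": 9, "bob": 1})
print(shares)  # {'alice': 0.9, 'bob': 0.1}
```

The same arithmetic gives the 3-flows-versus-1 case mentioned further
down: fair_share(1.0, {"bt": 3, "http": 1}) splits the link 75%/25%.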

So can you really call Alice's P2P application "TCP friendly"?  Just
because P2P applications use TCP doesn't mean they can be considered
"TCP friendly", as the use of multiple peers for bulk data transfer is
specifically TCP-unfriendly. [1]


You would need a congestion control scheme that detects WHERE the
congestion occurred (on the shared local link or remotely) and uses
that to decide whether to throttle ALL flows in a P2P session or just
that particular flow.  That's a hard problem, and would probably
require ECN or a similar explicit congestion notification to make the
right decision.
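To make that decision rule concrete, here's a toy sketch.  It is
entirely hypothetical (no such protocol exists in deployed P2P
clients that I know of); the congestion_is_local flag stands in for
whatever ECN-style signal would identify the shared local loop as the
bottleneck:

```python
# Hypothetical decision logic: back off every flow in a P2P session,
# or only the one flow, depending on where congestion was detected.

def flows_to_throttle(session_flows, congested_flow, congestion_is_local):
    """Return the set of flows that should reduce their sending rate."""
    if congestion_is_local:
        # Bottleneck is the shared local loop: every flow in the
        # session is contributing to it, so all of them should back off.
        return set(session_flows)
    # Bottleneck is remote, specific to one peer's path: throttling
    # the other flows would sacrifice throughput for nothing.
    return {congested_flow}
```

The hard part, of course, is obtaining a trustworthy
congestion_is_local signal in the first place, which is why explicit
notification would likely be needed.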


This is also why I say that if you tell BitTorrent it gets "X", you
are pretty much guaranteed that if X + a reasonable epsilon exists, it
will be able to grab it.  Many flows will significantly outcompete
single flows, absent aggressive traffic shaping.


You don't get this on HTTP transferring large files, because in
general, you get a SINGLE flow per file rather than N flows per file.

And even with just 3 active peers and a local bottleneck, BitTorrent
will grab 75% of the link and an HTTP/FTP transfer will grab 25%.

And how many BitTorrent users are only getting ONE file at a time?

And that's not even getting started on "ISPs can cache HTTP, but
caching BitTorrent would get you sued".

>  I don't know Blizzard's motivations.  Do you?  Even so, Blizzard can't
>  control the speed of a BitTorrent swarm.  No publisher or tracker can.  It
>  is completely determined by the peers and the network conditions.  Any
>  throttling going on is happening either by the sending or receiving peers
>  (or their ISPs).

From what I understand, Blizzard's downloader/updater is a standalone
BitTorrent client that is aggressively throttled regardless of network
conditions.  The common MO among friends of mine is to "find an HTTP
mirror and download from there so they can start playing sooner", and
"it's the only BitTorrent client I've dealt with that is that slow" --
and if I recall correctly, in many cases you can't actually play until
the update is downloaded.

It's a way for Blizzard to avoid paying $.18/GB for user data download
at high speed, or having to buy more bandwidth at its facilities (one
of the two).
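To see the scale of what's being avoided, here's some purely
illustrative arithmetic.  Only the $.18/GB figure comes from the
discussion above; the patch size and download count are made-up
assumptions, not Blizzard's actual numbers:

```python
# Illustrative only: what a patch release would cost over HTTP/CDN at
# data-center egress rates.  Patch size and download count are
# assumptions for the sake of the example.

cost_per_gb = 0.18      # USD per GB (figure from the post)
patch_size_gb = 2.0     # ASSUMED patch size
downloads = 1_000_000   # ASSUMED number of players downloading

cdn_cost = cost_per_gb * patch_size_gb * downloads
print(f"CDN cost: ${cdn_cost:,.0f}")  # CDN cost: $360,000
```

With numbers anywhere in that ballpark, shifting the bulk of the
bytes onto users' uplinks is an obvious cost win for the publisher.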


[1] Of course, I'm dreading the day that P2P software starts shifting
to UDP to avoid RST injection.  I really don't expect them to get
congestion control right in that case.  I hope I'm wrong.