NNSquad - Network Neutrality Squad
[ NNSquad ] Re: Economics of P2P (was Re: Re: Net Neutrality vs. Illegal Acts)
"Effective" gets the job done. "Efficient" gets the job done in some qualitatively better way. There's a reason that uplink bandwidth costs more. There's a reason that the cost to one edge (a datacenter) is greater than the cost to another edge (a residence). These reasons have more to do with the pass or current popularity of the client-server model of doing things. We shouldn't automatically assign something as "efficient" or "inefficient" based on factors that are based on past demand. > Remember, P2P requires the nodes to send as well as receive, and only > in rare cases does the sending stay within the local loop. Staying within the ISP is not necessarily efficient, either. If a certain segment within the ISP is congested, then this is a good time to avoid that segment and other pieces from outside of the ISP's boundaries. With BitTorrent, this is what happens. With ED2K (#2 top protocol), it does not. > A simple cartoon example: Alice and Bob share the same 1 Mbps point > of congestion. > > Alice is running P2P software, which has 9 simultaneously active TCP > flows for that file. > > Bob is running an HTTP bulk-data transfer, with a single TCP flow. > > Assume that RTT for all flows is identical, and the point of > congestion does not favor or disfavor any TCP stream. > > As a result, each flow will average out to .1 Mbps (more or less). > > Thus Alice's P2P application will grab .9 Mbps of bandwidth, while Bob > will only get .1 Mbps bandwidth. > > And, well, thats on the incoming link. First -- your cartoon example is a great example of why BitTorrent is superior for bulk-data transfer as it avoids the possibility that one HTTP server failure will break your mission-critical transfer. Stick a pin in that for now -- I'll continue that thought later (2-3 more paragraphs down). Your example sounds like a great hypothesis, but you'll run into problems when you go about proving it. The first one people run into is that the incoming link never seems to converge like that. (Trying to do QoS on incoming data is a bit like sweeping back the ocean with a broom.) And as inbound flows do converge at some bottleneck, their sending endpoints are responding to dropped packets and changing the percentages. The scenario also does not scale to any real-life broadband distribution -- the incoming side of the transactions are generally not throttled. > So can you really call Alice's P2P application "TCP friendly"? Just > because P2P applications are TCP doesn't mean they can be considered > "TCP friendly", as the use of mulitple peers for bulk data transfer is > specifically TCP-unfriendly. [1] The use of multiple peers for bulk data transfer is TCP friendly. Using multiple peers means that the job at hand (downloading a file) can work around problems that appear and disappear during the transfer. Bob and Alice may share that one congestion point, but Bob's HTTP connection has several possible points of failure. Alice's P2P application is bound to have far fewer points of failure. > You would need to use a congestion control which detects WHERE the > congestion occured (local shared or remote) and use that to decide > whether to throttle ALL flows in a P2P session or just that particular > flow. A hard problem, and would probably require ECN or a similar > explicit congestion notification to make the right decision. This problem is not unique to P2P and has been solved. TCP's own existing controls will throttle all senders. There is no need to reinvent this wheel -- well, there is one need. 
> From what I understand, Blizzard's downloader/updater is a standalone
> bittorrent client which is agressively throttled, regardless of what
> is going on, given the common MO of friends of mine [...]

We're both guessing here. My point remains that Blizzard doesn't hold up as an example of P2P failing because it congests the network.

> [1] Of course, I'm dreading the day that P2P software starts shifting
> to UDP to avoid RST injection. I really don't expect them to get
> congestion control right in that case. I hope I'm wrong.

Yes -- and it works, too (for now). But the motivation for any shift to UDP has been NAT hole punching; frustrating DPI has been a pleasant side effect. The traffic is still just as able to be inspected, forged, and blocked as before -- it is not anonymous -- so ISPs will simply have an easy adjustment to make in order to detect it (and then shape or thwart it).

Since it is essentially TCP carried over UDP, I'm guessing it should still follow all of the congestion control rules. I don't know whether it reuses the native TCP code and configuration or whether the TCP-over-UDP implementation has its own thresholds. (I sketch below what I mean by "the congestion control rules.")

To avoid RST injection and other ISP attacks, my guess is that we will see tougher obfuscation and end-to-end encrypted tunnels, leading eventually to a new distributed P2P file-storage model that provides both end-to-end anonymity and protection against interference from both within and without the network.
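Roughly, any user-space transport running over UDP has to reimplement something like TCP's additive-increase/multiplicative-decrease behavior on its own. A minimal sketch of that idea in Python -- the class name, constants, and structure are mine, written for illustration, and not taken from any shipping client:

class AIMDWindow:
    """Toy additive-increase/multiplicative-decrease congestion window."""

    def __init__(self, mss=1400):
        self.mss = mss            # segment size in bytes
        self.cwnd = 2 * mss       # congestion window, starts small
        self.ssthresh = 64 * mss  # slow-start threshold

    def on_ack(self, acked_bytes):
        if self.cwnd < self.ssthresh:
            # Slow start: grow the window by the amount acknowledged.
            self.cwnd += acked_bytes
        else:
            # Congestion avoidance: roughly one segment of growth per RTT.
            self.cwnd += self.mss * acked_bytes / self.cwnd

    def on_loss(self):
        # Multiplicative decrease on a loss signal (timeout or dup acks).
        self.ssthresh = max(2 * self.mss, self.cwnd / 2)
        self.cwnd = self.ssthresh

    def can_send(self, bytes_in_flight):
        # Only send while the data in flight fits inside the window.
        return bytes_in_flight + self.mss <= self.cwnd

Whether clients that move to UDP actually implement something like this, and how aggressively they tune those thresholds, is exactly the open question behind your footnote.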
Robb Topolski

> -----Original Message-----
> From: Nick Weaver [mailto:nweaver@gmail.com]
> Sent: Saturday, March 22, 2008 2:32 PM
> To: Robb Topolski; nnsquad@nnsquad.org
> Subject: Re: [ NNSquad ] Economics of P2P (was Re: Re: Net Neutrality vs. Illegal Acts)
>
> On Sat, Mar 22, 2008 at 1:32 PM, Robb Topolski <robb@funchords.com> wrote:
>
> > Less than ... An FTP transfer from a single host? no.
>
> If the HTTP or FTP server is in a data center, it IS far less
> economically efficient. The cost of bandwidth to a major data center
> is significantly less than the cost of bandwidth to Laramie, Wyoming.
>
> > Less than ... A localized data center (such as Limelight or Akamai)? no --
> > BitTorrent's optimistic-unchoking method is designed to ensure efficient
> > peering.
>
> Again, yes it is economically inefficient. With a localized data
> center, if its not in the cache its like 1. If it IS in the cache, it
> is because you don't have the uplink.
>
> Remember, P2P requires the nodes to send as well as receive, and only
> in rare cases does the sending stay within the local loop.
>
> Unless and until ISP uplink bandwidth costs the same as data-center
> bandwidth, or P2P traffic remains almost entirely in the local loop
> (and even then, the ISP doesn't do caching on HTTP to keep that in the
> local loop or in the case of Akami, have a node in the local loop),
> P2P bulk transfer is economically inefficient in aggregate.
>
> Since neither of those two cases seem likely (except in some rare
> events/application models), P2P bulk transfer is economically
> inefficient in the aggregate.
>
> > > IF said P2P protocols were "super friendly", that is, friendlier
> > > than TCP,
> >
> > ALL P2P networks transport data over TCP. Therefore, by definition,
> > they can't make TCP more aggressive than it is. Furthermore, all P2P
> > clients have additional congestion controls built in -- and even more
> > options that are available.
>
> Just because an application uses TCP doesn't mean it is actually TCP
> friendly:
>
> It is well known that an aggregate flow of many TCP streams will
> outcompete a single TCP stream by a wide margin.
>
> A simple cartoon example: Alice and Bob share the same 1 Mbps point
> of congestion.
>
> Alice is running P2P software, which has 9 simultaneously active TCP
> flows for that file.
>
> Bob is running an HTTP bulk-data transfer, with a single TCP flow.
>
> Assume that RTT for all flows is identical, and the point of
> congestion does not favor or disfavor any TCP stream.
>
> As a result, each flow will average out to .1 Mbps (more or less).
>
> Thus Alice's P2P application will grab .9 Mbps of bandwidth, while Bob
> will only get .1 Mbps bandwidth.
>
> And, well, thats on the incoming link. In ADDITION, Alice's will also
> want to grab .9 Mbps on the uplink as well, bandwidth not required at
> all for Bob.
>
> So can you really call Alice's P2P application "TCP friendly"? Just
> because P2P applications are TCP doesn't mean they can be considered
> "TCP friendly", as the use of mulitple peers for bulk data transfer is
> specifically TCP-unfriendly. [1]
>
> You would need to use a congestion control which detects WHERE the
> congestion occured (local shared or remote) and use that to decide
> whether to throttle ALL flows in a P2P session or just that particular
> flow. A hard problem, and would probably require ECN or a similar
> explicit congestion notification to make the right decision.
>
> This is also why I say if you tell bittorrent it gets "X", you are
> pretty much guarenteed that if X + a reasonable epsilon exists, it
> will be able to grab it. Many flows will significantly outcompete
> single flows, absent agressive traffic shaping.
>
> You don't get this on HTTP transferring large files, because in
> general, you get a SINGLE flow per file rather than N flows per file.
>
> And even with just 3 active peers and a local bottleneck, BitTorrent
> will grab 75% of the link and an HTTP/FTP transfer will grab 25%.
>
> And how many BitTorrent users are only getting ONE file at a time?
>
> And thats not even getting started with "ISPs can cache HTTP, but
> caching BitTorrent would get you sued".
>
> > I don't know Blizzard's motivations. Do you? Even so, Blizzard can't
> > control the speed of a BitTorrent swarm. No publisher or tracker can.
> > It is completely determined by the peers and the network conditions.
> > Any throttling going on is happening either by the sending or
> > receiving peers (or their ISPs).
>
> From what I understand, Blizzard's downloader/updater is a standalone
> bittorrent client which is agressively throttled, regardless of what
> is going on, given the common MO of friends of mine to "Find an HTTP
> mirror and download from there so they can start playing sooner" and
> "its the only BitTorrent I've dealt with that is that slow", as if I
> recall, you can't actually play until the update is downloaded in many
> cases.
>
> Its a way for Blizzard to not pay $.18/GB for user data download at
> high speed, or have to buy more bandwidth at their facilities (one of
> the two).
>
> [1] Of course, I'm dreading the day that P2P software starts shifting
> to UDP to avoid RST injection. I really don't expect them to get
> congestion control right in that case. I hope I'm wrong.