NNSquad - Network Neutrality Squad
[ NNSquad ] Re: Economics of P2P (was Re: Re: Net Neutrality vs. Illegal Acts)
On Sat, Mar 22, 2008 at 4:01 PM, Robb Topolski <robb@funchords.com> wrote:

> "Effective" gets the job done.
>
> "Efficient" gets the job done in some qualitatively better way.
>
> There's a reason that uplink bandwidth costs more. There's a reason that
> the cost to one edge (a datacenter) is greater than the cost to another
> edge (a residence). These reasons have more to do with the past or
> current popularity of the client-server model of doing things. We
> shouldn't automatically assign something as "efficient" or "inefficient"
> based on factors that are based on past demand.

No, there are fundamental costs involved in stringing a fiber bundle.
Stringing wire to a datacenter is cheap and already done; adding more
bandwidth just means lighting another strand. Stringing wire and more
bandwidth to Laramie, Wyoming costs considerably more, because you often
need to bring NEW wire.

And this is a fundamental problem with P2P: unless the economics of
pulling a wire change, it is fundamentally less economically efficient to
serve bandwidth from Joe Random user than from centralized data centers.

Therefore, the thesis stands: unless something radically different
changes, bulk-transfer P2P is economically inefficient in the aggregate,
but effective for the content provider in shifting costs onto the
receiver. The only exceptions are very rare (local loop unutilized,
uplink utilized, no cache in the ISP, and the data stays in the local
loop). Except in those cases, P2P for bulk file transfer was, is, and
will be economically inefficient in the aggregate.

> First -- your cartoon example is a great example of why BitTorrent is
> superior for bulk-data transfer as it avoids the possibility that one
> HTTP server failure will break your mission-critical transfer. Stick a
> pin in that for now -- I'll continue that thought later (2-3 more
> paragraphs down).

No, resumability is orthogonal to the multi-flow problem. A single TCP
flow doesn't mean the protocol can't support resumable transfers. FTP
can; that HTTP doesn't is HTTP's bug, not TCP's problem.

In contrast, the multi-flow problem is a fundamental limitation of TCP's
fairness model: it tries to be fair per flow, not fair per host. Apps
which use many high-volume TCP flows are inherently TCP-unfair compared
with apps that use only one or two TCP flows.

> Your example sounds like a great hypothesis, but you'll run into
> problems when you go about proving it....

No, there's no problem proving it. It's been proven in simulation, and in
practice, on many occasions. [1] The "multiple flows outcompete single
flows" problem is as well known as the "low-RTT flows get more bandwidth"
problem. You can ask any of the TCP congestion control gurus, or look for
the papers. See, for example, this IETF draft:

http://www.ietf.org/internet-drafts/draft-briscoe-tsvwg-relax-fairness-00.txt

Fairness should be among "users", but TCP's fairness is among flows.
Since P2P uses many flows, it is inherently unfair compared to other ways
of moving bulk data with TCP.

> The use of multiple peers for bulk data transfer is TCP friendly. Using
> multiple peers means that the job at hand (downloading a file) can work
> around problems that appear and disappear during the transfer. Bob and
> Alice may share that one congestion point, but Bob's HTTP connection has
> several possible points of failure. Alice's P2P application is bound to
> have far fewer points of failure.

NO. Resumability is orthogonal to multiple flows. You are comparing a bug
in HTTP (no "resume" option) with a fundamental flaw of P2P over TCP:
multiple flows produce global unfairness.
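To put numbers on the flow-vs-user distinction, here is a
back-of-the-envelope Python sketch (my own illustration, not taken from
the draft). It assumes ideal fair sharing among flows at a single
bottleneck; real TCP throughput also depends on RTT and loss, but the
qualitative result is the same:

    # Hypothetical example: ideal per-flow fair sharing at one bottleneck.
    # Bob runs a single-flow HTTP download; Alice's P2P client opens 30
    # TCP flows.  Both sit behind the same congested 10 Mbit/s link.

    def share_per_host(flows_per_host, capacity_mbps):
        """Split capacity equally per FLOW, then total up each host."""
        total_flows = sum(flows_per_host.values())
        per_flow = capacity_mbps / total_flows
        return {host: n * per_flow for host, n in flows_per_host.items()}

    print(share_per_host({"bob_http": 1, "alice_p2p": 30}, capacity_mbps=10.0))
    # -> {'bob_http': 0.32..., 'alice_p2p': 9.67...}

Every flow got its "fair" share, yet Alice ends up with roughly 30 times
Bob's bandwidth. That is exactly the flow-vs-user unfairness the IETF
draft above is describing.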
> > You would need to use a congestion control which detects WHERE the
> > congestion occurred (local shared or remote) and use that to decide
> > whether to throttle ALL flows in a P2P session or just that particular
> > flow. A hard problem, and would probably require ECN or a similar
> > explicit congestion notification to make the right decision.
>
> This problem is not unique to P2P and has been solved. TCP's own
> existing controls will throttle all senders. There is no need to
> reinvent this wheel -- well, there is one need. Hardware vendors have
> figured out how to make DPI equipment and now they need to create a
> market for them.

See the IETF draft, or many other discussions: it has NOT been solved,
because TCP's fairness is among flows, not users. If you want fairness
between users, and you want bulk-transfer P2P, you have to accept traffic
shaping. Without it, you won't get fairness, TCP or no!

> > From what I understand, Blizzard's downloader/updater is a standalone
> > bittorrent client which is aggressively throttled, regardless of what
> > is going on, given the common MO of friends of mine
>
> We're both guessing here... My point remains that using Blizzard as an
> example that P2P doesn't work because it congests the network doesn't
> hold up.

No, I use Blizzard as an example of why content PROVIDERS want to use
P2P: it's cost shifting, pure and simple. And in order to NOT congest the
customer's network, it is throttled so hard that users perceive its
performance as dismal. If you want good-performing P2P downloads, then
until some new congestion control technology arrives, you will outcompete
normal TCP flows at congested links unless the point of congestion uses
aggressive traffic shaping to limit the P2P flows first.

> To avoid RST injection and other ISP attacks, my guess is tougher
> obfuscation and end-to-end encrypted tunnels leading to a new
> distributed P2P file-storage model that provides both end-to-end
> anonymity and anti-interference measures from both within and without
> the network.

P2P bulk transfer will LOSE this fight, because a P2P bulk transfer, at
least one bulky enough that the ISP would care, is ALWAYS detectable by
the ISP. Period. They won't be able to say WHAT you are sending, but they
will be able to tell that you are sending it, and they can shape or block
it should they desire, under whatever regulatory regime they operate
under. (A sketch of purely volume-based detection follows below.)

[1] Talking with Sally Floyd (TCP congestion guru) about P2P and why ISPs
hate it, she immediately jumped on the multi-connection problem, while I
was focusing on the aggregately inefficient load shifting.
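Coming back to detectability: as a hypothetical illustration (my own
sketch; the addresses, window, and threshold are invented, and this is
not any particular ISP's system), volume alone is enough to spot a bulk
sender, no matter how well the payload is encrypted or obfuscated:

    # Hypothetical sketch: flag bulk senders purely from upstream byte
    # counts per subscriber over a time window.  Encryption hides the
    # payload, not the volume, so this works on obfuscated P2P too.

    def bulk_senders(upstream_bytes, window_seconds, threshold_mbps=2.0):
        """Return hosts whose average upstream rate exceeds the threshold."""
        flagged = {}
        for host, sent_bytes in upstream_bytes.items():
            mbps = (sent_bytes * 8) / (window_seconds * 1_000_000)
            if mbps >= threshold_mbps:
                flagged[host] = round(mbps, 1)
        return flagged

    # Five minutes of per-subscriber upstream counters (bytes).
    counters = {"10.0.0.5": 900_000_000,   # ~24 Mbit/s sustained upload
                "10.0.0.9": 40_000_000,    # ~1 Mbit/s
                "10.0.0.12": 2_000_000}    # background chatter
    print(bulk_senders(counters, window_seconds=300))
    # -> {'10.0.0.5': 24.0}

Counters like these are all an ISP needs in order to shape or block the
traffic; the contents never have to be inspected.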