NNSquad - Network Neutrality Squad


[ NNSquad ] Practical Congestion Solutions (Real Soon Now)


________________________________________
From: Dave Burstein [daveb@dslprime.com]
Sent: Tuesday, July 29, 2008 6:10 AM
To: David Farber
Subject: Practical Congestion Solutions (Real Soon Now)

Folks

David Reed asked how Comcast is managing traffic today, and what is likely later in the year. I have some of the data. I would enormously appreciate better information on what we can expect from the DOCSIS 3.0 upstream when it's widely deployed (late 2009?). I'm guessing it will be very good, but traffic models or other information that translate from 160/120 shared into realistic per-user expectations would really help. My guess is that 50/25 will be achieved 95+% of the time and that most of the cable upstream problems will be solved, but I just don't have enough data.

Takeaways:
1) The traffic at AT&T is rapidly shifting from p2p to streaming video such as YouTube and Hulu, per the data below, which is on the record and fact-checked at a senior technical level. This matters, because any p2p solution will not do very much if p2p is a much smaller share of the traffic. It might postpone the need for an upgrade by 6 or 12 months, but p2p throttling can't do much more for most networks.

2) The updated model from Sandvine, Comcast's main supplier, allows much less obtrusive shaping. It can be tuned to individual neighborhood CMTS units rather than a wide territory, can automatically shut off when congestion clears, and enables more careful policies: things like "if the user has uploaded more than 3 gigabytes in the last five days, reduce that connection to 256K until the congestion clears." Whether Comcast will choose a system like that, and whether the software will deliver everything promised, is not yet known. It certainly could come closer to "reasonable" than what they are doing now.

3) There really is a problem on (at least some) cable upstreams today, based on what I hear from people I respect who have the data. My hope - which won't be tested until 2009 - is that the DOCSIS 3.0 upstream will resolve most or all of the problems for the next few years. Full DOCSIS 3.0 has a minimum of 120 megabits upstream, shared among typically 300 homes, which works out to something like 400K per subscriber. Current cable modems typically have 8 to 30K per subscriber. That is a huge difference.
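
A quick back-of-the-envelope check on those per-subscriber figures, sketched in Python; the 300-homes-per-node split and the older-plant capacity are my own illustrative assumptions, not operator data:

    # Rough per-subscriber upstream arithmetic for shared cable nodes.
    # The node size and the older-plant capacity figure are illustrative
    # assumptions drawn from the text above, not measured data.

    def per_subscriber_kbps(shared_upstream_mbps, homes_per_node):
        """Average upstream per home if the shared channel were split evenly."""
        return shared_upstream_mbps * 1000 / homes_per_node

    docsis3 = per_subscriber_kbps(120, 300)  # full DOCSIS 3.0 minimum upstream
    older   = per_subscriber_kbps(9, 300)    # ~9 Mbps shared, an assumed older plant

    print(f"DOCSIS 3.0: ~{docsis3:.0f} kbps per subscriber")  # ~400 kbps
    print(f"Older plant: ~{older:.0f} kbps per subscriber")   # ~30 kbps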

     Verizon, AT&T, and Free.fr are strongly on the record that they do not have significant congestion problems. They do not have a shared local loop, and have allocated much more bandwidth per customer. I've sat at the network console of a large ISP and seen essentially no congestion among the millions of subscribers they served. They allocate 250K per subscriber, much more than current cable. So my best guess is that when cable gets to levels like that, the problem will drastically diminish. DOCSIS 3.0's 120 meg shared downstream is now available to nearly 20 million homes (8M at J:COM Japan, millions more at Numericable France, Virgin UK, Videotron Quebec, Korea, and some others). Very few systems can send at 50 and 100 megabits, leaving enough capacity for customers to actually receive 50-100 megabits despite the sharing. Unfortunately, the upstream is not ready. Vendors say 2008; many think late 2009. 50/25 to 20M Comcast customers (promised for 2010) and many others is totally destabilizing if the telco doesn't have fiber and the cablecos price the way they already do in Japan (100 meg is only about $5 more) and France (100 meg, shared, is effectively $25-30 as part of a bundle). The price will be the key.

Separately, a note came from Singapore to this list as I was writing, sensibly discussing whether problems are due to the ISP or the carrier. I wish we could think like that over here, but in the U.S. and Canada something over 95% of consumers get everything from the telco or cableco. We just don't have many surviving ISPs.

---------------
I unfortunately don't have the time to integrate these longer comments with the above, so forgive the length. If I had more time, I'd write shorter. The AT&T traffic data at the end is particularly interesting; the rest is not thoroughly fact-checked yet, but I think it is mostly on target.


1) The latest Sandvine quarterly call described what they say is now available for testing. Unlike the current models, which need to do something broad like slowing all the p2p across large systems, the new software allows much finer tuning, down to the usage of the individual customer. It is essentially a policy server that lets policies be set for each customer and enforced in real time. The choice of policy would be up to the carrier.

    The first improvement is that it can read the network and the individual CMTS, and only invoke policy if that customer is on a congested CMTS. This dramatically cuts how much is shaped.

     The second improvement is that it can choose whom to shape in a much more sophisticated way. Currently, they get total usage for each customer, so implementing a 250 gig cap for an individual or grossly shaping a whole region is most of what they can do (I believe). Now, they can set a policy that shapes only those who use more than 4 gig upstream between 6 p.m. and 10 p.m., and only partially shape them (say, to 256K up) if that's enough. It can then turn off the shaping automatically when the circuit drops back to a more normal pattern (say, less than 60% utilization for 15 minutes).
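
To make those two improvements concrete, here is a rough sketch of the kind of per-CMTS, per-subscriber policy logic being described. This is my own illustrative Python, with hypothetical names, thresholds, and data structures; it is not Sandvine's or Comcast's actual software.

    # Illustrative sketch of congestion-triggered, per-CMTS shaping based on
    # the policy described above. All names, thresholds, and data structures
    # are hypothetical; this is not the vendor's actual implementation.

    CONGESTION_THRESHOLD = 0.60  # shape only while the CMTS runs above 60% utilization
    CLEAR_INTERVALS = 15         # ...and release after 15 quiet one-minute intervals
    HEAVY_EVENING_GB = 4.0       # "more than 4 gig upstream between 6 p.m. and 10 p.m."
    SHAPED_RATE_KBPS = 256       # partial shaping, not a total block

    def update_policy(cmts, subscribers):
        """One polling pass over a single CMTS. Mutates the dicts in place."""
        if cmts["utilization"] > CONGESTION_THRESHOLD:
            cmts["clear_intervals"] = 0
            for sub in subscribers:
                # Shape only the heaviest evening uploaders on this congested CMTS.
                if sub["evening_upstream_gb"] > HEAVY_EVENING_GB:
                    sub["upstream_cap_kbps"] = SHAPED_RATE_KBPS
        else:
            cmts["clear_intervals"] += 1
            if cmts["clear_intervals"] >= CLEAR_INTERVALS:
                # Congestion has stayed clear long enough; lift all caps automatically.
                for sub in subscribers:
                    sub["upstream_cap_kbps"] = None

    # Example: a congested node where only one subscriber gets partially shaped.
    node = {"utilization": 0.85, "clear_intervals": 0}
    subs = [{"evening_upstream_gb": 6.2, "upstream_cap_kbps": None},
            {"evening_upstream_gb": 0.3, "upstream_cap_kbps": None}]
    update_policy(node, subs)
    print(subs)  # only the 6.2 GB uploader is capped at 256 kbps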

   Contrast that with today, where apparently Comcast leaves the shaping on for 18 hours a day and filters closer to the core, affecting many locations that have no problems. (The primary problem is at the neighborhood CMTS, and the current system, I believe, requires shaping dozens or hundreds of them at a time because the boxes sit deeper in the network.)

      How sensible will Comcast be? I can't answer that, but since Sandvine is designing the software in close cooperation with large customers like Comcast, I suspect the capabilities they are discussing are what the customer intends to use.

    None of which means we can let up, but suggests much of the problem can be technically fixed.

2) Neither Comcast nor any of the other cablecos has shared enough data publicly, but privately some have provided information I believe I can trust about one crucial question. There is a real problem with (at least some) cable upstreams, which are typically designed for 6 to 30 kbps per subscriber. p2p frequently outruns that, even after ordinary upgrades that double or triple the total shared upstream but stay within that range.

     The very interesting question is whether the DOCSIS 3.0 upstream will have similar problems. Verizon and AT&T, with far more effective bandwidth per user, aren't seeing a similar problem. So my first guess is that the p2p congestion issue is important up to a certain level of bandwidth, but not above it. This is totally unproven, and the best cable engineers I've asked are unsure until they get thousands of live customers. DOCSIS 3.0 is at least a 12x increase (versus 10 meg upstream cable), and for many a 30x increase (versus 2-3 meg upstream systems). A lot of folks hold the "voracious p2p grabs any upstream you can give it" idea, which seems to correspond to some of the experience at cablecos today.

      However, Verizon, AT&T, and Free.fr tell me they simply do not have this problem. Bell Canada provided the CRTC with data showing something similar: the only problems of significance were at the back of the DSLAM, which I've determined is because they haven't upgraded the link out of the back for many years and still have many OC-3s, etc. I've sat at the network console watching the realtime network map for millions of subscribers. The only congestion at that moment was at an interconnection with Telecom Italia, where Italia refuses to upgrade their side of the link. I believe this is typical of their experience; the network engineer told me he has the authority and budget to simply upgrade any link that is a problem. They have fiber throughout, all IP, and are running it as the prototype "stupid network." If they see a problem, they add another Gig-E to the fiber link.


      Comcast is going all 3.0 by 2010, covering 20 million homes. So one of the most important questions is whether that will be enough to avoid upstream problems. All insight welcome. I have some interesting new data points. AT&T is seeing a slight overall decline in per-user bandwidth demand growth, with p2p becoming less and less of a factor. YouTube, Hulu, and other commercial streamers are growing much more rapidly.

 Here's a story I have, fact-checked by a VP at AT&T Labs.

20% Drop in p2p on AT&T Backbone
Other video, like YouTube and Hulu, twice as high

Easily a third of AT&T's downstream traffic is now "web audio-video," far more than p2p, and the gap is widening rapidly. Hulu and YouTube are taking over, while p2p is fading on DSL networks. One likely result is that managing traffic by shaping p2p is of limited and declining use, perhaps buying a network 6 months or a year before it needs an upgrade. The p2p traffic-shaping debate should be almost over, because shaping simply won't work for very much longer.
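
A rough calculation shows why the reprieve is so short. Using the AT&T figures quoted further down (p2p around 1/5 of traffic, growth of roughly 25-50% per year), even removing p2p entirely only buys the time it takes the remaining traffic to regrow to the old peak. This is my own back-of-the-envelope arithmetic, not AT&T's:

    # Back-of-the-envelope: how long does removing p2p entirely postpone an
    # upgrade if the rest of the traffic keeps growing? My own arithmetic,
    # using the AT&T figures quoted later in this note.
    import math

    def months_bought(p2p_share, annual_growth):
        """Time for traffic to regrow to its old peak after p2p is removed."""
        years = math.log(1 / (1 - p2p_share)) / math.log(1 + annual_growth)
        return 12 * years

    print(f"{months_bought(0.20, 0.50):.0f} months at 50%/yr growth")  # ~7 months
    print(f"{months_bought(0.20, 0.27):.0f} months at 27%/yr growth")  # ~11 months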

Jason Hillery provided some current AT&T information. p2p is still growing, but "at a slower pace than other traffic." On the Tier 1 AT&T backbone, p2p actually dropped 20% during a period last year. AT&T Labs VP Charles Kalmanek points out that a shift in customer mix, rather than an absolute drop in overall p2p, may explain that surprising statistic. Around the world, the trend is clear: web traffic continues to grow at something like 25-40% per user each year, right in line with the trend since 2001. Video is growing rapidly, but not enough to change the trend so far.

Direct from AT&T

"Overall traffic on the AT&T IP backbone network is growing at a pace of more than 50 percent per year.  This growth is a combination of customer growth and growth in traffic per customer.  Average growth in traffic per customer is about 25-30 percent per year.

To gauge the application breakdown of broadband traffic, we measure downstream traffic during the weekly busy hour.  With this measure, as of June 2008, traffic was about 1/3 Web (non video/audio streams), 1/3 Web video/audio streams, and 1/5 P2P (with other applications making up the remainder).  For the first time in June 2008, Web video/audio was the highest traffic-generating application over our IP backbone network.

As for the trends we've seen over the past few months:
-- Web video and audio is growing at a much higher pace than overall traffic (more than 70 percent/year);
-- P2P traffic continues to grow, though at a slower pace than other traffic;
-- and Web traffic is growing at a pace consistent with overall growth."
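
Taking those quoted figures at face value, the composition and the growth rates fit together roughly as follows. This is my own arithmetic on AT&T's numbers, not AT&T's analysis:

    # My own arithmetic on the AT&T figures quoted above, not AT&T's analysis.
    import math

    # Busy-hour downstream mix, June 2008, as quoted.
    web, video, p2p = 1/3, 1/3, 1/5
    other = 1 - (web + video + p2p)
    print(f"remainder for other applications: {other:.0%}")  # ~13%

    # Doubling times implied by the quoted growth rates.
    def doubling_years(annual_growth):
        return math.log(2) / math.log(1 + annual_growth)

    print(f"backbone (>50%/yr): doubles in ~{doubling_years(0.50):.1f} years")   # ~1.7
    print(f"per customer (25-30%/yr): ~{doubling_years(0.27):.1f} years")        # ~2.9
    print(f"web video/audio (>70%/yr): ~{doubling_years(0.70):.1f} years")       # ~1.3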

Many of the policy people believe that p2p is a ravenous monster devouring the Internet. The data show that simply isn't true, with the possible but important exception of some cable upstreams. Traffic management has become so political that few are willing to talk on the record, so reconciling conflicting data is very hard. It appears that large carriers are not having congestion problems on DSL or fiber backhaul, but cable upstream is having them. With friends, I'm trying to figure out what's going on, and in particular whether the problem disappears with DOCSIS 3.0, which will have 12-30+ times more upstream bandwidth. Advice and data very welcome.

Video streamed or downloaded from sites like Daily Motion, Hulu, YouTube, and others is coming to dramatically dominate web traffic. YouTube's market share is tending down; Hulu, the BBC, and Facebook are presumably climbing. Heise has a German report that general interest in music over the web is dramatically down, consistent with my belief that most people who want music collections already have more than they can listen to. Hulu has now added movies, free with commercials. Most are old blockbusters: The Fifth Element, Ghostbusters, Men in Black, Jerry Maguire, and other old favorites. They also have Kurosawa's great Kagemusha, Quills, Enter the Ninja, and Some Like It Hot.


    AT&T has sensible plans to handle the load without disruption. They are already moving from 10 gig to 40 gig in the core and planning a transition to 100 gig in a few years. The current projections are that they can do these upgrades without raising capex, bringing per-bit costs down along a Moore's Law curve and keeping bandwidth costs per user essentially unchanged. Most of the optical vendors believe they can meet those goals, although some worry that the pace of innovation may slow as the optical components industry struggles.
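
The implied arithmetic, sketched below, is my own illustration rather than AT&T's planning model: if traffic grows a bit over 50% a year and spending stays flat, the cost per bit has to fall by roughly a third each year, which is the kind of curve the 10 to 40 to 100 gig optical steps are expected to deliver.

    # Sketch of the flat-capex arithmetic; my own illustration, not AT&T's model.
    traffic_growth = 0.50  # backbone traffic growth, >50%/yr per the quote above

    # If capex stays flat while traffic grows, cost per bit must fall this much per year:
    required_cost_drop = 1 - 1 / (1 + traffic_growth)
    print(f"required per-bit cost decline: ~{required_cost_drop:.0%} per year")  # ~33%
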
Sorry I didn't have the time to write something shorter.

db
Editor, DSL Prime

