NNSquad - Network Neutrality Squad
[ NNSquad ] Re: nnsquad Digest, Vol 2, Issue 137 (Substantive)
[I am re-focusing my comments to issues pertinent to NN and the Net, and removed older portions of the discussions. I also don't think we are in strong disagreement. - Rahul]
Neil,
The main issue is congestion, and what to do about it.
1) Where does congestion occur? Near the end points, or in the middle?
Power systems sometimes have congested transmission capacity, but generation limits are usually the bigger bottleneck. On the Net, server capacity was the old bottleneck; P2P changed some of that, in theory. It's worth emphasizing that electricity load need not be as binary as you stated, especially when aggregated up to any bottleneck point. In power grids it is very rarely the last hop that is congested, while in cable systems (or other shared media, e.g., wireless) the last hop is more likely to be. In power systems, the retailing utility is unlikely to be the physical point of bottleneck (it relies on transmission and generation entities upstream), unlike ISPs, who work with backbones and then servers.
2) Was congestion unplanned/unexpected?
Electricity in the US grows by a few % per year (or so), and in places like India/China, by 5-16%. The Internet can see growth 10x this rate. Does that imply carrier costs will rise by 10x? Not really. Carriers must be in a position to meet anticipated demand. Certainly they can and should charge for it, but, as in the power world, we deal with a lot of averaging. Most power consumers don't pay based on the share they contribute to coincident peak demand [the largest few consumers may]. I don't pay for my net connection based on congestion.
This relates to expectations. Most people want a flat rate for a given pipe (retail). That is certainly the case for a backbone connection, sold in STMs/OCs/GigE. Is a system based not just on USAGE but perhaps on CONGESTION unfair? Not inherently, and it may be more efficient, but there may be enormous overhead, and it may run counter to expectations. Far trickier is how demand could shift in response.
3) What are the options for overcoming congestion? There are only two - add supply or reduce (perhaps shift) demand. It's worth emphasizing a difference between power and the Net. While it takes a few hours to ramp up *an existing coal plant*, and, as you stated, days to months to add capacity on the Net, adding generation or transmission capacity in power systems takes years.
Can we realistically determine marginal congestion charges? Could or should this be by application? I'm not sure just yet. For one, people can spoof ports. Overcoming such issues then requires adding layers of authentication. = $$$$. I never cease to be amazed by how people respond to such things. In the 1930s, the power companies would use analog radio signals to control water boilers (peak load). And consumers were paid to participate. Then some folks figured out all they had to do was essentially cover their antennas with foil/wire. Voila.

I don't have a sense of how complex or meaningful such granular pricing would be for the Net. In the electricity world, a few people talk of "Prices to Devices". That's a very fundamental shift not to be taken lightly. The same can be said for the Net (to applications). IF we can do it in a transparent, lightweight manner, then it may be a good thing. I just don't know. Who gets to decide how to allocate scarcity? We could allocate it equally, but we all know that the value (and price) of a bit varies in the market, ranging from SMS/text msgs on one extreme to cable video on the other.
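To make the scarcity-allocation question concrete, here is a toy sketch (my own illustration, not any deployed mechanism) comparing an "equal shares" split of a congested link with a value-weighted split. All demands and per-bit valuations are hypothetical.

```python
# Toy allocation of a congested link among applications, contrasting
# "equal shares" (water-filling) with "value-weighted shares".
# Demands are in Mbps; per-bit values are hypothetical (SMS-like
# traffic valued highly per bit, bulk video cheaply).
CAPACITY = 50.0  # Mbps available at the congested point

demands = {"messaging": 1.0, "voip": 5.0, "web": 40.0, "video": 120.0}
value_per_bit = {"messaging": 100.0, "voip": 20.0, "web": 5.0, "video": 1.0}

def equal_allocation(demands, capacity):
    """Give every application the same share, capped at its demand."""
    alloc = {}
    remaining, active = capacity, dict(demands)
    # Water-filling: repeatedly give each unsatisfied app an equal slice.
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for app in list(active):
            take = min(share, active[app])
            alloc[app] = alloc.get(app, 0.0) + take
            active[app] -= take
            remaining -= take
            if active[app] <= 1e-9:
                del active[app]
    return alloc

def value_weighted_allocation(demands, values, capacity):
    """Serve the highest-value traffic first (a crude spot market)."""
    alloc = {}
    remaining = capacity
    for app in sorted(demands, key=lambda a: -values[a]):
        take = min(demands[app], remaining)
        alloc[app] = take
        remaining -= take
    return alloc

print(equal_allocation(demands, CAPACITY))
print(value_weighted_allocation(demands, value_per_bit, CAPACITY))
```

Under equal shares, video and web split the leftover capacity evenly; under value-weighting, video (low value per bit) absorbs nearly all the shedding. Which outcome is "fair" is exactly the open question.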
Rahul
Neil Davies wrote:
Rahul
2008/6/12 Rahul Tongia <tongia@cmu.edu>:
Neil,
I admit any comparisons between electricity and the Internet have
to be simplified, and storing bits changes the temporal issue
somewhat. Storage could span from milliseconds to infinite, and
also be along midpoints (including so-termed p4p). I also think
we're in "vigorous agreement" about many of these issues! :)
Please relook at my post for the subtleties of how localization
matters differently for the power grid vs. the Internet. You
mention economic incentives for people to modify their traffic
incentives, and it working well. That's great, but I speculate
the improvements are relatively local (?).
I don't believe that they are local - small changes in the behaviour of the "average" consumer make substantial differences to the offered load at any ISP. The rise of streaming in the UK (iPlayer from the BBC) is a case in point - each active iPlayer user approximately doubles the peak bandwidth costs in the ISP's cost model (which is how the wholesale ISPs are charged); it is translating into about a 20%-25% increase in costs.
In the approach that we're using, we are (by collective agreement) giving higher preference to the more interactive activity during peak hours; we have the controls to run the transmission capacity at high utilization (~100%) while delivering the packets of the more interactive traffic with suitably low delay and low loss.
The bulk transfers still complete quickly enough, and interactive users don't even notice their effect. Economically, we are using what would normally have been a DSL link supporting 1 or 2 users to support 10 to 15 times that number.
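A minimal sketch (my own, not the actual mechanism Neil describes) of how a link can run near 100% utilization while interactive traffic sees low delay: serve interactive packets strictly first each cycle, and let bulk transfers soak up whatever capacity is left.

```python
from collections import deque

# Two-class strict-priority scheduler: interactive packets always go
# first; bulk packets fill any leftover slots, keeping the link ~100%
# utilized. Packet counts and rates are illustrative.
LINK_SLOTS_PER_TICK = 10  # packets the link can send per tick

interactive = deque()
bulk = deque(f"bulk-{i}" for i in range(100))

def tick(t):
    """Serve one tick of link time; returns the packets transmitted."""
    sent = []
    # Interactive arrivals: a small burst each tick.
    for i in range(3):
        interactive.append(f"inter-{t}-{i}")
    while len(sent) < LINK_SLOTS_PER_TICK and interactive:
        sent.append(interactive.popleft())  # low delay: sent same tick
    while len(sent) < LINK_SLOTS_PER_TICK and bulk:
        sent.append(bulk.popleft())  # bulk gets the residual capacity
    return sent

for t in range(5):
    sent = tick(t)
    # Link stays fully utilized while the interactive queue drains
    # completely within each tick.
    print(t, len(sent), len(interactive), len(bulk))
```

Every tick the interactive queue empties (so its delay stays bounded), while the bulk backlog drains at whatever rate the link has spare - which is the "run it full but keep the interactive traffic happy" property.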
As we are saying the real economic cost is in the provision of the peak supply - and data networks take a lot longer to increase that supply (days or months) than electricity networks.
Consider this example that shows some differences at a broader scale. If we had gigabit connections per user and want to watch a movie, one can get a reasonable buffer built up in seconds, even for an HD movie.
When using, say, TCP, transmission speed is not the critical factor. Round-trip time is critical to starting the streaming (getting the arrival rate >= the streaming rate), and a low loss rate is essential to maintaining it (keeping congestion management at bay). Yes, transmission rate represents a lower bound - but once achieved it is not the critical factor.
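Rough numbers (my own back-of-envelope, with assumed rates): an HD stream at ~8 Mbit/s over a 1 Gbit/s link can fill a 30-second playout buffer in a fraction of a second, which is why RTT and loss, rather than raw rate, become the binding constraints once arrival rate >= streaming rate.

```python
# Back-of-envelope: how fast does a playout buffer fill?
# Assumed figures: 8 Mbit/s HD stream, 1 Gbit/s link, 30 s of buffer.
stream_rate = 8e6       # bit/s consumed by the player
link_rate = 1e9         # bit/s the access link can deliver
buffer_seconds = 30     # playout buffer target, in seconds of video

buffer_bits = stream_rate * buffer_seconds  # 240 Mbit to buffer
fill_time = buffer_bits / link_rate         # seconds to fill it
# Streaming is sustainable only if arrival rate >= streaming rate.
sustainable = link_rate >= stream_rate

print(f"buffer fills in {fill_time:.2f} s, sustainable={sustainable}")
# → buffer fills in 0.24 s, sustainable=True
```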
Electricity congestion (peak summer) isn't likely to go away so soon, and the ability to react (by the grid) is MUCH slower, except for spinning reserve, which is 5-10% of the capacity (any more becomes very expensive).
Turning on a coal plant takes up to 2 hours to reach full steam.
Interesting - I would say that the situation is entirely the opposite. Provisioning more capacity in a data network takes days to months - you need to either get more capacity switched on over existing physical connections or have new physical connections installed. Of course, once enabled, it is available 24/7 at a fixed price.
People have talked about "interruptible power" contracts but these
haven't found enough takers. One issue is regulatory/pricing.
Consider Demand Response (something the so-called Digital or Smart
grid is meant to enable). I have several PhD students working on
this, and we find that if even a small fraction of consumers
modify their demand in response to grid signals, all the consumers
benefit (at the extreme, averting a blackout, but certainly with
lower prices for the generated electricity). What is not yet
clear is how to compensate stakeholders for such things.
I'm familiar with timeshifting electricity demand - I was peripherally involved in EU projects on this 10-15 years ago. In networking it is substantially easier to do this - you can provide classes of service for particular applications, where you can reduce their "supply" at times of peak. Electrical devices, typically, are discrete in their energy consumption - they are either on or off. Network devices are far more continuous.
The same thing doesn't exist in the Internet world, and even if it
did (which you state you've deployed), any
congestion-alleviation-participant would not help the entire Net,
only some (very small to small) fractions of it.
Ah, this is where the analogy ceases to work. Electricity grids (to a first approximation) are about keeping the sum of the supply points (slightly) greater than the sum of the demands - in the network world the issue is the overload of discrete points in the distribution topology. Having a system that alleviates the issues on just a few of those makes extremely good economic sense. There are vast portions of the global internet where average usage is <<0.1% (that is why they deliver good quality) - it is the points that are contended/saturated that need to be addressed.
I also didn't follow your example of stand-by passengers - aren't
all "best effort" users of the Internet standby passengers?
The issue is that they are all one class, and the "best" that any pre-emption mechanism (and you can think of queues in network devices as being that) can do is to be "fair" [don't start me on fairness!].
If you have a class of users (or more appropriately a class of traffic for certain applications) that say they would willingly be pre-empted some of the time then you have offered load that you can shed at peak times at your contention points. This means that you can carry other packets - keep those applications happy. It reduces your overall cost structure - meeting expectations without having to increase transmission capacity. Standby passengers play this role for airlines.
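A sketch of the standby idea (my own illustration, with hypothetical flows and rates): at a contention point, when offered load exceeds capacity, shed only the traffic whose owners opted into a pre-emptible class, and carry everything else untouched.

```python
# Shed pre-emptible ("standby") load first at a contention point.
# Flow names and rates (Mbps) are hypothetical.
CAPACITY = 100.0

flows = [
    {"name": "voip",   "rate": 10.0, "preemptible": False},
    {"name": "web",    "rate": 50.0, "preemptible": False},
    {"name": "backup", "rate": 60.0, "preemptible": True},
    {"name": "p2p",    "rate": 40.0, "preemptible": True},
]

def admit(flows, capacity):
    """Carry firm traffic in full; give standby traffic the residual."""
    firm = sum(f["rate"] for f in flows if not f["preemptible"])
    residual = max(capacity - firm, 0.0)
    standby = sum(f["rate"] for f in flows if f["preemptible"])
    # Scale every standby flow down by the same factor (uniform shedding).
    scale = min(residual / standby, 1.0) if standby else 1.0
    return {f["name"]: (f["rate"] if not f["preemptible"]
                        else f["rate"] * scale)
            for f in flows}

carried = admit(flows, CAPACITY)
print(carried)
# Firm flows keep their full rate; the standby flows share the
# remaining 40 Mbps in proportion to their demand.
```

The off-peak flip side is the same code with a larger residual: the standby class then runs unthrottled, which is what makes opting in worthwhile.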
In the "intelligent electricity network" you have the same issues. You need applications that can be "load shed" (either on/off or demand reduced) - things like refrigeration or heating - things where interruption of the supply doesn't ruin the expected outcome (can't just turn off the power to a lift!). But you can't turn off the power to those things indefinitely or they will not fulfil their role - ruined food. It is about time shifting not eliminating the load. There is obviously economics in this - take water heating; do I have an "instantaneous" water heater or one that has a (suitably sized) water tank to cope with the effects of time shifting the demand - the price differential in the electricity costs helps determine the economics of both the equipment and the "happiness" of the consumer with the two different approaches.
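The water-heater trade-off can be put in toy numbers. All tariffs, consumption figures, and equipment costs below are assumed, purely to show how the peak/off-peak price differential drives the equipment choice.

```python
# Toy economics of time-shifting a water heater. All figures are
# assumptions for illustration, not real tariffs or prices.
daily_kwh = 10.0          # energy for hot water per day
peak_price = 0.30         # $/kWh at peak
offpeak_price = 0.10      # $/kWh off-peak
days = 365 * 10           # compare over a 10-year horizon

# Instantaneous heater: heats on demand, mostly at peak; cheap to buy.
instant_capex = 300.0
instant_opex = daily_kwh * peak_price * days

# Tank heater: heats off-peak and stores hot water; dearer to buy,
# plus standing heat loss (assumed 10% extra energy).
tank_capex = 900.0
tank_opex = daily_kwh * 1.10 * offpeak_price * days

print(f"instant: ${instant_capex + instant_opex:,.0f}")
print(f"tank:    ${tank_capex + tank_opex:,.0f}")
```

With these assumed numbers the tank wins comfortably; narrow the peak/off-peak spread and the ranking flips, which is exactly the economics Neil is pointing at.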
Trying to understand that economics is one of the things that I hope we achieve in our work.
Neil
Rahul
Neil Davies wrote:
Rahul
Unfortunately your assertion about storing bits is a bit specious.
Yes the information itself is not lost but, as it is with
electricity networks the issue is not the movement of the
electrons but the transmission of "power".
Power here being "the work" done by moving the bit -
basically transmission capacity, it can't be stored for later
use. The difference
The good news is that the effects of demand for power
exceeding the supply don't have to be applied across the board
(as they do in electricity networks - until you shed load). We
have the mechanisms to deliver different fractions of the
requested load to different end-users (and different
applications for such end users).
This works well in networks that I have helped design and
deploy, ones in which the end-user gets the ability to
"define" the traffic that is important to them and to
subscribe to per service "quality" agreements.
It appears that if you build the right economic models and implement the sharing policies appropriately, you can deliver what users need, and at a lower cost.
As has been said elsewhere - the issue is demand NOW - it is the right to get on the plane even if it's full - and that assumes the existence of standby passengers.
It is that lack of a "standby" population and the mechanisms
to make a market out of the differential that is, in my
opinion, the barrier that has to be overcome.
Neil
2008/6/9 Rahul Tongia <tongia@cmu.edu>:
One has to remember that electricity is rather hard to store (in volume) while bits can be retransmitted. There aren't that many takers for "interruptible" power, and there is a challenge (regulatory and otherwise) in that the "selfless" actions of these few help everyone in the cases where generation < demand; i.e., getting a 1-5% load reduction is a very big deal, and enough to avert a blackout. In the Internet world, not only are spikes often much higher, there is the enormous issue of the localization of the bottlenecks. Even so-termed backbone bottlenecks are usually between a pair of points (consumer and content server).

Vint's point about p4p is interesting, in that it could make this a much more distributed problem, and thus one where more "selfless" souls could benefit everyone. Instead of everyone trying to apply electricity models to the Internet, there is also the future where the electricity grid begins to look more and more like the Internet, i.e., one wants to explicitly choose and control flows between, say, green power and my appliance. Of course, such considerations are economic, and under catastrophic conditions, all such bets are off. Thus, in the Internet world, if there is a congestion/chokepoint, one possible solution isn't just to worry about that particular flow, but also everything else. Makes it a much more complicated problem...