NNSquad - Network Neutrality Squad
[ NNSquad ] Re: Editorial Comment on "Entry level pricing"
Lauren
I totally agree that without measurement no progress can be made. I also agree that the correct approach should be bottom-up and consumer-experience-led.
Perhaps it would be useful to share the approach we've been using in our network quality and performance analysis. It leads to a characterisation of the emergent properties of the network (really, its data transport quality) that permits comparison both between technologies and between ISPs.
Using tcpdumps as input (the essential information is just the timestamps, not the contents) you can calculate the delay experienced by each packet and, from this, calculate the following:
G: the delay due to geographical effects, e.g. transmission delay
S: the delay due to the size of the packet, i.e. big packets take longer
V: the delay (and its variation) due to the sharing of common paths, equipment, etc.
In general, for any two given end points, G is constant - speed of light rules; S is a function of the packet size and is dominated by the lowest link speeds (typically the access links); which leaves V.
Given that I am at a particular location and have purchased a particular access technology (be it DSL, Cable, 2G/3G Mobile, WiMAX, LTE, or Satellite - MEO or GEO), G and S are fixed.
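To make that concrete, here is a minimal sketch (Python, purely illustrative - the function name, the input format and the bucketing scheme are my assumptions, not part of any tool) of one way to estimate G, S and V from per-packet (size, one-way delay) samples derived from tcpdump timestamps: the minimum delay observed at each packet size approximates G + S, and whatever is left over per packet is V.

    import numpy as np

    def decompose_gsv(sizes_bytes, delays_s, n_buckets=20):
        """Estimate G, S and V from per-packet (size, one-way delay) samples.

        sizes_bytes, delays_s: equal-length sequences, one entry per packet,
        for a single direction between two fixed end points (e.g. derived
        from tcpdump timestamps taken at both ends).
        """
        sizes = np.asarray(sizes_bytes, dtype=float)
        delays = np.asarray(delays_s, dtype=float)

        # The minimum delay seen at a given size approximates G + S(size):
        # it is what a packet experiences when it never has to queue.
        edges = np.linspace(sizes.min(), sizes.max(), n_buckets + 1)
        which = np.clip(np.digitize(sizes, edges) - 1, 0, n_buckets - 1)
        xs, ys = [], []
        for b in range(n_buckets):
            mask = which == b
            if mask.any():
                xs.append(sizes[mask].mean())
                ys.append(delays[mask].min())

        # Fit min_delay = G + size * per_byte (a straight line in packet size).
        per_byte, G = np.polyfit(xs, ys, 1)

        # V is the per-packet residual once G and S have been removed.
        V = delays - (G + per_byte * sizes)
        return G, per_byte, V

In practice you would do this separately for each direction; small negative values of V are just fitting and clock noise - the interesting thing is the distribution.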
V, which is really a distribution, is all of those sharing and contention effects. V has various interesting properties:
1) V will be non-zero; a V which is always zero implies that there was never a time when your packets were delayed by other packets - which in turn implies that there was never any other traffic in the network. If you want V to be (nearly) zero, that implies minimal or no sharing, which has tremendous cost implications.
2) even for 'the highest priority' traffic, V has to be non-zero - even that traffic has to wait for resources to become free before they can be used (the residual packet service time effect).
3) V will be bounded; whether those bounds are reasonable or not is a different question. How tightly V is bounded and, consequently, what the observed loss rate is, define the difference between ISPs (a sketch of one way to characterise this follows below).
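As a rough illustration of what 'characterising the bound on V' might look like (the percentile choices and names below are mine, not something from our analysis):

    import numpy as np

    def characterise_v(V_s, packets_sent, packets_received):
        """Summarise the variable-delay distribution and the observed loss rate."""
        V = np.asarray(V_s, dtype=float)
        return {
            "V_median_ms": 1e3 * np.percentile(V, 50),
            "V_99th_ms":   1e3 * np.percentile(V, 99),
            "V_max_ms":    1e3 * V.max(),
            "loss_rate":   1.0 - packets_received / packets_sent,
        }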
None of this measurement needs the active participation of the ISP - it is just appropriate statistical analysis of the observations. If the ISP will work with you, all the better, as the <G,S,V> measurements 'compose', so you can then work out the contribution of each hop or sub-network on the end-to-end path.
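A toy illustration of that composition (my own simplification - it treats each hop's V as independent, which real networks only approximate): G and S simply add along the path, while the per-hop V distributions convolve, approximated here by summing random samples drawn from each hop's measured V.

    import numpy as np

    def compose_hops(hops, n_samples=100_000, seed=0):
        """hops: list of (G_seconds, S_seconds, V_samples) per hop/sub-network."""
        rng = np.random.default_rng(seed)
        total_G = sum(G for G, _, _ in hops)
        total_S = sum(S for _, S, _ in hops)
        # Convolution of the per-hop V distributions, done by Monte Carlo:
        # total_V is a sample of the end-to-end variable delay.
        total_V = sum(rng.choice(np.asarray(V, dtype=float), size=n_samples)
                      for _, _, V in hops)
        return total_G, total_S, total_V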
We've measured this in the UK and have found that, for the common shared infrastructure (BT Wholesale's IPStream network), not only are G and S calculable (e.g. G is between 6ms and 25ms depending on where you are in the country, and 99% of S is due to the access link speeds at either end) but V is well defined - it is less than 20ms, broadly independent of packet size, location and, slightly surprisingly, time of day.
I can just hear UK readers saying, "wait a minute - 20ms delay variation, independent of time of day? That's not what I see." I'd agree - this is the quality attenuation for the wholesale portion of the end-to-end path only, and quality attenuation only 'adds', so you are going to see more delay (and variation) than this - but that will be due to other factors, such as your ISP.
We've measured some ISPs that deliver within a few milliseconds of this (to their most valued customers) and others where the measured bound on V is not just in seconds, but in tens of seconds!
Now, when you come to measure 2G/3G mobile, cable and satellite, very interesting patterns start to emerge - because you can work out which bits of the network generate which part of 'V', you can see the effect of investment patterns on long-term service quality; where the best return on managing 'V' is to be found; and which vendors' equipment gives you the most control over 'V' at the highest loading factors (guess where most of those tens of seconds came from!).
What this all tells me is that there is a reasonable scientific method out there with which to start talking about the data transport quality aspects of network neutrality, and that there is plenty of scope for service providers to use their smarts to construct more effective networks and manage their costs while delivering bounds on 'V'. Plenty of scope for innovation and market/service differentiation.
Measuring the data transport quality that ISPs deliver is possible, and it can be done with scientific rigour. ISPs shouldn't be scared - you can use the same science to optimise and improve, and to demonstrate that you are operating efficiently.
To complete the chain of thought - how does this relate to customer experience? Well, if the application I am working with has its packets delivered within appropriate bounds, then my experience will be good (and all of this is quantifiable). ISPs are never going to guarantee experiences involving your application, but they might well be willing to go some way towards assuring the transport of the data packets that belong to that application - that would be an appropriate boundary, as all of those factors are, at least in principle, within their control.
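For instance (a hypothetical check, with made-up VoIP-style numbers rather than anything measured): given the delivered delay distribution and loss rate for the packets belonging to an application, fitness-for-purpose becomes a simple yes/no test against the bounds the application needs.

    import numpy as np

    def fit_for_purpose(delays_s, loss_rate,
                        delay_bound_s=0.150, quantile=0.99, max_loss=0.01):
        """Is the delivered transport quality within the application's bounds?"""
        delays = np.asarray(delays_s, dtype=float)
        return np.quantile(delays, quantile) <= delay_bound_s and loss_rate <= max_loss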
As you can imagine there is more to this, and this is only really a taster of both the approach and the outcomes - I hope it was interesting.
Neil
On 6 Oct 2009, at 01:18, Lauren Weinstein wrote:
Neil,
I agree with much that you're saying. But part of what makes this so complex is that we tend to often conflate different aspects of the "network neutrality" debate into one rather large and dense lump.
For example, it's possible to separate -- to a significant extent -- the technical details from statements of societal policy. E.g., "ISPs should be free to conduct business however they see fit without any regulation of any kind." Or, "Reasonable regulation of ISPs is deemed necessary and appropriate by society in keeping with society's established interest in promoting the general welfare of its citizens."
We also need definitions. I believe we'd pretty much all agree that "up to this speed" advertisements for Internet access services are largely useless without additional data that usually is not available to potential or current subscribers. So how to define the "Internet access experience" in a flexible but meaningful manner? Not easy. I'm reminded of Microsoft's "Vista Experience" rating that attempts to suggest how well any given hardware configuration will run Vista. I've found that rating to be essentially useless. Coming up with a consumer experience rating for Internet access would be even harder (though, as we've heard previously on this list, there have been proposals for more rigorous methodologies for such ratings). Then you're faced with how to get ISPs to accept such rating regimes absent regulatory pressure, given that it isn't necessarily in an ISP's own self-interest to reveal such details to consumers.
And you need measurements. Old saying: If you can't measure it, it isn't science. Basically, in the Internet access world, there are only two main ways to get network measurements. One is to depend on ISPs to do so fairly and effectively, and for them to make the resulting data widely available. But again, what's in it for them in an unregulated, at best largely oligarchical environment? Much of the measurement data we'd really like to see to better understand what's going on is often considered to be proprietary by ISPs.
Or computer users and the Internet sites that they frequent can take, analyze, and share their own measurements. This concept was a key aspect of the network measurement meeting at Google last year, which was the genesis for GCTIP ( http://www.gctip.org ). I personally feel that a bottom-up, consumer-led approach to this problem has the best chance of success, but it's still not easy. Not only are there a variety of technical, logistical, and privacy issues involved, it can also be quite nontrivial to analyze the resulting data without knowledge of ISP internal topologies. This can lead, for example, to consumers assuming that they are being purposely blocked by an ISP, when in reality an ephemeral routing or other temporary and purely technical problem is at fault. However, I do believe that these issues could be overcome with sufficient dedication and effort, and I still very much support the consumer-based approach.
In significant ways, getting Internet access service is like buying drinks at a bar. For any given drink, how much genuine booze is mixed with how much water or other diluting agents? How much do these ratios vary from day to day and with time of day? Does the bartender tend to dilute the drinks more when the bar is crowded, rather than buy enough extra liquor to keep the ratio up to standard even during Happy Hour? And how do you judge the ratio anyway? Taste? Buzz level?
It's easy enough to weigh a bag of M&Ms or count the number of tasty chocolate-coated morsels that were provided. But most any time that we're dealing with products or services of a less physical nature, especially when their "contents" can be easily altered or finessed, it's all a much tougher proposition.
--Lauren-- NNSquad Moderator
- - -
On 10/05 23:32, Neil Davies wrote:
Lauren
I could not let your editorial commentary (below) just pass:
But the issues of sharing and oversubscription are relevant across all forms of Internet access, not just wireless. ISPs make essentially arbitrary decisions about how many customers will share most elements of the physical plant. Yes, DSL is a dedicated pair back to the CO or terminal, but after that it's just as subject to oversubscription performance problems -- from the subscribers' standpoint -- as anything else. And of course, as lowly subscribers, we usually have no clue how bad that oversubscription or other undercapacity problems will be at any given time.
While nothing you've said is false, it doesn't do justice to how fundamental these issues of 'sharing' and 'the decisions' really are. There is a real truth hinted at here - one that, I believe, goes to the very heart of how 'neutrality' can be expressed and, in principle, measured. Let me see if I can explain.
It is all about experience (or emergent properties, if you want to be more formal) - specifically the delay and loss characteristics that a subscriber's traffic 'experiences'. That experience is, in turn, the composite effect of all the 'sharing' and 'decisions' being made at the network elements. As a subscriber I don't care about all that detail; I only care about the composite effect - the 'total' delay and loss my traffic experiences.
This is not a concern about the fate of any individual data packet - it is about the general trends, the distribution, of those delay and loss characteristics over many packets.
What do I want? I want to know that my application will get (with a reasonably high probability) sufficient of its data packets through the network so that the application delivers a service to me that is fit-for-purpose. I want to have a bound on the extremes of delay and loss delivered to my traffic which is published and, preferably - at least for some of my applications - has associated with it a contractual commitment to be delivered. I may even be prepared to pay more for such a service - because I can now rely on it.
This is all measurable and quantitative. Aspects of neutrality can then be expressed as 'deliver me the same loss and delay characteristics as X' - ISPs can then express their services in those terms, along with what their restrictions are for getting those services, be they by time of day or quantity over a time period.
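One way such a 'same characteristics as X' comparison might be made concrete (an illustrative sketch - the quantiles and names are my own choices, not anything agreed or measured): compare the delivered delay distribution against X's at a few agreed percentiles.

    import numpy as np

    def no_worse_than(delays_mine_s, delays_x_s, quantiles=(0.5, 0.9, 0.99)):
        """True if my delays are no worse than X's at each agreed quantile."""
        mine = np.asarray(delays_mine_s, dtype=float)
        ref = np.asarray(delays_x_s, dtype=float)
        return all(np.quantile(mine, q) <= np.quantile(ref, q) for q in quantiles)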
Yes, all of this is a result of the 'sharing' and the decisions - how the equipment is configured, how much over-subscription there is, etc. As subscribers we don't want to know how bad the ISP's problems are - what we need to know is what we can rely on.
Neil