
[ NNSquad ] Re: The myth of isochronous and the risk of baking-in the past


I hope that this ends the “discussion” on the isochronous Internet. There are too many errors here for a point-by-point reply to be meaningful. I’m still waiting for George to respond to http://frankston.com/?n=DarnInternet in order to demonstrate some understanding of the Internet.

 

Perhaps the most fundamental problem is an inability to understand that the Internet itself is a peer-to-peer architecture, and that all P2P is is a rediscovery of the basic nature of the Internet, a nature that got lost in the business model of the web and in the expedient use of TCP even when it wasn’t the best architectural choice for many applications.

 

The arguments defy logic. How do we get from discussing the Internet as a video distribution system to arguing that it must now become a high-performance video conferencing system? And games are not P2P? If P2P is so bad, why the emphasis on naïve implementations of twitch games, which require extremely low latency, far beyond anything BitTorrent and the others would dare expect? And now George wants to both ban P2P and require it?

 

There are so many errors and so much misunderstanding that I don’t see how we can communicate. Jitter and latency are related but completely different. But again, this is all about presuming the purpose of the network and making promises. There is no understanding of creating opportunity and driving a dynamic; it’s like a blind man insisting that an elephant is like a fan or like a snake.
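
To make the distinction concrete, one simple way to separate the two from the same set of delay samples (the numbers below are invented for illustration, not measurements):

    # Latency is how long packets take on average; jitter is how much that
    # delay varies. The per-packet delays below are made up for the example.
    import statistics

    one_way_delay_ms = [42, 45, 41, 180, 44, 43, 46, 40]

    latency_ms = statistics.mean(one_way_delay_ms)    # average delay
    jitter_ms = statistics.pstdev(one_way_delay_ms)   # spread around that average

    print(f"latency ~ {latency_ms:.0f} ms, jitter ~ {jitter_ms:.0f} ms")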

 

From: nnsquad-bounces+nnsquad=bobf.frankston.com@nnsquad.org [mailto:nnsquad-bounces+nnsquad=bobf.frankston.com@nnsquad.org] On Behalf Of George Ou
Sent: Sunday, October 04, 2009 14:31
To: 'Bob Frankston'; nnsquad@nnsquad.org
Cc: 'ip'
Subject: [ NNSquad ] Re: The myth of isochronous and the risk of baking-in the past

 

Bob,

 

Let’s come back to the real world.  Here are the facts.

 

P2P (even at 10% of capacity) induces massive jitter, on the order of 50 milliseconds to 1000+ milliseconds, on the broadband segment. Based on my tests, higher P2P utilization causes more frequent spikes in packet delay, while lower P2P utilization causes less frequent spikes of often the same amplitude, which is still very problematic. This makes gaming, VoIP, IPTV, video conferencing, and every other isochronous real-time application unbearable.
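
To see where delay on that scale comes from, here is a back-of-the-envelope sketch (the uplink rate and queue depth are assumptions, not figures from my tests): a small real-time packet that lands behind a queue of full-size upload packets has to wait for all of them to drain.

    # Back-of-the-envelope: delay a small real-time packet sees when it lands
    # behind bulk-upload packets queued on a slow uplink. Numbers are assumed.
    uplink_kbps = 768                # assumed upstream rate of a DSL/cable link
    queued_bulk_bytes = 32 * 1500    # assume 32 full-size upload packets ahead of it

    queueing_delay_ms = (queued_bulk_bytes * 8) / (uplink_kbps * 1000) * 1000
    print(f"extra delay from the queue: {queueing_delay_ms:.0f} ms")   # ~500 ms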

 

We can use jitter adaptation techniques on VoIP and video conferencing to minimize packet discards, but they are severely limited in how much jitter they can mitigate, and there are tradeoffs even when the adaptation works. If you size the buffer at 200 milliseconds just for the occasional spike in packet delay due to jitter, you’re raising the base latency by 200 milliseconds, which gets tacked on to the 20 ms packetization delay and the network latency, which could be as high as 200 ms on intercontinental calls. A 400 ms lag between the time you say something and the time you hear something back is not a desirable way to make a phone call. Now what happens when the jitter goes up to 1000+ ms? Are you going to jack up the base latency by 1000 milliseconds, or just discard those packets? I suppose you could, but that wouldn’t be what people expect from a phone call, their television service, their video call, or their online game. I don’t want to just “absorb” the jitter. I can, but I won’t. We’re not talking about playing online chess, where the delay could be a minute long for all I care; we’re talking about a game of human reaction times to see who can virtually kill their opponent.
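
Spelled out, the arithmetic above looks like this (using the same round figures from the paragraph, not measurements of any particular network):

    # The latency budget described above, using the same round figures.
    packetization_ms = 20       # codec framing delay
    network_ms = 200            # worst-case intercontinental path
    jitter_buffer_ms = 200      # de-jitter buffer sized for the occasional spike

    one_way_lag_ms = packetization_ms + network_ms + jitter_buffer_ms
    print(f"lag: ~{one_way_lag_ms} ms")   # roughly the 400 ms figure cited above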

 

Now the solution is either to simply fix it in the network with multiple transmit queues, or to insist on the end-to-end dumb-pipe dogma (not to be confused with the end-to-end arguments) and give people a crappy experience. As someone who uses P2P, VoIP, video communications, and online gaming, I prefer using the intelligent network to make the network more efficient. Without it, I and everyone else I know simply shut P2P off during the day, which ultimately harms the health of P2P because I’m no longer seeding. I’d rather not make idiotic compromises by insisting on a dumb network to satisfy someone’s delusional concept of what the Internet is supposed to be.
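
For what I mean by multiple transmit queues, here is a minimal sketch of the idea (a strict-priority scheduler with invented packet names; an illustration of the concept, not any vendor’s implementation):

    # Minimal sketch of multiple transmit queues: latency-sensitive packets
    # drain first, bulk packets use whatever capacity is left over.
    from collections import deque

    realtime_q = deque()   # VoIP, gaming, video conferencing
    bulk_q = deque()       # P2P, backups, large downloads

    def enqueue(packet, realtime=False):
        (realtime_q if realtime else bulk_q).append(packet)

    def dequeue():
        """Strict priority: serve the real-time queue whenever it has packets."""
        if realtime_q:
            return realtime_q.popleft()
        if bulk_q:
            return bulk_q.popleft()
        return None

    enqueue("p2p-chunk-1")
    enqueue("voip-frame-1", realtime=True)
    print(dequeue())   # voip-frame-1 goes out ahead of the bulk traffic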

 

 

 

George Ou

 

From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org [mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On Behalf Of Bob Frankston
Sent: Sunday, October 04, 2009 8:42 AM
To: nnsquad@nnsquad.org
Cc: 'ip'
Subject: [ NNSquad ] The myth of isochronous and the risk of baking-in the past

 

We keep getting told that P2P traffic interferes with isochronous IPTV.

 

Why this concern with isochronous (and its cousin QoS)? It misses the point of the Internet which is about learning to take advantage of opportunity. Isochronous was an issue in the early days of analog signaling when we had systems that barely worked and there wasn’t even the concept of buffering.

 

Today, if you switch between an SD and an HD stream (AKA channel) on a cable system you’ll notice many seconds of difference between the two, and we don’t really care. We also have the infamous “7-second” delay in the US, where broadcasters are scared of dirty words.

 

So why not just have a buffer to absorb any jitter? In fact, we now have protocols like SVC that are adaptive and will adjust the number of bits needed depending upon the capacity available, so that you can fill the buffer and/or show more content or apply whatever clever approaches you are thinking of.
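
As a sketch of that adaptive idea (the layer bitrates are invented for the example and are not how SVC actually layers a stream):

    # Toy version of adaptive rate selection: pick the richest stream the
    # currently available capacity supports and let the buffer absorb the rest.
    layer_kbps = [500, 1500, 3000, 6000]   # hypothetical layer bitrates

    def pick_layer(available_kbps, headroom=0.8):
        """Choose the highest-rate layer that fits within a safety margin."""
        usable = available_kbps * headroom
        best = 0
        for i, rate in enumerate(layer_kbps):
            if rate <= usable:
                best = i
        return best

    print(pick_layer(4000))   # -> 2, the 3000 kbps layer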

 

I notice that if I turn away from a stream on my Verizon FiOS connection and go back to it after a short time, it will catch up from where I left off! Clever. I presume it uses a simple algorithm to speed up the stream without any perceptible difference. Such techniques are not uncommon; you don’t notice them.
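
Here is the kind of arithmetic I presume is behind it (the gap and the speed-up factor are my guesses, not anything Verizon has documented):

    # Playing slightly faster than real time closes the gap without being noticed.
    gap_s = 30          # assumed: how far behind the live stream we are
    speedup = 1.05      # assumed: a barely perceptible playback rate

    catch_up_time_s = gap_s / (speedup - 1)
    print(f"caught up after ~{catch_up_time_s / 60:.0f} minutes")   # ~10 minutes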

 

You’d think isochronous delivery would be vital in sports, but the US did fine with a 12-hour buffer when the Olympics were held in Sydney. We don’t depend on millisecond timing for watching sports; you don’t notice the existing compression delays.

 

I argue that isochronous (and QoS) killed IEEE-1394 (AKA FireWire) by restricting its reach and over-defining the solution. It didn’t help that it was a silo with application protocols included. IEEE-1394 may be a good example of the risk in the attitude that we know what the applications are and should bake them into the network.

 

The focus on isochronous IPTV is problematic. It posits that we must design networks within the limitations of television circa 1940. It also presumes that the purpose of the network is television. As I explain in http://rmf.vc/?n=IAC, that attitude is simply an artifact of the fact that we discovered that if you repurpose a video distribution network, you find it’s good for video distribution.

 

Instead we need to recognize that video distribution can be very tolerant of network behavior. With a little buffering we can stream in real time, but as network capacity increases we can send the data faster than real time and have an arbitrary amount of buffering available. There are many ways to make video available depending on what tradeoffs you choose to make. As drive capacity goes to terabytes, buffering becomes the norm. In fact, FiOS seems to buffer content “just in case” on its DVRs.
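
A quick sketch of why faster-than-real-time delivery makes isochrony beside the point (the stream and link rates are assumptions for illustration): if the link outruns the playback rate, the buffer only grows.

    # If delivery outruns playback, the buffer grows and jitter stops mattering.
    playback_mbps = 6        # assumed rate of an HD stream
    link_mbps = 25           # assumed delivered rate
    watch_minutes = 10

    buffered_s = watch_minutes * 60 * (link_mbps / playback_mbps - 1)
    print(f"~{buffered_s / 60:.0f} extra minutes of video buffered")   # ~32 minutes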

 

The best way to make this process fail is to fixate on isochronous delivery and bake the past into the network architecture. Alas, just as we discovered that if you repurpose a broadcast network you find that video distribution is its purpose, if you repurpose companies whose business is selling video distribution you get the misguided notion that the purpose of a network provider is to provide the same old services.

 

Let’s not forget that video is just another app. It works better with more speed, but if we restrict ourselves to video we won’t get other vital services.

 

Time to move on from network services to creating opportunity.