NNSquad - Network Neutrality Squad
[ NNSquad ] Re: [ PRIVACY ] Would You Know if Your ISP Tampered With Your Web Pages?
Lauren,
Kevin
[ A couple of issues. First, while RFC 1864 states that only the originator is supposed to create such headers, in a non-encrypted environment there is no obvious way to enforce this rule, absent other mechanisms. Header substitution would therefore still be possible, including substitution of the Content-MD5 header itself. In a completely benign transmission environment this might not be an issue, but we're postulating a more "antagonistic" situation.
Also, in the common scenario of Web pages being generated from various independent servers, a method would also be needed to determine that the page as a whole is valid -- that no elements, such as particular items or ads, have been substituted by intermediate parties, perhaps with valid Content-MD5 headers of their own. So while Content-MD5 is quite useful for detecting transmission errors, its usefulness for preventing active tampering seems more problematic.
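To make the substitution risk concrete, here is a minimal Python sketch (illustrative only, not an implementation proposal). It computes an RFC 1864 Content-MD5 value -- the base64 encoding of the MD5 digest of the body -- and shows why a client-side check of that header alone cannot detect an on-path rewrite:

    import base64
    import hashlib

    def content_md5(body: bytes) -> str:
        # RFC 1864: base64 encoding of the 128-bit MD5 digest of the body
        return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

    def verify(body: bytes, header_value: str) -> bool:
        # What a naive client-side check would do
        return content_md5(body) == header_value

    tampered = b"<html>the real page, plus an injected ad</html>"
    forged = content_md5(tampered)   # an on-path party just recomputes it
    assert verify(tampered, forged)  # the tampered page still "verifies"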
-- Lauren Weinstein NNSquad Moderator ]
[ Some specific technical suggestions in response to the message below are already arriving -- I will forward them as appropriate to this list after receiving redistribution permission from their authors. -- Lauren Weinstein NNSquad Moderator ]
------- Forwarded Message
To: privacy-list@vortex.com
Date: Sun, 06 Jan 2008 17:47:29 -0800
From: privacy@vortex.com
Subject: [ PRIVACY Forum ] Would You Know if Your ISP Tampered With Your Web Pages?
Would You Know if Your ISP Tampered With Your Web Pages?
http://lauren.vortex.com/archive/000351.html
Greetings. Would you even know if an ISP spied on or tampered with your Web communications?
While encryption is the obvious and most reliable means available ( http://lauren.vortex.com/archive/000338.html ) to avoid unwanted surveillance or intrusions into the data streams between Web services and their users, it's also clear that pervasive encryption will not be achieved overnight.
In the meantime, we see ISPs apparently moving at full speed toward various data inspection and content modification regimes, and laws to protect Web services and their users from inappropriate or unacceptable ISP actions are being fought tooth and nail by ISPs and their corporate parents.
Some announced concepts, like AT&T's alarming plans to "monitor" Internet communications to find "pirated" content, appear most akin to wiretapping in the telephone realm (would people accept the monitoring of all phone calls in search of any illegal activity? Even given the current telcos/NSA controversies, I would tend to doubt that this would be widely applauded).
Others, like Comcast's unacceptable disruption of P2P traffic, appear to be partly extremely aggressive "traffic management" and partly outright packet forgery in furtherance of interfering with communications.
And of course, we still have the ongoing Rogers saga ( http://lauren.vortex.com/archive/000337.html ), where direct modification of data streams to insert ISP-generated messages or, as suggested by a related hardware vendor, advertising ( http://www.perftech.com ), is the order of the day.
Encryption is the only sure approach to deal with the potential for ISP (or other) surveillance on Internet connections, and even encryption will permit a significant degree of traffic analysis in the absence of anonymized proxy architectures.
But in the case of ISP tampering with data streams, is there anything we can do for now -- short of the goal of full-page encryption -- to inform users that their Web communications are being adulterated? Can a Web service be sure that its users are able to see the actual Web pages being transmitted -- unmodified by ISPs? And can this be accomplished with the highly desirable attribute of not requiring major server-side modifications to the Web pages themselves?
There are a number of non-trivial issues to consider. First, a Web page, as we all know, is frequently composed of many disparate elements, often hosted by a variety of completely different servers under the control of multiple entities. How can we define "a Web page" in a way that takes all of these elements and data sources into account, especially when each user may see not only differing primary text and images, but totally different ads?
Would the amount of real-time data coordination necessary to create and communicate such a single-user page "validation snapshot" be practical -- and would the results be worth the work required?
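To make the shape of such a snapshot concrete, here is one illustrative Python sketch -- every name in it is hypothetical, not a proposed standard. The idea is a manifest mapping each element actually served to a given user to a hash of its exact bytes, plus a digest over the manifest as a whole:

    import hashlib
    import json

    def build_snapshot(elements):
        """elements: dict mapping element URL -> the exact bytes served
        to this user (primary HTML, images, per-user ads, etc.)."""
        manifest = {url: hashlib.sha256(body).hexdigest()
                    for url, body in elements.items()}
        digest = hashlib.sha256(
            json.dumps(manifest, sort_keys=True).encode()).hexdigest()
        return {"manifest": manifest, "digest": digest}

Note that populating "elements" for components served by third parties -- ads in particular -- is precisely the real-time coordination problem raised above.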
Assuming that we can create such a snapshot, a secure mechanism to immediately transmit this validation data to the user's Web browser would then be necessary, bringing back into the mix the probable need for some encrypted data, albeit of a very small amount as compared with fully encrypted Web pages.
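As an illustrative sketch of that step (again assuming the hypothetical snapshot above, and a hypothetical HTTPS endpoint), the page itself might still travel in the clear while the comparatively tiny snapshot is fetched over an encrypted channel that an ISP cannot rewrite to match a tampered page:

    import json
    import urllib.parse
    import urllib.request

    def fetch_snapshot(page_url):
        # validator.example.com is hypothetical; HTTPS keeps the snapshot
        # itself beyond the reach of on-path modification.
        endpoint = ("https://validator.example.com/snapshot?page="
                    + urllib.parse.quote(page_url, safe=""))
        with urllib.request.urlopen(endpoint) as resp:
            return json.load(resp)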
The last step in the validation process would be for the user's Web browser (or a suitable plugin) to alert the viewer to suspected data tampering, and to provide details useful for logging and/or reporting the incident.
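The comparison itself could be as simple as the following sketch, assuming the plugin had access to the raw bytes of each fetched element (a hypothetical browser capability):

    import hashlib

    def find_mismatches(received, snapshot):
        """received: dict mapping URL -> bytes as actually rendered.
        Returns the URLs whose bytes differ from the server's manifest."""
        manifest = snapshot["manifest"]
        return [url for url, body in received.items()
                if manifest.get(url) != hashlib.sha256(body).hexdigest()]

    # A plugin would alert the user and log details for each mismatch,
    # e.g.: for url in find_mismatches(received, snapshot): report(url)
    # ("report" is hypothetical -- the logging/reporting side is open.)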
I won't get into technical details here on approaches to the nitty-gritty aspects of this concept. I have some ideas on implementation techniques, though I'd much rather see a rapid move toward full encryption.
However, I would certainly be interested in your thoughts regarding this concept of Web page validation and whether or not it might have a useful role to play, particularly to help gather evidence that might be useful in the ongoing network neutrality debates.
Thanks as always.
--Lauren--
Lauren Weinstein
lauren@vortex.com or lauren@pfir.org
Tel: +1 (818) 225-2800
http://www.pfir.org/lauren
Co-Founder, PFIR - People For Internet Responsibility - http://www.pfir.org
Co-Founder, NNSquad - Network Neutrality Squad - http://www.nnsquad.org
Founder, PRIVACY Forum - http://www.vortex.com
Member, ACM Committee on Computers and Public Policy
Lauren's Blog: http://lauren.vortex.com
------- End of Forwarded Message