There's no "dogma" in the
statement before the parentheses, and the parenthetical statement was
never intended to be the "most important part of the statement".
Please don't try to put words in my mouth, Jerry Saltzer's, or Dave
Clark's. I hardly forget what we meant - and it's both rude and
insulting (as well as "snarky" as hell) to suggest that I do.
As for your comments on what Tim Moors, Larry Roberts, or Louis Pouzin
say or don't say: since you try to twist what I said (and I wrote those
words carefully) into your own solipsistic view of it, I don't
feel a need to comment on *your interpretation* of what others have
said. I may agree or disagree with something Larry Roberts (a friend
with whom I often agree, but often disagree) actually is willing to
defend. But I don't argue with strawman interpretations of what he
means.
I note that you never respond to what someone said, just change the
subject and try to provoke. You certainly have not responded to my
observation that the facts don't support your idea that "internetting
was a flop", which seems to be one of your fundamental assertions for
why the Internet's core architectural principles should be viewed as at
an "end". Instead you request a response to some phrase (e.g. "TCP is
actually part of the network") that you make up to suit the occasion.
On 09/29/2009 07:02 PM, Richard Bennett wrote:
David, may I suggest that you read "End-to-End Arguments in System
Design" again? I know you were a co-author, but you sometimes don't
seem to remember what you and your friends wrote 28 years ago. This
passage is especially important:
"The function in question can completely and correctly be implemented
only with the knowledge and help of the application standing at the
endpoints of the communication system. Therefore, providing that
questioned function as a feature of the communication system itself is
not possible. (*Sometimes an incomplete version of the function
provided by the communication system may be useful as a performance
enhancement.*)"
The parenthesized part has always struck me as most important, because
it un-dogmatizes the preceding portion to a certain extent.
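[Editor's sketch, not from the paper or the thread: the classic
file-transfer reading of that passage. Only an end-to-end check over the
reassembled data can confirm correct delivery, while per-hop
retransmission can only reduce how often that check fails - the
"performance enhancement" the parenthetical allows. Function names and
data are invented; Python is used purely for concreteness.]

import hashlib

def send_file(data, chunk_size=1024):
    # The sending endpoint computes a digest over the *whole* payload;
    # only the receiving application can verify it after reassembly.
    digest = hashlib.sha256(data).hexdigest()
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return digest, chunks

def receive_file(digest, chunks):
    # The end-to-end check: no amount of in-network care (per-hop
    # checksums, link-level retransmission) can substitute for it.
    data = b"".join(chunks)
    if hashlib.sha256(data).hexdigest() != digest:
        raise IOError("end-to-end check failed; re-request the file")
    return data

# Per-hop retransmission on a lossy link remains useful purely as a
# performance enhancement; correctness still rests on the check above.
digest, chunks = send_file(b"some application data" * 100)
assert receive_file(digest, chunks) == b"some application data" * 100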
It also strikes me that the notion of the "communication system" here is
a bit ambiguous. Is TCP part of the "communication system", since it
controls data integrity and congestion management, or is it part of the
"application"? Most implementations put it in the kernel, which is not
exactly what we mean by "application" today, but not what we understand
as "the network" either, so its position is ambiguous.
Tim Moors, Larry Roberts, and Louis Pouzin say that TCP is actually
part of the network, even though it's clearly resident inside the host
OS. I wonder what your thoughts are on that.
RB
David P. Reed wrote:
Richard - this is just a bullshit response. You change the subject with
this typical rhetorical device: "without intending to, it agrees with my
original point". Your original point wasn't that at all, and you know it.
Your original points were that the Internet is no longer an
"internetwork" architecture at all, but a "self similar" recursive
structure, and that its historical rationale was this: to create a
space for experimentation.
Neither is true as a matter of fact - both are revisionist, and the
first is just plain wrong: integrating a very wide range of technologies
and applications is exactly what the network *does* today.
There is no slavish devotion to the status quo in the Internet today,
and there never was. However, good modularity of architecture has gone
a long way, and it continues to be important. That's why the
"end-to-end" class of arguments remain crucially important - precisely
because they give guidance to placement of functions: placing them at
the edge, rather than in the core. Even when "short-term" or
narrow-minded arguments are made to put more function into the core.
It's why MPLS wasn't put into IP, but instead used within subnets of
the network. It's why we don't adopt the IPv6 header as the Ethernet
protocol header structure, but instead embed IP in Ethernet. These are
not "optimal". The modularity, however, is.
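[Editor's sketch, not part of the original message: what "embed IP in
Ethernet" looks like at the byte level. The link layer prepends a
14-byte Ethernet II header whose EtherType field announces "IP inside"
(0x0800 for IPv4, 0x86DD for IPv6) and never parses the IP packet it
carries. The MAC addresses below are placeholders.]

import struct

ETHERTYPE_IPV4 = 0x0800  # registered EtherType for IPv4
ETHERTYPE_IPV6 = 0x86DD  # registered EtherType for IPv6

def ethernet_frame(dst_mac, src_mac, ethertype, ip_packet):
    # Wrap an already-built IP packet in an Ethernet II header.
    # The link layer needs only these 14 bytes; the IP packet stays
    # opaque to it - that is the modularity boundary.
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + ip_packet

ip_packet = bytes(20)  # stand-in for an IPv4 packet handed down by the IP layer
frame = ethernet_frame(b"\xff" * 6,                  # broadcast destination
                       b"\x02\x00\x00\x00\x00\x01",  # locally administered source
                       ETHERTYPE_IPV4,
                       ip_packet)
assert frame[12:14] == b"\x08\x00"  # EtherType field says "IPv4 inside"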
On 09/29/2009 06:05 PM, Richard Bennett wrote:
[Adding John Day because he's mentioned.]
This is actually kind of an interesting response, because without
intending to, it agrees with my original point. The last sentence says:
"No other architecture for networking has been so successful." This is
exactly correct, and it underscores my point. The Internet architecture
- however you trace its origins - has been incredibly successful for
networking. My point is that it wasn't successful for its initial goal,
something that was called *internetting*. What happened was that the
Internet ate all the networks that were interconnected through it, took
away their diversity, and turned them into little Internets. Questions
about traffic shaping, congestion, and flow control aren't
*internetworking* questions; they're questions that arise on single
networks not connected to other networks. And of course, the issues of
"flow control" on 802.11 aren't even networking questions, they're data
link layer concerns (and I will note that 802.11 doesn't do any
routing, it merely does data link layer switching, rather a different
thing.)
The separation of TCP from IP in the late stages of Internet design was
a very interesting thing, which had the effect of bringing the Internet
into closer alignment with CYCLADES, but had the unfortunate
side-effect of breaking the addressing architecture of the Internet.
CYCLADES, DECNet, and XNS all solved the addressing matter in a much
more competent manner than TCP/IP did.
None of this is to say that there isn't diversity above TCP/UDP and
below IP. There is. But this isn't network diversity, it's application
diversity on the one hand and data link diversity on the other; and
frankly, there isn't as much diversity in either place as I'd like to
see. We could have a much more open and diverse set of applications if
the network were more capable, and we could have a much more diverse
and efficient set of data links if IP weren't so stubborn. But
understanding why this is the case requires the exercise of some
imagination, not just slavish devotion to the status quo.
RB
David P. Reed wrote:
On 09/28/2009 10:25 PM, Richard Bennett
wrote:
The real rationale for the datagram network architecture was to create a
space for experimentation; that's why
everybody embraced it as soon as it was formulated. This internetting
thing was actually a flop; we actually have one big network made of
self-similar parts, not a bunch of different ones. Interconnection
works best if everybody runs all the same protocols, so we do.
Richard, this is what they call a "just so story". A myth made up to
explain origins. Like Kipling's "How the Elephant got his trunk".
1) Asserting that "the real rationale" was to create a space for
experimentation is historically wrong, as is the claim that "everybody
embraced it as soon as it was formulated". This is neither true of the
project to design and build the Internet protocols, nor is it true
about earlier datagram arguments. I say this as one of the people who
argued strongly for splitting TCP into two layers - TCP and IP, and
creating the space for the User Datagram Protocol. This decision, made
by the group of TCP designers, was hard fought for many reasons.
It was a good decision, but it was hardly "embraced by everybody" - in
fact much of the Internet community continued to claim it was a mistake
- that datagrams were a bad idea for congestion control and other
things, in other words for "network management".
The same thing is true for Pouzin's arguments, of which we were *all*
aware - at least those of us who fought to put first-class datagrams
into the Internet. Pouzin's ideas were resisted both in the traditional
"bell" community's approach to packet networking and in the research
community, except by a few folks who saw that the computer-to-computer
networks then only being imagined would be much better served by message
exchanges and complex multiparty protocols.
So much for that part of the "just so story" - "How the Internet got
its architecture"
2) The idea that we actually have "one big network made of self-similar
parts" is meaningless as a description of the Internet as it is.
Perhaps that is the "ideal" that is described as a desirable state of
affairs in John Day's book. But it's not real. First of all, the
network functions are not present in the same form or the same way at
all levels. Day considers only routing and flow control to be "network
functions", and even there he is wrong: the routing and flow control
mechanisms of the 802.11 MAC (which, I can't resist pointing out, *you
claim to have been responsible for*, though most doubt your claim has
much to it) are not "similar" in any respect to the routing and
flow control within the DOCSIS access network or the original ARPANET
that carried some of the Internet's traffic in its latter days.
In fact, the Internet is full of diversity, both below the neck of the
hourglass (IP) and above it. And the neck of the hourglass, while
providing some unity among many diverse parts, hardly makes it "one big
network" - it retains a great deal of flexibility because the IP
layer divides very clearly what endpoints can expect and what networks
can do in a way that is universally adoptable. But that layer implies
a minimal set of agreements - each one crucial to creating the
internetworking story.
3) Finally, your language in saying the "internetting thing was
actually a flop" is absurd. The entire history of the Internet has been
one of growth by bringing more and more networks into the overall thing.
The design was specified to run over any kind of network, precisely so
that it could integrate networks of many diverse types. Unlike John Day
and others who might think that we could throw
all those old technologies (DSL designed for video dialtone, ISDN for
smart phone instruments, Ethernet for local areas only, Bitnet, Frame
Relay for backoffice interconnect, ...) away and build a new network
from scratch, the Internet succeeded, and continues to succeed, by
absorbing new technologies and networks, according to the relatively
successful formula of transporting IP packets on top of any old kind of
network that can be made to be (see the sketch after this list):
1) addressable by some kind of binding (ARP, ...),
2) capable of forwarding IP datagrams between gateway routers,
3) not needing to inspect the content of IP datagrams to do the job, and
4) capable of signalling congestion or failure by dropping (and
optionally marking) IP datagrams.
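[Editor's sketch, not part of the original message: the four-point
contract above written as a hypothetical Python interface. The class and
method names are invented for illustration; the point is only that any
link technology able to offer roughly these operations can carry IP,
whatever its internals look like.]

from abc import ABC, abstractmethod

class IpCapableLink(ABC):
    # Hypothetical interface a subnetwork must satisfy to carry IP.

    @abstractmethod
    def bind(self, ip_address):
        # (1) Resolve an IP address to a link-layer address
        #     (an ARP-style binding, or any equivalent mechanism).
        ...

    @abstractmethod
    def forward(self, link_address, ip_datagram):
        # (2) Carry the datagram toward the next gateway router,
        # (3) treating its contents as opaque bytes - no inspection.
        # (4) Return False (i.e. drop) to signal congestion or failure;
        #     marking instead of dropping is the optional refinement.
        ...

class LoopbackLink(IpCapableLink):
    # Toy implementation for illustration: one fake link address,
    # never congested, never drops.
    def bind(self, ip_address):
        return b"\x00" * 6

    def forward(self, link_address, ip_datagram):
        return True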
It took a long time to sort out all the issues of the Internet's
deployment, because of the diversity of infrastructure, not because of
illusory "self-similarity".
This is actually not a "flop", but the opposite. No other architecture
for networking has been so successful.
--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC