>> Jonathan Grudin: Yes. Okay. We're very pleased to introduce Glenn Kowack, to
have him here to speak for us this morning. I've known Glenn for 20 years, for over 20
years, 25 years. We met in Tampa, Florida, and I have stayed in touch ever since.
You've seen his bio. He was the founding CEO of EUnet. He's had a very interesting
and varied career which he will probably tell you more about in the course of his talk.
One thing that's not in the bio is he's done a number of things in entrepreneurship, and just
in the last few days he has started River Wants LLC, which was too recent for inclusion
there. So he now has an affiliation as well. And I don't need to say more about Glenn
because so much of this particular talk is -- has an autobiographical element to it.
I will say that there's a small piece of this that he wrote up for the Timelines column in ACM
Interactions that I edit, and it's my favorite column in that three-year series. And so I'm
very glad to have him here to give this presentation about it.
>> Glenn Kowack: Great. Thanks very much.
[applause].
>> Glenn Kowack: So in the spirit of innovation, we have a small problem with inclusion
of several images that are supposed to appear on the next slide. So what I'm going to do
in the spirit of adaptation is Jonathan's going to walk through the audience and show you
-- and attempt to hold it up for the television cameras insofar as they are there and
there's anyone on the other side of them. The magic wall.
>> Jonathan Grudin: In sequence?
>> Glenn Kowack: That's exactly right. So you can get a sense of what's going on. And
you know, I've never tried the clicker before, so I'm going to do that. And here is where
the image appears. So maybe 10 or 15 years ago I found myself reading an article in the
New York Times about an early demonstration of telephony, one of the first long distance
telephone demonstrations ever. This was done during the time when the Internet was
taking off and exploding and it seemed appropriate to talk about the telephone briefly.
And this demonstration was from someplace oh, some distance away in New Jersey to
Manhattan. And they were having a long distance phone party; that is, they had people
at a party in New Jersey and people at a party in New York and they were talking to each
other. This is in the manner of the digital watch parties people used to have about 25 years
ago, and the digital calculator parties. I actually attended some of those. Pretty
embarrassing. But they were talking back and forth and just having a great old time, and
the reporter asked this question. He says, you know, what can this service be useful for?
How can we use the telephone -- how can it be useful?
A young man might use it to pop the question to his young lady. It seemed very practical
to do that over the telephone. And the point of course was that they couldn't figure out
immediately how useful the telephone might be. And that seems sort of fanciful and a
little bit absurd until you start thinking about what, hmm -- let's see. Is there something
magic I need to do? So I'm going to ask for a little bit of assistance here. Is Henry
around or -- [laughter].
So I should use that and not this?
>>: [Inaudible].
>> Glenn Kowack: I see. So go to.
>>: [Inaudible].
>> Glenn Kowack: Oh, as in that. Fine. Thank you. And now Jonathan will show the
next picture, which is a picture -- for those of you who can't quite see it -- of a pneumatic
tube system in, I think, London around 1910 or so.
One of the reasons why the telephone was incomprehensible was because it was so
new. It was just people had to think about what it was good for. But another reason was
that the world was shaped around all these technologies that they already had. The
biggest single factor was the geographic span of cities and nations and their markets.
Because at that point, cities were an invention largely based on the fact that we couldn't
telecommunicate across space conveniently, except for the existence of the telegraph,
which of course wasn't enough to break down the structure of the city.
So between 1853 and 1875, they developed pneumatic tube systems in many of the
major leading western cities in the world. And you could actually, by pneumatic tube, send
a letter from one end of a city to another. You could send a signed contract from uptown
Manhattan to the Wall Street area in a few hours.
Now, think about that for a second. Here's this telephone and you could talk at a long
distance. Seems like a really good idea, doesn't it? Sounds like a good idea to me. But
then again you could send a hard copy contract across Manhattan in about two and a
half hours. Doesn't seem very competitive to me. Rather than talk on the phone, you
could have sent a telegram, you could have sent a runner, or you've got this pneumatic
tube system. So why is the telephone even useful?
And that's where -- that's really the basis of a lot of where my talk goes today about the
Internet. And that is new technologies are difficult to understand because we already
have a matrix of markets and systems and people and technologies and procedures that
work for us very well. And those new technologies that come in might actually be a step
backwards. In particular, the step backwards that we saw with the telephone was that it
couldn't send hard copy, it couldn't send a contract; it took until about 1980 before we
started commonly sending faxes anywhere. So the telephone was a step backwards.
Why can that be any good? So what I want to do with the Internet today is talk about the
fact that the Internet, the great disruption that it was, was in fact something that wasn't
understood when it was started, and enjoyed or suffered from, depending upon how you
like to look at it, a process that I call anticipation, realization, and reanticipation.
As technologists -- and I assume most of the audience is working in the
area of technology, we have an interesting problem that we address everyday, and that is
if we're going to bet on a particular technology, either a technology that's somewhat
known or a technology that hasn't been invented yet, we have to make all sorts of
assumptions about what that's going to do, and if it's going to work the way we think it's
going to work. And to the extent that it diverges from that, is it going to be better than
we thought, worse than we thought, or just so different that all of a sudden we're in a
completely new world? And the Internet is an interesting, how should I say, an
interesting exposition of how that sort of stuff happened.
Now, in particular in the case of the Internet, I look at younger people who come into the
Internet nowadays, and I know they think that all this stuff is obvious, you know, and that it
probably was the result of monotonic improvement over the years; that is, we had this
plan, we did this next thing, we did this next thing and then we got this, and we knew what
we were doing. And building the web was obvious and doing e-mail was obvious.
And in fact, it was anything but. And I want to show how that developed in this particular
case. Some of the influences on how the Internet rolled out are technical, that is within
the technology itself. Some of them are external, also having to do with technology. And
some are much more broadly determined, in the manner of huge shifting social tectonic
plates, and let's take a look at those in just a moment.
I have an interesting pet peeve about the way engineering education is done in the
United States. Having had a little bit of exposure -- and only a little bit -- to business
schools, I notice that they're based on the case study method. The case study method works
for business schools. It sits at the intersection of process-like things, like accounting
systems and marketing techniques and so on, and the complexity of working in the
real world. When we educate our engineers, by and large we teach them the first
part, we teach them the processes, but we don't teach them much about case studies of
what happened in the world so they can develop intuitions. And what I'm trying to do with
this talk is to rectify that error just a tiny, tiny bit and see if it might help.
So my exposition rather arbitrarily starts with the beginning of the telephone and what
that engendered in terms of industrial development across the world. Very quickly, the
telephone spread around the world largely through the efforts of AT&T setting up
subsidiaries throughout the world, and those were very quickly nationalized over the
following two or three decades by the countries in which they occurred.
They largely did this under what I like to call a conservative, control-and-stability-first model.
Said a little bit differently, the countries that had these phone systems realized that they
were vital resources for the survival of the nation and its prosperity and they simply didn't
trust corporations or any other actors to run those things, so they were very heavily
nationalized. And these organizations were unbelievable. They were quasi-military in
their own structure, they had tremendous spans of operations, they did their own
manufacturing, design, R&D, deployment, testing, and, insofar as there was
marketing, marketing. They were just gigantic. And they were often run inside ministries of
telecommunications.
Now, when you hear ministry of telecommunications, you might think of one or two
buildings someplace in the center of Paris, or a Department of Communications in the
United States -- something relatively distinct that sets policy and direction and maybe law. Well, it
did all of those things and it also ran the phone system. It also ran the postal system and
the telegraph system, and hence the term PTTs: post, telephone, and telegraph. So these
guys ran everything that had to do with remote access within their countries and outside
of their countries. One of the things they did was to apply enormous hidden taxes to their
populations. In other words, they made a huge amount of money and very quickly
developed self interested bureaucracies that were vital for the survival of the nation. You
couldn't do without them.
In the United States we had a unique situation. We actually had a private monopoly,
thanks to the work by AT&T at the turn of the 19th to the 20th Century, but in the rest of the
world they were really part of ministries of telecommunications. To say they were
playgrounds for monopolistic practices would be kind of an understatement. The theory
at the time was that these were natural monopolies. There was no other way to run a
phone system, largely because they were scale businesses. Twenty phone systems would be
really quite counterproductive economically and inefficient. One would be extremely
efficient. Society generally thought that it was useful to have that trade-off: monopoly
for efficiency. It's not clear that that worked.
And in later times as we'll see in my talk, that started to break down quite a bit.
Some of the things that happened through this are very interesting. They had
enormously long-term financial horizons. So for instance, AT&T issued, as you can see
here, 40-year bonds, which they stopped doing, I think, 12 years ago. A 40-year bond is
really impressive. It means you have a lot of knowledge about where you're going to be
in 20 or 40 years, which means basically you're controlling the government.
So on a technical point, all work they did was based on the original work by Alexander
Graham Bell and others which was essentially circuits, a circuit meaning that if I was
going to make a call from Seattle to Chicago I would rent for the duration of my call a loop
of copper, literally a loop of copper from here and all the way back. If I was going to do
the same thing from here to Paris I would rent that loop of copper. And all the research
that happened for nearly the next century, in fact longer than the next century, was
revolving around this idea of circuits, a way to bridge space through this one simple
structure.
And all the PTTs that worked together worked pretty much in the same way, and they knew
they had to coordinate their operations. So they had equally heavyweight structures to
develop standards, usually under the UN, although there was a lot of work as a precursor to
the UN. And in those periods it was done by the CCITT -- the acronym fades on
me, but it's the consultative committee for international telephony and telegraphy.
So the PTTs marched under this model, and this brings us to roughly 1960, when the US
Department of Defense and others were very concerned about the survivability of the
American communications systems in the event of a nuclear conflict or a nuclear
standoff. I was actually involved in some modernization of the worldwide military
command and control system in the late 1970s, and even though the United States at that
time had -- we were told -- 43 or so different ways to get the go codes out, they were certain
that all of them were completely unreliable and would disappear within milliseconds of a
surprise attack.
So one can imagine it was a Cold War. There were a lot of hot rockets sitting in a lot of
silos on both sides of the globe. They were very concerned that there was no way to
actually communicate in a robust manner. The RAND Corporation, a recent invention of
the US government, had a very brilliant researcher by the name of Paul Baran, who came
up with the idea of a system that would provide for survivable communications under
hostile, that is, battlefield, conditions.
And what made that possible was the fact that we had advanced to the beginnings of the
digital age and the general-purpose digital computer. Before this time, all the work
that was done on circuits was based on copper lines, which were cheap relatively
speaking, and exquisitely expensive switching elements. The kind of stepper relays and
the other components that were used in the phone systems were brutally expensive. And
that meant that if you had a choice between doing some computation, which was next to
nonexistent, or transmitting something at a distance, you always chose the transmission,
because it was much cheaper than actually doing the calculation.
But in the case of the appearance of the computer now and digital circuits, we had the
opportunity to apply a little bit more intelligence to each one of the nodes. And looking at
that, Paul Baran came up with a really interesting idea, and I describe it as a bucket
brigade or post office model. Now, to explain packet switching takes a little more time, but
in brief, imagine that you've got a series of post offices in the country. Hey, we do have
post offices in the country. That's easy. Okay. And I want to send a letter from one to
the other. I address a letter, I hand it to a post office, they look at it, and they send it to
the next appropriate relay or post office to send it off. And they do the same thing.
Now, imagine if instead of a post office you've got an electronic system or a
computer, and instead of a physical letter you receive a series of bits with an
address on it, and you receive it here, you forward it on to this one, you forward it on to
that one, and so on and so forth. It's that simple. It's that stupid. And it is the entire
basis for the way the Internet works. Think of it as a bucket brigade where each member
actually looks at the side of the bucket and says oh, this needs to go to New York, let's
see, he's closer to New York than I am, boom. And sends it on.
The fundamental idea behind Paul Baran's work was to create a resilient network. This is
resilient because every time someone picks up a bucket and looks at the address for its
destination, they get to say oh, that's close to New York, that's close to New York, and
that one's been destroyed by a nuclear bomb, hmm, I think I'll send it to that one instead.
And they do. And that's the fundamental structure that makes the Internet work.
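The bucket-brigade routing just described can be sketched in a few lines of Python. Everything here -- the city names, the topology, and the per-destination next-hop tables -- is invented purely for illustration; it's a toy model of the idea, not of any real router.

```python
# Toy sketch of Baran-style store-and-forward routing with failover.
# Each node knows, per destination, a preferred next hop and a backup.
routes = {
    "Seattle":    {"NewYork": ["Chicago", "Denver"]},
    "Chicago":    {"NewYork": ["NewYork", "Pittsburgh"]},
    "Denver":     {"NewYork": ["Chicago", "Pittsburgh"]},
    "Pittsburgh": {"NewYork": ["NewYork"]},
}

def deliver(source, destination, down=()):
    """Pass the 'bucket' hop by hop, skipping any node that is down."""
    path, node = [source], source
    while node != destination:
        # The node reads the address on the bucket and picks the
        # first surviving neighbor that moves it toward the destination.
        next_hop = next((n for n in routes[node][destination]
                         if n not in down), None)
        if next_hop is None:
            return None  # no surviving route at all
        path.append(next_hop)
        node = next_hop
    return path

print(deliver("Seattle", "NewYork"))
# -> ['Seattle', 'Chicago', 'NewYork']
print(deliver("Seattle", "NewYork", down={"Chicago"}))
# -> ['Seattle', 'Denver', 'Pittsburgh', 'NewYork']
```

The resilience falls out of the structure: no node holds a circuit, so losing any one of them just means the next packet takes a different hop.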
Now, when Paul Baran did his work, he had absolutely no idea that he was going to
create an Internet. He was doing a study in an environment that was largely based on
statistical work -- there's a lot of statistics behind these models. And no one had any
vision that they were going to create an Internet. They were maybe, maybe, maybe beginning
to solve the problem of survivability of communications under hostile, that is, battlefield,
conditions.
In the meantime, these computers were being developed and they developed under a
very interesting direction. By the early 1960s, the world of computing in the United
States -- which was the world of computing in the world -- was dominated by six
corporations known as IBM and the BUNCH.
BUNCH stands for Burroughs, UNIVAC, NCR, CDC and Honeywell. And what's
interesting is it's really not fair to say IBM and the BUNCH because in fact it was IBM and
nobody else because IBM had north of 80 percent, sometimes 90 percent of the market.
So much so that they tiptoed around all the time trying to avoid antitrust suits from the
Justice Department.
The industry was making a transition from tubes to solid state components in the early
1960s. In a manner very similar to the PTTs the things that they were developing were
being built in a highly vertically integrated and a very broadly structured large industrial
base. So IBM did its own research, development, planning, marketing, manufacturing,
deployment and so on. IBM was so big that I heard multiple engineers at IBM tell me that
if you were writing in an internal technical journal you weren't allowed to reference an
article that wasn't in an IBM technical journal. Now, I'm not entirely sure that's true but I
heard it enough times that there was certainly some truth in there. They were enormous.
And not only were they enormous organizations but they made well over 360 degrees of
their entire product lines. So they built the hardware, they built the operating manuals,
they built the training, they built the operating systems, they built the software and the
services and everything else that went on top of them. And each one was completely
unique and bound to that single manufacturer, hence the proprietary model of computing
and software.
In the meantime, research was ongoing. How do we make these computers more
useful? Now, remember, they were big multimillion-dollar installations for anything of any
effective computing power. So a lot of the thinking went in two directions.
One is, how do we make one computer more useful for more people? Time sharing was
developed by John McCarthy at MIT in the 1950s to help take that into account. And
second, they started thinking of computers as a utility. How do we make them
available to the public? Well, we've got a power plant in every city. Why don't we now
take these time sharing systems, beef them up, and have some way that people can get
to that computing facility, either physically or maybe even over the phone? And the
greatest project to implement that was Multics, a multi-user computing system developed at
MIT in the late '50s and 1960s, and participants besides MIT included General Electric
and AT&T, which will turn out to be very important later on in the development of the
Internet.
Data processing departments were extremely powerful priesthoods. They ran
everything. You had to beg them for any kind of service. They smelled, acted, and
looked like PTTs. Said differently, you weren't really a customer if you were using a data
processing department. They would tell you what you could get and when.
By the late 1950s, we did see one upstart appear, and that was Digital Equipment
Corporation, based in Massachusetts, itself a spinoff of MIT by Ken Olsen, an
engineer at MIT, and others. They started to attempt to get out of the world of the
BUNCH -- Burroughs, UNIVAC, NCR and so on -- by simply going to a place that IBM didn't
think it needed to go, which was small minicomputers: a computer that could operate
in your department or sit on your desktop or work in your lab. And they were really
quite successful.
But things were still very niche, and computers were still things used by eggheads and
boffins and so on and so forth. Things like hypertext that we take for granted were still
only the beginnings of an intellectual exercise. I think the term was coined by Ted Nelson in the
mid '60s or so. Moore's law had not yet been conceived. People thought about the
growth of computing, but there wasn't a driving fantasy about how fast computing could
advance. Moore, by the way, only called it out in 1965,
and Carver Mead only named it in 1970. So really fairly late in the day in the
industry.
In the meantime, universities also had a feel like the PTTs and like the great computer
companies. They were large ivory-tower institutions, they were separate from society,
they generally went their own way. Unlike the other organizations, they were involved in
almost strictly pure research and had a huge sensitivity about doing anything that was
too commercial or would distort their seeking of the truth. In their pursuit of the truth, they
didn't pursue intellectual property. What they did do is pursue ideas, which would
then get out into the rest of the academic world and create a certain degree of buzz
or a certain degree of interest there, and maybe, just maybe, someone would
commercialize it, but generally that wasn't the model. And certainly there was no model to
retain any kind of intellectual property. No one expected to go ahead and build patents or
anything of the sort.
Things like university licensing offices, known as offices of technology management,
were nonexistent. They simply let these things out the door and didn't retain any rights to
get any kind of cash flow. In the meantime, new enterprise formation and financing
was completely primitive compared to today. They hadn't figured out the basic rules of
how entrepreneurship or how innovation works, which were worked out by Peter Drucker,
probably by about the '60s. Venture capital was relatively nonexistent. If you wanted to
fund a new company you needed to go to a great family, like, in the early days, the
Rockefellers.
So generally if you were in the academic environment you didn't necessarily think about
the direct application of what you were going to do. That was something that someone
else was going to take care of, maybe sometime in the future. While all this is going on,
the development of a computer industry, the state of the academics, the evolution of the
PTTs continued. And I forgot to mention at the beginning of my talk: if you want to
interrupt with questions, please feel free. If they are not appropriate for the moment, or
I'm going to talk about them later, I'll ask you to defer. But was there a question at all?
Did someone have something? Okay. Great.
So in the meantime, the PTTs were evolving, and they were involved in the same great analog-
to-digital conversion that the computer industry represented; now they were trying to do the
same thing once again with the phone systems. Instead of renting a full loop of
copper from Seattle to Paris, what you would do is rent a time share -- a little
time slice or a statistical slice of that loop of copper. Still conceptually circuits: they
were now virtualized, but circuits nonetheless. Whenever the PTTs did anything -- again,
remember this control and stability first model, they based all their work on three
fundamental criteria which they never wavered from. The first one is that anything they
did had to be profitable, it had to have a viable business model. Not only did it have to
have a viable business model, but it had to make strategic sense for the organization not
just now but in three years, five years, 10 years as they would turn the crank. And one of
the big questions they would ask themselves is does this threaten our business, does this
threaten our way of work?
The second thing is they always had service-level guarantees. Now, remember, telephony is
realtime. If you call someone from here to Paris, you've got to get a response to your
voice in about 35 milliseconds, or the lack of synchronization will drive you nuts and it
won't work at all. That's pretty snug for the kind of distances you're covering -- not in terms
of the speed of light, but in terms of all the other operations that need to take place
technologically. So they were fundamentally married to the idea that they had to have
realtime performance and they had to be able to guarantee it.
Now, when I say guarantee it, I don't mean it lightly. Even in the 1950s, the design model
for the AT&T system in the United States was four minutes of downtime in a year. Which,
when you think about it, is stunning. And they did that with battery backup, with their own
diesel generators in every central office in every town. They did it with tremendous
redundancy, and they built these things to be completely bulletproof.
And then finally everything they did required security. The world of security in the 20th
Century was very different from the world of security today, because largely it was based
on physical security. You couldn't get to the phone lines. People didn't have access to
the technology. And things were very distinct in terms of here's a phone system, here
are the wires, here are the data closets or the phone punch-block closets and so on and
so forth. So security was largely a matter of being separate from other areas, and most
people didn't have access to it. But they nevertheless took security very seriously.
As the evolution of the phone system was proceeding, they moved in several technical
directions. One was the development of asynchronous transfer mode, which was a low-
level transmission system for the phone systems. They also pursued an open systems
interconnection model, and finally they pursued what we call multimedia. They knew, as
most people in the scientific field understood at that time, that digital technologies could
eventually embrace all media. It wasn't obvious in the 1950s and 1960s that we could in
fact take the voice and music and digitize it to the point where it would sound good to
people or even be useful at all. But that was the direction they knew they were going and
much of the work they were doing here sought that multimedia holy grail.
The Defense Advanced Research Projects Agency, which was created in response to
Sputnik in the late 1950s, was addressing multiple issues related to the Cold War and the
defense of the United States. And one of the things they looked at was the question of
more robust telecommunications. And in the early 1960s, they put together some
programs to see if they could develop new, more robust technologies in that same vein.
And by the late 1960s, they had sponsored the development of the first major American
packet-switch-oriented network, and it was called the ARPANET -- the
Advanced Research Projects Agency Network in those days. And they had two goals.
The first goal was to connect expensive computers, and the second was to research
packet switching. What was so fascinating about their work was that the way they really
thought they would use the network was to connect different
computers at, let's say, UCLA, UC Santa Barbara, Stanford, the University of Utah and so
forth, and make time sharing more effective. But they didn't think it was going to happen
very quickly. They thought instead that they'd need quite a bit of time to go and do their
work with packet switching before it would really become all that practical. And it was exactly the
opposite. The network worked extremely well right out of the box. In fact, it worked so
well that the people who were using the network for daily operational requirements
were getting in the way of the researchers. And there was quite a bit of tension there.
The researchers realized they were getting blown over by this wave of use. It was
really fascinating.
So sometime in October 1969 the first ARPANET packet transmissions occurred
between Stanford Research Institute in Menlo Park, UCLA in Los Angeles, and UC Santa
Barbara. People at DARPA knew that they needed very desperately to make it possible
for people to use machines remotely because they were just so expensive, but they
never anticipated that it would take off anywhere near as quickly. Now, they had no
commercial agenda whatsoever at all. They were solving a problem related to
national defense, with a little bit of cost economy in terms of the use of computers inside the
academic research and Department of Defense world.
They had no idea that any of the structures they were using were going to scale the way
they did. They had no idea that the work they were doing was going to be useful
commercially per se. They really had their heads down to solve their own problem in the
manner of the academic world that I described in the previous slide and in the manner of
the work of the DOD in those days. To reinforce it, they had an acceptable use policy.
Essentially, you couldn't do commercial work on there. And if you did, you would get your
wrist slapped and get told to get off the net real fast. But nobody did anyway.
This network had some major unexpected consequences that no one
really understood at the time. Now, I want to step over and talk about their pursuit of this
technology. Now, I mentioned that they started the ARPANET in 1969, and you recall they
had this bucket brigade model that Paul Baran developed. The bucket brigade model is
extremely flexible. And you could set up virtual circuits if you wanted. But it was much
simpler and much more productive, much faster -- and you could get your development done
much more quickly -- if you simply didn't bother with setting up a lot of what we call
resource reservation on a per-node basis. All you do is get a packet and move it
forward. And that's called connectionless. It means there's no virtual circuit there, there's
no circuit of any kind; you're just getting a letter, looking at its destination, looking in
your tables for the next hop, for the next location, and out it goes.
And this idea of connectionless networking took off extremely well, as I described a
moment ago in the context of the ARPANET. And now we discovered that we had two
worlds of telecommunications, which no one saw coming. I don't think the word
"connectionless" appears in Baran's work -- although I want to be careful; it
may. But he certainly didn't say, oh, I'm going to invent connectionless networking. He
was trying to solve a different problem.
In connection-oriented networking -- the kind of stuff that's done in the phone company
with virtual circuits -- you set up a path of hop to hop to hop, where you say, okay, if
I'm going to do a connection from Seattle to Paris, what I need to do is make sure there's
enough computing resource in each node between here and Paris to keep that phone call
going. Because if somebody from over there wants to connect to this node, and
somebody from over there wants to connect to this node, and they all collide, there aren't
going to be enough resources for all the intermediate hops. And remember, telephony is
exquisitely sensitive to realtime behaviors.
So if you're going to do work with connection oriented networking and if you're in the area
of the PTTs and you start to take advantage of all this new computing power hop to hop,
you're going to say right from the get go we need to set up connections, we need to do
resource reservation. Well, it's complex. It ain't simple because you have to have all
sorts of ways to talk to a central service that sets up the circuits, that talks to all the other
guys who are setting up circuits, that reserves connections, and then what do you do if
one of them collapses, and so on and so forth?
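A minimal sketch of that per-hop resource reservation. The node names and capacities are made up for illustration; the point is that setup must succeed at every hop or be torn down:

```python
# Illustrative sketch of connection-oriented setup: reserve capacity at
# every hop along the path, and tear the reservation down if any hop is full.

def setup_circuit(path, capacity, demand=1):
    """Reserve `demand` units at each node, rolling back on failure."""
    reserved = []
    for node in path:
        if capacity[node] < demand:
            for n in reserved:          # a later hop was full: undo everything
                capacity[n] += demand
            return False
        capacity[node] -= demand
        reserved.append(node)
    return True

capacity = {"Seattle": 2, "Chicago": 1, "Paris": 2}
print(setup_circuit(["Seattle", "Chicago", "Paris"], capacity))  # True
print(setup_circuit(["Seattle", "Chicago", "Paris"], capacity))  # False: Chicago is full
```

Even this toy shows the overhead: state at every intermediate node, a failure mode at every hop, and cleanup logic that connectionless forwarding simply doesn't need.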
So it's messy. But the guys at ARPANET on the other hand said, no, we don't have to
solve any realtime problem, we just want to see how this packet switching thing works.
We're not going to bother. We're just going to let the packets go where they go and see
if it works at all. Surprisingly, it did. I'm going to step back and look at my slides and
see if there's anything I really want to add, things that I've said before. In sum, though, if
you're going to set up virtual connections in a packet switching environment it's
exceedingly complex to do it. Huge overheads. You need to do a lot of work and you
have to have bigger nodes that are more expensive. If you're going to do connectionless
networking with this bucket brigade model, it's exceedingly simple, and it's also
exceedingly robust, because you don't have any connections to break in the first place.
You knock out a guy in the bucket brigade and you send packets to the next guy instead.
Unlike for the PTTs, the three critical ideas of a business model, guaranteed performance,
and security were irrelevant -- and I want to emphasize the point, it's not irrelevant in the
sense of, well, do we need security here, do we need a business model -- it was
nothing like that. They simply weren't doing that kind of stuff. It was just not on the plate.
And furthermore, ARPANET applications didn't need any kind of high performance.
They were doing things like file transfer and remote job entry and e-mail, all
of which can be sent in minutes to hours to days even and be far more productive than
anything we had before. The academic environment turned out to be extremely compatible
with this kind of research because although academics can be very demanding they also
appreciate the fact that they're working on something that has not yet been fully invented.
So there's a discovery process of figuring out what that's about and they're willing to live
with it and flex with it.
So we saw by the 1980s the creation of two worlds in telecommunications. The guys in
the PTTs, the connection-oriented guys, the virtual circuit guys, and the guys over in
the ARPANET doing connectionless networking. There really wasn't an antipathy
between these two groups, they'd actually gone to the same schools and they knew each
other and there was a moderate amount of mixing although career paths and the like
tended to create two very distinct communities. But they knew each other. They knew
what each was doing. And they knew they were doing completely different things.
The phone companies were making systems that people needed, that ran realtime, that
were reliable, that had a business model, that worked, and so on and so forth. They knew
the Internet, insofar as they even thought about it, couldn't satisfy real world requirements; it wasn't
going to do the job. So it's kind of, those boffins, to use the British expression, out
there in the corner. Fine, let them go do it.
The Internet community knew that they were a prototype, knew that they weren't going to
do anything commercial, knew that it wouldn't work for the phone system anyway and just
kept barrelling along. It was really quite amazing. They simply didn't pay that much
attention to each other, because they knew one guy was doing one thing -- business, and
hard real world things that people need, you know, down in the trenches -- and the guys up
in the ivory towers were just mucking around with stuff, and maybe it was pretty neat,
but it was ivory tower stuff, and that's it.
>>: [Inaudible] phone line systems for the -- to accept the data?
>> Glenn Kowack: Absolutely. And in fact, there were some -- there were a lot of meet
points in terms of people and technology. But by and large what occurred was that the
standard phone systems in those days were still analog and what the ARPANET people
would do is they'd simply rent long distance analog lines, and then they would put their
modems on either end and take it from there. So even that degree of cooperation at the
level of packets was nowhere near existing. It's a really good question. But so it's really
crazy, they're just going on completely separately. Now, this point can't be made too
many times, so I'll try to make it one more time. They just weren't thinking about
commercialization. It wasn't an issue. It's not that they didn't get economics right, they
didn't get realtime right, who cared? Right? They just didn't think about it.
The PTTs in the meantime didn't pursue connectionless networking. Why? As I said a
moment ago, they knew it couldn't work. It just wasn't the technology. And they had all
this momentum anyway, and not to mention what I like to call career equity. If you've
been working on a technology your entire life, you're not going to stop when you're 47.
You're just going to keep barrelling -- maybe we can -- oops -- go on to retirement age.
Hold on a second.
So there we are. They could have pursued packet switched networking and a
connectionless model, but they didn't. You know, time horizons, all sorts of other things --
completely understandable. They just didn't bother. But history marched on. So after that
first ARPANET transmission, circa 1969, we started to see some new technologies
coming in out of left field. Nobody ever anticipated these technologies. UNIX was
developed by some former MIT Multics researchers, most particularly Ken Thompson
and others at Bell Labs in Murray Hill, New Jersey, pretty much as an internal research
project. As I recall, it was for a space navigation game program, and they needed a new
operating system to run it.
So Ken Thompson wired something together on an old discarded PDP-7. The system he
designed was based on Multics; he had worked on Multics. AT&T was in the Multics project that I
mentioned before, but it pulled out to focus its resources elsewhere. In
fact, that's a good entrée to a very important point, which is that thanks to a 1956 consent
decree AT&T was not allowed to do anything in software or computer services. It had
agreed with the government that it was going to do telephony, it was a monopoly, it was
going to stay there; otherwise it could unfairly go out and eat one industry at a time using
its monopoly funding of the phone system.
So the stuff they did at Multics, the stuff they did with UNIX, was never done with
commercialization in mind because they couldn't. And from my reading of history, I think
this is probably pretty safe, no one was out there thinking okay, well, maybe in 10 or 15
years we'll start nudging into that. These researchers were simply researchers at Murray
Hill, very ivory towerish, just like the academics of the era.
The UNIX operating system -- everyone knows a bit about it, so I'll just make it quick. It's
simple, it's stupid, it doesn't do much, but it does it well. It's very adaptive; it didn't
solve every problem, but it solved the problems the guys had at Bell Labs. Sounds an awful
lot like connectionless packet switching, doesn't it? At the same time Ethernet
technology was being developed. Now, there's a wonderful story here. If you want to
see things that are contingent, this is just a perfect example. In the 1960s, the people in
the research community, academic community in Hawaii needed a way to communicate
between the islands. They couldn't lay sub-ocean cables, too expensive or didn't have
the capacities they needed. So they started doing things by radio. And they started
doing computer to computer communication by radio. Sounds pretty simple. But some
of those islands are a few hundred miles apart, which means there's some time lag. And
if you start sending packets over radio, it might be that I start sending a packet to him
while she starts sending a packet to him and they both land at the same time and he
can't hear a thing because they're both on the same frequency.
So the guys in Hawaii developed this thing called ALOHAnet, cute name and it said
basically we'll just let anybody broadcast to anybody in these 20 or 30 nodes, however
many they had, probably a lot fewer than that to start, and if somebody can't hear, if they
don't get it, they'll just ask for a retransmission. Or even better, everyone will listen and if
they realize during the time of their transmission of a packet, one or two or three, that
somebody else was transmitting, they'll back off for a second or two and then they'll
transmit again. And everybody used a random variable, so one guy would back off
one second, one guy would back off five seconds, and so on and so forth. Pretty
interesting idea.
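A toy simulation of that random-backoff scheme. The round structure and the one-to-five backoff range are invented for illustration, not taken from the real ALOHAnet:

```python
import random

# Toy simulation of the ALOHA idea: everyone transmits whenever their
# backoff timer hits zero; a lone transmitter succeeds, and simultaneous
# transmitters collide and each back off for a random number of rounds.

def run_aloha(n_senders, seed=0, max_rounds=10_000):
    rng = random.Random(seed)
    senders = [{"wait": 0, "done": False} for _ in range(n_senders)]
    rounds = 0
    while not all(s["done"] for s in senders) and rounds < max_rounds:
        rounds += 1
        ready = [s for s in senders if not s["done"] and s["wait"] == 0]
        for s in senders:
            s["wait"] = max(0, s["wait"] - 1)
        if len(ready) == 1:
            ready[0]["done"] = True       # only one talker: packet gets through
        else:
            for s in ready:               # collision: back off a random time
                s["wait"] = rng.randint(1, 5)
    return rounds

print(run_aloha(1))   # one sender, no contention: done in a single round
print(run_aloha(5))   # more senders: collisions force extra rounds
```

The random, uncoordinated backoff is the whole trick: with no central scheduler at all, the senders' timers drift apart until each one eventually gets a round to itself.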
Well, Steve Crocker, one of the great leading lights of the Internet, the guy who largely
invented the Request for Comments series and was one of the original organizers of the
Internet Engineering Task Force, the place where all Internet standards are developed,
was talking to his house mate in Cambridge in the late 1960s and that happened to be
Bob Metcalfe. And I think Bob had just had his Ph.D. thesis rejected from Harvard. His
thesis was on Multics, and they found it insufficiently novel or new or something like that.
So Metcalfe's looking around for a new topic, and they're hanging around the living room
together one day, this according to Steve Crocker about two years ago, and Crocker said
why don't you go take a look at that thing called ALOHAnet, which Metcalfe proceeded to
do and he proceeded to do his Ph.D. thesis around that, which was eventually accepted I
think in '74, thereabouts, by Harvard. It was based on his work taking the ALOHAnet
and putting it inside a coaxial cable -- basically taking the radio environment of the
Hawaiian Islands and putting it inside a single enclosed radio environment, a coaxial cable.
Applying the same rules, some better statistics, and some other details of engineering,
you had a way for people to get on a local area network with individual transmitters that
didn't know about any other individual transmitters, that just sent packets out there, and if
they collided they'd retransmit. And it turned out to be dumb, simple, terrific -- and terrible for
realtime, because you could never guarantee a delivery time, just like connectionless
couldn't do the phone system.
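Those "better statistics" were, in essence, binary exponential backoff: after each successive collision a station waits a random number of slot times drawn from a doubling range. A sketch -- the 1024-slot cap follows classic Ethernet practice, the rest is illustrative:

```python
import random

def backoff_slots(collisions, rng):
    """After the k-th straight collision, wait a random number of slot
    times drawn from 0 .. 2**k - 1 (range capped here at 1024 slots)."""
    return rng.randrange(min(2 ** collisions, 1024))

rng = random.Random(42)
for k in range(1, 6):
    print(k, backoff_slots(k, rng))  # the possible wait doubles with each collision
```

Doubling the range means a busy wire thins itself out automatically, but it also means you can never promise when any given packet will get through -- hence "terrible for realtime."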
It's kind of like there were these waves of things happening and something I didn't
mention with the UNIX operating system, not only was it basic and simple but it couldn't
do realtime either, which everyone knew in those days an operating system needed to
do. It just had a scheduler that gave everybody a shot, which seemed fair. You know,
here are queues and you wanted to do a job, well, we'll take it off the queue when we
think it's about right and we'll kind of balance it out, make sure no one gets starved, make
sure no one gets too much and so on and so forth.
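That fair-share idea can be sketched as a simple round-robin queue. The job names and run times below are made up; the point is that everyone gets a turn and nobody starves:

```python
from collections import deque

# Illustrative round-robin scheduler in the UNIX spirit: every job gets a
# fixed time slice in turn, so no job starves and no job hogs the machine.

def round_robin(jobs, quantum=2):
    queue = deque(jobs)              # items are (name, remaining_time)
    order = []                       # the sequence in which jobs get the CPU
    while queue:
        name, remaining = queue.popleft()
        order.append(name)
        remaining -= quantum
        if remaining > 0:            # unfinished: go to the back of the line
            queue.append((name, remaining))
    return order

print(round_robin([("a", 3), ("b", 5), ("c", 1)]))
# ['a', 'b', 'c', 'a', 'b', 'b']
```

Note what it deliberately doesn't offer: no deadlines, no priorities, no guarantee of when job "b" finishes -- fair, simple, and useless for realtime, just like the rest of the stack.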
Ethernet's in the same vein: it was just simple and stupid and worked very effectively.
Now, there was no reason to think that this stuff would actually work because it's based
on all sorts of frankly very exquisite statistics. Leonard Kleinrock at UCLA did some
traffic work demonstrating that, well, you know, you're going to get congestion but a lot of
the time you're not. In other words, most of the time it's going to work pretty well.
So the Ethernet technology worked very, very well, not quite out of the box but pretty
close. And then in, I guess, the mid '70s, 1975, Mike Lesk and others at AT&T came up
with this program called UNIX to UNIX copy. Now, they'd got all these UNIX machines that
had been distributed to different -- actually, UNIX operating system software that had
been distributed to dozens of universities around the United States. UNIX took off
like wildfire in this small academic community. People were having a great time with it.
And AT&T was allowing people to freely use the software and trade it and fix it, do
whatever they like. Really a very important input to the beginnings of the Open Source
movement. It was being done that way because AT&T couldn't make money on
software. They couldn't sell it, they couldn't service it, they could only give it away. So
they proceeded to do that and created this huge community of interested people.
But they were giving away tapes and it was taking a lot of time. They were thinking
maybe there was some way we could distribute this over the phone system. So they
developed this system called UNIX to UNIX copy, really simple, you got a UNIX machine
over here of any kind and it's capable of dialing a phone number, you've got another one
over here that's capable of receiving a call, the two of them call each other and they
agree to exchange bits, and UNIX to UNIX copy takes care of that. This copy
system was a huge capability for almost no cost, and like UNIX and like the Ethernet it
wasn't terribly high performance, it wasn't realtime, it was only reasonably reliable, but it got
the job done, orders of magnitude better than before. Anything over zero is pretty good.
So also around this same time, 1975, they had enough experience now with the
ARPANET that they started to figure out a huge division of labor between the different
layers and we won't get into that too much, but the different layers of functionality that
were happening in the world of the ARPANET, and they broke a program called NCP -- the
network control program that ran the early ARPANET -- into two layers: the upper layer,
TCP, and the lower network layer, the IP protocol, that was defined by Cerf and Kahn in
1974 in their famous IEEE paper.
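The division of labor that split created can be sketched roughly: IP just moves individual addressed datagrams, best effort, while TCP, a layer up, numbers the pieces so the receiver can put them back in order. Everything below is an illustrative toy, not real protocol formats:

```python
def tcp_segments(message, mss=4):
    """TCP's side of the bargain: cut the message up and number the pieces."""
    return [{"seq": i, "data": message[i:i + mss]}
            for i in range(0, len(message), mss)]

def ip_datagram(segment, dst):
    """IP's side: wrap one piece with an address; best effort, no ordering."""
    return {"dst": dst, "payload": segment}

def tcp_reassemble(segments):
    """Receiver's TCP reorders by sequence number, hiding network reordering."""
    return "".join(s["data"] for s in sorted(segments, key=lambda s: s["seq"]))

datagrams = [ip_datagram(s, "10.0.0.2") for s in tcp_segments("hello world")]
arrived = [d["payload"] for d in reversed(datagrams)]  # network delivers out of order
print(tcp_reassemble(arrived))  # hello world
```

The layering is the point: the connectionless network underneath is allowed to reorder or drop things, and all the repair work lives at the edges, in TCP.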
So while this is going on and all these developments are happening, a lot of people are
finding this stuff very useful and very exciting. So there develops in the United States a
UNIX users group, which after receiving a nasty letter from Bell Laboratories renamed
themselves USENIX -- because you can't use UNIX, it's a trademarked term, in spite of the
fact that no one can sell it or make money off it. So USENIX developed into a very large
community. Starting in 1975, UNIX enthusiasts, originally at universities, and a lot of
corporations including Digital Equipment and others, started to play with this thing and
have a lot of fun.
The same thing happened in Europe several years later, in 1979. Europe had a
different complexion because, being a region of dozens of countries rather than one
country, it had national groups and so on and so forth. But there was generally a huge
community of enthusiasm and a lot of ferment, a lot of interaction. Not a small population
after a time. And one of the things that happened simultaneous to this, now in the late
1970s, was that Tom Truscott and some of his buddies at Duke University, grad students, said
you know, we've got this UNIX to UNIX copy thing and we've got people calling
computers back and forth, why don't we set up a really loose system? We'll just have
our system here at Duke and we'll say, call us at night, dump your mail, and then call us a
few hours later and we'll send you the mail we got from the other guys. And by the way,
you do the same thing. And actually it was originally called the UUCPNET, but people
know it as Usenet now. The only rule is, if you're going to, you know, get stuff from us,
you've got to also act as a relay. So if we relay for you, you relay for somebody else.
And within a few years they had 50,000 nodes running. That's a lot
of nodes in those days. That's a lot of UNIX systems.
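The relay rule is what made it spread: an article floods through the graph because every node that receives it also passes it to its own neighbors. A sketch, with invented node names:

```python
from collections import deque

# Illustrative Usenet/UUCP-style flooding: each node that receives an
# article also relays it, so one posting reaches the whole graph. A "seen"
# set keeps any node from receiving (and re-relaying) the same article twice.

def flood(origin, neighbors):
    seen, queue = {origin}, deque([origin])
    while queue:
        node = queue.popleft()
        for nxt in neighbors.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

net = {"duke": ["unc", "research"], "unc": ["duke"], "research": ["ucbvax"]}
print(sorted(flood("duke", net)))  # ['duke', 'research', 'ucbvax', 'unc']
```

No one runs the network: each node only knows its own dial-up neighbors, yet every article eventually reaches everyone reachable, which is how a volunteer system grew to tens of thousands of nodes.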
I think the vast majority of them were in fact minicomputers. So there were large
corporations all over the place, and the folks at USENIX agreed to become an umbrella for
what was going on there. Still completely free. Oh, fun point. It used to cost about $2.35
a minute to call from Chicago to Florida, which I did a lot in the '70s. So one of the
nice things about the Usenet and UUCP is that you could make local calls pretty much all
the way across the country and have effectively a free e-mail service. It's not quite that
simple. But they did piggyback on lots of universities, which had WATS systems -- a term
that's not used anymore, wide area telephone service -- and so on and so forth. So it was
a really great way to hide under the radar.
Also, you already had an account at the university for your phone system. You didn't
have to get anybody's permission. You just started making phone calls. Although the
bills did start to pile up and people got surprised. Soon after this, in 1982, the guys in
Europe, looking across the pond, said, you know, they're doing this in the United States,
hmm, maybe we can do something similar, maybe we can get out there and have a way
to do this. In Europe it was much more complex. You might not know, for lack of
experience or having heard the history, that it was illegal to hook a modem up to the
phone lines until -- Jonathan, do you know the date, 1970, 1972? Very late in the day.
Illegal to hook up a modem unless it was an AT&T modem, and even then they were
huge, expensive, and nobody used them much.
In Europe it was even worse, because there were dozens of modem standards, it was
very illegal to connect in Europe, and you couldn't make phone calls from Paris to Berlin
to save your life. If you thought 2.35 a minute was expensive, you should have tried
European calls. So they set up a two-tier hub and spoke network where, just like Tom
Truscott and friends at Duke, they had a concentrating center in each country, and
everybody called into their country's center, and then every night the centers would call into
Amsterdam and exchange at the second layer up in that manner. Worked very effectively,
all very convivial, and a volunteer network.
In 1983 ARPANET converted to TCP/IP and started to grow technologically. More on
that in a moment. And then completely out of the blue came something nobody expected.
Remember, this was the world of mainframes and minicomputers up until now.
And then this thing called the personal computer was invented. We saw the Apple, the Apple
II -- the IMSAI was around during those days -- and soon thereafter the rise of the IBM PC, which
I suspect people in this room know a little bit about. Also not expected -- some of you may
know the famous quote by Ken Olsen. They thought that there would be no
market there. IBM in fact in the 1960s and '70s did a huge Future Systems project to
determine if they should move computing to the desktop. They decided it was not viable,
decided they would not do it, although there were big politics involved in that one. But in short,
no one saw it coming.
And in 1986 we had the NSFNET-ARPANET cutover. Now, the reason why this is
significant is because up until this time, the ARPANET was connecting universities all
over the United States and a few in Europe via this one backbone, so computers connected
directly from their LANs to the ARPANET at the University of Illinois, Stanford, MIT, the
University of Maryland, Washington, and so on and so forth. But what the NSFNET did
was, by a huge evolutionary leap in protocol design -- that is, the software that controls
communications in digital networks -- they went and they broke the ARPANET into two
parts. They maintained the backbone, but they set up regional networks for all of the major,
what I call confederations, in the United States. So the Big Ten had a network,
CICNet, New England had NYSERNet, the Bay Area had BARRNet, and so on and so forth.
And the effect that this had, which was so important, was that you no longer had just a
closed small community of researchers who were working directly on the ARPANET; now
you had lots of communities in different regions of the country that developed a huge
amount of knowledge and skill in how to use these networks. Said differently, training
was occurring through normal use and expansion of the technology.
In the meantime the PTTs are moving forward on developing their own standard that's
going to take care of multimedia and embrace the entire world solving the hard problems,
the ones that we know normal connectionless packet switching can't do. So I'm going to
pause there and talk about the great solution of OSI. Now, recall that the pursuit was for
multimedia, becoming the medium that would work for everybody everywhere for every
use all the time. And OSI was one of the major elements to make that go. It was
championed by everyone, the PTTs all over the world, the United States Department of
Defense, the European commission, all the major international actors. Every major
manufacturer built an OSI stack. The people in the Internet community, the people in the
regional community, expected that they were going to transition to OSI as well. Why?
Because the Internet was a prototype, it was never designed for commercial use. This
OSI stack did have an economic model behind it by and large, it did have better structure,
it did allow for much greater control, and it was much more universal. It didn't do just one
thing; it did everything everybody needed. It was really a very, very broadly designed
system, and very logical. It was there to solve the big problems that everybody
knew they had, including supporting telephony and other related things.
There was extensive OSI deployment. Marshall Rose wrote a great book called
The Open Book, which was found on the desk of almost every networker in the world in
those days, talking about the transition from the prototype system -- the Internet protocol and
the Internet and ARPANET -- to this OSI model that they were going to go to. All the PTTs
were behind it. But there were some problems. And the biggest problem was that it tried
to do everything. It tried to make everything work. And it was designed by organizations
that really were extremely heavyweight. In other words, they were making the thing work
for them first and foremost rather than for the actual material needs of the user community.
Although, with a hundred years of experience, they could legitimately say that they
understood the user community, and in some ways they did. Problematically, you needed
every part of the OSI stack to work. Problematically, you could never get all the parts,
because they were never all done. Problematically, once you put them together, you
couldn't figure out a way to make it go.
Said a little bit differently, they tried to solve everything. And it never worked. It just never
took off. So networkers time after time after time after time tried to make OSI work and it
didn't work. In the meantime, they've got this IP stuff which is showing up now on Sun
workstations and Apollo workstations, and it's available under UNIX on AT&T machines and
Digital Equipment machines; the guys at Berkeley have now done a standard UNIX
distribution, they've got sockets, they've got all the networking code, it's everywhere,
often it's free. AT&T was still distributing software in those days, and it's just taking off. It's
going everywhere. And it's solving everybody's real world problems. But they all knew it
was temporary. It was just doing what they needed to do to get things done today.
And among the other things that they did to get things done today was this: Dan Lynch, who
was running the datacenter at UCLA, would host every few months what he called an
interoperability get-together. Basically he'd get all his buddies in the industry, and they'd
all meet for two days over a weekend, they'd bring in all their machines, and they'd try to
make them all work together, because Dan was buying lots of different machines. He
wanted them to work together. And this grew and grew a little more, and finally he spun
out of UCLA and started the Interop trade show and found himself running what became a
hundreds-of-millions if not billion-dollar business, simply based on the growth of all these
people solving their day-to-day problems.
In Europe, an environment hugely dominated by the OSI model, with the national
research networks and the national post, telephone and telegraph operators, a group of
people including Daniel Karrenberg were sitting around one day and said, you know, we
need an IP association for Europe. In those days they had a thing called [inaudible],
and Daniel says, you know, I think we need -- we need a RARE for IP, so why don't we call
it [inaudible]. And they started not a trade association but an interoperability group for
engineers. And that took off very actively, and now there was an umbrella in Europe for
people who were trying to make their internetworking work and just get practical
networking done every day while the other guys were taking care of the big picture.
And because the early and later-day ARPANET always connected the military bases,
and because it always connected to commercial sites doing research -- not commercial work,
but research -- there were a lot of people in the world who were actively using these
systems. And when the regionals were developed, a lot more people found themselves
becoming customers, so IBM, Tandem Computers, HP started becoming customers
of regional networks. Now, remember, it was still academic, but it was starting to get a
little bit big and a little bit heavyweight. So some of these guys -- let's say Paul Schroeder
[phonetic] at, I think, NYSERNet -- started to realize, you know, this is getting a little bit crazy,
we need to start seeing about getting out there and doing a commercial company. So he
spun NYSERNet out and started Performance Systems International, one of the first
commercial Internet providers. The USENIX association went to Rick Adams, one of the systems
operators at Seismo, the place at the University of Maryland -- or affiliated with Maryland --
that was doing seismic testing to look for nuclear bomb explosions around the world; they
were a branch of the DOD at the time, and they had a node there that probably had 5,000
dialup customers every day. And it was getting huge. And USENIX looks at this and
says -- remember, the UNIX association realizes this is getting too big -- you know, we've
got to get rid of this. It's out of control. Rick, would you start a company? I think Rick turned
them down twice. And then, with a $50,000, I think no-interest, loan, he finally agreed to
take his machine out of the University of Maryland seismic center and actually start
running Usenet -- at least one node of it -- as a spinoff commercial venture, which was called
UUNET Technologies, which became quite famous in time.
In the meantime the guys at EUnet foolishly went off and hired me in 1990 to see if I
could figure out what to do with this crazy volunteer network and we incorporated EUnet
in 1992 as a Dutch and Irish company and started selling services more actively all the
time, subject to acceptable use policies. So in fact we were selling -- and this applies not to
the Usenet guys, because they were a dialup network, but to the guys at PSInet, the
guys at EUnet, the other spinout networks -- they were all on this very fuzzy edge of trying
to get stuff done, while they couldn't engage in commerce, while they were engaging in
commerce. It was a pretty special time, to put it mildly.
Something I didn't mention before: in the early 1990s, all the researchers around Europe --
and by the way, I mean the major universities -- who time after time were trying to get
their work done with OSI, and it wasn't working, while they were using IP on their
local campuses with Ethernet and other ring networks, started going, this isn't working, we
need to do something. So they banded together and they formed a thing called EBONE,
the European backbone, where they simply started sharing leased lines across Europe,
and all of a sudden IP starts exploding there, and by 1994 EBONE starts to incorporate.
So we've got this case where people are just trying to solve their problems -- it's an
academic world, they're trying to do what they need to, while realizing the big heavy stuff,
well, maybe it's going to happen now. They're not really sure. But they have no choice.
Momentum is carrying them forward. And then the Commercial Internet Exchange is
formed. And I use the Commercial Internet Exchange as probably the real breakout
moment for the commercialization of the Internet.
Rick Adams at UUNET, I believe Schroeder at PSInet, and maybe the folks at IBM got
together and agreed that -- you know, we're using -- remember, everyone's using all this
public infrastructure, right? They're using the regionals, many of which are not
incorporated, they're using the NSFNET as the backbone, and they're all exchanging
things over that. It's getting pretty dicey, because they're using these resources under
acceptable use policies that say they're not supposed to do that.
So they finally agree, you know, we'll do a deal with Metropolitan Fiber Systems in
Washington, D.C. on this optical ring they've got around the city, and we'll all agree to
meet there and exchange traffic there. And that was the first time where major traffic
exchange, at the level of leased lines and heavyweight IP, was done without having to resort to
public resources. And that breakout moment's very important, because it now meant that
they could move forward completely unfettered and say, when a commercial guy talks to a
commercial guy, it's legal and legit, we're not going over public resources. If we do go
over the public network -- it's to get to somebody who's on it, and when I say public network,
I mean the research networks -- if we get to one of those research networks, well, it's to talk
to a research guy about researchy things. What's wrong with that?
NSFNET was very happy to see this, by the way, because it was getting pretty crazy
inside the offices at the same time. Bit of a commercialization tiger by the tail. So more
and more regional networks continued to become commercial ISPs, or they were
acquired. In the meantime, also out of the blue, Tim Berners-Lee did his terrific and
seminal work, leveraging the existence of the domain name system and the name space
that it provided, to make two small viral tweaks -- a transport protocol and a markup
language -- that glued all these systems together using hypertext. And of
course the great interface advance, the first web browser, Mosaic, got out there and was
distributed like wildfire. And the Internet was completely exploding. Before, we'd had
e-mail that was going across these systems, and some FTP, but all of a sudden everybody
was doing things with visual information, lots of pictures, a lot more text, and a lot more
population, and all of a sudden the demands on the network were exploding.
Things were really doubling every maybe six months, maybe even faster in some venues.
But in the meantime, OSI was supposed to become the standard. There's this huge
wave of utility going out there and OSI was to become the standard, and it just never
happened. The model that everybody needed -- knew they needed -- and the PTTs,
supposedly the incumbents, the guys that were going to take over the world of
networking, were blown past by venture-funded, stock-fueled, and entrepreneurially
advanced Internet technology. As I've said before, it never really worked completely,
they could never get the entire suite, it was exceedingly complex. So whenever there
was a choice between trying to make OSI work in a great cooperative fashion or just
going ahead and using IP and getting your networking done, it went time after time
after time in favor of IP.
Europe in particular, excelling at diplomacy over all things, was using OSI originally as a
way to knock down IBM -- that is, as a way to maintain some bulwark against the huge
technological juggernaut of the United States -- and realized that their efforts were being
dashed upon the rocks, and eventually came out to support IP, in spite of the fact that OSI
was supposed to be the model that was going to allow them to continue to propagate
their view of technology, operations, and economy.
>>: [Inaudible] wasn't very competitive -- even within the Internet community a lot of
people thought that it should get turned over to OSI at some point?
>> Glenn Kowack: Well, I hope I'd said that, but maybe I wasn't clear, and it's not just
sort of -- I mean, it's flat out. Everybody had the OSI book on their desk. That was for the
transition. The US government was in favor of it -- in fact, there was a series of meetings
in 1992 in the IETF where the IETF was going to officially decide to go ahead, or so it
seemed, to replace some of the IP components with the OSI stack, and that would start a
huge element of that, and it completely blew up for reasons I'm not yet entirely sure of.
I know -- I know the mechanisms of how it blew up, but I don't know who grabbed the ball
and said it's IP, IP, IP above all things and I'm not sure what their motivations were. And
I'm going to keep poking on that. I think it's very cool. But I'm glad you asked the
question.
Everybody thought OSI was going to take the day, and everybody thought the PTTs, the
business guys, were going to do it. By the time the entrepreneurs started, they realized,
you know, I'm doing this thing. Eventually the PTTs will bury me, but I don't have any
choice to wait for that. I've got to do it. That entrepreneurial moving sidewalk is a great
motivator.
So as I've said now multiple times, yes, the IP Internet worked terrifically well: it was
available, relatively simple to use, cheap, it was universal, everybody had it, and it grew
fantastically fast.
And Steve Goldstein, who was in charge of, I believe, international connectivity of
NSFNET in the late '80s and 1990s, spent years wondering when the world was going to
figure out that we were out there. And one day, I think it was Newsweek that announced
the existence of the Internet, and he pasted that on his wall at the National Science
Foundation and wrote with a big magic marker, hey, they found us. Overnight sensation,
right, only 23 years in the making or something.
So the great disruption began. And we know a lot about that disruption: it overturned
telecommunications, which largely collapsed for the better part of a decade, trade, the
structure of work, personal relations, group interactions. And my argument here is that the
major inputs were the things I've now said many times, but the single most important one
was that it was not obvious and even nonsensical. Here we are staring at this thing that
works really well, but it was just a place where we were experimenting. The real game
was over here. So I think -- I don't know if I want to call this my thesis, but my
observation is that the reason why the Internet took over the world, and remember it
destroyed -- almost destroyed the telecommunications industry, is that everybody knew it
wasn't the real deal. Everybody knew it couldn't work. And therein lies a little bit of
magic. So here it is 1996 and I'm on a plane from Europe to the United States or something,
and I happen to be sitting next to a very well dressed venture capitalist and we're having
a great old time talking about the industry, this, that, and the other thing. And something
came up and I said blah, blah, blah, they said blah, blah, and I said, well, when the next
Microsoft gets invented, then, you know, there'll be a disruption of some kind. And he
looks at me with this great knowing smile, and he goes, there'll never be another Microsoft.
And I said, what do you mean there'll never be another Microsoft? He says, we know too
much now. Microsoft only happened because they could work in obscurity, people didn't
get computers. But we know to look out for technologies now.
I mean, they built that company without even venture capital until the last three months or
whatever it was -- somebody knows better than me about those details -- but there'll never
be another Microsoft. We know too much. Guess again. Obviousness: every generation
has its own feelings about what's obvious, what's not, what's going to work, what's not,
and there's always room for a surprise. The interesting thing about the Internet was that
it took place in the light of day, everyone looked at it every day. Well, I take that back. It
was somewhat obscure. Even mighty Microsoft didn't discover it until, what, 1994. And
with good reason. It was hiding. But it was hiding in a huge community, and it couldn't
work. It was just the experiment. Remember, we had other technologies, too. We had
bulletin boards, which were more public, people knew about them, not as powerful, not as
widespread, but more commonplace, something people saw.
We had commercial services like the General Electric network, we had MCI Mail, we had
CompuServe. So we had lots of things that were bumping around out there, but we didn't
have this engine of enormous productivity around a simple attribute. And that attribute is
that the Internet did not do much, and it could not succeed with the requirements that
everybody in the telephony world knew were necessary. But it turned out that four
minutes of downtime a year was too stringent. People would accept a day of downtime a
year; they'd accept a week if it would cost them one-tenth as much, if they could do e-mail
and the web and lots of other things.
So when it came to, do I need four-minutes-a-year-of-downtime reliability and just have a
phone system and not much else, or maybe I do have a Minitel in France but it's
really expensive, or I've got this thing that's just exploding and everybody's using it and it
seems to be up when I need it, which one wins? Well, I think it's obvious which one
won.
So there will be another Microsoft; the Internet worked in obscurity. And that brings me to
my closing quotes. So if you have any questions -- I think we have another, Jonathan, 15
minutes or so?
>>: You didn't mention Al Gore.
>> Glenn Kowack: I didn't. There are some stories about Al that are the subject of
another talk. Al was important. Al was the point man in the Senate who did make sure a
lot of good funding happened. And so to a certain extent he does deserve credit for
being a really positive player in that space. It's kind of unfortunate he's become a joke.
But he probably did stumble and take a little bit too much credit once or twice. And that's
probably enough to say about our formerly elected officials. Yes?
>>: I mean, what were like the main lessons going forward about how we think about
[inaudible].
>> Glenn Kowack: You know, I don't have a general theory, and it would be humorous to
say I did. I've never studied futures studies and things like that, except in passing -- you
know, Faith Popcorn and a book here and there. The model I like to use instead is to say
that there's something that engineers and technologists and just people are good at, and
that is accumulating lots of stories, so there's that cross between, oh, I've been here
before, my story, or my intuitions are telling me this. So I'm hoping through this MBA-style
case study to just give you more intuitions. But I think at least the following
attributes are to be attended to. The first is, have huge respect for things that don't make
sense, and for the extent to which sensibility is a representation of the current regime,
whatever that is. The second thing is, and I wish I had a diagram for this, I still haven't
quite figured out how to put it together, but you have the inherent technology and your
expectations for what it's going to be. Let's say it's based on new models of statistics like
the Black-Scholes equation or something. All right. So the question is, does that work in
and of itself? Then there are questions of what pure technical inputs are necessary to
make that work, which you may understand deeply or just a little bit.
Then the third thing is, are there external technical inputs which can change your
competitive position or which just upset the fundamentals that you base your work on? So,
for instance, the rise of CMOS destroyed a company I was a director in, a big company, a
quarter billion a year in revenue, which in the '80s was a lot of money. CMOS blew
past our expectations. So watch out for changing tides.
And then, as I mentioned, and I liked the metaphor a lot, of tectonic plates: be aware that
the platform you're standing on may change drastically. The transition we saw until three
weeks ago was this move towards greater and greater utilization of statistical processes
in business. IP is statistical in ways that the PTTs' circuit-switched model isn't. Markets
are statistical. Maybe they're going away in ways, and so on. So be aware that there are
these huge societal trends, you know, the emergence of venture capital, the emergence
of the statistical process, the rise of international trade, that are the platform on which
your stuff works. And I think it's really wise to do what they did, at least anecdotally, that
I heard they did at NASA in the 1960s: they put together all their designs and, you know,
hammered on them in terms of the inherent technology really hard, and then stepped back
and said, what can go wrong with all the tech -- and then they'd back up and say, what can
go wrong bureaucratically, you know, in the bigger environment. And then they'd get all
that done, and at one point they'd say, okay, no more questions, we're locked in,
let's go forward.
So that's the way I'm looking at it right now. But it's tough. I mean, I don't think
anyone can predict markets. I think the best you can do is either realize that it's not
going to work because of something that's going to happen, or realize you've got to keep
your eyes open for some particular thing out of left field. But there are always
surprises. And this of course depends upon the two dimensions of the span of your work.
One is, how many disparate things are you bringing together? Are you building a chip, or
are you building a new kind of social networking, you know? One is very narrow, it's
understood, prescribed. And the other is broad.
And then the other dimension is, how big a world are you planning it for? And especially,
what's its duration? I mean, six months you can probably plan, 18 months you can
probably guess, two years... Which is why I think things like incremental development in
software have an inherent advantage, because if you can do a spiral development, you
know, stepwise, you've got a much better chance of doing this kind of thing. Great leaps
of course can do more good, but they're really exposed. So that's my short answer.
Yes?
>>: There's no code for [inaudible] how much it influenced other things but I know IBM
had a network among its -- all its mainframes at least for the commercial customers.
>> Glenn Kowack: Sure.
>>: [Inaudible] but at least '87 which stood for because it's there.
>> Glenn Kowack: Right.
>>: And they want to be able to update all their customers from --
>> Glenn Kowack: You bet.
>>: So they could send data. And since it was there, they let all the customers talk to
each other.
>> Glenn Kowack: Right. That was --
>>: [Inaudible].
>> Glenn Kowack: Oh, yeah. Huge.
>>: So they --
>> Glenn Kowack: There are two stories that, for lack of time and for reasons of complexity, I
just couldn't even bring in today. One is the rise of the data processing world and all
that it engendered, and so exactly the things you're talking about. The IBM networks,
Systems Network Architecture, which was supremely important for the development of
that world. And DECnet and all the other practical solutions that were being done under
what I'll call a more centrally controlled, pre-IP model. And that had influences on this. And
they were also an approach like the PTT OSI model, and they were great advocates of
OSI by and large.
They were also a group that got dashed by just the explosion in the Internet's use.
Hunt Newsbucker [phonetic] started BITNET with Ira -- his last name fades out of my head
at this moment -- but that was an IBM-to-IBM machine network, I believe, and BITNET
became CREN, or I believe the Computer Research and Education Network, which was kind
of a mirror of the ARPANET, and I believe that moved over to IP sometime in the
early '90s.
Similarly, in Europe there was a very big IBM-sponsored network base. IBM moved into
Europe and gave away machines and said, connect them together using our systems.
And that was called the European Academic and Research Network, or EARN. Very
successful.
But you're right, lots of other stuff was going on. But just to be able to make a point, I
need to narrow down the threads.
>>: Do you know how much it is for say [inaudible].
>> Glenn Kowack: You know, one of the reasons -- one of the ways it
influenced it, certainly, was that it was yet another element of confusion, and I don't mean
confusion in a negative way. It's, here we've got the PTT model, here we've got data
networking, which was big, I mean a multi-million-dollar business -- IBM, Digital, Hewlett-
Packard, Tandem, others, CREN, BITNET, EARN. Over here you've got IP. So trying to
decide which pony to pick was crazy. In 1984 I was directing an R&D lab, and my
chief scientist and I -- we owned the Internet protocol work in our corporation for
sales. We sold one of the first IP suites on the DEC PDP-11/70. And we were trying to
figure out what to bet on. We all knew that IP was old and out of juice. It was 10 years
old.
And so we were looking at XNS, Xerox Network Systems, which by the way had some
very distinct improvements over IP, and for good reason: they had another seven or eight
years of experience and made some different decisions. Lots of stuff in there. And it's
funny, at some point being accurate flies in the face of being understandable, because
there are just so many players, so much stuff going on. There are hundreds of thousands
of threads.
>>: [Inaudible] actually thinking about [inaudible] lessons to [inaudible] thinking about
what I would say is, we can't ignore the importance of Open Source [inaudible], because in a
lot of case studies, the people actually believed in what they were doing.
>> Glenn Kowack: Absolutely.
>>: They had the vision. They saw it. They may not have seen the scope of it, but they
saw what they were trying to achieve. But what happens is, unless it's sort of, like,
made available, easy, all that good stuff that happened in Open Source, and people
contribute, you can't build this big [inaudible], can't really ignore [inaudible].
>> Glenn Kowack: I completely agree, although, as correct as you are, I'd like people to
be aware of the historical point that by the time the idea of Open Source was named by
Stallman and others circa 1990 -- kind of the beginnings -- the momentum of all this stuff
was already there. So let's say there was an open community. And actually, of course,
there was the Open Group and other people who eventually owned the UNIX standard
and its compliance suites.
So let's just expand that a little bit. There was a community of interest that made this go,
and it was a community of interest of practical use, and the openness was not so much
Open Source, although that was true; it was simply open on all fronts. They shared lines,
they shared equipment, they shared ideas, they met at USENIX, they met at the European
UNIX User Group. Now, there was a lot of other ferment going on, but this one seemed
to have so many dimensions of infectiousness going. And oh, by the way, another point
is we didn't understand what viral meant in those days. We didn't understand that we
were basically building these autonomous forces, anything from memes to an actual
computer virus to just an industrial standard, that just started to take on a life of their
own. So your point's a very good one. Certainly an interesting -- another interesting
story.
>>: Yeah. Part of the point is that one of the things going on was that you also had
thousands of people being trained up on how to engineer by being trained up on UNIX
and on the Internet, while you had a smaller number inside the companies working on
OSI.
>> Glenn Kowack: Yeah, I think that's right.
>>: I'm not so sure that the fact that you train on something essentially will give you
[inaudible], even though people make that mapping all the time, like you train on this so
you're much more comfortable on that. I think it's more along the lines of how many
people are contributing something. And if you've got more and more people trying to
contribute different things, there's so much inventory that is created, so much
[inaudible], so much IP that is created right there, that you just, you know -- for example,
we suffer from this in research here, we [inaudible] something [inaudible] versus look at
[inaudible] all the other stuff. A lot of times we're catching up, which is so sad, because
we don't [inaudible] work on. But if we were just to work on some of the stuff that was
existing, we would do a lot more. So I [inaudible].
>>: Yeah, I agree. I didn't mean training in the sense of students, I meant training in the
sense of network administrators and all those people who are doing problem solving and
contributing, yeah.
>> Glenn Kowack: Yeah. And actually, to go back to an element in the presentation,
what was happening while OSI was trying to get it right -- and by the way, the OSI
community was huge, make no mistake about it -- was that it wasn't getting the opportunity
to gain technical, operational and then social learning about how this thing would work.
And in a sense it's like water, you know, finding its own level, right? OSI was still
heavyweight, structured. And look at the difference again, something I couldn't get into
for lack of time, look at the difference between the IETF and OSI. The IETF would
promulgate a standard if it had two running implementations.
Invariably someone would design something, you know, offer a standard, someone else
would implement it, boom, it's out there. And the philosophy in the IETF was, and kind of
persists at least nominally to this day, although one wonders how well they do it at
this moment because it's now become a big locked-up system, but the idea was, get
something out there that works a bit, get started, and it will accrete, which is a
fundamentally different approach from saying, make sure everything's going to go right.
Make sure everything is going to go right has two elements that I think are worth
mentioning. The first is that if you run a business and you don't want something to go
wrong, or you don't want to destroy your business model, you've got to do it that way.
But the second thing is that it's based, and this is a really interesting breakpoint
conceptually for engineers and others, it's based on the assumption that, you know, you
can run into dead ends. You've got to make sure you've got the whole picture together.
Now, I've got to tell you, this is where my heart lies. I got this math degree from Illinois,
and I want to make sure the whole thing hangs together before I go investing either my
career or this stuff. But here are these Internet types, and this is kind of one of the great
untold stories, although I'm going to try to tell it right now: we'll do an increment, we'll do
an increment, we'll do an increment, and there's no dead end. Or if there is a dead end,
it's just a simple matter of software. We'll fix it. And it worked. That's the crazy thing. I
mean, even Vint Cerf, one of the co-authors with Bob Kahn of the original TCP/IP paper,
says we never thought this would work so well.
Now, that said, they have wrapped hundreds of protocols around TCP/IP and it has
changed quite a bit. But it still works. Now, you can put the lie to that and say, well,
version 4 doesn't work and now we need version 6. Well, maybe not, because now we're
deploying network address translation and so on and so forth.
>>: It works for what it was designed for because, you know, the people [inaudible] it
didn't work we just under the legacy of that system. [Inaudible].
>> Glenn Kowack: Well, and that's why I had that quote up at the beginning between
Zhou Enlai and Edgar Snow: What's the meaning of the French revolution? It's too soon
to tell.
So here's the big bugaboo, right? Markets have just crashed all over the world, largely
because our faith in derivatives turns out to be, if not unwarranted, then misplaced in
terms of who was running them. Well, IP has got some similar attributes. Here, it's
running down the railroad tracks as fast as you can, not looking for the light coming from
the other end of the tunnel, and it's statistical, and IPv6 isn't moving anywhere near fast
enough and IPv4 is out there, and, you know, are we on our way to Thermidor, or are we
on our way to, it's simple, we'll keep lashing it together? We'll see.
>> Jonathan Grudin: Okay. I think let's thank Glenn.
[applause].
>> Glenn Kowack: Thanks everybody.