CHAPTER TWO
HISTORY OF ONLINE JOURNALISM
Waves of Online Journalism Development
I. The First Wave of Online Journalism: 1982-1986
Both Times Mirror and Knight Ridder opened ambitious, experimental videotext services in 1982 and closed them in 1986. The failed services
showed how not to do online journalism. Videotext made print publications and other interactive services available via TV screens. The
videotext services had these characteristics:
Applications
Text in digital form and primitive graphics for news and information; communication (chat and bulletin boards); home banking; shopping; games;
services (e.g. movie tickets).
Control
Owned by major media companies; end-users were taken for granted; attempts by users to shape network innovation were discouraged
by management, who used conventional marketing practices in an effort to build an audience. The product was available only to
subscribers, at a monthly fee equivalent to a cable TV service.
Hardware
Mainframe computers accessed by phone lines via 300-baud modems attached to set-top boxes on TV sets. The service was slow and
clunky and permitted almost no end-user control. The database was next made accessible to early PC owners, many with 1200-baud
modems. This new group of end-users sought greater control of the network and demanded changes: better content, more uncensored
chat, more games, more shopping and better services. Printing capacity was somewhat limited.
Conclusions
The videotext services failed to meet the demands of end-users and closed down in 1986.
At the same time, CompuServe (1980), Prodigy (1984) and AOL (1989) formed proprietary services with no traditional journalism ties
(CBS owned part of Prodigy but bailed out). AOL provided anonymous chat on any topic, including sex. Prodigy offered richer content and
services than the videotext companies. CompuServe had forums on a wide variety of subjects and courted consumer loyalty. The new
online companies were willing to share some power with end-users but still held most of the control, especially over third parties, who
submitted content for a fee and had no stake in how it was published. Most early subscribers used the services for chat with friends,
family and -- the novelty that drew many in -- with strangers who were often willing to develop deep intimacies with those they met online.
It was an elaborate form of CB radio.
Membership in proprietary services steadily increased in the late '80s and early '90s as PC ownership spread, software became more
powerful and modems increased in speed. Each software and hardware improvement gave more leverage to end-users and allowed them
to participate more actively in reshaping the networks. The architecture changed dramatically from one-to-many in 1982 to many-to-many
and one-to-one as proprietary chat and bulletin boards and networks like The Well (1985) developed.
II. The Second Wave of Online Journalism: 1993-2001
Media companies and dot com entrepreneurs watched the success of the proprietary services in the early '90s with much unease. Many
publishers joined the services as content providers, but they had no control and got only a small piece of the pie. They needed control of
their own networks. Mercifully, along came new technology -- HTML (1990-1992), the Web, the graphic interface browser Mosaic (1993)
as well as Internet e-mail, and the second wave began to build. Netscape's first browser was commercially released in 1994; Microsoft's
Internet Explorer browser was released in 1995.
Applications
Text in digital form, sophisticated graphics, video, audio, links to almost unlimited databases. Widespread use of e-mail. E-commerce.
Control
A mix of closed networks controlled by owners like AOL, open networks controlled by media companies in partnership with end-users, and
smaller sites controlled completely by end-users and third parties.
The gatekeeper role in journalism continued to be valuable, but Yahoo and others developed software that allowed readers to be their own
gatekeepers. This personalization software let readers decide which news was delivered to them, giving increasing control to the end-user
and to news aggregators like Yahoo, which grew at the expense of traditional news organizations.
In the beginning of this wave, most content was offered for free -- companies built their business models around the idea that they would
take their profits from ad revenues. By the time owners realized that ads would not sell online, it was too late to begin charging for content
and services. An empowered consumer force had become accustomed to getting just about everything for free online. (Heck, online users
didn't even have to pay sales tax.) Economic control had shifted to end-users, who opposed subscriptions (except for the most exclusive
or valuable content) and rejected intrusive ads, demanding a free product on an open architecture. Standard marketing techniques, such
as focus groups and promotions, failed to control end-users.
Hardware
PCs connected to mainframe computers by increasingly faster modems, dedicated phone lines and cable systems. As the second wave
built, some began distributing content over cell phones, PDAs, and by wireless and satellite delivery.
More powerful computers and more sophisticated software allowed end-users to save and share music, video and other content through
programs like Napster and Morpheus. Elaborate database, architecture and multimedia software allowed greater innovation, such as
customization. This facilitated the shift of control to the end-user.
Printers became cheaper and more powerful, allowing the end-user to have a printing press. E-mail technology became more
sophisticated, also giving greater power to the end-user in terms of organizing, accepting, rejecting and forwarding e-mail and HTML
content.
Conclusions
As the second wave broke, the grassroots end-user survived and flourished -- thanks in part to new, inexpensive content management
technology that helped thousands of end-users launch their own publications. Smaller, innovative news sites became more prominent and
niche sites, self-publishers and Web loggers proliferated, creating a new model for online news.
Innovation and investment by owners came to a halt by about mid-2000: End-users' refusal to pay for content -- or anything else online --
meant most owners failed to come up with a working business model. The realization that no one was making money led to
disillusionment and the market collapse of March 2000. Dot coms closed and online news operations retracted. An estimated 500,000
jobs were lost in this sector from 2000-2002.
The next wave of online journalism is all about partnering with end-users -- and giving them what they want. The first wave, from 1982 to
1992, began with several publishing experiments and was later dominated by proprietary services such as AOL and CompuServe. In the
second, starting in 1993, news organizations began to go online. The third, only just now beginning, is the wireless/broadband era. This
one promises to be more powerful, sustained and profitable than previous phases. It will bridge the gap between mass information and
what is specifically useful to you -- when you want it.
Predicting the future of news is, of course, risky. But we do have a 20-year history of online journalism to analyze and project from. We
can look at the engineering laboratories and see what kinds of new tools are being developed. And the analytical work of
communications scholars such as Francois Bar at Stanford can help us project where online technology will take journalism in the future.
Bar sees the development of information networks as a power struggle between media owners and end-users for control of a network
and the way it will innovate and grow. As Bar's work notes, users from the beginning of the online era saw the new technology as an
opportunity to control communication, rather than have it spoon-fed to them. Users demanded that companies give them interactivity,
control and software that would allow them to speak to each other without editors, without censorship -- without a media intermediary.
The companies that survived were the ones that gave the users the interactivity and control they demanded.
Taken together, Bar’s theories and Crosbie's wave concept shed an interesting light on where our industry might be headed. It is also
helpful to look at network development, as does Bar, in terms of three factors: Application, Control and Hardware.
III. The Third Wave of Online Journalism: 2001-
This wave is characterized by more-sophisticated owners and better-trained staffs, end-users dependent on traditional news
organizations for the daily global report, proliferating mobile platforms and new software that enables powerful forms of publishing, such
as wireless push and immersive technologies.
Owners are developing more information that consumers are willing to pay for. Some owners are becoming better innovators, improving
their products.
Owners are also developing new revenue streams in a partnership with end-users. Some managers say this cooperation has helped their
news sites become more profitable.
New industry organizations, such as the Online Publishers Association and the Online News Association, have gained members and given the
industry cohesion.
Many strong online news sites remained and even flourished: nytimes.com, washingtonpost.com, latimes.com, wsj.com, CNN, CNET,
MSNBC, USAToday, CBSMarketwatch, as well as many sites at the regional level, such as startribune.com and the Topeka Capital-Journal's
cjonline.com. Traffic at news sites increased by about 15% in 2001 and soared by more than 70% at many of the major news sites. The
successful news sites had listened carefully to the needs of end-users and became more profitable by allowing end-users to shape the news
networks.
Applications
Text in digital form, sophisticated graphics, video, audio, unlimited databases; widespread use of e-mail, Web logs, diaries and personal
sites; games; e-commerce; services; news alerts; music and software downloads; local and community news; chat and forums; classified
ads; instant messaging. Also:
 More efforts to combine print, broadcast and online into a single news organization -- in a word, convergence.
 More emphasis on services and proprietary content as a source of revenue and less on banner and pop-up ads
 Services are closely matched with end-user needs, such as classified ads (Morris Communications Web sites, for example, pioneered the "send flowers" option, a link from an obituary to a local florist)
 Increasing use of next-generation portable devices that, in turn, open new revenue streams
 Better use of resources, such as turning archives into historical feature packages
Control
The relationship between network owners and end-users is becoming more of a partnership. Networks are:
 Partnering with Web loggers and other independent voices to make their content available to end-users
 Developing new ways to present community news and building more interactivity into their coverage. Many are using
forums and surveys tied to stories as ways for users to have a dialogue with each other or with a reporter
 More sensitive to privacy issues when developing policies about how they'll use and share information about end-users
 Creating more effective ad strategies, like nytimes.com's "surround sessions," where the user is "owned" by an advertiser
throughout a session
 Using better tools, including new publishing tools that allow advertisers to write their own copy
 Showing quicker response to user demands, especially as major news breaks
 Creating partnerships (The New York Times and the BBC; Tribune Company and Knight Ridder) to offer news and classified
advertising
Hardware
PCs connected to other PCs and to host servers over broadband paths owned by cable, phone and satellite companies; cell phones,
PDAs, electronic tablets and e-paper connected by wireless and satellite links; multifunction end-user devices, both stationary and mobile;
more broadband capacity and compatible transmission standards; sophisticated compression techniques and synchronization
mechanisms.
Conclusions
In the first wave of online journalism, the owners controlled all, and end-users had little say in how the product was developed. In the
second, end-users fought for control, spurning ads and demanding that content be free. In the third wave, control is being shared. Network
owners see value in cooperating closely with their audience; the audience is more willing to let the owners make a buck.
"Free is Utopia, and people begin to realize that," said MSNBC senior writer Bob Sullivan at the UT conference. "They are more receptive
of subsidizing what we do for them."
The second wave demonstrated that networks and end-users need each other to survive. Networks are nothing without an audience, and
while end-user contributions are valuable, many self-publishing end-users would have little to talk about without the news that networks
supply: Most blogs spend much of their "air time" commenting on news reported and published by major news organizations.
Gathering news is an expensive process requiring heavy capital investment and professional expertise. End-users do not and will not
have the resources necessary to do this important work.
A new publishing model seems to be emerging in the third wave: Control is being shared and innovation is developing through a
partnership between owners and users.
"Everything we do in product development … begins with doing user scenarios based on what we know of our audience," said Stephen
Newman of nytimes.com. "Then we conduct rounds of user testing."
The growth of mobile platforms will inevitably return some control to the owners; mobile is more of a "push" technology, giving owners a
stronger gatekeeper function.
The big fear is that monopolistic merger trends will bring unacceptable and socially destructive power to network owners. But that fear is
based on the idea that major network owners will ignore end-users' demands.
Non-traditional owners are ready to step in and accept the refugees from Big Media. Multiple paths of information - cable TV, high-speed
phone lines, satellite and wireless transmission, as well as emerging paths such as lasers - make it difficult, if not impossible, for media
conglomerates to lock up online technology.
News organizations online are increasingly realizing that the key to success lies in giving readers what they want, when they want it. That
means churning out news on more than a once-daily deadline, respecting privacy, creating interactivity, giving readers options and
control. Many of these demands -- especially the need for more constant news updates -- mean major changes in the way newsrooms
work. Many organizations are working to meet the needs of this new online audience.
"We haven't figured it out yet as to how to pull all this stuff together," Doug Feaver, executive editor of washingtonpost.com told the UT
audience. "But we're in better shape now than three years ago and we're getting better at it every day."
Technologies that have shaped the history of online journalism
1. Teletext
Teletext is a generic term used to describe one-way broadcast information services for displaying pages of text and pictorial material on the
screens of adapted TVs. The term should be distinguished from teletex, a text communication standard for word processors and similar
terminals that combines the facilities of office typewriters and text editing.
Teletext (or "broadcast teletext") is a non-interactive system for the transmission of text and graphics for display on a TV set. The TV set must be
equipped with a decoder box or built-in chip in order to capture and display teletext information. According to Kawamoto (2003), teletext, the
first form of online journalism, was invented in Great Britain in 1970 and patented there in 1971. Teletext involved displaying words and numbers
on TV screens in place of regular programming. In the US, teletext never became a hit, but it was and still is popular in Europe and much of the
rest of the world.
A user uses a remote control to select pages of information from menus created by the teletext operator. Punching the number 300, for
example, might bring up the sports headlines, and entering 305 might bring up a particular sports story. This makes the system appear
interactive, but it is not truly interactive. All the so-called pages in a teletext system compose a set. The set is transmitted continuously, one
page after another, over and over again: imagine a big loop or circle of pages sent one after another. When the user enters a page number, the
requested page is captured by the teletext decoder the next time it is transmitted and then displayed on the TV screen. No signal or command
is sent from the consumer to the teletext system operator. Teletext can be sent to the home in three ways: TV broadcasting (including satellite
transmission), cable TV systems or radio broadcasts, though the last is rare. When sent as a TV signal, it can occupy a full channel or be encoded
in a small part of the channel alongside a regular TV programme. This tiny part of the channel is called the Vertical Blanking Interval (VBI). If you
have ever mistuned your TV set so that the picture rolls and noticed the wide, black line that appears between the frames, you have seen the VBI.
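To make the carousel idea concrete, here is a minimal illustrative sketch in Python (a toy model, not code from any real teletext system; the page numbers and contents are invented). It shows how a decoder simply waits for the requested page to come around again in the continuously repeating loop, with nothing ever sent back to the operator:

```python
import itertools

# Hypothetical page set, keyed by the three-digit numbers a viewer punches in.
PAGES = {
    100: "Index",
    300: "Sports headlines",
    305: "Match report: City 2, United 1",
}

def decode(requested_page, carousel):
    """Watch the broadcast loop and capture the requested page the next
    time it is transmitted -- no signal goes back upstream."""
    for number, content in carousel:
        if number == requested_page:
            return content  # captured and displayed on the TV screen

# The operator transmits the whole set over and over, one page after another.
carousel = itertools.cycle(PAGES.items())

print(decode(305, carousel))  # -> "Match report: City 2, United 1"
```

The wait the viewer experiences in a real system corresponds to how far around the loop the requested page happens to be when the request is entered.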
Benefits of teletext to consumers
i) The technology created a new use for the TV
ii) The systems are free to use and provide useful news, information and advertising
iii) No telephone or computer is needed, and many TV sets sold outside the US have a teletext decoder chip built in
iv) Teletext enables broadcasters to generate advertising revenue from a part of the broadcast signal that had gone unused before this technology emerged
Problems of teletext to consumers
i) News and information must be very brief to fit on teletext pages
ii) Graphics are poor, which prevents photo-quality display
iii) The amount of time a consumer must wait for a page to display after entering its number on the remote control varies from a few seconds to nearly a minute, which some people find unacceptably long
2. Videotext
Videotext (sometimes referred to as viewdata) is a generic term used for a two-way interactive service emphasizing information retrieved, and
capable of displaying pages of text and pictorial material on the screens of adapted TVs. Videotext (or "interactive videotext") was one of the
earliest implementations of an "end-user information system". From the late 1970s to mid-1980s, it was used to deliver information (usually
pages of text) to a user in computer-like format, typically to be displayed on a TV. Videotext in its broader definition can be used to refer to any
such service, including the Internet, bulletin board systems, online service providers, and even the arrival/departure displays at an airport. In a
more limited definition, it refers only to two-way information services, as opposed to one-way services such as teletext. However, unlike the
modern Internet, all traditional videotext services were highly centralized.
Videotext systems are interactive, computer-based systems that electronically deliver text, numbers and graphics for display on a TV set, video
monitor or PC. The data travels over telephone lines, two-cable, computer networks, wireless data networks or any of the combination of these
four.
Similarities and differences between videotext and teletext
There are some significant differences in the way viewdata and teletext operate, although with the arrival of the teletext standard, it was
decided to make the viewdata signals compatible with those for teletext. This included making the same range of characters available for
pages.
A large number of effects are available with both teletext and viewdata systems. Up to eight colours (including black and white) can be
displayed, as either text, graphics or background colours. Flashing and double height text and graphics are also available.
The most important difference is that viewdata is not transmitted using TV signals. Instead, the information is transmitted over the standard
telephone network. This allows more two-way interaction by the user, such as the transmission of messages (known as responses) to other
people.
There are also some other differences, mainly to do with the page numbering schemes. Teletext uses a 3 digit numeric identifier, while
viewdata uses identifiers with up to 10 characters, with the last character always being a lower case letter and all other characters being
numeric. This allows for a much greater range of pages.
There are also some operational differences, concerned mainly with movement around the system. The viewer can jump directly to one of 11
different pages by using keys 0 to 9 and the # key. The page linked with the # key is always the frame with the same 9-digit numerical
part of the page number, but the next lower-case letter in sequence. For example, if # were pressed on page 0a, the system would attempt to
move to 0b. This method of navigation uses a similar idea to the teletext Fastext system, but allows a much greater number of pages to be
linked. It is the main method of navigating around a typical viewdata system.
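Purely as an illustration of the successor rule just described (a toy sketch, not code from any actual viewdata terminal; behaviour after the letter "z" is an assumption), the # key can be modelled as keeping the numeric part of the page identifier and advancing the trailing letter:

```python
def next_frame(page_id):
    """Return the viewdata frame reached by pressing '#':
    same numeric part, next lower-case letter in sequence."""
    numeric_part, letter = page_id[:-1], page_id[-1]
    if letter == "z":
        # Assumption: no defined successor beyond 'z' in this toy model.
        raise ValueError("no further frame after 'z'")
    return numeric_part + chr(ord(letter) + 1)

print(next_frame("0a"))    # -> "0b"
print(next_frame("305c"))  # -> "305d"
```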
Every interactive online system that has existed, including the Internet and WWW, shares the following defining characteristics of videotext:
i) It is interactive, meaning two-way communication is supported.
ii) Data is stored in computers, often PCs but sometimes minicomputers or mainframes.
iii) Users enter commands on a keyboard, dedicated terminal or computer. The commands are sent to the host computer and the requested data is returned to the users.
iv) Navigation can be achieved either through menus or command-driven interfaces. Menu-driven systems allow users to browse, much as they would a newspaper. Command-driven interfaces allow very fast searches for specific data (see the sketch after this list).
v) Graphics are much better than those presented in teletext, eventually even allowing photo display.
vi) Messaging and bulletin boards, among the first truly interactive, participatory services, are supported. The latest and most popular versions of videotext are the Internet and WWW.
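As a rough illustration of point iv (a hypothetical miniature content store, not the interface of any real videotext service), the contrast between menu-driven browsing and command-driven retrieval can be shown in a few lines:

```python
# Hypothetical miniature content store, keyed by page identifiers.
PAGES = {
    "1":  ("News menu", ["11", "12"]),
    "11": ("Headline: Budget approved", []),
    "12": ("Headline: Rail strike ends", []),
}

def browse(page_id):
    """Menu-driven navigation: the user drills down through menus,
    much as they would leaf through a newspaper."""
    title, children = PAGES[page_id]
    print(title)
    for child in children:
        print("  ->", child, PAGES[child][0])

def search(keyword):
    """Command-driven retrieval: a single command returns specific data quickly."""
    return [pid for pid, (title, _) in PAGES.items() if keyword.lower() in title.lower()]

browse("1")              # walk the menu tree
print(search("strike"))  # -> ['12']
```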
The first attempt at a general-purpose videotext service was created in the United Kingdom in the late 1960s. In about 1970 the BBC held a
brainstorming session in which it was decided to start researching ways to send closed-captioning information to the audience. As the Teledata
research continued, the BBC became interested in using the system for delivering any sort of information, not just closed captioning. In 1972,
the concept was first made public under the new name Ceefax. Meanwhile the General Post Office (soon to become British Telecom) had been
researching a similar concept since the late 1960s, known as Viewdata. Unlike Ceefax, which was a one-way service carried in the existing TV
signal, Viewdata was a two-way system using telephones. Since the Post Office owned the telephones, this was considered an excellent
way to drive more customers to use the phones. Not to be outdone by the BBC, the Post Office also announced its service, under the name Prestel. ITV
soon joined the fray with a Ceefax clone known as ORACLE.
In 1974 all the services agreed on a standard for displaying the information. The display would be a simple 40x24 grid of text, with some "graphics
characters" for constructing simple graphics. This standard was called CEPT1. The standard did not define the delivery system, so both
Viewdata-like and Teledata-like services could at least share the TV-side hardware (which at that point in time was quite expensive). The
standard also introduced a new term covering all such services: teletext. Ceefax first started operation in 1977 with a limited 30 pages,
followed quickly by ORACLE and then Prestel in 1979.
Prestel was somewhat popular for a time, but never gained anywhere near the popularity of Ceefax. This was due primarily to its delivering
much the same content, yet requiring the user to pay for the terminal (today referred to as a set-top box), a monthly charge, and phone bills on
top of that (unlike the US, local calls are paid for in most of Europe). Although Prestel's two-way features (including e-mail) were interesting, the
end-users appeared to be unwilling to pay much for such a service, not as much as it cost to run it at least. In the late 1980s the system was refocused as a provider of financial data, and eventually bought out by the Financial Times in 1994. It continues today in name only, as FT's
information service. A closed access videotext system based on the Prestel model was developed by the travel industry, and continues to be
almost universally used by travel agents throughout the country.
Using a prototype domestic television equipped with the Prestel chip set, Michael Aldrich of Redifon Computers Ltd demonstrated real-time
transaction processing in 1979 and thus invented teleshopping or online shopping as it is now named. From 1980 onwards he designed, sold
and installed systems with major UK companies including the world's first travel industry system, the world's first vehicle locator system for one
of the world's largest auto manufacturers and the world's first supermarket system. He wrote a book, Videotext - Key to the Wired City (Quiller
Press 1982) about his ideas and systems which among other topics explored a future of teleshopping and teleworking that has proven to be
prophetic. Before the IBM PC, Microsoft and the Internet, he invented, manufactured and sold the 'Teleputer', a PC that could receive TV
programmes and communicate using its Prestel chip set.
North America
Interest in the UK trials did not go unnoticed in North America. In Canada the Department of Communications started a lengthy development
program in the late 1970s that led to a "second generation" service known as Telidon. Telidon was able to deliver service using the vertical
blanking interval of a TV signal or completely by telephone using a Bell 202 style (split baud rate 150/1200) modem. The TV signal was used in
a similar fashion to Ceefax, but used more of the available signal (due to differences in the signals between North America and Europe) for a
data rate about 1200-bit/s. Some TV signal systems used a low-speed modem on the phone line for menu operation. The resulting system was
rolled out in several test studies, all of which were failures.
The use of the 202 modem, rather than the Datapac-ready Bell 212, created severe limitations, as it put the nationwide X.25 packet
network essentially out of bounds for Telidon-based services. There were also many widely held misperceptions concerning the graphics
resolution and colour resolution that slowed business acceptance. Byte magazine once described it as "low resolution", when the coding
system was, in fact, capable of 2^24 resolution in 8-byte mode. There was also a pronounced emphasis in government and Telco circles on
"hardware decoding" even after very capable PC-based software decoders became readily available. This emphasis on special single-purpose
hardware was yet another impediment to the widespread adoption of the system.
Amongst the first services were The Source and CompuServe, both begun in 1979. One of the earliest experiments with marketing videotext to
consumers in the U.S. was by Radio Shack, which sold a consumer videotext terminal, essentially a single-purpose predecessor to the TRS-80
Color Computer, in outlets across the country. Sales were anemic. Radio Shack later sold a videotext software and hardware package for the
Color Computer.
In an attempt to capitalize on the European experience, a number of US-based media firms started their own videotext systems in the early
1980s. Among them were Knight-Ridder, the Los Angeles Times, and Field Enterprises in Chicago, which launched Keyfax. The Fort Worth
Star-Telegram partnered with Radio Shack to launch StarText. (Radio Shack is headquartered in Fort Worth).
Unlike the UK, however, the FCC refused to set a single technical standard, so each provider could choose what it wished. Some selected
Telidon (now standardized as NAPLPS) but the majority decided to use slightly modified versions of the Prestel hardware. StarText used
proprietary software developed at the Star-Telegram. Rolled out across the country from 1982 to 1984, all of the services quickly died and
none, except StarText, remained after another two years. StarText remained in operation until the late 1990s, when it was moved to the web.
The primary problem was that the systems were simply too slow, operating on 300 baud modems connected to large minicomputers. After
waiting several seconds for the data to be sent, users then had to scroll up and down to view the articles. Searching and indexing was not
provided, so users often had to download long lists of titles before they could download the article itself. Furthermore, most of the same
information was available in easy-to-use TV format on the air, or in general reference books at the local library, and didn't tie up your phone
line. Unlike the Ceefax system where the signal was available for free in every TV, many U.S. systems cost hundreds of dollars to install, plus
monthly fees of $30 or more.
Comparison to the Internet today
Some people confuse videotext with the Internet. Although early videotext providers in the 1970s encountered many issues similar to those
faced by Internet service providers 20 years later, it is important to emphasize that the two technologies evolved separately and reflect
fundamentally different assumptions about how to computerize communications.
The Internet in its mature form (after 1990) is highly decentralized in that it is essentially a federation of thousands of service providers whose
mutual cooperation makes everything run, more or less. Furthermore, the various hardware and software components of the Internet are
designed, manufactured and supported by thousands of different companies. Thus, completing any given task on the Internet, such as
retrieving a webpage, relies on the contributions of hundreds of people at a hundred or more distinct companies, each of which may have only
very tenuous connections with each other.
In contrast, videotext was always highly centralized (except for the French Minitel service, which included thousands of information providers
running their own servers connected to the packet-switched network TRANSPAC). Even in videotext networks where third-party companies
could post their own content and operate special services like forums, a single company usually owned and operated the underlying
communications network, developed and deployed the necessary hardware and software, and billed both content providers and users for
access. The exception was the transaction-processing videotext system developed in the UK by Michael Aldrich (1980), which brought
teleshopping into prominence, an idea developed later through the Internet. Aldrich's systems were based on minicomputers that
could communicate with multiple mainframes. Many systems were installed in the UK, including the world's first supermarket teleshopping
system. Nearly all books and articles (in English) from videotext's heyday (the late 1970s and early 1980s) seem to reflect a common
assumption that in any given videotext system, there would be a single company that would build and operate the network.
Although this appears shortsighted in retrospect, it is important to realize that communications had been perceived as a natural monopoly for
almost a century — indeed, in much of the world, telephone networks were then and still are explicitly operated as a government monopoly.
The Internet as we know it today was still in its infancy in the 1970s, and was mainly operated on telephone lines owned by AT&T which were
leased by ARPA. At the time, AT&T did not take seriously the threat posed by packet switching; it actually turned down the opportunity to take
over ARPANET. Other computer networks at the time were not really decentralized; for example, the private network Tymnet had central
control computers called supervisors which controlled each other in an automatically determined hierarchy. It would take another decade of
hard work to transform the Internet from an academic toy into the basis for a modern information utility.
Why did newspaper-oriented videotext services fail to gain enough market share to survive?
i) Expensive dedicated terminals were required for access.
ii) They tied up the family TV and the telephone.
iii) Costs were high for the times, and pricing schemes were complicated.
iv) They had poor messaging capabilities.
Historical overview of the Internet and WWW
Korgen, Odell and Schumacher (2001) define the Internet as "a huge network of computers and smaller networks linked worldwide which allows
people to access information and contact each other." Franklin et al. (2005) trace the origins of the Internet to 1957, when Russia launched
Sputnik, the first artificial satellite, which heralded the beginning of the global communication era. After the launch of Sputnik, the US Advanced
Research Projects Agency (ARPA) began conducting military and academic research to develop an experimental computer network, ARPANET,
which would function even if part of it had been destroyed in war.
Defleur and Dennis (2002) say that the Internet was first used by scientists and university professors. According to these authors, an important
aspect of Internet development was the introduction in 1993 of Mosaic, a Web browser. With a Web browser, a user views Web pages that
may contain text, images, and other multimedia and navigates between them using hyperlinks. The Internet had, however, already taken off in
the 1980s, when the National Science Foundation used ARPANET to link its five regional supercomputer centers (The Internet, 1999).
Origins of the Internet in Kenya
Mweu (2003) asserts that the Internet first became available in Kenya to a small group of technical enthusiasts in 1993. It was accessed only
through Gopher, which offered access to text-based information. The African Regional Centre for Computing (ARCC), a non-governmental
organization based in Nairobi, became the first provider of Web-based information services by providing its subscribers with the first-ever
Web browser, Mosaic. Connection to the global Internet was through analogue leased lines. While there are differences over when exactly
the Internet came to Kenya, it is evident that the country tapped into the global public accessibility of the Internet when it first emerged in the
early 1990s.
Kenya has experienced phenomenal growth in Internet use and is ranked fifth among the top ten countries in Africa by number of Internet users.
The country also leads East Africa in the number of Internet users, with approximately 1.5 million, while Uganda and Tanzania have 500,000 and
333,000 users respectively (The 2006 Kenya ICT strategy; Internet World Stats: Usage and Population Statistics, 2007).
The World Wide Web, often abbreviated as WWW, the Web or W3, is an important Internet service or resource developed by Tim Berners-Lee in
1990-91; it took off in 1993, when a freely available Web browser called Mosaic started the Web revolution. The World Wide Web is
"a global web of interconnected pages which (ideally) can be read by any computer with a Web browser and Internet connection" (Gauntlett,
2000). Gauntlett further notes that the WWW and the Internet are not the same: the Web is something that runs on the Internet. It is the
popular face of the Internet, but it is not the same as the Internet, which is a network of computers.