Network Integrity and Information Assurance

Professor Stewart D. Personick, Drexel University
Introduction
Ever since men and women have been able to express, in words, their thoughts and
feelings, they have expressed the high value they place on their privacy. Furthermore,
ever since men and women have competed for the acquisition of wealth and power, they
have recognized the importance of accurate and timely information in achieving their
goals; and the benefits to themselves of denying such information to their competitors
and adversaries.
If we look back at British Common Law, we find the doctrine: “a man’s home is his
castle”… an expression of the importance of privacy, and of the sanctity of one’s
personal dwelling. In the U.S. Constitution, there are guarantees against “unreasonable
search and seizure”… again highlighting the importance of privacy. In the United States,
wiretaps require a court order, and the subject of privacy is continually being raised in the
context of the evolving information age. For example, there is an ongoing debate between
privacy advocates and law enforcement entities within the United States about the rights
of individuals and businesses to utilize strong encryption methods to protect the privacy
of their electronic communications and their stored data. The introduction of “caller ID”
in the United States raised deep-rooted privacy concerns in the 1990’s, and was argued to
be a violation of wiretapping prohibitions. In many European countries, privacy is even
more stringently protected than in the United States.
If one examines the classic texts on warfare, dating back millennia in some cases, one sees
an awareness of the great value of: knowing where one’s own forces are, knowing their
status, knowing where the enemy’s forces are, knowing their status… and denying such
information to the enemy. Warfighters speak of the “fog of war” (i.e., the lack of needed
information about one’s own forces and about the enemy’s forces). In planning for 21st
century warfare, U.S. strategy is based on “information superiority”, “network-centric
(coordinated) warfare”, “cooperative engagement”, and “getting inside the enemy’s
decision cycle” (i.e., knowing or guessing what the enemy is going to do next, before the
enemy’s commanders even know it themselves).
In the 1990’s and continuing at an accelerating pace, U.S. industry, and that of other
developed nations, made great strides in productivity and in the speed at which
successful new products are brought to the marketplace through the application of
information technology. The ability to create, process, move, and store information has
made possible such things as the paperless design of the Boeing 777 aircraft. The creation
of the World Wide Web is changing nearly every aspect of business-to-business and
business-to-consumer commerce. Individuals have unprecedented, convenient Web-based
access to on-line information. More and more critical systems (such as power grid
management systems) are being networked in increasingly sophisticated ways to gain
efficiencies through coordinated, rapid response.
A result of this is an increasing dependence of developed nations, particularly the United
States, on their evolving networking infrastructures. Accidental disruptions of those
infrastructures, or intentional acts of mischief, criminal activity, or terrorism can produce
very serious and widespread damage. Such damage could include not only financial
losses, but also losses of life and unnecessary human suffering.
Thus, as we move forward into the information age … gaining the visible benefits of
timely access to information … we must be increasingly concerned about how we can
protect that information from unauthorized access and unauthorized modification, how
we can ensure that the information we receive and use is authentic, and how we can
ensure that the networks over which that information is communicated are reliable and
dependable in the face of accidents and attacks.
Classical Approaches to Protecting Information
How can one protect information? One can begin to answer this question by first listing
the types of threats or attacks that one might wish to protect against:
-Unauthorized access to the information
-Unauthorized, undetected modification of the information
-Destruction of the information
-Denial of access to the information by authorized personnel or computer systems
-Receipt of information that is not authentic (i.e., not from the source it is claimed to
originate from, and/or modified since it was originally created)
The above threats and attacks are, in some cases, special cases or generalizations of each
other.
The classical methods for protecting against unauthorized access to information are all
based on making the protected information inaccessible to unauthorized persons…or at
least making it possible for an intended recipient to detect unauthorized access. Such
classical methods include:
-using a trusted courier to deliver a message to a designated recipient
-placing a message or document in a locked box (or safe) whose key (or combination) is
available only to authorized personnel
-recording a message or a document in a language that is only understood by authorized
personnel
-utilizing a mathematically-based coding method to transform a message or a document
into an unreadable form that can only be decoded (made readable again) by authorized
personnel
-sealing a document or a message in an envelope in such a way that the authorized
recipient can detect that the envelope has been opened
-hiding a message or document in a manner that will make it undetectable, except to
authorized personnel (e.g., a piece of microfilm hidden in the frame of a picture)
Classical methods for protecting information against unauthorized, undetected
modification include:
-making the information physically inaccessible to unauthorized personnel (e.g., locking
the information in a safe)
-using trusted individuals to guard the information, and to ensure that it is not modified
by unauthorized personnel
-recording the information in a medium that cannot be changed, or where attempts to
change the information will be readily detected (e.g., writing the information in ink on
paper that cannot be erased without visibly damaging the paper)
Classical methods for ensuring the authenticity of information have utilized:
-signatures
-trusted witnesses (e.g., notary stamps and signatures)
-sealed containers that can be traced back to a known source (e.g., a wax seal on an
envelope, impressed with the sender’s ring insignia)
All of the above classical methods of protecting information have modern counterparts
that are based on the science and technology of cryptography.
Protecting Communication Services and Information Assets from Accidental or
Malicious Disruption
Traditional methods of protecting communication services and information assets from
accidental or malicious disruption have relied on
-control of physical access to equipment and facilities (guards and ID badges)
-the use of physically redundant equipment and facilities
-establishment of safety and security procedures, and associated training and certification
of personnel authorized to perform specific actions
In the context of modern information networking infrastructures, these methods can still
be applied, but the complexity, ubiquity and interconnectedness of networked systems
and their components present difficult challenges. Furthermore, software defects create
new types of correlated failure mechanisms that are difficult to alleviate through
redundancy.
Cryptography as a Tool for Protecting Information, Systems, and Services
While, as we will discuss below, there is no complete solution, or set of solutions, to the
challenges of providing network security and information assurance… cryptography is an
important tool for implementing many of the partial solutions that are currently employed
or that will be employed in the future.
Cryptography, as applied specifically to encryption, involves the use of mathematical
transformations to make information unreadable until these transformations are reversed.
A desirable attribute of any encryption method is for such a transformation to be
extremely difficult to reverse, except by authorized persons who have the necessary
decryption “key”.
A very simple cryptographic method, called the Caesar cypher (after Julius Caesar, who
allegedly used it) is often employed by children to encrypt their secret messages.
Example:
Message (not encrypted): Meet me in the cafeteria after gym class
Encrypted Message:
Phhw ph lq wkh fdihwhuld diwhu jbp fodvv
This simple Caesar cypher is easily decrypted….particularly if one knows what the
transformation process is… even if one does not know the decryption “key” in advance.
While a child could readily explain this cypher, in simple terms… here is the
mathematical expression of the algorithm:
Y = [X + K] modulo 26, where
X is the position in the alphabet (0 for “a” through 25 for “z”) of any letter of the message (not encrypted)
K is the encryption key…a number between 1 and 25
Y is the position in the alphabet of the corresponding letter of the encrypted message
In this case, there are only 25 possible encryption keys, and the simplest way to decrypt
the encrypted message (without having the key to begin with) is to try all 25 possibilities.
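The cypher and its decryption can be sketched in a few lines of Python (a hypothetical helper; letters are indexed 0–25 so the modulo-26 arithmetic works directly, and case and spacing are preserved):

```python
def caesar(text, key):
    """Caesar cypher: shift each letter by `key` positions (mod 26),
    preserving case and leaving spaces and punctuation unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

plain = "Meet me in the cafeteria after gym class"
cipher = caesar(plain, 3)           # encrypt with key K = 3
print(cipher)                       # Phhw ph lq wkh fdihwhuld diwhu jbp fodvv
assert caesar(cipher, -3) == plain  # decryption is just shifting back
```

A brute-force attack needs at most 25 calls to `caesar` with the candidate keys.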
Another encryption method, that was used in World War II, is the “one time pad”. In its
simplest form, the one time pad could be a book of statistically independent random
integers, chosen from the range: 0-25. To encrypt a message written with the English
alphabet, consisting only of letters from that alphabet, one could use a variation of the
Caesar cypher. In this case, unlike the simple Caesar cypher, the number added (modulo
26)… to the numerical order, in the alphabet, of each successive letter of the message…
is not the same number for all letters of the message. Instead, it is a different number for
each successive letter of the message….chosen sequentially from the book (pad) of
random numbers. To decode the message, one needs a copy of the book of random
numbers, and one has to know which number in the book was chosen for the starting
point in performing the cypher. One can prove that it is impossible to decrypt a message
that has been encrypted using a one time pad (without a copy of the pad), provided that
the same portion of the sequence of random numbers contained in the pad is never used
more than once to encode a message….thus the name “one time pad”.
A problem with the one time pad method is in the creation and controlled distribution of
the pads (sequences of random numbers) to pairs or groups of authorized holders of those
pads (which are the encryption and decryption keys).
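A minimal sketch of the one time pad in Python (function and variable names are illustrative; a real pad must come from a true random source, be distributed securely, and never be reused):

```python
import random

def otp(text, pad, start, decrypt=False):
    """Shift each letter (a-z) of `text` by successive numbers from the pad,
    beginning at offset `start`; subtracting the same numbers decrypts."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        k = pad[start + i]
        out.append(chr((ord(ch) - ord('a') + sign * k) % 26 + ord('a')))
    return ''.join(out)

# The shared "book" of statistically independent random integers, 0-25.
pad = [random.SystemRandom().randrange(26) for _ in range(1000)]

cipher = otp("meetmeatnoon", pad, start=0)
assert otp(cipher, pad, start=0, decrypt=True) == "meetmeatnoon"
```

Both parties need the same `pad` and the same `start` offset, which is exactly the key-distribution burden described above.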
Consider an intermediate approach between the simple-to-break Caesar cypher, and the
one time pad. Suppose one has a pad of random numbers 0-25, as in the one time pad, but
this pad is used over and over again to encrypt (and decrypt) messages….each time
starting with a randomly chosen number in the sequence. This starting point must be
agreed to by the sender and the recipient of the message, both of whom have a copy of
the pad.
If an eavesdropper were to intercept one message, encrypted this way, he or she would
not be able to decrypt it (assuming that the message is short enough so that it does not
require multiple uses of the pad’s sequence of random numbers). However, if an
eavesdropper were to receive multiple messages encrypted with the same pad, there are
various things he or she might do to decrypt the messages. As a simple example, suppose
the eavesdropper receives a message that is encrypted with the pad; and suppose he or
she happens to obtain an unencrypted version of that same message. Given both the
encrypted and the unencrypted message, the eavesdropper can determine what the
sequence of random numbers (used to encrypt the message) was. Now the eavesdropper
could try different offsets of this sequence to decrypt the other messages.
It is generally the case that an eavesdropper’s job of decrypting an encrypted message is
made easier if the eavesdropper has access to an unencrypted portion of that same
message (or both the encrypted and unencrypted versions of another message encrypted
with the same encryption key).
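The known-plaintext recovery described above amounts to one subtraction per letter. A sketch, with a hypothetical intercepted message pair:

```python
def recover_pad_segment(plaintext, ciphertext):
    """Subtracting the plaintext letters from the ciphertext letters (mod 26)
    reveals the pad numbers that were used to encrypt."""
    return [(ord(c) - ord(p)) % 26 for p, c in zip(plaintext, ciphertext)]

# Hypothetical pair: "attack" was encrypted with the pad segment [3, 1, 4, 1, 5, 9]
assert recover_pad_segment("attack", "duxbht") == [3, 1, 4, 1, 5, 9]
# The recovered segment can now be slid along other intercepted messages
# (trying different offsets) to decrypt traffic that reused the same pad.
```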
Private Key Cryptography
Modern encryption methods, used in electronic commerce and other applications of
information technology, are all currently based on variations of the concept shown in
Figure 1.
At the left, there is a cryptographic encoder which accepts a message (typically a
sequence of 1s and 0s of any desired length), and an encryption key (typically a sequence
of 1’s and 0’s that is 56-256 bits long), and performs a mathematical transformation on
the message which is believed to be exceedingly hard to reverse unless one has the proper
decryption key. We say that the transformation is “believed to be” exceedingly hard to
reverse because mathematicians have not been able to prove this to be the case. They can,
however, prove that the difficulty of reversing the encryption is at least equal to the
difficulty of solving other problems that are believed to be exceedingly hard.
At the right of Figure 1, a cryptographic decoder can decrypt the message if it is provided
with the decryption key that is the complement of the encryption key.
Figure 1: Private key cryptography. The sender encrypts with a secret key, and the recipient decrypts with the same secret key.
In many cryptographic systems, the encryption key and the decryption key are the same,
and they must be kept private (secret)…i.e., known only to the sender, and the authorized
recipient(s). This is referred to as symmetric or private key cryptography.
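As a toy illustration of this symmetric arrangement in Python (a keystream derived by hashing the shared key with a counter; this sketch is not secure, and real systems use vetted ciphers such as DES or AES):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand the shared secret key into a pseudorandom byte stream
    by hashing the key together with a running counter."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call both encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

shared_key = b"known only to sender and recipient"
ct = crypt(shared_key, b"wire $100 to account 42")
assert crypt(shared_key, ct) == b"wire $100 to account 42"  # same key decrypts
```

The essential point of Figure 1 is visible in the last two lines: one shared secret serves both directions of the transformation.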
Public Key Cryptography
In the 1970’s, cryptographers invented a cryptographic system that eliminates the need to
distribute secret keys to all of the authorized parties to a specific private communication.
This system is referred to as public key cryptography.
Public key cryptography utilizes a principle that is analogous to a typical pad lock.
Suppose I were to place a set of open padlocks in a basket, in a publicly accessible place,
under the watchful eye of a guard. Anyone who wants to send me a private message can
come in and request one of the padlocks. If you want to send me a private message, you
write it on a piece of paper, place it in a lockable container, and lock it with one of my
padlocks. To lock the padlock, you only have to snap it shut. However, once snapped
shut, only I can open it…because only I know the combination. Thus anyone can send me
a private message, without having to know the combination of the padlocks I use.
In public key cryptography (see Figure 2) we rely on the existence of “one-way
functions”. One-way functions are mathematical transformations that have the property
that they are easy to apply, but exceedingly difficult to reverse…unless one has an
associated secret key. Associated with every secret key, there is a corresponding public
key that is used to apply the one-way function to a message. The public key is not a
secret. In fact, it is desirable to have a public key (a sequence of 1s and 0s, typically 256
bits long) that everyone knows, and everyone recognizes as your public key.
Figure 2: The concept of public-key cryptography. The sender encrypts with the recipient’s public key; only the holder of the corresponding private (secret) key can decrypt.
To use public key cryptography to send private messages, you must know the public key
of each intended recipient, and you must encrypt your message, separately, for each
recipient, using his or her public key. However, since the public keys are not secret, they
can be stored in a readily accessible directory. There is, however, the need to have some
trust in the entity that maintains the directory, so that you know that when you obtain
Jane Doe’s public key, it is really Jane Doe’s public key (and not someone else’s public
key).
Question: In public key cryptography, what entity is analogous to the guard who watches
the basket of padlocks? Why?
Digital Signatures
The invention of public key cryptography has led to a number of important information
protection applications that go beyond encrypting a message.
One of these important applications is the implementation of a digital signature.
The purpose of a digital signature is to be able to prove that a specific document was
created by a specific individual…and that no part of the document has been changed
since it was signed. Note that encryption of the document (preventing it from being read by
unauthorized persons) is something that can be done in addition to applying a digital
signature, but is not necessarily required when applying a digital signature.
The concept of a digital signature is shown in Figure 3. As a first step in applying a
digital signature, one must create a “hash” from the document to be signed. A hash is a
summary of the document formed by applying a mathematical algorithm (transformation)
to the document itself. It has the following properties:
If one creates a hash from a document, and if one then changes the document in any way
(i.e., even changing one bit of the digitized document from a “one” to a “zero”), then the
next time a hash is created from the modified document it will not match the original
hash. Thus any change in the document can be detected by comparing the current hash of
the document to the original hash.
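This property is easy to demonstrate with a standard hash function such as SHA-256 (used here as a stand-in for whatever hash algorithm the signing system employs):

```python
import hashlib

document = b"Pay to the order of Jane Doe: $100"
tampered = b"Pay to the order of Jane Doe: $900"   # a one-character change

h1 = hashlib.sha256(document).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()
assert h1 != h2   # any modification yields a completely different hash
assert hashlib.sha256(document).hexdigest() == h1   # same document, same hash
```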
Figure 3: Digital signatures. A hash of the message is computed and encrypted with the sender’s private key to form the signature.
As shown in Figure 3, the sender computes the hash of the document and appends, to it,
other (optional) information such as the date and time the document was created. The
sender then encrypts the hash using the sender’s private (not public) key from the
sender’s private-public key pair. The encrypted message can be decrypted by anyone,
since it requires only the sender’s public key, from the sender’s public key-private key
pair, to decrypt it. However, no one except the sender (the holder of the private key) can
construct an encrypted message that will decrypt properly with the sender’s public key.
Thus, if the message decrypts properly with John Doe’s public key, it must have been
encrypted (sent) by John Doe (using his private key).
Now, to verify that the message was sent by John Doe, and has not been changed in any
way since he sent it, the recipient first decrypts the encrypted hash using John Doe’s
public key (which can be found in a trusted directory of public keys). The recipient then
compares the decrypted hash to the hash that is locally computed from the received
document. If the two hashes match, then the document has not been changed in any way
since John Doe sent it.
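The sign-then-verify flow can be sketched with textbook RSA and deliberately tiny primes (illustrative only: real keys are thousands of bits long, and real signature schemes add padding and use the full document hash):

```python
# Key generation (toy sizes)
p, q = 61, 53
n = p * q                             # modulus, part of both keys
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (Python 3.8+)

def sign(hash_value: int) -> int:
    """Encrypt the hash with the sender's PRIVATE key."""
    return pow(hash_value, d, n)

def verify(hash_value: int, signature: int) -> bool:
    """Decrypt the signature with the sender's PUBLIC key and compare hashes."""
    return pow(signature, e, n) == hash_value

h = 1234                  # stand-in for a document hash, reduced mod n
sig = sign(h)
assert verify(h, sig)           # authentic and unmodified
assert not verify(h + 1, sig)   # any change to the document is detected
```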
Electronic Commerce Applications of Cryptography
Figure 4 shows an example of an electronic commerce application of cryptography. John
Doe wishes to purchase books from Books.com using his credit card; and he wants to
ensure that his credit card number is not disclosed to anyone eavesdropping on his Web
access session. In addition, John wants to make sure that the Web site he is connected to
is really Books.com (and not someone who has managed to change the Domain Name
System translation of Books.com to a fake Internet address).
After John logs on to www.Books.com, and selects the books he wants to purchase, he
clicks on the “check out” hot spot. The Books.com Web server sends John a message
containing the current date and time, encrypted with its private key (digitally signed).

Figure 4: An electronic commerce protocol. The client obtains the server’s public key and uses it to decrypt the server’s signed message; the client then sends an encrypted session key to the server, and both sides use the session key thereafter.

John uses the Books.com public key (which his browser can obtain from a trusted
directory service) to decrypt the message... thus verifying that he is, indeed, connected to
Books.com. John then uses Books.com’s public key to send the Web server a secret key
to be used for symmetric private key cryptography between John and the Web site. Using
this shared secret key, John and the Web server can now exchange private information,
such as John’s credit card number.
The reason that John reverts to symmetric private key encryption is that it is
computationally easier to perform symmetric private key encryption and decryption.
Thus John used public key cryptography only to send a symmetric private key to the Web
server.
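The key-transport step of Figure 4 can be sketched with textbook RSA and deliberately tiny primes (hypothetical server key pair; a real exchange uses full-size keys inside an authenticated protocol such as TLS):

```python
import random

n, e = 3233, 17    # server's public key, obtained from the trusted directory
d = 2753           # server's private key (known only to the server)

# Client: pick a random symmetric session key, send it under the public key.
session_key = random.SystemRandom().randrange(2, n)
on_the_wire = pow(session_key, e, n)

# Server: recover the session key with the private key.
recovered = pow(on_the_wire, d, n)
assert recovered == session_key
# Both sides now switch to cheaper symmetric encryption keyed by session_key.
```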
Protecting Networks and Computers from Unauthorized Access
Organizations and individuals who use computers and computer networks to conduct
their business and personal activities generally prefer or require that these computers and
networks only be accessed by authorized individuals, and only for limited, authorized
purposes. For example, a bank may authorize all of its employees to access the bank’s E-mail applications, but only a limited subset of those employees may be authorized to
access customers’ account information. Furthermore, the bank may wish to accept E-mail
from anyone connected to the global Internet, but it may wish to prevent anyone from
“outside” the bank’s own internal network from downloading files from the bank’s
computers. I may prefer that a password be required to activate and access my personal
computer; and I may make some of my files accessible by other computers (e.g., via the
Internet when I am logged on to the Internet), but only on a “read-only” basis.
Thus, mechanisms are needed to implement such access control policies. Note that a
policy is an expression of what users and owners of networks and computers (and
controlled information contained in computers) would like to happen (or not happen). A
mechanism is a technical (or other) approach for implementing a policy. Door locks,
surveillance cameras, human guards, and window bars are all mechanisms that can be
employed to implement a policy of keeping unauthorized personnel out of a restricted
area. Likewise, passwords, firewalls, and intrusion detection systems are all mechanisms
that can be employed for keeping unauthorized users and unauthorized applications out of
restricted networks and computers.
Protected Enclaves
One of the mechanisms that is employed to prevent unauthorized access to, or
unauthorized use of restricted networks and computers is to create protected enclaves, as
shown in figure 5. A protected enclave defines a collection of protected physical
networking and computing assets (e.g., computers, local area networking links and
nodes) that are contained within the boundaries of the enclave. The boundaries could
be physical (e.g., a guarded building or a campus with walls around it) or conceptual (a
physical box is either inside or outside of the enclave, by designation). A key element in
the successful implementation of a protected enclave is that there is only one, or a very
limited number of “gateways” connecting the enclave to the outside world. It is at these
gateways that various other mechanisms are employed to implement access control
policies, and to protect the networks and computers within the enclave from cyber
attacks.
Note that within an enclave, there may be other enclaves, as shown in figure 6. This
enables, for example, an organization to implement policies where some employees are
authorized to access some computers and applications, but not others. Enclaves within
enclaves may have increasingly higher degrees of protection from unauthorized access,
at the cost of increasing inconvenience for authorized personnel
trying to access those enclaves. For example, authorization to access a highly protected
enclave may require the entry of two secret passwords, held by different individuals.
Figure 5: A protected enclave, connected to the rest of cyberspace through a gateway computer.
Figure 6: Enclaves within enclaves. Enclaves 1.1 and 1.2 are nested inside a larger enclave, which connects to the rest of cyberspace.
Firewalls
A firewall is an application or set of applications, implemented on a computer that acts as
the gateway to a protected enclave from “the rest of cyberspace”. Its purpose is to allow
only authorized users to execute authorized applications within the enclave, when
accessing those applications from outside of the enclave. Examples of policy-driven
functions that might be performed by a firewall are:
-Requiring users to identify themselves with a password, or other form of identification,
before allowing them to “log on” to any machine connected to the enclave’s internal
network.
-Screening all TCP connections at the time they are established to determine whether
they have been pre-authorized… based on such things as the source and/or destination IP
address, whether the connection is being initiated from within the enclave or from outside
of the enclave, the port number that the connection is originating from or terminating on.
-Terminating applications, such as mail, to check for viruses or other harmful code…and
then forwarding those applications into the enclave.
Passwords and Other Forms of Identification
One common way to control access to the computing resources and stored
data/information within an enclave is to require a user who is attempting to log on to a
host machine inside of the enclave to present a login-ID and a password that have been
memorized by the user. A problem with this approach is that users are required to
remember many different passwords, in different formats (e.g., at least six symbols,
including one symbol that is not a letter of the alphabet) to log onto different
applications/systems/networks/enclaves. As a result, they often resort to writing their
passwords down, and posting them in places where those passwords may be seen by
others. Another problem with this approach is that passwords may be “sniffed” by
eavesdroppers who are monitoring unprotected communication links that lead to the
enclave.
One way to solve the password sniffing problem is to use public key cryptography to
encrypt passwords before sending them over links that might be compromised. In fact,
the process of establishing an encrypted tunnel (channel) using public key cryptography,
for subsequent use of symmetric private key cryptography (as described above and shown
in figure 4) will serve the purpose of identifying oneself to a protected network. This
requires the user to have an access appliance that is capable of implementing public key
cryptography, and some method for storing his or her complementary private and public
keys. For example, these keys could be stored in a “smart card”.
An alternative solution to the password sniffing problem and the memorization problem
is to use a “one time password”. An example of the one time password methodology is
shown in figure 7. The associated product is called SecurID®. The computer which will
accept or reject the passwords that are presented by users has access to a database of
secret “seed” sequences, each of which is associated with a SecurID card that is carried
by its authorized user. These same seed sequences are contained (one each) in the users’
SecurID cards. A SecurID card is a “token” (a physical object) that also contains: a 6
digit LCD display, an embedded processor that executes a cryptographic algorithm, and a
clock with an accuracy of a few seconds per month. Every 60 seconds the cryptographic
algorithm within the SecurID card is used to calculate a six digit number that is based on
the stored seed sequence and the current time. This number is displayed on the card’s
LCD display, and is used as a one time password. When a user attempts to log on to the
protected network or machine, the one time password presented by that user is compared
to a one time password that is locally generated by accessing the user's stored seed
sequence and performing the cryptographic algorithm based on the local time. If they
match, the user is given access. This password is accepted only once. The user’s log-in
ID is used to determine what seed sequence to select from the set of stored seed
sequences. To compensate for small time differences between the clock in the user’s
token and the local clock used for authentication, the local system can calculate the six
digit number using three different times: the current time, one a minute ahead, and one a
minute behind; and it can accept a match to any of these. An important vulnerability of this system is the
set of stored seed sequences. These must be carefully protected from compromise.
Figure 7: A SecurID password generator. A seed, a cryptographic algorithm, and a clock inside the card produce a six-digit one-time password (e.g., 607385).
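The scheme can be sketched with an HMAC-based stand-in for the card’s algorithm (the real SecurID algorithm is proprietary and differs; the seed, window handling, and function names here are illustrative):

```python
import hashlib, hmac, time

def one_time_password(seed: bytes, t: float) -> str:
    """Six-digit code derived from the shared seed and the current
    60-second interval."""
    interval = int(t // 60)
    mac = hmac.new(seed, interval.to_bytes(8, "big"), hashlib.sha1).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 1_000_000:06d}"

def accept(seed: bytes, code: str, now: float) -> bool:
    """Compare against codes for the current interval and one interval
    either side, to tolerate clock drift between the token and the server."""
    return any(code == one_time_password(seed, now + drift)
               for drift in (-60, 0, 60))

seed = b"secret seed shared by the card and the server database"
now = time.time()
assert accept(seed, one_time_password(seed, now), now)
```

Note that the server-side seed database plays exactly the role of the vulnerability discussed above: anyone who copies it can generate every user’s passwords.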
Biometric Authentication
As an alternative to passwords, one could employ various types of “biometric”
identification methods. These include such things as voice print identification, fingerprint
identification, and iris scans. In each case, one must obtain and store, for each individual
to be authenticated, a digital representation of the bio-attribute to be tested; one must
make this representation accessible to an authorizing entity; and one must measure this
attribute locally (e.g., perform an iris scan on the individual seeking access) at the time of
authentication.
In all biometric authentication systems, one must ask:
-Given the means that an imposter might reasonably employ, how easy would it be for
the imposter to fool the system….e.g., what is the probability of a false acceptance?
-Given the variability of an individual’s bio attributes (e.g., a head cold will affect your
vocal cords), and the variability of the measurement process, what is the probability that
an authorized individual will be denied access?
Packet Filters
Most Internet firewalls perform a packet filtering function on individual IP packets
(datagrams). For example:
-One might refuse to admit packets that have specified source addresses (although source
addresses are easily faked) that are associated with administrative domains that have
sloppy security policies.
-One might refuse to accept a packet arriving from outside of an enclave whose source
address belongs to a host that is known to be located inside of the enclave.
-One might refuse to accept a TCP connection set-up packet (a “SYN” packet) from a
source address that is outside of the enclave, except those addressed to certain
tightly-controlled destination hosts inside the enclave (e.g., a carefully administered mail server).
The objective of packet filtering is to attempt to control the machines and the types of
applications and data that can be accessed from outside of an enclave by various users.
Configuring a firewall to protect an enclave is a complex and error-prone task that always
involves a compromise between the level of protection provided and the inconvenience
experienced by authorized users (whose legitimate applications may be blocked).
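The packet-filtering rules above can be sketched as a simple admission function (addresses and policy are hypothetical; real firewalls express such rules in their own configuration languages):

```python
# A sketch of enclave packet-filtering rules.
ENCLAVE_PREFIX = "10.1."              # source addresses inside the enclave
BLOCKED_SOURCES = {"192.0.2.7"}       # domains known for sloppy security
MAIL_SERVER = "10.1.0.25"             # only host allowed to accept external SYNs

def admit(packet: dict) -> bool:
    src, dst = packet["src"], packet["dst"]
    if src in BLOCKED_SOURCES:                               # blocked domain
        return False
    if packet["from_outside"] and src.startswith(ENCLAVE_PREFIX):
        return False                        # spoofed inside source address
    if packet["from_outside"] and packet["syn"] and dst != MAIL_SERVER:
        return False                        # unauthorized connection set-up
    return True

# A packet from outside claiming an inside source address is refused:
assert not admit({"src": "10.1.2.3", "dst": MAIL_SERVER,
                  "from_outside": True, "syn": False})
# A connection set-up to the carefully administered mail server is allowed:
assert admit({"src": "198.51.100.9", "dst": MAIL_SERVER,
              "from_outside": True, "syn": True})
```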
Denial-of-Service Attacks
A denial-of-service attack is an attack on a network whose principal purpose is to
overload or damage shared network resources (e.g., routers, servers) so as to make the
services and applications provided by those shared resources inaccessible or less
accessible to legitimate, authorized network users. There have been several denial-of-service attacks that have occurred recently, and that have been reported in the general
press. There have also been several major outages of network services and network-based
applications that have been caused by accidents, but which have, in effect, resulted in
denials of service for the associated authorized network users.
Most of the malicious denial of service events have been executed by sending large
numbers of messages to a targeted victim computer (or set of computers), overloading the
computer(s), and significantly reducing the level of service available to legitimate users.
One example of this is the “SYN flooding” attack… where the attacker arranges for a
large number of TCP connection set-up (SYN) messages to be sent to a targeted victim
server. The set-up of these TCP connections is never completed, but the state information
that corresponds to these partially set-up connections fills up the buffer in the server
computer that stores that state information. This prevents new, legitimate connections
from being established. There are a number of “band aid” fixes for defending against
SYN flooding attacks, but this type of attack remains an open problem that has not yet
been completely solved.
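The mechanism of SYN flooding can be illustrated with a toy model of a server’s half-open connection backlog. The fixed capacity of 128 and the class design are illustrative assumptions; the point is that state held for handshakes that are never completed crowds out legitimate clients.

```python
from collections import OrderedDict

class SynBacklog:
    """Toy model of the buffer that holds partially set-up TCP connections."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.half_open = OrderedDict()  # conn_id -> handshake state

    def on_syn(self, conn_id) -> bool:
        """Return True if the SYN was queued, False if the backlog is full."""
        if len(self.half_open) >= self.capacity:
            return False  # legitimate connection attempts are now refused
        self.half_open[conn_id] = "SYN_RECEIVED"
        return True

    def on_ack(self, conn_id):
        """Handshake completed; the entry leaves the backlog."""
        self.half_open.pop(conn_id, None)

backlog = SynBacklog(capacity=128)
# The attacker sends SYNs (typically from spoofed addresses) and never
# completes the handshakes, so their state is never released.
for i in range(128):
    backlog.on_syn(("spoofed", i))
accepted = backlog.on_syn(("legitimate-client", 443))
# accepted is now False: the backlog is exhausted.
```

Defenses such as shorter timeouts or stateless “SYN cookies” attack exactly this bottleneck: they reduce or eliminate the per-handshake state the server must hold.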
As another example, attackers have managed to place malicious code in large numbers
of computers that belong to unwitting third parties (“zombie” computers). Then a
command is sent to all of the zombie computers that causes them to send large numbers
of connection and information requests to a targeted victim server (or collection of servers).
The requests from the zombies overload the server(s), and result in difficulties for
legitimate users trying to access the server(s). Meanwhile, the victim server(s) cannot
easily distinguish between requests originated by zombies and those originated by
legitimate users.
As an example of an unintentional denial-of-service attack… a system administrator
accidentally uploaded corrupted data to the principal servers that form the Internet Domain
Name System (DNS). As a result, the domain name-to-IP address translations that are
required for most e-mail and Web applications became unavailable for several hours for
most of the Internet’s users, and longer for some users.
To a limited extent, denial-of-service events (attacks or accidents) can be avoided by the
application of common-sense network design and operational procedures. For example,
one should avoid designing a network in such a way that physical damage to a single
network asset, or a single error, on the part of a network administrator, can cause a
service disruption that affects large numbers of users for a long period of time.
However, computer networks are susceptible to much more subtle denial-of-service
attacks than simply damaging a single shared network asset.
The challenge, therefore, is to invent and implement technologies and methodologies for:
making attacks harder to launch, detecting subtle attacks, limiting the scope of the
disruptions they cause, and rapidly restoring (to the extent possible) critical services and
applications.
Using Encryption to Close Some Entry Points
Although the use of cryptography to prevent unauthorized persons or machines from
accessing information being sent through networks, and to authenticate messages, will
not remove all of the vulnerabilities of networks to denial-of-service attacks,
cryptography can be used to eliminate some types of easy opportunities that attackers
currently exploit.
For example, routers (Internet packet switches) continuously exchange information
among themselves to establish and update their routing tables. The routing tables are used
to direct incoming packets to appropriate outgoing ports that lead toward their ultimate
destinations…based on address and label information carried by those packets. This
process of communication between routers is formalized in what is called a routing
protocol. In most of today’s IP networks it is relatively easy for an intruder to break into a
network link and send malicious routing protocol messages that could, after being acted
upon by the routers, disrupt the flows of packets to their intended destinations. The
industry is moving toward the implementation of standard methods of encrypting the
information that is sent over the links between routers, and applying digital signatures to
routing protocol messages used to update routing tables, in order to remove this
vulnerability.
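The idea of authenticating routing protocol messages can be sketched with a keyed message authentication code (here HMAC-SHA256 from Python’s standard library). The shared key and the update format are illustrative assumptions, not any real routing protocol; production routers would use standardized key management and, in some designs, full digital signatures rather than a shared-key MAC.

```python
import hashlib
import hmac

# Hypothetical shared secret, provisioned on both ends of a router-router link.
SHARED_KEY = b"router-pair-secret"

def sign_update(update: bytes) -> bytes:
    """Attach a keyed MAC to a routing-table update."""
    return hmac.new(SHARED_KEY, update, hashlib.sha256).digest()

def verify_update(update: bytes, tag: bytes) -> bool:
    """A router acts on an update only if the MAC verifies."""
    expected = hmac.new(SHARED_KEY, update, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

update = b"route 192.0.2.0/24 via port 3"
tag = sign_update(update)
assert verify_update(update, tag)
# An intruder who breaks into the link but lacks the key cannot forge
# an acceptable update, nor alter one in transit without detection:
assert not verify_update(b"route 192.0.2.0/24 via port 1", tag)
```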
Viruses, Worms, Trojan Horses, and Other Forms of Malicious Executable Code
Almost everyone who uses a computer has heard of computer viruses. Most viruses are
fragments of executable code that cannot execute on a stand-alone basis, but that can
attach themselves to stand-alone executable applications, or to files that are used by
stand-alone executable applications. For instance, many viruses are designed to infect
documents (files) that are opened by Microsoft® desktop productivity applications like
Word®. Those documents are designed to include, as an option, pieces of executable
code called macros, that are associated with certain useful features of the applications.
The viruses incorporate themselves into (attach to) files, and spread when files are shared
using networks or disks. When an infected document is opened by its associated
application, and the virus code is executed, the virus replicates itself and causes the
computer to find and attach the replica to another, uninfected file. Some viruses are
relatively benign, in that they don't cause any serious problems to the computers whose
files or applications they infect. However, those same viruses can be modified to cause
major damage to the computers they infect… such as destroying files, and reformatting
hard drives. Simple viruses are detected by virus scanning applications that look for the
tell-tale signatures that correspond to the executable virus code. However, sophisticated
viruses can transform themselves on every replication to make virus scanning much more
complex and/or less effective.
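Signature scanning, as described above, amounts to searching files for known byte patterns. The sketch below uses made-up placeholder signatures; real scanners use large, frequently updated signature databases and much faster matching algorithms.

```python
# Hypothetical signature database: virus name -> tell-tale byte pattern.
SIGNATURES = {
    "example-virus-a": b"\xde\xad\xbe\xef\x90\x90",
    "example-virus-b": b"PLACEHOLDER-PAYLOAD",
}

def scan(data: bytes) -> list:
    """Return the names of any known signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

clean = b"ordinary document contents"
infected = b"ordinary document" + b"\xde\xad\xbe\xef\x90\x90" + b"trailer"
assert scan(clean) == []
assert scan(infected) == ["example-virus-a"]
```

A virus that transforms (e.g., re-encrypts) its body on every replication defeats this kind of fixed-pattern match, which is exactly why such “polymorphic” viruses make scanning much harder.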
Other examples of malicious code are: worms that can execute on a stand-alone basis,
and propagate, for example, by sending replicas of themselves to victims’ mailing lists;
trojan horses that take the form of unwanted “features” that are included in other useful
software that is downloaded or otherwise loaded on to a victim computer; “time bombs”
that are activated long after they are inserted into a victim machine; etc.
Protecting computer systems from malicious code is an extremely difficult problem.
Some approaches that are the subjects of ongoing research include: running all
applications in a “sandbox” (e.g., Java) that tightly constrains the resources and files that
an application can access; or requiring applications to declare, in advance, what resources
they need to access, how often, etc., and then closely monitoring the execution of each
application to ensure that only approved access privileges are exercised (i.e., running the
application in an application-specific sandbox). Challenges associated with these
approaches include ensuring that the sandbox doesn’t have any vulnerabilities that allow
applications to “get out”.
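The declare-in-advance approach can be sketched as a monitor that rejects any access an application did not declare. The class and resource names are illustrative assumptions; a real sandbox would enforce this at the operating-system or virtual-machine level, not in application-level Python.

```python
class Sandbox:
    """Toy application-specific sandbox: the application declares, in
    advance, the resources it needs, and the monitor blocks everything else."""
    def __init__(self, declared_resources):
        self.declared = set(declared_resources)

    def access(self, resource: str) -> str:
        if resource not in self.declared:
            # Undeclared access is refused and could also be logged as a
            # possible indication of malicious code.
            raise PermissionError(f"undeclared resource: {resource}")
        return f"access granted: {resource}"

# Hypothetical declaration for a mail-reading application.
app_box = Sandbox(declared_resources={"/data/report.txt", "network:mail"})
assert app_box.access("/data/report.txt") == "access granted: /data/report.txt"
try:
    app_box.access("/etc/passwd")  # never declared, so it is blocked
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```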
The Attacker’s Advantage
An attacker, attempting to execute a denial-of-service attack, or any other type of attack,
can take plenty of time to plan his or her attack. The attacker can attempt to exploit any
vulnerabilities he or she discovers. Meanwhile, the defender has to anticipate and defend
against every possible attack. Thus, the attacker has the advantage… and it is unlikely that
the defender will be able to anticipate and prevent every conceivable attack.
A Strategy for Defending Against Network Attacks
The strategy that is currently being pursued within the industry (including government
agencies responsible for creating strategies and methods for protecting national
infrastructures) has been summarized as:
-Protect
-Detect
-Respond
While this strategy will not, in and of itself, solve all of the problems associated with
potential attacks on networks, it presents a path forward for reducing the vulnerabilities
of networks to attacks.
The first dimension of the strategy is “protect”. This recognizes the value and importance
of eliminating and reducing known vulnerabilities, to the extent possible, in order to
increase the sophistication, level of effort, and time required for an attacker to execute an
attack. Part of the strategy is to implement “layered” defenses, where the objective is
to cause an attacker to penetrate multiple, different types of defenses that slow the
attacker down and increase the likelihood that the attacker will be detected.
The second dimension of the strategy is to “detect”. While very difficult to implement,
the objective is to monitor and distill such things as actions of specific users, traffic
levels, and other indications of normal and abnormal network behavior which,
collectively, would produce useful indications of an attack in progress. For example, one
might consider unusual actions by a particular type of user or a particular individual user
as one of many indications of a possible attack. [Why is a person who services retail
customers in a bank trying to access a server at a defense department facility?]
The difficulties here are not limited to the technical challenges of deciding what to
monitor, how to do the monitoring, and how to distill and react to all of the information
that is collected. There are also very significant issues related to protection of privacy of
individuals and organizations.
The measure of success of an intrusion detection system is the probability that a real
attack will be detected rapidly enough to take useful preventive and recovery actions,
combined with the frequency with which “false alarms” are generated.
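One of the simplest forms of the monitoring described above is flagging traffic samples that deviate sharply from a learned baseline. The sketch below uses a standard-deviation test on requests-per-minute counts; the threshold and the data are illustrative assumptions, and a real intrusion detection system would combine many such signals rather than rely on one.

```python
import statistics

def is_anomalous(baseline, sample, threshold=3.0) -> bool:
    """Flag a sample that lies more than `threshold` standard deviations
    from the mean of the baseline measurements."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(sample - mean) / stdev > threshold

# Requests per minute observed during normal operation (hypothetical data).
normal_traffic = [100, 104, 98, 101, 99, 103, 97, 102]
assert not is_anomalous(normal_traffic, 105)  # within normal variation
assert is_anomalous(normal_traffic, 500)      # a possible attack in progress
```

The trade-off the text describes is visible in the `threshold` parameter: lowering it catches subtler attacks but raises the false-alarm rate, and vice versa.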
One important strategy for implementing intrusion detection systems is to design such
systems in ways that foster and enable improvements and refinements to be incorporated
into those systems. Thus, for example, as new detection and distillation/integration
technologies and methodologies are created, one wants the cost of inserting those new
technologies and methodologies into existing intrusion detection systems to be as low as
possible.
This strategy has the potential to neutralize the attacker’s advantage. If we assume that
there are orders of magnitude more defenders than attackers… each of whom can
contribute to the improvement of the intrusion detection system(s), then hopefully the
likelihood of detection will be very high, even though attackers can attempt to discover
and exploit any vulnerabilities.
The third dimension of the overall strategy is “respond”. If we can quickly devise and
execute a response that contains the attack, and repairs the damage caused by an attack,
then the impact of the attack will be minimized. Key issues are the speed and accuracy
with which we can diagnose the nature of an attack (at least the nature of the damage it
has caused) and devise a set of proposed recovery actions. These recovery actions may
have to be approved by human decision makers, but it is far better to diagnose the
problems that need to be dealt with, and to devise a recovery plan in 20 seconds than to
do so in 20 hours.
Summary
As societies, organizations, and individuals become increasingly dependent upon
network-based applications, and the networks that provide the infrastructure for those
applications, concerns about privacy, protection of data from unauthorized access,
protection of intellectual property rights, and the dependability of networks and
applications become increasingly important.
Technology provides some tools and methods for addressing these concerns, but there
are many open issues where new tools and methods are needed.
Protecting networks from attacks is an ongoing “journey, and not a destination”. While
we can anticipate significant progress in reducing and eliminating vulnerabilities to
attacks, clever, patient, and resourceful attackers will continue to discover new
vulnerabilities. Detecting and responding to network attacks are, therefore, a necessary
part of any defensive strategy.