
Network Security

Contents
OSI Reference Model
Network Security Threats
  o Denial of Service (DoS) Attack
  o Distributed Denial of Service (DDoS) Attack
  o IP Spoofing
  o Network Scanners and Sniffers
  o Virus
  o Worm
  o Trojan Horse
Securing Networks
Secure Network Devices
Bibliography
Appendix A – Glossary
Appendix B – Presentation Hand-outs
The rapid growth of the Internet has fuelled the demand for universal connectivity.
However, the open environment of the Internet is a double-edged sword. While the
migration from private to public networks has made it possible for any organization to
extend the global reach of its business, it also exposes the enterprise to a larger variety of
security threats. The recent number of high-profile security breaches experienced by
prominent industry players reveals the inherent vulnerabilities in operating business-critical applications over public IP-based networks.
According to a study conducted by the Computer Security Institute:
Estimated computer crime losses range from $300M to $500B annually
Computer fraud in the U.S. alone exceeds $3B each year
Computer security breaches are rising 20% per year
91% of survey respondents detected computer security breaches
94% detected computer viruses
91% detected employee abuse of Internet access privileges
40% detected system penetration from the outside
Less than 1% of all computer intrusion cases are detected
34% of detected cases are reported
There are over 3,000 hacker web sites
Many organizations think that if they have a firewall or intrusion detection system (IDS)
in place, then they have covered their security bases. Experience bears out that this is not
the case. While important, a firewall alone cannot provide 100 percent protection. In
addition, the rate at which new security threats are unleashed into the public Internet is
phenomenal. Yesterday’s security architectures and general-purpose tools cannot keep
up. For this reason, incumbent network security policies and systems require ongoing
scrutiny in order to remain effective. As Internet access continues to expand and the
quantity and speed of data transfer increase, new security measures must be adopted. In
order to understand how best to secure the enterprise against newer-generation security
threats, it is first necessary to understand the nature of the problem.
2. The ISO/OSI Reference Model
The International Organization for Standardization (ISO) Open Systems Interconnection
(OSI) Reference Model defines seven layers of communication and the interfaces
among them. (See Figure) Each layer depends on the services provided by the layer
below it, all the way down to the physical network hardware, such as the computer's
network interface card, and the wires that connect the cards together.
(Fig: OSI Reference Model)
Physical Layer - Transmits raw bits over a medium
Data Link Layer - Organizes bits into frames; handles node-to-node delivery
Network Layer - Provides internetworking
Transport Layer - Provides end-to-end message delivery
Session Layer - Establishes, manages, and terminates sessions
Presentation Layer - Translates, encrypts, and compresses data
Application Layer - Provides access to network resources
TCP/IP (Transmission Control Protocol/Internet Protocol) is the "language" of the Internet.
Anything that can learn to "speak TCP/IP" can play on the Internet. This functionality
occurs at the Network (IP) and Transport (TCP) layers of the ISO/OSI Reference
Model. Consequently, a host that has TCP/IP functionality (such as Unix, OS/2, MacOS,
or Windows NT) can easily support applications (such as Netscape's Navigator) that use
the network.
Open Design
One of the most important features of TCP/IP isn't a technological one: The protocol is
an “open” protocol, and anyone who wishes to implement it may do so freely. Engineers
and scientists from all over the world participate in the IETF (Internet Engineering Task
Force) working groups that design the protocols that make the Internet work. Their time
is typically donated by their companies, and the result is work that benefits everyone.
TCP is a transport-layer protocol. It needs to sit on top of a network-layer protocol, and
was designed to ride atop IP. (Just as IP was designed to carry, among other things, TCP
packets.) Because TCP and IP were designed together, and wherever you have one you
typically have the other, the entire suite of Internet protocols is known collectively as
“TCP/IP”. TCP itself has a number of important features that we'll cover briefly.
Guaranteed Packet Delivery
Probably the most important is guaranteed packet delivery. Host A sending packets to
host B expects to get acknowledgments back for each packet. If B does not send an
acknowledgment within a specified amount of time, A will resend the packet.
Applications on host B will expect a data stream from a TCP session to be complete, and
in order. As noted, if a packet is missing, it will be resent by A, and if packets arrive out
of order, B will arrange them in proper order before passing the data to the requesting
application.
This is well suited to a number of applications, such as a telnet session. A user
wants to be sure every keystroke is received by the remote host, and that it gets every
packet sent back, even if this means occasional slight delays in responsiveness while a
lost packet is resent, or while out-of-order packets are rearranged.
It is not well suited to other applications, such as streaming audio or video,
however. In these, it doesn't really matter if a packet is lost (a lost packet in a stream of
100 won't be distinguishable) but it does matter if they arrive late (i.e., because of a host
resending a packet presumed lost), since the data stream will be paused while the lost
packet is being resent. Once the lost packet is received, it will be put in the proper slot in
the data stream, and then passed up to the application.
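TCP's guaranteed, in-order delivery can be seen in a minimal sketch: an echo exchange over the loopback interface. The server, port choice, and message below are illustrative, not from the text; the point is that the application simply reads a complete, in-order byte stream, while retransmission and reordering happen inside TCP.

```python
# Minimal sketch: TCP delivers a byte stream complete and in order.
# A trivial echo server on the loopback interface; names and the
# message are illustrative.
import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)  # TCP handles retransmission of lost segments

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))       # 0 = pick any free port
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=echo_server, args=(server_sock,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"every keystroke arrives, in order")
client.shutdown(socket.SHUT_WR)          # signal end of our stream
received = b""
while chunk := client.recv(1024):
    received += chunk
client.close()
print(received)
```

The application never sees segments, acknowledgments, or retransmissions; it just reads until the peer closes the stream.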
IP
As noted, IP is a "network layer" protocol. This is the layer that allows the hosts to
actually "talk" to each other. It handles such things as carrying datagrams, mapping
Internet addresses to physical network addresses (such as 08:00:69:0a:ca:8f), and
routing, which takes care of making sure that all of the devices that have Internet
connectivity can find the way to each other.
IP has a number of very important features which make it an extremely robust and
flexible protocol. For our purposes, though, we're going to focus on the security of IP, or
more specifically, the lack thereof.
UDP
UDP (User Datagram Protocol) is a simple transport-layer protocol. It does not provide
the same features as TCP, and is thus considered "unreliable." Although this makes it
unsuitable for some applications, it is a better fit than the more robust TCP for many
others.
One of the things that makes UDP nice is its simplicity. Because it doesn't need to keep
track of the sequence of packets, whether they ever made it to their destination, and so on,
it has lower overhead than TCP. This is another reason why it's better suited to
streaming-data applications: there is less work involved in making sure all the packets are
there, in the right order, and that sort of thing.
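By contrast with TCP, a UDP exchange needs no handshake at all; each datagram is self-contained. A minimal sketch (addresses and the message are illustrative):

```python
# Minimal sketch: UDP sends self-contained datagrams with no handshake,
# no acknowledgments, and no retransmission (hence "unreliable").
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # 0 = pick any free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"one datagram, fire and forget", addr)  # no connection setup

data, _ = receiver.recvfrom(1024)        # on loopback this will arrive,
print(data)                              # but UDP itself promises nothing
sender.close()
receiver.close()
```

On a real network a lost datagram is simply gone; the application, not the transport, decides whether that matters.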
4. Network Security Threats
There are various types of network threats. We will discuss a few very common threats
here –
Denial of Services (DoS) Attack
DoS (Denial-of-Service) attacks are probably the nastiest, and most difficult to address.
They are the nastiest because they're very easy to launch, difficult (sometimes
impossible) to trace, and it isn't easy to refuse the requests of the attacker without also
refusing legitimate requests for service.
The premise of a DoS attack is simple: send more requests to the machine than it can
handle. There are toolkits available in the underground community that make this a
simple matter of running a program and telling it which host to blast with requests. The
attacker's program simply makes a connection on some service port, perhaps forging the
packet's header information that says where the packet came from, and then drops the
connection. If the host is able to answer 20 requests per second, and the attacker is
sending 50 per second, obviously the host will be unable to service all of the attacker's
requests, much less any legitimate requests (hits on the web site running there, for
example).
Such attacks were fairly common in late 1996 and early 1997, but are now becoming less
frequent.
Some things that can be done to reduce the risk of being stung by a denial of service
attack include:
Not running your visible-to-the-world servers at a level too close to capacity
Using packet filtering to prevent obviously forged packets from entering your
network address space. Obviously forged packets would include those that claim
to come from your own hosts, addresses reserved for private networks as defined
in RFC 1918 [4], and the loopback network (127.0.0.0/8).
Keeping up-to-date on security-related patches for your hosts' operating systems.
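The packet-filtering advice above can be sketched with Python's standard `ipaddress` module. The "own network" prefix 192.0.2.0/24 is a hypothetical example (it is a documentation range, not from the text); the forged ranges follow RFC 1918 plus loopback.

```python
# Sketch of an ingress filter: reject packets arriving from the Internet
# whose source address is obviously forged. The "own network" prefix
# 192.0.2.0/24 is hypothetical.
import ipaddress

OWN_NETWORK = ipaddress.ip_network("192.0.2.0/24")   # hypothetical example
FORGED_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918 private
    ipaddress.ip_network("172.16.0.0/12"),   # RFC 1918 private
    ipaddress.ip_network("192.168.0.0/16"),  # RFC 1918 private
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    OWN_NETWORK,                             # claims to be one of our own hosts
]

def obviously_forged(src_ip: str) -> bool:
    """True if an externally arriving packet with this source should be dropped."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in FORGED_RANGES)

print(obviously_forged("10.1.2.3"))      # private range arriving from outside
print(obviously_forged("192.0.2.7"))     # spoofing one of our own hosts
print(obviously_forged("198.51.100.9"))  # plausible external source
```

A real router would express the same logic as access-list entries, but the first-match check is the same idea.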
Ping of Death Attack
The most common kind of DoS attack is simply to send more traffic to a network address
than the programmers who planned its data buffers anticipated someone might send. The
attacker may be aware that the target system has a weakness that can be exploited or the
attacker may simply try the attack in case it might work. A few of the better-known
attacks based on the buffer characteristics of a program or system include:
1. Sending e-mail messages that have attachments with 256-character file names to
Netscape and Microsoft mail programs
2. Sending oversized Internet Control Message Protocol (ICMP) ping packets (ping is
sometimes said to stand for Packet InterNet Groper)
3. Sending to a user of the Pine e-mail program a message with a "From" address
larger than 256 characters
The attacker doesn't need to know anything about the machine he is attacking, except its
IP address.
e.g. on Windows: ping -l 65527 -s 1 <hostname>
SYN Attack
When a session is initiated between the Transmission Control Protocol (TCP) client and
server in a network, a very small buffer space exists to handle the usually rapid
"handshaking" exchange of messages that sets up the session. The session-establishing packets
include a SYN field that identifies the sequence in the message exchange. An attacker
can send a number of connection requests very rapidly and then fail to respond to the
reply. This leaves the first packet in the buffer so that other, legitimate connection
requests can't be accommodated. Although the packet in the buffer is dropped after a
certain period of time without a reply, the effect of many of these bogus connection
requests is to make it difficult for legitimate requests for a session to get established. In
general, this problem depends on the operating system providing correct settings or
allowing the network administrator to tune the size of the buffer and the timeout period.
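The backlog exhaustion described above can be illustrated with a toy model, not a real TCP stack; the backlog size and timeout values are illustrative only.

```python
# Toy model of a SYN backlog: a fixed-size table of half-open connections
# that only frees an entry when it times out. Not a real TCP implementation;
# the size and timeout are illustrative.
class SynBacklog:
    def __init__(self, size, timeout):
        self.size = size          # how many half-open entries fit
        self.timeout = timeout    # seconds before a half-open entry is dropped
        self.half_open = {}       # source address -> arrival time

    def syn(self, src, now):
        """A SYN arrives; return True if we can hold it, False if refused."""
        # Expire entries whose handshake never completed.
        self.half_open = {s: t for s, t in self.half_open.items()
                          if now - t < self.timeout}
        if len(self.half_open) >= self.size:
            return False          # backlog full: even legitimate SYNs refused
        self.half_open[src] = now
        return True

backlog = SynBacklog(size=8, timeout=30)
# Attacker floods spoofed SYNs at t=0 and never answers the SYN-ACK.
for i in range(8):
    backlog.syn(f"spoofed-{i}", now=0)

print(backlog.syn("legitimate-client", now=5))   # refused while flooded
print(backlog.syn("legitimate-client", now=31))  # accepted once entries expire
```

This is why tuning the buffer size and the timeout period, as the text notes, directly affects how well a host weathers a SYN flood.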
Teardrop Attack
This type of denial of service attack exploits the way the Internet Protocol (IP) requires
a packet that is too large for the next router to handle to be divided into fragments.
Each fragment carries an offset that identifies its position in the original packet, enabling
the entire packet to be reassembled by the receiving system. In the teardrop attack, the
attacker's IP stack puts a confusing (overlapping) offset value in the second or a later
fragment. If the receiving operating system does not have a plan for this situation, it can
cause the system to crash.
Smurf Attack
In this attack, the perpetrator sends an IP ping (or "echo my message back to me") request
to a receiving site. The ping packet specifies that it be broadcast to a number of hosts
within the receiving site's local network. The packet also indicates that the request is from
another site, the target site that is to receive the denial of service. (Sending a packet with
someone else's return address in it is called spoofing the return address.) The result will
be lots of ping replies flooding back to the innocent, spoofed host. If the flood is great
enough, the spoofed host will no longer be able to receive or distinguish real traffic.
Distributed Denial of Service
Another form of DoS is the DDoS attack. This uses an array of systems connected to the
Internet to stage a flood attack against a single site. Once hackers have gained access to
vulnerable Internet systems, software is installed on the compromised machines that can
be activated remotely to launch the attack. Although recent DDoS attacks have been
launched from both private corporate and public institutional systems, hackers tend to
favor university networks as launch sites because of their open, distributed nature.
Programs used to launch DDoS attacks include Trin00, TribeFlood Network (TFN),
TFN2K and Stacheldraht.
(Fig: Distributed Denial of Service Attack)
5. IP Spoofing
Criminals have long employed the tactic of masking their true identity, from disguises to
aliases to caller-id blocking. It should come as no surprise then, that criminals who
conduct their nefarious activities on networks and computers should employ such
techniques. IP spoofing is one of the most common forms of on-line camouflage. In IP
spoofing, an attacker gains unauthorized access to a computer or a network by making it
appear that a malicious message has come from a trusted machine by “spoofing” the IP
address of that machine. In this section, we will examine the concepts of IP spoofing: why
it is possible, how it works, what it is used for and how to defend against it.
There are a few variations on the types of attacks that successfully employ IP spoofing.
Although some are relatively dated, others are very pertinent to current security concerns.
Non-Blind Spoofing
This type of attack takes place when the attacker is on the same subnet as the victim. The
sequence and acknowledgement numbers can be sniffed, eliminating the potential
difficulty of calculating them accurately. The biggest threat of spoofing in this instance
would be session hijacking. This is accomplished by corrupting the data stream of an
established connection, then re-establishing it based on correct sequence and
acknowledgement numbers with the attack machine. Using this technique, an attacker
could effectively bypass any authentication measures taken to establish the connection.
Blind Spoofing
This is a more sophisticated attack, because the sequence and acknowledgement numbers
are unreachable. In order to circumvent this, several packets are sent to the target
machine in order to sample sequence numbers. While not the case today, machines in the
past used basic techniques for generating sequence numbers. It was relatively easy to
discover the exact formula by studying packets and TCP sessions. Today, most OS’s
implement random sequence number generation, making it difficult to predict them
accurately. If, however, the sequence number is compromised, data can be sent to the
target. Several years ago, many machines used host-based authentication services (e.g.,
rlogin). A properly crafted attack could blindly add the requisite data to a system (e.g., a
new user account), enabling full access for the attacker, who was impersonating a trusted
host.
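The difference between the old, guessable sequence-number schemes and modern randomized generation can be sketched as follows. The fixed increment of 64000 is purely illustrative, not the value any particular stack used.

```python
# Sketch: a predictable initial-sequence-number (ISN) generator, as older
# stacks used, versus a randomized one. The increment (64000) is purely
# illustrative.
import secrets

class PredictableISN:
    def __init__(self, start=0):
        self.isn = start
    def next_isn(self):
        self.isn = (self.isn + 64000) % 2**32   # fixed step: easy to guess
        return self.isn

def random_isn():
    return secrets.randbelow(2**32)             # unpredictable per connection

gen = PredictableISN()
observed = gen.next_isn()                       # attacker samples one ISN
predicted = (observed + 64000) % 2**32          # and predicts the next one
print(predicted == gen.next_isn())              # the prediction succeeds
```

Against `random_isn()` the same sampling attack yields nothing: each value is drawn independently from the full 32-bit space.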
Man In the Middle Attack
Both types of spoofing are forms of a common security violation known as a man in the
middle (MITM) attack. In these attacks, a malicious party intercepts a legitimate
communication between two friendly parties. The malicious host then controls the flow
of communication and can eliminate or alter the information sent by one of the original
participants without the knowledge of either the original sender or the recipient. In this
way, an attacker can fool a victim into disclosing confidential information by “spoofing”
the identity of the original sender, who is presumably trusted by the recipient.
Misconceptions of IP Spoofing
While some of the attacks described above are a bit outdated, such as session hijacking
for host-based authentication services, IP spoofing is still prevalent in network scanning
and probes, as well as denial of service floods. However, the technique does not allow for
anonymous Internet access, which is a common misconception for those unfamiliar with
the practice. Any sort of spoofing beyond simple floods is relatively advanced and used
in very specific instances such as evasion and connection hijacking.
(Fig: IP Spoofing)
Network Scanners and Sniffers
A "sniffer" is a program that monitors communications on a local area network, or LAN.
Many LANs are made up of shared Ethernet network segments on which all systems
communicate using the same physical medium. Practically any system on these shared
Ethernet LANs can be turned into a sniffer that can be used to steal the passwords of users
connecting to and from hosts on that LAN.
Sniffers work by monitoring the communication flow on a LAN to find when someone
begins using a network service, such as a terminal emulator session using "telnet", a file
transfer session using "ftp", or a remote electronic mail session using IMAP or POP.
All these services are handled with "protocols", and each protocol, or service, has its
own identifying number. When you connect from one computer to another using a
particular service, it's like making a call to a switchboard, where an operator asks
what extension you want and then transfers your call, going back to wait patiently to
accept a new call.
Similar to the diplomatic term, "protocols" are strict rules that define how a particular
session is established, how your account is identified and authenticated, and how the
service is used. It is the authentication part of these protocols, which occurs at the start of
every session, that the sniffer gathers.
The first part of many protocols goes something like this:
A: Hello COMPUTER B? I'd like to start a file transfer session.
B: Hello, COMPUTER A. For whom should I transfer files?
A: USER "swar" would like to transfer files.
B: What is the PASSWORD for "swar"?
A: The PASSWORD is "saurabhsule".
B: That matches the password for "swar" that I have stored; "saurabhsule"
may now transfer files.
...and so on.
Password Sniffing
To understand how the sniffer works, let's use an analogy of the LAN as a hallway in a
building, with each room being a computer.
Each room (computer) has a doorway connecting it to the hall (the network), and there is
a person standing in each doorway (a "network interface") to facilitate communication. A
client is a person sitting in one room, and they will communicate with a server, which is a
person sitting in another room.
The client and server communicate by sending each other postcards (which are the
"packets" of information that travel on a real LAN). Each postcard has a source address
(the client's identification and room where that postcard is sent FROM) and a destination
address (the room where the postcard is going TO). The server is also identified, by its
service, or protocol, number (FTP, used for file transfer, is service #21).
To handle just the first part of this protocol (establishing the FTP connection), someone
in room A addresses a postcard to someone in room B, requesting an FTP session, and
the postcard is passed out into the hallway. Each network interface sees each postcard as
it travels down the hall. If the postcard is not addressed to someone in that room, the
interface ignores the postcard and nobody inside the room sees it.
If, on the other hand, the interface is put into a special mode called "promiscuous mode",
that is like the person standing at the door making a photocopy of every postcard it sees
and passing it into the room to someone (the sniffer) who asks to see every postcard.
They aren't supposed to do this, but there is nothing to stop them in this scenario, and no
way to tell they are doing it (sniffing is a passive activity that leaves no trace on the
network itself; it does, however, leave a trace on the computer being used as the sniffer).
Playing out the protocol for transferring files shown above, but on postcards this time, the
sniffer in room C ends up with a stack of postcards that look like this:
From: A, To: B, service FTP – connect
From: B, To: A, service FTP -- connection accepted, USER?
From: A, To: B, service FTP -- USER swar
From: B, To: A, service FTP -- PASSWORD?
From: A, To: B, service FTP -- PASSWORD saurabhsule
From: B, To: A, service FTP -- READY
The sniffer only cares about the first few postcards that start the session, because this is
where all the good information is found. In this case, the sniffer makes a note in their
sniffer log that looks something like this:
Computer A => Computer B [FTP]
USER swar
PASS saurabhsule
This shows that we made an FTP connection to an account named "swar" on computer B,
and that our password is "saurabhsule". The person reading the log can also infer
that we may also have an account on computer A (if it is another Unix system and not a
single-user PC or Macintosh) and that the odds are good that we have the same password
on that system.
The key is that the sniffer is (a) able to monitor the communication channel and (b) our
password travels the channel in readable form, often called "clear text".
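The danger of clear-text authentication can be shown by "sniffing" the example dialogue above as text. The captured session below is a reconstruction of that example (server banners are made up); extracting the credentials is trivial string matching.

```python
# Sketch: why clear-text protocols are dangerous. Given captured FTP
# control traffic (a reconstruction of the example dialogue above),
# pulling out the credentials is trivial.
captured = """\
220 computerB FTP server ready
USER swar
331 Password required for swar
PASS saurabhsule
230 User swar logged in
"""

def extract_credentials(capture: str):
    """Return the (user, password) pair sent in the clear, if present."""
    user = password = None
    for line in capture.splitlines():
        if line.startswith("USER "):
            user = line.split(" ", 1)[1]
        elif line.startswith("PASS "):
            password = line.split(" ", 1)[1]
    return user, password

print(extract_credentials(captured))   # → ('swar', 'saurabhsule')
```

A real sniffer does exactly this over raw packets instead of a string, which is why encrypted alternatives replace clear-text logins.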
Typical uses of such sniffing programs include:
Logging network traffic.
Solving communication problems, such as finding out why computer A cannot
communicate with computer B.
Analyzing network performance. Bottlenecks in the network can be discovered, or
the part of the network where data is lost due to congestion can be found.
Detecting network intruders (hackers/crackers).
Retrieving the user names and passwords of people logging on to the network.
Virus
A virus is basically a computer program coded to wreak havoc on the computer. This
code performs unwanted tasks such as producing sounds, displaying crazy graphics, or
even crashing the hard disk.
Different types of viruses
Boot sector virus
These viruses infect either the master boot record of the hard disk or the floppy
drive. The boot record program responsible for the booting of the operating
system is replaced by the virus. The virus either copies the master boot program
to another part of the hard disk or overwrites it. E.g. Michelangelo, Stone
File or program virus
These infect program files like files with extensions like .exe, .com, .bin, .drv and
.sys. Some file viruses just replicate while others destroy the program being used
at that time.
Multipartite virus
They are a hybrid between file and boot viruses. They first infect the boot sector
and once they are loaded into the memory they start infecting other files on the
hard disk.
Stealth virus
These viruses avoid detection by redirecting disk reads or misreporting the size of
infected program files so as to hide from antivirus software.
Polymorphic virus
These viruses have the ability to mutate implying that they change the viral code
known as the signature each time they spread or infect.
Macro virus
These viruses are malicious Visual Basic (macro) applications which create lots of
problems on our PC. E.g. the Melissa virus.
Worm
A worm is a self-replicating program that does not alter or damage files, but rather
duplicates itself to other computers over the network, the Internet, or through e-mail.
Worms often go undetected until their attempts to spread overwhelm the computer or
network, bringing everything to a grinding halt.
e.g. Klez (sends itself to users listed in the Windows address book using forged return
addresses), Bugbear (spreads across Windows network shares and causes printers to spew
garbage), SQL Slammer (targets unpatched Microsoft SQL Servers)
Trojan Horse
The Trojan legacy began with an ancient myth, according to which, during the Trojan
War, the Greeks presented a wooden horse to their enemy; during the night, Greek
soldiers hidden inside jumped out and defeated the enemy. The term entered the
computing world when "Cult of the Dead Cow" made "Back Orifice", the most
famous Trojan ever, whose port 31337 is one of the most popular numbers.
A Trojan horse is a program that works against a user, more or less like a virus, and is
mostly contained in programs that look legitimate, but have a very dark side. These
Trojans work in the "background", i.e. invisible to you. They do things that can render
you almost powerless. Every Trojan has a specific purpose for which hackers use it.
Most of them are RATs (Remote Administration Tools), which hackers use to attack
unsuspecting users. Merely having a Trojan on your computer is harmless; executing it
causes the problem.
Different types of Trojans
a. Remote Access Tools (RATs)
b. Keylogger Trojans
c. Password Retrievers
d. FTP Trojans
What does a Trojan do
A RAT Trojan runs a server on your computer, which enables the hacker to connect to
your computer and execute various functions. Even if you have some idea on these
Trojans, you most probably won't know that you're infected. This is because newer
Trojans are being developed everyday, that are better and more effective than the older
ones. Powerful Trojans give the hacker more control of your computer than you yourself
have, sitting in front of it! Others just allow some easy fun functions, and still others have
common functions like downloading/uploading. The Trojan also restarts every time you
turn on your computer. As for what a Trojan can do: at worst, it can destroy your entire
system.
How does a Trojan work
A RAT Trojan is mostly contained in bigger programs. So, when you run the program,
you automatically trigger the Trojan. This Trojan runs a server on a particular port, which
will enable the hacker to connect to the port in your computer with utmost ease and do
God knows what! He now has access to all your system resources; if he's using a
powerful Trojan, he can do almost anything. There is nothing you can do to stop him if
you don't know which program is the Trojan or have any clue what it is. The Trojan
then copies itself to a location on your computer that you will almost certainly never
look at, and even if you do see it, you won't realize that it is a Trojan. Then, the Trojan
makes a registry entry or changes the win.ini file, to enable itself to restart every time
you turn on your computer.
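The win.ini persistence trick can be illustrated by scanning an ini-style file for the load= and run= entries a Trojan might add. The file contents and the program name "patch.exe" below are made up for illustration.

```python
# Sketch: spotting the win.ini autostart trick. A Trojan may add itself to
# the "load=" or "run=" lines of the [windows] section so it restarts with
# the machine. The file contents and program name are made-up examples.
import configparser
import io

WIN_INI = """\
[windows]
load=
run=C:\\WINDOWS\\SYSTEM\\patch.exe
"""

def autostart_entries(ini_text):
    """Return the programs listed on the load=/run= lines, if any."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(ini_text))
    entries = []
    for key in ("load", "run"):
        value = parser.get("windows", key, fallback="").strip()
        if value:
            entries.append(value)
    return entries

print(autostart_entries(WIN_INI))
```

An unexpected entry here (or in the Run keys of the registry) is a classic sign of this kind of persistence.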
6. Securing the Networks
Network administrators, besides maintaining computer networks, frequently play a major
role defending an organization’s critical information assets. It is much easier for them to
be effective if they choose a systematic approach to network security. One such method
was developed at the CERT Coordination Center. Called the Security Knowledge in
Practice method (or the “SKiP” method for short), it consists of steps to secure network
software, “harden” a network (make it difficult to break into), detect and respond to
network intrusions, and then to improve the system based on a review of events.
There are seven steps in the SKiP method, each with an associated set of security
practices:
1. Select systems software from a vendor and customize it according to an organization's
needs.
2. Harden and secure the system against known vulnerabilities.
3. Prepare the system so that anomalies may be noticed and analyzed for potential
intrusions.
4. Detect those anomalies and any other system changes that could indicate evidence of
an intrusion.
5. Respond to intrusions when they occur.
6. Improve practices and procedures after updating the system.
7. Repeat the SKiP process as long as the organization needs to protect the system and its
information assets.
Customizing Vendor Software
The first step is to identify tasks a system must perform and configure it to fulfill
essential functions while eliminating those that are unnecessary or vulnerable. Because
securing a system is challenging (especially for a novice administrator) it is often
neglected. In this step, network administrators do the following:
 Eliminate services that are unneeded and insecurely configured
 Restrict access to vulnerable files and directories
 Turn off software “features” that introduce vulnerabilities
 Mitigate vulnerabilities that intruders can use to break into systems
Harden and Secure the Network
In the Harden/Secure step, network administrators configure their system to meet
organizational security requirements, retaining only those services and features needed to
address specific business needs. Securing a system against known attacks eliminates
vulnerabilities and other weaknesses commonly used by intruders. The practices
performed during this step may change over time to address new attacks and
vulnerabilities.
Prepare the System
To meet the challenge of recognizing new vulnerabilities, network administrators
characterize their system in the Prepare step. They work to understand and describe
normal system behavior since any deviations may indicate an intrusion. After completing
a system characterization, an administrator knows what to expect in terms of –
Changes in files and directories and the operating system
Normal processes, when they run, by whom, and what resources they consume
Network traffic consumed and produced
Hardware inventory on the system
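One small piece of the characterization above, detecting changes in files, can be sketched as a hash baseline, in the spirit of file-integrity checkers. The file names below stand in for real system files and are illustrative.

```python
# Sketch of the "changes in files" part of a system characterization:
# record a baseline of file hashes, then compare later to spot changes.
# File names are illustrative stand-ins for real system files.
import hashlib
import os
import tempfile

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

def changed_files(baseline, paths):
    """Return the paths whose contents no longer match the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]

# Demonstration with temporary files standing in for system files.
tmpdir = tempfile.mkdtemp()
f1 = os.path.join(tmpdir, "inetd.conf")
f2 = os.path.join(tmpdir, "passwd")
for p in (f1, f2):
    with open(p, "w") as f:
        f.write("original contents\n")

baseline = snapshot([f1, f2])
with open(f2, "a") as f:
    f.write("intruder::0:0::/:/bin/sh\n")   # simulated tampering

print(changed_files(baseline, [f1, f2]))    # only the tampered file shows up
```

A deviation from the baseline is exactly the kind of anomaly the Detect step then investigates.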
Detect Intrusions
In the Detect step, network administrators monitor the hardened and prepared system to
detect changes. While some changes are predictable and constitute normal behavior,
administrators concentrate on detecting signs of anomalous or unexpected behavior since
it may indicate possible intrusions and system compromise.
Administrators also watch for early warning signs of potential intruder actions such as
scanning and network mapping attempts. This step occurs as administrators monitor
systems running in a production environment (such as looking at the logs produced by a
firewall system or a public web server).
Respond to Intrusions
In the Respond step, the network administrator responds to an intrusion and contains it. A
successful response means that the system is returned to its normal operational capability.
There are many actions that make up this step. For instance, if some unexpected system
behavior caught the attention of the network administrator, they may choose to –
 Analyze the damage caused by the intrusion and respond by adding new
technology or procedures to combat it
 Monitor an intruder’s actions in order to discover all access paths and entry points
before acting to restrict intruder access.
 Eliminate future intruder access
 Return the system to a known, operational state while continuing to monitor and
analyze its behavior
Improve the System
After completing a review of the incident, network administrators improve their system
in this step. They may –
Hold a post-mortem review meeting to discuss lessons learned
Update policies and procedures
Select new tools
Collect data about the resources required to deal with the intrusion and document
the damage it caused
Repeat the Cycle of Steps
Finally, network administrators add any changes they made during the first six steps back
into the system’s characterization baseline. Now operating at a higher level of security,
the system will function as designed until a new security challenge arises. Then the
administrator can once again call upon the SKiP method.
As we've seen in our discussion of the Internet and similar networks, connecting an
organization to the Internet provides a two-way flow of traffic. This is clearly undesirable
in many organizations, as proprietary information is often displayed freely within a
corporate intranet (that is, a TCP/IP network, modeled after the Internet, that works only
within the organization).
In order to provide some level of separation between an organization's intranet and the
Internet, firewalls have been employed. A firewall is simply a group of components that
collectively form a barrier between two networks. A number of terms specific to firewalls
and networking are going to be used throughout this section, so let's introduce them all:
Bastion host
A general-purpose computer used to control access between the internal (private)
network (intranet) and the Internet (or any other untrusted network). Typically, these are
hosts running a flavor of the Unix operating system that has been customized in order to
reduce its functionality to only what is necessary in order to support its functions. Many
of the general-purpose features have been turned off, and in many cases, completely
removed, in order to improve the security of the machine.
Router
A special-purpose computer for connecting networks together. Routers also handle
certain functions, such as routing, or managing the traffic on the networks they connect.
Access Control List (ACL)
Many routers now have the ability to selectively perform their duties, based on a number
of facts about a packet that comes to it. This includes things like origination address,
destination address, destination service port, and so on. These can be employed to limit
the sorts of packets that are allowed to come in and go out of a given network.
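The first-match evaluation a router applies to its ACL can be sketched in a few lines. The rule fields and the default-deny policy below are illustrative assumptions, not any particular router's configuration syntax:

```python
# Minimal sketch of first-match-wins ACL evaluation (illustrative only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str             # "permit" or "deny"
    src: str                # source prefix, e.g. "10.0.0." or "*" for any
    dst_port: Optional[int] # destination service port, or None for any

def matches(rule: Rule, src_ip: str, dst_port: int) -> bool:
    src_ok = rule.src == "*" or src_ip.startswith(rule.src)
    port_ok = rule.dst_port is None or rule.dst_port == dst_port
    return src_ok and port_ok

def acl_decision(rules: list, src_ip: str, dst_port: int) -> str:
    # First matching rule wins; if nothing matches, deny by default.
    for rule in rules:
        if matches(rule, src_ip, dst_port):
            return rule.action
    return "deny"

acl = [
    Rule("permit", "*", 80),         # allow web traffic from anywhere
    Rule("permit", "10.0.0.", None), # allow anything from the internal net
    Rule("deny", "*", None),         # explicit default deny
]

print(acl_decision(acl, "198.51.100.7", 80))  # permit
print(acl_decision(acl, "198.51.100.7", 23))  # deny
```

Real router ACLs work on binary address masks rather than string prefixes, but the ordering and default-deny behavior are the essential points.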
Demilitarized Zone (DMZ)
The DMZ is a critical part of a firewall: it is a network that is neither part of the untrusted
network, nor part of the trusted network. But, this is a network that connects the untrusted
to the trusted. The importance of a DMZ is tremendous: someone who breaks into your
network from the Internet should have to get through several layers in order to
successfully do so. Those layers are provided by various components within the DMZ.
Proxy server
This is the process of having one host act on behalf of another. A host that has the ability
to fetch documents from the Internet might be configured as a proxy server, and hosts on
the intranet might be configured to be proxy clients. In this situation, when a host on the
intranet wishes to fetch the <http://www.interhack.net/> web page, for example, the
browser will make a connection to the proxy server, and request the given URL. The
proxy server will fetch the document, and return the result to the client. In this way, all
hosts on the intranet are able to access resources on the Internet without having the ability
to talk directly to the Internet.
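The relationship just described can be illustrated with a toy sketch. The class names and the canned fetch are invented for illustration; a real proxy would open a network connection to the remote web server:

```python
# Toy illustration of the proxy relationship: the intranet client never
# talks to the Internet directly; only the proxy server does.
def fetch_from_internet(url: str) -> str:
    # Stand-in for the proxy's outbound fetch.
    return f"<html>contents of {url}</html>"

class ProxyServer:
    def handle_request(self, url: str) -> str:
        # The proxy fetches the document on the client's behalf.
        return fetch_from_internet(url)

class ProxyClient:
    def __init__(self, proxy: ProxyServer):
        self.proxy = proxy  # the only host this client ever talks to

    def get(self, url: str) -> str:
        return self.proxy.handle_request(url)

client = ProxyClient(ProxyServer())
print(client.get("http://www.interhack.net/"))
```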
Types of Firewalls
There are three basic types of firewalls, and we'll consider each of them.
Application Gateways
The first firewalls were application gateways, and are sometimes known as proxy
gateways. These are made up of bastion hosts that run special software to act as a proxy
server. This software runs at the Application Layer of our old friend the ISO/OSI
Reference Model, hence the name. Clients behind the firewall must be proxy-aware (that
is, must know how to use the proxy, and be configured to do so) in order to use Internet
services. Traditionally, these have been the most secure, because they don't allow
anything to pass by default, but need to have the programs written and turned on in order
to begin passing traffic.
(Fig: Application Gateway)
These are also typically the slowest, because more processes need to be started in order to
have a request serviced. The figure shows an application gateway.
Packet Filtering
Packet filtering is a technique whereby routers have ACLs (Access Control Lists) turned
on. By default, a router will pass all traffic sent to it, and will do so without any sort of
restrictions. Employing ACLs is a method for enforcing your security policy with regard
to what sorts of access you allow the outside world to have to your internal network, and
vice versa.
There is less overhead in packet filtering than with an application gateway, because
the feature of access control is performed at a lower ISO/OSI layer (typically, the
transport or session layer). Due to the lower overhead and the fact that packet filtering is
done with routers, which are specialized computers optimized for tasks related to
networking, a packet filtering gateway is often much faster than its application layer
cousins. Figure 6 shows a packet-filtering gateway.
Because we're working at a lower level, supporting new applications either comes
automatically, or is a simple matter of allowing a specific packet type to pass through the
gateway. (Note that the possibility of doing something automatically does not make it a
good idea; opening things up this way might very well compromise your level of security
below what your policy allows.)
There are problems with this method, though. Remember, TCP/IP has absolutely no
means of guaranteeing that the source address is really what it claims to be. As a result,
we have to use layers of packet filters in order to localize the traffic. We can't get all the
way down to the actual host, but with two layers of packet filters, we can differentiate
between a packet that came from the Internet and one that came from our internal
network. We can identify which network the packet came from with certainty, but we
can't get more specific than that.
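That two-layer check amounts to comparing a packet's arrival interface with the network its source address claims to belong to. Below is a minimal ingress-filtering sketch; the internal prefix and the interface names are assumptions made for illustration:

```python
# Ingress-filtering sketch: a packet arriving from the "wrong side" for
# its claimed source address is spoofed and should be dropped.
import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")  # assumed internal prefix

def classify(arrival_iface: str, src_ip: str) -> str:
    claims_internal = ipaddress.ip_address(src_ip) in INTERNAL_NET
    if arrival_iface == "external" and claims_internal:
        return "drop: spoofed internal source"
    if arrival_iface == "internal" and not claims_internal:
        return "drop: spoofed external source"
    return "pass"

print(classify("external", "10.1.2.3"))      # claimed internal, came from outside
print(classify("external", "198.51.100.7"))  # consistent, passes
```

As the text notes, this tells you with certainty which network a packet came from, but nothing finer-grained than that.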
Hybrid Systems
In an attempt to marry the security of the application layer gateways with the flexibility
and speed of packet filtering, some vendors have created systems that use the principles
of both.
(Fig.: packet filtering gateway)
Other possibilities include using both packet filtering and application layer proxies. The
benefits here include providing a measure of protection for your machines that
provide services to the Internet (such as a public web server), as well as providing the
security of an application layer gateway to the internal network. Additionally, using this
method, an attacker, in order to get to services on the internal network, will have to break
through the access router, the bastion host, and the choke router.
8. Secure Network Devices
It's important to remember that the firewall guards only one entry point to your network.
Modems, if you allow them to answer incoming calls, can provide an easy means for an
attacker to sneak around your front door (or, firewall). Just as castles weren't built with
moats only in the front, your network needs to be protected at all of its entry points.
Secure Modems
If modem access is to be provided, it should be guarded carefully. The terminal server,
or network device that provides dial-up access to your network, needs to be actively
administered, and its logs need to be examined for strange behavior. Its passwords need
to be strong -- not ones that can be guessed. Accounts that aren't actively used should be
disabled. In short, it's the easiest way to get into your network from remote: guard it
carefully.
There are some remote access systems that have the feature of a two-part procedure to
establish a connection. The first part is the remote user dialing into the system, and
providing the correct user-id and password. The system will then drop the connection,
and call the authenticated user back at a known telephone number. Once the remote user's
system answers that call, the connection is established, and the user is on the network.
This works well for folks working at home, but can be problematic for users wishing to
dial in from hotel rooms and such when on business trips.
Other possibilities include one-time password schemes, where the user enters his user-id,
and is presented with a ``challenge,'' a string of between six and eight numbers. He types
this challenge into a small device that he carries with him that looks like a calculator. He
then presses enter, and a ``response'' is displayed on the LCD screen. The user types the
response, and if all is correct, the login will proceed. These are useful devices for solving
the problem of good passwords, without requiring dial-back access. However, these have
their own problems, as they require the user to carry them, and they must be tracked,
much like building and office keys.
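One common way such a challenge-response token can work is a keyed digest shared between the hand-held device and the server. This is a hedged sketch of the general idea, not the algorithm of any specific product, and the secret key shown is invented:

```python
# Challenge-response one-time password sketch using a keyed digest.
import hmac
import hashlib

SECRET = b"per-user key stored in both token and server"  # illustrative

def response_for(challenge: str) -> str:
    # Both the token and the server compute the same keyed digest of the
    # challenge; only someone holding the key can produce the answer.
    digest = hmac.new(SECRET, challenge.encode(), hashlib.sha256).hexdigest()
    return digest[:8]  # truncated so it is short enough to type by hand

challenge = "48213976"                # the six-to-eight digit challenge
user_typed = response_for(challenge)  # what the token would display
# The server accepts the login if the typed response matches its own.
assert hmac.compare_digest(user_typed, response_for(challenge))
```

Because each challenge is fresh, a response that an eavesdropper captures is useless for a later login, which is exactly the property that makes these devices solve the "good passwords" problem.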
No doubt many other schemes exist. Take a look at your options, and find out how the
vendors' offerings will help you enforce your security policy effectively.
Crypto-Capable Routers
A feature that is being built into some routers is the ability to do session encryption
between specified routers. Because traffic traveling across the Internet can be seen by
people in the middle who have the resources (and time) to snoop around, these are
advantageous for providing connectivity between two sites, such that there can be secure
routes.
Virtual Private Networks
Given the ubiquity of the Internet, and the considerable expense in private leased lines,
many organizations have been building VPNs (Virtual Private Networks). Traditionally,
for an organization to provide connectivity between a main office and a satellite one, an
expensive data line had to be leased in order to provide direct connectivity between the
two offices. Now, a solution that is often more economical is to provide both offices
connectivity to the Internet. Then, using the Internet as the medium, the two offices can
communicate.
The danger in doing this, of course, is that there is no privacy on this channel, and it's
difficult to provide the other office access to “internal” resources without providing those
resources to everyone on the Internet.
VPNs provide the ability for two offices to communicate with each other in such a way
that it looks like they're directly connected over a private leased line. The session
between them, although going over the Internet, is private (because the link is encrypted),
and the link is convenient, because each can see each other’s internal resources without
showing them off to the entire world.
A number of firewall vendors are including the ability to build VPNs in their offerings,
either directly with their base product, or as an add-on. If you need to connect several
offices together, this might very well be the best way to do it.
9. Limitations of Existing Security Solutions
Apart from providing only part of the overall security system needed to secure today’s
business networks, most security methods in use today are limited in terms of
functionality, capacity and performance.
For example, most firewalls use packet-filtering techniques to accept or deny incoming
packets based on information contained in the packets’ TCP and IP headers, such as
source address, destination address, application, protocol, source port number or
destination port number.
Firewalls and IDS solutions take this to the next level with intelligence capabilities that
allow them to look more deeply into packets and provide more granular application
traffic analysis. However, this often causes network latency problems for these solutions
alone. In order to determine if a flow is legitimate traffic, an intelligent firewall needs to
analyze a series of incoming packets in sequence before allowing or blocking each
packet’s entry into the network.
This can take its toll on the firewall’s server processor. If a firewall has to wait for five or
six packets to line up before making the appropriate determination for each packet, this
creates a 500 to 600 percent increase in network latency. This traffic flow slowdown is
counterproductive to the increased bandwidth provided by today’s high-speed WAN
environments. At speeds approaching a full T3 line (45Mbps), today’s firewalls do not
have the throughput to keep pace, especially when network traffic consists primarily of
small packets.
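The small-packet point is easy to make concrete. Counting only payload bits (framing overhead is ignored here for simplicity), the packet rate a filter must sustain on a full T3 grows sharply as packets shrink:

```python
# Packet rate a filter must sustain on a saturated T3, by packet size.
LINK_BPS = 45_000_000  # full T3 line, bits per second

def packets_per_second(packet_bytes: int) -> float:
    return LINK_BPS / (packet_bytes * 8)

for size in (64, 512, 1500):
    print(f"{size:>5}-byte packets: {packets_per_second(size):>10,.0f} pkt/s")
```

A link full of 64-byte packets demands roughly twenty times the per-packet processing of the same link carrying 1500-byte packets, which is why small-packet traffic is the worst case for a firewall's throughput.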
With LAN speeds and Internet link speeds pushed to 100Mbps and beyond, security-
filtering platforms need to be accelerated to keep up with the flow.
An additional limitation in most existing IDS solutions is the inability to entrap network
intruder data and redirect it to a secure location in order to perform forensic analysis. This
can be key to discovering network intrusion patterns and guarding against future attacks.
Furthermore, a current trend by organizations to centralize anti-virus protection on server
or firewall systems further contributes to overall network latency due to the added system
overhead.
Conclusion
Security is a very difficult topic. Everyone has a different idea of what “security” is, and
what levels of risk are acceptable. The key for building a secure network is to define what
security means to your organization. Once that has been defined, everything that goes on
with the network can be evaluated with respect to that policy. Projects and systems can
then be broken down into their components, and it becomes much simpler to decide
whether what is proposed will conflict with your security policies and practices.
Many people pay great amounts of lip service to security, but do not want to be bothered
with it when it gets in their way. It's important to build systems and networks in such a
way that the user is not constantly reminded of the security system around him. Users
who find security policies and systems too restrictive will find ways around them. It's
important to get their feedback to understand what can be improved, and it's important to
let them know why what's been done has been done, the sorts of risks that are deemed
unacceptable, and what has been done to minimize the organization's exposure to them.
Security is everybody's business, and only with everyone's cooperation, an intelligent
policy, and consistent practices, will it be achievable.
 Network Security – Ankit Fadia
 Data Communications and Networking – Behrouz Forouzan
 Site Security Handbook – J.P. Holbrook, J.K. Reynolds, RFC 1244