COMPTIA NETWORK+ STUDY GUIDE:
EXAM N10-008
LIAM SMITH
Copyright © 2022 Liam Smith
All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written
consent of the author.
ISBN:
Table of Contents
INTRODUCTION
CHAPTER 1: HOW TO PASS YOUR N10-008 NETWORK+ EXAM
CHAPTER 2: UNDERSTANDING THE OPEN SYSTEMS INTERCONNECTION (OSI) MODEL
LAYERS OF COMMUNICATION
DIFFERENCES BETWEEN THE OSI MODEL AND TCP/IP
DATA COMMUNICATION
CHAPTER 3: NETWORK TOPOLOGIES
NETWORK CONNECTORS
NETWORK TYPES
NETWORK CABLE
NETWORK ARCHITECTURE
VIRTUAL NETWORKS
CHAPTER 4: WAN TECHNOLOGY
CHAPTER 5: ETHERNET STANDARDS
CABLING BASICS OF THE ETHERNET
CHAPTER 6: INTRODUCTION TO IP
IP PROTOCOL
IP ADDRESSING
ADDRESSING SCHEME
INTERNET PROTOCOL VERSION 6 – IPV6
CHAPTER 7: DNS OVERVIEW
DYNAMIC HOST CONFIGURATION PROTOCOL (DHCP)
NTP
STORAGE AREA NETWORKS
CLOUD MODELS
CHAPTER 8: NETWORKING DEVICES
HUBS
SWITCH
MODEM
BASIC ROUTER
NETWORK CARDS
WIRELESS ACCESS POINT
CHAPTER 9: PROTOCOLS AND ROUTING
VLANS, TRUNKING, AND HOPPING
WIRELESS TECHNOLOGIES
CHAPTER 10: NETWORK MANAGEMENT PRACTICES
IDENTITY, ACCESS, AND ACCOUNT
POLICIES, PLANS, AND PROCEDURES
RISK MANAGEMENT
CHAPTER 11: SECURITY POLICIES
NETWORK SECURITY
CRYPTOGRAPHIC SECURITY
PHYSICAL SECURITY MEASURES
CHAPTER 12: ON-PATH ATTACKS
APPLICATION/SERVICE ATTACK
WIRELESS ATTACKS
MALWARE AND RANSOMWARE
CONCLUSION
Introduction
The CompTIA Network+ certification is an international, vendor-neutral, technology-focused
credential with a core focus on the modern networking industry.
The Network+ exam covers five domains of knowledge: networking fundamentals, network
implementations, network operations, network security, and network troubleshooting.
Individuals who earn the CompTIA Network+ certification have demonstrated the skills required
to identify risks in a given situation; use basic tools such as ping, telnet, or ssh; interpret the
output of commands such as ifconfig and device show commands; connect devices using CIDR
notation or static routes; configure basic properties on switches; and identify common device
problems.
The N10-008 exam has a focus on TCP/IP Networking, including addressing and routing
protocols, network infrastructure, network security and more.
CompTIA Network+ certification is a globally recognized credential with more than 375,000
individuals holding the title of CompTIA Network+ certified.
The CompTIA Network+ certification is typically earned through a combination of coursework,
hands-on lab training, and job experience. Individuals who earn the CompTIA Network+
certification are well positioned to enter their desired career path.
N10-008 Exam Facts: The N10-008 exam has a maximum of 90 questions and is scored on a
scale of 100-900, with 720 required to pass. This exam is for IT professionals in positions where
they need to manage, deploy, and troubleshoot network infrastructure.
CompTIA Network+ is the most widely recognized certification for IT professionals looking to
enter the field of network support. With a focus on network services, it covers many of today's
emerging areas such as cloud computing, virtualization, and security.
If you are interested in a more advanced career in networking, or simply in your future in IT,
this certification may be perfect for you. Completing the certification with CompTIA's official
training bundle can cost on the order of $1,500. That is a lot of money, but the credential is
rigorous and widely respected, and it will provide you with strong credentials as well.
CompTIA certifications build on one another. Many candidates begin with one of CompTIA's
core certifications, such as the A+ Core Series, Network+, Server+, or Linux+, and then progress
toward more advanced credentials.
The pathway for an A+ certification is relatively straightforward: you take a CompTIA-approved
course (or self-study) and pass the two A+ Core exams. This certification will get you a job as an
entry-level computer technician, but it won't help you stand out from other tech support staff.
The next certification is CompTIA Network+ itself, earned by passing a single exam (N10-008).
Many candidates also earn CompTIA Security+ (SY0-601) around the same time. At this point,
you have an intermediate level of computer support credentials and can work as a junior network
tech or junior server admin. You can also pursue entry-level Cisco certifications at this stage,
because they cover material similar to the CompTIA courses.
Beyond Network+, CompTIA offers more advanced options, including cloud certifications such
as Cloud+ and advanced security certifications that cover the most recent security threats in
detail, with a focus on information security.
For the CompTIA Advanced Security Practitioner (CASP+) certification, you must pass a single
comprehensive exam (currently CAS-004), taken at an authorized testing center or online.
Practice exams are widely available and are a good way to prepare. You will be tested on the
following topics, among other things:
Security objectives
Security policy and processes
Ethical hacking and countermeasures
The CompTIA Advanced Security Practitioner (CASP+) certification is a great stepping stone for
anyone working in computer support. Although it is not required in order to follow a specific
career path, it shows employers that you understand the fundamentals of information security
and have an interest in learning more.
Chapter 1:
How to Pass Your N10-008 Network+ Exam
The Network+ exam is intended to assess candidates' understanding of basic networking best
practices and principles. CompTIA Network+ might be an entry-level certification exam, but it is
not easy to pass. To help you better understand how to prepare for the CompTIA Network+
exam, each step is explained below.
First, familiarize yourself with the domains of the Network+ exam.
Create a list of the domains to cover, as well as the individual topics within each domain. Gather
the resources that are most appropriate for your needs. If you have a weakness, focus on it first.
Starting with the most challenging topics is usually the best strategy. Once you get the hang of
them, the tone and pace will be set for the rest of the domains you need to learn. Before moving
on to the next domain, make sure you fully understand the first one.
Create a study plan
Remember that creating a study plan includes:
• Deciding the best time to study.
• Determining how many hours a day or week you can dedicate to studying.
• Looking for official and licensed study and training resources to ensure you have a
complete understanding of each topic included in the exam.
• Figuring out the most effective training approach for you. Some people prefer an online
classroom, while others prefer self-study to complete the entire program in a certain
period of time.
Study the official CompTIA study guides.
To study for the CompTIA Network+ exam, you should always use the official study guide and
resources provided by CompTIA. This book will provide you with the material you need to pass
the exam.
Take some practice tests.
Because this is an entry-level exam, many candidates may not be familiar with test delivery
methods or time-management strategies. Taking a few practice tests can help you determine how
long each question will take and gauge your understanding of each topic. A smart first step is to
take practice tests that focus on single-domain topics. Take comprehensive practice tests once
you've mastered each domain area to ensure you're ready for the actual exam. You can start with
an official CompTIA practice test. Low scores on practice tests are not a cause for
disappointment; they are only meant to help you prepare better.
Familiarize yourself with the exam.
The CompTIA website contains all the required information about the Network+ exam. You can
also find links to additional valuable information, such as official training providers, exam
content, practice tests, and other study resources. The exam guide will also give you an overview
of the certification, including all the requirements and the types of questions you will find on the
exam.
Enroll in a test preparation program
Self-study to prepare for the CompTIA Network+ exam may seem like a good option, but it is
not always the ideal approach. Taking a certification course gives you the chance to spend time
learning from a qualified instructor who knows how to pass the exam. It is a great opportunity to
get all your questions answered, exchange experiences and techniques, and even network if it is
face-to-face training. As a result, you will have a better chance of passing the certification exam.
Join a Network+ community online
Participating in online study groups for the CompTIA Network+ exam can help you gain a solid
understanding of areas that were previously difficult for you. Online study groups will help
because you will be surrounded by other people who are studying for the CompTIA Network+
exam or have already passed it. These people can offer you the best advice on the subject and
help you work through problems using their solutions.
Plan the exam date
When you are ready, you must take the CompTIA Network+ exam at an authorized location.
CompTIA has partnered with Pearson VUE test centers, which have offices around the world.
You can schedule your exam session either online or in person with the help of Pearson VUE.
Create a strategy for test day
To clear your thoughts and stay focused during the exam, follow these guidelines:
• Get to the test center on time; if you are late, you will not be able to take the exam. If
you are taking the exam online, log in 15 minutes early and make sure you have a clear
desk, a working webcam, and a stable internet connection.
• Make sure the Pearson VUE application is compatible with your system.
• Make sure you have all the documents necessary for the exam.
• During the exam, remember to flex and relax your muscles. A calm mind will help you
answer difficult questions; mental fatigue is the cause of failure for many candidates.
• Your time is limited, but you don't need to rush. Read every single question carefully,
make sure you understand it, and answer thoughtfully.
• Remember, a calm demeanor will help you focus better. Your results will certainly be
great if you have followed your study plan correctly.
Chapter 2:
Understanding the Open Systems Interconnection (OSI) Model
The Open Systems Interconnection network model is a set of concepts that describes how
different computer systems communicate over a network. The OSI model includes seven layers:
physical, data link, network, transport, session, presentation, and application. It provides a
conceptual framework for understanding how networking technologies such as Ethernet fit
together and is widely used in the IT industry as a standard vocabulary for describing
communication protocols. Understanding the OSI model can help you make sense of everything
from your laptop's connectivity to how you can send an email on your phone to someone at the
Starbucks near you!
The Open Systems Interconnection (OSI) model is a conceptual framework that characterizes
and standardizes the functions of a communication system. The OSI model separates those
functions into different layers or levels, with each layer having unique responsibilities. This
modularity allows new technology to be layered on top of an existing system without replacing
it.
One of the most important aspects of any network communications is being able to reliably
transmit data from one router or computer to another in a timely and accurate manner. To
accomplish this, a communication system must be able to handle data in multiple forms and at
speeds that vary according to the type of information being transmitted. This kind of flexibility
required by modern networks is difficult to achieve because each layer has different aims,
priorities, and responsibilities. The OSI model helps make sense of complex network
communications by providing a way to understand the process of data transmission and how it
differs across different layers. So what exactly is the OSI model? How did it come into
existence? And how does it affect today’s networking technologies like Ethernet?
Work on Open Systems Interconnection began in the late 1970s under the International
Organization for Standardization (ISO), with the aim not only of detailing the functions of a
communication system but also of defining its structure. The OSI reference model was published
by ISO in 1984 and registered as an international standard (ISO 7498).
Layers of Communication
One of the most interesting aspects of the OSI model is that, even after more than thirty years, it
has not been widely adopted in its entirety, although many aspects of it have become universally
accepted.
Layer 1: Physical
Layer 1 is known as the physical layer because it defines how data is transmitted and received
physically. This layer determines the cable type, voltage levels, connector types, and other
physical properties of the medium. Layer 1 also determines how bits are placed on the wire. For
example, if a computer is transmitting data across a cable to another computer, does it send one
bit at a time (as in RS-232 serial links) or many bits in rapid succession (as in Ethernet)?
Standards with significant physical-layer components include Ethernet (IEEE 802.3),
Asynchronous Transfer Mode (ATM), frame relay, and 802.11 wireless LANs, all of which
define Layer 1 signaling in addition to their higher-layer functions.
Layer 1 can include sublayers for specific types of physical media like fiber optic cable.
Layer 2: Data Link
When data is transmitted through a network, it must be framed in a way that devices on the local
segment can understand. Layer 2, the data link layer, defines how data is packaged into frames
and delivered from one network node to the next. Commonly used technologies at this layer
include Ethernet (IEEE 802.3), Point-to-Point Protocol (PPP), and frame relay. Switches operate
primarily at Layer 2, forwarding frames based on hardware addresses.
Layer 2 includes sublayers for Media Access Control (MAC) addressing, frame types, and error
detection, along with the Logical Link Control (LLC) sublayer.
Layer 3: Network
Layer 3 is known as the network layer because it moves packets between networks. It provides
logical addressing and routing so that data can travel from a source to a destination across
multiple intermediate networks. The most common protocol used at Layer 3 is the Internet
Protocol (IP), along with supporting protocols such as the Internet Control Message Protocol
(ICMP) and routing protocols like OSPF. Routers operate at this layer, choosing the best path for
each packet.
Layer 3 can include sublayers for logical network addressing, packet forwarding, and path
selection.
Layer 4: Transport
The transport layer transports data end to end between two hosts on a network. It provides
reliability, flow control, and error checking that the lower layers do not offer. The most common
protocols used at Layer 4 are the Transmission Control Protocol (TCP) and the User Datagram
Protocol (UDP). TCP ensures that data reaches its destination intact by establishing a connection
with the receiving node before sending data and by acknowledging what arrives; UDP is
connectionless and trades that reliability for lower overhead. Port numbers at this layer identify
which application the data belongs to.
Layer 4 can include mechanisms for segmentation, transmission control, and end-to-end error
checking.
Layer 5: Session
The session layer controls the dialogues between two communicating hosts: it establishes,
manages, and tears down sessions. The layers below it ensure that data is transported properly,
but the session layer keeps track of which conversation each exchange belongs to, so that
multiple applications can communicate over a network at the same time. Examples of
session-layer mechanisms include NetBIOS and remote procedure calls (RPC).
Layer 5 can include sublayers for session establishment, checkpointing, and teardown.
Layer 6: Presentation
The presentation layer controls how data is represented so that the application layer on one
system can read data sent by another. It handles data formats and character encoding, and it is
also where encryption and compression are commonly described. Examples include character
sets such as ASCII and Unicode, image formats such as JPEG, and encryption protocols such as
SSL/TLS, which are often associated with this layer.
Layer 6 can include sublayers for data encoding, encryption, and compression.
Layer 7: Application
In the OSI model, the application layer is the one closest to the user: it provides network services
directly to software applications. It defines how applications request and exchange data over the
network, including methods for authentication (establishing a user's identity). Common protocols
at Layer 7 include Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP),
File Transfer Protocol (FTP), and the Domain Name System (DNS). Applications like web
browsers, email clients, and databases rely on these protocols.
Layer 7 can include sublayers for security, authentication, and data formats.
Network protocols are vital to the way data moves through a computer system as well as across
networks. By understanding the OSI Model, you will be able to move more efficiently through
your network, troubleshoot network problems and even understand the various layers of
communication that make modern business systems function so efficiently.
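To make the layering concrete, here is a minimal sketch in Python (illustrative only; the header fields and values are invented placeholders, not real protocol formats) showing how each layer wraps the data handed down from the layer above, which is the encapsulation process these layers describe:

```python
# Illustrative OSI encapsulation: each layer prepends its own header.
payload = b"GET /index.html"                                  # Layer 7: application data

segment = b"TCP|src_port=49152,dst_port=80|" + payload        # Layer 4: transport header
packet = b"IP|src=10.0.0.5,dst=203.0.113.10|" + segment       # Layer 3: logical addresses
frame = (b"ETH|src=aa:bb:cc:00:00:01,dst=aa:bb:cc:00:00:02|"  # Layer 2: frame + checksum
         + packet + b"|FCS")

print(frame)  # Layer 1 would carry these bytes as electrical or optical signals
```

At the receiving end, each layer strips its own header and passes the rest upward, reversing the process.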
Differences Between The OSI Model And TCP/IP
Many organizations no longer use the OSI protocol suite; they have adopted the TCP/IP stack as
a more universal and capable means of data communication, while the OSI model survives
mainly as a reference and teaching tool. The two models map roughly as follows:
TCP/IP Network Access (Link) layer — OSI Layers 1-2 (Physical, Data Link)
TCP/IP Internet layer (IP) — OSI Layer 3 (Network)
TCP/IP Transport layer (TCP, UDP) — OSI Layer 4 (Transport)
TCP/IP Application layer — OSI Layers 5-7 (Session, Presentation, Application)
TCP/IP vs. OSI Model: TCP/IP is an implemented protocol stack, while the OSI model is a
conceptual data-encapsulation model. In the TCP/IP suite, TCP provides the transport function
and IP provides logical addressing, and together they are used to route packets from computer to
computer and from location to location. The OSI protocol suite, by contrast, was never widely
implemented: it is hard to find an operating system that actually ships a full OSI protocol stack,
and organizations that experimented with it often ended up relying on proprietary standards and
protocols dictated by the manufacturers of their computers and devices. With TCP/IP, as long as
you can communicate from one terminal or computer to another, you can communicate across a
network without a problem.
Differences Between Network Protocols and IP
It helps to distinguish the protocols themselves from the model used to describe them. IP handles
addressing, routing, and sequencing acknowledgements; TCP adds reliable, ordered delivery on
top of it. The OSI model, in contrast, does not itself move data: it identifies functions by layer
number, name, and purpose so that protocols from different vendors can be compared. Lower-
layer network protocols such as Ethernet and Token Ring connect computers to a network and
carry IP traffic. Alongside the TCP/IP protocol stack, the OSI model serves as an underlying
framework that allows standardization across the entire computing industry.
A related security concept is the trusted computing base (TCB): the set of hardware, software,
and firmware components on which a system's security depends. Security policy built around the
TCB defines what is allowed and forbidden for users, subsystems, and administrators, as well as
what types of auditing are required.
IP vs. OSI Model
The OSI protocol suite requires more pre-configuration than the TCP/IP stack, while the TCP/IP
stack is easier to deploy. The TCP/IP model uses IPv4 (Internet Protocol version 4) and IPv6
(Internet Protocol version 6) for addressing, whereas the OSI suite defined its own alternatives,
and related suites such as Novell's IPX (Internetwork Packet Exchange) and SPX (Sequenced
Packet Exchange) also competed with TCP/IP in the past. IP addressing is by far the more
widespread.
TCP/IP in Practice
TCP/IP works the same whether two computers share a LAN or communicate across routers: IP
delivers to hosts on the local network directly and hands traffic to a router or gateway when the
destination is on another network, while hubs and switches act as intermediary devices that
connect the computers on a Local Area Network (LAN). A network switch can even combine
roles, acting both as an access device for computers on the network and, in the case of a Layer 3
switch, as a router. In practice, the TCP/IP stack runs everywhere: workstations, servers,
firewalls, routers, and gateways alike.
Difference Between the OSI Model and the TCP/IP Model
The OSI model is more detailed and allows finer-grained description: seven layers instead of
four, with the upper layers spelling out the session management and data presentation that
TCP/IP folds into its single application layer. The TCP/IP stack defines working protocols for
any kind of connection, from an Ethernet cable to a wireless link, and it is what actually makes
up your internet connection. Applications such as e-mail, web browsers, and network
management tools are built directly on TCP/IP protocols like SMTP (Simple Mail Transfer
Protocol), HTTP (Hypertext Transfer Protocol), and FTP (File Transfer Protocol).
The OSI model, by contrast, is best used as a map: it describes how the different computers and
devices in a network should work with one another so that all of the applications running on
them interoperate properly with each other and with the network as a whole. Network designers
and troubleshooters use its layer numbers as a shared vocabulary ("a Layer 2 problem", "a Layer
7 firewall") even though the devices themselves run TCP/IP.
In short: TCP/IP is a protocol suite you deploy on desktops, servers, mainframes, firewalls, and
routers; the OSI model is a reference framework you use to design, document, and troubleshoot
those same networks.
Data Communication
There are two main methods of data communication over a network: stream-based and
packet-based. Stream communications deliver a continuous, ordered flow of data, while
packet-based communications divide the information into discrete messages that are reassembled
(and, if necessary, retransmitted) at the receiving end. CompTIA Network+ certification covers
both methods of communication. To pass this exam, you must understand what is involved in
each method and how they compare to one another in terms of speed and performance.
A stream is a continuous sequence of bytes delivered in the order it was sent. It can be used
either to download or to upload information.
Packet-based communications methods require that messages be partitioned into "packets" by
the sender and then reassembled at the receiver. The primary benefit of packet-based
methodologies is that they enable multiple different types of network traffic (such as video,
social media, email and other web browsing) to be transmitted simultaneously over a connection
without affecting one another or the performance of the connection itself.
Packet-based transmissions are often more efficient than streams because packets can be
prioritized and routed independently. They are also more resilient: because a message is split into
packets, individual packets can be sent over different paths and retransmitted individually if they
are lost. The information travels from the server to the client in a more organized form, which
allows the network to recover from congestion or failures along the way.
Stream transmissions require the data to arrive in order before an application can use it.
Packet-based methods allow data to arrive piecemeal, even as the rest continues on its way.
Because of this, packet-based transmissions can achieve better effective latency than stream
communications when they travel long distances over slower network infrastructures (such as
satellite links).
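The difference is easy to see with Python's standard socket module. The sketch below (hosts and ports are placeholders; example.com is used purely for illustration) contrasts a TCP stream socket, which delivers an ordered byte stream over a connection, with a UDP datagram socket, which sends each packet independently:

```python
import socket

# Stream-based: TCP establishes a connection and delivers bytes in order.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(1024))          # reads the first chunk of the continuous stream
tcp.close()

# Packet-based: UDP sends one self-contained datagram; no connection, no ordering.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("192.0.2.1", 9999))   # 192.0.2.1 is a documentation address
udp.close()
```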
In the CompTIA Network+ certification exam, you will be required to identify which method of
data communication is being used in each scenario as well as explain what advantages and
disadvantages are involved with that communication method.
Chapter 3:
Network Topologies
A network topology refers to the arrangement of devices on a network. Further, based on this
arrangement, the topology identifies how data flows within the network. The Network+
objectives refer to several common network topologies, which are covered in this section.
Star
The majority of networks in use today use a star topology or a hybrid topology that includes a
star and another topology. Network clients connect to a central device such as a hub or a switch
in a star topology.
Figure 1-13 shows the layout of a star topology with devices connecting to a central device. The
graphic on the right shows how it can resemble a star. While the figure shows a logical diagram
of connected devices, it’s important to realize that the hub or switch is rarely in a central physical
location. For example, you’ll rarely find a switch in the middle of an office with cables running
from the computers to the switch. Most organizations mount switches in a server room or a
wiring closet.
Figure 1-13: Star topology
Many networks in both large and small organizations use twisted pair cables. Additionally, the
network clients usually don’t connect directly to the hub or switch, but instead are connected
through different cables. Here’s a common standard used in many organizations:
One cable connects the computer to a wall socket. This cable has RJ-45 connectors on both ends.
Another cable attaches to the wall socket and runs through walls, ceilings, and/or floors to a
wiring closet or server room, where it is attached to a wiring block.
The front of the wiring block has a patch panel. A patch cable connects the wiring block to a port
on a switch.
While this connection uses three separate cables, it is electrically the same connection.
Mesh
A mesh topology provides redundancy by connecting every computer in the network to every
other computer in the network. If any connections fail, the computers in the network use
alternate connections for communications.
Figure 1-15 shows an example of a mesh topology. It has 5 computers, but 10 connections. The
number of connections quickly expands as you add more computers. For example, if you add a
sixth computer, you’d need to add an additional 5 connections for a total of 15 connections.
Figure 1-15: Mesh topology
You can calculate the number of connections needed in a mesh topology with the formula
n(n-1)/2, where n is the number of computers. For example, with five computers, n = 5 and the
calculation is:
5(5-1)/2 = 5×4/2 = 20/2 = 10 connections
Add another computer and the calculation is 6(6-1)/2, or 15 connections.
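As a quick check, the formula is easy to verify with a few lines of Python (a simple illustration, not exam material):

```python
def mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n computers: n(n-1)/2."""
    return n * (n - 1) // 2

for n in range(2, 7):
    print(f"{n} computers -> {mesh_links(n)} connections")
# 5 computers -> 10 connections, and 6 computers -> 15, matching the text.
```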
Due to the high cost of all the connections, full mesh topologies are rarely used within a network.
However, there are many instances where mesh topologies are combined with another topology
to create a hybrid. This hybrid topology has multiple connections to provide a high level of
redundancy, but it doesn’t connect every single computer with every other computer in the
network.
Network Connectors
Point-to-Point vs. Point-to-Multipoint
A point-to-point topology is a single connection between two systems. Each of the systems are
endpoints in the point-to-point topology. A simple example is two tin cans connected with a
string. One person talks into one can, and the other person can hear what they say. Similarly, if
you and a friend are talking on a telephone, you have a point-to-point connection.
In some cases, a point-to-point connection is a single permanent connection. However, it is more
often a virtual connection or virtual circuit. A virtual circuit still establishes a point-to-point
connection but the connection is created on demand and might take different paths depending on
the type of connection. For example, telephone companies use circuit-switching technologies to
establish connections. A telephone call between you and a friend in a different location might
take one path one day, and another path a different day.
Organizations sometimes lease lines from telecommunications companies to create a
point-to-point connection. For example, the gateway-to-gateway VPN shown in Figure 1-12 is a
point-to-point connection. As a leased line, it is a semi-permanent connection and is often
referred to as a virtual circuit.
In contrast, a point-to-multipoint connection goes from one endpoint to many endpoints. You can
think of it as a broadcast or multicast transmission described earlier in this chapter. Wireless
access points use point-to-multipoint transmissions. A single access point can transmit and
receive from multiple wireless devices.
Peer-to-Peer vs Client-Server
Computers in a peer-to-peer (P2P) network pass information to each other from one computer to
another.
BitTorrent is a P2P protocol used with many software programs, including the BitTorrent group
of P2P programs distributed and sold by BitTorrent, Inc. Files downloaded with a BitTorrent
program are distributed in multiple small Torrent files from different computers in the P2P
network. The program then puts them back together on the client.
Some of the challenges with P2P networks are legal issues and malicious software (malware).
From a legal perspective, many people illegally copy and distribute pirated files. For example,
you could spend a year writing, editing, and finally publishing a book. If this book is available as
a P2P file, criminals can copy and distribute it but you wouldn’t get any funds for your efforts.
Many criminals also embed malware into files distributed via P2P networks. Users that
download P2P files often unknowingly install malware onto their system when they open the
files.
Most legitimate eCommerce sites use a client-server topology. For example, if you use
Amazon's Kindle service, you can download Kindle files to just about any device, including PCs,
iPads, and Kindles. These Kindle files are hosted on Amazon servers and delivered to the user's
device.
Remember This
Computers in a peer-to-peer (P2P) network share information between each other. P2P networks
are often referred to as file sharing networks.
Workgroups vs. Domains
Peer-to-peer networks and workgroups are sometimes confused, but they aren’t the same. Within
Microsoft networks, a peer network is a workgroup. Each computer on the network is a peer with
other computers so the network is often called a peer network. However, computers in a
workgroup do not use file sharing technologies such as BitTorrent.
Each computer within a workgroup is autonomous and includes separate user accounts held in a
Security Accounts Manager (SAM) database. If users want to log onto a computer, they must use
an account held within that computer’s SAM. If users need to log onto multiple computers, they
need to know the username and password of different accounts held within different SAM
databases.
Figure 1-18 shows both a workgroup and a domain. If Sally wants to log onto Computer A, she
needs to use an account held in Computer A's SAM. If Sally needs to log onto all four computers
in the workgroup, she would need four accounts, one in each of the four computers' SAM
databases. As more and more computers are added to a workgroup, it becomes more difficult for
users to remember all the usernames and passwords they need to access the different computers.
Figure 1-18: Workgroup vs. domain
In a domain, each computer still has a SAM but accounts within the local SAM databases are
rarely used. Instead, a server includes a centralized database of all accounts in the domain. In a
Microsoft domain, the centralized server is a domain controller and it hosts Active Directory
Domain Services (AD DS). Users can use the same account held in AD DS to access any
computer within the client-server domain.
MPLS
Multiprotocol Label Switching (MPLS) is a WAN topology provided by some
telecommunications companies. Organizations rent the bandwidth through a WAN provider’s
cloud without needing to run individual lines to distant locations.
Figure 1-19 shows an example of an MPLS network. One company is renting bandwidth to
create a WAN connection between its headquarters and a regional office. Other companies lease
bandwidth through the same WAN provider's cloud. Each company connects into the MPLS
cloud with a customer edge (CE) device. These CE devices connect to the WAN provider's
provider edge (PE) devices, which in turn connect into the MPLS cloud through one or more
internal MPLS devices.
Figure 1-19: MPLS network
Data transferred between the headquarters site and the regional office can take multiple paths,
such as HQ CE -> PE -> S1 -> S2 -> PE -> Office CE, or HQ CE -> PE -> S3 -> S4 -> S2 ->
PE -> Office CE. An organization that rents the bandwidth typically doesn't know what path the
data takes. The WAN provider manages the cloud, and companies simply send data in via the CE
and get data back via the CE.
Short labels identify the path between individual nodes. For example, the path between S1 and
S2 could be labeled S1_S2. The actual path data takes through the MPLS cloud is identified by a
sequence of these short labels.
Admittedly, there’s a lot of depth to MPLS, but if you can remember the acronym stands for
Multiprotocol Label Switching, it’ll give you hints of what MPLS does.
Multiprotocol. MPLS supports many different access technologies, such as Asynchronous
Transfer Mode (ATM) and frame relay.
Label. Paths are identified by short labels. In contrast, IP networks use IP addresses to identify
the source and destination.
Switching. Devices within an MPLS topology provide switching capabilities, connecting
individual devices based on the path labels.
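The label-swapping idea can be sketched in a few lines of Python (a toy model with invented labels and node names, not a real MPLS implementation): each node looks up the incoming label, swaps it for an outgoing label, and forwards toward the next node, without ever examining an IP address:

```python
# Label forwarding table: (node, incoming label) -> (next node, outgoing label)
lfib = {
    ("PE1", 100): ("S1", 201),
    ("S1", 201): ("S2", 202),
    ("S2", 202): ("PE2", 300),
    ("PE2", 300): ("OfficeCE", None),   # label popped at the provider edge
}

node, label = "PE1", 100                # packet enters the cloud at the HQ-side PE
path = [node]
while label is not None:
    node, label = lfib[(node, label)]   # swap the label, move to the next hop
    path.append(node)
print(" -> ".join(path))                # PE1 -> S1 -> S2 -> PE2 -> OfficeCE
```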
Hybrid
A hybrid topology is any topology that combines two or more other topologies. As mentioned
previously, full mesh networks are very expensive because of all the required connections.
Instead, a partial mesh is often combined with another type of network, such as a star network,
creating a hybrid.
Network Types
Networks are the backbone of computer systems; they are the paths through which data is sent
and received. There are many different types of networks, each with its own features, strengths,
and weaknesses. The three main network types are a wired LAN (Local Area Network), a
wireless LAN (Wireless Local Area Network), and an internetwork such as the Internet. Each
network type has its own set of advantages and disadvantages when it comes to speed, latency,
and range.
What is Wired LAN?
A Wired LAN connects computers through a wired connection such as Ethernet or coaxial
cables. Generally speaking, Ethernet is a faster connection than coaxial cables; however, the
amount of network cabling in an office or building limits the number of computers that can be
connected at any given time.
What is Wireless LAN?
A wireless LAN connects computers wirelessly using technologies such as Bluetooth, Wi-Fi
(802.11), Wi-Fi Direct, and infrared (IR). Available bandwidth varies widely: infrared and classic
Bluetooth links are quite limited, while modern Wi-Fi can reach hundreds of Mbit/s. In addition,
wireless networks are prone to interference from other wireless devices.
What is an Internetwork?
An Internetwork links multiple computer networks together so that data can be exchanged
between networks as if they were a single network.
*Wi-Fi is a trademark of the Wi-Fi Alliance.
Data between computers can move reliably and quickly when the network has a high bandwidth
(more bits per second) connection. This type of connection relies on devices such as routers and
switches that connect the networks together. The speed at which data moves through an
internetwork is dependent on how many devices are connected to it, how those devices are
configured, and what sort of quality of service they have.
The performance of wired networks depends largely on the configuration and materials used in
their construction. For example, poorly routed or unshielded wiring can suffer from a large
amount of electromagnetic interference (EMI).
A properly configured wired network typically provides 100 Mbit/s to 1 Gbit/s per link,
depending on the Ethernet standard in use. If the overall traffic on the network approaches the
capacity of a link, one or more devices may become saturated or overloaded and data may be
delayed or lost. Networks that regularly exceed this limit typically require additional or faster
links and changes to device configuration to improve performance.
Another aspect of cabling is the cabling length required to achieve maximum performance. This
can be measured in either feet or meters, depending on which is more meaningful in the context
of the application. A network with a long distance from one device to another can be much
slower than a network that is close, because the signal has to travel further. For example, if a
switch or router is located on the other side of a large building or campus, data will have to travel
further over non-optimal cabling to get back to that device.
Another important consideration for wired LANs is that they depend on devices such as hubs and
switches for signal reception and transmission. When multiple devices are connected through
hubs, they share a single collision domain, and collisions limit each device's ability to transmit
data. To combat this problem, switches are used: a switch forwards each frame only to the port
where the destination device is connected, giving each device its own collision domain.
Wireless networks still require cabling behind the scenes. For example, backbone cabling (such
as Ethernet) connects the wireless access points to the wired network and carries both the wired
and wireless traffic. Other considerations for wireless LANs include the power source required to
operate the devices (such as batteries), radio interference from other radio devices, and
radio-frequency power limits so that devices operate within legal limits.
Bringing all of your clients into the same room to collaborate is a great advantage. However, as
networks grow larger, performance issues can become severe. For example, 100 people could be
working on a project where only one person needs to check email while the other 99 are working;
the chance of everyone needing full internet bandwidth at the same time is low, but contention
still builds up.
To overcome these issues, the network can be segmented in a way that keeps things organized
and allows an array of users to work together at once. This segmentation is done with virtual
LANs, or VLANs. A VLAN separates computers into distinct broadcast domains, keeping one
group (VLAN1) isolated from the others while allowing specific groups of computers to
communicate among themselves (VLAN2, VLAN3, etc.).
Firewalls are used to restrict access from different subnets to those systems that require access.
For example: All users on VLAN2 can access system A and all users on VLAN3 can access
system B.
Make sure you use a feature called trunking, which carries traffic for multiple VLANs across a
single link between switches, as in the sketch below. Also, make sure that the routers are
configured to support all of the applications that you want everyone else to have access to.
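A simplified Python sketch (port numbers and VLAN IDs are hypothetical) of the rule a switch applies: frames are forwarded between access ports only when both ports belong to the same VLAN, and crossing VLANs requires a router or firewall:

```python
# Hypothetical access-port-to-VLAN assignments on one switch.
port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}

def can_communicate(port_a: int, port_b: int) -> bool:
    """Two access ports exchange frames directly only within one VLAN."""
    return port_vlan[port_a] == port_vlan[port_b]

print(can_communicate(1, 2))   # True  - both ports are in VLAN 10
print(can_communicate(1, 3))   # False - VLAN 10 to VLAN 20 needs a router/firewall
```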
If a company has a large number of employees or clients who need access to the internet or local
network, there needs to be a way for them to connect. One method is DHCP, or Dynamic Host
Configuration Protocol. This protocol allows hosts on an IP subnet to request addresses from a
server and acquire them from that server automatically. The disadvantage of this system is that if
hosts stop receiving these addresses from the server, they will not be able to function properly on
the network until the DHCP server assigns them new ones. Another disadvantage is that if one
host is assigned an IP address that is already in use on the network, network performance can
slow down or connections can become unresponsive.
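The lease behavior described above can be modeled with a short Python sketch (a toy model of the idea, not the actual DHCP protocol; the address pool and MAC addresses are made up):

```python
# Toy DHCP-style lease pool: the server hands out unused addresses.
pool = [f"192.168.1.{n}" for n in range(100, 105)]
leases = {}                              # MAC address -> leased IP

def request_lease(mac):
    if mac in leases:                    # renewing client keeps its address
        return leases[mac]
    if not pool:                         # pool exhausted: the client cannot join
        return None
    ip = pool.pop(0)                     # hand out the next unused address
    leases[mac] = ip
    return ip

print(request_lease("aa:bb:cc:00:00:01"))   # 192.168.1.100
print(request_lease("aa:bb:cc:00:00:02"))   # 192.168.1.101
print(request_lease("aa:bb:cc:00:00:01"))   # 192.168.1.100 again - no conflict
```

Tracking leases centrally is exactly what prevents the duplicate-address problem described above.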
One solution to this problem is to have each of your clients connect to the internet through a
router instead of plugging their computer directly into the modem (plugging computers straight
into modems is sometimes called "hard-wiring"). A router performing network address
translation lets many devices share addresses and helps avoid IP address conflicts. Also, if one
router goes down, users can still use the other routers to access the internet and local network.
However, you should be aware that routers require configuration and maintenance.
Another useful tool is the Virtual Private Network, or VPN. VPNs are typically used in
server-based or office networks: data is encrypted, sent out over public lines, and then decrypted
by the private network. The public network is the internet; the LAN is private and can only be
accessed by those who have authorization. These networks can be configured to allow access
only during certain times of day or on certain days of the week.
Although wireless networking technologies were originally intended for use in personal
applications, they are now widely used in local area networks (LANs). Wireless technology
allows users to have a greater degree of mobility within their offices, homes, schools, and
libraries than wired techniques which may require wires or cables running from devices to an
access point.
Wireless Local Area Networks are used in applications such as home networks, small office
networks, and campus Wi-Fi networks. They can also be used in other places as long as they
have a wireless access point (AP). Other devices that work with Wireless Local Area Networks
include smartphones and tablet computers.
Wireless LANs are increasingly common in homes and small businesses, often replacing wired
alternatives, and household adoption of wireless Internet access has grown steadily since the
mid-2000s.
Wireless access points (APs) are used to connect the wireless devices to a wired network. The
access point is connected to an Ethernet network. Wireless APs are accessed using the same
standards that wired LANs use, allowing them to access the Internet or intranets. Some common
standards for wireless LANs are 802.11a, 802.11b, 802.11g, and 802.11n. Each standard differs
from the others in speed and approximate range:
802.11a: up to 54 Mbit/s in the 5 GHz band, with a maximum range of approximately 300 m
802.11b: up to 11 Mbit/s in the 2.4 GHz band, with a maximum range of approximately 185 m
802.11g: up to 54 Mbit/s in the 2.4 GHz band, with a maximum range of approximately 400 m
802.11n: up to 600 Mbit/s in the 2.4 and 5 GHz bands, with a maximum range of approximately
100 m indoors and 300 m outdoors; it is also backward compatible with 802.11a, 802.11b, and
802.11g
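For quick review, the figures above can be collected into a small Python lookup table (the numbers simply restate the approximate values from this list):

```python
# Approximate Wi-Fi standard characteristics, as summarized in the text.
standards = {
    "802.11a": {"max_mbps": 54, "band_ghz": "5"},
    "802.11b": {"max_mbps": 11, "band_ghz": "2.4"},
    "802.11g": {"max_mbps": 54, "band_ghz": "2.4"},
    "802.11n": {"max_mbps": 600, "band_ghz": "2.4 and 5"},
}
for name, spec in standards.items():
    print(f"{name}: up to {spec['max_mbps']} Mbit/s on the {spec['band_ghz']} GHz band")
```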
On Cisco devices, settings related to these standards can be configured on a router or switch
running Cisco IOS through its command-line interface (CLI).
For a basic wireless LAN system, the following components should be considered:
1. A wireless access point (AP) to connect the wireless devices to a wired network.
2. A network switch or router to connect the wired devices to the AP.
3. Wireless adapters, which are sometimes built into computer systems such as laptops and
desktop PCs.
4. A coverage plan: access points are placed in different locations around an area, usually
with a radius of several hundred metres, so that radio signals do not broadcast outside the
intended location. To achieve this, each access point is attached via Ethernet cable to a
switch.
5. Wireless repeaters, used to extend the range of an access point by repeating its signal.
6. Optionally, a powerline network: a wired alternative that carries the same traffic over
existing electrical wiring, connecting devices through traditional power sockets instead of
an access point (AP).
A basic setup for a wireless network:
1. Determine the location where the AP will be installed. This can be an area with
high-speed Internet coverage, or a location where many other wireless devices are
already installed.
2. Choose the type of antenna required, which depends on coverage requirements.
3. Install the AP, with the connecting cables attached.
4. Install the wireless devices, e.g. a wireless laptop and a smartphone.
5. Configure the wireless devices to receive their IP addresses from the AP.
6. Install and configure any additional network switches or routers required, and link them
to the source of Internet access.
7. The devices can now begin to communicate with each other.
8. Any computers with wired LAN ports, such as laptops, or any other device that requires
an Ethernet port, can now be connected to another switch and continue communicating
with other wireless users or wired devices.
A wireless network is subject to radio-frequency interference, which must be filtered or avoided
in order to prevent disruption at the AP. In cities where there are many different wireless
networks in close proximity, an AP may be on the same channel as a nearby access point. This
co-channel interference can result in wireless devices connecting to the wrong access point and
in corrupt data packets being sent over a larger area than intended.
Network Cable
The different wire types used in networks are categorized as twisted-pair, coaxial, and fiber
optic. What are the differences? Which one is best for your application?
Twisted-pair cables have been around the longest and can be found almost everywhere. They’re
designed to carry electrical signals over short distances. Twisted pair cables have plastic
insulation. The cable pairs twist to reduce interference from other electrical sources that could
affect or disrupt the signal being carried by the wire. Twisted-pair cables come in a variety of
different grades, speeds, thicknesses and capabilities depending on what you need it for. The most common type of twisted-pair cable is Category 5e, the copper wiring used in most home networks (short factory-terminated lengths of it are sold as patch cables). Category 6 cable is newer and supports Gigabit Ethernet and, over shorter runs, even 10 Gigabit Ethernet.
Coaxial cable is another popular networking wire because it is inexpensive. Today, coaxial cable is most familiar from cable-TV and cable-broadband systems, and legacy coax LANs carried data at between 10 and 100 Mbps (megabits per second). It is reasonably flexible, and its single solid copper core surrounded by a braided metallic shield gives it better resistance to interference than unshielded twisted pairs. Coaxial cable runs through walls and ceilings and terminates in coaxial connectors (threaded or bayonet jacks, much like the antenna jack on a TV).
Fiber optic cables are the newest and most advanced wiring for networks. Each glass fiber is about as thin as a strand of hair. Unlike twisted pair or coaxial cables, which carry electrical signals over copper, fiber optic cables move data as pulses of light. Because light signals do not share electrical frequency bands with other devices, fiber is immune to electromagnetic interference and supports far higher bandwidths over far longer distances. The trade-off is cost: fiber optics are much more expensive than twisted pair and coaxial cable.
You can use the following comparison to weigh the differences between wire types:
Twisted-Pair:
- Maximum run of about 100 m per segment; speeds from 100 Mbps up to 10 Gbps depending on category
- Flexible, thin and easy to route, but easily cut or damaged
- Lowest cost and ubiquitous; best for internal, in-building connections
Coaxial:
- Longer runs than twisted pair (classic Ethernet segments up to 500 m); typically 10 to 100 Mbps in legacy LANs
- Stiffer but better shielded than twisted pair
- Low cost; best for broadband, antenna and other short-to-medium range connections
Fiber Optic:
- Transmission distances of 10 km (6.2 miles) and beyond on single-mode fiber; the highest speeds available
- Thin and flexible, but the glass strands are easily damaged by rough handling
- Highest cost; best where distance, bandwidth or interference rules out copper
Other important features to look for when buying a network cable are its length and the type of connector it uses, meaning the plug fitted to each end. The most common copper connector is the RJ-45; fiber cables use connectors such as LC, SC and ST.
Network Architecture
Industry-standard frameworks and reference architectures
Industry-standard frameworks and reference architectures refer to conceptual frameworks that
help define the structure and operation of IT systems. They help align security and IT with an
organization’s business strategy. Frameworks are more generic than architectures.
As with Security+, there are four main groups of frameworks:
• Regulatory frameworks are typical of industries subject to government regulation.
• Non-regulatory frameworks are those that are not industry-specific or regulatory. Instead, they focus on technology. An example is the NIST Cybersecurity Framework (CSF), which is mandatory for US government agencies but voluntary for everyone else.
• International and national frameworks are developed and enforced by a government body. The US government has its federal licensing and risk management program, while the EU has its own laws. In some situations, governments cooperate to create shared frameworks that reconcile compliance across their respective laws; an example is the US-EU Privacy Shield framework.
• Industry-specific frameworks are developed by entities within a particular industry, usually in response to industry-specific regulatory needs or issues.
Benchmarks and secure setup guides
These guides “provide guidance on the configuration and operation of information systems to a level of security that is understood and documented.” In other words, they are more tactical and less sweeping than frameworks and architectures. Benchmarks are based on consensus.
These guides can come from manufacturers, the government, or an independent organization
such as the Center for Internet Security (CIS).
When it comes to configuring secure services, vendor- or platform-specific guides can be helpful:
• Web servers provide the means for users to access web pages or other data and are therefore subject to attack. Vendors such as Microsoft and Apache publish hardening guides, as does CIS.
• Operating systems are the interface between the applications used to perform tasks and the physical hardware. CIS and the Department of Defense's DISA STIG program both publish guidelines.
• Application servers are critical to business IT systems. Examples include messaging platforms, email servers, database servers, and so on. Again, the manufacturer, CIS, or the DISA STIG program should have guidelines.
• Network infrastructure devices are “switches, routers, hubs, firewalls, and other special devices” used to operate a network. Their criticality makes them targets, so they too have hardening guides.
Defense in-depth and multi-level security
Defense in Depth is a security principle that uses several different security features to increase
the level of security.
Vendor diversity means sourcing controls from multiple suppliers. You can have multiple independent vendors of firewalls, operating systems, etc., so a flaw in one product does not compromise every layer.
Control Diversity is multi-layered security in administrative and technical policies to guide user
actions. Administrative controls are laws, policies, and regulations that apply to the management
aspects of an organization. Technical controls are those that operate in the system through
technological intervention. Examples include passwords, logical access control, AV, firewall,
IDS / IPS, etc.
User training is crucial because users are a vital part of any business. They are essential for defense and are also a major source of vulnerability. Role-specific training, tailored to what each person actually does, is recommended.
Zones and topologies
Zones and topologies allow layers of defense in an organization, and the innermost layers have
maximum protection.
DMZ borrows its name from the military term “demilitarized zone” and refers to a semi-secure area: one that is separated from the rest of the Internet by an external firewall and from the trusted network by an internal firewall. This zone can host web servers serving external content, remote access servers, and external email servers.
An extranet can simply be described as “an extension of a selected part of a corporate intranet to
external partners.” It involves both privacy and security.
An intranet is a network that exists entirely within the trusted zone of a network. This signifies
that it is under the security controls of the system administrators. If intranet users need to access
external information, a proxy server must be used to mask the location of the requester.
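As a small illustration of that proxying step, the snippet below sends a request through a designated proxy so the outside server sees the proxy's address, not the intranet client's. This is a hedged sketch: the proxy address 10.0.0.8:3128 is a placeholder, not a real recommendation:

import urllib.request

# Route the client's outbound request through the intranet's proxy server.
proxy = urllib.request.ProxyHandler({"http": "http://10.0.0.8:3128"})
opener = urllib.request.build_opener(proxy)
with opener.open("http://example.com/") as response:
    print(response.status)  # the remote site only ever saw the proxy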
Wireless networks transmit data over radio waves rather than physical cables. They can be arranged hub-and-spoke (a primary access point with wireless clients connecting to it) or as a mesh network.
A honeynet, described in a previous chapter, is a “fake” network designed to look like the real
one and thus attract attackers.
A guest zone, on the other hand, is “a network segment that is isolated from the rest of the
systems that guests should never have access to.”
NAT
NAT is used to translate private, non-routable IP addresses into public, routable IP addresses. It helps compensate for the shortage of available IPv4 address space: not every system needs a routable address, and it is in fact best if your organization's internal topology stays mostly hidden from outsiders.
A NAT device removes the internal source IP address from outgoing packets and replaces it with its own public (routable) address, then performs the reverse translation on the return path. There are several implementation approaches to NAT: a static internal-to-external address mapping, a dynamic mapping, or port address translation (PAT). PAT allows many internal private addresses to share a single external IP address by giving each session its own port on the public side, as the sketch below illustrates.
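Here is a toy model of that port-mapping idea in Python. It is only an illustration of the bookkeeping a PAT device performs, not a real NAT implementation; the addresses and port pool are arbitrary:

import itertools

class PatTable:
    """Toy PAT table: many private (ip, port) pairs share one public IP,
    each mapped to a distinct public-side port."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = itertools.count(40000)   # arbitrary high-port pool
        self.out = {}    # (private_ip, private_port) -> public_port
        self.back = {}   # public_port -> (private_ip, private_port)

    def translate_outbound(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.out:
            port = next(self.next_port)
            self.out[key] = port
            self.back[port] = key
        return self.public_ip, self.out[key]

    def translate_inbound(self, public_port: int):
        return self.back.get(public_port)

nat = PatTable("203.0.113.5")
print(nat.translate_outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.translate_outbound("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
print(nat.translate_inbound(40000))                   # ('192.168.1.10', 51000)

Two hosts using the same private port still get distinct public ports, which is exactly how one address can serve a whole network.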
Finally, there is ad hoc networking: a mesh topology in which systems send packets directly to each other without a central router or switch. It is an easy and cheap means of direct communication, but it makes traffic management and security enforcement considerably harder.
Segregation, segmentation, and isolation
Rapid Spanning Tree Protocol (RSTP) and Spanning Tree Protocol (STP) were developed to avoid loops in Layer 2 networks. However, recalculating paths takes time, which can cause efficiency problems. This has fed the push toward flat networks, which are not necessarily good for security.
Segmented designs instead use “enclaves,” which are sections of a network that are logically isolated from the rest of it. They are like the gated communities of your network.
There are several ways to segment a network:
• Physical segmentation uses separate physical equipment for each type of traffic. The switches, routers, and cables are all separate. Safer, but also more $$$.
• Logical segmentation is done using a VLAN. A LAN is a collection of devices with similar communication needs and functions served by a single switch; it is the lowest level in the network hierarchy. VLANs are a logical (virtual) implementation of a LAN, letting one physical switch carry several isolated LANs.
Virtualization provides logical isolation of the server while allowing physical co-location. Think
virtual machines.
An air gap means there is no data path at all between two networks: no logical or physical connection links them directly. This protection is defeated, however, if someone carries data in or out on a USB drive or other removable medium.
Tunnels and VPN
We will cover VPNs again! A virtual private network lets two networks connect securely across an unprotected stretch of network. The link can run site-to-site, joining two or more networks through an intermediate network (usually the Internet). There is also remote access, for when a single user needs to reach a network but cannot establish a physical connection.
Device positioning and security technology
Now we will cover where these devices should be placed. Hint: the answer is almost always “in line with the traffic.”
Sensors acquire data. They can be network-based, to cover more ground, or host-based, limited to one machine but able to collect more specific data.
Collectors are essentially hubs for multiple sensors. Your collected data then goes to other
systems.
Correlation engines take the collected data and compare it to known models. Of course, if traffic
is routed around a sensor, the engine won’t “see” it either.
Packet filters examine packets on a network interface and filter them based on source/destination, ports, protocols, and so on. These filters must be placed in line with the traffic.
Proxies are servers that act as a bridge between clients and other systems. For this to work, the
proxy must be in the natural flow of traffic.
Firewalls are devices that determine whether or not traffic can pass through according to a set of
rules. They need to be in line with the traffic they are regulating and are typically placed between
network segments.
VPN concentrators accept multiple VPN connections and terminate them at a single network
point. Wherever this termination occurs, it is best to be on a network segment that allows all
users to connect directly.
SSL accelerators help accelerate SSL / TLS encryption at scale. They must be placed between
the web servers and the clients they serve.
Load balancers help distribute incoming traffic across multiple servers. The load balancer must
reside between the server that provides that service and the requestors for a server.
A DDoS mitigator helps protect against DDoS attacks, so it must sit outside the area it is protecting; it would be the first device to see a packet arriving from the Internet (assuming the device is present).
Aggregation switches provide connectivity for many other switches: a many-to-one connection. An aggregation switch must sit upstream of the “many” devices it serves.
A Switch Port Analyzer (SPAN) or mirror port copies traffic to another port for monitoring; this can be a problem if the traffic is very heavy. A test access point (TAP) is a passive device that copies the signal between two points on a network. TAPs are the better option for monitoring large amounts of traffic.
Examples
There are several placement examples where you need to identify the correct ordering. The outermost device would be a DDoS mitigator. Then a firewall, then the DMZ, then another firewall. Next an SSL accelerator, then a load balancer, and finally the servers (database, web, etc.).
Virtual Networks
Virtual networks are sets of computers that have been linked together so that they share information and devices. They are also called private or isolated networks. Virtual networks can be used in the following scenarios:
- Sharing resources among different departments within an organization
- Sharing resources across geographic locations
- Providing a privately owned network with only one user account connected to it at a time
Their pitfalls and risks are discussed at the end of this section. There is no physical connection between hosts on a virtual network; the communication tunnels are created in software and can traverse firewalls, proxies, NATs, and similar devices.
Virtual networking devices can be used to provide isolation between virtual networks on the same physical device. In Cisco equipment, for example, the virtual networking function is built into Cisco IOS Software and the switch hardware. Virtual interfaces such as Ethernet, FDDI and Token Ring interfaces exist only in software, not in the physical hardware. How frames are encapsulated for tunneling depends on the traffic being carried: unicast, multicast and broadcast traffic each require appropriate handling so that frames are delivered to the right set of endpoints across the virtual network.
The following are some scenarios where virtual networking can be useful:
- Reducing the exposure of a host that connects to networks with different security requirements (a dual-homed host)
- Enabling departmental LANs with privacy and manageability between remote sites
- Providing separation between equipment in different locations without requiring separate wiring/cabling
- Any other use that meets the requirements of the organization, such as connecting end users' PCs in a library to corporate servers through router access control lists
Virtual networks have been used for many years by organizations. Their usefulness was realized
with the introduction of the Open Systems Interconnection (OSI) model. The OSI model
describes how different applications and devices communicate over a network. Using a virtual
network allows tighter integration between the two networks because they are both using the
same communication method.
Virtual networks also provide increased privacy between users on different virtual networks, because software distributes session keys used to encrypt the traffic. This encryption prevents eavesdropping and is particularly useful when each host must be isolated from the others within a virtual network.
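To make the session-key idea concrete, here is a minimal sketch using the third-party Python cryptography package (an assumption chosen for illustration; the chapter does not prescribe a specific cipher or library):

from cryptography.fernet import Fernet

# One shared session key per virtual network: hosts holding the key can
# read each other's traffic; eavesdroppers on the shared wire cannot.
session_key = Fernet.generate_key()
channel = Fernet(session_key)

ciphertext = channel.encrypt(b"payload crossing the virtual network")
print(channel.decrypt(ciphertext))  # b'payload crossing the virtual network'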
Virtual networking also allows configurations in which two physical devices have no direct connectivity, yet messages can still be exchanged between virtual devices that span two or more physical devices. This configuration is common in data centers, where two physical devices may be directly connected to each other while their virtual devices are not.
There are many pitfalls and risks when dealing with virtual networks. One of the most significant
risks is that faster processing speeds and technology have increased the amount of sensitive data
being transmitted over the Internet. There are special precautions that need to be taken so that
this sensitive data is protected when it travels across a virtual network. Another risk is that virtual networks often have fewer firewall boundaries than physical networks. When one device on a virtual network is compromised, the other computers are exposed as well, because the traffic between them can be attacked by the same method. A further risk is that malicious individuals can use virtual networking to attack other computers, exploiting the fact that their machine appears isolated from the others to slip around a firewall.
Chapter 4:
WAN Technology
WAN stands for wide area network: any network that spans a very large geographic range. The Internet is the most well-known example of a globally spanning WAN, but there are many other examples of WANs in everyday life.
This chapter explores what makes up a device on the Internet, how these networks function, and why they are an essential tool in today's world.
The Internet has grown steadily over the last two decades, and the technology behind it has become an essential part of our society and everyday life. Even so, it is worth stepping back to learn some basics. WAN technologies, for example, are less well known than other Internet-related topics such as TCP/IP or DNS, yet some grounding in them helps us understand the services we use every day.
What exactly is a WAN?
WAN stands for wide area network, but this definition doesn’t tell us much about what makes up
an actual WAN system. We also have to differentiate between a wired and wireless LAN to fully
understand the WAN concept.
A WAN is a network that spans over a very large geographical area. It can be a private network
of one’s own, but it can also be the connection to the rest of the world, such as connecting to
Facebook or Google. Obviously, those networks span over a large geographical area, but there
are other types of WAN networks.
The Internet is sometimes called a WAN. The internet consists of many different networks and
servers around the world, most of which are connected with each other through switches and
routers. This is why we sometimes just call it the “Internet”.
The Internet isn’t just one single network that connects all devices with each other though. It
consists of several different networks that are connected to each other. It is often said that the
Internet connects people, not computers.
There are also many other networks that span over a large geographical range, such as the
network connecting railway systems around the world. The systems aren’t connected through
switches and routers, but they do connect to each other through lines to allow for easy transfer of
information between different train stations in different countries.
WAN technology is thus something we use in everyday life, even if we rarely think of it as part of a network spanning a very large geographical area; it has simply become part of life.
There is also a different kind of LAN, the wireless LAN (WLAN). A WLAN is usually built to connect devices in a small area, such as passing the connection from the router to another device in your home, or sharing your phone's mobile connection with friends. Networks of WiFi hotspots can knit many such WLANs across a wider area, but each hotspot itself remains a small, local network.
The main difference between a WAN and a LAN network is the size of the network, and where
those networks connect. A WAN network is almost always connected to other networks through
switches and routers, whereas a LAN network is only connected to other devices in its vicinity.
The Internet is a WAN
As we mentioned before, people often refer to the Internet as a WAN. Yet the devices around you reach it through a LAN. How do the two terms fit together?
The answer lies in how we use the term “network” in everyday life. Rather than referring to the
network as a LAN or WAN, we often just call it the Internet. However, the Internet is a WAN
that connects multiple LAN networks that span over a very large geographical area.
If we look at the technical definitions of LANs and WANs, the distinction is one of scale and scope: a LAN is a set of connected devices in close proximity under one administration, while a WAN ties such networks together across large distances, usually over carrier infrastructure.
How does a WAN change my life?
There’s no denying that there would be something missing if we were to lose our connection to
everything on the Internet. WAN technology plays a huge part in our lives. It is what allows us to
connect to other people around the globe and share things we find interesting with each other.
Even though the Internet has rapidly grown in popularity over the last few decades, this doesn’t
mean that it is without fault. There are many different problems that might occur when it comes
to connecting to the Internet. These problems are caused by many different factors, such as using
outdated cable bundles or having poor wireless signal strength in your home or office. These
factors can often be hard to detect, but some are easier to notice than others.
The problem with WAN technology lies in its very nature: A WAN spans over a large
geographical area.
In many cases, WAN technology helps us find solutions for our problems. However, there are some issues that WAN technology cannot solve on its own. This matters when our Internet connection depends on another service in order to function properly, such as a phone line: problems occur when that other service fails or the line is saturated. These situations are often difficult to solve, but there are steps you can take to help prevent them.
WAN technology will continue to grow in popularity over time, and it is expected that more and
more problems will arise because of this. It’s important for us as consumers to be aware of these
things so that we can take steps towards solving these problems when they arise.
What is a WAN adapter?
There are many different processes we rely on in our daily lives, but one of the most important
ones is the process of receiving and sending data from one device to another. This is the reason
why your computer works, and it’s why we can search the Internet so easily.
When you send a file to another device, or transfer information over a network connection, the data passes through your computer's network interface. Before it can do this, the computer needs a path to the other devices, whether they sit on the same network segment or beyond it. Linking two network segments at this level is known as bridging, and a computer will often bridge or be bridged to other segments before its data can travel further.
These devices are often just called network adapters, but the WAN label matters: WAN links carry data between devices over long distances, as opposed to a LAN connection, which serves a local area.
How does a WAN adapter work?
A WAN adapter works by bridging the gap between your computer and devices beyond your local network: everything on your local LAN plus the devices reached through other switches and routers in different geographical areas. Through this bridging, the computer can communicate with all of those devices.
WAN adapters are often used in a variety of ways. You can use them to connect with other PCs
and devices on your network through a wired connection (Ethernet), or you can use them with
wireless network cards to get connected to multiple routers and switches around you.
There are many different types of WAN adapters, but they usually fall into one of three categories: ISDN adapters, DSL adapters, and broadband ADSL/VDSL adapters. They function much as their names suggest: ISDN adapters work over telephone lines, DSL adapters terminate DSL connections, and ADSL/VDSL adapters handle the corresponding broadband variants.
WAN adapters are often used in order to fix problems with internet connections. These problems
can arise over time due to a variety of different issues.
How does a WAN adapter differ from other network adapters?
There are many different types of network adapters that you can use in order to connect with
other devices on the same LAN or a different one. However, WAN adapters may not always be
the best option.
If you are only using wired Ethernet within a single LAN, a WAN adapter may not be necessary. A wired Ethernet connection works in the simplest way possible: nothing more than a physical link between two devices. If you are not connecting to anything beyond the local LAN, there is no need for an additional device to carry data further.
However, if you need to reach devices beyond your local network, a WAN adapter is what keeps everything connected. WAN adapters work differently from ordinary network adapters: rather than simply attaching your computer to the local segment, they terminate the carrier's access line (a DSL, ISDN or cable circuit, for example) and pass traffic between that line and your local network. This is the biggest difference between WAN adapters and other network adapters, and it is why the WAN side can reach locations that an ordinary LAN connection could never span with a direct physical cable.
In the next section of this chapter, we’ll discuss some of the most common problems associated
with WAN and how you can solve them.
Common problems associated with WAN connections
As we have seen in other parts of this guide, there are many different types of networks and
connections that you can use for your computer. Although these are extremely useful to us, they
can sometimes come with problems of their own.
WAN adapters do function differently from other adapters and from the networks they connect, because they bridge the gap between devices on the same LAN or between two LANs located in different geographical areas. This lets us connect far more devices than a single Ethernet segment could, but it also means the path is less stable: a WAN adapter simply forwards data back and forth and cannot, by itself, fix problems that arise elsewhere along the route.
Chapter 5:
Ethernet Standards
One of the most important standardization efforts in the networking world was Ethernet, which describes the physical and electrical properties of network cabling and signaling. The IEEE standardized Ethernet as 802.3 in the early 1980s, and the standard has been extended many times since. These standards ensure that equipment from different manufacturers communicates seamlessly: whether wiring a large number of facilities to each other or connecting them to the public Internet, interoperability is virtually guaranteed as long as everyone adheres to the standard.
Initially, IEEE 802.3 defined Ethernet at 10 Mbit/s over coaxial cable. Fast Ethernet (IEEE 802.3u, 1995) raised the speed to 100 Mbit/s over twisted pair, and Gigabit Ethernet followed: IEEE 802.3z (1998) for fiber and short copper links, and IEEE 802.3ab (1999), better known as 1000BASE-T, for ordinary twisted-pair cabling. Each step was particularly important for companies operating large networks, since it preserved the existing cabling model while multiplying throughput.
Ethernet's physical standard has proven so durable that, decades after standardization, it remains one of the most commonly used protocols, which in turn makes it one of the most important technologies in modern business operations.
What is an Ethernet?
An Ethernet, for those of you not in the know, is a computer network consisting of one or more devices that can transmit to each other over a shared medium. It is called “Ethernet” because it links those devices together with cables. The system was first developed in 1973 and was revolutionary at the time, overcoming problems experienced with other transmission systems such as copper-wire telephone circuits and coaxial cable television plant.
Ethernet operates at the physical and data link layers: it defines the cabling, the signaling, and the framing that carry data between directly connected machines. Higher layers, such as the transport layer protocols TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), ride on top of it.
In this chapter, we will take a look at the different types of Ethernet networks while discussing
their capabilities. We will also go into detail on both wired and wireless LANs. We will start out
with the basics, which will be followed by specifics on wired LANs.
First, we will survey the different types of wired LANs in existence today, and then the types that exist for wireless LANs. Next, we will look at how a network is organized, followed by an introduction to Ethernet adapters and switches and how these components combine to form networks. At the end of this chapter, we will finish with a look at some common products and services available for wired and wireless LANs. Now let's start with wired Ethernet networks!
Note: Throughout this chapter, wherever I mention “LAN”, I also mean “WAN (Wide Area
Network)”.
A wired Ethernet network today is essentially a full-duplex, switched system: each link runs at its rated speed (10, 100 or 1000 Mbit/s) in both directions at once under normal circumstances. Higher speeds are available where needed, though for most homes and businesses Gigabit Ethernet is ample.
As mentioned previously, Ethernet networks use a special type of cable that allows for faster transmission than the conventional copper wiring used for telephone lines. The usual cable is Category 5 (and its successors), which carries data at speeds from 10 Mbit/s up to 1000 Mbit/s (1 gigabit per second). Having established what the cabling can carry, let's examine how data is actually transmitted from one device to another.
The most common form of LAN transmission is baseband transmission: the entire bandwidth of the cable carries a single digital signal at a time, and the sender and receiver synchronize directly on that signal. This is what the “BASE” in names like 10BASE-T refers to.
The alternative is broadband transmission, in which the cable is divided into multiple frequency channels, each carrying its own signal, the way a cable-television system carries many channels on one wire. Broadband is well suited to distribution networks but is rarely used for office LANs, where the simplicity and robustness of baseband signaling win out.
Two multiplexing schemes are worth knowing as background. Frequency-division multiple access (FDMA) divides the available spectrum into separate frequencies and gives each sender its own channel; it can be used with both baseband and broadband systems but requires appropriate analog hardware at each end. Time-division multiple access (TDMA) instead divides one channel into time slots and lets senders take turns. Both appear mainly in carrier and wireless systems; wired Ethernet historically arbitrated access to its shared medium with CSMA/CD (carrier sense multiple access with collision detection) instead.
An early and once very common baseband Ethernet variant is 10BASE2. It runs 10 Mbit/s over a single thin coaxial cable laid out as a bus: each station taps onto the same cable, whose braided shield protects the inner conductor from outside interference.
Note: the “2” in 10BASE2 refers to its roughly 200-meter (in practice 185-meter) maximum segment length. It should not be confused with 10BASE-T, which is a twisted-pair standard.
Note: a 10BASE2 segment supports up to 30 attached devices, all sharing the one 10 Mbit/s channel. Keeping individual cable runs short limits interference and crosstalk, which would otherwise degrade the signal on longer cables.
Modern twisted-pair Ethernet is instead cabled in a star configuration: a central hub or switch, wall jacks for each computer, and horizontal cable runs connecting every device back to that central point.
Cabling Basics of the Ethernet
The following steps are needed to start laying out an Ethernet cable:
Step 1: Determine how many segments you need on your network. For example, if you want two separate 100 Mb/s networks in your house, plan the cable runs for each of those LANs.
Step 2: Determine which cable you need for each run. The common copper choices are Category 5, Category 5e and Category 6 unshielded twisted pair (UTP). Category 5 handles 100 Mbit/s, Category 5e handles Gigabit Ethernet, and Category 6 gives extra headroom; all three are limited to about 100 meters per run. For runs longer than that, use fiber optic cable, which can span kilometers.
Step 3: Once you have determined which cable is needed, locate a pathway for that particular cable through your home or office. It is good practice to pull a second cable alongside each run for easy redundancy.
Step 4: If you are using UTP, use a punchdown tool to terminate the wires into jacks and patch panels, or a crimper to fit plugs directly.
Step 5: Patch the terminated cables into the appropriate ports on your network equipment.
When connecting two networks together, match the cabling to the links involved. If, for example, you have two 100 Mb/s uplinks to a frame relay or ATM service, use 100BASE-TX-rated cabling on those links as well. For fiber optic runs beyond the multimode limits, single-mode fiber is required, because the distance standards differ between the two fiber types.
Standard twisted-pair Ethernet cable contains four pairs of copper wires. In 10BASE-T and 100BASE-TX, two of the pairs carry the Ethernet data (one pair to transmit, one to receive) while the remaining pairs sit idle; 1000BASE-T uses all four pairs at once.
Ethernet connectors come in several varieties, mostly relating to their size and shape. The type of
connector used is entirely dependent on the size and shape of the network card’s port. The
physical connection happens inside the network card with some sort of electrical contact,
generally referred to as a pin. It is entirely possible for there to be more than one type (size or
type) of physical connector on a network card; this gives users more flexibility and less waste
when installing cabling.
There are several categories of Ethernet connectors.
Most 10 Mbit/s, 100 Mbit/s and 1000 Mbit/s Ethernet networks use the RJ-45 connector, and Category 5 twisted pair can support speeds of up to 1 Gbit/s. Twisted-pair cable comes in two constructions: solid and stranded. Solid cable is used for permanent installations, runs that will not be moved. Stranded cable is used for patch cords that are moved around regularly and flexed in tight spaces, since its many fine strands resist the fatigue damage that solid conductors would suffer in that environment. The two constructions thus suit different installation needs. Where copper runs out of reach, fiber optic cable takes over, spanning from hundreds of meters on multimode fiber to tens of kilometers on single-mode fiber.
A twisted-pair Ethernet cable contains four pairs, eight wires in total; each pair carries a differential signal, one wire carrying the signal and its twin the inverse. The pinout of an RJ-45 connector identifies which wire carries which signal. Coaxial cable, by contrast, has a single center conductor and a shield, and fiber optic cable carries light down individual glass strands, typically bundled several to a cable.
The RJ-45 connector has eight pin positions (formally it is an 8P8C connector), and two standard wiring layouts, T568A and T568B, define which colored conductor lands on which pin. A cable wired the same way at both ends is a straight-through cable; wiring one end T568A and the other T568B produces a crossover cable.
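For reference, here is the T568B layout expressed as a small Python table (these are the standard pin assignments; the TX/RX roles shown are the ones used by 10/100BASE-T, while 1000BASE-T uses all four pairs bidirectionally):

# T568B pin assignments for an RJ-45 plug.
T568B = {
    1: "white/orange (TX+)",
    2: "orange (TX-)",
    3: "white/green (RX+)",
    4: "blue",
    5: "white/blue",
    6: "green (RX-)",
    7: "white/brown",
    8: "brown",
}
for pin, wire in T568B.items():
    print(f"pin {pin}: {wire}")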
Names like 10BASE-T and 100BASE-TX denote 10 Mbit/s and 100 Mbit/s twisted-pair Ethernet respectively. 10BASE5 was the first version to become an official IEEE standard; 10BASE2 later became the more popular coax variant because, at the same 10 Mbit/s speed, it was cheaper and easier to install than 10BASE5.
The standards for some of the different types/categories of Ethernet cable are independent of
each other. This means that while one type may be limited to 100 meters in length, another one
can be limited to only 10 meters but both can be used together in the same network.
When installing a network, you often need to connect two or more networks together and make them function as one. Match the cabling to the capacity of each link: if you have two 100 Mbit/s uplinks to a frame relay or ATM service, use 100BASE-TX-rated cabling on those links as well.
Category 3 twisted-pair cabling, rated to 16 MHz, was the standard medium for 10BASE-T networks in the early 1990s (a four-pair variant, 100BASE-T4, could push it to 100 Mbit/s). It faded quickly once Category 5 arrived, offering far higher speed at little additional cost.
In the mid-1990s the TIA/EIA released the Category 5 twisted-pair cabling standard. Rated to 100 MHz, it carries 100BASE-TX at the full 100 Mbit/s, and its enhanced version, Category 5e, carries Gigabit Ethernet. Category 5 quickly became the dominant cabling in office and commercial buildings, installed overhead on cable trays wherever 100 Mbit/s network cards appeared.
Note that 100BASE-TX cannot run over Category 3 cabling: the standard requires the 100 MHz bandwidth of Category 5 or better, so older Category 3 plants had to be rewired (or fall back to 100BASE-T4) to reach Fast Ethernet speeds. A related distinction is shielding: ordinary UTP (unshielded twisted pair) relies on the twisting alone to reject interference, while STP (shielded twisted pair) adds a foil or braid around the pairs for electrically noisy environments. The two carry the same Ethernet signals but must be terminated with matching connectors and grounding.
For in-building runs the most popular cabling remains four-pair UTP, but if you want to go longer than 100 meters you will need fiber optic cable, generally referred to as fiber-optic network cabling.
10BASE2 uses thin coaxial cable (RG-58) and was common in computer networking before twisted-pair Ethernet took over. It carries 10 Mbit/s baseband signaling, with a maximum segment length of approximately 185 meters and a minimum spacing of about half a meter between station taps. The cable has a solid copper inner conductor surrounded by a braided shield; stations attach through BNC T-connectors, and each end of the bus must be closed with a 50-ohm terminator.
This thin coax is commonly called “Thinnet,” and it was widely installed in older buildings because it was relatively easy to run and cost less than the thick coaxial alternative, “Thicknet” (10BASE5). Thinnet's maximum segment length is 185 meters; by comparison, a CAT 5 twisted-pair run tops out at 100 meters.
The maximum segment length for Thicknet is 500 meters, and under the 5-4-3 rule up to five segments can be chained through four repeaters, for a total network span of about 2500 meters. Thicknet is not popular anymore, but it still turns up occasionally in older buildings as backbone cabling between centralized network equipment such as hubs and concentrators.
10BASE5 descends from the original DEC-Intel-Xerox Ethernet of 1980 and is defined in the first IEEE 802.3 standard (1983). Its maximum segment length is 500 meters.
10BASE-T, standardized as IEEE 802.3i in 1990, brought Ethernet onto telephone-style twisted pair and was installed extensively in offices and commercial buildings through the early 1990s, remaining dominant until Fast Ethernet displaced it around 1995. Its maximum segment length is 100 meters, with every device cabled back to a central hub.
100BASE-TX is the twisted-pair form of Fast Ethernet, running at 100 Mbit/s. It was standardized in IEEE 802.3u (1995) and requires Category 5 cabling, which is rated to 100 MHz; it uses two of the cable's four pairs and allows segment runs of up to 100 meters between a device and its hub or switch.
100BASE-FX is the fiber optic form of Fast Ethernet, also specified in the IEEE 802.3u-1995 standard. It reuses the FDDI fiber signaling (1300 nm light over multimode fiber), which made it a cost-effective replacement for 10BASE-FL and a natural fit alongside Fiber Distributed Data Interface (FDDI) infrastructure. It was used mainly to connect concentrators and switches where cable segments run well beyond copper's 100-meter limit: roughly 400 meters at half duplex and up to 2 kilometers at full duplex.
The same 802.3u standard also defined 100BASE-T4, a variant that achieved 100 Mbit/s over older Category 3 cabling by using all four pairs.
1000BASE-T is the standard for Gigabit Ethernet over twisted-pair cabling, sometimes called “Gigabit copper.” Ratified as IEEE 802.3ab in 1999, it runs over Category 5e (or better) cable with the familiar maximum segment length of 100 meters.
1000BASE-LX is a Gigabit Ethernet standard that uses fiber optic cable, specified in IEEE 802.3z (1998). It transmits long-wavelength (1310 nm) laser light and reaches about 550 meters over multimode fiber or 5 kilometers over single-mode fiber. “LH” (long haul) is a vendor designation for extended-reach single-mode optics that stretch the distance to 10 kilometers or more. Single-mode fiber has the advantage over multimode of much lower loss and dispersion, so it covers longer distances without amplifiers or repeaters.
1000BASE-SX is the short-wavelength (850 nm) Gigabit Ethernet standard, also specified in IEEE 802.3z, and it operates over multimode fiber, which is less expensive than single-mode. Its maximum distance is about 220 to 275 meters on 62.5 µm multimode fiber and up to 550 meters on 50 µm multimode fiber, whereas 1000BASE-LX reaches 550 meters on multimode and 5 kilometers on single-mode fiber.
1000BASE-CX is also specified in the IEEE 802.3z standard, but it runs over short twinaxial copper cabling rather than fiber. Its maximum segment length is only 25 meters, which confines it to short links within a rack or wiring closet.
1000BASE-T uses all four pairs of wires, rather than just the two pairs used by 100BASE-TX, because of the higher speed at which information must be sent. It is specified in IEEE 802.3ab (1999) and requires at least Category 5e cabling, with Category 6 providing additional headroom. The maximum segment length is 100 meters between a device and its switch; longer reaches are achieved by adding switches, not by stretching the cable.
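As a quick recap of the standards covered above, here is the same information gathered into a small Python table (the figures are the commonly quoted nominal values):

# Quick reference: standard -> (speed in Mbit/s, medium, typical max run).
ETHERNET_STANDARDS = {
    "10BASE5":     (10,   "thick coaxial cable",  "500 m"),
    "10BASE2":     (10,   "thin coaxial cable",   "185 m"),
    "10BASE-T":    (10,   "Cat 3 UTP",            "100 m"),
    "100BASE-TX":  (100,  "Cat 5 UTP",            "100 m"),
    "100BASE-FX":  (100,  "multimode fiber",      "2 km full duplex"),
    "1000BASE-T":  (1000, "Cat 5e/6 UTP",         "100 m"),
    "1000BASE-SX": (1000, "multimode fiber",      "550 m"),
    "1000BASE-LX": (1000, "single-mode fiber",    "5 km"),
    "1000BASE-CX": (1000, "twinaxial copper",     "25 m"),
}
for name, (speed, medium, reach) in ETHERNET_STANDARDS.items():
    print(f"{name:12} {speed:>5} Mbit/s over {medium}, max {reach}")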
Chapter 6:
Introduction to IP
In order for our computers and devices to communicate, a wide variety of protocols is used across the world. One of the most significant is the Internet Protocol, or IP. IP gives every device on a network an address and carries packets of data from point A to point B, including across network boundaries, which is what lets one network hand traffic to another until it reaches the far side of the world. IP itself makes only a best-effort promise: it moves each packet toward its destination but leaves guarantees about accuracy and delivery to the protocols layered on top of it, such as TCP.
IP Protocol
The Internet Protocol
IP, short for Internet Protocol, is the protocol that allows you to communicate with other computers on the Internet. There are two versions in use: IPv4 and IPv6. This book explains how both work, and how DNS (the Domain Name System) lets you use names in place of raw addresses. The important thing about IP is that it handles the details of moving data from your computer to another location, so applications do not have to.
Connectionless Protocols
Some protocols send data without first establishing a session with the other computer. Each packet (a datagram) is addressed and sent on its own, like a postcard: it may arrive out of order, be duplicated, or not arrive at all, and the sender gets no confirmation. UDP is the classic connectionless protocol; it trades reliability for speed and simplicity.
Connection-Oriented Protocols
These protocols require that a connection be established between two computers before any data is sent, which prevents data from being delivered to the wrong computer, or to no computer at all. TCP is the classic example: like a phone call, the two ends first agree they are talking to each other (the handshake), and only then does the conversation proceed.
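The difference shows up directly in the socket API. In this hedged Python sketch, the UDP datagram is simply launched toward its destination, while the TCP connect() must complete a handshake first (192.0.2.10 is a documentation placeholder address, so expect the TCP step to fail if you run this as-is):

import socket

# Connectionless: UDP addresses each datagram and sends it.
# No handshake, no delivery guarantee: fire and forget.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("192.0.2.10", 9999))
udp.close()

# Connection-oriented: TCP performs a three-way handshake first,
# so connect() fails if nothing is listening at the far end.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(2.0)
try:
    tcp.connect(("192.0.2.10", 9999))
    tcp.sendall(b"hello")
except OSError as exc:
    print("TCP connection failed:", exc)
finally:
    tcp.close()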
Communication Layers
1. Application Layer: provides network services directly to user applications, defining how programs on separate machines exchange messages (protocols such as HTTP and SMTP live here).
2. Presentation Layer: handles how information is represented, translating between application data formats and the network, including character encoding, compression, and encryption.
3. Session Layer: establishes, manages, and tears down sessions between applications, making sure the right data reaches the right dialogue at the right time.
4. Transport Layer: provides end-to-end delivery between systems. TCP (Transmission Control Protocol) adds error correction, flow control, end-to-end packet ordering, retransmission, congestion control and sequencing; UDP offers a lighter, connectionless alternative.
5. Network Layer: handles logical addressing and the routing of information between networks. The main protocol here is IP (Internet Protocol), which supports addressing, packet fragmentation, packet reassembly and header checksums (a worked checksum sketch follows this list).
6. Data Link Layer: handles physical addressing (MAC addresses), framing, and error detection on a single link.
7. Physical Layer: as the name implies, handles communication between two machines at the hardware level (wires and cables, fiber optics, radio, etc.).
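As promised, here is the network layer's header checksum computed in Python. The algorithm is the one RFC 791 specifies for IPv4; the sample header bytes are a standard worked example with the checksum field zeroed out:

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of all 16-bit words in the header,
    then the one's complement of the result (RFC 791)."""
    if len(header) % 2:          # pad odd-length input with a zero byte
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:           # fold any carry back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

hdr = bytes.fromhex("45000073000040004011" + "0000" + "c0a80001c0a800c7")
print(hex(ipv4_checksum(hdr)))   # 0xb861

A router recomputes this checksum at every hop, because decrementing the TTL changes the header.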
A virtual private network (VPN) creates a virtual encrypted link through a public network such as the Internet, or over an MPLS infrastructure, between two points of presence (POPs). The link carries IP packets encapsulated inside an outer header as they cross each network node, so the inner packet arrives unaltered at the far end. This approach creates a secure tunnel with minimal impact on the underlying network infrastructure and without pre-configuration of the intermediate devices.
Most VPNs use a tunneling protocol such as Layer 2 Tunneling Protocol (L2TP), which runs over UDP, or Point-to-Point Tunneling Protocol (PPTP), which encapsulates PPP frames using GRE.
Virtual Private Network Protocol Standards
There are six main VPN protocols and products that most companies use today:
1. SSL VPN
2. IPSec VPN
3. PPTP VPN
4. Layer 2 Tunneling Protocol (L2TP)
5. Cisco Systems VPN Client
6. Microsoft Virtual Private Networking (VPN)
The following list is the most common deployment configuration for a WAN router (a sketch of these settings as a configuration record follows this list):
1. Static IP address
2. Subnet mask
3. Default gateway IP address and subnet mask
4. DNS server IP addresses (and subnets, if needed) for domain name resolution
5. WAN connection type, PPPoE or Ethernet; if PPPoE is used, you may also need to configure the authentication settings for the connection with the specific ISP
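As a hedged illustration, those settings could be captured in a simple Python configuration record; every value below is a placeholder, not a recommendation:

# Typical WAN-router deployment settings, expressed as a record.
wan_router_config = {
    "ip_address": "203.0.113.2",        # static public address (example)
    "subnet_mask": "255.255.255.252",   # point-to-point WAN subnet
    "default_gateway": "203.0.113.1",   # the ISP's side of the link
    "dns_servers": ["198.51.100.53"],   # resolver(s) for name lookups
    "wan_type": "PPPoE",                # may require ISP credentials
}
for key, value in wan_router_config.items():
    print(f"{key}: {value}")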
The following list is the most common deployment configuration for a LAN switch's management interface:
1. Static IP address
2. Subnet mask
3. Default gateway IP address and subnet mask
4. DNS server IP addresses (and subnets, if needed) for domain name resolution
(The PPPoE/WAN settings in the previous list apply to the router, not to the switch.)
“When you are planning a WAN solution, you should consider the options in the following order:
1. Performance, for applications that require very high bandwidth and low latency.
2. Cost effectiveness, for applications that do not require high bandwidth but do require low latency.
3. Security, for applications that are sensitive and have regulatory compliance requirements.”
DC-Networks is a flexible approach to wiring data center buildings without investing in costly cabling infrastructure. Traditionally, IT networks were connected via discrete taps (a single cable coming into or going out of a building), and each connection was treated as an individual interconnect between two buildings. Today, most data center tenants want to eliminate these so-called taps and connect their entire network across the interior of their buildings.
WAN connections usually use virtual private network (VPN) technology. This allows companies
to extend the reach and security of private networks over the Internet or a telecommunications
system. A VPN creates a private connection between two locations that protects traffic against
outside intrusion, data tampering, and man-in-the-middle attacks. VPNs are secure because they
use an encryption suite such as IPSec, which makes it virtually impossible to view or change
data packets without the proper encryption keys.
In this example, private traffic leaves the office and interconnects with other WAN sites via a
VPN gateway. Traffic is encrypted and secured before being sent over the Internet or
telecommunications system. It should be noted that each VPN connection can have unique
properties configured by the administrator to meet business needs.
IP Addressing
IP addresses are, quite simply, the identifiers for devices connected to a network.
IP Addresses: Bits of Numbers That Define You Online
Every device that connects to a network needs some sort of identifier so that other devices
on that network know where to reach it. This identifier is called an “Internet Protocol” (IP)
address, and without one a device has no way of exchanging data with anything else on the
network. How a device gets its IP address can vary, and I’ll outline the common methods
below.
Assigning an IP Address: A Primer
Every connected device on your network, from the main router down to a rarely-used printer,
must be assigned an IP address before it can communicate with anything else. Here’s how it
works. Assigning an address is simple: with some knowledge of networking fundamentals and a
network-enabled device, the process only takes a minute.
How to Assign a Computer or Laptop IP Address There are several ways to do this. Most
devices simply accept an address automatically from the network’s DHCP server, but you can
also set one by hand in the operating system’s network settings or through your router’s
web interface.
Using Built-in Tools to Assign an IP Address Utilizing the tools built into Windows (the
network adapter’s TCP/IP properties, or the ipconfig command to inspect the current
settings), you can easily view or set a computer’s IP address. The process is simple, but it
is worth documenting carefully if you’re going to be assigning addresses to numerous
devices.
How to Assign a Wired or Wireless IP Address The process is the same for both: each network
adapter (wired Ethernet or Wi-Fi) gets its own address, configured either automatically via
DHCP or manually in that adapter’s settings.
How to Assign a Static IP Address A static IP address is simply one you enter manually, so
that the device keeps the same address permanently. Pick an address inside your subnet but
outside the DHCP server’s pool so that there are no conflicts.
How to Create a Domain Name Server (DNS) Entry A DNS entry maps a friendly name to an IP
address, so this task goes hand in hand with static addressing: once a server has a fixed
address, add a record for it on your DNS server so that other machines can find it by name.
How to Configure a Windows Server and Assign an IP Address to a Computer You can either do
this by purchasing your own server and setting it up yourself, or you can purchase hosting
for your website and have someone else manage the servers for you.
Using NAT on Your Router The premise behind Network Address Translation (NAT) is that the
router builds an internal (private) network from all of the machines it has assigned IP
addresses to, and presents them to the outside world behind a single public address. In
other words, multiple devices share one Internet-facing identity.
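A quick way to tell which side of the NAT boundary an address sits on is to test whether it
is a private address. A minimal sketch with Python’s standard ipaddress module (the sample
addresses are arbitrary examples):

import ipaddress

# Classify sample addresses (example values) as private or public.
for text in ("10.0.0.5", "172.16.40.2", "192.168.1.20", "8.8.8.8"):
    ip = ipaddress.ip_address(text)
    kind = "private (behind NAT)" if ip.is_private else "public (routable)"
    print(f"{text}: {kind}")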
Be Prepared: How to Configure Your Network for NAT Do you want to configure your router for
port forwarding? Port forwarding is easy, but if you are new to it, it can feel
overwhelming. It is simply a NAT rule that tells the router to send traffic arriving on a
chosen public port to a specific internal host and port.
How to Populate a Windows Server with Domain Information Creating your own domain on an
existing server is simple and something that I highly recommend doing. It only takes a few
minutes and has some great advantages: for example, users can log in to any machine in the
domain with a single set of credentials instead of maintaining separate local accounts.
How to Add a Zone File A DNS zone file holds the records (names and their addresses) for a
domain. Adding one on your DNS server takes only a few minutes and lets machines on your
network be found by name instead of by IP address.
How to Configure Your Router for Port Forwarding In the router’s administration page, a port
forwarding rule has three parts: the external (public) port, the internal host’s IP address,
and the internal port. Traffic arriving from the Internet on the external port is rewritten
by NAT and delivered to the internal host.
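To make the idea concrete, here is a user-space sketch of a forwarder in Python. It is
illustrative only: a real router applies NAT rules inside its firmware and handles many
connections, and the listening port, target host and target port below are hypothetical
example values:

import socket, threading

LISTEN_PORT = 8080                 # "public" port (example value)
TARGET = ("192.168.1.50", 80)      # internal host and port (example values)

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until the connection closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", LISTEN_PORT))
server.listen(1)

client, addr = server.accept()                  # one inbound connection
upstream = socket.create_connection(TARGET)     # open a link to the internal host
threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
pump(upstream, client)                          # relay the replies back out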
Addressing Scheme
If you’re a person who is looking for an easy-to-understand starting point for IP addressing, this
guide will help.
Learn about the three main address classes, A, B, and C, and how an address is split between
its network and host parts. Read about subnetting and explore how masks map ranges with many
hosts onto smaller numbers of bits, or vice versa. Finally, see what makes IPv6 different
from IPv4 and why it was developed in the first place.
In a nutshell, an IP address is a number that is used to identify a machine on a network. This
number can then be used to route the messages that are sent to and from that machine.
An IPv4 address is made up of four numbers (each between 0 and 255, so one to three digits
long), separated by dots. Together these four numbers encode two different parts of the
address: a network portion and a host portion. These parts are explained below:
The first part of an IP address identifies the network the machine belongs to. Under the
original “classful” scheme, the value of the first number tells you how many bits make up
the network portion: a first number from 1 to 126 indicates a Class A address (8 network
bits), 128 to 191 a Class B address (16 network bits), and 192 to 223 a Class C address (24
network bits).
The remaining part of the address identifies the host within that network. Whatever bits are
not used for the network portion are available to number individual machines, so a Class A
network leaves 24 bits for hosts, a Class B leaves 16, and a Class C leaves 8.
The original IPv4 scheme therefore divided the 32-bit address space into classes rather than
into regions: Class A provided 126 usable networks of roughly 16.7 million hosts each, Class
B about 16,000 networks of 65,534 hosts each, and Class C about 2 million networks of 254
hosts each.
Subnetting lets an organization split one of these networks into smaller pieces by borrowing
bits from the host portion. Each borrowed bit doubles the number of subnets: borrowing n
host bits yields 2 to the power n subnets, at the cost of fewer host addresses in each one.
However, the classful system had a number of shortcomings:
It lacked flexibility because networks came in only three sizes; a site that outgrew a Class
C (254 hosts) had to jump all the way to a Class B (65,534 hosts), wasting enormous amounts
of address space. There was no way to advertise “narrow” subnets (much smaller than whole
classful networks) to the rest of the Internet, so routing tables grew quickly. And with
only 32 bits in total, the address space itself was running out.
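Here is a minimal sketch of that borrowing arithmetic using Python’s standard ipaddress
module (the 192.0.2.0/24 block is a documentation range used purely as an example):

import ipaddress

block = ipaddress.ip_network("192.0.2.0/24")

# Borrow 2 host bits: /24 -> /26 gives 2**2 = 4 subnets of 64 addresses each.
for subnet in block.subnets(new_prefix=26):
    hosts = subnet.num_addresses - 2   # minus the network and broadcast addresses
    print(f"{subnet}  usable hosts: {hosts}")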
In order to solve these problems, a new version of IP addressing called IPv6 was introduced.
The main difference between IPv4 and IPv6 is the size of the address itself: 128 bits
instead of 32.
To achieve its goals, IPv6 has been designed with a structure that allows far more
flexibility than the previous version. This structure allows for vastly more networks than
the roughly 4 billion addresses that were theoretically possible with the old system. Also,
subnetting is now possible on a per-bit level, which allows for much smaller networks than
were previously available.
An IPv6 address is composed of 128 bits, written as eight groups of 16 bits each, with each
group expressed as four hexadecimal digits and the groups separated by colons. A typical
unicast address divides into a network prefix (the leading bits, which identify the network
and subnet) and an interface identifier (the trailing bits, usually the last 64, which
identify the host on that network). There is no separate dotted-decimal mask; the prefix
length is written after a slash, as in /64.
Because the address is so long, IPv6 defines two shorthand rules: leading zeros within any
16-bit group may be dropped, and one consecutive run of all-zero groups may be replaced by a
double colon (::), which is allowed only once per address. For example, the full address
2001:0db8:0000:0000:0000:0000:0000:0001 compresses to 2001:db8::1.
This notation, together with the enormous address space, gives IPv6 far more flexibility in
how addresses are assigned than IPv4 had. In IPv4, subnets existed, but distributing
information about many small subnets throughout all parts of the Internet was not practical;
in IPv6, hierarchical prefixes make it straightforward to delegate and advertise networks of
almost any size, and discontiguous subnets are no longer a problem.
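These compression rules are easy to verify with Python’s standard ipaddress module, which
always prints the canonical compressed form (the address used here is from the
2001:db8::/32 documentation prefix):

import ipaddress

full = "2001:0db8:0000:0000:0000:0000:0000:0001"
addr = ipaddress.ip_address(full)

print(addr)           # 2001:db8::1  (leading zeros dropped, one :: allowed)
print(addr.exploded)  # back to the full eight-group form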
The network prefix and the interface identifier of an IPv6 address are designated by a
prefix length written after a slash. The prefix determines the network: for example,
2001:db8:abcd::/48 refers to all addresses whose first 48 bits match, leaving the remaining
bits for subnets and hosts. A site that holds a /48 typically carves it into /64 networks,
using the 16 bits between the /48 boundary and the /64 boundary as a subnet ID.
Prefix lengths run from /0 (the whole address space) up to /128 (a single address). The
longer the prefix, the smaller the network: each additional bit of prefix halves the number
of addresses in the block.
Because every interface’s subnet is unambiguous from its prefix, two networks with different
prefixes can never interfere with each other, even if their interface identifiers happen to
be identical. Renumbering an interface within its /64 changes nothing for hosts in other
subnets.
IPv6 addresses are 128 bits long, compared to 32 bits in IPv4. The addressing system was
designed with a structure that allows far more flexibility than the IPv4 one: rather than
numbering every network individually, providers and sites are delegated whole prefixes and
divide them internally.
In the typical layout, the leading 64 bits hold the network information (the global routing
prefix plus the subnet ID) and the trailing 64 bits hold the interface identifier. A
device’s subnet is found by masking its address with the prefix length, exactly as with an
IPv4 subnet mask.
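A short sketch of that split, again with Python’s ipaddress module, carving the first few
/64 subnets out of a /48 (documentation prefix; a real allocation would come from your ISP):

import ipaddress

site = ipaddress.ip_network("2001:db8:abcd::/48")

# 2**(64 - 48) = 65,536 possible /64 subnets inside a /48.
subnets = site.subnets(new_prefix=64)
for _ in range(4):
    print(next(subnets))
print("total /64s available:", 2 ** (64 - 48))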
In IPv4, the classes eventually proved too coarse, and private address ranges (such as
192.168.0.0/16) were set aside for internal use. IPv6 has an equivalent: a “unique local
address” space, the prefix fc00::/7, reserved for use on private networks.
Because this address space is reserved for private networks, it is not reachable from the
Internet. If someone were to use an address from this range on a device connected directly
to the Internet (for example, in their home or office), it would lead to problems, because
routers in the global routing infrastructure will not carry these prefixes and there is no
way to tell which organization a given unique local prefix belongs to. Therefore, the IPv6
designers decided that such addresses must not be routed on the global Internet.
Several other well-known address blocks are reserved alongside it:
The loopback address (::1/128) is used for diagnostic purposes or for testing whether the
local protocol stack is working (with ping, for example). It is the IPv6 equivalent of
127.0.0.1. Because this address is never assigned to a real interface on the network, it has
no public counterpart and is not routable on the IPv6 Internet.
Link-local addresses (fe80::/10) are configured automatically on every IPv6 interface and
are used to communicate between hosts on the same link. Hosts on the same subnet will all
have addresses that start with this prefix, so neighbors can always reach each other even
before any other configuration has taken place. Link-local addresses are valid only on
their own link and are never forwarded by routers.
Unique local addresses (fc00::/7), described above, are designated for private networks.
This scheme allows an organization to create many internal subnets that segregate different
parts of its infrastructure without consuming globally routable space. Hosts that carry
only unique local addresses must go through a gateway with a global address (or a
translation mechanism) for their communications to reach the Internet; translation will be
discussed later in this book.
Multicast addresses (ff00::/8) are reserved for one-to-many delivery and must not be
assigned as ordinary interface addresses. Assigning one to a normal device would lead to
incorrectly functioning devices or a disruption in service, since other hosts would treat
traffic to that address as group traffic rather than as traffic for a single machine.
Global unicast addresses (currently allocated from 2000::/3) are the ordinary public
addresses used for any device that must be reachable across the global Internet.
How addresses are assigned to organizations depends on how much address space each one
needs. Allocation is hierarchical: the Internet Assigned Numbers Authority (IANA) delegates
large blocks to the Regional Internet Registries (RIRs), which in turn allocate prefixes to
Internet service providers and large organizations. An ISP then delegates a prefix, commonly
a /48 or /56, to each customer site. In this way the registries can limit how much address
space any one organization consumes and keep the global routing table orderly.
IPv6 also formalizes anycast. An anycast address is an address that is assigned to multiple
devices at once; a packet sent to it is delivered to the nearest one of those devices (as
determined by routing), rather than to all of them, which makes anycast useful for
distributing services such as DNS across many locations.
Finally, IPv6 retains group communication through multicast. Hosts announce which multicast
groups they want to receive using Multicast Listener Discovery (MLD), the IPv6 counterpart
of IPv4’s IGMP, and routers use those reports to decide where to forward group traffic.
IPv6 also supports services such as DHCPv6 that can be used to connect devices to the
network and provide them with an IP address and other configuration.
Internet Protocol Version 6 – IPv6
Unlike its predecessor, IPv4, with its four octets of numbers, IPv6 uses a 128-bit address.
It also incorporates other technologies, such as mobile IP and multicast routing, directly
into the protocol family.
IPv6 was designed by the Internet Engineering Task Force (IETF) to succeed IPv4, avoid the
exhaustion of the address space, and ensure robustness against future growth. The core
specification was published as RFC 2460 in 1998 (since updated by RFC 8200), and deployment
has been spreading steadily across networks in many countries ever since.
Because the IPv6 address is four times longer than an IPv4 address, the number of possible
addresses is astronomically larger: 2 to the 128th power, versus about 4.3 billion for IPv4.
An IPv6 address is written as eight colon-separated groups of four hexadecimal digits, for
example 2001:db8:0:0:0:0:0:1, compressed to 2001:db8::1.
A global unicast address divides into three parts: the global routing prefix, the subnet ID,
and the interface identifier. The global routing prefix is assigned to an organization by
its Internet service provider or regional registry; it identifies the site as a whole and is
written in prefix notation, such as 2001:db8:abcd::/48.
The subnet ID identifies a particular subdivision of the network behind that prefix, in the
bits between the routing prefix and the interface identifier.
Addresses are further divided into “link-local addresses” and “global addresses”. The former
have special semantics in IPv6 and can be used only on the local link, while the latter are
routable globally.
IPv6 multicast addresses carry an explicit scope field that limits how far a packet may
travel. The common scopes include interface-local, link-local (prefix ff02::), site-local
(ff05::), organization-local and global. For example, ff02::1 reaches all nodes on the local
link, while a global-scope group can span the Internet. Because the scope is encoded in the
address itself, a router can tell at a glance whether a multicast packet is allowed to leave
the link or the site on which it originated, which greatly simplifies filtering.
Multicast addressing allows for better scalability than IPv4 when handling large numbers of
potential receivers, thanks to this structured support for group communication. IPv6 is
capable of unicast, multicast and, as described earlier, anycast communication. Group
communication still has challenges of its own, such as maintaining group membership state in
routers, which is one reason multicast deployment on the global Internet remains limited
even as IPv6 itself spreads.
IPv6 supports Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to
implement transport-layer protocols over IP, exactly as IPv4 does. TCP provides reliable
delivery of packets with inherent flow control. UDP provides a lightweight datagram service,
where reliability, if needed, is achieved through application-level mechanisms such as
timers and retransmission.
Unlike IPv4, IPv6 does not include a “broadcast” address for sending packets to all hosts on
a local link. Instead, IPv6 uses multicast for this purpose: the all-nodes link-local
multicast address FF02::1 reaches every host on the link, giving the same effect as an IPv4
broadcast without requiring every host to process every frame.
IPv6 does not change how TCP and UDP request and deliver data compared with IPv4. The
practical differences are at the network interface, which gains additional capabilities
such as link-local addresses and support for the various “IPv6 transition mechanisms”.
The IPv6 specification also defines a method of bootstrapping or “initializing” hosts on a
network without a server: StateLess Address AutoConfiguration (SLAAC). A booting host sends
a router solicitation (RS) message to the all-routers link-local multicast address FF02::2;
a router answers with a router advertisement (RA) carrying the network prefix, and the host
combines that prefix with its own interface identifier to form an address. The related
solicited-node multicast address, FF02::1:FFXX:XXXX (where XX:XXXX is the low 24 bits of the
unicast address), is used by Neighbor Discovery to resolve addresses and to check for
duplicates before an address is put into use. Where an administrator needs tighter control
over assignments, or needs to hand out options SLAAC cannot carry, stateful DHCPv6 can be
used on the same link instead of, or alongside, SLAAC.
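The solicited-node address is derived mechanically, so it is easy to sketch: take the
ff02::1:ff00:0/104 prefix and copy in the last 24 bits of the unicast address. A minimal
Python illustration (the unicast address is an example from the documentation range):

import ipaddress

def solicited_node(unicast: str) -> ipaddress.IPv6Address:
    """Derive the solicited-node multicast address used by Neighbor Discovery."""
    addr = ipaddress.IPv6Address(unicast)
    low24 = int(addr) & 0xFFFFFF                  # keep the last 24 bits
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::4aa:12ff"))       # ff02::1:ffaa:12ff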
To implement these features, IPv6 leans on a set of network-layer companion protocols rather
than on any upper-layer help. Chief among them is ICMPv6, described next; unlike ICMP in
IPv4, it is not optional, and a firewall that blocks it outright will break basic
connectivity.
IPv6 is accompanied by the Internet Control Message Protocol for IPv6 (ICMPv6). ICMPv6
messages are used to send and receive error reports about problems with network connections,
and core functions such as Neighbor Discovery run over ICMPv6 messages. This allows routers
and hosts to learn about their neighbors without the overhead of an upper-layer protocol
such as TCP or UDP.
ICMPv6 is used to verify reachability and address configuration and to deliver error
messages, in particular for addressing and routing related problems. ICMPv6 messages are
carried directly in IPv6 packets, and the IANA maintains the registry of ICMPv6 types and
their purposes.
IPv6 prefixes reach end users through their Internet service providers. A customer
connection receives at minimum a /64 (enough for a single subnet), and providers commonly
delegate a /56 or /48 so that the site can run several subnets; subnets within an end user’s
network do not overlap.
Connectivity for an entire organization can be provided under a single routing domain, with
the organization’s autonomous system (AS) number identifying it to the rest of the Internet.
If the organization changes providers, its prefixes may be renumbered while its internal
subnet structure stays the same.
There are three types of IPv6 unicast addresses in everyday use: the global unicast address
(GUA), used by any device that must be reachable across the Internet; the link-local address
(LLA), which every IPv6 interface configures automatically; and the unique local address
(ULA), the private address space described earlier. A GUA is a globally unique IPv6 address:
every device that is directly reachable from the Internet uses one, and it must be unique
across the entire IPv6 Internet (a device can use one or more interfaces, but it has at
least one such address). The interface identifier portion can be generated from the
interface’s IEEE 802 MAC address using the Modified EUI-64 procedure, or chosen randomly for
privacy.
An LLA is present on every interface of every IPv6 device. It always comes from the prefix
fe80::/10, is valid only on its own link, and is used for neighbor discovery, routing
protocol exchanges and other on-link housekeeping, which is why it exists even on networks
that have no global connectivity at all.
A ULA (from fc00::/7) is not unique across the entire IPv6 Internet, but it is intended to
be unique in practice within and between cooperating sites: the prefix includes a randomly
generated 40-bit component precisely so that two organizations that later interconnect are
unlikely to collide. ULAs are used for internal traffic that should never cross the public
Internet.
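The Modified EUI-64 procedure mentioned above is simple enough to show directly: split the
48-bit MAC in half, insert the bytes ff:fe between the halves, and flip the universal/local
bit of the first byte. A sketch (the MAC address is a made-up example):

def eui64_interface_id(mac: str) -> str:
    """Build a Modified EUI-64 interface ID from a 48-bit MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local (U/L) bit
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    groups = [f"{full[i] << 8 | full[i + 1]:04x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# Prepending fe80:: to this ID gives the interface's link-local address.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))      # 021a:2bff:fe3c:4d5e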
IPv6 relies on the Neighbor Discovery Protocol (NDP), defined in RFC 4861. NDP is a set of
ICMPv6 messages (router solicitation, router advertisement, neighbor solicitation, neighbor
advertisement and redirect) that together replace IPv4’s ARP and router discovery. Routers
use unsolicited router advertisements to announce their presence and the prefixes in use on
a link, and hosts use neighbor solicitations to resolve a neighbor’s link-layer address and
to detect duplicate addresses. The IPv6 header itself also includes a traffic class field,
which is similar but not identical to the Type of Service (ToS) field in IPv4, and the IPv4
TTL field has been renamed “hop limit” to reflect what it actually counts.
A key feature of NDP is that routers advertise the prefixes reachable on a link. An IPv6
host builds its view of the network from the Prefix Information Options (PIOs) included in
the router advertisements it receives. This mechanism is called “prefix discovery”, and it
was designed to remove the need for stateful address configuration and to allow automatic
renumbering of hosts. The scope of prefix discovery is the local link: a host learns which
prefixes are on-link and which router to use for everything else, but it does not learn the
complete set of routes inside its autonomous system; that remains the routers’ job. Prefix
discovery works wherever the routers on a link send router advertisements, which nearly all
IPv6 routers do.
The IPv6 specification also defines mechanisms for address management across multiple
administrative domains, some more restrictive than their IPv4 counterparts. One of the goals
of these mechanisms is to facilitate address conservation and clean aggregation.
Chapter 7:
DNS Overview
The Domain Name System (DNS) is a hierarchical naming system that maps human-readable names
to the numeric IP addresses machines actually use, so that people can remember names instead
of numbers.
Some well-known public DNS providers are OpenNIC and Hurricane Electric. These providers
have servers in many countries around the world, allowing people in different geographical
locations to reach one another through the Internet using easy-to-remember names such as
example.com under top-level domains like .com, .org and .net.
The domain name system simplifies the use of web addresses: every host on a network already
has a numeric IP address, and DNS is the inter-network protocol used to convert names into
those IP addresses. DNS allows a machine in one network to find the location of a resource
on a different network.
The Domain Name System increases communication connectivity by providing easy access to
resources that would otherwise be difficult to reach. For example, web pages can be found and
read regardless of the location of the user accessing them or the host on which they reside due to
DNS.
Each computer on a network has an IP address, but often users do not know this value when they
are conducting searches. This is a problem that DNS resolves by correlating a host name to an IP
address.
When a user types a website’s name, the DNS resolver on their computer looks up the IP
address for them. This preserves the usability of names for human users, who can speak and
read in words rather than numbers; without DNS, users would have to memorize numeric IP
addresses for every service they use.
There are some drawbacks to relying on DNS as well. Since traffic follows whatever answers
the name resolution system returns, an attacker who can spoof or hijack DNS responses may
redirect traffic through their own machine in order to gain access to private documents and
other sensitive information.
A DNS server is a network service that accepts and processes requests to resolve domain
names into IP addresses. A DNS server maintains a database of these records. It also
provides services allowing other hosts on the same network to find that information without
any prior knowledge of where the authoritative data lives.
Domain names are grouped into hierarchies, called domains. Top-level domains sit at the top
of the hierarchy, and subdomains are nested under them. For example, “com” is a top-level
domain (TLD), “example.com” is a second-level domain (SLD), and “www.example.com” is a
third-level domain, and so on.
Domain names are distributed in a hierarchical manner across the Internet. The Domain Name
System is a distributed database, with most of the traffic handled by authoritative name
servers operated by domain registrars and various other organizations, coordinated by
ICANN. Servers higher in the hierarchy do not store every record themselves; instead, they
store referrals pointing to the name servers below them that do.
DNS entries associate host or other resource records with IP addresses—either IPv4 or IPv6
addresses. In addition, DNS may be used to store other types of resource records such as MX
records for mail exchange, NS records for name servers, etc.
In a typical consumer network, the local network’s DHCP server automatically assigns IP
addresses to each device connected to the network. A common scenario for this is a home router
used for Internet access and Wi-Fi for Internet-enabled devices. The router’s firmware uses a set
of name servers (typically operated by the ISP) and will perform all of the required DNS lookups
locally, translating human readable names into IP addresses.
Typically, desktop computers get their DNS servers from their TCP/IP settings (usually via
DHCP), while mobile devices often get theirs from their cell provider. These two can be
configured independently, but nothing prevents a single DNS server from serving both desktop
and mobile devices; a home router that provides both wired and Wi-Fi access normally hands
every client the same DNS settings.
The Domain Name System is the Internet’s distributed directory service. It is a hierarchical
naming system built on top of the Internet Protocol that maps easy-to-remember,
human-readable names to numeric addresses. It serves as a location guide that helps users
find network resources using names like example.com.
The intent of the Domain Name System is to help users find files and services without having to
remember all the unusual numeric Internet addresses of computers and servers, which are also
known as IP addresses. For this reason, DNS servers are worldwide and connect to each other
using IP protocols in order to identify Internet resources by name and resolve these names into
their respective IP addresses, which computers can use for direct communication with each other.
This means that, instead of remembering the numeric IP address for www.example.com
(121.123.123.123), a person can simply type “www.example.com” into their browser address
box, and the DNS server will translate that name into the correct IP address for
www.example.com (121.123.123.123).
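You can watch this name-to-address translation happen with a few lines of Python, using the
resolver built into the standard socket module (the names queried are examples, and the
answers you see depend on your configured DNS servers):

import socket

for name in ("www.example.com", "example.org"):
    try:
        # getaddrinfo asks the system's configured DNS servers to resolve the name.
        infos = socket.getaddrinfo(name, None)
        addresses = sorted({info[4][0] for info in infos})
        print(f"{name} -> {', '.join(addresses)}")
    except socket.gaierror as err:
        print(f"{name}: lookup failed ({err})")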
This reliance is also why DNS is a popular target: an attacker who can tamper with name
resolution can silently redirect users. As a result, larger organizations often prefer to
run their own internal DNS servers rather than depending entirely on the services of a third
party.
Authoritative name servers are those that hold the authority for managing the contents of a
domain zone; they are configured with information provided through the domain registrar. The
authoritative servers have ultimate control over which names exist within the domain and
what addresses they resolve to. A domain normally has several authoritative servers for
redundancy, but one of them, the primary, holds the master copy of the zone and the
secondaries replicate from it; otherwise there would be duplicated maintenance efforts and
possibly conflicting answers about which data was authoritative (see Domain Name System
Security Extensions).
Uniform Resource Locators (URLs) are a common means to identify resources on the World Wide
Web. A URL is a sequence of characters that uniquely identifies a resource so that a client,
such as a web browser or proxy server, can direct a request for that resource to the right
server system. A URL is most often thought of as the address of network data at a certain
location, although it can also refer to collections of dynamically generated data served by
one or more servers.
The basic mechanism behind URLs is one-way referencing: a link carries everything needed to
request the resource, and the client simply opens an outbound connection to the named host
to fetch it. The Uniform Resource Locator is defined in RFC 1738.
A URL also underlies the hypertext link, which is typically expressed as a Uniform Resource
Identifier (URI). Described in URI terms, a hypertext link is composed of a destination URI
and a relation to that destination, usually accompanied by an optional fragment identifier.
In other words, a hypertext link has three parts: the “destination” identifies where the
referenced resource lives, the “relation” tells what kind of relationship exists between the
two documents, and the optional “fragment identifier” points to a specific part of the
destination.
A Uniform Resource Name (URN) is a unique identifier for a set of resources, such as a set of
documents, images, or software. URNs may be composed of arbitrary strings (e.g.,
URN:DTISHA5), are case-sensitive, and may use domain names and Uniform Resource
Identifiers (URIs) as the basis for the names.
A Uniform Resource Locator (URL) is a sequence of characters used to identify resources on
the World Wide Web. A URL contains a scheme part that identifies how the resource is
accessed, such as “http”, “ftp”, “mailto” or others, followed by an authority part naming
the host. URLs are defined in RFC 3986.
URLs often encode further information, such as a path and query parameters, and function as
hyperlinks: if a user clicks on a URL, they are taken to the corresponding web page or site.
The “www” portion of the address is often dropped when users enter URLs into their browsers,
which can cause confusion: someone may think that www.example.com and example.com are
different websites, when in most configurations both names point to the same site.
A uniform resource locator (URL) is a compact string representation of the location and access
method for a resource available via the Internet.
Dynamic Host Configuration Protocol (DHCP)
DHCP stands for Dynamic Host Configuration Protocol and is the protocol used to assign IP
addresses, default routes and other settings to a device automatically. It’s also what can make
your home feel like “home.”
A major part of this is the DHCP server, which assigns IP addresses dynamically. When a new
computer or device connects, it receives an address on a lease, valid for a set period and
renewed while the device stays connected. When you turn off your computer or the lease
expires, the server reclaims the address and can assign it to another device. This saves you
from having to set static IPs on every device in your house so that each has one fixed
address on your network.
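Conceptually, the server is doing simple bookkeeping: hand out addresses from a pool and
remember who holds what. The toy sketch below illustrates only that bookkeeping; it is not a
real DHCP implementation (the real protocol runs a Discover/Offer/Request/Acknowledge
exchange over UDP ports 67 and 68), and every address shown is an example value:

import ipaddress

class ToyDhcpPool:
    """Toy address pool: illustrates leasing, not the real DHCP protocol."""
    def __init__(self, network: str, first_host: int = 100, last_host: int = 110):
        net = ipaddress.ip_network(network)
        self.free = [net.network_address + i for i in range(first_host, last_host + 1)]
        self.leases = {}                          # MAC address -> leased IP

    def lease(self, mac: str):
        if mac in self.leases:                    # client already holds a lease
            return self.leases[mac]
        ip = self.free.pop(0)                     # hand out the next free address
        self.leases[mac] = ip
        return ip

    def release(self, mac: str):
        self.free.append(self.leases.pop(mac))    # address returns to the pool

pool = ToyDhcpPool("192.168.1.0/24")
print(pool.lease("aa:bb:cc:dd:ee:01"))            # 192.168.1.100
print(pool.lease("aa:bb:cc:dd:ee:02"))            # 192.168.1.101
pool.release("aa:bb:cc:dd:ee:01")                 # .100 becomes reusable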
If you’re connecting a computer to your home network, or if you’re trying to set up a new
router and aren’t sure what the default gateway is (one of the settings the DHCP server
hands out), simply locate your router or modem, then look for a page in the router’s
settings called something like “DHCP.” It should show the router’s IP address, the address
pool it hands out and other important connection details.
At its most basic level, a DHCP server just gets your devices online by giving them IP
addresses. Before your computer can talk to a site like cnn.com, it needs an IP address of
its own; on your home network it gets that from the DHCP server in your router, and your
router in turn is addressed by your ISP. The same applies to computers on a LAN. Without a
DHCP server, every device would need a static IP address manually entered into it.
That’s where the client comes in. A DHCP client is any device (computer, printer, router)
that connects to a network and uses the DHCP service provided by the network’s DHCP server.
Note that clients will accept answers from whatever DHCP server responds, so a rogue server
on an untrusted network can hand out malicious settings. Clients usually request an IP
address and other configuration details from the DHCP server, but they’re not required to if
they are configured statically. If you’ve ever set a fixed address on a corporate machine or
a mobile hotspot, you know a device can come online without asking DHCP for anything.
NTP
Network Time Protocol (NTP) is a protocol that synchronizes the clocks of computers over a
packet-switched network.
A computer’s clock comprises two main components: 1) an oscillator, typically a quartz
crystal running at 32,768 Hz, which generates the raw timing signal; and 2) software that
counts those ticks and disciplines the result against an outside reference, which is where
the Network Time Protocol (NTP) comes in. The most accurate references are “stratum one”
servers: machines attached directly to an atomic clock, GPS receiver or similar source.
Lower-stratum servers synchronize to them and in turn provide accurate timekeeping for users
on their local area networks.
NTP uses a hierarchical system of clock strata, with the most precise time available at the
top and each level below synchronizing to the one above it. Stratum 0 is the reference
hardware itself, stratum 1 the servers wired directly to it, stratum 2 the servers that
query stratum 1 over the network, and so on. A computer’s local clock is synchronized to the
next level up in the hierarchy once network errors are accounted for; in this way, systems
across the Internet are able to agree on the time to within small fractions of a second.
The NTP packet itself is compact. Its header begins with three small fields packed into the
first byte (the leap indicator, the version number and the mode: client, server, broadcast
and so on), followed by the stratum, poll interval and precision fields. After these come
the root delay and root dispersion values, a reference identifier, and four timestamps
(reference, originate, receive and transmit) that the client and server use to compute
offset and delay. The same layout is used for requests and responses; fields such as the
receive and transmit timestamps are meaningful in a response but left for the server to fill
in when the packet is a request.
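A minimal SNTP-style query is small enough to sketch directly. The code below builds the
48-byte client packet (first byte 0x1B: leap indicator 0, version 3, mode 3 for client),
sends it over UDP port 123, and decodes the seconds field of the server’s transmit
timestamp; the constant 2208988800 converts from the NTP epoch (1900) to the Unix epoch
(1970). The server name is an example public pool:

import socket, struct, time

NTP_SERVER = "pool.ntp.org"        # example public server pool
NTP_TO_UNIX = 2208988800           # seconds between 1900-01-01 and 1970-01-01

# 48-byte request: first byte 0x1B = LI 0, version 3, mode 3 (client)
packet = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))
    reply, _ = sock.recvfrom(48)

# Transmit timestamp: the seconds field is bytes 40-43 of the reply.
ntp_seconds = struct.unpack("!I", reply[40:44])[0]
print("server time:", time.ctime(ntp_seconds - NTP_TO_UNIX))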
The NTP specification assumes every participating system has a local clock to discipline.
First-level (stratum 1) servers are synchronized to an accurate reference such as an atomic
clock, and may then be used by the clients below them in the hierarchy.
It should be noted that NTP has evolved through several protocol versions (NTPv1 through the
current NTPv4), along with the simplified SNTP variant used by lightweight clients. All
remain broadly compatible with the original design, first standardized by the IETF in the
mid-1980s, though each revision refined the packet format, the algorithms or both, and
implementations often add extensions to improve performance or support special needs.
The NTP protocol (the Network Time Protocol) is a widely used Internet protocol for
synchronizing computer clocks. Its 64-bit timestamps devote 32 bits to the fraction of a
second, so the format can express time to a resolution of well under a nanosecond, though
practical accuracy is limited by network conditions rather than by the format. NTP was
developed by David Mills at the University of Delaware in the 1980s and has been adopted by
organizations worldwide; as applications such as real-time communications appeared, where
very small relative delays matter, the protocol’s precision and algorithms were
progressively refined.
NTP was designed to let organizations keep their system clocks synchronized to within a few
tens of milliseconds over long periods, using a large network of time servers operating at
well-separated physical locations. The protocol is intended for client-server environments
where the server can provide an accurate and reliable clock signal and the clients need only
a modest local clock to discipline. NTP can also be used on computer clusters with
non-uniform clocks, or in any other situation that requires moderate accuracy and can
tolerate occasional packet loss.
The mechanism by which an NTP client adjusts its local clock is comparison: the client
exchanges timestamped packets with the server, computes how far its clock is off and how
long the packets spent in transit, and then slews (or, for large errors, steps) its clock
toward the server’s time.
A server’s own clock must in turn be synchronized to a better reference; it should not serve
time unless it is locked to a more accurate source upstream. A server synchronized directly
to a reference clock operates at “stratum 1”, and each system below it synchronizes to its
chosen time source, either by polling it or, on local networks, by listening to periodic
broadcast packets.
Every participating system exhibits slightly different characteristics because of different
network paths, temperature and hardware variances, so the protocol must correct for these
differences. This is done with round-trip delay and offset calculations on every exchange,
plus longer-term monitoring of the local clock’s frequency error.
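The core of the correction comes from four timestamps: t1 (client sends the request), t2
(server receives it), t3 (server sends the reply) and t4 (client receives it). A small
sketch of the standard offset and delay formulas, with made-up example timestamps:

def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Standard NTP calculations from the four exchange timestamps."""
    delay  = (t4 - t1) - (t3 - t2)          # time actually spent on the wire
    offset = ((t2 - t1) + (t3 - t4)) / 2    # how far the client clock is off
    return offset, delay

# Example: client runs 0.5 s behind the server, with 40 ms symmetric delay.
offset, delay = ntp_offset_and_delay(100.000, 100.520, 100.530, 100.050)
print(f"offset={offset:+.3f}s delay={delay:.3f}s")   # offset=+0.500s delay=0.040s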
Generally, network time synchronization protocols allow a computer to synchronize its
internal clock to the time maintained by a remote server, based on periodic exchanges of
timestamps with that server. The most complete such protocol is NTP version 4 (NTPv4); the
Simple Network Time Protocol (SNTP) is a stripped-down subset of it for clients that do not
need the full filtering algorithms.
Parts of NTP’s job can be assisted directly by hardware. Some server platforms and network
interface cards provide hardware timestamping of packets, which removes much of the jitter
that software timestamping introduces, and the operating system must be configured (or, on
some platforms, rebuilt) to take advantage of it. Where no special hardware exists, a
software implementation is used: the Windows and Linux operating systems both ship NTP
clients and can use any reachable NTP server as a reference clock, which is sufficient for
ordinary applications such as audio playback and games.
In certain special systems, where everything is synchronized to one or more master clocks
before communicating with other nodes, the protocol may not be necessary or useful. This is
the case for dedicated master-clock installations, special-purpose systems such as
astronomical telescopes, and systems designed to operate independently of any other system
(e.g., spacecraft).
Storage Area Networks
A Storage Area Network (SAN) is a dedicated network that gives a group of servers
block-level access to a shared pool of storage resources, such as disk arrays or tape
libraries.
These systems offer increased performance and capacity over standard server-attached
arrays. SANs are mainly used in applications that require a high-availability, low-latency
environment, such as transactional databases, data warehouses and e-commerce applications.
SANs also make scaling easier for large organizations with more than one location that need
to perform regular backups or mirror operations between two sites when primary storage is
full.
Aside from the convenience of shared storage, the most important aspect of a traditional SAN
is its use of Fibre Channel to connect servers to storage. Fibre Channel, the networking
standard for SANs, supports point-to-point, arbitrated loop (ring) and switched fabric
topologies, with switched fabric being the dominant model in today’s industry.
Architecture
In a storage area network, storage resources sit outside the individual servers, in shared
arrays, and the servers reach them over Fibre Channel interfaces. The terms “storage area
network” and “SAN” typically refer to a system that provides some form of shared,
block-level storage among multiple computers.
Standards such as Fibre Channel and iSCSI, which complement each other for scalable
performance and greater bandwidth, are used to build SANs.
Some server vendors also support Fibre Channel over Ethernet (FCoE), which provides
Fibre-Channel-class performance on existing 10 Gigabit (and faster) Ethernet networks with
less effort and expense on head-end aggregation equipment, along with converged network
adapters (CNAs) that carry both ordinary Ethernet and storage traffic on a single card.
Communication between servers and the storage resources must be reliable and secure. Some of the
protocols used to communicate from a server to a network storage device, such as native Fibre
Channel, are confined to a dedicated storage fabric, while others, such as iSCSI, are routed
across the network using standard protocols such as TCP/IP.
Local area networks (LANs) are typically provided in small business environments for desktop
computers and thin clients, for file and print sharing, and for basic messaging services such as
electronic mail (email). For smaller deployments, IP-based storage carried over an ordinary LAN
(for example, iSCSI) is often adequate to meet performance needs, while Fibre Channel over
optical fiber remains an efficient means of sending storage traffic over longer distances.
SANs have been built using a variety of technologies and protocols. Fibre Channel was
standardized by ANSI in the early 1990s and has since become the primary technology for SANs.
In the early days of SAN development, many companies in the storage industry made a concerted
effort to differentiate their products from those of other storage vendors by developing
proprietary protocols and interfaces for their storage systems.
Several storage vendors were early adopters of Fibre Channel, as well as iSCSI and Fibre Channel
over Ethernet. SANs built on proprietary protocols or interfaces are effectively “closed”
systems: they are not generally interoperable with equipment from other vendors, which can be
frustrating to users who are looking for a common standard. An “open” SAN, by contrast, is built
on common standards so that equipment from many vendors can interoperate.
Common standards include Fibre Channel over Ethernet (FCoE) and Fibre Channel over TCP/IP (FCIP).
FC over Ethernet (FCoE) is an extension to the Ethernet protocol that transmits Fibre Channel
frames over Ethernet networks. FCoE makes it possible for storage traffic to traverse the
Ethernet network at full speed, using existing switches and servers with their high-speed
interfaces such as 10GE and 40GE, without disruption. One drawback is that FCoE runs directly on
Ethernet rather than on IP, so unlike iSCSI it cannot be routed across IP networks.
Fibre Channel over TCP/IP (FCIP) is a tunneling protocol that links Fibre Channel fabrics across
an IP network. FCIP traffic is encapsulated in a TCP/IP payload before transmission over the IP
network, and it is typically used to extend a Fibre Channel fabric between sites over long
distances.
iSCSI (Internet Small Computer Systems Interface) is a storage networking protocol that provides
block-oriented access to remote storage devices. In iSCSI, SCSI commands are encapsulated in TCP
packets and sent to the target over an IP-based network such as standard Ethernet. It is
supported on Windows, Linux and UNIX computers. A software initiator (such as the built-in
Microsoft iSCSI Initiator) makes it possible for a computer running Microsoft Windows to
communicate with a remote SCSI storage device via iSCSI.
ESX/ESXi Server - A bare-metal (type 1) hypervisor from VMware that virtualizes the
infrastructure of an entire server. These are commonly used in data centers, especially for
large clusters. ESXi makes each virtual machine look and act like a physical server while
achieving high performance and reliability through hardware virtualization. VMware has offered a
free edition of the ESXi hypervisor alongside paid, fully featured editions licensed as part of
vSphere.
The number of software-defined storage solutions available has grown rapidly over the years,
with more being announced every month.
The shared storage behind a SAN is usually implemented as one or more dedicated disk arrays.
Cloud Models
Cloud and Virtualization
Hypervisor
Virtualization is an abstraction of the operating system layer so that multiple operating systems
can be hosted on a single piece of hardware. Hypervisors are low-level programs that allow
multiple operating systems to run simultaneously on a single host.
These can be type I or type II. Type I is capable of running directly on the hardware and is also
called a native or embedded hypervisor. Type II runs on a host operating system. This is more
common for consumers - think VMWare Player.
Application Cells and Containers
Containers are similar, but they virtualize at the operating-system level: instances share the
host's kernel while keeping their user spaces separate from one another. This allows you to run
multiple instances of an application simultaneously with minimal overhead. They are useful for
running different environments in a simple and repeatable way (think Docker). A container
packages a complete runtime environment, including dependencies, libraries, binaries,
configuration files, and so on. It is like a virtual machine, but for applications instead of
operating systems.
VM Sprawl and VM Escape
Virtual machines are not without problems. If you don't manage them well, you will end up with a
proliferation of VMs known as VM sprawl: the uncontrolled spread and disorganization of virtual
machines. It is an easy problem to develop in large organizations, and because VMs do not have a
permanent physical location, it is difficult to track them all down.
VM escape occurs when attackers or malware break out of a VM and reach the hypervisor or
underlying operating system, and from there potentially other VMs. There are protections so that
a VM can only access the memory that belongs to it, but vulnerabilities exist and are still
being found.
Cloud storage
Cloud storage is storage provided over a network. A lot of people are familiar with cloud storage
through things like Apple’s iCloud. It enables better performance, availability, reliability,
scalability, and flexibility.
From a security perspective, encryption should be used to keep an organization’s data transfer to
cloud storage confidential.
Cloud deployment models
As cloud services grow, they have been grouped into different categories.
• SaaS: software as a service. This allows you to deliver software to end users from the cloud rather than having them download software—simple updates and integration.
• PaaS: platform as a service. This refers to offering an IT platform in the cloud. Good for scalable applications, it could work for something like a database service.
• IaaS: infrastructure as a service. Cloud-based systems that allow organizations to pay for scalable IT infrastructure instead of building their own data centers.
• Private: private clouds are resources for your organization only. This is more expensive but also has less risk of exposure.
• Public: when a cloud service is provided on a system that is open to public use. It has the least amount of security checks.
• Community: when multiple organizations share a cloud environment for a specific, shared purpose.
• Hybrid: a mix of community, private, and public. They are often segregated to protect confidential and private data from public/community use.
On the premises vs. Hosted vs. Cloud
On-premises is when the system physically resides in an organization's building. This can mean
virtual machines, services, storage, etc. This gives the organization more control, but it is
more expensive and harder to scale.
Hosted services mean that the services are hosted elsewhere, in a specific location. The cloud is
also hosted elsewhere, but resources can be provisioned and scaled on demand, with the platform
handling much of the management.
VDI/VDE
Virtual desktop infrastructure is the set of components needed to run a virtual desktop
environment. It permits someone to use any machine to access their information, which is hosted
elsewhere (not on that physical machine). This helps prevent data loss in the event of theft,
etc.
Cloud access security broker
Cloud Access Security Brokers (CASBs) are services that enforce security policies between your
cloud service and your clients. They let business customers know that they are using a cloud
service securely.
Security as a service
Finally, security as a service (sorry folks, the acronym SaaS is already taken ...) is the
outsourcing of security functions. A third-party vendor offers a wide range of security
specialties. This allows a company to have security protection without developing all these
resources internally. It’s easier for the business in terms of cost, flexibility, and scalability.
Chapter 8:
Networking Devices
Networking devices are electronic devices that communicate on a computer network. Computer
networks are groups of computers connected together to share data and resources. You may have
a network in your house that is shared among your computer, TV, tablet, gaming console and
mobile phone. Networking devices include switches, routers, wireless access points (APs),
firewalls, etc.
Networking devices interface with computers, servers, workstations and other networked
devices. They typically provide the following services:
• Interfaces that allow the user to access a computer network from the outside.
• Enabling the device to share data or resources with additional networked devices.
• Encrypted communications between two networked devices.
The primary function of a network is to provide communication between computers. An important
part of this process falls to the connecting devices, typically called networking devices
because they facilitate this connection between computers. The most common types are hubs and
switches, but there are many more out there waiting for your attention! Let's get started and
find out how these different device types can give you an edge in today's business world.
Hubs
A hub provides a single point where all traffic passes through before it continues on its way.
The hub follows a traditional design that is often cited as being a “dumb” device: it operates
at the physical layer and simply repeats whatever arrives on one port out of all of its other
ports, so every connected computer shares its resources, its bandwidth and a single collision
domain with all the others.
The most basic hubs are passive hubs, which simply pass the signal along; active hubs regenerate
the signal, and it is possible to find more advanced models with extra features.
Switch
A switch is often used as a means to connect multiple computers on a local area network, or
LAN. A switch can have anywhere from 4 ports to 48 ports and more, with 24- and 48-port models
being common. Some brands offer stackable 8-port switches, and most modern switches offer
gigabit capabilities.
One advantage over using a hub is the speed of data transfer. A hub just sends data out to all
ports at the same time, while a switch learns which MAC address is reachable through which port
and can then send data directly from one computer to another.
This is why it is possible to have a faster network with a switch than with a hub.
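To see the difference, here is a toy sketch in Python of the MAC-address learning a switch
performs; the MAC strings and port numbers are invented for illustration, and a real switch does
this in dedicated hardware at line rate.

    # Toy model of switch MAC learning: flood unknown destinations,
    # then forward out a single port once the destination has been seen.
    class LearningSwitch:
        def __init__(self, num_ports):
            self.num_ports = num_ports
            self.mac_table = {}  # MAC address -> port number

        def receive(self, in_port, src_mac, dst_mac):
            self.mac_table[src_mac] = in_port  # learn where the sender lives
            if dst_mac in self.mac_table:
                return [self.mac_table[dst_mac]]  # known: forward to one port
            return [p for p in range(self.num_ports) if p != in_port]  # flood

    switch = LearningSwitch(num_ports=4)
    print(switch.receive(0, "aa:aa", "bb:bb"))  # unknown dst -> flood [1, 2, 3]
    print(switch.receive(1, "bb:bb", "aa:aa"))  # aa:aa already learned -> [0]
    print(switch.receive(0, "aa:aa", "bb:bb"))  # bb:bb now learned -> [1]

A hub, by contrast, behaves like the flooding branch on every single frame.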
Modem
Modems vary widely, as every vendor competes to build the best modem possible. Without any
further ado, let's discuss modems and what you should look for when purchasing one.
1. Is it DOCSIS 3.0 (or, better, DOCSIS 3.1)?
2. Does it have Gigabit Ethernet ports?
3. Does it have USB 2.0 or 3.0 ports?
A modem is a device that accepts an internet service provider's signal and connects it to a
computer, printer, or other hardware that uses the signal for communications on a network such
as the internet or an office LAN (Local Area Network). Modems connect to, and interact with, an
internet provider such as a cable company, a telephone company (such as Verizon or AT&T) or
another private company. Once the modem is connected and provisioned, it establishes the link
between your equipment and your internet service provider.
Internet service providers offer the connection between their system and a customer's
networking hardware. A modem is used to deliver broadband internet connections to customers such
as cable and DSL subscribers who want access to the internet in their home or office. Without a
modem, it would be impossible to get online; Internet Service Providers (ISPs) require that
customers have a modem to use their service at home or at work.
Modem technology has advanced considerably in recent years, and although modems have many uses,
they can also be very power hungry.
Modems typically offer an Ethernet port, and often a USB port, for connecting to your computer
or router; older modems sometimes connected only over USB or serial interfaces. A related device,
the set-top box (STB), connects a television directly to the provider's video service when no
computer is attached.
DOCSIS (Data Over Cable Service Interface Specification) modems are used with cable television
providers. DOCSIS 3.1 is the current standard and is backward compatible with DOCSIS 3.0
networks, so a 3.1 modem should work with any provider that supports either version. If you are
only interested in modest internet speeds, an older DOCSIS 3.0 modem may suffice, and cable
internet service can be purchased on its own, without television or telephone service.
The most common cable modem speed was once 6 Mbps, which was at one time considered broadband
and enough to avoid bottlenecks when using VoIP services like Vonage and Skype alongside
Internet browsing or downloads. However, today's high-speed internet applications require
significantly more bandwidth, which is why faster connections such as 25, 50, 100 and 200 Mbps,
and even gigabit tiers, are now widely available from some of the biggest providers.
A DOCSIS 3.0 modem is backward compatible with DOCSIS 1.0, 1.1, and 2.0 networks, but when it
falls back to those earlier standards it is limited to their much lower data rates. DOCSIS 3.0
itself bonds multiple channels together to reach speeds in the hundreds of megabits per second,
and DOCSIS 3.1 extends this to multi-gigabit speeds.
A DOCSIS 2.0 modem is still serviceable on many cable networks, but it has been superseded by
DOCSIS 3.0 and DOCSIS 3.1 modems, which are faster and give more flexibility in terms of
connection options. The primary advantage of DOCSIS 2.0 over DOCSIS 1.x was its significantly
higher upstream data rate. Note that all DOCSIS versions run over the coaxial cable that cable
providers install, not over the unshielded twisted pair (UTP) wiring used for Ethernet inside
most homes.
A wireless or Wi-Fi modem provides IP packets to devices using the IEEE 802.11 family of
standards (802.11a, 802.11b, 802.11g, 802.11n, etc.). Next-generation wireless modems are
available that provide Gigabit Ethernet, cable internet access, and voice calling (VoIP) via a
wireless connection.
A 3G modem provides internet access through mobile cellular networks, on the third generation
(3G) of mobile communication technology.
A 4G modem provides broadband internet access via mobile cellular networks on the fourth
generation (4G) of mobile communication technology.
Basically, modern modems support all kinds of connections: Ethernet, USB 2.0, or 3.0 ports
from where you can connect your computer to it. As long as you have a compatible port on your
computer for your modem to plug into, your computer should work with that modem without any
issue at all.
Many cable company modems are actually gateways that combine the modem with a wireless router,
and some providers integrate their video service so that subscribers can watch live and recorded
TV programming on demand, browse program guide information, and access premium television
services such as HBO or pay-per-view events from computers and wireless handheld devices. Cable
companies began offering services of this kind in the late 1990s with the introduction of IPTV
and video on demand (VOD), which enabled TV viewing via computers and mobile devices.
Basic router
A router, also referred to as an internetworking device, allows an Internet connection to be
shared among multiple devices in the network. Routers are available as either wireless or
non-wireless (wired) types. Wireless models can be used where there is no physical access to run
cable, and thus are often seen as providing more flexibility in setup. Wired routers are used in
conjunction with cables that provide a physical connection, and can therefore move information
around the network faster than wireless models. In either case, each device connected to the
router is assigned its own IP address, usually automatically via DHCP. Wireless routers are
typically more expensive than non-wireless models but provide benefits such as mobility from one
room to another along with convenient access both to the Internet and to other devices in the
network.
Network cards
Network Cards are the way that a computer communicates with other devices on your network
(other computers, printers, switches, etc). Formally known as network interface cards (NICs),
they were designed from the start for data communication between machines, and by the 1990s
networked PCs were commonplace. Network cards can be wired or wireless, and each has its pros
and cons.
What is a Network Card?
- A network card is an expansion card that can be added to the motherboard of certain types of computers. These cards allow the computer's operating system software to communicate with other devices on a network.
- Since the introduction of Ethernet, network cards have become standard equipment for data communication in nearly every computer.
- Wireless network cards are those that connect to a network and allow data communication between devices through the use of radio waves.
How do Network Cards Work?
- A network card contains circuitry that allows it to communicate with other devices connected to it on a network.
- Network cards also contain circuit boards with chips on them. These chips hold the card's firmware, which works together with the driver in the computer's operating system to control how the card communicates with other devices on the network.
- The chip in a network card also stores its configuration details, including its hardware (MAC) address.
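As a small illustration of that hardware identity, the Python sketch below reads the local
machine's MAC address using only the standard library; note that uuid.getnode() can fall back to
a random value on systems where no MAC address is found.

    # Read the MAC address that identifies this machine's primary NIC.
    import uuid

    node = uuid.getnode()  # 48-bit hardware address as an integer
    mac = ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))
    print("NIC hardware (MAC) address:", mac)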
What to Know About Network Cards
- Wired network cards connect to a network over cabling, without requiring wireless communication. Wireless network cards allow wireless communication with other devices and use radio waves as opposed to wires.
- Network cards can be either physical cards that are plugged directly into the computer's motherboard, or external adapters connected to the computer through a separate device such as a USB key.
- In order for a network card to work, the operating system software must recognize it and load a matching driver for its chipset.
- The network card acts as the network interface controller (NIC), connecting and sending data through the cabling to another device.
- Computer network cards are used in all types of networks. LANs, WANs, and MANs all use network cards as their main data transfer device.
What is a Network Card Used For?
- Network cards allow computers on a LAN to communicate with each other by allowing them to receive and send data from one computer to the other. This can be done through wires or wirelessly depending on the type of card you have installed in your computer.
- Network cards tell the operating system software how to send and receive information to and from another computer on the network.
- The configuration settings on a network card tell it how to send out its data and where to send it.
- The most common type of network card is the Ethernet card. Ethernet cards come in 10 Mbit/s, 100 Mbit/s, and 1000 Mbit/s (gigabit) transfer speeds, and most modern cards support gigabit operation. Wireless cards, by contrast, connect automatically between devices using radio waves. Both kinds are widely used, though wired cards remain more common in fixed installations.
Wireless access point
Many homes, stores, schools and other buildings have wireless routers installed. These routers
allow users to connect wirelessly to the internet via the router's antennas. They are typically
set up by a technician or by the users themselves, and they need a wired connection back to the
modem. Wireless networks are often cheaper to extend because they do not require as much cabling
infrastructure as wired connections. It is also possible for people to configure their own
wireless network with a computer that has both Ethernet and Wi-Fi components, turning a laptop
or smartphone into a private hotspot that can be shared with other devices if needed.
Instead of using a wired connection that would require a network port to be available in every
room, the wireless router broadcasts a signal that any computer within range can connect to.
Some people are barely aware of their wireless router because it is tucked out of sight. A
wireless access point can be attached to the main router to help share the internet connection
so that multiple computers can connect while keeping interference down. A wireless access point
(AP) eliminates the need to cable every device point-to-point with Ethernet, although a
congested wireless network may struggle to provide sufficient bandwidth for latency-sensitive
applications such as VoIP, gaming or HD video streaming.
Access points are also used to extend coverage of an existing wireless network, for example if
the signal in a building is not strong enough to cover all areas. Access points work by connecting
to a main router or access point via radio waves, which then allows them to broadcast a Wi-Fi
signal. This eliminates the need for long Ethernet cables running through multiple floors or
offices and allows new devices to connect wirelessly without having to install new network
ports.
A wireless access point is a small device with an antenna attached. It is connected by Ethernet
cable and acts as a relay between the router and computers, serving as a central node between
them all. A wireless access point may be installed permanently (often mounted high on a wall, on
a ceiling, or near a window for better coverage), or it may be a computer placed in an existing
wired network that acts as a bridge to connect another wireless network.
A wireless access point acts as a bridge between the wired network and wireless clients such as
laptops, cell phones, and other devices. In an enterprise network, an AP connects the server
side of the network with laptop computers working within it and provides connectivity between
all of these devices. In essence, the AP receives data from one or several wireless clients and
forwards it onto the wired side of your network, and vice versa. The AP does not route traffic
the way a router does; for management purposes it is simply assigned an IP address within the
range of addresses used on the network it connects to.
Wireless access points communicate with client devices by radio, usually in the 2.4 GHz or
5 GHz bands, and connect back to a wireless router. The wireless router will typically be
connected to a DSL or cable modem for the internet connection. The router then provides a
network connection for wireless devices such as laptops, smartphones, tablets and gaming
consoles.
A wireless access point is used to provide internet access to any device within the local area
network that is not connected by the Ethernet port of the LAN switch. It serves as a gateway
between wired and wireless networks. It allows wireline users (who are connected to a wired
network) and wireless users (who are connected to a wireless network) to communicate with
each other without purchasing a costly router device.
Wireless access points may be configured as stand-alone units or may be built into routers or
switches with multiple ports, making them simpler to set up and deploy. Most access points
support security features such as WPA and WPA2, which are far more secure than the obsolete WEP
standard.
Wireless network access points do not replace a wired network. They merely provide additional
coverage where wired drops are unavailable or where extending cabling would be too expensive.
Access points are mainly used in large office buildings, conference rooms, hotels and business
parks to extend a wired network throughout large areas. Wireless APs can also provide more
robust coverage for smaller networks.
An access point is also called a wireless bridge, wireless gateway, client/server bridge, or
dedicated wireless station. Most newer computers have built-in wireless adapters, and most
wireless routers include a built-in access point.
Chapter 9:
Protocols and Routing
Implementing Secure Protocols
Protocols are a defined set of rules that allow different components to have a “common
language” for exchanging commands and data.
SSH
- Secure Shell
What it is: This is an encrypted, remote terminal connection program used to make remote
connections to a server.
Use cases: remote server access
Relevant ports: 22
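As a quick illustration of the protocol, the Python sketch below opens a TCP connection to port
22 and reads the identification banner an SSH server sends immediately on connect (for example
“SSH-2.0-OpenSSH_9.6”). The host name is a placeholder, and the sketch only reads the banner; it
does not authenticate.

    # Grab the identification banner an SSH server sends on connect.
    import socket

    def ssh_banner(host, port=22, timeout=5.0):
        with socket.create_connection((host, port), timeout=timeout) as sock:
            return sock.recv(256).decode(errors="replace").strip()

    print(ssh_banner("example.com"))  # placeholder host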
S/MIME
- Secure/Multipurpose Internet Mail Extensions
What it is: MIME is the standard for transmitting binary data (attachments) by email; plain
MIME attachments are sent in clear text, so attackers can read them. S/MIME is the “standard for
public-key encryption and MIME data signing in emails.”
Use Cases: Email
SRTP
- Secure Real-time Transport Protocol
What it is: This is a network protocol used for the secure delivery of audio and video messages
over IP networks. It provides encryption, authentication, and message integrity, as well as replay
protection.
Use Cases: Voice and Video Streaming
LDAPS
What it is: LDAP is the main protocol used to transmit directory information. By default, that
traffic is transmitted insecurely. The secure version, which uses an SSL/TLS tunnel to connect
to LDAP services, is LDAP over SSL/TLS (LDAPS). Technically, LDAP can also use Simple
Authentication and Security Layer (SASL), but this is beyond the scope of the exam.
Use cases: directory services
Relevant ports: 636 (plain LDAP uses 389).
SFTP
What it is: This is file transfer carried over an SSH channel (the SSH File Transfer Protocol).
Use cases: file transfer
Relevant ports: 22
SNMPv3
- Simple Network Management Protocol
What it is: This is a standard for managing and monitoring devices on IP-based networks.
Versions 1 and 2 are considered insecure. Version 3 was developed to address these security
vulnerabilities.
Use cases: network device monitoring and management
Relevant ports: 161 and 162.
SSL/TLS
What it is: Secure Sockets Layer (SSL) is “an application of cryptographic technology developed
for transport layer protocols on the Web.” SSL has been superseded by TLS, but people continue
to call it SSL / TLS (or use the names interchangeably).
Use cases: protection of other protocols (for example, HTTP -> HTTPS)
Relevant ports: no single standard port is defined; it depends on the protocol being protected
(for example, HTTP over SSL/TLS is port 443).
HTTPS
- Hypertext Transfer Protocol Secure
What it is: This is used to transmit web traffic securely. HTTPS is HTTP protected by SSL/TLS.
Use cases: web
Relevant ports: 443
Secure IMAP/POP
What it is: IMAP and POP are protocols for email servers. Secure IMAP/POP refers to POP3
and IMAP over an SSL / TLS session.
Use Cases: Email
Relevant ports: 995 for Secure POP3 and 993 for Secure IMAP.
NTP
- Network Time Protocol
What it is: This is the standard for time synchronization between servers. It has no security
features of its own, although you can use it in conjunction with a TLS tunnel.
Relevant ports: 123 (UDP)
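To tie the port numbers above together, here is a hedged Python sketch that checks TCP
reachability of several of the listed services. The target host is a placeholder, NTP is omitted
because it runs over UDP, and you should only probe hosts you are authorized to test.

    # Quick TCP reachability check for several well-known secure ports.
    import socket

    SERVICES = {"SSH": 22, "HTTPS": 443, "LDAPS": 636,
                "Secure IMAP": 993, "Secure POP3": 995}

    def is_open(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    host = "example.com"  # placeholder; scan only hosts you are allowed to test
    for name, port in SERVICES.items():
        state = "open" if is_open(host, port) else "closed/filtered"
        print(f"{name:12s} tcp/{port:<4d} {state}")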
VLANs Trunking and Hopping
A Layer 2 network is a network that operates at the Data Link layer of the OSI model. A VLAN
is an example of a Layer 2 network.
VLANs allow for logical separation so that traffic is restricted to only those devices and
computers within the VLAN. This provides security between VLANs and better performance within
each one. VLANs also help preserve bandwidth by limiting broadcast and multicast transmissions,
which stay confined to their own VLAN.
A VLAN is identified by an IEEE 802.1Q tag on frames that indicates which virtual network the
frame belongs to. Tagging provides traffic isolation between VLANs and allows a VLAN to extend
across layer 2 devices in different buildings and remote locations. VLANs can serve as a
security boundary, offering protection against broadcast storms, some network attacks, and rogue
management stations. Trunking allows multiple VLANs to be transported over a single physical
link such as fiber optic cable or copper wiring.
IEEE 802.1Q is the standard that defines how VLANs are implemented in an Ethernet network; the
priority bits carried inside the 802.1Q tag are commonly referred to as 802.1p. VLAN
implementations are based on, and specified in, the IEEE 802.1Q standard.
VLANs can be configured on access switches and routers, but they are not normally configured
directly on PCs or workstations.
VLANs provide a way to logically separate traffic by assigning a VLAN to each port of a switch.
“VLAN” is an abbreviation for Virtual Local Area Network; the tag that marks a frame's VLAN
membership is defined by the IEEE 802.1Q standard.
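To make the tag concrete, the Python sketch below builds an Ethernet header with an 802.1Q tag
inserted between the source MAC and the EtherType. The MAC addresses, VLAN ID and priority are
illustrative values, and the frame is only printed, not transmitted.

    # Build an Ethernet header carrying a 4-byte 802.1Q tag:
    # TPID 0x8100, then the TCI (3-bit priority, 1-bit DEI, 12-bit VLAN ID).
    import struct

    def dot1q_header(dst_mac, src_mac, vlan_id, priority=0, ethertype=0x0800):
        tci = (priority << 13) | (0 << 12) | (vlan_id & 0x0FFF)  # PCP | DEI | VID
        return (bytes.fromhex(dst_mac.replace(":", ""))
                + bytes.fromhex(src_mac.replace(":", ""))
                + struct.pack("!HHH", 0x8100, tci, ethertype))

    hdr = dot1q_header("ff:ff:ff:ff:ff:ff", "00:11:22:33:44:55",
                       vlan_id=10, priority=5)
    print(hdr.hex())  # ...8100a00a0800 -> tag for VLAN 10, priority 5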
A switch can have any of its physical interfaces configured as trunk ports, or just trunk links.
Trunk links are used to connect to other switches, over either copper or fiber optic cabling. A
trunk port carries frames for multiple VLANs by marking each frame with its 802.1Q tag.
The tagging protocol is normally used on interfaces connected to other switches, which are
called trunk interfaces. Switches can have several trunk interfaces in order to support VLANs
spanning multiple physical segments and components.
A trunk is a group of one or more links bundled together as a single logical link between two
network devices (usually between two switches), with both ends configured for trunking.
IEEE 802.1Q also carries quality-of-service information: the tag includes a 3-bit priority
field, commonly called 802.1p or Class of Service (CoS). Frames can be assigned a priority value
from 0 to 7, and switches process higher-priority frames ahead of lower-priority ones.
VLAN Trunking Protocol (VTP) is a Cisco protocol that provides fast, automatic configuration and
maintenance of VLANs across switches. VTP helps control the creation, deletion and renaming of
VLANs from a single VTP server across a VTP domain.
VTP works at layer 2 and does not itself provide security for the traffic on VLANs.
A trunk port is connected to another switch that also has trunk ports. It carries traffic for
multiple VLANs, typically between switches, or from switches to routers or servers. Switch port
types are either “access,” which carries untagged traffic for a single VLAN (one local device
per port), or “trunk,” which carries traffic for more than one VLAN via 802.1Q tagging. A trunk
will connect to other switches (or sometimes other infrastructure devices). A local device such
as a personal computer normally connects to an access port and never sees the 802.1Q tags. Trunk
ports are mainly used in environments where multiple switches exist.
VLANs are not configured on PCs (or workstations); they are configured on the router or switch
ports that connect homes, offices and other locations. These switches can have several trunk
interfaces for VLANs spanning multiple physical segments and components. VLAN membership is
normally controlled at the switch, although on some operating systems it is possible to set an
802.1Q VLAN ID in the settings of the IP-based network adapter.
Switches are used to create a virtual LAN. A virtual LAN permits network traffic to be segmented
logically without rewiring the physical network. This reduces network overhead and increases
available bandwidth by reducing the number of broadcasts and multicasts each device must
process, which can otherwise slow down data transfer.
A switch configured with VLANs divides one physical network into multiple logical networks with
separate address ranges. The physical network can be segregated into separate subnets by
assigning a unique IP subnet to each VLAN. Switches are manufactured in various form factors
depending on the number of ports they support, and may also be available in rack-mount, blade,
desktop or “stick” form factors.
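Here is a small sketch of that VLAN-to-subnet mapping, using Python's standard ipaddress module;
the subnet and host values are illustrative.

    # Each VLAN typically maps to its own IP subnet; traffic between
    # subnets must cross a router or layer 3 switch.
    import ipaddress

    vlans = {10: ipaddress.ip_network("10.0.10.0/24"),
             20: ipaddress.ip_network("10.0.20.0/24")}

    host_a = ipaddress.ip_address("10.0.10.25")  # a host on VLAN 10
    host_b = ipaddress.ip_address("10.0.20.25")  # a host on VLAN 20

    for vlan_id, net in vlans.items():
        print(f"VLAN {vlan_id} ({net}): A? {host_a in net}  B? {host_b in net}")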
Switches use “trunk ports” (also called trunks or cross-connect links) to connect multiple
switches together. Trunk ports are physical Ethernet ports connected to other switches, and the
switch must be configured with trunking enabled. Trunking is the ability of a switch to carry
multiple VLANs over its trunk connections simultaneously, allowing traffic for several VLANs to
share one physical link. For example, a switch connected to two other switches over two trunks
carries the traffic of every VLAN allowed on each of those trunks.
Trunk ports tag frames with 802.1Q tags by default when trunking is enabled, while access ports
carry untagged traffic and do not exchange VLAN information. The switch handles tagged frames
for each allowed VLAN arriving on its trunk ports. Each VLAN behaves as if it is a separate
network, with its own subnet; routing between those subnets requires a router or a layer 3
switch.
By default, all devices connected to a switch's ports are part of the same VLAN; however,
devices can be assigned to other VLANs through port-based assignment or 802.1Q tagging.
VLAN Hopping
In a network without VLANs, a conventional switch is incapable of distinguishing one group of
attached computers from another: every port shares a single broadcast domain, which hurts both
performance and security. What is needed is some way to segment traffic so that groups of users
can have their own subnet and communicate with each other without interfering with others, even
though they are all connected to the same switch.
If you're having trouble understanding why simple port-based segmentation may not meet your
specific needs, the walk-through below shows what happens when traffic has to cross VLAN
boundaries.
With the right equipment, a simple VLAN is sufficient to segment traffic along the right lines.
Such a setup is easy to understand and allows each computer on the same switch and VLAN to
communicate with the others. But it is not a good solution for an environment like our
hypothetical office, where two people who happen to be on different VLANs might need to
communicate with each other.
The workaround described here is a form of “VLAN hopping” built on proxy ARP.
The idea is simple. You're given two computers, a switch, and a router (or other networking
device). First you configure the router to do port trunking by giving its interface an 802.1Q
VLAN ID. You'll see why in a minute.
Here's how it works. Computer A sends out an ARP request asking for the MAC address of IP
address 10.1.1.10. The request reaches Router A, which finds that 10.1.1.10 is not in the same
IP network as 10.1.1.1 (the router's own address). Acting as a proxy, the router answers the ARP
request with “IP address 10.1.1.10 is at MAC address 0000.0000.0001.” The router can then send
an ICMP redirect back to computer A, telling it to send all traffic for 10.1.1.10 along a
different path. As a result, computer A will not transmit data frames to 10.1.1.10 directly.
Instead, it resolves the MAC address of the proxy-ARP server, 0000.0000.0002, and transmits
frames destined for 10.1.1.10 to that server.
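For illustration, the Python sketch below lays out the bytes of the ARP request just described,
reusing the example's addresses; the frame is only constructed and printed, never placed on the
wire.

    # Byte layout of an ARP request ("who has 10.1.1.10?").
    import socket
    import struct

    def arp_request(src_mac, src_ip, target_ip):
        eth = (b"\xff" * 6                      # broadcast destination MAC
               + src_mac
               + struct.pack("!H", 0x0806))     # EtherType: ARP
        arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)  # Ethernet/IPv4, opcode 1 = request
        arp += src_mac + socket.inet_aton(src_ip)        # sender MAC and IP
        arp += b"\x00" * 6 + socket.inet_aton(target_ip) # target MAC unknown
        return eth + arp

    frame = arp_request(bytes.fromhex("000000000001"), "10.1.1.1", "10.1.1.10")
    print(frame.hex())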
The proxy-ARP server receives each data frame and can deliver it to computer B directly, since
they are on the same VLAN. Traffic in the reverse direction works the same way: computer B
resolves 10.1.1.10 through the proxy-ARP server and transmits its frames there, and the server
relays them on to computer A, since it shares a VLAN with A as well.
This form of VLAN hopping is possible because of how ARP requests are handled by proxy-ARP
servers and how ICMP redirects are issued in response to them. It is not a method that would
work in the real world unless several contingencies were all present. At best, it would only
work if all of these things were true: the switch is configured so that an ARP request on one
VLAN causes an ICMP redirect on another VLAN; the routers on your network are configured to
route packets from one VLAN to another (known as “inter-VLAN routing”); and the proxy-ARP server
is running on a computer that sits on both VLANs (or, even better for an attacker, on the
attacker's own machine). The rogue machine is the one that actually handles the VLAN hopping.
Note that the VLAN hopping attacks usually discussed in practice work differently: switch
spoofing (tricking a switch into negotiating a trunk) and double 802.1Q tagging are the classic
techniques.
Wireless Technologies
Wireless technologies are a set of radio waves and communications devices that allow users to
communicate without wires. The term is usually used to refer to mobile or cordless telephones,
but it can also describe other digital wireless devices, such as cellphones, laptops, Wi-Fi
networks, baby monitors and digital TVs.
The oldest wireless technology is radio telegraphy, based on the transmission of signals using
radio waves. Other early long-distance signaling technologies include semaphore telegraphs and
optical telegraphy systems. More recent wireless technologies include the following:
Cordless telephones and related cell phone technologies have evolved from analog cellular
systems to digital cellular systems such as GSM and CDMA, and then to third-generation (3G)
networks such as UMTS. Many different wireless technologies are used by different types of
wireless communication devices. The two most common uses for wireless technology are telephone
communications and data communications. Wireless technology is used throughout modern society to
provide access to needed services for people who cannot be reached by wired networks over
conventional telephone lines.
At its most basic level, wireless communication involves the transmission of data between a
transmitter and a receiver. Radio transmission spreads electromagnetic waves in all directions,
which allows a signal to propagate over long distances. However, radio signals attenuate
significantly as distance increases, so receivers must cope with much weaker signals than those
carried by wires.
The ability to reach many receivers from a single transmitter offers clear advantages for device
design. For example, an omnidirectional antenna is often used in wireless local area networks
(WLANs) to broadcast signals within an office or building. Wireless LANs may offer some
advantages over wired configurations for local area networks because of their flexibility, low
cost, and ease of installation. However, the use of wireless technology can still come at a high
cost to both businesses and individuals, and it is common for many businesses to invest in their
own set of wireless LAN equipment to meet their needs.
Many consumer wireless devices operate in unlicensed radio spectrum, such as the 2.4 GHz and
5 GHz bands. Microwave ovens also radiate near 2.4 GHz, and cordless phones use frequencies such
as 1.9 GHz, 2.4 GHz and 5.8 GHz, so even though these devices do not emit radio waves at the
same power level as a microwave oven, they can interfere with one another. Wireless
communications can also be disrupted by outside influences such as buildings, mountains, etc.
Despite these shortcomings, extensive development has occurred in wireless communications and
networks. The advantages of wireless technologies are high speed, low cost, flexibility and
freedom from cabling. Wireless communication systems can facilitate a wide range of applications
including voice and data, surveillance, and military communications.
Chapter 10:
Network Management Practices
Identity, Access, and Account
This section is important for security because controlling access to computer systems and
resources is a critical part of security. This chapter covers, among other topics, proper
authentication and authorization.
Identification, authentication, authorization, and accounting
Identification is assigning an identifier to a specific user, computer, network device, or
computing process. For security reasons, IDs should not be shared or be descriptive of the job
function. They must be unique.
Authentication is verifying an identity that has already been established in a computer system.
Authorization is allowing or denying access to a specific resource.
Accounting is the process of “allocating resource usage by account for the purpose of tracking
resource usage.” It is also very useful for forensics after a security incident has occurred.
Multi-factor authentication
You’ve probably heard of it, as many online services push people to enable 2FA. Multi-factor
authentication is the combination of two or more types of authentication. Having multiple factors
increases security and reduces the risk of unauthorized access.
Choose from the following:
Something you are: refers to biometrics (fingerprints, eye scanners, etc.).
Something you own: security tokens or other items that a user may physically own; the one-time
codes from authenticator apps fall into this category (see the sketch below).
Something you know: a password, a PIN code, or answers to “challenge” questions.
Something you do: an action that you perform in a unique way, such as signing your name.
Somewhere you are: your location, as determined by GPS or other location services.
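The Python sketch below shows the RFC 6238 TOTP algorithm that many of those authenticator
tokens implement: an HMAC over a 30-second time counter, truncated to a short numeric code. The
base32 secret is a widely circulated demo value, not a real credential.

    # Sketch of RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter.
    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32, digits=6, period=30):
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, "sha1").digest()
        offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret only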
All the policies, protocols, and practices for managing this identity information between systems,
or even between organizations, are called identity federation.
When certain domains trust each other, one domain can accept the authentication performed by
another. If this trust is transitive (that is, if A trusts B and B trusts C, then A trusts C),
we speak of transitive trust. If it is just two parties trusting each other, you're talking
(creatively) about a two-way trust relationship.
Account types
User accounts are required when logging into a computer system. They must belong to a single
person to allow the monitoring of accesses and activities. Again, the user IDs for these accounts
must be unique. Once a user account is created, the permissions are assigned. These permissions
determine what a user is authorized to do on a system.
The two exceptions to the uniqueness rule are for shared / generic accounts and guest accounts.
Shared accounts are accounts shared between two or more people. This could be a generic account
with limited functionality (like a kiosk browser) where tracking individual users adds little
value and the overhead of individual accounts isn't worth it. Guest accounts are similar:
accounts with very limited functionality where it makes no sense to create individual accounts
for each short-term visitor.
Service accounts are those that do not require human intervention to start, complete, or manage.
A good example is batch jobs.
Privileged accounts have more access than normal user accounts. These are usually
administrator or root-level accounts.
Onboarding and offboarding are adding people to a project or organization and removing them from
it. We will come back to this later with account deactivation.
Review and audit of permissions... Hopefully, by now, the book has made you think that you need
to audit things regularly. Permissions are one of those things. You should periodically check
that user accounts on a system are necessary, justified, and represent real people who are still
employed. Then check the permissions granted to those people.
Similarly, checking and reviewing usage is looking at logs to determine what users have done
(are these things allowed? Should they still be allowed?).
Account maintenance is a routine check of all the attributes of an account (can some of them be
automated?)
Time restrictions can be a useful security measure. Assuming your employees have regular
shifts, this is where the system restricts accounts to access or privileges during non-business
hours. Location-based policies also restrict access or privileges, but this is based on where the
user is when logging on to a system.
Recertification goes hand in hand with audits. This is a process to determine if all users are still
employed and/or require them to re-authenticate. Since the latter happens in person, this can be a
pretty serious interruption.
Standard naming convention. In some places, the naming conventions are good: they help
clear up confusion among users and allow them to extract the meaning of a name. In certain
places, this can be bad because attackers also extract meanings from names.
Finally, group-based access control is where you assign privileges and manage users at the group
level rather than at the user level. This will be covered much more in a future chapter.
Account policy compliance
Access to most systems requires a password of some kind. Your organization must have and
enforce password policies.
Credential management is the set of processes, software, and services used to store, manage
and record the use of user credentials. This can be done through different tools or platforms.
Passwords must be complex enough, as defined by complexity rules: a combination of upper- and
lowercase letters, numbers, and special characters. Yes, you are right, this is not the most
up-to-date password guidance, but it is what the exam wants to hear. You also need to ensure
that the password length is sufficient (and that your policy is adhered to). The goal is
entropy.
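As a sketch of such complexity rules, the Python check below tests length plus character-class
mix; the minimum length and the required classes are illustrative policy choices, not a
recommendation.

    # Toy password-policy check: length plus four character classes.
    import string

    def meets_policy(pw, min_len=12):
        classes = [any(c.islower() for c in pw),
                   any(c.isupper() for c in pw),
                   any(c.isdigit() for c in pw),
                   any(c in string.punctuation for c in pw)]
        return len(pw) >= min_len and all(classes)

    for candidate in ["password", "Tr0ub4dor&3xyz"]:
        print(candidate, "->", "OK" if meets_policy(candidate) else "rejected")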
Your organization may also want to consider restrictions on password reuse. This means that you
will need to store a password history for each user and enforce the rules on how long each
password is used.
You can set an expiration date on an account. For instance, this could be useful for contractors.
You will need to work closely with human resources to do this. You must also have an account
recovery method, and, like all other plans, this must be practiced.
If someone leaves the company for any reason, you must deactivate their account. Why not just
delete it? Deleting an account can “orphan” items that have no other form of ownership. Save the
deletion for later and completely deactivate the account for now.
Policies, Plans, and Procedures
Policies and procedures are driven by external and internal requirements such as laws,
regulations, contracts, and customer specifications.
Standard Operating Procedures
These are step-by-step directions on how to implement policies within an organization. Standards
are the mandatory elements of a policy's implementation. Putting the two together gives you
standard operating procedures: mandatory step-by-step instructions, established by the
organization, so that employees comply with the company's security objectives in the performance
of their duties.
Agreement types
In establishing these requirements and procedures, the legal team is likely to be involved.
Doubly true if other companies (a client, a partner, etc.) participate.
Non-disclosure agreements (NDAs) are standard business documents between a company and its
staff. They describe the limits placed on secret corporate material and forbid the disclosure of
such information to unauthorized parties.
Acceptable Use Policies (AUPs) are documents that describe what your organization considers
an appropriate use of its resources. This includes computer systems, email, the Internet,
networks, etc. The goal is to enable normal business productivity while limiting inappropriate
use. These AUPs should have teeth.
Personnel Management
This is about the establishment, implementation, and monitoring of everything related to
personnel. In true study guide style, we'll quickly take a look at a number of related topics.
Workplace policies
Employees involved in activities such as fraud or embezzlement typically need to be continuously
present to keep their schemes concealed. If you force employees to take vacations every year,
they obviously can't be, which is why mandatory vacations help surface such activity. Unless
they had an accomplice, I guess.
The separation of duties means ensuring that no individual has the ability to complete a
sensitive transaction alone. This means that you trust each person a little less, but it also
reduces the chance that any one person can cause catastrophic harm.
Clean desk policies, enforced even during a quick trip to the bathroom, mean that a workstation
is clear of confidential information whenever the user is away.
Hiring, onboarding, and off-boarding
Background checks! Fortunately, you don’t have to wonder if your employees have a shady past.
You can pay someone to find out and report it to you.
When someone joins the company, they must be aware of the issues, responsibilities, and
security policies. This helps establish the importance of your role early on.
Security training
Everyone should have general security awareness training. It should recur periodically, with
continuous evaluation of people's roles (do they have new responsibilities that justify new
training?). Continuing education is important.
Role types
Everyone has their own place in the company and has unique training needs.
System administrators are administrative users responsible for keeping a system within defined
requirements. They do not create the requirements; system owners do. Just like data ownership,
system ownership is another business function where the requirements for security, privacy,
retention, and other business needs are set, this time for a complete system.
Users refer to ordinary users who only have limited access and privileges, depending on their
role and work activities.
Privileged users have more access than ordinary users; a database administrator, for instance.
An executive user is a special subcategory of user: high-level people who may not strictly need
access but will still get it when they ask for it. They are natural targets of phishing attacks,
so be careful.
General security policies
This is a high-level statement (or perhaps a series of statements) that describes “what security
means to an organization.” It would be developed by top management. An example from a book is:
“This organization will exercise the principle of least privilege in managing customer
information.”
Social media networks
A possible AUP clause is the restriction of the use of social networks at work or in the work
team. Social media exposes the company to data leaks, malware, and phishing attempts.
Personal email
If personal data and work data are mixed, especially if the data resides on corporate servers,
this is a human resources and legal headache waiting to happen.
Risk Management
Risk management is the process of identifying, evaluating, and controlling threats to an
organization’s capital and earnings. Threats to IT security, data risks, and risk management
strategies to mitigate them have become a priority for digitized companies. This is why a proper
risk management plan increasingly includes a company’s processes for controlling and
identifying threats to its digital assets, including a customer’s personally identifiable information
(PII), proprietary corporate data, and intellectual property.
Every business and organization faces the risk of unforeseen and damaging events that can cost
the business money or cause it to close permanently. Risk management enables organizations to
try to prepare for the unexpected by minimizing additional risks and costs before they occur.
Importance of risk management
This ability to control and understand risk enables organizations to have more confidence in their
business decisions. In addition, sound corporate governance principles that specifically focus on
risk management can help a company achieve its objectives. Other important benefits of risk
management include:
• Creating a safe workplace for all staff and customers.
• Helping establish the organization's insurance needs, saving on unnecessary premiums.
• Protecting all persons and property involved from possible harm.
• Providing protection against damaging events for both the company and the environment.
Healthcare illustrates the importance of combining patient safety with risk management. In most
organizations and hospitals, the risk management and patient safety departments are separate,
with different leadership, goals, and objectives. However, some hospitals recognize that the
ability to provide safe, high-quality patient care is necessary for the protection of financial
resources and, therefore, must be integrated with risk management.
Companies are there to make money. Risk management allows them to maximize their return on
investment - that is, to make more money.
Business impact analysis concepts
This is the process of determining the source and related impact values of risk elements in a
process. It also describes how the loss of any critical function will affect an organization.
What are the possible impacts?
If the risk is the possibility that something will not work as expected, the impact is the cost
associated with that (realized) risk. The impact can happen in several ways:
• Death or injury to third parties.
• Property damage, including damage to commercial property, third-party property, or environmental damage.
• Safety: the condition of being protected from risk.
• Finance. This is very late capitalism, yes, but note that most business decisions are made on the basis of money.
• Reputation. If a company damages its reputation, it could harm it in the future (unless you are Equifax, in which case there are no other options for your customers).
Common terms:
Availability is the proportion of time that a system is able to perform its intended functions. Reliability is simply a measure of how frequently the system fails.
RTO is the recovery time objective: the target time for resuming operations after an incident.
The recovery point objective (RPO) is the time period representing the maximum acceptable amount of data loss. The data-loss aspect is the differentiator, and it is tied to backup frequency.
Mean time to repair (MTTR) is a measure of the time it takes to repair a fault: total downtime divided by the total number of failures.
MTBF, mean time between failures, measures the reliability of a system. It is the average time between failures: total uptime divided by the total number of failures.
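To make these definitions concrete, here is a quick Python sketch with made-up numbers. The outage log is invented, and the closing availability formula, MTBF / (MTBF + MTTR), is a common approximation rather than something the exam objectives spell out:

    # Hypothetical outage log: hours of downtime per failure (assumed values)
    downtimes = [2.0, 0.5, 1.5]
    total_hours = 24 * 365  # one year of scheduled operation

    mttr = sum(downtimes) / len(downtimes)                   # mean time to repair
    mtbf = (total_hours - sum(downtimes)) / len(downtimes)   # mean time between failures
    availability = mtbf / (mtbf + mttr)                      # common approximation

    print(f"MTTR: {mttr:.2f} h, MTBF: {mtbf:.1f} h, availability: {availability:.4%}")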
Mission-essential functions: The essential functions of the mission are those that MUST occur. If
they do not occur or are performed incorrectly, the company’s mission is directly affected. As
such, they must first be restored.
Once you have figured out what your critical functions are, identifying the data and systems that
support those functions is called critical systems identification. If a single component can cause
the entire system to fail, it is a single point of failure. Make sure the essential functions of your
mission don’t depend on a component like this.
Privacy
Similar to a business impact assessment, privacy impact assessment (PIA) is also crucial. This is
one way to determine the gap between desired privacy performance and actual performance. This
is an analysis of how personally identifiable information (PII) is handled during storage, use, and
communication. A privacy threshold analysis will determine whether a system collects or
maintains PII.
Risk management concepts
These are all elements of the concepts of threat assessment, risk assessment, and security
implementation. They all come from the perspective of business management.
Threat assessment
It is a structured analysis of the threats that a company faces. You can’t change the threats, only
the way they affect you. Threats can be:
• Environmental: weather, lightning, storms, solar flares, etc.
• Man-made: hostile attacks, or accidents caused by your own engineers.
• Insider: disgruntled (or well-meaning) employees who make a mistake that harms the company.
• External: a threat originating outside the organization.
Risk assessment
This method of analyzing potential risk relies heavily on mathematical and statistical models. You know what that means... even more equations! Lucky you!
SLE is the single loss expectancy: the value of a loss expected from a single event. It is calculated as the value of the asset (the cost to replace it) multiplied by the exposure factor (a 50% loss of functionality gives a factor of 0.5).
The annualized rate of occurrence (ARO) is how many times per year you expect something to happen. Usually, this is based on historical data.
ALE, the annualized loss expectancy, is SLE multiplied by ARO.
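As a quick sanity check on those formulas, here is a tiny Python sketch. The asset value, exposure factor, and ARO are invented numbers used purely for illustration:

    asset_value = 50_000      # cost to replace the asset, in dollars (assumed)
    exposure_factor = 0.5     # a 50% loss of functionality
    aro = 2                   # expected occurrences per year (assumed)

    sle = asset_value * exposure_factor   # SLE = asset value x exposure factor
    ale = sle * aro                       # ALE = SLE x ARO
    print(f"SLE: ${sle:,.0f}, ALE: ${ale:,.0f}")  # SLE: $25,000, ALE: $50,000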
A risk log is a list of risks associated with a system. You can also include calculations or impact
data for each.
Qualitative vs. quantitative
Likelihood is the chance that a given risk will actually occur. It can be assessed quantitatively or qualitatively. Quantitative means an objective, numerical determination of the impact of an event; with numbers in play, it is easier to calculate risks and make decisions.
Qualitative is when the impact of risk is subjectively determined. You may not be able to provide
numerical values, especially in the case of catastrophic events.
An intermediate measure between the two is when qualitative estimates (low, medium, high) are
made and weights are attributed to risk levels and other factors. These factors could include the
business impact, the cost of the solution, etc.
Supply chain assessment
If you depend on components that are no longer in production or have very long lead times, you run a high risk if something happens to your system.
Bear in mind that your risk assessment should extend to your supply chain.
Vulnerability and penetration testing
These were covered in a previous chapter. These are ways to find out what problems exist so that
you can mitigate them or plan around them. Make sure you get written permission before performing any of these tests.
Change management
Change management has its roots in systems engineering, where it's called configuration management. Configuration control, in turn, is the process of controlling changes to a baselined item. Both are essential when it comes to managing risk: structure and process help mitigate it.
Risk responses
You cannot entirely eliminate risk; that is an absolute. However, there are options for dealing with it. You can:
• Avoid it by minimizing your exposure.
• Transfer it to another party through insurance or other methods.
• Mitigate it by applying controls that reduce the impact.
• Accept it... sometimes your best option isn't a good one. One example is skipping change controls to apply a hotfix and avoid even bigger problems.
Security controls
If you want to mitigate your risk and reduce your exposure by applying security controls, there are several classes:
• Deterrent controls hinder the attacker by reducing the likelihood of success from the attacker's point of view. Warning signs and lighting are examples.
• Compensating controls help meet a requirement when you have no option to directly address the threat. Fire suppression is an example: you didn't prevent the fire, but you did... something.
• Preventive controls stop specific actions from being performed; firewalls and mantraps are examples.
• Technical controls use some form of technology to address a security problem, such as biometrics.
• Detective controls help detect intrusions or attacks. Examples are security cameras or an IDS.
• Physical controls prevent specific physical actions from occurring. Again, the mantrap example.
• Corrective controls are used after the event and help minimize the extent of the damage. Example: backups.
• Administrative controls are procedures or policies used to limit security risks.
Chapter 11:
Security Policies
Network Security
Secure Network Design
Firewall
A firewall is software or hardware (or a combination of the two) used to enforce network security policies on all network connections. Network administrators determine the security policies: which traffic is allowed and which traffic is denied or blocked. These rules can be very specific and nuanced for different applications, ports, machines, users, etc. Firewalls can protect a specific application or a whole (sub)network, but at a minimum, your organization must have a firewall
between your network and the Internet. Its objective is to block attacks before they reach the
target (web server, mail server, DNS server, database, etc.).
How firewalls work
• Network Address Translation (NAT): an IPv4 technique used to map private to public IP addresses. NAT is not required in IPv6 from an address-scarcity point of view, but it may be kept anyway because it also hides internal addressing schemes from external connections.
• Basic packet filtering: look at packets and their source and destination ports, addresses, and protocols, then determine whether the packet is permitted by the security rules configured in the firewall. If not, block it.
• Firewalls can also provide some protection against flood attacks.
• Firewall rules: a mirror of network policy restrictions. I'm not sure why this has its own section.
• ACLs (Access Control Lists): lists of users and their allowed actions, identified by an ID, network address, or token. Use permit statements followed by a blanket deny to apply an implicit-deny approach.
• Application-based vs. network-based: application-based firewalls scan traffic and block or allow actions within applications (including web-connected ones). Network-based firewalls are, uh, network-based: they look at IP addresses and ports. They are broader and less specific than application-based ones.
• Stateful vs. stateless: everything is easier without state, of course. But stateful firewalls let you take action based on past traffic. For example, if you receive a response to a request you never sent, it's probably something you want to block; you wouldn't know that without tracking connection state.
• Implicit deny: if it is not explicitly allowed, deny it (see the sketch after this list).
• The principles of secure network administration guarantee a set of correctly configured hardware, software, and operation/maintenance actions. Another orphan section... I don't know why.
• Rule-based management: define the desired operational states so that they can be represented as rules. Much like the software concept of making illegal states unrepresentable; not an exact parallel, but it forces administrators and developers to think about which states should be allowed and which are dangerous.
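Here is a minimal Python sketch of the implicit-deny idea from the list above. The rule fields and values are invented for illustration; real firewalls have far richer rule syntax:

    # Illustrative allow rules; anything that matches none of them is denied.
    RULES = [
        {"proto": "tcp", "dst_port": 443, "action": "allow"},  # inbound HTTPS
        {"proto": "tcp", "dst_port": 22,  "action": "allow"},  # SSH for admins
    ]

    def evaluate(packet: dict) -> str:
        for rule in RULES:
            if (packet.get("proto") == rule["proto"]
                    and packet.get("dst_port") == rule["dst_port"]):
                return rule["action"]
        return "deny"  # implicit deny: not explicitly allowed, so blocked

    print(evaluate({"proto": "tcp", "dst_port": 443}))  # allow
    print(evaluate({"proto": "udp", "dst_port": 53}))   # deny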
VPN
A VPN concentrator is a way to manage multiple VPN conversations on a network while keeping them isolated from each other. VPNs can be remote-access or site-to-site. Site-to-site connects machines between two networks continuously (no need to configure each session). Remote access is more temporary and allows remote hosts to connect to a network.
IPsec
IPSec is a set of protocols for the secure exchange of packets at the network layer (layer 3). IPSec is used to establish VPN connections. This chapter does not cover SSL-based VPNs.
• IPSec tunnel mode means that the data, as well as the source and destination addresses, are encrypted. External observers cannot decipher the contents of the packets or the identities of the communicating parties.
• Transport mode only encrypts the data, allowing an observer to see that a transmission is taking place; the original IP header is exposed.
• There are three connection modes: host-to-server, server-to-server, and host-to-host.
• A security association (SA) is an established combination of algorithms, keys, and so on, used between the parties. An SA is one-way, so two-way traffic requires two SAs.
• In IPv4, IPSec is an add-on, and adoption is up to individual vendors. In IPv6, it is fully integrated (native in all packets).
• AH (Authentication Header) is a header extension that ensures the integrity of the data and the authenticity of the data source. Encapsulating Security Payload (ESP) header extensions provide confidentiality (and can optionally provide integrity as well).
• Split-tunnel VPNs do not route all traffic through the VPN; this helps avoid the bottlenecks that can result from encrypting all traffic. A VPN that routes all traffic through the tunnel is called a full-tunnel VPN.
• TLS (Transport Layer Security) can be used by VPNs to exchange keys and create secure tunnels for communication. The book points out that IPSec-based VPNs can have trouble traversing multiple NAT domains.
• Always-on VPNs are preconfigured and... always on by default.
NIPS/NIDS
NIDS stands for network-based intrusion detection system. NIDS detect, record, and respond to unauthorized use of the network. NIPS, in turn, stands for network-based intrusion prevention system. NIPS are similar to NIDS but can take automated actions to stop an attack, as determined by pre-established rules.
An intrusion detection system (IDS) does not need to be network-based; it could be host-based. An IDS generally contains the following components:
• A traffic collector that gathers events from log files, copied traffic, and other sources.
• An analysis engine that analyzes the collected events and compares them to known malicious patterns.
• A signature database that stores patterns, or signatures, of malicious activity.
• A user interface to view and report data by alarm level, etc.
An IDS can be signature-based, which means that it detects intrusions based on definitions of known signatures. Alternatively, it can be heuristic or behavior-based: a baseline of "normal" behavior is established, and behavior outside those limits is considered harmful. This can have a high false-positive rate. Anomaly-based detection is similar and looks for anomalous traffic based on known "normal" behavior. The type of NIPS/NIDS system you have will determine the complexity of the rules (the book gives examples ranging from Snort to Bayesian systems).
IDS can be inline, which means that it monitors the data as it flows through the device, or
passive, which means that it copies the data and examines it offline.
It can be in-band, which means you examine the data and can take action within that system (if
something isn’t right, don’t send it). Out of band, you can’t.
Router
Routers are “network traffic management devices that are used to connect different network
segments.” Routers are located in gateways where two or more networks are connected. They
examine each packet and its destination address, then determine the optimal routes through a
network.
Remote access is often necessary, especially for large organizations with routers scattered around
the world. Unauthorized access is a bad thing, so avoid incidents like leaving default passwords,
sending plain text passwords, or using Telnet (or other insecure/outdated protocols - use SSH
instead).
Routers use access control lists (ACLs) to determine whether or not a packet should be permitted
to enter a network based on its source address. A good router can also be configured to perform stateful packet inspection.
Routers have detailed information about expected source IP addresses, so they can check a packet's declared source address, which may be spoofed, against what they expect. If the two do not match, the router should discard the packet as an anti-spoofing measure.
Switches
Switches operate at the data link layer. Switches connect devices to each other on a network.
They represent a security risk because access means that an attacker can intercept all
communications. Like routers, switches have insecure access methods (especially Telnet and earlier versions of SNMP; use SSH and SNMPv3 instead).
Since switches move packets from incoming connections to outgoing connections, they can inspect packet headers. Port security means that switches can control which devices connect to which port via allowed MAC addresses (which can be spoofed, however). Port security can be set up to assign a specific MAC address to a port, to let the switch "learn" acceptable MAC addresses dynamically, or to retain learned MAC addresses ("sticky" learning).
Layer 3 switches can use Open Shortest Path First (OSPF) to route traffic, and switches use Spanning Tree Protocol (STP) to avoid loops. Switches also typically have flood guards to protect against flood attacks.
Proxies
A proxy server is a way of filtering traffic and can be used to further an organization’s security
goals. A proxy intercepts a client’s requests and forwards them to the intended destination.
Proxies can be forward proxies, which intercept a request and then forward it to the destination. They can be reverse proxies, which are installed on the server side of a connection and intercept incoming requests. They can be transparent, in the sense that they simply review the request and pass it along, or they can modify requests.
Anonymous proxies hide information about the requesting client. Caching proxies are able to
store local copies of content to improve performance. Content filtering proxies check requests
against an Acceptable Use Policy (AUP) and filter out the bad stuff. Open proxies are proxies
available to any user on the Internet. A web proxy is used to manage web traffic (also called web
cache).
Load balancers
Load balancers move loads across multiple resources. This helps avoid overloading a server and
increases fault tolerance. Load balancing is easier on stateless systems.
• Load balancers can be affinity-based, which means a host connects to the same server for a given session. Round-robin, on the other hand, means that every new request goes to the next server in rotation (see the sketch after this list).
• Load balancers can be active-passive, which means one system balances everything with another system ready to step in if the main one fails. Active-active means that all load balancers are active at the same time.
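A round-robin rotation is easy to picture in code. This is a toy Python sketch (the backend names are invented); real load balancers also track server health and load:

    from itertools import cycle

    servers = ["app-1", "app-2", "app-3"]  # hypothetical backend pool
    rotation = cycle(servers)

    # Round-robin: each new request goes to the next server in the rotation.
    for request_id in range(5):
        print(request_id, next(rotation))  # app-1, app-2, app-3, app-1, app-2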
Access points
Wireless Access Points (APs) are “the entry and exit point for radio-based network signals to and
from a network.”
• SSID stands for service set identifier. The SSID is a unique identifier for a network, up to 32 characters. When a user wants to join the network, they must perform a handshake with an AP, and the packet must include the SSID. By default, the SSID is broadcast, but you can disable this feature.
• APs can use the MAC filtering mentioned above. However, since an attacker can observe valid MAC addresses on the network and spoof them, this is not a foolproof defense.
• The book discusses signal strength, which most people understand intuitively. The transmit power of the AP, as well as the physical environment, influences signal strength.
• As more things connect wirelessly, the wireless bands get crowded. We now have 5 GHz (802.11a, n, and ac) in addition to 2.4 GHz (802.11b/g and n).
• Wi-Fi is radio-based, so you need antennas. Antenna types determine gain and transmission patterns; gain is a measure of antenna efficiency. Antenna placement (hopefully) ensures maximum coverage in an area. You can also end up broadcasting outside of your building, which is not always a good thing. Panel and Yagi antennas are two types of directional antennas.
• Access points can be "thin" (controller-based) or "fat" (standalone). Standalone APs often include authentication, encryption, and channel management features; controller-based APs simplify centralized management.
SIEM - Security Information and Event Management.
SIEM systems are hardware and software designed to analyze aggregated security data. They are
based on a few different concepts:
• Data aggregation: event logs, security logs, firewall logs, application logs... all in one place.
• Correlation: events or behaviors can be correlated based on time, common elements, etc.
• Automatic alerts and triggers: you can set up rules to alert you based on certain patterns. Your SIEM can also react automatically.
• Time synchronization: I don't know if you, dear reader, have ever had to map events in one time zone to events in another time zone, but it sucks. SIEMs can represent events in both UTC and local time.
• Event deduplication: SIEMs can remove redundant event information for a better signal-to-noise ratio (see the sketch after this list).
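To illustrate the time-synchronization and deduplication points together, here is a small Python sketch. The two log records are invented; they describe the same firewall event reported in two different time zones:

    from datetime import datetime, timezone

    events = [
        {"ts": "2022-03-01T09:15:00-05:00", "src": "fw-1", "msg": "deny tcp 10.0.0.5:4444"},
        {"ts": "2022-03-01T14:15:00+00:00", "src": "fw-1", "msg": "deny tcp 10.0.0.5:4444"},
    ]

    seen, unique = set(), []
    for e in events:
        utc = datetime.fromisoformat(e["ts"]).astimezone(timezone.utc)  # normalize to UTC
        key = (utc, e["src"], e["msg"])  # same time, source, and message = duplicate
        if key not in seen:
            seen.add(key)
            unique.append(e)

    print(len(unique))  # 1 -- the two records are one event seen from two time zones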
DLP
And now we've moved from SIEM to Data Loss Prevention (DLP). I told you the ordering of topics was a bit erratic. Here, DLP refers to methods of detecting and preventing unauthorized data transfers within an organization.
• USB blocking: disabling physical ports or using software-based solutions.
• Cloud-based DLP becomes more difficult, as you have to move some data to and from the cloud.
• Organizations can disallow or scan email attachments.
NAC - Network Access Control
This is a methodology for controlling network access on a host-by-host basis. Large organizations can have LOTS of connected workstations and servers, and managing these connections at scale is difficult. Network Access Protection (NAP) is Microsoft's option; Network Admission Control (NAC) is Cisco's.
Microsoft NAP measures the system integrity of connected machines. Metrics include operating
system patch level, virus protection, and system policies. NAP has existed since Windows XP
SP3.
Cisco NAC enforces policies defined by the network administrator and verifies policy settings, software updates, etc. Both perform health checks on a host before allowing it to connect to the network.
Agents related to NAP or NAC can be permanently deployed on a host. They can also be dissolvable, which means they are used (and thrown away) as needed. They can be agent-based, meaning code is stored and activated on the host machine. If the code resides on the network and does not persist in the host machine's memory after use, it is considered agentless.
Mail gateway
These are machines that process email packets over a network. They also manage data loss, filter
spam, and handle encryption.
• Gateways can filter spam by blacklisting known spam sources by domain or IP address (alternatively, you can whitelist trusted sources). They can filter by keyword. There are more sophisticated checks that involve delays, reverse DNS checks, or callback checks. Additionally, services like Gmail "learn" from users who flag messages as spam or junk. Much spam filtering occurs on the network or at the SMTP server level.
• Again, data loss prevention is an issue with email attachments.
• Email is cleartext by default but can be encrypted. While options exist (like PGP), adoption is not widespread.
Bridges
Somehow, we have returned to bridges. Bridges operate at Layer 2 and connect two separate network segments. This has security implications, because segregating traffic can keep confidential information more contained.
• Encryption takes time and processing power. TLS/SSL accelerators are dedicated devices that help mitigate crypto bottlenecks within organizations.
• SSL decryptors allow you to inspect traffic. They are effectively a man-in-the-middle: they decrypt the information, review it, then re-encrypt it and forward it on.
• Media gateways are machines designed to handle various media protocols, including translating from one protocol to another. They are useful for organizations that carry a lot of voice or video traffic.
• Hardware Security Modules (HSMs) are devices intended to manage or store cryptographic keys. They can also help with hashing, encryption, and digital signing.
Cryptographic Security
Cryptography is the science of hiding (encrypting) information. This has been going on for centuries. The word "cipher" comes from the Arabic word "sifr," which means empty or zero.
General cryptographic concepts
Throughout history, cryptography has been a game of cat and mouse in which one party
improves encryption methods, and the other learns how to decrypt them. And now, encryption
(and decryption) uses computing power to aid in its calculations.
Encryption can offer privacy protection, hashing for integrity protection, digital signatures for
non-repudiation, and more. The basic idea of cryptography is to take the plain text that needs to
be protected and change it to ciphertext, which prevents unauthorized people from intercepting
or tampering with it.
Cryptanalysis is simply the process of analyzing ciphertext and other information in an attempt
to translate the ciphertext into plaintext. Cryptanalysis can be differential or linear. Both compare copies of the plaintext and ciphertext to determine the key, but linear cryptanalysis puts the plaintext through a simplified version of the cipher as part of the analysis.
When using cryptographic functions, it is important to use proven technologies. This means "don't roll your own crypto."
Symmetric vs. asymmetric algorithms
Encryption operations require a message, an encryption algorithm, and a key. Symmetric algorithms are the oldest form of data encryption. They require both the sender and the recipient to have the same key, which makes for fast computation and suits them to bulk encryption. However, the shared key is also a disadvantage: how do you get the key safely to all parties? This problem is known as key exchange. To have secure messages, the symmetric key must be exchanged securely between the parties (see the sketch below the algorithm list).
Common symmetric algorithms include Twofish, 3DES, AES, Blowfish, and RC4.
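As a concrete illustration, here is a short Python sketch using the third-party cryptography package (pip install cryptography). Fernet is an AES-based symmetric scheme, chosen here purely for illustration:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # the shared secret both parties must somehow hold
    f = Fernet(key)

    ciphertext = f.encrypt(b"meet at the usual place")
    print(f.decrypt(ciphertext))  # b'meet at the usual place'
    # The hard part is not these two calls: it is getting `key` to the other
    # party securely -- exactly the key exchange problem described above.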
Operating mode
The modes of operation are used to handle multiple blocks of identical input data so that the
ciphertext does not have repeating blocks of encrypted data. Common modes include Cipher
Block Chaining (CBC), Electronic Code Book (ECB), Output Feedback Mode (OFB), Cipher
Feedback Mode (CFB), and Counter Mode (CTR).
Elliptic curve
Elliptic Curve Cryptography (ECC) is based on elliptic curves, which have special mathematical properties that allow the receiver and sender to openly choose a point on the curve and then individually derive keys from that point.
ECC is newer and has not been tested as much as other algorithms. However, the book seems
optimistic about its possibilities. ECC is ideal for use in low-power phones, as it is not as
computationally expensive as other algorithms.
Obsolete algorithms
Over time, computing power increases, which means that older algorithms become less secure. Flaws are also discovered in individual algorithms. You should stay up to date on which cryptographic methods are still safe to use.
Hashing
A hash is a special one-way mathematical function: it is easy to hash something, but practically impossible to determine the original content from the hash alone. This makes hashing a good way to store computer passwords and to ensure message integrity. HMAC, or Hashed Message Authentication Code, is a variant that hashes a message together with a previously shared secret, providing both integrity and authentication (see the sketch below).
Some hashing algorithms are vulnerable to collision attacks, which means an attacker can find two different messages that produce the same hash value. This means that integrity is lost; you can't prove you started with the correct, original message.
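Here is a minimal Python sketch of both ideas, using the standard hashlib and hmac modules (the message and secret are invented):

    import hashlib
    import hmac

    message = b"transfer $100 to account 42"
    secret = b"pre-shared-secret"  # assumed to be shared out of band

    digest = hashlib.sha256(message).hexdigest()                 # plain hash: integrity only
    tag = hmac.new(secret, message, hashlib.sha256).hexdigest()  # HMAC: integrity + authenticity

    # The receiver recomputes the HMAC and compares it in constant time.
    recomputed = hmac.new(secret, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, recomputed))  # True if the message is untampered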
Cryptographic objectives
The purpose of using encryption is to protect the integrity and confidentiality of the data.
Diffusion
This is the principle that statistical analysis of the plaintext and ciphertext should show them to be structurally independent of each other. Simply put, a one-character change in the plaintext should correspond to multiple changes in the ciphertext.
Confusion
This is the principle that drives randomness in the output: each character of the ciphertext must depend on several different parts of the key.
Obfuscation
This masks an item so that it is unreadable but still functional. A famous example is the International Obfuscated C Code Contest. Security through obscurity, which means hiding what is being protected, is not a solid security strategy on its own. However, it can be useful for slowing down an attacker.
Stream vs. block
Encryption can take place as block operations performed on blocks of data, which allows both transposition and substitution operations. Alternatively, encryption can operate on a stream of data, which is common with audio and video transmission. Stream ciphers work on much smaller units, so only substitution is practical.
Secret algorithms
Although most algorithms are known, leaving the key as a crucial part, you can also have secret
algorithms. This means the attacker has to figure out the algorithm as well as the key. However, since you are not sharing your algorithm, it is not peer-reviewed, so it may contain serious flaws.
Random number generation
Many of these operations depend on a random number as input. It is important that the number is truly random, something computers have a hard time producing. Specialized, cryptographically secure pseudo-random number generators attempt to minimize the predictability of computer-generated numbers (see the sketch below).
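In Python terms, the difference looks like this: the random module is a seedable PRNG that is fine for simulations, while secrets draws from the operating system's cryptographically secure generator:

    import random
    import secrets

    random.seed(1234)
    print(random.getrandbits(128))  # reproducible: anyone who knows the seed
                                    # can predict every "random" value

    print(secrets.randbits(128))    # OS CSPRNG: use this (or os.urandom)
                                    # for keys, tokens, and IVs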
Data Protection
Data can be protected in transit, at rest, and in use. Transport encryption is used to protect data in transit; this includes things like Transport Layer Security (TLS) at the transport layer. Protection of data at rest is also known as data encryption and includes things like full disk encryption. Data in use is data in a non-persistent state (RAM, CPU cache, CPU registers, etc.); newer techniques such as Intel's Software Guard Extensions (SGX) can encrypt this data.
Common use cases
• Low-power devices (such as phones) need cryptographic capabilities. They generally use ECC and other computationally light functions.
• High resilience is the ability to resume normal operations after an external disruption.
• Confidentiality is protecting data from unauthorized reading.
• Integrity can show that the data has not been tampered with.
• Obfuscation is protecting something from casual observation.
• Authentication is the property of being able to prove the identity of a party (user, hardware, etc.).
• Non-repudiation is the ability to verify that a message was sent or received in such a way that the sender or recipient cannot dispute the sending or receiving.
• And, of course, there is the usual trade-off between the level of security and the resources it requires.
Physical Security Measures
The reason physical security is important is that if someone has physical access to your systems, many of your digital controls no longer matter.
Security measures
Correct lighting (outside and inside the building) is important. Lighting makes it easier to
observe and respond to activities, especially unauthorized ones.
Signs provide information and visual safety signals. For example, signs can alert employees that
an area is limited or that doors must remain closed.
Fencing is not a new concept for any of us. It is a physical barrier around the property. This may exist outside (fences around the organization's property, barbed wire, anti-climb fencing). It can also exist indoors to provide a means of restricting entry to an area where different security policies apply. A typical example is putting servers in a cage with a controlled door.
Fences also fall into the broader category of barricades. Barricades include walls, fences, gates, and bollards. Bollards are posts that stop vehicular traffic but allow pedestrian traffic. Consider
having windows in server rooms or other areas so that people’s activities are not hidden.
However, these windows should not allow shoulder surfing.
Security guards are a very important security measure because they are visible to potential
attackers and are directly responsible for security. Make sure your guards also receive
networking training. They must be familiar with social engineering attacks and also be able to
discern strange behavior, such as computers restarting all at once or people in the parking lot
with electronic equipment.
Safes are another security measure. Contrary to popular belief, they are designed to delay unauthorized access, not necessarily block it completely. One notch below safes are lockers and security cases: a solution for contents that do not need the security level of a safe, or that are too bulky to store in one.
Cables running between systems must also be protected. Safe cabling helps prevent physical
damage and the resulting communication errors.
Air gap. It is a physical and logical separation of a network from all other networks (make sure
USB drives don’t break this rule!)
Mantraps. It’s a way to avoid tailgating by having two doors in a space that you can’t keep open
at the same time. Both doors require an access card or token, so if someone doesn’t have one,
they will be stuck between the doors.
Another social engineering concern is shoulder surfing. You can use screen filters to reduce a screen's viewing angle.
Faraday cages are a means of protection against electromagnetic interference (EMI). EMI is
electrical noise in a circuit due to the circuit receiving electromagnetic radiation. This can
become a problem in server rooms, where there is a lot of equipment and cabling. There are
standards to reduce electromagnetic interference through the design and shielding of the board.
You can also use the Faraday cage, which is a housing made of conductive material that is
grounded. This prevents external signals from entering and vice versa.
Locks
Locks are a familiar security measure. If you've been to a computer security conference, you've probably tried a lockpicking village. Technical tolerances in most locks make them vulnerable to picking. High-security locks are designed to resist picking, bumping, and other attacks. They also have mitigations against key duplication (this is known as key control).
Laptops and other valuable devices should be stored inside a desk when not in use or secured
with cable locks. These cable locks are an easy way to secure laptops to furniture or other
devices.
Entry methods
Organizations may want to restrict entry to the building to authorized persons only. They can do this through biometrics, which is "the measurement of biological attributes or processes with the aim of identifying a party that possesses those characteristics." This includes fingerprints, iris or retina scans, face geometry, hand geometry, etc. Biometrics is not foolproof and will likely require updates to people's information as they age and change.
You can also use tokens or access cards. Physical keys can be difficult to manage. Tokens and
cards can be provided quickly and remotely if required.
Environmental controls
Fire suppression is also very important. It does not necessarily stop a fire immediately, but it does provide some mitigation against the spread of fire through a structure.
In addition to fire suppression, you must also have fire detection. Fire detection works by ionization, photoelectric detection (smoke detection), heat detection, and flame detection (infrared).
Chapter 12:
On-Path Attacks
Application/Service Attack
This part begins with a short history lesson. At first, most attacks were against the network and operating system layers; that's where the low-hanging fruit was. Now that those layers have started to get their act together, hackers have moved on to the application layer. The application layer offers a "less homogeneous target" but with many vulnerabilities. As software continues to eat the world, the opportunities for attackers will undoubtedly continue. Let's jump headlong into a long list of attack types.
DoS
Denial of service (DoS) is exactly what it sounds like. Attackers exploit vulnerabilities to deny
authorized users access to specific functions or an entire system. Sometimes this might be an end
in itself. Sometimes this is simply cover for other malicious activities, which we will see later.
DoS attacks are carried out by exploiting a vulnerability in a specific application, operating
system, or protocol. An example is the SYN flood, where an attacker abuses TCP's three-way handshake: the attacker claims to be someone else, sends a SYN packet, and the target machine responds to the nonexistent address. Each half-open connection eventually times out, but if you do this enough times, valid handshake requests will be ignored. Another typical example is the ping of death, where a malformed ping packet causes difficulties or failure on the target system.
Like social engineering attacks, these types of vulnerabilities stem from unverified trust in
others.
DDoS
You've seen DoS; now prepare for distributed denial of service attacks. If one attacking system is not enough, attackers can assemble multiple systems to work together. This often exploits malware infections or botnets and can take down large targets.
System administrators can protect themselves by applying patches and updates as they become
available, changing the time-out settings for TCP connections, distributing the workload, and
blocking ICMP packets at the edge of the network. However, any network can be successfully
attacked.
At the time of writing, students were being charged with DDoSing university networks in the UK, and there was reportedly a 40% increase in DDoS attacks over 2017, including attacks on political candidates.
MitM - Man in the middle
Man-in-the-middle is a class of attacks in which an attacker places themselves between two communicating hosts. This gives the attacker the chance to observe all traffic, and even to block or modify it.
One way MitM attacks can occur is through session hijacking. This is where an attacker steals a cookie and uses it for false authentication. There was recent news about a Bluetooth MitM attack; it was possible because, apparently, no authentication was implemented at all. Oops.
Buffer overflow
According to the CERT, more than half of all vulnerabilities are the result of some type of buffer
overflow. A buffer overflow occurs when an input buffer is written with more data than it can hold. As a result, user input spills into adjacent parts of memory. Worse still, code executed via a buffer overflow often inherits the privilege level of the vulnerable program.
Injection
There are several forms of injection, which the book does not delve into. Instead, it briefly covers XML injection, SQL injection, command injection, and LDAP injection. Injection vulnerabilities, just like buffer overflow attacks, are the result of poor or missing input validation. Injection means that an attacker can supply input that is interpreted or executed by an application for malicious purposes.
Cross-site scripting
Again, poor (or absent) validation of user input. Cross-site scripting (XSS) is where an attacker can include a script in their input. The injected script could be run immediately on the backend but not stored, making it a non-persistent XSS attack. It could be stored in the backend and used against others later (making it a persistent attack). Or it could be run in the browser, making it a DOM-based attack.
Cross-site request forgery
Cross-site request forgery (XSRF or CSRF) is "an attack that forces an end-user to perform unwanted actions in a web application in which they are currently authenticated." If a user accesses a website, the browser receives a cookie. The cookie is then used on subsequent requests to prove that the user is still logged in. However, malicious actors can abuse this by triggering a request from another source that carries that cookie.
These attacks can be limited by including XSRF tokens so that sensitive actions can occur only once (a sketch follows below). Cookie settings can also be configured to limit exposure. Recent CSRF vulnerabilities include Western Digital, phpMyAdmin, and Asda.
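Here is a rough Python sketch of the one-time token idea. The session dictionary and function names are invented for illustration; real web frameworks provide this machinery for you:

    import secrets

    def issue_token(session: dict) -> str:
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token  # stored server-side with the session
        return token                   # embedded in the form as a hidden field

    def verify(session: dict, submitted: str) -> bool:
        expected = session.pop("csrf_token", None)  # one-time use
        return expected is not None and secrets.compare_digest(expected, submitted)

    session = {}
    tok = issue_token(session)
    print(verify(session, tok))  # True
    print(verify(session, tok))  # False -- the token was already consumed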
Privilege escalation
From CSRF attacks, we take a completely different turn to privilege escalation. Privilege escalation simply means starting at an ordinary privilege level and reaching root or administrator level. This can be done by stealing credentials (possibly stored in cleartext somewhere). It can also be done by attacking processes that run with elevated privileges. CentOS and RedHat have been in the news for a privilege escalation vulnerability; likewise the VPN services from ProtonVPN and NordVPN.
ARP poisoning
Address Resolution Protocol (ARP) translates between the IP addresses and MAC addresses of devices. The answer to "who owns [address]? Please identify yourself" is stored in an ARP lookup table. Unfortunately, this protocol does not include verification of responses, so an attacker can (quickly) provide false data in response. This is referred to as ARP poisoning, and it results in traffic being misdirected to the malicious address.
Apparently, the San Diego airport has "risky Wi-Fi": journalists discovered that an ARP poisoning attack was taking place there.
Amplification
Amplification refers to using a protocol in such a way that it amplifies the result. A good example involves using ping to make all devices on a network respond to a spoofed address. By using the protocol this way, the attacker can generate far more traffic than a single machine could on its own.
DNS spoofing and poisoning
The DNS system converts domain names to IP addresses because it is easier for humans to remember domain names. The DNS system is not a single collection of authoritative servers; instead, there is a hierarchy that caches data and looks "upstream" to refresh it. DNS poisoning or spoofing is where (similar to ARP poisoning) a DNS record is altered, which results in traffic being incorrectly diverted. DNSSEC is a set of extensions that adds authentication to DNS records.
Domain hijacking
Domain hijacking is simply the unauthorized act of changing the registration of a domain name.
Man in the browser
Man-in-the-browser (MitB) is a bit like man-in-the-middle (MitM). MitB attacks involve malware that modifies the behavior of the browser through helper objects or extensions. An example is "Osiris," a variant of the Kronos banking Trojan.
Zero days
We continue our whirlwind tour of ALL the attack types by jumping to zero-days... as a concept. A zero-day is a vulnerability of which there is no prior knowledge outside of the hacker who found it (and possibly the vendor). Hackers can sell this information through bug bounty programs or on the dark web. Just because a zero-day has gone public doesn't mean it's no longer dangerous: a zero-day in the Windows Task Scheduler continued to cause problems weeks later. Apple macOS and Tor have also been in the news for zero-day vulnerabilities.
Replays
Replay attacks occur when an attacker captures a communication between two parties and replays it at a later time. This could let them authenticate or repeat a transaction. Encryption and timestamps (expiration) can help prevent these attacks (see the sketch below). Replay attacks are a whole class of attacks; Tesla and other automakers have made headlines for key fob replay attacks.
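A rough Python sketch of the timestamp defense. The shared secret and the 30-second window are assumptions for illustration; real protocols usually add nonces as well:

    import hashlib, hmac, time

    SECRET = b"shared-secret"  # assumed to be pre-shared between the parties
    MAX_AGE = 30               # seconds a signed message stays valid (assumed)

    def sign(msg: bytes):
        ts = str(int(time.time()))
        tag = hmac.new(SECRET, msg + ts.encode(), hashlib.sha256).hexdigest()
        return msg, ts, tag

    def accept(msg: bytes, ts: str, tag: str) -> bool:
        expected = hmac.new(SECRET, msg + ts.encode(), hashlib.sha256).hexdigest()
        fresh = abs(time.time() - int(ts)) <= MAX_AGE  # stale = possible replay
        return fresh and hmac.compare_digest(expected, tag)

    print(accept(*sign(b"unlock")))  # True now; False if replayed after MAX_AGE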
Pass the hash
Websites should use hashed passwords, not plaintext passwords. If a hacker can capture the hash value, they may be able to use it to authenticate without ever knowing the password.
Hijacking and related attacks
We'll take another turn and dive into a series of hijacking-related attacks. Yes, we mentioned hijacking earlier. We will do it again now.
Clickjacking
Clickjacking is where website elements cause the user to click on something they did not intend to, for example via a translucent overlay. A real-life example is the poor guy who was subscribed to a service through a clickjacking attack and then had to make a lot of phone calls to undo everything.
Session hijacking
Also called TCP/IP hijacking, this is where an attacker takes control of an existing session between a client and a server. Since the user is (probably) already authenticated, the attacker can continue with all of the user's privileges once the takeover is complete. The attacker could also use a DoS attack against the original client to keep it busy.
Apparently, the antivirus company Kaspersky was in the news this spring for session hijacking issues. More recently, so was Google Chrome.
URL hijacking and typosquatting
URL hijacking is a class of attacks that manipulate or alter a URL. This could involve typos, misleading the user into thinking they are clicking the correct URL, or malware.
Typosquatting is where attackers take advantage of common typos. This has been particularly relevant lately, as political candidates face typosquatting attacks on their domain names. (Candidates can also buy likely typo domains preemptively to avoid future problems.)
Driver tampering
Drivers are software programs that mediate between the operating system and peripherals. Driver tampering, therefore, is an attack that changes drivers (and therefore their behavior).
One way to do this is by shimming. Shimming refers to inserting another level of code between
the driver and the operating system. This can be a way for developers to abstract functionality
and facilitate future enhancements. It can also open the door to harmful behavior.
Drivers can also be refactored. Refactoring is another legitimate software development process.
It means restructuring the existing code without changing the general behavior (so that the
operating system and the user do not notice any difference). Attackers can also refactor the driver
code to add malicious functionality while retaining the original functionality.
Spoofing
Spoofing refers to making it appear that something is coming from a different source. Typically this means impersonating a known, trusted, or authenticated source. The book notes: "When [network] protocols were developed, it was assumed that people who had access to the network layer would be privileged users who could be trusted."
MAC spoofing
MAC spoofing refers to changing a MAC address to bypass security checks looking for a
specific MAC address. A few years ago, MAC address spoofing (or rather, randomization) was
used as evidence against activist Aaron Swartz. It is now an Apple privacy feature.
IP address spoofing
As mentioned above, you can forge the "from" field when sending IP packets. Of course, this can cause some problems.
Smurf attack
If an attacker sends a spoofed packet to the broadcast address on a network, that packet will be
distributed to all users on the network. Typically this means that other devices will send an echo
reply to this echo request. The (spoofed) sender of the original echo-request packet now receives
many responses. This is called a smurf attack.
Spoofing and trusting relationships
You can spoof a packet from one system to another system that already trusts that source. This has some obvious advantages for the attacker. It also has some obvious mitigations: configure your firewall to reject packets arriving from outside the network that masquerade as packets from inside the network, etc.
Sequence numbers and spoofing
The TCP three-way handshake generates two sequence numbers. These numbers are required for
future communication. So if you are off the network, it is more difficult to see (and therefore
spoof) packet sequence numbers.
Wireless Attacks
As in the previous sections, the attack types are a mixture of very specific and very abstract
elements.
Replay attacks
Replay attacks were also mentioned in the application attacks section. Basically, you record the traffic between the endpoint and the wireless access point (you can also do this with Bluetooth, etc.). You can then replay those messages to authenticate, repeat a transaction, etc. Automakers have recently been in the press for their vulnerability to replay attacks against key fobs.
IV
IV stands for initialization vector. IVs are used in wireless systems as a "scrambling element at the beginning of a connection." IV attacks, therefore, are attempts to find the IV and use it to undermine the encryption. A recent Defcon talk discussed issues in home security systems, including IV vulnerabilities. The book points out that the WEP protocol is insecure due to problems with its initialization vector: the IV is sent in cleartext and is only 24 bits long (which means it will likely repeat within a few hours).
Evil twin
This is a type of wireless attack that uses a substitute access point. By using an AP with high-gain antennas, an attacker can make devices connect to their AP because it appears to be the "best" connection option. From there, denial-of-service or man-in-the-middle attacks can occur.
Rogue AP
A rogue access point is similar to the evil twin. An attacker can use an unauthorized AP to persuade users to connect, enter credentials, etc. From there, other attacks like MitM can occur. My understanding is that the difference is that an evil twin AP is specifically designed to look like a legitimate one.
Jamming
Jamming refers to the blocking of wireless or radio signals and the denial of service. It’s illegal,
so don’t do it. It has become a big problem for the military, both on the offensive side and the
anti-jamming defensive side.
WPS
WPS stands for Wi-Fi Protected Setup, a wireless security standard designed to make Wi-Fi setup easy. Unfortunately, the PIN it uses is susceptible to brute-force attacks. Once the attacker has the PIN, they can obtain the WPA/WPA2 passphrase and access the network. Android Pie deprecated WPS support in the past year.
Bluejacking
Bluetooth is another wireless standard and exists in most mobile and portable devices. One Bluetooth-related attack is "bluejacking," which involves the unauthorized sending of messages to a Bluetooth-enabled device. Think of it as the Bluetooth equivalent of someone AirDropping you a bunch of unwanted photos.
Bluesnarfing
Here we can see that naming things is not one of infosec's strengths. Bluesnarfing is another Bluetooth-related attack, one that involves stealing information (rather than sending unwanted information). It appeared in some news reports in connection with gas station skimming. There are other, more recent Bluetooth attacks; two examples are BlueBorne and BtleJack. The latter was a great talk and demonstration at Defcon 26.
RFID
RFID stands for radio frequency identification. RFID tags can be active or passive: active tags have their own power source, while passive tags are powered by neighboring RF fields. As RFID is increasingly used for authentication, building access, etc., RFID attacks are a serious problem. Attacks can target RFID readers and tags, as well as the communication between different components of the system. The radio frequencies used are usually publicly known, so interception and replay attacks are not that difficult. Examples include the Tesla key fob attack mentioned above and a similar Mercedes one.
NFC
Near Field Communication, or NFC, is a wireless protocol that lets devices talk at very short range (about 4 inches). It is becoming increasingly popular in mobile payment systems ("tap to pay"). I haven't found many recent stories on this, but there is a 2012 NFC article on Android and Nokia. Last week, Apple expanded NFC functionality, and an upcoming conference in Japan is offering $60,000 to anyone who can exploit the iPhone's NFC.
Disassociation
Disassociation attacks disconnect (disassociate) a device from the network. The Wi-Fi standard includes a "deauthentication" frame that can be sent to a device to remove it from the network. So if an attacker has (or can guess) the MAC address of the victim device, they can send such a frame.
Malware and Ransomware
First of all, what is malware? We can simply define malware as software designed for nefarious purposes. Or at least, that's the definition given. Those purposes can include damaging data or a system, granting unauthorized access, etc.
Types of malware
Polymorphic malware
Malware causes damage to computers and, therefore, we want to detect and prevent it. These
anti-malware programs look for a “signature.” This signature is a type of indicator in the
malware code that provides clues about its behavior and the type of malware. As mentioned
earlier, this chapter is short. For a more detailed forensic certification, see the GIAC Certified
Forensic Analyst (GCFA) certificate.
However, malware authors can avoid being detected by writing polymorphic malware. Poly
means more than one, and morphic means having forms. In other words, polymorphic malware
changes shape by changing its code after each replication. This makes the “signature” (and
therefore the fact that the code is malicious) more difficult to detect.
Virus
A virus is a malicious code that “replicates itself attached to another piece of executable code.”
When executable code is run, the virus can spread and/or perform other malicious actions. Later,
the book differentiates worms from viruses by saying that viruses need human interaction to
spread.
Searching the news for "viruses" returns a flood of results. "Virus" is a common and loosely used term; a more skeptical person might say it serves as a generic label for any unsolved computer problem. Last week alone, news of disruptive viruses could be found in Ulster (NY), Anniston (AL), Beatrice (NE), West Tisbury (MA), etc. Armored viruses are viruses that encrypt or hide their source code, which makes reverse engineering difficult.
Crypto-malware
Crypto-malware is malware that is designed to encrypt files on a system (without being
authorized to do so). This renders the files unusable, sometimes permanently, sometimes until a
ransom is paid (which would also make it ransomware).
Confusingly, "crypto" now also means cryptocurrency (I object). As a result, you can find news about Firefox and others "blocking crypto." In that context, it simply means blocking or preventing malware that helps attackers mine cryptocurrency, which is called cryptojacking.
Ransomware
This is another form of malware, one that "performs certain actions and extracts a ransom from the user." If malware from another category demands money from the user to reverse the damage or prevent future damage, it is also considered ransomware.
Recent events have shown that ransomware is becoming more and more common. The above
examples (WannaCry, NotPetya, etc.) in the crypto-malware section also qualify as ransomware.
During the past year, several cities in the United States were affected by ransomware.
Worm
These are similar to viruses in the sense that they try to “penetrate computer systems and
networks... [and then] create a new copy of themselves on the penetrated system.” However,
worms do not need to bind to another piece of code to reproduce (unlike a virus).
Note the difference that viruses are system-based problems and worms are network-based.
“Worms behave like a virus, but they also have the ability to travel without human action” and
“can survive on their own.”
Phorpiex, which sounds like a Pokémon character, is a worm + botnet combination that has made
the news for the spread of ransomware. XNet is another worm + botnet + ransomware
combination. WannaMine is a cryptocurrency worm that continues to spread.
Trojan
A Trojan is a standalone program installed by an authorized user, just as the Trojan horse was
carried inside the city walls by its citizens. Therefore, a Trojan is software that appears to do one
thing while also hiding some other malicious functionality.
Other recent Trojan news includes OilRig, which has targeted governments in the Middle East. In Brazil, CamuBot is stealing bank information from users. Finally, Kronos has a new variant called Osiris. Apparently, all malware writers are huge mythology nerds.
Rootkit
Rootkits are "a form of malware specifically designed to modify the operation of the operating system in some way to facilitate nonstandard functionality." The book points out that rootkits have a legitimate history: on UNIX operating systems, they were originally administrative tools used to deal with failed or unresponsive systems.
A rootkit can do everything that an operating system can do. This means that they have a lot of
power to perform malicious activities (keylogging, back doors, etc.). It also means that they have
a strong ability to cover up such malicious activity and avoid detection. This is done by simply
hiding files, affecting application performance, etc.
There are different types of rootkits that operate at different levels, including the firmware, library, virtual, kernel, and application levels.
Russian hackers Fancy Bear are back in the news with a rootkit vulnerability. This is called
“LoJax,” and it uses the LoJack (formerly Computrace) feature intended to help you find your
stolen laptop.
Keylogger
A keylogger is software designed to record all keystrokes typed by a user. Sometimes this is legitimate; for instance, you want Microsoft Word to capture all keystrokes and turn them into a document. Keyloggers become malware when the user is unaware of them and not in control of them.
In the news, Virobot is an all-in-one example of botnet, ransomware, and keylogger. Despite targeting people in the US, its ransom message is in French... bonne chance, everyone. Keylogging is also a problem for mobile devices, as shown in stories about Android malware and infected apps being removed from the Google Play Store.
Adware
This is software that supports itself through advertising; it may be free for the user, but it is financed through paid ads. Often the user is aware of this arrangement, so it is fine, even if a bit annoying. This part of the chapter, however, is about malware that delivers "unwanted ads" (as if that doesn't describe all ads?). Think pop-ups or cascading windows.
Spyware
There is a lot of “licensed” spyware marketed to parents who want to make sure their kids
aren’t doing anything wrong. In my opinion, it is unhealthy both from a security standpoint
and from an interpersonal one (boundaries, privacy, trust, etc.). The same goes for spyware
that lets people spy on their partners.
Recently, malware called “OwnMe” has targeted WhatsApp users, raising fears that
conversations and browsing histories could be exposed, although the code does not appear to
be fully implemented.
Bot
Bots are a hot topic these days, with uses ranging from customer service to influencing
political opinion. There are also new botnets, including Torii. Meanwhile, the perpetrators of
the Mirai botnet are now working for the FBI.
RAT
RAT stands for Remote Access Trojan: a “set of tools designed to provide the capability of
covert surveillance and/or the capability to gain unauthorized access to a target system.”
Wikipedia describes a RAT as “a type of malware that controls a system through a remote
network connection.” RATs can give malicious operators almost unlimited access to a system,
“as if they had physical access.”
Logic bomb
Logic bombs are malware that sits idle for a period of time until it is activated, triggered
either by an event or by a specific date and time. Think of a disgruntled IT employee who
leaves the company, after which a batch of files is mysteriously deleted a few weeks later.
Logic bombs are a lesson that monitoring is necessary even in the absence of active threats.
Also, always keep backups.
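To illustrate the trigger mechanism, here is a deliberately harmless Python sketch. The trigger date and the “payload” are hypothetical, invented for illustration; a real logic bomb would hide a destructive action behind the same kind of check.

from datetime import date

TRIGGER_DATE = date(2026, 1, 1)  # hypothetical trigger date

def routine_cleanup():
    # Looks like ordinary maintenance code; the guarded branch is the "bomb".
    if date.today() >= TRIGGER_DATE:
        print("Payload would fire here (e.g., deleting files).")
    else:
        print("Dormant: nothing happens until the trigger condition is met.")

routine_cleanup()

Because the code does nothing suspicious until the condition is met, only ongoing monitoring and code review, not a one-time scan, are likely to catch it.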
Back door
Backdoors, like several other types of malware, have some legitimate uses. For example,
software developers may install a back door as a way to reset a password. Back doors with
hard-coded credentials, however, are themselves a security vulnerability. Malware (like the
RATs mentioned above) can also provide a backdoor into a system.
The word backdoor therefore also describes malware that allows attackers to regain
unauthorized access to a system even after the initial access method has been blocked.
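As a sketch of why hard-coded credentials amount to a backdoor, consider the following Python fragment. The account name, password, and user store are all hypothetical, invented for illustration.

# Anti-pattern: a hard-coded "maintenance" credential that bypasses the
# real user database. Anyone who reads the source or the binary gets in,
# and the access survives password resets and account lockouts.
BACKDOOR_USER = "support"      # hypothetical hard-coded account
BACKDOOR_PASS = "letmein123"   # hypothetical hard-coded password

def authenticate(username, password, user_db):
    # Legitimate path: check the stored credentials
    # (plaintext here only for brevity).
    if user_db.get(username) == password:
        return True
    # Backdoor path: the hard-coded credential is accepted even though
    # it appears in no user database anywhere.
    return username == BACKDOOR_USER and password == BACKDOOR_PASS

# The hidden credential works although "support" is not in the database:
print(authenticate("support", "letmein123", {"alice": "s3cret"}))  # True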
Indicators of compromise
• Network traffic, including unusual outbound traffic, geographic irregularities, unusual
DNS requests, mismatched port-application traffic, web traffic with non-human behavior,
signs of DDoS activity, etc. (a minimal detection sketch follows this list)
• Accounts, including anomalies in privileged user account activity, account login red
flags, mobile device profile changes, etc.
• Data, including large database read volumes, unusually large HTML responses, a large
number of requests for the same file, suspicious changes to the registry or system files,
data bundled in unexpected places, unexpected system patching events, etc.
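Here is that sketch: a minimal Python pass over a hypothetical connection log, flagging two of the indicators above (an unusually large outbound transfer, and many requests for the same destination). The log format and both thresholds are invented for illustration and would need tuning against a real environment’s baseline.

from collections import Counter

# Hypothetical log records: (source_host, destination, bytes_out).
records = [
    ("ws-042", "updates.example.com", 12_000),
    ("ws-042", "files.example.net", 48_000_000),  # large outbound transfer
    ("ws-017", "cdn.example.org", 9_500),
    ("ws-017", "cdn.example.org", 9_500),
    ("ws-017", "cdn.example.org", 9_500),
]

BYTES_THRESHOLD = 10_000_000  # assumed cutoff for "unusual outbound traffic"
REPEAT_THRESHOLD = 3          # assumed cutoff for "many requests, same file"

# Indicator: unusual outbound traffic volume.
for host, dest, size in records:
    if size > BYTES_THRESHOLD:
        print(f"IOC: {host} sent {size:,} bytes to {dest}")

# Indicator: a large number of requests for the same destination.
for (host, dest), count in Counter((h, d) for h, d, _ in records).items():
    if count >= REPEAT_THRESHOLD:
        print(f"IOC: {host} contacted {dest} {count} times")

Real intrusion-detection tools apply the same idea at scale: establish a baseline of normal behavior, then alert on deviations from it.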
Conclusion
Network+ covers networking fundamentals, broadband technologies, operations, security, and
more. The certification is a good fit both for those interested in pursuing a career in IT and for
those who want to become network engineers. Employers accept the Network+ credential as
proof of skills and knowledge, so it is worth the investment of time and money to get
certified!
This certification is a core part of CompTIA’s portfolio of credentials. It provides a practical
understanding of the principles and practices used to design, implement, maintain, and
troubleshoot networks across on-premises and cloud environments.
The CompTIA Network+ is an entry-level IT certification that focuses on the topics needed
for entry-level network engineering positions. The credential is ideal for those who are new to
IT or who simply need to refresh their current skill set. The N10-008 exam consists of at most
90 questions, a mix of multiple-choice and performance-based items, which you must
complete in 90 minutes; a passing score is 720 on a scale of 100-900. The performance-based
questions provide the hands-on component, simulating lab-style tasks.
The exam is administered by Pearson VUE, one of the largest providers of computerized
testing. You can sit the exam at an authorized Pearson VUE test center or online, and you can
schedule it for whatever day and time suit you. The exam is offered in English and in several
other languages. (N10-006 is an older, retired version of the exam, not a current alternative.)
Although there are no formal prerequisites for CompTIA Network+, it is recommended that
you have prior knowledge of network technologies and standards. This includes an
understanding of TCP/IP, the network layers and their functions (the OSI model), public
versus private IP addressing schemes, and basic routing and switching, along with familiarity
with Windows operating systems and the Internet. For the hands-on portions of the test,
CompTIA recommends nine to twelve months of experience working on computer networks.
There are multiple training options available to help you prepare for this certification exam.
You can sign up for CompTIA’s official online training and study guides, or find instructor-led
training through an authorized partner in your area; the Pearson VUE website will help you
locate a test center. Practice exams are also a good way to confirm your readiness before
taking the actual exam.
CompTIA also offers the A+ certification, which covers broader IT fundamentals; some of its
networking content overlaps with Network+, but it is a separate credential rather than an
alternative to the Network+ exam.
CompTIA Network+ certification is the defining IT certification for today’s enterprise and is
valued across numerous industries. As such, CompTIA Network+ has become a global
standard in IT networking.
If you are considering a career change, this guide is for you.