Role of the audio review
- This is an audio course, thank you for listening. - Hi there, it's me, Mike Chapple. I
hope that you enjoyed watching my series of video courses on preparing for the
Security+ exam. You can consider what you're listening to right now, this audio
only course, as a summary of some of the key points from those longer video
courses. My hope is that you can use this a couple of times. First, you can listen
to this audio course immediately after watching the video courses to help you
remember the material. Second, I encourage you to set aside some time to listen
to this audio review again right before you take the Security+ exam. All right, let's
dive right into the material.
Objective 1.1
- Let's dive in by starting to explore the first domain of the Security+ exam,
threats, attacks, and vulnerabilities. The first objective that we encounter in this
domain, objective 1.1, is to compare and contrast different types of social
engineering techniques. Now social engineering is the use of some kind of trickery
to manipulate someone into performing an action that you'd like them to
perform. And in the cybersecurity world, our adversaries often use social
engineering to try to obtain passwords or other credentials, get someone to
modify an access control list, let them in a building, or some other type of similar
adverse action. Social engineering is effective for a number of reasons. And the
perpetrators of social engineering rely upon seven principles to help them be
more effective in their attacks and gain the trust of their victims. These seven
principles are authority, intimidation, consensus, scarcity, familiarity, trust, and
urgency. So with those general principles under our belt, let's move on and talk
about some of the specific types of social engineering attacks that take place. The
big one is phishing and phishing is sending an email to someone trying to
manipulate them into clicking a link, providing a username and password, or
performing some other action. That's a very general definition of phishing, but
there are a lot of variations on it as well. So while phishing uses email, there's also
SMShing, which uses text messages or SMS messages. When we combine the
words SMS and phishing, we get SMShing. There's also vishing, which is voice
phishing. That's using the telephone to engage in a phishing attack. Phishing
attacks can also vary based upon their target. While general phishing attacks
might be just blasted out to hundreds or thousands of people hoping to trap
anyone, spear phishing attacks are specifically targeted at an individual or a
specific group of people, and they use some insider knowledge about a company
and its environment or an organization and the people around it in order to make
the message more effective. One more variant is the whaling attack. Now a whale
is a big fish. So a whaling attack is a phishing attack that's targeted at someone
who's really important, either they're a senior executive or a politician or public
figure. Whaling is going after a really large lucrative target. Phishing attacks can
also vary in their purpose. We've talked about credential theft. That's one aspect
of identity fraud, basically trying to assume someone else's identity to log onto a
system or engage in a financial transaction, or whatever else you're trying to do as
that person. Another very common thing that happens with phishing attacks are
invoice scams. In an invoice scam, the perpetrator sends an email to someone in
an accounts payable department or a manager with an invoice attached saying,
please process this invoice for my services. Now of course there weren't any
services, but the hope is that the busy accounts payable clerk is just going to get
that message and process the invoice without realizing that it's not legitimate.
Spam is another type of social engineering attack. Spam is just unsolicited
commercial email. These are email messages that are sent without someone
requesting them. They're usually for advertising or marketing purposes. Now they
don't have quite the malicious intent that a phishing email does, but spam is
unsolicited junk email. We can also see this happening over text messages, SMS,
and other instant messaging services. And in those cases, spam is referred to as
SPIM, for spam over instant messaging. The last thing we need to look at from this
objective are a few physical types of social engineering. These are ways that we
can social engineer someone when we're physically present with them. One of
these is shoulder surfing. That's just looking over someone's shoulder when
they're working at their desk and seeing what they're doing on their computer
screen, or maybe sitting next to someone on an airplane when you can see their
laptop. Dumpster diving is another physical type of social engineering attack. In a
dumpster diving attack, the perpetrator just goes through an organization's
garbage, and they know that people aren't always diligent about shredding or
destroying paper records. And if they look carefully, they can find information
that might be very sensitive on its own, or it might give them the clues that they
need to engage in a social engineering attack to get credentials. Maybe they find
an organizational chart, for example. So they know who supervises who within
the organization. The last type of social engineering attack that we need to talk
about is the tailgating attack. In a tailgating attack, the attacker tries to gain
physical access to a building, and they do this by watching for someone who is
allowed to enter the building go ahead and swipe their ID card, or enter their ID
code or whatever the case may be. And then as the door closes behind that
person, the perpetrator just slips in right behind them, gaining access to the
building. The analogy here is tailgating in a car. You're following too closely to
someone and that's what's happening in a tailgating attack in the physical world.
Objective 1.2
- [Instructor] The second objective in the threats, attacks, and vulnerabilities
domain is objective 1.2. Given a scenario, analyze potential indicators to
determine the type of attack. In this objective, what we're looking to do is
understand a lot of the different ways that adversaries might attack us and be
able to recognize those attacks when we see them in the wild. The first major
category of these indicators is malware, malicious software. There are many types
of malware that can impact our organizations, either by exploiting some
vulnerability that we have, or by tricking a user into installing malicious code.
Viruses are the original type of malware. These are pieces of malware that spread
from system to system when the user performs some action. Whether that's
inserting an infected USB drive into a computer, downloading software from a
malicious website, clicking a link in a malicious email. Or just some other thing
that allows software to gain a foothold on the system. Now there are a lot of
variants of malicious software besides the typical virus. Ransomware is a very
common one. And what this malicious software does is encrypt all the files on a
computer system, and then demand payment of a ransom in cryptocurrency,
usually in Bitcoin, before the attackers will provide the decryption key that the
victim needs to access their files again. Worms are very similar to viruses in that
they spread from system to system, but they don't require any action on behalf of
the user to do that spreading. Worms exploit vulnerabilities in different systems,
and use those vulnerabilities to spread. We also have Trojan horses, which are
pieces of malicious software that are disguised as something that we want, like a
game or some other application. But once we run the Trojan horse on our system,
it performs some malicious activity in the background. Logic bombs are a type of
malicious code that are embedded into some other software. In a logic bomb,
what happens is the programmer who created that software writes some logical
conditions that cause a malicious action to happen at some point in the future
when those conditions are met. Say, for example, that a programmer is no longer
employed by the organization. They might set up a logic bomb that notices that
their account no longer exists, and then performs malicious activity on other
systems. A lot of malicious software is designed to steal information from people.
And this software falls into the general category of spyware. Spyware is malicious
software that intends to spy on the activities of a user. This could be a keylogger,
where the spyware is watching every keystroke that the user types. Hoping to
capture a password or credit card number, or some other type of information. It
could also be a remote access Trojan, where the attacker can not only monitor
what's happening on a system, but they can also connect to that user system and
control it. That type of commanding control over user systems is very often a goal
of malicious software. And more generally, when an adversary is able to take
control of lots and lots of systems, we call those individual compromised systems
bots. And together, we call them a botnet. A collection of compromised systems
that the attacker is able to use to gain some type of advantage. That's an
overview of some of the main types of malware. The second category of indicator
that you need to know about for the exam are password attacks. Passwords are
the most common type of authentication that we use and they are vulnerable to
attack. That attack could just be guessing, a brute force attack that just guesses
every possible combination of passwords trying to find a match. Or the attack
could be more intelligent and use a dictionary attack, where we take common
passwords and then use those against accounts that we know exist. One of the
best defenses against password attacks is to use strong passwords and store them
securely. When we store them, we usually store them as the hash of a password.
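As a quick illustration of that idea, here's a minimal sketch, using only Python's standard library, of storing the hash of a password rather than the password itself. The function names and the choice of SHA-256 are my own assumptions for the example, not something prescribed by the exam.

```python
import hashlib

def hash_password(password: str) -> str:
    # Store only the hash of the password, never the plaintext.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def verify_password(password: str, stored_hash: str) -> bool:
    # Hash the login attempt and compare it to the stored hash.
    return hash_password(password) == stored_hash

stored = hash_password("Tr0ub4dor&3")
print(verify_password("Tr0ub4dor&3", stored))  # True
print(verify_password("wrong-guess", stored))  # False
```

A real system would also add a per-user salt and use a deliberately slow algorithm, which is one of the defenses against the rainbow table attack described next.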
And those hashes are vulnerable to a different type of attack called a rainbow
table attack. In a rainbow table attack, the attacker pre-computes hashes for a lot
of common passwords and then checks the password file to see if it includes any
of those hashes. When it finds a matching hash, it then knows the password that
corresponds to that hash. A more insidious type of password attack is credential stuffing. This attack depends upon the fact that people reuse the same passwords on different websites. It takes the username and password combinations exposed when one website is compromised, and then tries those same combinations on other sites. This is a way to take a password file from a pretty innocuous site, like your supermarket website, and then use it to gain access to really sensitive sites, such as a bank or other financial institution. A related attack, password spraying, works the other way around: the attacker tries a small handful of very common passwords against many different accounts, staying below the threshold that would lock any single account out. Attacks can also be physical. Physical
attacks can be done using a malicious cable, either a USB cable or maybe even a
flash drive that contains malicious software. And then when it's plugged into a
system, the malicious software is installed on the computer and infects it. Physical
attacks are also very common with credit card readers and ATMs, where the attacker fixes a fake card reader, called a skimmer, right on top of the device's real card reader. When somebody comes up to use the machine, their card slides through the skimmer and then into the actual machine. So the machine works properly, but
the attacker is able to read all the information off the card and use it to create
their own fake credit card with the same information on it. That type of attack is
called card cloning, where the attacker is making a copy of a card based upon
data that they've read with a skimmer. Attackers can use artificial intelligence
techniques against us as well, in an approach known as adversarial artificial
intelligence. In these attacks, they're either tainting the training data for a
machine learning algorithm, or they're attacking the security of the machine
learning algorithm itself. The last category of attacks we're going to look at in this
review are cryptographic attacks. These are attacks against encryption. The first
type of these attacks that I like to talk about is the collision attack. This happens
when we have two files that have the same exact hash value. Those duplicate
hash values are what we call a collision, and collisions can have dire consequences
for digital signatures and other technologies that depend upon those hashes. The
other type of cryptographic attack that you want to be familiar with is the
downgrade attack. In a downgrade attack, two parties, either a user and a server or two users, are using a strong encryption technology to communicate with each other, and an adversary tricks them both into downgrading to a less sophisticated form of encryption. That new, weaker encryption might then be vulnerable to
attack. Those are the major types of attack that you need to know about for
objective 1.2 on the Security Plus exam.
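To tie together the dictionary and rainbow table ideas from this objective, here's a small, self-contained sketch of the core trick: precompute hashes for common passwords once, then compare stolen hashes against that table. The password list and hash values are purely illustrative, and real rainbow tables use chains of hashes to save space; this simplified version is just a precomputed lookup table, but it shows why unsalted hashes of common passwords are easy to reverse.

```python
import hashlib

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Precompute hashes for a (tiny) dictionary of common passwords.
common_passwords = ["123456", "password", "letmein", "qwerty"]
rainbow_table = {sha256(p): p for p in common_passwords}

# Pretend this is a stolen file of unsalted password hashes.
stolen_hashes = [sha256("letmein"), sha256("correct horse battery staple")]

for h in stolen_hashes:
    if h in rainbow_table:
        print(f"Cracked {h[:12]}... -> {rainbow_table[h]}")
    else:
        print(f"No match for {h[:12]}...")
```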
Objective 1.3
- Objective 1.3 of the Security+ exam is that when given a scenario, you're able to
analyze potential indicators that are associated with application attacks. These
are attacks against software, ways that an adversary can exploit applications that
we use in our organizations to gain access to information and systems. One of the
common goals of this type of attack is privilege escalation, trying to exploit
vulnerabilities that allow you to take a normal user account and turn it into a
privileged administrator account. So let's review some of the types of application
attacks that exist out there. One is cross-site scripting or XSS attacks. In a cross-site scripting attack, the attacker fools a user's web browser into executing some
code that's written in JavaScript or another scripting language. This commonly
happens in a type of attack known as a stored or persistent cross-site scripting
attack. In these attacks, the attacker might post a message on a message board or
some other place that they can put web content. And if that website doesn't filter
out scripts, those scripts can then be relayed to other users who visit the site,
causing their browsers to execute the code. The code might then throw out a
popup window asking for their password, for example, hoping that some user
who visits the site is going to fall victim to the attack and type in their password,
which is then relayed to the attacker. Injection attacks are another very common
type of application attack. In an injection attack, the adversary looks for places on
websites where they can provide input, and then that input gets inserted into
some other type of command that's executed on the system or another server.
The most common type of injection attack is a Structured Query Language or SQL
injection attack. In this attack, the adversary carefully crafts input that's given to a
web application, knowing that that input is going to be used in a SQL query. They
can then cause the database to execute not only the intended query, but also
another command that the attacker provides as part of the malicious input. SQL
isn't the only technology that's vulnerable to injection attacks. These attacks can
also be used against dynamic-link libraries, or DLL files, the Lightweight Directory Access Protocol, or LDAP, and the Extensible Markup Language, or XML. Now
there are many other types of application attacks. One of those is directory
traversal, where the attacker uses directory navigation commands in a URL in an
attempt to work their way out of the portions of a server that are dedicated to
the web content that we're supposed to view, and look at other sensitive files
that are stored on the server in other locations. There are also overflow attacks
such as buffer overflows and integer overflows. These are also input based
attacks where an attacker finds a place where an application is expecting some
input, and then puts far more information in that field than should ever be
inserted there. This causes the location in memory, the buffer that's set aside for that field, to overflow and lets the attacker potentially execute their own malicious
commands on that server. Many of these application attacks can be mitigated
through the use of input validation. This is a control where when you're
developing a web application, you look at all the input that you're receiving from
users, and you make sure that it matches the type of input that you expect to see.
So for example, if you're asking someone for the number of children that they
have, the input that you get should be an integer number, and it should probably
be less than 20 and greater than or equal to 0. You shouldn't see negative
numbers or very long numbers or text written in that field. Using strong input validation protects against many types of application attacks.
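Here's a minimal sketch of that input validation idea, checking that a "number of children" field really is a whole number in a sensible range before it's used anywhere else. The function name and the exact bounds are just assumptions for illustration.

```python
def validate_children_count(raw_value: str) -> int:
    # Reject anything that isn't a whole number.
    if not raw_value.strip().isdigit():
        raise ValueError("Input must be a whole number")
    value = int(raw_value)
    # Reject values outside the range we would reasonably expect.
    if not 0 <= value < 20:
        raise ValueError("Number of children must be between 0 and 19")
    return value

for attempt in ["3", "-5", "2500", "DROP TABLE users"]:
    try:
        print(attempt, "->", validate_children_count(attempt))
    except ValueError as error:
        print(attempt, "-> rejected:", error)
```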
Objective 1.4
- Objective 1.4 of the Security+ exam says that when given a scenario, you need to
be able to analyze potential indicators associated with network attacks. So let's
talk about a few different categories of network attack. The first of these
categories are wireless attacks. Wireless networks are very vulnerable to attack
because they're intentionally broadcasting signals over a large area. So if an
attacker is able to get in range of a wireless access point, they have the ability to
at least attempt a wireless network attack. The first thing they might do is place
their own access point in an area where they think that legitimate users will
connect, and then give that wireless network the same name that authorized
users are expecting to see. So if I take a wireless access point and set it up at
Jones Corporation, and call the network Jones because I know that's the normal
network used by the company, I'm hoping I'm going to trick users of Jones
Corporation into connecting to my wireless access point, and then providing their
username and password, allowing me to steal their credentials. This type of attack
is called an evil twin attack, where I'm creating a malicious clone of a wireless
access point and installing it on a network myself. Another type of wireless attack
is the disassociation attack. This is an attack where the adversary sends signals
that force a user to disconnect from a wireless network. Then when the user
reconnects, the attacker can capture information about that authentication
session to try to gain access to the wireless network themselves. Wireless attacks
can also take place against Bluetooth connections that users are using to connect
peripherals to their smartphones and other devices. These include bluejacking,
which is a technique where the attacker can send messages to users through
Bluetooth, and bluesnarfing, where the attacker is actually able to hack into
someone's phone through a Bluetooth connection, and gain access to the
information stored on the device. Another type of network attack has a few
names. It's most commonly known as the man-in-the-middle attack, but now, it's
more generally known as the on-path attack. In this type of attack, the attacker
tricks a user into connecting to the attacker system, instead of to a third party
website that the user is trying to access. The attacker then becomes the person in
the middle. The attacker is in the middle of the connection between the user and
the legitimate server. The user connects to the attacker, and the attacker
connects to the server, so the attacker can act as a go-between between the user
and the server and they're able to monitor all the communications that take place
between the user and the remote system. Some network attacks happen at layer
two. This is the data link layer of the OSI model. And that's where the address
resolution protocol, or ARP works. ARP translates for network devices between
the media access control, or MAC addresses that are hard-coded into a network
interface, and the IP addresses that are used on modern networks. There are a
few different types of attacks that take place against MAC addresses in the ARP
protocol. First, MAC addresses are assigned to hardware devices when they're
manufactured, but they're very easy to modify. So if an attacker is able to clone
the MAC address of a legitimate device, they can then pretend to be that device
on the network. Attackers can also exploit the ARP protocol itself. One of these
attacks is called a MAC flooding attack. In a MAC flooding attack, the attacker
sends thousands of requests to a network switch trying to register different MAC
addresses with that switch. This causes the memory space that the switch has set
aside for storing MAC addresses to fill up and overflow. When that overflow
occurs, the switch starts to receive traffic for devices on the local network, and
instead of sending it directly to that device, it just broadcasts it to everybody on
the network, hoping that it reaches the end device and opening up an avenue for
eavesdropping attacks. The last type of attack that happens at layer two is ARP
poisoning. This is where the attacker sends forged ARP replies that inject false MAC address information into the ARP caches of devices on the network, leading them to believe that they're connecting to legitimate systems,
when in fact, they're connecting to malicious systems. Just like the ARP protocol,
which translates between MAC and IP addresses, is vulnerable to attack, so is the DNS protocol. The domain name system, or DNS, translates between the domain names that we commonly use, like linkedin.com, and the IP addresses that
computers need to use to communicate on the network. Attackers who are able
to disrupt DNS can redirect traffic from a legitimate site to their own. The most
common DNS attack is something called domain hijacking, where an attacker
manages to steal a domain name by hacking into an account at the service where the
company registered that domain name and then changes the password on that
account and the IP address where traffic from that domain should be sent. On a
smaller scale, DNS can be disrupted with DNS poisoning attacks, where the
attackers inject invalid DNS information into a DNS server on the victim's local
network, causing the victim to visit a malicious site instead of the legitimate site
that they want to visit. The last type of network attack we'll look at is the denial of
service attack. In a denial of service attack, the attacker just bombards a system
with traffic, hoping to overwhelm it so that it can't process any legitimate traffic
that it receives. The most common type of denial of service attack is the
distributed denial of service, or DDoS attack. In a DDoS attack, the traffic comes
from all over the place. Thousands of systems are all bombarding the victim with
traffic so the victim system is not only overwhelmed, but it's also not easy to
block the traffic because it's coming from so many different places. The most
common way that attackers engage in DDoS attacks is to use a botnet of
compromised systems that they've already gathered that are located all around
the world, and then instruct those systems to bombard their victim with traffic.
Those are the most common types of network attacks that you'll need to be
familiar with when you take the Security+ exam.
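One way to picture the MAC flooding attack described above is with a toy simulation of a switch whose MAC address table has limited space and that falls back to broadcasting frames once the table is full. This is purely a conceptual sketch of my own, not how any particular switch is implemented.

```python
class ToySwitch:
    def __init__(self, table_size: int):
        self.table_size = table_size
        self.mac_table = {}  # MAC address -> port

    def learn(self, mac: str, port: int) -> None:
        # A real switch learns source MAC addresses from incoming frames.
        if len(self.mac_table) < self.table_size:
            self.mac_table[mac] = port

    def forward(self, dest_mac: str) -> str:
        if dest_mac in self.mac_table:
            return f"unicast to port {self.mac_table[dest_mac]}"
        # Entry missing because the table overflowed: flood to every port,
        # which lets an eavesdropper on any port see the traffic.
        return "flooded to all ports"

switch = ToySwitch(table_size=4)
switch.learn("aa:aa:aa:aa:aa:01", 1)      # learned before the attack

# Attacker floods the switch with bogus source MAC addresses.
for i in range(10_000):
    switch.learn(f"02:00:00:00:{(i >> 8) & 0xff:02x}:{i & 0xff:02x}", 9)

switch.learn("aa:aa:aa:aa:aa:02", 2)       # table is full, cannot be learned

print(switch.forward("aa:aa:aa:aa:aa:01"))  # unicast to port 1
print(switch.forward("aa:aa:aa:aa:aa:02"))  # flooded to all ports
```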
Objective 1.5
- Objective 1.5 on the Security+ exam is that you must be able to explain different
threat actors, threat vectors, and intelligence sources. So let's talk about those.
We'll start with threat actors. These are the adversaries who are trying to
undermine the confidentiality, integrity, and availability of our information
systems. Threat actors come in a variety of different forms. They can either be
internal to our organization, such as the insider threat, individuals who work with
us and for us who are actually trying to engage in attacks against us. Or they can
be external threats. People on the outside who are trying to gain access to our
systems for a whole variety of different reasons. Threat actors also differ based
upon their level of sophistication. We have some who are not very skilled at all.
These are script kiddies, people who are just taking exploits developed by others
and running them against targeted systems. Usually without a lot of focus on who
they're targeting but just trying to execute code that they didn't necessarily
create themselves. At the other end of the spectrum, we have the advanced
persistent threat, or APT. APTs are really determined attackers with a lot of skill
and talent and sophistication. They're often sponsored by nation states or
criminal syndicates or people with a lot of resources and funding to put behind
those attacks. Advanced persistent threats can engage in very, very sophisticated
attacks that are custom developed to take down a particular target. And different
threat actors are motivated by different things. For example, those advanced
persistent threat actors we just talked about are sponsored by nation states and
they're motivated by political or military and intelligence goals. On the other
hand, if they're sponsored by a criminal syndicate, the motivation is financial.
They're looking for ways to exploit the access that they've gained to steal money
or services. Attackers can also be motivated by ideology. So we might have a
hacktivist. That's someone who engages in hacking in order to spread their own
message of either speaking out against the government or for a political cause.
Hacking activity isn't always malicious. There are three different ways this can
take place. We might have hackers who are authorized, in the past called white
hat hackers, who are sponsored by an organization. They're either employees or
consultants or contractors who are going out and conducting penetration testing
in an attempt to hack into an organization's systems to expose any vulnerabilities
that might exist so they can be fixed. At the other end of the spectrum, we have
what used to be called black hat hackers, who are unauthorized. They were not
given permission. And they're going out there with malicious intent. Now in the
spectrum in between authorized and unauthorized, there's a lot of gray area of
semi-authorized attacks. These are cases where the individual might not have
explicit permission to engage in this type of activity, but they're doing it with good
intent. They're trying to find vulnerabilities in systems. And then they're going to
tell the target about those vulnerabilities when they discover them. Attackers can
use a variety of different threat vectors when they're trying to gain access to a
system. Now they might have direct access where they can walk up to a system
and touch it. And then they're able to engage in their attack that way. Or they
might conduct their attack over a wired or wireless network. They might engage
in attacks through email using phishing and other social engineering techniques to
reach into an organization and try to get people to unwittingly assist them in their
attack. Threats can also be delivered through removable media and the cloud, or
even injected directly into an organization's supply chain by corrupting suppliers
or devices before they even reach the organization's network. For a cyber security
team to protect against these different threat actors and threat vectors, they
really need to understand how the threat environment is changing over time.
There are new threats every day and threat intelligence allows us to stay on top
of how the threat environment is evolving and design our own security controls in
a way that protects against these evolving threats. There are a lot of different
sources of threat intelligence. We can use open source intelligence or we can use
proprietary or closed source intelligence that we purchase. There are databases
of vulnerabilities out there and there are even information sharing centers, called
ISACs, that bring together the public and private sector to share information. We
can also look for threat intelligence on the dark web, trying to see mentions of
our organization and our systems on hacking forums and in other places where
these topics are discussed. As you're conducting threat intelligence research, you
can use a variety of different sources. These might include vendor websites,
vulnerability and threat feeds, conferences, academic journals, the request for
comments or RFC documents that define how protocols and services work, social
media, local groups. All of these things are designed to help you get a strong
understanding of the tactics, techniques, and procedures used by your
adversaries so that you can build better defenses against them.
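One concrete way threat intelligence gets put to work is comparing indicators of compromise from a feed against your own logs. Here's a minimal sketch; the IP addresses and log lines are made up for illustration, and a real implementation would pull the indicators from a commercial feed, an ISAC, or an open source list.

```python
# Indicators of compromise, e.g. known-bad IP addresses from a threat feed.
bad_ips = {"203.0.113.45", "198.51.100.7"}

firewall_log = [
    "2024-03-01 10:02:11 ALLOW 10.0.0.12 -> 93.184.216.34:443",
    "2024-03-01 10:02:15 ALLOW 10.0.0.12 -> 203.0.113.45:8080",
    "2024-03-01 10:02:19 DENY  10.0.0.99 -> 198.51.100.7:22",
]

for line in firewall_log:
    if any(ip in line for ip in bad_ips):
        print("Possible indicator of compromise:", line)
```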
Objective 1.6
- Objective 1.6 on the Security+ exam is that you must be able to explain the
security concerns associated with different types of vulnerabilities. You need to
be familiar with vulnerabilities that occur both on-premises and in the cloud
because different types of risks that your organization faces can alter the way that
you develop, implement, and maintain security controls. Now, one of the most
serious types of vulnerabilities you might encounter is the zero-day vulnerability.
It's given this name because the vulnerability itself is brand new. When an
adversary discovers a new vulnerability, there's a window of time where they are
the only people who know about that vulnerability, and they can exploit it very
easily because the vendor of the service or system that has the vulnerability
doesn't know about it yet, so they haven't been able to develop a security patch,
making it very difficult for organizations to defend against a zero-day attack. Zero-day attacks are a hallmark of the advanced persistent threat, or APT. These are
state sponsored, very sophisticated hacking organizations that because of their
level of sophistication are able to identify vulnerabilities themselves, and then
keep those vulnerabilities secret for a long period of time while they exploit them
as zero-day vulnerabilities. Vulnerabilities can also arise from weak configurations
of systems and software. These could be open permissions, insecure root or
administrative accounts, errors that display too much information and allow
someone to gain intelligence about how to hack into a system, the use of weak
encryption or insecure protocols, leaving default settings enabled that present
security vulnerabilities, or leaving too many open ports and services on a system. This increases the attack surface and provides an attacker with a lot of different
ways that they could gain access to a system. Another source of vulnerabilities is
improper or weak patch management. Now, while zero-day attacks can exploit
brand new vulnerabilities, most vulnerabilities that exist out there are ones that
have been known about for a long time. And technology professionals have the
tools to protect against them by applying security patches and updating their
configurations. But sometimes, they just don't get around to it, and that leaves
their systems open to attack. We need to make sure that we fully patch, not only
our operating systems and applications, but also the firmware of network devices
and other embedded systems. When we use legacy platforms, that also creates
vulnerabilities for us. If those systems are no longer being maintained by their
manufacturer, then they're not receiving security updates. And when new
vulnerabilities arise, it's very, very difficult, if not impossible, to correct them. We
also need to consider third-party risks to our organization because we use so
many different vendors to assist us in getting our IT work done. These vendors
range from hardware and software providers to the cloud service providers that
are so integral to the way that we work today. Organizations need to have strong
vendor management practices that monitor their vendors and their entire supply
chain, looking for places where security vulnerabilities might arise and then
addressing them. So we have vulnerabilities on-premises and in the cloud due to
weak configurations, zero-day attacks, third-parties, missing patches. And what all
this boils down to is that we need to have a strong vulnerability management
program in place. Because if we don't address these vulnerabilities, they can have
a significant impact on our organization. These impacts might include the loss of
data through data breaches and exfiltration, identity theft, financial or
reputational damage to our organization, or the disruption of business activities
through attacks that cause a loss of availability of systems.
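Since open ports and services increase the attack surface, here's a small sketch of checking which common ports are listening on a host, using only Python's standard library. The host and port list are just examples, and you should only run something like this against systems you own or are explicitly authorized to assess.

```python
import socket

host = "127.0.0.1"                 # example: check the local machine
ports_to_check = [22, 80, 443, 3389, 8080]

for port in ports_to_check:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 if the TCP connection succeeds.
        if sock.connect_ex((host, port)) == 0:
            print(f"Port {port} is open")
        else:
            print(f"Port {port} appears closed or filtered")
```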
Objective 1.7
- [Instructor] Objective 1.7 of the Security+ exam is that you be able to summarize
the techniques used in security assessments. We're going to break this up into
three major categories. The first of these categories is vulnerability scanning.
Vulnerability scanning is the core of security assessments. It's also one of the
most important security assessment tools. What vulnerability scans do is they use
technology to automate checking systems on your network for vulnerabilities and
then provide you with important information about what you need to fix to keep
your network secure. There are some different ways that we can categorize
vulnerability scans. First, we can look at what those scans are targeting, either a
network, applications, or web applications. Network vulnerability scans reach out
and probe systems on the network, looking across IP addresses, trying to find
systems that contain vulnerabilities. Application scans target the software that we
run, looking at whether there are opportunities for buffer overflows or other
issues in our source code that could create an avenue for an attacker to break
into our systems. Web application scans look specifically at applications that run
over the web. These scans look for web-specific vulnerabilities, including SQL
injection, cross-site scripting, and cross-site request forgery. So that's the first way
that we can categorize vulnerability scans, by their target. The second way we can
categorize them is by the type of access that they have. And this boils down to
whether they're credentialed scans or non-credentialed scans. In a credentialed
scan, we give the scanner a username and password that it can use to access the
systems that are being scanned. This allows the scan to reach more deeply into
the target systems and analyze their configurations, giving the scan extra
information to be able to uncover vulnerabilities. Non-credentialed scans on the
other hand don't have that access. They don't have credentials to reach into
systems so they're only looking at them from an external perspective. They're
trying to find what an attacker who doesn't have credentials yet would see as
they're trying to break into a system. The third way that we can categorize
vulnerability scans is whether they're intrusive or non-intrusive. Intrusive scans
have the potential to actually disrupt the system. The scan itself can cause
problems for the environment. These scans are the most accurate because
they're trying all possible types of vulnerabilities but they could also cause a
system outage. So we want to be really careful about how we use intrusive scans
and make sure that we coordinate with system owners to know that the scan isn't
going to disrupt important operational activity. Non-intrusive scans on the other
hand limit the types of exploits being tested so that they don't accidentally bring
down a system. Vulnerability scanners all work off of a database of vulnerabilities
and many share a common database called Common Vulnerabilities and
Exposures or CVE. CVE is a centralized database of vulnerabilities from all sorts of
operating systems, applications, network devices, and other components. And it's
shared among many different vulnerability scanners. CVE provides each
vulnerability with a number that can be used to cross-reference the results from
different scans. These vulnerabilities are also rated according to their impact, how
significant they are, how easy they are to exploit, and the potential damage that
could be caused if they are exploited. These ratings are evaluated using a shared
system called the Common Vulnerability Scoring System or CVSS. So when we
look at a vulnerability, we often refer to its severity by using its CVSS score. The
last thing we want to talk about with vulnerability scans are the results of those
scans. Each time a vulnerability scanner reports a vulnerability that's either a real
vulnerability that exists on a system, a situation known as a true positive, or in
some cases the vulnerability scanner might report a vulnerability that doesn't
actually exist. That's an error and it's a situation known as a false positive report.
Other times the vulnerability scanner doesn't report that an issue exists. If in
reality, there is no issue, that's correct and it's a situation known as a true
negative. On the other hand, if there is a vulnerability on a system and the
vulnerability scan reports that there is no vulnerability, that's a false negative
report. So of course we want to tune our vulnerability scans so that we have true
positives and true negatives and not false positives and false negatives. The
second type of security assessment we need to talk about is threat hunting.
Threat hunting follows the presumption of compromise. That just means that we
assume that an attacker has already gained access to our system and then we go
around our network hunting for signs of their presence. We can use a lot of
different sources of information to find these indicators of compromise. Those
sources include threat feeds, advisories and bulletins from the government and
vendors, and we can look for signs of known attackers that tell us that somebody
has already compromised our network. The third category of tools that fall under
this objective are the tools that help us do log analysis. Most organizations use a
Security Information and Event Management or a SIEM system to bring together
all of the different log entries that are being generated by security tools,
applications, and operating systems, and correlate those entries to detect signs of
an intrusion or other security event. The important thing that a SIEM does is
aggregate all of those different logs and then discover patterns that exist across
different log sources in a way that we wouldn't see if we were just looking at the
logs from one system. We can take this a step further and actually automate our
responses to those events and have workflows that kick off in response to
detections by a SIEM. When we do that, we move from SIEM technology to a
more advanced technology called Security Orchestration, Automation, and
Response or a SOAR. We often hear about these tools together, SIEM and SOAR.
You can just think of them as the SIEM system is performing correlation and
analysis and identifying that an issue exists. And then the SOAR system is
automating the response to that issue.
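To illustrate the kind of cross-source correlation a SIEM performs, here's a toy sketch that aggregates authentication events from several systems and flags an account with repeated failures across different machines, a pattern you might miss looking at any single log. The log format and threshold are invented for the example.

```python
from collections import Counter

# Log entries aggregated from several different systems.
events = [
    ("web01", "alice", "login_failed"),
    ("web02", "alice", "login_failed"),
    ("db01",  "alice", "login_failed"),
    ("web01", "bob",   "login_success"),
    ("vpn01", "alice", "login_failed"),
    ("vpn01", "alice", "login_success"),
]

failures = Counter(user for _, user, action in events if action == "login_failed")

THRESHOLD = 3
for user, count in failures.items():
    if count >= THRESHOLD:
        systems = {host for host, u, a in events if u == user and a == "login_failed"}
        print(f"ALERT: {count} failed logins for '{user}' across {sorted(systems)}")
```

A SOAR platform would then take the next step and automate the response, for example disabling the account or opening a ticket.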
Objective 1.8
- The final objective of the threats, attacks, and vulnerabilities domain is objective
1.8, explain the techniques used in penetration testing. In a penetration test, the
testers adopt the tactics, tools, and procedures of an actual hacker. We assume
the mindset of the hacker and try to put ourselves in their shoes and then use the
techniques that they would use to try to actually gain access to our own systems.
These go beyond security assessments, because we're actively trying to break into
systems and exploit vulnerabilities in our own security. When we conduct
penetration testing, we can use three different methods. These are known
commonly as black box, white box, and gray box testing. In a black box test, or an
unknown environment test, the tester doesn't have access to any inside
information about the systems or networks being attacked. They simply get
pointed at the company and then are given free rein to go and explore and try to
figure out things on their own. These black box tests very closely approximate a
real attack because an attacker wouldn't likely have inside information. A white
box test, which is also called a known environment test, provides the testers with
a lot of information about the systems and networks that are being tested. This
approach has the advantage of speeding up the test by skipping the discovery
phase. It gives the testers access to a lot of information that they can use to begin
identifying vulnerabilities that they might exploit during their test. The final type
of test is a gray box test, which is also called a partially known environment test.
These tests lay somewhere in the middle between white box and black box tests.
The attackers are given access to some information, but they don't have the
complete details of the environment that's being assessed. During a penetration
test, the testers are actually trying to exploit real vulnerabilities on the target
systems, and those exploits can have negative impacts on the organization's
operations. Because of this, it's really important to define the rules of
engagement in advance of the test and make sure that the testers know what
they are and are not allowed to do, and the procedures that they should follow if
they discover a vulnerability or an actual attack in progress. As they are engaging
in penetration testing, penetration testers are going to use the same tactics that
hackers use. They're going to try to gain initial access to a system and then move
laterally around the network once they have that foothold, finding other systems
that they can exploit. Then they're going to try to perform privilege escalation
attacks that take them from having normal user access to having administrative
access. Once the testers gain a strong foothold on the network, they're also going
to try to establish persistence. That means that they'll set up ways that they can
later regain access to systems, even if the original path that they used to gain that
access is now closed. Then, at the end of the test, they clean up and restore
normal operations. After all, this is a test that's being performed by the
organization so we want to make sure that when we're done, we leave the
organization in a secure state. Penetration testers also perform a lot of
reconnaissance, especially when they're performing a black box test. They have to
go and figure out how the organization is structured and what types of systems
exist. This may involve performing physical reconnaissance, walking around and
looking at a facility trying to figure out who and what is entering the facility. As
they do this, they might even use drones. They might use a technique called war
driving or war flying, where they either drive a car or fly a drone with wifi
antennas on it, to try to figure out what wireless networks are present around the
facility. These are all different physical reconnaissance techniques that
penetration testers might use. But reconnaissance can also be electronic, using
footprinting to perform scans and try to figure out what systems are present on a
network. Attackers will basically use any information available to them, whether
it's from reconnaissance that they're performing themselves, or whether it's from
open source intelligence, just looking around the internet, trying to learn
everything they can about the organization that they're targeting. The last
important assessment tools covered under this objective are cybersecurity
exercises. These exercises provide security teams with experience in handling real
world incidents. During a cybersecurity exercise, the players are organized into
teams. The red team are the attackers. They're the ones who are on the offensive.
They're basically performing a real life penetration test, trying to gain access to
systems. The blue team is the defense. They're trying to prevent the red team
from gaining access to systems by actively monitoring and updating security
controls in real time. There's also usually a white team that serves as the sort of
referees of the exercise. They're moderating and making sure that everybody is
playing within the rules of engagement and keeping the exercise moving along. At
the end of an exercise, it's common for teams to come together as a purple team,
bringing together the red team, blue team, and white team to debrief on what
they discovered during the exercise, what they learned, and maybe even sit
beside each other and watch each other engaging in offensive and defensive
tactics, because after all, the whole purpose of a cybersecurity exercise is for
everybody to learn, so they learn by doing as members of the red team or blue
team, but they also learn from each other as members of the purple team.
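As an example of the electronic footprinting mentioned in this objective, here's a sketch that tries to resolve a handful of common hostnames for a target domain to see which systems exist. The domain and hostname list are placeholders, and as with any reconnaissance, this should only be run against an organization that has authorized the test.

```python
import socket

domain = "example.com"             # placeholder: the authorized target
candidates = ["www", "mail", "vpn", "ftp", "dev"]

for name in candidates:
    hostname = f"{name}.{domain}"
    try:
        address = socket.gethostbyname(hostname)
        print(f"{hostname} -> {address}")
    except socket.gaierror:
        print(f"{hostname} does not resolve")
```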
Objective 2.1
- [Instructor] The second domain of the Security+ exam is architecture and
design. This domain has eight objectives. The first of those is that you be able to
explain the importance of security concepts in an enterprise environment. You'll
need to know the importance of configuration management. These are the tools
and processes that we use to gather and monitor the configuration of systems,
software, and devices in our organization and making sure that those
configurations comply with our security standards. We can use a number of tools
to help us with this process. First, we use a lot of diagrams. These are just pictures
of the environment showing us how systems are set up and configured. They're a
really important tool, and they're probably the first place that most IT
professionals go when they're looking to understand a system. We also use
baseline configurations for operating systems, applications, and devices. These
baseline configurations are the standard security settings that we use across all of
our systems. Using these baselines allows us to then compare running systems
against the baseline and look for deviations. Places where settings have changed
and are deviating from our security standards. That gives us the opportunity to
investigate and correct those issues. As we perform configuration management,
we should try to standardize the way that we name and address systems. This
involves using standard naming conventions so that our systems all have names
that explain what they do and where they're located on the network, and
standard IP address schemes that help us identify the location of systems by their
IP address. Data protection is another crucial security concept. This is the
collection of actions that we take and tools that we use to ensure that our data
remains safe. Data loss prevention or DLP tools are an important data protection
technique. They monitor systems and networks for signs that someone or some
process is trying to take sensitive information outside the organization. They do
this by looking for data that's in motion on our network and being moved outside
the organization that might be in violation of our security policies. Some of the
other tools we can use for data protection are designed to obscure the meaning
of data. We can use encryption to do this where we're using mathematical
techniques to encrypt data in such a way that it can't be viewed without access to
the appropriate decryption key. We can also mask data. This is taking our data
and simply x-ing out sensitive parts. So for example, if we have 16 digit credit card
numbers stored in our system, that's really sensitive data that we probably don't
want to keep accessible. So we might use masking by x-ing out the first 12 digits
of those credit card numbers, leaving only the last four for identification
purposes. When we're thinking about how we protect data, we want to consider
data in three different states. Data at rest, which is data that's being stored
without actively being used. Data in motion, which is data that's in transit over a
network and data in processing, which is data that's in memory and actively being
used by a system. The next essential security concept we need to consider is data
sovereignty. This is looking at the geographic considerations around where our
data is being kept and what jurisdictions have authority over that data. Data
sovereignty becomes especially important in the world of cloud computing, where
we might be making use of data centers located around the world. Data
sovereignty says that we need to be careful about where we store our data and
know the laws and regulations that apply based upon the locations of that
storage. We also need to consider site resiliency as we're considering security in
an enterprise environment. This means that we want to have backup places to
process our data. We can have hot sites, cold sites, and warm sites. A hot site is a
data center that's set up and ready to go. It has everything in place. Electricity,
cooling, networking, systems, and data. So the hot site can pick up processing for
our organization at a moment's notice. Hot sites are the most effective type of
alternate processing facility but they're also the most expensive. At the other
extreme, we can have cold sites. These are the least effective but also the least
expensive type of facility. A cold site is a data center that has basic utilities and
network connections in place but it has none of the systems or data. When we
want to activate a cold site, we need to install and configure systems and get the
data loaded. That's going to take weeks in order to get a cold site up and running.
Warm sites are somewhere in the middle. We have systems that are ready to go
and we might even have applications loaded but we need to load our data before
we activate the warm site. So it's still going to take some time but while a cold site
might take weeks to activate, a warm site might just take hours or days. The last
topic I want to review in this domain are deception and disruption tools. These
are ways that we can try to confuse our adversary. The most common of these is
the honey pot. This is a system that's set up to look like an attractive target for an
attacker. The honey pot might have an attractive name like accounting server, or
it might have files stored on it that look like they contain sensitive information.
But in reality, the honey pot is a system that's carefully instrumented and
designed to detect attack attempts. There's no real sensitive data on the system
and the honeypot has no purpose other than attracting attackers and then tipping
us off to their presence. Honey nets are entire networks of honeypot systems.
Honey nets have no legitimate purpose on our network other than to detect
hacking activity. So if we see people trying to connect to addresses on a honey
net, we know that they're most likely engaging in some type of malicious activity.
We can also have honey files. These are files that look like they contain sensitive
information but they have no legitimate use. And they are embedded in our other
file systems. Just like with honey pots and honey nets, we watch for attempts to
access those honey files and then investigate those attempts as potential sources
of malicious activity.
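Here's a minimal sketch of the data masking idea from earlier in this objective, X-ing out all but the last four digits of a 16-digit card number. The function name and input formats handled are my own assumptions for the example.

```python
def mask_card_number(card_number: str) -> str:
    # Keep only the last four digits for identification purposes.
    digits = card_number.replace(" ", "").replace("-", "")
    return "X" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1234"))  # XXXXXXXXXXXX1234
```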
Objective 2.2
- [Instructor] Objective 2.2 of the Security Plus exam is that you be able to
summarize virtualization and cloud computing concepts. In this domain, we cover
a wide variety of topics that are really important to cloud computing. The first of
those is virtualization. As you learned in the video course, virtualization is the core
technology that makes cloud computing work. Virtualization allows us to run
many different guest virtual computers on a single hardware platform, and it does
this by using a hypervisor. A hypervisor is just software that serves as the
middleman between the guest operating systems and the actual hardware, and
isolates them from each other. Virtualization allows us to have massive
computing at scale in the cloud. When we use the cloud, we think about different
ways that it can be delivered to us, the environments in which we can operate.
The most common thing we usually think of is the public cloud. The public cloud is
just when a cloud service provider makes cloud services available to anyone who
wants to use them. The public cloud relies upon the multi-tenancy model, where
many different customers can be sharing the same physical hardware, but have
their guest operating systems isolated from each other. The opposite of the public
cloud is a private cloud. And in a private cloud, we have the same benefits of
being able to use multiple operating systems and share resources across guests.
But all of the hardware is dedicated to a single customer, usually in our own data
centers where we have somebody manage it for us. Hybrid cloud combines
resources from both public and private cloud environments. Many organizations
create hybrid clouds, where they have public cloud operations along with their
own private cloud, and then shift workloads back and forth. There are different
types of services that can be delivered through the cloud. In an infrastructure as a
service environment, the cloud service provider is giving us the core building
blocks of computing. Virtual server instances, storage, networking, all of the
components that we can put together to build our own cloud services.
Infrastructure as a service is where the customer does the most work of any cloud
service, but it also gives us the most flexibility as a result. In software as a service,
the service provider is doing most of the work because they're providing a fully
developed application to us that we can use in the cloud. The provider manages
everything and we just access the application, usually over the web. In the middle
is platform as a service. Platform as a service is an environment where we can
write code, and then give it to a cloud service provider to have them execute that
code for us. One sub-category of platform as a service is function as a service
computing, or serverless computing. In function as a service computing, we create
discrete code functions and then have our cloud providers execute those
functions for us in response to an event or on a planned schedule. In addition to
using these types of cloud services, we also rely on a variety of other managed
service providers and managed security service providers who can run portions of
our IT infrastructure for us. Whether that infrastructure is on premises, or off
premises in the cloud. The cloud also enables us to use a model known as
infrastructure as code. Where instead of creating servers or other infrastructure
elements by hand as we need them, we instead write code that creates those
elements for us. The benefit is that we can reuse that code if we need to recreate
those services in the future. This is very commonly done in infrastructure as a
service environments. And it's also what allows us to start moving towards
software defined networking, SDN, where we can reconfigure our network by
writing code. And have the network even reconfigure itself in response to events.
Those are some of the ways that virtualization and cloud computing play an
important role in the world of cybersecurity.
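As a taste of the infrastructure as code idea, here's a hedged sketch that creates a virtual server programmatically rather than by hand in a console. It assumes the third-party boto3 library, valid AWS credentials, and a placeholder machine image ID; other cloud providers have equivalent APIs, and tools like Terraform express the same idea declaratively.

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a small virtual server instance from code instead of a console.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "iac-demo"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

Because the same script can be run again later, the infrastructure it describes can be recreated on demand, which is exactly the benefit of infrastructure as code described above.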
Objective 2.3
- [Instructor] Objective 2.3 of the Security Plus exam is that you be able to
summarize secure application development, deployment, and automation
concepts. Let's begin by talking about the different environments where code
might exist. First, we have development environments. This is where
programmers actually begin to create code. They're actively working on code in a
development environment that's isolated and dedicated for this purpose. You
never want to have developers working on code that's actually being used by end
users, because the developers might make mistakes as they're performing their
development work. The development environment provides a sandbox where
developers can perform their day to day work. Once a developer finishes their
work it's time to test that code. The code then moves from the development
environment to the test environment, where developers, quality assurance
personnel, and end users can make sure that the code works properly before it
moves on to being actively used. When testing is complete, the code moves from
the test environment to a staging environment where it's configured and
prepared for active use. Once it's ready to go, the code that's in the staging
environment moves to the production environment. The production environment
contains the code that's actively being used on a day to day basis. As developers
create code they need to use secure coding techniques. These are time tested
software development practices that promote good code that's secure and
efficient. One of the most important concepts in this area is the reuse of code.
Making sure that developers don't write the same code over and over again.
Instead, commonly used code can be placed in shared libraries where different
developers can access it, and leverage each other's work. When working with
databases, it's good practice for developers to normalize their databases.
Database normalization is just a set of best practices for how we organize the
data in a database to avoid redundancy and other issues that might arise. When accessing databases from code, the use of stored procedures and parameterized queries helps avoid SQL injection attacks. Developers also need to be sensitive to memory management, making sure that they're allocating memory and using it appropriately to avoid buffer overflow attacks, where an attacker attempts to put more data into a memory location than is allocated for its use, in an attempt to get the system to execute code that it shouldn't be executing.
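To illustrate that last point about database access, here's a minimal sketch using Python's built-in sqlite3 module; the table and the attacker-supplied value are made up for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # attacker-controlled value

    # Vulnerable: the input is pasted into the SQL text, so the OR '1'='1'
    # clause changes the meaning of the query.
    rows = conn.execute(
        "SELECT role FROM users WHERE username = '" + user_input + "'"
    ).fetchall()

    # Safer: a parameterized query treats the input purely as data,
    # so the injection attempt matches nothing.
    rows = conn.execute(
        "SELECT role FROM users WHERE username = ?", (user_input,)
    ).fetchall()

Developers working on web applications should pay careful attention to the standards and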
documentation published by a group called the Open Web Application Security
Project, or OWASP. In particular, OWASP publishes a list of the top 10
vulnerabilities in web applications that provides developers with a great roadmap
for avoiding the most common and most serious security issues that affect web
applications. Version control is another key software development security concept. When a lot of developers are working on the same code, it can become
very confusing what the current version of that code is. Version control
techniques try to sort all of this out. They use code repositories and other tools to
allow developers to check out code that they're working on, and then check that
code back in when they're finished with their work. This approach allows for the
orderly and consistent modification of code by different developers without
having them step all over each other. Mature IT organizations also benefit from
the use of automation and scripting. Automating courses of action allows IT teams
to become much more efficient in the way that they work. Instead of having to
manually configure things, they can rely upon automation to perform repetitive
tasks in a consistent manner. Some of the automation tools they might use
include continuous monitoring, continuous validation, continuous integration,
continuous delivery, and continuous deployment. The final two concepts we need
to talk about related to automation are scalability and elasticity. These are
measures of how a system responds to changing demand. Scalability means that
we design systems that are capable of expanding as the demand on those systems
increases. To achieve scalability, we might add additional memory or CPUs to a
server or add additional servers to a pool of servers in order to allow the system
to scale up as it faces increased demand. Elasticity builds upon scalability by also
adding the capability to scale back down again when the demand decreases.
Elasticity is very commonly found in cloud applications, because it allows us to
make optimal use of our resources. The environment grows when necessary to
meet increased demand, but when that demand decreases the environment then
shrinks so that we're not using and paying for resources that we no longer need.
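As a rough sketch of that scaling logic, assuming made-up thresholds and server counts, an elastic scaling decision might look like this in Python.

    def desired_server_count(current_servers, cpu_utilization,
                             scale_up_at=0.80, scale_down_at=0.30,
                             minimum=2, maximum=10):
        # Scaling up as demand grows is scalability; also scaling back
        # down when demand falls is what makes the system elastic.
        if cpu_utilization > scale_up_at and current_servers < maximum:
            return current_servers + 1
        if cpu_utilization < scale_down_at and current_servers > minimum:
            return current_servers - 1
        return current_servers

    print(desired_server_count(4, 0.92))  # busy period: grow to 5 servers
    print(desired_server_count(4, 0.10))  # quiet period: shrink to 3 servers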
Objective 2.4
- [Instructor] Objective 2.4 of the security plus exam is that you be able to
summarize authentication and authorization design concepts. In the world of
security, we often talk about the concept of triple A. That's authentication,
authorization, and accounting. When a user tries to gain access to a system, the
first thing we do is authenticate that user. We ask them to prove their claim of
identity through a password or some other means. Once we're confident that
they are who they claim to be, we then perform authorization to determine what
that person is allowed to do. And then finally after granting them the ability to do
something, we perform accounting, which is tracking what the user does and
keeping a record so that we can later look back and see what actions took place.
Together these three triple A activities of authentication, authorization, and
accounting form the core of identity and access management programs. There are
three common ways that users can authenticate themselves to systems. The first
of these is something you know. The most common example of something you
know is a password. Answers to security questions and pin codes also fit into this
category. These are facts that are in the user's memory and are then repeated to
the authentication system as proof of the user's identity. The second way that
users can authenticate to a system is something you have. This is some device
such as a smartphone, a key fob, or a smart card that the user produces in order
to prove their claim of identity. And the third way that users can authenticate to
systems is biometric authentication or something you are. This is by using some
characteristic of their body to prove their identity to a system. This could be as
simple as a fingerprint scan or it could be a retinal or iris scan of their eye. We can
use facial recognition or voice recognition or even things like analyzing the
pattern of veins in a hand or the way that a person walks. All of these biometric
authentication techniques provide very secure authentication. Strong
authentication systems use a technique called multi-factor authentication or MFA
where we combine two or more authentication techniques that represent two or
more different factors. For example, we might take a password, something you
know, and combine it with a fingerprint scan, something you are. We also might
combine a password, something you know with a smart card, something you
have. Either of these approaches provides strong authentication. If an attacker
steals a user's password, they're still going to have to have access to that user's
smart card or fingerprint in order to gain access to the system. One of the most
important things that you can remember about multifactor authentication is that
the authentication techniques have to represent two different factors. Something
you know, something you have, and something you are. For example, if we
combine a password, which is something you know, with a pin or the answers to
security questions, which are also something you know, that is not multi-factor
authentication, because we haven't brought in something that you have or
something that you are. There are also other ways that we can add assurance to
the authentication process by using different attributes. In addition to the
something you know, something you have, and something you are factors, we can
bring in information such as where the user is or how they behave to add
additional context to the authentication process.
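Here's a minimal sketch of that rule in Python. The factor labels are illustrative, and a real system would verify each authenticator before counting it.

    FACTOR_OF = {
        "password": "something you know",
        "pin": "something you know",
        "security_question": "something you know",
        "smart_card": "something you have",
        "key_fob": "something you have",
        "fingerprint": "something you are",
        "iris_scan": "something you are",
    }

    def is_multifactor(authenticators):
        # True only if the presented authenticators span two or more factors.
        factors = {FACTOR_OF[name] for name in authenticators}
        return len(factors) >= 2

    print(is_multifactor(["password", "fingerprint"]))        # True
    print(is_multifactor(["password", "security_question"]))  # False: both are something you know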
Objective 2.5
- [Instructor] Objective 2.5 of the Security Plus Exam is that, when given a
scenario, you'll be able to implement cybersecurity resilience. Resilience is the
ability of a system to withstand potential sources of disruption. When we conduct
resilience activities, what we're doing is trying to make our systems stronger so
that they're able to face all those threats that are out there in the world without
being disrupted. One of the key elements of resilience is redundancy. By taking
systems and their components, and having multiple backup parts, we protect
ourselves in the event that one of those components fails. Let's talk about four
ways that you can implement redundancy. The first of these is geographic
dispersal. Geographic dispersal spreads our systems over a large geographic area.
For example, if we have four web servers providing access to our organization's
website, we might place one of those servers in New York, another in Los Angeles,
another in Tokyo, and the fourth in Rome. Having this geographic dispersion of
our servers helps protect us against failures that might occur because of where
those servers are located. The second type of redundancy we can have is disc
redundancy. Discs are one of the most common parts of a server to fail, and by
placing multiple discs inside a single server and spreading data across those discs,
we can protect ourselves against those failures. We most commonly do this using
a technology called RAID, that's Redundant Arrays of Inexpensive Discs. RAID
technology spreads our data across discs in a way such that if one of those discs
fails, all of the data is still available from using the other discs. The third place we
can achieve redundancy is in networking. We can do this by having load balancers
that distribute our traffic across multiple servers. And we can also do it at the
server level by using a technology called NIC teaming. NIC teaming lets us use
multiple Network Interface Cards, or NICs, to access the network, so that if one of
those cards fails, another is able to pick up the load. And the fourth place that we
use redundancy is when it comes to power. At a very high level, when we speak of
the sources of power coming into our facility, it's great if we're able to have two
independent sources of power entering our buildings so that a power outage at
an external facility is less likely to affect us. Power redundancy also refers to the
way that we provide power within our facility, making sure that we have
Uninterruptable Power Supplies or UPS's, that are able to cover momentary
glitches in the power, and generators that are able to provide our own power in
the event of a longer term disruption. The third place that we worry about power
is in the server itself, because every server contains its own power supply. These
power supply components are also likely to fail, so redundancy says that we
should put two different power supplies in the same server, so that if one fails,
the other is able to provide power to the server. Data protection is another form
of resilience. Data protection means that we make sure that we replicate our data
across multiple locations, and we keep backups in order to have access to our
data if the original location where that data is stored somehow becomes
corrupted. There are three different types of backup that you need to know
about. A full backup backs up all of the data that's stored on a system. A
differential backup backs up only the data that's been modified on a system since
the last full backup, and an incremental backup backs up all of the data that's
been modified since the last full or incremental backup. These backups can be
stored in many different places, they might be stored on discs, they might be
written to tape, or they might be moved to cloud services. The important thing is
making sure that your backups are stored in a different location than your
primary servers. This makes sure that if some kind of disaster affects the building
where your servers are located, the backups still remain available. When planning
resilience activities, you should keep diversity in mind. And the diversity that
we're talking about here is the diversity of the technologies, vendors,
cryptography, and controls that we're implementing in our environments. Having
different vendors in our supply chain minimizes the likelihood that some sort of
failure at a single vendor is going to significantly disrupt our business operations.
Those are the key things that you need to know about cyber security resilience as
you prepare for the Security Plus Exam.
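To make the difference between full, differential, and incremental backups concrete, here's a minimal sketch in Python that selects files based on modification times; real backup software tracks this state far more carefully.

    import os

    def files_to_back_up(root, backup_type, last_full, last_incremental):
        # Times are Unix timestamps recorded when the previous backups ran.
        selected = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                modified = os.path.getmtime(path)
                if backup_type == "full":
                    selected.append(path)  # everything, every time
                elif backup_type == "differential" and modified > last_full:
                    selected.append(path)  # changed since the last full backup
                elif backup_type == "incremental" and modified > max(last_full, last_incremental):
                    selected.append(path)  # changed since the last full or incremental backup
        return selected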
Objective 2.6
- [Instructor] Objective 2.6 of the Security+ exam is that you be able to explain the
security implications of embedded and specialized systems. So let's first talk
about what these systems are. Embedded systems are systems that are placed
inside other things that we use in our everyday lives. You might find embedded
systems in everything, ranging from your refrigerator or car to the industrial
equipment that operates a factory floor. There are some common technologies
that are used to create embedded systems. These are technologies like Raspberry
Pis, Arduino devices, and field-programmable gate arrays. When we use these
embedded systems in an industrial setting there are two terms that apply to
them. They're called Supervisory Control And Data Acquisition, or SCADA systems.
And they're also called Industrial Control Systems, or ICS technologies. We find
these systems in all sorts of facilities. Industrial facilities, manufacturing plants,
energy production and distribution facilities, and in logistics and supply chain
operations. When we use embedded systems in our everyday lives, we often call
them the Internet of Things. These are the sensors, smart devices, wearables, and
other systems that make our homes smart and our lives more efficient. Now of
course, having all of these systems in our homes and businesses makes our lives
easier, but it also opens up new avenues for attackers to try to exploit us. That's
why we have to be really careful about making sure that our systems are designed
in a secure way. And they're maintained to protect against security vulnerabilities.
One of the most important things that you can do as you're thinking about
embedded systems security is to make sure that you have an accurate inventory
of all of the places that embedded systems exist in your environment. Now, some
of them are obvious, but some you might not think of at first. Some of the things
you should look for are medical systems, cars, aircraft, smart meters, voice over IP
telephones, heating ventilation and air conditioning or HVAC systems, drones,
multifunction printers, surveillance systems, and anything else that might contain
a computer inside of it that needs to be protected against security threats.
Embedded systems have specialized constraints because they're generally small,
and they're often deployed in places where they don't have convenient access to
networks or power. So when you're thinking about embedded systems, be
familiar with these constraints. They include limited power consumption, low
computing power, low network bandwidth, or no network connectivity at all, and
the inability to use strong cryptography. These constraints increase the
importance of providing security and making sure that you're able to patch those
systems, and making sure that those systems use strong authentication to ensure
that people accessing them are who they claim to be. Because of their nature,
embedded systems are often placed in remote locations. And this means that we
need to think about different ways to connect them back to our networks. It's not
always possible to have an ethernet or wifi network reaching out to a remote
facility. To facilitate communications for embedded systems, we sometimes use other technologies, such as SIM cards that allow these systems to access 5G or other cellular networks, narrowband communications, baseband radio communications, or specialized technologies designed for embedded systems, like ZigBee and Z-Wave networks. Those are the main things that you need to
know about embedded and specialized systems as you're preparing for the
Security+ exam.
Objective 2.7
- [Educator] Objective 2.7 of the security plus exam is that you be able to explain
the importance of physical security controls. These are the controls that we use to
affect security in the physical world. One of our main objectives with physical
security is to prevent unauthorized people from accessing a facility. And there are
a lot of different techniques that we can use to enforce this. Of course, we can
build fences around our facility to make sure that nobody's able to enter an area
unless they go through an authorized gate. We can also use a special room called
an access control vestibule or a mantrap. These rooms prevent tailgating attacks
where two people might try to enter the facility at the same time. When we're
using an access control vestibule, someone who wants to enter a facility first
opens the exterior door of the vestibule, then they enter the vestibule and close
the exterior door. They're not able to open the interior door that allows them into
the facility until the exterior door is fully closed, ensuring that nobody's able to
sneak in behind them. Locks are also an important part of preventing people from
accessing a facility that they're not authorized to access. We can use all different
kinds of locks, from traditional physical locks to electronic locks that use keypads
or smart cards or even biometric locks that look at fingerprint scans, eye scans, or
other biometric techniques to grant someone access to a facility. We can also use
locks to protect equipment by using cable locks to secure laptops and other
portable equipment to a desk, table, wall, or other permanent part of the
building. Physical security controls also help us to detect intrusions. We can use
traditional burglar alarms to help identify intruders that are trying to access the
facility. We should have strong lighting around the outside of our facility so that
guards can see people trying to enter a secure area. Those guards might be
human beings, or they might even be robot sentries that are designed to watch
for the presence of unauthorized individuals. Intrusion detection also uses a
variety of sensors ranging from closed-circuit television surveillance cameras that
can perform motion recognition and object detection to noise detection sensors,
moisture detection, temperature detection, and other signs that something might
be going wrong in an area from a physical perspective. Now, of course, we
sometimes have authorized visitors to our area and it's important that we have
visitor management procedures in place that allow us to track and manage those
visitors. There are two important elements here. The first is a visitor log. This log
gives us a record of who entered the facility, when they came and left, and who
granted that access. The second is badging that allows us to clearly and easily
distinguish between employees and visitors who might require an escort when
they're accessing our facility. Faraday cages are physical security controls that
prevent electronic eavesdropping by preventing electronic signals from leaving an
area. Now, these are very disruptive security controls because Faraday cages
block all electronic signals including those we might want to have, but they do
protect against unintentional electronic emanations. As we're thinking about
physical security, we should look at all of the different places in our facility that
might be considered secure areas. These include data centers, executive office
spaces, vaults, safes, and other places where sensitive material is maintained. The
last physical security control we're going to talk about is secure data destruction.
When paper or electronic records reach the end of their life cycle, we need to
destroy them in a way that someone isn't able to pick them up out of the trash
and gain access to the sensitive information that they contain. Secure data
destruction techniques include burning, shredding, pulping, and pulverizing for
physical materials and the degaussing of electronic media. Degaussing uses strong
magnetic fields to eliminate traces of data that might still be stored on a device.
We can either perform these data destruction techniques ourselves or we can
hire a third-party vendor to perform data destruction for us. Those are the things
that you need to know about physical security as you prepare for the security plus
exam.
Objective 2.8
- [Instructor] Objective 2.8 of the Security+ exam is that you be able to summarize
the basics of cryptographic concepts. Cryptography is the practice of using
mathematics to obscure the meaning of sensitive information to people who
aren't authorized to view it. Cryptography has two basic operations. The first is
encryption. Encryption uses an encryption key to take plain text sensitive
information and transform it into encrypted cipher text. The second is decryption.
Decryption takes that encrypted cipher text and uses a decryption key to return
encrypted information back into its plain text form. There are two major
categories of encryption algorithms: Symmetric encryption techniques use the
same key to encrypt and decrypt information. In those cases, you can think of a
shared secret key as the password to encrypt and decrypt the file. In asymmetric
encryption algorithms, different keys are used to encrypt and decrypt the
information. These are known as a public and a private key. When we're
encrypting information for confidentiality, we encrypt that information with the
public key belonging to the person that we want to read the information. When
that person receives the encrypted information, they decrypt it using their own
private key, and then they're able to see the original plain text. When you're
evaluating encryption algorithms, you should check for two things. First, that the
encryption algorithm itself is secure against attack. And second, that you're using
a key that's long enough to protect against exploits where attackers might try to
guess the key. One of the easiest ways that you can make an encryption algorithm
more secure is to increase the key length. We can use encryption to achieve many
different security objectives. We can use it to support confidentiality, integrity,
obfuscation, authentication and non-repudiation. In confidentiality, we're
protecting the secrecy of information. In integrity, we're making sure that
information isn't altered by unauthorized individuals. With obfuscation, we're
trying to avoid sharing the intent of our information with other people. And in
authentication, we're gaining the ability to verify a user's claim of identity. Non-repudiation allows us to prevent someone from later denying that they've sent a
message. The main way that we achieve non-repudiation is through the use of
digital signatures. In a digital signature, the sender of a message uses
cryptography to affix their digital signature to a message in a way that the
recipient can then later prove to someone else that the originator actually sent
that message. Let's talk a little bit about how that works. When you digitally sign a
document, the first thing that you do is create a message digest by taking that
document and running it through a hashing algorithm. This produces a short
summary of the message which the signer then encrypts using their own private
key. That encrypted message digest is then known as a digital signature. The
sender then attaches that digital signature to the message and sends it on. When
the recipient receives a digitally signed message, they remove the digital
signature. They then decrypt that signature with the sender's public key, and in
doing so, they obtain the message digest that was originally encrypted by the
sender. Then using the same hash function that the sender used, they compute
their own message digest from the plain text message. They then compare the
message digest that they computed with the message digest that they decrypted
from the digital signature. If those two values match, the recipient then knows
that the message is authentic and did indeed come from the person who owns
that public key. Another use of cryptography that you should be familiar with is
steganography. This is hiding messages in either plain text or encrypted form
inside of other files. Steganography can be used to exchange secret information in
plain sight. Steganography often embeds messages in audio files, video files, and
images. As you prepare for the exam, you should also be familiar with the concept
of quantum computing and quantum communications. This is taking the
principles of quantum physics and applying them to computing in a way that
provides massive computing power. Now there aren't practical applications of
quantum computing today, but we have to prepare for a potential post-quantum
world. Because if quantum computing is ever achieved in any useful form, it has
the potential to undermine all of the encryption technologies that we're using
today. Those are the important things that you need to know about cryptography
as you prepare for the Security+ exam.
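As a rough sketch of the signing and verification steps walked through above, here's an example using the third-party Python cryptography library; in practice the hashing and padding details are handled inside the sign and verify calls.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The sender holds an RSA key pair; the public key is shared with recipients.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"Please update the access control list as discussed."

    # Signing: hash the message and sign the digest with the sender's private key.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())

    # Verification: the recipient recomputes the digest and checks it against the
    # signature using the sender's public key; any tampering raises an exception.
    try:
        public_key.verify(signature, message, pss, hashes.SHA256())
        print("Signature valid: the message came from the key's owner.")
    except InvalidSignature:
        print("Signature check failed: the message or signature was altered.")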
Objective 3.1
- The third domain of the Security Plus Exam is implementation. This domain has
nine objectives. The first objective, Objective 3.1 is that when given a
scenario, you'd be able to implement secure protocols. So let's talk through those
protocols for some different use cases. First, for voice and video, the
recommended secure protocol is SRTP, the Secure Real-Time Transport
Protocol. It's also important that you keep the times on all of your devices
synchronized so that you can easily correlate log entries that are generated by
different devices. The Network Time Protocol, NTP, provides this service for your
network. There are a number of protocols associated with email. The first of these
is the Simple Mail Transfer Protocol, SMTP. SMTP is used to transfer email
messages between mail servers. Now the standard version of SMTP is not
encrypted but you can use the secure version of SMTP, SMTPS to have a secure
connection as you transfer mail messages. Email clients used by end users use
two different protocols to retrieve email from email servers, the Post Office
Protocol, POP3, and the Internet Message Access Protocol, IMAP. As with SMTP,
the plain versions of POP3, and IMAP are not secure, but there are secure
alternatives. POP3 over SSL, and the IMAP Secure or IMAPS protocol. You can
achieve end-to-end encryption for email messages using the Secure/Multipurpose Internet Mail Extensions, or S/MIME, protocol. S/MIME adds encryption on top of whatever other email transfer protocols you're using. Users access websites using
the insecure, and unencrypted Hypertext Transfer Protocol, HTTP. The secure
version of this protocol is the Hypertext Transfer Protocol over SSL or TLS called
HTTPS. Basic file transfers are handled by the File Transfer Protocol, FTP. As with
the other protocols we've discussed, the original version of FTP was not
secure. There are two different secure alternatives. The File Transfer Protocol
Secure, FTPS, and the SSH File Transfer Protocol, SFTP. You can access directory
services using the Lightweight Directory Access Protocol, LDAP, and the secure
version of this protocol is LDAP Secure or LDAPS. The Secure Shell or SSH
Protocol provides for encrypted administrative connections to remote
servers. You can build secure remote access virtual private network tunnels using
a variety of protocols, including Internet Protocol Security, or IPsec. IPsec uses
two distinct protocols, Authentication Header, AH, which provides authentication
for packets sent over the VPN connection, and the Encapsulating Security
Payload or ESP that provides both confidentiality and authentication. IPsec can be
run in two different modes, transport mode and tunnel mode. In tunnel
mode, two remote sites are connected to each other, and any packet sent
between those sites are wrapped in a new packet that's completely encrypted
using IPsec. In transport mode, the IP header remains unencrypted, and the
remainder of the packet is sent securely. Domain name resolution is an important
network function that allows us to use friendly domain names instead of IP
addresses. We use the Domain Name System, DNS to perform these look-ups. The
DNS Security Extensions or DNSSEC add strong authentication to DNS queries to
ensure that the responses from servers that you receive are actually
legitimate. When managing routers and switches, administrators often use the
Simple Network Management Protocol, SNMP. If you're using SNMP, it's
important to ensure that you're using SNMP version three because that version is
secure while earlier versions of SNMP are insecure. Finally, Network Address
Allocation provides systems with IP addresses that they can use on a local
network. The main protocol used for this purpose is the Dynamic Host
Configuration Protocol, DHCP. Those are the important protocols that you need to
know as you prepare for the Security Plus Exam.
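As a small illustration of reaching for the secure protocol variants, here's a sketch using Python's standard library; the host name and credentials are placeholders.

    import imaplib
    import smtplib
    import ssl

    context = ssl.create_default_context()  # verifies the server's certificate

    # SMTPS: submit outbound mail over an encrypted connection instead of plain SMTP.
    with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as smtp:
        smtp.login("user@example.com", "app-password")

    # IMAPS: retrieve mail over an encrypted connection instead of plain IMAP.
    imap = imaplib.IMAP4_SSL("mail.example.com", 993, ssl_context=context)
    imap.login("user@example.com", "app-password")
    imap.logout()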
Objective 3.2
- [Instructor] Objective 3.2 of the Security Plus exam is that when given a
scenario, you can implement host or application security solutions. Let's begin by
talking about endpoint protection solutions. You should use standard antivirus
and anti-malware software to protect your endpoints against malicious
software. Going beyond that, endpoint detection and response, or EDR solutions,
provide added security that allows administrators to manage the quarantine and
removal of malicious code found on systems. Data loss prevention, or DLP
technology, watches for endpoints that contain sensitive information that they
shouldn't, or that attempt to move sensitive information outside of the
organization. DLP solutions can then block those attempts. Next generation
firewall, or NGFW capabilities, block unwanted inbound network
connections. Host intrusion detection systems watch incoming traffic to a system
for signs of malicious activity. And host intrusion prevention systems go a step
further and actually block potentially malicious connections. Boot integrity is an
extremely important part of endpoint protection because if malicious code is
inserted into the boot process, that code can bypass many other operating
system protections. The unified extensible firmware interface, or UEFI, is the
primary method used for booting modern systems. When securing databases, one
of the most important things you can do is look for sensitive information in the
database that isn't necessary and remove that information. If you aren't able to
remove it, you can use techniques like hashing to transform it into a version that's
not sensitive. Tokenization similarly takes sensitive values and transforms them
into nonsensitive variants. The difference between hashing and tokenization is
that tokenization is reversible using a lookup table, while hashing is not
reversible. In the world of application security, one of the most important things
that you can do is perform input validation. That's checking any input that an
application receives from a user for signs of potentially malicious content before
passing it on to the application. You can also manage applications in your
environment by using an allow list or a block list. In the allow list approach, you
list the applications that are allowed to run in an organization, and no other
applications may be executed. Block lists take the opposite approach and list
applications that are not allowed to run in the organization, and presume that any
other code may be executed. Code signing is a technique that allows
developers to apply digital signatures to their code to show that it came from a
legitimate source. As developers create code, they should use secure coding
practices. They can then perform testing to verify that those practices are
functioning properly. This testing may include static code analysis, which simply looks at the code for security flaws without executing it, or dynamic code analysis, which actually executes the code, probing it for vulnerabilities. Fuzzing is
a common dynamic code analysis technique that supplies many different
possible input values to an application. It's important to harden the systems that
reside on your network. Some of the key things that you can do are
removing open ports and unnecessary services on servers. Locking down registry
settings to make sure that they match your secure baseline. Encrypting discs using
full disc encryption or self-encrypting drives. Configuring the operating system to
operate in a secure manner. And performing regular patch management on both
the operating system and any applications, to make sure that you're automatically
receiving security updates as they become available. The last concept that you
need to know for this objective is the use of hardware security to manage
encryption keys. Most modern computers contain a special chip called the trusted
platform module, or TPM, that helps establish a hardware root of trust. This
ensures that an encrypted drive is actually placed in a computer that's authorized
to access that drive, before allowing the user to retrieve data from it. Those are
the important host and application security solutions that you need to know as
you prepare for the Security Plus exam.
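Here's a minimal sketch, in Python, of the hashing versus tokenization distinction mentioned above; the token format and lookup table are purely illustrative.

    import hashlib
    import secrets

    card_number = "4111111111111111"

    # Hashing: a one-way transformation. There's no table that turns
    # the hash back into the original card number.
    hashed = hashlib.sha256(card_number.encode()).hexdigest()

    # Tokenization: a reversible substitution. The mapping lives in a
    # protected lookup table, so authorized systems can recover the value.
    token_vault = {}

    def tokenize(value):
        token = "tok_" + secrets.token_hex(8)
        token_vault[token] = value
        return token

    def detokenize(token):
        return token_vault[token]

    token = tokenize(card_number)
    print(hashed)             # not reversible
    print(detokenize(token))  # reversible via the lookup table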
Objective 3.3
- [Instructor] Objective 3.3 of the Security+ exam is that when given a
scenario, you're able to implement secure network designs. Let's talk about a few
of these network design concepts. The first is load balancing. Load balancing
allows you to distribute work for a particular function amongst several different
servers. This provides resiliency because those servers provide redundancy to
each other, and it also adds security because if one server is compromised, you
can just take it out of the load balancer pool and continue operations. Load
balancing can be done in an active-active mode, where all of the servers are up
and running at any given time, or in an active-passive mode, where one or more
servers are the primary servers that are in active running mode, and the others
are in passive mode, and only become active if one of the active servers
fails. Network segmentation is used to group systems onto network
segments with systems of similar security levels. This is often achieved using
virtual local area networks, or VLANs. When designing an organization's network
segments, designers commonly use several different approaches. They use
firewalls to separate the internet from their intranet, which is designed for
internal users, and then they also create a demilitarized zone, or DMZ network,
also known as a screened subnet, where they can place systems that need to be
accessible from the outside world. This screened subnet approach limits the
damage that an attacker can cause if they compromise a system, because systems
on the screened subnet can't communicate with the internal network without
passing through the firewall. Many organizations also implement an extranet that
provides access to vendors and other trusted partners who need limited access to
the organization's network. Virtual private networks, or VPNs, provide secure
remote access for users, and they also connect different locations of a business
together over the internet. Virtual private networks function over public
networks by using encryption to keep traffic away from prying eyes. The primary
protocols used to implement VPNs are IPsec, TLS, HTML5, and the layer two
tunneling protocol, L2TP. Network access control systems, or NAC systems, are
used to authenticate devices before they're allowed to connect to the
organization's network. This authentication includes ensuring that the device is a
legitimate device owned by the organization and performing posture checking to
verify that the device is configured securely. This may be done using an agent
that's installed on the device or an agentless approach. Port security is a function
of network switches that verifies the MAC address of systems communicating on
the network. When port security is used, the switch registers the MAC address of
the first device that it sees on a switch port, and then doesn't allow any other
devices to use that port unless the port security is reset by an administrator. Jump
servers are devices that provide a way for administrators to move between
networks in a secure fashion. An administrator can connect to a jump server from
a remote network and then use that jump server to access internal systems. Proxy
servers are used to negotiate network connections for users inside a secure
network. Instead of connecting directly to remote web servers, users pass their
traffic through the proxy server, which then connects to the remote web server
on their behalf and performs content filtering to ensure that malicious traffic isn't
sent over the network. Networks also have intrusion detection systems and
intrusion prevention systems that use signature and anomaly detection
techniques to watch for signs of potentially malicious activity on the
network. Intrusion detection systems can alert administrators to that traffic, while
intrusion prevention systems can go a step further and actually block that
traffic from reaching the network.
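Here's a small sketch, in Python, of the active-active load balancing idea from the start of this objective: requests rotate across a pool of servers, and a failed or compromised server can simply be taken out of the pool.

    import itertools

    class LoadBalancerPool:
        # Toy active-active pool: requests rotate round-robin across servers.
        def __init__(self, servers):
            self.servers = list(servers)
            self._rotation = itertools.cycle(self.servers)

        def next_server(self):
            return next(self._rotation)

        def remove(self, server):
            # Take a failed or compromised server out of rotation.
            self.servers.remove(server)
            self._rotation = itertools.cycle(self.servers)

    pool = LoadBalancerPool(["web1", "web2", "web3"])
    print([pool.next_server() for _ in range(4)])  # web1, web2, web3, web1
    pool.remove("web2")                            # keep operating without it
    print([pool.next_server() for _ in range(4)])  # only web1 and web3 now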
Objective 3.4
- [Instructor] Objective 3.4 of the Security Plus exam is that when given a
scenario, you'll be able to install and configure wireless security settings. The first
of these settings that you need to consider is the cryptographic protocol that
you're going to use. The options for cryptographic protocols are wired equivalent
privacy, otherwise known as WEP, wifi protected access, WPA, and WPA versions
two and three. Of these, WEP and the original WPA protocol are now considered
insecure and no longer acceptable for use. So they should not be used on modern
networks. That leaves us with two possibilities, WPA2 and WPA3. WPA2 uses a
technology called counter mode cipher block chaining message authentication code protocol, or just CCMP. CCMP implements the advanced encryption standard on
wireless networks. Now while there are some security vulnerabilities in
WPA2, most security professionals still consider the protocol secure enough for
use on modern networks. WPA3, the more secure replacement for WPA2, is now
starting to be used on wireless networks. WPA3 uses a technology
called simultaneous authentication of equals, or SAE. SAE uses a key exchange
protocol that's based upon the Diffie-Hellman algorithm to exchange encryption
keys between a wireless network and a wireless client. The second major factor
that you need to consider is the method that you're going to use for
authentication. The first option is to use a pre-shared key in PSK mode. This
simply means that you have a password on your wireless network that you share
with anyone who wants to use the network. This approach is very easy to
implement at first, but it has a significant drawback. Anytime you want to change
the network password, you need to reconfigure all of the devices that connect to
your network. A better approach for wireless authentication is to use enterprise
mode. Networks running in enterprise mode use an individual's regular username
and password to allow access to the network. This has the advantage of being
able to uniquely identify users, and also control access to the network on a user
by user basis. When you're granting access to your wireless network to people
that you don't know, you have a couple of options. First, you could simply run an
open, unencrypted wireless network that allows anyone to use it. Second, you can
implement a captive portal. A captive portal is a webpage that pops up whenever
users want to access the network, that requires them to authenticate before they
gain access to the network. The standard for authentication on both wired and
wireless networks in enterprise mode is a protocol called IEEE 802.1X. This
standard uses the extensible authentication protocol, or EAP, to create a secure
connection between a wireless user and the wireless network. In these situations,
authentication is normally performed by a server running the RADIUS
protocol. RADIUS stands for remote authentication dial in user service. And while
it was originally created for dial up modem users, it's now widely used for
authentication to enterprise networks. As you're planning a wireless network, you
should think through a number of installation considerations. First, the physical
environment where you're installing the network may have a significant impact
on how radio waves travel around that environment. You should perform a site
survey to get a sense of the built environment in your organization, and how it
will affect wireless network propagation. This will help you place your wireless
access points appropriately and also decide on the channels that you want to use
to minimize interference with other wireless networks in the area. When your
wireless network is running, you can use a wifi analyzer to assess the coverage of
the wireless network in your organization. It's also very helpful to produce a heat
map. That's a visualization of the layout of your offices, showing graphically where
the wireless signal is weaker and where it's stronger. Those are the important
things that you need to know about wireless network security as you prepare for
the Security Plus exam.
Objective 3.5
- [Instructor] Objective 3.5 of the Security+ exam is that when given a
scenario, you'll be able to implement secure mobile solutions. As we work
through the objective, the first things that we need to consider are the
connection methods that you can use for mobile devices. For broad network
connectivity, the two main options are cellular networking and wifi
networking. These allow access to the global internet for mobile devices, but
there are also a number of other protocols that you should be aware of that are
used locally by devices. For example, you can use the Bluetooth protocol to
connect wireless headsets, computers and other devices to your mobile
phone. You can use near-field communication, or NFC technology, for contactless
payments. And you can even use infrared networks or USB cables to share data
between mobile devices and other systems. You should also be familiar with two
specialized protocols. The global positioning system, or GPS, allows you to use
satellites to pinpoint your location on the Earth's surface. Radiofrequency
identification, or RFID technology, can be used to track devices in a small
area such as on a factory floor. Organizations should use mobile device
management, or MDM technology, to ensure that they maintain all of their
mobile devices in a secure configuration. You can use MDM packages to manage
the applications that are installed on a device, to perform content filtering and to
make sure that users are abiding by your organization's security and acceptable
use policies. You can also use this software to allow the remote wiping of remote
devices and to perform geo-fencing, to notify you when devices leave a defined
geographic area. Mobile device management technology also allows you to locate
devices using GPS and ensure that those devices are securely configured. This can
include making sure that they have screen locking, passwords and PINs enabled or
that they use strong biometric or other authentication technology. It also includes
enforcing full device encryption policies that ensure that the data on a lost device
isn't compromised. In cases where you need to use mobile devices for highly
secure applications, you can consider the use of hardware security modules on
micro SD cards to manage encryption keys or even have highly secure operating
systems such as SE Android on a device. As you manage the mobile devices in
your organization, you'll want to make sure that you're enforcing and monitoring
all of your organization's mobile security policies. Some of the things you should
consider are managing the applications that are installed in your
organization. You can do this through a full blown mobile application
management solution or you may create policies that limit the use of third-party
application stores, the sideloading of apps by bringing them onto the device through a USB connection or storage card, and limiting the ability of a user to root or jail
break a device and bypass security controls by gaining administrative access to
the device. Security policies can also ensure that devices have current
firmware and that they receive over the air updates. They can prohibit carrier
unlocking that would allow a user to take the device from the current mobile
carrier and move it to another carrier. Policies can also limit camera use, the use
of text messaging or other phone services and the connection of external media
to the device. They can enable and disable cameras and microphones and GPS
tagging. Security policies can also limit the networking capabilities of mobile
devices. They might prevent devices from being used to create their own wireless
networks, to generate a hotspot, and they might disallow the tethering of other
devices to the mobile device to share connectivity. Finally, as you develop your
organization's mobile device philosophy, you can consider four different
deployment models. The first of these is corporate-owned devices. This is simply
where the organization purchases devices and provides them to employees to use for work.
Objective 3.6
- Objective 3.6 of the Security Plus exam is that when given a scenario, you'd be
able to apply cyber security solutions to the cloud. As you're developing secure
cloud solutions, there are a number of important considerations that you should
have in mind. First, you should design your cloud solutions to make use of
different zones of service offered by your service provider. This allows you to
build high availability, resilient environments, that can remain up and
running even if an entire zone fails. You should also create resource policies that
limit what users can do in the cloud to minimize your organization's exposure. For
example, you might limit the number or type of server instances that an individual
user can create to limit the financial risk that you face in the event that that user
account is compromised. As you're selecting a cloud provider, you should
evaluate the different integrations that they have available with the technologies
that you already use and the ability that you'll have to perform auditing to
ensure that your organization's security policies are maintained. There are also
some security controls that you should implement that are service specific. For
example, when you're using cloud storage solutions, you should be able to
manage and monitor the permissions that users have to access those storage
solutions, the encryption technology that's used to protect data while it's in cloud
storage, and the replication and high availability capabilities of storage services to
perform data protection tasks. When you're using network resources in the
cloud, you should understand that the cloud allows you to create your own virtual
networks using a technology called virtual private clouds. You can create public
and private subnets that are either exposed to the public internet or protected
behind firewalls offered by your cloud provider. This allows you to build a
segmented approach to cloud networking that's similar to the approach that you
would use in an on-premises data center. When you're securing cloud computing
resources, one of the primary controls that you have are security groups. These
are the access control lists that determine what devices on the network are going
to be able to access your compute instances. You can think of security groups as
the equivalent of firewall rules in the cloud. Cloud computing resources can be
dynamically allocated by individual users of the cloud service, so you should make
sure that you have visibility into the number and types of instances that are being
used by your organization and when they're no longer necessary so that you don't
have unused instances continuing to accrue both costs and security risks. There
are a number of cloud specific security technologies that you can use to enhance
your cloud security posture. Cloud access security brokers, or CASB solutions, integrate with many different cloud providers and provide you with a
single point of enforcement where you can specify your organization's cloud
security policies and then automatically implement those policies across all of the
cloud service providers that you use. Secure web gateway products intercept and
filter user requests for web resources, allowing you to enforce your content and
acceptable use policies. You should also consider the use of application security
controls such as web application firewalls to protect your organization's cloud
hosted web applications from attack. As you're working through the options
available to you for cloud solutions, you may wish to consider both the cloud
native security controls offered by your cloud service provider and the use of third
party cloud security solutions. As you sort through these options, you should
consider both cost and functionality as important criteria. Those are the
important things that you need to know about cyber security solutions in the
cloud as you prepare for the Security Plus exam.
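To show why security groups can be thought of as firewall rules, here's a minimal, hypothetical sketch in Python; the cloud provider evaluates rules like these for you.

    import ipaddress

    # A toy security group: allow HTTPS from anywhere, SSH only from a corporate range.
    inbound_rules = [
        {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},
        {"protocol": "tcp", "port": 22,  "source": "203.0.113.0/24"},
    ]

    def is_allowed(protocol, port, source_ip, rules=inbound_rules):
        # Return True if any rule permits this inbound connection.
        for rule in rules:
            if (rule["protocol"] == protocol
                    and rule["port"] == port
                    and ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["source"])):
                return True
        return False

    print(is_allowed("tcp", 443, "198.51.100.7"))  # True: HTTPS is open to the world
    print(is_allowed("tcp", 22, "198.51.100.7"))   # False: SSH is limited to 203.0.113.0/24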
Objective 3.7
- Objective 3.7 of the Security+ exam is that when given a scenario, you'll be able
to implement identity and account management controls. Identity management is
one of the crucial foundational elements of a security program. You won't be able
to make any other security decisions unless you have confidence in your ability to
identify and authenticate users. Identity and access management, or IAM
solutions, use the concepts of subjects and objects. Subjects are the people,
systems, or services that want to gain access to resources, and objects are the
resources that they want to gain access to. Each subject that has an identity in the
identity and access management system has a number of attributes associated
with their identity. These attributes may include things such as their job role, their
affiliation with the organization, the department that they're in, or any other
characteristics that are tied to their identity. The identity provider, or IdP, is the
organization that's providing that digital identity to a user. IdPs are commonly an
individual's employer, school, or similar organization. Users can prove their
identities through a variety of techniques that we discussed when we reviewed
authentication controls. It's common to use technologies other than
passwords, such as digital certificates, hardware or software security tokens,
smart cards, or SSH keys to authenticate a claim of identity. As you're building an
account management solution, you should be aware of the different types of
accounts that might exist. There is, of course, the standard user account and
super user accounts that have administrative privileges. Individuals who do have
administrative privileges should normally have both an administrative account
and a normal user account that they use for their day-to-day work. They should
only access the administrative account when they actually need to execute a
command that requires those administrative privileges. You should avoid having
accounts in your organization that are shared between multiple people. The
reason for this is that the activities taken with those accounts can't then be tied
back to a single individual, violating the principle of accountability. The same thing
is true for guest accounts or vendor accounts that might be shared among
multiple people. The other type of account you will most likely have in your
organization are service accounts. These are accounts that are not set up for
interactive login to systems by people, but they're used by operating systems and
applications when they need to gain access to resources. You'll want to create a
number of different account policies in your identity and access management
program. Some of these are around passwords. You might want policies that
specify whether passwords expire, if users are allowed to reuse old
passwords, which you can track by maintaining a password history, and the
complexity requirements for passwords, that is how long they need to be and
how many different character types they should contain. When evaluating user
access requests, identity and access management systems can also take other
factors into consideration. These include the IP address of the user, which gives
you an idea of their network location, as well as their geolocation that you might
obtain from GPS data. Using this technology, you can ensure that users are
located in a specific geographic area when you're granting them access to
resources. Similarly, you can use geofencing to notify you when an actively logged
in user leaves a specific geographic area, and geotagging to annotate log
entries with the user's physical location. You can use time-based logins to restrict
the hours of the day during which a user can access systems. Having a user's
location and time information also allows you to create access policies that
prohibit users from accessing systems when there's an impossible travel time
involved. For example, if a user logs in from a system in New York at 5:00 pm, and
then two hours later, logs in from France, that's an impossible travel time. The
same person could not have been physically present in New York, and then two
hours later, be present in France. These access policies allow you to build out a
strong identity and access management environment and create the ability to
lock out and disable accounts when you suspect suspicious activity. They also
allow you to build an environment where you're conducting regular audits of user
accounts to ensure that user activity matches your expectations. Those are the
important things that you need to know about identity and account
management, as you prepare for the Security+ exam.
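Here's a minimal sketch of an impossible travel check in Python; the 900 kilometer-per-hour threshold, roughly airliner speed, is an illustrative assumption.

    import math

    def distance_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points (haversine formula).
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(a))

    def is_impossible_travel(login1, login2, max_speed_kmh=900):
        # Each login is (latitude, longitude, timestamp in seconds).
        lat1, lon1, time1 = login1
        lat2, lon2, time2 = login2
        hours = abs(time2 - time1) / 3600
        if hours == 0:
            return True
        return distance_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh

    new_york = (40.71, -74.01, 0)
    paris = (48.86, 2.35, 2 * 3600)  # a login from France two hours later
    print(is_impossible_travel(new_york, paris))  # True: thousands of km in two hours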
Objective 3.8
- [Instructor] Objective 3.8 of the Security+ exam is that when given a scenario
you'll be able to implement authentication and authorization
solutions. Authentication management is one of the important subtopics of this
objective. Authentication management technologies allow you to safeguard the
credentials that are used to access other resources. Passwords are still the most
common access control, and protecting those passwords is extremely
important. Most organizations encourage users to make use of password vault
technology that allows them to have a strong, unique password for each site that
they visit, and then easily manage and use those passwords. You can also use
hardware security technologies to protect passwords and access keys. The
Trusted Platform Module, or TPM, is a chip that's installed directly in a
device such as a laptop to allow you to manage the keys associated with data
stored on that laptop, such as when you're using full-drive encryption. Hardware
Security Modules, or HSMs, provide an enterprise-wide management
capability for passwords and other sensitive knowledge-based authentication
mechanisms. A number of technologies can assist you in building out
authentication and authorization solutions. We've already discussed the role that
technologies like the Extensible Authentication Protocol 802.1X and RADIUS play
in network authentication and authorization. There are a number of other
technologies that exist that allow you to work towards a single sign-on, or
SSO, environment where users can authenticate once and then use that
authentication session to gain access to many systems throughout your
organization's environment. One of the most important of these is the Kerberos
protocol. Kerberos provides a centralized authentication and authorization
solution that can be used for access to many different services throughout an
organization. Services that are enabled for use with Kerberos are known as
Kerberized services. There are also some older protocols, such as the Password
Authentication Protocol, PAP, and the Challenge-Handshake Authentication
Protocol, CHAP. These two protocols use insecure technologies to exchange
passwords and should no longer be used. Federated authentication
solutions allow you to use credentials from one organization with resources
belonging to another organization. There are a number of technologies
available to help you support this. The Security Assertion Markup Language, or
SAML, is an XML-based solution that allows for federated single sign-on. OpenID
Connect is a solution that allows you to use an account from one identity
provider to access resources from another provider. For example, when you log
on to a third-party website using your Google, Amazon, or Facebook
credentials, that authentication session is most likely using OpenID
Connect. OAuth is a technology that allows you to authorize a service to access
resources that belong to you at another organization. For example, you can use
OAuth to allow a web service to access your Gmail account or your
calendar. There are a number of access control models in use in modern
cybersecurity programs. Mandatory Access Control, or MAC, systems are set up to
enforce existing security requirements without allowing any exceptions. In a MAC
solution each object is labeled with a security level, and each user is given a
security clearance. The system then enforces policies that restrict users to only
accessing resources at their security level or below. In a Discretionary Access
Control, or DAC, environment the individual owners of files and resources are
able to grant permission to other users to access those resources. Most modern
file systems use Discretionary Access Control for file system permissions. That's
because DAC makes it easy for users to control access to the files that they
create. Role-based access control systems grant access to resources based upon a
user's role in the organization, while attribute-based access control systems look
at one or more attributes of a user's identity when deciding whether to grant or
deny an authorization request. Firewalls implement a model known as rule-based
access control where they operate off of a predefined set of security rules to
enforce a security policy. As you're building out your authentication and
authorization solutions you should consider the implementation of a Privileged
Access Management, or PAM, platform that carefully manages and monitors the
use of administrative and other privileged accounts in your organization. Those
are the important things that you need to know about authentication and
authorization solutions as you prepare for the Security+ exam.
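As a small illustration of how these models differ in practice, here's a minimal sketch of a role-based access control check. The roles, users, and permissions are hypothetical: users are granted roles, roles carry permissions, and an authorization request succeeds only if one of the user's roles includes the requested permission.

```python
# Illustrative role-based access control (RBAC) check with hypothetical data.
ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "view_ticket"},
    "security_analyst": {"view_ticket", "read_logs", "isolate_host"},
}

USER_ROLES = {
    "alice": {"helpdesk"},
    "bob": {"security_analyst"},
}

def authorized(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the requested permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(authorized("alice", "isolate_host"))  # False
print(authorized("bob", "isolate_host"))    # True
```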
Objective 3.9
- [Instructor] Objective 3.9 of the Security+ exam is that when given a
scenario, you'd be able to implement a public key infrastructure. The public key
infrastructure is based on asymmetric cryptography and it provides the ability for
users to securely share their public encryption keys with others and provide
others with the assurance that those keys are legitimate. The primary mechanism
for sharing these keys is the use of digital certificates. Digital certificates are
files that contain a user's public encryption key and are digitally signed by a
trusted third party known as a certificate authority or CA. These CAs are normally
well-known organizations that are widely trusted around the world. Certificate
authorities may rely upon a network of intermediate certificate authorities and
registration authorities to distribute their workload. When a user or system wants
to rely upon a digital certificate, it must first verify whether the certificate is
valid. The first step of this process is confirming that the digital signature on the
certificate is authentic and was created by a trusted certificate authority. Once
satisfied that the certificate is authentic, the next step is to determine whether
the certificate is still valid. Certificates contain expiration dates and an expired
certificate should not be trusted. In addition, certificate authorities have the
ability to revoke digital certificates if the associated encryption keys are
compromised. They can do this in two ways. First, they can maintain a list of
invalid certificates called a certificate revocation list or CRL that users can check
when validating a certificate. And second, they can use the Online Certificate
Status Protocol, OCSP, to provide real-time validation of the status of a digital
certificate. When you want to create a new digital certificate, you do this by
creating a certificate signing request, or CSR, and sending that CSR to a certificate
authority. The certificate authority will then validate your identity and if
appropriate, issue a digital certificate. These certificates may be issued to an
individual user or email address or to a system for use on a web server or other
service requiring encrypted connections. Each certificate contains a common
name, or CN, which is also known as the fully qualified domain name. This is the
system to which a digital certificate is issued. Certificates may be valid for
additional names and if so, those are contained in the certificate as subject
alternative names. Organizations may also obtain wildcard certificates that are
valid for any system across an entire domain or subdomain. Organizations that
don't want to bear the expense of obtaining certificates from a third-party
certificate authority may decide to create their own self-signed certificates. These
certificates are useful within an organization but they're generally not useful on
the web because users outside of the organization will not trust the
organization's internal certificate authority. When purchasing a digital
certificate, organizations may choose to purchase a domain validated or DV
certificate or an extended validation or EV certificate. DV certificates go through a
fairly simple authentication process that just certifies that the organization
obtaining the certificate has control of the domain name under which the
certificate is issued. EV certificates go through a more thorough authentication
process to confirm the identity of the organization obtaining the
certificate. Certificates are stored in files that come in a number of different
formats. The most common is the Distinguished Encoding Rules or DER
format. This is a binary certificate format that's normally stored in a file with the
.der, .crt or .cer file extension. The PEM certificate format is closely related to the
DER format. PEM stands for Privacy-Enhanced Mail, and it's an ASCII text
version of the binary DER certificate. You can easily convert between DER and
PEM certificates using tools like OpenSSL. PEM certificates are normally stored in
files with the .pem or .crt extensions. The Personal Information Exchange or PFX
format is another binary format that's commonly used by Windows systems. PFX
certificates typically have either a .pfx or .p12 file extension. You can also store
PFX certificates in text format using the .p7b format. This is an ASCII text
equivalent for binary PFX certificates. Those are the most important things that
you need to know about the public key infrastructure as you prepare for the
Security+ exam.
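If you want to see these certificate fields for yourself, the sketch below uses Python's third-party cryptography package as an alternative to the OpenSSL command-line tool mentioned above. The filename is hypothetical; it prints the common name, subject alternative names, and validity dates discussed in this objective.

```python
# Illustrative certificate inspection using the "cryptography" package
# (pip install cryptography); "example.pem" is a placeholder filename.
from cryptography import x509
from cryptography.x509.oid import NameOID, ExtensionOID

with open("example.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
print("Common name:", cn)

try:
    san = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME
    ).value
    print("Subject alternative names:", san.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    print("No subject alternative names")

print("Not valid before:", cert.not_valid_before)
print("Not valid after: ", cert.not_valid_after)
```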
Objective 4.1
- The fourth domain of the Security+ exam is Operations and Incident
Response. This domain has five objectives. The first of these objectives, objective
4.1, is that when given a scenario, you be able to use the appropriate tool to
assess organizational security. This objective requires that you be familiar with a
large number of security tools. We'll look at these in a few different
categories. The first category is Network Reconnaissance and Discovery, and the
first tool in this category is the Traceroute command. Traceroute is used to
identify the current network path between two systems. The Ping command is
used to test whether a remote system is up and running on the network, and the
Hping command is a version of Ping that allows you to customize the packets used
in your scan. The PathPing command is a Windows tool that combines the
functionality of both Ping and Traceroute. The netstat, or network statistics
command, is used to show you the active network connections on a device, while
the nc command, which is short for netcat, allows you to send and receive raw
text over a network connection. The ipconfig command on Windows systems and
the ifconfig command on Mac and Linux systems allow you to display and
modify the configuration of a network interface. The nslookup and dig
commands are used to perform DNS queries, while the ARP command is used to
perform lookups using the Address Resolution Protocol. IP scanners are used to
probe the systems active on a network, and the Nmap command is used to
identify the open ports on a remote system. Nessus is a vulnerability scanner used
to probe systems for active security vulnerabilities. The route command displays a
system's current network routing table. The curl command is used to retrieve
webpages and files from servers using a command-line interface. The theHarvester,
sn1per, scanless, and dnsenum tools are used to automatically retrieve a
large amount of information about a remote system for use in network
reconnaissance. Cuckoo is an automated malware analysis tool, and those are the
important network reconnaissance and discovery commands that you need to be
familiar with. The next category of tools you need to know are the file
manipulation commands. These are commonly used on Linux systems. The head
command is used to display the first few lines of a file, while the tail command is
used to display the last few lines of a file. The cat command is used to display an
entire file, and the grep command is used to search for content within a file. The
chmod command is used to change a file's permissions, and the logger command
is used to send log entries to a centralized log server. The third category of tools
you need to be familiar with are shell and script environments. Secure Shell, or
the SSH command, is used to securely access remote systems and it's commonly
found in Linux environments. PowerShell is a scripting environment used for
administrative control of Windows systems. Python is a general purpose
programming language that's widely used for system administration, and the
OpenSSL library is an open-source implementation of the Transport Layer Security
or TLS protocol. The fourth category of tools you need to know about are Packet
Capture and replay tools. The tcpdump tool is a command-line utility used to
capture and record network traffic, while Wireshark is a graphical tool that offers
similar capabilities. The tcpreplay tool may be used to replay network traffic that
was captured using tcpdump or Wireshark. The fifth category of tools you need to
know are forensic tools. These include tools like dd, the disk dump command that
is used to create forensic images of hard drives, the memdump command that is
used to save the current contents of the computer's memory, the WinHex editor
that's a hexadecimal editor useful in forensics, and the FTK and Autopsy
suites, which provide high-end forensic capabilities. Finally, you also need to be
familiar with the use of exploitation frameworks to automate penetration
tests and other security activities, password crackers to attempt brute force and
other attacks against password files, and data sanitization tools that are used to
permanently purge information from media. Those are the important tools that
you need to be familiar with as you're preparing for the Security+ exam.
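As a small illustration of what a port scanner like Nmap does under the hood, here's a minimal Python sketch that checks whether a few TCP ports accept connections. The port list is arbitrary, and scanme.nmap.org is the Nmap project's designated practice target; only scan systems you're authorized to test.

```python
# Illustrative TCP port check -- the simplest idea behind a port scanner.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (22, 80, 443):
    state = "open" if port_open("scanme.nmap.org", port) else "closed/filtered"
    print(f"Port {port}: {state}")
```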
Objective 4.2
- [Instructor] Objective 4.2 of the Security+ exam is that you be able to summarize
the importance of policies, processes and procedures for incident response. Every
organization should have a carefully defined incident response plan that describes
how the organization will identify and respond to security incidents that take
place. The standard incident response process has six steps. The first step is
preparation. This is the work that we put in before an incident takes place to
ensure that we have the right policies and procedures outlined, resources
available, and people trained to respond to a security incident should one
occur. The second step of the incident response process is identification. This is
when an organization's security operation center or other resources identify that
a security incident is taking place. After detecting a security incident, we move
into phase three of incident response, containment. During the containment
phase, the priority is to isolate the damage caused by the security incident to limit
its spread. This may involve disconnecting systems from the network or taking
other actions to ensure that the damage caused by a security incident is
contained. After containing the incident, we move on to the fourth phase of
incident response, eradication. In the eradication phase, we're negating the
effects of the security incident and removing compromised resources from our
networks. After eradication is complete, we move on to the fifth stage of the
process, recovery, where the organization restores and resumes normal
operations. Finally, the incident response process concludes with a lessons
learned session, where all the major players involved in the incident response
effort gather together, physically or virtually, to review the security incident and
identify lessons they can draw from it to improve the organization's ability to
detect and respond to future incidents. Exercises play an important role in
incident response because they help an organization's incident response
team prepare for future incidents. Tabletop exercises simply gather
people around a physical or virtual conference table to discuss their roles in the
incident response process and perhaps walk through a scenario that's provided by
a facilitator and describe how they would respond if that scenario were to
actually occur. A structured walkthrough is similar in that it gathers people
together and asks them to review the incident response procedures that govern
their own activities and ensure that they're current and up to date. A simulation
exercise goes a step further and actually creates a fictitious security incident that
the team then responds to as they would if it were an actual incident. You can
think of a simulation exercise as a fire drill for incident response. Incident
responders should be familiar with a number of different attack frameworks that
describe how intruders typically operate, because the knowledge of these
tools and techniques used by hackers is invaluable when you're attempting to
contain, eradicate and recover from their activities. Some of the common attack
frameworks used in these approaches are the MITRE ATT&CK framework, the
diamond model of intrusion analysis, and the Lockheed Martin Cyber Kill
Chain. When responding to a security incident, it's very important that you
perform solid stakeholder management. This means ensuring that employees,
subject matter experts, management, law enforcement, the media, and the many
other people who will be interested in the outcome of the security incident are
communicated to appropriately and that each has the ability to interact with the
incident response team in an appropriate manner. Incident response processes
are also tightly aligned to an organization's continuity of operations planning
efforts. So the incident response team should also be well versed in the
organization's business continuity plan that's designed to ensure continued
operations in the event of an emergency and the disaster recovery plan, which is
designed to quickly recover operations in the event they are disrupted. All of the
records generated during an incident response effort should be governed by the
organization's retention policy that describes how long those records will be
preserved. Those are the important things that you need to know about incident
response policies, processes and procedures as you prepare for the Security+
exam.
Objective 4.3
- Objective 4.3 of the Security+ exam is that when given an incident, you're able
to utilize appropriate data sources to support an investigation. Incident
responders have a large amount of information available to them, including the
output of vulnerability scans. This output is crucial to helping responders
understand how an intruder might have gained access to systems and identify
other systems that might be vulnerable to the same exploits. Security information
and event management systems play an important role in facilitating incident
response because they act as centralized aggregation points for all of an
organization's security logs and other information. Incident response teams can
use these SIEM systems to conduct trend analysis, respond to alerts, and
correlate events happening across many different systems. SIEMs are driven by
access to log files. These log files might come from network devices, operating
systems, applications, security devices, web servers and web applications, DNS
servers, authentication servers, and many other systems throughout the
organization that generate large amounts of information. Now, in isolation, these
individual sources of information might not be so useful, but when they're
aggregated and correlated by a SIEM, they might provide crucial insight into the
evolution of a security incident. All of these devices report back to the SIEM using
standardized technologies, such as the syslog protocol, which provides an open
standard for the communication of security log entries to the SIEM. Incident
response teams should also have access to information about the network. This
might include bandwidth monitors that show the levels of network activity on
different circuits, NetFlow logs, which show which systems were
communicating with each other, when, and how much data was exchanged, and
the output of protocol analyzers that can show the full contents of packets that
traveled over the network. The final source of information that's important to
incident responders is the metadata, the header information attached to email
messages, web exchanges, files, and mobile communications. By digging into this
metadata, incident responders can often find important clues about the origin of
different files and communications. Those are the important things that you need
to know about using data sources to support an investigation, as you prepare for
the Security+ exam.
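To illustrate how a device can report events to a centralized log server over the syslog protocol, here's a minimal sketch using Python's standard logging library. The log server address and the message are placeholders, not details from the course.

```python
# Illustrative syslog forwarding with Python's standard library; the server
# address "loghost.example.com" is hypothetical. UDP 514 is the traditional
# syslog port.
import logging
import logging.handlers

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
logger.addHandler(handler)

logger.info("User alice failed login from 203.0.113.54")
```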
Objective 4.4
- Objective 4.4 of the Security+ exam is that when given an incident, you be
able to apply mitigation techniques or controls to secure an environment. The
focus of this objective is recovery, restoring an organization's operations, not only
to the state that they were in before an incident, but to an even more secure
state that isn't vulnerable to the same type of incident. One of the most
important things that you can do in response to a security incident is
reconfiguring endpoint security solutions to avoid the same type of incident from
occurring in the future. This might involve setting up a quarantine where endpoint
security is tested before a system is allowed to join the network. In this case,
devices that don't meet security standards are placed on the quarantine
network, where they can access the resources required to update their security
configuration, but they don't have access to any other network
resources. Endpoint security can also be used to implement application
control that uses either an approved list of authorized applications or a block list
of unapproved applications to limit the software that can be run on a
device. During the recovery phase, incident responders might also make other
configuration changes to security devices, such as updating firewall
rules, reconfiguring mobile device management policies, implementing data loss
prevention technology, updating or revoking digital certificates, and
implementing or reconfiguring content filtering solutions that limit the web
resources that users may access. When configuring network
security, organizations should consider strategies of segmentation, isolation, and
removal. Network administrators use segmentation to divide networks into
logical segments, grouped by types of users or systems. In incident response,
segmentation allows you to contain the spread of an attack from compromised
systems without alerting the attacker to the fact that you've detected their
activity. To perform this type of containment, you can create a new virtual LAN
called a quarantine VLAN and then move impacted systems to the quarantine
VLAN with access controls that prevent those compromised systems from
communicating with other systems on your network. Isolation takes
segmentation to the next level. Instead of simply moving the compromised
systems to a different VLAN, they're moved to a network that is completely
disconnected from the rest of the network. Depending upon the isolation strategy
being used, those systems may still be able to communicate with each other and
are still connected to the internet so that they can communicate with the
attacker. Removal completely disconnects impacted systems from any
network. They're completely unable to communicate with other systems or the
internet and the attacker is cut off from access to those systems because they're
totally isolated from the network. The last important technology that you need to
be familiar with for incident response is security orchestration, automation, and
response, or SOAR technology. SOAR platforms use runbooks and playbooks to
automate responses to security incidents. They're tightly integrated with security
information and event management, or SIEM solutions, so that when a SIEM
product detects a potential security incident, the SOAR solution can automatically
implement a rapid response that protects the organization from an attack. Those
are the important things that you need to know about mitigation techniques used
to secure an environment during a security incident.
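The playbook idea behind SOAR can be pictured as a simple mapping from alert types to automated response actions. The sketch below is purely illustrative; the alert names and response functions are hypothetical, and real SOAR platforms are far richer than this.

```python
# Illustrative SOAR-style playbook: map a SIEM alert type to an automated action.
def isolate_host(alert):
    print(f"Moving {alert['host']} to the quarantine VLAN")

def disable_account(alert):
    print(f"Disabling account {alert['user']}")

PLAYBOOK = {
    "malware_detected": isolate_host,
    "impossible_travel": disable_account,
}

def handle_alert(alert):
    action = PLAYBOOK.get(alert["type"])
    if action:
        action(alert)
    else:
        print("No automated response defined; escalate to an analyst")

handle_alert({"type": "malware_detected", "host": "workstation-42"})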
Objective 4.5
- [Educator] Objective 4.5 of the Security+ exam is that you be able to
explain the key aspects of digital forensics. Security professionals must be
familiar with the standards of documentation for evidence collected during an
incident response effort. This includes ensuring that evidence collected will be
admissible in court by maintaining a chain of custody that documents the process
used to collect the evidence and every person who came in contact with that
evidence from the time it was collected until it was presented in court. This
includes maintaining comprehensive timelines of the sequence of events that
contain time stamps and a time offset if system clocks are not synchronized. It
also includes tagging evidence with attributes that are important during the
investigation, documenting interviews perhaps using video recordings, and
maintaining event logs that show what happened during an incident response
effort. Incident responders must also be familiar with the legal hold process that
requires that an organization preserve any records that they believe might be
used in a court proceeding. As you acquire different types of digital evidence, you
should consider the order of volatility, that is, how likely it is that evidence will be
destroyed. More volatile evidence should be gathered before less volatile
evidence. The most volatile category of evidence is the contents of random
access memory, or RAM, which will be destroyed whenever power is removed
from a computer. After gathering the contents of memory, you should move on to
gathering files that are stored on disk, first considering files that are kept in
temporary spaces that might be overwritten quickly and then gathering files that
are written to more permanent storage locations. There are many different
sources of evidence that you might use in a forensic investigation. These include
endpoint devices, servers, network devices, applications, and all of the other
artifacts that might provide crucial evidence to the investigation. Teams creating
forensic procedures should also consider the different circumstances when
gathering evidence from on-premises and cloud-based resources. Teams typically
have unrestricted access to on-premises resources but they might have more
difficulty gathering evidence from cloud service providers. It's important to
understand before an incident occurs the capabilities and limitations of cloud
service providers and their willingness to provide support for forensic
investigations. Preserving evidence is of the utmost importance and incident
responders are responsible for ensuring the integrity of evidence that they
gather. One of the primary mechanisms used to demonstrate the integrity of
evidence is hashing. Hash functions can be used to create a cryptographic
checksum of a file. These hash values can then later be used to demonstrate that
a file has not been modified from the time it was collected until the time it was
used as evidence. Digital signatures may also be added to evidence to provide
non-repudiation. Those are the important things that you need to know about
digital forensics as you prepare for the Security+ exam.
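Here's a minimal sketch of the hashing step described above: computing a SHA-256 checksum of an evidence file so the same calculation can be repeated later to demonstrate the file hasn't changed. The filename is hypothetical.

```python
# Illustrative evidence integrity check: SHA-256 checksum of a file.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("evidence.img"))  # record this value at collection time
```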
Objective 5.1
- The fifth domain of the Security+ exam is governance, risk, and
compliance. This domain has five objectives. The first of these, objective 5.1, is
that you be able to compare and contrast various types of security
controls. Security professionals use a variety of different categories to group
similar security controls. We'll talk about two different ways. First, we'll discuss
grouping controls by their purpose or type, whether they're designed to
prevent, detect, correct, deter, or compensate for security issues. Then we'll
discuss them by their mechanism of action, the way that they work. This groups
them into the categories of technical, operational, and managerial
controls. Preventive controls are designed to stop a security issue from occurring
in the first place. A firewall that blocks unwanted network traffic is an example of
a preventive control. Detective controls identify potential security breaches that
require further investigation. An intrusion detection system that searches for
signs of network breaches is an example of a detective control. Corrective
controls remediate security issues that have already occurred. If an attacker
breaks into a system and wipes out critical information, restoring that information
from backup is an example of a corrective control. Deterrent controls seek to
prevent an attacker from attempting to violate security policies. Vicious guard
dogs and barbed wire fences are examples of deterrent controls. Physical controls
are security controls that impact the physical world. Examples of physical security
controls include fences, perimeter lighting, locks, fire suppression systems, and
burglar alarms. The final type of security control commonly used is the
compensating control. Compensating controls are designed to fill a known gap in
a security environment. For example, imagine that a facility has a tall barbed wire
fence surrounding it, but then has one gate in the fence with a turnstile that
allows authorized individuals access. One risk with this approach is that someone
might simply hop over the turnstile. The organization might place a guard at this
gate to monitor individuals entering the facility as a compensating control. The
second way that we can categorize controls is by their mechanism of action. This
groups controls as technical, operational, or managerial controls. Technical
controls are what the name implies: the use of technology to achieve security
objectives. Think about all the components of an IT infrastructure that perform
security functions. Firewalls, intrusion prevention systems, encryption, data loss
prevention, and antivirus software are all examples of technical security
controls. Operational controls include the processes that we put in place to
manage technology in a secure manner. These include many of the tasks that
security professionals carry out each day, such as user access reviews, log
monitoring, background checks, and conducting security awareness training. Now
it's sometimes a little tricky to tell the difference between technical and
operational controls. If you get an exam question on this topic, one trick is to
remember that operational controls are carried out by individuals while technical
controls are carried out by technology. For example, a firewall enforcing rules is a
technical control, while a system administrator reviewing firewall logs is an
operational control. Managerial controls are focused on the mechanics of the risk
management process. For example, one common management control is
conducting regular risk assessments to identify the threats, vulnerabilities, and
risks facing an organization or a specific information system. Other management
controls include conducting regular security planning and including security
considerations in an organization's change management, service acquisition, and
project management methodologies. Those are the important things that you
need to know about security controls as you prepare for the Security+ exam.
Objective 5.2
- [Instructor] Objective 5.2 of the Security+ exam is that you be able to explain the
importance of applicable regulations, standards, or frameworks that impact an
organization's security posture. This includes knowing the regulations,
standards, and legislation that apply to your organization. You'll need to know all
of the different national, territory, or state laws that may apply in the jurisdictions
where you operate. Two important ones that are specifically mentioned in the
exam objectives are the European Union's General Data Protection
Regulation, GDPR, which regulates the handling of personal
information belonging to European Union residents, and the Payment Card
Industry Data Security Standard, PCI DSS, which is a private regulation that applies
to credit card information. You should also note key security frameworks used by
security professionals as they configure and protect systems. These include the
security benchmarks and configuration standards from the Center for Internet
Security, CIS, and the risk management framework and cybersecurity framework
available from the National Institute of Standards and Technology, NIST. You
should also be familiar with the key standards from the International Organization
for Standardization. These ISO standards all have numbers and you need to know
four of them as you prepare for the Security+ exam. ISO 27001 is a standard for
information security management. ISO 27002 provides a reference set of
information security controls. ISO 27701 contains standards for information
privacy and ISO 31000 is a standard covering risk management. The Cloud
Security Alliance provides a cloud control matrix and reference architecture
useful for security professionals working in the cloud. You should also be familiar
with the benchmarks and secure configuration guides available from the vendors
who create the operating systems, servers, and network devices used in your
organization. You'll also need to be familiar with the ways used to verify the
security standards of cloud service providers that your organization relies
upon. The most common way to do this is through service organization control
audits. In particular, SOC 2 audits are designed to perform detailed testing of a
service provider's confidentiality, integrity, availability, and privacy controls. The
reports from SOC 2 audits often contain sensitive information and they're not
widely shared unless you're willing to sign a non-disclosure agreement with the
cloud service provider. There are two different types of audit reports that you can
receive from a SOC 2 audit. Type one reports simply describe the controls that a
service provider has in place and report the auditor's opinion on the suitability of
those controls. In a type one report, the auditor does not give an opinion on
whether the controls are working in an effective way. Type two reports contain
the same opinions as type one reports, but go further and they include the results
of the auditor actually testing the controls to verify that they're working
properly. The tests used in a type two report must be run over a period of time,
which is typically six months. Those are the important things that you need to
know about regulations, standards, and frameworks as you prepare for the
Security+ exam.
Objective 5.3
- [Instructor] Objective 5.3 of the Security+ exam is that you be able to explain the
importance of policies to organizational security. Let's begin by talking
about some personnel security policies. An acceptable use policy outlines the
ways that employees are permitted to use the technology assets of an
organization. Job rotation policies move personnel in sensitive positions through a
series of different jobs to give them different experiences and also to prevent
someone from remaining in a single sensitive position for a long period of
time where they might engage in illicit activity. Mandatory vacation policies
require that individuals take time away from the office where they don't have
access to systems to provide an opportunity to uncover any fraudulent
activity that might be taking place. Separation of duties policies say that one
person should not have two different permissions that when combined together
allow them to perform sensitive actions. The most common example of this is the
ability to create a new vendor in an accounts payable system and the ability to
issue a check to that vendor. The least privilege policy says that
individuals should have the minimum set of permissions necessary to carry out
their job functions. Clean desk policies are designed to ensure that sensitive
papers aren't left lying in the open where someone might observe
them. Organizations use background checks to check for criminal history before
hiring new employees as part of the onboarding process. Also during the
onboarding process, personnel are asked to sign non-disclosure agreements, or
NDAs, where they agree to maintain the confidentiality of sensitive information
that they encounter. During an organization's off-boarding process, an
employee's permissions should be revoked, and they should be reminded of their
obligations under the non-disclosure agreement. Organizations should conduct
user training on a regular basis to remind employees of their security
responsibilities and the role that each person plays in keeping the organization
secure. This may include role-based training, computer-based training, phishing
campaigns and simulations, and games such as capture the flag exercises that help
build security skills. It's important to use a diversity of training techniques to make
sure that your message gets across. The next set of policies that you should have
in place are third-party risk management policies that ensure that the vendors
and business partners in your supply chain are doing what they need to do to
maintain the security of information and systems that they manage on your
behalf. There are a variety of different agreements that you can use to manage
vendors. The first is a service level agreement, or SLA, which outlines the
performance expectations that you have of the vendor and the consequences if
they fail to meet those standards. Memorandums of understanding, or MOUs, are
informal agreements, usually used between business units inside an organization,
to outline the relationship between those units. Business partnership
agreements, or BPAs, are used when you're building a new partnership with an
external organization to outline the parameters of that partnership. As you're
working with vendor equipment, you should also watch for the end of life,
EOL, and end of service life, EOSL, announcements to ensure that you're
continuing to use equipment that's supported by the vendor. There are three
important data policies that you should have in place in your organization. Data
classification policies outline the requirements for classifying data. Data
governance policies outline the procedures that the organization will use to
manage the data life cycle and data retention policies outline what data the
organization will keep and the period of time it will maintain different types of
information. Credential policies outline the requirements for employees and third
parties with access to systems to handle passwords and other credentials. These
policies should specifically address device-based credentials, the use of service
accounts and the protections around administrator or root accounts. Finally,
organizations should have strong change management, change control and asset
management policies in place to ensure that systems are maintained
properly. Those are the important things that you need to know about policies as
you prepare for the Security+ exam.
Objective 5.4
- [Instructor] Objective 5.4 of the Security+ exam is that you be able to
summarize risk management processes and concepts. Organizations face a variety
of different kinds of risk. Some of these are external to the organization, like
hackers, and some are internal to the organization, such as malicious
employees. You need to watch for the risks associated with legacy systems, the
theft of intellectual property, and software licensing compliance concerns. When
you face a risk, there are four risk management strategies that you can adopt to
manage that risk. The first of these is risk mitigation. Risk mitigation controls
reduce the likelihood or impact of a potential risk if it should materialize. The
second risk management strategy is risk transference. Risk transference shifts
some of the risk to an outside organization. The most common example of risk
transference is the purchase of an insurance policy where the insurance provider
will cover the financial loss that your organization will experience if a risk
occurs. The third risk management strategy is risk avoidance. This is changing your
business practices to make a risk irrelevant to your organization. The fourth
strategy, risk acceptance, involves management acknowledging that a risk
exists, but deciding to continue business, despite that risk. An organization
should record the risks that it's aware of in a risk register that logs risks and the
risk management strategies used to address different risks. During a risk control
assessment, assessors might create a risk matrix or heat map that shows which
risks are most likely to affect the organization and cause the most damage. This
may be done by engaging an outside provider, or by performing a self-assessment. When managing risk, an organization has to make some decisions
about its risk tolerance. This is the amount of risk that it's willing to accept. As
they're performing this assessment, they first look at the inherent risk facing
them. Inherent risk is the risk that exists because of the way that they do
business. The organization then implements controls to reduce that inherent
risk. The risk that's leftover after the implementation of controls is the residual
risk. And implementing controls can sometimes create new risks generated by the
controls themselves. You can determine the total risk facing an organization by
beginning with the inherent risk, then determining the residual risk, and adding
on the control risk. The goal of risk management activities is to ensure that the
total combination of residual and control risk is within the organization's risk
appetite. When you're performing risk assessments, you can use either a
qualitative or quantitative approach. In a qualitative approach, you use subjective
categories like low, medium, and high to rate the likelihood and impact of each
risk. When you use a quantitative approach, you use numeric values to perform
that analysis. You determine the impact of a risk by calculating the single loss
expectancy, the amount of financial damage that would occur if the risk
materialized. You then compute the likelihood of a risk by determining the
annualized rate of occurrence, the number of times that you expect the risk to
materialize each year. To get a measure of overall risk, you multiply the single loss
expectancy by the annualized rate of occurrence to determine the annualized loss
expectancy. When you're performing this type of analysis, you should prepare for
all types of disasters, including environmental and man-made disasters, and those
from internal and external sources. During the risk management process, you
conduct a business impact analysis, or BIA. This analysis uses a number of
metrics to determine how well-prepared an organization is to recover from a
disaster that disrupts their normal operations. This includes determining the
recovery time objective, or RTO, which is the amount of time that the
organization can tolerate an outage, and the recovery point objective, or
RPO, which is the amount of data that the organization is willing to accept the loss
of in the event of a disruption. As you conduct this analysis, you should also
determine how often each piece of equipment is expected to fail. This is
determined using the mean time between failures, or MTBF, and you should also
determine the amount of time that it normally takes to bring the equipment back
after a failure, which is the mean time to repair, or MTTR. All of this should be
documented in your organization's disaster recovery plan, which identifies
mission essential functions and the systems that are critical to those functions. It
also performs a risk assessment of those functions, and then identifies recovery
plans that are designed to restore service after a disaster. Those are the
important things that you need to know about risk management as you prepare
for the Security+ exam.
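A quick worked example of the quantitative formulas above, with hypothetical figures: if a single occurrence of a risk would cause $250,000 in damage (the SLE) and you expect it to happen once every five years (an ARO of 0.2), the annualized loss expectancy is $50,000.

```python
# Illustrative quantitative risk calculation: ALE = SLE x ARO (figures hypothetical).
single_loss_expectancy = 250_000       # dollars of damage per occurrence
annualized_rate_of_occurrence = 0.2    # expected occurrences per year (once every 5 years)

annualized_loss_expectancy = single_loss_expectancy * annualized_rate_of_occurrence
print(f"ALE = ${annualized_loss_expectancy:,.0f} per year")  # ALE = $50,000 per year
```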
Objective 5.5
- [Instructor] Objective 5.5, the final objective covered on the Security+ exam is
that you'll be able to explain privacy and sensitive data concepts in relation to
security. You should be able to explain the organizational consequences of
privacy and data breaches. This of course includes the financial damage that
might occur from the fines that you experience, but you also need to consider
reputational damage, the impact of identity theft on your customers, employees,
and other stakeholders, and the potential loss of intellectual property. In many
cases, you'll need to notify affected individuals of a security breach. You should
understand the escalation procedures used for data breaches within your
organization and the requirements that you may face for public notifications and
disclosures. Every organization should perform data classification that looks at
categories of sensitive information and provides the handling controls required
for those categories. Organizations use a variety of terms for their data
classifications. Common schemes include levels like public, private, sensitive,
confidential, critical, and proprietary. You may also have very specific
classification levels for different types of personal information that you handle
about individuals. The most general of these is personally identifiable information
or PII, which is any information that uniquely identifies an individual person. You
might also have categories for health information, financial information,
government data, and customer data. Organizations should use privacy-enhancing technologies to better protect sensitive information. This includes
following the principle of data minimization, which says that you should only keep
the data that's absolutely necessary for your business. You should also use data
masking approaches to remove sensitive elements from information, tokenization
to replace sensitive elements with alternative values that can be reversed using a
lookup table, and anonymization and pseudo-anonymization techniques that take
personally identifiable information and remove all the elements that make it
personally identifiable. There are a number of different roles and responsibilities
related to data handling. The data owner in an organization is a senior
executive who bears overall responsibility for a data element. The data owner is
responsible for ensuring that the organization follows its own data handling
practices and is accountable for maintaining the security and privacy of data. The
data owner often delegates some of their authority to data custodians and data
stewards who carry out the day-to-day activities of data handling. Under GDPR
and other privacy regimes, there are some special terms that you need to know. A
data controller is the organization that determines why and how personal
information is processed. A data processor is a third-party
organization that handles data on behalf of the data controller. Every organization
that handles personal data under GDPR is required to appoint a data protection
officer or DPO. The DPO is an individual within the organization who is
responsible for implementing privacy policies and serving as the organization's
main contact for privacy issues. It's important to outline security controls that
follow information throughout its life cycle. From the time that the information is
initially created or collected until it's eventually destroyed or archived. Privacy
and security professionals should perform impact assessments to determine any
points during the information life cycle where information may be exposed to
unauthorized use. Finally, organizations should post a clear privacy policy that
provides stakeholders with information that they need to know about how the
organization is handling their information in accordance with the terms of any
agreements that apply. Those are the important things that you need to know
about privacy and sensitive data as you prepare for the Security+ exam.
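Here's a minimal, purely illustrative sketch of two of the privacy-enhancing techniques described above: masking, which irreversibly hides part of a value, and tokenization, which replaces a value with a random token that can be reversed through a lookup table. The card number and token format are invented for the example.

```python
# Illustrative data masking and tokenization (values and token format hypothetical).
import secrets

def mask_card_number(card_number: str) -> str:
    """Show only the last four digits of a payment card number."""
    return "*" * (len(card_number) - 4) + card_number[-4:]

token_vault = {}  # lookup table mapping tokens back to the original values

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token kept in the lookup table."""
    token = secrets.token_hex(8)
    token_vault[token] = value
    return token

def detokenize(token: str) -> str:
    return token_vault[token]

card = "4111111111111111"
print(mask_card_number(card))          # ************1111
tok = tokenize(card)
print(tok, "->", detokenize(tok))
```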
Final thoughts
- [Instructor] We covered a lot of material in this course over the five domains of
the Security+ exam: attacks, threats, and vulnerabilities; architecture and design;
implementation; operations and incident response; and governance, risk, and
compliance. Remember to use this audio course as a quick review before you take
the test. If you're wrapping up your studies, I now recommend that you try taking
a few practice exams and then register for the real thing. Good luck on the
Security+ exam.