SCHOOL OF INFORMATION SCIENCE AND TECHNOLOGY
UNIVERSITY EXAMINATION
BACHELOR DEGREE IN SOFTWARE ENGINEERING/INFORMATION TECHNOLOGY
SOEN 424/BIT 410: INFORMATION SYSTEMS AUDIT
Wanjiru Brian Meja
IN16/20415/17
YEAR FOUR SEMESTER TWO
CAT
QUESTION ONE
Giving justification, state why it is important for an organization to incorporate risk management
into its strategic planning. [10 Marks]
Challenge Assumptions to Identify Unknown Risks
Discovering enterprise risks to strategic goals involves a process. In order to unearth these
unknown risks, management should first identify the key assumptions that make the current
strategy successful. Protiviti suggests asking questions like, "What needs to go right for the
strategy to be successful?" These assumptions include, but are not limited to, consumer preference
trends, brand recognition, regulation, capital accessibility, capabilities of competitors, and
current technology. Once the key assumptions are identified, the next step in the process is to
develop contrarian statements that make the assumptions invalid. Lastly, management should
develop an implications statement designed to identify changes or events that could make the
assumptions invalid and then develop a response to that event.
The thought paper used the 2008 financial crisis and financial institutions as an example because
the institutions that focused on the unknowns were able to identify a change in the market 12-14
months ahead of their peers. This gave them the ability to react in time and stay in business.
Here is how challenging assumptions and thinking about contrarian views would be helpful in
assessing the lending strategies that got many banks in trouble.

• Lending Strategy: Lend to the low-income housing sector at an accelerated pace and high
volume, with the intent to sell these loans as collateralized mortgage obligations (CMOs).

• Key Assumptions: Increasing and stable home prices, constant capital accessibility, and
continued demand for CMOs.

• Contrarian Statement: The housing market collapses nationwide, rendering CMOs worthless.

• Implications Statement: Monitor housing markets related to the loan portfolio and decrease
or increase investment in CMOs as the indicators warrant.
Adapt to an Ever-Changing Environment
The longer a business is in operation, the more likely it will experience a fundamental change in
its operating environment. In order to deal with these changes, businesses need to become more
adaptable. By becoming more adaptable they become more resilient. An intelligence-gathering
process that is aligned to strategy will give direction as to what to pay attention to. The Protiviti
thought paper uses the bookstores Borders and Barnes & Noble to demonstrate how aligning
the intelligence-gathering process to strategy can help businesses adapt to a fundamental
change.
In 2003, Borders' strategic growth initiative was to expand brick-and-mortar stores in the US
and abroad. The key assumptions were that Borders' extensive inventory and alternative types
of media (CDs and DVDs) would make Borders "the place to go". Borders did not anticipate
a consumer preference shift to online shopping and downloadable media like MP3s and
e-books. However, Barnes & Noble did see the consumer shift coming, as well as the emerging
technology, and decided to change its strategy in order to stay relevant. Barnes & Noble
decreased its square footage, developed an e-reader, offered titles online, and decreased its
inventory in CDs and DVDs. Barnes & Noble was able to adapt to the new market because it
anticipated and reacted to a fundamental change.
The thought paper also discusses the concept of “early mover status”. Like Barnes & Noble,
early movers not only recognize changes but also have the ability to react to changes by:
1. Adjusting their strategy
2. Creating new plans and processes
3. Promoting a culture that expedites information that can impact or disrupt the status
quo
Non-early movers tend to fear change and deny that a change is coming. They focus on today
instead of looking ahead to tomorrow's new opportunities and risks.
Manage Your Most Valuable Asset: Your Reputation
The thought paper explains that reputation risk can be avoided by risk management and
mitigated by crisis management. For instance, when assessing reputation risk, management
should consider the “big picture” approach by assessing risks to the entire value
chain. Executives should not only consider risks that directly affect their company but also risks
that could directly affect their suppliers, their suppliers' suppliers, distributors and retail
partners. When consumers are injured by a faulty part from a supplier, they don't blame the supplier
who is in another country. Rather, they blame the company whose name is on the product. In
assessing end-to-end reputation risk, executives should consider environmental conditions that
could stop the flow of materials and inventory, labor conditions that could be embarrassing to the
company, or inadequate controls over raw materials that could be toxic to consumers. If these
conditions exist and continue, they will eventually be exposed and your reputation will suffer
because of it.
The other side of reputation risk is crisis management. Crisis management involves a quick
response time coupled with honest and open communication to consumers, designed to maintain
customers' faith in the brand. Companies should have a crisis management team designated to
act quickly with a pre-determined plan when needed. For instance, Tylenol responded to its crisis
in 1982 perfectly: it responded quickly, informed consumers and acted with consumers' best
interests at heart. As a result, Tylenol was able to protect its brand and reputation.
Promote a Risk Intelligent Culture
There are three aspects to developing a risk intelligent culture:
1. Having and listening to a contrarian voice. The thought paper uses Washington
Mutual as an example. In 2007, two former chief risk officers (CROs) attempted to limit
risky lending practices. When the CROs attempted to warn the other executives, they
were ignored at first and then eventually isolated. A strategically well-placed CRO with
a direct line to the board of directors could have changed Washington Mutual's lending
practices before the financial crisis.
2. Balancing value creation and value protection. As companies pursue value, they accept
risks. But how do companies know when they are taking on too much risk? In order to
answer this question, management should understand the risk appetite of the
company. In doing this, management should develop a risk appetite statement, approved
by the board of directors, that creates a boundary for management to operate
within. The thought paper mentions Lehman Brothers as a perfect example, because
its executives defined their risk appetite but decided to ignore it and accept more risk than
the company could handle.
3. Develop forward-looking key risk indicators. Retrospective performance indicators
are great for performance management, but these indicators, also known as
"lag indicators", only tell us about the past. Companies should devote time to looking
forward: assessing the business environment and how it is changing, using scenario
analysis to determine whether plans are in place, implementing a risk identification and
assessment process, and defining situations in which the current business plan is no longer working.
Center Board’s Risk Focus on Critical and Emerging Risks
Critical and emerging risks should be the focal point of the board when it comes to the risk
management process. The thought paper states that the board should place risks into five
categories:
1. Governance Risks: board composition, CEO selection, executive compensation.
2. Critical Enterprise Risks: strategic risks.
3. Board-Approval Risks: major decisions and activities that require board approval and a
shared understanding between the board and management.
4. Business Management Risk: operational, compliance and financial risks.
5. Emerging Risks: a new competitor, new technology, changing regulation.
Placing risks into these categories narrows the board's scope so it can concentrate on the bigger
issues instead of all the risks that each division faces. The board should ensure that
management has an effective risk management process in place that (1) identifies, assesses and
manages critical and emerging risks; and (2) communicates these risks to the board.
QUESTION TWO
Discuss any FOUR factors that may compel a network administrator to implement network
monitoring and analysis in a network. [10 Marks]
Choosing what to monitor with network monitoring software is just as important as deciding to
implement one in your business. You can use network monitoring to track a variety of areas in a
network, but monitoring usually focuses on the following four areas:

• Bandwidth use: Monitoring network traffic, how much bandwidth your company uses
and how effectively it's used helps ensure that everything runs smoothly. Devices or
programs that hog your bandwidth may need to be replaced.

• Application performance: Applications running on your network need to function
properly, and network monitoring systems can test to be sure that they do. Network
monitoring systems can test the response time and availability of network-based
databases, virtual machines, cloud services and more to be certain that they are not
slowing down your network.

• Server performance: Email servers, web servers, DNS servers and more are the crux of
many functions in your business, so it's essential to test the uptime, reliability and
consistency of each server (see the sketch after this list).

• Network configuration: Network monitoring systems can supervise many kinds of
devices, including cell phones, desktops and servers. Some systems include automatic
discovery, which allows them to log and track devices continuously as they are added,
changed or removed. These tools can also segregate devices according to their type,
service, IP address or physical location, which helps keep the network map updated and
helps plan for future growth.
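
As a concrete illustration of the server-performance checks described above, here is a minimal
Python sketch of a monitor that tests whether a service is reachable and how quickly it answers.
The hostnames and ports are hypothetical placeholders; a real monitoring system would read its
target list from configuration and run the checks on a schedule.

```python
import socket
import time

def check_service(host: str, port: int, timeout: float = 3.0):
    """Try a TCP connection; report availability and response time in milliseconds."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, (time.monotonic() - start) * 1000
    except OSError:
        return False, None

# Hypothetical targets; a real monitor would load these from its configuration.
services = [("mail.example.com", 25), ("www.example.com", 443), ("ns1.example.com", 53)]
for host, port in services:
    up, latency_ms = check_service(host, port)
    print(f"{host}:{port} is", f"up ({latency_ms:.1f} ms)" if up else "DOWN")
```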
QUESTION THREE
Discuss network authentication and suggest the most effective way of realizing it. [10 Marks]
Put simply, network-level authentication is how a network confirms that users are who they say
they are. It's a system for differentiating legitimate users from illegitimate ones. When a user
attempts to log in to a network, they indicate their identity with a username. A system then
cross-checks the username with a list of authorized users to ensure they are cleared to access the
network.
Yet this process is not sufficient to create a secure system. What if a nefarious party pretends to
be someone else by entering a username that’s not their own? Here’s where secure authentication
methods come in. Authentication is an additional step that verifies the person entering a
username is in fact the owner of that username. Once a user has been authenticated, it’s safe to
allow them access to the network.
As internet technology has evolved, a diverse set of network authentication methods has been
developed. These include both general authentication techniques (passwords, two-factor
authentication [2FA], tokens, biometrics, transaction authentication, computer recognition,
CAPTCHAs, and single sign-on [SSO]) as well as specific authentication protocols (including
Kerberos and SSL/TLS). We’ll now turn to the most common authentication methods, showing
how each one can work for your clients.
1) Password authentication
Anyone who uses the internet is familiar with passwords, the most basic form of authentication.
After a user enters their username, they need to type in a secret code to gain access to the
network. If each user keeps their password private, the theory goes, unauthorized access will be
prevented. However, experience has shown that even secret passwords are vulnerable to hacking.
Cybercriminals use programs that try thousands of potential passwords, gaining access when
they guess the right one.
To reduce this risk, users need to choose secure passwords with both letters and numbers, upper
and lower case, special characters (such as $, %, or &), and no words found in the dictionary. It’s
also important to use long passwords of at least eight characters; each additional character makes
it harder for a program to crack. Short, simple passwords such as “password” (one of the most
common) and “12345” are barely better than no password at all. The most secure systems only
allow users to create secure passwords, but even the strongest passwords can be at risk for
hacking. Security experts have therefore developed more sophisticated authentication techniques
to remedy the flaws of password-based systems.
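
To make these rules concrete, here is a minimal Python sketch of a password-strength check
enforcing the guidelines above (length, mixed case, digits, special characters, no dictionary
words). The small word set and the particular special-character list are illustrative
assumptions, not a complete policy.

```python
import re

# The word set and special-character list below are stand-ins for a real
# dictionary and a real password policy.
COMMON_WORDS = {"password", "letmein", "qwerty", "12345"}

def is_strong(password: str) -> bool:
    if len(password) < 8:
        return False                                  # too short to resist cracking programs
    required = [r"[a-z]", r"[A-Z]", r"\d", r"[$%&!@#^*]"]
    if not all(re.search(p, password) for p in required):
        return False                                  # must mix cases, digits and specials
    return password.lower() not in COMMON_WORDS       # reject common dictionary words

print(is_strong("password"))     # False: no upper case, digit or special character
print(is_strong("Tr4vel$Far"))   # True: long, mixed case, digit and special character
```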
2) Two-factor authentication (2FA)
Two-factor authentication builds on passwords to create a significantly more robust security
solution. It requires both a password and possession of a specific physical object to gain access
to a network—something you know and something you have. ATMs were an early system to
use two-factor authentication. To use an ATM, customers need to remember a “password”—their
PIN—plus insert a debit card. Neither one is enough by itself.
In computer security, 2FA follows the same principle. After entering their username and a
password, users have to clear an additional hurdle to log in: they need to input a one-time code
from a particular physical device. The code may be sent to their cell phone via text message, or it
may be generated using a mobile app. If a hacker guesses the password, they can’t proceed
without the user’s cell phone; conversely, if they steal the mobile device, they still can’t get in
without the password. 2FA is being implemented on an increasing number of banking, email,
and social media websites. Whenever it’s an option, make sure to enable it for better security.
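
One common way the one-time code is generated is the time-based one-time password (TOTP)
algorithm standardized in RFC 6238, which many authenticator apps implement. The sketch below,
using only the Python standard library, shows the core idea; the base32 secret is a hypothetical
example shared between the server and the user's app.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret (RFC 6238 style)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical base32 secret; both sides compute the same code for the same time step.
print(totp("JBSWY3DPEHPK3PXP"))
```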
3) Token authentication
Some companies prefer not to rely on cell phones for their additional layer of authentication
protection. They have instead turned to token authentication systems. Token systems use a
purpose-built physical device for the 2FA. This may be a dongle inserted into the computer’s
USB port, or a smart card containing a radio frequency identification or near-field
communication chip. If you have a token-based system, keep careful track of the dongles or
smart cards to ensure they don’t fall into the wrong hands. When a team member’s employment
ends, for example, they must relinquish their token. These systems are more expensive since they
require purchasing new devices, but they can provide an extra measure of security.
4) Biometric authentication
Biometric systems are the cutting edge of computer authentication methods. Biometrics
(meaning “measuring life”) rely on a user’s physical characteristics to identify them. The most
widely available biometric systems use fingerprints, retinal or iris scans, voice recognition, and
face detection (as in the latest iPhones). Since no two users have the same exact physical
features, biometric authentication is extremely secure. It’s the only way to know precisely who is
logging in to a system. It also has the advantage that users don’t have to bring a separate card,
dongle, or cell phone, nor do they have to remember a password (though biometric
authentication is more secure when paired with a password).
Despite their security advantages, biometric systems also have considerable downsides. First,
they are expensive to install, requiring specialized equipment like fingerprint readers or eye
scanners. Second, they come with worrisome privacy concerns. Users may balk at sharing their
personal biometric data with a company or the government unless there is a good reason to do so.
Thus biometric authentication makes the most sense in environments requiring the highest level
of security, such as intelligence and defense contractors.
5) Transaction authentication
Transaction authentication takes a different approach from other web authentication methods.
Rather than relying on information the user provides, it instead compares the user’s
characteristics with what it knows about the user, looking for discrepancies. For example, say an
online sales platform has a customer with a home address in Canada. When the user logs in, a
transaction authentication system will check the user’s IP address to see if it’s consistent with
their known location. If the customer is using an IP address in Canada, all is well. But if they’re
using an IP address in China, someone may be trying to impersonate them. The latter case raises
a red flag that triggers additional verification steps. Of course, the actual user may simply be
traveling in China, so a transaction authentication system should avoid locking them out entirely.
Transaction authentication does not replace password-based systems; instead, it provides an
additional layer of protection.
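
A minimal Python sketch of this consistency check, assuming a GeoIP resolver is available:
lookup_country here is a stand-in supplied by the caller, and the toy resolver with example IP
addresses exists purely for illustration.

```python
def location_consistent(home_country: str, login_ip: str, lookup_country) -> bool:
    """Compare the country of the login IP with the customer's known location."""
    return lookup_country(login_ip) == home_country

# Toy resolver for illustration only; real systems query a GeoIP database.
def fake_lookup(ip: str) -> str:
    return {"203.0.113.7": "CA", "198.51.100.9": "CN"}.get(ip, "unknown")

# A Canadian customer logging in from a Chinese IP raises a red flag, which
# should trigger extra verification rather than an outright lockout.
if not location_consistent("CA", "198.51.100.9", fake_lookup):
    print("Location mismatch: request additional verification")
```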
6) Computer recognition authentication
Computer recognition authentication is similar to transaction authentication. Computer
recognition verifies that a user is who they claim to be by checking that they are on a particular
device. These systems install a small software plug-in on the user's computer the first time they
log in. The plug-in contains a cryptographic device marker. The next time the user logs in, the marker
is checked to make sure they are on the known device. The beauty of this system is that it’s
invisible to the user, who simply enters their username and password; verification is done
automatically. The disadvantage of computer recognition authentication is that users sometimes
switch devices. Such a system must enable logins from new devices using other verification
methods (e.g., texted codes).
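
Here is a minimal Python sketch of how such a device marker might work, assuming the server
stores only a hash of the random marker handed to the device on first login. The details (marker
length, hash choice) are illustrative assumptions, not a description of any particular product.

```python
import hashlib
import hmac
import secrets

def issue_device_marker():
    """On first login, hand the device a random marker; store only its hash."""
    marker = secrets.token_hex(32)
    stored_hash = hashlib.sha256(marker.encode()).hexdigest()
    return marker, stored_hash

def device_recognized(presented_marker: str, stored_hash: str) -> bool:
    """On later logins, hash the presented marker and compare in constant time."""
    candidate = hashlib.sha256(presented_marker.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)

marker, stored = issue_device_marker()          # first login from this device
print(device_recognized(marker, stored))        # True: the known device returns
print(device_recognized("other", stored))       # False: fall back to texted codes
```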
7) CAPTCHAs
Hackers are using increasingly sophisticated automated programs to break into secure systems.
CAPTCHAs are designed to neutralize this threat. This authentication method is not focused on
verifying a particular user; rather, it seeks to determine whether a user is in fact human. Coined
in 2003, the term CAPTCHA is an acronym for “completely automated public Turing test to tell
computers and humans apart.” The system displays a distorted image of letters and numbers to
the user, asking them to type in what they see. Computers have a tough time dealing with these
distortions, but humans can typically tell what they are. Adding a CAPTCHA enhances network
security by creating one more barrier to automated hacking systems. Nevertheless, they can
cause some problems. Individuals with disabilities (such as blind people using auditory screen
readers) may not be able to get past a CAPTCHA. Even nondisabled users sometimes have
trouble figuring them out, leading to frustration and delays.
8) Single sign-on (SSO)
Single sign-on (SSO) is a useful feature to consider when deciding between device
authentication methods. SSO enables a user to only enter their credentials once to gain access to
multiple applications. Consider an employee who needs access to both email and cloud
storage on separate websites. If the two sites are linked with SSO, the user will automatically
have access to the cloud storage site after logging on to the email client. SSO saves time and
keeps users happy by avoiding repeatedly entering passwords. Yet it can also introduce security
risks; an unauthorized user who gains access to one system can now penetrate others. A related
technology, single sign-off, logs users out of every application when they log out of a single one.
This bolsters security by making certain that all open sessions are closed.
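
As a rough sketch of the idea (not of a real SSO protocol such as SAML or OpenID Connect), the
Python example below shows two applications accepting the same signed token because they share a
verification key. The key and token format are assumptions made purely for illustration.

```python
import hashlib
import hmac

# Both applications trust tokens signed with this shared key; the key and token
# format are assumptions of this sketch, not a real SSO protocol.
SSO_KEY = b"shared-signing-key"

def issue_token(username: str) -> str:
    signature = hmac.new(SSO_KEY, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{signature}"

def accept_token(token: str) -> bool:
    username, _, signature = token.partition(":")
    expected = hmac.new(SSO_KEY, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

token = issue_token("alice")   # the user signs in once, e.g. to the email client
print(accept_token(token))     # the cloud storage app accepts the same token
```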
What are the most common authentication protocols?
Now that we have a sense of commonly used authentication methods, let’s turn to the most
popular authentication protocols. These are specific technologies designed to ensure secure user
access. Kerberos and SSL/TLS are two of the most common authentication protocols.
1) Kerberos
Kerberos is named after a character in Greek mythology, the fearsome three-headed guard dog of
Hades. It was developed at MIT to provide authentication for UNIX networks. Today, Kerberos
is the default Windows authentication method, and it is also used in Mac OS X and Linux.
Kerberos relies on temporary security certificates known as tickets. The tickets enable devices on
a nonsecure network to authenticate each other’s identities. Each ticket has credentials that
identify the user to the network. Data in the tickets is encrypted so that it cannot be read if
intercepted by a third party.
Kerberos uses a trusted third party to maintain security. It works as follows: First, the client
contacts the authentication server, which transmits the username to a key distribution center. The
key distribution center then issues a time-stamped access ticket, which is encrypted by the
ticket-granting service and returned to the user. Now the user is ready to communicate with the
network. When the user needs to access another part of the network, they send their ticket to the
ticket-granting service, which verifies that it’s valid. The service then issues a key to the user,
who sends the ticket and service request to the actual part of the server they need to
communicate with.
This is all invisible to the user, happening behind the scenes. Kerberos has some
vulnerabilities—it requires the authentication server to be continuously available, and it requires
clocks on different parts of the network to always be synchronized. Still, it remains a widespread
and useful authentication technology.
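
To make the ticket idea concrete, here is a heavily simplified Python sketch, not the real
Kerberos protocol: a single shared key stands in for the Kerberos key hierarchy, and an HMAC plus
a timestamp stand in for the encrypted, time-stamped tickets described above.

```python
import hashlib
import hmac
import json
import time

# One shared key stands in for the Kerberos key hierarchy in this sketch.
KDC_KEY = b"kdc-secret-key"

def issue_ticket(username: str, lifetime: int = 300) -> dict:
    """Issue a time-stamped ticket whose integrity is protected by an HMAC."""
    ticket = {"user": username, "expires": time.time() + lifetime}
    payload = json.dumps(ticket, sort_keys=True).encode()
    ticket["mac"] = hmac.new(KDC_KEY, payload, hashlib.sha256).hexdigest()
    return ticket

def verify_ticket(ticket: dict) -> bool:
    """A service honors a ticket only if the MAC matches and it has not expired."""
    claims = {k: v for k, v in ticket.items() if k != "mac"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(KDC_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket["mac"]) and time.time() < ticket["expires"]

ticket = issue_ticket("alice")
print(verify_ticket(ticket))   # True while the ticket is fresh and untampered
```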