History of information technology auditing

Introduction
Information Technology Auditing (IT auditing) began as Electronic Data Processing (EDP) auditing and developed largely as a result of the rise of technology in accounting systems, the need for IT control, and the impact of computers on the ability to perform attestation services. The last few years have been an exciting time in the world of IT auditing as a result of the accounting scandals and increased regulation. IT auditing has had a relatively short yet rich history when compared to auditing as a whole and remains an ever-changing field.
Beginning
The introduction of computer technology into accounting systems changed the way data was
stored, retrieved and controlled. It is believed that the first use of a computerized accounting
system was at General Electric in 1954. During the time period of 1954 to the mid-1960s, the
auditing profession was still auditing around the computer. At this time only mainframes were
used and few people had the skills and abilities to program computers. This began to change in
the mid-1960s with the introduction of new, smaller and less expensive machines. This increased
the use of computers in businesses and with it came the need for auditors to become familiar
with EDP concepts in business. Along with the increase in computer use came the rise of different types of accounting systems. The industry soon realized that it needed to develop its own software, and the first generalized audit software (GAS) was developed. In 1968, the American Institute of Certified Public Accountants (AICPA) had the Big Eight (now the Big Four) accounting firms participate in the development of EDP auditing. The result was the release of Auditing & EDP. The book covered how to document EDP audits and gave examples of how to perform internal control reviews.
Around this time EDP auditors formed the Electronic Data Processing Auditors Association
(EDPAA). The goal of the association was to produce guidelines, procedures and standards for
EDP audits. In 1977, the first edition of Control Objectives was published. This publication is
now known as Control Objectives for Information and Related Technology (CobiT). CobiT is the
set of generally accepted IT control objectives for IT auditors. In 1994, EDPAA changed its
name to Information Systems Audit and Control Association (ISACA). The period from the late
1960s through today has seen rapid changes in technology, from the microcomputer and networking to the Internet, and with these changes came major events that changed IT auditing forever.
The formation and rise in popularity of the Internet and E-commerce have had significant
influences on the growth of IT audit. The Internet influences the lives of most of the world and is
a place of increased business, entertainment and crime. IT auditing helps organizations and
individuals on the Internet find security while helping commerce and communications to
flourish.
Major Events
Four major events in U.S. history have had a significant impact on the growth of IT auditing. These are the Equity Funding scandal, the development of the Internet and E-commerce, the 1998 IT failure at AT&T, and the Enron and Arthur Andersen LLP scandal.
These events have not only heightened the need for more reliable, accurate, and secure systems
but have brought a much needed focus to the importance of the accounting profession.
Accountants certify the accuracy of public company financial statements and add confidence to
financial markets. The heightened focus on the industry has brought improved control and higher
standards for all working in accounting, especially those involved in IT auditing.
Equity Funding Corporation of America
The first known case of misuse of information technology occurred at Equity Funding
Corporation of America. Beginning in 1964 and continuing on until 1973, managers for the
company booked false insurance policies to show greater profits, thus boosting the price of the
stock of the company. Had it not been for a whistleblower, the fraud might never have been caught.
After the fraud was discovered, it took the auditing firm Touche Ross two years to confirm that
the insurance policies were not real. This was one of the first cases where auditors had to audit
through the computer rather than around the computer.
AT&T
In 1998 AT&T suffered an IT failure that impacted worldwide commerce and communication. A
major switch failed due to software and procedural errors and left many credit card users unable
to access funds for upwards of 18 hours. Events such as this bring to the forefront our reliance on
IT services and remind us of the need for assurance in our computer systems.
Enron and Arthur Andersen
The Enron and Arthur Andersen LLP scandal led to the demise of one of the foremost accounting firms, an investor loss of more than 60 billion dollars, and the largest bankruptcy in U.S. history. Arthur Andersen was found guilty of obstruction of justice for its role in the collapse of the energy giant. This scandal was a significant impetus for the Sarbanes-Oxley Act and represented a major failure of self-regulation.
September 11th Terrorist Attacks
The terrorist attacks of September 11, 2001 left the world feeling vulnerable and afraid. Financial markets began to fall, and it became clear that even the most powerful nation in the world was susceptible to attack. September 11th paved the way for the Homeland Security Act and increased regulation and security of the electronic infrastructure.
Future
IT auditing is the future of the accounting profession. We no longer live in a world where a company's dynamics and financial state can be determined without the use of computers. The rapid rise of information technology cannot be denied and must be utilized in order to succeed. IT auditing adds security, reliability and accuracy to the information systems integral to our lives. Without IT auditing we would be unable to shop safely on the Internet or protect our identities. The role IT auditors play may be unknown to most, but it impacts the lives of all. As history continues we will continue to see the rise of this up-and-coming profession.
REF: Senft, Sandra; Manson, Danial P. PhD; Gonzales, Carol; Gallegos, Frederick (2004).
Information Technology Control and Audit (2nd Ed.). Auerbach Publications. ISBN 0849320321
COBIT
The Control Objectives for Information and related Technology (COBIT) is a framework for information technology (IT) management created by the Information Systems Audit and Control
Association (ISACA), and the IT Governance Institute (ITGI). Control Objectives for
Information and related Technology, or COBIT, provides managers, auditors, and IT users with a
set of generally accepted information technology control objectives to assist them in maximizing
the benefits derived through the use of information technology and developing the appropriate IT
governance and control in a company. In its 3rd edition, COBIT has 34 high level objectives that
cover 318 control objectives categorized in four domains: Planning and Organization,
Acquisition and Implementation, Delivery and Support, and Monitoring.
It comprises six elements: management guidelines, control objectives, COBIT framework,
executive summary, audit guidelines and an implementation toolset. All are documented in
separate volumes.
It was developed by the IT Governance Institute and the Information Systems Audit and Control
Foundation in 1992 when the control objectives relevant to information technology were first
identified. The first edition was published in 1996; the second edition in 1998; the third edition
in 2000, and the on-line edition became available in 2003. It has more recently found favour due
to external developments, especially the Enron scandal and the subsequent passage of the
Sarbanes-Oxley Act.
The COBIT mission is “to research, develop, publicize and promote an authoritative, up-to-date,
international set of generally accepted information technology control objectives for day-to-day
use by business managers and auditors.” Managers, auditors, and users benefit from the
development of COBIT because it helps them understand their IT systems and decide the level of
security and control that is necessary to protect their companies’ assets through the development
of an IT governance model.
Computer forensics
Computer forensics is the process of investigating data storage devices and/or data processing equipment (typically a home computer, laptop, server, office workstation, or removable media such as compact discs) to determine if the equipment has been used for illegal, unauthorized, or
unusual activities. It can also include monitoring a network for the same purpose. Computer
forensics experts must:
1. Identify sources of documentary or other digital evidence
2. Preserve the evidence
3. Analyze the evidence
4. Present the findings
They must do so in a fashion that adheres to the standards of evidence that is admissible in a court of law.
Understand the suspects
It is absolutely vital for the forensics team to have a solid understanding of the level of
sophistication of the suspect(s). If insufficient information is available to form this opinion, the
suspects must be considered to be experts, and should be presumed to have installed
countermeasures against forensic techniques. Because of this, it is critical that your interactions with the equipment appear as indistinguishable as possible from those of its normal users until you have shut it down completely, either in a manner which provably prevents the machine from modifying the drives, or in exactly the same way its normal users would.
If the equipment contains only a small amount of critical data on the hard drive, for example,
software exists to wipe it permanently and quickly if a given action happens. It is straightforward
to link this to the Microsoft Windows "Shutdown" command, for example. However, simply
"pulling the plug" isn't always a great idea, either-- information stored solely in RAM, or on
special peripherals, may be permanently lost. Losing an encryption key stored solely in RAM,
and possibly unknown even to the suspects themselves by virtue of having been automatically
generated, may render a great deal of data on the hard drive(s) unusable, or at least extremely
expensive and time-consuming to recover.
Electronic Evidence Considerations
Like any other piece of evidence used in a case, the information generated as the result of a
computer forensics investigation must follow the standards of admissible evidence. Special care
must be taken when handling a suspect’s files; dangers to the evidence include viruses, electromagnetic or mechanical damage, and even booby traps. There are a handful of cardinal rules that are used to ensure that the evidence is not destroyed or compromised:
1. Handle the original evidence as little as possible to avoid changing the data
2. Establish and maintain the chain of custody
3. Document everything done
4. Never exceed personal knowledge
If such steps are not followed the original data may be changed, ruined or become tainted, and so
any results generated will be challenged and may not hold up in a court of law. Other things to
take into consideration are:
1. The time that business operations are inconvenienced
2. How sensitive information which is unintentionally discovered will be handled
Secure the machine and the data
Unless completely unavoidable, data should never be analyzed using the same machine it is
collected from. Instead, forensically sound copies of all data storage devices, primarily hard
drives, must be made.
To ensure that the machine can be analyzed as completely as possible, the following sequence of
steps must be followed:
Examine the machine's surroundings
Look for notes, concealed or in plain view, that may contain passwords or security instructions.
Secure any recordable media, including music mixes. Also look for removable storage devices
such as keydrives, MP3 players or security tokens. In some cases, these can be worn as jewellery.
Record open applications
If the machine is still active, any intelligence which can be gained by examining the applications
currently open should be recorded. If the machine is suspected of being used for illegal
communications, such as terrorist traffic, not all of this information may be stored on the hard
drive. If information stored solely in RAM is not recovered before powering down, it will be
lost. For most practical purposes, it is not possible to completely scan contents of RAM modules
in a running computer. Though specialized hardware could do this, the computer may have been
modified to detect chassis intrusion (some Dell machines, for example, can do this stock;
software need only monitor for it) and removing the cover could cause the system to dump the
contents. Ideally, prior intelligence or surveillance will indicate what action should be taken to
avoid losing this information.
Modern RAM cannot be analyzed for prior content after erasure and power loss with any real
probability of success.
Power down carefully
If the computer is running when seized, it should be powered down in a way that is least
damaging to data currently in memory and that which is on the hard disk. The method that
should be used depends on the operating system that the computer is running. The recommended methods of shutting down are shown in the following table:

Operating system           Recommended method
DOS                        Pull the plug
Windows 3.1                Pull the plug
Windows 95                 Pull the plug
Windows 98                 Pull the plug
Windows NT                 Pull the plug
Windows NT Server          Shut down
Windows 2000               Pull the plug
Windows 2000 Server        Shut down
Windows XP                 Pull the plug
Windows 2003               Shut down
Linux                      Shut down
Unix                       Shut down
Macintosh OS 9 and older   Pull the plug
Macintosh OS X             Shut down
If the operating system cannot be determined, pulling the plug will suffice.
When pulling the plug, make sure that you pull the lead out from the computer unit itself. This is because if the computer has an uninterruptible power supply connected and only the power to that supply is turned off, the computer will remain powered.
Shutting the computer down by the correct method is critical if certain data is normally stored
only in memory, to be committed back to disk when the machine is powered off.
Shutting down computers which do not normally store data in memory (such as Windows XP) by the usual method may result in changes to the data on the hard drive. This is to be avoided at all costs, especially if there is no benefit in shutting down the computer in this way. For this reason it is recommended that the plug be pulled on these computers.
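The table above reduces to a simple lookup keyed on the operating system name. The following is a minimal sketch of that lookup in Python; the operating system names and the pull-the-plug default come straight from the table, while the function itself is purely illustrative:

# Shutdown decision table from the section above, expressed as a lookup.
SHUTDOWN_METHOD = {
    "DOS": "Pull the plug",
    "Windows 3.1": "Pull the plug",
    "Windows 95": "Pull the plug",
    "Windows 98": "Pull the plug",
    "Windows NT": "Pull the plug",
    "Windows NT Server": "Shut down",
    "Windows 2000": "Pull the plug",
    "Windows 2000 Server": "Shut down",
    "Windows XP": "Pull the plug",
    "Windows 2003": "Shut down",
    "Linux": "Shut down",
    "Unix": "Shut down",
    "Macintosh OS 9 and older": "Pull the plug",
    "Macintosh OS X": "Shut down",
}

def recommended_shutdown(os_name):
    # If the operating system cannot be determined, pulling the plug suffices.
    return SHUTDOWN_METHOD.get(os_name, "Pull the plug")

print(recommended_shutdown("Windows XP"))      # Pull the plug
print(recommended_shutdown("Unknown system"))  # Pull the plug (default)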
Inspect for traps
Inspect the chassis for traps, intrusion detection mechanisms, and self-destruct mechanisms. It
takes a lot to destroy a hard drive to the point where no data at all can be recovered off of it-- but
it doesn't take much to make recovery very, very difficult. Find a hole in the chassis you can use
for inspection (cooling fans are a good bet), or pick a safe spot in the chassis to drill one, and use
an illuminated fiberscope to inspect the inside of the machine. Look specifically for large
capacitors or batteries, nonstandard wiring around drives, and possible incendiary or explosive
devices. PC hardware is fairly standardized these days, and you should treat anything you don't
recognize as cause for concern until proven otherwise. Look for wires attached to the chassis; PCs aren't normally grounded this way, so those are cause for concern.
You should specifically look for a wire running from anything to the CMOS battery or "CMOS
clear" jumper. CMOS memory can be used to store data on the motherboard itself, and if power
is removed from it, the contents will be lost. You must avoid causing CMOS memory to lose
power. Encryption keys, etc., may be stored here.
Once you have determined that the case is safe to open, proceed to remove the cover.
Fully document hardware configuration
Completely photograph and diagram the entire configuration of the system. Note serial numbers
and other markings. Pay special attention to the order in which the hard drives are wired, since
this will indicate boot order, as well as being necessary to reconstruct a RAID array. A little time
being thorough here will save you more later.
Duplicate the hard drives
Using a standalone hard-drive duplicator or similar device, completely duplicate the entire hard
drive. This should be done at the sector level, making a bit-stream copy of every part of the user-accessible areas of the hard drive which can physically store data, rather than duplicating the
filesystem. Be sure to note which physical drive each image corresponds to. The original drives
should then be moved to secure storage to prevent tampering.
Use some kind of hardware write protection to ensure no writes will be made to the original drive. Even though operating systems like Linux can be configured to prevent this, a hardware write blocker is the best practice. The process is often called imaging. You can image to another hard disk drive, a tape, or other media. Tape is a preferred format for archive images, since it is less vulnerable to damage and can be stored for a longer time. There are two goals when making an
image:
1. Completeness (imaging all of the information)
2. Accuracy (copying it all correctly)
The imaging process is verified by using the MD5 message digest algorithm or a stronger one (SHA-1, etc.). To make a forensically sound image, you need to make two reads that result in the same MD5. Generally, a drive should be hashed with at least two algorithms to help ensure its authenticity against modification in the event one of the algorithms is cracked. This can be accomplished by first imaging to one tape labeled as the Master and then making a second image labeled Working. If onsite and time is critical, the second read can be made to Null.
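The verification step described above can be sketched in a few lines of Python. This is a minimal illustration, assuming the Master and Working images have already been acquired behind a write blocker; the file names are hypothetical, and real casework would rely on dedicated forensic tools that record the digests automatically:

import hashlib

def hash_image(path, chunk_size=1024 * 1024):
    # Compute MD5 and SHA-1 in a single pass over the image file.
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

# Two independent reads (or the Master and Working copies) should produce
# identical digests; a mismatch means the image is not forensically sound.
master = hash_image("evidence_master.img")    # hypothetical Master image
working = hash_image("evidence_working.img")  # hypothetical Working copy
if master == working:
    print("MD5/SHA-1 digests match:", master)
else:
    print("Digest mismatch -- the imaging process must be repeated")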
E-Mail Review
E-mail has become one of the primary mediums of communication in the digital age, and vast
amounts of evidence may be contained therein, whether in the body or enclosed in an
attachment. Because users may access email in a variety of ways, it's important to look for
different kinds of emails. The user may have used a dedicated program, or Mail User Agent
(MUA), a web browser, or some other program to read and write email. Additionally, files for
each of these programs may be stored on a local hard drive, a network device, or a removable
device. A good examiner will search all of these locations for email data. Be aware that many
email clients will save a copy of outgoing messages, so both the sender and the recipient may
have a copy of each message. Finally, mail may also be stored on a dedicated mail server, either
awaiting delivery or as permanent storage.
E-mail Headers
All email programs generate headers that attach to the messages. The study of these headers is
complex. Some investigators favor reading the headers from the bottom up, others from the top
down. Under normal circumstances, headers are supposed to be created by the mail user agent and then prepended to by mail servers, so the bottom-up method should work. But a malicious mail server or forger may make this difficult.
The headers added by an MUA are different than those added by mail servers. For example, here
is the format for headers generated by Mozilla Thunderbird 1.0 running on Microsoft Windows.
Message-ID: <41B5F981.5040504@hostname.net>
Date: Tue, 07 Dec 2004 13:42:09 -0500
From: User Name <username@hostname.net>
User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: recipient@otherhost.com
Subject: Testing
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Extensions such as enigmail may add extra headers.
The Message-Id field has three parts:
1. The time the message was sent in seconds past the epoch in hexadecimal
2. A random value called a salt. The salt is of the format #0#0#0# where # is a random digit.
Because Thunderbird treats the salt like a number, it may be shorter if the leading digits are
zeros. For example, a salt of "0030509" would display as "30509"
3. The fully qualified domain name of the sender
Message-ID: [time].[salt]@[domain-name]
Information on the Message-ID header was derived from the source code in
mozilla/mailnews/compose/src/nsMsgCompUtils.cpp in function msg_generate_message_id()
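As a small illustration of the [time].[salt]@[domain-name] layout described above, the sketch below splits a Thunderbird-style Message-ID and converts the hexadecimal timestamp; the regular expression and the reuse of the sample header value are assumptions made for illustration only:

import re
from datetime import datetime, timezone

def parse_message_id(message_id):
    # [time in hex seconds past the epoch].[salt]@[fully qualified domain name]
    m = re.match(r"<?([0-9A-Fa-f]+)\.(\d+)@([^>]+)>?$", message_id)
    if not m:
        return None
    hex_time, salt, domain = m.groups()
    sent = datetime.fromtimestamp(int(hex_time, 16), tz=timezone.utc)
    return {"sent_utc": sent, "salt": salt, "domain": domain}

print(parse_message_id("<41B5F981.5040504@hostname.net>"))
# The hexadecimal 41B5F981 corresponds to 07 Dec 2004 18:42:09 UTC, which is
# consistent with the Date header (13:42:09 -0500) in the example message.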
Sorting Through the Masses
While it is theoretically possible to review all e-mails, the sheer volume that may be subject to review makes this a daunting task; large-scale e-mail reviews cannot look at each and every e-mail due to the impracticality and cost. Forensics experts use review tools to make copies of and search through e-mails and their attachments, looking for incriminating evidence using keyword searches. Some programs have been advanced to the point that they can recognize general threads in e-mails by looking at word groupings on either side of the search word in question. Thanks to this technology, vast amounts of time can be saved by eliminating groups of e-mails that are not relevant to the case at hand.
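A keyword search of the kind described above can be sketched with Python's standard mailbox module. This is a minimal illustration, assuming the collected mail has been exported to an mbox file; the file name and keyword list are hypothetical, and production review platforms do far more (attachments, de-duplication, thread and word-grouping analysis):

import mailbox

KEYWORDS = {"invoice", "offshore", "delete this"}  # hypothetical search terms

def flag_messages(mbox_path):
    # Return (key, sender, subject) for every message whose subject or
    # plain-text body contains one of the keywords.
    hits = []
    for key, msg in mailbox.mbox(mbox_path).items():
        body = ""
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                payload = part.get_payload(decode=True) or b""
                body += payload.decode("utf-8", errors="replace")
        text = (msg.get("Subject", "") + " " + body).lower()
        if any(keyword in text for keyword in KEYWORDS):
            hits.append((key, msg.get("From"), msg.get("Subject")))
    return hits

for key, sender, subject in flag_messages("suspect_copy.mbox"):  # hypothetical file
    print(key, sender, subject)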
Information forensics
Information Forensics is the science of investigation into systemic processes that produce
information. Systemic processes utilize primarily computing and communication technologies to
capture, treat, store and transmit data. Manual processes complement technology systems at
every stage of system processes; e.g. from data entry to verification of computations, and
management of communications to backing-up information reports. In context, both technology
and manual systems, with systemic processes that are either proprietary by design or evolved
inconsequentially, constitute the enterprise Information System. Complex enterprise business systems, in particular those augmented with technology and legacy systems, are often susceptible to fraud, abuse, mistakes, and sabotage.
Information forensic investigation delves into the aspects of creation, operation and evolution of
the enterprise information system. Specifically, investigation focuses on causal factors and
processes that govern the life cycle implementation of such systems. Forensic investigation may
be initiated when a system is suspect or compromised; generally, investigation occurs when a
system fails. Investigations normally concentrate on specific problem areas or components of a
system; the intricacies of business systems, and the costs and resources available, often preclude detailed examination of the whole information system. Nevertheless, bringing about scientific examination of the facts when problems occur is not only prudent, but necessary for the court of law.
The methodological approach to investigation at present is the subject of research interest and
topical development.
The following discourse highlights some of the issues in Information Forensics, which include:
• Adherence to conventions
• Dealing with parties of interest
• Technology and systems design
Investigation Concerns
Investigations characteristically seek to identify the perpetrators, uncover the processes that led to the creation of the system in question, and understand the operational or systemic processes on information that resulted in the problem, i.e. to clarify and document the erroneous processes.
Investigation may distinguish the causes of failures that include fraudulent intent, negligence,
abuse of power, sabotage and terror. Problems that warrant forensic investigations normally are
catastrophic system failures, but also include doubtful system operations, anomalous events or
just exceptional investigations on matters of compliance.
The design of the system in its entirety or in parts, and the modification of the system either
through amendments of existing design or inclusion of new system modules in all sorts of
manner, are considered vulnerable phases of systems development. In spite of regulatory
constraints, stringent checks, standardization, proven methods, professional edicts, assurance
contracts, and other forms of preventive measures, systems continue to fail.
A widely speculated common cause of failure of typically in-house developed information systems is the unwarranted influence of certain system users with vested interests. Systems development processes are often swayed to implement deliberated functions to serve the needs of such users, a form of abuse.
Abuse of Power. Strategic exploitation of information is recognized as a source of influence.
The manner of how information is acquired, processed and used, gives rise to power. The
process as a whole in particular is of interest to information investigators. In order to fully
comprehend technology and information systems that afford power play, investigators must be
well versed in disciplines that include psychology, sociology, ethnicity, linguistics, and
organizations. Other fields of interest include ethics, theology and beliefs, epistemology,
knowledge engineering, and knowledge management. Some aspects of technical consideration,
specifically in the field of Information Systems, broadly include close examination of systems
development processes i.e. applied standards and models, the system or business processes, and
the information or business domain itself.
Stakeholders of Information and Systems
Stakeholders of information and owners of information systems are typically concentrated at certain geographical locations, bound by local legislation, professionalism and customary norms. Their actions upon the information at their disposal and their control of their systems, however, affect a far greater multitude of users, many of whom are elsewhere and practice differing norms. What one group accepts as permissible practice in dealing with information and information systems may be perceived, or even legally established, by others as forbidden.
Information Users
Users are the target of information propagation and generally considered victims of
circumstances. However, users are also beneficiaries of the manipulation of business information.
Users too, are stakeholders of information.
Manipulation and consumption of information involve the intervention of information
stakeholders at every stage of the information value chain. Two channels of control generally run parallel alongside information processes: one shapes, and the other regulates, the information system.
Information Processes
Information system processes are essentially viewed as a black box of algorithms and
procedures, proprietary and never disclosed. This notion brings about conflicting arguments and
questions on the intentions, implementation and operations of certain information systems.
Investigation of information processes emphasises examination of the following, categorically:
1. Development approach to the creation of information processes or systems.
2. Information process itself, e.g. functions, procedures, etc.
3. Interaction of processes within a system.
4. Interaction of processes among systems.
5. The business context.
6. The local environment.
Technology Systems
Legacy systems are generally designed to serve the businesses they are commissioned for, and are not intended to trace the development of the system itself, which, if ever done, is performed by another system.
Technology systems in themselves enable investigators to gather facts of misdeeds, though with some difficulty.
Methods and Standards
Established standards govern the creation, modification, operation and retirement of information
systems. Standard methods however are commonly adapted and modified to suit local or specific
requirements. The prerogative of how standards are actually implemented rests entirely with the
stakeholders of the system in question. Contractors too have a role to play. What really transpires in the process of development is rarely transparent and may never be known; yet investigators need to
uncover the facts. Although contracts are used to define and measure means and deliverables, the
actual approach to resolution is often ignored so long as business objectives are met.
Legal action requires comprehensive explanation and understanding of probable causes and
effects of a forensic situation. In this arena, information management across a multitude of
people and systems is vastly differentiated, which necessitates that investigators possess the appropriate knowledge and understanding of how information resources interact in order to investigate effectively.
The lack of formal expository methods makes this new field rather desirable.
Application of Information Forensics
Some examples of specific application of the science of information forensics in a systemic
context include the following:
• Bioinformatics
• Cryptography, see Cryptographic engineering
• Information systems forensics
• Information traversing Pervasive systems
• Information traversing Ubiquitous networks and computing environments
• Intelligence, Command channels
• Musicology, in Music business
• Review of compliance
• Theological research
• Trace, Information trace
What is and is Not Information Forensics
Information forensics encompasses information systems forensics and computer forensics.
Information forensics deals with system processes, human factors, and applied methodologies
and standards. Arguably information forensics concerns the use of technology, formal methods,
and implicating factors which are largely human in nature.
In fundamental research, information forensics examines the extraction and analysis of
information for security applications (IEEE SPS). Fundamental areas of interest include attack
models, cryptanalysis, steganalysis, steganography; audio engineering, authentication, human
identification, performance metrics, signal classification, surveillance, transaction tracking, etc.
Information technology audit
An Information technology audit (or IT audit) is a review of the controls within an entity's
technology infrastructure. These reviews are typically performed in conjunction with a financial
statement audit, internal audit review, or other form of attestation engagement. Formerly called
an Electronic data processing (EDP) audit, an IT audit is the process of collecting and evaluating
evidence of an organization's information system, practices, and operations. Evaluation of the evidence determines whether the organization's information system safeguards assets, maintains data integrity, and is operating effectively and efficiently to achieve the organization's goals.
An IT audit is also known as an EDP Audit, an Information Systems Audit, and a computer
audit.
Purpose
An IT audit is similar to a financial statement audit in that the study and evaluation of the basic
elements of internal control are the same. However, the purpose of a financial statement audit is
to determine whether an organization's financial statements and financial condition are presented
fairly in accordance with generally accepted accounting principles (GAAP). The purpose of an
IT audit is to review and evaluate an organization's information system's availability,
confidentiality, and integrity by answering questions such as:
• Will the organization's computer systems be available for the business at all times when required? (Availability)
• Will the information in the systems be disclosed only to authorized users? (Confidentiality)
• Will the information provided by the system always be accurate, reliable, and timely? (Integrity)
Types of IT Audits
• Computerized Systems and Applications: an audit to verify that systems and applications are appropriate to the entity's needs, are efficient, and are adequately controlled to ensure valid, reliable, timely, and secure input, processing, and output at all levels of a system's activity.
• Information Processing Facilities: an audit to verify that the processing facility is controlled to ensure timely, accurate, and efficient processing of applications under normal and potentially disruptive conditions.
• Systems Development: an audit to verify that the systems under development meet the objectives of the organization, and to ensure that the systems are developed in accordance with generally accepted standards for systems development.
• Management of IT and Enterprise Architecture: an audit to verify that IT management has developed an organizational structure and procedures to ensure a controlled and efficient environment for information processing.
• Client/Server, Telecommunications, Intranets, and Extranets: an audit to verify that controls are in place on the client (computer receiving services), the server, and the network connecting the clients and servers.
IT audit process
The following are the basic steps in performing the Information Technology Audit Process:
1. Planning the audit
2. Evaluation of internal controls
3. Audit procedures
4. Completing the audit
History of IT auditing
The concept of IT auditing was formed in the mid-1960s and has gone through numerous
changes due to advances in technology and the incorporation of technology into business.
IT audit topics
Regulations and legislation related to IT audits
Several information technology audit regulations have been introduced in the past few years.
These include the Gramm-Leach-Bliley Act, the Sarbanes-Oxley Act, and the Health Insurance Portability and Accountability Act (HIPAA).
• COBIT
• HIPAA
• Gramm-Leach-Bliley Act (GLBA)
• Sarbanes-Oxley Act
• Companies with Sarbanes-Oxley certification delays and material weaknesses caused by IT issues:
  o Captaris Inc. - material weakness and filing delay due to inadequate internal controls and related IT controls per SOX requirements
  o Cray Inc. - numerous material weaknesses in internal control over financial reporting, specifically, inadequate review of third-party contracts and lack of software application controls and documentation
Security
Auditing information security is a vital part of any IT audit. Within the broad scope of auditing
information security we find topics such as data centers, networks and application security.
Auditing information security covers topics from auditing the physical security of data centers to
auditing the logical security of databases and highlights key components to look for and different
methods used for auditing these areas. It is important to remember that in this ever expanding
technical realm these things are always changing and as such IT auditors must continue to
expand their knowledge and understanding of systems and the systems environment to help
verify and ensure information security.
Emerging Issues
Technology changes rapidly and so do the issues IT auditors must face. From biometric retinal scans to protecting physical security to transmitting data from a cell phone, these issues are truly limited only by one's imagination.
See also
• IT audit resources
• Famous IT Auditors & Experts
• Information technology audit - operations
Operations
• Backup systems and recovery
• Change management auditing
• Software development life cycle auditing
• Helpdesk and incident reporting auditing
• SAS 70
• Disaster recovery and business continuity auditing
• Evaluating the qualifications of IT personnel for the purposes of an audit
Auditing systems, applications and networks
• Operating system audit
• Mainframe audit
• Database audit
• Enterprise Resource Planning audit
• Systems applications products audit
Computer Forensics
• Computer forensics
• Data analysis
Fraud
• Computer fraud case studies
• SAS 99
Operating system audit
As computers became more sophisticated, many manual operations were automated within the operating system (see more about the history of operating systems). The operating system (OS) is the program that runs all other programs. The OS coordinates all tasks, such as recognizing input from the keyboard and keeping track of files and directories. It also ensures that the different programs that are running, and the users of those systems, do not interfere with each other. The OS is also in charge of security and guarantees that no unauthorized use occurs.
The operating system provides a software platform on top of which other programs called
applications can run. Some examples of popular operating systems include Windows, Unix, and
Linux.
Why is OS security relevant?
In today's business climate, there is an increasing use and awareness of many OS used by large
organizations. The mechanisms that control the information, and the data itself, are what is considered valuable. Therefore, security of information systems is crucial. It has been recognized
that it is good security protocol to either perform internal security audits or hire external firms to
audit existing policies, practices, and installations. Operating systems interact with vital business assets such as
payroll, human resources, development, and customer information.
The operating system sees “[all] data on the disk as streams of bits in the records inside the files
and folders. The operating system does not see the data relating to the basic pay of an employee
as being significantly more or less sensitive than the employee's telephone number. It is the
application software that understands the data from the business perspective; all business rules
relating to the way the data can be manipulated are enforced through programs in the application
software.”
Good application software has controls designed to enforce all the validations and business rules
relating to who interacts with which elements of the data and how. As long as the user stays
within such an application, the user's actions are well controlled. “However, if a user is able to
bypass the application and gain access to the operating system, then all the rules and controls in
the application software become irrelevant.” Hence, it is necessary to carry out reviews of the OS
and database for all critical applications and the servers that hold sensitive information.
How do you perform an Operating Systems audit?
“The purpose of this page is to focus on the concepts and need for the audit of OS and not to
provide detailed guidelines or checklists for doing the same. Such guidelines or checklists are
specific in technical detail to different OS. Many professional audit firms develop, through their
own research, guidelines and work procedures for such technical audits.” Typically, operating
systems are purchased from outside vendors. The auditor should obtain and understand the technical descriptions and documentation from their vendors before beginning an audit.
By their nature, operating systems are heavily relied upon for general operation of computer
hardware. Therefore, an operating system audit requires the auditor to deploy further
investigation in determining whether:
1) An application program can access main or data storage areas or files being used by other
applications.
2) Important security and accuracy features (e.g., error handling for invalid data types or
formats) are fully used and are not being overridden by application programs.
3) Adequate supervisory procedures are established for the system programmers (in addition, a
security background investigation should be performed).
• Usually, the system programmers have access to all system software. A primary control is necessary in order to reduce the programmer's ability to perform unauthorized or damaging acts that could impair the accuracy and/or reliability of the system.
4) Access to and use of privileged instructions (e.g., input and output instructions that would
enable reading or writing of data from another user’s file) is restricted.
5) Scheduling functions are self-processing or require extensive operator intervention.
6) Improvements to the system are routinely implemented. Most of the changes are initiated as
maintenance described by the vendors. The organization should control software changes by:
• Establishing formal procedures that require supervisory authorization before implementation.
• Ensuring all the changes are thoroughly tested.
• Removing critical files and application programs from the computer area while the system programmers are making changes.
Important areas in an OS audit are the following:
• Physical Security - protecting the equipment guarantees that physical access to specific systems is only granted to those who need it. This is indispensable for many large organizations because they often have multiple data centers, server rooms, and operating systems. It is important to ensure that physical access is limited and secure.
• Logical Security – controlled access to applications and data.
• Security Policy and administration – instituting change control policies. Sound change control policies help ensure that systems are kept free of operator errors and other common problems such as changes that are meant to be temporary, but are then never changed back to their original state. This also provides a good baseline review of the organization. On a side note, having a concrete and reliable standard is essential in the event of a disaster or security breach.
The following steps aim to cover each of the aforementioned topics.
• “Evaluating whether the security features have been enabled and parameters have been set to values consistent with the security policy of the organization, and verifying that all users of the system (user IDs) have appropriate privileges to the various resources and data held in the system. Next, the auditor should obtain the list of user IDs in the system and map these with actual users. Then, the auditor has to determine for each user what the permissions and privileges to the different resources/data are in the system. There are different methods, for example, commands for ascertaining this from the system for different OS. Another way is to determine, for a given critical piece of data, who the users with access are, and whether their access is appropriate.”
• “Some of the most common security parameters that can be evaluated are password rules, such as minimum password length, password history, password required, compulsory password aging, lock-out on unsuccessful logins, and login station and time restrictions. The other areas of scrutiny are whether the logging of certain events, such as unsuccessful login attempts, has been enabled or whether the superuser password is held by the appropriate person. Other OS/version-specific parameters also have to be verified.”
• “Another point for examination pertains to the network. With all computers intricately connected to the internal and external networks, the network-related vulnerabilities of such systems also need to be covered in reviews, although they are even more specialized.”
• Through suitable use of tools, the auditor should determine whether the services that are open and running in the server (such as FTP, Telnet, HTTP) or ports are only those that really are required; a minimal sketch of such a check follows this list. “If the review is being done on a system that is hosting a web server or a firewall, the evaluation must be done by an expert.”
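The service check in the last item above can be sketched as a simple TCP connect test. This is a minimal illustration, assuming the review is authorized and the host address is known; the address and the port list are placeholders, and a real review would use dedicated, approved scanning tools:

import socket

SERVICES = {21: "FTP", 23: "Telnet", 80: "HTTP", 443: "HTTPS"}  # ports of interest

def check_open_ports(host, timeout=2.0):
    # Attempt a TCP connection to each port; connect_ex() returns 0 on success.
    open_ports = []
    for port, name in SERVICES.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append((port, name))
    return open_ports

for port, name in check_open_ports("192.0.2.10"):  # documentation-range address
    print(f"Port {port} ({name}) is open -- confirm the service is really required")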
After an assessment of the control is performed, the auditor must conclude and report their
findings and see if any changes need to be made to the initial audit plan. This is also the time
when weaknesses are brought to the attention of appropriate parties that need to be informed,
such as management. If weaknesses are discovered in the OS audit and nothing is done, they will compromise the subsequent audits of the organization's ERP (Enterprise Resource Planning) systems, SAP, applications, and business components.
Mainframe audit
What is a Mainframe?
The definition of a mainframe is not clear-cut and may vary depending on what reference is
used. Most people associate a mainframe with a large computer, and though this is generally the
case, mainframes are getting smaller all the time. Another problem is that technology has
become so diverse and multi-operational that characteristics that were once only found in
separate devices and systems have been developed into one product or service. Oftentimes the
terms mainframe and enterprise server are used to describe the same or similar technology. The
advent of the Supercomputer has also eradicated the notion that a mainframe is defined simply
by its size. Even though there are similarities between the two, there are obvious differences in
their usage. Supercomputers are generally utilized for their speed and complexity, while
mainframes are used for storing large volumes of sensitive data. The best definition the author
found states that: “Mainframes used to be defined by their size, and they can still fill a room, cost
millions, and support thousands of users. But now a mainframe can also run on a laptop and
support two users. So today's mainframes are best defined by their operating systems: Unix and
Linux, and IBM's z/OS, OS/390, MVS, VM, and VSE. Mainframes combine four important
features: 1) Reliable single-thread performance, which is essential for reasonable operations
against a database. 2) Maximum I/O connectivity, which means mainframes excel at providing
for huge disk farms. 3) Maximum I/O bandwidth, so connections between drives and processors
have few choke-points. 4) Reliability--mainframes often allow for "graceful degradation" and
service while the system is running.” (Software Diversified Services). Other properties particular
to mainframes include:
• The ability to handle a large number of users simultaneously.
• Being able to distribute large workloads that can be handled by the machine over different processes and input and output devices.
• That output is sent to a terminal through a program running on the mainframe, and nothing else goes over the line. This helps make mainframe data more secure (The History of Computing Project, April 27, 2005).
What is the history of mainframe systems?
The mainframe computing age got its start in 1939 with the creation of the Atanasoff-Berry Computer (ABC Computer) in Iowa. Though not a computer in the modern sense, as it lacked general controls or purpose, it was the first proposal to use electronics for calculation and/or logic. The first computer in the modern sense was the ENIAC, created in 1942 and used to
compute World War II ballistic firing tables. This machine was very large and consisted of 30
separate units weighing a combined 30 tons and during operation consumed almost 200 kilowatts
of electrical power. As technology improved, mainframes became more prevalent, faster, and more efficient, and were able to hold more memory and do more complex calculations. As a result, mainframe usage grew during the 1950s, 60s, and 70s. Mainframes developed during that
time include the UNIVAC and the IBM 360 (The History of Computing Project, April 27, 2005).
Beginning in the early 1980s, demand for mainframes began to decline as companies felt that
smaller computers (Such as IBM PCs) could accomplish similar goals at a lower cost, while
giving users greater access to their systems. During this time IBM was left as the only major
player as other companies were squeezed out or abandoned their mainframe operations. In the
late 1990s demand reemerged as companies found new uses for them because of their reliability
for critical operations and their flexibility in being able to run several operations at once. IBM
currently has over 80% of the market and current mainframes include the S/390 and the zSeries
890 and the zSeries 990, which are about the size of a dishwasher and can host up to 32 gigabytes of memory. These mainframes can also process hundreds of millions of instructions per second
(MIPS) (The History of Computing Project, April 27, 2005).
How are mainframes currently used?
Generally mainframes are used by large corporations and government agencies to handle
processing and protection of large volumes of data. Examples include sales transactions and
customer inquiries. They are also used for computation intensive applications such as analyzing
seismic data and flight simulation and as “Super-servers” for large client/server networks and
high volume websites. Other uses include data mining and warehousing, and electronic
commerce applications (O’Brien, 2002).
What are the components of a mainframe?
The components of a mainframe can vary wildly depending on the type and its role in the
organization. Generally, there are four main components of the mainframes that are important for
the purposes of our discussion. These are:
1. The Operating System: This is: “the main guts” and “ensures that other applications are able to
use memory, input and output devices and have access to the file system.” Types of operating
systems vary greatly but common examples include Unix, MVS (Multiple Virtual Storage), and OS/390. Generally this is managed by an organization's systems technicians (The
Henderson Group, October, 2001). (Interview, 2005).
2. The Security Server: This helps prevent unauthorized access and manipulation. Security
software such as ACF2, RACF, and Top Secret are needed to help secure an MVS operating
system. This software identifies who the user is, and whether that user can perform a given
function (The Henderson Group, January, 2002).
3. System Products: These are performance tools of the operating system. This includes VTAM
(Virtual Telecommunications Access Method), which manages data flow between terminals and applications (or between applications) and supports multiple teleprocessing applications, and
Netview, a distributed network management system. This also includes database management
and administration tools (Also called DB2 Utilities) and the database manager. Another item of
note that fits this category is TCP (Transmission Control Protocol) which is the protocol for
managing applications over IP (Internet Protocol). IP provides message routing, but not
applications (Software Diversified Services, No Date).
4. Application System: A decision support system. It provides graphics, statistical functions,
business modeling, and forecasting. These are usually customized by the users depending on the
goals of an organization (Software Diversified Services).
A company’s mainframe is usually located in the data center, which is a facility used to house
large amounts of computer equipment and data. Because of the large amounts of sensitive data available, access is usually restricted.
How are mainframes audited?
The purpose of a mainframe audit is to provide assurance that processes are being implemented
as required, the mainframe is operating as it should, security is strong, and that procedures in
place are working and are updated as needed. This oftentimes would also entail the auditor
making recommendations for improvement.
Obtain and Support an Understanding of the Mainframe, the Entity and its Environment
Generally this includes but is not limited to an understanding of the following:
• The type of mainframe, its features, usage, and its purpose in the organization.
• Nature of the entity.
• Organization's external factors such as regulatory requirements and the nature of its industry.
• Organization's management, governance, and objectives and strategies.
• Entity's business processes.
• Organization's performance compared to the industry and its benchmarking procedures (Messier, 2003).
This information can be obtained by conducting outside research, interviewing employees,
touring the data center and observing activities, consultations with technical experts, and looking
at company manuals and business plans.
Identify Risks, Evaluate the Entity’s Responses to those Risks, Obtain Evidence of
Implementation, and Based on the Risk Assessment, Design and Perform Audit Procedures
General:
Passwords: Who has access to what, and are employees protecting their passwords properly? Are
there written policies and procedures in place stating how this is accomplished and are they
enforced? Are passwords timed out? Evidence of implementation can be obtained by requesting
employee manuals, evaluating the software and user histories, and by physical observation of the
environment. (Gallegos, F., 2004).
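As one illustration of turning such evidence into a test, the sketch below checks password aging against a 90-day policy using a hypothetical CSV export of user IDs and last-password-change dates (for example, produced by the security administrators). The file name, column names, and the 90-day threshold are assumptions, not features of any product discussed here:

import csv
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)  # assumed policy threshold

def stale_passwords(export_path, as_of=None):
    # Flag user IDs whose password has not changed within the allowed window.
    as_of = as_of or datetime.utcnow()
    stale = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            changed = datetime.strptime(row["last_password_change"], "%Y-%m-%d")
            if as_of - changed > MAX_PASSWORD_AGE:
                stale.append(row["user_id"])
    return stale

print(stale_passwords("mainframe_user_export.csv"))  # hypothetical export file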
Are cables adequately protected from damage and sniffing between the Network and the Data
Center? This can be achieved by proper routing of the cables, encryption linkage, and a good
network topology (Software Diversified Services). Physical observation of where the cables are
routed and confirmation of the security procedures should be obtained. Tests of controls should
be conducted to determine any additional weaknesses.
Does the mainframe have access to an uninterruptible power supply? If so, confirmation should be obtained that it exists, is available, and is adequate to meet the organization's needs.
Environmental controls: Are physical controls such as power badges for access, fire suppression
devices, and locks in place to protect the data center (and the mainframe inside) from theft,
manipulation or damage? A physical observation should be conducted and employee reference
manuals should be examined to confirm this assurance. For all items the level of risk should be
assessed and that assessment should be used to determine the general or specific audit procedures
used.
The Operating System
Because this is needed to run all the other applications it is the most important and critical area to
be examined.
What controls are in place to make sure the system is continually updated? Is the software
configured to do it, or is it done by the system technicians, or both? Examination of company
procedures should be conducted and computer assisted audit techniques need to be employed to
make a determination.
Many of the individuals responsible for maintaining the system have elevated privilege. Controls
should be in place to deter unauthorized manipulation or theft of data, and processes and
procedures are needed and a risk/benefit analysis should be conducted by the organization to
determine who should have access to a specific application. Proper segregation of duties also
needs to be verified. The company’s internal controls need to be tested to determine if they are
effective and recommendations should be made to improve any deficiencies. Samples of entries
into the system should be examined to verify that the controls are effective and unauthorized
and/or suspicious voided transactions need to be investigated (Gallegos, 2004).
The operating system should leave a full audit trail so that assurances by management can be
verified. Depending on the circumstances, any deficiencies in this area will probably either require more audit investigation and work, or leave the audit team unable to rely on management's assurances.
Are there any processes on the system that could needlessly compromise other components?
Tests and procedures need to be conducted to determine if this is the case. Procedures and
measures need to be in place to minimize the risk of unauthorized access through Backdoors in
the system, such as the Program Properties Table (PPT). An audit of an MVS needs to confirm
that all entries through this door are appropriate and were done with proper authorization. In
addition there should be an accurate audit trail that can be followed. This can often be
accomplished by examining the Bypass Password and the Privilege Protect Key in the system,
and by examining entries for reasonableness. Mainframe companies such as IBM provide
information that can help determine if PPT entries are reasonable. A software tool such as CA-Examine can also be helpful in this endeavor (The Henderson Group, October, 2001).
Security Server
Because the security administrators who manage this not only have elevated privilege, but also
model and create the user passwords, this area always takes high priority during an audit. Is proper segregation of duties implemented and enforced, and are technology and procedures in place to make sure there is a continuous and accurate audit trail? Controls need to be put in place to minimize the risk of unnecessary and unauthorized entry into the system and to protect passwords. Computer-assisted audit techniques should be used to explore the system, and first-hand observations should be conducted to verify that procedures, such as segregation of duties, are
being followed. Security systems such as RACF, ACF2, and Top Secret need to be constantly
evaluated to verify that they are providing the necessary security and if additional protection
such as new firewalls is needed. Before beginning an audit of these systems printouts should be
obtained that provide detailed information pertaining to specific fields, the UID string, rules,
and/or additional explanations. With this information security info can be more easily understood
and make evaluating it much easier. (The Henderson Group, August, 2002).
System Products
When auditing DB2 the auditor should be most concerned with whether security measures in the software are properly controlling who can use it and which data a user can read or write. Management controls should be in place to prevent unauthorized access or manipulation and to track how many copies of the software are being used and for what purpose. For VTAM the auditor’s concerns
include whether the applicable security software is contacted when an employee logs in. This is
to prevent terminated employees from entering the system, because the security software is
updated immediately while other software generally is not. Because all connections to the system
come through the VTAM the dataset describing the connections should be constantly monitored
and examined. Internal controls over Backdoors into the system should be sufficient to minimize
unauthorized entry and the auditor should determine what these controls are so they can be tested
appropriately. Software tools such as CA-Examine and Consul can be used for this purpose and
to find additional Backdoors. It should also be verified that certain sensitive network connections
are encrypted, and that rules controlling the use of applids (Programs that terminals can be
connected to) and terminals are adequate (The Henderson Group, January 2002).
Application System
This area of the audit should be concerned with the performance and the controls of the system, its ability to limit unauthorized access and manipulation, that input and output of data are processed correctly, that any changes to the system are authorized, and that users have appropriate access to the system. Evaluating internal controls and testing the software with computer assisted audit techniques, including vulnerability assessment tools, will help achieve these objectives (Gallegos, 2004).
25
It should be noted that the vast majority of these computer assisted audit techniques for the mainframe and its supporting systems can in most cases be conducted from a simple 3270 terminal that has a connection to the network (interview with a computing security specialist and IT auditor at Boeing, conducted for this paper).
Evaluate Whether Sufficient Evidence was Obtained
After performing the necessary tests and procedures, determine whether the evidence obtained is sufficient to come to a conclusion and recommendation. If the information is sufficient, then a final report and/or recommendation can be completed. If the evidence is insufficient and the matter is material, then further testing will be required, unless the information is unattainable, in which case a full report cannot be completed.
How is the security of the mainframe maintained?
Mainframes, despite their reliability, hold so much data that precautions need to be taken to protect the information they contain and the integrity of the system. To do this, internal controls must be put in place. These include:
 Physical controls over the mainframe and its components.
 Encryption techniques.
 Procedures that prevent unnecessary and unauthorized entries into the system and that ensure input, output, and processing are recorded and accessible to the auditor. This is particularly important for people with elevated privilege.
 Security software such as RACF, ACF2, and Top Secret.
 Constant testing of the security system to determine any potential weaknesses.
 Properly protected Backdoor accesses.
 Continual examination of these techniques to determine their effectiveness.
To gauge the effectiveness of these internal controls an auditor should do outside research,
physically observe controls as needed, test the controls, perform substantive tests, and employ
computer assisted audit techniques when prudent.
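As a simple illustration of how part of this control testing can be automated, the hedged Python sketch below cross-checks user IDs that hold elevated privilege (as might be extracted from a report produced by security software such as RACF or ACF2) against an approved authorization list. The file-free data, user IDs, and field layout are assumptions for illustration only, not any product's real report format.

    # Hedged sketch: cross-check elevated-privilege user IDs against an approved list.
    # The sets below stand in for extracts from security software reports and from
    # signed authorization forms; real formats will differ.

    approved_privileged_users = {"SYSADM1", "SYSADM2", "OPER01"}                 # from authorization forms
    observed_privileged_users = {"SYSADM1", "SYSADM2", "OPER01", "TEMP99"}       # from a system report

    def exceptions(observed, approved):
        """Return IDs holding elevated privilege without documented approval."""
        return sorted(observed - approved)

    if __name__ == "__main__":
        for user_id in exceptions(observed_privileged_users, approved_privileged_users):
            print(f"Exception: {user_id} holds elevated privilege without documented approval")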
Gallegos, F., Senft, S., Manson, D., & Gonzales, C. (2004). Information Technology Control and Audit (2nd ed.). Boca Raton, FL: Auerbach Publications.
Messier, W. F., Jr. (2003). Auditing & Assurance Services: A Systematic Approach (3rd ed.). New York: McGraw-Hill/Irwin.
Philip, G. (2000). The University of Chicago Press: Science and Technology Encyclopedia. Chicago, IL: The University of Chicago Press.
Wikipedia (2005, May 19). Mainframe computer. Wikipedia: The Free Encyclopedia. Retrieved May 20, 2005 from the World Wide Web at: [7]
O’Brien, J. A. (2002). Management Information Systems: Managing Information Technology in the E-Business Enterprise (5th ed.). New York: McGraw-Hill/Irwin.
Retrieved from "http://en.wikipedia.org/wiki/Mainframe_audit"
Database audit
What is a database?
A database is an integrated aggregation of data usually organized to reflect logical or functional
relationships among data elements (Gallegos 759). In simple terms, a database is a computerized
record keeping system. A database includes a system involving data, hardware that physically
stores the data, software that utilizes the hardware’s file system in order to store the data and
provide a standardized method for retrieving or changing the data, and the users who access the
data and turn it into information. Data consists of raw facts and figures that are meaningless by
themselves, and can be expressed in characters, digits, and symbols, which can represent people,
things, and events (Gallegos 759).
What are the commonly used databases?
Some examples of databases that are currently used by businesses include Oracle, Microsoft
SQL Server, Sybase ASE, Sybase ASA, and IBM DB2.
Is security important in databases and what does it
comprise?
Database system security is a serious issue; weaknesses in it expose an organization’s information to damage and loss (Mookhey 1). It is common for an organization to make every effort to lock down its network but leave the database vulnerable. It is critical to protect the database from unauthorized access because as much as 90% of an organization’s sensitive information is contained within its database. Unauthorized access to the database could be catastrophic to a company.
Companies often do not realize how much risk is associated with the sensitive information within
their database until an internal audit is conducted, which gives the details regarding who can
access the sensitive data. Tremendous financial losses could result if an employee with access to
the sensitive data distributed the confidential information of the business or its customers.
Depending on the severity of the security breach, the company’s reputation could be adversely affected, resulting in a decline in sales and in consumer and investor confidence.
Each company will need to decide the level of security that suits their organization. This will
require an evaluation of the sensitivity of the data within their database. While considering
options to protect the sensitive database information, the business should ensure that their privacy protection measures do not interfere with authorized persons obtaining the right data at
appropriate times (Nevins 2). A database security solution should also be application transparent,
meaning that no changes need to be made to the underlying applications. This will provide a
faster implementation and lower support costs.
Scott Nevins, the president and CEO of Protegrity and the author of “Database Security: Protecting Sensitive and Critical Information,” considers a secure audit trail for tracking and reporting activity around confidential data to be the key issue when
purchasing a database security solution. The author also lists additional topics to consider when
selecting a database security technology, such as fast performance, the ability to work across applications, and ease of implementation. IT security experts also recommend selectively encrypting
and securing sensitive database information. This process of wrapping each individual data item in a protective layer of security is much more effective than simply building a firewall around the
database. With only a firewall protecting the database, if the firewall is penetrated, the data is
vulnerable to intruders. Encrypting the data provides an extra layer of protection. Nevins also
notes that one of the best ways to develop an effective database security solution is to recognize
that securing the data is essential to the company’s reputation, profitability, and critical business
objectives.
Poor database security is a leading contributor to the incidence of identity theft; the fewer security measures an organization has in place to protect the database, the higher the incidence of identity theft will be. Much of the personal information that is used to commit identity theft, such as Social Security numbers and credit card or bank account numbers, is stored in databases. Law enforcement experts estimate that more than half of all identity theft cases are committed by employees with access to large financial databases (Nevins 2); in other words, a majority of identity theft cases originate with employees inside the organization who have access to the database. As more and more security breaches relating to databases occur, audit committees are becoming increasingly stringent about protecting customer information.
There are currently data-privacy regulations in place that many companies must comply with.
These regulations include best-practice requirements and industry guidelines regarding the use of and access to customer data. Data security is no longer an option; government legislation and
industry regulations mandate it. Some of the privacy requirements in place for protecting
personal information include proper access control, selective encryption of stored data,
separation of duties, and centralized independent audit functions (Nevins 2). Financial
institutions are currently regulated by the Gramm-Leach-Bliley Act (GLBA), which requires the
protection of non-public personal data while in storage and implements a variety of access and
security controls. These access and security controls are crucial. A 2002 Computer Crime and
Security Survey revealed that over half of the databases in use have some kind of a security
breach on a yearly basis. Such a breach can cost the organization approximately $4 million in losses. Many organizations will do their best to cover up security breaches within their
company so as not to alarm customers and harm the business’ profitability. Many professionals
in the field believe that there is much more unauthorized access to databases than corporations
are willing to admit. In an effort to keep companies from covering up security breaches that
occur within their organizations, the state of California recently enacted a law that mandates
public disclosure of computer security breaches in which confidential information may have
been compromised.
With the recent database hacks at companies such as Lexis Nexis and the loss of Bank of
America data tapes containing the personal and financial information of over 1.2 million
customers, as well as the increase in identity theft, we are likely to see more legislation in the
coming months and years regarding data security. The good news is that unauthorized access to
the database and the misuse of data can be prevented with database security products and new
audit procedures. Management must realize that information security is no longer just an IT
function; it is a business necessity that must be grasped by the organization as a whole.
What practical security measures can be put in place?
Database security can be broken down into the following key categories:
 Server Security
 Database Connections
 Table Access Control
 Restricting Database Access
Server Security: Server security is the process of limiting the access to the database server. This
is one of the most basic and most important components of database security. It is imperative
that an organization does not let its database server be visible to the world. If an organization’s
database server is supplying information to a web server, then it should be configured to allow
connections only from that web server. Also, every server should be configured to allow only
trusted IP addresses.
Database Connections: With regard to database connections, system administrators should not
allow immediate unauthenticated updates to a database. If users are allowed to make updates to a
database via a web page, the system administrator should validate all updates to make sure that they are warranted and safe. Also, the system administrator should not allow users to connect with the “sa” (system administrator) login when accessing the database. That login gives complete access to all of the data stored in the database, regardless of whether the user is authorized to have such access.
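The hedged Python sketch below illustrates these two points: an update submitted through a web page is validated before it reaches the database, and the change is applied with a parameterized statement under a least-privilege application account rather than the “sa” login. SQLite is used here only so the example runs anywhere; the table, column, and validation rule are assumptions for illustration.

    # Hedged sketch: validate a web-submitted update and apply it with a parameterized
    # statement. In a production DBMS the connection would use a least-privilege
    # application account, never the "sa" (system administrator) login.
    import sqlite3

    def valid_quantity(value: str) -> bool:
        """Accept only small positive integers; reject anything else from the web form."""
        return value.isdigit() and 0 < int(value) <= 10_000

    def update_order_quantity(conn, order_id: int, raw_quantity: str) -> None:
        if not valid_quantity(raw_quantity):
            raise ValueError("update rejected: quantity failed validation")
        # Parameterized SQL prevents the web input from being interpreted as SQL.
        conn.execute("UPDATE orders SET quantity = ? WHERE order_id = ?",
                     (int(raw_quantity), order_id))
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, quantity INTEGER)")
        conn.execute("INSERT INTO orders VALUES (1, 5)")
        update_order_quantity(conn, 1, "12")
        print(conn.execute("SELECT * FROM orders").fetchall())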
Table Access Control: Table access control is related to an access control list, which is a table
that tells a computer operating system which access rights each user has to a particular system
object. Table access control has been referred to as one of the most overlooked forms of database
security. This is primarily due to the fact that it is so difficult to apply. In order to properly use
Table access control, the system administrator and the database developer will need to
collaborate.
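As an illustration of what that collaboration produces, the hedged sketch below represents a table-level access control list as a simple mapping and checks a requested operation against it. In practice the list lives inside the DBMS itself (for example, as its own privilege grants); the users, tables, and rights here are hypothetical.

    # Hedged sketch: a table-level access control list (ACL) and a check function.
    # A real DBMS enforces this with its own privilege system; the users, tables,
    # and rights below are purely illustrative.

    acl = {
        ("ap_clerk",  "invoices"): {"SELECT", "INSERT"},
        ("ap_clerk",  "vendors"):  {"SELECT"},
        ("dba_admin", "invoices"): {"SELECT", "INSERT", "UPDATE", "DELETE"},
    }

    def is_allowed(user: str, table: str, operation: str) -> bool:
        """Return True only if the ACL explicitly grants the operation."""
        return operation in acl.get((user, table), set())

    if __name__ == "__main__":
        print(is_allowed("ap_clerk", "vendors", "SELECT"))   # True
        print(is_allowed("ap_clerk", "vendors", "UPDATE"))   # False: not granted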
Restricting Database Access: Internet-based databases have been the most recent targets of attacks, due to their open access or open ports. It is very easy for criminals to conduct a “port scan” to look for open ports that popular database systems use by default (Weidman 4). The ports that are used by default can be changed, thus throwing off a criminal looking for open ports set by default. There are additional security measures that can be implemented to prevent open access from the Internet (a short sketch of the first two follows the list), such as
 Trusted IP addresses – Servers can be configured to answer pings from a list of trusted hosts only.
 Server account disabling – The server ID can be suspended after three failed password attempts.
 Special tools – Products can be used to send an alert when an external server is attempting to breach the system’s security. One such example is RealSecure by ISS.
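A hedged sketch of the first two measures, assuming a hypothetical trusted subnet and the three-attempt lockout threshold mentioned above:

    # Hedged sketch: trusted-IP filtering and account suspension after three failed
    # password attempts. Addresses and thresholds are illustrative assumptions.
    import ipaddress

    TRUSTED_NETWORKS = [ipaddress.ip_network("10.0.5.0/24")]   # e.g. the web server subnet
    MAX_FAILED_ATTEMPTS = 3
    failed_attempts = {}

    def connection_allowed(source_ip: str) -> bool:
        """Accept connections only from explicitly trusted networks."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in TRUSTED_NETWORKS)

    def record_failed_login(server_id: str) -> bool:
        """Return True if the server ID should now be suspended."""
        failed_attempts[server_id] = failed_attempts.get(server_id, 0) + 1
        return failed_attempts[server_id] >= MAX_FAILED_ATTEMPTS

    if __name__ == "__main__":
        print(connection_allowed("10.0.5.17"))    # True: inside the trusted subnet
        print(connection_allowed("203.0.113.9"))  # False: unknown external address
        suspended = False
        for _ in range(3):
            suspended = record_failed_login("db01")
        print("suspend db01:", suspended)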
In previous years, businesses focused on preventing access to their databases via perimeter
security. Perimeter security includes items such as firewalls and intrusion detection equipment.
The problem with this method, however, is that it only protects against those outside the organization who might attempt to retrieve information from the database. As was stated earlier, the majority of security breaches are committed by individuals within the organization. As we conclude our discussion of database security, it is important to remember
that database security should occur in conjunction with other security technologies, but data
protection should be the core element of a complete enterprise security infrastructure (Nevins 1).
What are the main issues surrounding a database Audit?
The primary security concerns of the auditor when conducting a database audit include authentication and authorization issues. The following general principles for developing an audit
strategy, auditing suspicious database activity, and auditing normal database activity can guide
the auditor throughout the audit.
General Principles for Developing an Audit Strategy:
 Evaluate your purpose for auditing – In order to have an appropriate auditing strategy and to avoid unnecessary auditing, you must have a clear understanding of the reasons for auditing.
 Audit knowledgeably – In order to prevent unnecessary audit information from cluttering the meaningful information, audit the minimum number of statements, users, or objects required to get the targeted information.
General Principles for Auditing Suspicious Database Activity:
 Audit generally, then specifically – Enable general audit options first, then use more specific audit options. This will help the auditor gather the evidence required to make concrete conclusions regarding the origins of suspicious database activity.
 Protect the audit trail – Protect the audit trail so that audit information cannot be added, changed, or deleted without being audited.
General Principles for Auditing Normal Database Activity: Auditing normal database activity refers to the process of gathering historical information about particular database activities.
 Audit only pertinent actions – In order to avoid cluttering the meaningful information with useless audit information, audit only the targeted database activities.
 Archive audit records and purge the audit trail – After you have collected the required information, archive the audit records that are of interest and purge the audit trail of this information, as sketched below.
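The hedged sketch below illustrates that archive-then-purge step against a hypothetical audit table in SQLite; the table names, column layout, and 90-day retention window are assumptions for illustration only.

    # Hedged sketch: archive audit records older than a retention window, then purge
    # them from the live audit trail. Table names and the window are illustrative.
    import sqlite3

    RETENTION_DAYS = 90

    def archive_and_purge(conn):
        cutoff = f"-{RETENTION_DAYS} days"
        # Copy old records into the archive table...
        conn.execute(
            "INSERT INTO audit_archive SELECT * FROM audit_trail "
            "WHERE event_time < datetime('now', ?)", (cutoff,))
        # ...then remove them from the live audit trail.
        cur = conn.execute(
            "DELETE FROM audit_trail WHERE event_time < datetime('now', ?)", (cutoff,))
        conn.commit()
        return cur.rowcount

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE audit_trail (event_time TEXT, user_id TEXT, action TEXT)")
        conn.execute("CREATE TABLE audit_archive (event_time TEXT, user_id TEXT, action TEXT)")
        conn.execute("INSERT INTO audit_trail VALUES (datetime('now','-200 days'), 'jdoe', 'SELECT')")
        conn.execute("INSERT INTO audit_trail VALUES (datetime('now'), 'jdoe', 'UPDATE')")
        print("archived and purged:", archive_and_purge(conn))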
What are the options auditors have for database audits?
Using an automated database audit solution:
In order to ensure that unauthorized users are not accessing the database, the auditor will need to
audit user activity. Auditing user activity provides the auditor with assurance that the policies,
procedures, and safeguards that management has enacted are working as intended (Mazer 1).
This also helps the auditor to identify any violations that may have occurred.
Auditing user activity can be accomplished via continuous data auditing. Continuous data
auditing is the process of monitoring, recording, analyzing, and reporting database activity on a
periodic basis. This is a critical concept because unauthorized access to the database and the
information contained within can occur at any time. If the auditor is using a testing schedule,
violators can easily sidestep that schedule. This is not the case, however, with continuous data
auditing. The auditor and management must be able to identify which behavior is suspicious
versus which behavior is routine. Any behavior that is not identified as routine and valid access
to the database must be examined and analyzed further.
Before beginning the audit, the auditor should assess the database environment. This includes
identifying and prioritizing the users, data, activities, and applications to be monitored (Mazer 2).
The Internal Audit Association lists the following as key components of a database audit:
1. Identify all database systems and use classifications. This should include production and test data.
2. Classify data risk within the database systems. Monitoring should be prioritized for high-, medium-, and low-risk data.
3. Analyze access authority. Users with higher degrees of access permission should be under higher scrutiny, and any account for which access has been suspended should be monitored to ensure access is denied and attempts are identified.
4. Assess application coverage. Determine what applications have built-in controls, and prioritize database auditing accordingly. All privileged user access must have audit priority. Legacy and custom applications are the next highest priority to consider, followed by the packaged applications.
5. Ensure technical safeguards. Make sure access controls are set properly.
6. Audit activity. Monitor data changes and modifications to the database structure, permission and user changes, and data viewing activities.
7. Archive, analyze, review, and report audit information. Reports to auditors and IT managers must communicate relevant audit information, which can be analyzed and reviewed to determine if corrective action is required. Organizations that must retain audit data for long-term use should archive this information with the ability to retrieve relevant data when needed.
The first five steps listed are to be performed by the auditor manually, while the last two steps
are best achieved by using an automated solution.
The ideal approach to effectively capture and analyze database activity is through non-trigger audit agents associated with each database server. Non-trigger audit agents capture all relevant activity, regardless of the application used (Mazer 3). In comparison, database triggers (automatic procedures that fire when data in a table has been altered) are not recommended, as database administrators can easily disable them. The non-trigger database audit agents gather information through two means (a conceptual sketch follows):
1. Database transaction log – Each database maintains a transaction log through the normal course of its operation, which captures data modifications and other activity.
2. Database’s built-in event notification mechanism – This obtains additional information, such as permission changes and data viewing activities.
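As a rough illustration of the first mechanism only, the hedged Python sketch below polls a stand-in transaction-log table for entries newer than the last one seen and flags permission changes. Real transaction logs are binary, vendor-specific structures read by dedicated agent software, so the table layout, column names, and flagging rule here are conceptual assumptions, not any product's behavior.

    # Hedged sketch: a toy "audit agent" that polls new entries from a stand-in
    # transaction-log table and flags permission-related statements.
    import sqlite3

    def poll_new_entries(conn, last_seen_id: int):
        """Return log entries added since the last poll, ordered by log id."""
        return conn.execute(
            "SELECT log_id, user_id, statement FROM txn_log WHERE log_id > ? ORDER BY log_id",
            (last_seen_id,)).fetchall()

    def flag_suspicious(entries):
        """Flag permission-related statements such as GRANT or REVOKE."""
        return [e for e in entries if e[2].upper().startswith(("GRANT", "REVOKE"))]

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE txn_log (log_id INTEGER PRIMARY KEY, user_id TEXT, statement TEXT)")
        conn.executemany("INSERT INTO txn_log (user_id, statement) VALUES (?, ?)",
                         [("jdoe", "UPDATE orders SET quantity = 7 WHERE order_id = 1"),
                          ("jdoe", "GRANT ALL ON payroll TO jdoe")])
        new_entries = poll_new_entries(conn, last_seen_id=0)
        for entry in flag_suspicious(new_entries):
            print("suspicious activity:", entry)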
Data Access Auditing:
Data access auditing is a surveillance mechanism that watches over access to all sensitive
information contained within the database. This mechanism brings only the suspicious activity to
the auditor’s attention. As was previously discussed, databases generally organize data into
tables containing columns. Access to the data generally occurs through a language called
Structured Query Language or SQL (Richardson 2). The perfect data access auditing solution
would address the following six questions (a minimal audit-record sketch follows the list):
1. Who accessed the data?
2. When?
3. Using what computer program or client software?
4. From what location on the network?
5. What was the SQL query that accessed the data?
6. Was it successful, and if so, how many rows of data were retrieved?
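A hedged sketch of what a record answering these six questions might look like, expressed as a simple Python data class; the field names are illustrative assumptions, not any vendor's actual schema.

    # Hedged sketch: one data-access audit record capturing the six questions above.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class DataAccessAuditRecord:
        who: str                # 1. who accessed the data
        when: datetime          # 2. when
        client_program: str     # 3. computer program or client software used
        network_location: str   # 4. location on the network
        sql_text: str           # 5. the SQL query that accessed the data
        successful: bool        # 6. whether it succeeded...
        rows_retrieved: int     #    ...and, if so, how many rows were returned

    if __name__ == "__main__":
        record = DataAccessAuditRecord(
            who="jdoe", when=datetime.now(), client_program="report_tool.exe",
            network_location="10.0.5.17", sql_text="SELECT * FROM customers",
            successful=True, rows_retrieved=42)
        print(record)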
The auditor can choose to either audit within the client, audit within the database, or audit
between the client and the database.
Enterprise Resource Planning audit
What is an ERP system?
An ERP system is any software system designed to support and automate the business processes of medium and
large businesses. This may include manufacturing, distribution, personnel, project management,
payroll, and financials. Enterprise Resource Planning systems are accounting-oriented
information systems for identifying and planning the enterprise-wide resources needed to take,
make, distribute, and account for customer orders. ERP systems were originally extensions of
MRP II systems, but have since widened their scope. The basic premise of ERP systems is to
implement a single information warehouse that will service all of the functional areas of a
business: marketing and sales, production and materials management, accounting and finance,
and human resources. Information is updated real-time in the ERP database, so that employees in
all business units are using the same information, and all information is up-to-date.
How have ERP systems impacted the nature of Auditing?
The interaction and flow of information, issues with data and the processing of data, controls and
security of the data and the systems, and training of employees are the four major areas in which
ERP’s have impacted the nature of auditing. The increased implementation and use of ERP
systems in today’s business environment, and especially in financial reporting, means auditors
must become knowledgeable about ERP’s. When a company uses an ERP system, the audit
focus shifts from substantive testing of the books of account to understanding the business
processes, testing the systems and applications controls, as well as controls over system access.
The technical complexity of ERP systems has required auditors to increase their knowledge of
information technology and more often supplement their audits with outside technical expertise.
At the same time, auditors must retain a firm grasp of how accounting entries and processes are
performed manually, so that they can ensure that the computer is automating the process
correctly.
What is the history of ERP Systems?
The roots of ERP systems lie in the manufacturing industry, where software was developed
during the 1960’s and 1970’s to track inventory. The first software incarnation, called Materials
Requirements Planning (MRP) software, was introduced in 1975 and allowed plant managers to
coordinate the planning of production and raw material requirements. MRP software worked
backwards from sales forecasts, factoring in lead times and then determining the order size and
timing. MRP was the first attempt at an integrated information system (Brady 20).
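To make the backward-scheduling idea concrete, the hedged sketch below works back from a required-by date and forecast quantity, subtracting an assumed lead time and rounding the order up to an assumed lot size. The figures are invented for illustration; real MRP logic also handles safety stock, scrap, and multi-level bills of material.

    # Hedged sketch of MRP-style backward scheduling with invented figures.
    from datetime import date, timedelta
    import math

    def plan_order(required_by, forecast_qty, on_hand, lead_time_days, lot_size):
        net_requirement = max(forecast_qty - on_hand, 0)
        if net_requirement == 0:
            return None  # nothing to order
        order_qty = math.ceil(net_requirement / lot_size) * lot_size  # round up to lot size
        order_date = required_by - timedelta(days=lead_time_days)     # work backwards from the need date
        return order_date, order_qty

    if __name__ == "__main__":
        print(plan_order(required_by=date(2005, 7, 1), forecast_qty=500,
                         on_hand=120, lead_time_days=14, lot_size=100))
        # -> (datetime.date(2005, 6, 17), 400)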
MRP was made possible by mainframe computers handling the basic functions through
sequential file processing and electronic data interchange (EDI), which increased the availability
of up-to-date information (Brady 20). With improvements in mainframe computers during the
1980’s, the idea of MRP was expanded into Manufacturing Resource Planning (MRP II)
systems. Instead of using the information system to run the manufacturing unit of a business, the
goal of MRP II was to have a company’s manufacturing, engineering, marketing, and finance
units run on the same information system, thus using the same set of data (Tibben-Lembke).
The first true ERP system began development in 1972 when five former IBM systems analysts
formed a company that was to become Systems, Applications and Products in Data Processing
(SAP). With the goal of developing standard software to integrate business processes and make
data available in real time, the founders began developing a standard financial accounting
package. Soon after, a Materials Management program, with modules for Purchasing, Inventory
Management and Invoice Verification, followed. In 1978, SAP developed a more integrated
version of its software products, called the SAP R/2 system. R/2 took full advantage of the
current mainframe computer technology, allowing for interactivity between modules and
additional capabilities like order tracking (Brady 20-21).
In 1992, SAP released its SAP R/3 system, which took four years to develop. The main feature
of R/3 that distinguished it from previous ERP systems was its use of client-server hardware architecture. This setup allows the system to run on a variety of computer platforms such as Unix and Windows NT. R/3 was also designed with an open-architecture approach, allowing third-party companies to develop software that integrates with SAP R/3 (Brady 22). During the
1990’s, ERP competition increased dramatically, with companies such as Oracle Corporation,
PeopleSoft, J.D. Edwards and Baan producing such systems. Currently, SAP and Oracle are the
two leading ERP system developers.
How do you Audit an ERP System?
There are few rules that can be applied to all ERP auditing situations. As each system serves the
client in a different capacity and has been altered to fit the client’s business model, ERP auditors
must be flexible and creative in designing an audit plan. On the same note, there are no hard
rules on splitting roles and responsibilities between audit groups. Role differentiations are
determined on a client-to-client basis, as a function of auditor experience, expertise and training.
A common distinction is made between financial auditors and information systems auditors.
However, with ERP, financial reporting, and especially internal accounting controls, must be audited by working through the computer; therefore, it is important that auditors have knowledge of both accounting and technology, learning new skill sets in the process (Moulton). Specialists are
also commonly hired to determine if complex technology is working correctly. The concept of an
“integrated auditor,” who has enough accounting and IT knowledge to work both sides of the
audit, has emerged as a workable solution to the complexities of ERP auditing (Hahn).
ERP systems are technically complex, with the system residing on multiple computers and the
flexibility to support multiple configurations and customizations (Hahn). To begin understanding
a client’s ERP system, auditors must evaluate how the technology relates to the business
environment. To determine the scope of the audit, they must take into consideration:
 how the technology is used in the organization
 the number of people using the technology
 which ERP modules have been implemented
 the existence of distributed applications
 whether legacy systems are used and to what capacity (Hahn)
Auditors must go through a significant amount of training to acquire the knowledge necessary to
adequately understand the functioning of an ERP system and how it intakes data and produces
financial reports. Auditors must also consider learning new tools to take advantage of functions
in ERP systems. SAP’s language, ABAP/4, would be useful for an auditor to know so that he can
examine the programming code when there is a question about the functioning of the system
(Hahn). As another example, Oracle’s database comes with its own set of basic auditing actions
designed to detect unauthorized access and internal abuse of the data being stored (Finnigan).
ERP’s have specifically influenced the auditing profession in four main ways: the interaction and
flow of information, issues with data and the processing of data, controls and security of the data
and the systems, and training of employees who use the ERP systems.
Interaction and the Flow of Information - With an ERP system, operational and financial data
are tied together through a complex information flow. Transactions can be automatically entered
without review or pre-checking. Therefore, ERP’s make it difficult to perform financial audits
without relying on system controls. Such controls should be designed, in part, to prevent
inaccurate or false information from entering the system. As many transactions are automated
functions of modules creating information entries for other modules, it is impossible to audit
“around the computer” (i.e. comparing input to output). Rather, auditing must be done “through
the computer” (i.e. testing the system process that the input went through to create the output),
using such methods as test decks and parallel simulation. In order to conduct a proper audit
through the computer, the auditor must have a certain level of understanding about technology
and how the system functions.
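A hedged illustration of parallel simulation: the auditor independently re-implements a simple calculation (here a hypothetical invoice total with a 10% volume discount) and compares the recomputed results against the output reported by the client's system for the same test transactions. The discount rule and sample data are invented for illustration.

    # Hedged sketch of parallel simulation: recompute a calculation independently and
    # compare it with the system's reported output for the same test transactions.

    def independent_invoice_total(quantity, unit_price):
        """Auditor's re-implementation: 10% discount on orders of 100 units or more."""
        total = quantity * unit_price
        if quantity >= 100:
            total *= 0.90
        return round(total, 2)

    # (quantity, unit_price, total reported by the client's system)
    test_deck = [
        (50, 2.00, 100.00),
        (120, 2.00, 216.00),
        (200, 1.50, 280.00),   # deliberately wrong system output for demonstration
    ]

    if __name__ == "__main__":
        for quantity, unit_price, system_total in test_deck:
            expected = independent_invoice_total(quantity, unit_price)
            if abs(expected - system_total) > 0.01:
                print(f"Discrepancy: qty={quantity}, system={system_total}, recomputed={expected}")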
The ideal of a “paperless office” is facilitated through an ERP system, because the system is
centralized and communicates data from a common internal source, the database. Instead of
hardcopy evidence, ERP’s create event history logs for a visible trail of evidence to trace
information to the original input source (Adint). These audit trails allow an auditor to both detect
when an undesirable event has occurred and reconstruct what happened. At a minimum, the trails should contain the user ID, the date and time of the event, and the action
taken. Other information could include previous and current field values (Adint). Auditors of
ERP systems need to be cognizant of how to use these audit trails and the appropriateness of
their design because it impacts the ability to rely on system controls or the output created.
Because ERP’s are customizable and often changed by an organization’s internal programmers,
auditors must pay attention to how these changes take place. The production code forms the basis
of running the ERP system. To protect this valuable asset, changes in the production code should
be:
 authorized by the business owner (if functional) or IT manager (if technical)
 tested thoroughly
 approved by the business owner or IT manager
 performed by an authorized person
 documented
To verify the controls of authorization and approval are valid, any change to the code should be
traceable to a request. Access to the production code should be limited and traceable to the
authorized individual who made changes. To check these, auditors must look for hard-copy
documentation, such as change request forms, as well as documentation embedded in the code
itself (Adint).
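The hedged sketch below shows how an auditor might cross-check a list of production code changes against approved, tested change requests; the change-log entries, request numbers, and field names are invented stand-ins for whatever documentation the client actually keeps.

    # Hedged sketch: verify every production code change is traceable to an approved,
    # tested change request. The records below stand in for real client documentation.

    approved_requests = {
        "CR-1041": {"approved_by": "business_owner", "tested": True},
        "CR-1042": {"approved_by": "it_manager",     "tested": True},
    }

    production_changes = [
        {"object": "ZFI_POST",  "changed_by": "dev01", "request": "CR-1041"},
        {"object": "ZSD_PRICE", "changed_by": "dev02", "request": "CR-1042"},
        {"object": "ZMM_FIX",   "changed_by": "dev02", "request": None},   # no request on file
    ]

    def untraceable_changes(changes, requests):
        """Return changes with no approved, tested change request behind them."""
        problems = []
        for change in changes:
            req = requests.get(change["request"])
            if req is None or not req["tested"]:
                problems.append(change)
        return problems

    if __name__ == "__main__":
        for change in untraceable_changes(production_changes, approved_requests):
            print("Control exception:", change)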
Controls and Security - It is important for any entity to segregate the duties of authorization of
transactions, recording of transactions, and custody of transactions. Auditors should examine the
business process flows to identify where authorization, recording, and custody of a business
transaction takes place, and compare it to how the user responsibilities have been designed.
Often user responsibilities are given wide-open access for the initial installation, but rarely are
access restrictions introduced once the system has proven functional. Also, the auditors should
check to see if the segregation of duties is accomplished with a combination of system and offline controls. Segregation of duties should be designed into user responsibilities and functions,
and documented in the business requirements stage. The auditor should determine which users
were given access to what functions by examining documentation from the implementation stage
(Cooke).
The same segregation rule needs to be applied to IT functions to ensure system integrity. For
example, IT personnel should not have user responsibilities. This serves the purpose of
segregating development and production activities. IT personnel are responsible for maintaining
the production software, including the associated controls, while production data is owned by the
business users and serves as a record for business activities (Adint). If these duties were not
segregated, a transaction could be processed with circumvented controls compromising data
integrity.
Auditors must now be aware of the logical security of data used by the ERP system. Logical
security includes user ID’s and passwords. Auditors must make sure that user ID’s are unique,
because unique ID’s ensure accountability and the ability to trace actions to individuals. The
ability to sign on with a generic ID needs to be tightly controlled. This requires changing all the
default passwords for generic ID’s that the ERP comes with and allowing few employees to
know what the new password is. As an example, Oracle databases come programmed with
generic ID passwords such as CHANGE_ON_INSTALL, MANAGER, and ORACLE (Adint).
The problem with retaining the default passwords in prepackaged systems is that these
passwords are open to the public and anyone who has network access can use them to gain
access to the system.
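A hedged sketch of how an auditor might screen an account extract for the well-known default credentials cited above; the account list is hypothetical, and in practice such checks are run with the client's own security tools against password hashes rather than with ad-hoc plaintext lists.

    # Hedged sketch: flag generic IDs still using well-known default passwords
    # (such as the Oracle examples cited above). The account extract is hypothetical;
    # real audits work from hashes or vendor tooling, not plaintext passwords.

    KNOWN_DEFAULTS = {
        ("SYSTEM", "MANAGER"),
        ("SYS", "CHANGE_ON_INSTALL"),
        ("SCOTT", "TIGER"),
    }

    account_extract = [
        ("SYSTEM", "MANAGER"),      # default password never changed
        ("SYS", "x7!kQ92p"),        # changed
        ("APPUSER", "Summer2005"),  # weak, but not a vendor default
    ]

    if __name__ == "__main__":
        for user_id, password in account_extract:
            if (user_id, password) in KNOWN_DEFAULTS:
                print(f"Finding: generic ID {user_id} still uses its default password")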
Auditors also must look at corporate policy regarding application and database passwords. Passwords form the basis of logical security, and strong passwords protect the data from unauthorized access. Clear policies stress the importance of employees creating strong, complex passwords. Password policies should encompass minimum length, complexity requirements, expiration periods and lock-out times. An example policy (a minimal validation sketch follows the list) would include:
 Minimum of 8 characters
 Cannot be one of the user’s previous four passwords
 Contains at least one letter or number
 Contains at least one special character
 Not based on words found in the dictionary or on proper names
 Expires in 14 days (Adint)
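As an illustration, the hedged sketch below checks a candidate password against most of the example policy. The small word list is a placeholder for a real dictionary, the 14-day expiration is a system setting rather than something checked at validation time, and none of this reflects any particular product's implementation.

    # Hedged sketch: validate a candidate password against the example policy above.
    import string

    COMMON_WORDS = {"password", "oracle", "admin", "welcome"}   # placeholder dictionary

    def violates_policy(candidate, previous_four):
        problems = []
        if len(candidate) < 8:
            problems.append("shorter than 8 characters")
        if candidate in previous_four:
            problems.append("matches one of the previous four passwords")
        if not any(c.isalnum() for c in candidate):
            problems.append("contains no letter or number")
        if not any(c in string.punctuation for c in candidate):
            problems.append("contains no special character")
        if any(word in candidate.lower() for word in COMMON_WORDS):
            problems.append("based on a dictionary word")
        return problems

    if __name__ == "__main__":
        print(violates_policy("Oracle1!", ["Spring05!"]))   # flagged: dictionary-based
        print(violates_policy("k9#Tr4!zQ", ["Spring05!"]))  # empty list: no violations found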
A process must exist for business owners to review the user access lists, as well as who monitors
day-to-day administration of controls and how often controls are reviewed (Cooke). Business
owners are in the best position to determine if access to the system or an application is needed to
perform an employee’s task (Adint). Restricting employee access to certain fields and windows
of the ERP system prevents inappropriate changes in the data. For example, an accounts payable
clerk should not be given access to the purchase order module, since access to this module is not
required to perform his job. The company should also have a review process in place to identify
when people have changed positions or left the company and no longer need access to the
system. In order to remove the task from IT, business owners should be enabled to pull their own
access report (Adint).
Data Processing and Data Issues - ERP systems are designed to automatically update data throughout the system once a transaction has been entered, so the implementation of an ERP
system shifts the focus of an audit from substantive testing to a largely controls-based audit.
Since a logical system is performing the updating and reporting, routine transactions can be
checked by the presence of proper controls. If strong controls are in place, auditors can do little
substantive testing when performing an audit, while instead focusing on manual and non-routine
transactions, and get reasonable assurance that the financial statements are free of material
misstatements.
Since ERP’s use on-line, real-time processing, information is updated simultaneously. Every
transaction of every function is stored in one common database, and the various modules in an
ERP system automatically create entries in the database for each other, thus creating
simultaneous updates to the system that are transparent to all users (Hahn). Traditional “batch”
controls and audit trails are no longer available for the auditor. Data entry accuracy is maintained
through the use of default values, cross-field checking and transaction balancing rather than
batch processing (Hahn).
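The hedged sketch below illustrates the kind of entry-time checks described here: a cross-field check (a credit memo must carry a negative amount) and a transaction-balancing check (debits must equal credits). The document types, field names, and rules are invented examples, not any ERP vendor's actual configuration.

    # Hedged sketch of ERP-style entry controls: a cross-field check and a
    # transaction-balancing check, using invented document types and fields.

    def cross_field_errors(entry):
        """Example cross-field rule: credit memos must have a negative amount."""
        errors = []
        if entry["doc_type"] == "CREDIT_MEMO" and entry["amount"] >= 0:
            errors.append("credit memo must have a negative amount")
        return errors

    def is_balanced(lines):
        """Transaction balancing: total debits must equal total credits."""
        debits = sum(l["amount"] for l in lines if l["side"] == "D")
        credits = sum(l["amount"] for l in lines if l["side"] == "C")
        return abs(debits - credits) < 0.005

    if __name__ == "__main__":
        print(cross_field_errors({"doc_type": "CREDIT_MEMO", "amount": 150.00}))
        journal = [{"side": "D", "amount": 150.00}, {"side": "C", "amount": 150.00}]
        print(is_balanced(journal))  # True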
Because the information is updated, maintained and stored electronically, auditors need to
understand how the modules interact with each other and with the database. Additionally, the
flow of information must be understood. Because of the high degree of automation present in
ERP systems, understanding the logical flow of information that is produced will help ensure
that the information is correct.
With the use of a single database, data entry is more important because an erroneous piece of
information will permeate through the entire company’s records (Brady 120). ERP systems shift
the burden of correctness to the front-line workers, while higher-end processes of data transfer and report creation are done automatically. Auditors must spend more time with lower-level employees to determine if those entering the data understand what they are doing, and especially what to do if a problem arises or a mistake is made. In non-integrated information systems, an error in data input is less harmful than in an ERP, because the error will not be spread to the records of other departments and can be caught when auditors compare records. However, with ERP systems there is no way to discover a mistake by checking it against another system, since everything relies on a common database.
Employee Training - ERP systems require extensive training to use. Auditors of ERP systems
need to assess the business environment and how it communicates to users of the ERP the proper
uses and processes of the system (Arlinghaus). Training manuals and documents should be
reviewed, as well as training course outlines. The training should be designed for the end user’s
job, and stress to employees how the data they control affects the entire business operation. If
proficiency tests are in place, the auditor should examine the difficulty of the questions (Brady
120-121). Continual training, especially in the use of new modules and functions, should also be
examined.
Auditors should also examine how the client’s management deals with the changes that ERP
systems bring to the business. A company’s managers and employees will often resist ERP
systems, because they require changing the way employees have performed their jobs in the past. Typical ERP training costs between $10,000 and $20,000 per employee (Brady 32). Because of the high price and the lack of immediate results, many companies do not properly train employees on how to use the ERP system.
Adint, Laura. Packaged Software Control Objectives. (Feb. 4, 2002). [www.auditnet.org/docs/PackagedSoftwareControlObjectives.doc] May 12, 2005
All About ERP: ERP Software Solutions. (2003). [1] May 16, 2005
Arlinghaus, Barry. “Internal Audit’s Role in the Implementation of Enterprise and Other Major Systems.” Internal Auditing, Nov/Dec 2002: p. 32-39.
Brady, Joseph, Ellen Monk, and Bret Wagner. Concepts in Enterprise Resource Planning. Boston: Thomson Learning, 2001.
Cooke, Michael. Application Audits. (2004). [www.auditnet.org/articles/200404%20Cooke%20Application%20Audits.htm] May 15, 2005
Finnigan, Pete. Introduction to Simple Oracle Auditing. (2003). [www.securityfocus.com/infocus/1689] May 15, 2005
Hahn, Jennifer. ERP Systems: Audit and Control Risks. (April 26, 1999). [www.auditnet.org/docs/erprisks.pdf] May 12, 2005
Internal Audit Process: How It Works. (2005). [www.auditnet.org/process.htm] May 15, 2005
Moulton, Phil. Audit Risk in a SAP R/3 Environment. [www.auditnet.org/SAP/Auditing%20in%20a%20SAP%20Environment.pdf] May 15, 2005
Parth, Frank R. and Joy Gumz. Getting Your ERP Implementation Back on Track. (2003). [2] May 15, 2005
Tibben-Lembke, Ron. ERP History and Overview. (April 9, 2002). [www.coba.unr.edu/faculty/rontl/701/15tn-APICS-ERP.ppt] May 15, 2005
Wynne, Diane. Application Controls within Oracle. (2001). [3] May 12, 2005
Systems Applications Products audit
What is SAP’s relationship to ERPs?
Enterprise resource planning systems are specifically designed to help with the accounting function and the control over various other aspects of the company’s business, such as sales, delivery, production, human resources, and inventory management. The three main ERP’s used in today’s larger businesses are SAP, Oracle, and PeopleSoft. Despite the benefits of ERP’s, there are also many potential pitfalls that companies who turn to ERP’s occasionally fall into.
What is SAP?
SAP is the acronym for Systems, Applications, Products. It is a mainframe system that provides
users with a soft real-time business application. It contains a user interface and is extremely
flexible.
What is the history of SAP?
Systems, Applications, Products in data processing or SAP was originally introduced in the
1980’s as SAP R/2 which was a mainframe system that provided users with a soft-real-time
business application that could be used with multiple currencies and languages. Later, when client-server architecture was introduced, SAP brought out a server-based version of its software, called SAP R/3 (henceforth referred to as SAP), which was launched in 1992. They also developed a
graphical user interface or GUI to make it more user friendly and to move away from the
mainframe style user interface. For the next 10 years SAP dominated the large business
applications market. It was successful primarily because it was extremely flexible. Because SAP was a modular system, meaning that the various functions it provided could be purchased piecemeal, it was an extremely versatile system. All a company needed to do was purchase the modules it wanted and customize the processes to match the company’s business model. SAP’s flexibility, while one of its greatest strengths, is also one of its greatest weaknesses. We will now turn to the audit issues surrounding SAP.
What are the two main issues in a SAP Audit?
The two main areas of concern when doing an SAP audit are security and data integrity. We will discuss these issues from implementation to production to show how this evolution must be monitored closely, or all other efforts to ensure the accuracy of the data provided by the system will be moot.
Security – Security concerns are the first and foremost concern in any SAP audit. Making sure
that there is proper segregation of duties and access controls is paramount to establishing the
integrity of the controls for the system. When a company first receives SAP, it is almost devoid of all security measures. When implementing SAP a company must go through an extensive
process of outlining their processes and then building their system security from the ground up to
ensure proper segregation of duties and proper access. Proper profile design and avoidance of
redundant user ID’s and super user access will be important in all phases of operation. Along
with this comes the importance of ensuring restricted access to terminals, servers, and the
datacenter to prevent tampering. Because each company will have different modules each
company’s security structure will be distinctly different.
With security it all starts at the beginning with the proper design and implementation of security
and access measures for employees. For new employees it is important that their access is set up
properly and that future access granted has proper approval. After the system has been
implemented the control over system changes and the approval process required for it is vital to
ensure the continued security and functionality of the system. Without proper security measures
in place from start to finish, there will be a material weakness in the controls of the system, and because of this there will likely be some level of fraud as well.
Through security you are able to monitor who has access to what data and processes and ensure
that there is sufficient segregation of duties so as to prevent someone from perpetrating fraud.
One of the major advantages of SAP is that it can be programmed to perform various audit
functions for you. One of the most important of these is reviewing user access: using the system to cross-check against an access matrix to ensure that proper segregation is in place, so that a person with payment-request access does not also have access to create a vendor.
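A hedged sketch of that cross-check: compare each user's access against a matrix of conflicting functions and flag anyone who holds both payment-request and vendor-creation access. The function names and user assignments are hypothetical illustrations, not actual SAP authorization objects or transaction codes.

    # Hedged sketch: segregation-of-duties check against an access matrix.
    # Function names and user assignments are hypothetical, not real SAP objects.

    CONFLICTING_PAIRS = [
        ("create_vendor", "payment_request"),
        ("post_journal", "approve_journal"),
    ]

    user_access = {
        "jsmith": {"payment_request", "display_vendor"},
        "akhan":  {"create_vendor", "payment_request"},   # conflicting combination
    }

    def segregation_violations(access):
        violations = []
        for user, functions in access.items():
            for a, b in CONFLICTING_PAIRS:
                if a in functions and b in functions:
                    violations.append((user, a, b))
        return violations

    if __name__ == "__main__":
        for user, a, b in segregation_violations(user_access):
            print(f"Segregation-of-duties violation: {user} holds both {a} and {b}")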
After ensuring that security is set up to ensure proper segregation of duties the next area of
concern surrounding security is with regards to system changes. All companies should have three
different systems: the development system, the test system, and the production system. All
changes to production will need to be run through an approval process and be tested to ensure
that they will function properly when introduced into the production system. The security around
who can authorize a change and who can put that change through into production is paramount to
ensuring the security and integrity of the system. Review of this process and the people involved
with it will be a key to the audit of the system.
The goal of auditing the access, steps and procedures for system updates is to ensure proper
controls over change management of the system and to ensure that proper testing and
authorization procedures are being used. All of these measures also affect our second major area
of concern, data integrity.
Data Integrity – Because SAP integrates data from legacy systems, it is extremely important to ensure that the mapping of the interaction between the legacy systems and SAP is thorough and complete. Without that mapping, any data received from SAP would be suspect. Also, it is important that
proper backups of the database be maintained along with an up to date and practiced disaster
recovery plan to ensure continuity after a disaster. A thorough review of these plans along with
the mapping of system interfaces will be important in this phase of the audit. Also, because all
SAP data is stored on inter-related tables it is possible for users with certain security to change
them. It is extremely important that the output be verified to ensure accuracy. SAP does provide
some basic audit programs to assist with the review of data to ensure that it is processing
properly. It is also customizable so that a user can create a program to audit a specific function.
The monitoring of change management, that is, the moving of updates from the development stage into production, is one of the key elements of this particular concern. The review and testing procedures for the programs being pulled through to production need to be examined painstakingly to ensure that they will function properly and not adversely affect another area of the system. If anything is missed, a processing error or system crash could cause major problems. Because of this, the review of the approval and pull-through process for production changes needs to be a high priority.
Controls around the system need to be reviewed, especially around the accounts payable and accounts receivable sub-ledgers. Auditors must perform or review reconciliations between SAP
and external information such as bank reconciliations and A/P statement reconciliations. They
must review cost center and responsibility accounting, management review and budgetary
control and the route of authorization for non-routine transactions.
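A hedged sketch of such a reconciliation: compare the SAP A/P sub-ledger balance for each vendor with the balance on the vendor's external statement and report any differences for follow-up. The vendor names, balances, and tolerance are invented for illustration.

    # Hedged sketch: reconcile SAP A/P sub-ledger balances against external vendor
    # statements. Vendor names and balances are invented for illustration.

    sap_ap_balances = {"Acme Supply": 12_400.00, "Delta Parts": 3_150.00}
    vendor_statements = {"Acme Supply": 12_400.00, "Delta Parts": 3_650.00}

    def reconcile(ledger, statements, tolerance=0.01):
        """Return vendors whose ledger balance differs from the statement balance."""
        differences = {}
        for vendor in set(ledger) | set(statements):
            diff = ledger.get(vendor, 0.0) - statements.get(vendor, 0.0)
            if abs(diff) > tolerance:
                differences[vendor] = diff
        return differences

    if __name__ == "__main__":
        for vendor, diff in reconcile(sap_ap_balances, vendor_statements).items():
            print(f"Reconciling item for {vendor}: difference of {diff:,.2f}")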
The audit review should include a review of the validation of data input in certain transactions, the design of ABAP statements and their authority checks, and the matching of documents prior to closing. Also, with regard to master file control, there must be an independent review of master file changes and of the creation of transactional responsibilities to identify any redundant master files.
When it comes to data integrity the primary concerns are the integration of data from the legacy
systems and then ensuring that data being input into the system for processing has been properly
approved and is accompanied by the proper documentation. Through reviewing these aspects of
the processing from implementation through to production you can gain reasonable confidence
that the controls surrounding the data are sufficient and that the data is likely free of material
error. The use of the built in audit functions will greatly assist with this process and the ability to
create your own audit programs will allow you to customize the work to the company you are
working with.
The two major control risks that need to be monitored with SAP are security and data integrity.
To ensure that both are sufficient it is important that both be properly outlined and developed
during implementation. User profiles must be designed properly and access must be sufficiently
segregated to minimize the chance of fraud. Use of the SAP audit functions to cross check the
user access with the matrix of allowable accesses is the quickest and easiest way to ensure that
duties and access are properly segregated. New users must be added and departing users removed promptly, and avoidance and monitoring of any super-user access is imperative. Also, review of
the access to upload and pull through changes to production and review of the associated
authorization process is important from both a security and data integrity point of view. To
further ensure data integrity it is important that proper documentation be reviewed along with
confirmation of any external data available either through a legacy system or through a third
party. This is extremely important with regard to certain sensitive accounts such as A/P. Review
of controls around budgets and management review, and also review of authorization for non-routine transactions and physical access, will be imperative to ensuring the accuracy of the data input and output from the system. The use and development of tools within SAP will help
accelerate this process and help to ensure that it is accurate. These are the two most vital parts of any SAP audit, and successful review of them should allow you to determine the adequacy of controls around the SAP system and access to it, and whether or not there are any material deficiencies in the system’s controls.
Retrieved from "http://en.wikipedia.org/wiki/Systems_Applications_Products_audit"
Perry, William E., "Auditing the Data Center: An Introduction," in EDP Auditing, Pennsauken,
NJ: Auerbach Publishers, Inc., 1985.