CMA Part 1
Volume 2: Sections D – F
Financial Planning,
Performance, and Analytics
Version 20.03
HOCK international books are licensed only for individual use and may not be
lent, copied, sold or otherwise distributed without permission directly from
HOCK international.
If you did not download this book directly from HOCK international, it is not a
genuine HOCK book. Using genuine HOCK books assures that you have complete,
accurate and up-to-date materials. Books from unauthorized sources are likely outdated
and will not include access to our online study materials or access to HOCK teachers.
Hard copy books purchased from HOCK international or from an authorized
training center should have an individually numbered orange hologram with the
HOCK globe logo on a color cover. If your book does not have a color cover or does
not have this hologram, it is not a genuine HOCK book.
2020 Edition
CMA
Preparatory Program
Part 1
Volume 2: Sections D – F
Financial Planning,
Performance, and Analytics
Brian Hock, CMA, CIA
and
Lynn Roden, CMA
with
Kevin Hock
HOCK international, LLC
P.O. Box 6553
Columbus, Ohio 43206
(866) 807-HOCK or (866) 807-4625
(281) 652-5768
www.hockinternational.com
cma@hockinternational.com
Published August 2019
Acknowledgements
Acknowledgement is due to the Institute of Certified Management Accountants for
permission to use questions and problems from past CMA Exams. The questions and
unofficial answers are copyrighted by the Institute of Certified Management Accountants
and have been used here with their permission.
The authors would also like to thank the Institute of Internal Auditors for permission to
use copyrighted questions and problems from the Certified Internal Auditor Examinations
by The Institute of Internal Auditors, Inc., 247 Maitland Avenue, Altamonte Springs,
Florida 32701 USA. Reprinted with permission.
The authors also wish to thank the IT Governance Institute for permission to make use
of concepts from the publication Control Objectives for Information and related
Technology (COBIT) 3rd Edition, © 2000, IT Governance Institute, www.itgi.org.
Reproduction without permission is not permitted.
© 2019 HOCK international, LLC
No part of this work may be used, transmitted, reproduced or sold in any form or by any
means without prior written permission from HOCK international, LLC.
ISBN: 978-1-934494-71-4
Thanks
The authors would like to thank the following people for their assistance in the
production of this material:
•	All of the staff of HOCK Training and HOCK international for their patience in the multiple revisions of the material,
•	The students of HOCK Training in all of our classrooms and the students of HOCK international in our Distance Learning Program who have made suggestions, comments and recommendations for the material,
•	Most importantly, to our families and spouses, for their patience in the long hours and travel that have gone into these materials.
Editorial Notes
Throughout these materials, we have chosen particular language, spellings, structures
and grammar in order to be consistent and comprehensible for all readers. HOCK study
materials are used by candidates from countries throughout the world, and for many,
English is a second language. We are aware that our choices may not always adhere to
“formal” standards, but our efforts are focused on making the study process easy for all
of our candidates. Nonetheless, we continue to welcome your meaningful corrections and
ideas for creating better materials.
This material is designed exclusively to assist people in their exam preparation. No
information in the material should be construed as authoritative business, accounting or
consulting advice. Appropriate professionals should be consulted for such advice and
consulting.
Dear Future CMA:
Welcome to HOCK international! You have made a wonderful commitment to yourself
and your profession by choosing to pursue this prestigious credential. The process of
certification is an important one that demonstrates your skills, knowledge, and
commitment to your work.
We are honored that you have chosen HOCK as your partner in this process. We know
that this is a great responsibility, and it is our goal to make this process as efficient as
possible for you. To do so, HOCK has developed the following tools for your use:
•	A Study Plan that guides you, week by week, through the study process. You can also create a personalized study plan online to adapt the plan to fit your schedule. Your personalized plan can also be emailed to you at the beginning of each week.
•	The Textbook that you are currently reading. This is your main study source and contains all of the information necessary to pass the exam. This textbook follows the exam contents and provides all necessary background information so that you don’t need to purchase or read other books.
•	The Flash Cards include short summaries of main topics, key formulas and concepts. You can use them to review whenever you have a few minutes, but don’t want to take your textbook along.
•	ExamSuccess contains original questions and questions from past exams that are relevant to the current syllabus. Answer explanations for the correct and incorrect answers are also included for each question.
•	Practice Essays taken from past CMA Exams that provide the opportunity to practice the essay-style questions on the Exam.
•	A Mock Exam enables you to make final preparations using questions that you have not seen before.
•	Teacher Support via our online student forum, e-mail, and telephone throughout your studies to answer any questions that may arise.
•	Videos using a multimedia learning platform that provide the same coverage as a live-taught course, teaching all of the main topics on the exam syllabus.
We understand the commitment that you have made to the exams, and we will match
that commitment in our efforts to help you. Furthermore, we understand that your time
is too valuable to study for an exam twice, so we will do everything possible to make
sure that you pass the first time.
I wish you success in your studies, and if there is anything I can do to assist you, please
contact me directly at brian.hock@hockinternational.com.
Sincerely,
Brian Hock, CMA, CIA
President and CEO
CMA Part 1
Table of Contents

Section D – Cost Management ......................................................................................... 1
D.1. Measurement Concepts ............................................................................................. 2
    Why Cost Management? .............................................................................................. 2
    Classifications of Costs ................................................................................................ 3
    Product Costs vs. Period Costs .................................................................................... 3
    Costs Based on Level of Activity (Fixed, Variable and Mixed Costs) ............................ 5
    Introduction to Costing Systems ................................................................................. 11
    Introduction to Cost Accumulation Methods ................................................................ 12
    Introduction to Cost Measurement Systems ............................................................... 12
    Cost of Goods Sold (COGS) and Cost of Goods Manufactured (COGM) .................... 20
Variable and Absorption Costing ................................................................................... 22
    Fixed Factory Overheads Under Absorption Costing .................................................. 22
    Fixed Factory Overheads Under Variable Costing ...................................................... 23
    Effects of Changing Inventory Levels ......................................................................... 23
    Income Statement Presentation ................................................................................. 25
Joint Products and Byproducts ..................................................................................... 33
    Methods of Allocating Costs to Joint Products ............................................................ 33
    Accounting for Byproducts ......................................................................................... 41
D.2. Costing Systems ...................................................................................................... 44
    Review of Introduction to Costing Systems ................................................................ 44
Process Costing .............................................................................................................. 44
    Steps in Process Costing ........................................................................................... 46
    Process Costing Diagram – FIFO ............................................................................... 59
    Process Costing Diagram – Weighted Average .......................................................... 60
    Process Costing Summary ......................................................................................... 61
    Process Costing Examples ........................................................................................ 62
    Spoilage in Process Costing ...................................................................................... 67
Job-Order Costing ........................................................................................................... 72
Operation Costing ........................................................................................................... 74
Life-Cycle Costing ........................................................................................................... 74
Customer Life-Cycle Costing.......................................................................................... 76
D.3. Overhead Costs........................................................................................................ 77
    Introduction to Accounting for Overhead Costs .......................................................... 77
    Traditional (Standard) Allocation Method .................................................................... 79
    Activity-Based Costing ............................................................................................... 91
    Shared Services Cost Allocation ................................................................................ 98
    Allocating Costs of a Single (One) Service or Support Department to Multiple Users .. 99
    Allocating Costs of Multiple Shared Service Departments ........................................ 102
Estimating Fixed and Variable Costs........................................................................... 109
    High-Low Points Method .......................................................................................... 109
D.4. Supply Chain Management ................................................................................... 112
    What is Supply Chain Management? ....................................................................... 112
    Lean Resource Management ................................................................................... 113
    Just-in-Time (JIT) Inventory Management Systems ................................................. 117
    Kanban .................................................................................................................... 119
    Introduction to MRP, MRPII, and ERP ...................................................................... 120
    Outsourcing ............................................................................................................. 123
    Theory of Constraints (TOC) .................................................................................... 123
    Capacity Level and Management Decisions ............................................................. 135
D.5. Business Process Improvement........................................................................... 138
    The Value Chain and Competitive Advantage .......................................................... 138
    Process Analysis ..................................................................................................... 144
    Business Process Reengineering ............................................................................ 144
    Benchmarking Process Performance ....................................................................... 146
    Activity-Based Management (ABM) .......................................................................... 146
    The Concept of Kaizen ............................................................................................ 147
    The Costs of Quality ................................................................................................ 148
    ISO 9000 ................................................................................................................. 158
    Quality Management and Productivity ...................................................................... 158
    Other Quality Related Issues ................................................................................... 158
    Accounting Process Redesign ................................................................................. 161
Section E – Internal Controls ....................................................................................... 166
E.1. Governance, Risk, and Compliance ..................................................................... 167
    Corporate Governance ............................................................................................ 167
    Responsibilities of the Board of Directors ................................................................. 173
    Audit Committee Requirements, Responsibilities and Authority ............................... 173
    Responsibilities of the Chief Executive Officer (CEO) ............................................... 176
    Election of Directors ................................................................................................ 176
Internal Control .............................................................................................................. 177
    Internal Control Definition ........................................................................................ 177
    The Importance of Objectives .................................................................................. 179
    Who Is Responsible for Internal Control? ................................................................. 179
    Components of Internal Control ............................................................................... 180
    What is Effective Internal Control? ........................................................................... 190
    Transaction Control Objectives ................................................................................ 190
    Types of Transaction Control Activities .................................................................... 190
    Safeguarding Controls ............................................................................................. 191
Legislative Initiatives About Internal Control ............................................................. 195
    Foreign Corrupt Practices Act (FCPA) ...................................................................... 195
    Sarbanes-Oxley Act and the PCAOB ....................................................................... 196
External Auditors’ Responsibilities and Reports ........................................................ 202
    Financial Statement Opinion .................................................................................... 202
    Internal Control Opinion ........................................................................................... 204
    Reviews and Compilations ....................................................................................... 204
    Reports to the Audit Committee of the Board of Directors ........................................ 205
E.2. Systems Controls and Security Measures ........................................................... 206
    Introduction to Systems Controls ............................................................................. 206
    Threats to Information Systems ............................................................................... 207
    The Classification of Controls .................................................................................. 208
    General Controls ..................................................................................................... 209
    Application Controls ................................................................................................ 217
    Controls Classified as Preventive, Detective and Corrective .................................... 225
    Controls Classified as Feedback, Feedforward and Preventive ................................ 226
System and Program Development and Change Controls ........................................ 227
Internet Security ............................................................................................................ 232
    Viruses, Trojan Horses, and Worms ......................................................................... 233
    Cybercrime .............................................................................................................. 234
Business Continuity Planning ...................................................................................... 238
System Auditing ............................................................................................................ 240
    Assessing Controls by Means of Flowcharts ............................................................ 240
    Computerized Audit Techniques .............................................................................. 241
Section F – Technology and Analytics ........................................................................ 244
    Introduction to Technology and Analytics ................................................................. 244
F.1. – Information Systems ........................................................................................... 244
    The Value Chain and the Accounting Information System ........................................ 244
    The Supply Chain and the Accounting Information System ...................................... 245
    Automated Accounting Information Systems (AIS) ................................................... 245
    Databases ............................................................................................................... 258
    Enterprise Resource Planning Systems ................................................................... 261
    Data Warehouse, Data Mart, and Data Lake ............................................................ 264
    Enterprise Performance Management ..................................................................... 266
F.2. Data Governance .................................................................................................... 267
    Definition of Data Governance ................................................................................. 267
    IT Governance and Control Frameworks ................................................................. 267
    Data Life Cycle ........................................................................................................ 278
    Records Management ............................................................................................. 279
    Cyberattacks ........................................................................................................... 280
    Defenses Against Cyberattack ................................................................................. 281
F.3. Technology-enabled Finance Transformation ..................................................... 285
    Systems Development Life Cycle (SDLC) ................................................................ 285
    Business Process Analysis ...................................................................................... 285
    Robotic Process Automation (RPA) ......................................................................... 286
    Artificial Intelligence (AI) .......................................................................................... 288
    Cloud Computing ..................................................................................................... 290
    Blockchains, Distributed Ledgers, and Smart Contracts ........................................... 293
F.4. Data Analytics......................................................................................................... 301
    Business Intelligence (BI) ........................................................................................ 302
    Data Mining ............................................................................................................. 306
    Analytic Tools .......................................................................................................... 313
    Visualization ............................................................................................................ 336
Answers to Questions ................................................................................................... 346
Section D – Cost Management
Section D represents 15% of the Part 1 Exam. Section D focuses on the process of determining how much
it costs to produce a product. Topics covered include several types of cost accumulation, cost measurement
and cost allocation systems as well as sources of operational efficiency and business process performance
for a firm. Important concepts in the business process performance topic are the concepts of competitive
advantage and how a firm can attain competitive advantage.
Major topics include:
•	Variable and Absorption Costing
•	Joint Product and Byproduct Costing
•	Process Costing
•	Job Order Costing
•	Operation Costing
•	Life-cycle Costing
•	Overhead Cost Allocation
•	Activity-based Costing
•	Shared Services Cost Allocation
•	Estimating Fixed and Variable Costs
•	Supply Chain Management
•	Business Process Improvement
The three topics that will be the most challenging are:
1)	Variable and Absorption Costing
2)	Process Costing
3)	Activity-based Costing
The three topics above are not the only important topics, of course. The other topics are also important
and will be tested. However, variable and absorption costing, process costing, and activity-based costing
are areas where candidates will need to spend more time to ensure full understanding for the exam.
D.1. Measurement Concepts
Why Cost Management?
Cost management systems are used as basic transaction reporting systems and for external financial reporting. Cost management systems not only provide reliable financial reporting, but they also track costs
in order to provide information for management decision-making.
The most important function of cost management is to help management focus on factors that make the
firm successful. The management accountant is an integral part of management, identifying, summarizing,
and reporting on the critical success factors necessary for the firm’s success. Critical success factors are
a limited number of characteristics, conditions, or variables that have a direct and important impact on the
efficiency, effectiveness and viability of an organization. They are the aspects of the company’s performance
that are essential to its competitive advantage¹ and therefore to its success. Activities related to the critical
success factors must be performed at the highest possible level of excellence.
For example, the management accountant can provide information about the sources of the firm’s competitive advantage, such as the cost, productivity or efficiency advantage the firm has relative to competitors
or about the additional prices the company can charge for additional features that make its offering distinctive relative to the costs of adding those features. Strategic cost management is cost management that
specifically focuses on strategic issues such as these. Thus, cost management contributes to the company’s
achieving its strategic goals and objectives.
Evaluating Operating Performance
In determining whether a firm is achieving its goals and objectives, two aspects of operations are important:
effectiveness and efficiency.
As was emphasized in the section on Strategic Planning in Section B, “a publicly-owned for-profit company
must have maximizing shareholder value as its ultimate goal. The shareholders are the owners. They have
provided risk capital with the expectation that the managers will pursue strategies that will give them good
returns on their investments. Thus, managers have an obligation to invest company profits in such a way
as to maximize shareholder value.” Effectiveness and efficiency are important aspects of fulfilling that obligation.
An effective operation is one that achieves or exceeds the goals set for the operation. Maximizing shareholder value is the ultimate goal. Effectiveness in reaching its goals can be measured by analyzing the
firm’s effectiveness in attaining its critical success factors. Critical success factors may be a desired level of
operating income, an increase in market share, new products introduced, or a specified return on investment. The master budget states the desired operating income for the period, and comparing actual results
with the planned results is a basic starting point for evaluating the effectiveness of the firm in attaining its
profitability goals.
An efficient operation is one that makes effective use of its resources in carrying out the operation. If a
firm attains its goal of increasing sales but it spends more of its resources than necessary to attain that
goal, the firm may be effective but it is not efficient. Alternatively, a firm may be efficient in its use of
resources, spending less than planned per unit sold, but if the firm’s goals for profitability and growth are
not achieved because sales are too low, the firm’s operation was not effective.
Therefore, assessments of efficiency are independent of assessments of effectiveness. Cost management
aids in assessing both effectiveness and efficiency.
¹ Competitive advantage is an advantage that a company has over its competitors that it gains by offering consumers greater value than they can get from its competitors.
Classifications of Costs
It is important to understand the different types, classifications and treatments of costs.
The Difference Between Costs and Expenses
Costs and expenses are two different things.
1)	Costs are resources given up to achieve an objective.
2)	Expenses are costs that have been charged against revenue in a specific accounting period.
“Cost” is an economic concept, while “expense” is an accounting concept. A cost need not be an expense,
but every expense was a cost before it became an expense. Most costs eventually do become expenses,
such as manufacturing costs that reach the income statement as cost of goods sold when the units they
are attached to are sold, or the cost of administrative fixed assets that have been capitalized on the balance
sheet and subsequently expensed over a period of years as depreciation.
However, some costs do not reach the income statement. Implicit costs² such as opportunity costs³ never become expenses in the accounting records, but they are costs nonetheless because they represent resources given up to achieve an objective.
Product Costs vs. Period Costs
Costs are classified according to their purpose. The two main classifications of costs based on purpose are
product (or production) costs and period costs.
Product Costs (also called Inventoriable Costs)
Product costs, or inventoriable costs, are costs for the production process without which the product could
not be made. Product costs are “attached” to each unit and are carried on the balance sheet as inventory
during production (as work-in-process inventory) and when production is completed (as finished goods
inventory) until the unit is sold. When a unit is sold, the item’s cost is transferred from the balance sheet
to the income statement where it is classified as cost of goods sold, which is an expense.
The main types of product costs are: 1) direct materials, 2) direct labor, and 3) manufacturing overhead (both fixed and variable). These different product costs can be combined and given different names
as outlined in the tables below. Candidates need to know what types of costs are included in the different
classifications.
Direct labor and manufacturing overhead are called conversion costs. They are the costs of converting
the direct materials to finished goods in the production process.
Note: The definition above of product cost is the definition used for financial reporting purposes. However, other types of costs are considered “product costs” for pricing and other purposes, and those will
be discussed later.
² Implicit costs are costs that do not involve any specific cash payment and are not recorded in the accounting records.
³ An opportunity cost is the contribution to income that is lost by not using a limited resource in its best alternative use. An opportunity cost is a type of implicit cost. Both implicit costs and opportunity costs are discussed in more detail later.
Types of Product Costs
The costs that follow are the main costs incurred in the production process.
Direct labor: Direct labor costs are the costs of labor that can be directly traced to the production of a product. The cost of wages for production workers is a direct labor cost for a manufacturing company.

Direct material: Direct materials are the raw materials that are directly used in producing the finished product. The costs included in the direct material cost are all of the costs associated with acquiring the raw material: the raw material itself, shipping-in cost, and insurance, among others. Common examples of direct materials are plastic and components.

Manufacturing overhead: Manufacturing overhead costs are the company’s costs related to the production process that are not direct material or direct labor but are necessary costs of production. Examples are indirect labor, indirect materials, rework costs, electricity and other utilities, depreciation of plant equipment, and factory rent.

Indirect labor: Indirect labor is labor that is part of the overall production process but does not come into direct contact with the product. A common example is labor cost for employees of the manufacturing equipment maintenance department. Indirect labor is a manufacturing overhead cost.

Indirect material: Similar to indirect labor, indirect materials are materials that are not the main components of the finished goods. Examples are glue, screws, nails, and other materials such as machine oils, lubricants, and miscellaneous supplies that may not even be physically incorporated into the finished good. Indirect materials are a manufacturing overhead cost.
Groupings of Product Costs
The five main types of product costs in the previous table can be further combined to create different cost
classifications. The three classifications that candidates need to be aware of for the CMA exam are in the
following table.
Prime costs: Prime costs are the costs of direct material and direct labor. Direct material and direct labor are the direct inputs, or the direct costs of manufacturing.

Manufacturing costs: Manufacturing costs include the prime costs and manufacturing overhead applied. These are all of the costs that need to be incurred in order to produce the product. Manufacturing costs do not include selling or administrative costs, which are period costs.

Conversion costs: Conversion costs include manufacturing overhead (both fixed and variable) and direct labor. Conversion costs are the costs required to convert the direct materials into the final product.
Note: Direct labor is both a prime cost and a conversion cost.
Period Costs, or Nonmanufacturing Overheads
Period costs are costs for activities other than the actual production of the product. Period costs are expensed as they are incurred.
The number of period costs is almost unlimited because period costs include essentially everything other
than the product costs, since all costs must be either product costs or period costs. The more commonly used examples of period costs include selling, administration, and accounting, but period costs are all the
costs of any department that is not involved in production.
Period costs can be variable, fixed or mixed, but they are not included in the calculation of cost of goods
sold or cost of goods manufactured (both of which are covered later). For financial reporting purposes,
period costs are expensed to the income statement as they are incurred.
However, for internal decision-making, some period costs may be allocated to the production
departments and then to the individual units. This allocation may be done internally so that the company can set a price for each product that covers all of the costs the company incurs. This type of allocation
will be covered in the topic of Shared Services Cost Allocation.
Note: Overhead allocation of period costs to production is not reported as such in the external financial
statements issued by the company, because it is not proper to do so according to U.S. GAAP or IFRS.
According to both U.S. GAAP and IFRS, period costs should be expensed in the period in which they are
incurred. Allocation of period costs to production would be used for internal decision-making purposes
only.
The number of classifications of period expenses that a company uses on its income statement will depend
on the company. Examples include general and administrative expense, selling expense, accounting, depreciation of administrative facilities, and so on.
Costs Based on Level of Activity (Fixed, Variable and Mixed Costs)
In the following table are the main groups of costs based on their behavior as the level of activity
changes. An activity is an event, task, or unit of work with a specified purpose. “Activity” in production
can refer to the number of units of a resource used such as hours of direct labor, or it can refer to the
number of units of product produced. Both production costs and period costs can be classified based on
activity level, though the type of activity used in the classification of period costs is different from that used
for production costs. For period costs, “activity” frequently refers to number of units sold, though it can be
used for any type of activity that incurs costs.
For the following three types of costs, candidates need to know both how the cost per unit of activity
changes and how the total cost changes as the level of activity changes.
Fixed, Variable, and Mixed Costs
Fixed costs: Fixed costs do not change within the relevant range of activity. As long as the activity level remains within the relevant range, the total amount of fixed costs does not change with a change in activity level such as production volume. However, the cost per unit decreases as the activity level increases and increases as the activity level decreases.

Variable costs: Variable costs are costs such as material and labor (among production costs) or shipping-out costs (among period costs) that are incurred only when the activity takes place. The per-unit variable cost remains unchanged as the activity increases or decreases, while total variable cost increases as the activity level increases and decreases as the activity level decreases.

Note: Because discounts are often received when more units are purchased, it may appear that variable costs per unit decrease as activity increases. However, companies do not order units of production inputs one at a time. As part of the budgeting process a company determines how many of a particular input it will need to purchase during the year, and the cost per unit for that particular quantity of inputs is used in the budget. Therefore, budgeted variable costs per unit do not change as production levels change for the company.

Mixed costs: Mixed costs have both a fixed and a variable component. An example of a mixed cost is a contract for electricity that includes a basic fixed fee that covers a certain number of kilowatt-hours of usage per month, and usage over that allowance is billed at a specified amount per kilowatt-hour used. The electricity plan has a fixed component and a variable component. A mixed cost could also be an allocation of overhead cost that contains both fixed and variable overheads.
Cost Behavior in the Production Process
Fixed costs, variable costs, and mixed costs behave in fundamentally different ways in the production process as the production level changes. It is important for candidates to understand how total costs and costs
per unit change as production changes. This knowledge of the fundamental behavior of fixed and variable
costs will be required in other sections of the CMA Part 1 exam and also in the CMA Part 2 exam. Although
cost behavior is not inherently difficult, it is such an important underlying element of the production process
that it will be discussed in detail.
Variable Costs
Variable costs are incurred only when the company actually produces something. If a company produces
no units (sits idle for the entire period), the company will incur no variable costs. Direct material and direct
labor are usually variable costs.⁴
•	As the production level increases, total variable costs will increase, but the variable cost per unit will remain unchanged.
•	As the production level decreases, total variable costs will decrease, but the variable cost per unit will remain unchanged.
Note: The selling price per unit minus all unit variable costs is equal to the unit contribution. The unit
contribution is the amount from each sale available to cover fixed costs or to generate operating income
after the fixed costs have been covered. Contribution margin is a measure of contribution as a percentage of the sales price.
⁴ In some situations, direct labor may be considered a fixed cost, for example in the calculation of throughput contribution margin, covered later under Theory of Constraints in Supply Chain Management, but those situations are not relevant for this discussion.
Fixed Costs
Fixed costs are costs that do not change in total as the level of production changes, as long as production
remains within the relevant range. The relevant range is the range of production within which the fixed
cost remains unchanged. As long as the production activity remains within the relevant range, an increase
in the number of units produced will not cause an increase in total fixed costs and a decrease in the number
of units produced will not cause a decrease in total fixed costs.
Fixed costs are best explained by using production in a factory as an example. A factory has the capacity
to produce a certain maximum number of units. As long as production is between zero and that maximum
number of units, the fixed cost for the factory will remain unchanged. However, once the level of production
exceeds the capacity of the factory, the company will need to build (or otherwise acquire) a second factory.
Building the second factory will increase the fixed costs as the company moves to another relevant range.
Within the relevant range of production, the total fixed costs will remain unchanged, but the fixed
costs per unit will decrease as the level of production increases.
Note: In an exam question, if a change in volume results in a volume that is outside the relevant range
given in the question, the question will provide information that would enable recalculation of the total
fixed costs at the changed volume. If a question does not mention anything about the relevant range,
assume that any changes in volume are within the relevant range and that fixed costs do not change in
total because of the change in volume.
Note: Over a large enough time period, all costs will behave like variable costs. In the short
term, some costs are fixed (such as a factory), but over a longer period of time, the company may be
able to change its factory situation by expanding or moving to another facility so that the factory cost
also becomes variable.
Note: Period costs can be fixed or variable, and production costs can also be fixed or variable.
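To make the contrast concrete, the short Python sketch below shows how total cost and cost per unit behave for a purely fixed cost and a purely variable cost as volume changes within the relevant range. The $300,000 fixed cost and $25 variable cost per unit are assumed figures for illustration only, not numbers from this text.

```python
# Illustrative sketch: behavior of fixed vs. variable costs as volume changes.
# The $300,000 fixed cost and $25 variable cost per unit are assumed numbers.

fixed_cost_total = 300_000      # unchanged in total within the relevant range
variable_cost_per_unit = 25     # unchanged per unit at any volume

for units in (10_000, 20_000, 40_000):
    total_variable = variable_cost_per_unit * units    # rises with volume
    fixed_per_unit = fixed_cost_total / units           # falls as volume rises
    print(f"{units:>6} units | total fixed {fixed_cost_total:>9,} "
          f"| fixed/unit {fixed_per_unit:>6.2f} "
          f"| total variable {total_variable:>9,} "
          f"| variable/unit {variable_cost_per_unit:>5.2f}")
```

Doubling volume doubles total variable cost but leaves the variable cost per unit at $25, while total fixed cost stays at $300,000 and the fixed cost per unit falls by half.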
Mixed Costs
In reality, many costs are mixed costs. Mixed costs have a combination of fixed and variable elements.
Mixed costs may be semi-variable costs or semi-fixed costs, which are also called step costs or step variable
costs.
A semi-variable cost has both a fixed component and a variable component. It has a basic fixed amount
that must be paid regardless of the amount of activity or even in the event of no activity, and added to that
fixed amount is an amount that varies with activity. Utilities provide an example. Some basic utility expenses are required just to maintain a factory building, even if no production is taking place. Electric service,
water service, and other utilities usually must be continued, so that basic amount is the fixed component
of utilities. If production begins (or resumes), the cost for utilities increases by a variable amount, depending on the production level, because machines are running and using electricity and people are using the
water. The fixed component does not change, but the total cost increases incrementally by the amount of
the variable component when production activity increases. Another example of a semi-variable cost is the
cost of a salesperson who receives a base salary plus a commission for each sale made. The base salary is
the fixed component of the salesperson’s salary, and the commission is the variable component.
A semi-fixed cost, also called a step cost or a step variable cost, is fixed over a given, small range of
activity, and above that level of activity, the cost suddenly jumps. It stays fixed again for a while at the
higher range of activity, and when the activity moves out of that range, it jumps again. A semi-fixed cost
or step variable cost moves upward in a step fashion, staying at a certain level over a small range and then
moving to the next level quickly. All fixed costs behave this way, and a wholly fixed cost is also fixed only
as long as activity remains within the relevant range. However, a semi-fixed cost is fixed over a smaller
range than the relevant range of a wholly fixed cost.
Example: The nursing staff in a hospital is an example of a semi-fixed cost. The hospital needs one
nurse for every 25 patients, so each time the patient load increases by 25 patients an additional nurse
will be hired. When each additional nurse is hired, the total cost of nursing salaries jumps by the amount
of the additional nurse’s salary.
In contrast, hospital administrative staff salaries remain fixed until the patient load increases by 250
patients, at which point an additional admitting clerk is needed. The administrative staff salaries are
wholly fixed costs over the relevant range, whereas the nursing staff salaries are semi-fixed costs because the relevant range for the administrative staff (250 patients) is greater than the relevant range
for the nursing staff (25 patients).
Note: The difference between a semi-variable and a semi-fixed cost (also called a step cost or a step
variable cost) is that the semi-variable cost starts out at a given base level and moves upward smoothly
from there as activity increases. A semi-fixed cost moves upward in steps.
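The hospital staffing example can be expressed as a small calculation. The sketch below is a hedged illustration: only the 25-patient and 250-patient ranges come from the example above, while the salary amounts are assumed. It shows how a semi-fixed (step) cost jumps each time activity crosses into a new, narrow range.

```python
import math

# Assumed annual salaries, for illustration only.
NURSE_SALARY = 60_000    # one nurse per 25 patients (semi-fixed / step cost)
CLERK_SALARY = 40_000    # one admitting clerk per 250 patients (fixed over a wider range)

def staffing_cost(patients: int) -> tuple[int, int]:
    """Return (nursing cost, admin cost) for a given patient load."""
    nurses_needed = math.ceil(patients / 25)
    clerks_needed = math.ceil(patients / 250)
    return nurses_needed * NURSE_SALARY, clerks_needed * CLERK_SALARY

for load in (24, 25, 26, 100, 250, 251):
    nursing, admin = staffing_cost(load)
    print(f"{load:>3} patients -> nursing cost {nursing:>8,}, admin cost {admin:>8,}")
```

The nursing cost steps up at 26 patients and again at each multiple of 25, while the administrative cost stays flat until the 250-patient threshold is crossed.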
Total Costs
Total costs consist of total fixed costs plus total variable costs. The lowest possible total cost occurs when
nothing is produced or sold, because at an activity level⁵ of zero, the only cost will be fixed costs. Total
costs begin at the fixed cost level and rise by the amount of variable cost per unit for each unit of increase
in activity. In theory at least, total costs graph as a straight line that begins at the fixed cost level on
the Y intercept and rises at the rate of the variable cost per unit for each unit of increase in activity.
The cost function for total manufacturing costs is
y = F + Vx
Where:
y = Total Costs
F = Fixed Costs
V = Variable Cost Per Unit
x = Total Production
Note: The cost function can also be written as y = Vx + F. The order of the two terms on the right side
of the equals sign is not important.
To illustrate, following is a graph of total manufacturing costs for a company with fixed manufacturing costs
of $700,000 and variable manufacturing costs of $20 per unit produced. Total cost is on the y-axis, while
total production is on the x-axis. The cost function for this company’s total manufacturing costs is
y = 700,000 + 20x
The total cost line on the graph is a straight line beginning at $700,000 on the y-axis where x is zero and
increasing by $200,000 for each production increase of 10,000 units (because 10,000 units multiplied by
$20 equals $200,000). The graph of the above cost function follows. The graph should look familiar, because
the cost function and the line on the graph are the same as those for the equation of a linear regression
line. Linear regression was covered in Section B in Vol. 1 of this textbook in the Forecasting topic under
Trend Projection and Regression Analysis in relation to using simple regression analysis to make forecasts.
This same concept will be discussed again later in this section, in the topic of Estimating Fixed and Variable
Costs.
⁵ “Activity level” or “level of activity” can be used to refer to various types of activity. It can refer to production volume in number of units of output, the number of units of inputs to the production process, sales volume, or to any other activity being performed.
[Graph: Total Manufacturing Costs. The cost function y = 700,000 + 20x is plotted with total manufacturing costs on the y-axis (from $500,000 to $2,100,000) and production volume on the x-axis. The total cost line is a straight line that starts at $700,000 and rises at $20 per unit produced.]
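A minimal numeric sketch of this cost function, using the same $700,000 of fixed cost and $20 variable cost per unit as the graph, reproduces the points described above. The production volumes chosen are for illustration only.

```python
# Total manufacturing cost function from the example: y = F + Vx
F = 700_000   # total fixed manufacturing costs
V = 20        # variable manufacturing cost per unit produced

def total_cost(x: int) -> int:
    """Total manufacturing cost for a production volume of x units."""
    return F + V * x

# Each 10,000-unit increase in volume adds 10,000 x $20 = $200,000 of cost.
for x in range(0, 70_001, 10_000):
    print(f"{x:>6} units -> total cost ${total_cost(x):,}")
```

At a volume of zero the total cost equals the $700,000 of fixed costs (the y-intercept), and at 70,000 units it reaches $2,100,000, matching the top of the graph.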
Direct Versus Indirect Costs
Direct costs are costs that can be traced directly to a specific cost object. A cost object is anything
for which a separate cost measurement is recorded. It can be a function, an organizational subdivision, a
contract, or some other work unit for which cost data are desired and for which provision is made to
accumulate and measure the cost of processes, products, jobs, capitalized projects, and so forth. Examples
of direct costs that will be covered in detail are direct materials and direct labor used in the production of
products.
Indirect costs are costs that cannot be identified with a specific cost object. In manufacturing, overhead
is an indirect cost. Manufacturing indirect costs are grouped into cost pools for allocation to units of product
manufactured. A cost pool is a group of indirect costs that are grouped together for allocation on the basis
of the same cost allocation base. Cost pools can range from very broad, such as all plant overhead costs,
to very narrow, such as the cost of operating a specific machine.
Other indirect costs are nonmanufacturing, or period, costs. Examples are support functions such as IT,
maintenance, security, and managerial functions such as executive management and other supervisory
functions.
Other Costs and Cost Classifications
In addition to all of the costs and classifications listed above, candidates need to be familiar with several
other types of costs.
Explicit costs: Explicit costs are also called out-of-pocket costs. Explicit costs involve payment of cash and include wages and salaries, office supplies, interest paid on loans, payments to vendors for raw materials, and so forth. Explicit costs are the opposite of implicit costs. Most explicit costs eventually become expenses, though the timing of their recognition as expenses may be delayed, for example when inventory is purchased and its cost does not become an expense until it is sold.

Implicit costs: An implicit cost, also called an imputed cost, is a cost that does not involve any specific cash payment and is not recorded in the accounting records. Implicit costs are also called economic costs. Implicit costs do not become expenses. They cannot be specifically included in financial reports, but they are needed for use in a decision-making process. For example, interest not earned on money that could have been invested in an interest-paying security but instead was invested in manufacturing equipment is a frequent implicit or imputed cost. The “lost” interest income is an opportunity cost of investing in the machine. The lost interest is not reported as an expense on the income statement, but it is a necessary consideration in making the decision to invest in the machine, because it will be different if the machine is not purchased.

Opportunity costs: An opportunity cost is a type of implicit cost. “Opportunity cost” is an economics term, and opportunity cost is considered an economic cost. It is the contribution to income that is lost by not using a limited resource in its best alternative use. Opportunity cost includes the contribution that would have been earned if the alternative decision had been made less any administrative expenditures that would have been made for the other available alternative but that will not be made. Anytime money is invested or used to purchase something, the potential return from the next best use of that money is lost. Often, the lost return is interest income. If money were not used to purchase inventory, for example, it could be deposited in a bank and could earn interest. The lost interest can be calculated only for the time period during which the cash flows are different between the two options.

Carrying costs: Carrying costs are the costs the company incurs when it holds inventory. Carrying costs include: rent and utilities related to storage; insurance and taxes on the inventory; costs of employees who manage and protect the inventory; damaged or stolen inventory; the lost opportunity cost of having money invested in inventory (called cost of capital); and other storage costs. Because storage of inventory does not add value to the inventory items, carrying costs are expensed on the income statement as incurred. They are not added to the cost of the inventory and thus are not included on the balance sheet.

Sunk costs: Sunk costs are costs that have already been incurred and cannot be recovered. Sunk costs are irrelevant in any decision-making process because they have already been incurred and no present or future decision can change that fact.

Committed costs: Committed costs are costs for the company’s infrastructure. They are costs required to establish and maintain the readiness to do business. Examples of committed costs are intangible assets such as the purchase of a franchise and the purchase of fixed assets such as property, plant, and equipment. They are fixed costs that are usually on the balance sheet as assets and become expenses in the form of depreciation and amortization.

Discretionary costs: Discretionary costs, also sometimes called flexible costs, are costs that may or may not be incurred by either engaging in an activity or not engaging in it, at the discretion of the manager bearing the cost. In the short term, not engaging in a discretionary activity will not cause an adverse effect on the business. However, in the long run the activities are necessary and the money does need to be spent. Discretionary cost decisions are made periodically and are not closely related to input or output decisions. Furthermore, the value added and the benefits obtained from spending the money cannot be precisely defined. Advertising, research and development (R&D), and employee training are usually given as examples of discretionary costs. Discretionary costs, or flexible costs, may be fixed costs, variable costs, or mixed costs.

Marginal costs: Marginal costs are the additional costs necessary to produce one more unit.
Note: When overtime is worked by production employees, the overtime premium paid to the workers
is considered to be factory overhead. The overtime premium is the amount of increase in the wages
paid per hour for the overtime work.
For example, direct labor is paid $20 per hour for regular hours and is paid time-and-a-half, or $30
per hour, for overtime hours worked in excess of 40 hours per week. Ten hours of overtime are needed
in a given week to complete the week’s required production. The regular rate of $20 per hour multiplied
by 10 hours, or $200, is classified as a direct labor cost, even though it is worked in excess of regular
hours. The half-time premium of $10 additional paid per hour multiplied by 10 hours, or $100, is
classified as factory overhead.
The half-time premium of $100 is not charged to the particular units worked on during the overtime
hours, because the units worked on during the overtime hours could just as easily have been different
units if the jobs to be done had simply been scheduled differently. As overhead, the overtime premium
paid is allocated equally among all units produced during the period.
However, if the need to work overtime results from a specific job or customer request, the overtime
premium should be charged to that specific job as part of the direct labor cost of that job and not
included in the overall overhead amount to be allocated.
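Using the figures in the note above, the split between direct labor and factory overhead can be computed as in the short sketch below. The function name and structure are illustrative, not from the text.

```python
def split_overtime_pay(regular_rate: float, overtime_hours: float,
                       premium_multiplier: float = 1.5) -> tuple[float, float]:
    """Return (direct labor cost, overhead cost) for overtime hours worked.

    The regular rate for all hours worked is direct labor; only the
    premium portion of the overtime pay is treated as factory overhead.
    """
    direct_labor = regular_rate * overtime_hours
    overhead_premium = regular_rate * (premium_multiplier - 1) * overtime_hours
    return direct_labor, overhead_premium

# 10 overtime hours at a $20 regular rate, paid time-and-a-half ($30 per hour):
dl, oh = split_overtime_pay(20, 10)
print(dl, oh)   # 200.0 direct labor, 100.0 overhead (the overtime premium)
```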
Introduction to Costing Systems
Product costing involves accumulating, classifying and assigning direct materials, direct labor, and factory
overhead costs to products, jobs, or services.
In developing a costing system, management accountants need to make choices in three categories of
costing methods:
1)	The cost measurement method to use in allocating costs to units manufactured (standard, normal, or actual costing).
2)	The cost accumulation method to use (job costing, process costing, or operation costing).
3)	The method to be used to allocate overhead (volume-based or activity-based).
Cost measurement systems are discussed here. Cost accumulation methods and allocation of overhead
will be discussed later. However, a few words about cost accumulation systems are needed before explaining cost measurement systems.
Introduction to Cost Accumulation Methods
Cost accumulation systems are used to assign costs to products or services. Job order costing (also called
job costing), process costing, and operation costing are different types of cost accumulation systems used
in manufacturing.
•	Process costing is used when many identical or similar units of a product or service are being manufactured, such as on an assembly line. Costs are accumulated by department or by process.
•	Job order costing (also called job costing) is used when units of a product or service are distinct and separately identifiable. Costs are accumulated by job.
•	Operation costing is a hybrid system in which job costing is used for direct materials costs while a departmental (process costing) approach is used to allocate conversion costs (direct labor and overhead) to products or services.
Process costing, job order costing, and operation costing will be discussed in more detail later.
Introduction to Cost Measurement Systems
Costs are allocated to units manufactured in three main ways:
1)	Standard costing systems
2)	Normal costing systems
3)	Actual costing systems
These three cost measurement systems are used for allocating both direct manufacturing costs (direct labor
and direct materials) and indirect manufacturing costs (overhead) in order to value the products manufactured.
1) Standard Costing
In a standard cost system, standard, or planned, costs are assigned to units produced. The standard cost
of producing one unit of output is based on the standard cost for one unit of each of the inputs required to
produce that output unit, with each input multiplied by the number of units of that input allowed for one
unit of output. The inputs include direct materials, direct labor and allocated overhead. The standard cost
is what the cost should be for that unit of output.
Direct materials and direct labor are applied to production by multiplying the standard price or rate per
unit of direct materials or direct labor by the standard amount of direct materials or direct labor allowed
for the actual output. For example, if three direct labor hours are allowed to produce one unit and 100
units are actually produced, the standard number of direct labor hours for those 100 units is 300 hours (3
hours per unit × 100 units). The standard cost for direct labor for the 100 units is the standard hourly wage
rate multiplied by the 300 hours allowed for the actual output, regardless of how many direct labor
hours were actually worked and regardless of what actual wage rate was paid. The cost applied
to the actual output is the standard cost allowed for the actual output.
Note: In a standard cost system, the amount of an input’s cost applied to production is calculated using the standard quantity of the input allowed for the actual output (not the actual quantity of the input used) and the standard price allowed per unit of the input (not the actual price paid). Some candidates find that a difficult concept to grasp, because it requires using the standard price and quantity allowed per unit with the actual number of units produced.
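For instance, continuing the direct labor example above, the brief sketch below shows that actual hours worked and the actual rate paid play no role in the amount applied under standard costing. The 3 hours per unit and 100 units come from the text; the $18 standard wage rate is an assumed figure.

```python
# Standard costing: cost applied = standard rate x standard quantity allowed for actual output.
standard_hours_per_unit = 3        # hours allowed per finished unit (from the example)
standard_wage_rate = 18            # assumed standard rate per direct labor hour
actual_units_produced = 100

standard_hours_allowed = standard_hours_per_unit * actual_units_produced   # 300 hours
direct_labor_applied = standard_wage_rate * standard_hours_allowed         # $5,400

# Actual hours worked and the actual wage rate paid do not affect the amount applied;
# any difference between actual cost incurred and applied cost is a variance.
print(standard_hours_allowed, direct_labor_applied)
```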
In a standard cost system, overhead is generally allocated to units produced by calculating a predetermined, or standard, manufacturing overhead rate (a volume-based method) that is applied to the units
produced on the basis of the standard amount of the allocation base allowed for the actual output. When a
traditional method of overhead allocation is used, the standard manufacturing overhead application rate is
calculated as the budgeted overhead cost divided by the budgeted activity level of the allocation base.
The predetermined overhead rate is calculated as follows:

Predetermined Overhead Rate = Budgeted Manufacturing Overhead ÷ Budgeted Activity Level of the Allocation Base
The best cost driver to use as the allocation base is the measure that best represents what causes overhead
cost to be incurred. The most frequently-used allocation bases are direct labor hours, direct labor costs, or
machine hours. For a labor-intensive manufacturing process, the proper base is probably direct labor hours
or direct labor costs. For an equipment-oriented manufacturing process, number of machine hours is the
better allocation base.
To apply overhead cost to production, the predetermined overhead rate is multiplied by the standard
amount of the allocation base allowed for producing one unit of product, and then that standard overhead
amount for one unit is multiplied by the number of units actually produced to calculate the standard overhead cost to be applied to all the units produced.
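A hedged sketch of the two steps just described follows; all of the budgeted and standard amounts in it are assumed for illustration.

```python
# Step 1: predetermined (standard) overhead rate = budgeted overhead / budgeted activity level.
budgeted_overhead = 900_000          # assumed budgeted manufacturing overhead
budgeted_machine_hours = 60_000      # assumed budgeted level of the allocation base

predetermined_rate = budgeted_overhead / budgeted_machine_hours    # $15 per machine hour

# Step 2: apply overhead using the standard amount of the base allowed for actual output.
standard_machine_hours_per_unit = 2  # assumed standard allowed per unit
actual_units_produced = 10_000

overhead_applied = (predetermined_rate
                    * standard_machine_hours_per_unit
                    * actual_units_produced)                       # $300,000

print(f"Rate: ${predetermined_rate:.2f}/machine hour, applied: ${overhead_applied:,.0f}")
```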
Of course, the actual costs incurred will probably be different from the standard costs. The difference is a
variance. The difference is also called an under-applied or over-applied cost. At the end of each accounting period, variances are accounted for in one of two basic ways.
•	If the variances are immaterial, they may be closed out 100% to cost of goods sold expense on the income statement.
•	If the variances are material, they should be prorated among cost of goods sold and the relevant inventory accounts on the balance sheet according to the amount of overhead included in each that was allocated to the current period’s production.
If the variances are closed out 100% to cost of goods sold, the cost of the goods in inventories will be equal
to their standard cost only.
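The two treatments of an end-of-period variance can be sketched as follows. The $30,000 variance and the account balances are assumed, and the proration is done in proportion to the overhead allocated to each account during the period, as described above.

```python
# Disposition of an (assumed) $30,000 under-applied overhead variance.
variance = 30_000

# Overhead allocated during the period that remains in each account (assumed amounts).
allocated_overhead = {"Work in Process": 50_000,
                      "Finished Goods": 100_000,
                      "Cost of Goods Sold": 350_000}

# Immaterial variance: close 100% to Cost of Goods Sold.
immaterial_adjustment = {"Cost of Goods Sold": variance}

# Material variance: prorate based on each account's share of allocated overhead.
total = sum(allocated_overhead.values())
material_adjustment = {account: variance * amount / total
                       for account, amount in allocated_overhead.items()}

print(immaterial_adjustment)
print(material_adjustment)   # WIP 3,000; Finished Goods 6,000; COGS 21,000
```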
Standard costing enables management to compare actual costs with what the costs should have been for
the actual amount produced. Moreover, it permits production to be accounted for as it occurs. Using actual
costs incurred for manufacturing inputs would cause an unacceptable delay in reporting, because those
costs may not be known until well after the end of each reporting period, when all the invoices have been
received.
The emphasis in standard costing is on flexible budgeting, where the flexible budget for the actual production is equal to the standard cost per unit of output multiplied by the actual production volume.
Standard costing can be used in either a process costing or a job-order costing environment.
Note: The standard cost for each input per completed unit is the standard rate per unit of input multiplied
by the amount of inputs allowed per completed unit, not multiplied by the actual amount of inputs
used per completed unit.
Standard costing is applicable to a wide variety of companies. Manufacturing companies use standard costing with flexible budgeting to control direct materials and direct labor costs. Service companies such as
fast-food restaurants use standard costs, too, mainly to control their labor costs since they are labor-intensive. Increasingly, firms are using Enterprise Resource Planning (ERP) systems to help them track
standard and actual costs and to assess variances in real time. Enterprise Resource Planning is covered
later in this section in Supply Chain Management.
2) Normal Costing
In a normal cost system, direct materials and direct labor costs are applied to production differently
from the way they are applied in standard costing. In normal costing, direct materials and direct labor costs
are applied at their actual rates multiplied by the actual amount of the direct inputs used for production.
To allocate overhead, a normal cost system uses a predetermined annual manufacturing overhead
rate, called a normal or normalized rate. The predetermined rate is calculated the same way the predetermined rate is calculated under standard costing. However, under normal costing, that predetermined rate
is multiplied by the actual amount of the allocation base that was used in producing the product,
whereas under standard costing, the predetermined rate is multiplied by the amount of the allocation base
allowed for producing the product.
Normal costing is not appropriate in a process costing environment because it is too difficult to
determine the actual costs of the specific direct materials and direct labor used for a specific production
run. Process costing is used when many identical or similar units of a product or service are being manufactured, such as on an assembly line. Costs are accumulated by department or by process. In contrast,
job costing accumulates costs and assigns them to specific jobs, customers, projects, or contracts. Job
costing is used when units of a product or service are distinct and separately identifiable. Normal costing
is used mainly in job costing.
The purpose of using a predetermined annual manufacturing overhead rate in normal costing is to normalize
factory overhead costs and avoid month-to-month fluctuations in cost per unit that would be caused by
variations in actual overhead costs and actual production volume. It also makes current costs available. If
actual manufacturing overhead costs were used, those costs might not be known until well after the end of
each reporting period, when all the invoices had been received.
3) Actual Costing
In an actual costing system, no predetermined or estimated or standard costs are used. Instead, the actual
direct labor and materials costs and the actual manufacturing overhead costs are allocated to the units
produced. The cost of a unit is the actual direct cost rates multiplied by the actual quantities of the direct
cost inputs used and the actual indirect (overhead) cost rates multiplied by the actual quantities used of
the cost allocation bases.
Actual costing is practical only for job order costing for the same reasons that normal costing is practical
only for job order costing. In addition, actual costing is seldom used because it can produce costs per unit
that fluctuate significantly. This fluctuation in costs can lead to errors in management decisions such as
pricing of the product, decisions about adding or dropping product lines, and performance evaluations.
Below is a summary of the three cost measurement methods:
Standard Costing
   Cost accumulation method usually used with: Process costing or job-order costing
   Direct materials/direct labor application rate: Standard rate
   Direct materials/direct labor application base: Standard amount allowed for actual production
   Overhead application rate: Predetermined (standard) rate
   Overhead allocation base: Standard amount of allocation base allowed for actual production

Normal Costing
   Cost accumulation method usually used with: Job-order costing
   Direct materials/direct labor application rate: Actual rate
   Direct materials/direct labor application base: Actual amount used for actual production
   Overhead application rate: Estimated (normalized) rate
   Overhead allocation base: Actual amount of allocation base used for actual production

Actual Costing
   Cost accumulation method usually used with: Job-order costing
   Direct materials/direct labor application rate: Actual rate
   Direct materials/direct labor application base: Actual amount used for actual production
   Overhead application rate: Actual rate
   Overhead allocation base: Actual amount of allocation base used for actual production
It is important in answering a question to identify what type of costing the company uses.
•	If the company uses standard costing, the costs applied to each unit will be the standard costs for the standard amount of inputs allowed for production of the actual number of units produced.
•	If the company uses normal or actual costing, the actual amounts of inputs used for the actual production are used in calculating the costs applied to each unit.
Example of standard costing, normal costing, and actual costing used for the same product under the
same set of assumptions:
Log Homes for Dogs, Inc. (LHD) manufactures doghouses made from logs. It offers only one size and
style of doghouse. For the year 20X4, the company planned to manufacture 20,000 doghouses. Overhead is applied on the basis of direct labor hours, and the direct labor standard is 2 DLH per doghouse.
The company’s planned costs were as follows:
Direct materials      $45 per doghouse (5 units of DM/doghouse @ $9/unit)
Direct labor          $30 per doghouse (2 DLH/doghouse @ $15/DLH)
Variable overhead     $10 per doghouse (2 DLH/doghouse @ $5/DLH)
Fixed overhead        $260,000, or $13 per doghouse (2 DLH/doghouse @ $6.50 per DLH)
LHD actually produced and sold 21,000 doghouses during 20X4.
LHD’s actual costs incurred were:
Direct materials      $882,000: 5.25 units of DM used per doghouse @ $8/unit of DM
Direct labor          $617,400: 2.1 DLH used per doghouse @ $14/DLH
Variable overhead     $224,910
Fixed overhead        $264,600
Total Costs Applied Under Standard Costing:
Direct materials cost applied:  $9 std. cost/unit of DM × 5 units allowed/house × 21,000 = $945,000
Direct labor cost applied:      $15 std. rate/DLH × 2 DLH allowed/house × 21,000 = $630,000
Variable overhead applied:      $5 std. rate/DLH × 2 DLH allowed/house × 21,000 = $210,000
Fixed overhead applied:         $6.50 std. rate/DLH × 2 DLH allowed/house × 21,000 = $273,000

Total Costs Applied Under Normal Costing:
Direct materials cost applied:  $8 actual rate/DM unit × 5.25 units used/house × 21,000 = $882,000
Direct labor cost applied:      $14 actual rate/DLH × 2.1 DLH used/house × 21,000 = $617,400
Variable overhead applied:      $5 est. rate/DLH × 2.1 DLH used/house × 21,000 = $220,500
Fixed overhead applied:         $6.50 est. rate/DLH × 2.1 DLH used/house × 21,000 = $286,650

Total Costs Applied Under Actual Costing:
Direct materials cost applied:  $8 actual rate/DM unit × 5.25 units used/house × 21,000 = $882,000
Direct labor cost applied:      $14 actual rate/DLH × 2.1 DLH used/house × 21,000 = $617,400
Variable overhead applied:      $5.10¹ actual rate/DLH × 2.1 DLH used/house × 21,000 = $224,910
Fixed overhead applied:         $6.00² actual rate/DLH × 2.1 DLH used/house × 21,000 = $264,600

¹ Actual rate calculated as $224,910 ÷ 21,000 = $10.71/house. $10.71 ÷ 2.1 DLH/house = $5.10/DLH
² Actual rate calculated as $264,600 ÷ 21,000 = $12.60/house. $12.60 ÷ 2.1 DLH/house = $6.00/DLH
The costs applied per unit under each of the cost measurement methods were:
Cost Applied per Unit Under Standard Costing:
Direct materials ($9 std. cost/unit of DM × 5 units of DM allowed)     $45.00
Direct labor ($15 std. rate/DLH × 2 DLH allowed)                        30.00
Variable overhead ($5/DLH allowed × 2 DLH allowed)                      10.00
Fixed overhead ($6.50/DLH allowed × 2 DLH allowed)                      13.00
Total cost per unit                                                    $98.00

Cost Applied per Unit Under Normal Costing:
Direct materials ($8 actual cost/DM unit × 5.25 units used)            $42.00
Direct labor ($14 actual rate/DLH × 2.1 DLH used)                       29.40
Variable overhead ($5 est. rate/DLH × 2.1 DLH used)                     10.50
Fixed overhead ($6.50 est. rate/DLH × 2.1 DLH used)                     13.65
Total cost per unit                                                    $95.55

Cost Applied per Unit Under Actual Costing:
Direct materials ($8 actual cost/DM unit × 5.25 units)                 $42.00
Direct labor ($14 actual rate/DLH × 2.1 DLH used)                       29.40
Variable overhead ($5.10 actual rate/DLH × 2.1 DLH used)                10.71
Fixed overhead ($6.00 actual rate/DLH × 2.1 DLH used)                   12.60
Total cost per unit                                                    $94.71
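The per-unit figures above can be reproduced with a short calculation. The following Python sketch simply restates the LHD example under each cost measurement method; the variable names are illustrative only.

# LHD example: cost applied per doghouse under each cost measurement method.
units_produced = 21_000

# Standard costing: standard rates × standard inputs allowed per unit
standard = 9 * 5 + 15 * 2 + 5 * 2 + 6.50 * 2             # DM + DL + VOH + FOH = 98.00

# Normal costing: actual rates × actual inputs for DM and DL,
# predetermined (estimated) overhead rates × actual allocation base used
normal = 8 * 5.25 + 14 * 2.1 + 5 * 2.1 + 6.50 * 2.1       # = 95.55

# Actual costing: actual rates × actual quantities for everything
actual_voh_rate = 224_910 / units_produced / 2.1           # $5.10 per DLH
actual_foh_rate = 264_600 / units_produced / 2.1           # $6.00 per DLH
actual = 8 * 5.25 + 14 * 2.1 + actual_voh_rate * 2.1 + actual_foh_rate * 2.1   # = 94.71

print(round(standard, 2), round(normal, 2), round(actual, 2))   # 98.0 95.55 94.71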
Effect of Each Cost Measurement System on Variance Reporting
The manufacturing variances calculated will be affected by which cost measurement system is being used.
Effect of Standard Costing on Variance Reporting
When standard costing is being used, all of the manufacturing overhead variances as covered in Section C,
Performance Management, will be calculated, and their interpretation will be as presented in Section C.
Effect of Normal Costing on Variance Reporting
When normal costing is being used, direct materials and direct labor costs are applied to production at
their actual rates multiplied by the actual amount of the direct inputs used in the production after
the end of the period when the actual rates and the actual quantities of direct inputs used in production are
known. Therefore, under normal costing no direct material or direct labor variances can exist because the actual costs incurred are equal to the costs applied to production.
Under normal costing, overhead is applied at a predetermined overhead rate calculated the same way the
standard overhead rate is calculated under standard costing: budgeted total overhead divided by the number of hours of the allocation base (direct labor hours or machine hours) allowed for the budgeted output.
However, under normal costing, overhead is applied to production by multiplying that predetermined overhead rate by the actual amount of the allocation base used in producing the product, instead of multiplying
it by the amount of the allocation base allowed for producing the product, as in standard costing.
Therefore, under normal costing, the difference between the actual variable overhead incurred and the
variable overhead applied consists only of the variable overhead spending variance. No variable overhead
efficiency variance arises under normal costing.
Under normal costing, the fixed overhead spending (flexible budget) variance is the same as it is under
standard costing, because that variance is actual fixed overhead incurred minus budgeted fixed overhead,
and neither of those is affected by the amount of fixed overhead applied.
However, the fixed overhead production-volume variance is different from what it would be under standard
costing because the amount of fixed overhead applied is different. The fixed overhead production-volume
variance is budgeted fixed overhead minus fixed overhead applied, and the amount of fixed overhead
applied under normal costing is based on the amount of the allocation base actually used for the actual
units of output rather than based on the standard amount of the allocation base allowed for the actual
output.
Effect of Actual Costing on Variance Reporting
In an actual costing system, no predetermined or estimated or standard costs are used. Instead, the actual
direct labor and direct materials costs and the actual manufacturing overhead costs are allocated to the
units produced after the end of the period when the actual costs and the actual quantities used of the cost
allocation bases are known. No direct input or variable overhead variances are calculated at all because the
costs applied are the same as the costs incurred.
Under actual costing, the fixed overhead spending (flexible budget) variance is the same as it is under
standard costing because it is calculated as actual fixed overhead incurred minus budgeted fixed overhead.
The fixed overhead production-volume variance is basically meaningless, however, because the fixed overhead applied would be the same as the fixed overhead incurred. Therefore, the fixed overhead production-volume variance, which is budgeted fixed overhead minus fixed overhead applied, would be the same as
the fixed overhead spending (flexible budget) variance, except it would have the opposite sign. For example, if the fixed overhead spending variance were favorable, the fixed overhead production-volume variance
would be unfavorable by the same amount because the amount of fixed overhead applied would be the
same as the amount of actual fixed overhead incurred.
The two fixed overhead variances combined would net to zero.
Question 1: Which one of the following is a variance that could appear if a company uses a normal costing system?
a) Direct material price variance.
b) Direct labor efficiency variance.
c) Variable overhead spending variance.
d) Variable overhead efficiency variance.
(ICMA 2013)
Benefits and Limitations of Each Cost Measurement System
Benefits of Standard Costing
•	Standard costing enables management to compare actual costs with what the costs should have been for the actual production.
•	It permits production to be accounted for as it occurs, since standard costs are used to apply costs to units produced.
•	Standard costing prescribes expected performance and provides control. Standard costs establish what costs should be, who should be responsible, and what costs are under the control of specific managers. Therefore, standard costs contribute to an integrated responsibility accounting system.
•	Standards can provide benchmarks for employees to use to judge their own performance, as long as the employees view the standards as reasonable.
•	Standard costing facilitates management by exception, because as long as the costs remain within standards, managers can focus on other issues. Variances from standard costs alert managers when problems require attention, which enables management to focus on those problems.
Limitations of Standard Costing
•	Using a predetermined factory overhead rate to apply overhead cost to products can cause total overhead applied to the units produced to be greater than the actual overhead incurred when production is higher than expected, and overhead applied may be less than the amount incurred if actual production is lower than expected.
•	If the variances from the standards are used in a negative manner, for instance to assign blame, employee morale suffers and employees are tempted to cover up unfavorable variances and to do things they should not do in order to make sure the variances will be favorable.
•	Output in many companies is not determined by how fast the employees work but rather by how fast the machines work. Therefore, direct labor quantity standards may not be meaningful.
•	The use of standard costing could lead to overemphasis on quantitative measures. Whether a variance is "favorable" or "unfavorable" and the amount of the variance is not the full story.
•	There may be a temptation on the part of management to emphasize meeting the standards without considering other non-financial objectives such as maintaining and improving quality, on-time delivery, and customer satisfaction. A balanced scorecard can be used to address the non-financial objectives as well as the financial objectives.
•	In environments that are constantly changing, it may be difficult to determine an accurate standard cost.
•	The usefulness of standard costs is limited to standardized processes. The less standardized the process is, the less useful standard costs are.
Benefits of Normal Costing
•	The use of normal costing avoids the fluctuations in cost per unit that occur under actual costing because of month-to-month changes in the volume of units produced and month-to-month fluctuations in overhead costs.
•	Manufacturing costs of a job are available earlier under a normal costing system than under an actual costing system.
•	Normal costing allows management to keep direct product costs current because actual materials and labor costs incurred are used in costing the production, while the actual incurred overhead costs that would not be available until much later are applied based on a predetermined rate.
Limitations of Normal Costing
•	Using a predetermined factory overhead rate to apply overhead cost to products can cause total overhead applied to the units produced to be greater than the actual overhead incurred when production is higher than expected, and overhead applied may be less than the amount incurred if actual production is lower than expected.
•	Normal costing is not appropriate for process costing because the actual costs would be too difficult to trace to individual units produced, so it is used primarily for job costing.
Benefits of Actual Costing
•	The primary benefit of using actual costing is that the costs used are actual costs, not estimated costs.
Limitations of Actual Costing
•	Because actual costs must be computed and applied, information is not available as quickly after the end of a period as it is with standard costing.
•	Actual costing leads to fluctuations in job costs because the amount of actual overhead incurred fluctuates throughout the year.
•	Like normal costing, actual costing is not appropriate for process costing because the actual costs would be too difficult to trace to individual units produced. Therefore, it is used primarily in a job costing environment.
Note: The focus of the remainder of this section will be on standard costing, because that is the
most commonly used system in manufacturing.
Cost of Goods Sold (COGS) and Cost of Goods Manufactured (COGM)
The various classifications of costs are used in the calculation of cost of goods sold (COGS) and cost of
goods manufactured (COGM).
•	Cost of goods sold (COGS) is the total of the costs directly attributable to producing items that were sold during the period. For each unit sold, cost of goods sold includes the direct material and direct labor costs of the unit and an allocation of a portion of manufacturing overhead costs.
•	Cost of goods manufactured (COGM) is the total of the costs directly attributable to producing items that were brought to completion in the manufacturing process during the period, whether work on them began before the current period or during the current period. For each unit brought to completion, cost of goods manufactured includes the direct material and direct labor costs of the unit and an allocation of a portion of manufacturing overhead costs.
Neither cost of goods sold nor cost of goods manufactured includes indirect selling and administrative costs
such as sales, marketing and distribution, as those are period costs.
Though COGS and COGM are somewhat similar, they are very different in one key aspect. COGS is an
externally reported figure and it is reported on the income statement. It is the cost of producing the units
that were actually sold during the period. COGM, on the other hand, is an internal number and is not
reported on either the balance sheet or the income statement. COGM represents the cost of producing the
units that were completed during the period. COGM is, however, used in the calculation of cost of goods
sold for a company that produces its own inventory. The calculation of both numbers is covered in more
detail below.
The process of calculating the cost of producing an item is a very important one for any company. It is
critical for the calculated cost to represent the complete cost of production. If the company does not calculate the cost of production correctly, it may charge a price for the product that will be incorrect. The result
will be either low sales volume if the price is too high or low profits if the price is too low.
The calculated production cost per unit is reported on the balance sheet as the value of each unit of finished
goods inventory when the items are completed. As the items in inventory are sold, the cost of each item
sold is transferred to the income statement as cost of goods sold.
Due to the need to determine the cost of production accurately, the information that accountants provide
to management regarding the company’s production costs is crucial. Furthermore, it is beneficial to provide
the information quickly and often, so that management can make any necessary adjustments such as
changes in pricing as soon as possible.
Note: Costs that are not production costs are period costs, and period costs are generally expensed as
incurred (for example, inventory carrying costs, general and administrative costs, and so on).
Calculating Cost of Goods Sold
Cost of goods sold represents the cost to produce or purchase the units that were sold during the period.
It is perhaps the largest individual expense item on the income statement, so it is important for cost of
goods sold to be calculated accurately.
Cost of goods sold is calculated using the following formula:
Beginning finished goods inventory
+ Purchases for a reseller or cost of goods manufactured for a manufacturer
− Ending finished goods inventory
= Cost of goods sold
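As a quick illustration of the formula above, it can be expressed as a small function. The amounts in the usage example are hypothetical, chosen only to show the mechanics.

def cost_of_goods_sold(beginning_fg, additions, ending_fg):
    """Additions = purchases (for a reseller) or cost of goods manufactured (for a manufacturer)."""
    return beginning_fg + additions - ending_fg

# Hypothetical figures: $50,000 beginning FG, $400,000 COGM, $70,000 ending FG
print(cost_of_goods_sold(50_000, 400_000, 70_000))   # 380000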
The formula above is a simplification of what is actually occurring because it assumes all of the units in
finished goods inventory at the beginning of the period were either sold during the period or were still in
finished goods inventory at the end of the period, which does not always happen. In reality, some units
may be damaged, stolen or lost. However, for the CMA Part 1 Exam, the above formula is sufficient.
Calculating Cost of Goods Manufactured
The cost of goods manufactured represents the cost of the units completed and transferred out of
work-in-process inventory during the period. COGM does not include the cost of work that was done on
units that were not finished during the period.
Cost of goods manufactured is calculated using the following formula:
   Direct Materials Used*
+  Direct Labor Used
+  Manufacturing Overhead Applied
=  Total Manufacturing Costs
+  Beginning Work-in-Process Inventory
−  Ending Work-in-Process Inventory
=  Cost of Goods Manufactured

* Direct Materials Used = Beginning Direct Materials Inventory + Purchases + Transportation-In − Net Returns − Ending Direct Materials Inventory
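The same structure can be illustrated in code. This sketch follows the formula above, including the direct materials used calculation; all of the amounts are assumptions used only to show the mechanics.

def direct_materials_used(beginning_dm, purchases, transportation_in, net_returns, ending_dm):
    return beginning_dm + purchases + transportation_in - net_returns - ending_dm

def cost_of_goods_manufactured(dm_used, direct_labor, overhead_applied,
                               beginning_wip, ending_wip):
    total_manufacturing_costs = dm_used + direct_labor + overhead_applied
    return total_manufacturing_costs + beginning_wip - ending_wip

# Hypothetical figures for illustration only
dm = direct_materials_used(30_000, 150_000, 5_000, 2_000, 28_000)          # 155,000
print(cost_of_goods_manufactured(dm, 120_000, 90_000, 40_000, 35_000))     # 370,000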
The COGM of a manufacturing company will be part of its calculation of cost of goods sold (COGS).
As with the calculation of COGS, the calculation of COGM simplifies reality because it assumes that all items
of work-in-process inventory were either completed during the period or are in ending work-in-process
inventory. In reality, some of the work-in-process inventory may have been lost, damaged or otherwise
not used during the period, and therefore some of the raw materials entered into production or items
worked on during the period will not be in ending inventory. However, for the CMA Part 1 exam, the formula
above is sufficient.
Question 2: The Profit and Loss Statement of Madengrad Mining Inc. includes the following information for the current fiscal year.
Sales                                  $160,000
Gross profit                             48,000
Year-end finished goods inventory        58,300
Opening finished goods inventory         60,190
The cost of goods manufactured by Madengrad for the current fiscal year is
a) $46,110
b) $49,890
c) $110,110
d) $113,890
(ICMA 2010)
Variable and Absorption Costing
Variable and absorption costing are two different methods of inventory costing. Under both variable and
absorption costing, all variable manufacturing costs (both direct and indirect) are inventoriable costs. The
only two differences between the two methods are in:
1)	Their treatment of fixed manufacturing overhead
2)	The income statement presentation of the different costs
Note: All other costs except for fixed factory overheads are treated in the same manner under both
variable and absorption costing, although they may be reported in a slightly different manner on the
income statement.
Note: Variable costing can be used only internally for decision-making. Variable costing is not acceptable
under generally accepted accounting principles for external financial reporting. It is also not acceptable
under U.S. tax regulations for income tax reporting.
Fixed Factory Overheads Under Absorption Costing
For a manufacturing company, absorption costing is required for external financial reporting by generally
accepted accounting principles. Under absorption costing, fixed factory overhead costs are allocated to
the units produced during the period according to a predetermined rate. Fixed manufacturing overhead
is therefore a product cost under absorption costing. Product costs are inventoried and they are expensed
as cost of goods sold only when the units they are attached to are sold.
The predetermined fixed overhead rate is calculated as follows:
Predetermined Fixed Overhead Rate = Budgeted Monetary Amount of Fixed Manufacturing Overhead ÷ Budgeted Activity Level of the Allocation Base
The budgeted activity level of the allocation base is the number of budgeted direct labor hours, direct
labor cost, material cost, or machine hours—whatever is being used as the allocation base.
When standard costing is being used, the fixed overhead is applied to the units produced on the basis of
the number of units of the allocation base allowed for the actual output.
Example: Fixed overhead is applied to units produced on the basis of direct labor hours. The standard
number of direct labor hours allowed per unit produced is 0.5 hours. The company budgets $1,500,000
in fixed overhead costs for the year and budgets to produce 750,000 units. Thus, the standard number
of direct labor hours allowed for the budgeted number of units is 750,000 × 0.5, or 375,000 DLH. The
fixed overhead application rate is therefore $1,500,000 ÷ 375,000, or $4.00 per DLH.
The company actually produces 800,000 units, incurring $1,490,000 in actual fixed factory overhead.
The amount of fixed overhead applied to the units produced is $4.00 × (800,000 × 0.5), which equals
$1,600,000. Fixed factory overhead is over-applied by $110,000 ($1,600,000 applied − $1,490,000
actual cost).
Note that the fixed factory overhead incurred did not increase because a greater number of
units was produced than had been planned. In fact, fixed overhead incurred was actually lower
than the budgeted amount. That can occur because fixed factory overhead does not change in total
because of changes in the activity level, as long as the activity level remains within the relevant
range.
In contrast, variable factory overhead, which is applied to production in the same way as fixed factory
overhead, does change in total because of changes in the activity level.
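The example above can be verified in a few lines of Python. The sketch simply restates the same figures; the variable names are illustrative.

# Predetermined fixed overhead rate and applied fixed overhead, from the example above.
budgeted_fixed_overhead = 1_500_000
budgeted_units = 750_000
std_dlh_per_unit = 0.5

budgeted_dlh = budgeted_units * std_dlh_per_unit              # 375,000 DLH
fixed_oh_rate = budgeted_fixed_overhead / budgeted_dlh         # $4.00 per DLH

actual_units = 800_000
actual_fixed_overhead = 1_490_000

# Standard costing: apply at the standard DLH allowed for the actual output
applied = fixed_oh_rate * (actual_units * std_dlh_per_unit)    # $1,600,000
over_applied = applied - actual_fixed_overhead                 # $110,000 over-applied
print(fixed_oh_rate, applied, over_applied)                    # 4.0 1600000.0 110000.0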
Note: Fixed factory overheads are allocated to the units produced as if they were variable costs,
even though fixed factory overheads are not variable costs.
Absorption costing is required not only by U.S. GAAP for external financial reporting but also by the U.S.
taxing authorities for tax reporting.
When absorption costing is being used, the operating income reported by a company is influenced by the
difference between the level of production and the level of sales. For example, when the level of production
is higher than the level of sales, some of the fixed manufacturing overhead costs incurred during the current
period are included on the balance sheet as inventory at year-end. As a result, the fixed costs that are in
inventory are not included on the income statement as an expense. This topic will be discussed in more
detail later.
Fixed Factory Overheads Under Variable Costing
Under variable costing (also called direct costing), fixed factory overheads are reported as period costs
and are expensed in the period in which they are incurred. Thus, no matter what the level of sales, all of
the fixed factory overheads will be expensed in the period when incurred.
Variable costing does not conform to GAAP. For external reporting purposes, GAAP requires the use
of absorption costing, and therefore variable costing cannot be used for external financial reporting.
However, many accountants feel that variable costing is a better tool to use for internal analysis, and
therefore variable costing is often used internally.
Note: It is important to remember that the only difference in operating income between absorption
costing and variable costing relates to the treatment of fixed factory overheads. Under absorption
costing, fixed factory overhead costs are included, or absorbed, into the product cost and reach the
income statement as part of cost of goods sold when the units the fixed costs are attached to are sold.
Under variable costing, only variable direct costs (direct materials and direct labor) and variable indirect
costs (variable overhead) are included as product costs. Fixed factory overhead costs are excluded from
the product cost and treated as a period cost.
Question 3: Which of the following statements is true for a firm that uses variable costing?
a) The cost of a unit of product changes because of changes in the number of manufactured units.
b) Profits fluctuate with sales.
c) An idle facility variation is calculated.
d) Product costs include "direct" (variable) administrative costs.
(CMA Adapted)
Effects of Changing Inventory Levels
Because fixed factory overheads are treated differently under absorption and variable costing, it is virtually
certain that variable and absorption costing will result in different amounts of operating income (or operating loss) for the same period of time.
Note: In addition to producing different amounts of operating income, variable costing and absorption
costing will always produce different values for ending inventory because different costs are included in
each unit of inventory. Ending inventory under absorption costing will always be higher than it would be
if variable costing were used because under absorption costing, each unit of inventory will include some
fixed factory overhead costs. In contrast, under variable costing fixed overhead costs are not allocated
to production and so are not included in inventory. Instead, they are expensed in the period incurred.
Only when production and sales are equal in a period (meaning there is no change in inventory levels
and everything that was produced was sold) will there not be a difference between the operating incomes
reported under variable costing and absorption costing. If sales and production are equal, the fixed factory
overheads will have been expensed as period costs under the variable costing method, and the fixed factory
overheads will have been “sold” and included in cost of goods sold under the absorption costing method.
Whenever inventory changes over a period of time, the two methods will produce different levels of operating income.
Production Greater than Sales (Inventory Increases)
Whenever production is greater than sales, the operating income calculated under the absorption
costing method is greater than operating income under variable costing because some of the fixed
factory overheads incurred were inventoried under absorption costing. Under absorption costing, fixed factory overheads are allocated to each unit. When a unit that was produced but not sold during the period
remains on the balance sheet as inventory, its inventory cost includes the fixed factory overhead applied
to that unit. As a result, that amount of fixed factory overhead is temporarily reported on the balance sheet
instead of the income statement. When the unit is sold during the following period, that amount of fixed
factory overhead moves to the income statement as cost of goods sold.
Under variable costing all of the fixed factory overheads incurred are expensed on the income statement in
the period incurred.
Sales Greater than Production (Inventory Decreases)
If production is lower than sales, the variable costing method will result in a greater operating income than absorption costing will because under the variable method, the only fixed factory overheads
included as expenses in the current period were those that were incurred during the current period. Because
sales were greater than production, some of the products that were produced in previous years were sold
during the current period. Thus, under the absorption method, some of the fixed factory overhead costs
that had been inventoried in previous years will be expensed in the current period—in addition to all the
costs incurred during the current period—as cost of goods sold.
Note: Over a long period of time, the total operating income that will be presented under both
methods will be essentially the same. In the long term, operating income will be the same under
variable and absorption costing, because in the long term the company will not produce more than it
can sell and therefore sales will equal production. Rather, the difference between the two methods will
appear in the allocation of operating income to the different periods within that longer time period.
The following table summarizes the effect on operating income of changing inventory levels (production
compared to sales) under the two methods:
Production & Sales        Operating Income
Production = Sales        Absorption = Variable
Production > Sales        Absorption > Variable
Production < Sales        Absorption < Variable
Note: Ending inventory under absorption costing will always be higher than ending inventory under
variable costing, because more costs are applied to each unit under absorption costing than under variable costing, and the number of units in ending inventory is the same under both methods. The units
in ending inventory under absorption costing will always contain some fixed costs, whereas the units in
ending inventory under variable costing will always contain no fixed costs.
Income Statement Presentation
The presentation of the income statement will also be different under absorption costing and variable costing.
The Income Statement under Absorption Costing
Under absorption costing gross profit is calculated by subtracting from revenue the cost of goods sold,
which includes all variable and fixed manufacturing costs for goods sold. All variable and fixed nonmanufacturing costs (period costs) are then subtracted from gross profit to calculate operating income.
The income statement (through operating income) under absorption costing is as follows:
   Sales revenue
−  Cost of goods sold (variable and fixed manufacturing costs of items sold)
=  Gross profit
−  Variable nonmanufacturing costs (expensed)
−  Fixed nonmanufacturing costs (expensed)
=  Operating income
The Income Statement under Variable (Direct) Costing
Under variable costing a manufacturing contribution margin is calculated by subtracting all variable
manufacturing costs for goods that were sold from revenue. From this manufacturing contribution
margin, nonmanufacturing variable costs are subtracted to arrive at the contribution margin. All fixed
costs (manufacturing and non-manufacturing) are then subtracted from the contribution margin to calculate
operating income.
The income statement under variable costing is as follows:
   Sales revenue
−  Variable manufacturing costs of items sold
=  Manufacturing contribution margin
−  Variable nonmanufacturing costs (expensed)
=  Contribution margin
−  All fixed manufacturing costs (expensed)
−  All fixed nonmanufacturing costs (expensed)
=  Operating income
Note: The difference in presentation between the two methods is separate from the difference in the treatment of fixed manufacturing overheads under the two methods. Candidates need to know that under the absorption method a gross profit is reported, while under the variable method a contribution margin is reported, and the two are different.
The difference is demonstrated in the example (and the answer to the example) that follows this explanation.
Absorption Costing versus Variable Costing: Benefits and Limitations
While absorption costing is required for external reporting (the GAAP financial statements) and income tax reporting in the U.S., it is generally thought that variable costing is better for internal uses.
Benefits of Absorption Costing
•	Absorption costing provides matching of costs and benefits.
•	Absorption costing is consistent not only with U.S. Generally Accepted Accounting Principles but also with U.S. Internal Revenue Service requirements for the reporting of income on income tax returns.
Limitations of Absorption Costing
•	Because of the way that fixed costs are allocated to units under absorption costing, managers have an opportunity to manipulate reported operating income by overproducing in order to keep some of the fixed costs on the balance sheet in inventory. Thus, the effect of this manipulation is to move operating income from a future period to the current period. It also creates a buildup of inventories that is not consistent with a profitable operation.
•	When the number of units sold is greater than the number of units produced, inventory decreases. Operating income under absorption costing will be lower than it would be under variable costing, because some prior period fixed manufacturing costs will be expensed under absorption costing along with the current period's fixed manufacturing costs.
Benefits of Variable Costing
•	The impact on operating income of changes in sales volume is more obvious with variable costing than with absorption costing.
•	By not including fixed costs in the calculation of cost to produce, companies are able to make better and more informed decisions about profitability and product mix.
•	Operating income is directly related to sales levels and is not influenced by changes in inventory levels due to production or sales variances.
•	Variance analysis of fixed overhead costs is less confusing than it is with absorption costing.
•	The impact of fixed costs on operating income is obvious and visible under variable costing because total fixed costs are shown as expenses on the income statement.
•	It is easier to determine the "contribution" to fixed costs made by a division or product, which helps in determining whether the product or division should be discontinued.
•	Variable costing tends to be less confusing than absorption costing because it presents costs in the same way as they are incurred: variable costs are presented on a per-unit basis and fixed costs are presented in total.
•	Advocates argue that variable costing is more consistent with economic reality, because fixed costs do not vary with production in the short run.
Limitations of Variable Costing
•	Variable costing does not provide proper matching of costs and benefits and so is not acceptable for external financial reporting under generally accepted accounting principles. Variable costing is also not acceptable for income tax reporting.
•	Because only variable manufacturing costs are charged to inventory, variable costing requires separating all manufacturing costs into their fixed and variable components.
•	To prepare an income statement based on variable costing, it is also necessary to separate the selling and administrative costs into their fixed and variable components.
Note: The issue of absorption costing versus variable costing is relevant only for a manufacturing company. A company that does not do any manufacturing, for example a reseller or a service company,
would have only non-manufacturing fixed overhead. Non-manufacturing fixed overhead for such a company would simply be expensed as incurred.
Variable/Absorption Costing Example
Hardy Corp. uses the FIFO method to track inventory. The records for Hardy include the following information. For simplicity, assume that the amounts given are both the budgeted amounts and the actual
amounts and thus, there were no variances.
Inventory (in units)                    Year 1       Year 2
Beginning balance, in units                -0-        4,200
Production                              12,000       10,000
Available for sale                      12,000       14,200
Less units sold                         (7,800)     (12,000)
Ending balance, in units                 4,200        2,200

Other information                       Year 1       Year 2
Sales ($2.10 per unit)                 $16,380      $25,200
Variable mfg. costs ($0.90/unit)        10,800        9,000
Fixed mfg. costs                         5,000        5,400
Variable selling and admin. costs        2,250        3,750
Fixed selling and admin. costs           2,250        3,750
Required: Prepare the income statements for Year 1 and Year 2 using both the absorption and the
variable methods of costing. (The solution follows.)
Year 1 Absorption Costing:
Sales revenue            $________
_______________           ________
Gross Profit             $________
_______________           ________
Operating Income         $________

Year 1 Variable Costing:
Sales revenue            $________
_______________           ________
Manuf. Cont. Margin      $________
_______________           ________
Contribution Margin      $________
_______________           ________
Operating Income         $________

Year 2 Absorption Costing:
Sales revenue            $________
_______________           ________
Gross Profit             $________
_______________           ________
Operating Income         $________

Year 2 Variable Costing:
Sales revenue            $________
_______________           ________
Manuf. Cont. Margin      $________
_______________           ________
Contribution Margin      $________
_______________           ________
Operating Income         $________
Answer to the Variable/Absorption Costing Example

Year 1 Income Statements

Absorption Costing
Sales (7,800 × $2.10)                             $ 16,380
Variable Mfg. COGS (7,800 × $0.90)                   7,020
Fixed Mfg. COGS ($5,000 ÷ 12,000 × 7,800)            3,250
COGS                                               (10,270)
Gross profit                                      $  6,110
S&A exp. ($2,250 var. + $2,250 fixed)               (4,500)
Operating income                                  $  1,610

Variable (Direct) Costing
Sales (7,800 × $2.10)                             $ 16,380
Variable Mfg. COGS (7,800 × $0.90)                  (7,020)
Manufacturing Contribution Margin                 $  9,360
Less: Variable S&A                                  (2,250)
Contribution margin                               $  7,110
Less: Fixed mfg. costs                              (5,000)
Fixed S&A                                           (2,250)
Operating income                                  $   (140)

Year 2 Income Statements

Absorption Costing
Sales (12,000 × $2.10)                            $ 25,200
Variable Mfg. Cost (12,000 × $0.90)                 10,800
Fixed Mfg. COGS-Year 1 production⁶                   1,750
Fixed Mfg. COGS-Year 2 production⁷                   4,212
COGS                                               (16,762)
Gross profit                                      $  8,438
S&A expenses                                        (7,500)
Operating income                                  $    938

Variable (Direct) Costing
Sales (12,000 × $2.10)                            $ 25,200
Variable Mfg. COGS (12,000 × $0.90)                (10,800)
Manufacturing Contribution margin                 $ 14,400
Variable S&A                                        (3,750)
Contribution margin                               $ 10,650
Less: Fixed mfg. costs                              (5,400)
Fixed S&A                                           (3,750)
Operating income                                  $  1,500

⁶ Beginning inventory for Year 2 was 4,200 units, and those units were produced during Year 1 at Year 1 costs. Each unit had $0.4167 of fixed manufacturing cost attached to it ($5,000 fixed manufacturing costs in Year 1 ÷ 12,000 units produced in Year 1). Since the company uses FIFO, those units were the first units sold in Year 2. 4,200 units × $0.4167 of fixed manufacturing cost per unit = $1,750 of fixed manufacturing cost from Year 1 production that was sold in Year 2.
⁷ A total of 12,000 units were sold in Year 2. As noted above, 4,200 of those units came from Year 1's production. That means the remainder, or 12,000 − 4,200 = 7,800 units, came from Year 2's production. Fixed manufacturing cost per unit during Year 2 was $5,400 fixed manufacturing costs ÷ 10,000 units produced, or $0.54 per unit. So the fixed manufacturing cost included in the units that were sold from Year 2's production is $0.54 × 7,800 = $4,212.
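The footnoted reconciliation can also be checked numerically. This Python sketch restates the Hardy Corp figures and confirms that the fixed manufacturing overhead held in or released from inventory equals the difference between absorption and variable operating income in each year.

# Hardy Corp example: fixed manufacturing overhead per unit in each year's production
fixed_oh_per_unit_y1 = 5_000 / 12_000      # approx. $0.4167
fixed_oh_per_unit_y2 = 5_400 / 10_000      # $0.54

# Year 1: inventory grew by 4,200 units, all from Year 1 production
y1_difference = 4_200 * fixed_oh_per_unit_y1                       # $1,750
print(round(y1_difference))                                        # 1750 = $1,610 - $(140)

# Year 2 (FIFO): 4,200 Year-1 units left inventory, 2,200 Year-2 units entered it
y2_difference = 2_200 * fixed_oh_per_unit_y2 - 4_200 * fixed_oh_per_unit_y1
print(round(y2_difference))                                        # -562 = $938 - $1,500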
Note: In certain situations, it is very easy to calculate the difference between the variable and absorption
method operating income. Given that the only difference between the two costing methods is the treatment of fixed factory overheads, if one of the three situations below applies and the question asks for
the difference in operating income between the two methods, the only calculation needed is:
   Fixed overhead cost per unit applied to production⁸
×  Number of units of change in inventory
=  Difference in operating income between the two methods
The three situations in which the formula above can be used to calculate the difference in operating
income between the two methods are:
(1) Beginning inventory is zero. In many questions there is a statement either that there is no beginning
inventory, or that it is the company’s first year of operations (in which case there is no beginning
inventory).
(2) The LIFO inventory cost flow assumption is being used and ending inventory is higher than beginning
inventory (in other words, none of the beginning inventory was sold during the period).
(3) If an inventory cost flow assumption other than LIFO is being used, and (a) the beginning inventories
are valued at the same per-unit fixed manufacturing cost as the current year’s planned per-unit fixed
manufacturing cost and (b) under- or over-applied fixed manufacturing overhead is closed out to
cost of goods sold only.
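A minimal sketch of the shortcut formula in the note above, using assumed first-year figures (no beginning inventory), is:

def operating_income_difference(fixed_oh_per_unit, change_in_inventory_units):
    """Absorption operating income minus variable operating income
    (positive when inventory increased, negative when it decreased)."""
    return fixed_oh_per_unit * change_in_inventory_units

# Hypothetical first-year company: produced 10,000 units, sold 9,000, so inventory
# rose by 1,000 units; fixed overhead applied to production was $40,000 (assumed).
fixed_oh_per_unit = 40_000 / 10_000                              # $4.00 applied per unit
print(operating_income_difference(fixed_oh_per_unit, 1_000))     # 4000.0: absorption is higher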
The following information is for the next four questions: The estimated unit costs for a company
using absorption (full) costing and planning to produce and sell at a level of 12,000 units per month are
as follows.
Cost Item                              Estimated Unit Cost
Direct materials                                       $32
Direct labor                                            20
Variable manufacturing overhead                         15
Fixed manufacturing overhead                             6
Variable selling                                         3
Fixed selling                                            4
Question 4: Estimated conversion costs per unit are:
a) $35
b) $41
c) $48
d) $67
⁸ If the inventory level has fallen, the previous year's fixed overhead cost per unit will need to be used, because the "extra" inventory sold was produced during the previous year. If inventory has risen, the current period's fixed overhead cost per unit should be used.
Question 5: Estimated prime costs per unit are:
a) $73
b) $32
c) $67
d) $52
Question 6: Estimated total variable costs per unit are:
a) $38
b) $70
c) $52
d) $18
Question 7: Estimated total costs that would be incurred during a month with a production level of 12,000 units and a sales level of 8,000 units are:
a) $692,000
b) $960,000
c) $948,000
d) $932,000
(CMA Adapted)
Question 8: When a firm prepares financial reports using absorption costing:
a) Profits will always increase with increases in sales.
b) Profits will always decrease with decreases in sales.
c) Profits may decrease with increased sales even if there is no change in selling prices and costs.
d) Decreased output and constant sales result in increased profits.
(CMA Adapted)
Question 9: Jansen, Inc. pays bonuses to its managers based on operating income. The company uses absorption costing, and overhead is applied on the basis of direct labor hours. To increase bonuses, Jansen's managers may do all of the following except:
a) Produce those products requiring the most direct labor.
b) Defer expenses such as maintenance to a future period.
c) Increase production schedules independent of customer demands.
d) Decrease production of those items requiring the most direct labor.
(CMA Adapted)
Question 10: Nance Corp began operations in January. The company produced 50,000 units and sold 45,000 units in its first year of operations. Costs for the year were as follows:
Fixed Manufacturing Costs               $250,000
Variable Manufacturing Costs             180,000
Fixed General and Selling Costs           75,000
Variable General and Selling Costs        80,000
How would the operating income of Nance compare between the variable method and full absorption costing methods?
a) Variable would be $25,000 higher.
b) Absorption would be $25,000 higher.
c) Variable would be $32,500 higher.
d) Absorption would be $32,500 higher.
(HOCK)
The following information is for the next two questions: Osawa planned to produce and actually
manufactured 200,000 units of its single product in its first year of operations. Variable manufacturing
costs were $30 per unit of product. Planned and actual fixed manufacturing costs were $600,000, and
the selling and administrative costs totaled $400,000. Osawa sold 120,000 units of product at a selling
price of $40 per unit.
Question 11: Osawa's operating income using absorption costing is:
a) $200,000
b) $440,000
c) $600,000
d) $840,000

Question 12: Osawa's operating income using variable costing is:
a) $200,000
b) $440,000
c) $800,000
d) $600,000
(CMA Adapted)
Joint Products and Byproducts
When one production process leads to the production of two or more finished products, joint products
result. The products are not identical, but they share the same production process up to what is called the
splitoff point. The splitoff point is the point at which the two products stop sharing the same process and
become different, identifiable products.
The main issue with joint products is how to account for the joint costs (those costs incurred prior to the
splitoff point) and how to allocate the joint costs to the separate products. Accurate allocation is needed
primarily for financial reporting purposes and pricing decisions. The inventory cost of each unit of each joint
product needs to be determined accurately so that the balance sheet will be accurate. Since the inventory
cost of each unit becomes its cost of goods sold when it is sold, the amount of cost to be expensed to COGS
for each unit sold is needed.
An example of joint products would be the processing of pineapple. As a pineapple goes through processing
at the factory the rind is removed. The fruit is used to manufacture two products: bottled pineapple juice
and canned pineapple slices, two separate products that arise from the same initial production process
(removal of the rind). Therefore, the joint costs of the unprocessed pineapples (the direct materials) and
of processing the pineapples to remove the rinds need to be allocated between the juice and the slices.
Joint costs may include direct materials, direct labor, and overhead. Costs incurred after the splitoff point
may also include direct materials, direct labor, and overhead. The costs incurred after the splitoff point are
separable costs and they are allocated to each product as they are incurred by that product.
Byproducts are the low-value products that occur naturally in the process of producing higher-value products. They are, in a sense, accidental results of the production process. The main differentiator between main or joint products and byproducts is relative market value. If a product has a comparatively low market value when compared to the other products produced, it is a byproduct.
Methods of Allocating Costs to Joint Products
Several different measures may be used to allocate joint costs, but all of the different methods use some sort of ratio between the two or more products. The cost allocation is largely a mathematical exercise, but candidates need to learn how the different allocation bases are calculated.
The primary methods are:
1)	Relative Sales Value at Splitoff method
2)	Estimated Net Realizable Value (NRV) method
3)	Physical Measure and Average Cost methods
4)	Constant Gross Profit (Gross Margin) Percentage method
1. Relative Sales Value at Splitoff Method (or Gross Market Value Method)
The Relative Sales Value at Splitoff method is also called the Gross Market Value method, the Sales
Value at Splitoff method, or, more simply, just the Sales Value method.
Under the Relative Sales Value at Splitoff method, joint costs are allocated on the basis of the sales values
of each product at the splitoff point, relative to the total sales value of all the joint products.
The formula to allocate the costs between or among the products is as follows, for each of the joint
products:
(Sales Value of Product X ÷ Total Sales Value of All Joint Products) × Joint Cost = Joint Cost Allocated to Product X
The Relative Sales Value at Splitoff method can be used only if all of the joint products can be sold at
the splitoff point (in other words, with no further processing). Management may decide it would be more
profitable to the company to process some of the joint products further; but the Relative Sales Value at
Splitoff method can still be used to allocate joint costs up to the splitoff point, as long as sales prices at the
splitoff point do exist for all of the joint products.
Example of the Relative Sales Value at Splitoff method:
Cafe Industries manufactures two kinds of coffee percolators: an electric model and a stovetop model.
Part of the manufacturing process is the production of the coffee basket assembly, which includes a
basket and a spreader. The 8-cup electric model and the 8-cup stovetop model use the same basket
assembly, though the pump stems are different. The basket assembly is also sold separately as a replacement part for both percolators at a price of $10.00. Separate prices for the basket assemblies that
go on to be incorporated into the two percolators do not exist, since they become integral parts of the
percolators. However, the whole production run could be sold as replacement baskets, since a market
does exist for them at that stage of production.
One batch consists of 500 basket assemblies, of which 300 are destined to become part of electric
percolators, 150 are destined to become part of stovetop percolators, and 50 are sold separately as
replacement parts. The joint costs of one batch total $2,500, or $5.00 per unit.
Using the Sales Value at Splitoff method, how much of the joint cost is allocated to electric percolators,
how much to stovetop percolators, and how much to the replacement parts?
The sales value of the electric percolator basket assemblies is 300 × $10.00, or $3,000. The sales value
of the stovetop percolator basket assemblies is 150 × $10.00, or $1,500. The sales value of the replacement baskets is 50 × $10.00, or $500. The total sales value of all 500 basket assemblies is therefore
$3,000 + $1,500 + $500, or $5,000.
The portion of the joint cost allocated to the electric percolators is ($3,000 ÷ $5,000) × $2,500, or
$1,500.
The portion of the joint cost allocated to the stovetop percolators is ($1,500 ÷ $5,000) × $2,500, or
$750.
The portion of the joint cost allocated to the replacement basket assemblies is ($500 ÷ $5,000) ×
$2,500, or $250.
To confirm that the full $2,500 of joint cost has been allocated, sum the allocated amounts: $1,500 +
$750 + $250 = $2,500.
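The same allocation can be expressed programmatically. The following Python sketch simply restates the Cafe Industries figures from the example above.

# Cafe Industries: allocate $2,500 of joint cost by relative sales value at splitoff.
joint_cost = 2_500
price_at_splitoff = 10.00

sales_value = {
    "electric percolators": 300 * price_at_splitoff,    # $3,000
    "stovetop percolators": 150 * price_at_splitoff,    # $1,500
    "replacement parts":     50 * price_at_splitoff,    # $500
}
total_sales_value = sum(sales_value.values())            # $5,000

allocation = {product: joint_cost * value / total_sales_value
              for product, value in sales_value.items()}
print(allocation)   # {'electric percolators': 1500.0, 'stovetop percolators': 750.0, 'replacement parts': 250.0}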
Benefits of the Relative Sales Value at Splitoff (Gross Market Value) Method
•	Costs are allocated to products in proportion to their expected revenues, that is, in proportion to the individual products' ability to absorb costs.
•	The method is easy to calculate and is simple, straightforward, and intuitive.
•	The cost allocation base is expressed in terms of a common basis (amount of revenue) that is recorded in the accounting system.
•	It is the best measure of the benefits received from the joint processing. It is meaningful because generating revenues is the reason for the company to incur the joint costs.
•	It can be used when further processing is done, as long as selling prices exist for all the joint products.
Limitations of the Relative Sales Value at Splitoff (Gross Market Value) Method
•	Selling prices at the splitoff point must exist for all of the products in order to use this method.
•	Market prices of joint products may vary frequently, but this method uses a single set of selling prices throughout an accounting period, which can introduce inaccuracies into the allocations.
2. Estimated Net Realizable Value (NRV) Method
The Estimated Net Realizable Value (NRV) method can be used if one or more of the joint products must
be processed beyond the splitoff point in order to be sold. It may also be used under certain circumstances
if one or more of the joint products may be processed beyond the splitoff point in order to increase its
value above the selling price at the splitoff point.
This method is essentially the same as the Relative Sales Value method, and the allocation is done in the
same way, except an estimated Net Realizable Value (NRV) is used for the product or products that
must be or will be processed further.
The estimated NRV for a product to be processed further is calculated as:
Future sales price of items produced that will be sold in the future
− Separable costs incurred after the splitoff point
= Estimated net realizable value
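As a brief sketch of an NRV-based allocation, the following uses assumed figures for two joint products that both require further processing; the product names, prices, and costs are hypothetical.

# Hypothetical: allocate $60,000 of joint cost using estimated net realizable values.
joint_cost = 60_000

products = {
    # product: (expected final sales value, separable costs after splitoff)
    "Product A": (90_000, 20_000),   # NRV = 70,000
    "Product B": (40_000, 10_000),   # NRV = 30,000
}

nrv = {p: sales - separable for p, (sales, separable) in products.items()}
total_nrv = sum(nrv.values())        # 100,000

allocation = {p: joint_cost * value / total_nrv for p, value in nrv.items()}
print(allocation)   # {'Product A': 42000.0, 'Product B': 18000.0}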
Note: If one (or more) of the joint products is not processed further but is sold at the splitoff
point, instead of using NRV for those products, the company will simply use the Sales Values at the
splitoff point for the products that can be sold at the splitoff point, while using the NRVs for the products
that must be processed further to be marketable.
The Estimated NRV method would generally be used instead of the Relative Sales Value at Splitoff Point
method only when a market price at the splitoff point is not available for one or more of the joint
products, for example because a product is not marketable at the splitoff point. If a market price at the
splitoff point is available for one or more of the joint products because the products can be sold at that
point, that price is used instead of the products’ NRVs, even if the products will be processed
further.
If a sales value at splitoff is not available for one or more of the joint products, it is acceptable to use within
the same allocation the net realizable values of the products that must be processed further in order to be
sellable while using the sales values at splitoff for the products that can be sold at the splitoff point. For
instance, a company may have one product that cannot be sold at the splitoff point and must be processed
further (thus no sales value at splitoff is available for it), while the other product can be sold at the splitoff
point. In such a case, the NRV of the product that must be processed further is its estimated NRV (sales
price after further processing less cost to process further), while the NRV of the product that can be sold at
the splitoff point is its sales value at the splitoff point.
Note: The Net Realizable Value method is generally used in preference to the Relative Sales Value at
Splitoff method only when selling prices for one or more products at splitoff do not exist. However,
sometimes when sales prices at the splitoff do exist for all of the joint products but one or more products
can be processed further, an exam problem will say to use the Net Realizable Value method to allocate
the joint costs. If a problem says to use the Net Realizable Value method, use the net realizable values for the products that can be processed further even though sales prices at splitoff do exist, but only if the cost to process further is less than the additional revenue to be gained from the further processing. (If the cost to process a product further is greater than the additional revenue to be gained from the further processing, the product will not be processed further.)
If the problem does not say to use the Net Realizable Value method and sales values at the splitoff exist
for all products, then use the sales values of all of the joint products for the allocation, even if one or
more of the products can be or will be processed further.
Note: The joint costs of production are not relevant costs in the decision to process further or sell
immediately because they are sunk costs. In order to determine whether or not a product should be
processed further, the company should compare the incremental revenues (the increase in the sales
price that results from further processing) with the incremental cost (the increase in costs related to
the additional processing). If the incremental revenue is greater than the incremental cost, the product
should be processed further.
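The decision rule in the note above can be illustrated with a minimal sketch. The dollar amounts below are hypothetical and are used only to show the comparison of incremental revenue with incremental cost.

```python
# Sketch: sell-or-process-further decision. Joint costs are sunk and ignored.
# All figures below are hypothetical, for illustration only.
sales_value_at_splitoff = 40_000
sales_value_after_processing = 55_000
separable_processing_cost = 9_000

incremental_revenue = sales_value_after_processing - sales_value_at_splitoff  # 15,000
incremental_cost = separable_processing_cost                                   # 9,000

process_further = incremental_revenue > incremental_cost
print(process_further)  # True: the product should be processed further
```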
Costs incurred by each of the separate products after the splitoff point are simply allocated directly to
those products.
Example of the Net Realizable Value method:
Simpli Chili Company produces three flavors of its chili in a joint process. The flavors are mild, original
and extra spicy. 500,000 gallons of unspiced chili are produced per batch, and then varying amounts
and types of spices are added to produce the mild, original and extra spicy flavors. The three types of
chili are packaged in 16-ounce cans. The total joint cost of the unspiced chili is $1,850,000.
One batch results in 500,000 gallons of unspiced chili. The unspiced chili is, of course, not marketable
at that point. It needs spices.
After the spices have been added, Simpli has 800,000 cans of mild chili, 2,000,000 cans of original chili,
and 1,200,000 cans of extra spicy chili. The costs per can of adding the spices and blending them into the unspiced chili are as follows:

Mild           $0.065
Original       $0.075
Extra spicy    $0.080
The mild chili sells for $0.98 per can. The original chili sells for $1.05 per can. The extra spicy chili sells
for $1.09 per can.
Using the Net Realizable Value method of allocating the joint costs, how much of the joint costs will be
allocated to each type of chili?
Product         # Cans      Price/Can   Extended Sales Value   Cost to Process Further        NRV       Percentage of Total NRV
Mild             800,000      $0.98         $  784,000              $ 52,000             $  732,000             18.8%
Original       2,000,000      $1.05          2,100,000               150,000              1,950,000             50.1%
Extra spicy    1,200,000      $1.09          1,308,000                96,000              1,212,000             31.1%
Total          4,000,000                                                                 $3,894,000
The joint cost of $1,850,000 will be allocated as follows:
Mild              $1,850,000 × 0.188 =   $  347,800
Original          $1,850,000 × 0.501 =      926,850
Extra Spicy       $1,850,000 × 0.311 =      575,350
Total allocated                          $1,850,000
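The same allocation can be verified with a short calculation. The sketch below simply restates the Simpli Chili figures from the example; it rounds each product's share of total NRV to three decimal places (0.188, 0.501, 0.311), as the table does.

```python
# Sketch: Estimated NRV allocation for the Simpli Chili example.
joint_cost = 1_850_000
data = {  # product: (cans, price per can, cost to process further)
    "Mild":        (800_000,   0.98,  52_000),
    "Original":    (2_000_000, 1.05, 150_000),
    "Extra spicy": (1_200_000, 1.09,  96_000),
}

# NRV = extended sales value minus separable (further processing) costs.
nrv = {p: cans * price - sep for p, (cans, price, sep) in data.items()}
total_nrv = sum(nrv.values())  # approximately 3,894,000

# Each product's percentage of total NRV is rounded to three decimals, as in the table.
allocation = {p: round(joint_cost * round(v / total_nrv, 3)) for p, v in nrv.items()}
print(allocation)  # {'Mild': 347800, 'Original': 926850, 'Extra spicy': 575350}
```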
Benefits of the Net Realizable Value Method
•  The Net Realizable Value method can be used instead of the Sales Value at Splitoff method when selling prices for one or more products at the splitoff do not exist, because it provides a better measure of the benefits received than the other methods that could be used in this situation.
•  The allocation results in comparable profitability among the joint products.
Limitations of the Net Realizable Value Method
•  The NRV method is complex. It requires information on the specific sequence of further processing and the separable costs of further processing, as well as the point at which individual products will be sold.
•  The NRV method is often implemented with simplified assumptions. Companies assume a specific set of processing steps beyond the splitoff point, but they may actually do something else and in fact may change the steps frequently.
•  Selling prices of joint products may vary frequently, but the NRV method uses a single set of selling prices throughout an accounting period, which can introduce inaccuracies into the allocations.
3. Physical Measure and Average Cost Methods
The Physical Measure and Average Cost methods are essentially the same. In the Physical Measure method,
the joint cost allocation is done based on the weight, volume, or other physical measure of the joint products, such as pounds, tons, or gallons. In the Average Cost method, the joint cost allocation is done based
on the physical units of output. In both methods, joint costs are allocated proportionately among the joint
products, so that each product is allocated the same amount of joint cost per unit of measure, whether that
unit is a unit of physical measure or a unit of output.
Physical Measure Method
Joint cost allocation may be done based on the weight, volume, or other physical measure of the joint
products. Joint costs are allocated based on some common unit of measurement of output at the splitoff
point, such as pounds, tons, gallons, or board feet (for lumber). The Physical Measure method may also be
called the Quantitative Unit method.
The total joint cost up to the splitoff point is prorated between or among the joint products based on the
physical measure being used. It stands to reason that it must be possible to measure all of the joint products
in the same unit of measurement. If all of the output that results from a joint process cannot be measured in the same terms—for instance, if some of the output is liquid and some of the output is dry—the
Physical Measure method cannot be used.
Example of the Physical Measure method:
The Simpli Chili Company produces three flavors of its chili in a joint process. The three flavors are mild,
original, and extra spicy. 500,000 gallons of unspiced chili are produced per batch, and then varying
amounts and types of spices are added to produce the mild, original and extra spicy flavors.
In this example, each of the three types of chili is packaged in a choice of can sizes: 12-ounce, 16-ounce
and 20-ounce cans. 100,000 gallons of the unspiced chili are used to produce the mild chili, 250,000
gallons are used for the original chili, and 150,000 gallons are used for the extra spicy chili. The total
joint cost of the unspiced chili (including direct materials, direct labor, and overhead) is $1,850,000. The
joint cost is allocated as follows:
Product        Physical Measure   Proportion   Allocation of Joint Cost              Cost Per Gallon
Mild             100,000 gal.        0.20       $1,850,000 × 0.20 = $  370,000             $3.70
Original         250,000 gal.        0.50       $1,850,000 × 0.50 =    925,000             $3.70
Extra spicy      150,000 gal.        0.30       $1,850,000 × 0.30 =    555,000             $3.70
Total            500,000 gal.                                       $1,850,000
Average Cost Method
The Average Cost method may also be called the Physical Unit method. It is used when the joint costs
are to be allocated on the basis of physical units of output in completed form. It is basically the same as
the Physical Measure method, but because physical units of completed product are used, it is called by the
name Average Cost, or sometimes Average Unit Cost method.
The total joint cost is divided by the total number of units of all of the joint products produced to calculate
the average cost per unit. Then that average cost per unit is multiplied by the number of units of each
product produced to find the amount of cost to be allocated to each product.
Example of the Average Cost/Physical Unit Method:
In the Simpli Chili example, now all three flavors of chili are packaged in 16-ounce cans only. Since the
size of the cans is the same for all the different versions, units of output can be used for the allocation,
with the can as the unit of output. The 500,000 gallons of output are in 4,000,000 16-ounce cans. The
average cost per can is the total cost of $1,850,000 divided by 4,000,000 cans, or $0.4625 per can.
Now, the output and the allocations from the joint process are as follows:
Product        Units of Output    × Avg. Cost/Unit   Allocation of Joint Cost
Mild              800,000 cans        $0.4625             $  370,000
Original        2,000,000 cans        $0.4625                925,000
Extra spicy     1,200,000 cans        $0.4625                555,000
Total           4,000,000 cans                            $1,850,000
Note that the allocation of the cost by product using the Average Cost/Physical Unit method is exactly
the same as it was when the Physical Measure method was used.
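Because both methods spread the joint cost evenly over a chosen measure, one small helper can verify both allocations. The sketch below uses the Simpli Chili gallons and cans from the two examples above; the function name is illustrative only.

```python
# Sketch: Physical Measure and Average Cost allocations for Simpli Chili.
joint_cost = 1_850_000

def allocate_evenly(measures):
    """Allocate the joint cost in proportion to a physical measure or unit count."""
    total = sum(measures.values())
    return {product: joint_cost * qty / total for product, qty in measures.items()}

gallons = {"Mild": 100_000, "Original": 250_000,   "Extra spicy": 150_000}
cans    = {"Mild": 800_000, "Original": 2_000_000, "Extra spicy": 1_200_000}

print(allocate_evenly(gallons))  # {'Mild': 370000.0, 'Original': 925000.0, 'Extra spicy': 555000.0}
print(allocate_evenly(cans))     # same allocation: 370,000 / 925,000 / 555,000
```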
Benefits of the Physical Measure and Average Cost Methods
•  The Physical Measure and Average Cost methods are easy to use.
•  The allocation is objective.
•  The methods are useful when rates or prices are regulated.
Limitations of the Physical Measure and Average Cost Methods
•  The Physical Measure and Average Cost methods can result in product costs that are greater than the market values for some of the joint products. The physical measures of the individual products may have no relationship to their respective abilities to generate revenue. If weight or size is used, the heaviest or largest product will be allocated the greatest amount of the joint cost, but that product may have the lowest sales value. Products with a high sales value per weight or size would show large profits, while products with a low sales value per weight or size would show large losses.
•  Physical measures are not always comparable for products. For example, some products might be in liquid form (for example, petroleum), whereas some might be in gaseous form (for example, natural gas). When the physical measures used for the joint products differ, this method cannot be used.
4. Constant Gross Profit (Gross Margin) Percentage Method
The Constant Gross Profit Percentage method allocates the joint costs in such a way that all of the joint
products will have the same gross profit margin percentage. It is done by “backing into” the amount of
joint cost to be allocated to each of the joint products.
Step 1: Calculate the gross profit margin percentage for the total of both (or all, if more than two) of the
joint products to be included in the allocation by subtracting the total joint and total separable costs from
the total final sales value and dividing the remainder by the total final sales value. This calculation should
be done for all of the joint products produced during the period, not for all of the joint products sold during
the period. The result is the total gross profit margin percentage.
Step 2: Calculate the gross profit for each of the individual products by multiplying the total gross profit
margin percentage calculated in Step 1 by each individual product’s final sales value.
Step 3: Subtract the gross profit calculated in Step 2 and any separable costs from each individual product’s
final sales value. The result of this subtraction process will be the amount of joint costs to allocate to each
product.
Example of the Constant Gross Profit (Gross Margin) Percentage method:
Pineapple Co. produces pineapple juice and canned slices at its Pineapple Processing Plant in Hawaii. The
information about the process and the two joint products is as follows:
•  10,000 pineapples are processed.
•  The process results in 2,500 kg of juice and 7,500 kg of slices.
•  The juice can be sold for $10 per kg and the slices can be sold for $15 per kg.
•  The joint costs of production are $120,000.
•  The juice can be processed further into a premium juice. The additional processing costs an additional $8,000 ($3.20 per kg), but the sales price per kg will increase by $5 per kg to $15.
•  The slices can be further processed into chunks. The additional processing costs $4,000 ($0.533 per kg) and the chunks can be sold for $2 more per kg than the slices, or $17 per kg.
•  Since the additional revenue to be earned from processing each product further exceeds the additional cost to process further, both products are processed further.
Step 1: Calculate the overall gross profit margin percentage for both products:
                         Premium Juice     Chunks         Total      Gross Profit Margin %
Final sales value          $37,500 ¹     $127,500 ²     $ 165,000
Separable costs                                            12,000
Joint costs                                               120,000
Overall gross profit                                     $  33,000           20% ³
Step 2: Fill in the separable costs for each product and calculate what a 20% gross profit margin
percentage for each product would be:
                            Premium Juice     Chunks        Total      Gross Profit Margin %
Final sales value              $37,500       $127,500     $165,000
Separable costs (given)          8,000          4,000       12,000
Joint costs                          J              C      120,000
Required gross profit          $ 7,500 ⁴     $ 25,500 ⁵    $ 33,000            20%
Step 3: The amount of joint cost allocated to each product is whatever amount would create the required gross profit for the product when the separable costs and the allocated joint cost are subtracted
from the final sales value.
Juice:    $37,500 − $8,000 − J = $7,500, so J = $22,000
Chunks:   $127,500 − $4,000 − C = $25,500, so C = $98,000
The $120,000 of joint costs is allocated $22,000 to juice and $98,000 to chunks.
Calculations:
¹ 2,500 × $15
² 7,500 × ($15 + $2)
³ $33,000 ÷ $165,000
⁴ $37,500 × 0.20
⁵ $127,500 × 0.20
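The three steps can also be expressed compactly. The following sketch reproduces the Pineapple Co. example; the joint cost allocated to each product is "backed into" as final sales value less separable costs less the required gross profit.

```python
# Sketch: Constant Gross Profit Percentage allocation for the Pineapple Co. example.
products = {  # product: (final sales value, separable costs)
    "Premium juice": (2_500 * 15, 8_000),   # $37,500
    "Chunks":        (7_500 * 17, 4_000),   # $127,500
}
joint_cost = 120_000

# Step 1: overall gross profit margin on all joint products produced.
total_sales = sum(sv for sv, _ in products.values())    # 165,000
total_sep   = sum(sep for _, sep in products.values())  # 12,000
overall_margin = (total_sales - total_sep - joint_cost) / total_sales  # 0.20

# Steps 2 and 3: required gross profit per product, then back into the joint cost.
allocation = {}
for product, (sales_value, separable) in products.items():
    required_gross_profit = sales_value * overall_margin
    allocation[product] = sales_value - separable - required_gross_profit

print(allocation)  # {'Premium juice': 22000.0, 'Chunks': 98000.0}
```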
Benefits of the Constant Gross Profit (Gross Margin) Percentage Method
•  The Constant Gross Profit method is the only method for allocating joint costs under which products may receive negative allocations.
•  This method allocates both joint costs and gross profits. Gross profit margin is allocated to the joint products in order to determine the joint cost allocations so that the resulting gross profit margin percentage for each product is the same.
•  The method is relatively easy to implement, so it avoids the complexities of the NRV method.
Limitations of the Constant Gross Profit (Gross Margin) Percentage Method
•  It assumes that all products have the same ratio of cost to sales value, which is probably not the case.
Accounting for Byproducts
Byproducts are the low-value products that occur naturally in the process of producing higher value products. They are, in a sense, accidental results of the production process. Because they are so low in value,
the accounting method used to account for byproducts does not need to be very detailed or sophisticated.
The main issue in accounting for byproducts relates to the treatment of the associated costs and revenues.
Even though the byproducts are small and immaterial to the larger production cost, there are still costs
that will be allocated to the byproduct. At a minimum, there are some direct materials in the byproduct, so
some percentage of the direct materials used did not make it into the final main products but became
byproducts. The byproducts might be material remnants from cutting fabric, sawdust from cutting lumber,
or any other small amounts of materials.
The company may or may not have a policy of assigning costs to the byproducts, and that policy is reflected
in its accounting treatment of byproducts. The costs and revenues of byproducts can be accounted for in
either of two ways.
Note: The specific accounting for these two methods is outside the scope of the exam. Candidates need
to know only how to treat the associated costs and revenues of byproducts.
The two ways to treat the associated costs and revenues of the byproduct are:
1)  Use the estimated net realizable value of the byproduct⁹ as a reduction of the costs of production of the main product. This method is called the production method, and in it the NRV of the byproduct is inventoried as the cost of the byproduct.
2)  All the costs of production are allocated to the main product or joint products. When the byproduct is sold, the revenue is recorded as either revenue without any associated COGS or as a reduction to the cost of production of the main products. This method is called the sales method, and only revenue is recognized from the byproduct.
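As a rough numerical sketch of the difference between the two treatments (all amounts below are hypothetical, not from an exam problem):

```python
# Sketch: production method vs. sales method for a byproduct (hypothetical figures).
main_product_joint_cost = 100_000
byproduct_nrv = 2_000             # estimated NRV at the time of production
byproduct_sales_proceeds = 2_000  # assume the byproduct is later sold for its NRV

# Production method: inventory the byproduct at its NRV and reduce the
# production cost assigned to the main product(s).
production_method_joint_cost = main_product_joint_cost - byproduct_nrv  # 98,000
byproduct_inventory = byproduct_nrv                                     # 2,000 recorded as an asset

# Sales method: allocate all production cost to the main product(s); recognize the
# byproduct only when it is sold, either as revenue or as a cost reduction.
sales_method_joint_cost = main_product_joint_cost       # 100,000
amount_recognized_on_sale = byproduct_sales_proceeds    # 2,000
```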
Which Method is Better?
Both methods are acceptable. The Production Method, where the byproduct is inventoried at the time of
production, is conceptually correct because it is consistent with the matching principle. Byproduct inventory
is recognized as an asset in the accounting period in which it is produced, and it reduces the inventory cost
assigned to the main product or joint products.
⁹ The estimated net realizable value of a byproduct is the revenue from its sale less marketing and administrative costs of selling the byproduct and less any subsequent costs of processing the byproduct after the splitoff point.
The Sales Method, in which the byproduct revenue is recognized at the time of sale, is simpler and is used
more frequently in practice if the monetary amounts of the byproduct or byproducts are immaterial.
The Sales Method has a disadvantage, though. The Sales Method makes it possible for managers to time
the sale of the byproducts and thus enables them to manage their earnings. A manager could store the
byproducts for a period of time and sell them to increase operating income during a period when sales or
gross profits from the main product or joint products are low.
Benefits of the Production Method of Accounting for Byproduct Costs
•  The Production Method is conceptually correct because byproduct inventory is recognized as an asset in the accounting period in which it is produced.
•  Inventory cost assigned to the main product or joint products is reduced.
Limitations of the Production Method of Accounting for Byproduct Costs
•  The Production Method is more complex than the Sales Method.
Benefits of the Sales Method of Accounting for Byproduct Costs
•  The Sales Method is simpler to implement than the Production Method.
•  It is more practical when the monetary amounts of the byproducts are immaterial.
Limitations of the Sales Method of Accounting for Byproduct Costs
•  It is possible for managers to manage their earnings by timing the sale of the byproducts.
Note: An exam question will outline the treatment of the costs or revenue associated with the byproduct
and candidates just need to follow the mathematical steps required. If a question states that the
company inventories the byproduct, it means that it treats the byproduct’s net realizable
value as a reduction of the costs of production and uses the Production Method.
Question 13: Lankin Corp. produces two main products and a byproduct out of a joint process. The ratio
of output quantities to input quantities of direct materials used in the joint process remains consistent from month to month. Lankin has employed the physical measure method to allocate joint production
costs to the two main products. The net realizable value of the byproduct is used to reduce the joint
production costs before the joint costs are allocated to the two main products. Data regarding Lankin’s
operations for the current month are presented in the chart below. During the month, Lankin incurred
joint production costs of $2,520,000. The main products are not marketable at the splitoff point and,
thus, have to be processed further.
                            1st Main Product    2nd Main Product    Byproduct
Monthly input in pounds           90,000             150,000          60,000
Selling price per pound              $30                 $14              $2
Separable process costs         $540,000            $660,000
The amount of joint production cost that Lankin would allocate to the Second Main Product by using the
physical measure method to allocate joint production costs would be:
a)  $1,200,000
b)  $1,260,000
c)  $1,500,000
d)  $1,575,000
(CMA Adapted)
Question 14: Sonimad Sawmill manufactures two lumber products from a joint milling process. The two
products developed are mine support braces (MSB) and unseasoned commercial building lumber (CBL).
A standard production run incurs joint costs of $300,000 and results in 60,000 units of MSB and 90,000
units of CBL. Each MSB sells for $2 per unit, and each CBL sells for $4 per unit.
If there are no further processing costs incurred after the splitoff point, the amount of joint cost allocated to the mine support braces (MSB) on a relative sales value basis would be:
a)  $75,000
b)  $180,000
c)  $225,000
d)  $120,000
(CMA Adapted)
D.2. Costing Systems
Review of Introduction to Costing Systems
Product costing involves accumulating, classifying and assigning direct materials, direct labor, and factory
overhead costs to products, jobs, or services.
In developing a costing system, management accountants make choices in three categories of costing
methods:
1)  The cost measurement method to use in allocating costs to units manufactured (standard, normal, or actual costing), as covered in D.1. Measurement Concepts.
2)  The cost accumulation method to use (job costing, process costing, or operation costing).
3)  The method to be used to allocate overhead, as will be covered in D.3. Overhead Costs.
Cost accumulation methods and the allocation of overhead will be covered in this topic.
Cost Accumulation Systems
Cost accumulation systems are used to assign costs to products or services. Job order costing (also called
job costing), process costing, and operation costing are different types of cost accumulation systems used
in manufacturing.
•  Process costing is used when many identical or similar units of a product or service are being manufactured, such as on an assembly line. Costs are accumulated by department or by process.
•  Job order costing (also called job costing) is used when units of a product or service are distinct and separately identifiable. Costs are accumulated by job.
•  Operation costing is a hybrid system in which job costing is used for direct materials costs while a departmental (process costing) approach is used to allocate conversion costs (direct labor and overhead) to products or services.
Process costing, job order costing, and operation costing will be discussed in more detail later.
Process Costing
Process costing is used to allocate costs to individual products when the products are all relatively similar
and are mass-produced (the term “homogeneous” is used to describe the products, and it means “identical
to each other”). Process costing is basically applicable to assembly lines and products that share a similar
production process. In process costing, all of the costs incurred by a process (a process is often referred to
as a department) are collected and then allocated to the individual goods that were produced, or worked
on, during that period within that process (or department).
The basic exercise is to allocate all of the incurred costs to either the completed units that left the
department or to the units in ending work-in-process (EWIP) that are still in the department. It is
largely a mathematical operation and some basic formulas are used to make certain that everything is
accounted for.
Some general concepts and ideas of process costing will be presented first, and then the steps of process
costing will be discussed one-by-one. The topic will conclude with a review of the steps and a comprehensive
example.
All of the costs incurred during the current period and during all previous periods for the units worked on
during the period must be allocated to either finished goods (or to the next department for more work to
be done) or EWIP at the end of the current period.
The costs in the department, usually materials and conversion costs¹⁰ and sometimes transferred-in costs, that require allocation can come from one of three places:
1)  The costs are incurred by the department during the period. Materials and conversion costs are accounted for separately.
2)  The costs are transferred in from the previous department. Transferred-in costs include total materials and conversion costs from previous departments that have worked on the units. Transferred-in costs are transferred in as total costs. They are not segregated according to direct materials and conversion costs.
3)  The costs were in the department on the first day of the period as costs for the beginning work-in-process (BWIP). They were incurred by the department during the previous period to begin the work on the units in the current period's BWIP.
In reality, the categories of costs can be numerous. They may include more than one type of direct materials, more than one class of direct labor, indirect materials, indirect labor or other overheads. However, on
the CMA exam, generally only two classifications of costs are tested: direct materials and conversion
costs. Conversion costs include everything other than direct materials—specifically direct labor and overhead—and are the costs necessary for converting the raw materials into the finished product.
Note: Conversion costs is a term used in process costing questions to refer to direct labor and
factory overhead. It encompasses everything except raw materials. Conversion costs are the costs
necessary to convert the raw materials into the finished product. Placing direct labor and factory overhead into a single category reduces the number of individual allocations needed.
Note: Transferred-in costs are the total costs that come with the in-process product from the previous
department. They are similar to raw materials but they include all of the costs (direct materials and
conversion costs) from the previous department that worked on the units. The costs of the previous
department’s “completed units” are the current department’s transferred-in costs, and the transferred-in costs and work are 100% complete (even though the units themselves are not complete when received) because the work done in the previous department is 100% complete.
At the end of the period all of the costs within the department—including direct materials, conversion costs,
and transferred-in costs, if applicable—must either be moved to finished goods inventory (or to the next
department if further work is required) if the work on them in the current department was completed, or
they will remain in ending WIP if they are not complete (the allocation process will be explained later). The
Ending WIP inventory for the current period will be the beginning WIP inventory for the next period.
When the goods that have been completed and transferred to finished goods inventory are sold, the costs
associated with the units that were sold will end up in COGS. The costs of the units that have been completed but have not been sold will remain in finished goods inventory until they are sold.
Thus, the cost of every unit that goes through a particular process in a given period must be recorded in one of the four following places at the end of the period:
1)  Ending WIP inventory in the department or process
2)  The next department in the assembly process
3)  Ending finished goods inventory
4)  Cost of goods sold
¹⁰ Conversion costs include direct labor and overhead costs.
Items 2 and 3 in the preceding list are classified together as completed units transferred out of the
department. The costs for all units on which the current process’s work has been completed are transferred
either to finished goods inventory or to the next department or process for further work.
Whether the units have been sold (and the costs are in COGS), are still being worked on (are in ending WIP
for the company), or finished but not sold (in ending finished goods inventory for the company) is irrelevant
to the process in a given department. The objective of process costing is to allocate costs incurred to date
on products worked on in one department during one period between completed units and ending workin-process inventory for that department.
Note: The basic accounting for a process costing system is as follows:
All of the manufacturing costs incurred are debited to a WIP inventory account. Manufacturing costs
include direct material, direct labor, and factory overhead consisting of indirect materials, indirect labor and other factory overhead such as facility costs. Direct materials are usually added
to units in process in a different pattern from the way conversion costs (direct labor and overhead) are
added to units in process, so they are usually accounted for separately.
The costs in WIP then need to be allocated between units completed during the period and units remaining in ending WIP at the end of the period. Costs for units completed during the period are transferred
to either finished goods inventory or, if more work is needed on them, to the next department’s WIP
inventory. This cost allocation is done on a per-unit basis. Candidates do not need to be familiar with the accounting steps in the process, just the process of allocating the costs, but the information is presented because it may help candidates to see what is happening in the process.
Steps in Process Costing
The following will examine the steps in process costing in more detail. It is important for candidates to be
very comfortable with equivalent units of production (covered in much more detail later) and how they
are calculated. Equivalent units of production are used to allocate costs between completed units transferred out during the period and the incomplete units remaining in ending work-in-process inventory at the
end of the period. Equivalent units of production, or EUP, are an important concept in process costing and
one that is likely to be tested.
The steps in process costing are:
1)  Determine the physical flow of goods
2)  Calculate how many units were started and completed during the period
3)  Determine when materials are added to the process
4)  Calculate the equivalent units of production for materials and conversion costs
5)  Calculate the costs incurred during the period for materials and conversion costs
6)  Calculate the cost per equivalent unit for materials and conversion costs
7)  Allocate the costs for materials and conversion costs separately between units in EWIP and units transferred out according to the equivalent units in each
Each step is explained in detail in the following pages.
1) Determine the Physical Flow of Goods
Note: The formula to determine the physical flow of goods is used as the first step in process costing in
order to understand how many units to account for.
One of the most important things needed to solve a process costing problem is tracking each of the physical
units that go through the department—knowing where they came from and where they were at the end of
the period. A problem will give enough information to solve this type of situation, but understanding the
way the goods move is essential.
A process costing problem will usually involve units that are partially completed at the beginning of the
period (beginning WIP inventory) and units that are partially completed at the end of the period (ending
WIP inventory). An example of ending WIP is water bottles in the filling department at the end of the period
that are partially full.
The physical flow of goods formula utilizes full units, no matter how complete they are (for example, the
number of actual water bottles in the filling department at the end of the period, even if some of the bottles
are not yet filled). The physical flow of goods formula enables tracking of how many physical units went
into production and where they are at the end of the period. The formula is:
Units in Beginning WIP + Units Started/Transferred In
=
Units in Ending WIP + Units Completed/Transferred Out
Candidates need to be able to use the above formula to solve for any of the items within it.
Note: When solving a problem about physical flow, the completion percentages for any of the beginning
or ending WIP units are not important. The percentage of completion of the units in beginning and ending
WIP inventory is used in calculating equivalent units of production, which will be covered later. In the
physical flow of goods formula, though, only the number of physical units is used.
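Because any one of the four quantities can be the unknown, it can help to see the formula rearranged once for each case. The sketch below does so; the function name and the example figures are illustrative only and are not taken from an exam problem.

```python
# Sketch: physical flow of goods. Exactly one of the four arguments may be None,
# and the function solves for it using BWIP + started = EWIP + completed.
def physical_flow(bwip=None, started=None, ewip=None, completed=None):
    if started is None:
        return ewip + completed - bwip
    if completed is None:
        return bwip + started - ewip
    if ewip is None:
        return bwip + started - completed
    return ewip + completed - started  # solves for BWIP

# Hypothetical figures: 300 units in BWIP, 1,000 completed, 200 in EWIP.
print(physical_flow(bwip=300, completed=1_000, ewip=200))  # 900 units started
```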
Question 15: Ben Company had 4,000 units in its work-in-process (WIP) on January 1. Each unit was
50% complete in respect to conversion costs. During the first quarter, 15,000 units were completed.
On March 31, 5,000 units were in ending WIP that were 70% complete in respect to conversion cost.
For this product, all of the direct materials are added when the unit enters the facility. How many units
did Ben start during the first quarter?
a)  13,000
b)  15,000
c)  16,000
d)  16,500
(HOCK)
2) Calculate the Number of Units Started and Completed
Note: The formula for calculating the number of units started and completed by itself is not going to be
tested on the exam, but it is important. The number of units started and completed is used later in
calculating the equivalent units of production.
The next step is to calculate how many units were both started and completed during the period. In
other words, how many units had all of their work done during the period. (For example, if the product is
bottled water, of the water bottles that were transferred into the filling department during the period, how
many were completely filled during the period?)
All of the units that were completed during the period were either in beginning inventory at the beginning
of the period or were started on (or transferred in) during the period. Subtracting the number of physical
units in beginning WIP from the total number of units completed results in the number of units both started
and completed during the period.
The formula below is used to calculate the number of units that were both started and completed during
the period (meaning that 100% of the work was performed, start to finish, during the current period).
# Units Completed − # Units in Beginning WIP = Units Started and Completed This Period
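A one-line check of the formula, using hypothetical figures:

```python
# Sketch: units started and completed this period (hypothetical figures).
units_completed = 1_000
units_in_bwip = 300
started_and_completed = units_completed - units_in_bwip  # 700 units
```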
3) Determine When the Materials Are Added to the Process
The point at which the materials are added is used to determine the amount of materials that were added
to beginning work-in-process inventory during the previous period and to ending work-in-process inventory
during the current period. The information will be provided in the question. It is very important to identify
when the materials are added, so always look for that information in the question and be able to identify
whether the materials are added:
•  at the beginning of the process,
•  at the end of the process,
•  at some point in the middle of the process, or
•  evenly throughout the process (in which case they will behave like conversion costs).
Usually, conversion costs are added evenly throughout the process, but if conversion costs are added at a
specific point in the process, then those conversion costs would need to be treated like materials.
4) Calculate the Equivalent Units of Production (EUP)
Note: The concept of “equivalent units of production” is similar to the concept of “full-time equivalent
employees.” When an employer has half-time employees, the employer expresses the number of employees as “full-time equivalents.”
For example, if the employer has only one half-time employee, the employer has 0.5 of a full-time
equivalent employee. If the employer has two half-time employees, the employer has one full-time
equivalent employee. If the employer has 10 half-time employees, those 10 half-time employees are
equal to 5 full-time equivalent employees (10 × 0.5).
Similarly, if a unit in ending work-in-process inventory is 30% complete as to conversion activities, for
example, it represents 0.30 of an equivalent unit of conversion costs added. Therefore, if 100 units in
ending work-in-process inventory are 30% complete as to conversion costs, they represent 30 equivalent
units (100 × 0.30) of conversion costs added.
Keep that in mind while going through the following explanation of equivalent units of production.
To allocate costs to the individual units produced, it is necessary to know how many “units” the costs should
be allocated to. In a system where beginning and ending WIP in the department are both zero, allocation
of costs to units would be a simple matter of dividing the costs by the number of units both started and
completed during the period, since all the units started during the period were also completed during the
period and all the units completed during the period were also started during the period.
However, since usually a number of incomplete units are in both beginning and ending WIP, the process is
more complicated. The allocation of costs is accomplished by calculating the number of units that would
have been “completely produced” (100% completed) during the period if the company had simply produced
one unit from start to finish, and then started another unit. “Completely produced” means completed from
start to finish with all direct materials added and all conversion activities completed.
Equivalent units are used to mathematically convert partially completed units of product into an equivalent number of fully completed units. The number of complete units that could have been produced during
the period is called the number of equivalent units of production.
By way of illustration, because the units in beginning WIP are partially completed, only the remaining
portion of the work (a partial unit of work) needs to be done in the current period to finish each unit in
BWIP. (For example, if a 1-liter water bottle is 60% full on January 1, only 40% of a liter needs to be added
to fill that bottle.) Similarly, because each unit in ending WIP is partially completed, less than a full unit of
work has been done on each of those units.
The number of equivalent units of production (EUP) serves to quantify the partially completed units in
BWIP and EWIP.
Essentially, three different amounts of work may apply to an individual unit during the period:
1)  Completed (beginning work-in-process inventory that has been completed), meaning that some of the work was done in the previous period.
2)  Started and Completed (see Step 2 above for the calculation), meaning that the unit was started on or transferred in during the current period and was completely finished during the same period.
3)  Started (but not completed), or units that were started on or transferred in during the period but the work on them was not finished at the end of the period and thus they have not yet been transferred out of the process.
The number of equivalent units of production for the work done during the period in each of these three categories is used to allocate the costs.
The idea of EUP is probably best explained with some examples. Example #2 contains a formula that is
used for the calculation of number of equivalent units of production.
Example #1: A company has 100 units in beginning WIP and each unit is 25% complete. That 25% of
the work was done during the previous period. If no other units were added to the system this period,
and if at the end of the period 100 units are complete, the number of equivalent units produced this
period is 75. The 100 units needed 75% of the work to be done on them during this period in order to
be complete (100% minus the 25% completed during the previous period). Therefore, the number of
equivalent units of work done during the current period is calculated as 100 × 0.75.
For example, 100 1-liter bottles in beginning WIP inventory are each 25% full. Twenty-five liters of water
were added to all of the bottles during the previous period. Therefore, 75 liters of water will need to be
added to all of the bottles to make them all complete. In other words, 75 “equivalent units” of water (or
75% of 100 liters) need to be added to the units in beginning WIP during the period to complete the
units.
Example #2: Assume that in addition to the 100 units in beginning WIP that are 25% complete at the
beginning of the period, 100 more units (empty bottles to be filled) were transferred in during the period,
and at the end of the period 10 units are in ending WIP that are 40% complete. Calculate
1) The number of units completed,
2) The number of units started and completed during the period,
3) The amount of work performed in terms of equivalent units to start the incomplete units that were
in ending WIP inventory, and
4) The total number of EUP during the period.
Solutions:
1) The number of units completed is 190, calculated using the physical flow of goods formula: Units in
BWIP (100) + Units transferred in (100) = Units in EWIP (10) + Units Completed (must be 190).
2) Having calculated the number of units completed, the number of units started and completed can
be determined (190 units completed − 100 physical units in BWIP = 90 units started and completed).
The number of units started and completed is used to calculate the number of equivalent units
produced.
3) The amount of work performed to start the units in ending WIP inventory is 4 equivalent units (10 × 0.40), as shown in the calculation below.
4) The total number of EUP is calculated using the three different amounts of work that may apply to an individual unit during the period.
The calculation of equivalent units of production is:

To Complete BWIP          (100 × 0.75) =    75
Started and Completed      (90 × 1.00) =    90
To Start EWIP              (10 × 0.40) =     4
Total EUP                                  169
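Example #2 can be reproduced directly in a few lines. The sketch below follows the same "work done this period" view that the example uses; the variable names are illustrative only.

```python
# Sketch: equivalent units of production for Example #2 (work done this period).
bwip_units, bwip_pct_complete = 100, 0.25
units_transferred_in = 100
ewip_units, ewip_pct_complete = 10, 0.40

# Physical flow: BWIP + transferred in = EWIP + completed.
units_completed = bwip_units + units_transferred_in - ewip_units   # 190
started_and_completed = units_completed - bwip_units               # 90

eup = (bwip_units * (1 - bwip_pct_complete)   # 75 to complete BWIP
       + started_and_completed * 1.0          # 90 started and completed
       + ewip_units * ewip_pct_complete)      #  4 to start EWIP
print(eup)  # 169.0
```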
Though the concept of equivalent units of production is mathematically simple, candidates sometimes initially have trouble with it. So please make certain to spend the necessary time to become comfortable with
the way it works. The process will be explained further in the following pages.
Question 16: Ben Company operates a production facility that has three departments. The following
information is in respect to the second production department for the month of May:
Number of units in BWIP          200
% complete for BWIP              20%
Number of units started        1,300
Number of units completed      1,100
Number of units in EWIP          400
% complete for EWIP              80%
The equivalent units of production for Ben Company for May was:
a)  900
b)  1,260
c)  1,380
d)  1,620
(HOCK)
EUP for Materials and Conversion Costs Separately
In most problems, candidates will need to allocate direct materials costs and conversion costs separately
and so will need to calculate EUP for materials and conversion costs separately because materials are not
added to production in the same way as conversion costs are added to production. Usually, conversion
costs are added proportionately throughout the process, whereas direct materials are added at a specific
point in the process. This does not make the problem any more difficult; it simply means that twice as
many calculations are needed.
When the Materials Are Added
•  In process costing questions, candidates must pay attention to when materials are added to the process in order to calculate the EUP for materials in both beginning WIP inventory and ending WIP inventory. If the materials are added at the beginning of the process, beginning WIP inventory will have had 100% of the needed materials added, and no materials will need to be added to complete beginning WIP inventory during the current period. Ending WIP inventory will also have had 100% of the materials added.
•  If the materials are added at the end of the process, no materials will have been added to beginning WIP inventory, and all of the direct materials (100%) will need to be added to complete the beginning WIP inventory during the current period. Ending WIP inventory will also have had no materials added.
•  If the materials are added at some point in the middle of the process (such as when the process is 40% complete), compare that percentage with the percentage complete as to conversion costs for the units in beginning WIP inventory and compare that percentage with the percentage complete as to conversion costs for the units in ending WIP inventory.
     o  If the conversion activities have been completed beyond the point at which materials are added (for example, materials are added when the process is 40% complete and BWIP is 60% complete as to conversion costs), the units in BWIP will have had 100% of the needed materials added during the previous period, and no materials will need to be added to complete the beginning WIP inventory during the current period.
     o  If the conversion activities are not completed beyond that point (for example, materials are added when the process is 40% complete and BWIP is 30% complete as to conversion costs), the units in BWIP will have had no materials added during the previous period, and beginning WIP inventory will need to have 100% of its materials added during the current period to complete production.
•  If the materials are added evenly throughout the process, the percentage of materials already added to beginning WIP will be equal to the percentage of conversion costs added to beginning WIP inventory. The remaining amount (100% minus the percentage already added) will need to be added to complete beginning WIP inventory during the current period. A similar process applies to ending WIP inventory. The percentage of materials added to ending WIP inventory will be equal to the percentage of conversion costs added to ending WIP inventory.
Example – ending WIP inventory: The materials are added when the unit is 50% complete as to
conversion costs, and the ending WIP was 40% complete as to conversion costs. Therefore, the materials
had not yet been added to the units in ending WIP inventory, and the units in ending WIP inventory were
0% complete as to direct materials although the units were 40% complete as to conversion costs. There
were zero equivalent units of materials in ending WIP inventory, but there were equivalent units of
conversion costs equal to 40% of the number of physical units in ending WIP inventory.
However, if the units in ending WIP inventory were 60% complete for conversion costs, the full amount
of material needed for each unit would have been added to the units in ending WIP inventory and the
units in ending WIP inventory would have been 100% complete as to direct materials though they were
only 60% complete for conversion costs. In this case, the number of equivalent units of materials in
ending WIP inventory would be 100% of the number of physical units in ending WIP inventory, while the
number of equivalent units of conversion costs in ending WIP inventory would be 60% of the number of
physical units in ending WIP inventory.
Similar calculations are necessary for beginning WIP in order to calculate the work that was done in the
current period to complete those units in beginning WIP, as follows:
Example – beginning WIP inventory: Materials are added when the units are 50% complete as to
conversion costs, and the beginning WIP inventory was 35% complete as to conversion costs. Therefore,
the materials had not yet been added to the units in beginning WIP inventory although the units were
35% complete as to conversion costs. Thus, 100% of the materials, or equivalent units of materials
equal to 100% of the number of physical units, needed to be added during the current period to complete
the units in beginning WIP inventory.
The equivalent units of conversion costs that needed to be added to the units in beginning WIP inventory
during the current period were 65% of the number of physical units in beginning WIP inventory (100%
− 35% added during the previous period).
However, if the units in beginning WIP inventory were 70% complete as to conversion costs, 100% of
the direct materials would have been added to each unit in beginning WIP inventory during the previous
period although the units were 70% complete as to conversion costs. In this case no materials, or zero
equivalent units of materials, would need to be added during the current period to complete the units in
beginning WIP inventory.
The equivalent units of conversion costs that would have needed to be added to the units in beginning
WIP inventory during the current period would have been 30% of the number of physical units in beginning WIP inventory (100% − 70% added during the previous period).
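The two examples above reduce to comparing the point at which materials are added with each inventory's conversion-cost completion percentage. A minimal sketch of that comparison follows; the percentages are those used in the examples, and the function name is illustrative only.

```python
# Sketch: fraction of materials already in a WIP inventory when materials are
# added at a single point in the process (for example, at 50% of conversion work).
def materials_fraction(conversion_pct_complete, materials_added_at):
    return 1.0 if conversion_pct_complete >= materials_added_at else 0.0

# Ending WIP 40% complete, materials added at 50%: no materials added yet.
print(materials_fraction(0.40, 0.50))        # 0.0
# Ending WIP 60% complete: materials fully added.
print(materials_fraction(0.60, 0.50))        # 1.0
# Beginning WIP 35% complete: 100% of materials must be added this period.
print(1.0 - materials_fraction(0.35, 0.50))  # 1.0
```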
Note: How and when in the process materials are added is very important. It is used to determine how many equivalent units of direct materials, if any, were added in order to complete beginning
WIP inventory during the period and how many equivalent units of direct materials, if any, were added
to the incomplete units in ending WIP inventory before the current period ended.
When answering a process costing question, always look for the information on how and when the materials are added.
Question 17: Hoeppner Corp. uses process costing to allocate costs. In the Pressing Department, all of
the materials are added at the very beginning of the process. After this first addition, no additional
materials are added during the process. At the end of January, Hoeppner was presented with the following information:
Beginning work-in-process (60% complete for conversion costs)     2,000
Units started in January                                          5,000
Transferred out from Pressing during January                      6,000
Ending work-in-process (40% complete for conversion costs)        1,000
Under the EUP method already discussed, what are the equivalent units of production for the month of
January?
       Materials    Conversion Costs
a)       5,000           5,200
b)       5,000           5,600
c)       5,200           5,400
d)       5,200           5,200
(HOCK)
EUP and Inventory Tracking Methods
Another complication in the EUP calculation and the allocation of costs (and a much more significant complication) is that the calculation of EUP is influenced by the inventory cost flow assumption the company
uses in its process costing. As with any other kind of inventory, a system is needed for determining whether each unit that was completed was an old unit taken from the units that were in BWIP. In process costing, two inventory cost flow assumptions are used:
1)  First-in-first-out (FIFO) – In FIFO, the assumption is made that in each period the company will finish what is in BWIP inventory before starting any new units. The FIFO method is what has been used in the previous explanations and examples where total equivalent units for a period were calculated.
2)  Weighted Average (WAVG) – In WAVG, the assumption is not made that the units in BWIP are finished first. As a result, all of the units (both those from BWIP and those transferred in or started in the current period) will be treated the same way. The costs of the work done during the previous period on beginning WIP and the work done during the current period are combined and averaged, creating the weighted average.
The main difference between FIFO and WAVG is the treatment of the costs that were assigned to the
units already in BWIP at the start of the period. The treatment of costs will be covered in much more
detail later, but very briefly:
•  Under FIFO, all of the costs in BWIP that were incurred during the previous period are automatically allocated 100% to units completed and transferred to finished goods or to the next department, while costs incurred during the period are allocated between units completed and units in ending WIP inventory on the basis of equivalent units of production.
•  Under WAVG, the costs in BWIP incurred during the previous period are combined with the costs incurred during the current period, and the total costs are divided by all the equivalent units worked on during the period (whether the units were in BWIP or were started during the period) to calculate a weighted average to use in allocating the total costs between completed goods and ending WIP inventory on the basis of equivalent units.
Note: Candidates need to be able to make the calculations for EUP and to allocate costs under both the
FIFO and the WAVG method. The two methods are discussed in more detail individually below and a
simple formula for calculating EUP under each method is provided. To be able to answer process costing
questions, be sure to memorize the formulas.
EUP Under the FIFO Method
Under the FIFO method it is assumed that all of the units in BWIP at the beginning of the period were
completed during the current period before any other new units were started. Thus, under the FIFO
method, all of the units that were in BWIP were completed and were transferred out at the end of the
period.
Because the units in BWIP have been transferred out of the department, the costs that were already associated with those units in BWIP (costs incurred in the current department during the previous period) are
kept separate from the costs incurred during the current period. The costs that were in BWIP at the beginning of the period are 100% allocated to completed units transferred out during the current period. They
are not allocated between units transferred out and units remaining in ending WIP.
Therefore, only the costs incurred during the current period are allocated between completed units transferred out and units in EWIP. The costs incurred during the current period (only) are allocated
according to the work actually done during the period (only) as determined by the equivalent units of
production.
Since the costs are allocated on the basis of equivalent units of production, the equivalent units of production used in the allocation include only the work done during the current period on units in beginning WIP
inventory and units started during the period. The work done during the previous period on units in beginning WIP inventory is excluded from the calculation of EUP and from the allocation of current period costs
between units completed and units in EWIP.
In other words, under FIFO the cost per EUP calculation will include only the costs incurred during the current period and three elements of work:

1)  The work required to complete the units in BWIP

2)  The work required for all of the units started and completed

3)  The work done to start the EWIP
The costs incurred during the current period are allocated between units completed and transferred out
(items numbered 1 and 2 above) and units in EWIP (item 3 above). The costs and the EUP that were in
BWIP at the beginning of the period (for work done during the previous period) are not included in the
allocation between completed units and ending WIP inventory. Instead, the units in BWIP are considered
completed units for the period, and the costs incurred during the previous period for the units in BWIP are
transferred out in full either to finished goods inventory or to the next process because under the FIFO
assumption, first the units in BWIP are finished, and then other new units are started.
Thus, the costs transferred out for completed units equal (1) 100% of the costs in BWIP plus (2) an allocation of the costs incurred during the period. The costs remaining in ending WIP are an allocation of the
costs incurred during the period.
Note: When the units are transferred out, the costs for BWIP, the costs incurred during the current
period, and any transferred-in costs are combined and become the transferred-in costs for the next
department. The costs are not kept separate in future departments. This means that the next department
will use a weighted average for the cost of the units transferred in and will not have similar units that
have different transferred-in costs.
The FIFO calculation of EUP during the period consists of three steps as follows (this is very important):

                                  How Calculated
1)    Completion of BWIP          Units in BWIP × % of work done this period
2) +  Started & Completed         # of Started & Completed Units × 1
3) +  Starting of EWIP            # of units in EWIP × % of work done this period
   =  TOTAL EUP this period
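To make the three-step FIFO formula concrete, the short Python sketch below computes FIFO equivalent units for a single cost element. It is only an illustration of the formula above; the function name is hypothetical, and the sample figures are the conversion-cost figures from the FIFO example worked later in this section (150 units in BWIP 60% done last period, 470 units started and completed, 80 units in EWIP 20% done).

def fifo_eup(bwip_units, bwip_pct_done_last_period,
             started_and_completed, ewip_units, ewip_pct_done):
    """FIFO equivalent units of production for one cost element."""
    # 1) Completion of BWIP: only the work still needed this period counts.
    complete_bwip = bwip_units * (1 - bwip_pct_done_last_period)
    # 2) Units started and completed are worked on 100% this period.
    s_and_c = started_and_completed * 1
    # 3) Starting of EWIP: only the work actually done on it this period.
    start_ewip = ewip_units * ewip_pct_done
    return complete_bwip + s_and_c + start_ewip

print(fifo_eup(150, 0.60, 470, 80, 0.20))   # 60 + 470 + 16 = 546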
EUP Under the Weighted Average Method
Under the Weighted Average cost flow assumption, the assumption is made that all of the costs and the
equivalent units of production in BWIP at the beginning of the period are simply added together with all of
the work done and costs incurred during the current period and averaged together for allocation between
units completed and units in EWIP. No distinction is made between the costs of the units that were
in BWIP and the units that were transferred in or started on during the period. Therefore, no distinction is made between the work done and costs incurred on the BWIP last period and the work done and
costs incurred this period to finish the BWIP.
The costs associated with BWIP that were incurred during the previous period are combined with the costs
that were incurred during the current period. Thus, the costs incurred during the previous period to start
the beginning WIP are treated as though they were incurred during the current period, and all of the
units that were in BWIP are considered to have been 100% worked on and completed during the current
period. The total combined cost is therefore allocated to all units worked on to determine their average
cost.
Again, in WAVG, the costs and EUP of work that already were held in BWIP at the start of the period are
considered to have been incurred and worked during the current period. Therefore, an average cost is
calculated for the work done last period that was still in the department at the beginning of this period as
well as for the work done this period.
Because the assumption is made that all of the work done on BWIP was done during the current period,
the calculation of EUP for the weighted average method is simpler than for FIFO. For WAVG, equivalent
units of production need to be calculated for only two categories of units: Units Completed and the Starting
of EWIP.
The formula for EUP under the WAVG method has two steps, as follows (candidates must know this):

                                  How Calculated
1)    Units Completed             # of units completed during the period × 1
2) +  Starting of EWIP            # of units in EWIP × % of work done this period
   =  TOTAL EUP this period
Units completed is calculated as:
Physical Units in BWIP + Units Started/Transferred In – Physical Units in EWIP = Units Completed
The weighted average method essentially creates a weighted average of the production costs of the two
periods by putting them together (and hence, the name).
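The weighted average calculation can be sketched in the same way. The Python fragment below is only an illustration; the figures are the conversion-cost figures from the weighted average example worked later in this section, and the last two lines simply check the relationship stated in the note that follows (WAVG EUP exceed FIFO EUP by exactly the EUP already in BWIP).

def wavg_eup(units_completed, ewip_units, ewip_pct_done):
    """Weighted average equivalent units of production for one cost element."""
    # 1) All units completed count as 100% worked on this period.
    # 2) EWIP counts only for the work done on it so far.
    return units_completed * 1 + ewip_units * ewip_pct_done

bwip_units, bwip_pct_done_last_period = 150, 0.60
wavg = wavg_eup(620, 80, 0.20)                         # 620 + 16 = 636
fifo = wavg - bwip_units * bwip_pct_done_last_period   # 636 - 90  = 546
print(wavg, fifo)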
Note: The total equivalent units of production under the weighted average method can never
be lower than the total equivalent units of production under FIFO. The two calculations of EUP
will be equal if no units were in beginning work-in-process inventory, but total EUP under FIFO can never
be higher than total EUP under the weighted average method. The difference in the EUP between the
two methods will be equal to the EUP that were in BWIP at the start of the period.
The following information is for the next four questions: Levittown Company employs a process
cost system for its manufacturing operations. All direct materials are added at the beginning of the
process, and conversion costs are added proportionately. The production schedule for November is:
                                                                        Units
Work-in-process on November 1 (60% complete as to conversion costs)     1,000
Units started during November                                           5,000
Total units to account for                                              6,000

Units completed and transferred out from beginning WIP inventory        1,000
Units started and completed during November                             3,000
Work-in-process on November 30 (20% complete as to conversion costs)    2,000
Total units accounted for                                               6,000
Question 18: Using FIFO, the EUP for direct materials for November are:
a)  5,000 units
b)  6,000 units
c)  4,400 units
d)  3,800 units
Question 19: Using FIFO, the EUP of conversion costs for November are:
a)  3,400 units
b)  3,800 units
c)  4,000 units
d)  4,400 units
Question 20: Using weighted-average, the EUP for materials for November are:
a)  3,400 units
b)  4,400 units
c)  5,000 units
d)  6,000 units
Question 21: Using weighted-average, the EUP for conversion costs for November are:
a)  3,400 units
b)  3,800 units
c)  4,000 units
d)  4,400 units
(CMA Adapted)
5) Calculation of Costs Incurred During the Period
After calculating the EUP during the period the next step is to determine the costs that will be considered
to have been incurred during the period under each inventory method.
Costs Incurred Under the FIFO Method
Under the FIFO method all of the costs associated with the units in BWIP at the start of the period are
transferred out. The costs in BWIP do not go into the “process” during the period because under FIFO, the
units in BWIP are all assumed to have been finished before any other units are started. Therefore, all of
the costs in BWIP will end up as costs for units completed and transferred out during the current period.
Only the costs that were actually incurred during the current period will be allocated between units completed and transferred out and ending WIP according to the EUP in each during the period. Thus, the costs
allocated to units completed and transferred out will consist of two components:
1)
All of the costs incurred by this department during the previous period for the EUP in BWIP.
2)
An allocated portion of costs incurred by this department during the current period for the units
that were started and completed during the period.
Costs Incurred Under the Weighted Average Method
Under the weighted average method all of the costs in BWIP are added to the costs that were incurred
during the current period. This treatment is the same thing that was done with the calculation of EUP under
the weighted average method. The costs in BWIP and the costs incurred during the period are combined
and allocated to either completed units or to ending WIP inventory according to the EUP in each during the
period as calculated under the WAVG method (which included the EUP that were in BWIP at the beginning
of the period).
Because the costs of the current and previous period are mixed, the result is a weighted average.
Selecting an Inventory Cost Flow Method
The weighted average method is simpler to use than the FIFO method since only one calculation is needed
to allocate the costs. However, the weighted average method mixes together costs, so any change in the
cost of an input is in a sense covered up by the weighted average. The weighted average method is
best used when prices are stable.
Because the FIFO method keeps the costs of the two periods separate, FIFO is preferable when prices
of inputs are changing frequently.
6) Calculation of the Cost per EUP
Once the EUP have been determined (under either FIFO or WAVG), and the costs to be allocated have been
identified, the next step is to determine a rate (or unit cost) per EUP for each cost element, usually raw
materials and conversion costs. The cost per equivalent unit is calculated by dividing the total costs to be
allocated for each cost element (Step #5) by the total equivalent units for that cost element. The calculation
is done separately for materials and conversion costs (Step #4). The cost per EUP for materials and
conversion costs must be calculated separately because the total equivalent units for materials
and conversion costs may be different.
Note: Remember that if using the FIFO cost flow assumption, all of the costs associated with the BWIP
are transferred 100% to FG or to the next department. They do not need to be allocated between
completed units and ending WIP inventory and thus they are excluded from the calculation of equivalent
units of production and from the allocation based on equivalent units of production.
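As a small illustration of this step, the division is done element by element. The minimal Python lines below use the current-period FIFO figures from the worked example later in this section, so no new figures are assumed.

# Cost per EUP is computed separately for each cost element because the
# EUP totals usually differ between materials and conversion costs.
materials_cost, materials_eup = 800, 550            # FIFO: current-period costs only
conversion_cost, conversion_eup = 6_000, 546

cost_per_eup_materials = materials_cost / materials_eup       # ≈ $1.45 (the example rounds to $1.46)
cost_per_eup_conversion = conversion_cost / conversion_eup    # ≈ $10.99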
7) Allocation of the Costs to the Products
After the rates per EUP for materials and conversion costs have been determined, the last step is to allocate
the costs to completed units transferred out and to EWIP based on the number of equivalent units that
were completed and the number of equivalent units in EWIP. This allocation is simply a mathematical
exercise that requires multiplying the EUP in each place by the rate (or cost) per EUP.
Note: It is usually easiest to allocate the costs to EWIP first and then all remaining costs end up in units
transferred out. When FIFO is being used, however, it is critical not to forget to allocate the costs in
BWIP to units transferred out, as well.
The diagrams that follow illustrate the FIFO and Weighted Average process costing systems. Note that the only difference between them is the treatment of BWIP.

•   Under FIFO, the costs of BWIP go directly to completed units transferred out.

•   Under the weighted average method, the costs in BWIP are included with the costs added into the "process" for the current period and are allocated. Similarly, under the weighted average method the work that had been done in previous periods on the BWIP is also added to the "process" and is considered to have been done this period.
The diagrams will again go through the steps of process costing, and the diagrams will be followed by an
example that goes through all of the steps for both FIFO and WAVG.
Process Costing Diagram – FIFO

Inputs to the process:

•   Beginning WIP: contains a number of units that have had some work (EUP) done on them and have been allocated costs incurred during the previous period. Under FIFO, the costs associated with BWIP bypass the allocation and go directly to units transferred out.

•   Conversion costs added during the period: the costs paid for labor and overhead during the period to convert the materials into the finished product.

•   Materials added during the period: the costs paid for materials during the period to produce the product.

The Process: the EUP for materials and conversion costs are calculated. A cost per EUP is calculated and used to allocate the costs incurred during the period between units Completed and Transferred Out and EUP in Ending WIP.

Outputs of the process:

•   Transferred Out: (# of units completed that were NOT in BWIP × cost/EUP for the whole unit, materials and conversion costs) plus 100% of the costs associated with BWIP.

•   Ending WIP: (EUP of Materials × Cost/EUP of Materials) + (EUP of Conversion Costs × Cost/EUP of Conversion Costs).

Under FIFO the units in BWIP are considered to have been completed during the period, so the costs associated with BWIP are all transferred out with the completed units. The EUP in BWIP are not used in the allocation of costs incurred during the current period because the costs in BWIP are allocated 100% to units completed and transferred out.
Process Costing Diagram – Weighted Average

Inputs to the process:

•   Beginning WIP: contains a number of units that have had some work (EUP) done on them and have been allocated costs incurred during the previous period. Under the weighted average method, both the costs associated with BWIP and the EUP associated with BWIP go into the process.

•   Conversion costs added during the period: the costs paid for labor and overhead during the period to convert the materials into the finished product.

•   Materials added during the period: the costs paid for materials during the period to produce the product.

The Process: the EUP of materials and conversion costs are calculated. The units that were in BWIP are included, and it is assumed that that work was done during this period. A cost per EUP is calculated and used to allocate the costs incurred between units Transferred Out and EUP in Ending WIP.

Outputs of the process:

•   Transferred Out: # of units completed × cost/EUP for the whole unit (materials and conversion costs).

•   Ending WIP: (EUP of Materials × Cost/EUP of Materials) + (EUP of Conversion Costs × Cost/EUP of Conversion Costs).

Under WAVG the units in BWIP are considered to have been done completely during the current period. The costs in BWIP are combined with the costs incurred during the current period, and the total cost is divided by the work done during the previous period on the units in BWIP plus the work done during the current period (total EUP) to calculate the cost per EUP.
Process Costing Summary
The seven steps in a process costing question are shown again below in an abbreviated form.
1)  Determine the physical flow of units of goods. The formula is:

    Units in BWIP + Started or Transferred In = Units in EWIP + Completed/Transferred Out

2)  Calculate how many units were started and completed during the period. The formula is:

    Started and Completed = Units Completed or Transferred Out – Units in BWIP

3)  Determine when the materials are added to the process.

4)  Calculate the equivalent units of production during the period. The calculation of equivalent units will most likely need to be done twice—once for materials and once for conversion costs. If materials are added at the beginning of the process (or at any other point in the process), the EUP calculation for materials will need to be done separately. However, if the materials and the conversion costs are both added evenly (or proportionately) throughout the process, only one calculation of equivalent units is needed, as the total equivalent units will be the same for materials and conversion costs.
For FIFO:

1)    Completion of BWIP          Units in BWIP × % work done this period
2) +  Started and Completed       Number of Started & Completed Units × 1
3) +  Starting of EWIP            Units in EWIP × % work done this period
   =  TOTAL EUP this period
For Weighted Average:

1)    Units Completed*            Units completed during the period × 1
2) +  Starting of EWIP            Units in EWIP × % work done this period
   =  TOTAL EUP this period

* Units Completed is calculated as:

    Physical Units in BWIP + Units Started or Transferred In – Physical Units in EWIP = Units Completed
5)  Calculate the costs incurred during the period.

    For FIFO, only costs that were actually incurred during the period are included in the amount to be allocated between units completed and units in Ending WIP. Costs in BWIP are allocated 100% to units completed.

    For WAVG, the costs allocated between units completed and units in Ending WIP will include those actually incurred during the period, plus the costs that were incurred during the previous period in this department to start the units that were in BWIP at the start of the period.

    The costs for materials and conversion costs need to be calculated separately.

6)  Calculate the cost per equivalent unit for materials and conversion costs by dividing the costs from Step 5 by the EUP calculated in Step 4. The cost per equivalent unit is calculated separately for materials and conversion costs.
7)  Allocate the costs between completed units and EWIP. The allocation of costs between completed units and units in EWIP is done by multiplying, separately for materials and conversion costs, the equivalent units of production calculated in Step 4 by the cost per EUP calculated in Step 6. If FIFO is being used, also allocate 100% of the costs in beginning WIP inventory for materials and conversion costs to completed units.
Process Costing Examples
The following examples show how the entire process works for both the FIFO and the Weighted Average methods. The same facts are used for both methods, and the monetary value of EWIP and FG is calculated under each method. Exam questions will be easier when candidates understand the entire process, as exam questions generally ask about only one part of the process.
FIFO Example: Beginning WIP inventory is 150 units (60% complete as to conversion costs). Beginning
WIP inventory material costs are $250 and conversion costs are $500. All material for the product is
added at the beginning of the production process. Conversion takes place continuously throughout the
process. During the period 550 units were started. Material costs for the period are $800 and conversion
costs for the period are $6,000. Ending WIP is 80 units at 20% completion as to conversion costs.
Calculate the amounts to be in EWIP and FG using the FIFO method.
1) Determine the physical flow.

    BWIP   +   Started   =   EWIP   +   Units Completed
    150    +   550       =   80     +   620
2) Calculate the Units Started and Completed.

    S&C   =   Units Completed   –   BWIP
    470   =   620               –   150
3) Materials are added at the beginning of the process. Therefore, 100% of the required materials
were added to the units in BWIP during the previous period and 100% of the required materials have
been added to the units in EWIP during the current period. Thus, the number of equivalent units of
direct material in each is equal to the number of physical units in each.
4) Calculate equivalent units of production. Under FIFO, the costs in BWIP are allocated 100% to units completed and transferred out. Therefore, only costs incurred during the current period are allocated between completed units and units in EWIP on the basis of equivalent units.

                              Materials    Conversion Costs
    Complete BWIP                     0          60   (= 150 × 40%)
    Started & Completed             470         470
    Start EWIP                       80          16   (= 80 × 20%)
    TOTAL EUP                       550         546
5) Calculate costs incurred. Under FIFO, only costs actually incurred during the period are allocated between units completed and units in EWIP.

    Materials             $   800
    Conversion Costs      $ 6,000
6) Incurred costs per equivalent unit.

    Materials             $   800 ÷ 550   =   $ 1.46
    Conversion Costs      $ 6,000 ÷ 546   =   $10.99
7) Allocate incurred costs to EWIP.

    Materials             $ 1.46 × 80   =   $ 116.80
    Conversion Costs      $10.99 × 16   =     175.84
    Total                                   $ 292.64

   Allocate BWIP costs and incurred costs to finished goods.

    Costs of BWIP (100% to finished goods)   $250 + $500    =   $   750.00
    Materials                                $ 1.46 × 470   =       686.20
    Conversion Costs                         $10.99 × 530   =     5,824.70
    Total                                                      $ 7,260.90

    Total allocated costs                                      $ 7,553.54 *

    * The total cost to allocate is $750 in BWIP, $800 materials added, and $6,000 conversion costs added, for a total of $7,550. The $3.54 difference between the total cost to allocate and the total of the allocated costs is a rounding difference that can be dealt with by adjusting the individual components by small amounts so that the allocations total to $7,550.
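For readers who prefer to check the arithmetic programmatically, the Python sketch below restates the FIFO example step by step. All figures come from the example above, and the cost-per-EUP rates use the same rounding as the example ($1.46 and $10.99), so the same small rounding difference appears.

# FIFO process costing, reproducing the example above.
bwip_units, bwip_pct_conv = 150, 0.60
bwip_costs = 250 + 500                        # materials + conversion costs in BWIP
started = 550
mat_cost_added, conv_cost_added = 800, 6_000
ewip_units, ewip_pct_conv = 80, 0.20

# Step 1: physical flow.  BWIP + Started = EWIP + Completed
completed = bwip_units + started - ewip_units              # 620

# Step 2: units started and completed this period.
s_and_c = completed - bwip_units                           # 470

# Step 4: FIFO equivalent units.  Materials are added at the start of the
# process, so no materials work is left to do on BWIP.
eup_mat = 0 + s_and_c + ewip_units                         # 550
eup_conv = (bwip_units * (1 - bwip_pct_conv)               # 60
            + s_and_c                                      # 470
            + ewip_units * ewip_pct_conv)                  # 16   -> total 546

# Steps 5-6: cost per EUP uses current-period costs only.  The example rounds
# 800 / 550 to $1.46 and 6,000 / 546 to $10.99; the same rates are used here.
rate_mat, rate_conv = 1.46, 10.99

# Step 7: allocate current-period costs to EWIP ...
ewip_value = rate_mat * ewip_units + rate_conv * ewip_units * ewip_pct_conv
# ... and BWIP costs plus the remaining current-period costs to finished goods.
fg_value = (bwip_costs
            + rate_mat * s_and_c
            + rate_conv * (bwip_units * (1 - bwip_pct_conv) + s_and_c))

print(round(ewip_value, 2))   # 292.64
print(round(fg_value, 2))     # 7260.90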
WAVG Example: Beginning WIP inventory is 150 units (60% complete as to conversion costs). Beginning WIP inventory material costs are $250 and conversion costs are $500. All material for the product
is added at the beginning of the production process. Conversion takes place continuously throughout the
process. During the period 550 units were started. Material costs for the period are $800 and conversion
costs for the period are $6,000. Ending WIP is 80 units at 20% completion as to conversion costs.
Calculate the amounts to be in EWIP and FG using the Weighted Average method.
1) Determine the physical flow.

    BWIP   +   Started   =   EWIP   +   Units Completed
    150    +   550       =   80     +   620
2) Calculate the Units Started and Completed.

    S&C   =   Units Completed   –   BWIP
    470   =   620               –   150
3) Materials are added at the beginning of the process. Therefore, 100% of the required materials
were added to the units in BWIP during the previous period and 100% of the required materials have
been added to the units in EWIP during the current period. Thus, the number of equivalent units of
direct material in each is equal to the number of physical units in each.
4) Calculate equivalent units of production.

                              Materials    Conversion Costs
    Units COMPLETED                 620         620
    EWIP                             80          16   (= 80 × 20%)
    TOTAL                           700         636
5) Calculate costs to allocate using EUP. Under WAVG the costs associated with BWIP are added to the costs incurred during the period.

    Materials                 $250 in BWIP + $   800 incurred   =   $1,050
    Conversion Costs          $500 in BWIP + $ 6,000 incurred   =    6,500
    Total costs to allocate                                         $7,550
6) Costs per equivalent unit (including costs in BWIP).

    Materials             ($250 + $   800) ÷ 700   =   $ 1.50
    Conversion Costs      ($500 + $ 6,000) ÷ 636   =   $10.22
7) Allocate costs to EWIP.

    Materials             $ 1.50 × 80   =   $ 120.00
    Conversion Costs      $10.22 × 16   =     163.52
    Total                                   $ 283.52

   Allocate costs to finished goods.

    Materials             $ 1.50 × 620   =   $   930.00
    Conversion Costs      $10.22 × 620   =     6,336.40
    Total                                    $ 7,266.40

    Total allocated costs                    $ 7,549.92 *

    * The total cost to allocate is $7,550. The $0.08 difference between the total cost to allocate and the total of the allocated costs is a rounding difference that can be dealt with by adjusting the individual components by small amounts so that the allocations total to $7,550.
Note: If the company is in its first month of operation, it will have no BWIP. In that case, the cost
allocations under FIFO and WAVG will be the same because the treatment of BWIP is the only difference
between the two methods.
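The weighted average example can be restated the same way. The Python sketch below uses only the figures from the example above; the conversion rate is rounded to $10.22 as in the example.

# Weighted average process costing, reproducing the example above.
bwip_units = 150
bwip_mat_cost, bwip_conv_cost = 250, 500
started = 550
mat_cost_added, conv_cost_added = 800, 6_000
ewip_units, ewip_pct_conv = 80, 0.20

# Step 1: units completed = BWIP + started - EWIP.
completed = bwip_units + started - ewip_units                    # 620

# Step 4: WAVG equivalent units.  Materials are added at the start of the
# process, so EWIP is 100% complete for materials.
eup_mat = completed + ewip_units                                 # 700
eup_conv = completed + ewip_units * ewip_pct_conv                # 636

# Steps 5-6: BWIP costs are pooled with current-period costs.
rate_mat = (bwip_mat_cost + mat_cost_added) / eup_mat            # 1,050 / 700 = 1.50
rate_conv = round((bwip_conv_cost + conv_cost_added) / eup_conv, 2)   # 6,500 / 636 ≈ 10.22

# Step 7: allocate the pooled costs to EWIP and to finished goods.
ewip_value = rate_mat * ewip_units + rate_conv * ewip_units * ewip_pct_conv
fg_value = rate_mat * completed + rate_conv * completed

print(round(ewip_value, 2))   # 283.52
print(round(fg_value, 2))     # 7266.40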
The following information is for the next two questions: A sporting goods manufacturer buys
wood as a direct material for baseball bats. The Forming Department processes the baseball bats, which
are then transferred to the Finishing Department where a sealant is applied. The Forming Department
began manufacturing 10,000 "Casey Sluggers" during the month of May. There was no beginning inventory.
Costs for the Forming Department for the month of May were as follows:
    Direct materials      $33,000
    Conversion costs       17,000
    Total                 $50,000
A total of 8,000 bats were completed and transferred to the Finishing Department; the remaining 2,000
bats were still in the forming process at the end of the month. All of the Forming Department's direct
materials were placed in process, but, on average, only 25% of the conversion cost was applied to the
ending work-in-process inventory.
Question 22: The cost of the units transferred to the Finishing Department is:
a)  $50,000
b)  $40,000
c)  $53,000
d)  $42,400
Question 23: The cost of the work-in-process inventory in the Forming Department at the end of May
is:
a)  $10,000
b)  $2,500
c)  $20,000
d)  $7,600
(CMA Adapted)
Question 24: Nance Co. began operations in January 20X5 and uses the FIFO Process Costing method
in its accounting system. After reviewing the first quarter results, Nance is interested in how the equivalent units of production would have been different if the weighted average method had been used
instead. Assume that the number of units in ending work-in-process was the same at the end of January,
February and March and that at the end of each month they were the same percentage complete. If
Nance had used the weighted average method, the EUP for conversion costs for each of the first two
months would have compared to the FIFO method in what way?
        January           February
a)      Same              Same
b)      Greater number    Greater number
c)      Greater number    Same
d)      Same              Greater number
(HOCK)
The following information is for the next two questions: Kimbeth Manufacturing uses a process
cost system to manufacture Dust Density Sensors for the mining industry. The following information
pertains to operations for the month of May.
                                                     Units
    Beginning work-in-process inventory, May 1       16,000
    Started in production during May                100,000
    Completed production during May                  92,000
    Ending work-in-process inventory, May 31         24,000
The beginning inventory was 60% complete for materials and 20% complete for conversion costs. The
ending inventory was 90% complete for materials and 40% complete for conversion costs.
Costs pertaining to the month of May are as follows:
•   Beginning inventory costs are materials, $54,560; direct labor, $20,320; and factory overhead, $15,240.

•   Costs incurred during May are materials used, $468,000; direct labor, $182,880; and factory overhead, $391,160.
Question 25: Using the weighted-average method, the cost per equivalent unit for materials during May
is
a)  $4.12
b)  $4.50
c)  $4.60
d)  $5.02
Question 26: Using the weighted-average method, the cost per equivalent unit for conversion costs
during May is
a)  $5.65
b)  $5.83
c)  $6.00
d)  $6.20
(CMA Adapted)
Question 27: At the end of the first month of operations, Larkin had 2,000 units in ending WIP that
were 25% complete as to conversion costs. All materials are added at the end of the process. The
number of equivalent units of conversion costs would be:
a)  Equal to the number of units completed during the period.
b)  Less than the number of equivalent units of materials.
c)  Less than the number of units completed during the period.
d)  Less than the number of units placed into the process during the period.
(HOCK)
Spoilage in Process Costing
Some units worked on during a period are never completed and also are no longer in EWIP at the end of
the period because they are spoiled and are identified as defective during the process. When spoiled units
are detected, the spoiled units are removed from processing immediately. No further direct materials are
added to them and no further conversion work is done on them. The term used for defective units that are
removed from processing is spoilage.
Up to this point, all of the costs that are in the department (costs in beginning WIP, costs incurred during
the period, or costs transferred in) have been allocated either to completed units—units transferred to
finished goods inventory (FG) or to the next process—or to ending WIP inventory. Now that spoilage has
been added to the equation, spoiled units must be included as another possible destination for a portion of
the costs that are in the department.
The formula to calculate the physical flow of units of goods when spoilage is included needs to be expanded
to the following:
Units in BWIP + Started/Transferred In = Units in EWIP + Completed/Transferred Out + Spoiled Units
When units are spoiled, material will have been added to the spoiled units and some of the needed conversion work will have been done on them before they were identified as spoiled. Therefore, a portion of the
material costs, conversion costs, and transferred-in costs, if any, must be allocated to the spoiled units.
The treatment of spoilage is similar to the treatment of ending WIP inventory—the units were started but
not finished. The spoiled units have costs associated with them based on what has been done to them,
similar to EWIP.
The allocation of costs to the spoiled units is based on equivalent units, and the number of equivalent units
worth of each cost element used in the allocation depends on when the spoiled units are identified as
spoiled.
Whenever spoilage exists, the following issues need to be addressed:

1)  How many units were spoiled?

2)  How are the spoiled units classified—as normal spoilage or abnormal spoilage?

3)  What costs are allocated to each spoiled unit?

4)  What is done with the costs allocated to the spoiled units?
1) How many units were spoiled?
The spoiled units (physical units) are the units that did not pass inspection and that were removed from the production process at the inspection point.
Though the number of spoiled units is usually given in the question, it is important to identify the point at
which the spoiled units were removed from the production process, because that information will
be needed to calculate the equivalent units associated with the spoiled units.
2) How are the spoiled units classified – as normal spoilage or abnormal spoilage?
For each process, the company will have established standards regarding the number of defective (spoiled)
units that are expected to occur. The expected spoilage is normal spoilage and spoilage above that is
abnormal spoilage. All spoiled units are classified as either normal or abnormal.
The way the standard for normal spoilage is presented in the question is used to determine the number of
spoiled units to be classified as normal spoilage. The expected amount of spoilage (normal spoilage) is
usually presented in a question in one of four ways, as follows:
1)
As a percentage of the number of units started into production
2)
As a percentage of the units that completed production
3)
As a percentage of the good units that passed inspection
4)
As a percentage of the total number of units that were inspected
Spoiled units up to and including, but not exceeding, the number of spoiled units expected to occur are classified as normal spoilage. Any spoiled units in excess of the expected spoilage are classified as abnormal spoilage. The distinction between normal and abnormal spoilage is important because the costs associated with the two classifications of spoilage are treated differently.
3) What costs are allocated to each spoiled unit?
If the spoiled units are identified as spoiled after processing has begun in the current department, they will
have had some conversion costs allocated to them and possibly materials as well, depending on when in
the process the materials are added. If costs incurred in a previous department or departments were transferred in along with the units, some of the transferred-in costs will also need to be allocated to the spoiled
units. The different costs are allocated to the spoiled units based on their EUP. The equivalent units of
spoiled units are calculated the same way as equivalent units are calculated for ending WIP inventory, so
another line is added to the EUP worksheet calculation.
In a problem with spoilage, the formula for FIFO now looks like the following:

1)    Completion of BWIP            Units in BWIP × % work done this period
2) +  Started and Completed         Number of Started & Completed Units × 1
3) +  Starting of EWIP              Units in EWIP × % work done this period
4) +  Starting of Spoiled Units     Units Spoiled × % work done this period
   =  TOTAL EUP this period
In a problem with spoilage, the formula for WAVG now looks like the following:

1)    Units Completed               Units completed during the period × 1
2) +  Starting of EWIP              Units in EWIP × % work done this period
3) +  Starting of Spoiled Units     Units Spoiled × % work done this period
   =  TOTAL EUP this period
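The extra spoilage line is a one-line addition to the earlier EUP calculation. The short Python sketch below extends the weighted average version; the function name and figures are hypothetical and only illustrate the formula above.

def wavg_eup_with_spoilage(units_completed, ewip_units, ewip_pct_done,
                           spoiled_units, spoilage_pct_done):
    """Weighted average EUP for one cost element when spoilage exists."""
    return (units_completed * 1                      # 1) units completed
            + ewip_units * ewip_pct_done             # 2) starting of EWIP
            + spoiled_units * spoilage_pct_done)     # 3) starting of spoiled units

# Hypothetical figures: 900 good units completed, 100 units in EWIP (50% converted),
# 40 spoiled units removed at an inspection point 75% through conversion.
print(wavg_eup_with_spoilage(900, 100, 0.50, 40, 0.75))   # 900 + 50 + 30 = 980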
Allocation of Costs to the Spoiled Units
After the cost per EUP for materials and conversion costs and transferred-in costs (if applicable) is calculated as discussed above, the company allocates costs to each unit, including the spoiled units. The allocation is made in the same manner as the allocation of costs to EWIP: the number of EUP for the spoiled units is multiplied by the cost per EUP. Again, this allocation will probably be done separately for materials and conversion costs and will definitely be done separately for transferred-in costs, if any.
4) What is done with the costs allocated to the spoiled units?
After determining the direct materials costs, conversion costs, and transferred-in costs to be allocated to
the normally and abnormally spoiled units, those costs must be transferred somewhere at the end of the
period. The spoiled units are neither finished goods nor ending work-in-process, but the costs have been
incurred in the department and must be moved out of the department.
The treatment of spoilage costs will depend on the type of spoilage.
Normal Spoilage
If the spoilage is normal spoilage, the costs that have been allocated to the normally spoiled units are
added to the costs of the good units that are transferred out to finished goods (or to the next
department). As a result, the cost per unit transferred out will be higher than the actual cost of producing
a good unit in the current department.
Abnormal Spoilage
It is generally considered that production management can control abnormal spoilage. Normal spoilage is
expected to occur and generally cannot be prevented. However, abnormal spoilage should not occur and
should therefore be preventable.
Therefore, the costs that have been allocated to the abnormally spoiled units will be expensed on the
income statement in that period as a loss from abnormal spoilage.
Note for FIFO only: To simplify the allocation of costs to spoiled units under FIFO, the spoiled units
are accounted for as though they were all started in the current period. Although the spoiled units
probably included some units from beginning work-in-process inventory, all spoilage is treated as if it
came from current production and thus a portion of only the current production costs incurred, along
with a portion of any costs transferred in during the period, are allocated to the spoiled units.
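Once costs have been allocated to the spoiled units, their disposition follows the normal/abnormal split described above: the normal portion travels with the good units transferred out, and the abnormal portion is expensed. The Python sketch below is a hypothetical illustration of that split; the figures and function name are assumptions, not from an exam question.

def split_spoilage_cost(total_spoilage_cost, spoiled_units, normal_limit_units):
    """Split the cost allocated to spoiled units into normal and abnormal portions.

    Spoiled units up to the expected (normal) limit are normal spoilage; any
    excess is abnormal.  Normal spoilage cost is added to the cost of good
    units transferred out; abnormal spoilage cost is expensed as a period loss.
    """
    cost_per_spoiled_unit = total_spoilage_cost / spoiled_units
    normal_units = min(spoiled_units, normal_limit_units)
    normal_cost = normal_units * cost_per_spoiled_unit         # -> transferred out
    abnormal_cost = total_spoilage_cost - normal_cost          # -> loss on income statement
    return normal_cost, abnormal_cost

# Hypothetical: $5,000 allocated to 100 spoiled units; 60 spoiled units were expected.
print(split_spoilage_cost(5_000, 100, 60))   # (3000.0, 2000.0)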
Question 28: A company employs a process cost system using the FIFO method. The product passes
through both Department 1 and Department 2 in order to be completed. Units enter Department 2 upon
completion in Department 1. Additional direct materials are added in Department 2 when the units have
reached the 25% stage of completion with respect to conversion costs. Conversion costs are added
proportionally in Department 2. The production activity in Department 2 for the current month was:
    Beginning work-in-process inventory (40% complete
      with respect to conversion costs)                       15,000
    Units transferred in from Department 1                    80,000
    Units to account for                                      95,000

    Units completed and transferred to finished goods         85,000
    Ending work-in-process inventory (20% complete
      with respect to conversion costs)                       10,000
    Units accounted for                                       95,000
How many equivalent units for direct materials were added in Department 2 for the current month?
a)  70,000 units.
b)  80,000 units.
c)  85,000 units.
d)  90,000 units.
(CMA Adapted)
Question 29: During May 20X5, Mercer Company completed 50,000 units costing $600,000, exclusive
of spoilage allocation. Of these completed units, 25,000 were sold during the month. An additional
10,000 units, costing $80,000, were 50% complete at May 31. All units are inspected between the
completion of manufacturing and transfer to finished goods inventory. Normal spoilage for the month
was $20,000, and abnormal spoilage of $50,000 was also incurred during the month. The portion of
total spoilage that should be charged against revenue in May is
a)  $50,000
b)  $20,000
c)  $70,000
d)  $60,000
(CMA Adapted)
Benefits of Process Costing
•   Process costing is the easiest, most practical costing system to use to allocate costs for homogeneous items. It is a simple and direct method that reduces the volume of data that must be collected.

•   Process costing can aid in establishing effective control over the production process. Process costing can be used with standard costing by using standard costs as the costs to be allocated. Use of standard costing with process costing makes it possible to track variances and to recognize inefficiency in a specific process.

•   It is flexible. If the company adds or removes a process, it can adapt its process costing system easily.

•   Management accountants can review the amounts of materials and labor used in each process to look for possible cost savings.

•   Process costing enables obtaining and predicting the average cost of a product, an aid in providing pricing estimates to customers.
Limitations of Process Costing
•   Process costing can introduce large variances into the costing system if standard costs allocated to the units are not up to date. Depending on how the variances are resolved, the variances could cause the product to be over- or under-costed, which could lead to pricing errors.

•   Process costing can be time-consuming for management accountants.

•   Calculating the equivalent units for beginning and ending work-in-process inventory can lead to inaccuracies because the percentage of completion of those inventories may be subjective (an estimate or even a guess).

•   Process costing cannot provide an accurate cost estimate when a single process is used to produce different (joint) products.

•   Process costing is not suitable for custom orders or diverse products.

•   Because the work of an entire department is combined in the process costing system, process costing makes it difficult to evaluate the productivity of an individual worker.
Job-Order Costing
Job-order costing is a cost system in which all of the costs associated with a specific job or client are
accumulated and charged to that job or client. The costs are accumulated on what is called a job-cost sheet.
All of the job sheets that are still being worked on equal the work-in-process at that time. In a job-order
costing system, costs are recorded on the job-cost sheets and not necessarily in an inventory account.
Job-order costing can be used when all of the products or production runs are unique and identifiable from
each other. Audit firms and legal firms are good examples of job-order costing environments. As employees
work on a particular client or case, they charge their time and any other costs to that specific job. At the
end of the project, the company simply needs to add up all of the costs assigned to it to determine the
project’s cost. Performance measurement can be done by comparing each individual job to its budgeted
amounts or by using a standard cost system.
While direct materials and direct labor are accumulated on an actual basis, manufacturing overhead must
be allocated to each individual job. Overhead allocation is done in much the same manner as has already
been explained. A predetermined overhead rate is calculated and applied to each product based either on:
•   Actual usage of the allocation base (as in normal costing)

•   Standard usage of the allocation base (as in standard costing)
Multiple cost allocation bases may be used if different overheads have different cost drivers. For example,
in a manufacturing environment, machine hours for each job may be used to allocate overhead costs such
as depreciation and machine maintenance, whereas direct labor hours for each job may be used to allocate
plant supervision and production support costs to jobs. If normal costing is being used, actual machine
hours and actual direct labor hours will be used. If standard costing is being used, the standard machine
hours allowed and the standard direct labor hours allowed for the actual output on each job will be used.
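As a minimal sketch of how a job-cost sheet accumulates these amounts, the Python fragment below applies overhead with two predetermined rates driven by different cost drivers. The rates and the job data are hypothetical assumptions chosen only for illustration; they are not from any exam question.

# Hypothetical predetermined overhead rates with two different cost drivers.
RATE_PER_MACHINE_HOUR = 30.0     # e.g., depreciation and machine maintenance
RATE_PER_DL_HOUR = 12.0          # e.g., plant supervision and production support

def job_cost(direct_materials, direct_labor, machine_hours, dl_hours):
    """Total cost charged to one job: actual DM and DL plus applied overhead."""
    applied_overhead = (machine_hours * RATE_PER_MACHINE_HOUR
                        + dl_hours * RATE_PER_DL_HOUR)
    return direct_materials + direct_labor + applied_overhead

# Hypothetical job: $2,000 DM, $1,500 DL, 40 machine hours, 60 direct labor hours.
print(job_cost(2_000, 1_500, 40, 60))   # 2,000 + 1,500 + 1,200 + 720 = 5,420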
Note: Under job-order costing, selling and administrative costs are not allocated to the products
in order to determine the COGS per unit. Selling and administrative costs are expensed as period costs.
Question 30: Lucy Sportswear manufactures a line of T-shirts using a job-order cost system. During
March, the following costs were incurred completing Job ICU2: direct materials, $13,700; direct labor,
$4,800; administrative, $1,400; and selling, $5,600. Factory overhead was applied at the rate of $25
per machine hour, and Job ICU2 required 800 machine hours. If Job ICU2 resulted in 7,000 good shirts,
the cost of goods sold per unit would be:
a)  $6.50
b)  $6.30
c)  $5.70
d)  $5.50
(CMA Adapted)
Note: On the Exam, a numerical question about job-order costing is most likely to be nothing more than
a question where a standard overhead application rate needs to be used to apply overhead to specific
jobs.
Benefits of Job Order Costing
•   Job order costing is best for businesses that do custom work or service work.

•   Job order costing enables the calculation of gross profit on individual jobs, which can help management determine in the future which kinds of jobs are desirable.

•   Managers are able to keep track of the performance of individuals for cost control, efficiency, and productivity.

•   The records kept result in accurate costs for items produced.

•   Management can see and analyze each cost incurred on a job in order to determine how it can be better controlled in the future.

•   Costs can be seen as they are added, rather than waiting until the job is complete.
Limitations of Job Order Costing
•   Employees are required to keep track of all the direct labor hours used and all the materials used.

•   The focus is on direct costs of products produced. The focus on direct costs can allow for inefficiencies and increasing overhead costs.

•   Depending on the type of costing being used, overhead may be applied to jobs on the basis of predetermined rates. If the overhead application rates are out of date, the costing can be inaccurate.

•   If overhead is applied on the basis of predetermined rates and the rates are not calculated on any meaningful basis, the cost of each job will not be meaningful.

•   To produce meaningful results, job order costing requires a large amount of accurate data entry. Because of the volume of data recording required, input errors can occur, and if not corrected, the errors can lead to poor management decisions.

•   The use of job order costing is limited to businesses that do custom manufacturing or service work. It is not appropriate for high-volume manufacturing or for retailing.
Operation Costing
Operation costing is a hybrid, or combination, of job-order costing and process costing. In operation costing, a company applies the basic procedures of process costing to a production process that produces batches of items. The different batches all follow a similar process, but the direct materials input to each batch are different.
Examples of manufacturing processes where operation costing would be appropriate are clothing, furniture,
shoes and similar items. For each of these items, the general product is the same (for example, a shirt),
but the materials used in each shirt may be different.
In operation costing the direct materials are charged to the specific batch where they are used, but conversion costs are accumulated and distributed using a predetermined conversion cost per unit. Conversion
costs are allocated by batch.
An operation costing worksheet would look very much like a process costing worksheet, except it would
require a separate column for each product’s direct materials, while the worksheet would have one conversion costs column that would pertain to the conversion of all the products.
Life-Cycle Costing
Life-cycle costing is another type of costing that is useful only for internal decision-making.
In life-cycle costing a company does not determine the production cost in the short-term sense of the
production of one unit. Rather, the company takes a much longer view to the cost of production and attempts to allocate to each product all of the research and development, marketing, after-sale service and
support costs, and any other cost associated with the product during its life cycle. The life cycle of the
product may be called its value chain.11
The longer-term view used in life-cycle costing is of particular importance when the product has significant
research and development (R&D) costs or other non-production costs such as after sale service and support
costs associated with it. For the product to be profitable over its life, these non-production costs also need
to be covered by the sales price. If the company fails to take into account the large costs of R&D, it runs
the risk of the sales price covering the costs of the actual production of each unit sold but not covering the
costs of R&D, marketing, after-sale service and support, and other costs.
In the process of looking at all of the costs, the company should be able to determine the ultimate value of
developing a better product. In addition to R&D, costs include after-sale costs such as warranties and repair
work and product support expense. It may be that a larger investment in the design or development of the
product will be recovered through smaller after-sale costs. Alternatively, the company may realize that
additional design costs will not provide sufficient benefit later to make the additional investment in design
and development feasible.
Note: Life-cycle costing is different from other costing methods because it treats pre-production and
after-sale costs as part of the product costs, whereas other methods treat those costs as period expenses
that are expensed as incurred. Therefore, under other methods, pre-production and after-sale costs are
not directly taken into account when determining the profitability of a product or product line.
11
The term “value chain” refers to the steps a business goes through to transform inputs such as raw materials into
finished products by adding value to the inputs by means of various processes, and finally to sell the finished products
to customers. The goal of value chain analysis is to provide maximum value to the customer for the minimum possible
cost.
All of the costs in the life cycle of a product can be broken down into three categories. The three categories
and the types of costs included in each are:
Upstream Costs (before production)

•   Research and Development

•   Design – prototyping (the first model), testing, engineering, quality development

Manufacturing Costs

•   Purchasing

•   Direct and indirect manufacturing costs (labor, materials, and overhead)

Downstream Costs (after production)

•   Marketing and distribution

•   Services and warranties
For external financial reporting under GAAP, R&D and design costs and the other life-cycle costs other than
manufacturing costs are expensed as incurred. However, for internal decision-making purposes, it is important for the company to treat all of these costs as product costs that need to be recovered from the sale
of the product.
Life-cycle costing plays a role in strategic planning and decision-making about products. After making the
life-cycle cost calculations, the company can make an assessment as to whether or not the product should
be manufactured. If management believes the company will not be able to charge a price high enough to
recover all the life-cycle product costs for the product, then it should not be produced.
Furthermore, by looking at all of the costs expected to be incurred in the process of developing, producing
and selling the product, the company can identify any non-value-adding costs, which can then be reduced
or eliminated without reducing the value of the product to the customer.
Benefits of Life-Cycle Costing
•   Life-cycle costing provides a long-term, more complete perspective on the costs and profitability of a product or service when compared to other costing methods, which typically report costs for a short period such as a month or a year.

•   When long-term costs are recognized in advance, life-cycle costing can be used to lower those long-term costs.

•   Life-cycle costing includes research and development costs as well as future costs such as warranty work, enabling better pricing for profitability over a product's lifetime.

•   Life-cycle costing can be used to assess future resource requirements such as needed operational support for the product during its life.

•   Life-cycle costing can help in determining when a product will reach the end of its economic life.
Limitations of Life-Cycle Costing
•   When life-cycle costing is used to spread the cost of fixed assets over the life of a product, the assumption may be made that the fixed assets will be as productive in later years as they were when they were new.

•   Accurate estimation of the operational and maintenance costs for a product during its whole lifetime can be difficult.

•   Cost increases over the life of the product need to be considered.

•   Life-cycle costing can require considerable time and resources, and the costs may outweigh the benefits.
Customer Life-Cycle Costing
Customer life-cycle costing looks at the cost of the product from the customer’s (the buyer’s) standpoint.
It focuses on the total costs that will be paid by the customer during the whole time the customer owns the
product: the customer’s purchase costs plus costs to use, maintain, and dispose of the product or service.
For example, the life-cycle cost of laundry equipment includes the purchase cost plus the cost for energy
to operate it over its lifetime, the cost of repairs, and the cost to dispose of it at the end of its life, if any.
Customer life-cycle costing is important to a company because it is part of the pricing decision. If a product
is expected to require minimal maintenance when compared with its competition, the company can charge
a higher price for the product than the price the competition is charging for their products, and the total
cost to the customer may still be lower than the cost for the competitor’s product.
Example: BusinessSoft Co. is about to launch a new product. The company expects a 6-year life cycle from the moment it starts developing its product through its last sale and installation of the product. However, it also expects to provide after-sale services as part of the contract during and beyond the 6-year life cycle period.

The company's cost estimates are:

    R&D                    $750,000
    Design                  500,000
    Manufacturing costs     300,000
    Marketing               200,000
    Distribution            100,000
    Customer service        250,000
    After-sale support       60,000
The company plans to produce and sell 1,500 installations of the product and would like to earn a 40%
mark-up over its life-cycle costs relating to the new product. The company envisions that an average
client would incur around $500 of installation, training, operating, maintenance, and disposal costs
relating to usage of the product per installation. What is the expected total life-cycle cost per
installation for BusinessSoft, and how much does the company need to charge per
installation?
Solution: The life-cycle costs to BusinessSoft include all of the upstream, manufacturing, and downstream costs related to the new product. The upstream costs are the research and development
($750,000) and the design ($500,000) costs. Manufacturing costs are $300,000. Downstream costs
include all of the other costs listed above, from marketing to after-sale support, totaling $610,000. In
total, the life-cycle costs are $2,160,000.
With a required 40% mark-up over the life-cycle cost, a mark-up of $864,000 ($2,160,000 × 0.40) will
be required. Therefore, the price that BusinessSoft needs to charge for each of the 1,500 installations is
calculated as ($2,160,000 + $864,000) ÷ 1,500, or $2,016 per installation.
If the company charges $2,016 per installation, it will be able to recover all of the life-cycle costs
associated with the new product and earn a 40% mark-up on those costs.
The customer’s life-cycle cost will be the customer’s purchase price of $2,016 per installation plus the
estimated $500 per installation for the installation, training, operating, maintenance, and disposal costs,
for a total of $2,516. The additional $500 cost to the customer is relevant information to the company
in making its final pricing decision, but the customer’s additional $500 costs are not included with the
company’s costs when calculating the company’s total life-cycle cost for the product.
Note: The costs incurred by the customer are relevant only for customer life-cycle costing and as input
into pricing decisions made by the company.
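The BusinessSoft calculation can be restated in a few lines of Python. Every figure below comes from the example above; nothing new is assumed.

# Life-cycle costs to BusinessSoft, from the example above.
upstream = 750_000 + 500_000                          # R&D + design
manufacturing = 300_000
downstream = 200_000 + 100_000 + 250_000 + 60_000     # marketing through after-sale support

life_cycle_cost = upstream + manufacturing + downstream       # 2,160,000
markup = life_cycle_cost * 0.40                               # 864,000
price_per_installation = (life_cycle_cost + markup) / 1_500   # 2,016

# The customer's life-cycle cost adds the customer's own estimated $500 of
# installation, training, operating, maintenance, and disposal costs.
customer_life_cycle_cost = price_per_installation + 500       # 2,516
print(price_per_installation, customer_life_cycle_cost)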
D.3. Overhead Costs
Introduction to Accounting for Overhead Costs
In general, overheads are costs that cannot be traced directly to a specific product or unit. Overheads are
actually of two main types: manufacturing (or factory) overheads and nonmanufacturing overheads. Manufacturing overheads are overheads related to the production process (factory rent and electricity, for
example), whereas nonmanufacturing overheads are not related to the production process. Examples of
nonmanufacturing overheads are accounting, advertising, sales, legal counsel and general corporate administration costs.
The allocation of manufacturing overheads is covered first in the topic that follows. Allocation of manufacturing overheads can be accomplished by a variety of methods that include traditional allocation, process
costing, job order costing, operation costing, activity-based costing, and life-cycle costing. All of those
methods, with the exception of life-cycle costing, can be used for external financial reporting, although
some of the principles of activity-based costing must be adapted in order for it to be used for external
reporting because in principle, activity-based costing does not conform to generally accepted accounting
principles. Methods that cannot be used for external financial reporting can be used internally for decision-making.
The allocation of nonmanufacturing overheads is covered next in Shared Services Cost Allocation. However,
some of the concepts and ideas covered in manufacturing overhead are also applicable in the allocation of
nonmanufacturing overheads.
Manufacturing Overhead Allocation
Note: In order to help the following explanations flow more easily, the term “overhead” will be used in
the majority of situations, even though the term “manufacturing overhead” would be more technically
accurate. If “manufacturing overhead” were used in every situation, the language would become cumbersome and be more difficult to read. Also, the term “factory overhead” can be used in place of
“manufacturing overhead” because the two are interchangeable terms.
The three main classifications of production costs are:
1)	Direct materials

2)	Direct labor

3)	Manufacturing (or factory) overhead
Direct materials (DM) and direct labor (DL) are usually simple to trace to individual units or products because direct materials and direct labor costs are directly and obviously part of the production process.
Therefore, the CMA exam does not put much emphasis on the determination of DM and DL. Rather, the
emphasis is on the allocation of overhead.
Overhead costs are production and operation costs that a company cannot trace to any specific product or
unit of a product. Because overhead costs are incurred and paid for by the company and are necessary for
the production process, it is essential that the company know what these costs are and allocate them to
the various products being manufactured. Allocation to products manufactured must occur so that the full
costs of production and operation are known in order to set the selling prices for the different products. If
a company does not take overhead costs into account when it determines its selling price for a product, it
runs a significant risk of pricing the product at a loss because the price the company charges may cover
the direct costs of production but not the indirect costs of production.
Furthermore, generally accepted accounting principles require the use of absorption costing for external
financial reporting. In absorption costing, all overhead costs associated with manufacturing a product become a part of the product’s inventoriable cost along with the direct costs. Therefore, all manufacturing
overhead costs must be allocated to the units produced.
The categories of costs included in factory overhead (OH) are:
1)	Indirect materials – materials not identifiable with a specific product or job, such as cleaning supplies, small or disposable tools, machine lubricant, and other supplies.

2)	Indirect labor – salaries and wages not directly attributable to a specific product or job, such as those of the plant superintendent, janitorial services, and quality control.

3)	General manufacturing overheads, such as facilities costs (factory rent, electricity, and utilities) and equipment costs, including depreciation and amortization on plant facilities and equipment.
Note: Remember that factory overhead and manufacturing overhead are interchangeable terms
that mean the same thing. Either may be used in a question.
Overheads may be fixed or variable or mixed.

•	Fixed overhead, like any fixed cost, does not change with changes in activity as long as the activity remains within the relevant range. Examples of fixed manufacturing overhead are factory rent, depreciation on production equipment, and the plant superintendent’s salary.

•	Variable overheads are costs that change as the level of production changes. Examples of variable manufacturing overheads are indirect materials and equipment maintenance.

•	Mixed overheads contain elements of both fixed and variable costs. Electricity is an example of a mixed overhead cost because electricity may be billed as a basic fixed fee that covers a certain number of kilowatts of usage per month, with usage over that allowance billed at a specified amount per kilowatt used. A mixed overhead cost could also be an allocation of overhead cost from a cost pool containing both fixed and variable overhead costs.
The ways in which a company can allocate overhead are numerous and limited only by the imagination of the accountant and, now, the computer programmer. However, for the CMA Exam, candidates need to be familiar with only the primary methods of overhead allocation. The primary methods are the traditional overhead allocation method and activity-based costing.
•	The traditional method of overhead allocation involves grouping manufacturing overhead costs into a cost pool¹² and allocating them to individual products based on a single cost driver,¹³ such as direct labor hours or machine hours. Traditional overhead allocation may involve the use of separate cost pools for fixed overhead and variable overhead, though fixed and variable overhead may also be combined into a single cost pool.

•	Activity-based costing (ABC) involves using multiple cost pools and multiple cost drivers to allocate overhead on the basis of cost drivers specific to each cost pool. Activity-based costing in its pure form does not conform to GAAP and thus cannot be used for external financial reporting. Product costs used in external financial reports must include all manufacturing costs and only manufacturing costs, whereas under pure activity-based costing, some manufacturing costs are excluded and some non-manufacturing costs are included in the cost pools. However, if activity-based costing is adapted so that all manufacturing overhead costs and only manufacturing overhead costs are allocated to products, ABC can be used for external financial reporting.
Regardless of the manner of allocation used, overhead allocation is simply a mathematical exercise of
distributing the overhead costs to the products that were produced using some sort of basis and
formula.
¹² A cost pool is a group of indirect costs that are being grouped together for allocation on the basis of some cost allocation base.

¹³ A cost driver is anything (it can be an activity, an event or a volume of something) that causes costs to be incurred each time the driver occurs.
Traditional (Standard) Allocation Method
Traditionally, manufacturing overhead costs have been allocated to the individual products based on
either the direct labor hours, machine hours, materials cost, units of production, weight of production or
some similar measure that is easy to measure and calculate. The measure used is called the activity base.
For example, if a company allocates factory overhead based on direct labor hours, for every hour of direct
labor allowed per unit of output (under standard costing) or used per unit of output (under any other type
of cost measurement system), a certain amount of factory overhead is allocated to, or applied to, each
unit actually produced. The determination of how much overhead is allocated per unit is covered below.
By summing the direct materials, direct labor, and allocated manufacturing overhead costs, a company
determines the total cost of producing each specific unit of product.
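As a simple illustration of that summation, the sketch below (in Python, with purely hypothetical figures) applies a single predetermined rate per direct labor hour and adds the applied overhead to direct materials and direct labor.

```python
# Hypothetical figures: a single predetermined rate of $8.00 per direct labor hour.
OVERHEAD_RATE_PER_DLH = 8.00

def total_unit_cost(direct_materials, direct_labor, dl_hours_per_unit):
    """Unit cost = DM + DL + overhead applied at the predetermined rate per DLH."""
    applied_overhead = dl_hours_per_unit * OVERHEAD_RATE_PER_DLH
    return direct_materials + direct_labor + applied_overhead

# A unit using $20 of materials, $15 of labor, and 1.5 direct labor hours:
print(total_unit_cost(20.00, 15.00, 1.5))   # 20 + 15 + 12 = 47.0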
Determining the Basis of Allocation
When choosing the basis of allocation (for example, direct labor hours or machine hours), the basis used
should closely reflect the reality of the way in which the costs are actually incurred. For example, in a
highly-automated company, direct labor would most likely not be a good allocation basis for factory overhead because labor would not be a large part of the production process. The allocation basis does not need
to be direct labor hours or machine hours, though those are the most common bases used. For example,
in a company that produces very large, heavy items (such as an appliance manufacturer), the best basis
on which to allocate overhead may be the weight of each product.
Plant-Wide versus Departmental Overhead Allocation
A company can choose to use plant-wide overhead allocation or departmental overhead allocation.
Plant-wide overhead allocation involves putting all of the plant-wide overhead costs into one cost pool
and then allocating the costs in that cost pool to products using one allocation basis, usually machine hours
or labor hours.
Alternatively, a company can choose to have a separate cost pool for each department that the products
pass through in production. This second method is called departmental overhead allocation. Each department’s overhead costs are put into a separate cost pool, and then the overhead is allocated according
to the cost basis that managers believe is best for that department.
Note: When process costing is being used and products pass through several different processes, or
departments, before becoming finished goods, departmental overhead allocation is essential so that
overhead costs can be applied to the products as production activities take place in each department.
In both plant-wide and departmental overhead allocation, fixed overhead costs can be segregated in a separate cost pool from variable overhead costs and the fixed and variable overheads allocated separately.
The fixed and variable overheads can be allocated using the same cost basis, or they can be allocated using
different cost bases. For planning purposes and in order to calculate fixed and variable overhead variances,
it is virtually essential to segregate fixed and variable overhead costs.
Note: A cost basis, or basis of cost allocation, is a measure of activity such as direct labor hours or
machine hours that is used to assign costs to cost objects. A cost object is a function, organizational
subdivision, contract, or other work unit for which cost information is desired and for which provision is
made to accumulate and measure the cost of processes, products, jobs, capital projects, and so forth.
For example, if Department A uses very little direct labor but a lot of machine time, Department A’s overhead costs would probably be allocated to products on the basis of machine hours. If Department B uses a
lot of direct labor and very little machine time, Department B’s overhead costs would probably be allocated
to products on the basis of direct labor hours. If Department C is responsible for painting the products,
Department C’s overhead costs might be allocated based on square footage or square meterage of the
© 2019 HOCK international, LLC. For personal use only by original purchaser. Resale prohibited.
79
D.3. Overhead Costs
CMA Part 1
painted products. A department that assembles products may allocate overhead costs based on the number
of parts in each product.
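The departmental approach described above can be sketched as follows. The departments, rates, and bases are hypothetical and are chosen only to mirror the examples in the preceding paragraph.

```python
# Hypothetical departmental rates, each with its own allocation base.
departmental_rates = {
    "machining": {"rate": 12.00, "base": "machine hours"},
    "assembly":  {"rate": 3.50,  "base": "number of parts"},
    "painting":  {"rate": 0.75,  "base": "square feet painted"},
}

# One product's usage of each department's allocation base.
product_usage = {"machining": 2.0, "assembly": 14, "painting": 6.0}

overhead_per_unit = sum(
    departmental_rates[dept]["rate"] * usage for dept, usage in product_usage.items()
)
print(overhead_per_unit)   # (2 x 12.00) + (14 x 3.50) + (6 x 0.75) = 77.5
```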
To best reflect the way that manufacturing overhead is incurred, departmental overhead allocation is
preferable to plant-wide overhead allocation. The greater the number of manufacturing overhead allocation
rates used, the more accurate or more meaningful the overhead allocation will be. However, departmental
overhead allocation requires a lot more administrative and accounting time than plant-wide allocation and
thus is more costly. The more bases used to allocate overhead, the more costs will be incurred to obtain
the needed information for the allocation. Therefore, a company needs to weigh the usefulness of having more than one overhead allocation basis against the cost of making the needed calculations for the additional bases.
Departmental overhead allocation would be chosen by a company’s management if it felt the benefit of the
additional information produced would be greater than the cost to produce the information. For example,
the additional information could be used to develop more accurate product costs for use in setting prices
and making other decisions.
Note: The only time that it may be appropriate to use only one overhead application rate for the factory is when production is limited to a single product or to different but very similar products.
Calculating the Manufacturing Overhead Allocation Rate
Once the method, or basis, of manufacturing overhead allocation is determined, the predetermined manufacturing overhead allocation rate is calculated. The predetermined rate is the amount of
manufacturing overhead that will be charged (allocated) to each unit of a product for each unit of the
allocation basis (direct labor hours, machine hours, and so on) used by that product during production.
The predetermined overhead rate may be a combined rate including both variable and fixed overheads; or
it may be calculated separately for variable overhead and fixed overhead and applied separately. Whichever
way it is done, the total overhead allocated to production will be the same if the same allocation base is
used for both fixed and variable overhead.
It is important to note that fixed overhead is applied to units produced as if it were variable overhead, even though fixed costs do not behave the same way variable costs behave. Actual variable costs
increase in total as production increases and decrease in total as production decreases. However, as long
as the production level remains within the relevant range, actual fixed manufacturing costs do not change
in total as production increases and decreases. Therefore, actual fixed manufacturing cost per unit (total
fixed manufacturing cost divided by the actual number of units produced) must change as the production
level increases and decreases.
Although actual fixed manufacturing overhead may not vary much in total from budgeted fixed manufacturing overhead, the variance between the amount of actual fixed manufacturing overhead incurred and
the amount of fixed manufacturing overhead applied to production can be significant because of the fact
that fixed overhead is applied to production the same way variable overhead is, on a per-unit basis, but it
is not incurred that way. Therefore, the fixed manufacturing overhead component of total overhead costs
creates a large part of the reported variances.
The rate used to allocate overhead is usually calculated at the beginning of the year, based on budgeted
overhead for the coming year and the budgeted level of activity for the coming year.
Unless material changes in actual overhead costs incurred during the year necessitate a change to the
predetermined rate, that rate (or those rates, if fixed and variable overheads are allocated separately) will
be used to allocate manufacturing overheads throughout the year. Because the manufacturing overhead
allocation rate is set before the production takes place, it must use budgeted, or expected, amounts. The
manufacturing overhead allocation rate is called the predetermined rate because it is calculated at the
beginning of the period.
Note: It is important to remember that the manufacturing overhead allocation rate is calculated at the
beginning of the year and then used throughout the year unless it becomes necessary to change it during
the year.
The predetermined overhead rate is calculated as follows:

	Predetermined Overhead Rate = Budgeted Monetary Amount of Manufacturing Overhead ÷ Budgeted Activity Level of the Allocation Base
The budgeted activity level of the allocation base is the number of budgeted direct labor hours, direct
labor cost, material cost, or machine hours—whatever is being used as the allocation basis—allowed (under
standard costing) for the budgeted output. The budgeted activity level will be discussed in greater detail
later in this explanation.
However, as noted above, the application rate should be reviewed periodically and adjusted if necessary so
that the amount applied is a reasonable approximation of the current overhead costs per unit.
Note: Since the predetermined rate is a calculated rate using Budgeted Manufacturing Overhead divided
by Budgeted Activity Level of the Allocation Base, the predetermined rate and the budgeted activity level
can easily be used to reverse the process and calculate the budgeted overhead amount that was used
to calculate the predetermined rate. Such an exercise will frequently be required, particularly for fixed
overhead.
Predetermined rate × Budgeted Activity (Production) Level = Budgeted Manufacturing Overhead
The above formula is particularly important to remember when working with fixed manufacturing overhead, since neither actual nor budgeted fixed overhead is affected in total by the number of
units actually produced or the amount of the allocation base actually used as long as production
remains within the relevant range.
Only the amount of fixed overhead applied is affected by the actual production activity level.
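A minimal sketch of both directions of that calculation follows. The figures ($600,000 of budgeted fixed overhead and 1,200,000 budgeted machine hours) are hypothetical.

```python
def predetermined_rate(budgeted_overhead, budgeted_activity):
    """Budgeted manufacturing overhead divided by budgeted activity level of the allocation base."""
    return budgeted_overhead / budgeted_activity

def budgeted_overhead(rate, budgeted_activity):
    """Reverse the calculation: predetermined rate x budgeted activity level."""
    return rate * budgeted_activity

rate = predetermined_rate(600_000, 1_200_000)   # $0.50 per machine hour
print(rate)
print(budgeted_overhead(rate, 1_200_000))       # recovers the $600,000 budgeted overhead
```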
It is critical to use the budgeted amount of manufacturing overhead for the numerator and the budgeted activity level of the allocation base for the denominator in calculating the predetermined overhead
allocation rate. Do not use budgeted for one and actual for the other. The budgeted amount of manufacturing overhead is the cost the company expects to incur and the budgeted activity level is the number of
units of the allocation base allowed for the expected production during the upcoming year. The company
determines the allocation rate at the beginning of the year and uses it for the entire year for the application
of the manufacturing overhead costs unless something changes that requires the allocation rate to be
adjusted.
Note: The calculation of the allocation rate can also be done on a weekly or a monthly basis. In such a
situation, the process would be exactly the same except that the budgeted overhead cost and activity
level used would be for the upcoming week or month (or whatever time period is used).
Clearly, the budgeted rate is not going to be the actual rate that occurs during the year. However, in order
to determine the cost of goods produced throughout the year so their cost can flow to inventory when
produced and then to cost of goods sold when they are sold, an estimated rate must be used. A company
cannot wait until the end of the year to determine what its cost of production was. As long as the rate is
reviewed periodically and adjusted when necessary, however, variances can be minimized.
Determining the Level of Activity
In relation to the allocation rate, the company must decide what level of activity to use for its budgeted
activity level of the allocation base in the denominator of the overhead predetermined rate calculation. The
level of activity used is the number of machine hours, direct labor hours, or whatever other activity base
the company plans to use during the year. The activity level is estimated in advance. As the activity level
is one of the two figures used in the determination of the manufacturing overhead rate, it will greatly impact
the allocation rate.
A company has several choices for the activity level to use in calculating a predetermined allocation rate
for overhead. U.S. GAAP, in ASC 330, specifically prescribes that normal capacity should be used for
external financial reporting. The other activity levels can be used for internal reporting and decision-making,
however.
The choices, the impact of using each option, and what each measure is best used for by a company are summarized below.

Normal capacity utilization
•	What it is: The level of activity that will be achieved in the long run, taking into account seasonal changes in the business as well as cyclical changes.
•	Impact of using it: Normal capacity utilization is the level of activity that will satisfy average customer demand over a long-term period such as 2 to 3 years.
•	Best used for: Long-term planning. This is required by U.S. GAAP.

Practical (or currently attainable) capacity
•	What it is: The theoretical activity level reduced by allowances for unavoidable interruptions such as shutdowns for holidays or scheduled maintenance but not decreased for any expected decrease in sales demand.
•	Impact of using it: The practical capacity basis is greater than the level that will be achieved and will result in an under-application of manufacturing overhead.
•	Best used for: Pricing decisions.

Master budget capacity utilization (expected actual capacity utilization)
•	What it is: The amount of output actually expected during the next budget period based on expected demand.
•	Impact of using it: Results in a different overhead rate for each budget period because of increases or decreases in planned production due to expected increases or decreases in demand.
•	Best used for: Developing the master budget; current performance measurement.

Theoretical, or ideal, capacity
•	What it is: The level of activity that will occur if the company produces at its absolute most efficient level at all times.
•	Impact of using it: A company will not be able to achieve the theoretical level of activity, and if that level is used to calculate the overhead application rate, manufacturing overhead will be under-applied because the resulting application rate will be too low.
•	Best used for: Has very little use in practical situations, because it will not be attained.
Example: RedHawks Co. produces bookshelves for shipment to distributors. RedHawks’ fixed manufacturing overheads for the coming year are expected to be $250,000. In a perfect situation, RedHawks
has the capacity to produce 40,000 bookshelves. During the current year, RedHawks produced 38,000
bookshelves, the most in company history. Management thinks this high production level was attributable to a performance bonus that was in place for management during the current year, but that bonus
will not be repeated next year. For the seven years prior to the current year, RedHawks produced an
average of 34,000 bookshelves per year, with production always between 32,500 and 35,500. Due to
an expected decrease in demand next year, the company expects to produce 36,000 bookshelves next
year.
The CFO of RedHawks is trying to determine the cost to produce each bookshelf in order to determine
the price to charge. The CFO knows that a portion of the fixed manufacturing overhead must be allocated
to each of the units. Below are the calculations for the fixed overhead allocation rate using the four
different levels of production.
Calculating the Predetermined Rate
1) If RedHawks uses theoretical capacity, the fixed overhead allocation rate for next year will be
$6.25 per unit ($250,000 ÷ 40,000 units).
2) If the fixed manufacturing overhead allocation rate is determined using the current year’s performance as the practical capacity, the allocation rate will be $6.58 per unit ($250,000 ÷ 38,000
units).
3) If the master budget capacity is used in calculating the predetermined fixed overhead allocation
rate, the rate will be $6.94 per unit ($250,000 ÷ 36,000 units).
4) Finally, if RedHawks uses what had been the normal capacity prior to the current year, the allocation rate per unit would be $7.35 ($250,000 ÷ 34,000 units).
Allocation of Fixed Manufacturing Overhead Under the Traditional Method
The year is now complete. RedHawks actually produced 35,750 units during the year. Fixed overhead
was allocated per unit. The allocation was made by multiplying the actual number of units produced by
the predetermined rate per unit. Under each of the four levels of capacity, a different amount of fixed
manufacturing overhead would have been allocated to the products.
1) Using theoretical capacity, RedHawks would have allocated $223,438 ($6.25 × 35,750 units) to the
units produced.
2) Using practical capacity, it would have allocated $235,235 ($6.58 × 35,750 units).
3) Using master budget capacity, it would have allocated $248,105 ($6.94 × 35,750 units).
4) Using normal capacity, it would have allocated $262,763 ($7.35 × 35,750 units).
Summary Analysis
Because the actual production was different from any of the bases that could be used, the amount of
fixed overhead allocated would not have been exactly correct no matter which capacity level was chosen.
However, the amount of overhead allocated under master budget capacity would be the closest to the
expected amount of $250,000.
If RedHawks had used the theoretical base at the beginning of the year (40,000) in its calculation of cost
per unit, the company would have run the risk of underpricing the bookshelves. The fixed overhead rate
per unit would have been too low and not enough overhead would have been allocated to the bookshelves
that were made. If management had used normal capacity, too much overhead would have been allocated to each bookshelf, and management might have overpriced the bookshelves.
If the company used master budget capacity (36,000), the cost per unit, and therefore the price charged, would fluctuate from one year to the next as the expected production level fluctuated from budget period to budget period based on anticipated demand.
Using practical capacity (38,000) would have resulted in an allocation of fixed overhead that would be
lower than that under normal or master budget capacity. The total cost per unit would not have increased
because of a lower expected demand during the coming year, and the company would not have increased
the price next year to reflect the higher cost, possibly avoiding further decreases in demand.
For pricing decisions, practical capacity is the best capacity level to use for calculating the
fixed overhead allocation rate because it can avoid the downward demand spiral caused by cost
increases when production falls, which leads to price increases and further reductions in demand and
further production decreases, and so on.
The result of using different capacity levels for overhead allocation will be discussed in more detail later
under the topic Capacity Level and Management Decisions.
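The RedHawks figures above can be reproduced with a short sketch. The per-unit rates are rounded to the cent as in the example, so the applied amounts may differ from those shown by a dollar of rounding.

```python
# Figures from the RedHawks example above.
budgeted_fixed_oh = 250_000
capacity_levels = {
    "theoretical":   40_000,
    "practical":     38_000,
    "master budget": 36_000,
    "normal":        34_000,
}
actual_units = 35_750

for name, capacity_units in capacity_levels.items():
    rate = round(budgeted_fixed_oh / capacity_units, 2)   # predetermined fixed OH rate per unit
    applied = rate * actual_units                         # fixed OH applied to actual production
    print(f"{name:>13}: ${rate:.2f}/unit, applied ${applied:,.0f}")
```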
Allocating Manufacturing Overhead to the Units
Once the overhead allocation rate has been determined, the company can allocate overhead to the individual units produced. Overhead is allocated by multiplying the predetermined rate by the number of units
of the allocation basis that were either allowed to be used (under standard costing) or were actually used
(under other costing methods) in the production of each unit.
Overhead allocation is a very simple mathematical operation, but it becomes a little more involved because
management must decide which cost allocation method to use: standard, normal, or actual. The three
methods were discussed in the topic of Costing Systems, so they will just be reviewed quickly here as they
apply to overhead application.
                      Overhead Application Rate       Overhead Allocation Base
Standard Costing      Predetermined Standard Rate     Standard Amount of Allocation Base Allowed for Actual Production
Normal Costing        Estimated Normalized Rate       Amount of Allocation Base Actually Used for Actual Production
Actual Costing        Actual Rate                     Amount of Allocation Base Actually Used for Actual Production
Example: A company that allocates overhead on the basis of machine hours has the following budgeted
and actual results for 20XX:
                               Budgeted       Actual
Overhead cost                  $250,000     $288,000
Production volume (units)       100,000      125,000
Total machine hours             200,000      240,000
How much overhead would be allocated to the production under standard, normal and actual costing?
                                                  Standard       Normal       Actual
Predetermined OH Rate: $250,000 ÷ 200,000            $1.25        $1.25
Actual OH Rate: $288,000 ÷ 240,000                                             $1.20
Allocation Base:
   Standard number of machine hours                250,000¹
   Actual number of machine hours                               240,000      240,000
OH applied to production                          $312,500     $300,000     $288,000

¹ Two hours are allowed for each unit, calculated as 200,000 total machine hours budgeted divided by budgeted production volume of 100,000 units. The standard number of machine hours is the actual production volume of 125,000 multiplied by 2 machine hours allowed per unit.
Why would the company not want to use actual costing, since the amount of overhead applied under
that system is correct whereas the amounts applied under the other systems are not accurate?
The total actual costs for the year are not known until sometime after the end of the reporting period
because of the delay in receiving and recording invoices. In most cases, overhead needs to be applied
to production as the production takes place. It cannot wait until the year has been closed out and the
total actual costs are known. Therefore, an estimated rate is used throughout the year. The difference
between the actual amount of overhead incurred and the amount applied (called over-applied or under-applied overhead) is closed out after the total costs for the year are known by either debiting or crediting
cost of goods sold for the total amount of the variance or by distributing the variance between cost of
goods sold and inventories. See the following topic for further information.
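A sketch of the standard/normal/actual comparison above, using the figures from the example:

```python
# Figures from the example above; overhead is allocated on machine hours (MH).
budgeted_oh, budgeted_mh = 250_000, 200_000
actual_oh, actual_mh = 288_000, 240_000
budgeted_units, actual_units = 100_000, 125_000

predetermined_rate = budgeted_oh / budgeted_mh                      # $1.25 per MH
actual_rate = actual_oh / actual_mh                                 # $1.20 per MH
std_hours_allowed = actual_units * (budgeted_mh / budgeted_units)   # 125,000 units x 2 MH = 250,000 MH

applied = {
    "standard": predetermined_rate * std_hours_allowed,   # $312,500
    "normal":   predetermined_rate * actual_mh,           # $300,000
    "actual":   actual_rate * actual_mh,                  # $288,000
}
print(applied)
```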
Fixed Overhead Application to Production
For direct materials, direct labor, and variable overhead, the amount of standard cost applied to production
is the same as the flexible budget amount of cost allowed for the actual production. For example, the
amount of variable overhead applied to production is the same as the flexible budget amount of variable
overhead. If variable overhead is under-applied, it means that the actual cost incurred was greater than
the amount applied to production and also greater than the flexible budget by the same amount.
However, fixed overhead is different. Fixed overhead does not change in total as the level of production
changes; however, fixed overhead is applied to production as if it were a variable cost that does
change in total as the level of production changes. Therefore, the amount of fixed overhead applied
to production is not the same as the amount of fixed cost in the flexible budget, as is the case with variable
overhead and other variable manufacturing costs.
The amount of fixed cost in the flexible budget is the same as the amount of fixed cost in the static budget
because the amount of budgeted fixed cost does not change with changes in production level as long as
the activity remains within the relevant range.
Example: Total manufacturing overhead is budgeted to be $960,000 for the budget year: $360,000 for
variable manufacturing overhead and $600,000 for fixed manufacturing overhead. Output, based on
normal capacity, is budgeted at 600,000 units for the year. Overhead is allocated based on machine
hours. Two machine hours are allowed per unit, so the budgeted activity level of the allocation base is
1,200,000 machine hours per year (600,000 budgeted units × 2 hours per unit).
The standard variable overhead allocation rate is $360,000 ÷ 1,200,000 machine hours, or $0.30 per
MH. Since two machine hours are allowed per unit, the standard variable overhead allowed per unit is
$0.30 × 2, or $0.60 per unit.
The standard fixed overhead allocation rate is $600,000 ÷ 1,200,000 machine hours, or $0.50 per MH.
Since two machine hours are allowed per unit, the standard fixed overhead allowed per unit is $0.50 ×
2, or $1.00 per unit.
The actual number of units produced during the first quarter was 48,000 units in January, 52,000 in
February, and 58,000 in March, or 158,000 units for the quarter. Actual fixed overhead incurred was the
same as budgeted fixed overhead, or $50,000 per month ($600,000 ÷ 12). The total variable and fixed
overhead applied to production during each month of the first quarter, compared with the actual costs
incurred, are:
              Actual Variable        Variable Overhead        Actual Fixed          Fixed Overhead
              Overhead Incurred      Applied @ $0.60/Unit     Overhead Incurred     Applied @ $1.00/Unit
January            $29,500                $28,800                  $50,000               $48,000
February            31,000                 31,200                   50,000                52,000
March               35,000                 34,800                   50,000                58,000
The amounts applied can be calculated either by multiplying the actual number of units produced by the
allocation rate per unit ($0.60 per unit for variable overhead and $1.00 per unit for fixed overhead) or
by multiplying the number of machine hours allowed for the actual number of units produced (2 hours
per unit × the actual production) by the allocation rate per machine hour ($0.30 per MH for variable
overhead and $0.50 per MH for fixed overhead).
The allocation of variable overhead for January has been calculated below using both methods to demonstrate that the result is the same:
48,000 units produced × $0.60 per unit = $28,800.
48,000 units produced × 2 machine hours allowed per unit × $0.30 per machine hour = $28,800.
Variable overhead is under-applied by $700 for January ($29,500 − $28,800) and fixed overhead is
under-applied by $2,000 for January ($50,000 − $48,000). When production increases to 52,000 in
February, variable overhead is over-applied by $200 while fixed overhead is over-applied by $2,000.
The larger variances in the application of fixed overhead versus those of variable overhead are due to
the fact that fixed overhead is applied to production as though it were a variable cost even
though it is not a variable cost. Because of that, variances between actual production volume and
planned production volume will have a larger effect on fixed overhead manufacturing cost variances than
they will have on variable overhead manufacturing cost variances.
If actual production continues to exceed budgeted production, the standard application rate for fixed
overhead should be decreased so that fixed overhead is not over-applied.
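The monthly application in the example above can also be sketched as follows; a positive variance is an under-applied amount and a negative variance is an over-applied amount.

```python
# Figures from the example above: $0.60 variable and $1.00 fixed overhead applied per unit,
# and actual fixed overhead of $50,000 per month.
actual_variable_oh = {"January": 29_500, "February": 31_000, "March": 35_000}
units_produced     = {"January": 48_000, "February": 52_000, "March": 58_000}
VAR_RATE, FIXED_RATE, ACTUAL_FIXED_OH = 0.60, 1.00, 50_000

for month, units in units_produced.items():
    variable_applied = units * VAR_RATE
    fixed_applied = units * FIXED_RATE
    variable_variance = actual_variable_oh[month] - variable_applied   # positive = under-applied
    fixed_variance = ACTUAL_FIXED_OH - fixed_applied                   # positive = under-applied
    print(month, variable_applied, variable_variance, fixed_applied, fixed_variance)
```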
Note: The difference in cost behavior between fixed and variable manufacturing overhead costs is very
important to understand.
The Process of Accounting for Factory Overhead
Factory overhead includes all costs, both fixed and variable, that cannot be traced to the production of a
specific unit or group of units. Factory overhead includes supervisory and other indirect salaries and wages,
depreciation on production facilities, indirect materials used such as cleaning rags, indirect facility costs
such as utilities, and an almost limitless list of other costs.
As factory overhead costs are actually incurred, the actual incurred costs are debited to a factory overhead (OH) control account with the following journal entry:
Dr	Factory overhead control ............................................................... X
	Cr	Cash (or accounts payable) ......................................................... X
The factory overhead control account is used to accumulate (collect) all the actual factory overhead costs
that the company incurs during the year. If the company has several cost pools, for instance for variable
overhead and fixed overhead in each of its various departments, it will have a separate factory overhead
control account for each cost pool. It is from this account that overhead will be allocated to work-in-process
and then to finished goods.
As each unit is produced, some of the overhead cost that has accumulated in the factory overhead control
account is transferred to the work-in-process (WIP) account with the following journal entry, using the
calculated, predetermined overhead rate and the amount of the allocation base allowed for the actual
output (assuming standard costing is being used):
Dr	WIP inventory ............................................................................... X
	Cr	Factory overhead control ............................................................ X
The credit may instead be to an account called factory overhead applied, simply to keep the debits in
one account and the credits in a different account. The factory overhead applied account follows the factory
overhead control account in the chart of accounts and carries a credit balance. The net of the balances in
the two accounts at any point in time represents under-applied or over-applied overhead.
The work-in-process inventory account is one of the inventory accounts for the company (the others are
finished goods and raw materials). The process of debiting WIP inventory and crediting either factory OH
control or factory OH applied is called applying the overhead to production, and when the overhead is
transferred into the WIP account, it becomes applied.
Thus, the factory OH control account receives the debits for the actual costs the company incurs, and then
the costs are allocated out to the WIP account to apply them to the work-in-process inventory using the
predetermined overhead application rate. If the credits used to apply the overhead are made directly to the
factory OH control account, graphically it looks like the following:
[Diagram: actual overhead costs are debited to the Factory Overhead Control account, and costs are then applied from that account to the Work-in-Process Inventory account at the predetermined manufacturing OH rate × the quantity of the allocation base.]
If a separate account, factory overhead applied, is used to apply the costs to production, it would look like
this:
[Diagram: actual overhead costs are debited to the Factory Overhead Control account (debit balance), the Factory Overhead Applied account (credit balance) is credited as costs are applied, and the applied costs flow to the Work-in-Process Inventory account at the predetermined manufacturing OH rate × the quantity of the allocation base.]
The amount of overhead applied to production from the factory OH control account will not be exactly the
same amount as the actual costs that were debited to the factory OH control account when the costs were
incurred.
•	If the amount of overhead applied to units produced during a period is less than the actual incurred amounts that were debited to the factory OH control account, the overhead is under-applied. When overhead is under-applied, each unit received a lower amount of overhead than it should have.

•	If the amount of overhead applied to units produced during a period is greater than the actual incurred amounts debited to the factory OH control account, overhead is over-applied. When overhead is over-applied, each unit received a greater amount of overhead than it should have.
The amount of the difference between the amount incurred and the amount applied is a variance, and it
remains in the factory OH control account until the period is closed. The process of closing out variances
will be explained in more detail in the pages that follow.
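The mechanics of the control and applied accounts can be sketched as a small ledger. The amounts and the $0.50-per-machine-hour rate below are hypothetical.

```python
class FactoryOverheadLedger:
    """Minimal sketch of the factory OH control and applied accounts (hypothetical figures)."""

    def __init__(self):
        self.control = 0.0   # debit balance: actual overhead incurred
        self.applied = 0.0   # credit balance: overhead applied to WIP

    def incur(self, amount):
        # Dr Factory overhead control / Cr Cash (or accounts payable)
        self.control += amount

    def apply(self, predetermined_rate, allocation_base_quantity):
        # Dr WIP inventory / Cr Factory overhead applied
        self.applied += predetermined_rate * allocation_base_quantity

    def net_balance(self):
        # Positive net = under-applied overhead; negative net = over-applied overhead.
        return self.control - self.applied

ledger = FactoryOverheadLedger()
ledger.incur(13_400)           # actual overhead costs for the period
ledger.apply(0.50, 25_000)     # 25,000 machine hours at $0.50 per MH = $12,500 applied
print(ledger.net_balance())    # 900.0 -> overhead is under-applied by $900
```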
After the overhead costs are moved into the WIP account, they will be in one of three places in the financial
statements at the end of the year. Where the cost of a particular unit is reported in the financial statements
depends on what happened to that unit by the end of the year. The three possible locations in the financial
statements are:
1)	In ending WIP inventory on the balance sheet if the item has not yet been completed at the end of the period.

2)	In finished goods inventory on the balance sheet if the item has been completed but not sold.

3)	As an expense in cost of goods sold on the income statement if the item has been sold.
Over-Applied and Under-Applied Manufacturing Overhead
Chances are very good that the actual costs incurred and the actual level of activity during the period
will be different from the estimates made at the beginning of the year. Since the costs are being allocated
using the budgeted costs and budgeted usage of the allocation base, if the actual costs or usage are different from the budgeted, the allocation will be, in essence, “incorrect.” As a result, the factory overhead
control account (or the net of the two accounts, if two separate accounts are used) will have a balance in
it at the end of the year because the credits for the allocated costs will not be equal to the debits representing the incurred costs. The remaining balance may be a positive (debit) or negative (credit) balance,
but whichever it is, it needs to be eliminated from the account (or accounts).
•	If the balance is a debit balance, it means that factory overhead was under-applied to the production. In other words, the amount of factory overhead that was applied (credited to factory OH control or to factory OH applied) during the period was less than the actual factory overhead incurred (debited to factory OH control) during the period. If two accounts are being used, the net of the two accounts will be a debit balance.

•	If the balance is a credit balance (negative balance), it means factory overhead was over-applied to the production. In other words, the amount of factory overhead that was applied (credited to factory OH control or to factory OH applied) during the period was greater than the actual factory overhead costs incurred (debited to the account) during the period. If two accounts are being used, the net of the two accounts will be a credit balance.
The amount of over- or under-applied factory overhead is calculated as follows:

	   Actual costs incurred
	−  Factory overhead applied during the period
	=  Under- (over-) applied factory overhead
Whether factory overhead has been under-applied or over-applied, the company needs to correct the imbalance, because it is not reasonable to have any balance in the factory OH control account (or the net of
the two accounts) at the end of the period. Therefore, the remaining balance (the amount over- or under-applied) in the factory overhead control account must be removed from the account(s) as part of the period-end closing entries.
The way this remaining over- or under-applied overhead is closed out depends on whether the balance is
immaterial or material.
Immaterial Amount
If an under-applied amount is immaterial it can simply be debited to COGS in that period. Cost of goods
sold is debited and the factory overhead control account is credited for the amount of the debit balance in
the factory overhead control account to reduce the balance in the factory overhead control account to zero.
COGS for the period is increased, and operating income for the period is decreased.
If an over-applied amount is immaterial, it can be credited to COGS, reducing COGS and increasing the
operating income for the period. The factory overhead control account is debited for the amount of the
credit balance in the account to reduce it to zero and cost of goods sold is credited for the same amount.
Material Amount
If the amount of overhead that was over- or under-applied is material, it must be distributed among the
WIP inventory, finished goods inventory, and cost of goods sold accounts. The variances are prorated according to the amount of overhead included in each that was allocated to the current period’s
production, not according to the ending balances in each.
Note: The pro-ration of under- or over-applied overhead should be done on the basis of the
overhead allocated to production during the current period only, not on the basis of the balances
in the inventories and cost of goods sold accounts at year end. Information on the amount of overhead
allocated to production during the current year should be available in the accounting system.
An under-applied amount will be debited to the inventories and cost of goods sold accounts while an over-applied amount will be credited to those accounts. The opposing credit or debit to the factory overhead
control account will bring the balance in that account to zero as of the end of the period.
Whenever the variances (under-applied or over-applied amounts) are allocated proportionately among inventories and cost of goods sold on the basis of the costs applied during the period, the cost per unit for
those costs will be the same as if the actual costs per unit instead of the budgeted costs per unit had been
allocated to production during the year.
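A sketch of that proration, using hypothetical amounts of overhead applied during the period to WIP, finished goods, and cost of goods sold:

```python
# Hypothetical: $30,000 of under-applied overhead is considered material, so it is prorated
# based on the overhead applied to production during the period, not on ending balances.
under_applied = 30_000
overhead_applied_this_period = {
    "WIP inventory": 40_000,
    "Finished goods": 60_000,
    "Cost of goods sold": 100_000,
}

total_applied = sum(overhead_applied_this_period.values())
proration = {
    account: under_applied * applied / total_applied
    for account, applied in overhead_applied_this_period.items()
}
print(proration)   # WIP $6,000, Finished goods $9,000, COGS $15,000 (debits); OH control is credited $30,000
```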
Accumulating Overhead Variances
Instead of immediately charging over- or under-applied overhead amounts to COGS or to inventories and
COGS each month, companies may accumulate the over- or under-applied overhead (also called the variance) in variance accounts throughout the year by closing out the over- or under-applied monthly amount
to the variance accounts.
An advantage of waiting until year-end to resolve the variances is that some of the variances may reverse
by the end of the year. That is particularly true if, based on year-to-date variances, the standard overhead
application rates are adjusted mid-year.
Note: In questions about overhead allocation on the exam, candidates should make certain to identify
the allocation base. It may be machine hours, direct labor hours, direct labor costs, weight, size, or
something similar. It is also possible that the company in a question will use more than one overhead
allocation base for the same cost pool. If the company uses more than one overhead allocation base,
the mathematics of overhead allocation will need to be performed several times and then summed to
find the total to be allocated to production. Doing this does not make the question any more difficult; it
just takes longer to do since the same operation needs to be performed more than once.
Question 31: On January 1, 20X5, the first year of operations, Medina Co. had the following annual
budget.
Units produced                          20,000

Sales                                 $120,000
Minus: Total variable expenses          70,000
       Total fixed expenses             25,000
Operating income                      $ 25,000

Factory overhead:
     Variable                          $40,000
     Fixed                              20,000
At the end of the first year, no units were in progress and the actual total factory overhead incurred
was $45,000. There was also $3,000 of over-applied factory overhead. Factory overhead was allocated
on a basis of budgeted units of production. How many units did Medina produce this year?
a)	14,000

b)	16,000

c)	20,000

d)	23,333
(HOCK)
Activity-Based Costing
Activity-based costing (ABC) is another way of allocating overhead costs to products, and in ABC the method
of allocation is based on cost drivers.¹⁴ As with the other overhead allocation methods, ABC is a mathematical process. It requires identification of the costs to be allocated, followed by some manner of allocating
them to departments, processes, products, or other cost objects. ABC can be used in a variety of situations
and can be applied to both manufacturing and nonmanufacturing overheads. It can also be used in service
businesses.
The Institute of Management Accountants defines activity-based costing as
“a methodology that measures the cost and performance of activities, resources, and cost
objects based on their use. ABC recognizes the causal relationships of cost drivers to
activities.”¹⁵
Activity-based costing is a costing system that focuses on individual activities as the fundamental cost
objects to determine what the activities cost.
•	An activity is an event, task or unit of work with a specified purpose. Examples of activities are designing products, setting up machines, operating machines, making orders or distributing products.

•	A cost object is anything for which costs are accumulated for managerial purposes. Examples of cost objects are a specific job, a product line, a market, or certain customers.
¹⁴ Direct costs such as direct materials and direct labor can easily be traced to products, so activity-based costing focuses on allocating indirect, or overhead, costs to departments, processes, products, or other cost objects.

¹⁵ Institute of Management Accountants, Implementing Activity-Based Management: Avoiding the Pitfalls (Montvale, NJ: Statements on Management Accounting, Strategic Cost Management, 1998), 2.
•	A cost driver is anything (it can be an activity, an event, or a volume of some substance) that causes costs to be incurred each time the driver occurs. Cost drivers can be structural, organizational, or executional.
o	Structural cost drivers relate to the actual structure of the company’s operations and the complexity of the technologies the company uses. A more complex working environment leads to higher structural costs. Decisions to offer a new service or to open a new retail location and, if so, what size of store to open are examples of structural cost drivers.

o	Organizational cost drivers relate to the organization of activities and the delegation of authorities. Authorizing employees to make decisions is an organizational cost driver, for example granting purchasing authority up to a specific amount to a purchasing agent. Any purchases above that amount require a manager’s approval.

o	Executional, or activity, cost drivers relate to the actual processes performed. The cost of executing activities is determined by the company’s effective use of staff and processes used. Examples of executional cost drivers are set-ups, moving, number of parts used in assembling a product, casting, packaging, or handling. In a service business, a service call is an activity cost driver.
Traditional Costing versus ABC
Differences between traditional costing and activity-based costing include:
Allocations Based On Different Things
Traditional costing systems allocate costs according to general usage of resources, such as machine hours
or direct labor hours. The resources on which the allocations are based may or may not have a connection
with the costs being allocated.
With ABC, the cost allocations are instead based on activities performed and the costs of those activities.
ABC is much more detailed than traditional costing, because it uses many more cost pools and each cost
pool has its own cost driver.
Costs Attached to Low-Volume Products
The use of activity-based costing can result in greater per-unit costs for products produced in low volume
relative to other products than would be reported under traditional costing.
If a product is produced in low volume, it will require fewer resources such as machine hours or direct labor
hours used as cost drivers in traditional overhead allocation than a product that is produced in high volume.
Therefore, under traditional overhead costing, that low-volume product would be allocated a small amount
of total overhead costs. However, a low-volume product may require just as much time and cost per
production run as a high-volume product.
One example is product setups. Production setup time is the same for a low-volume production run as it is
for a high-volume production run. If the cost of product setups is included in total overhead and is allocated
according to machine hours or direct labor hours as in traditional costing, not much product setup cost will
be allocated to the low-volume product because the volume of products produced and the cost drivers used
to produce them are low relative to other products. However, if the cost of product setups is segregated
from other overhead costs, as in ABC, and is allocated according to how many product setups are done for
each product instead of how many units of each product are produced and the machine hours or direct
labor hours per unit, then a more realistic amount of cost for product setups will be allocated to the low-volume product. The amount of product setup cost allocated to the low-volume product will probably be
higher than it would be under traditional costing, so cost per unit allocated to the low-volume product will
probably be higher under ABC than under traditional costing.
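The setup-cost illustration above can be sketched with hypothetical numbers to show why the low-volume product picks up more setup cost per unit under ABC than under a machine-hour allocation:

```python
# Hypothetical setup-cost pool of $90,000 shared by a high-volume and a low-volume product.
setup_cost_pool = 90_000
products = {
    "high-volume": {"machine_hours": 18_000, "setups": 10, "units": 90_000},
    "low-volume":  {"machine_hours":  2_000, "setups": 10, "units": 10_000},
}

total_mh = sum(p["machine_hours"] for p in products.values())
total_setups = sum(p["setups"] for p in products.values())

for name, p in products.items():
    traditional = setup_cost_pool * p["machine_hours"] / total_mh   # allocated on machine hours
    abc = setup_cost_pool * p["setups"] / total_setups              # allocated on number of setups
    print(name,
          round(traditional / p["units"], 2),   # setup cost per unit, traditional: $0.90 for both products
          round(abc / p["units"], 2))           # setup cost per unit, ABC: $0.50 vs. $4.50
```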
Note: ABC is becoming more of a necessity for many companies, since traditional systems may use
direct labor to allocate overheads and direct labor is becoming a smaller part of the overall production
process. Essentially, activity-based costing is very similar to the standard method of overhead
allocation, except for the fact that there are many cost pools, each with a different cost driver, and the
cost drivers should have a direct relationship to the incurrence of costs by an individual product. Therefore, in the application of ABC, many different overhead allocations are performed. The main difference
is the determination of the allocation bases. In a numerical ABC exam question, candidates should be
prepared to make three or four allocations of different cost pools to the final product. Candidates may
also need to determine the appropriate allocation base for each of the cost pools.
The ABC Process
Setting up an ABC system is more difficult, time-consuming, and costly than setting up a traditional system.
It involves the following:

1)	Identification of activities

2)	Identification of cost drivers

3)	Identification of cost pools
1) Identification of Activities
The company must first analyze the production process and identify the various activities that help to
explain why the company incurs the costs it considers to be indirect costs.
As part of the process of analyzing the production process, the company may identify some non-value-adding activities. Non-value-adding activities are activities that do not add any value for the end consumer, and the company should try to reduce or eliminate them. Reducing non-value-adding costs provides
an additional benefit to the company (in addition to the more accurate costing of products that ABC provides) because it can lead to a reduction in the cost of production. In turn, the cost reduction can enable
the company to either reduce the sales price or to recognize more gross profit from the sale of each unit.
Value-adding activities are the opposite of non-value-adding activities. As the name suggests, value-adding activities are activities (and their costs) that add value to the customer. Value-adding activities add something to the product that customers are willing to pay for. Even though these activities are value-adding activities, they must be monitored to make certain that their costs are not excessive.
Note: This identification of non-value-adding activities is potentially one of the greatest benefits a company can receive from implementing an ABC system. Some companies decide to analyze their costs as value-adding or non-value-adding activities even if they choose not to fully implement ABC.
2) Identification of Cost Drivers and Cost Pools
The company must evaluate all of the tasks performed and select the activities that will form the basis of
its ABC system. Each activity or cost driver identified requires the company to keep records related to that
activity and increases the complexity of the ABC system. Therefore, an activity-based costing system that
includes many different activities can be overly detailed and difficult to use. On the other hand, an activity-based costing system with too few activities may not be able to measure cause-and-effect relationships
adequately between the cost drivers and the indirect costs.
Activities should be selected that account for a majority of indirect costs, and management should combine
activities that have the same cause-and-effect relationship with the same cost driver or drivers into a single
cost pool. Cost pools should contain costs that are homogeneous. In other words, all of the costs in a cost
pool should have the same or a similar cause-and-effect or benefits-received relationship to a cost driver
or drivers. Identification of the cost pools and identification of the cost driver or drivers for each cost pool
should therefore occur simultaneously.
A cost pool is similar to the traditional factory overhead control account, except the company maintains
many of them, each with its own cost driver or drivers. The cost pools are used to collect the costs associated with the various activities (drivers) that incur the costs, and then the costs are allocated as the drivers
are used or consumed in the production of the product.
For example, management may decide to combine equipment maintenance, machine operations, and process control into a single cost pool because it determines that machine hours are the most appropriate cost driver for all of those costs and that machine hours have the same cause-and-effect relationship with all of them.
Note: On the CMA Exam, candidates will probably not need to identify the cost drivers or cost pools in
a large question. The cost drivers and cost pools will likely be provided in the question and candidates
will simply need to use the provided information to determine the amount of overhead to be allocated
to any product. In a smaller question, however, candidates may need to determine the appropriate driver
for each cost pool from among those given. Usually, though, the cost drivers and cost pools are fairly
obvious, and candidates can determine what cost driver should be used for what cost pool by simply
thinking about what would cause costs to be incurred.
3) Calculation of the Allocation Rate and Allocation
After the activities have been identified and the cost pools and cost drivers determined, an allocation rate
is calculated for each of the cost allocation bases. The identified cost drivers should be used as the
allocation bases for each cost pool.
The calculation of the allocation rate is done the same way as it is done under the traditional method:
expected costs in each cost pool are divided by the expected usage of the cost allocation base (cost driver)
for the cost pool. The costs in the cost pools are then allocated to the products based upon the usage of
the cost allocation base/driver for each product. This process of cost allocation under ABC is similar to the
accumulation of actual manufacturing overhead costs and the allocation of the costs done under the traditional method of overhead allocation. The difference is that ABC involves the use of many cost pools and
cost drivers, so the allocation calculations need to be performed many times.
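To make the allocation mechanics concrete, the following is a minimal Python sketch of a multi-pool ABC allocation. The cost pools, drivers, products, and dollar amounts are hypothetical assumptions added only for illustration; they are not taken from any example or exam question in this text.

# ABC allocation sketch with hypothetical cost pools, drivers, and products.
cost_pools = {
    # pool name: (budgeted pool cost, budgeted total driver usage)
    "machine setups":     (60_000,   200),   # driver: number of setups
    "materials handling": (40_000, 1_000),   # driver: number of moves
    "machining":          (90_000, 4_500),   # driver: machine hours
}

# Budgeted driver usage by each product (hypothetical)
usage = {
    "Product A": {"machine setups": 120, "materials handling": 300, "machining": 3_000},
    "Product B": {"machine setups":  80, "materials handling": 700, "machining": 1_500},
}

# Step 1: allocation rate for each pool = budgeted pool cost / budgeted total driver usage
rates = {pool: cost / total_driver for pool, (cost, total_driver) in cost_pools.items()}

# Step 2: allocate each pool to the products based on each product's driver usage
for product, drivers in usage.items():
    allocated = sum(rates[pool] * qty for pool, qty in drivers.items())
    print(f"{product}: overhead allocated = ${allocated:,.2f}")
# Product A: $108,000.00    Product B: $82,000.00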
Because of the increased number of allocation bases, ABC provides a more accurate costing of products
when the company produces multiple products. If a company produces only one product, it does not need
to use ABC because all of the costs of production are costs of that one product. ABC is meaningful only if a
company produces more than one product, because ABC affects how much overhead is allocated to each
product.
Note: ABC uses multiple cost pools and multiple allocation bases to more accurately reflect the different
consumptions of overhead activities between and among products.
Categories of Activities
Four different types of activities are used in ABC, based upon where the activity occurs in relation to the
final product and the facility as a whole. The four categories are:
1)  Unit-level activities are performed for each unit produced. Examples of unit-level activities are hours of work, inspecting each item, operating a machine and performing a specific assembly task.

2)  Batch-level activities occur each time a batch is produced. Examples of batch-level activities are machine setups, purchasing, scheduling, materials handling and batch inspection.

3)  Product-sustaining activities are activities performed in order to support the production of specific products. Examples of product-sustaining activities include product design, engineering changes, maintaining product specifications, and developing a special testing routine for a particular product.

4)  Facility-sustaining activities are activities undertaken to support production in general, such as security, maintenance, and plant management. Facility-sustaining activities and their associated costs present an issue that requires a company decision. Given the broad nature of facility-sustaining costs, it is difficult to allocate them in any reasonable manner to the final goods. For internal decision-making purposes, a company may either try to allocate these costs or may simply expense them in the period incurred. However, for external financial reporting and tax reporting purposes, the costs are product costs and must be allocated to products in order to maintain absorption costing in accordance with U.S. GAAP and tax regulations.
Note: The four different categories of activities may be needed more for answering word questions than
for numerical questions. The cost allocations are done the same way for all the different types of activities.
Benefits of Activity-Based Costing
•   ABC provides a more accurate product cost for use in pricing and strategic decisions because overhead rates can be determined more precisely and overhead application occurs based on specific actions.

•   By identifying the activities that cause costs to be incurred, ABC enables management to identify activities that do not add value to the final product.
Limitations of Activity-Based Costing
•   Not everything can be allocated strictly on a cost driver basis. In particular, facility-sustaining costs cannot be allocated strictly on a cost driver basis.

•   ABC is expensive and time-consuming to implement and maintain.

•   Activity-based costing systems cannot take the place of traditional costing systems without significant adaptation because standard ABC is not acceptable for U.S. GAAP or U.S. tax reporting.
ABC will provide the most benefits to companies that produce very diverse products or have complex activities. For companies that produce relatively similar products or have fairly straightforward
processes that are consumed equally by all products, the costs of ABC would probably outweigh the benefits.
Question 32: Cost drivers are:
a)  Accounting techniques used to control costs.

b)  Accounting measurements used to evaluate whether performance is proceeding according to plan.

c)  A mechanical basis, such as machine hours, computer time, size of equipment or square footage of factory, used to assign costs to activities.

d)  Activities that cause costs to increase as the activity increases.
(CMA Adapted)
Below are two examples of how activity-based costing might be used in a real situation.
Example #1: A manufacturing firm. In a manufacturing environment, machine setup is required
every time the production line changes from producing one product to producing another product.
The cost driver is machine setups. An engineer might be required to supervise the setup of the machine
for the product change. The engineer spends 20% of his or her time supervising setups (the resource
driver). Therefore, 20% of the engineer's salary and other expenses will be costs to be allocated according to the amount of time the engineer spends supervising each product's machine setup as a
percentage of the amount of time spent supervising all product setups (the activity driver).
In addition to the engineer, other personnel are required for machine setups. Production supervisors are
also needed to supervise machine setups, and they spend 40% of their time doing that (the resource
driver again).
All of the costs of machine setups are collected in a "machine setups" cost pool. Setup time spent on
each product as a percentage of setup time spent on all products is the activity driver. The total costs
in the pool are allocated to the different products being produced based on the percentage of total setup
time used for each product.
Example #2: A service firm. Bank tellers process all kinds of transactions. The transactions relate to
the many different banking services the bank offers. Transactions processed are the cost drivers. How
should the tellers' time, the teller machines used by the tellers, the supplies used by the tellers and the
space occupied by the tellers be allocated among the various services offered by the bank (checking
accounts, savings accounts, and so forth) in order to determine which services are most profitable?
The bank does time and motion studies to determine the average time it takes tellers to process each
type of transaction (checking account transactions, savings account transactions, and so forth). Then,
information on how many transactions of each type are processed by tellers is captured. The average
time for each type of transaction is multiplied by the number of transactions processed. The percentage
of teller time spent on each type of transaction as a percentage of the amount of teller time spent on all
types of transactions is the activity driver.
The tellers spend 90% of their time performing teller transactions and 10% of their time doing something
else like answering telephones, so that 90% is the resource driver for the tellers' salaries. 90% of the
tellers' salaries will be put into the "tellers" cost pool along with 100% of the costs of their teller machines, their supplies, and the square footage occupied by their teller stations. Then the percentage of
tellers' time spent on each type of transaction in relation to their total time spent on all teller transactions
(the activity driver) is used to allocate the teller costs in the cost pool proportionately among the bank's
various services.
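The bank-teller example can be put into a short Python sketch. The transaction types, counts, average times, and dollar figures below are hypothetical assumptions added only to show how the resource driver builds the cost pool and the activity driver allocates it:

# Teller cost pool built with a resource driver, allocated with an activity driver.
teller_salaries = 500_000              # hypothetical annual salaries
other_teller_costs = 150_000           # teller machines, supplies, floor space (hypothetical)
resource_driver = 0.90                 # 90% of teller time is spent on teller transactions

# The "tellers" cost pool: 90% of salaries plus 100% of the other teller costs
tellers_cost_pool = resource_driver * teller_salaries + other_teller_costs

# Hypothetical time-and-motion results: average minutes per transaction and annual counts
services = {
    "checking accounts": {"avg_minutes": 2.0, "transactions": 400_000},
    "savings accounts":  {"avg_minutes": 3.0, "transactions": 150_000},
    "loan payments":     {"avg_minutes": 4.0, "transactions":  50_000},
}

# Activity driver: each service's share of total teller transaction time
total_minutes = sum(s["avg_minutes"] * s["transactions"] for s in services.values())
for name, s in services.items():
    share = s["avg_minutes"] * s["transactions"] / total_minutes
    print(f"{name}: {share:.1%} of teller time -> ${share * tellers_cost_pool:,.0f}")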
The following information is for the next two questions: Zeta Company is preparing its annual
profit plan. As part of its analysis of the profitability of individual products, the controller estimates the
amount of manufacturing overhead that should be allocated to the individual product lines from the
information given as follows:
                                      Wall Mirrors    Specialty Windows
Units produced                                   25                   25
Material moves per product line                   5                   15
Direct labor hours per unit                     200                  200

Budgeted materials handling costs: $50,000 in total
Question 33: Under a costing system that allocates manufacturing overhead on the basis of direct labor
hours, the materials handling costs allocated to one unit of wall mirrors would be:
a)  $1,000

b)  $500

c)  $2,000

d)  $5,000
Question 34: Under activity-based costing (ABC), the materials handling costs allocated to one unit of
wall mirrors would be:
a)  $1,000

b)  $500

c)  $1,500

d)  $2,500
(CMA Adapted)
Note: The above question is a very good example of a question that requires determining a cost under
both ABC and the traditional method. Candidates should make sure to be able to do that.
Shared Services Cost Allocation
Shared services are administrative services provided by a central department to the company’s operating
units. Shared services are usually services such as human resources, information technology, maintenance,
legal, and many accounting services such as payroll processing, invoicing and accounts payable. Usage of
the services by the individual departments (cost objects) can be traced in a meaningful way based on a
cost driver that fairly represents each one’s usage of the service.
These shared service, or support, departments incur costs (salaries, rent, utilities, and so on). For internal
decision-making, the costs of shared service departments need to be allocated to the operating departments
that use their services in order to calculate the full cost of operations or production. The methods of allocating shared services costs are different from the methods of allocating manufacturing overhead costs
covered previously. Shared services costs are allocated for each shared service department as a single cost
pool containing all of the service department’s costs, and the costs are allocated to user departments on
the basis of a single cost driver, such as hours of service used.
Note: The cost allocation methods that follow are all stand-alone cost allocation methods.
Reasons for allocating shared services costs include:
•   Cost allocation provides accurate departmental and product costs for use in making decisions, valuing inventory, and evaluating the efficiency of departments and the profitability of individual products.

•   It motivates managers and other employees to make their best efforts in their own areas of responsibility to achieve the company’s strategic objectives by reminding them that earnings must be adequate to cover indirect costs.

•   It provides an incentive for managers to make decisions that are consistent with the goals of top management.

•   It fixes accountability and provides a fair evaluation of the performance of segments and segment managers.

•   It can be used to create competition among segments of the organization and their managers to stimulate improved performance.

•   It justifies costs, such as transfer prices.

•   It can be used to compute reimbursement when a contract provides for cost reimbursement.
Cost allocations may be performed for just one shared service or support department whose costs are
allocated to multiple user departments, or they may be performed for multiple shared service or support
departments whose costs are being allocated both to other service and support departments and to operating departments. But even when costs of shared service departments are allocated to other service
departments, ultimately all the shared service costs will be allocated fully to operating departments.
Allocating Costs of a Single (One) Service or Support Department to Multiple Users
Before allocating the costs of a shared service department to operating departments, management must
decide whether the shared service department’s fixed costs and its variable costs should be allocated as
one amount or whether the fixed and variable costs should be allocated separately, which would enable the
fixed costs to be allocated in a different manner from the variable costs. Allocating all costs as one amount
is called the single-rate method. Allocating the fixed and variable costs as two separate cost pools is called
the dual-rate method.
•   Single-Rate Method – The single-rate method does not separate fixed costs of service departments from their variable costs. All of the service department costs are put into one cost pool and the costs are allocated using one allocation base.

•   Dual-Rate Method – The dual-rate method breaks the cost of each service department into two pools, a variable-cost pool and a fixed-cost pool. Each cost pool is allocated using a different cost-allocation base.
Allocation bases for either the single-rate method or the dual-rate method can be:
•   Budgeted rate and budgeted hours (or other cost driver) to be used by the operating divisions.

•   Budgeted rate and actual hours (or other cost driver) used by the operating divisions.
Following are examples of the single-rate method and the dual-rate method, using the same data for both.
Example #1: Single-rate method of allocation:
The following information is from the 20X0 budget for EZYBREEZY Co. EZYBREEZY has one service department, its Maintenance Department, that serves its Manufacturing and Sales departments. The
Maintenance Department has a practical capacity of 5,000 hours of maintenance service available each
year. The fixed costs of operating the Maintenance Department (physical facilities, tools, and so forth)
are budgeted at $247,500 per year. The wages for the maintenance employees and the supplies they
require are the variable costs, and those are budgeted at $25 per hour.
Budgeted Maintenance usage by the Manufacturing and Sales departments is as follows:
Manufacturing                3,500 hours
Sales                        1,000 hours
Total budgeted usage         4,500 hours
Using the single-rate method, the budgeted total cost pool will be:
$247,500 fixed cost + (4,500 hours × $25) variable cost = $360,000
The allocation rate for the total maintenance cost is $360,000 ÷ 4,500 hours, which is $80 per hour.
The single-rate method is usually used with the second allocation base option: budgeted rate and actual
hours used. The actual maintenance hours used by the Manufacturing and Sales departments are as
follows:
Manufacturing                3,600 hours
Sales                        1,100 hours
Total actual usage           4,700 hours
Therefore, the amounts of Maintenance department costs that will be allocated to the Manufacturing and
Sales departments are:
Manufacturing           3,600 × $80      $288,000
Sales                   1,100 × $80        88,000
Total cost allocated                     $376,000
The problem with the single-rate method is that, from the perspective of the user departments, the
amount they are charged is a variable cost, even though it includes costs that are fixed. The manager
of the Manufacturing department could be tempted to cut his department’s costs and outsource the
maintenance function if he could find a company to supply maintenance for less than $80 per hour.
Assume that an outside maintenance company offers to do the maintenance for the Manufacturing department for $60 per hour. The amount paid to the outside company will be 3,600 hours @ $60 per
hour, or $216,000. The Manufacturing department manager is happy, because he has saved $72,000
($288,000 − $216,000) for his department.
However, the fixed costs of the in-house Maintenance department do not go away just because Manufacturing is not using its services any longer. The total cost of the Maintenance department (now being
used only by the Sales department) will be $247,500 fixed cost + (1,100 hours used by the Sales department × $25) variable cost = $275,000, assuming the actual costs incurred are equal to the budgeted
amounts. This is $85,000 less than was budgeted ($360,000 − $275,000). However, the company as a
whole is paying an additional cost of $216,000 to the outside maintenance company. Thus, for the
company as a whole, total maintenance cost is now $491,000 ($275,000 for the internal maintenance
department + $216,000 for the outside company), which is $131,000 greater than the budgeted amount
of $360,000 for maintenance. The Manufacturing department’s actions have caused a variance in costs
for the company as a whole of $131,000 over the budgeted amount.
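The single-rate allocation above can be reproduced with a minimal Python sketch using the EZYBREEZY figures (a sketch only, not a general-purpose implementation):

# Single-rate method: one cost pool, one rate, allocated on actual hours used.
budgeted_fixed_cost = 247_500                      # Maintenance Department fixed costs
budgeted_variable_rate = 25                        # variable cost per maintenance hour
budgeted_hours = {"Manufacturing": 3_500, "Sales": 1_000}
actual_hours   = {"Manufacturing": 3_600, "Sales": 1_100}

total_budgeted_cost = budgeted_fixed_cost + budgeted_variable_rate * sum(budgeted_hours.values())
single_rate = total_budgeted_cost / sum(budgeted_hours.values())     # $80 per hour

for dept, hours in actual_hours.items():
    print(f"{dept}: {hours} hrs x ${single_rate:.2f} = ${hours * single_rate:,.2f}")
# Manufacturing: $288,000.00    Sales: $88,000.00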
Example #2: Dual-rate method of allocation:
The following information is from the 20X0 budget for EZYBREEZY Co. EZYBREEZY has one service department, its Maintenance Department, that serves its Manufacturing and Sales departments. The
Maintenance Department has a practical capacity of 5,000 hours of maintenance service available each
year. The fixed costs of operating the Maintenance Department (physical facilities, tools, and so forth)
are budgeted at $247,500 per year. The wages for the maintenance employees and the supplies they
require are the variable costs, and those are budgeted at $25 per hour.
Budgeted Maintenance usage by the Manufacturing and Sales departments is as follows:
Manufacturing                3,500 hours
Sales                        1,000 hours
Total budgeted usage         4,500 hours
Because a dual-rate method is being used, EZYBREEZY selects an allocation base for the variable costs
and an allocation base for the fixed costs. The company allocates the variable costs based on the budgeted variable cost per hour ($25) and the actual hours used. It allocates fixed costs based on budgeted
fixed costs per hour and the budgeted number of hours for each department (option #1).
The actual maintenance hours used by the Manufacturing and Sales departments are as follows:
Manufacturing                3,600 hours
Sales                        1,100 hours
Total actual usage           4,700 hours
The allocation rate for the fixed cost is $247,500 ÷ 4,500 hours, or $55 per hour. The allocation rate
for the variable cost is $25 per hour.
The amounts allocated to each user department are now:
Manufacturing:
  Fixed costs:       $55 × 3,500 hours     $192,500
  Variable costs:    $25 × 3,600 hours       90,000
  Total allocated to Manufacturing         $282,500

Sales:
  Fixed costs:       $55 × 1,000 hours     $ 55,000
  Variable costs:    $25 × 1,100 hours       27,500
  Total allocated to Sales                 $ 82,500

Total cost allocated                       $365,000
Under the dual-rate method, the total costs allocated to each user department are different from the
costs allocated under the single-rate method because the fixed costs are allocated based on the budgeted usage under the dual-rate method and based on the actual usage under the single-rate method.
Under the dual-rate method, the Manufacturing and Sales departments would each be charged for its
budgeted fixed allocation of costs even if it does not use the internal Maintenance department.
That should discourage the manager of the Manufacturing department from contracting with an outside
maintenance service.
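The dual-rate allocation can be sketched the same way, allocating fixed costs at the budgeted rate on budgeted hours and variable costs at the budgeted rate on actual hours:

# Dual-rate method: separate fixed-cost and variable-cost pools.
budgeted_fixed_cost = 247_500
budgeted_variable_rate = 25
budgeted_hours = {"Manufacturing": 3_500, "Sales": 1_000}
actual_hours   = {"Manufacturing": 3_600, "Sales": 1_100}

fixed_rate = budgeted_fixed_cost / sum(budgeted_hours.values())       # $55 per hour

for dept in budgeted_hours:
    fixed_alloc = fixed_rate * budgeted_hours[dept]                   # based on budgeted usage
    variable_alloc = budgeted_variable_rate * actual_hours[dept]      # based on actual usage
    print(f"{dept}: fixed ${fixed_alloc:,.0f} + variable ${variable_alloc:,.0f} "
          f"= ${fixed_alloc + variable_alloc:,.0f}")
# Manufacturing: $192,500 + $90,000 = $282,500    Sales: $55,000 + $27,500 = $82,500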
Benefits of the Single-Rate Method
•   The cost to implement the single-rate method is low because it avoids the analysis needed to classify all of the service department’s costs into fixed and variable costs.
Limitations of the Single-Rate Method
•   The single-rate method makes fixed costs of the service department appear to be variable costs to the user departments, possibly leading to outsourcing that hurts the organization as a whole.
Benefits of the Dual-Rate Method
•   The dual-rate method helps user department managers see the difference in the ways that fixed costs and variable costs behave.

•   The dual-rate method encourages user department managers to make decisions that are in the best interest of the organization as a whole, as well as in the best interest of each individual department.
Limitations of the Dual-Rate Method
•   The cost is higher than the cost of the single-rate method because of the need to classify all of the costs of the service department into fixed and variable costs.
Allocating Costs of Multiple Shared Service Departments
Special cost allocation problems arise when an organization has several shared service departments and
the shared service departments provide support not only to the operating departments but to the other
shared service departments as well. The factor that complicates the process is service departments using
the services of other service departments. For example, people in the maintenance department use the
services of the IT department and eat in the cafeteria.
Three different methods of allocating costs of multiple shared service departments are used when service
departments use the services of other service departments. All three methods are simply mathematical
allocations, much like the way manufacturing overhead is allocated to the products. The different methods
of multiple shared service cost allocation treat these reciprocal services differently.
The three methods of allocation are:
1)  The direct method

2)  The step (or step-down) method

3)  The reciprocal method
Note: On an exam question, treat the service departments as shared service departments even if they
could be treated in a different manner. Accounting is an example of a shared service that may be treated
differently, for example. In some companies, the costs of accounting may simply be expensed and not
allocated. But if accounting is given as a service department in a question, its costs should be allocated.
Note: The following discussion of multiple service departments uses only the single rate method (fixed
and variable costs allocated together). However, the dual-rate method can also be used to allocate costs
of multiple service departments. To do so, a company would need to do two separate allocations, one of
fixed costs and one of variable costs. There is no reason to expect that an exam question will require a
dual-rate allocation of multiple service departments’ costs, though, so the following discussion is limited
to use of the single rate method for multiple service departments.
1) The Direct Method – IGNORING Services Provided Between Service Departments
Under the direct method the reciprocal services provided by the different shared service departments to
each other are ignored. The company simply allocates all of the shared service departments’ costs directly
to the operating departments. The allocation is made on a basis that is reasonable and equitable to the
operating departments for each service department. For example, the costs of a subsidized employee cafeteria should be allocated to the operating departments based on the number of employees, while the
maintenance department’s costs may be allocated based on the number of maintenance hours used by
each operating department.
When calculating the usage ratios for the different operating departments, count only the usage of the
shared service departments by the operating departments. The usage of shared service departments
that takes place in the other service departments is excluded because service departments will not be
allocated any costs from other service departments.
The direct method is the simplest and most common method. A very short example follows (the calculations
of the allocations are not shown).
                                  Maintenance   Cafeteria   Production 1   Production 2   Production 3
Departmental Costs Incurred              100         120            300            400            800
Allocation of Maintenance Costs         (100)                        20             30             50
Allocation of Cafeteria Costs                       (120)            30             30             60
TOTAL COSTS                                0           0            350            460            910
2) The Step-Down or Sequential Method – Recognizing SOME Services Provided Between
Service Departments
The step-down method is also called the step method or the sequential method. In the step-down
method the services the shared service departments provide to each other are included, but only one
allocation of the costs of each service department is made. After the costs of a particular service department
have been allocated, that service department will not be allocated any additional costs from other service
departments. The step-down method leads to a stair step-like diagram of cost allocations, as below. Because each service department’s costs, including costs allocated to it by other service departments, are
allocated in turn, all service department costs ultimately end up allocated to the operating departments.
In order to use the step-down method, an order must be established in which the service department costs
are allocated. The order can be any order management chooses. A popular method is to determine
the order according to the percentage of each department’s services provided to other shared service departments, although that is not the only possible way. Under that approach, the costs of the department that provides the highest percentage of its services to other shared service departments are allocated first, the costs of the department that provides the next highest percentage are allocated next, and so forth.
The first shared service department’s costs are allocated to the other shared service departments and the
operating departments. The second shared service department’s costs (which now include its share of the
first shared service department’s costs) are allocated next to the other shared service departments (but
not to the first shared service department that has already been allocated) and the operating departments.
Once a shared service department’s costs have been allocated, no costs will be allocated to it from other
shared service departments.
A problem on the exam will give the allocation order to use if it is not obvious.
Following is an example of the step-down method.
                                  Maintenance   Cafeteria   Production 1   Production 2   Production 3
Departmental Costs Incurred              100         120            300            400            800
Allocation of Maintenance Costs         (100)         10             16             28             46
Allocation of Cafeteria Costs                       (130)            36             32             62
TOTAL COSTS                                0           0            352            460            908
The example of the step-down method is a bit different from the example of the direct method. Use of the
step-down method results in a slightly different result from the direct method, even though the operating
departments used the same amounts of services of the two shared service departments in both examples.
Again, the allocation calculations are not shown.
Note: In the step-down method, the costs allocated from the cafeteria include its own incurred costs
(120), plus the cafeteria’s share of the maintenance costs that were allocated to it from maintenance
(10). When the maintenance department’s costs are allocated, they are allocated on the basis of the
number of hours that the cafeteria utilized the services of the maintenance department. However, when
the cafeteria costs (including the cafeteria’s share of the maintenance costs) are allocated, none of its
costs are allocated to the maintenance department for its usage of the cafeteria.
3) The Reciprocal Method – Recognizing ALL Services Provided Between Service Departments
The reciprocal method is the most complicated and advanced of the three methods of shared services cost
allocation because it recognizes all of the services provided by the shared service departments to the other
shared service departments. Because of this detailed allocation between and among the shared service
departments, the reciprocal method is the most theoretically correct method to use. However, a company
will need to balance the additional costs of allocating costs this way against the benefits received. Graphically the reciprocal method looks like the following.
                                  Maintenance   Cafeteria   Production 1   Production 2   Production 3
Departmental Costs Incurred              100         120            300            400            800
Allocation of Maintenance Costs         (108)         10             20             29             49
Allocation of Cafeteria Costs              8        (130)            33             30             59
TOTAL COSTS                                0           0            353            459            908
Notice that the shared service departments are each allocating some of their costs to the other shared
service department. As a result, the total amount allocated by each shared service department will be
greater than its own overhead costs, since each shared service department must allocate all of its own
costs plus some of the other shared service department’s costs that have been allocated to it. The math to
do this is the most complicated of the three methods, but even it is not too difficult.
To solve a problem using the reciprocal method, “simultaneous” or multiple equations are used. With two
shared service departments, the multiple equations are set up as follows:
Maintenance Costs (M) to Allocate = M’s Costs Incurred + M’s % of C’s Costs
Cafeteria Costs (C) to Allocate = C’s Costs Incurred + C’s % of M’s Costs
The first step is to solve for either “Maintenance Costs to Allocate” or “Cafeteria Costs to Allocate,” and
after that solve for the other number. These calculated amounts become the amounts that need to be
allocated from the maintenance department and cafeteria to all the other departments, including the other
service departments.
The process of solving the two equations is shown in detail in the example on the following pages.
Comprehensive Example of the Three Methods of Shared Service Cost Allocation
Example: The following information about Cubs Co. is used to demonstrate the three different methods
of allocating shared service costs. Cubs Co. has two shared service departments (A and B) and three
operating departments (X, Y and Z). Shared Service Department A allocates its overhead based on direct
labor hours and Shared Service Department B allocates its overhead based on machine hours. The following information relates to the service and operating departments:
                    Dept. A     Dept. B     Dept. X     Dept. Y     Dept. Z        Total
Overhead           $100,000     $50,000    $200,000    $300,000    $250,000     $900,000
Labor Hours              --       1,000       2,000       4,000       3,000       10,000
Machine Hours         2,000          --       2,000       2,000       2,000        8,000
Direct Method
Under the direct method each shared service department allocates costs to only the operating departments. The operating departments used a total of 9,000 (2,000 + 4,000 + 3,000) labor hours of service
from Department A, so each department is allocated $11.11111 ($100,000 in costs ÷ 9,000 total service
hours provided) for each direct labor hour used. Department B provided 6,000 (2,000 + 2,000 + 2,000)
machine hours of service to the operating departments, so it allocates $8.33333 ($50,000 in costs ÷
6,000 machine hours provided) per machine hour to the operating departments. The services provided
by each service department to the other service department are ignored in the direct method. The
numbers that are ignored are marked with an asterisk below.

                    Dept. A     Dept. B     Dept. X     Dept. Y     Dept. Z        Total
Overhead           $100,000     $50,000    $200,000    $300,000    $250,000     $900,000
Labor Hours              --      1,000*       2,000       4,000       3,000       10,000
Machine Hours        2,000*          --       2,000       2,000       2,000        8,000

*Ignored under the direct method because these hours represent services provided to the other service department.
The next step is to determine how much of the costs of the shared service departments will be allocated
to each operating department. The costs of Department A are allocated first, on the basis of direct labor
hours. Since the 1,000 labor hours provided to Department B by Department A are ignored, only 9,000
total direct labor hours are used in the allocation. Of the total 9,000 direct labor hours used in the
allocation, Department X used 2,000 labor hours, Department Y used 4,000 labor hours, and Department
Z used 3,000 labor hours. Therefore, Department X is allocated $11.11111 × 2,000 hours = $22,222.22
of costs from Department A. Department Y is allocated $11.11111 × 4,000 hours = $44,444.45 (adjusted for rounding difference), and Department Z is allocated $11.11111 × 3,000 hours = $33,333.33.
Department B’s costs are allocated next on the basis of machine hours. Since 2,000 machine hours of
service were provided by Department B to Department A and those are ignored, 6,000 machine hours
in total are used in the allocation. Of the total 6,000 machine hours used in the allocation, Departments
X, Y and Z each used 2,000 machine hours. Therefore, each operating department is allocated $8.33333
per hour × 2,000 hours = $16,666.66. (Two of the amounts will be adjusted for rounding differences.)
Below are the total overhead costs for each department after the allocation:
                      Dept. A        Dept. B        Dept. X        Dept. Y        Dept. Z
Own Overhead      $100,000.00     $50,000.00    $200,000.00    $300,000.00    $250,000.00
Allocated from A  (100,000.00)          0.00      22,222.22      44,444.45      33,333.33
Allocated from B          0.00    (50,000.00)     16,666.67      16,666.66      16,666.67
Total OH          $       0.00    $      0.00    $238,888.89    $361,111.11    $300,000.00
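The direct-method allocation can be reproduced with a minimal Python sketch of the Cubs Co. figures (the helper function is an illustrative choice):

# Direct method: service-to-service usage is ignored; allocate only to operating departments.
overhead = {"A": 100_000, "B": 50_000}
usage_of_A = {"X": 2_000, "Y": 4_000, "Z": 3_000}    # direct labor hours (Dept. B's 1,000 hours ignored)
usage_of_B = {"X": 2_000, "Y": 2_000, "Z": 2_000}    # machine hours (Dept. A's 2,000 hours ignored)

def allocate(pool_cost, usage_by_dept):
    """Allocate a cost pool in proportion to each department's driver usage."""
    rate = pool_cost / sum(usage_by_dept.values())
    return {dept: rate * qty for dept, qty in usage_by_dept.items()}

from_A = allocate(overhead["A"], usage_of_A)
from_B = allocate(overhead["B"], usage_of_B)

for dept in ("X", "Y", "Z"):
    print(f"Dept. {dept}: from A ${from_A[dept]:,.2f}, from B ${from_B[dept]:,.2f}")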
Step-Down Method
The first requirement for the step-down method is to determine which shared service department’s costs
will be allocated first. 25% of the service provided by Department B was provided to Department A, but
only 10% of Department A’s services were provided to Department B. Therefore, Department B’s
costs are allocated first. Department B’s cost of $50,000 is allocated on the basis of machine hours used
by all the other departments, including Department A. In total 8,000 machine hours were used, so
Department B’s cost per machine hour is $6.25 ($50,000 ÷ 8,000).
Department B’s costs are allocated among all the other departments, including Department A, based on
machine hours used by each. Department A is allocated $6.25 × 2,000 hours, or $12,500. This $12,500
is added to the $100,000 of A’s own overhead, for a total $112,500 cost for Department A to allocate.
This $112,500 cost for Department A is allocated only to the operating departments based on the direct
labor hours used by each. The allocation of Department A’s costs is done the same way as in the direct
method, except that the amount of cost allocated by Department A is greater than it was in the direct
method because some costs allocated from Department B are included in Department A’s costs to allocate.
Department B’s costs are allocated first. As mentioned previously, Department A is allocated $12,500 of
Department B’s costs. Since Departments X, Y and Z each also used 2,000 machine hours, each of those
departments is also allocated $12,500 from Department B ($6.25 × 2,000 machine hours).
Next, the costs of Department A are allocated to the operating departments only. The total costs to be
allocated for Department A are Department A’s own overhead of $100,000 plus the $12,500 allocated
overhead from Department B, for a total to allocate of $112,500. That $112,500 is allocated based on
direct labor hours, excluding the direct labor hours used by Department B. Therefore, a total of 9,000
direct labor hours (2,000 + 4,000 + 3,000) are used in allocating Department A’s total overhead of
$112,500. $112,500 ÷ 9,000 = $12.50 to be allocated per direct labor hour. Department X, which used 2,000 direct labor hours, is allocated $12.50 × 2,000 = $25,000. Department Y, which used 4,000 direct labor hours, is allocated $12.50 × 4,000 = $50,000, and Department Z, which used 3,000 direct labor hours, is allocated $12.50 × 3,000 = $37,500.
It is important to remember in the step-down method that the hours the second shared service department provides to the first are ignored when allocating the second shared service department’s costs.
The ignored number is marked with an asterisk below.

                    Dept. A     Dept. B     Dept. X     Dept. Y     Dept. Z        Total
Overhead           $100,000     $50,000    $200,000    $300,000    $250,000     $900,000
Labor Hours              --      1,000*       2,000       4,000       3,000       10,000
Machine Hours         2,000          --       2,000       2,000       2,000        8,000

*Ignored when allocating Department A’s costs, because Department B’s costs have already been allocated.
Below are the total overhead costs for each department after the allocation:
                      Dept. A        Dept. B        Dept. X        Dept. Y        Dept. Z
Own Overhead      $100,000.00     $50,000.00    $200,000.00    $300,000.00    $250,000.00
Allocated from A  (112,500.00)          0.00      25,000.00      50,000.00      37,500.00
Allocated from B     12,500.00    (50,000.00)     12,500.00      12,500.00      12,500.00
Total OH          $       0.00    $      0.00    $237,500.00    $362,500.00    $300,000.00
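The step-down allocation can be sketched the same way, allocating Department B first and then Department A (using the same Cubs Co. figures):

# Step-down method: B is allocated to A, X, Y, Z; then A (including its share of B) to X, Y, Z only.
overhead = {"A": 100_000, "B": 50_000}
machine_hours = {"A": 2_000, "X": 2_000, "Y": 2_000, "Z": 2_000}   # usage of B's services
labor_hours   = {"X": 2_000, "Y": 4_000, "Z": 3_000}               # usage of A's services (B excluded)

rate_B = overhead["B"] / sum(machine_hours.values())               # $6.25 per machine hour
from_B = {dept: rate_B * hrs for dept, hrs in machine_hours.items()}

total_A = overhead["A"] + from_B["A"]                              # $112,500 to allocate
rate_A = total_A / sum(labor_hours.values())                       # $12.50 per direct labor hour
from_A = {dept: rate_A * hrs for dept, hrs in labor_hours.items()}

for dept in ("X", "Y", "Z"):
    print(f"Dept. {dept}: ${from_A[dept] + from_B[dept]:,.2f} allocated in total")
# X: $37,500    Y: $62,500    Z: $50,000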
Reciprocal Method
Under the reciprocal method some of Department A’s costs are allocated to Department B and some of
Department B’s costs are allocated to Department A using simultaneous equations. Since Cubs Co. has
two shared service departments, two equations will be needed. (If the company had three shared service
departments, three equations would be needed, and so forth.) The equations express how much each
shared service department needs to allocate to all the other departments, including the other shared
service departments.
Department A used 25% of Department B’s services (2,000 MH used ÷ 8,000 total MH). Therefore, A is
allocated 25% of B’s costs. A’s costs to allocate equal the $100,000 in costs incurred by A plus 25% of
the costs incurred by B. Department B used 10% of Department A’s services (1,000 DLH used ÷ 10,000
total DLH). Therefore, B is allocated 10% of A’s costs. B’s costs to allocate equal the $50,000 in costs
incurred by B plus 10% of the costs incurred by A. The equations needed are:
A = $100,000 + 0.25B
B = $50,000 + 0.10A
Solve for A by substituting the right side of the second equation (B) into the first equation:
A = $100,000 + 0.25 ($50,000 + 0.10A)
The resulting equation has only one variable (A) in the equation and can be solved for Dept. A, as follows.
A = $100,000 + $12,500 + 0.025A
0.975A = $112,500
A = $115,384.62
The result, $115,384.62, is the total cost for Department A to be allocated to all the other departments,
including B, the other shared service department.
The next step is to put the value found for A into the second equation in place of the variable A:
B = $50,000 + 0.1 ($115,384.62)
B = $61,538.46
The total cost for Department B to be allocated to all the other departments, including A, the other
shared service department, is $61,538.46.
The total overhead cost for Department A, $115,384.61, is allocated among all the other departments
on the basis of the full 10,000 total direct labor hours used, or $11.53846 per hour. The cost of Department A allocated to Department B is $11.53846 × 1,000 = $11,538.46. Department X is allocated
$11.53846 × 2,000 = $23,076.92 of Department A’s costs, Department Y is allocated $11.53846 ×
4,000 = $46,153.85, and Department Z is allocated $11.53846 × 3,000 = $34,615.38.
The total overhead cost for Department B, $61,538.46, is allocated among all the other departments on
the basis of the full 8,000 total machine hours used, or $7.69231 per machine hour. Departments A, X,
Y and Z each used 2,000 machine hours, so each will be allocated $7.69231 × 2,000 = $15,384.62.
(Two of the departments’ amounts will be adjusted for rounding differences.)
Below are the total overhead costs for each department after the allocations:
                      Dept. A        Dept. B        Dept. X        Dept. Y        Dept. Z
Own Overhead      $100,000.00     $50,000.00    $200,000.00    $300,000.00    $250,000.00
Allocated from A  (115,384.61)     11,538.46      23,076.92      46,153.85      34,615.38
Allocated from B     15,384.61    (61,538.46)     15,384.61      15,384.62      15,384.62
Total OH          $       0.00    $      0.00    $238,461.53    $361,538.47    $300,000.00
Notice that in all of the methods of overhead allocation, the total of the overhead amounts allocated to
the operating departments is equal to the total amount of overhead that the company as a whole—
including the shared service departments—incurred during the period ($900,000).
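The reciprocal-method equations can also be solved programmatically. The following Python sketch reproduces the Cubs Co. figures by substituting one equation into the other:

# Reciprocal method: solve A = 100,000 + 0.25B and B = 50,000 + 0.10A by substitution.
oh_A, oh_B = 100_000, 50_000
pct_B_used_by_A = 2_000 / 8_000      # A uses 25% of B's machine hours
pct_A_used_by_B = 1_000 / 10_000     # B uses 10% of A's direct labor hours

A = (oh_A + pct_B_used_by_A * oh_B) / (1 - pct_B_used_by_A * pct_A_used_by_B)
B = oh_B + pct_A_used_by_B * A
print(f"Dept. A costs to allocate: ${A:,.2f}")    # about $115,384.62
print(f"Dept. B costs to allocate: ${B:,.2f}")    # about $61,538.46

# Allocate over the full driver totals: 10,000 DLH for A and 8,000 MH for B
labor_hours   = {"B": 1_000, "X": 2_000, "Y": 4_000, "Z": 3_000}
machine_hours = {"A": 2_000, "X": 2_000, "Y": 2_000, "Z": 2_000}
from_A = {dept: A / 10_000 * hrs for dept, hrs in labor_hours.items()}
from_B = {dept: B / 8_000 * hrs for dept, hrs in machine_hours.items()}

for dept in ("X", "Y", "Z"):
    print(f"Dept. {dept}: ${from_A[dept] + from_B[dept]:,.2f} allocated in total")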
The following information is for the next five questions: The managers of Rochester Manufacturing are discussing ways to allocate the cost of support departments such as Quality Control and
Maintenance to the production departments. To aid them, they were provided the following information:
                                  Quality
                                  Control    Maintenance    Machining    Assembly         Total
Budgeted overhead costs
  before allocation              $350,000       $200,000     $400,000    $300,000    $1,250,000
Budgeted machine hours                 --             --       50,000          --        50,000
Budgeted direct labor hours            --             --           --      25,000        25,000
Budgeted hours of service:
  Quality Control                      --          7,000       21,000       7,000        35,000
  Maintenance                      10,000             --       18,000      12,000        40,000
Question 35: If Rochester uses the direct method of allocating support department costs, the total
support costs allocated to the Assembly Department would be:
a)  $80,000

b)  $87,500

c)  $120,000

d)  $167,500
Question 36: If Rochester uses the direct method, the total amount of overhead allocated to each
machine hour at Rochester would be:
a)  $2.40

b)  $5.25

c)  $8.00

d)  $15.65
Question 37: If Rochester uses the step-down method of allocating support costs beginning with quality
control, the maintenance costs allocated to the Assembly Department would be:
a)  $70,000

b)  $108,000

c)  $162,000

d)  $200,000
Question 38: If Rochester uses the reciprocal method of allocating support costs, the total amount of
quality control costs to be allocated to the other departments would be:
a)  $284,211

b)  $336,842

c)  $350,000

d)  $421,053
Question 39: If Rochester decides not to allocate support costs to the production departments, the
overhead allocated to each direct labor hour in the Assembly Department would be:
a)  $3.20

b)  $3.50

c)  $12.00

d)  $16.00
(CMA Adapted)
Question 40: A segment of an organization is referred to as a service center if it has:
a)  Responsibility for combining the raw materials, direct labor, and other factors of production into a final output.

b)  Responsibility for developing markets and selling the output of the organization.

c)  Authority to make decisions affecting the major determinants of profit, including the power to choose its markets and sources of supply.

d)  Authority to provide specialized support to other units within the organization.
(CMA Adapted)
Estimating Fixed and Variable Costs
Sometimes costs are mixed costs or the fixed costs are not segregated from the variable costs in the
historical information available, but the fixed costs need to be separated from the variable costs for analysis,
budgeting or costing purposes. The High-Low Points Method can be used to separate fixed costs from
variable costs.
High-Low Points Method
For the High-Low Points Method, the highest and lowest observed activity levels of the cost driver
within the relevant range and the costs associated with those values are used. The activity level of the
cost driver may be production level in units, direct labor hours, machine hours, or any other cost driver.
For example, to segregate fixed production costs from variable production costs when only a single total
cost amount is available, select the month of the highest production and the month of the lowest production.
Compare the difference in production levels with the difference in total costs for the two months to determine approximately the amount of costs that are variable and the amount of costs that are fixed. The steps
are as follows:
1)  Estimate the variable portion of the total cost using the highest and lowest activity level values of the cost driver. Divide the difference between the costs associated with the highest and the lowest activity levels by the difference between the highest and lowest activity levels:

        Variable Cost per Unit of the Cost Driver  =  Difference in Associated Costs ÷ Difference in Activity Levels

    Since fixed costs do not change with changes in activity, the difference between the costs for the highest activity level and the costs for the lowest activity level divided by the difference in the two activity levels is the estimated variable cost per unit of the cost driver.

2)  Multiply the variable cost per unit of the cost driver by the activity level at either the highest or the lowest activity level to calculate the total variable cost at that activity level.

3)  Subtract the total variable cost at that activity level from the total cost at that activity level to calculate the fixed cost.
Note: The “associated costs” used must always be the costs associated with the highest and lowest
activity levels of the cost driver, even if those are not the highest and lowest historical costs in the
data set.
Another way of estimating the variable cost per unit using the High-Low Points method is to set up two
equations in two variables, with one equation representing the highest level of the cost driver and one
equation representing the lowest level. The two variables are Fixed Costs and Variable Costs. Then, subtract
one equation from the other equation to eliminate the Fixed Cost as a variable and solve the remainder for
the Variable Cost.
Note: Both methods will be illustrated in the example that follows.
After the estimated total fixed costs and the estimated variable cost per unit of the cost driver have been
determined, the cost function can be written. Total costs at any activity level can be estimated by
using the cost function.
The cost function describes total costs in terms of total fixed cost plus total variable cost, where the total
variable cost is the variable cost per unit of the cost driver multiplied by the number of units. The cost
function expresses the relationship between fixed and variable costs.
The cost function can be written in either of two ways, as follows:
y = a + bx
Where: y = the dependent variable and the predicted value of y, which here is
total costs
a = the constant coefficient and the y-intercept, or the value of y when
the value of x is zero, which here is the fixed cost
b = the variable coefficient and the slope of the line, which here is the
variable cost per unit of the activity level
x = the independent variable, which here represents the number of units
of the activity level
OR
y = ax + b
Where: y = the dependent variable and the predicted value of y, or total costs
a = the variable coefficient and the slope of the line, or the variable cost
per unit of the activity level
x = the independent variable, or the number of units of the activity level
b = the constant coefficient and the y-intercept, or the fixed cost
Note: The terms may be in any order. The variable representing fixed costs may come first, or it may
come at the end. Whichever way the cost function is written, the constant coefficient (the fixed costs)
will always be by itself and the variable coefficient (the variable cost per unit of the cost driver) will
always be next to the independent variable (the x, or the number of units/hours).
Examples of both methods of calculating the fixed cost and the variable cost per unit of the cost driver
follow.
Example: Ray Corporation experienced the following total production costs during the past year:
                       Ray Corporation
                Production Volumes and Costs
               Production in Units    Total Production Costs
January                  6,257,000                $1,500,000
February                 4,630,000                 1,200,000
March                    5,200,000                 1,300,000
April                    5,443,000                 1,350,000
May                      5,715,000                 1,400,000
June                     3,000,000                   900,000
July                     3,543,000                 1,000,000
August                   3,815,000                 1,050,000
September                5,715,000                 1,400,000
October                  6,800,000                 1,600,000
November                 6,529,000                 1,550,000
December                 5,172,000                 1,300,000
What is Ray Corporation’s fixed production cost and what is its variable production cost per unit?
The highest and the lowest production volumes and their associated production costs are:
               Production in Units    Total Production Costs
October                  6,800,000                $1,600,000
June                     3,000,000                   900,000
Using the first method to estimate the variable cost per unit of the cost driver:
Difference in Associated Costs ÷ Difference in Activity Levels  =  $700,000 ÷ 3,800,000  =  $0.1842105 variable cost per unit
Using the second method (two equations) to estimate the variable cost per unit of the cost driver:
     FC + 6,800,000 VC   =   $1,600,000
  −  FC + 3,000,000 VC   =      900,000
          3,800,000 VC   =   $  700,000
                    VC   =   $0.1842105
The next step is to put the variable cost per unit into an equation to calculate fixed cost using either the
lowest or the highest activity level and its associated total cost. Using the highest volume, 6,800,000
units and its associated total cost of $1,600,000:
FC   =   Total Cost − Variable Cost
FC   =   $1,600,000 − ($0.1842105 × 6,800,000)   =   $347,369
Fixed cost is $347,369, and variable cost is $0.1842105 per unit.
Therefore, the cost function that expresses the relationship between fixed and variable costs and that
can be used to estimate total costs at any activity level is:
y = 347,369 + 0.1842105x
These fixed and variable cost amounts can be proved by substituting them for the variables in the cost
function using the lowest activity level of 3,000,000 units. The result should be the total historical cost
at that historical activity level, and it is.
$347,369 + ($0.1842105 × 3,000,000) = $900,000
Fixed cost is $347,369 and variable cost is $0.1842105 per unit.
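The high-low calculation can be sketched in a few lines of Python using the Ray Corporation data (the function name is an illustrative choice):

# High-low points method: pick the highest and lowest ACTIVITY levels, not the highest and lowest costs.
data = {  # month: (production in units, total production cost)
    "January":   (6_257_000, 1_500_000), "February": (4_630_000, 1_200_000),
    "March":     (5_200_000, 1_300_000), "April":    (5_443_000, 1_350_000),
    "May":       (5_715_000, 1_400_000), "June":     (3_000_000,   900_000),
    "July":      (3_543_000, 1_000_000), "August":   (3_815_000, 1_050_000),
    "September": (5_715_000, 1_400_000), "October":  (6_800_000, 1_600_000),
    "November":  (6_529_000, 1_550_000), "December": (5_172_000, 1_300_000),
}

high_units, high_cost = max(data.values(), key=lambda point: point[0])   # October
low_units, low_cost   = min(data.values(), key=lambda point: point[0])   # June

variable_per_unit = (high_cost - low_cost) / (high_units - low_units)    # about $0.1842105
fixed_cost = high_cost - variable_per_unit * high_units                  # about $347,368 (the example rounds to $347,369)

def estimated_total_cost(units):
    """The cost function y = a + bx: fixed cost plus variable cost per unit times units."""
    return fixed_cost + variable_per_unit * units

print(round(estimated_total_cost(3_000_000)))   # about 900,000, matching the June observation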
D.4. Supply Chain Management
What is Supply Chain Management?
Nearly every product that reaches an end user represents the coordinated efforts of several organizations.
Suppliers provide components to manufacturers, who in turn convert them into finished products that they
ship to distributors for shipping to retailers for purchase by the consumer. All of the organizations involved
in moving a product or service from suppliers to the end-user, the customer, are referred to collectively as
the supply chain.
Supply chain management is the active management of supply chain activities by the members of a supply
chain with the goals of maximizing customer value and achieving a sustainable competitive advantage. The
supply chain firms endeavor to develop and run their supply chains in the most effective and efficient ways
possible. Supply chain activities cover product development, sourcing, production, logistics, and the information systems needed to coordinate the supply chain activities.
The organizations that make up the supply chain are linked together through physical flows of products and
through flows of information. Physical flows involve the movement, storage, and transformation of raw
materials and finished goods. The physical flows are the most visible part of the supply chain, but the
information flows are just as important. Information flows allow the various supply chain partners to coordinate their long-term plans and to control the day-to-day flow of goods and material up and down the
supply chain.
By sharing information and by planning and coordinating their activities, all the members of the supply
chain can be in a position to respond quickly to needs without having to maintain large inventories. Retailers
share daily sales information with distributors and the manufacturer, so that the manufacturer knows how
much production to schedule and how much raw material to order and when, and the distributors know
how much to order. The trading partners share inventory information. The sharing of information reduces
uncertainty as to demand.
The result of effective supply chain management is fewer stockouts at the retail level, reduction of excess
manufacturing by the manufacturer and thus reduction of excess finished goods inventories, and fewer rush
and expedited orders. Each company in the supply chain is able to carry lower inventories, thus reducing
the amount of cash tied up in inventories for all.
Some supply chain management goes so far as the retailer allowing the distributor or the distributor allowing the manufacturer to manage its inventories, shipping product to it automatically whenever its inventory
of an item gets low. Such a practice is called supplier-managed or vendor-managed inventory.
Along with the benefits of supply chain management, problems can arise because of communication difficulties, trust issues, incompatible information systems, and the required increases in personnel and financial resources.
Lean Resource Management
“Lean manufacturing,” the earliest form of lean resource management, was a philosophy and system of
manufacturing developed originally by Toyota following World War II. Toyota called the system the Toyota
Production System (TPS).
In 1990, a book titled The Machine That Changed the World by James P. Womack, Daniel T. Jones, and
Daniel Roos was published. The Machine That Changed the World articulated the Toyota principles and
coined the term “lean production” or “lean manufacturing” to describe the principles. The authors documented the advantages of lean production over the existing mass production model and predicted that the
method would eventually take over in place of the mass production model. That did occur, as manufacturers
throughout the world adapted the system to meet their own manufacturing needs. It has become a global
standard that virtually all manufacturing companies must adopt in order to remain competitive.
However, the authors of The Machine That Changed the World also predicted that the system would ultimately spread beyond production and be applied to every value-creating function such as health care,
retail, and distribution. That is happening today.
Womack and Jones authored another book in 1996 that was updated and expanded in 2003, called Lean
Thinking. Lean thinking is a new way of thinking about the roles of firms and functions in managing the
flow of value creation—called the “value stream”—from concept to the final customer. The emphasis in lean
thinking is on adding value to the customer while cutting out waste, or muda in Japanese.
•   “Adding value to the customer” is not the same thing as adding value to the product or service. If extra options added to a product or service are not things the customer wants or is willing to pay for, they may well add value to the product, but they do not add value to the customer.

•   “Muda” or “waste” is anything that does not add value to the customer or anything the customer is not willing to pay for.
Identifying and eliminating waste is a primary focus of lean resource management. For example, in manufacturing, the primary wasteful activities addressed by lean production include:
1) Over-production, that is, producing more items than customers want.
2) Delay, or waiting for processing, and parts sitting in storage.
3) Transporting parts and materials from process to process or to various storage locations.
4) Over-processing, or doing more work on a part or product than is necessary.
5) Inventory, as in committing too much money and storage space to unsold parts or products.
6) Motion, or moving parts more than the minimum amount required to complete and ship them.
7) Making defective parts, that is, creating parts or products that are not sellable due to defects and that must be discarded or reworked.
Waste inhibits throughput, the rate at which work proceeds through a manufacturing system. Machine
downtime, waiting for materials, out-of-stock supplies, operator errors, and poorly designed processes all
contribute to poor throughput in a manufacturing system. Eliminating waste increases throughput. Even
small reductions in waste can have a cumulative effect in increasing throughput.
In Lean Thinking, Womack and Jones defined “lean thinking” as five principles that incorporate the concepts
of the “lean enterprise” and “lean resource management.”
1) Specify value as the starting point for lean thinking. "Value" is defined by the ultimate customer
in terms of a specific good or service that meets the customer’s needs at a specific price and at a
specific time. The producer must provide the value customers actually want through dialogue with
specific customers. Providing a good or service that customers do not want is wasteful, even if it
is provided in the most efficient manner. Existing assets, technologies, and capabilities to supply
a product or service are irrelevant if the product or service they can supply is not what customers
want.
2) The next step is to identify the value stream. The "value stream" is all the specific actions
needed to provide a good or service. It involves problem-solving—developing a concept and designing and producing the product or service; information management—including order-taking,
scheduling, and delivery; and the physical transformation process—converting raw materials to a
finished product. The value stream includes all the upstream and downstream participants in the
process, as well, not just a single company. Duplication of effort and inefficient processes can be
taking place at any point in the value stream. Mapping the value stream for a product or service
can expose many steps that create no value for the customer and thus are waste that can be
eliminated. All of the concerned parties working together to create a channel for the entire value
stream and removing the waste is called the lean enterprise.
3) The remaining steps are then lined up to create continuous flow. "Flow" is the antithesis of batch
processing. Doing work in batches, whether it is production work or other activities such as invoicing or accounts receivable, leads to inefficiencies because it can mean long waits between batches
when volume is low. Unless volume is in the millions of units, it is actually more efficient to create
continuous flow from one process to the next without using assembly lines. Processing steps of
different types can be conducted immediately adjacent to each other so the object undergoing
conversion is kept in continuous flow if the tools can be quickly changed and if the equipment is
made smaller. Once production has begun, the product is kept moving through the value stream
without ever placing it into a holding area to wait for later processing. When flow is introduced,
time required for processing is dramatically reduced. Furthermore, shifting demand can be accommodated quickly because flow provides the ability to design, schedule, and make exactly what the
customer wants when the customer wants it. That leads to the next step.
4) Pull is the next step. "Pull" means the customer pulls the product from the producer as needed
rather than the producer’s pushing products onto the customers. When producers stop making
products customers don’t want, they can end periodic price discounting that is needed to move the
unwanted goods. Customer demand becomes more stable because customers know they can get
what they want when they want it. A good example of “pull” is the printing of books (such as this
one) in response to orders received and supplying electronic versions of the book that the user
can print as needed.
5) Perfection, or rather, "continuous improvement," is the final step. Product teams in dialogue with
customers find more ways to specify value and learn more ways to enhance flow and pull. As the
first four principles continuously interact, value flows faster and more waste is exposed in the value
stream. The lean enterprise seeks perfection by continuing to make improvements.
An additional principle has been added in more recent years—that of respect for people, where “people”
include employees as well as customers, shareholders, business partners, and communities.
Lean Concepts
Minimal machine downtime when changing from manufacturing one item to manufacturing a different
item is an important component of lean production. The changeover process generally involves removing
and replacing dies from machine beds and removing and replacing direct materials used. The more the
downtime involved in the changeover process can be reduced, the less waste of resources takes place.
“SMED,” or “Single Minute Exchange of Die” is a primary method of speeding up the changeover process.
The goal of SMED is to change dies and other components in less than 10 minutes. The most powerful
method of attaining SMED is to convert all “internal setup” procedures (procedures that can be completed
only while the machine is down) to “external setup” procedures (procedures that can be completed while
the machine is running). For example, preparation for the next setup can be done while the machine or
process is still running, resulting in less downtime because the machine needs to be stopped only during
the actual setup activities.
Often, large batches are assumed to be more economical than small batches due to excessively long or
difficult changeover procedures. However, production of large batches results in large on-hand inventories.
Reducing setup time makes it practical to produce smaller batches. Smaller batches can reduce finished
goods inventory on hand, allow for more varieties of products that can be produced more quickly, and lead
to greater customer responsiveness.
Ideally, batching is not done at all in lean production. The goal of lean production is to maintain continuous
flow, meaning once production of a product has begun, the product is kept moving through the value
stream without ever placing it into a holding area for later processing.
In lean production, the plant layout is re-arranged by manufacturing cells or work cells. Each work cell
produces a specific product or product type. The work cells are generally laid out in a U-shape or horseshoe
shape, but the shape can be whatever works best. The configuration’s purpose is to enable workers to
easily move from one process to another and to pass parts from one worker to another with little effort.
The goal in the layout of the work cell is for the workers to be able to pass a part or product through every
needed process with a minimum amount of wasted motion and distance. Each worker in each cell knows
how to operate all the machines in that cell and can perform supporting tasks within that cell, reducing
downtime resulting from breakdowns or employee absences. Furthermore, a properly laid-out work cell can
produce a product with a staff of just one person moving from station to station, or fully staffed with a
worker at each station, or staffed somewhere in between. Product demand determines the staffing needed
in each work cell. The rate of production is matched to the demand to avoid creating excess inventory or
incurring excess costs.
Kaizen is part of the lean manufacturing philosophy. The term kaizen is a Japanese word that means
“improvement.” As used in business, it implies “continuous improvement,” or slow but constant incremental
improvements being made in all areas of business operations. Standard costs used in manufacturing may
be either ideal standards, attainable only under the best possible conditions, or practical (expected)
standards, which are challenging but attainable under normal conditions. Toyota would say that
standards in manufacturing are temporary and not absolutes. Improvement is always possible, and the
goal is to attain the ideal standard. Even though practical standards are being attained, the ultimate goal
is still not being achieved. The concept of kaizen has extended to other business operations outside the
manufacturing function, and it will be discussed later in this volume in that context.
Kanban is also a component of lean manufacturing. Kanban refers to a signal that tells workers that more
work needs to be done. Kanban is covered in more detail in the next few pages.
Error- and mistake-proofing means creating improvements on many different levels so that products are
made correctly the first time. Tooling and processes are often reworked to produce error-free products or to catch
errors before they result in spoiled products or rework. Even the design of the product
may be changed to minimize errors in manufacturing.
Just-in-time (JIT) production and inventory management are also used in lean manufacturing. JIT is
a process for synchronizing materials, operators, and equipment so that the people and the materials are
where they need to be, when they need to be there, and in the condition they need to be in. Just-in-time
processes are discussed in more detail in the next topic.
Benefits of Lean Resource Management
• Quality performance: fewer defects and rework.
• Fewer machine and process breakdowns.
• Lower levels of inventory.
• Less space required for manufacturing and storage.
• Greater efficiency and increased output per person-hour.
• Improved delivery performance.
• Greater customer satisfaction.
• Improved employee morale and involvement.
• Improved supplier relations.
• Lower costs due to elimination of waste, leading to higher operating income.
• Increased business because of increased customer responsiveness.
Beyond Manufacturing
The concept of lean resource management can be and is being extended beyond manufacturing to every
enterprise. Examples include but are certainly not limited to the following:
• In health care, lean thinking can be used to reduce waiting time for patients, reduce waste in hospitals' inventories, and in some cases, it can lead to better quality care.
• In a warehouse, individual items can be organized by size and by frequency of demand, with parts most frequently demanded moved closest to the beginning of the sorting and picking areas. Appropriately-sized containers can be used to hold the items and pick them, and picking can take place more quickly. Ordering can be done daily instead of weekly or monthly and include just what was shipped that day. As a result, smaller inventories can be carried of a broader selection of items and the investment in inventory can be minimized.
• Service organizations tend to have long cycle times, complex variables, multiple decision points, interactions with various computer systems, and complex interactions with customers. A team that does similar work can standardize the steps they take, even with complex, variable processes, in order to maximize the consistency and quality of services provided to customers, subject to continuous improvement efforts.
Lean principles create greater value for the customer while using fewer resources, that is, increasing value
while reducing waste. Less waste also means lower costs. However, "fewer resources" should not be confused with cost-cutting activities such as laying off employees. Short-term cost-cutting can lead to short-term increases in profit, but without a focus on increased value, customers will figure out that corners
are being cut. On the other hand, if creating more value with fewer resources is done properly, there will
be plenty of work for everybody because satisfied customers will want more.
Just-in-Time (JIT) Inventory Management Systems
Just-in-time inventory management systems are used in lean manufacturing. They are based on a manufacturing philosophy that combines purchasing, production, and inventory control into one function. The
goal of a JIT system is to minimize the level of inventories held in the plant at all stages of production,
including raw materials, work-in-process, and finished goods inventories while meeting customer demand
in a timely manner with high-quality products at the lowest possible cost.
The advantage of a JIT system is reduction in the cost of carrying the inventory. In addition to reducing the
amount invested in the inventory, the cost savings include reduction in the risk of damage, theft, loss, or
a lack of ability to sell the finished goods.
One of the main differences between JIT and traditional inventory systems is that JIT is a “demand-pull
system” rather than a “push system.” In a push system, a department produces and sends all that it can
to the next step for further processing, which means that the manufacturer is producing something without
understanding customer demand. The result of a push system can be large, useless stocks of inventory.
The main idea of a demand-pull system such as JIT is that nothing is produced until the next
process in the assembly line needs it. In other words, nothing is produced until a customer orders it,
and then it is produced very quickly. The demand-pull feature of JIT requires close coordination between
and among workstations. Close coordination between and among workstations can keep the flow of goods
smooth in spite of the low levels of inventory.
To implement the JIT approach and to minimize inventory storage, the factory must be reorganized to
permit lean manufacturing, as discussed in the preceding topic. Thus, lean manufacturing and just-in-time
inventory management go together.
Elimination of defects is an important part of a JIT system. Because of the close coordination between and
among workstations and the minimum inventories held at each workstation, defects caused at one workstation very quickly affect other workstations. JIT requires problems and defects to be solved by eliminating
their root causes as quickly as possible. Since inventories are low, workers are able to trace problems to
their source and resolve them at the point where they originated. In addition to the advantage of lower
carrying costs for inventory, other benefits of a JIT system include greater emphasis on improving quality
by eliminating the causes of rework, scrap,16 and waste.
Supply chain management is also an important part of just-in-time inventory management, and just-in-time inventory management is an important part of supply chain management. Because inventory levels
are kept low in a JIT system, the company must have very close relationships with its suppliers to make
certain that the suppliers make frequent deliveries of smaller amounts of inventory. In a JIT system, inventory is purchased so that it will be delivered just as needed for production (or just as needed for sales,
if the company is a reseller instead of a manufacturer). The inventory must be of the required quality
because the company has no extra inventory to use in place of any defective units that are delivered.
Because very little inventory is held, a supplier that does not deliver direct materials on time or delivers
direct materials that do not meet quality standards can cause the company to not be able to meet its own
scheduled deliveries. Therefore, a company that uses JIT purchasing must choose its suppliers carefully
and maintain long-term supplier relationships.
A goal of JIT is to reduce batch setup time because reduced setup time makes production of smaller batches
more economical. The smaller batches enable the inventory reductions and enable the company to respond
quickly to changes in customer demand. The reduced setup time also leads to lower manufacturing lead
times. As mentioned in the discussion of lean resource management, batch setup times can be reduced by separating the
required activities into preparation activities and actual setup activities. The preparation for the next setup
can be done while the machine or process is still running, resulting in less downtime because the machine
needs to be stopped only during the actual setup activities. Material handling for setups can be improved
by moving the material closer to the machine. These are only two suggestions, and many other things could
be done to reduce setup times, depending on the actual process.

16 Scrap is materials left over from product manufacturing such as small amounts of supplies or surplus materials. Unlike waste, scrap usually has some monetary value, especially if it can be recycled.
Furthermore, JIT systems typically require less floor space than traditional factories do for equivalent levels
of production because large amounts of inventories do not need to be stored. Reductions in square footage
can reduce energy use for heating, air conditioning, and lighting. Even more importantly, reducing the
needed floor space can reduce the need to construct additional production facilities, reducing the need for
capital investment and the associated environmental impacts that result from construction material use,
land use, and construction waste.
Just-in-time production also has costs and shortcomings. The reduced level of inventory carries with it an
increased risk of stockout costs and can lead to more frequent trips for parts and material inputs from sister
facilities or from suppliers. More frequent trips can contribute to traffic congestion and environmental impacts associated with additional fuel use and additional vehicle emissions. If the products produced have
large or unpredictable market fluctuations, a JIT system may not be able to reduce or eliminate overproduction and associated waste. Furthermore, JIT implementation is not appropriate for high-mix
manufacturing environments, which often have thousands of products and dozens of work centers.
Benefits of Just-in-Time Inventory Management
• JIT permits reductions in the cost of carrying inventory, including the investment in inventory and reduction in the risk of damage, theft, loss, or a lack of ability to sell the finished goods.
• Because inventories are low, workers are able to trace problems and defects to their source and resolve them at the point where they originated. Because problems with quality are resolved quickly, the causes of rework, scrap, and waste are eliminated, leading to improved quality.
• Batch setup time is reduced, making production of smaller batches more economical and leading to lower manufacturing lead times. The smaller batches and lower manufacturing lead times enable inventory reductions and enable the company to respond quickly to changes in customer demand.
• JIT systems typically require less floor space than traditional factories do, leading to reduced facility costs.
Limitations of Just-in-Time Inventory Management
• The demand-pull feature of JIT requires close coordination between and among workstations to keep the flow of goods smooth despite the low levels of inventory held.
• Because of the close coordination between and among workstations and the minimum inventories held at each workstation, defects caused at one workstation very quickly affect other workstations, so problems and defects must be solved as quickly as possible.
• Since inventory levels are kept low, raw materials received must be of the required quality and delivered on time because the company has no excess inventory to use in place of any defective units that are delivered or if a delivery is delayed. Therefore, the company must maintain very close relationships with its suppliers.
• The reduced inventory carries an increased risk of stockout costs.
• If the products produced have large or unpredictable market fluctuations, a JIT system may not be able to reduce or eliminate overproduction and associated waste.
• JIT is not appropriate for high-mix manufacturing environments, which may have thousands of products and dozens of work centers.
Kanban
Kanban is a Japanese inventory system. The word “kanban” means “card” or “sign” or “visual record” in
Japanese. Kanban is an integral part of lean manufacturing and JIT systems. Kanban provides the physical
inventory control cues that signal the need to move raw materials from the previous process.
The core of the kanban concept is that components are delivered to the production line on an “as needed”
basis, the need signaled, for example, by receipt of a card and an empty container, thus eliminating storage
in the production area. Kanban is part of a chain process where orders flow from one process to another,
so production of components is pulled to the production line, rather than pushed (as is done in the traditional
forecast-oriented system).
A kanban can be a card, a labeled container, a computer order, or some other device that is used to signal
that more products or parts are needed from the previous production process. The kanban contains information on the exact product or component specifications needed for the next process. Reusable containers
may serve as the kanban, assuring that only what is needed gets produced.
Kanban can be used to control work-in-process (WIP), production, and inventory flow, thus contributing to
eliminating overproduction.
However, if production is being controlled perfectly, kanban will not be needed because the necessary parts
will arrive where they are needed at just the right time. If the parts do not arrive where and when needed,
though, then the kanban is sent to request the needed parts so that the station can keep operating. As
production control is improved, fewer kanban will be needed because the parts will almost always be where
they are needed when they are needed.
The major kanban principles are:
• Kanban works backward from downstream to upstream in the production process, starting with the customer's order. At each step, only as many parts are withdrawn as the kanban instructs, helping ensure that only what is ordered is produced. The necessary parts in a given step always accompany the kanban to ensure visual control.
• The upstream processes produce only what has been withdrawn. Items are produced only in the sequence in which the kanban are received and only in the amounts indicated on the kanban.
• Only products that are 100 percent defect-free continue on through the production line. At each step in the production line, defects are recognized and corrected before any more defective units can be produced.
• The number of kanban should be decreased over time. As mentioned above, kanban are used when the needed components do not show up on time. As areas of needed improvement are addressed, the total number of kanban is minimized. By constantly improving production control and reducing the total number of kanban, continuous improvement is facilitated while the overall level of stock in production is reduced.
Different types of kanban include supplier kanban (orders given to outside parts suppliers when parts are
needed for assembly lines); in-factory kanban (used between processes in the factory); and production
kanban (indicating operating instructions for processes within a line).
It should be mentioned that kanban can be extended beyond being a lean manufacturing and JIT technique
because it can also support industrial reengineering and HR management.
Introduction to MRP, MRPII, and ERP
MRP, MRPII, and ERP systems are all integrated information systems that have evolved from early database
management systems.
• MRP stands for Material Requirements Planning
• MRPII refers to Manufacturing Resource Planning
• ERP stands for Enterprise Resource Planning
MRP and MRPII systems are predecessors of ERP systems, though MRP and MRPII are still used widely in
manufacturing organizations.
Material Requirements Planning (MRP) systems help determine what raw materials to order for production,
when to order them, and how much to order. Manufacturing Resource Planning (MRPII) systems followed
MRP and added integration with finance and personnel resources.
Enterprise Resource Planning (ERP) takes integration further by including all the systems of the organization, not just the manufacturing systems. ERP systems address the problem of paper-based tasks that
cause information in organizations to be entered into systems that do not “talk” to one another. For example, a salesperson takes an order and submits the order on paper to an order entry clerk, who prepares the
invoice and shipping documents. The shipping documents are delivered manually to the shipping department, and the shipping department prepares the shipment and ships the order. After shipping, the sale is
recorded and the customer's account is updated with the receivable due. If the organization maintains
customer relationship management software, the order information is entered separately into that database
so that if the customer calls about the order, the customer service person will be able to discuss it
knowledgeably, because the customer service person does not have access to the accounting records.
The above is only a minor example, and it does not even include the communication needed with production
to make certain the product ordered will be available to ship. Entering the same information into multiple
systems causes duplication of effort and leaves the organization more vulnerable to input errors.
Enterprise Resource Planning integrates all departments and functions across a company onto a single
computer system with a single database so that the information needed by all areas of the company will be
available to the people who need it for planning, manufacturing, order fulfillment, and other purposes.
MRP, MRPII, and ERP all provide information for decision-making by means of a centralized database.
Material Requirements Planning (MRP)
Material requirements planning, or MRP, is an approach to inventory management that uses computer
software to help manage a manufacturing process. It is a system for ordering and scheduling of dependent
demand inventories.
Dependent demand is demand for items that are components, or subassemblies, used in the production
of a finished good. The demand for them is dependent on the demand for the finished good.
MRP is a “push-through” inventory management system. In a push-through system, finished goods are
manufactured for inventory on the basis of demand forecasts. MRP makes it possible to have the needed
materials available when they are needed and where they are needed.
When demand forecasts are made by the sales group, the MRP software breaks out the finished products
to be produced into the required components and determines total quantities to be ordered of each component and the timing for ordering each component, based on information about inventory of each
component already on hand, vendor lead times and other parameters that are input into the software.
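To illustrate the kind of calculation MRP software performs, the following minimal Python sketch works through a single-level requirements explosion. The product structure, on-hand quantities, and lead times shown are hypothetical; a real MRP run also handles multi-level bills of material, lot sizing, scheduled receipts, and safety stock.

    # Minimal sketch (hypothetical data): single-level MRP requirements explosion.
    # Gross requirements come from the demand forecast, are netted against on-hand
    # inventory, and order release dates are offset by each component's lead time.
    forecast_finished_goods = 1_000            # finished units to be produced

    bill_of_materials = {"frame": 1, "wheel": 2, "bolt": 8}   # quantity per finished unit
    on_hand = {"frame": 150, "wheel": 300, "bolt": 2_000}     # units already in inventory
    lead_time_days = {"frame": 20, "wheel": 10, "bolt": 5}    # vendor lead times
    production_start_day = 60                  # day the components must be available

    for component, qty_per_unit in bill_of_materials.items():
        gross_requirement = forecast_finished_goods * qty_per_unit
        planned_order_qty = max(0, gross_requirement - on_hand[component])
        order_release_day = production_start_day - lead_time_days[component]
        print(component, planned_order_qty, "units, order on day", order_release_day)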
Once the quantities and the timing have been worked out, the required cash to pay for the components can
be forecasted and arranged. MRP can be used to reduce the amount of cash needed by the organization,
which in turn improves profitability and ROI. MRP creates the antithesis of the situation often found in old
manufacturing organizations where large amounts of cash are tied up in inventory before products can be
assembled and sold. Instead, MRP aims to decrease the amount of cash tied up through careful planning
and management.
Although MRP is primarily a push inventory system, it can also be used in a “demand-pull” situation, for example if an unexpected order is received, to determine the components to be purchased and
when each should be purchased in order to produce the special order as efficiently and quickly as possible
using just-in-time (JIT) inventory management techniques.
MRP uses the following information in order to determine what outputs will be necessary at each stage of
production and when to place orders for each needed input component:
• Demand forecasts for finished goods.
• A bill of materials for each finished product. The bill of materials lists all the materials, components, and subassemblies required for each finished product.
• The quantities of materials, components, and products already in inventory, used to determine the necessary outputs at each stage of production.
A challenge in using MRP is the need for management accountants to collect and maintain up-to-date
inventory records, because accurate records of inventory quantities and costs are necessary. Management accountants also need
to estimate setup costs and downtime costs for production runs. When setup costs are high, producing
larger batches, and thus incurring larger inventory carrying costs, can reduce total cost because the number
of setups needed is reduced.
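The trade-off between setup cost and carrying cost can be illustrated with a short calculation. The demand, setup cost, and carrying cost below are purely hypothetical; the sketch only shows why high setup costs favor larger batches, and why reducing setup cost (as lean and JIT practices do) makes smaller batches economical.

    # Minimal sketch (hypothetical figures): setup cost vs. carrying cost by batch size.
    annual_demand = 12_000        # units per year
    setup_cost = 900              # cost per production run (setup)
    carrying_cost = 2             # cost per unit of average inventory per year

    for batch_size in (500, 1_000, 3_000):
        setup_total = (annual_demand / batch_size) * setup_cost   # smaller batches mean more setups
        carrying_total = (batch_size / 2) * carrying_cost         # average inventory is half the batch
        print(batch_size, "units per batch -> total cost", setup_total + carrying_total)
    # With a $900 setup cost, the 3,000-unit batch is cheapest; if setup cost were
    # cut to, say, $20, the 500-unit batch would become the low-cost choice.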
Manufacturing Resource Planning (MRPII)
Manufacturing Resource Planning (MRPII) is a successor to Material Requirements Planning. While MRP is
concerned mainly with raw materials for manufacturing, MRPII’s concerns are more extensive. MRPII integrates information regarding the entire manufacturing process, including functions such as production
planning and scheduling, capacity requirement planning, job costing, financial management and forecasting, order processing, shop floor control, time and attendance, performance measurement, and sales and
operations planning.
An MRPII system is designed to centralize, integrate and process information for effective decision making
in scheduling, design engineering, inventory management and cost control in manufacturing.
However, if a firm wants to integrate information on its non-manufacturing functions with the information
on its manufacturing functions, it needs an ERP system.
Enterprise Resource Planning (ERP)
Enterprise Resource Planning (ERP) is a successor to Manufacturing Resource Planning. An ERP system is
usually a suite of integrated applications that is used to collect, store, manage, and interpret data across
the organization. Often the information is available in real-time. The applications share data, facilitating
information flow among business functions.
In addition to integrating production information, ERP systems integrate the logistics, distribution, sales,
marketing, customer service, human resources, and all accounting and finance functions into a single system. ERP systems track all of a firm's resources (cash, raw materials, inventory, fixed assets, and human
resources), forecast resource requirements, and track sales, shipping, invoicing, and the status of the firm's
commitments (orders, purchase orders, and payroll, for example).
The main focus of an ERP system is tracking all business resources and commitments regardless of
where, when, or by whom they were entered into the system. For example, a customer support
representative using an ERP system would be able to look up a customer’s order, see that the product the
customer ordered is on backorder due to a production delay, and provide an estimate for the delivery based
on the expected arrival of the required raw materials. Without the sales, support, and production systems
being tightly integrated through an ERP system, such a level of customer service would be very difficult if
not impossible to achieve.
Writing software that serves the needs of finance as well as human resources and those in the warehouse
is not an easy task. Each of the individual departments in a company usually has its own computer system
and software to help perform its specific work. Through ERP, however, all of them are combined into a single,
integrated software program, a change that generally involves business process re-engineering.
All of the data for the entire company is also stored in a single location, called an enterprise-wide database, also known as a data warehouse. By having all of the company’s information from different
departments in the same location, a company is able to more efficiently manage and access the information.
Through data warehousing and data mining facilities, individuals in the company can sort through and
utilize the company’s information more quickly and easily than if it were stored in separate locations. In
data mining, the data in the data warehouse are analyzed to reveal patterns and trends and discover new
correlations to develop business information.
Note: If two things are correlated with one another, it means there is a close connection between them.
It may be that one of the things causes or influences the other, or it may be that something entirely
different is causing or influencing both of the things that are correlated.
Note: The major components of an ERP system are:
• Production planning
• Integrated logistics
• Accounting and finance
• Human resources
• Sales, distribution, and order management
Any subdivision of any of the above components is, by itself, not a component of an ERP system.
Early ERP systems ran on mainframe computers and could require several years and several million dollars
to implement, so only the largest companies were able to take advantage of them. As the systems evolved,
vendors created a new generation of ERP systems targeted to small and mid-sized businesses that were
easier to install, easier to manage, and required less implementation time and less startup cost. Many ERP
systems are now cloud-based, and the software is not purchased and is not installed at the user company
at all but is accessed over the Internet. Use of cloud-based ERP systems allows smaller and mid-sized
businesses to access only what they need and to reduce their investment in hardware and IT personnel.
Increasingly, ERP systems are being extended outside the organization as well, for example enabling supply
chain management solutions in which vendors can access their customers’ production schedules and materials inventory levels so they know when to ship more raw materials. Interconnected ERP systems are
known as extended enterprise solutions. ERP has also been adapted to support e-commerce applications.
Operational Benefits of ERP Systems
• Integrated back-office systems result in better customer service and production and distribution efficiencies.
• Centralizing computing resources and IT staff reduces IT costs versus every department maintaining its own systems and IT staff.
• Day-to-day operations are facilitated. All employees can easily gain access to real-time information they need to do their jobs. Cross-functional information is quickly available to managers regarding business processes and performance, significantly improving their ability to make business decisions and control the factors of production. As a result, the business is able to adapt more easily to change and quickly take advantage of new business opportunities.
• Business processes can be monitored in new and different ways, such as with dashboards.17
• Communication and coordination are improved across departments, leading to greater efficiencies in production, planning, and decision-making that can lead to lower production costs, lower marketing expenses, and other efficiencies.
• Data duplication is reduced, and the labor required to create, distribute, and use system outputs is reduced.
• Expenses can be better managed and controlled.
• Inventory management is facilitated. Detailed inventory records are available, simplifying inventory transactions. Inventories can be managed more effectively to keep them at optimal levels.
• Trends can be more easily identified.
• The efficiency of financial reporting can be increased.
• Resource planning as a part of strategic planning is simplified. Senior management has access to the information it needs in order to do strategic planning.
Outsourcing
When a company outsources, an external company performs one or more of its internal functions. By
outsourcing certain functions to a specialist, management can free up resources within the company in
order to focus on the primary operations of the company. It may also be cheaper to outsource a function
to a company that specializes in an area than it is to operate and support that function internally. The
disadvantage of outsourcing is that the company loses direct control over the outsourced functions.
Theory of Constraints (TOC)
For a company to be competitive, it needs to be able to respond quickly to customer orders. Theory of
Constraints is an important way for a company to speed up its manufacturing time so it can improve its
customer response time and thus its competitiveness and its profitability. Theory of Constraints (TOC) was
developed by Eliyahu M. Goldratt in the 1980s as a means of making decisions at the operational level that
will impact a company’s profitability positively.
17 The word "dashboard" originally described the cluster of instruments used by drivers to monitor at a glance the major
functions of a vehicle. As used in business, it refers to a screen in a software application, a browser-based application,
or a desktop application that displays in one place information relevant to a given objective or process. For example, a
dashboard for a manufacturing process might show productivity information for a period, variances from standards, and
quality information such as the average number of failed inspections per hour. For senior management, it might present
key performance indicators, balanced scorecard data, or sales performance data, to name just a few possible metrics
that might be chosen. The dashboard may be linked to a database that allows the data presented to be constantly
updated.
Manufacturing cycle time, also called manufacturing lead time or throughput time, is usually defined
as the amount of time between the receipt of an order by manufacturing and the time the finished good is
produced. However, different firms may define the beginning of the cycle differently. For some, it begins
when a customer places an order. For others, it can begin when a production batch is scheduled, when the
raw materials for the order are ordered, or when actual production on the order begins.
In addition to the actual production time, manufacturing cycle time includes activities (and non-activities)
such as waiting time (the time after the order is received by the manufacturing department and before
manufacturing begins, or time spent waiting for parts for the next process); time spent inspecting products
and correcting defects; and time spent moving the parts, the work-in-process, and the finished goods from
one place to another.
Manufacturing cycle efficiency, or MCE, is the ratio of the actual time spent on production to the total
manufacturing cycle time.
Manufacturing Cycle Efficiency (MCE) = Value-Adding Manufacturing Time ÷ Total Manufacturing Cycle Time
Notice that only actual manufacturing time—time when value is being added to the product—is included in
the numerator of the MCE calculation. Waiting time, time spent on equipment maintenance, and other non-value-adding times are not included in the numerator (though they are included in the denominator). For
example, if the actual time spent on manufacturing is 3 days while the total manufacturing cycle time is 10
days (because the waiting time is 7 days), the MCE is 3 ÷ 10, which equals 0.30 or 30%. Companies would
like their MCE to be as close to 1.00 as possible, because that means very little time is being spent on non-value-adding activities.
A diagram of the total customer response time, from the time the customer places an order until the order
is delivered to the customer, follows.
Order Received From Customer → Order Received By Manufacturing → Manufacturing Begins → Finished Goods Complete → Order Delivered To Customer

Receipt Time runs from receipt of the customer's order until the order is received by manufacturing. Waiting Time plus Manufacturing Time make up the Manufacturing Cycle Time, which runs from receipt of the order by manufacturing until the finished goods are complete. Delivery Time runs from completion of the finished goods until the order is delivered to the customer. Receipt Time, Manufacturing Cycle Time, and Delivery Time together make up the total Customer-Response Time.
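The relationships in the diagram, and the MCE calculation above, can be expressed as a short calculation. The receipt and delivery times below are hypothetical; the waiting and manufacturing times are those from the MCE example above.

    # Minimal sketch: customer-response time and MCE (times in days; receipt and delivery
    # times are hypothetical, waiting and manufacturing times are from the example above).
    receipt_time = 1           # order received from customer -> order received by manufacturing
    waiting_time = 7           # order received by manufacturing -> manufacturing begins
    manufacturing_time = 3     # manufacturing begins -> finished goods complete (value-adding)
    delivery_time = 2          # finished goods complete -> order delivered to customer

    manufacturing_cycle_time = waiting_time + manufacturing_time                      # 10 days
    customer_response_time = receipt_time + manufacturing_cycle_time + delivery_time  # 13 days
    mce = manufacturing_time / manufacturing_cycle_time                               # 0.30, or 30%
    print(manufacturing_cycle_time, customer_response_time, mce)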
Theory of Constraints can be used to decrease a company’s manufacturing cycle time and its costs. If a
company is not using TOC, management might be devoting its time to improving efficiency and speed in
all areas of the manufacturing process equally. However, TOC stresses that managers should focus their
attention only on areas that are constraints or bottlenecks.
Constraints are the activities that slow down the product’s total cycle time while areas and people performing other activities have slack time. If managers spend their time and effort speeding up activities that
are not constraints, they are wasting resources. Unnecessary efficiency just results in the buildup of work
waiting to be done at the constraint, while activities following the constraint do not have enough work to
do because work is held up in the constraint process. If activities that are not constraints are speeded up,
total production speed is not improved despite the extra cost incurred to improve efficiency. Managers’ time
and effort and the associated cost should be spent on speeding up the activities that cause production to
slow down.
Theory of Constraints says that constraint processes are the only areas where performance improvements will bring about a meaningful change in overall profitability. To improve profitability,
managers need to identify the constraints and focus on improving them. Theory of Constraints
focuses on measurements that are linked directly to performance measures such as net profit, return on
investment, and cash flow. It gives managers a method of making decisions on a day-to-day basis that will
truly affect the overall performance of the organization.
Throughput time, or manufacturing cycle time, is the time that elapses between the receipt of a customer order by the manufacturing activity and the completion and shipment of the finished goods.
Throughput itself, by contrast, is a rate: the rate at which units can be produced and shipped. For example, if it
takes 2 days to produce and ship 100 units, then the throughput rate is 50 units per day.
Throughput contribution margin is the rate at which contribution margin is being earned in monetary
terms. Throughput contribution margin is the revenue earned from the sale of units minus the totally variable costs only (usually only direct materials) for those units sold during a given period of time. If the sale
price for one unit is $500, the direct materials cost is $300 per unit, and throughput rate per day is 50 units
per day, then the throughput contribution margin per day is $200 × 50 = $10,000. Or, calculated
another way, if 50 units can be produced and shipped in one 8-hour day, then it takes 8 hours ÷ 50, or
0.16 of one hour, to produce and ship one unit. $200 ÷ 0.16 = $1,250 per hour. In an 8-hour day, throughput contribution margin is $1,250 × 8, or $10,000.
Note: Throughput contribution margin is the amount earned for product produced and shipped during a
period such as one hour, one day, or one month, calculated using revenue for the period minus only the
strictly variable costs. Strictly variable costs are usually only direct materials costs.
Following are the steps in managing constrained operations through the use of TOC analysis:
1) Identify the constraint. Recognize that the constraint or bottleneck operation determines the
throughput contribution margin of the system as a whole, and identify the constraint by determining where total required hours exceed available hours. The management accountants work with
manufacturing managers and engineers to develop a flow diagram that shows the sequence of
processes, the amount of time each process requires given current demand levels, and the amount
of time available in terms of labor hours and machine hours. The flow diagram enables the management accountants, manufacturing managers, and engineers to identify the constraint.
2) Determine the most profitable product mix given the constraint (see the sketch following this list). The most profitable product mix is the combination of products that maximizes total operating income. Product profitability
is measured using the throughput contribution margin. The throughput contribution margin is
the product price less materials cost, including the cost of all materials used, all purchased components, and all materials-handling costs. Direct labor and other manufacturing costs such as
overhead are excluded, because it is assumed they will not change in the short term. The throughput contribution margin of each product is divided by the number of minutes required for one unit
at the constraint to calculate the throughput contribution margin per minute per unit in the
constraint activity for each product. The product with the highest throughput contribution margin
per minute in the constraint will be the most profitable, even though it may have a lower throughput contribution margin for the manufacturing process as a whole.
3) Maximize the flow through the constraint. The management accountant looks for ways to
simplify the constraint process, reduce setup time, or reduce other delays due to non-value-adding
activities such as machine breakdowns, in order to speed the flow through the constraint.
4) Add capacity to the constraint. Increase the production capabilities of the constraint by adding
capacity such as additional equipment and labor. Adding equipment and labor are longer-term
measures to consider if it is possible and profitable to do so.
5) Redesign the manufacturing process for flexibility and fast cycle time. Analyze the system
to see if improvements can be made by redesigning the manufacturing process, introducing new
technology, or revising the product line by eliminating hard-to-manufacture products or by redesigning products so they can be manufactured more easily. This final step in managing constrained
operations is the most strategic response to the constraint.
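The ranking described in step 2 can be sketched as follows. The products, prices, materials costs, and constraint times are hypothetical; the point is only that products are ranked by throughput contribution margin per minute at the constraint rather than by margin per unit.

    # Minimal sketch (hypothetical data): rank products by throughput contribution
    # margin (TCM) per minute required at the constraint.
    products = [
        # (name, selling price, direct materials cost, minutes at constraint per unit)
        ("A", 500, 300, 10),
        ("B", 400, 150, 25),
        ("C", 250, 100, 5),
    ]

    ranked = sorted(products, key=lambda p: (p[1] - p[2]) / p[3], reverse=True)
    for name, price, materials, minutes in ranked:
        print(name, "TCM per constraint minute =", (price - materials) / minutes)
    # Product C ($30 per constraint minute) outranks A ($20) and B ($10), even
    # though B has the highest TCM per unit ($250).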
TOC Terms
Exploiting the constraint means taking advantage of the existing capacity at the constraint, because that
capacity can be wasted by producing the wrong products or by improper policies and procedures for scheduling and controlling the constraint. “Exploiting the constraint” means using it to its best advantage to
produce the product that will be most profitable to sell and by scheduling work so that the constraint is
kept busy all the time.
Elevating the constraint means adding capacity to the constrained activity or otherwise adjusting the
resources to increase the output possible from the constrained activity. Adding capacity could mean, for example, purchasing
an additional machine or using a new technology that makes better use of the existing machine.
Drum-Buffer-Rope
Drum-Buffer-Rope is the production planning methodology portion of Theory of Constraints. It is a tool that
can be used to balance the flow of production through the constraint. It minimizes the buildup of excess
inventory at the constraint while at the same time keeping the constraint producing at all times.
• Drum: The drum is the process that takes the longest time. It is the constraint. The constraint
is called the drum because it provides the beat that sets the pace for the whole production process.
All production flows must be synchronized to the drum.
• Rope: The rope consists of all of the processes that lead up to the drum, or the constraint. Activities preceding the drum must be carefully scheduled so that they do not produce more output
than can be processed by the constraint, because producing too much output in activities preceding
the constraint creates excess inventory and its associated costs without increasing throughput
contribution margin. At the same time, though, the constraint must be kept working with no down
time.
• Buffer: The buffer is a minimum amount of work-in-process inventory (a "buffer" inventory) of
jobs waiting for the constraint. The purpose of the buffer is to make sure the constraint process is
kept busy at all times. Production schedules are planned so that workers in the non-constrained
processes will not produce any more output than can be processed by the drum, the constraint
process; but at the same time, the non-constrained processes must produce enough to keep the
buffer full.
Example: A company manufactures garments. The manufacture of a jacket involves four separate processes:
1) Cutting the fabric pieces
2) Stitching the fabric pieces together
3) Hemming the sleeves and the bottom of the jacket
4) Finishing the jacket, folding it, and packaging it in clear plastic.
Hemming the sleeves and the bottom of the jacket requires the most time and is the constraint.
The garment manufacturer sells only to wholesalers, who in turn sell to retailers. The time required for
each process for one jacket and the available times are as follows. The total hours available per month
are calculated by assuming 22 work days per person per month and 7 hours work per person per day.
                                         Minutes       Number of     Total Hours
                                         Required      Employees     Available
                                         per Unit                    per Month
Cutting                                     18             20           3,080
Stitching                                   20             23           3,542
Hemming                                     30             28           4,312
Finishing, folding, and packaging           10             11           1,694
Demand averages 10,000 jackets per month. The constraint is the hemming process, the process for which the hours required exceed the hours available. Below are the total hours required to produce 10,000 jackets per month using the current number of employees and the current
equipment. The total hours required for each process is 10,000 × minutes required per unit ÷ 60.
                                          Total         Total       Difference Between
                                          Hours         Hours       Hrs. Required and
                                         Required      Available      Hrs. Available
Cutting                                    3,000         3,080             (80)
Stitching                                  3,334         3,542            (208)
Hemming                                    5,000         4,312             688
Finishing, folding, and packaging          1,667         1,694             (27)
Because the jobs are not highly specialized, some of the employees currently doing other work could be
shifted to the hemming process: the other processes have some slack time, while the
hemming process requires more time than is currently available. One employee currently doing cutting
could spend half time doing hemming instead, and one employee could be shifted from stitching to
hemming. The company has enough equipment to accommodate those changes. If those changes are
made, the number of employees per process will change to 19.5 for Cutting, 22 for Stitching, and 29.5
for Hemming. That would create the following changes in total hours available and in the differences between hours required and hours available:
                                          Total         Total       Difference Between
                                          Hours         Hours       Hrs. Required and
                                         Required      Available      Hrs. Available
Cutting                                    3,000         3,003              (3)
Stitching                                  3,334         3,388             (54)
Hemming                                    5,000         4,543             457
Finishing, folding, and packaging          1,667         1,694             (27)
The production capability of the whole department is dependent on the production capability of the
constraint, which is the hemming process. The production line cannot move any faster than its slowest
process. After making these duty reassignments, the company still does not have the capacity to produce
10,000 units per month. With 4,543 hours available at the constraint, the company can produce only
9,086 jackets per month (4,543 hours available × 60 minutes per hour ÷ 30 minutes to hem one jacket).
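The arithmetic in the example can be reproduced with a short Python sketch using the figures given above (22 work days and 7 hours per person per month, with the staff levels after the reassignments).

    import math

    # Minimal sketch: hours required vs. hours available and output at the constraint,
    # using the jacket example's figures after the staff reassignments.
    hours_per_employee_per_month = 22 * 7            # 154 hours
    demand = 10_000                                  # jackets per month

    processes = {                                    # process: (minutes per unit, employees)
        "Cutting":   (18, 19.5),
        "Stitching": (20, 22),
        "Hemming":   (30, 29.5),
        "Finishing": (10, 11),
    }

    for name, (minutes, employees) in processes.items():
        hours_required = math.ceil(demand * minutes / 60)        # rounded up, as in the text
        hours_available = employees * hours_per_employee_per_month
        print(f"{name}: required {hours_required:,} h, available {hours_available:,.0f} h")

    hemming_hours = 29.5 * hours_per_employee_per_month          # 4,543 hours at the constraint
    max_jackets = hemming_hours * 60 / 30                        # minutes available / 30 min per jacket
    print("Maximum output:", int(max_jackets), "jackets per month")   # 9,086 jackets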
The long-term answer to the problem is to add capacity to the constraint (elevate the constraint).
However, in the short-term, the challenge is to exploit the constraint by maximizing the flow through
the constraint—making sure that the hemming operation has no downtime during which it is waiting for
the prior processes to provide it with jackets to hem. To do that, the company needs to make sure it has
a small work-in-process inventory waiting to be hemmed at all times while not letting the inventory stack
up too much in front of the hemming process.
The Drum-Buffer-Rope system is one of the ways the size of the work-in-process inventory waiting to
be hemmed can be controlled.
Input → Cutting → Stitching → BUFFER (small amount of work-in-process inventory) → Hemming (the DRUM, i.e., the CONSTRAINT) → Finishing → Output

The ROPE represents the scheduling of the processes that lead up to the drum (cutting and stitching), which are controlled so that the buffer in front of the hemming process stays small but never runs empty.
The work in cutting and stitching must be carefully controlled so that just the right amount of work-in-process is waiting in the buffer to be hemmed at any one time: enough that the hemming process is always
busy and always working at its maximum, but not so much that excess work-in-process builds up in the buffer.
In the long term, if the company wants to meet the full demand, it will need to either hire more hemming
employees and invest in more equipment for them to use, or it will need to find some way to increase
the speed of the hemming process. The company finds it can invest in attachments for the hemming
machines that will reduce the time for the hemming from 30 minutes per jacket to 27 minutes per jacket.
After analyzing the costs of the two alternatives, management determines it will cost less to purchase
the attachments for the hemming machines in order to make better use of the 4,543 hours available
with current employees and equipment. With hemming requiring only 27 minutes per jacket, the time
required for 10,000 jackets will be 4,500 hours (10,000 jackets × 27 minutes to hem one jacket ÷ 60
minutes per hour). Now, the hours required for Hemming are decreased and are as follows:
                                          Total         Total       Difference Between
                                          Hours         Hours       Hrs. Required and
                                         Required      Available      Hrs. Available
Cutting                                    3,000         3,003              (3)
Stitching                                  3,334         3,388             (54)
Hemming                                    4,500         4,543             (43)
Finishing, folding, and packaging          1,667         1,694             (27)
The company now has the capability to manufacture 10,000 jackets per month.
When the Theory of Constraints is applied to production, speed in manufacturing is improved by increasing
throughput contribution margin while decreasing investments and decreasing operating costs.
Throughput (product produced and shipped) will be maximized, investments will be minimized, and operating costs will be minimized. TOC helps reduce throughput time, or cycle time, and therefore operating
costs. Furthermore, the use of TOC enables the company to carry a lower overall level of inventory, so
inventory investment is decreased.
In TOC terms, “investments” is the sum of costs in direct materials, work-in-process and finished goods
inventories; R&D; and costs of equipment and buildings. Inventory costs are limited to costs that are
strictly variable, called “super-variable.” Super-variable costs are usually only direct materials.
Note: Absorption costing for external financial reporting purposes is not done any differently when TOC
is being used. Inventory costs for internal TOC analysis purposes are simply different from inventory
costs for financial reporting purposes.
Also, for the purpose of using Theory of Constraints, operating costs are equal to all operating costs other
than direct materials or any other strictly variable costs incurred to earn throughput contribution margin.
In TOC, “operating cost” is defined as the cost of converting the inventory into throughput.
Thus, operating costs in TOC include salaries and wages, rent, utilities, depreciation, indirect materials, and
other overhead costs. For TOC purposes, all of these operating costs are treated as period costs that are
expensed as incurred, and they are not inventoried. Inventory for TOC includes only the direct material
costs. (Again, “inventory” for TOC purposes is used only for TOC management. It is not the same as “inventory” for absorption costing.) All employee costs are considered operating costs, whether they are direct
labor or indirect labor. Direct labor is not included in the calculation of throughput contribution
margin, and thus it is considered an operating cost for TOC.
When work is properly scheduled, the constraint will achieve its maximum performance without interruptions. The material is released only as needed without building up unneeded material (inventory) at the
non-bottleneck resources, enabling the factory to achieve optimal performance.
Summary of Theory of Constraints:
•	Throughput is product produced and shipped.
•	Throughput time or manufacturing cycle time is the time that elapses between the receipt by the manufacturing department of a customer’s order and the shipment of the order.
•	Throughput contribution margin is revenue minus direct materials cost for a given period of time.
•	Only strictly variable costs—which are usually only direct materials—are considered inventory costs. All other costs, even direct labor, are considered operating, or fixed, costs.
•	Theory of Constraints assumes that operating costs are fixed costs because they are difficult to change in the short run.
•	Theory of Constraints focuses on short-run maximization of throughput contribution margin by managing operations at the constraint in order to improve the performance of production as a whole.
Some ways that operations at the constraint process can be relieved include:
•	Eliminate any idle time at the constraint operation, such as time when the machine is not actually processing products. For example, perhaps an additional employee could be hired to work at the constraint operation to help unload completed units as soon as a batch is processed and to help load the machine for the next batch. If doing that would increase throughput at the constraint by a minimum of 2,000 units per year at an annual cost of $40,000, and if the throughput contribution margin (selling price minus direct material cost per unit) is greater than $20 per unit ($40,000 ÷ 2,000), then hiring an additional employee at the constraint would increase operating income. (A short sketch of this decision rule appears after this list.)
•	Process only products that increase throughput contribution margin and do not produce products that will simply remain in finished goods inventory. Making products that remain in inventory does nothing to increase throughput contribution margin.
•	Move items that do not need to be processed on the constraint operation to other, non-constrained machines, or outsource their production.
•	Reduce setup time and processing time at the constraint operation. If reducing setup time costs an additional $10,000 per year but it enables the company to produce 500 additional units per year, again, if the throughput contribution margin is greater than $20 per unit ($10,000 ÷ 500), operating income will increase.
•	Improve the quality of processing at the constrained resource. Poor quality is more costly at a constraint operation than at a non-constraint operation. Since a non-constraint operation has unused capacity, no throughput contribution is forgone when a non-constraint operation produces product that cannot be sold, so the cost of the defective production is limited to the wasted materials. However, unsellable production at the constraint operation costs more than just the cost of the wasted materials. The cost of poor quality at a constraint also includes the opportunity cost of lost throughput contribution margin, because the constraint does not have any extra time to waste. Lost time at the constraint is lost throughput contribution margin. Therefore, the constraint operation should not waste time processing units that were defective when received from the previous process. Units in production should be inspected before they are passed on to the constraint operation for processing.
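The hiring and setup-time examples in the list above apply the same test: added spending at the constraint pays off when the throughput contribution margin per unit exceeds the cost per additional unit of throughput gained. A minimal sketch of that test follows; the $25 margin passed in is a hypothetical value, not a figure from the text.

```python
def constraint_investment_pays_off(added_cost, added_units, throughput_cm_per_unit):
    """Return True if spending 'added_cost' to gain 'added_units' of throughput
    at the constraint increases operating income."""
    cost_per_added_unit = added_cost / added_units
    return throughput_cm_per_unit > cost_per_added_unit

# Hiring an extra employee: $40,000 per year for 2,000 more units (threshold $20/unit).
print(constraint_investment_pays_off(40_000, 2_000, throughput_cm_per_unit=25))  # True
# Reducing setup time: $10,000 per year for 500 more units (threshold $20/unit).
print(constraint_investment_pays_off(10_000, 500, throughput_cm_per_unit=25))    # True
```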
If these actions are successful in increasing the capacity of the constraint operation, its capacity may eventually increase to the point where it exceeds the capacity of some other process, and so the other process
may become the constraint. The company would then focus its continuous improvement actions on increasing the efficiency and capacity of the new constraint process, and so on.
Theory of Constraints Reports
A Theory of Constraints Report conveys throughput contribution margin and selected operating data.
It identifies each product’s throughput contribution margin per hour required for the constraint. It also
identifies the most profitable product or products and enables monitoring to achieve maximum profitability given existing constraints. By identifying the most profitable products, it can assist in making
decisions about product mix.
Calculating Throughput Contribution Margin
The concept of throughput contribution margin as it is used in Theory of Constraints analysis is a variation
on the concept of contribution margin. The contribution margin is the difference between total revenues
and total variable costs, and contribution margin per unit is simply the sale price for one unit minus the
total variable costs for one unit. The variable costs included in the calculation of the contribution margin
include direct materials, direct labor, variable overhead, and variable selling expenses.
However, in Theory of Constraints analysis, everything except for direct materials and any other totally
variable cost is considered an operating cost and thus a period cost. The throughput contribution margin or
throughput contribution margin per unit in TOC analysis is the selling price minus only the totally variable costs, as referred to in the above discussion of TOC analysis. The totally variable costs are usually
only the direct materials costs. Calculating the contribution margin in this way is called super-variable
costing. The assumption is made that labor and overheads are fixed costs because usually they cannot be
changed in the short term.
If an exam question requires calculation of the throughput contribution margin (or throughput contribution or throughput margin—they all mean the same thing) for a period of time, calculate how many units
can be produced in that time. The throughput contribution margin will be the throughput contribution
margin per unit multiplied by the number of units that can be produced in the given time.
Example: Using the garment manufacturer again, the manufacturer has two different styles of jackets:
a down-filled jacket for winter and a light jacket for spring. For both jackets, the constraint is the hemming operation. The winter jacket sells for $125, and the direct materials cost is $75. The spring jacket
sells for $75, and the direct materials cost is $30. The hemming process takes 30 minutes for the winter
jacket and 25 minutes for the spring jacket. Demand for the winter jacket is 6,000 jackets per month,
whereas demand for the spring jacket is 8,000 jackets per month. The company has 4,543 hours available per month for hemming. Which jacket should the company give priority to in scheduling production?
The company should give priority to the product with the higher throughput margin per minute,
calculated using the time required in the constraint process.
                                         Winter Jacket    Spring Jacket
Price                                        $125.00           $75.00
Direct materials cost                          75.00            30.00
Throughput contribution margin               $ 50.00           $45.00
Constraint time for hemming (minutes)          30.00            25.00
Throughput margin per minute                 $  1.67           $ 1.80
The company should manufacture the 8,000 spring jackets needed before manufacturing any winter
jackets, because the spring jacket’s throughput margin per minute in the constrained resource is higher.
That will mean using 3,334 hemming hours for the 8,000 spring jackets required (8,000 × 25 minutes
per jacket ÷ 60 minutes per hour). That will leave 1,209 hemming hours available for the winter jackets
(4,543 hours available – 3,334 hours used for the spring jackets). With those 1,209 hours, the company
can manufacture 2,418 winter jackets (1,209 hours × 60 minutes per hour ÷ 30 minutes per jacket).
That will maximize the company’s total contribution margin.
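A minimal sketch of the product-mix ranking in this example, using the prices, materials costs, hemming times, demands, and available hemming hours given above:

```python
# (price, direct materials, hemming minutes, monthly demand) for each jacket.
products = {
    "Winter": {"price": 125, "materials": 75, "hem_min": 30, "demand": 6_000},
    "Spring": {"price": 75,  "materials": 30, "hem_min": 25, "demand": 8_000},
}
hemming_minutes_available = 4_543 * 60

# Rank products by throughput contribution margin per constraint (hemming) minute.
ranked = sorted(products.items(),
                key=lambda kv: (kv[1]["price"] - kv[1]["materials"]) / kv[1]["hem_min"],
                reverse=True)

plan = {}
for name, p in ranked:
    units = min(p["demand"], hemming_minutes_available // p["hem_min"])
    plan[name] = units
    hemming_minutes_available -= units * p["hem_min"]

print(plan)  # {'Spring': 8000, 'Winter': 2419}
# The example's 2,418 winter jackets reflects rounding the spring hemming time
# up to 3,334 whole hours before allocating the remaining hours.
```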
Example: Below is another use for the throughput contribution margin.
A company has an opportunity to get a special order for 100,000 units. The company has excess capacity
(unused direct labor and equipment) and would really like to have this order. Because of the unused
capacity, the company would not need to hire any additional employees if it gets the order and would
not need to cut back on production of other orders. The company is bidding against several other companies, so management knows in order to get this order, it must get the price down as low as possible.
The calculated costs per unit are as follows:
Direct materials     $ 6
Direct labor           3
Fixed Overhead         2
Total                $11
Management determines that if it bids $900,000 ($9 per unit), it will just cover the variable costs (direct
materials and direct labor), so it bids $900,000. To management’s amazement, however, the company
loses the bid to a company that bid $750,000.
If the company had bid $700,000, or $7 per unit, it would have gotten the job. The company would have
recovered its materials costs and its operating income would have been increased by $100,000 ($1 per
unit × 100,000 units). The direct labor cost would not have been covered; but then, the company was
paying those people anyway, even without the special order. Even though management did not have
work for them without the special order, it had no plans to lay them off. Therefore, the direct labor cost
is like a fixed cost because it would have been the same whether the company had the special order or
not. Likewise, the fixed manufacturing costs would have been no different whether the company had the
order or not.
To illustrate this incremental analysis, operating income is as follows without the special order, assuming a selling price of $12 and a current volume of 1,000,000 units:
Sales revenue $12 × 1,000,000                $ 12,000,000
Variable costs:
   Direct materials $6 × 1,000,000              6,000,000
   Direct labor $3 × 1,000,000                  3,000,000
Contribution margin                          $  3,000,000
Fixed overhead $2 × 1,000,000                   2,000,000
Operating income                             $  1,000,000
If management had bid $700,000 and gotten the special order, operating income would have been as
follows:
Sales revenue for regular orders $12 × 1,000,000     $ 12,000,000
Sales revenue for special order $7 × 100,000               700,000
Total revenue                                         $ 12,700,000
Variable costs:
   Direct materials $6 × 1,100,000                       6,600,000
   Direct labor $3 × 1,000,000                           3,000,000
Contribution margin                                   $  3,100,000
Fixed overhead $2 × 1,000,000                            2,000,000
Operating income                                      $  1,100,000
If the company had calculated a throughput contribution margin using a price of $7 per unit ($7 −
$6 direct materials = throughput contribution margin of $1 per unit) in preparing the bid instead, it
would have gotten the special order and would have earned $100,000 more in operating income
(100,000 units × $1) than it earned without the special order.
Next time, management will use the throughput contribution margin in preparing its bid.
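The two income statements above can be reproduced with a short calculation. This is a minimal sketch using the example's per-unit amounts; the function name is illustrative only.

```python
def operating_income(regular_units, regular_price, special_units=0, special_price=0):
    """Operating income for the example. Direct labor and fixed overhead do not
    change with the special order because no extra workers or capacity are needed."""
    revenue = regular_units * regular_price + special_units * special_price
    direct_materials = (regular_units + special_units) * 6   # $6 per unit
    direct_labor = regular_units * 3     # $3 per unit, unchanged by the special order
    fixed_overhead = regular_units * 2   # $2 per unit allocation, $2,000,000 either way
    return revenue - direct_materials - direct_labor - fixed_overhead

print(operating_income(1_000_000, 12))                                          # 1,000,000
print(operating_income(1_000_000, 12, special_units=100_000, special_price=7))  # 1,100,000
```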
The following information is for the next four questions: EEK Industries produces thingamajigs.
The thingamajigs sell for $200 per unit. The product goes through three processes. The processes, costs
and volumes are as follows:
                                        Molding                  Heat Transfer             Trimming
Direct labor required
   per unit @ $25/hr.                   0.5 hrs. (2 units/hr.)   0.25 hrs. (4 units/hr.)   0.5 hrs. (2 units/hr.)
Direct labor hours available
   per day                              225 hrs.                 100 hrs.                  200 hrs.
Direct materials                        $100 per unit            $20 per unit              $0 per unit
Annual fixed manufacturing overhead     $1,000,000               $750,000                  $250,000
Fixed selling and admin. exp.
   (allocated according to
   available DL hrs.)                   $428,571                 $190,476                  $380,953
Daily capacity in units based
   on DL hrs. available                 450 units                400 units                 400 units
Annual capacity (260
   working days per year)               117,000 units            104,000 units             104,000 units
Annual production & sales               104,000 units            104,000 units             104,000 units
EEK has a policy of not laying off idle workers but instead finds other work for them to do such as
maintenance on equipment. The company can sell all it can produce.
Question 41: What is the throughput contribution per unit according to the theory of constraints?
a) $48.75
b) $80.00
c) $30.59
d) $29.52
Question 42: What is the throughput contribution per day?
a) $36,000
b) $19,500
c) $12,236
d) $32,000
Question 43: What are the annual operating costs?
a) $3,412,500
b) $1,000,000
c) $6,412,500
d) $5,412,500
Question 44: The Heat Transfer and Trimming processes are very labor-intensive. How could EEK better
assign its existing workers to increase its operating income?
a) By reassigning an employee for 5 hours per day from Molding to Heat Transfer.
b) By reassigning an employee for 5 hours per day from Molding to Heat Transfer and reassigning employees for 10 hours per day from Molding to Trimming.
c) By reassigning employees for 10 hours per day from Molding to Heat Transfer and reassigning an employee for 5 hours per day from Molding to Trimming.
d) By reassigning an employee for 5 hours per day from Molding to Heat Transfer and reassigning an employee for 5 hours per day from Molding to Trimming.
(HOCK)
Question 45: Urban Blooms is a company that grows flowering plants and sells them in attractively
designed container arrangements to upscale hotels, restaurants and offices throughout the greater New
York City metropolitan area. When first established, the organization produced every aspect of its product on site and handled all business functions from its facility, in either the greenhouses, production
areas, or office. The only exception was importing expensive, large containers from Mexico. After five
years in business, Urban Blooms had become very profitable and increased its staff from 10 to 200
employees, including horticulturalists, production/design workers, business managers, and sales staff.
However, the owners found it increasingly difficult to keep up with the complexities and demands
brought about by the company’s continuing growth. It became apparent that several areas of the business were not time- or cost-effective for Urban Blooms, such as mixing its own potting soil for the plants
and maintaining payroll accounting. Urban Blooms contacted Hampshire Farms to create a special potting soil mix for its plants and hired Lindeman Associates to handle the company’s payroll accounting
requirements.
This process is referred to as:
a) Materials resource planning (MRP).
b) Activity-based costing (ABC).
c) Outsourcing.
d) Lean production.
(HOCK)
Capacity Level and Management Decisions
Estimates about plant production capacity, called denominator level capacity, are used for various purposes.
Production activity levels are used in developing standards and the budget and allocating manufacturing
overhead, as previously discussed. However, production activity levels are also used in making management decisions, particularly decisions about pricing, bidding, and product mix.
A company does not need to use the same denominator level capacity for management decisions as it uses
to set standards and allocate overhead for its external financial statements. The capacity level used in
making decisions should meet the need for the purpose for which it will be used. Recall that the various
choices of capacity levels are:
1)	Theoretical, or ideal, capacity, which assumes the company will produce at its absolutely most efficient level at all times, with no breaks and no downtime.
2)	Practical capacity, which is the most that a company can reasonably expect to produce in a year’s time. Practical capacity is the theoretical level reduced by allowances for idle time and downtime, but not reduced for any possible decreases in sales demand.
3)	Master budget capacity, which is the amount of output actually expected during the budget period based on expected demand.
4)	Normal capacity, which is the level of annual activity achieved in the long run that will satisfy average customer demand over a period of 2 to 3 years.
The level of plant capacity to use in decision-making is an important strategic decision that needs to be
made by management. In fact, that decision should really be made before any decisions are made about
how much plant capacity to provide. If the company needs to manufacture 15,000 units per year, then
management needs to determine how much it will cost per year to acquire the capacity to produce those
15,000 units. Since the capacity acquired will be the capacity needed for 15,000 units, 15,000 units is the
company’s practical capacity (theoretical capacity is not attainable).
In the short run, the capacity and its cost are fixed. If the company does not use all of the capacity it has
available, the fixed costs of that capacity do not decrease the way variable costs do. If the cost to acquire
and maintain the capacity to make 15,000 units per year is $1,125,000 per year, then at a production level
of 15,000 units, the fixed cost per unit is $75. However, if the company does not produce 15,000 units in
a given year but instead produces only 12,500 units, not all of the capacity supplied at $75 per unit will be
needed or used that year. The company will be paying for unused capacity. The fixed cost per unit produced
based on actual production would be $90 per unit ($1,125,000 ÷ 12,500 units); but in reality, the fixed
cost per unit should not change just because the number of units manufactured changes. The “real” fixed
cost per unit manufactured is still $75 per unit; and the company has unused capacity cost of $187,500
for the units not produced ($75 × [15,000 – 12,500]) that it needs to absorb as an expense.
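A minimal sketch of the unused-capacity computation just described, using the example's $1,125,000 capacity cost, 15,000-unit practical capacity, and 12,500 units actually produced:

```python
capacity_cost = 1_125_000      # annual cost of the capacity to make 15,000 units
practical_capacity = 15_000    # units
actual_production = 12_500     # units

fixed_cost_per_unit = capacity_cost / practical_capacity    # $75, used for pricing and bidding
unused_capacity_cost = fixed_cost_per_unit * (practical_capacity - actual_production)  # $187,500

# Dividing by actual production instead would misleadingly raise the unit cost to $90.
misleading_rate = capacity_cost / actual_production          # $90
print(fixed_cost_per_unit, unused_capacity_cost, misleading_rate)
```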
Pricing decisions and bidding decisions should be made using the $75 per unit fixed cost that results from
using practical capacity to calculate the fixed cost per unit, regardless of whether that volume is being
produced or not.
Use of practical capacity as the denominator level for pricing and bidding decisions best expresses what the
true cost per unit of supplying the capacity should be, regardless of the company’s usage of its available
capacity. Recall that practical capacity is the absolute most that the company can reasonably expect to
produce in a year’s time using the capacity it has. It is the theoretical capacity level reduced by allowances
for unavoidable interruptions such as shutdowns for holidays or scheduled maintenance, though not decreased for any expected decrease in sales demand. Therefore, practical capacity is the best denominator
level to use in pricing and other management decisions.
Customers cannot be expected to absorb the cost of unused capacity by paying higher prices charged to cover the higher fixed cost allocated to each unit when production is below the expected level. Customers will not absorb it; they will take their business elsewhere.
That will result in even lower production and even greater fixed cost per unit, even higher prices, and even
lower sales, called a downward demand spiral. A downward demand spiral can put a company out of
business. Customers expect a company to manage its unused capacity or else to bear the cost of the unused
capacity, not to pass it along to them.
Since the use of practical capacity excludes the cost of the unused capacity from the per unit fixed cost, it
gives management a more accurate idea of the resources needed to produce one unit of product and thus
the resources needed to produce the volume the company is actually producing. If the company does not
need all of its capacity, management should make adjustments. It may be that the company’s unneeded
capacity can be rented or sold. Alternatively, management might be able to make use of that unused
capacity by developing a new product or by producing a product for another company that is outsourcing
some of its manufacturing.
Highlighting the cost of the unused capacity gives management the opportunity to make strategic decisions
for its use.
Capacity Level and its Effect on Financial Statements
The company will have an under-application of manufacturing overhead any time the actual overhead incurred is greater than the amount of manufacturing overhead applied to the level of production achieved.
Under-application of overhead occurs when production is lower than anticipated and as a result, the manufacturing overhead charged to the products produced was less than the actual incurred manufacturing
overhead. On the other hand, if production is higher than anticipated, the amount of manufacturing overhead applied will be greater than the amount actually incurred, and the manufacturing overhead will be
over-applied.
At the end of the accounting period, variances that result from differences between the actual overhead
incurred and the overhead applied must be resolved as part of the closing activities. The variances should
be prorated among ending work-in-process inventory, ending finished goods inventory, and cost of goods
sold for the period according to the amount of overhead included in each that was allocated to the
current period’s production. Or, if the variances are not material, they may be 100% closed out to cost
of goods sold. A third approach (though not often used) is to restate all amounts using actual cost allocation
rates rather than the budgeted cost application rates.
•	If a variance is closed out 100% to cost of goods sold, reported operating income and inventory balances will vary depending on the activity level that was used to calculate the overhead application rate.
•	If a variance is pro-rated between inventories and cost of goods sold according to the amount of that type of overhead allocated to the current period’s production for each, reported operating income will be the same regardless of what activity level was used to calculate the fixed overhead application rate. The combination of the amount allocated to production and the pro-ration of the variance will cause inventories and cost of goods sold expense to equal what they would have been if the actual rate had been used, under all activity levels.
Note: The pro-ration of under- or over-applied overhead should be done on the basis of the
overhead allocated to production during the current period only, not on the basis of the balances
in the inventories and cost of goods sold accounts at period end. Information on the amount of overhead
allocated to production during the current period should be available in the accounting system.
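A minimal sketch of the pro-ration approach described in the note above; the amounts of overhead allocated to each account during the period are assumed figures for illustration:

```python
# Under-applied overhead: actual exceeded the amount applied during the period.
actual_overhead = 500_000
applied_overhead = 450_000
variance = actual_overhead - applied_overhead           # 50,000 under-applied

# Overhead allocated during the current period that sits in each account at
# period end (assumed amounts). The pro-ration basis is this allocation,
# not the ending account balances.
allocated = {"Ending WIP": 45_000, "Ending finished goods": 90_000,
             "Cost of goods sold": 315_000}
total_allocated = sum(allocated.values())                # 450,000

for account, amount in allocated.items():
    share = variance * amount / total_allocated
    print(f"{account}: add {share:,.0f} of the under-applied overhead")
# WIP +5,000, finished goods +10,000, COGS +35,000, which is the same result as if
# the actual overhead had been allocated to production in the first place.
```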
If variances are closed out 100% to cost of goods sold, the use of master budget capacity or normal capacity
will lead to higher operating income than if theoretical or practical capacity is used. Operating income
will be higher under master budget capacity or normal capacity whether the overhead is under-applied or
over-applied.
When master budget or normal capacity is used, the denominator level will be lower than if theoretical or practical capacity is used, and the resulting application rate will be higher. Therefore, more manufacturing overhead will be allocated to each product produced throughout the period than would be allocated if theoretical
or practical capacity is used. Thus, more manufacturing overhead will be included in the finished goods and
work-in-process inventories on the balance sheet at the end of the period than would be the case with
usage of the other denominator levels.
To explain: When a variance is closed out 100% to cost of goods sold, no adjustment is made to inventories
as part of the closing entries, so inventories under master budget or normal capacity will remain higher
than under the other methods. Since inventories are higher, cost of goods sold will be lower. The result is
that operating income will be higher when master budget or normal capacity is used and 100% of the
variances are closed out to COGS.
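A minimal sketch illustrating this effect with assumed figures: the same actual fixed overhead, production, and sales are evaluated with a practical-capacity denominator and a lower normal-capacity denominator, and the variance is closed entirely to cost of goods sold.

```python
actual_fixed_overhead = 100_000
units_produced = 8_000
units_sold = 6_000

def fixed_oh_in_cogs(denominator_units):
    """Fixed overhead ending up in COGS when the variance is closed 100% to COGS."""
    rate = actual_fixed_overhead / denominator_units
    applied = rate * units_produced
    variance = actual_fixed_overhead - applied        # under-applied if positive
    return rate * units_sold + variance               # overhead in COGS after closing

print(fixed_oh_in_cogs(10_000))  # practical capacity: 80,000 in COGS, 20,000 left in inventory
print(fixed_oh_in_cogs(8_000))   # normal/master budget capacity: 75,000 in COGS, 25,000 in inventory
# The lower denominator puts more overhead into ending inventory, so cost of goods
# sold is lower and operating income is higher.
```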
However, if variances caused by differences between the actual manufacturing overhead incurred and the
overhead applied during the period are pro-rated among finished goods inventory, work in process inventory and cost of goods sold on the basis of the amount of overhead allocated during the current period to
the units in each, the choice of the denominator level for allocating manufacturing costs during the period
will have no effect on the end-of-period financial statements.
After the variances have been resolved and closed out for the period, final operating income and inventory
balances will be the same no matter which capacity level was used to determine the overhead allocation
rate during the period if the variances were pro-rated. Under all of the capacity levels, operating income
and inventory balances will be the same as if the actual incurred overhead had been allocated to the period’s
production.
Note: The above will be true only when the variance is pro-rated among inventories and cost of goods
sold on the basis of the amount of overhead allocated during the current period to the units in each. If
the variance is charged to cost of goods sold only, operating income and inventory balances will not be
the same as if the actual incurred overhead had been allocated to the period’s production.
D.5. Business Process Improvement
A business process is a related group of activities encompassing several functions that produces a specific
product or service of value to a customer or customers. A business process can also be a group of activities that results in higher quality or lowered costs of a product or service.
The Value Chain and Competitive Advantage
Michael Porter, a leading authority on competitive advantage from Harvard Business School, introduced the
concept of the value chain in his 1985 book, Competitive Advantage: Creating and Sustaining Superior
Performance.
Competitive Advantage
Competitive advantage, discussed in Strategic Planning in Section B of Vol. 1 of this textbook, is an advantage that a company has over its competitors that it gains by offering consumers greater value than
they can get from its competitors. Competitive advantage is derived from attributes that enable the organization to outperform its competitors, such as access to natural resources, highly-skilled personnel, a
favorable geographic location, high entry barriers, and so forth. The greater value offered to customers
may be in lower prices for the same product or service; or it may be in offering greater benefits and service
than its competitors do, thereby justifying higher prices; or it may be offering greater benefits at the same
or even at a lower price than its competitors charge.
A company that has competitive advantage will be more profitable than the average company it competes
with for customers. The higher its profits are in comparison to its competitors, the greater its competitive
advantage will be. Competitive advantage leads to increased profitability; and greater profitability leads to
increased competitive advantage. Competitive advantage makes the difference between a company that
succeeds and a company that fails.
[Figure: Competitive advantage and increased profits reinforce each other in a cycle: competitive advantage leads to increased profits, and increased profits strengthen competitive advantage.]
In order to have competitive advantage, a company must have or create two basic things:
1)	Distinctive competencies and the superior efficiency, quality, innovation, and customer responsiveness that result.
2)	The profitability that is derived from the value customers place on its products, the price that it charges for its products, and the costs of creating those products.
Note: A company’s distinctive competencies are the things that it does better than the competition.
The four generic distinctive competencies that give the company competitive advantage are:
•
Superior efficiency
•
Superior quality
•
Superior innovation
•
Superior customer responsiveness
The Value Chain
Manufacturing companies create value for customers by transforming raw materials into something of value
to customers. Retailers create value by providing a range of products to customers in a way that is convenient for the customers, sometimes supported by services such as fitting rooms or personal shopper
advice. In service businesses, value is created when people use inputs of time, knowledge, equipment and
systems to create services of value to their customers.
The value that is created less its cost is called the profit margin:
Value Created − Cost of Creating that Value = Profit Margin
Anything that increases the value to the customer or decreases the cost of creating that value adds to the
profit margin. All of an organization’s functions play a role in lowering the cost structure and increasing the
perceived value of products and services through differentiation. The creation of value involves functions
from R&D through production to marketing and sales, and on to customer service, and includes all the
activities that play supporting roles.
The value chain describes a business unit’s chain of activities for transforming inputs into the outputs that
customers will value. The chronological sequence of the value chain activities adds value to the customer.
The transformation process includes primary activities (business functions), those that are essential for
adding value to the product or service and creating competitive advantage, and support activities, those
activities that support the primary activities. Support activities add value indirectly to the product or service.
Michael Porter’s 1985 Value Chain
The value chain as envisioned by Michael Porter in 1985 looked like the following:
[Figure: Porter’s value chain. The support activities (Infrastructure, Human Resources Management, Technological Development, and Procurement) run across the primary activities (Inbound Logistics, Operations, Outbound Logistics, Marketing and Sales, and Service). Value added minus cost equals the margin.]
Primary Activities:
•
Inbound logistics is materials management and includes receiving and handling inputs to the
product, inventory control, warehousing, scheduling deliveries, and returns to the supplier.
•
Operations activities are activities necessary to transform inputs into final products and services,
such as machining, assembly, equipment and facility maintenance, and packaging.
•
Outbound logistics involves handling of the finished products to distribute them to buyers or to
warehouses. It includes order processing and operation of delivery vehicles.
•
Marketing and sales activities provide the means for buyers to purchase the firm’s products and
services and convince them to do so, such as by advertising, sales activities, marketing channel
selection, and making pricing decisions.
•
Customer service includes activities that provide support to customers by enhancing or maintaining the value of the product. Repair, parts supply, and employee training are service activities.
Support Activities:
Infrastructure includes the administrative activities such as management, accounting, finance, internal
audit, and legal counsel that are necessary to support the whole organization and enable it to maintain its
day-to-day activities. The infrastructure also includes the organizational structure, the organization’s control systems, and its culture. Through strong leadership, top management can shape the company’s
infrastructure and thus the performance of all the other value creation activities that go on within the
company.
Human resource management, through its involvement with recruiting activities, hiring, employee training and development, and compensation, aids the organization in obtaining and keeping the right mix of
skilled people needed to perform its value creation activities effectively. The human resources function
ensures that the people are properly trained, motivated, and compensated. An effective human resources
function leads to greater employee productivity, which lowers costs. A competent staff performs excellent
customer service, which increases value to the customer. Human resource management supports primary
activities, other support activities, and the entire value chain.
Technological development supports efforts to improve products, processes, and services throughout
the support activities and primary activities. Technology development that is related to product development supports the entire value chain, while other technology development, such as investments in IT, is
associated with specific primary or support activities.
Procurement is the process of purchasing inputs that are used in the firm’s value chain. Inputs include
raw materials and supplies as well as fixed assets. Purchased inputs are used not only in the primary
activities, but they are present in every support activity, as well. Procurement may be performed by various
persons throughout the organization in addition to the purchasing department, for example office managers
who may purchase office supplies.
Value Chain Activities Today
Value chain activities vary from business to business and a specific activity may be a primary activity in
one business and a support activity in another business, depending on the business model of each firm.
Business models exist today that were unheard of in 1985 due to the rise of the Internet and on-line
business activities. Various generic versions of the value chain have been suggested by different authors,
and some versions, such as the following, do not separate what might be considered support activities from
primary activities or may not include some activities considered support activities.
R&D  →  Design  →  Production  →  Marketing  →  Distribution  →  Customer Service
•
Research is the search for knowledge that can be used to create new or improved products,
services, or processes. Development uses those research findings in the planning process to
improve the products, services, or processes that the company intends to sell or use internally.
•
Design is the detailed planning and engineering for development efforts.
•
Production is the acquisition of raw materials and the coordination and assembly required to
produce a product or deliver a service. The costs of production include direct materials, direct
labor, and factory overhead (inventoriable costs). In this version of a value chain, inbound logistics
are part of the production phase.
•
Marketing includes advertising, promotion, and sales activities.
•
Distribution is the delivery of products or services to customers. It is the “outbound logistics” of
Porter’s value chain, but it can also be considered a part of marketing and selling.
•
Customer service includes customer support and warranty services after a sale.
Another version of a value chain segregates activities into primary activities and support activities similar
to the way Porter’s value chain did. However, there are a few differences.
[Figure: Another version of the value chain. The primary activities run from inputs to outputs: R&D, Product Design, Production, Marketing and Sales, and Customer Service. The support activities are Infrastructure, Information Systems, Materials Management, and Human Resources.]
In the above version of a value chain, materials management is a support activity, and materials management includes both inbound and outbound logistics. The logistics function manages the movement of
physical materials through the value chain. Materials management controls procurement (inbound logistics), which includes finding vendors and negotiating the best prices. Materials management also controls
the movement of the procured materials through production and distribution (outbound logistics).
Information systems in the above value chain is a support activity. Information systems are the electronic
systems for maintaining records of inventory, sales, prices, and customer service. Information systems can
add value to the customer by tracking inventory and sales so that the company can provide the proper mix
of goods to customers while eliminating items that do not sell well.
The actual activities in an individual company’s value chain will depend on the company’s type of business.
In a service industry, the focus will be on marketing, sales, and customer service rather than on manufacturing and raw materials. In a technology business, the emphasis will be on research and development.
Furthermore, an activity might be considered a primary value chain activity in one firm and a support value
chain activity in another firm. However, all activities in the value chain contribute to creating value.
Note: In any organization, the more value a company creates, the more its customers will be willing to
pay for the product or service, and the more likely they are to keep on buying.
Regardless of the way a company views its value chain, all of the value chain activities contain opportunities
to increase the value to the customer or to decrease costs without decreasing the value to the customer by
reducing non-value-adding activities. Some examples are given below in the topic Value Chain Analysis.
The analysis should take place at a relatively detailed level of operations, at the level of processes that are
just large enough to be managed as separate business activities.
Value Chain Analysis
Value chain analysis can help an organization gain competitive advantage by identifying the ways in which
the organization creates value for its customers. Value chain analysis identifies the steps or activities that
do and do not increase the value to the customers. Once those areas are identified, the organization can
maximize the value created by increasing the related benefits or reducing (even eliminating) non-value-adding activities. The resulting increase in value to the customer or decrease in costs will make the company
more profitable and competitive.
The firm should analyze each process in its operations carefully to determine how each activity contributes
to the company’s operating income and its competitiveness. The goal of value chain analysis is to provide
maximum value to the customer for the minimum possible cost.
For internal decision-making to achieve the goal of providing maximum value at minimum cost, value
chain financial statements may be utilized. In value chain financial statements, all the costs for primary
activities are considered product costs that are allocated to products and inventoried, whereas all the costs
for support activities are considered period costs that are expensed as incurred. Value chain financial statements are significantly different from conventional financial statements because many costs that are period
costs in conventional financial statements (research and development, product design, marketing and sales,
customer service) would be inventoriable costs in value chain financial statements. Value chain financial
statements do not conform to any Generally Accepted Accounting Principles, so their use is limited to internal decision-making.
Steps in Value Chain Analysis
Value chain analysis involves three steps, as follows.
1)	Identify the activities that add value to the finished product. Which activities are value-adding activities depends on the industry the company is in and what the company does (manufacturing, resale, service). They will be whatever activities this firm and firms in its industry perform in the processes of designing a product or service, manufacturing the product, marketing it and providing customer service after the sale.
2)	Identify the cost driver or cost drivers for each activity.
3)	Develop a competitive advantage by increasing the value to the customer or reducing the costs of the activity. For example:
•
Research and development can add value to established products or services by finding
ways to improve them, in addition to developing new ones.
•
The function of production is to assemble the raw materials into finished goods. When production is done efficiently, high quality products can be manufactured while costs are lowered,
leading to higher operating income.
•
Marketing adds value by informing customers about the products or services, which may
increase the utility that customers attribute to the product or service and enable the company
to charge a higher price. Through marketing research, marketing can also discover what customers want and need and can then communicate that to the R&D group so the R&D group
can design products that match the customers’ needs.
•
Customer service after the sale adds value by delighting customers with the responsive service received, thus creating superior value for them. Increased utility for customers because
of excellent customer service can also enable the company to charge more for its products.
•
Efficiency in inbound logistics can significantly lower costs, thus creating value. Controlling
the flow of goods from suppliers into a retailer’s stores and ultimately to the consumer can
eliminate the need to carry large inventories of goods. Lower inventories lead to lower inventory costs and greater value creation.
•
Efficient procurement for a manufacturer can lead to lower manufacturing costs.
•
Information systems can add value to the customer by tracking inventory and sales so that
the company can provide the proper mix of goods to customers while eliminating items that do
not sell well. Staying current with technological advances and maintaining technical systems
excellence are other sources of value creation.
Value chain analysis can also be used to determine what place a firm should occupy in a complete value
chain that consists of multiple businesses. The main concept in such a value chain analysis is that each firm
occupies a selected part or parts of the entire value chain. Which part or parts of the value chain to
occupy is determined by comparative advantage of the individual firm, or where the firm can best provide
value at the lowest possible cost. Some firms manufacture parts that they sell to other firms for assembly
into another product entirely. Then those products may be sold to a wholesaler, who sells them to a retailer,
who sells them to the ultimate consumer. Each of those manufacturers, sellers, and resellers occupies a
place in the value chain.
Question 46: The primary activities or business functions that are considered in the value chain include:
a) Customer service, production, marketing and sales, and human resources.
b) Customer service, production, marketing and sales, and information systems.
c) Infrastructure, human resources, materials management, and R&D.
d) Customer service, production, marketing and sales, and R&D.
(HOCK)
Question 47: An outside consultant has been hired by a manufacturing firm to evaluate each of the
firm’s major products beginning with the design of the products and continuing through the manufacture, warehousing, distribution, sales, and service. The consultant has also been requested to compare
the manufacturer’s major products with firms that are manufacturing and marketing the same or similar
products. The consultant is to identify where customer value can be increased, where costs can be
reduced, and to provide a better understanding of the linkages with customers, suppliers, and other
firms in the industry. The type of analysis that the consultant most likely has been asked to perform
for the manufacturing firm is called a
a) Balanced scorecard study.
b) Benchmarking analysis.
c) SWOT (strengths, weaknesses, opportunities, threats) analysis.
d) Value-chain analysis.
(ICMA 2013)
Process Analysis
A business process is a related group of activities encompassing several functions that produces a specific
product or service of value to a customer or customers. A business process is also a group of activities that
result in higher quality or lowered costs of a product or service.
For an insurance company, settling a claim is a business process. For just about any company, fulfilling a
customer’s order is a business process. Though manufacturing is a process, it is really a sub-process of
order fulfillment—something that must take place in order to fulfill an order. The sales function is also a
sub-process of order fulfillment, because it is also something that must take place before any order is
fulfilled.
Any process has inputs and it has outputs. Inputs are materials, labor, energy, and capital equipment. The
process transforms the inputs into outputs that have value to the customer. In classic economics terms,
inputs are economic resources such as land, labor, and capital. The output of a process may be a manufactured product, or it may be a service provided.
The challenge to a business is to make its processes work effectively and efficiently, in order to accomplish
the most possible with the least waste. Process analysis is used to understand the activities included in a
process and how they are related to each other. A process analysis is a step-by-step breakdown of the
phases of the process that conveys the inputs, outputs, and operations that take place during each phase
of the process. A process analysis is used to improve understanding of how the process operates. The
process analysis usually involves developing a process flowchart 18 that illustrates the various activities and
their interrelationships.
Once a process has been analyzed, the information gained from the analysis can be used to make operating
decisions. Sometimes a process needs improvement, such as eliminating waste or increasing operating
efficiency, but sometimes the process needs to be completely reengineered.
Business Process Reengineering
The term reengineering was originally used to refer to the practice of disassembling a product in order to
redesign it, but it more recently has been applied to the restructuring of organizational processes brought
about by rapidly changing technology and today’s competitive economy. For instance, instead of simply
using computers to automate an outdated process, technological advances bring opportunities to fundamentally change the process itself. In applying the concept of business process reengineering, management
starts with a clean sheet of paper and redesigns processes to accomplish its objectives. Operations that
have become obsolete are discarded.
The philosophy of “reengineering” business processes was set forth by Michael Hammer and James Champy
in their book, Reengineering the Corporation: A Manifesto for Business Revolution. Hammer and Champy
maintained that businesses are still being operated on the assembly line model that led to the industrial
revolution in the 20th century. Specialization in work assignments and the concept of division of labor that
was introduced back then brought about major productivity gains. Productivity increased immensely because workers doing the same things over and over again became very adept at doing those things and
their speed increased tremendously. Furthermore, specialization saved the time that was required for a
worker to move from one task to the next task. The assembly line model worked very well in the 1900s.
However, the authors say that the world has changed since the 1900s, and companies that are operated today on
the same model no longer perform well. The underlying problem, according to the authors, is fragmented
processes, not only in manufacturing but in other business processes as well such as order fulfillment,
purchasing and customer service. The division of labor model has led to fragmented processes where one
department performs a part of the process and then hands the work off to the next department to do its
18 A flowchart is a diagram showing a sequence of work or a process. Flowcharts are used to analyze, design, document, or manage a process or a program. Flowcharts use geometric symbols to show the steps and arrows to show the sequence of the steps.
part. Although this division of labor is modeled on the assembly line that worked so well in the 1900s,
passing work around an organization today slows down the completion of the process and creates additional
costs without adding any value to the product or service the customer receives. In fact, because it wastes
time, passing work around inhibits the corporation’s ability to respond to customers’ needs in a timely
manner.
Business process reengineering involves analyzing and radically redesigning the workflow. Radical redesign
means throwing out the old procedures and inventing new ways of getting the work done. Reengineering
is not about making incremental improvements but it is about making quantum leaps.
Hammer and Champy recommend that in reengineering, the work should be organized around outcomes,
not tasks. In other words, management should think about what it wants to accomplish and then think of
ways to accomplish it rather than thinking of tasks to be done.
The processes in the organization should first be identified, then prioritized for reengineering according to
three criteria: (1) which processes are the most dysfunctional, (2) which will have the greatest impact on
customers, and (3) for which processes reengineering is most feasible.
The authors emphasize the use of technology, not to make old processes work better but to break the rules
and create new ways of working. One of the primary “rules” in the use of technology is that information
should be captured only once, and it should appear simultaneously every place it is needed. Taking information from one system and inputting it into another is an indicator of a broken process that needs to be
reengineered.
People within the organization involved in reengineering typically include:
•
A reengineering project leader who makes a particular reengineering project happen.
•
The process owner, the manager with specific responsibility for the process being reengineered.
•
The reengineering team, a group of people who diagnose the current process and oversee its
redesign and the implementation of the redesign.
•
A reengineering steering committee, a group of senior managers who determine the organization’s
overall reengineering strategy and oversee it.
•
A reengineering czar, a person with overall responsibility for developing reengineering techniques
to be used throughout the organization and for coordinating the activities of the company’s various
reengineering projects.
The reengineering process begins with the customer, not with the company’s product or service. The reengineering team asks itself how it can reorganize the way work is done to provide the best quality and the
lowest-cost goods and services to the customer.
Frequently, the answer to the question is that the company’s value-chain activities could be organized more
effectively. For example, several different people in different areas might be passing work from person to
person to perform a business process. Instead, the same process might be able to be performed by one
person or one group of people all working closely together, at lower cost. Individual job assignments might
become more complex and thus more challenging, and the grouping of people into cross-functional teams
can both reduce costs and increase quality.
After the business process reengineering has been completed and the value-chain activities have been
reorganized to get the product to the final customer more efficiently, quality management takes over and
focuses on continued improvement and refinement of the new process.
Furthermore, internal controls for the reengineered process must not be neglected. When a process is
reengineered, its internal controls must be reengineered, as well. If existing internal controls are disassembled and not replaced with new ones, the process will be without any controls.
Benchmarking Process Performance
One of the best ways to develop the distinctive competencies that lead to superior efficiency, superior
quality, superior innovation, and superior responsiveness to customers, all of which confer competitive advantage to a firm, is to identify and adopt best practices. Best practices can be accomplished
through benchmarking.
A benchmark is a standard of performance; and benchmarking is the process of measuring the organization against the products, practices and services of some of its most efficient global competitors or against
those of other segments of the firm. The company can use these standards, also called best practices, as
a target or model for its own operations. Through the application of research and sophisticated software
analysis tools, companies undertake best practice analysis and then implement improvements in the
firm’s processes to match or beat the benchmark. The improvements could include cutting costs, increasing
output, improving quality, and anything else that will aid the firm in achieving its strategic business goals
and objectives.
Benchmarking continuously strives to emulate (imitate) the performance of the best companies in the world
or the best units in the firm, and through meeting these higher standards, the organization may be able to
create a competitive advantage over its marketplace competitors. The benchmarked company does not
need to be in the same industry as the company that is trying to improve its performance.
The first thing a company must do is to identify the critical success factors for its business and the
processes it needs to benchmark. Critical success factors are the aspects of the company’s performance
that are essential to its competitive advantage and therefore to its success. Each company’s critical success
factors depend upon the type of competition it faces.
After the company identifies its critical success factors, a team is set up to do best practice analysis. Best practice
analysis involves investigating and documenting the best practices for the processes used in performing
the firm’s critical success factor activities. The team members should be from different areas of the business
and have different skills. The team will need to identify what areas need improvement and how it will
accomplish the improvement by utilizing the experience of the benchmarked company.
Certain companies are generally recognized as leaders in specific areas. Some examples are Nordstrom for
retailing, Ritz-Carlton Hotels for service, and Apple Computer for innovation. The American Productivity and
Quality Center (www.apqc.org), a non-profit organization, is a resource for companies that want to do
benchmarking. APQC is one of the world’s leading proponents of business process and performance improvement. It maintains a large benchmarking database and offers firms several ways to participate in
benchmarking and to access benchmarking information.
Activity-Based Management (ABM)
Activity-based management (ABM) is closely related to and draws upon data from activity-based costing.
Activity-based costing uses activity analysis to develop detailed information about the specific activities the
company uses to perform its operations. Activity-based costing improves tracing of costs to products and
can even be used to trace costs to individual customers. It answers the question, "What do things cost?"
Activity-based management, on the other hand, is a means of understanding what causes costs to occur.
Activity-based management uses ABC data to focus on how to redirect and improve the use of resources
to increase value creation.19 Activity-based management is a means of performing value chain analysis
and business process reengineering.
19 Institute of Management Accountants, Implementing Activity-Based Management: Avoiding the Pitfalls (Montvale, NJ: Statements on Management Accounting, Strategic Cost Management, 1998), 2.
The IMA’s Statement on Management Accounting, Implementing Activity-Based Management: Avoiding the
Pitfalls, defines activity-based management as
“a discipline that focuses on the management of activities as the route to improving the
value received by the customer and the profit achieved by providing this value. ABM includes cost driver analysis, activity analysis, and performance measurement, drawing on
ABC as its major source of data.”20
Activity-based management uses activity analysis and activity-based costing data to improve the value
of the company’s products and services and to increase the company’s competitiveness.
Activity-based management is subdivided into operational ABM and strategic ABM.
Operational ABM uses ABC data to improve efficiency. The goal is for activities that add value to the
product to be identified and improved, while activities that do not add value are reduced in order to cut
costs without reducing the value of the product or service.
Strategic ABM uses ABC data to make strategic decisions about what products or services to offer and
what activities to use to provide those products and services. Because ABC costs can also be traced to
individual customers, strategic ABM can also be used to do customer profitability analysis in order to identify
which customers are the most profitable so the company can focus more on serving their needs.
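To make the idea of customer profitability analysis concrete, the following is a minimal sketch in Python. All of the customers, activity rates, and volumes are hypothetical illustrations, not figures from the text.

```python
# Minimal sketch of strategic ABM customer profitability analysis.
# All rates, customers, and volumes are hypothetical illustrations.

activity_rates = {              # ABC cost per unit of each activity driver
    "order_processing": 25,     # per order processed
    "delivery": 40,             # per delivery made
    "after_sale_support": 15,   # per support call handled
}

customers = {
    "Customer A": {"revenue": 50_000, "product_cost": 32_000,
                   "order_processing": 120, "delivery": 60, "after_sale_support": 10},
    "Customer B": {"revenue": 48_000, "product_cost": 31_000,
                   "order_processing": 400, "delivery": 200, "after_sale_support": 90},
}

for name, data in customers.items():
    # Trace activity costs to the customer using the ABC rates.
    activity_cost = sum(rate * data[activity] for activity, rate in activity_rates.items())
    profit = data["revenue"] - data["product_cost"] - activity_cost
    print(f"{name}: activity cost ${activity_cost:,}, customer profit ${profit:,}")
```

In this illustration the two customers generate similar revenue, but once the ABC activity costs are traced to them, one customer turns out to be unprofitable to serve.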
The Concept of Kaizen
The term kaizen is a Japanese word that means “improvement.” As used in business, it implies “continuous
improvement,” or slow but constant incremental improvements being made in all areas of business operations. Small-scale improvements are considered to be less risky than a major overhaul of a system or
process. The slow accumulation of small developments in quality and efficiency can, over time, lead to very
high quality and very low costs. Kaizen needs to be a part of the corporate culture. It requires conscious
effort to think about ways that tasks could be done better. Continuous improvement can be difficult to
maintain and takes years to show results, but if done properly, it confers a sustained competitive advantage.
Kaizen principles can also be used for a “blitz,” in which substantial resources are committed to a focused,
short-term project to improve a process. A blitz usually involves analysis, design, and reengineering of a
product line or area. The results can be immediate and dramatic.
Kaizen can be used along with activity-based management and activity-based costing in a business process
reengineering project to improve the quality and reduce the cost of a business process.
A company may use target costing along with kaizen principles to determine what its ideal standard costs
are. Target costing puts the focus on the market because it starts with a target price based on the market
price. The market determines the target price, and the company must attain the target cost in order to
realize its desired gross profit margin for the product. The ideal standard is thus defined as the target cost,
or the standard cost that will enable the company to attain its desired cost and desired gross profit margin.
Using kaizen principles, the company figures out how it can manufacture the product at the target cost.
The standard is achieved through development of new manufacturing methods and techniques that entail
continuous improvement or the ongoing search for new ways to reduce costs.
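A minimal sketch of the target costing arithmetic described above, using hypothetical figures, might look like this:

```python
# Target cost = market-determined target price - desired gross profit.
# All figures are hypothetical.

target_price = 200.00          # price set by the market
desired_gross_margin = 0.30    # desired gross profit as a percentage of the price

target_cost = target_price * (1 - desired_gross_margin)
current_cost = 155.00          # current standard cost of manufacturing the product

print(f"Target cost per unit: {target_cost:.2f}")                              # 140.00
print(f"Cost gap to close through kaizen: {current_cost - target_cost:.2f}")   # 15.00
```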
The heart of the kaizen concept is the implementation of ideal standards and quality improvements. Kaizen
challenges people to imagine the ideal condition and strive to make the necessary improvements to achieve
that ideal.
Organizations can also apply kaizen principles to the budgeting process. Because kaizen anticipates continuous improvements in the production process, a budget developed on kaizen principles will incorporate
planned improvements, resulting in decreasing costs of production over the budget period.
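A kaizen budget can be sketched as a schedule of planned cost reductions. The 1% monthly improvement rate below is purely a hypothetical assumption used for illustration.

```python
# Kaizen budgeting: planned, incremental cost reductions are built into the budget,
# so the budgeted production cost declines over the budget period.
# The 1% monthly improvement rate is a hypothetical assumption.

starting_unit_cost = 140.00
monthly_improvement = 0.01      # planned cost reduction each month

budgeted_cost = starting_unit_cost
for month in range(1, 13):
    budgeted_cost *= (1 - monthly_improvement)   # apply the planned improvement
    print(f"Month {month:2d}: budgeted unit cost {budgeted_cost:.2f}")
```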
20 Ibid.
Note: Kaizen is the Japanese term for improvement and it is used in business to mean continuous
improvement.
The Costs of Quality
Management will also be closely interested in the costs of quality. The costs of quality include not only
the costs of producing quality products, but they also include the costs of not producing quality products.
Over the long term, not producing a quality product is more costly than producing a quality product because
lack of quality causes loss of customers.
The costs of quality are classified in two categories, costs of conformance and costs of nonconformance. Each of those two categories is broken down into two different types of costs.
•	The costs of conformance are the costs to produce a quality product, and they consist of prevention costs and appraisal costs.
•	The costs of non-conformance are the costs of not producing a quality product, and they consist of internal failure costs and external failure costs.
Costs of Conformance
The costs of conformance are the costs the company incurs to assess internal quality with the purpose of ensuring that no defective products reach the consumer.
The two types of costs of conformance are:
1)	Prevention costs are the costs incurred in order to prevent a defect from occurring in the first place. Prevention costs include:
•	Design engineering and process engineering costs, so the design of the product is not defective and the process for manufacturing it produces a quality product;
•	Quality training, both internal programs and external training to teach employees proper manufacturing procedures, proper delivery procedures, and proper customer service procedures, including salaries and wages for employee time spent in training;
•	Preventive maintenance on production equipment so an equipment failure does not cause a quality failure;
•	Supplier selection and evaluation costs to ensure that materials and services received meet established quality standards, and costs to train suppliers to conform to the firm's requirements;
•	Evaluation and testing of materials received from a new supplier to confirm their conformance to the company's standards;
•	Information systems costs to develop systems for measuring, auditing, and reporting of data on quality; and
•	Planning and execution costs of quality improvement programs such as Six Sigma21 or Total Quality Management.22
21 Six Sigma is an approach to quality that strives to virtually eliminate defects. To achieve Six Sigma, a process must produce no more than 3.4 defects per million opportunities. "Opportunities" refers to the number of opportunities for nonconformance or for not meeting the required specifications. "Opportunities" are the total number of parts, components, and designs in a product, any of which could be defective. If a product has 10,000 parts, components, and designs, for example, 3.4 defects per million would amount to 34 products out of every 1,000 that would have some defect. The goal of Six Sigma is to improve customer satisfaction by reducing and eliminating defects, which will lead to greater profitability.
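The arithmetic in footnote 21 can be reproduced in a few lines of Python (a simple illustration that, like the footnote, assumes roughly one defect per defective product):

```python
# The footnote's arithmetic: 3.4 defects per million opportunities (DPMO)
# applied to a product with 10,000 opportunities (parts, components, and designs).
# Assumes, as the footnote does, roughly one defect per defective product.

dpmo = 3.4
opportunities_per_unit = 10_000

expected_defects_per_unit = dpmo / 1_000_000 * opportunities_per_unit   # 0.034
defective_per_1000_units = expected_defects_per_unit * 1_000            # 34.0

print(f"Expected defective products per 1,000 produced: {defective_per_1000_units:.0f}")
```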
22 Total Quality Management is covered later in this topic.
2)	Appraisal costs are the costs incurred to monitor production processes and individual products and services before delivery in order to determine whether all units of the product or service meet customer requirements. Appraisal costs include:
•	Costs to test and inspect manufacturing equipment, raw materials received, work-in-process, and finished goods inventories;
•	Cost for equipment and instruments to be used in testing and inspecting manufacturing equipment, raw materials, work-in-process, and finished goods inventories; and
•	Costs for quality audits.
Costs of Nonconformance
Nonconformance costs are costs incurred after a defective product has been produced. The costs of nonconformance can be broken down into two types:
1)	Internal failure occurs when a problem is detected before a product's shipment to the customer takes place. The costs associated with internal failure are:
•	Cost of spoilage and scrap;
•	Costs (materials, labor, and overhead) to rework and reinspect spoiled units;
•	Tooling changes and the downtime required to do the tooling changes to correct a defective product;
•	Machine repairs due to breakdowns;
•	Engineering costs to redesign the product or process to correct quality problems and improve manufacturing processes if the problem is detected before the product is in the customers' possession;
•	Lost contribution margin due to reduction of output caused by spending time correcting defective units; and
•	Expediting costs, the cost of rushing to re-perform and complete an order in time because of a failure to complete it correctly the first time.
2)	External failure occurs when a defect is not detected until after the product is already with the consumer. The costs of external failure are:
•	Customer service costs of handling customer complaints and returns;
•	Warranty costs to repair or replace failed products that are returned;
•	Product recall and product liability costs, including settlements of legal actions;
•	Public relations costs to restore the company's reputation after a high-profile external failure;
•	Sales returns and allowances due to defects;
•	Lost contribution margin on sales lost because of the loss of customer goodwill; and
•	Environmental costs such as fines and unplanned cleanup fees caused by a failure to comply with environmental regulations.
Note: Make certain to know the four subcategories of the costs of quality and the individual items that
go into these four types of costs.
Opportunity Costs of Quality
The nonconformance costs include opportunity costs associated with poor quality. An opportunity cost is the forgone benefit of the next best use of a resource.
Production of defective products creates opportunity costs in several ways.
•	The company must either repair or replace defective products that fail within the warranty period and products that may be recalled if the defect is serious. If the product is repaired, the company must use resources to fix or replace the defective parts. The people the company pays to do the repair work could have been producing new items for sale instead of spending their time on work that will not produce any revenue, and the replacement parts used could have been used in new items instead. If defective products cannot be repaired and must be replaced, the products used to replace the defective ones are products that could have been sold but instead will not generate any revenue even though their production creates costs.
•	The company will need to provide more customer service after the sale, so it will need to pay more customer service employees. The extra people spending their time on customer service for the defective products could have been doing something else for the company that would be more productive.
•	The company may gain a reputation of supplying poor-quality products and could lose future sales as a result. The lost future sales lead to lost profits. The lost resource is the cash profits that could have been earned from the lost sales. If the company had that cash, it could invest it and earn more future profits with it.
•	Another opportunity cost associated with poor quality management concerns design quality failures. Costs of design quality are costs to prevent poor quality of design or costs that arise as a result of poor quality of design. Design quality failure costs include the costs of designing, producing, marketing and distributing a poorly designed product as well as costs to provide service after the sale for it. In addition, design quality failures can cause lost sales because the product is not what customers expect and want. Design quality failures can be a significant component of design quality costs.
Note: In manufacturing, it is often said that the main causes of quality problems are the “Four Ms”:
machines, materials, methods, and manpower.
Calculating the Costs of Quality
The costs of quality (conformance and nonconformance) can be quantified and documented on a cost of
quality (COQ) report. A cost of quality report shows the financial impact of implementing processes for
prevention and appraisal and for responding to internal and external failures.
Activity-based costing simplifies preparation of a COQ report, because an ABC system identifies costs with
activities. The costs of activities that are required to prevent or to respond to poor quality can be much
more easily identified when ABC is used than when a traditional costing system is used, because traditional
costing systems accumulate costs according to function (such as production or administration), rather than
according to activities. If traditional costing is being used, additional analysis is required to locate and
segregate the costs of quality.
Various formats can be used to report the costs of quality. The format used should depend on how management wants to see the costs of quality reported. Usually such a report would be done separately for
each product or product line. Following is one example of a cost of quality report in a spreadsheet format.
Cost of Quality – Widgets – Month of May 20X2

                                                 Cost Allocation Rate    Quantity (Allocation Base)    Total Costs
Prevention costs:
  Design and process engineering                 $65 per hour              528 hours                   $ 34,320
  Equipment preventive maintenance               $60 per hour              528 hours                     31,680
  Training                                       $55 per hour              160 hours                      8,800
  Supplier selection/evaluation                  $40 per hour               88 hours                      3,520
  Testing of materials                           $25 per hour              352 hours                      8,800
  Total prevention costs                                                                                $ 87,120

Appraisal costs:
  Inspection of manufacturing equipment          $35 per hour              352 hours                   $ 12,320
  Equipment for inspection of raw materials      $30 per hour              176 hours                      5,280
  Equipment for inspection of work-in-process    $30 per hour              528 hours                     15,840
  Equipment for inspection of finished goods     $30 per hour              176 hours                      5,280
  Total appraisal costs                                                                                 $ 38,720

Internal failure costs:
  Cost of spoilage                               $75 per def. unit          50 def. units              $  3,750
  Cost of rework                                 $95 per rewkd. unit        80 rewkd. units                7,600
  Est. lost contribution margin due to rework    $60 per rewkd. unit        80 rewkd. units                4,800
  Total internal failure costs                                                                          $ 16,150

External failure costs:
  Customer service-complaints & returns          $35 per hour              176 hours                   $  6,160
  Warranty costs                                 $110 per def. unit          35 def. units                 3,850
  Estimated product liability costs              $25 per unit mfd.        3,300 units mfd.                82,500
  Contribution margin on estimated lost sales    $230 per unit lost         300 est. lost sales           69,000
  Total external failure costs                                                                          $161,510

Total costs of quality                                                                                  $303,500
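The report above is simply rate multiplied by quantity, summed within each category. A short Python sketch of that calculation, using the same figures as the report, is shown below:

```python
# Sketch: computing the cost of quality report above (rate x quantity per line item).

coq_items = {
    "prevention": [
        ("Design and process engineering", 65, 528),
        ("Equipment preventive maintenance", 60, 528),
        ("Training", 55, 160),
        ("Supplier selection/evaluation", 40, 88),
        ("Testing of materials", 25, 352),
    ],
    "appraisal": [
        ("Inspection of manufacturing equipment", 35, 352),
        ("Equipment for inspection of raw materials", 30, 176),
        ("Equipment for inspection of work-in-process", 30, 528),
        ("Equipment for inspection of finished goods", 30, 176),
    ],
    "internal failure": [
        ("Cost of spoilage", 75, 50),
        ("Cost of rework", 95, 80),
        ("Est. lost contribution margin due to rework", 60, 80),
    ],
    "external failure": [
        ("Customer service-complaints & returns", 35, 176),
        ("Warranty costs", 110, 35),
        ("Estimated product liability costs", 25, 3_300),
        ("Contribution margin on estimated lost sales", 230, 300),
    ],
}

grand_total = 0
for category, items in coq_items.items():
    subtotal = sum(rate * quantity for _, rate, quantity in items)
    grand_total += subtotal
    print(f"Total {category} costs: ${subtotal:,}")
print(f"Total costs of quality: ${grand_total:,}")   # $303,500
```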
Total Quality Management (TQM)
Total Quality Management describes a management approach that is committed to customer satisfaction
and continuous improvement of products or services. The basic premise of TQM is that quality improvement
is a way of increasing revenues and decreasing costs. As such, a company should always strive for improvement by performing its service or producing its product correctly the first time. Total Quality
Management is a prevention technique. The costs of implementing a TQM program are classified on a Cost
of Quality Report as prevention costs.
At the heart of TQM is the definition of what quality is. Quality can mean different things to different people.
For a customer it is a product that meets expectations and performs as it is supposed to for a reasonable
price. For a production manager it is a product that is manufactured within the required specifications.
When a company is considering quality, it must be certain to include the different perspectives of quality
from all of the involved parties.
Total quality management programs are often developed to implement an overall low-cost or a differentiation business strategy, because TQM’s goals are to both reduce costs and improve quality. Approximately
90% of manufacturing companies and 70% of service businesses have put into practice some form of a
TQM program, but success or failure revolves around involvement and leadership of senior management.
A TQM program requires a change in corporate culture by eliminating faulty processes, empowering employees and creating teamwork that focuses on quality.
The objectives of TQM include:
•	Enhanced and consistent quality of the product or service
•	Timely and consistent responses to customer needs
•	Elimination of non-value-adding work or processes, which leads to lower costs
•	Quick adaptation and flexibility in response to the shifting requirements of customers
Certain core principles, or critical factors, are common to all TQM systems:
•	They have the support and active involvement of top management
•	They have clear and measurable objectives
•	They recognize quality achievements in a timely manner
•	They continuously provide training in TQM
•	They strive for continuous improvement (kaizen)
•	They focus on satisfying their customers' expectations and requirements
•	They involve all employees
TQM is an organizational action. For it to be successful, the entire organization must strive for quality
improvement and pursue excellence throughout the organization. Part of this pursuit of excellence is a focus
on continuing education. Employees at all levels participate regularly in continuing education and training in order to promote and maintain a culture of quality.
One of the unique perspectives of TQM relates to customers. In a TQM system, it is important to remember
that people within the organization are also customers. Every department, process, or person is at
some point a customer of another department, process, or person and at some point, a supplier to another
department, process, or person.
Another feature of TQM is quality circles. A quality circle is a small group of employees who work together
and meet regularly to discuss and resolve work-related problems and monitor solutions to the problems.
The form of communication that takes place in quality circles is vital to a successful TQM program.
In TQM, the role of quality manager is not limited to a special department. Instead, every person in the
organization is responsible for finding errors and correcting any problems as soon as possible.
To achieve total quality management, the company must identify the relevant quality problems when
and where they occur, and then use some of the following, or similar, methods to analyze them.
Statistical Quality Control (SQC)
Statistical quality control (SQC), or statistical process control (SPC), is a method of determining whether a
process is in control or out of control. Some variations in quality are expected, but if too many units are
tested and found to be outside the acceptable range, the process may not be in control. A control chart
is used to record observations of an operation taken at regular intervals. The sample is used to determine
whether all the observations fall within the specified range for the operation, and the intervals are measurable in time, batches, production runs or any other method of delineating an operation.
When statistics are used in determining the acceptable range, the control chart is a statistical control
chart. For example, the acceptable range might be plus or minus two standard deviations from the mean.
If no sample falls outside the limit of two standard deviations, the process is in statistical control, as long
as all the samples are randomly distributed with no apparent patterns, and if the numbers of observations
that are above and below the center of the specified range are approximately equal. If instead trends,
clusters, or many measurements near the limits are observed, the process may be out of control even
though all of the observations are within two standard deviations of the mean.
Below is an example of a statistical control chart showing that all observations are within two standard
deviations of the mean, with no trends, no clusters, nor many measurements near the limits. For a company
that considers two standard deviations to be acceptable, the control chart below indicates that the process
is in control.
[Statistical control chart: observations for JAN through DEC plotted around the mean (μ), with control limits at ±1σ, ±2σ, and ±3σ; every observation falls within ±2σ of the mean, with no trends or clusters.]
If any of the observations were above +2 standard deviations or below −2 standard deviations from the
mean, or if they indicated a trend going one way or another, or if several observations were clustered
together near the ±2 standard deviations, the process might not be in control.
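A minimal sketch of such a statistical control check, written in Python with hypothetical observations, might flag points outside the ±2σ limits and look for long runs that could indicate a trend:

```python
# Sketch of a simple statistical control check: flag observations outside
# mean +/- 2 standard deviations and look for long monotone runs (possible trends).
# The observations are hypothetical.
from statistics import mean, stdev

observations = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9, 10.4, 10.0, 9.8]

mu = mean(observations)
sigma = stdev(observations)
upper, lower = mu + 2 * sigma, mu - 2 * sigma

out_of_limits = [x for x in observations if not lower <= x <= upper]

def longest_monotone_run(values):
    """Length of the longest strictly increasing or strictly decreasing run."""
    best = up = down = 1
    for prev, curr in zip(values, values[1:]):
        up = up + 1 if curr > prev else 1
        down = down + 1 if curr < prev else 1
        best = max(best, up, down)
    return best

print(f"Control limits: {lower:.2f} to {upper:.2f}")
print(f"Observations outside +/-2 sigma: {out_of_limits or 'none'}")
print(f"Possible trend (run of 7 or more): {longest_monotone_run(observations) >= 7}")
```

The run-of-seven rule used here is just one common heuristic for spotting a trend; a company may set its own rules for patterns that indicate an out-of-control process.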
Histograms
A histogram is a bar graph that represents the frequency of events in a set of data. Patterns that may not be apparent when just looking at a set of numbers become clear in a histogram, which can pinpoint most of the problem areas. For instance, if a particular production line is experiencing most of the difficulty, a histogram can help determine which types of problems are occurring most often.
Here is an example of a histogram for a manufacturer of wood bookcases. The company has evaluated the
various types of problems that occur in the manufacturing process, and the most frequent problems are:
•	Wood cracked, 15 out of 100 defective units
•	Trim not attached correctly, 19 out of 100 defective units
•	Uneven stain, 38 out of 100 defective units
•	Shelves out of alignment, 21 out of 100 defective units
•	Shelf or shelves missing, 7 out of 100 defective units
The company develops the following histogram to illustrate its findings:
[Histogram: Defective Bookcases. The vertical axis shows the number of defective units (0 to 40), and the bars show the frequency of each defect type: wood cracked, trim not attached correctly, uneven stain, shelves out of alignment, and shelf or shelves missing. Uneven stain is by far the tallest bar.]
The histogram illustrates that the problem of uneven stain is by far the greatest problem occurring.
Pareto Diagrams
A Pareto diagram is a specific type of histogram. Vilfredo Pareto, a 19th-century Italian economist, came
up with the now well-known 80-20 observation, or Pareto principle. Someone relying on the Pareto principle would state, for example, that “20% of the population causes 80% of the problems” or “20% of the
population is doing 80% of all the good things,” and such a statement would usually be fairly accurate.
After management pinpoints which 20% of the causes are accounting for 80% of the problems, it can focus
efforts on improving the areas that are likely to have the greatest overall impact.
In addition to showing the frequency of the causes of the quality problems with bookcases, a Pareto diagram puts them in order from the most frequent to the least frequent. Furthermore, it adds a curve to the graph showing the cumulative number of defects attributed to the causes, going from the most frequent to the least frequent.
Below is an example of a Pareto diagram for the manufacturer of wood bookcases.
[Pareto diagram: Defective Bookcases. The bars are ordered from most to least frequent: uneven stain, shelves out of alignment, trim not attached correctly, wood cracked, and shelf or shelves missing. A cumulative curve rises from the tallest bar toward 100 defective units.]
The curve on the graph illustrates the incremental addition to total defective bookcases contributed by each
cause.
The histogram and the Pareto chart both show graphically that uneven stain is the most frequently occurring
manufacturing problem. The company will be able to maximize its quality control efforts by giving first
priority to finding ways to prevent the uneven stain problem.
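Using the defect counts from the bookcase example, the Pareto ordering and the cumulative curve can be computed with a short Python sketch:

```python
# Sketch: Pareto ordering and cumulative percentages for the bookcase defects above.

defect_counts = {
    "Uneven stain": 38,
    "Shelves out of alignment": 21,
    "Trim not attached correctly": 19,
    "Wood cracked": 15,
    "Shelf or shelves missing": 7,
}

total = sum(defect_counts.values())
cumulative = 0
for cause, count in sorted(defect_counts.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{cause:30s} {count:3d}   cumulative {cumulative / total:6.1%}")
```

The output shows that the first two or three causes account for most of the defects, which is where the quality control effort should be focused first.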
Once the most frequent quality problems have been identified, the next step is to identify the cause or
causes of each quality problem.
Cause-and-Effect (Ishikawa) Diagram
A cause-and-effect diagram, or Ishikawa diagram, organizes causes and effects visually to sort out root causes and identify relationships between causes and their effects. The idea of a cause-and-effect diagram came from Kaoru Ishikawa, who noted that it is often difficult to trace the many causes leading to a single problem. Ishikawa developed a way of diagramming the relationships to better trace them. An Ishikawa diagram consists of a spine, ribs and bones, so it is also called a fishbone diagram. The right end
of the spine is the quality problem, and the spine itself connects the main causes—the ribs—to the effect,
the quality problem.
Ishikawa Diagram
Usually, causes of problems in manufacturing fall into the following categories, referred to as the "4Ms":
•	Machines
•	Materials
•	Methods
•	Manpower
To resolve the problem of uneven stain on bookcases, operating personnel and management of the bookcase manufacturer hold a series of meetings, called “brainstorming sessions,” in which they attempt to
figure out what is causing the problem of the uneven stain on the bookcases, using the 4Ms. They find the
following:
Machines:
No equipment is used. The stain and protective coating are applied by hand.
Materials:
Wood: Maple, a hardwood, is used in bookcase construction. Although hardwoods should absorb stains
evenly, maple sometimes does not. A wood conditioner used before the stain would alleviate the problem,
but management has not wanted to incur the additional cost for the wood conditioner.
The maple sometimes is received from the lumberyard with machine marks in it that cause the stain to be
absorbed unevenly.
Methods:
The stain is applied with a brush. As more pressure is applied to the brush, more stain is applied to the
wood. If the pressure is not kept consistent, the stain will have variations in color. After the stain has been
applied, it is wiped off. The longer the stain is left on, the darker is the resulting color. If the employee does
not work quickly enough, the color on the bookcase will be lighter in the first areas wiped off and darker in
the last areas wiped off.
Manpower:
The maple is inspected before it is made into bookcases. If boards with machine marks are not caught in
this inspection and are not pulled from production, little can be done at the staining point to prevent a poor
stain job. Recently, a significant number of boards with machine marks have been missed in the inspection.
The employees do not always keep a consistent pressure while applying the stain, and they do not always
get the color wiped off quickly enough. Both of these things are causing variations in the color. Some of
the employees are inexperienced, and some are simply not following procedures.
Using input from the brainstorming sessions, the following cause-and-effect diagram is prepared:
[Cause-and-effect diagram for the uneven stain problem. Machines rib: not applicable, machines are not used for staining. Materials rib: wood conditioner not being used; machine marks in wood. Methods rib: highly labor-intensive; high level of skill required. Manpower rib: machine marks in wood missed in inspections; inexperienced employees; employees not following instructions. All ribs connect to the effect at the head of the spine, the problem of uneven stain.]
The above cause-and-effect diagram can lead to changes that may mitigate the problem of uneven
stain. Management may decide to use a spray-on stain that is easier to apply; it may decide to begin using
a wood conditioner on the maple or may change to using a different hardwood that will take the stain more
evenly; it may increase employee training programs in both the staining department and in the wood
inspection department; and it may look into automating portions of the process.
Total Quality Management and Activity-Based Management
Activity-based management principles are widely used to recommend process performance improvements. They are applied to help companies make better strategic decisions, enhance operating
performance, reduce costs, and benchmark performance. Activity-based management uses activity-based
costing data to trace costs to products or individual customers for the purpose of analyzing business process
performance.
A total quality management system is most compatible with an ABC system because the ABC system
makes the costs of quality more apparent. A firm with a good ABC system needs only to modify the system
to identify costs and activities relating to costs of quality. A company that utilizes ABC will also be continuously identifying activities that can be eliminated, as well as ensuring that necessary activities are carried
out efficiently.
Unnecessary and inefficient activities are non-value-adding activities, and the cost drivers of those activities need to be reduced or eliminated. The activity of reworking defective products is a non-value-adding activity and thus needs to be reduced or eliminated. On the other hand, the company needs to carry out the value-adding activities as efficiently as possible. A focus on TQM and its central tenet of continuous improvement will lead to achieving both goals.
ISO 9000
The International Organization for Standardization introduced the ISO 9000 quality assurance standards.
The standards do not have the force of law, but in both the U.S. and the European Union, foreign companies
in some industries must be ISO 9000 certified in order to do business there.
The ISO 9000 standards are not set to assure the quality of an individual product but to assure that the quality is the same throughout all of the company's products of that type. However, due to the
wide acceptance of the standards, a company that fails to achieve them runs the risk of losing business
due to perceived lack of quality.
Note: Adherence to (that is, following) design specifications is one of the most important components
of quality control. ISO standards do not address whether design specifications have been followed.
Quality Management and Productivity
At first glance, it may seem that as a company’s commitment to quality increases, the productivity of the
company will decrease. Since productivity is measured as the level of output given an amount of input, it
would seem that by allocating resources to quality and spending resources in the quality process, the level
of outputs related to a given level of inputs would decline. However, that usually does not occur.
In fact, as a company’s commitment to quality increases, usually productivity also increases. Reasons for
this increase in productivity include:
•	A reduction in the number of defective units. The reduction in the number of defective units in turn reduces the amount of time, material and effort wasted on unusable output as well as time spent fixing salvageable defective units. (The term "the hidden factory" refers to the time and effort spent on rework and repair.)
•	A more efficient manufacturing process. As a result of an emphasis on quality production, the company may remove or change inefficient, unproductive or non-value-adding activities.
•	A commitment to doing it right the first time. This commitment is related to the first item, a reduction in the number of defective units. As the culture in the company focuses on doing things right the first time, the employees of the company take a more conscientious approach to their work, and a more conscientious approach may lead to greater productivity.
No matter the cause, the relationship between quality and productivity is a positive one: the more attention
that is paid to quality, the higher are the levels of production.
Other Quality Related Issues
With the development of a good TQM system, a company can also manage its time better and become
more productive. In today’s environment, it is ever more important to become the first company to get a
new product or service to the marketplace. The need to be first leads to a need to have shorter product
development time and shorter response times to changes in demand or the market.
Customer-response time is the length of time between receipt of an order from a customer and the receipt of the product by the customer. The components of customer-response time are order receipt time (from receipt of the customer's order until the order is received by manufacturing), manufacturing cycle time (consisting of waiting time before manufacturing of the order begins plus manufacturing time), and order delivery time.
The following graphic is used in the topic Theory of Constraints to illustrate Manufacturing Cycle Efficiency
(MCE) and it illustrates the elements of customer-response time, as well.
[Elements of Customer-Response Time: the timeline runs from the order being received from the customer, through receipt time until the order is received by manufacturing, then waiting time until manufacturing begins, then manufacturing time until the finished goods are complete, and finally delivery time until the order is delivered to the customer. Waiting time plus manufacturing time make up manufacturing cycle time, and the entire span is the customer-response time.]
Only the actual manufacturing time is time spent on value-adding activities. Total customer-response time
can be decreased by decreasing receipt time, waiting time, and delivery time.
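The components of customer-response time can be illustrated with a short Python sketch. The hours used are hypothetical, and the manufacturing cycle efficiency ratio shown is one common formulation (value-adding manufacturing time divided by manufacturing cycle time), consistent with the graphic above.

```python
# Sketch of the components of customer-response time (hours are hypothetical).

receipt_time = 4         # order received from customer -> order received by manufacturing
waiting_time = 20        # order waits in queue before manufacturing begins
manufacturing_time = 8   # the only value-adding component
delivery_time = 16       # finished goods complete -> order delivered to customer

manufacturing_cycle_time = waiting_time + manufacturing_time
customer_response_time = receipt_time + manufacturing_cycle_time + delivery_time

print(f"Customer-response time: {customer_response_time} hours")                          # 48
print(f"Value-adding share of total: {manufacturing_time / customer_response_time:.1%}")
print(f"Manufacturing cycle efficiency: {manufacturing_time / manufacturing_cycle_time:.1%}")
```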
Question 48: Of these statements, which is not relevant to the overall benchmarking process?
a)	Management determines what processes the company uses to convert materials, capital and labor into value-adding products.
b)	Management uses target costing to set standard costs in order to put the focus on the market as well as on what target price to charge because the company must attain the target cost to realize its desired profit margin for the product.
c)	Target costing utilizes kaizen to reduce costs in order to attain a desired profit margin.
d)	Management puts into place a program to measure the organization against the products, practices and services of the most efficient competitors in its marketplace.
(HOCK)
Question 49: Which of the following is not an objective of a company's TQM program?
a)	To examine issues relating to creating value for the company's customers.
b)	To understand the company's capacity for innovation.
c)	To evaluate environmental issues that may affect the company's profitability.
d)	To utilize computers in product development, analysis, and design modifications.
(HOCK)
Question 50: The cost of scrap, rework and tooling changes in a product quality cost system are categorized as a(n):
a)	External failure cost.
b)	Training cost.
c)	Prevention cost.
d)	Internal failure cost.
(CMA Adapted)
Question 51: Listed below are selected line items from the Cost of Quality Report for Watson Products for last month.

Category                              Amount
Rework                                $  725
Equipment preventive maintenance       1,154
Product testing equipment                786
Product repair                           695

What is Watson's total prevention and appraisal cost for last month?
a)	$2,665
b)	$1,154
c)	$786
d)	$1,940
(CMA Adapted)
Accounting Process Redesign
While accounting and finance professionals are recommending process improvements for other areas of the
organization, they need to look at their own processes, too, for ways to make accounting and financial
operations more efficient.
Businesses are under pressure to reduce the cost of business transactions. At the same time, the financial
function needs to focus more of its resources on activities that add greater value to the organization such
as decision support and strategic analysis. Therefore, accounting and finance need to be able to provide
the same or even higher levels of service in transaction processing while using fewer resources for transaction processing. The resources freed by improvements made can be put to new uses, enabling the
financial function to add value to the organization.
In order to accomplish these changes, the role of finance needs to be re-thought, and financial processes
need to be redesigned to do things better, faster, and cheaper, in line with the philosophy of continuous
improvement. By redeploying its resources to decision support and other value-adding activities, the financial function can do more for the organization, possibly at a lower cost.
Creating a Future Vision for Finance
In developing a future vision for the financial function, benchmarking studies should be used to identify
best practices being used. Much can be learned from organizations using best practices. The project team
can make best-practice visits to organizations that are functioning at high levels of efficiency and effectiveness in the processes being considered for redesigning. Benchmarking enables the project team to develop
standards against which its vision for the finance function can be measured.
A current use assessment is another technique for creating a future vision. A current use assessment is
a customer-centered approach based on the idea that every aspect of the financial function should be
traceable to internal customer needs. Internal customers are the internal users of the financial outputs. The
project team needs to ask, “What value do we currently provide to users?” Current use assessment identifies the outputs and activities that users depend on and that must remain a part of the redesigned
processes.
A current use assessment determines what reports are being used and by whom, how often updates are
needed, and what information in the reports is most essential. Surveys can be utilized to find out what
reports users use, why they use them, and whether they are satisfied or dissatisfied with them. The surveys
can also be used to gather information for a “wish list” of information the users would like to receive.
In addition, moving toward a decision-support model for finance requires an understanding of what type of
decisions are being made and how the finance function can support the efforts. Interviews and surveys can
help in gaining that understanding.
Selecting and Prioritizing Areas for Improvement
Every process cannot be redesigned at the same time, so it is necessary to prioritize. Priority should be
given to the processes most central to the organization’s strategy. Any area that has a significant impact
on the effectiveness of the business drives total performance and should receive top priority because those
areas are critical to the success of the organization. Processes that have a high probability of successful
redesign should also receive priority.
Process Walk-Throughs
Once the decision has been made to redesign a specific accounting or finance function and the future vision
has been created, the first step is to gather information about how things are currently being done. The
current processes, assumptions, and the structures of the function need to be examined in order to identify
improvements that can be made in efficiency and effectiveness. Process walk-throughs are used to
gather the necessary information.
© 2019 HOCK international, LLC. For personal use only by original purchaser. Resale prohibited.
161
D.5. Business Process Improvement
CMA Part 1
A walk-through is a demonstration or explanation detailing each step of a process. To conduct a process
walk-through, a member of the process redesign team meets with the process owner and each participant
in the process in order to gain a thorough understanding of how the work gets done from beginning to end
for the purpose of uncovering opportunities for improvement. A process owner is a person who has the
ultimate responsibility for the performance of a process and who has the necessary authority and ability to
make changes in the process.
The existing processes need to be thoroughly documented before they can be streamlined. Documentation
of a process involves more than simply listing the sequence of events in the process. In addition to listing
the steps performed, the documentation should include:
•	Time required to complete each step and information about what may be slowing down each step, so that improvement can be directed toward shrinking the tasks that require the greatest amount of time rather than those that require little time.
•	Wait time between processing steps in order to discover areas requiring improvement.
•	Forms used as inputs in order to locate those with duplicate, unused, or missing information.
•	Reports used as outputs to identify unneeded reports or where information on several reports can be merged into fewer reports.
•	Identification of who performs each step, because handing off the process from employee to employee can cause waiting times that could be eliminated by assigning the task steps to fewer employees to reduce the number of work queues.
•	Informal communication paths being used, so they can be included in the new formal process.
•	Controls currently in place, in order to know in advance what controls will be affected by changes made in the system.
•	Where errors typically occur in the process, because correction of errors is time-consuming and decreasing the number of errors can decrease the amount of time required for the process.
Every step, every piece of paper, and every input and output should be challenged. For example, investigate:
•	Why each step is being done, whether it is necessary, and whether it adds value.
•	Whether the step could be automated and whether the physical pieces of paper being used are necessary.
•	Whether there may be duplication of effort.
•	Whether the same input is being keyed in more than once, such as into a database and also into a spreadsheet.
•	The accuracy of the inputs and outputs.
Process mapping may be used to provide a visual map of the way information, documents, and work is
routed through a process. Process maps can pinpoint problem areas such as bottlenecks within the system,
places where reports are being prepared manually due to fragmentation of data, and rework.
Identification of Waste and Over-capacity
The process walk-throughs are a good starting point for identifying waste and over-capacity, such as:
•	Duplication of effort.
•	Tasks being done that are not necessary or that do not add value.
•	Output that is not being used.
Determination of personnel under- or over-capacity can be done in various ways:
•	One way of determining personnel needs is through metrics. For example, if an accounts payable clerk can process 100 invoices for payment per day, historical totals of invoices processed can be used to determine how many payables clerks are needed (see the sketch after this list).
•	Another way of determining personnel usage is changes in the size of work backlogs. If work backlogs increase, personnel capacity may be too low. If work backlogs decrease, personnel capacity may be too high.
•	Employees can be asked to review their own needs for more or fewer employees.
•	If overtime is regularly excessive, then more employees may be needed.
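The metrics approach from the first bullet above can be sketched in a few lines of Python; the daily invoice volumes used here are hypothetical.

```python
# Sketch of the metrics approach: how many accounts payable clerks are needed if
# one clerk can process 100 invoices per day? The daily volumes are hypothetical.
import math

invoices_per_clerk_per_day = 100
daily_invoice_volumes = [620, 480, 710, 550, 660]    # recent historical totals

average_daily_volume = sum(daily_invoice_volumes) / len(daily_invoice_volumes)
clerks_needed = math.ceil(average_daily_volume / invoices_per_clerk_per_day)

print(f"Average daily invoices: {average_daily_volume:.0f}")   # 604
print(f"Payables clerks needed: {clerks_needed}")              # 7
```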
Identifying the Root Cause of Errors
The walk-through can help to identify the root cause of errors.
•	If the same input is being keyed in multiple times, not only is the duplication of effort wasteful, but the multiple inputs carry the inherent risk that the data will be keyed in differently each time. Changing the process so that the input is keyed in just once and establishing controls such as reconciliations of data can eliminate the cause of errors.
•	Each error regularly made, such as an error on an invoice or an error in an inventory count, is an indication of a control weakness. Controls should be devised to prevent or detect the errors.
Process Design
Once the current process is fully understood, process design can take place in line with the vision for the
new process. The design process builds on the process concept developed during the vision step. Every
process is different, and a creative project team is required in order to generate a range of alternative
solutions. The redesigned process needs to cover every aspect of the internal customers’ (users’) needs.
Several potential process designs should be generated for consideration.
Risk-Benefit Evaluation
After several potential process designs have been generated and before one is selected, the potential designs need to be weighed carefully in terms of their potential risk impact as well as their potential benefits.
The greater the changes being made, the less the organization can be sure of a successful outcome. Risk
can be the deciding factor in selecting one approach to process redesign over another one. If the risks of
all the options are determined to be too great, a return to the process design step may be necessary to
generate another set of alternative designs.
Planning and Implementing the Redesign
A complete redesign of a process or processes has the potential to be very disruptive, and it requires careful
planning. The initiative for finance redesign often comes from senior management, so it is a top-down
implementation. It requires engaging people in the change, providing leadership, supporting the change,
and planning the change.
If the changes are extensive, they will need to be phased in to allow the employees and the rest of the
organization time to adjust to their impact. The impact of the changes on the people, their jobs, their sense
of security, and their attitudes must be considered. All those involved in the change effort need to actively
seek to reduce the stresses put on the human resources of the company, or else the people will limit the
effectiveness of the project. The project team needs to move the people involved from denial and resistance
to the change to acceptance and commitment to the new way of doing things.
Process Training
Redesigning processes requires finding new ways to use the skills of existing employees and to further
enhance those skills through training.
•	Training will be needed in the changes that have been made to the process. Everyone involved in the revised process should be included in the training.
•	The need for skills enhancement training should be evaluated on an individual basis. Skills training should be individualized to the needs of each employee to eliminate weaknesses in each employee's skills. For example, certain employees may need to attend a class on using spreadsheets whereas others would not need that but would need something else.
•	Training may be needed in how to fully take advantage of the systems being used.
Reducing the Accounting Close Cycle
Significant improvements can be made in the time required to close the general ledger at the end of each
month, quarter, and fiscal year. “Soft closes” can be used for month-end closes while reserving the more
detailed closing activities and allocations for quarter-end and year-end closes.
•	Accuracy at the point of data entry needs to be heightened so that reconciliations can be done more quickly.
•	Perpetual inventory records should be used, and inventories can be estimated for the soft closes.
•	Timing-related accruals, such as inventories in transit, can be eliminated in the soft closes.
•	The materiality level for consolidation and elimination entries can be raised.
•	Use of a standardized chart of accounts and general ledger application across all company locations is important for speed in closing.
•	Review transactions and a preliminary set of financial statements for errors prior to the end of the period.
•	Bank reconciliations can be done daily without waiting for the month-end statement by accessing transactions online.
•	Valuation accounts for obsolete inventory and credit losses should be reviewed for updating in advance of the period end.
•	If recurring invoices are sent out on a regular schedule, they can be printed in advance by setting the accounting software date forward so the revenue is recorded in the proper months.
•	If expenses incurred or employee hours worked are billable to customers, review them before the end of the period so the billing can be done quickly and accurately at the appropriate time.
•	Interest expense can usually be accrued prior to the end of the month. Unless a large change in debt principal takes place at the very end of the period, last-minute changes in debt will not make much impact on the total interest expense accrued.
•	For accruing unpaid wages, a centralized timekeeping system can be used so the most current information about hours worked is available.
•	Depreciation can be calculated a few days before the end of the period. Calculating the depreciation early does entail the risk that a fixed asset would be acquired or disposed of after the calculations are done and before the end of the period. However, for month-end closings, the missing depreciation can be recorded during the following month, so the total depreciation expense for the year will still be correct. For year-end closings, the depreciation can be adjusted if necessary.
•	To speed up payables closing, require purchase orders for all larger purchases so the company has a record of all outstanding orders of a material size. The accounting staff can access the information to accrue for invoices not yet received as of month end.
By completing some of the closing work before the end of the period, the accounting staff is less rushed
and may make fewer errors, leading to more accurate as well as more timely financial statements.
In order to maintain consistency, a standard checklist of journal entries needed in the closing should be
maintained. The checklist should include a place to initial for each month when the entry has been done.
Journal entry templates23 should be stored in the accounting system so the entry is standardized and only
the amount needs to be keyed in. The checklist should give the name of the stored template to be used for
each set of closing entries.
When divisional accounting staffs send summarized information to the corporate accounting staff, the information can contain errors that the corporate accounting staff must investigate and correct, which can
be very time-consuming. Furthermore, different accounting procedures used throughout the company result in inconsistent reporting that is difficult to reconcile and consolidate. Mandating standardized
accounting procedures can improve accuracy at the source, resulting in less time spent by the corporate
accounting staff in correcting errors.
Centralization of Accounting as a Shared Service
Centralization of all accounting processes using a single consolidated accounting system is the best way to
resolve closing problems created by accounting decentralization. Shared service centers are increasingly
being used to move accounting and finance, as well as other major business processes, out of individual
business units and to concentrate them in a single location.
When the processing of transactions is centralized, specific types of transactions such as accounts payable,
accounts receivable, and general ledger can be organized along functional lines, utilizing a smaller number
of highly-trained people. The result is usually fewer errors. By reorganizing responsibilities along functional
lines instead of geographical lines, responsibility for various closing tasks can be assigned to a smaller
number of managers who are in closer proximity, resulting in greater efficiency.
In a centralized accounting system, accounting errors can be researched more easily because of having a
single database of accounting information. Analysts can more easily drill down through the data to locate
problem areas when the numbers do not look right instead of having to contact the divisional accounting
office and asking them to research it and then waiting to receive a response. The time required to locate
and resolve errors is shortened as a result.
However, transaction processing is not the only finance and accounting function being transferred to shared
services departments. Increasingly, higher-level functions such as controllership, tax, treasury, internal
audit, and compliance are also being centralized, often in offshore locations. Some finance and accounting
functions such as planning, budgeting, and forecasting and review and approval of period-end results usually remain close to the headquarters office for control reasons, though.
Use of Cloud-Based Services
Smaller companies that may not be able to justify the cost of an ERP system can turn to the cloud to
integrate finance, sales, service and fulfillment. All the essential information is in one place, and the result
can be a much faster close.
23 A journal entry template is a blank journal entry stored in the accounting software for which the same account numbers are used repetitively but the amounts of the entries vary. Users go to the journal entry checklist to find the name of the stored template, enter the numbers into the template, and save the journal entry with the correct date.
Section E – Internal Controls
Section E comprises 15% of the CMA Part 1 Exam. Section E is composed of two parts: (1) Governance,
Risk and Compliance and (2) Systems Controls and Security Measures.
This section examines the controls the company has developed and implemented to help the company achieve its
objectives. The purpose of internal controls is often perceived as being to prevent something from going
wrong, but controls are really set up to assist the organization in the achievement of its objectives. It is
important to be very familiar with the objectives of internal control.
Other important topics are the major internal control provisions of the Sarbanes-Oxley Act of 2002 in
Sections 201, 203, 204, 302, 404, and 407 of the Act and the role of the PCAOB (Public Company Accounting
Oversight Board), which was established by the Sarbanes-Oxley Act.
Two of the most important elements of internal control that candidates need to understand are the segregation of duties and the elements that make up the components of internal control. It is important
to know these topics and the other internal control topics not only from an academic standpoint (definitions
and lists, for example) but also from a practical application standpoint. The application-related questions can be very difficult to answer because it may seem that all of the choices are good controls or none
of the duties are ones that can be performed by the same person. However, do not spend too much time
thinking about any particular question because each question has the same value, and therefore candidates
have nothing to gain from figuring out the answer to a hard question versus answering a simple one. Answer
the simple questions first and spend extra time on the hard ones only if time allows.
In addition, ExamSuccess includes a lot of questions from past exams that have covered specific situations
relating to internal control and systems control. The specific topics covered on these past exam questions
may not be specifically mentioned in this textbook because of the vast scope of potential topics that would
need to be explained in order to do that. Instead, these types of questions are included in ExamSuccess.
Candidates do not need to remember every specific detail from each question, but they should be familiar
with the concepts and issues covered in those questions. Candidates are advised to learn the overall concepts and issues and then apply their best professional judgment in answering questions about them. The
actual exam questions will not be the same as the practice questions in these study materials, since the
practice questions are previous exam questions. The actual exam questions are always being updated and
changed, so it is not likely that past exam questions will be asked again. For that reason, this textbook
does not try to “teach to the study questions” in this section. Instead, the information in this textbook is
focused on the topics covered in the ICMA’s current Learning Outcome Statements, as questions asked on
an exam today are more likely to be from the current Learning Outcome Statements than they are likely to
duplicate past exam questions.
Most of the internal control concepts covered in Governance, Risk and Compliance are adapted from the
report Internal Control – Integrated Framework developed by COSO, the Committee of Sponsoring Organizations of the Treadway Commission. It is the most widely used framework for designing and evaluating internal control systems.
The second topic in this section is Systems Controls and Security Measures. The most important thing to
learn in Systems Controls and Security Measures is the terminology. Some candidates may already be
familiar with the terminology from work or experience with computer systems, but if not, it is important to
learn it.
E.1. Governance, Risk, and Compliance
The internal controls of a company are an important part of its overall operations. A strong internal control
system will provide many benefits to a company including:
•	Lower external audit costs.
•	Better control over the assets of the company.
•	Reliable information for use in decision-making.
A company with weak internal controls is putting itself at risk for employee theft, loss of control over the
information relating to operations, and other inefficiencies in operations and decision-making that can damage its business.
Corporate Governance
Good corporate governance is basic to internal control. The term “governance” will be used frequently in
this section of the textbook. What is corporate governance, why is it important, and how is it related to
internal control, risk assessment, and risk management?
What is Corporate Governance?
Corporate governance includes all of the means by which businesses are directed and controlled, including
the rules, regulations, processes, customs, policies, procedures, institutions and laws that affect the way
the business is administered. Corporate governance spells out the rules and procedures to be followed in
making decisions for the corporation. Corporate governance is the joint responsibility of the board of directors and management.
Corporate governance also involves the relationships among the various participants and stakeholders in
the corporation, such as the board of directors, the shareholders, the Chief Executive Officer (CEO), and
the managers.
Corporate governance is very concerned with what is known as the “agency problem.” Agency issues arise
from the fact that the owners of the corporation (the shareholders) and the managers of the corporation
(the agents of the shareholders) are different people. The priorities and concerns of the managers are
different from the priorities and concerns of the shareholders. The managers are concerned with what will
benefit them personally and lead to increased salary, bonuses, power, and prestige. The shareholders’
priorities lie with seeing the value of their investments in the corporation increase. The priorities of the
shareholders and the priorities of the managers can easily be in conflict with one another, because what
benefits the managers may not benefit the owners.
Therefore, corporate governance specifies the distribution of rights and responsibilities among the
various parties with conflicting priorities and concerns in an effort to mitigate the agency problem and bring
about congruence between the goals of the shareholders and the goals of the agents.24 Incentives are
needed so the agents will take actions that are consistent with shareholder benefit. At the same time,
however, monitoring mechanisms are needed to control any activities of the agents that would benefit them
while hurting the shareholders.
For example, management compensation policies that tie managers’ bonuses to stock price increases can
lead to actions on the part of management that will cause the stock price to increase and will thus be good
for all shareholders. However, if managers try to conceal poor financial performance in an effort to keep
the stock price going up so their own bonuses remain intact, those same incentives can lead to fraudulent
financial reporting, which obviously is not good for the shareholders or any other stakeholders. Prevention
of unintended consequences such as fraudulent financial reporting is the responsibility of the board of directors and should be implemented through internal controls.

24 Goal congruence is defined as “aligning the goals of two or more groups.” As used in business, it refers to the aligning of goals of the individual managers with the goals of the organization as a whole.
Why is Corporate Governance Important?
Corporate governance has always been an important topic for shareholders, management, and the board
of directors. However, the topic took on greater importance following the dramatic downfalls of companies
such as Enron, WorldCom, Adelphia and others back in 2001-02. More recently, the world financial crisis
that began in 2008 raised again the issue of good corporate governance. AIG (American International
Group) went from being the 18th largest public company in the world in 2008 to needing an $85 billion U.S.
government bailout. The Lehman Bros. bankruptcy in September 2008 was the largest bankruptcy in U.S.
history. The lesson from these events is that good governance is not just a U.S. issue but a global issue.
Good governance is not just a good idea for a company—it is an absolute must. Considering just Enron,
more than $60 billion of shareholder wealth was erased from investors’ books. Good corporate governance
is not only important for a company’s shareholders but is also vital to the general health and well-being of a country’s economy.
Corporate governance does not exist as a set of distinct and separate processes and structures. It is interconnected with the company’s internal control and enterprise risk management.
How is Corporate Governance Related to Risk Assessment, Internal Control and Risk Management?
Corporate governance specifies the distribution of rights and responsibilities among the various participants
in the corporation.
•	The board of directors and executive management are responsible for developing and implementing business strategies.
•	In setting business strategies, the board and executive management must consider risk.
•	In order to consider risk, the company must have an effective process for identifying, assessing and managing risk.
•	In order to have an effective risk management process, the company must have an effective internal control system, because an effective internal control system is necessary in order to communicate and manage risk.
Therefore, governance, risk management and internal control all rely on each other.
The internal audit activity serves as the “eyes and ears” of management and the audit committee and thus
has an important role in the governance function of the organization.
Internal audit’s primary role is assessing internal controls over the reliability of financial reporting, the
effectiveness and efficiency of operations, and the organization’s compliance with applicable laws and regulations. According to IIA (Institute of Internal Auditors) Internal Auditing Standard 2110, this primary role
of internal audit includes assessing and making appropriate recommendations for improving the governance
process in its accomplishment of the following objectives:
•	Promoting appropriate ethics and values within the organization.
•	Ensuring effective organizational performance, management and accountability.
•	Communicating risk and control information to appropriate areas of the organization.
•	Coordinating the activities of and communicating information among the board, external and internal auditors, and management.
Principles of Good Governance
A set of governance principles, called 21st Century Governance Principles for U.S. Public Companies, was
published in 2007 by a group of leading academic experts from four universities. The principles were developed by Paul D. Lapides, Joseph V. Carcello, Dana R. Hermanson and James G. Tompkins of Kennesaw
State University; Mark S. Beasley of North Carolina State University; F. Todd DeZoort of The University of
Alabama; and Terry L. Neal of University of Tennessee. The authors stated that the purpose of the principles
was “to advance the current dialogue and to continue to promote investor, stakeholder and financial statement user interests.”
The principles are (some explanatory footnotes have been added):
1)	Board Purpose – The board of directors should understand that its purpose is to promote and protect the interests of the corporation’s stockholders while considering the interests of other external and internal stakeholders, such as creditors and employees.
2)	Board Responsibilities – The board’s major areas of responsibility should be monitoring the CEO and other senior executives; overseeing the corporation’s strategy and processes for managing the enterprise, including succession planning; and monitoring the corporation’s risks and internal controls, including the ethical tone.25 Directors should employ healthy skepticism26 in meeting their responsibilities.
3)	Interaction – Sound governance requires effective interaction among the board, management, the external auditor, the internal auditor, and legal counsel.
4)	Independence – An “independent” director has no current or prior professional or personal ties to the corporation or its management other than service as a director. Independent directors must be able and willing to be objective in their judgments. The vast majority of the directors should be independent in both fact and appearance.
5)	Expertise and Integrity – The directors should possess relevant business, industry, company, and governance expertise. The directors should reflect a mix of backgrounds and perspectives and have unblemished records of integrity. All directors should receive detailed orientation and continuing education to assure they achieve and maintain the necessary level of expertise.
6)	Leadership – The roles of Board Chair and CEO should be separate.27 If the roles are not separate, then the independent directors should appoint an independent lead director. The lead director and committee chairs should provide leadership for agenda setting, meetings, and executive sessions.
7)	Committees – The audit, compensation and governance committees of the board should have charters, authorized by the board, that outline how each committee will be organized, the committees’ duties and responsibilities, and how they report to the board. Each of these committees should be composed of independent directors only, and each committee should have access to independent outside advisors who report directly to the committee.
25 Companies need to make sure that inappropriate and unethical behavior is not tolerated. A culture of integrity is dependent on the “tone at the top,” or the overall ethical climate that originates at the top of the organization with the board of directors, the audit committee of the board, and the CEO.
26 “Healthy skepticism” means having an attitude of doubt but not carrying it so far as to suspect wrongdoing everywhere. It means asking questions, gathering information, and making a decision based on objective analysis. In the present context, it means directors should not just accept without question the information they are given by management but should “dig a little deeper” and find out the facts, because management may have “forgotten” to include some of the facts. Directors are not present on a day-to-day basis, and that makes their jobs more difficult than they would be if the directors were on site. However, they can talk to people within the organization at all levels and ask questions, and they should do that. Directors should not just assume that what they are being told is true or is the whole truth.
27 Not too long ago, the CEO frequently served also as Chairman of the board of directors, and the dual role was not questioned. However, since the board’s responsibilities include monitoring the CEO, the CEO should not serve as Chairman of the board, because that creates a conflict of interest. The CEO would be leading the body that would be monitoring the CEO.
8)	Meetings and Information – The board and its committees should meet frequently for extended periods of time and should have unrestricted access to the information and personnel they need to perform their duties. The independent directors and each of the committees should meet in executive session on a regular basis.
9)	Internal Audit – All public companies should maintain an effective, full-time internal audit function that reports directly to the audit committee of the board of directors through the Chief Audit Executive.28 Companies also should consider providing an internal audit report to external stakeholders to describe the internal audit function, including its composition, responsibilities, and activities.
10)	Compensation – The compensation committee and full board should carefully consider the compensation amount and mix (e.g., short-term vs. long-term, cash vs. equity) for executives and directors. The compensation committee should evaluate the incentives and risks associated with a heavy emphasis on short-term performance-based incentive compensation for executives and directors.
11)	Disclosure – Proxy statements29 and other communications (required filings and press releases) should reflect board and corporate activities and transactions in a transparent and timely manner (e.g., financial performance, mergers and acquisitions, executive compensation, director compensation, insider trades, related-party transactions). Companies with anti-takeover provisions should disclose why such provisions are in the best interests of their shareholders.
12)	Proxy Access – The board should have a process for shareholders to nominate director candidates, including access to the proxy statement for long-term shareholders with significant ownership stakes.
13)	Evaluation – The board should have procedures in place to evaluate on an annual basis the CEO, the board committees, the board as a whole, and individual directors. The evaluation process should be a catalyst for change in the best interests of the shareholders.
Hierarchy of Corporate Governance
Corporate governance includes all of the means by which businesses are directed and controlled. It spells
out the rules and procedures to be followed in making decisions for the corporation.
Formation, Charter, and Bylaws
U.S. corporations are formed under authority of state statutes. Application for a charter must be made to
the proper authorities of a state in order to form a corporation. A business usually incorporates in the state
where it intends to transact business, but it may be formed in one state while having its principal place of business or conducting its business operations in another state or states. A company that
wants to have its principal place of business located in a different state from its incorporation files with the
other state for a license to do business in that state, and it is known as a “foreign” (that is, “out of state”)
corporation in that state. The corporation will owe state income tax, state franchise tax, state sales taxes
and any other state taxes imposed on businesses not only to the state where it is incorporated, but also to
every state where it is licensed as a foreign corporation.
Although the means of organizing a corporation may vary from state to state to some extent, each state
usually requires that articles of incorporation (the charter) be filed with the secretary of state or another
designated official within that state.
28 If the company’s board of directors does not have an audit committee, then the full board serves as the audit committee, and the internal audit function, through the Chief Audit Executive, reports to the full board.
29 A proxy statement is a document containing the information the SEC requires publicly-held corporations reporting to it to provide to shareholders so they can make informed decisions about matters that will be brought up at an annual stockholders’ meeting.
The corporation’s charter, also referred to as its “Articles of Incorporation” or “Certificate of Incorporation,” details the following:
•	The name of the corporation. In many states, the corporate name must contain the word “corporation,” or “incorporated,” or “company,” or “limited,” or an abbreviation thereof. A corporate name cannot be the same as, or deceptively similar to, the name of any other domestic corporation or any other foreign corporation authorized to do business within the state.
•	The length of the corporation’s life, which is usually perpetual (meaning forever).
•	Its purpose and the nature of its business.
•	The authorized number of shares of capital stock that can be issued with a description of the various classes of such stock.
•	Provision for amending the articles of incorporation.
•	Whether or not existing shareholders have the first right to buy new shares.
•	The names and addresses of the incorporators, whose powers terminate upon filing.
•	The names and addresses of the members of the initial board of directors, whose powers commence (begin) upon filing.
•	The name and address of the corporation’s registered agent for receiving service of process and other notices.
The persons who sign the articles of incorporation are called the incorporators. Incorporators’ services
end with the filing of the articles of incorporation, and the initial board of directors, named in the articles
of incorporation, takes over.
State laws typically require that incorporators be natural persons (that is, individual human beings) at least eighteen years of age. State laws vary as to the number of incorporators required, but in most states, only one incorporator is required.
Note: In some states, however, a corporation, being itself a legal entity, may act as an incorporator.
Most states provide standardized forms for articles of incorporation. A corporation can use the standardized
form or file another form as long as it complies with state requirements. The articles of incorporation are
filed with the designated state official for such filings, ordinarily the secretary of state. A corporation is
usually recognized as a legal entity as soon as the articles of incorporation are filed or when the
certificate of incorporation is issued by the state. However, some states may also require additional
filings in some counties before the corporation is recognized as a legal entity.
After the articles of incorporation have been filed and the certificate of incorporation has been issued by
the state, the following steps must be carried out by the new corporation:
1)	The incorporators elect the directors if they are not named in the articles,
2)	The incorporators resign,
3)	The directors meet to complete the organizational structure. At this meeting they:
a.	Adopt bylaws for internal management of the corporation. The bylaws specify:
o	The requirements for annual meetings of shareholders;
o	Specifications regarding what constitutes a quorum at a shareholders’ meeting and what constitutes a majority vote on the part of shareholders;
o	Methods of calling special shareholders’ meetings;
o	How directors are to be elected by the shareholders, the number of directors and the length of their terms, specifications for meetings of the board of directors and for what constitutes a quorum at a board meeting;
o	How officers are to be elected by the board of directors, officer positions and the responsibilities of each officer position;
o	How the shares of the corporation shall be represented (for example, by certificates) and how shares shall be issued and transferred;
o	Specifications for payments of dividends; and
o	How the bylaws can be amended. The directors ordinarily have the power to enact, amend or repeal bylaws, but this authority may instead be reserved to the shareholders. Bylaws must conform to all state laws and specifications in the articles of incorporation.
Note: Employees are not legally bound by the bylaws unless they have reason to be familiar with them.
b.	Elect officers.
c.	Authorize establishment of the corporate bank account, designate the bank, and designate by name the persons who are authorized to sign checks on the account.
d.	Consider for ratification any contracts entered into before incorporation.
e.	Approve the form of certificate that will represent shares of the company’s stock.
f.	Accept or reject stock subscriptions.
g.	Comply with any requirements for doing business in other states. For example, if a corporation files with another state as a foreign corporation located in that state, it will need to appoint a registered agent in that state.
h.	Adopt a corporate seal to be used for corporate documents for which a seal is required.
i.	Consider any other business as necessary for carrying on the business purpose of the corporation.
Amending the Articles of Incorporation
Most state corporation laws permit amendment of the articles of incorporation. An example of an amendment is an increase in the number of authorized shares of common stock.
Any amendment to the articles of incorporation must be something that could have been included in the
original articles of incorporation. Thus, an amendment is not allowed for something that the corporation
cannot legally do.
The board of directors usually adopts a resolution containing the proposed amendment, and then the resolution must be approved by a majority of the voting shares. After shareholder approval, the articles of
amendment are filed with the state authorities. The amendments become effective only upon the issuance of a certificate of amendment.
Although the articles of incorporation specify the name and address of the corporation’s initial registered
agent, changing the registered agent can usually be done by the board of directors without the need for
shareholder approval.
Responsibilities of the Board of Directors
The board of directors of a company is responsible for ensuring that the company is operated in the best
interest of the shareholders, who are the owners of the company.
Thus, the members of the board of directors represent the owners of the company. The board’s responsibility is to provide governance, guidance and oversight to the management of the company. The board has
the following specific responsibilities:
•	Selecting and overseeing management. The board of directors elects the officers of the company and the board of directors is responsible for overseeing the activities of the officers they elect.
•	Because it elects the company’s management, the board determines what it expects from management in terms of integrity and ethics and it confirms its expectations in its oversight activities.
•	The board has authority in key decisions and plays a role in top-level strategic objective-setting and strategic planning.
•	Because of its oversight responsibility, the board is closely involved with the company’s internal control activities.
•	Board members need to be familiar with the company’s activities and environment, and they need to commit the time required to fulfill their board responsibilities, even though they may be outside, independent directors.
•	Board members should investigate any issues they consider important. They must be willing to ask the tough questions and to question management’s activities. They must have access to the necessary resources to do investigations and must have unrestricted communications with all company personnel—including the internal auditors—as well as with the company’s external auditors and its legal counsel.
•	Because board members are responsible for questioning and scrutinizing management’s activities, it is important for the board members to be independent of the company. An independent director has no material relationship with the company. In other words, an independent director is not an officer or employee of the company and thus is not active in the day-to-day management of the company. Boards of companies that are listed on secondary securities markets such as the New York Stock Exchange are required to consist of a majority of independent directors.
Most boards of directors carry out their duties through committees. Committees of the board of directors
are made up of selected board members and are smaller, working groups of directors that are tasked with
specific oversight responsibilities. One of the committees whose membership is prescribed by SEC regulations is the audit committee. Other usual committees are governance, compensation, finance, nominating
and employee benefits. All of the committees of the board of directors are important parts of the company’s
internal control system, as their members can bring specific internal control guidance in their specific areas
of responsibility.
Audit Committee Requirements, Responsibilities and Authority
The responsibilities of the audit committee are particularly critical. The requirements for serving on an audit
committee of a publicly-held company have been formalized in law and regulations, first by the Sarbanes-Oxley Act of 2002 and then, as directed by Sarbanes-Oxley, in SEC regulations. Secondary securities markets also include audit committee requirements in their listing regulations.
According to the New York Stock Exchange, the audit committee of the board of directors of a corporation
“stands at the crucial intersection of management, independent auditors, internal auditors and the board
of directors.” The audit committee of the board of directors is made up of members of the board of directors
who are charged with overseeing the audit function of the corporation. The members’ audit committee responsibilities are in addition to their responsibilities as members of the larger board.
The SEC first recommended that boards of directors of corporations have audit committees in 1972. In
short order, stock exchanges began requiring or at least recommending that listed companies have audit
committees. The responsibilities of audit committees have been increased over the years.
The Sarbanes-Oxley Act of 2002 increased audit committees’ responsibilities to a great degree. It also
increased the qualifications required for members of audit committees, and it increased the authority of
audit committees. In response to the Sarbanes-Oxley Act, the stock exchanges and the SEC developed new
rules and regulations for the purpose of strengthening audit committees.
Under Section 3(a)(58) of the Exchange Act, as added by Section 205 of the Sarbanes-Oxley Act, the audit
committee is defined as:
•	A committee (or equivalent body) established by and amongst the board of directors of an issuer for the purpose of overseeing the accounting and financial reporting processes of the issuer and audits of the financial statements of the issuer; and
•	If no such committee exists with respect to an issuer, the entire board of directors of the issuer.
Accordingly, the SEC’s final rule on audit committees for issuers of securities states that an issuer either
may have a separately designated audit committee composed of members of its board or, if it fails to form
a separate committee or if it chooses, the entire board of directors will constitute the audit committee.
Thus, the requirements for and responsibilities of members of audit committees of public companies’ boards
of directors are highly regulated.
Requirements for Audit Committee and Audit Committee Members
1)	The audit committee is to consist of at least three members. This requirement is a listing requirement of the New York Stock Exchange and other stock exchanges. The Sarbanes-Oxley Act and the SEC do not prescribe a minimum number of members for the audit committee but do state that if the corporation does not form an audit committee, the entire board of directors will be responsible for the audit committee function.
2)	All members of the audit committee must be independent per Section 10A-3(b)(3) of the Securities Exchange Act of 1934 (15 U.S.C. 78f), as amended. This requirement means that the members of the audit committee may not be employed by the company in any capacity other than for their service as board members and on any committee of the board.
3)	In addition, the New York Stock Exchange requires a five-year “cooling-off” period for former employees of the listed company or of its independent auditor before they can serve on the audit committee of a listed company.
4)	One member of the audit committee must be a financial expert. This requirement is made by stock exchanges. The Sarbanes-Oxley Act requires that if the audit committee does not include a financial expert, this fact must be disclosed.30
5)	All members of the audit committee must be financially literate at the time of their appointment or must become financially literate within a reasonable period of time after their appointment to the audit committee. Financial literacy of all members of the audit committee is a listing requirement of the New York Stock Exchange and other stock exchanges.
30 The SEC defines the audit committee financial expert as an individual who the board of directors determines possesses the following:
1) An understanding of financial statements and generally accepted accounting principles (GAAP)
2) An ability to assess the application of GAAP in connection with the accounting for estimates, accruals, and reserves
3) Experience preparing, auditing, analyzing, or evaluating financial statements that present a breadth and level of complexity of accounting issues generally comparable to what can reasonably be expected to be raised by the issuer’s financial statements, or experience actively supervising those engaged in such activities
4) An understanding of internal control over financial reporting
5) An understanding of the audit committee’s functions.
Responsibilities of the Audit Committee
1)	The audit committee is responsible for selecting and nominating the external auditor, approving audit fees, supervising the external auditor, overseeing auditor qualifications and independence, discussing with the auditors matters required under generally accepted auditing standards, and reviewing the audit scope, plan, and results.
2)	The New York Stock Exchange’s Listing Manual requires that listed companies have an audit committee charter that “addresses the committee’s purpose—which, at minimum, must be to: (A) assist board oversight of (1) the integrity of the listed company’s financial statements, (2) the listed company’s compliance with legal and regulatory requirements, (3) the independent auditor’s qualifications and independence, and (4) the performance of the listed company’s internal audit function and independent auditors; and (B) prepare an audit committee report as required by the SEC to be included in the listed company’s annual proxy statement.”
3)	Rule 10A-3(b)(4) of the Securities Exchange Act specifies that “each audit committee shall establish procedures for (A) the receipt, retention, and treatment of complaints received by the issuer regarding accounting, internal accounting controls, or auditing matters; and (B) the confidential, anonymous submission by employees of the issuer of concerns regarding questionable accounting or auditing matters.” This rule relates to the “whistleblower”31 requirement in the Sarbanes-Oxley Act.
4)	In addition, the New York Stock Exchange specifically requires the following for companies listed on the New York Stock Exchange (known as “listed companies”):
o	The audit committee is to review the annual and quarterly financial statements and the MD&A (Management Discussion and Analysis) in the company’s annual report filed with the SEC and discuss them with management and the independent auditors.
o	The audit committee is to meet periodically and separately with management and with internal auditors and independent auditors in order to uncover issues warranting committee attention.
o	The audit committee is to review with the independent auditor any audit problems or difficulties, including any restrictions on the scope of the independent auditor’s activities or on access to requested information, and any significant disagreements with management and management’s response.
o	The audit committee is to set clear hiring policies for employees or former employees of the independent auditors, taking into account the pressures that may exist for auditors consciously or subconsciously when seeking a job with a company they audit.
5)	The Blue Ribbon Committee report recommended that the audit committee monitor the company’s internal control processes, and most audit committees do such monitoring. They oversee the internal audit function and monitor internal control systems for compliance with legal and regulatory requirements.
31 A “whistleblower” is a person who informs on someone else or makes public disclosure of corruption or wrongdoing. Section 301 of the Sarbanes-Oxley Act mandated that audit committees of public companies establish a system for receiving, retaining, and treating whistleblower complaints regarding accounting, internal controls, or auditing matters. Public companies are required to establish a means for confidential, anonymous submission by employees and others about concerns they may have regarding questionable accounting and auditing matters. Furthermore, Section 806 of Sarbanes-Oxley authorizes the U.S. Department of Labor to protect whistleblower complainants against employers who retaliate and also authorizes the U.S. Department of Justice to criminally charge those responsible for any retaliation. Section 1107 of the Act makes it a crime for a person to knowingly retaliate against a whistleblower for disclosing truthful information to a law enforcement officer regarding an alleged federal offense.
Authority and Funding of the Audit Committee
Rule 10A-3(b)(5) of the Securities Exchange Act provides that “each audit committee shall have the authority to engage independent counsel and other advisers, as it determines necessary to carry out its
duties.”
Rule 10A-3(b)(6) of the Securities Exchange Act provides that “each issuer shall provide for appropriate
funding, as determined by the audit committee, in its capacity as a committee of the board of directors, for
payment of compensation (A) to the registered public accounting firm employed by the issuer for the purpose of rendering or issuing an audit report; and (B) to any advisers employed by the audit committee
under paragraph (5).”
The audit committee has the authority to investigate any matter.
Responsibilities of the Chief Executive Officer (CEO)
The responsibilities of the CEO are determined by the corporation’s board of directors. A CEO’s responsibilities and authority can be extensive, or they can be very limited, depending on how much authority and
responsibility the board of directors delegates to the CEO.
The CEO should not serve as chairman of the board of directors. Since the board’s responsibilities include monitoring the CEO, having the CEO chair the board creates a conflict of interest: the CEO would be leading the body that monitors the CEO.
Election of Directors
The shareholders elect the members of the board of directors. Usually, each share of stock is allowed one
vote, and usually directors are elected by a plurality (whoever gets the most votes is elected, even if it is
not a majority).
The length of the directors’ term of office is set in the corporate bylaws. The term is usually one year, but
it may be longer, such as three years in staggered terms, with one-third of the board members up for
election at each annual shareholders’ meeting. Holding staggered elections provides for continuity on the
board as each year some of the members leave the board while the remainder of the members return.
Note: Power for the board to increase its size without shareholder approval can be reserved in the
articles of incorporation or the bylaws of the corporation.
Internal Control
Who Cares About Internal Control?
Ever since commercial organizations, nonprofit organizations and governments have existed, their leaders
have recognized the need to exercise control in order to ensure that their objectives were achieved. Today,
however, the leaders of an organization are not the only ones who care about its internal control policies
and procedures.
•	For a public company, information on the effectiveness of its internal control system is important to investors to enable them to evaluate management’s performance of its stewardship responsibilities as well as the reliability of its financial statements.
•	The company’s external auditors recognize that an audit of a company with effective internal controls can be performed more efficiently.
•	The potential for U.S. corporations to make illegal payments to foreign governments is of concern to legislative and regulatory bodies. Compliance with legislation prohibiting such activities is addressed through internal control policies and procedures.
•	The development of larger organizations with increased numbers of employees has made it necessary for management to limit and direct employees’ authority and discretion through internal control policies and procedures.
•	Even customers have an indirect interest in internal controls because a strong internal control system may reduce the costs of production, and therefore also reduce products’ prices.
Internal Control Definition
According to the COSO publication, Internal Control – Integrated Framework,32
Internal control is a process, effected by33 an entity’s board of directors, management, and
other personnel, designed to provide reasonable assurance regarding the achievement of objectives relating to operations, reporting, and compliance.
Thus, internal control is a process that is carried out (effected) by an entity’s board of directors, management, and other personnel that is designed to provide reasonable assurance that the company’s
objectives relating to operations, reporting, and compliance will be achieved.
1)	Operations objectives relate to the effectiveness and efficiency of operations, or the extent to which the company’s basic business objectives are being achieved and its resources are being used effectively and efficiently. Operations objectives include operational and financial performance goals and safeguarding of assets against loss. A company’s operations objectives will vary depending on the choices management makes about structure and performance. As part of the objective-setting process, management should specify its risk tolerance. For operations objectives, risk tolerance might be expressed as an acceptable level of variance related to the objective.
2)	Reporting objectives pertain to internal and external financial and non-financial reporting. Reporting objectives include reliability, timeliness, transparency, or other requirements as set forth by regulators, recognized standard setters, or the entity’s policies. External reporting objectives are driven by rules, regulations, and standards as established by regulators and standard-setters external to the organization. Internal reporting objectives are driven by the entity’s strategic direction and by reporting requirements established by management and the board of directors.
3)	Compliance objectives relate to the organization’s compliance with applicable laws and regulations, encompassing all laws and regulations to which the company is subject. These laws and regulations establish minimum standards of behavior and may include marketing, packaging, pricing, taxes, environmental protection, employee safety and welfare, and international trade as well as many others. For a publicly-held corporation or any corporation that reports to the SEC, compliance objectives also include the SEC’s reporting requirements. A company’s record of compliance or noncompliance with laws and regulations affects its reputation in its communities and its risk of being the recipient of disciplinary procedures.

32 Internal Control – Integrated Framework, copyright 1992, 1994 and 2013 by the Committee of Sponsoring Organizations of the Treadway Commission. Used by permission. The Committee of Sponsoring Organizations of the Treadway Commission includes the following five organizations: American Institute of Certified Public Accountants (AICPA), American Accounting Association (AAA), Institute of Internal Auditors (IIA), Institute of Management Accountants (IMA), and Financial Executives International (FEI).
33 To “effect” something means to cause it to happen, put it into effect, or to accomplish it. Thus, “effected by” means “put into effect by” or “accomplished by.”
Note: The three categories of company objectives with which internal control is concerned—operations,
reporting, and compliance—are very important to know.
The three categories address different needs and they may be the direct responsibilities of different managers. But every internal control should be directed toward the achievement of objectives in at least one
and possibly more than one of the three categories.
The three categories of objectives are distinct, but they do overlap. Therefore, a specific control objective
for a specific company could fall under more than one category. For example, the reporting objective of
ensuring reliable external financial reporting in accordance with accounting standards also concerns the
compliance objective of adhering to accounting standards as established by standard-setters and, for publicly-held corporations, complying with the SEC’s reporting requirements in accordance with that body’s
regulations.
Fundamental Concepts
The definition of internal control reflects several fundamental concepts, as follows:
1)	The purpose of internal control is to help the company achieve its objectives. The focus is on achieving objectives. The objectives that internal control applies to fall into the three categories above: operations, reporting, and compliance.
2)	Internal control is an ongoing process. It is not something that can be done once and be completed. It is a journey, not a destination. It consists of ongoing tasks and activities. It is a means to an end, not an end in itself.
3)	Internal control is effected (accomplished) by people. It is something that must be put into effect by people—it is not merely written policies and procedures. People are located throughout the organization at every level, from the members of the board of directors to the staff. Simply writing policy manuals that call for internal control procedures is not enough. To be effective, people must put the policies and procedures into effect.
4)	Internal control procedures can provide reasonable assurance only—not absolute assurance and not a guarantee—to the entity’s board of directors and senior management that the company’s objectives will be achieved in the three named areas. This statement reflects the fundamental concepts that (1) the cost of an internal control system should not exceed the expected benefits, and (2) the overall impact of a control procedure should not hinder operating efficiency.
5)	Internal control must be flexible in order to be adaptable to the entity’s structure. Internal control needs to be adaptable to apply to an entire entity or just to a particular subsidiary, division, operating unit, or business process.
The Importance of Objectives
Since internal control’s purpose is to provide reasonable assurance regarding the achievement of objectives
relating to operations, reporting, and compliance, it stands to reason that internal control cannot operate
effectively unless objectives have been set. Setting objectives is part of the strategic planning process by
management and the board of directors. Objectives should be set with consideration given to laws, regulations, and standards as well as management’s choices. Internal control cannot establish the entity’s
objectives.
Who Is Responsible for Internal Control?
The board of directors is responsible for overseeing the internal control system. The board’s oversight
responsibilities include providing advice and direction to management, constructively challenging management, approving policies and major transactions, and monitoring management’s activities. Consequently,
the board of directors is an important element of internal control. The board and senior management establish the tone for the organization concerning the importance of internal control and the expected
standards of conduct across the entity.
The CEO is ultimately responsible for the internal control system and the “tone at the top.” The CEO
should provide leadership and direction to the senior managers and review the way they are controlling the
business. The “tone at the top” (part of the control environment) is discussed in more detail below.
Senior managers delegate responsibility for establishment of specific internal control policies and procedures to personnel responsible for each unit’s functions.
Financial officers and their staffs are central to the exercise of control, as their activities cut across as
well as up and down the organization. However, all management personnel are involved, especially in
controlling their own units’ activities.
Internal auditors play a monitoring role. They evaluate the effectiveness of the internal controls established by management, thereby contributing to their ongoing effectiveness.
Virtually all employees are involved in internal control, because all employees produce information used
in the internal control system or carry out other activities that put the internal control systems into effect.
Furthermore, all employees are responsible for letting their managers know if they become aware of problems in operations or of violations of rules, regulations, or policies.
External parties provide information that is useful to effective internal control. For example, independent
auditors audit the financial statements and often provide other useful information as well to management
and the board. Other external parties that may provide useful information include legislators, regulators,
customers, financial analysts, bond rating agencies and the news media. However, external parties are
not part of the company’s internal control system, and they are not responsible for it.
Note: Internal auditors evaluate the effectiveness of the control systems and contribute to their ongoing
effectiveness, but they do not have responsibility for establishing or maintaining the control systems.
Note: Internal control should be an explicit or implicit part of everyone’s job description.
Components of Internal Control
According to the COSO report, Internal Control – Integrated Framework (2013 update), five interrelated
components comprise internal control. If the five components are present and functioning effectively, their
effective functioning provides reasonable assurance regarding achievement of the company’s objectives.
Thus, these components are all necessary for effective internal control to be present. They are:
1)	Control Environment
2)	Risk Assessment
3)	Control Activities
4)	Information and Communication
5)	Monitoring Activities
Embedded within these five components are 17 principles.
Component 1: Control Environment
The control environment includes the standards, processes, and structures that provide the foundation for
carrying out internal control. The board of directors and senior management are responsible for establishing
the “tone at the top,” including expected standards of conduct that apply to all employees. Management is
responsible for reinforcing the expectations at all levels of the organization.
The control environment provides the organization’s ethical values. It influences the control consciousness
of all the people in the organization and sets the tone for the entire organization. If the control environment
does not include the necessary factors, none of the other components of internal control will be effective.
Organizations with effective control environments have the following characteristics, exemplified by these
five principles:
1)	They demonstrate a commitment to integrity and ethical values. They set a positive “tone at the
the top” by communicating, verbally and by example as well as formally, the organization’s ethical
values and commitment to integrity.
Every company should establish standards of conduct and formal policies regarding acceptable
business practices, conflicts of interest, and expected standards of behavior. However, these official statements state only what management wants to have happen. Corporate culture, or the
“tone at the top,” determines what actually does happen. Top management, especially the CEO,
sets the ethical tone by modeling the ethical and behavioral standards expected of everyone in the
organization. Leadership by example is the most effective means of communicating that ethical
behavior is expected, because people imitate their leaders.
Management should foster a “control consciousness” by setting formal and clearly communicated
policies and procedures to be followed at all times, without exception, and that result in shared
values and teamwork.
Standards of integrity and ethical values extend to outsourced service providers and business
partners, as well. Management retains responsibility for the performance of processes it has delegated to outsourced providers and business partners.
Processes should be in place to identify issues and evaluate the performance of individuals and
teams against the expected standards of conduct, and deviations need to be addressed in a timely
and consistent manner. The actions taken by management when violations occur send a message
to employees, and that message quickly becomes a part of the corporate culture.
2)	The board of directors demonstrates independence from management and exercises oversight over development and performance of internal control. The board of directors is responsible for setting corporate policy and for seeing that the company is operated in the best interest of its owners, the shareholders. The attention and direction provided by the directors are critical.
The board of directors should have a sufficient number of members who are independent from
management (not employed full-time by the company in management positions) to be independent
and objective in its evaluations and decision-making. Independence of the board from management is critical, so that if necessary, difficult and probing questions will be raised.
Board and audit committee members should hold regular meetings with chief financial and accounting officers and internal and external auditors. Sufficient and timely information should be
provided to board and audit committee members.
The board of directors has oversight responsibility for internal control, but the Chief Executive
Officer and senior management have direct responsibility for developing and implementing the
organization’s internal control system.
3)	With the oversight of the board, management establishes structures, reporting lines, and
appropriate authorities and responsibilities to enable the corporation to pursue its objectives.
The company’s organizational structure should define the key areas of authority and responsibility
and delineate lines for reporting. The organizational structure is key to the company’s ability to
achieve its objectives, because the organizational structure provides the framework for planning,
executing, controlling and monitoring the activities it pursues to achieve its objectives.
Authority and responsibility should be delegated to the extent necessary to achieve the organization’s objectives. The board of directors delegates authority and assigns responsibility to senior
management. Senior management delegates authority and assigns responsibility at the entity level
and to its subunits. The way management assigns authority and responsibility for operating activities affects the control environment because it determines how much initiative individuals are
encouraged to use in solving problems as well as the limits of their authority.
Delegation of authority means giving up centralized control of some of the business decisions and
allowing those decisions to be made at lower levels in the organization by the people who are
closest to the day-to-day operations of the business. Delegation of authority provides the organization with greater agility, but it also introduces complexity in risks to be managed. Senior
management with guidance from the board of directors needs to determine what is and is not
acceptable, in line with the organization’s regulatory or contractual obligations.
The challenge is to delegate only to the extent required to achieve the organization’s objectives.
The delegation should be based on sound practices for identifying and minimizing risk and on
weighing potential losses against potential gains from delegation.
4) The organization demonstrates a commitment to attract, develop, and retain competent individuals in alignment with objectives. In order for tasks to be accomplished in accordance
with the company’s objectives and plans for achievement of those objectives, the company needs
to have competent personnel. In order to have competent personnel, management should specify
the knowledge and skills required for each position. Formal or informal job descriptions should
specify the competence level needed for each job, and the company should make every effort to
hire and retain competent people and to train them when necessary.
Background checks should be thorough when hiring new employees. At a minimum, the applicant’s
work history and education should be confirmed and references checked. Any embellishment or
undisclosed history should be a red flag.
Individuals who are working in positions for which they are unqualified create a risk simply because
they are not capable of adequately performing the work they are supposed to do. Their lack of
capability provides an opportunity for someone else to take advantage of their lack of knowledge
or skills and perpetrate a fraud. Therefore, appropriate personnel policies and procedures are integral to an effective control environment.
The board of directors should evaluate the competence of the CEO, and management should evaluate the competence across the organization and within outsourced providers in relation to
established policies and procedures and then act as necessary to address any shortcomings.
5) The organization holds individuals accountable for their internal control responsibilities in pursuit of objectives.
The board of directors holds the CEO accountable for understanding the risks faced by the organization and for establishing internal controls to support the achievement of the organization’s
objectives. The CEO and senior management are responsible for establishing accountability for
internal control at all levels of the organization.
Increased delegation requires personnel to have a higher level of competence and requires the
company to establish accountability. Management should monitor results because the number of
undesirable or unanticipated decisions may increase with increased delegation. The extent that
individuals recognize that they will be held accountable for results greatly affects the control environment. If a person does something that violates the company’s policies and standards, some
sort of disciplinary action should be taken against that person. If no penalty is assessed for violating
the company’s internal control policies, then other individuals will not see the need for compliance.
Management should regularly review the organization’s performance evaluation methods and incentives to ensure that they do not encourage inappropriate conduct. If increases in the bottom
line are the sole focus of performance evaluations, the organization is more likely to experience
unwanted behavior such as manipulation of accounting records and reports, offers of kickbacks,
and high-pressure sales tactics.
Internal controls are more likely to function well if management believes that the controls are important
and communicates that support to employees at all levels.
Component 2: Risk Assessment
Risk is the possibility that something will occur that will adversely affect the organization’s achievement of
its objectives. Within the control environment, management is responsible for the assessment of risk. Two questions should always be asked: “What could go wrong?” and “What assets do we need to protect?”
Risk assessment is the process of identifying, analyzing, and managing the risks that have the potential to
prevent the organization from achieving its objectives, relative to the organization’s established risk tolerance. Assessment of risk involves determining the monetary value of assets that are exposed to loss as
well as the probability that a loss will occur. Management must determine how much risk it is willing to
accept and then work to maintain the risk within that level.
The 17 principles within the five components of Internal Control – Integrated Framework (2013) are numbered consecutively, so the principles relating to the risk assessment component begin with no. 6:
6) The company's objectives must be specified clearly enough so that the risks to those objectives can be assessed. Objective setting is therefore the first step in management's process
of risk assessment. Objectives may be explicitly stated or they may be implicit, such as to continue
a previous level of performance. Setting the company’s operations, reporting, and compliance
objectives is a strategic planning function of management.
Establishing objectives is a required first step to establishing effective internal control because it
forms the basis for assessing risk, in other words determining what could go wrong that could
prevent the company from achieving its objectives. If the objectives are not known, then it is not
possible to determine what could prevent the company from achieving them.
7) The organization identifies risks to the achievement of its objectives and analyzes them to determine how the risks should be managed. The responsibility for risk identification and analysis resides with management at the overall entity level and at the subunit level.
Risks can arise from both internal and external factors that can affect the company’s ability to
achieve its objectives. A change in objectives is an especially significant source of risk: the greater the difference between current objectives and past objectives (that is, the greater the amount of change), the greater the amount of risk. Even the objective of maintaining performance as it has been in the past carries both internal and external risks.
The risk assessment process should consider all risks that may occur. The risk assessment should
be comprehensive and consider all significant interactions between the company and external
parties, throughout the organization. External parties to include in the assessment are suppliers
(current and potential), investors, creditors, shareholders, employees, customers, buyers, intermediaries, competitors, public bodies and the news media.
Once the risks have been identified, they should be analyzed in order to determine how best to
manage each one.
Risk Identification
Risk identification includes identification of risks at all levels of the organizational structure, including the overall entity and its subunits. Entity level risk identification is conducted at a high level
and does not include assessing transaction-level risks. The identification of risks at the process
level is more detailed and includes transaction-level risks. Risks originating in outsourced service
providers, key suppliers, and channel partners need to be included in the risk assessment, as they
could directly or indirectly impact the organization’s achievement of its objectives.
Entity-level risks arise from external or internal factors. Following are a few examples:
•	External factors include economic factors that impact financing availability; environmental factors such as climate change that can require changes in operations, reduced availability of raw materials, or loss of information systems; regulatory changes such as new reporting standards or new laws; changing customer needs; and technological changes that affect the availability and the use of data.
•	Internal factors can include decisions that affect operations and the availability of infrastructure, changes in management responsibilities that affect the way controls are implemented, changes in personnel that can influence the control consciousness in the organization, employee access to assets that could contribute to misappropriation of assets, and a disruption in information systems processing that could adversely affect the organization's operations.
Transaction-level risks occur at the level of subsidiaries, divisions, operating units, or functions
such as sales, production, or marketing. The potential risks depend upon what the objectives are.
For example,
•	An operational objective of maintaining an adequate raw materials inventory could lead to identifying risks such as raw materials not meeting specifications, the failure of a key supplier, supply disruptions in needed raw materials caused by weather conditions, or price increases above acceptable levels.
•	An objective of complying with existing laws and regulations leads to identifying risks associated with lack of compliance.
•	The objective of protecting assets leads to identifying the risk of employee embezzlement accompanied by falsification of records to conceal the theft.
The number of potential risks is limitless, and practical limitations are needed in identifying them.
Some risks, such as a meteor falling out of the sky onto the company's manufacturing facility, are too remote to be considered. However, any situation that causes a change that could impact the system
of internal control should be included.
Risk Analysis
Risk analysis forms the basis for determining how the risks will be managed. After the company
has identified its entity-level and transaction-level risks, it should perform a risk analysis to (1)
assess the likelihood or frequency of each risk’s occurring; (2) estimate the impact of each risk;
and (3) consider how each risk should be managed by assessing what actions need to be taken.
Risks that do not have a significant impact on the company and that have a low likelihood of
occurring would not warrant serious concern. However, risks with a high likelihood of occurring
and that carry the possibility of significant impact usually require serious attention. Risks that are
in between these two extremes require judgment.
Once the likelihood and estimated impact of risks have been assessed, the following steps should
be taken to manage the identified risks. Risk responses fall into the following categories:
•	Acceptance – No action is taken to affect the likelihood or impact of the risk.
•	Avoidance – Exiting the activity or activities that give rise to the risk, such as exiting a product line or selling a division.
•	Reduction – Action is taken to reduce the likelihood or impact of the risk. The amount of the potential loss from each identified risk should be estimated to the extent possible. Some risks are indeterminate and can be described only as large, moderate, or small.
•	Sharing – Reducing the risk likelihood or impact by transferring or sharing the risk, such as by purchasing insurance or forming a joint venture.
If the decision is to reduce or to share the risk, the organization determines what action to take
and develops appropriate control activities for the action. If the decision is to accept or avoid the
risk, typically no control activities are required.
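The likelihood-and-impact analysis described above is often supported by a simple scoring model. The following Python sketch is illustrative only: the 1-to-5 scales, the threshold values, and the example risks are assumptions made for this example and are not prescribed by the COSO framework.

    # Illustrative only: a minimal likelihood-and-impact scoring model.
    # The 1-5 scales, the thresholds, and the example risks are assumptions,
    # not part of the COSO framework.

    risks = [
        # (description, likelihood 1-5, impact 1-5)
        ("Failure of a key supplier", 3, 4),
        ("Raw material price increase", 4, 2),
        ("Meteor strike on the plant", 1, 5),
    ]

    def attention_level(likelihood, impact):
        score = likelihood * impact       # simple composite score
        if score >= 12:
            return "serious attention"    # high likelihood and significant impact
        if score <= 4:
            return "no serious concern"   # low likelihood and low impact
        return "management judgment"      # in between the two extremes

    for description, likelihood, impact in risks:
        print(f"{description}: score {likelihood * impact}, {attention_level(likelihood, impact)}")

In practice, management would set the scales and thresholds in line with its established risk tolerance.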
8) In assessing the risks to the achievement of its objectives, management of the organization should
consider the potential for fraud. Fraud can include fraudulent financial reporting, possible loss
of assets, and corruption. Fraud can occur at any level and its possible impact needs to be considered as part of the risk identification and assessment. The potential for management fraud through
override of controls needs to be considered, and the directors’ oversight of internal control is necessary to reduce that risk. Fraud can arise at the employee level, as well, for example if two
employees collude34 to defraud the organization. Furthermore, fraud can be perpetrated from the
outside, from someone hacking into the computer systems for example.
When management detects fraudulent financial reporting, inadequate safeguarding of assets, or
corruption, remediation may be necessary. In addition to dealing with the improper actions, steps
may need to be taken to make changes in the risk assessment process and in other components
of the internal control system such as control activities.
9) The organization identifies and assesses changes that could impact the organization's system of internal control.
Changes can occur in the external environment, such as in the regulatory, economic, and physical
environment in which the entity operates. Changes can also occur in the internal environment, such as new product lines, acquired or divested businesses and their impact on the internal control system, rapid growth, and changes in leadership and their attitudes toward internal control.
Note: Risk assessment, which is a function of internal control, is different from the actions taken by
management to address the risks. The actions taken by management to address the risks are a function
of management and not of the internal control system.
34 To “collude” is to act together, often in secret, to achieve some illegal or improper purpose.
Component 3: Control Activities
Control activities are actions established by policies and procedures that help ensure that management’s
instructions intended to limit risks to the achievement of the organization’s objectives are carried out.
Control activities may be preventive or detective and can include a range of activities such as authorizations
and approvals, verifications, reconciliations, and business performance reviews. Segregation of duties is typically built into the selection and development of control activities.
Principles relating to control activities include:
10) The organization selects and develops control activities that contribute to reducing to acceptable levels the identified risks to the achievement of its objectives.
Control activities should be integrated with risk assessment in order to put into effect actions
needed to carry out risk responses. The risk responses decided upon dictate whether control activities are needed or not. If management decides to accept or avoid a risk, control activities are
generally not required. The decision to reduce or share a risk generally makes control activities
necessary. Control activities encompass a variety of controls, both manual and automated, and both preventive and detective.
Segregation of duties should be addressed wherever possible and if segregation of duties is not
practical, management should develop alternate control activities.
A preventive control is designed to keep an unintended event from occurring, while a detective control is designed to discover an unintended event that has already occurred before the ultimate objective is affected (for example, before financial statements are issued or before a manufacturing process is completed).
o	Examples of preventive controls are segregation of duties, job rotation, enforced vacations, training and competence of personnel, employee screening practices, physical control over assets such as dual access controls, requirements for authorizations, and requirements for approvals.
o	Examples of detective controls are reconciliations, internal audits, periodic physical inventory counts, variance analyses to detect ratios that might be out of line, random surprise cash counts, supervisory review and approval of accounting work, management review and approval of account write-offs, and exception reporting and review to identify unusual items.
11) The organization selects and develops general control activities over technology35 to support the achievement of its objectives.
o	Control activities are needed when technology is embedded in business processes to mitigate the risk of the technology operating improperly.
o	Control activities may be partially or wholly automated using technology. Automated controls require technology general controls.
Control activities over technology designed to support the continued operation of technology and
to support automated control activities are called “technology general controls.” Technology general controls include controls over the technology infrastructure, security management, and
technology acquisition, development and maintenance.
Activities designed and implemented to restrict technology access to authorized users to protect
the organization’s assets from external threats are a particularly important aspect of technology
general controls. Guarding against external threats is particularly important for an entity that depends on telecommunications networks and the Internet.
35 See General Controls in Systems Controls and Security Measures later in this section for more information about technology general controls.
12) The organization deploys control activities by developing policies that establish what is expected and procedures that put the policies into action. The control activities should be built
into business processes and employees’ day-to-day activities.
Whether a policy is in writing or not, it should establish clear responsibility and accountability. The
responsibility and accountability ultimately reside with the management of the entity and the subunit where the risk resides. Procedures should be clear on what the responsibilities are of the
personnel performing the control activity. The procedures should be timely and should be performed diligently and consistently by competent personnel.
Responsible personnel should investigate and if necessary take corrective action on matters identified as a result of executing control activities. For example, if a discrepancy is identified in the
process of doing a reconciliation, the discrepancy should be investigated. If an error occurred, the
error should be corrected and the correction reflected in the reconciliation.
Management should periodically review and reassess policies and procedures and related control
activities for continued relevance and revise them when necessary.
Component 4: Information and Communication
Relevant information must be identified, captured and communicated (shared) in a manner that enables
people to carry out the internal control responsibilities that support achievement of the organization’s objectives. Information and communication are both internal and external.
Principles relating to the information and communication component include:
13) The organization should obtain or generate and use relevant, quality information to support the functioning of internal control.
Relevant information can be financial or non-financial. The information can be generated from
internal or external sources. Regardless of whether it is from internal or external sources and
whether it is financial or non-financial information, timely and relevant information is needed to
carry out internal control responsibilities supporting all three of the categories of objectives.
o	Some examples of internal sources include emails, production reports regarding quality, minutes of meetings discussing actions in response to reports, time reports, information on the number of units shipped during a period, and internal contacts made to a whistle-blower hotline. Other types of operating information, such as measurements of emissions generated, are needed to monitor compliance with emissions standards.
o	Some examples of external sources include data from outsourced providers, research reports, information from regulatory bodies regarding new requirements, social media and blog postings containing comments or opinions about the organization, and external contacts made to a whistle-blower hotline. External information about economic conditions and actions of competitors is needed for internal decision-making, such as decisions about optimum inventory levels and inventory valuation.
14) The organization should internally communicate information, including objectives and responsibilities for internal control, necessary to support the functioning of internal control.
Information systems must provide accurate, timely reports to appropriate personnel so they can
carry out their responsibilities. The people who deal with the customers every day are often the
first to know about a problem, and they should have a way to communicate that information
upward. Furthermore, people in the organization need to receive a clear message from top management that their internal control responsibilities must be taken seriously.
o	Internal communication includes communications between the board of directors and management so that both have the information they need to fulfill their roles in achieving the organization's objectives.
o	Internal communication of information also takes place across the organization through, for example, policies and procedures, individual authorities, responsibilities and standards of conduct, specified objectives, and any matters of significance relating to internal control such as instances of deterioration or non-adherence.
Internal communication can take many forms, such as dashboards, email messages, training (either live or online), one-on-one discussions, written policies and procedures, website postings, or
social media postings. Actions also communicate. The way managers behave in the presence of
their subordinates can communicate more powerfully than any words.
15) The organization should communicate with external parties regarding matters affecting the functioning of internal control.
Relevant and timely information needs to be communicated to external parties including shareholders, partners, owners, regulators, customers, financial analysts, and other external parties.
External communication also includes incoming communications from customers (customer feedback), consumers, suppliers, external auditors, regulators, financial analysts, and others. A
whistle-blower hotline should be available to external parties, as well.
o	Communication from customers and suppliers can provide valuable input on the design and quality of the company's products and services.
o	Communications from the external auditors inform management and the board about the organization's operations and control systems.
o	Regulators report results of examinations and compliance reviews that can inform management of control weaknesses.
o	If the company is a public company, communications to shareholders, regulators, financial analysts, and others need to provide information relevant to those people so they can understand the entity's condition and risks.
o	Customer complaints often are an indication of operating problems that need to be addressed.
o	Any outsider dealing with the company must be informed that improper actions such as kickbacks or other improper dealings will not be tolerated.
o	Outgoing communication can take the form of press releases, blogs, social media, postings on the company website, and emails.
Component 5: Monitoring Activities
Finally, management must monitor the entire internal control system. Monitoring is an activity overseen or
performed at the management level for the purpose of assessing the operation and effectiveness of existing
internal controls. Monitoring assesses the quality of the internal control system’s performance over time to
determine whether the components of internal control are present and are functioning. Management must
also revisit previously identified problems to make sure they have been corrected.
Monitoring ensures that the internal control system continues to operate effectively. Systems and procedures change over time, and the way controls are applied needs to change in order for the controls to remain
effective. Management needs to determine whether the internal control system is still relevant and whether
it is still able to address new risks that may have developed.
Information received from monitoring activities is used in management’s assessment of the effectiveness
of its internal control. Principles relating to the monitoring activities component are:
16) The organization selects, develops, and performs ongoing evaluations, separate evaluations, or some combination of both to ascertain whether the five components of internal control are present and functioning.
Monitoring can be done in two ways:
o	Through ongoing evaluations that are built into business processes during normal operations and provide timely information.
o	Through separate evaluations conducted periodically by management with the assistance of the internal audit function.
If monitoring is done regularly during normal operations, the need for separate evaluations is
lessened.
Note: Monitoring should be done on a regular basis. An advantage to ongoing monitoring is
that if operating reports are used to manage ongoing operations, exceptions to anticipated
results will be recognized quickly.
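As a simple illustration of how exceptions to anticipated results might be flagged in an operating report used for ongoing monitoring, consider the following Python sketch. The line items, the anticipated amounts, and the 10% tolerance are hypothetical assumptions used only for illustration.

    # Illustrative only: flag line items whose actual results deviate from
    # anticipated results by more than a tolerance. All figures are assumptions.

    TOLERANCE = 0.10  # flag deviations greater than 10% of the anticipated amount

    operating_report = [
        # (line item, anticipated, actual)
        ("Units shipped", 10000, 9950),
        ("Scrap cost", 5000, 7200),
        ("Overtime hours", 400, 430),
    ]

    for item, anticipated, actual in operating_report:
        deviation = (actual - anticipated) / anticipated
        if abs(deviation) > TOLERANCE:
            print(f"EXCEPTION: {item} deviates {deviation:.1%} from the anticipated amount")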
17) The organization evaluates and communicates internal control deficiencies in a timely manner to parties responsible for taking corrective action, including senior management and the board of directors, as appropriate.
Findings from monitoring activities are evaluated against established criteria and deficiencies are
communicated to management and the board of directors as appropriate. Remedial action should
be taken, and the results of the remedial action should also be monitored to be certain that the
situation has been corrected.
An example of evaluating a monitoring activity is reviewing a reconciliation to make sure that it was done properly and in a timely manner, that the sources of information used in the reconciliation were appropriate, and that trends in the reconciling items have been examined. All the reconciling items should
have been investigated and resolved, and management should evaluate whether any new risks to
the operation have been caused by changes in the internal and external environments.
Summary: The 5 Components and the 17 Principles of Internal Control
Components and their principles:

Control Environment
1) There is a commitment to integrity and ethical values.
2) The board of directors exercises oversight responsibility for internal control.
3) Management establishes structures, authorities, and responsibilities.
4) There is a commitment to competence.
5) Individuals are held accountable for their internal control responsibilities.

Risk Assessment
6) Objectives are specified so risks to their achievement can be identified and assessed.
7) Risks are identified and analyzed.
8) Potential for fraud is considered.
9) Changes that could impact internal control are identified and assessed.

Control Activities
10) Control activities to mitigate risks are selected and developed.
11) General control activities over technology are selected and developed.
12) Control activities are deployed through policies and procedures.

Information and Communication
13) Relevant, quality information is obtained or generated and is used.
14) Information is communicated internally.
15) The organization communicates with external parties.

Monitoring Activities
16) Ongoing and separate evaluations are performed of the internal control system.
17) Internal control deficiencies are evaluated and communicated for corrective action.
What is Effective Internal Control?
An effective internal control system provides reasonable assurance regarding achievement of an entity’s
objectives by reducing to an acceptable level the risk of not achieving an entity objective. It requires each
of the five components and relevant principles of internal control to be present and functioning and for
the five components to be operating together in an integrated manner.
When an internal control system is effective, senior management and the board of directors have reasonable assurance that the organization:
•	Achieves effective and efficient operations or understands the extent to which operations are managed effectively and efficiently.
•	Prepares reports in conformity with applicable rules, regulations, and standards or with the entity's specified reporting objectives.
•	Complies with all applicable laws and regulations.
However, the board of directors and management cannot have absolute assurance that the organization’s
objectives will be achieved. Human judgment in decision-making can be faulty, errors do occur, management may be able to override internal controls, and management, other personnel, or third parties may be
able to circumvent internal controls through collusion.
Transaction Control Objectives
Commonly accepted transaction control objectives are:
•	Authorization. All transactions are approved by someone with the authority to approve the specific transactions.
•	Completeness. All valid transactions are included in the accounting records.
•	Accuracy. All valid transactions are accurate, are consistent with the originating transaction data, are correctly classified, and the information is recorded in a timely manner.
•	Validity. All recorded transactions fairly represent the economic events that occurred, are lawful, and have been executed in accordance with management's authorization.
•	Physical safeguards and security. Access to physical assets and information systems is controlled and restricted to authorized personnel.
•	Error handling. Errors detected at any point in processing are promptly corrected and reported to the appropriate level of management.
•	Segregation of duties. Duties are assigned in a manner that ensures that no one person is in a position to both perpetrate and conceal an irregularity.
Types of Transaction Control Activities
•	Authorization and approvals. Authorization confirms that the transaction is valid, in other words that it represents an actual economic event. Authorization generally is in the form of an approval by a higher level of management or of another form of verification, such as an automatic comparison of an invoice to the related purchase order and receiving report. When automated authorization of payables is utilized, invoices within the tolerance level are automatically approved for payment, while invoices outside the tolerance level are flagged for investigation (see the brief sketch following this list).
•	Verifications. Items are compared with one another or an item is compared with a policy, and if the items do not match or the item is not consistent with policy, follow-up occurs.
•	Physical controls. Equipment, inventories, securities, cash, and other assets are secured physically in locked or guarded areas with physical access restricted to authorized personnel and are periodically counted and compared with amounts in control records.
•	Controls over standing data. Standing data, such as a master file containing prices or inventory items, is often used in the processing of transactions. Controls need to be put into place over the process of populating, updating, and maintaining the accuracy, completeness, and validity of the data in the master files.
•	Reconciliations. Reconciliations compare two or more data elements and, if differences are found, action is taken to make the data elements agree. For example, a bank reconciliation reconciles the balance in the bank account according to internal records with the balance in the account according to the bank. Reconciling items are items in transit (outstanding checks and deposits) and are to be expected. However, differences that cannot be explained by items in transit must be investigated and corrective action taken. Reconciliations generally address the completeness and accuracy of processing transactions.
•	Supervisory controls. Supervisory controls determine whether other transaction control activities are being performed completely, accurately, and according to policy and procedures. For example, a supervisor may review a bank reconciliation performed by an accounting clerk to check whether the bank balance as given on the reconciliation report matches the balance on the statement and whether reconciling items have been followed up and corrected and an appropriate explanation provided.
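The automated authorization of payables mentioned in the first bullet above can be pictured with a short sketch. In the following Python example, each invoice is compared with the amount on the matched purchase order and a tolerance is applied; the invoice data, the field layout, and the 2% tolerance are hypothetical assumptions rather than a description of any particular system.

    # Illustrative only: automated approval of invoices that fall within a
    # tolerance of the matched purchase order amount. The data and the 2%
    # tolerance are hypothetical assumptions.

    TOLERANCE = 0.02  # invoices within 2% of the purchase order amount are approved

    invoices = [
        # (invoice number, purchase order amount, invoice amount)
        ("INV-1001", 4800.00, 4820.00),
        ("INV-1002", 1250.00, 1410.00),
    ]

    for number, po_amount, invoice_amount in invoices:
        difference = abs(invoice_amount - po_amount) / po_amount
        if difference <= TOLERANCE:
            print(f"{number}: automatically approved for payment")
        else:
            print(f"{number}: flagged for investigation ({difference:.1%} difference)")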
Safeguarding Controls
Physical safeguarding of assets against loss is an important part of a company’s operations objectives. Loss
to assets can occur through unauthorized acquisition, use, or disposition of assets or through destruction
caused by natural disasters or fire.
Prevention of loss through waste, inefficiency, or poor business decisions relates to broader operations
objectives and is not specifically considered part of asset safeguarding.
Physical protection of assets requires:
•	Segregation of duties.
•	Physical protection and controlled access to records and documents such as blank checks, purchase orders, passwords, and so forth.
•	Physical protection measures to restrict access to assets, particularly cash and inventory.
•	Effective supervision and independent checks and verification.
Segregation of Duties
Duties need to be divided among various employees to reduce the risk of errors or inappropriate activities.
Separating functions ensures that no single individual is given too much responsibility so that no employee is in a position to both perpetrate and conceal irregularities.
Note: Different people must always perform the following four functions of related activities:
1) Authorizing a transaction.
2) Recordkeeping: Recording the transaction, preparing source documents, maintaining journals.
3) Keeping physical custody36 of the related asset: For example, receiving checks in the mail.
4) The periodic reconciliation of the physical assets to the recorded amounts for those assets.
In a question about an effective or ineffective internal control, keep in mind that the above four things
must be done by different people.
Following are a few examples of potential internal control failures that can result from inadequate segregation of duties:
•	If the same person has custody of cash received and also has the authority to authorize account write-offs, that person could receive a cash remittance on account from a customer, authorize a fraudulent write-off of the customer's (paid) receivable, and divert the cash collected to his or her own use.
•	If the same person who authorizes issuance of purchase orders is also responsible for recording receipt of inventory and for performing physical inventory counts, that person could authorize the issuance of a purchase order to a fictitious vendor using a post office box personally rented for the purpose, then prepare a fictitious receiving record, and personally mail an invoice to the company in the name of the fictitious vendor using the personal post office box. The accounts payable department of the company would match the purchase order, the receiving report, and the invoice, as it is supposed to do. Since all the documentation would match, the accounts payable department would send a payment to the fictitious vendor for something the company never ordered or received. Furthermore, during physical inventory counting that same person could easily mark the item as being in inventory when it never was in inventory, thereby concealing the fraud.
•	Similarly, if the same person who authorizes issuance of purchase orders is responsible for recording receipt of fixed assets and for performing physical fixed asset inventories, that person could purchase office furniture, have it delivered to another address, and proceed to sell it and pocket the proceeds from its sale.
•	If the same person prepares the bank deposit and also reconciles the checking account, that person could divert cash receipts and cover the activity by creating “reconciling items” in the account reconciliation report.
Be aware, however, that segregation of duties does not guarantee that fraud will not occur. Two or more
employees could collude with one another (work together to conspire) to commit fraud, covering for one
another and, presumably, sharing the proceeds.
Software tools are available to assist a business in identifying incompatible functions. An access management application can help to assess segregation-of-duties and access risks and conflicts.
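As a simple illustration of how such a tool might detect incompatible functions, the following Python sketch checks each user's assigned duties against a list of conflicting pairs. The user names, duty labels, and conflict pairs are hypothetical assumptions; real access management applications apply far more detailed rule sets.

    # Illustrative only: detect segregation-of-duties conflicts by checking each
    # user's assigned duties against known incompatible pairs. The users, duties,
    # and conflict pairs are hypothetical assumptions.

    conflicting_pairs = [
        ("authorize transactions", "record transactions"),
        ("custody of cash", "reconcile bank account"),
        ("authorize purchase orders", "record inventory receipts"),
    ]

    user_duties = {
        "A. Clerk": {"record transactions", "reconcile bank account"},
        "B. Cashier": {"custody of cash", "reconcile bank account"},
    }

    for user, duties in user_duties.items():
        for duty_a, duty_b in conflicting_pairs:
            if duty_a in duties and duty_b in duties:
                print(f"CONFLICT for {user}: '{duty_a}' and '{duty_b}' should be segregated")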
Note: Collusion occurs when two or more individuals work together to overcome the internal control
system and perpetrate a fraud. When two or more people work together, they are able to get around
the segregation of duties that may have been set out.
36 In the context of internal control, custody involves keeping, guarding, caring for, watching over, inspecting, preserving, and maintaining the security of an item that is within the immediate personal care and control of the person to whose custody it is entrusted.
Question 52: The proper segregation of duties requires that:
a) The individual who records a transaction does not compare the accounting record of the asset with the asset itself.
b) The individual who records a transaction must maintain custody of the asset resulting from the transaction.
c) The individual who authorizes a transaction also records it.
d) The individual who maintains custody of an asset must have access to the accounting records for the asset.
(CMA Adapted)
Question 53: In a well-designed internal control system, two tasks that should be performed by different
people are:
a) Posting of amounts from both the cash receipts journal and cash payments journal to the general ledger.
b) Distribution of payroll checks and approval of sales returns for credit.
c) Approval of credit loss write-offs, and reconciliation of the accounts payable subsidiary ledger and controlling account.
d) Reconciliation of bank account and recording of cash receipts.
(CMA Adapted)
Question 54: One characteristic of an effective internal control structure is the proper segregation of
duties. The combination of responsibilities that could be performed by the same person is:
a) Preparation of paychecks and check distribution.
b) Timekeeping and preparation of payroll journal entries.
c) Signing of paychecks and custody of blank payroll checks.
d) Approval of time cards and preparation of paychecks.
(CMA Adapted)
Physical Protection – Controlled Access to Records and Documents
Checks should be stored in a locked area, and access to them should be limited to personnel who have
responsibility for preparing checks, subject to authorization and approvals by other individuals. The checks
should be pre-numbered, and the check numbers should be recorded in a log as they are used. Any checks
discovered missing should be promptly reported to supervisory personnel.
Purchase orders should also be pre-numbered, numbers logged as used, and access to them similarly
restricted.
Corporate credit cards should be kept in a locked cabinet and access to them controlled.
Password access should be controlled by limiting the access granted by each employee’s password to just
the information needed by that employee to do his or her job. Employees should be instructed not to leave
their passwords in exposed areas (for example, taped to the computer).
Physical Protection – Restricted Access to Assets
When cash must be stored until it can be deposited, it should be kept in a locked, fireproof file cabinet or
safe under controlled access.
Inventory can be a major portion of a company’s assets, and it is vulnerable to loss from theft, fire, and
natural disasters. The risk of loss can be at least partially transferred through purchase of insurance, but
internal controls are vital to protect as much as possible against loss.
•	Inventory should be kept in a physically locked area, and the locks should be technically advanced (not just simple combination locks).
•	Requisitions for inventory should be approved by authorized personnel.
•	The inventory area should be monitored by a gatekeeper who verifies proper authorization for requests to move goods.
•	Security cameras can be used to monitor activity in the inventory area and to help identify theft and thieves. The very existence of cameras tends to discourage employee theft.
•	Security alarms on doors and windows can alert local police in case of a break-in.
•	A security guard may be employed during hours when employees are not present if the inventory has high value.
•	Regular physical inventories should be taken and missing items should be investigated.
Effective Supervision and Independent Checks and Verifications
Supervision over the performance of clerical functions is necessary. For example:
•	Comparison of independent sets of records is necessary, such as comparing the report of the physical count of inventory to the internal inventory records, or comparing the information on a bank reconciliation with the actual bank statement to confirm the bank balance used in the reconciliation is correct.
•	Invoices should be prepared based on verified orders. The packing slip should be prepared at the same time the invoice is prepared, and nothing should be shipped without a packing slip. Records of actual shipments made should be compared with internal shipping documents, which should be compared with invoices issued to verify that the procedures are being followed.
•	The process of receiving inventory should be supervised to make sure the inventory clerk is actually counting the items received before affirming their receipt.
Legislative Initiatives About Internal Control
In the United States, various federal legislative initiatives have been created to require companies to implement internal controls. CMA candidates should be able to do the following:
•	Identify and describe the major internal control provisions of the Foreign Corrupt Practices Act.
•	Describe the major internal control provisions of the Sarbanes-Oxley Act (Sections 201, 203, 204, 302, 404, and 407).
•	Identify the role of the Public Company Accounting Oversight Board (PCAOB) in providing guidance on the auditing of internal controls.
•	Identify the PCAOB-preferred approach to auditing internal controls as outlined in Auditing Standard #5.
One significant issue with these federal laws is that some statutes apply only to publicly-traded companies
or only to companies that report to the SEC, and some statutes apply to all companies. Companies that are
not publicly traded and do not report to the SEC do not need to comply with many of these laws because
they do not fall under the jurisdiction of the SEC, which is the primary regulatory agency for these internal
control statutes. For example, the accounting and internal control provisions of the Foreign Corrupt Practices Act apply to all U.S. companies that are registered with and report to the SEC, not only those with
foreign operations, whereas the FCPA’s anti-bribery provisions apply to any company, public or private,
with significant operations in the United States, regardless of whether the corrupt act takes place inside or
outside the United States.
Foreign Corrupt Practices Act (FCPA)
The Foreign Corrupt Practices Act of 1977 (FCPA), substantially revised in 1988 and amended in 1998 by
the International Anti-Bribery and Fair Competition Act of 1998, was enacted in response to disclosures of
questionable payments that had been made by large companies. Investigations by the SEC had revealed
that over 400 U.S. companies had made questionable or illegal payments in excess of $300 million to
foreign government officials, politicians, and political parties. The payments were either illegal political
contributions or payments to foreign officials that bordered on bribery.
The FCPA has two main provisions: an anti-bribery provision, and an internal control provision.
Anti-Bribery Provision
The anti-bribery provisions of the FCPA apply to all companies, regardless of whether or not they are
publicly traded.
Under the FCPA, it is illegal for any company or anyone acting on behalf of a company to bribe any
foreign official in order to obtain or retain business. In addition, a firm, or an individual acting on
behalf of a firm, will be held criminally liable if it makes payments to a third party with the knowledge that
those payments will be used by the third party as bribes.
Note: This prohibition is against corrupt payments to a foreign official, a foreign political party or party
official, or any candidate for foreign political office only.
The entire company, not any one individual or position in the company, is responsible for ensuring that all payments are legal and lawful, although individuals are personally liable for their own
actions. Furthermore, the company must ensure that all transactions are made in accordance with management’s general or specific authorization and are recorded properly.
Note: A corrupt payment is one that is intended to cause the recipient to misuse his or her official
position in order to wrongfully direct business to the payer, whether or not the payment leads to
the desired outcome.
Internal Control Provision
The fundamental premise of the internal control requirements of the FCPA is that effective internal control
acts as a deterrent to illegal payments. Therefore, under the Foreign Corrupt Practices Act corporate management is required to maintain books, records, and accounts that accurately and fairly reflect transactions
and to develop and maintain a system of internal accounting control.
Note: The internal control provision of the FCPA applies only to companies that are publicly traded.
Sarbanes-Oxley Act and the PCAOB
The Sarbanes-Oxley Act of 2002 (called SOX or SarbOx) contains provisions impacting auditors, management, and audit committees of boards of directors. Sarbanes-Oxley was enacted in response to several
major incidents of financial reporting fraud and audit failures, and it applies to all publicly-held companies
in the U.S., all of their divisions, and all of their wholly-owned subsidiaries. It also applies to any non-U.S.
owned, publicly-held multinational companies that operate in the U.S. In addition, some provisions apply
also to privately-held companies; and a privately-held company may comply with SOX in preparation for
an initial public offering, in preparation for raising private funding for a sale of the company, or on a voluntary basis (for example, using it as a best-practices benchmark).
Title I: Public Company Accounting Oversight Board (PCAOB)
Title 1 of the Sarbanes-Oxley Act established the Public Company Accounting Oversight Board (PCAOB),
whose mandate is to oversee the auditing of public companies that are subject to the securities laws, to
protect the interests of investors, and to enhance the public’s confidence in independent audit reports. As
an independent, non-governmental board, the PCAOB is a non-profit corporation that operates under the
authority of the SEC, which approves the Board's rules, standards, and budget.
According to the Act, public accounting firms that serve as external auditors are required to register with
the PCAOB. Furthermore, the PCAOB is charged with developing auditing standards to be used by registered
public accounting firms in their preparation and issuance of audit reports. In addition, the PCAOB conducts
regular inspections of the registered public accounting firms to assess their degree of compliance with the
Act and it has procedures to investigate and discipline firms that commit violations.
The formation of the PCAOB constitutes the first time that auditors of U.S. public companies became subject
to external and independent oversight. Previously, the profession had been self-regulated through a formal
peer review37 program administered by the American Institute of Certified Public Accountants (AICPA). That
peer review program continues, and accounting and audit firms that are required to be inspected by the
PCAOB are also subject to peer review.
The responsibilities of the PCAOB include:
1) Registering public accounting firms that audit public companies. The Sarbanes-Oxley Act requires all accounting firms (both U.S. and non-U.S. firms) that prepare or issue audit reports on or participate in audits of U.S. public companies to register with the PCAOB.
2) Establishing auditing and related attestation, quality control, ethics, independence, and other standards relating to the preparation of audit reports for securities issuers.
3) Conducting inspections of registered public accounting firms, annually for firms that audit more than 100 issuers and every three years for others. In the inspections, the Board assesses the firm's compliance with the Sarbanes-Oxley Act, the rules of the Board, the rules of the Securities and Exchange Commission (SEC), and its professional standards in connection with the firm's performance of audits, issuance of audit reports, and related matters involving issuers.
4) Enforcing compliance with the Act, the rules of the Board, professional standards, and securities laws relating to audit reports and the obligations of accountants for them.
5) Conducting investigations and disciplinary proceedings and imposing appropriate sanctions for violations of any provision of the Sarbanes-Oxley Act, the rules of the Board, the provisions of the securities laws relating to the preparation and issuance of audit reports, or professional standards.
6) Management of the operations and staff of the Board.

37 “Peer review” is a process of self-regulation used in professions to evaluate the work performed by one's equals, or peers, to ensure it meets certain criteria. Peer review is performed by qualified individuals within the same profession. A peer review is performed for an accounting and audit firm by professionals from another accounting and audit firm.
Title II: Auditor Independence
Section 201: Services Outside the Scope and Practice of Auditors
In order to maintain auditor independence, Section 201 lists specific non-audit services that may not be
provided by an external auditor to an audit client because their provision creates a fundamental conflict
of interest for the accounting firms. These services include:
1) Bookkeeping services or other services relating to keeping the accounting records or preparing the financial statements of the audit client.
2) Financial information systems design and implementation.
3) Appraisal or valuation services, fairness opinions, or contribution-in-kind reports.
4) Actuarial services.
5) Internal audit outsourcing services.
6) Management functions.
7) Human resource services.
8) Broker/dealer, investment adviser, or investment banking services.
9) Legal services.
10) Expert services unrelated to the audit.
11) Any other service that the Public Company Accounting Oversight Board (PCAOB) determines, by regulation, is not permissible.
Section 203: Audit Partner Rotation
A public accounting firm that is registered with the PCAOB may not provide audit services to a client if the
lead audit partner38 or the concurring review audit partner has performed audit services for that client in
each of the five previous fiscal years of the client. Therefore, the lead audit partner and the concurring
review audit partner must rotate off a particular client’s audit after five years. They must then remain off
that audit for another five years. Other audit partners who are part of the engagement team must rotate
off after seven years and remain off for two years if they meet certain criteria.
Specialty partners (partners who consult with others on the audit engagement regarding technical or industry-specific issues) do not need to rotate off. Examples of specialty partners are tax or valuation
specialists. Other partners who serve as technical resources for the audit team and are not involved in the
audit per se are also not required to rotate off the audit.
38 An “audit partner” is defined as any partner on the audit engagement team with responsibility for decision-making on any significant audit, accounting or reporting matter affecting the company's financial statements or who maintains regular contact with management and the audit committee of the audit client.
The purpose of the audit partner rotation requirement is to ensure that a “new look” is taken periodically
at the financial statements.
Section 204: Auditor Reports to Audit Committees
Section 204 requires each public accounting firm registered with the PCAOB that performs an audit for an
issuer of publicly-traded securities to report the following in a timely manner to the issuer’s audit committee:
1) All critical accounting policies and practices to be used;
2) All alternative treatments of financial information within generally accepted accounting principles that have been discussed with the issuer's management, the ramifications of the use of such alternative disclosures and treatments, and the treatment preferred by the registered public accounting firm; and
3) Other material written communication between the registered public accounting firm and the management of the issuer, such as any management letter or schedule of unadjusted differences.
If management of the company is using an alternative method of accounting for something, even though
it is within generally accepted accounting principles, the outside public accounting firm performing the audit
must report that fact to the company’s audit committee. The independent auditor must report all critical
accounting policies being used and any other material written communications between itself and management.
Title III: Corporate Responsibility
Section 302: Corporate Responsibility for Financial Reports
Sarbanes-Oxley requires that each annual (Form 10-K) or quarterly (Form 10-Q) financial report filed or submitted to
the SEC in accordance with the Securities Exchange Act of 1934 must include certifications by the company’s
principal executive officer or officers and its principal financial officer or officers. The certification must
indicate the following:
1)  The signing officer has reviewed the report.
2)  Based on the signing officer’s knowledge, the report does not contain any untrue material statement or omit to state any material fact that could cause the report to be misleading.
3)  Based on the signing officer’s knowledge, the financial statements and all the other related information in the report fairly present in all material respects the financial condition and results of operations of the company for all of the periods presented in the report.
4)  The signing officers certify that they
    o   are responsible for establishing and maintaining internal controls;
    o   have designed the internal controls to ensure that they are made aware of all material information relating to the company and all subsidiaries;
    o   have evaluated the effectiveness of the company’s internal controls within the previous ninety days; and
    o   have presented in the report their conclusions about the effectiveness of their internal controls, based on their evaluation as of the report date.
5)  The signing officers have disclosed to the company’s auditors and the audit committee of the board of directors:
    o   all significant deficiencies in the design or operation of the company’s internal controls that could adversely affect the company’s ability to record, process, summarize, and report financial data and have identified for the company’s auditors any material weaknesses in its internal controls; and
    o   any fraud, regardless of its materiality, that involves management or other employees who have a significant role in the company’s internal controls.
6)  The signing officers have stated in the report whether or not there were any significant changes in internal controls or in any other factors that could significantly affect internal controls after the date of their evaluation, including any corrective actions that have been taken with regard to significant deficiencies and material weaknesses.
Note: Companies cannot avoid the requirements in the Sarbanes-Oxley Act by reincorporating outside
the United States or by transferring their company’s activities outside of the United States. The Act will
continue to have full legal force over any company that reincorporates outside the U.S. or that transfers
its activities outside of the U.S.
Title IV: Enhanced Financial Disclosures
Section 404: Management Assessment of Internal Controls and the Independent Auditor’s Attestation
to Management’s Assessment of Internal Controls
Section 404(a) requires each annual report required by the SEC to:
1)  state the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting, and
2)  contain an assessment by management of the adequacy of the company’s internal control over financial reporting (or ICFR).
Section 404(b) requires the company’s independent auditor to report on and attest to management’s assessment of the effectiveness of the internal controls.
In other words, according to Section 404(a) management is required to document and test its internal
financial controls and to report on their effectiveness. In many firms, the internal audit activity is very
involved in the management review and testing of the internal controls. Furthermore, according to Section
404(b) the external auditors are then required to review the supporting materials used by management
and/or internal auditing in developing their internal financial controls report. The external auditor’s report attests that management’s report is an accurate description of the internal control environment.
The first step in a Section 404 compliance review is to identify the key processes. Here, the internal audit
activity can be of major assistance because it may already have defined the key processes during its annual
audit planning and documentation. The overall processes are generally organized in terms of the basic
accounting cycles, as shown here:
•   Revenue cycle: processing of sales and service revenue
•   Direct expenditures cycle: expenditures for material and direct production costs
•   Indirect expenditures cycle: operating costs other than for production activities
•   Payroll cycle: compensation of personnel
•   Inventory cycle: processes for the management of direct materials inventory until it is applied to production
•   Fixed assets cycle: processes for accounting for property and equipment, such as periodic recording of depreciation
•   General IT cycle: general IT controls that are applicable to all IT operations
The internal controls covering the key processes are reviewed and documented, and then the controls are
tested. The external auditor then reviews the work and attests to its adequacy.
The extent of internal audit’s involvement in the Section 404 testing of internal controls varies from firm to
firm. It can take three forms:
1)  Internal auditors may act as internal consultants in identifying the key processes, documenting the internal controls over these processes, and performing tests of the controls. Senior management’s approval of internal audit’s work is necessary.
2)  The company might designate some other internal or external consulting resource to perform the Section 404 reviews. If an external party performs the Section 404 reviews, the internal auditors may act as a resource to support the work. They may review and test internal control processes as assistants or contractors for the outside entity doing the review.
3)  Internal audit may work with and assist other corporate resources that are performing the Section 404 reviews without being directly involved with those reviews, allowing the internal audit activity to concentrate its resources on other internal audit projects.
Management, in its assessment of internal controls, and the independent auditor, in its attestation to management’s assessment, can have different testing approaches because their roles are different and therefore
they are subject to different guidance.
•   Guidance for the management assessment of internal controls is provided by the SEC.
•   Guidance for the independent auditor’s attestation to management’s assessment is contained in the Public Company Accounting Oversight Board’s Auditing Standard No. 5.
Although the sources of guidance are different for management and the independent auditor, the PCAOB
intentionally aligned its guidance in Auditing Standard No. 5 with the SEC’s guidance for management,
particularly with regard to prescriptive requirements, definitions, and terms. Therefore, the guidance to
management and the guidance to independent auditors are not in conflict.
Both the SEC’s and the PCAOB’s guidance have the effect of efficiently focusing Section 404 compliance on the most important matters affecting investors.
They both prescribe a top-down, risk-based approach to evaluating internal control over financial reporting. A top-down approach begins by identifying the risks that a material misstatement of the financial statements would not be prevented or detected in a timely manner. Beginning with risk assessment allows the auditor to focus on higher-risk areas while spending less time and effort on areas of lower risk. The auditor should test those controls that are important to the auditor’s conclusion about whether the company’s controls sufficiently address the assessed risk of misstatement to each relevant assertion.39

It is important for the auditor to use a top-down approach, not a bottom-up approach. An auditor who approaches the audit of internal controls from the bottom up would focus first on performing detailed tests of controls at the process, transaction, and application levels. When the auditor uses a bottom-up process, he or she often spends more time and effort than is necessary to complete the audit. Furthermore, when an auditor takes a bottom-up approach, the auditor may spend relatively little time testing and evaluating entity-level controls but instead may rely almost exclusively on detailed tests of controls over individual processes, transactions, and applications. Spending more effort than is necessary in lower-risk areas can diminish the effectiveness of the audit because it may prevent a higher-risk area from receiving the audit attention that it should.

39 An assertion is a claim made. A management assertion is a claim made by management. Financial statement assertions are claims made by management in presenting financial information. Examples of broad financial statement assertions are “Total liabilities at December 31, 20XX were $150,000,000” or “Net income for the year ended December 31, 20XX was $5,000,000.” Financial statement assertions can be much narrower also, as in “Receivables due from Customer X on December 31, 20XX totaled $50,000.” The auditor’s role is to determine whether the assertions being made by management are correct. Most of the work of a financial audit involves evaluating and forming opinions about management assertions.
A top-down approach ensures the proper testing of the controls for the assessed risk of misstatement to each relevant assertion. If a bottom-up approach is used, those controls that address the
risk of a material misstatement may not be tested.
Section 407: Disclosure of Audit Committee Financial Expert
Section 407 of the Sarbanes-Oxley Act required, and the SEC has issued rules requiring, each issuer of
publicly-traded securities to disclose whether or not the company’s audit committee includes at least one
member who is a financial expert. If the audit committee does not have at least one member who is a
financial expert, the company must state the reasons why not.
The definition of a financial expert is a person who, through education and experience as a public accountant, auditor or a principal accounting or financial officer of an issuer of publicly-traded securities, has
1)  an understanding of generally accepted accounting principles and financial statements and the ability to assess the general application of GAAP in connection with accounting for estimates, accruals, and reserves;
2)  experience in the preparation, auditing, or active supervision of the preparation or auditing of financial statements of generally comparable issuers40 in terms of the breadth and level of complexity of accounting issues;
3)  experience and an understanding of internal accounting controls and procedures for financial reporting; and
4)  an understanding of audit committee functions.
If the company discloses that it has one financial expert on its audit committee, it must disclose the name
of the expert and whether that person is independent. If the company discloses that it has more than one
financial expert serving on its audit committee, it may, but is not required to, disclose the names of those
additional persons and indicate whether they are independent.
40 “Issuers” means issuers of securities, generally publicly-traded securities.
External Auditors’ Responsibilities and Reports
The audit committee nominates the independent auditor and the shareholders ratify the independent auditor’s appointment at the annual meeting of shareholders.
For publicly-traded companies, the auditor presents an opinion letter that is included in the company’s
annual report and contains the following:
•   An opinion on whether the financial statements present fairly, in all material respects, the financial position, results of operations, and cash flows of the company, in conformity with generally accepted accounting principles. An identification of the country of origin of those generally accepted accounting principles is also required.
•   An opinion on whether the company’s management has maintained effective internal control over financial reporting. (This opinion is not required for companies that are not publicly traded.)
The auditor’s only responsibilities are to express an opinion about the financial statements and to express
an opinion about the internal controls of the company (in the case of a publicly-traded company). In its
capacity as auditor, the external auditor has no responsibility to make recommendations or suggestions to
the client about the operation of the business. The external auditor also has no responsibility to follow up on audit findings from the previous year.
The auditor’s opinion letter specifically states what the auditor’s responsibilities are, and it also states that the management of the company is responsible for the financial statements. The auditor’s report states:
1)  Management is responsible for the financial statements, for maintaining effective control over financial reporting, and for its assessment of the effectiveness of its internal control over financial reporting.
2)  The auditor’s responsibility is to express opinions on the financial statements and on the company’s internal control over financial reporting based upon its audit.
Financial Statement Opinion
The auditor conducts an independent examination of the accounting data prepared and presented by management and expresses an opinion on the financial statements. Though the auditor will not use the word “correct,” the opinion is in essence about whether or not the financial statements are “correct.” Rather than saying “correct,” the auditor asserts that the financial statements “present fairly, in all material respects” the financial position of the company.
The auditor’s opinion on the financial statements contains either an opinion on the financial statements as
a whole or an assertion that an opinion cannot be expressed and the reasons why an opinion cannot be
expressed.
The auditor’s opinion may be:
•   Unqualified: Most audit opinions are unqualified, meaning that the results are “clean.” The auditor expresses the opinion that the financial statements present fairly, in all material respects, the financial position, results of operations, and cash flows of the company, in conformity with generally accepted accounting principles.

•   Qualified: A qualified opinion contains an exception, meaning that the financial statements do not present a true and fair picture. The exception is usually not significant enough to cause the statements as a whole to be misleading to the point that they should not be used. However, it does prevent the auditor from issuing an unqualified opinion. Usually, a qualified opinion is issued under one of the following conditions:
    o   The scope of the auditor’s examination (that is, the work the auditor wanted to perform) was limited or was affected by restrictions, or
    o   One element of the financial statements (for example, fixed assets or inventory) is not stated properly in conformity with generally accepted accounting principles, or disclosures are inadequate.
    A qualified opinion states that, except for the specific named matter, the financial statements present fairly, in all material respects, the financial position, results of operations, and cash flows of the company in conformity with generally accepted accounting principles.

•   Adverse: An adverse opinion is issued when the exceptions are so material that, in the auditor’s judgment, a qualified opinion is not appropriate. In other words, the financial statements, taken as a whole, are not presented in conformity with generally accepted accounting principles. Adverse opinions are seldom issued because most companies change their accounting upon the instructions of the auditor so that an adverse opinion is not warranted, because their adjusted financial statements do present a fair and true picture.

•   Disclaimer: A disclaimer of opinion is used when the auditor has not been able to gather enough information about the financial statements to express an opinion.
Going Concern Opinions and Other Modifications
The auditor must also evaluate whether substantial doubt exists about the company’s ability to continue as
a going concern. As part of the audit, the auditor considers several factors that might indicate that the
company will no longer be in existence by the time the auditor does the next annual audit. Some of the
factors are recurring operating losses, working capital deficiencies, loan defaults, unlikely prospects for
more financing, and work stoppages. The auditor also considers external issues, like legal proceedings and
the loss of a key customer or supplier.
If the auditor is not satisfied by management’s plans to overcome the problems and remain in business and
has a substantial doubt about the company’s ability to remain a going concern, the auditor will add an
explanatory paragraph to the opinion describing the problem. The explanatory paragraph follows the auditor’s opinion. Below is an example of a going concern explanatory paragraph.
The accompanying financial statements have been prepared assuming that the Company
will continue as a going concern. As discussed in Note X to the financial statements, the
Company has suffered recurring losses from operations and has a net capital deficiency
that raise substantial doubt about its ability to continue as a going concern. Management's
plans in regard to these matters are also described in Note X. The financial statements do
not include any adjustments that might result from the outcome of this uncertainty.41
Doubt about the company’s ability to stay in business does not prevent the opinion from being unqualified,
if the financial statements present fairly, in all material respects, the financial position, results of operations
and cash flows of the company, in conformity with generally accepted accounting principles. Thus, the audit
opinion can still be an unqualified opinion even if the preceding explanatory note is added.
41 PCAOB AS 24.13.
The auditor may add an explanatory paragraph in certain other circumstances, as well, even though the
opinion expressed is an unqualified one. These opinions essentially are stating, “The financial statements
are correct, but there is something else we would like you to know.” Note that the auditor will not use the
term “correct” in respect to financial statements, but this is in essence what the auditor is saying. Some of
the circumstances in which an additional paragraph will be added to the unqualified opinion include:
•   An uncertainty about something that will be resolved in the future, but until then, not enough is known about the outcome of the matter to determine whether it might result in a loss.
•   If the company has made a change in its accounting principles or in its method of applying the principles that has had a material effect on the comparability of the financial statements from year to year, the auditor should include an explanatory paragraph identifying the nature of the change and referring readers to the note in the financial statements where the change is discussed in detail. It is presumed that the auditor has agreed with the change in accounting principle unless the auditor states otherwise.
•   The auditor may simply want to emphasize something regarding the financial statements. An example might be the fact that the company has had significant transactions with related parties.
Internal Control Opinion
The second opinion contained in the auditor’s opinion letter, as required by the Sarbanes-Oxley Act, is the
auditor’s opinion as to whether or not the company’s management has maintained effective internal control
over financial reporting. The company’s annual report, filed with the SEC and incorporated into the annual
report to shareholders, must be accompanied by a statement by management that management is responsible for creating and maintaining adequate internal controls. Management’s statement must set forth its
assessment of the effectiveness of these controls. The company’s auditor must report on and attest to
management’s assessment of the effectiveness of the internal controls. The auditor’s internal control opinion is considered to be part of the core responsibility of the auditor and an integral part of the audit report.
The criteria used by the independent auditor in assessing the company’s internal control over financial
reporting come from the document Internal Control—Integrated Framework issued by the Committee of
Sponsoring Organizations (COSO) of the Treadway Commission.
If the independent auditor is satisfied with the company’s internal control over financial reporting, it includes
the following paragraph in its opinion letter:
In our opinion, the Company maintained, in all material respects, effective internal control over financial reporting as of [date], based on criteria established in Internal Control—Integrated Framework (2013) issued by the Committee of Sponsoring Organizations of the Treadway Commission (COSO).
Reviews and Compilations
Reviews and compilations are two other financial statement services performed by auditors.
A review involves some high-level inquiries and analytical procedures, but the accountant makes no tests
of transactions or balances, no evaluation of internal control, and no performance of any other audit procedures. A review report consists of a statement that the accountant has reviewed the financial statements;
that a review includes primarily applying analytical procedures to management’s financial data and making
inquiries of company management; that a review is substantially less in scope than an audit, the objective
of which is the expression of an opinion regarding the financial statements; and that the accountant does
not express such an opinion.
The review report includes a paragraph stating that the review was performed in accordance with the
Statements on Standards for Accounting and Review Services issued by the American Institute of Certified
Public Accountants and that the information is the representation of management.
A review report gives negative assurance. Negative assurance means that the accountant states in the
report that he or she is not aware of any material modifications that should be made to the financial statements in order for them to be in conformity with accounting principles generally accepted in the country
where the client is located.
A compilation is simply a formatted financial statement presenting the assertions of management without
performing any procedures to verify, corroborate, or review the information. A compilation is less in scope
than a review and significantly less in scope than a full audit, and it provides no assurance whatsoever to
users regarding the accuracy of the financial statements.
Reviews and compilations are used by smaller, privately-owned entities. They are not acceptable for publicly-owned entities.
Reports to the Audit Committee of the Board of Directors
An external auditor that is registered with the PCAOB under the Sarbanes-Oxley Act is obligated under Section 204 to make reports to the audit committee of the board of directors of each publicly-traded audit client. The reports must include:
•   The accounting principles being used.
•   All alternative treatments being used.
•   The ramifications of the use of such alternative treatments.
•   The treatment preferred by the public accounting firm.
•   All other material written communication between the registered public accounting firm and the management of the company.
E.2. Systems Controls and Security Measures
Introduction to Systems Controls
Extensive use of computers in operations and accounting systems can increase the company’s exposure to inaccuracies and fraud.
Since computers apply the same steps to similar transactions, processing of transactions should involve no
opportunity for clerical (human) error. However, if the program itself contains a mistake, every transaction
processed using that defective program will contain an error. Furthermore, if a clerical error is made in
input, the result will be an output error.
Potential for fraud is always present in organizations and is a serious problem, even without computer
processing of data. The automatic processing of data, the volume of the data processed and the complexity
of the processing are aspects of computer processing that can increase both the risk of loss and the potential
monetary loss from exposures that would exist anyway. The concentration of data storage creates exposure as well. The potential for fraud is further increased because programs are used for the processing: fraud can be committed within the program itself. Without proper controls, this type of fraud may go undetected for a long period of time.
Further complicating the situation is the fact that because of the nature of the system, audit trails may
exist for only a short period of time, since support documents may be in electronic format and be periodically
deleted. An audit trail is a paper or electronic record that shows a step-by-step documented history of a
transaction. It enables an auditor or other examiner to trace the transaction from the general ledger back
to the source document such as an invoice or a receipt. The existence of an audit trail means that an
amount appearing in a general ledger account can be verified by evidence supporting all the individual
transactions that go into the total. The audit trail must include all of the documentary evidence for each
transaction and the control techniques that the transaction was subjected to, in order to provide assurance
that the transaction was properly authorized and properly processed. When an audit trail is absent, the
reliability of an accounting information system is questionable.
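To make the audit trail idea concrete, the following minimal sketch in Python (not part of the HOCK text; the account names, amounts, and document numbers are hypothetical) shows how an amount in a general ledger account can be traced back to, and verified against, the individual transactions and source documents that support it.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class JournalEntry:
        """One posted transaction retained as part of the audit trail."""
        entry_id: str         # unique identifier of the posting
        account: str          # general ledger account affected
        amount: float         # signed amount posted to the account
        source_document: str  # supporting evidence such as an invoice or receipt
        approved_by: str      # control technique applied: who authorized the entry

    # Hypothetical audit trail for one account (illustrative data only).
    audit_trail = [
        JournalEntry("JE-001", "Accounts Receivable", 1200.00, "Invoice 4501", "A. Clerk"),
        JournalEntry("JE-002", "Accounts Receivable", -200.00, "Credit Memo 88", "B. Supervisor"),
        JournalEntry("JE-003", "Accounts Receivable", 500.00, "Invoice 4502", "A. Clerk"),
    ]

    def verify_balance(trail, account, ledger_balance):
        """Trace a ledger balance back to the documented transactions that support it."""
        supported_total = sum(e.amount for e in trail if e.account == account)
        return supported_total == ledger_balance

    # The 1,500.00 balance shown in the ledger is fully supported by the documented entries.
    print(verify_balance(audit_trail, "Accounts Receivable", 1500.00))  # True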
On the positive side, computer systems can provide large amounts of information to management in a very
short period of time, enabling management to maintain closer control over the activities of the company
and their results.
The objectives of controls for an information system are similar to the objectives of overall organizational
internal controls:
•   Promoting effectiveness and efficiency of operations in order to achieve the company’s objectives.
•   Maintaining the reliability of financial reporting through checking the accuracy and reliability of accounting data.
•   Assuring compliance with all laws and regulations that the company is subject to, as well as adherence to managerial policies.
•   Safeguarding assets.
In Internal Control – Integrated Framework, internal control is defined as a process designed to provide
reasonable assurance that the company’s objectives will be achieved in the areas of effectiveness and
efficiency of operations, reliability of financial reporting, and compliance with applicable laws and regulations. The internal control system is the responsibility of the company’s board of directors, management
and other personnel.
The internal control system consists of five interrelated components:
1)  The control environment
2)  Risk assessment
3)  Control activities
4)  Information and communication
5)  Monitoring42
Common exposures to loss that result from a failure to implement controls include competitive disadvantage, deficient revenues, loss of assets, inaccurate accounting, business interruption, statutory
sanctions, erroneous management decisions, fraud and embezzlement, and excessive costs.
The ultimate responsibility for internal control lies with management and the board.
Controls should be subjected to cost/benefit analysis. “Cost/benefit analysis” means that management
should not spend more on controls than the amount the company can expect to receive in benefits from
the controls. Determining what is required to attain reasonable assurance that the control objectives are
being met without spending more on them than can possibly be gained from them is a matter of management judgment.
Threats to Information Systems
Sources of threats to information systems and data are many. A few of them are:
•   Errors can occur in system design.
•   Errors can occur in input, or input manipulation may occur.
•   Data can be stolen over the Internet.
•   Data and intellectual property, including trade secrets, can be stolen by employees who carry them out on very small storage media or just email them.
•   Unauthorized alterations can be made to programs by programmers adding instructions that divert assets to their own use.
•   Data and programs can be damaged or become corrupted, either deliberately or accidentally.
•   Data can be altered directly in the data file, without recording any transaction that can be detected.
•   Viruses, Trojan horses, and worms can infect a system, causing a system crash, stealing data, or damaging data.
•   Hardware can be stolen.
•   Physical facilities as well as the data maintained in them can be damaged by natural disasters, illegal activity, or sabotage.
High-profile incidents such as the theft of millions of people’s names and Social Security numbers from
databases have underscored the importance of protecting information systems and data. Effective systems controls are the first line of defense against events such as these incidents and against threats such
as those above. Controls must be a part of every system and application to preserve the integrity of the
data and reduce the risk of loss from inadequate records, inaccurate accounting, business interruption,
fraud, violations of the law, asset loss, and damage to the business’s competitive position. The controls
must not only exist, but they must also function effectively.
42 Internal Control – Integrated Framework, copyright 1992, 1994, and 2013 by the Committee of Sponsoring Organizations of the Treadway Commission. Used by permission.
The Classification of Controls
Controls for a computer system are broken down into two types.
•   General controls relate to all systems components, processes, and data in a systems environment.
•   Application controls are specific to individual applications and are designed to prevent, detect, and correct errors and irregularities in transactions during the input, processing, and output stages.
Both general controls and application controls are essential.
General controls relate to the general environment within which transaction processing takes place. General controls are designed to ensure that the company’s control environment is stable and well managed.
A stable and well-managed control environment strengthens the effectiveness of the company’s application
controls. General controls include:
•   Administrative controls, including segregation of duties
•   Computer operations controls
•   Controls over the development, modification, and maintenance of computer programs
•   Software controls
•   Hardware controls
•   Data security controls
•   Provision for disaster recovery
The general controls above are organized according to the following categories.
1)  The organization and operation of the computer facilities and resources, including:
    a.  Administrative controls such as provision for segregation of duties within the data processing function, segregation of the data processing function from other operations, and supervision of personnel involved in control procedures to ensure that the controls are performing as intended. The information systems activity should have a high enough level in the organization and adequate authority to permit it to meet its objectives.
    b.  Computer operations controls. A systems control group should monitor production, keep records, balance input and output, perform manual procedures designed to prevent and detect errors, and see that work is completed on schedule.
2)  General operating procedures, including:
    a.  Written policies, procedures, and manuals that establish standards for controlling information systems operations should be available and should be kept up to date. They should include controls over the setup of processing jobs, operations software and computer operations, and backup and recovery procedures, as well as instructions for running computer jobs. The policies, procedures, manuals, and instructions should be documented and reviewed and approved by a manager.
    b.  Operating procedures also specify the process to be followed in system development and system changes in order to provide reasonable assurance that development of, and changes to, computer programs are authorized, tested, and approved prior to use.
3)  Software, hardware, and access controls, including:
    a.  Software controls that monitor the use of software and prevent unauthorized access to software applications and system software, including program security controls to prevent unauthorized changes to programs and systems in use.
    b.  Hardware controls to keep computer hardware physically secure, to protect it from fires and extremes of temperature and humidity, and to provide for disaster recovery. Hardware controls also include monitoring for equipment malfunctions, such as controls installed in computers that can identify incorrect data handling or malfunction of the equipment.
    c.  Access controls to equipment and data, such as controls over physical access to the computer system that are adequate to protect the equipment from damage or theft. Access controls also include data security controls that ensure that data files are not subject to unauthorized access, change, or destruction, both when they are in use and when they are being stored.
General controls are discussed in greater detail below.
Application controls are specific to individual applications. They ensure that only authorized data are
processed by the application and that the data are processed completely and accurately. Thus, application
controls are designed to prevent, detect, and correct errors in transactions as they flow through the input,
processing, and output stages of work. They are organized into three main categories, as follows.
•   Input controls, designed to provide reasonable assurance that input entered into the system has proper authorization, has been converted to machine-sensible form, and has been entered accurately. (A simple sketch of an input control follows this list.)
•   Processing controls, designed to provide reasonable assurance that processing has occurred properly and no transactions have been lost or incorrectly added.
•   Output controls, designed to provide reasonable assurance that input and processing have resulted in valid output.
Application controls are discussed in greater detail below.
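As a simple illustration of an input control (a sketch only; the field names, limits, and records below are hypothetical), each record can be checked for authorization and for valid field values before processing, with rejected records written to an error log and record counts carried forward for the processing and output stages:

    # Hypothetical input records submitted by a user department.
    input_batch = [
        {"authorized": True,  "customer_id": "C-100", "quantity": 5},
        {"authorized": True,  "customer_id": "",      "quantity": 2},   # missing field
        {"authorized": False, "customer_id": "C-101", "quantity": 1},   # not authorized
    ]

    def apply_input_controls(batch):
        """Accept only authorized, complete, reasonable records; log the rest as errors."""
        accepted, rejected = [], []
        for record in batch:
            if (record["authorized"]
                    and record["customer_id"]
                    and 0 < record["quantity"] <= 1_000):  # hypothetical reasonableness limit
                accepted.append(record)
            else:
                rejected.append(record)  # error log entry, referred back to the user
        return accepted, rejected

    accepted, rejected = apply_input_controls(input_batch)
    # The record counts serve as control totals for the later processing and output stages.
    print(len(accepted), len(rejected))  # 1 2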
Question 55: Systems control procedures are referred to as general controls or application controls. The primary objective of application controls in a computer environment is to:

a)  Provide controls over the electronic functioning of the hardware.
b)  Maintain the accuracy of the inputs, files and outputs for specific applications.
c)  Ensure the separation of incompatible functions in the data processing departments.
d)  Plan for the protection of the facilities and backup for the systems.

(CMA Adapted)
General Controls
Organization and Operation of the Computer Facilities
Note: This is the first of three categories of general controls.
An IT Planning or Steering Committee should oversee the IT function. Members should include senior management, user management and representatives from the IT function. The committee should have regular
meetings and report to senior management.
The IT function should be positioned within the organization so as to ensure its authority as well as its
independence from user departments.
Staffing requirements should be evaluated whenever necessary to make sure that the IT function has sufficient, competent staff. Management should make certain that all personnel in the organization know their
responsibilities with respect to information systems and that they have sufficient authority to exercise their
responsibilities. Responsibilities should be delegated with appropriate segregation of duties, and duties
should be rotated periodically at irregularly scheduled times for key processing functions.
Administrative Controls – Segregation of Duties
Although the traditional segregation practiced in accounting of separating the responsibilities of authorization, record keeping, custody of assets, and periodic reconciliations may not be practiced in the same
manner in Information Systems (since the work is quite different), specific duties in the information systems
environment should still be separated from one another.
General guidelines for segregation of duties in information systems are as follows.
Separate Information Systems from Other Departments
Information Systems department personnel should be separated from the departments and personnel that
they support (called “users”). Examples of this segregation include:
•   Users initiate and authorize all systems changes, and a formal written authorization is required.
•   Asset custody remains with the user departments.
•   An error log is maintained and referred to the user for correction. The data control group follows up on errors.
Separate Responsibilities within the Information Systems Department
Responsibilities within the Information Systems department should be separated from one another. An
individual with unlimited access to a computer, its programs, and its data could execute a fraud and at the
same time conceal it. Therefore, effective segregation of duties should be instituted by separating the
authority for and the responsibility for the function.
Although designing and implementing segregation of duties controls makes it difficult for one employee to
commit fraud, remember that segregation of duties cannot guarantee that fraud will not occur because two
employees could collude to override the controls.
Segregation of duties should be maintained between and among the following functions:
•   Systems analysts
•   Information systems use
•   Data entry
•   Data control clerks
•   Programmers
•   Computer operators
•   Network management
•   System administration
•   Librarian
•   Systems development and maintenance
•   Change management
•   Security administration
•   Security audit
Following are some of the various positions within a computer system, the responsibilities of each, and
incompatible responsibilities of each.
•   Systems analysts are responsible for reviewing the current system to make sure that it is meeting the needs of the organization, and when it is not, they provide the design specifications for the new system to the programmers. Systems analysis consists of organizational analysis to learn about the current system’s strengths and weaknesses; identification of users’ requirements for the new system; identifying the system requirements to fulfill the information needs of the users; evaluating alternative designs using cost-benefit analysis; and preparing a systems analysis report documenting the design of the proposed system and its specifications. Systems analysts should not do programming, nor should they have access to hardware, software, or data files.
•   Programmers write, test, and document the systems. They are able to modify programs, data files, and controls but should not have access to the computers and programs in actual use for processing. For instance, if a bank programmer were allowed access to actual live data and had borrowed money from the bank, the programmer could delete his or her own loan balance while conducting a test. When programmers must do testing, they should work with copies of records only and should not have the authority, opportunity, or ability to make any changes in master records or files.
•   Computer operators perform the actual operation of the computers for processing the data. They should not have programming functions and should not be able to modify any programs. Their job responsibilities should be rotated so no one operator is always overseeing the running of the same application. The most critical segregation of duties is between programmers and computer operators.
•   The data control group receives user input, logs it, monitors the processing of the data, reconciles input and output, distributes output to authorized users, and checks to see that errors are corrected. The group personnel also maintain registers of computer access codes and coordinate security controls with other computer personnel. They must keep the computer accounts and access authorizations current at all times. They should be organizationally independent of computer operations. Systems control personnel, not computer operators, should be responsible for detecting and correcting errors.

    Transaction authorization: No personnel in the Information Systems group should have authority to initiate or authorize any entries or transactions. Users should submit a signed form with each batch of input data to verify that the input has been authorized and that the proper batch control totals43 have been prepared. Data control group personnel should verify the signatures and batch control totals before submitting the input for processing to prevent a payroll clerk, for instance, from submitting an unauthorized pay increase.
•   Data conversion operators perform tasks of converting and transmitting data. They should have no access to the library or to program documentation, nor should they have any input/output control responsibilities.
•   Librarians maintain the documentation, programs, and data files. The librarian should restrict access to the data files and programs to authorized personnel at scheduled times. The librarian maintains records of all usage, and those records should be reviewed regularly by the data control group for evidence of unauthorized use. Librarians should have no access to equipment.
•   Only authorized people should be able to call program vendor technical support departments. If vendor-supplied systems are used, the vendors’ technical support area should have a means of identifying callers and should give technical instructions for fixing problems only to employees who are authorized to receive such instructions.
43 Batch control totals are any type of control total or count applied to a specific group of transactions, such as total sales revenue in a batch of billings. Batch control totals are used to ensure that all input is processed correctly by the computer. In batch processing, items are batched in bundles of a preset number of transactions. If a batch consists of financial transactions, a batch control document that goes with the batch includes the bundle number, the date, and the total monetary amount of the batch. As the computer processes the batch, it checks the batch control total (the total monetary amount) for the batch and compares the processed total with the batch control total. If they match, the batch is posted. If they do not, the posting is rejected, and the difference must be investigated. Batch control totals can also be calculated and used for non-financial fields in transactions. For instance, a batch control total might be the total hours worked by employees.
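A minimal sketch of the batch control total check described in the footnote above (the batch contents, bundle number, and totals are hypothetical):

    # Batch control document prepared by the user department (hypothetical figures).
    batch_control_document = {"bundle_number": "B-2031", "record_count": 3,
                              "control_total": 4_750.00}

    # The transaction amounts actually submitted for processing.
    batch = [1_500.00, 2_000.00, 1_250.00]

    def post_batch(transactions, control_document):
        """Post the batch only if the processed totals agree with the batch control document."""
        processed_total = sum(transactions)
        counts_match = len(transactions) == control_document["record_count"]
        totals_match = processed_total == control_document["control_total"]
        if counts_match and totals_match:
            return "posted"
        return "rejected - difference must be investigated"

    print(post_batch(batch, batch_control_document))  # posted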
•   The database administrator controls access to the various files, controls the making of program changes, and makes source code details available only to those who need to know.
•   The location of any off-site storage facilities should be known by as few people as possible.
•   No Information Systems personnel should have access to any assets that are accounted for in the computer system.
Question 56: The process of learning how the current system functions, determining the needs of users and developing the logical requirements of a proposed system is referred to as:

a)  Systems analysis.
b)  Systems feasibility study.
c)  Systems implementation.
d)  Systems maintenance.

(CMA Adapted)
Question 57: The most critical aspect of separation of duties within information systems is between:

a)  Programmers and computer operators.
b)  Programmers and project leaders.
c)  Programmers and systems analysts.
d)  Systems analysts and users.

(CMA Adapted)
Administrative Controls – Supervision of Control Procedures
Adequate supervision of personnel involved in systems control procedures is necessary. Supervisors can
spot weaknesses, correct errors, and identify deviations from standard procedures. Without adequate supervision, controls may be neglected or deliberately by-passed.
General Operating Procedures
Note: This is the second of three categories of general controls.
Policies and procedures should be established formally in writing and approved by an appropriate management level. Accountabilities and responsibilities should be clearly specified.
Procedures should be documented and kept up to date for all IT operations, including network operations.
Procedure documentation should include:
•   The start-up process
•   Job scheduling
•   Setup of processing jobs
•   Instructions for running jobs
•   Processing continuity during operator shift changes
•   Operations logs
•   Backup and recovery procedures
•   Procedures that ensure connection and disconnection of links to remote operations
Task descriptions should be written for each job function; personnel should be trained in their jobs; assigned
duties should be rotated periodically for key processing functions.
Physical safeguards should be established over forms such as negotiable instruments and over sensitive
output devices such as signature cartridges. Sequential numbers on individual forms should be printed in
advance so missing forms can be detected.
Turnaround documents should be used whenever appropriate. A turnaround document is a document created by a computer; some additional information is added to it, and it is then returned to become an input document to the computer. The documents are printed with Optical Character Recognition (OCR) fonts so that they can be read by the computer when they are scanned, and thus only the added information needs to be keyed in. Some examples of turnaround documents are invoices with a top section that the customer tears off and returns with payment (with the amount paid filled in), or magazine subscription renewal notices that the subscriber returns with renewal instructions. The use of a turnaround document limits the chances of input errors occurring and reduces the need for manual data entry.
General operating procedures also specify the process to be followed in system and program development
and changes. The process to follow should be documented in order to provide reasonable assurance that
development of, and changes to, computer programs are authorized, tested, and approved prior to the use
of the program.
An outline of system and program development and change controls is included later in this section.
Software, Hardware, and Access Controls
Note: This is the third of three categories of general controls.
Software Controls
Software controls monitor the use of the software and prevent unauthorized access to it. Program security
controls are used to prevent unauthorized changes to applications and systems. Control of system software
is particularly important as system software performs overall control functions for the application programs.
System software controls are used for compilers, utility programs, operations reporting, file handling and
file setup, and library activities.
Example: Programs are written in source code, the language the programmer uses for coding the
program, and they also exist in object code, the machine language the processor can understand. The
source code file is converted to object code by means of a program called a compiler, and the object
code, not the source code, is what actually runs on the computer.
The above fact is important because although in theory the source code and the object code
should correspond, the computer does not require them to correspond. It would be possible for
a knowledgeable person to make a copy of the source code, rewrite portions of the instructions, compile
the modified source code into a new object code file for use by the computer, and then destroy the
modified source code file, leaving the authorized source code file unchanged. The result is that the
executable object code—the actual instructions used by the computer—will not match the authorized
source code.
If controls over compiling and cataloging activities are not adequate, this security weakness can be used
to commit computer fraud. Despite the strongest internal controls over day-to-day operations in user
departments, a fraudulent change to a program could divert company funds to an individual, and the
fraud could continue for some time without being detected.
A reliable, automatically generated report of compilation dates of the programs being used should
be available and consulted regularly by management and system auditors.
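One way such a control can be supported (a sketch only, under the assumption that a hash of each authorized build is recorded when the program is cataloged; this is not a procedure prescribed by the text) is to compare the cryptographic hash of the object code actually in use with the value recorded for the authorized, cataloged build:

    import hashlib

    def file_hash(path):
        """Return the SHA-256 digest of a file's contents."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical catalog of authorized program versions, recorded when each
    # authorized source file is compiled and released into production.
    authorized_catalog = {
        "payroll_program": "3a7bd3e2360a3d29eea436fcfb7e44c7placeholderdigest",  # placeholder value
    }

    def verify_program(name, path):
        """Report whether the executable in use matches its authorized, cataloged build."""
        expected = authorized_catalog.get(name)
        if expected is None:
            return f"{name}: not in the authorized catalog"
        actual = file_hash(path)
        return f"{name}: {'matches catalog' if actual == expected else 'DOES NOT match catalog'}"

    # Example usage (the program name, file path, and digest above are placeholders):
    # print(verify_program("payroll_program", "/apps/payroll/payroll.exe"))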
Hardware Controls
Hardware controls include keeping the computer equipment physically secure.
•   Computer equipment in the computer center should be kept in a locked room and access limited to computer operations personnel (see Access Controls, next).
•   Computer equipment should be protected from extremes of temperature and humidity.
•   The location of the computer center should be in a place where it is protected from fire and other natural disasters as much as possible.
•   The computer center should be equipped with smoke and water detectors, fire suppression devices, burglar alarms, and surveillance cameras monitored by security personnel.
•   A defined backup procedure should be in place, and the usability of the backups should be verified regularly.
Insurance is the last resort, to be called upon only if all else fails, because it does not actually protect from
loss but rather compensates for loss after it occurs. Insurance policies for computer damages are usually
restricted to actual monetary losses suffered, and monetary loss is difficult to assess. For example, computers may have a market value that is far less than the value of their services to the company.
Computer hardware frequently contains mechanisms to check for equipment malfunctions, another aspect
of hardware controls. For example:
•   Parity checks are used to detect alteration of bits within bytes during processing caused by equipment malfunctions. (A simplified sketch follows this list.)
•   Echo checks verify that the data received is identical to the data sent. The data is retransmitted back to the sending device for comparison with the original.
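To illustrate the idea behind a parity check (a simplified sketch; actual hardware performs this at the circuit level, and the byte values here are arbitrary), an extra parity bit is kept with each byte so that the total number of 1-bits is even; if a single bit is altered, the recomputed parity no longer agrees with the stored parity bit:

    def even_parity_bit(byte_value):
        """Return the parity bit that makes the total count of 1-bits even."""
        return bin(byte_value).count("1") % 2

    # A byte stored together with its parity bit.
    stored_byte = 0b10110010
    stored_parity = even_parity_bit(stored_byte)       # 0, because the byte already has four 1-bits

    # Simulate an equipment malfunction that flips one bit during processing.
    corrupted_byte = stored_byte ^ 0b00000100

    # The parity check detects that the corrupted byte no longer matches its parity bit.
    print(even_parity_bit(stored_byte) == stored_parity)     # True  (byte intact)
    print(even_parity_bit(corrupted_byte) == stored_parity)  # False (alteration detected)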
Computer networks require special controls due to the decentralized nature of the hardware.
•   Checkpoint and rollback recovery processing should be used to enable recovery in case of a system failure. Checkpoint control procedures are performed several times per hour, and during that time, the network system will not accept posting. The system stops and backs up all the data and other information needed to restart the system. The checkpoint is recorded on separate media. Then, if a hardware failure occurs, the system reverts (“rolls back”) to the last saved copy, restarts, and reprocesses only the transactions that were posted after that checkpoint.
•   Routing verification procedures protect against transactions routed to the wrong computer network system address. Any transaction transmitted over the network must have a header label identifying its destination. Before sending the transaction, the system verifies that the destination is valid and authorized to receive data. After the transaction has been received, the system verifies that the message went to the destination code in the header.
•   Message acknowledgment procedures can prevent the loss of part or all of a transaction or message on a network. Messages are given a trailer label, which the receiving destination checks to verify that the complete message was received. (A sketch of header and trailer label checks follows this list.)
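A minimal sketch of the header-label and trailer-label checks described in the list above (the node names and message format are hypothetical and do not represent any particular network protocol):

    # Hypothetical list of destinations that are valid and authorized to receive data.
    authorized_destinations = {"NODE-01", "NODE-02"}

    def send_message(destination, segments):
        """Attach a header label (routing) and a trailer label (completeness check)."""
        if destination not in authorized_destinations:
            raise ValueError("routing verification failed: unknown or unauthorized destination")
        return {"header": destination, "body": list(segments), "trailer": len(segments)}

    def receive_message(message, this_node):
        """Verify the message reached the right destination and arrived complete."""
        if message["header"] != this_node:
            return "rejected: misrouted message"
        if len(message["body"]) != message["trailer"]:
            return "rejected: incomplete message - request retransmission"
        return "acknowledged: complete message received"

    msg = send_message("NODE-01", ["part 1", "part 2", "part 3"])
    print(receive_message(msg, "NODE-01"))   # acknowledged: complete message received
    msg["body"].pop()                        # simulate the loss of part of the message
    print(receive_message(msg, "NODE-01"))   # rejected: incomplete message - request retransmission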
Business continuity planning and disaster recovery are also part of hardware controls; they are covered in a separate topic later.
Access Controls
Access controls include logical security and physical security.
Logical security consists of controls over access to and use of systems and data, including data security controls to ensure that data files are not subject to unauthorized access, change, or destruction.
Logical security includes access controls for users to limit actions they can perform; authentication processes to verify the identity of users; and cryptographic techniques such as encryption of messages and
digital signatures.
•   Unauthorized personnel, online connections, and other system entry ports should be prevented from accessing computer resources.
•   Passwords should be changed regularly for all those authorized to access the data. Procedures should be established for issuing, suspending, and closing user accounts, and access rights should be reviewed periodically.
•   All passwords should be issued with levels of authority that permit the users to access only the data that they need to be able to access in order to do their jobs. For example, a person who does invoicing needs access to the invoicing module of the accounting program but does not need access to the general ledger module. The person who does receiving needs access to the purchase order module but not to invoicing. (A sketch of this kind of authority check follows this list.)
•   Dual access and dual control should be established to require two independent, simultaneous actions before processing is permitted.
•   Only authorized software from known sources should be allowed to be used in the system. Authorized software should be verified to be free of viruses and other malware before it is used.44
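The following sketch (the module names and user IDs are hypothetical) shows the kind of authority-level check described in the list above: each user ID carries the set of modules needed for the job, and any action outside that set is refused:

    # Hypothetical access rights: each user can reach only the modules needed for the job.
    access_rights = {
        "invoicing_clerk": {"invoicing"},
        "receiving_clerk": {"purchase_orders"},
        "controller":      {"invoicing", "purchase_orders", "general_ledger"},
    }

    def authorize(user_id, module):
        """Permit the action only if the user's authority level includes the module."""
        allowed_modules = access_rights.get(user_id, set())
        return module in allowed_modules

    print(authorize("invoicing_clerk", "invoicing"))       # True
    print(authorize("invoicing_clerk", "general_ledger"))  # False - outside the authority level
    print(authorize("unknown_user", "invoicing"))          # False - no access rights on file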
Data security controls are an important sub-set of logical security. Data security controls are established
to prevent access to data files without authorization and to prevent unauthorized or accidental change or
destruction.
•
•
Online input and real-time systems are vulnerable because they can be accessed remotely. When
data can be input online, unauthorized input entry must be prevented. Systems that enable online
inquiries must have data files secured.
o
Terminals can be physically restricted to permit access only to authorized personnel.
o
Passwords should be assigned only to authorized personnel.
o
Authorities should be assigned to passwords. For example, only individuals who are authorized
to update specific files should have the ability to do so, while others are able to read the files
or are denied access altogether.
•
Data files can be controlled more easily in batch systems than in online input and real-time systems
because access in batch systems is limited to operators running the batch jobs. Data files that are
maintained offline should be secured in locked storage areas with tight controls for release only
for authorized processing. Usage logs and library records can be maintained for each offline storage
device.
File security and storage controls are an important part of data security controls.
File security control procedures include:
•
Labeling the contents of discs (CDs, DVDs, external hard drives), tapes, flash drives or memory
cards, and any other removable media, both externally and internally as part of the data file.
•
The read-only file designation is used to prevent users from altering or writing over data.
•
Database Management Systems use lockout procedures to prevent two applications from
updating the same record or data item at the same time.
44
“Malware” is short for “malicious software.” It is software that is intended to damage or disable computers and
computer systems.
Note: A deadly embrace occurs when two different applications or transactions each have a
lock on data needed by the other application or transaction. Neither process is able to proceed,
because each is waiting for the other to do something. In these cases, the system must have a
method of determining which transaction goes first, and then it must let the second transaction
be completed using the information updated by the first transaction.
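One widely used way to avoid a deadly embrace is to require every transaction to acquire its locks in the same predefined order, so that neither transaction can end up holding a record the other is waiting for. The short Python sketch below illustrates the idea with two threads; it is a conceptual illustration only, not a description of how any particular database management system resolves deadlocks.

import threading

# Two records that both transactions must update (hypothetical).
record_a_lock = threading.Lock()
record_b_lock = threading.Lock()

def transfer(locks_in_order):
    """Acquire locks in a fixed global order so a deadly embrace cannot occur."""
    for lock in locks_in_order:
        lock.acquire()
    try:
        pass  # both records would be updated here
    finally:
        for lock in reversed(locks_in_order):
            lock.release()

# Both transactions lock record A before record B, so neither can end up
# holding one record while waiting forever for the other.
t1 = threading.Thread(target=transfer, args=([record_a_lock, record_b_lock],))
t2 = threading.Thread(target=transfer, args=([record_a_lock, record_b_lock],))
t1.start(); t2.start(); t1.join(); t2.join()
print("Both transactions completed without a deadly embrace")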
•
The librarian’s function is particularly critical, because documentation, programs and data files are
assets of the organization and require protection the same as any other asset would. The data files
contain information critical to the enterprise, such as accounting records. Although backup procedures could reconstruct lost or damaged data, it is less costly to prevent a data loss than to repair
it. Furthermore, confidential information is contained in the data files and must be protected from
misuse by unauthorized individuals.
•
Protection of program documentation is critical. Data can be changed within a file by someone who
knows how to do it, and technical manuals containing file descriptions are one way to get the
necessary information. Only authorized people who have the responsibility to repair data files that
may become corrupt should have access to technical manuals.
Logical security also includes Internet security (firewalls) and virus protection procedures. Internet security
is covered in a separate topic later.
Physical security involves protecting the physical assets of the computer center: the hardware, peripherals, documentation, programs, and data files in the library.
•
The computer processing center should be in a locked area, and access to it should be restricted
to authorized persons.
•
Servers and associated peripheral equipment should be kept in a separate, secure room with bars
on the windows and use of blinds or reflective film on the windows for heat blocking as well as
physical protection.
•
Hardware components should be monitored to prevent them from being removed from the premises.
•
Offsite backup media should be secured.
•
The location of wiring that connects the system should be known.
•
Uninterruptible power supplies should be maintained.
Media library contents should be protected. Responsibilities for storage media library management should
be assigned to specific employees. Contents of the media library should be inventoried systematically, so
any discrepancies can be remedied and the integrity of magnetic media is maintained. Policies and procedures should be established for archiving. Some means of accomplishing access restrictions are:
•
Have company personnel wear color-coded ID badges with photos. People authorized to enter the
computer area are assigned an ID badge of a particular color.
•
An IT member should escort visitors when they enter the computer facilities, and a visitor’s log
should be kept and reviewed regularly.
•
With magnetic ID cards, each employee’s entry into and exit from the computer center can be
automatically logged.
•
Biometrics such as fingerprints, voice verification, and so forth may also be used to identify a
person based on physical or behavioral characteristics.
•
The door can be kept locked, and a person can enter only if “buzzed” in by the control person, who
permits only authorized people to enter.
•
Keys may be issued to authorized personnel, or combination locks can be used to limit access. If
keys are used, they should be keys that cannot be easily duplicated, and locks need to be changed
periodically. If a combination lock is used, the combination should be changed periodically.
Application Controls
Application controls focus on preventing, detecting and correcting errors in transactions as they flow
through the input, processing, and output stages of work in an information system. Application controls
are specific controls within an individual computer application such as payroll or accounts receivable. Application controls focus on the following objectives:
1)
Input completeness and updating completeness. All authorized transactions reach the computer
and are recorded in data files.
2)
Input accuracy and updating accuracy. Data is captured accurately and correctly recorded in data
files.
3)
Validity. Data is authorized before being input and processed to approve the appropriateness of
the transactions. The transactions reflect the actual event that took place.
4)
Data is processed in a timely manner.
5)
Data files are accurate, complete, and current.
6)
Output is accurate and complete.
7)
Records are available that track the progress of data from input to storage to output.
Below are some things that can go wrong and that adequate controls can prevent, detect and correct:
•
Input loss can occur when transaction information is transmitted from one location to another.
•
Input duplication can occur if an input item is thought to be lost and is recreated, but the original
item is subsequently found or was never actually lost.
•
Inaccurate input in the form of typographical errors in numbers or in spelling can occur.
•
Missing information makes the input incomplete.
•
Unrecorded transactions can occur as accidental failures or can be the result of theft or embezzlement.
•
In a volume-processing environment, management authorization of every individual transaction
may not take place, allowing improper transactions to slip through.
•
Automated transactions may be set up for regular orders or payments to suppliers. However,
unusual situations can call for special transactions, and when a special transaction is needed, automated transactions can cause problems.
•
Output can be sent to the wrong people or may be sent too late to be used.
•
Programming errors or clerical errors can result in incomplete processing.
•
Processing may be delayed.
•
Files can be lost during processing.
•
Poor documentation and a loss of knowledgeable people can result in errors and omissions.
Application controls are classified as input controls, processing controls, and output controls.
Input Controls
Note: This is the first of three categories of application controls.
Input controls are controls designed to provide reasonable assurance that input entered into the system
has proper authorization, has been converted to machine-sensible form, and has been entered accurately
and completely. Input controls can also provide some assurance that items of data (including data sent
over communications lines) have not been lost, suppressed, added, or changed in some manner.
Input is the stage with the most human involvement and, as a result, the risk of errors is higher in the
input stage than in the processing and output stages. Most errors in systems result from input errors.
If information is not entered correctly, the output has no chance of being correct. Processing might be done
perfectly, but if the input into the system is inaccurate or incomplete, the output will be useless. Effective
input controls are vital.
Input controls are divided into three classifications, though there is some overlap and some controls appear
in more than one classification. The three classifications are:
1)
Data observation and recording
2)
Data transcription
3)
Edit tests
Data Observation and Recording
Note: This is the first of the three classifications of input controls (the first category of application controls).
One or more observational control procedures may be practiced:
•
Feedback mechanisms are manual systems that attest to the accuracy of a document. For instance, a sales person might ask a customer to confirm an order with a signature, attesting to the
accuracy of the data in the sales order. Feedback mechanisms include authorization, endorsement, and cancellation.
•
Dual observation means more than one employee sees the input documents. In some cases,
“dual observation” will mean a supervisor reviews and approves the work.
•
Point-of-sale devices such as bar codes that are scanned can decrease input errors substantially.
In addition, point-of-sale devices eliminate the need to manually convert the data to machine-readable format.
•
Preprinted forms such as receipt and confirmation forms can ensure that all the data required
for processing have been captured. For example, if a form utilizes boxes for each character in an
inventory part number, it is more likely that the correct number of characters will be entered.
•
Batch control totals should be used in the input phase for transactions grouped in batches to
track input as it travels from place to place before it reaches the computer, to make sure no input
is lost. In batch processing, items are batched in bundles of a preset number of transactions. Batch
control totals are any type of control total or count applied to a specific group of transactions, such
as total sales revenue in a batch of billings. Batch control totals are used to ensure that all input
is processed correctly by the computer. The application recalculates the batch total and compares
it with the manual batch total. Batches for which the control totals do not balance are rejected.
Example: For a batch that consists of financial transactions, the batch control document that
goes with the batch includes the bundle number, the date and the total monetary amount of
the transactions in the batch. As the computer processes the batch, it checks the batch control
total (the total monetary amount) for the batch and compares the processed total with the
batch control total. If they match, the batch is posted. If they do not match, the posting is
rejected and the difference must be investigated.
Batch control totals can also be calculated and used for non-financial fields in transactions. For
instance, a batch control total might be the total hours worked by employees. (A minimal sketch of the batch-total comparison appears after this list.)
Batch control totals do not work as well with real-time systems, because input is entered at remote
terminals sporadically and by different people. Transactions cannot be easily batched. However,
entries can and should be displayed on a screen for visual verification and checked against source
documents. In addition, control totals can be created to reconcile with the hard copy source documents used for a group of transactions input by an individual.
Furthermore, information input can be checked against the database, and edit programs can be
used to make sure that each field has the proper format (see following topics).
•
Transaction trails should be created by the system showing the date, terminal ID, and individual responsible for the input, especially in a real-time system. All inputs should be logged to a special file that contains these identifying tags. Including this additional, audit-oriented information along with the original transaction data is called tagging.
•
Transaction logs also provide a source of control totals.
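The batch-total comparison referred to above can be illustrated with a minimal Python sketch. The field names, amounts, and the rejection message are assumptions made for illustration; an actual application would also log the discrepancy for investigation.

# Minimal sketch of a batch control total check.
# The batch control document states the expected monetary total; the
# application recalculates the total and rejects the batch on a mismatch.

def post_batch(transactions, control_total):
    """Post the batch only if the recalculated total matches the control total."""
    recalculated = sum(t["amount"] for t in transactions)
    if recalculated != control_total:
        return f"Batch rejected: recalculated {recalculated} differs from control total {control_total}"
    return f"Batch posted: {len(transactions)} transactions totaling {recalculated}"

billings = [{"invoice": "1001", "amount": 250.00},
            {"invoice": "1002", "amount": 125.50}]

print(post_batch(billings, 375.50))   # posted
print(post_batch(billings, 400.00))   # rejected - the difference must be investigated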
Data Transcription
Note: This is the second of the three classifications of input controls.
Data transcription is the preparation of the data for processing. If input is entered from source documents, the source documents should be organized in a way that eases the input process.
The actual data input usually takes place at a workstation with a display terminal. A preformatted input
screen can assist in the transcription process. For example, a date field to be filled in would be presented
onscreen as __/__/____.
Format checks are used to verify that each item of data is entered in the proper mode: numeric data in a
numeric field, a date in a date field, and so forth.
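As a small illustration, the following Python sketch performs a format check on a date field entered into a preformatted __/__/____ screen. The MM/DD/YYYY layout is assumed for illustration; the point is simply that the mode and format of the entry are verified before the data is accepted.

import re

# Minimal sketch of a format check on a preformatted date field (__/__/____).
DATE_PATTERN = re.compile(r"^\d{2}/\d{2}/\d{4}$")   # two digits / two digits / four digits

def format_check(value):
    """Accept the entry only if it matches the expected date format."""
    return bool(DATE_PATTERN.match(value))

print(format_check("08/15/2019"))   # True  - proper mode and format
print(format_check("August 2019"))  # False - rejected before processing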
Edit Checks
Note: This is the third of the three classifications of input controls.
Edit programs or input validation routines are programs that check the validity and accuracy of input
data. They perform edit tests by examining specific fields of data and rejecting transactions if their data
fields do not meet data quality standards. As each transaction is input into a real-time system it can be
edited, and the operator is notified immediately if an error is found. The system can be designed to reject
any further input until the error has been corrected or to print an error report for review by supervisory
personnel.
Statistics on data input and other types of source errors should be accumulated and reviewed to determine
remedial efforts needed to reduce errors.
Edit tests include:
•
Completeness, or field, checks, which ensure that input has been entered into all required fields
and that the input is in the proper format. For example, a field check would not permit numbers
to be input into a field for a person’s name.
•
Limit checks, which ensure that only data within predefined limits will be accepted by the system.
For example, the number of days worked in a week cannot exceed seven.
•
Validity checks, which match the input data to an acceptable set of values or match the characteristics of input data to an acceptable set of characteristics.
•
Overflow checks, which make sure that the number of digits entered in a field is not greater than
the capacity of the field.
•
Check digits, which determine whether a number has been transcribed properly. A check digit is
a number that is a part of an account or other type of number. The check digit is a function of the
other digits within the number, determined by a mathematical algorithm.45 If a digit in the account
number is keyed in incorrectly, the check digit will be incorrect, and the system will generate an
error message and refuse to accept the input. Check digits are commonly used in credit card
account and bank account numbers, and they are especially helpful in detecting transposition errors. If an operator keys in a number incorrectly, the operator will get an error message such as
“invalid account number.” (A sketch of a check-digit validation appears after this list.)
•
Key verification, or keystroke verification, is the process of inputting the information twice
and comparing the two results. Key verification is often used when changing a password, to confirm
that the password has been typed correctly. Key verification can also be used to require input of
the same information twice by different people.
•
Hash totals are another type of control total. Hash totals are totals of nonmonetary information.
For example, if a batch contains data on receipts from accounts receivable customers, the sum of
all the customers’ account numbers might be computed to create a hash total. The sum is, of
course, useful only for control purposes. A hash total can be run on a group of records to be input
before processing or transmission and again after processing. If the hash total changes during
processing, it indicates something has changed or some transactions may be lost.
•
Format checks check whether the input has been entered in the proper mode and within the
proper fields.
•
Reasonableness checks compare input with other information in existing records and historical
information to detect items of data that are not reasonable.
•
Numerical checks assure that numeric fields are used only for numeric data.
•
Reconciliations are used to determine whether differences exist between two amounts that
should be equal. If differences are found, the differences are analyzed to detect the reason or
reasons, and corrections are made if necessary.
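The following Python sketch pulls together three of the edit tests above: a completeness (field) check, a limit check, and the check-digit validation referred to earlier. The check digit uses the Luhn algorithm, the method commonly used for credit card numbers; the field names and the seven-day limit are illustrative assumptions.

def luhn_valid(number):
    """Check-digit test using the Luhn algorithm (commonly used for credit card numbers)."""
    digits = [int(d) for d in number]
    total = 0
    # Double every second digit, starting from the digit to the left of the check digit.
    for position, digit in enumerate(reversed(digits)):
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def edit_tests(transaction):
    """Return a list of edit-test errors; an empty list means the input passes."""
    errors = []
    # Completeness (field) check: every required field must be present.
    for field in ("employee_name", "days_worked", "card_number"):
        if not transaction.get(field):
            errors.append(f"Missing required field: {field}")
    # Limit check: days worked in a week cannot exceed seven.
    if transaction.get("days_worked", 0) > 7:
        errors.append("Limit check failed: days worked exceeds 7")
    # Check digit: reject mis-keyed or transposed account numbers.
    if transaction.get("card_number") and not luhn_valid(transaction["card_number"]):
        errors.append("Invalid account number (check digit failed)")
    return errors

sample = {"employee_name": "A. Clerk", "days_worked": 5, "card_number": "79927398713"}
print(edit_tests(sample))                                 # [] - accepted
print(edit_tests({**sample, "days_worked": 9,
                  "card_number": "79927398710"}))         # limit and check-digit errors

Because transposing or mis-keying a digit changes the Luhn total, the entry is rejected with an “invalid account number” style message, as described in the check-digit bullet above.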
Corrections of errors present additional problems. Often, attempts to correct an error result in additional errors. Error reports need to be analyzed, the action required to make the correction needs to be
determined, the incorrect information needs to be reversed and correct information needs to be input.
Often, corrections are needed in multiple data files.
Inquiries of data or master files need to be designed so that an incorrectly keyed-in inquiry cannot change
the information in the file. For example, when a user is displaying a customer’s record and then the user
needs to locate a second customer’s record, typing the second customer’s name into the “Name” field that
is displayed and submitting it as an inquiry should not cause the name on the first customer’s record to be
changed to the second customer’s name.
Question 58: Routines that use the computer to check the validity and accuracy of transaction data
during input are called:
a)
Operating systems.
b)
Compiler programs.
c)
Edit programs.
d)
Integrated test facilities.
(CMA Adapted)
45
An algorithm is a step-by-step procedure for solving a problem, usually a mathematical problem, in a finite number
of steps.
Question 59: The online data entry control called preformatting is:
a)
A check to determine if all data items for a transaction have been entered by the terminal operator.
b)
A program initiated prior to regular input to discover errors in data before entry so that the errors
can be corrected.
c)
The display of an input screen with blanks for data items to be entered by the terminal operator.
d)
A series of requests for required input data that requires an acceptable response to each request
before a subsequent request is made.
(CMA Adapted)
Question 60: Data input validation routines include:
a)
Passwords.
b)
Terminal logs.
c)
Backup controls.
d)
Hash totals.
(CMA Adapted)
Question 61: An employee in the receiving department keyed in a shipment from a remote terminal
and inadvertently omitted the purchase order number. The best systems control to detect this error
would be:
a)
Completeness test.
b)
Batch total.
c)
Reasonableness test.
d)
Sequence check.
(CMA Adapted)
Question 62: Edit checks in a computerized accounting system:
a)
Are preventive controls.
b)
Must be installed for the system to be operational.
c)
Should be performed on transactions prior to updating a master file.
d)
Should be performed immediately prior to output distribution.
(CMA Adapted)
Processing Controls
Note: This is the second of three categories of application controls.
Processing controls are controls designed to provide reasonable assurance that processing has occurred
properly and that no transactions have been lost or incorrectly added. Processing controls prevent or discourage the improper manipulation of data and ensure satisfactory operation of hardware and software.
Processing controls overlap with general controls because they include the physical security of the equipment. At one time, processing controls were limited to the computer room. But with more and more
distributed processing taking place, processing controls are moving outside the room where the computer
equipment is located.
Access to the computer should be permitted only to people who are authorized to operate the equipment,
and operators should be given access only to information they need to set up and operate the equipment.
Processing controls fall into two classifications:
1)
Data access controls - processing controls at the time of data access
2)
Data manipulation controls - controls involving data manipulation later in the processing
Data Access Controls
Note: This is the first of the two classifications of processing controls.
Transmittal documents such as batch control tickets are used to control movement of data from the
source to the processing point or from one processing point to another. Batch sequence numbers are
used to number batches consecutively to make sure all batches are accounted for.
Batch control totals were mentioned as input controls, but they are also processing controls. Batch control
totals are any type of control total or count applied to a specific group of transactions, such as total sales
revenue in a batch of billings. Batch control totals are used to ensure that all input is processed correctly
by the computer. In batch processing, items are batched in bundles of a preset number of transactions. If
a batch consists of financial transactions, a batch control document that goes with the batch includes
the bundle number, the date and the total monetary amount of the batch. As the computer processes the
batch, it checks the batch control total (the total monetary amount) for the batch and compares the
processed total with the batch control total. If they match, the batch is posted. If they do not match, the
posting is rejected and the difference must be investigated. Batch control totals can also be calculated and
used for non-financial fields in transactions. For instance, a batch control total might be the total hours
worked by employees.
A hash total is another control that is both an input and a processing control. For instance, if a batch
contains data on receipts from multiple accounts receivable customers, the sum of all the customers’ account numbers might be computed to create a hash total. The hash total is compared with the total of the
same numbers computed during processing to make sure no items were lost during processing. A hash
total is useful only for control purposes, because a total of all the account numbers in a batch, for example,
is meaningless by itself.
A record count counts the number of transaction items twice: once when preparing the transactions in a batch and again when performing the processing.
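A minimal Python sketch of a hash total and a record count used as processing controls follows. The account numbers and amounts are hypothetical, and the processing step is only a placeholder.

# Minimal sketch: a hash total (sum of account numbers, meaningless by itself)
# and a record count computed before and after processing.

receipts = [{"account": 10453, "amount": 100.00},
            {"account": 20991, "amount": 250.00},
            {"account": 30127, "amount":  75.00}]

hash_total_before = sum(r["account"] for r in receipts)
record_count_before = len(receipts)

processed = [r for r in receipts]   # placeholder for the actual processing step

hash_total_after = sum(r["account"] for r in processed)
record_count_after = len(processed)

if hash_total_before != hash_total_after or record_count_before != record_count_after:
    print("Control totals do not agree - some records were lost or changed during processing")
else:
    print(f"All {record_count_after} records processed; hash total {hash_total_after} unchanged")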
Data Manipulation Controls
Note: This is the second of the two classifications of processing controls.
Standard procedures should be developed and used for all processing.
Examining software documentation, such as system flowcharts, program flowcharts, data flow
diagrams, and decision tables, can also be a control, because it makes sure that the programs are
complete in their data manipulation.
Computer programs are error tested by using a compiler, which checks for programming language errors.
Test data can be used to test a computer program.
System testing can be used to test the interaction of several different computer programs. Output from
one program is often input to another, and system testing tests the linkages between the programs.
Other tests of processing include the following.
•
Batch balancing is comparing the items actually processed against a predetermined control total.
•
Cross-footing compares the sum of the individual components to a total (see the sketch following this list).
•
A zero-balance check is used when a sum should be zero. All of the numbers are added together
and the total is compared with zero.
•
Run-to-run totals are output control totals from one process used as input control totals over
subsequent processing. Critical information is checked to ensure that it is correct. The run-to-run
totals tie one process to another.
•
Default option is the automatic use of a predefined value when a certain value is left blank in
input. However, a default option may be correct, or it may be an incorrect value for a particular
transaction, so the default should not be automatically accepted.
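The sketch below illustrates cross-footing (referred to in the list above) and a zero-balance check in Python. The payroll and journal-entry figures are made up for illustration.

# Cross-footing: the sum of the row totals must equal the difference of the column totals.
payroll_rows = [
    {"gross": 1000.0, "deductions": 200.0, "net": 800.0},
    {"gross": 1500.0, "deductions": 300.0, "net": 1200.0},
]
total_gross = sum(r["gross"] for r in payroll_rows)
total_deductions = sum(r["deductions"] for r in payroll_rows)
total_net = sum(r["net"] for r in payroll_rows)
print("Cross-foot agrees:", total_gross - total_deductions == total_net)

# Zero-balance check: debits and credits in a journal entry should sum to zero.
journal_entry = [2500.0, -1700.0, -800.0]
print("Zero-balance check passes:", sum(journal_entry) == 0)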
Question 63: In an automated payroll processing environment, a department manager substituted the
time card for a terminated employee with a time card for a fictitious employee. The fictitious employee
had the same pay rate and hours worked as the terminated employee. The best control technique to
detect this action using employee identification numbers would be a:
a)
Hash total.
b)
Batch total.
c)
Subsequent check.
d)
Record count.
(CMA Adapted)
Output Controls
Note: This is the third of three categories of application controls.
Output controls are used to provide reasonable assurance that input and processing have resulted in valid
output. Output can consist of account listings, displays, reports, files, invoices, or disbursement checks, to
name just a few of the forms output can take. Controls should be in place to make sure that the output is
sent to the right people, that it is accurate and complete, that it is sent in a timely manner, and that the proper
reports are retained for the appropriate time period.
The output of the system is supervised by the data control group. Output controls consist of:
•
Validating processing results
•
Controls over printed output
Validating Processing Results
Note: This is the first of the two categories of output controls.
Activity, or proof, listings that document processing activity provide detailed information about all
changes to master files and create an audit trail. The proof listings should be compared with the batch
control totals that went along with the input and processing functions in order to confirm that all of the
transactions were processed correctly.
Transaction trails should be available for tracing the contents of any individual transaction record backward or forward, and between output, processing, and source. Records of all changes to files should be
maintained.
Output totals should be reconciled with input and processing totals. A reconciliation is the analysis of
differences between two values that should be the same. The nature of the reconciling items is used to
identify whether differences are caused by errors or whether they are valid differences.
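As a small illustration, the Python sketch below reconciles output totals with input totals and lists the reconciling items so that the nature of each difference can be analyzed. The transaction IDs and amounts are hypothetical.

# Minimal sketch of an output-to-input reconciliation.

input_totals  = {"T001": 500.0, "T002": 320.0, "T003": 75.0}
output_totals = {"T001": 500.0, "T002": 230.0}              # T002 differs, T003 is missing

difference = sum(input_totals.values()) - sum(output_totals.values())
print(f"Difference between input and output totals: {difference}")

# List the reconciling items so the reason for each difference can be analyzed.
for txn_id in sorted(set(input_totals) | set(output_totals)):
    if input_totals.get(txn_id) != output_totals.get(txn_id):
        print(f"Reconciling item {txn_id}: input {input_totals.get(txn_id)}, "
              f"output {output_totals.get(txn_id)}")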
A suspense account is used as a control total for items awaiting further processing, such as a file of backordered products awaiting receipt so they can be shipped to fulfill orders.
Output control also includes review of the processing and error logs by the control group to determine
whether all of the correct computer jobs were executed properly and for review of the output by the users.
A discrepancy report is a listing of items that have violated some detective control and need to be investigated, such as a list of all past-due accounts sent to the credit manager. End-of-job markers are printed
at the end of the report and enable the user to easily determine if the entire report has been received.
Upstream resubmission is the resubmission of corrected error transactions as if they were new transactions, so that they pass through all the same detective controls as the original transactions.
Printed Output Controls
Note: This is the second of the two categories of output controls.
Forms control, such as physical control over company blank checks, is one type of printed output control.
Checks should be kept under lock and key, and only authorized persons should be permitted access.
However, another control is needed with checks because checks are pre-numbered. The preprinted check number on each completed check must match the system-generated number for that check, which may or may not also be printed on the check. Both the preprinted numbers on the checks and the system-generated numbers are sequential, so the starting system-generated number must match the preprinted number on the first check in the stack; otherwise, all the check numbers in the whole check run will be off. The starting number in the system should be one more than the number of the last check legitimately issued. If the physical check number of any check in the run does not match its check number in the system, an investigation must be done: if the system number of the first check printed does not match the preprinted number on the first check in the stack, one or more blank checks could be missing.
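A minimal Python sketch of the check-number comparison described above follows; the check numbers are hypothetical.

# Minimal sketch of matching preprinted check numbers to system-generated numbers.

last_check_issued = 1045                        # last check legitimately issued
preprinted_first_check = 1047                   # number on the first blank check in the stack
system_numbers = list(range(last_check_issued + 1, last_check_issued + 1 + 3))  # 1046, 1047, 1048

if system_numbers[0] != preprinted_first_check:
    print(f"Investigate: system expects check {system_numbers[0]} but the first blank "
          f"check is {preprinted_first_check} - one or more blank checks may be missing")
else:
    print("Starting check numbers agree; the check run may proceed")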
Any form should be pre-numbered and controlled in the same manner as checks.
Companies are increasingly creating their own checks, using blank check stock and printers that print all of
their information as well as the MICR (Magnetic Ink Character Recognition) line as the check itself is printed.
The physical equipment used to create checks as well as the blank check stock must be strictly controlled.
Output control also concerns report distribution. For example, a payroll register with all the employees’
social security numbers and pay rates is confidential information and thus its distribution must be restricted.
Formal distribution procedures should be documented along with an authorized distribution list specifying authorized recipients of output reports, checks, and other critical output. An adequate number of copies
of reports should be generated to permit only one report to be distributed to each person on the list. For a
confidential report, it is preferable to have a representative pick the report up personally and sign for it. If
it is not possible for a representative to pick the report up personally, a bonded employee can be used to
hand deliver the reports. The employee’s supervisor should make random checks on the report distribution.
Confidential reports should be shredded when they are no longer needed.
Controls Classified as Preventive, Detective and Corrective
Just as financial controls can be classified as preventive, detective, and corrective, information systems
controls can be classified in the same manner.
•
Preventive controls prevent errors and fraud before they occur. Examples of preventive controls are segregation of duties, job rotation, training and competence of personnel, dual access
controls, authorization, approval, endorsement and cancellation, and preformatted input.
•
Detective controls uncover errors and fraud after they have occurred. Examples of detective
controls are transmittal documents, batch control totals and other batch transmittal documents,
completeness checks, hash totals, batch balancing, check digits, limit checks, and validity checks.
The use of a turnaround document is also a detective control, because it checks on completeness
of input. Completeness-of-processing detective controls include run-to-run totals, reconciliations,
and use of suspense accounts. Error logs and exception reports are correctness-of-processing detective controls and may also be completeness-of-processing controls.
•
Corrective controls are used to correct errors. Upstream resubmissions are corrective controls.
Question 64: The reporting of accounting information plays a central role in the regulation of business
operations. The importance of sound internal control practices is underscored by the Foreign Corrupt
Practices Act of 1977, which requires publicly-owned U.S. corporations to maintain systems of internal
control that meet certain minimum standards. Preventive controls are an integral part of virtually all
accounting processing systems, and much of the information generated by the accounting system is
used for preventive control purposes. Which one of the following is not an essential element of a sound
preventive control system?
a)
Documentation of policies and procedures.
b)
Implementation of state-of-the-art software and hardware.
c)
Separation of responsibilities for the recording, custodial and authorization functions.
d)
Sound personnel practices.
(CMA Adapted)
Question 65: An advantage of having a computer maintain an automated error log in conjunction with
computer edit programs is that:
a)
Less manual work is required to determine how to correct errors.
b)
Better editing techniques will result.
c)
The audit trail is maintained.
d)
Reports can be developed that summarize the errors by type, cause and person responsible.
(CMA Adapted)
Controls Classified as Feedback, Feedforward and Preventive
Another way of classifying information systems controls looks at them as feedback, feedforward, or
preventive controls.
•
Feedback controls produce feedback that can be monitored and evaluated to determine if the
system is functioning as it is supposed to. Feedback controls are required in order to produce
usable information for end users. With the addition of feedback controls, a system becomes a self-monitoring, self-regulating system.
A feedback loop is a part of a control system. It uses feedback to measure differences between
the actual output and the desired output. It then adjusts the operation according to those differences. Thus, it self-corrects. A self-monitoring system is sometimes called a cybernetic system.
For example, in a manufacturing situation where ingredients are being combined, computers may
monitor and control the mixing process, making adjustments as necessary to maintain the correct
proportions of each ingredient in the mix. In an accounting system, data entry displays or edit
sheets provide control of data entry activities, and accounting procedures such as reconciliations
provide feedback.
A report that summarizes variances from budgeted amounts is another example of a feedback control. (A minimal sketch of a feedback loop appears after this list.)
•
A feedforward control system may be used in addition to the feedback loop to provide better
controls. A feedforward system attempts to predict when problems and deviations will occur before they actually occur. It gives people guidance about what problems could occur, so they can
plan the necessary changes or actions to prevent the problem or deviation from occurring. Or, if it
is not possible to prevent the problem, a feedforward control can enable the company to minimize
the effects of the problem. A budget is a feedforward control. Policies, procedures, and rules are
also feedforward controls because they establish the way things are supposed to be done. When
people have detailed instructions and follow them, the chances of something going wrong may be
minimized.
•
A preventive control attempts to stop a variance or problem from ever occurring, because it is
more cost effective to prevent a problem than it is to fix the problem after it occurs. Maintenance
is often given as an example of a preventive control. A preventive control is slightly different from
a feedforward control, in that the feedforward control simply tries to identify the potential problem,
whereas the preventive control attempts to prevent the problem from occurring. Segregation of
duties is a preventive control.
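The feedback loop referred to in the first bullet above can be illustrated with a minimal Python sketch that measures the difference between the actual and desired output and adjusts the operation to close the gap. The figures and the simple proportional adjustment rule are assumptions made for illustration.

# Minimal sketch of a feedback loop: measure the difference between actual and
# desired output, then adjust the operation to close the gap.

desired = 0.25                      # target proportion of an ingredient in the mix
actual = 0.40                       # starting (incorrect) proportion

for cycle in range(1, 6):
    error = actual - desired        # feedback: difference between actual and desired output
    actual -= error * 0.5           # adjust the operation by half the measured error
    print(f"Cycle {cycle}: error {error:+.4f}, new proportion {actual:.4f}")
# The proportion converges toward the 0.25 target - the system self-corrects.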
Question 66: Preventive controls are:
a)
Usually more cost beneficial than detective controls.
b)
Usually more costly to use than detective controls.
c)
Found only in accounting transaction controls.
d)
Found only in general accounting controls.
(CMA Adapted)
System and Program Development and Change Controls
(A general operating procedure within general controls)
Systems development controls during the development stage of an information system enhance the ultimate accuracy, validity, safety, security and adaptability of the new system’s input, processing, output and
storage functions.
Controls are instituted at the development stage for multiple reasons.
1)
To ensure that all changes are properly authorized and are not made by individuals who lack
sufficient understanding of control procedures, proper approvals and the need for adequate testing.
2)
To prevent errors in the resulting system that could cause major processing errors in data.
3)
To limit the potential for a myriad of other problems during the development process and after its
completion.
Following are a few of the control considerations in systems development. This list is not exhaustive but is
presented to give candidates an idea of what is involved.
1) Statement of Objectives
A written proposal is prepared, including the need for the new system, the nature and scope of the project
and timing issues in terms of need and employee availability. A risk assessment is done to document
security threats, potential vulnerabilities, and the feasible security and internal control safeguards needed
to mitigate the identified risks.
•
The written proposal should be reviewed by the IT Steering Committee.
•
The nature and scope of the project should be defined clearly in writing.
A clear, written statement of objectives and a risk assessment at the Statement of Objectives stage can
limit the number of changes needed later on and shorten the time required to identify solutions and get
approvals.
2) Investigation and Feasibility Study of Alternative Solutions
Needs are identified and the feasibility of alternative solutions is assessed, including the availability of required
technology and resources. A cost-benefit analysis is done, including both tangible costs and benefits (such
as hardware costs or increases in cash flow) and intangible costs and benefits (such as loss of customer
goodwill or better customer service).
•
Questions to be answered in the cost-benefit analysis include whether the system will provide an
adequate payback; whether it will fit into the existing software environment; whether it will run
on existing hardware or whether a hardware upgrade will be needed; whether new storage media
will be required; whether the resources are available for the project; whether the application would
require extensive user or programmer training; and what effect it would have on existing systems.
•
The feasibility study should include an analysis of needs, costs, implementation times, and potential risks.
•
In evaluating possible solutions, criteria should be developed for consideration of in-house development, purchased solutions and outsourcing options.
•
The technological feasibility of each alternative for satisfying the business requirements should be
examined; and the costs and benefits associated with each alternative under consideration should
be analyzed.
•
Key users should assist in the analysis and recommendations.
•
The development team should have a good knowledge of the solutions available and limit their
consideration to proven technology rather than experimenting with new technology, unless experimentation is justified by the situation.
•
Senior management should review the reports of the feasibility studies and approve or disapprove
proceeding with the project.
•
For each approved project, a master plan should be created to maintain control over the project
throughout its life, which includes a method of monitoring the time and costs incurred.
The cost-benefit analysis done at the Investigation and Feasibility Study stage is extremely important as a
control tool. The cost-benefit analysis can reduce changes later on caused by the discovery of unexpected
costs. Furthermore, if the project is seriously flawed, it can be rejected at this stage before a major investment is made.
3) Systems Analysis
The current system is analyzed to identify its strong and weak points, and the information needs of the new system, such as reports to be generated, database needs, and the characteristics of its operation, are determined.
•
Business requirements satisfied by the existing system and those that the proposed system expects to attain should be clearly defined, including user requirements, specifications as to what the
new system is supposed to accomplish, and alternatives for achieving the specifications such as
in-house development versus a vendor package.
•
Inputs, processing requirements, and output requirements should be defined and documented.
•
All security requirements should be identified and justified, agreed upon, and documented.
•
A structured analysis process should be used.
If the information required by the users is not clear, the new system cannot possibly support the business
process, leading to delays in implementation and additional costs to redesign the system.
4) Conceptual Design
The conceptual design is what the users are expecting the system to be. Systems analysts work with users
to create the design specifications and verify them against user requirements. System flowcharts and report
layouts are developed. The applications, network, databases, user interfaces, and system interfaces are
designed.
•
Design specifications should be reviewed and approved by management, the user departments,
and senior management.
•
Detailed program specifications should be prepared to ensure that program specifications agree
with system design specifications.
•
Data elements are defined. Each field in each file is listed and defined.
•
The file format should be defined and documented for the project to ensure that data dictionary
rules are followed.
•
All external and internal interfaces should be properly specified, designed, and documented.
•
An interface between the user and the machine that is easy to use and contains online help functions should be developed.
•
Mechanisms for the collection and entry of data should be specified for the development project.
•
Adequate mechanisms for audit trails should be developed for the selected solution that provide
the ability to protect sensitive data.
Benefits of properly-completed conceptual design include reduced interoperability problems, better integration with existing systems, and reduced costs for modifications later.
5) Physical Design
The physical system design involves determining the workflow, what and where programs and controls are
needed, the needed hardware, backups, security measures, and data communications.
•
Hardware and software selection should identify mandatory and optional requirements. The potential impact of new hardware and software on the performance of the overall system should be
assessed.
•
An up-to-date inventory of hardware and software infrastructure should be available.
•
Acquisition policies and practices should be clearly understood, and the selection process should
focus on using reusable components.
•
Contracts with suppliers should include a definition of acceptance criteria and procedures, and
dependency on single-source suppliers should be managed.
•
If a vendor package or packages are to be used, they should be evaluated rigorously. Factors to
consider include the stability of the vendor, how long the system has been on the market, whether
it has a base of users and the satisfaction level of current users of the system, the vendor’s quality
control standards, the adequacy of the documentation, the availability of vendor technical support,
and flexibility of the system such as whether it has a report writer that users can use to develop
reports themselves. Also, the system’s processing speed on the organization’s systems is a consideration.
•
Because only authorized people should be able to call vendor technical support departments, the
evaluation of a vendor system should include inquiries as to the means the vendor has to identify
callers to its technical support area and determine whether the caller has authority to receive
technical instructions for fixing problems.
•
Performance and capacity requirements should be duly considered.
•
Key requirements should be prioritized in case of possible scope reductions.
Benefits of proper physical design planning include reduced delays later due to inadequate infrastructure.
6) Development and Testing
The design is implemented into source code, the technical and physical configurations are fine-tuned, and
the system is integrated and tested. Data conversion procedures are developed.
•
Programs are coded according to the specifications developed in the systems design and development stage.
•
Procedures should provide for a formal evaluation and approval, by management of the user department or departments and management of the IT function, of work accomplished and test
results in each phase of the cycle before work on the next phase begins.
•
There should be a separation between development activities and testing activities.
•
A formal process for handover from development to testing to operations should be defined.
•
Resources should be available for a separate testing environment that reflects as closely as possible the live environment, and sufficient time should be allowed for the testing process.
•
Parallel or pilot testing should be performed, and criteria for ending the testing process should be
specified in advance.
•
Testing should be done both of the individual application and of the application within the system.
•
An independent group should do the testing and try to make the system fail.
•
Both in-house systems and vendor packages should be tested.
•
Software from a vendor should be tested on a stand-alone (non-networked) computer before being
distributed for general use because a vendor’s software could be infected with a virus.
Testing is the final check to make sure the system performs as it should.
7) System Implementation and Conversion
The site is prepared, equipment is acquired and installed, and conversion procedures, including data conversion, are implemented. Documentation is completed and users are trained.
•
An implementation plan should be prepared, reviewed and approved by relevant parties and also
used to measure progress. The plan should include site preparation, equipment acquisition and
installation, user training, installation of operating software changes and implementation of operating procedures and conversion procedures.
•
Data conversion controls such as record counts, reviews of reports and other types of reconciliations are required.
•
The degree and form of documentation required is agreed upon and followed in the implementation. Documentation will include
o
System documentation, which is narrative descriptions, flowcharts, input and output
forms, file and record layouts, controls, authorizations for any changes and backup procedures.
o
Program documentation, or descriptions of the programs, program flowcharts, program
listings of source code, input and output forms, change requests, operator instructions, and
controls.
o
Operating documentation, the information about the performance of the program.
o
Procedural documentation, providing information about the master plan and the handling of files.
o
User documentation, including all the information a user will need in order to use the
program.
•
Standard operating procedures should be documented, distributed, and maintained using
knowledge management, workflow techniques, and automated tools.
•
Staff of the user departments and the operations group of the IT function should be trained in
accordance with the training plan.
•
Formal evaluation and approval of the test results and the level of security for the system by
management of the user department and the IT function should cover all components of the information system.
•
Before the system is put into operation, the user should validate its operation as a complete product under conditions similar to the application environment.
•
The decision should be made as to whether the new system will be implemented using a parallel
conversion (running both the old and the new systems together for a period of time), a phased
conversion (converting only parts of the application at a time or only a few locations at a time),
pilot conversion (the new system is tested in just one work site before full implementation), or a
direct conversion (changing over immediately from the old system to the new).
Documentation provides a basis for effective operation, use, audit, and future system enhancements. A
detailed record of the system’s design is necessary in order to install, operate, or modify an application and
for diagnosing and correcting programming errors. It provides a basis for reconstruction of the system in
case of damage or destruction.
Benefits of good documentation and other controls at this stage are more seamless integration of the new
system into existing business processes and greater user proficiency and satisfaction.
8) Operations and Maintenance
The system is put into a production environment and used to conduct business. Follow-up occurs to determine whether weaknesses in the previous system have been eliminated and whether or not any new
problems have arisen. Continuous monitoring and evaluation take place to determine what is working and
what needs improvement, to support continuous improvement of the system. Maintenance includes modifying the system as necessary to adapt to changing needs, replacing outdated hardware as necessary,
upgrading software, and making needed security upgrades.
•
The system is audited periodically to make sure it continues to operate properly.
•
A maintenance process is utilized to correct errors.
•
A process should be in place to manage coordination between and among changes, recognizing
interdependencies.
•
Production duties should be segregated from development duties.
•
Maintenance personnel should have specific assignments, their work should be monitored, and
their system access rights should be controlled.
•
Any modifications to the system should be authorized by user management, made in accordance
with the same standards as are used for system development, and should be tested and approved
by the user and Information Systems management. Senior management should approve major
projects.
•
When changes are being tested, they should be tested not only by using correct information, but
also by using incorrect information to make sure that the program will detect any errors and has
the necessary controls.
•
Whenever system changes are implemented, associated documentation and procedures should be
updated accordingly.
•
For a vendor package, maintenance procedures are of concern from a systems control standpoint.
Updates released by the vendor should be installed on a timely basis. For an organization with
integrated software, releases must be kept compatible. If one portion of the system is upgraded
and another part is not, the two systems may no longer interface properly.
•
If vendor-supplied software has had custom changes made to the vendor’s source code and the
changes are not properly reinstalled on top of new releases, erroneous processing can result. The
organization should maintain change controls to verify that all custom changes are properly identified. A good audit trail of all program changes is necessary. Another concern with vendor update
releases when in-house changes have been made is that the changes may need to be not only
reinstalled, but completely rewritten. The changes made to the prior release of the program might
not work properly with the vendor’s new release.
•
Heavy modification of vendor code with no intention of installing new vendor releases because of
the necessity to reinstall the modifications should be avoided, because the system becomes essentially an in-house system without the benefit of vendor support.
The benefits include reduced errors and disruptions due to poorly-managed changes, reduced resources
and time required for changes, and reduced number of emergency fixes.
Internet Security
(A logical security access control within general controls)
Once a company is connected to an outside network (usually the Internet), a number of additional security
issues must be properly addressed. The company must make sure that the policies that it puts in place
allow the intended and authorized users to have access to the network as needed. However, accessibility
also creates vulnerability.
Electronic eavesdropping can occur if computer users are able to observe transmissions intended for
someone else. Therefore, organizations must ensure that information sent over a network is properly protected to maintain the confidentiality of company information. Furthermore, the company must ensure that
company files cannot be accessed or changed without authorization.
At a minimum, the system should include user account management, a firewall, anti-virus protection,
and encryption.
•
User account management is the process of assigning people accounts and passwords. In order
for user account management to be as effective as possible, the company must keep the accounts
and passwords up-to-date. Inactive accounts should be eliminated and active passwords changed
frequently.
•
A firewall serves as a barrier between the internal and the external networks and prevents unauthorized access to the internal network. A properly configured firewall makes a computer’s ports
invisible to port scans.46 In addition to protecting a computer from incoming probes, a firewall can
also prevent backdoor47 applications, Trojan horses,48 and other unwanted applications from sending data from the computer. Most firewalls will usually prepare a report of Internet usage, including
any abnormal or excessive usage and attempts to gain unauthorized entry to the network. A firewall can be in the form of software directly installed on a computer, or it can be a piece of hardware
installed between the computer and its connection to the Internet.
•
Antivirus software, regularly updated with the latest virus definitions, is the best defense against
viruses, Trojan horses, and worms. Antivirus software recognizes and incapacitates viruses before
they can do damage. Users must keep their antivirus software up-to-date, however, because new
viruses appear constantly. Programs that specifically defend against Trojan horses are also available.
•
Encryption is the best protection against traffic interception resulting in data leaks. Encryption
converts data into a code and then a key is required to convert the code back to data. Unauthorized
people can receive the coded information, but without the proper key they cannot read it. Thus,
an attacker may be able to see where the traffic came from and where it went but not the content.
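As a minimal illustration of encryption and decryption with a key, the following Python sketch uses the third-party cryptography package (an assumed choice; any symmetric cipher would illustrate the same point). An interceptor who obtains the coded token but not the key cannot read the content.

# Minimal sketch of symmetric encryption, assuming the third-party
# "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # the key needed to convert the code back to data
cipher = Fernet(key)

token = cipher.encrypt(b"Wire $10,000 to account 123456")
print(token)                          # unreadable without the key
print(cipher.decrypt(token).decode()) # the authorized recipient recovers the message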
46 A “port” is a place where information goes into and out of a computer. A “port scanner” is software that probes a computer for open ports. Port scanners are used legitimately by administrators to verify security policies of their networks and illegitimately by attackers to exploit vulnerabilities.
47 A “backdoor” is an undocumented means to gain access to a program, online service, or an entire computer system by bypassing the normal authentication process. A backdoor is a potential security risk.
48 A Trojan horse is “any program that does something besides what a person believes it will do.” A Trojan horse can appear to be something desirable, but in fact it contains malicious code that, when triggered, will cause loss or even theft of data. A typical example of a Trojan horse is a program hidden inside of a humorous animation that opens a backdoor into the system. Another example of a Trojan horse is commercial software that collects data on the person running the program and sends it back to the originating company without warning the target.
Viruses, Trojan Horses, and Worms
A computer virus is a program that alters the way another computer operates. Viruses can damage programs, delete files or reformat the hard disk. Other viruses do not do damage but replicate themselves and
present text, video, and audio messages. Although these other viruses may not cause damage directly,
they create problems by taking up computer memory and causing erratic behavior or system crashes that
can lead to data loss.
To be considered a virus, a program must meet two criteria:
1) It must execute itself. A virus often places its own code in the path of the execution of another program.
2) It must replicate itself. A virus can replace other executable files with a copy of the virus-infected file.
A virus can be received from an infected disk, a downloaded file, or an email attachment, among other
places.
A Trojan horse is different from a virus. A very important distinction between Trojan horses and viruses
is that Trojan horses do not replicate themselves, whereas viruses do. The purpose of a Trojan horse is
not to spread like a virus, but to have a particular target — a particular computer — on which to run a
program. A strict definition of a Trojan horse is, “any program that does something besides what a person
believes it will do.” A Trojan horse can appear to be something desirable, but in fact it contains malicious
code that, when triggered, will cause loss or even theft of data. A typical example of a Trojan horse is a
program hidden inside of a humorous animation that opens a backdoor into the system. Another example
of a Trojan horse is commercial software that collects data on the person running the program and sends
it back to the originating company without warning the target.
A computer user can get a Trojan horse only by inviting it into his or her computer. Two examples are:
1) Opening an email attachment.
2) Downloading and running a file from the Internet. Many mass-mailing worms are considered Trojan horses because they must convince someone to open them. The SubSeven server, which is software that lets an attacker remotely control any computer it is installed on, is an example of a program typically embedded in a Trojan horse.
A worm is a program that replicates itself from system to system without the use of any host file. The
difference between a worm and a virus is that the worm does not require the use of an infected host file,
while the virus does require the spreading of an infected host file. Worms generally exist inside of other
files, often Word or Excel documents. However, worms use the host file differently from viruses. Usually
the worm releases a document that has the “worm” macro inside the document. The entire document
spreads from computer to computer, so the entire document is, in essence, the worm.
A virus hoax is an email telling the recipient that a file on his or her computer is a virus when the file is
not a virus. Such an email will tell recipients to look on their systems for a file with a specific name and, if
they see it, to delete it because the file contains a virus that is unrecognizable by anti-virus programs.
Everyone with the targeted operating system will find that file because it is a system file that is needed for
the computer to operate correctly. Someone who believes the virus hoax email and deletes the file will find
that the computer malfunctions afterward.
Note: The difference between a virus and a Trojan horse is that a virus replicates itself, but a Trojan
horse does not.
The difference between a virus and a worm is that the virus requires an infected host file in order to
replicate itself, while the worm can replicate itself without a host file.
Cybercrime
The Internet, online communications, and e-business are all subject to computer crime, and the threat is
growing every day.
Cyberattacks can be perpetrated by criminals, international adversaries, and terrorists. Adversaries target
the nation’s critical infrastructure and critical processes. Companies are targeted for competitive information and sensitive corporate data, including consumer credit card information and personal identification
information. Universities are targeted to steal their research and development data. Individuals are targeted
by identity thieves and children are targeted by online predators.
In the U.S., the Cyber Division of the FBI is responsible for investigating cyberattacks. The FBI maintains
specially trained cyber squads at FBI headquarters and in each of its 56 field offices. In addition, Cyber
Action Teams are available to travel around the world to assist in computer intrusion cases, providing
investigative support and helping to answer critical questions that can move a hacking case forward. In
addition, the FBI has 93 Computer Crimes Task Forces nationwide that work with federal, state, and local
counterparts.
The FBI’s priorities are:
• Computer and network intrusions
• Ransomware
• Identity theft
• Online predators
Some specific computer crimes include:
•
Copyright infringement such as the illegal copying of copyrighted material, whether intellectual
property, such as computer programs or this textbook, or entertainment property such as music
and movies.
• Denial of Service (DoS) attacks, in which a website is deliberately flooded with repeated requests so that legitimate users cannot connect to it.
•
Intrusions and theft of personal information, including emails, for gain.
•
Phishing, a high-tech scam that uses spam email to deceive consumers into disclosing their credit
card numbers, bank account information, Social Security numbers, passwords or other sensitive
personal information.
•
Installation of malware on a computer without the user’s knowledge. An example of malware is
a keylogger that records every keystroke and sends it back to the hacker. Keylogging software has
been used to gather bank information, credit card information, and passwords. Other malware
turns a PC into a “zombie,” giving hackers full control over the machine. Hackers set up “botnets”
— networks consisting of millions of zombies — that can be made to each send out tens of thousands of spam emails or emails infected with viruses, and the computer users do not even know
it is happening.
•
In tech support scams, scammers use fake warning popup boxes on users’ computers to
convince potential victims that their computers suffer from security problems and to offer a toll-free number to call for a “Microsoft” or “Apple” support line. Users who believe it and call the
number are usually told to visit a website and download a “remote administration tool” that gives
the scammer access to their computer so the scammer can “run tests” on the computer. The
scammer then presents supposed “results” showing the computer infected with malware or having
been breached by hackers and offers to fix it for a fee of several hundred dollars. Users who fall
for the scheme are out the “support fee.” Domain hosting companies usually discover the fraud
and remove the websites within a few days, and the victim has nowhere to turn for compensation.
Cybercriminals’ Tools
Port scanners. A port scanner is software that can probe a server or a computer for ports that are open.
Although port scanners have legitimate uses, they are also used by hackers to locate and exploit vulnerabilities. Using port scans, hackers can look for a particular make of computer or a particular software
program, because they know of weaknesses in those computers or programs that they can exploit. Once a
hacker has identified a vulnerable computer or software application, the hacker can leave a back door
open in the computer in order to re-enter it at any time. If the original entry point is detected and closed,
the back door functions as a hidden, undetected way back in.
Sniffers. A sniffer is a piece of software that grabs all of the traffic flowing into and out of a computer
attached to a network. Like port scanners, sniffers have legitimate as well as illegitimate uses. Intrusion
Detection Systems (IDS) use sniffers legitimately to match packets against a rule set designed to flag things
that appear malicious or strange. Network utilization and monitoring programs often use sniffers to gather
data necessary for metrics and analysis. However, sniffers are also used to gather information illegally.
Many personal computers are on Local Area Networks (LANs), meaning they share a connection with several
other computers. If a network is not switched (a switch is a device that filters and forwards packets
between segments of the LAN), traffic intended for any machine on a segment of the network is broadcast
to every machine on that segment. Thus, every computer actually sees the data traveling to and from each
of its neighbors but normally ignores it. The sniffer program tells a computer to stop ignoring all the traffic
headed to other computers and instead pay attention to that traffic. The program then begins a constant
read of all information entering the computer.
Anything transmitted in plain text over the network is vulnerable to a sniffer — passwords, web pages,
database queries and messaging, to name a few. Once traffic has been captured, hackers can quickly
extract the information they need. The users will never know their information has been compromised,
because sniffers cause no damage or disturbance to the network environment.
Other tools of hackers include:
• Password crackers, software that generates different combinations of letters and numbers in order to guess passwords.
• Logic bombs, malicious code hidden in a program that destroys computer data or launches a malicious attack when specific criteria are met.
• Buffer overflow attacks, which send more data to a buffer in a computer’s memory than it can hold, crashing the program or enabling the hacker to gain control over it.
Some computer crime tactics involve efforts in person as well as computer activities. Tactics involving
personal effort include social engineering and dumpster diving. Social engineering involves deceiving
company employees into divulging information such as passwords, usually through a fraudulent email but
it may be through something as simple as a telephone call. Dumpster diving is sifting through a company’s
trash for information that can be used either to break into its computers directly or to assist in social
engineering.
Another online scam is directed against companies that advertise on search engines on a “pay-per-click”
basis. Google is probably the best-known example of a search site that charges advertisers each time a
visitor clicks on an ad link. In one version of this scam, a competitor will write a software program that
repeatedly clicks on a business’s online ads in order to run up its advertising charges. Ultimately, after too
many clicks within a 24-hour period, the ad is pushed off the search engine site, resulting in lost business
for the company along with the inflated advertising fees.
However, outsiders are not the only ones who commit computer crimes against a company. Insiders—or
company employees—are a primary source of trouble. Employees who are planning to leave one employer
and go to work for a competitor can use their company email to transmit confidential information from the
current employer to the future employer or can carry it out on a flash drive.
Insider crime can also include using the company computer for private consulting, personal financial business, playing video games on company time or browsing pornography sites. A legitimate use of sniffers,
described earlier, is monitoring network usage to reveal evidence of improper use. Many businesses install
software that enables them not only to monitor their employees’ access to websites but also to block access
to certain websites. Improper use of the Internet and email at work can get an employee fired immediately.
Defenses Against Cybercrime
The best defense against port scans is a good firewall. A firewall serves as a barrier between the internal
and the external networks and prevents unauthorized access to the internal network. A properly configured
firewall makes a computer’s ports invisible to port scans. In addition to protecting a computer from incoming
probes, a firewall can also prevent backdoor applications, Trojan horses and other unwanted applications
from sending data from the computer. Most firewalls can prepare a report of Internet usage, including any abnormal or excessive usage and attempts to gain unauthorized entry to the network. A firewall
can be in the form of software directly installed on a computer, or it can be a piece of hardware installed
between the computer and its connection to the Internet.
An organization may also use a proxy server, which is a computer and software that creates a gateway
to and from the Internet. The proxy server contains an access control list of approved web sites and handles
all web access requests, limiting access to only those sites contained in the access control list. A proxy
server enables an employer to deny its employees access to sites that are unlikely to have any productive
benefits. The proxy server also examines all incoming requests for information and tests them for authenticity, thus functioning as a firewall. The proxy server can also limit the information stored on it to
information the company can afford to lose, and if the proxy server is broken into, the organization’s main
servers remain functional.
Tools called anti-sniffers are available to defend against sniffers. When a sniffer program is active on a
computer, the computer’s network interface card (NIC) is placed in a state called promiscuous mode. The
anti-sniffer scans networks to determine if any network interface cards are running in promiscuous mode.
Anti-sniffers can be run regularly to detect evidence of a sniffer on the network. A switched network is
also a deterrent, because it eliminates the broadcasting of traffic to every machine, although hackers do
have programs they can use to get around a switched network.
The best defense against phishing and other scams is in the hands of the recipient. Recipients need to know
not to respond to any email that requests personal or financial information and not to click on any link given
in such an email that could take them to a spoofed website. Similarly, recipients of unexpected email
attachments need to know not to open them, even if a virus scan has not identified any virus in the attachment. New viruses appear every day and one could slip past an antivirus program, even an antivirus
program that is updated regularly. Recipients of popup “notifications” about malware on their computers
should know that there is no legitimate scenario in which a computer will tell them it is infected and ask
them to call a toll-free number. Thus, employee education is a vital part of Internet security.
Encryption
Encryption is the best protection against traffic interception resulting in data leaks. Encryption converts
data into a code and then a key is required to convert the code back to data. Unauthorized people can
receive the coded information, but without the proper key, they cannot read it. Thus, an attacker may be
able to see where the information came from and where it went but not the content.
Encryption can be accomplished by two methods: secret key and public key/private key.
•
In a secret key encryption system, each sender and recipient pair has a single key that is used
to encrypt and decrypt the messages. The disadvantage to a secret key system is that every pair
of senders and receivers must have a separate set of keys that match. If several pairs all used the
same set, then anyone having the key could decrypt anyone else’s message and it would not be a
secret. Use of a secret key system is impractical over the Internet, because any one company
could have thousands of potential customers as well as others from whom it would need to receive
messages.
•
The public key/private key encryption system is a better system for companies to use. In a
public-key/private-key encryption system, each entity that needs to receive encrypted data publishes a public key for encrypting data while keeping a private key to itself as the only means for
decrypting that data. Anyone can encrypt and send data to the company using its published public
key, but only the company’s private key can be used to decrypt the data, and only the company
that published the public key has the private key.
A company obtains a public key and the private key to go with it by applying to a Certificate
Authority, which validates the company’s identity and then issues a certificate and unique public
and private keys. The certificate is used to identify a company, an employee or a server within a
company. The certificate includes the name of the entity it identifies, an expiration date, the name
of the Certificate Authority that issued the certificate, a serial number and other identification. The
certificate always includes the digital signature of the issuing Certificate Authority, which permits
the certificate to function as a “letter of introduction” from the Certificate Authority. One example
of public/private encryption keys is SSL (Secure Sockets Layer), used on secure web sites.
Public key/private key encryption can be illustrated with an analogy. Imagine a door with two
locks, one keyed to the public key, and the other keyed to the private key. Both locks are unlocked,
and a pile of keys that fit the public key lock is available on a table nearby. When someone wants
to leave something in the room, they take a public key and lock the door. Upon locking the door
with the public key, the private key lock is locked automatically. Only the person with the private
key can open the door and look at what is in the room. Therefore, the person who left the message
in the room knows the message is safe from being read by anyone else and that only the intended
recipient can read it.
The most important point to remember here is that only someone with the private key (which
should be closely guarded) can open the door and have access to the contents of the room. While
it is possible to "pick the lock," doing so requires a large amount of time, skill and determination,
just as in real life.
Businesses can use public key cryptography when sending information. If Ronnie Retailer needs to
order from Smith Supply, Ronnie can use Smith Supply’s public key to encrypt the information it
wants to send via the Internet. Smith Supply can then use its private key to decrypt the message
and process whatever transaction was requested. Anyone else who happens to intercept Ronnie Retailer’s message will see only gibberish. (A brief code sketch of this exchange follows below.)
Among other uses for it, public key/private key encryption is used to transmit credit card information securely over the Internet to online merchants.
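To make the public key/private key idea concrete, the following is a minimal sketch in Python using the third-party cryptography package (an assumption; the textbook does not prescribe any particular tool). The key size, padding choices, and the order text are illustrative only.

    # Minimal sketch: Smith Supply publishes a public key; Ronnie Retailer encrypts with it;
    # only Smith Supply's private key can decrypt. Requires the "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Smith Supply generates a key pair and publishes only the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Ronnie Retailer encrypts an order using Smith Supply's published public key.
    order = b"Purchase order: 20 blenders"
    ciphertext = public_key.encrypt(
        order,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # An interceptor sees only ciphertext; only the private key recovers the plaintext.
    plaintext = private_key.decrypt(
        ciphertext,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    assert plaintext == order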
Business Continuity Planning
(A hardware control within application controls)
Business continuity planning involves defining the risks facing a company in the event of a disaster, assessing those risks, creating procedures to mitigate those risks, regularly testing those procedures to ensure
that they work as expected, and periodically reviewing the procedures to make sure that they are up to
date. In the context of an information technology system, it is essential that the company have plans for
the backup of data and the recovery of data, especially in the context of disaster recovery.
Several different processes and backup plans function as part of the backup and recovery plan.
•
Program files, as well as data files, should be backed up regularly.
• Copies of all transaction data are stored in a transaction log as they are entered into the system. Should the master file be destroyed during processing, computer operations will roll back to the most recent backup; recovery takes place by reprocessing the transaction log against the backup copy. (A simple roll-forward sketch follows this list.)
•
Backups should be stored at a secure, remote location, so that in the event of data destruction
due to a physical disaster, the records can be reconstructed. It would do very little good to have
backup media in the same room as the computer if that area were destroyed by fire, for example.
•
Backup data can be transmitted electronically to the backup site through a process called electronic vaulting, or backing up to the cloud. Backing up to the cloud carries its own control
considerations. Some considerations include
o
Data security – A cloud backup provider should be certified compliant with international security standards.
o
Data privacy – Encryption alone is no guarantee of privacy. If encryption keys are held with the data in the cloud, the cloud provider may be required to turn over the data if subpoenaed. For maximum privacy, the encryption key can itself be encrypted, a technique called digital envelope encryption.
o
Dependability of the remote location – Management considering a cloud solution should understand the backup providers’ Service Level Agreements to determine whether they offer
automatic data redundancy across multiple data centers, so if something happens to one center, other centers can take over. Without such data redundancy, data can be permanently lost.
o
Legal data requirements of the country where the data is resident – Multinational organizations
with employees all over the world need to understand the data regulations in each country in
which they are located. Using a provider with a single location may result in a violation of local
data residency laws. If a cloud backup center has international locations, multinational entities
can choose which data centers to use for their data backups to ensure they are in compliance
with local data regulations.
•
Grandparent-parent-child processing is used because of the risk of losing data before, during
or after processing work. Data files from previous periods are retained and if a file is damaged
during updating, the previous data files can be used to reconstruct a new current file. Like backup
media, these files should also be stored off-premises.
•
Computers should be on Uninterruptible Power Supplies (UPS) to provide some protection in
the event of a power failure. Software is available that works in tandem with the UPS to perform
an orderly shutdown of the system during that short period of power maintenance that the UPS
can give the computer.
•
Fault-Tolerant Systems are systems designed to tolerate faults or errors. They often utilize
redundancy49 in hardware design, so that if one system fails, another one will take over.
49 Redundancy is the duplication of critical components or functions of a system in an effort to increase the reliability of the system. If one component fails, the duplicate component can take over the processing.
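The roll-forward recovery described in the transaction log bullet above can be sketched as follows. This is a simplified illustration only; the file layout, account names, and amounts are hypothetical, not taken from any real system.

    # Minimal sketch: rebuild the current master file by re-applying the transaction log
    # to the most recent backup copy (roll-forward recovery). All data are illustrative.
    import copy

    backup_master = {"Cash": 50000, "Accounts receivable": 20000}   # last good backup
    transaction_log = [
        {"account": "Accounts receivable", "amount": 4000},    # credit sale
        {"account": "Cash", "amount": 2500},                    # cash receipt
        {"account": "Accounts receivable", "amount": -2500},
    ]

    def recover(backup, log):
        """Start from the backup and re-apply every transaction recorded in the log."""
        master = copy.deepcopy(backup)
        for txn in log:
            master[txn["account"]] = master.get(txn["account"], 0) + txn["amount"]
        return master

    current_master = recover(backup_master, transaction_log)
    # current_master == {"Cash": 52500, "Accounts receivable": 21500}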
Disaster Recovery
Not many firms could survive for long without computing facilities. Therefore, an organization should have
a formal disaster recovery plan to fall back on in the event of a hurricane, fire, earthquake, flood, or criminal
or terrorist act. A disaster recovery plan specifies:
1) Which employees will participate in disaster recovery and what their responsibilities will be. One person should be in charge of disaster recovery and another should be second in command.
2) What hardware, software, and facilities will be used.
3) The priority of applications that should be processed.
Arrangements for alternative facilities as a disaster recovery site and offsite storage of the company’s
databases are also part of the disaster recovery plan. An alternative facility might be a different facility
owned by the company, or it might be a facility contracted by a different company. The different locations
should be a significant distance away from the original processing site.
A disaster recovery site may be a hot site, a cold site, a warm site, or a mobile site.
A hot site, or a mirrored data center, is a backup facility that has a computer system similar to the one
used regularly. The hot site must be fully operational and immediately available, with all necessary telecommunications hookups for online processing. A hot site also has current, live data being replicated to it
in real time from the live site by automated data communications.
A cold site is a facility where space, electric power, and heating and air conditioning are available and
processing equipment can be installed, though the equipment and the necessary telecommunications are
not immediately available. If an organization uses a cold site, its disaster recovery plan must include arrangements to quickly get computer equipment installed and operational.
A warm site is in between a hot site and a cold site. It has the computer equipment and necessary data
and communications links installed, just as a hot site does. However, it does not have live data. If use of
the warm site is required because of a disaster, current data will need to be restored to it.
A mobile site is a disaster recovery site on wheels. It can be a hot site, a warm site, or a cold site. It is
usually housed in a trailer and contains the necessary electric power, heat and air conditioning. If it is a
warm or a hot site, it also contains the computer equipment; and if it is a hot site, it also contains current
data.
A mobile site may be company-owned or it may be available to the company on a contracted basis. Several
companies operate mobile recovery centers on a contract basis. In the event of a disaster that destroys
operations facilities, they arrive within hours in a tractor-trailer or van that is fully equipped with their
client’s platform requirements, 50 to 100 workstations, and staffed with technical personnel to assist in
recovery.
Personnel should be trained in emergency procedures and re-training should be done regularly to keep their
knowledge fresh. The disaster recovery plan should be tested periodically by simulating a disaster in order
to reveal any weaknesses in the plan. This test should be conducted using typical volumes, and processing
times should be recorded. The disaster recovery plan should be reviewed regularly and revised when necessary because organizational and operational changes made that have not been incorporated into the
recovery plan could make the recovery plan unusable.
All members of the disaster recovery team should each keep a current copy of the plan at home.
Question 67: A critical aspect of a disaster recovery plan is to be able to regain operational capability as soon as possible. To accomplish this, an organization can have a fully operational facility available that is configured to the user's specific needs. This is best known as a(n):

a) Uninterruptible power system.
b) Parallel system.
c) Cold site.
d) Hot site.

(CMA Adapted)
System Auditing
Assessing Controls by Means of Flowcharts
A flowchart is one of the methods used by an auditor to understand the flow of information through processes and systems. A flowchart depicts the flow of work: how procedures are initiated, authorized, and
processed, and when and how information flows between and among multiple systems. Flowcharts use
geometric symbols to show the steps and arrows to show the sequence of the steps.
A flowchart assists in properly identifying risks at each point in the process or system, identifying controls
necessary to address the risks, and assessing the effectiveness of existing controls. It helps to identify the
preventive controls that have been implemented, the detective controls that are in place, and the corrective controls that are being practiced. It also helps to identify gaps in existing controls and risk areas
that may have not been previously identified.
The main elements shown in a flowchart are:
•
Data sources (where the information comes from).
•
Data destinations (where the information goes).
•
Data flows (how the data gets there).
•
Transformation process (what happens to the data).
•
Data storage (how the data are stored for the long term).
A system flowchart shows the different departments or functions involved in a process horizontally across
the top. It documents the manual processes as well as the computer processes and the input, output and
processing steps. A systems flowchart also clearly shows the segregation of duties in the system.
The flowchart identifies specific control points in the system. A control point is a point in a process where
an error or irregularity is likely to occur, creating a need for control.
A flowchart can be used to determine whether the system ensures that authorizations, verifications, reconciliations, and physical control activities are properly designed, documented, and operating effectively.
For example, in the payroll cycle a flowchart can help to illustrate whether access to employees’ personal
information in the payroll records is properly secured at every point in the process.
The auditor’s understanding of the controls is accomplished through:
•
Inquiries made of employees.
•
Analytical procedures.
•
Observation of processes, or “walk-throughs.”
•
Inspection of documents and documentation.
Finally, the auditor will use a system to assign a value to the probability that the controls will or will not
prevent or detect and correct an error or a fraudulent activity. Professional judgment is necessary at this
step to determine whether the overall assessment represents a pass or a fail of the IT control system.
Computerized Audit Techniques
Internal auditors use the computer to evaluate the processing being done by the computer and the controls
in place. Auditors can use a variety of tools to audit information systems, as follows.
Generalized Audit Software
Generalized audit software (GAS) permits the computer to be used by auditors as an auditing tool. The
computer can select, extract, and process sample data from computer files. Generalized audit software can
be used on mainframe computers and also on PCs. Generalized audit software can check computations,
search files for unusual items, and perform statistical selection of sample data. It can also prepare confirmation requests.
Test Data
To use test data, an auditor prepares input containing both valid and invalid data. The input is processed
manually to determine what the output should look like. The auditor then processes the test data electronically and compares the manually-processed results with the electronically-processed results. Test data
are used not only by auditors but also by programmers to verify the processing accuracy of the programs
they write and the programming changes they make.
Data used as test data might be real or fictitious. Since test data should not actually be processed, it is
important to ensure that the test transactions do not actually update any of the real data files maintained
by the system.
Test data can evaluate only the operation of programs. Other tests that verify the integrity of input and
output are required as well. Furthermore, the test data usually cannot represent all possible conditions that
a computer program might encounter in use. Another limitation is that test data can be run only on a
specific program at a specific time; because the test data must be processed separately from other data,
the auditor cannot be sure that the program being tested is the same program actually being used for
processing.
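As a simple illustration of the test data technique, the sketch below compares manually determined expected results against the results produced by a hypothetical invoice-extension routine. The routine, the test transactions, and the edit check are assumptions for illustration, not part of any real system.

    # Minimal sketch: run valid and invalid test transactions through the program and
    # compare the results with the auditor's manually determined expected results.
    def extend_invoice(quantity, unit_price):
        """Program under test: reject invalid input, otherwise extend the invoice line."""
        if quantity <= 0 or unit_price <= 0:
            return "reject"
        return round(quantity * unit_price, 2)

    test_transactions = [
        {"qty": 10, "price": 4.50,  "expected": 45.00},     # valid
        {"qty": 3,  "price": 19.99, "expected": 59.97},     # valid
        {"qty": -5, "price": 4.50,  "expected": "reject"},  # deliberately invalid
    ]

    for t in test_transactions:
        actual = extend_invoice(t["qty"], t["price"])
        status = "agrees" if actual == t["expected"] else "EXCEPTION - investigate"
        print(t["qty"], t["price"], actual, status)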
Integrated Test Facility
An Integrated Test Facility (ITF) involves the use of test data and also creation of test entities that do not
really exist, such as vendors, employees, products, or customers. The fictitious entities are actually included
in the system’s master files, and the test data are processed concurrently with real transactions. The transactions are processed against live master files that contain the real records as well as the fictitious records.
The major difference between test data and an ITF is that the test data used in an ITF are processed along with real data, whereas test data are not actually processed. No one knows that the
data being processed in the ITF includes the fictitious entries to fictitious records. Therefore, the auditor
can be sure that the programs being checked are the same programs as those being used to process the
real data.
The difficulty with using the ITF approach is that the fictitious transactions must be excluded from the
normal outputs of the system in some way. They may be excluded manually, or they may be excluded by
designing or modifying the application programs. Either way, the fictitious transactions must be identified
by means of special codes so they can be segregated from the real data. Careful planning is required to
make sure that the ITF data do not become mixed in with the real data, corrupting the real data.
If careful planning is done, the costs of using an ITF are minimal, because no special processing is required
and thus no interruption of normal computer activity takes place. Costs are involved in developing an ITF,
both while the application is being developed and as later modifications are made to it. However, once the
initial costs are past, the ongoing operating costs are low.
An ITF is normally used to audit large computer systems that use real-time processing.
Parallel Simulation
Parallel simulation is an audit technique that uses real data rather than simulated data but processes it
through test or audit programs. The output from the parallel simulation is compared with the output from
the real processing. Parallel simulation is expensive and time-consuming and is usually limited to sections
of an audit that are of major concern and are important enough that they require an audit of 100% of the
transactions.
Since parallel simulation is done using test programs, the parallel simulation can be done on a computer
other than the one used for the real processing. However, the auditor should make sure that the production output used for the comparison was produced by the same system that is used for the real processing all the time.
Embedded Audit Routines
Embedded audit routines involve modifying a regular production program by building special auditing routines into it so that transaction data can be analyzed. Embedded audit data collection is one type of
embedded audit routine, and it uses specially programmed modules embedded as inline code within the
regular program code. The embedded routine selects and records data as it is processing the data for
normal production purposes, for later analysis and evaluation by an auditor.
Transactions are selected by the embedded audit routine according to auditor-determined parameters for
limits and reasonableness. Transactions that violate those parameters are written to a file as exceptions.
Alternatively, transactions might be selected randomly. If transactions are selected randomly, the objective
is to create a statistical sample of transactions for auditing.
The approach that selects transactions that violate established limits is called a system control audit
review file (SCARF). The approach that selects random transactions is called a sample audit review file
(SARF).
It is easier to develop embedded audit routines when a program is initially developed than to add them
later.
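A SCARF-type embedded audit routine can be sketched as follows. The limit, the record layout, and the processing logic are hypothetical assumptions for illustration; a real routine would be inline code inside the production program.

    # Minimal sketch: during normal processing, transactions that violate auditor-set
    # limits are also written to an exception file (the SCARF file) for later review.
    AUDIT_LIMIT = 10000          # auditor-determined reasonableness limit
    scarf_file = []              # stands in for the system control audit review file

    def process_payment(payment):
        # ... normal production posting logic would run here ...
        if payment["amount"] > AUDIT_LIMIT or payment["vendor"] is None:
            scarf_file.append(payment)      # captured for the auditor's later analysis
        return payment

    for p in [{"id": 1, "vendor": "V-204", "amount": 2500},
              {"id": 2, "vendor": None,    "amount": 800},
              {"id": 3, "vendor": "V-310", "amount": 45000}]:
        process_payment(p)

    # scarf_file now holds payments 2 and 3 as exceptions for the auditor to review.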
Extended Records and Snapshot
Many different processing steps may be combined in a program, and therefore the audit trail for a single
transaction may exist in several different files. Extended records refer to modifying a program to tag
specific transactions and save all their processing steps in an extended record, permitting an audit trail to
be reconstructed from one file for those transactions. Transactions might be selected randomly, or they
might be selected as exceptions to edit tests.
The snapshot technique “takes a picture” of a transaction as it is processed. Program code is added to the
application to cause it to print out the contents of selected memory areas when the snapshot code is
executed. A snapshot is used commonly as a debugging technique. As an audit tool, snapshot code can be
used for transactions that exceed predetermined limits.
Extended records and snapshot are very similar. The difference is that snapshot generates a printed audit
trail, whereas extended records incorporates snapshot data in the extended record file rather than on hard
copy.
Tracing
Tracing provides a detailed audit trail of all the instructions executed by a program. A single trace can
produce thousands of output records, so the auditor must take care to limit the number of transactions
tagged for tracing. Tracing might be used to verify that internal controls in an application are being executed
as the program is processing data, either live data or test data.
A trace may also reveal sections of unexecuted program code, which can indicate incorrect or unauthorized
modifications made to the program.
Mapping
Mapping involves using special software to monitor the execution of a program. The software counts the
number of times each program statement50 in the program is executed. Mapping can help determine
whether program application control statements that appear in the source language listing of the program
are actually executed when the program runs and have not been bypassed. It can also locate “dead” program code and can flag code that may be used fraudulently.
Mapping can be used with a program running test data. The output of the mapping program can indicate whether the program contains unused code. All unused code is investigated, and its purpose is evaluated to determine whether it should stay in the program or be removed.
Mapping originally was a technique used for program design and testing, but auditors also use it to determine whether program statements are being executed.
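The idea behind mapping (counting how many times each statement executes so that unused code stands out) can be sketched in Python with the standard sys.settrace hook. The traced function and the test calls below are illustrative assumptions, not part of any audit tool named in this text.

    # Minimal sketch: count line executions to reveal statements that are never run.
    import sys
    from collections import Counter

    line_counts = Counter()

    def tracer(frame, event, arg):
        if event == "line":
            line_counts[(frame.f_code.co_name, frame.f_lineno)] += 1
        return tracer

    def apply_discount(amount, is_preferred):
        if is_preferred:
            return amount * 0.9
        return amount          # never reached if every test uses preferred customers

    sys.settrace(tracer)
    apply_discount(100, True)
    apply_discount(250, True)
    sys.settrace(None)

    # Lines of apply_discount that never appear in line_counts were not executed
    # ("dead" code under these test data) and should be investigated.
    print(line_counts)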
50 A “program statement” is an instruction to the computer. Each programming language has its own acceptable format for program statements. The program statements are translated by the compiler into object code, the machine language that the processor can understand.
Section F – Technology and Analytics
Introduction to Technology and Analytics
Section F constitutes 15% of the CMA Part 1 Exam. Section F includes Information Systems, Data Governance, Technology-enabled Finance Transformation, and Data Analytics.
•
The focus of Information Systems is on accounting information systems, inputs to them, and the
use of their outputs.
•
Data Governance involves the management of an organization’s data assets and data flows. The
objective of data governance is to enable reliable and consistent data so that management is able
to properly assess the organization’s performance and make decisions.
•
Technology-enabled Finance Transformation covers issues of importance for management accountants such as robotic process automation, artificial intelligence, cloud computing, and blockchains.
•
Data Analytics also covers current technological and analytical issues that management accountants need to be familiar with. Business intelligence, data mining in large datasets, the use of
statistics and regression analysis, and visualization of data by charting are covered.
F.1. – Information Systems
The Value Chain and the Accounting Information System
The goal of any organization is to provide value to its customers. A firm’s value chain is the set of business
processes it uses to add value to its products and services.
The more value a company creates and provides to its customers, the more its customers will be willing to
pay for its products or services, and the more likely they are to keep buying those products or services. A
business will be profitable if the value it creates for its customers is greater than its cost of producing the
products and services it offers. All of the activities in the value chain contain opportunities to increase the
value to the customer or to decrease costs without decreasing the value the customer receives.
The value chain was discussed earlier in this volume. To review, the value chain as envisioned by Michael
Porter looks like the following:
(Diagram: Porter’s value chain. Support activities: Infrastructure, Human Resources Management, Technological Development, and Procurement. Primary activities: Inbound Logistics, Operations, Outbound Logistics, Marketing and Sales, and Service. Value added minus cost equals margin.)
Though various organizations may view their value chains differently depending on their business models,
the preceding graphic encompasses the business processes used by most organizations. The primary activities include the processes the organization performs in order to create, market, and deliver products
and services to its customers and to support the customers with service after the sale. The support activities
make it possible for the organization to perform the primary activities.
An organization’s accounting information system (AIS) interacts with every process in the value chain. The
AIS adds value to the organization by providing accurate and timely information so that all of the value
chain activities can be performed efficiently and effectively. For example:
•
Just-in-time manufacturing and raw materials inventory management is made possible by an accounting information system that provides up-to-date information about inventories of raw
materials and their locations.
•
Sales information can be used to optimize inventory levels at retail locations.
•
An online retailer can use sales data to send emails to customers suggesting other items they
might be interested in based on items they have already purchased.
•
Allowing customers to access accounting information such as inventory levels and their own sales
orders can reduce costs of interacting with customers and increase customer satisfaction.
•
A variance report showing a large unfavorable variance in a cost indicates that investigation and
possibly corrective action by management is needed.
•
An AIS can provide other information that improves management decision-making. For instance:
o
It can store information about the results of previous decisions that can be used in making
future decisions.
o
The information provided can assist management in choosing among alternative actions.
The Supply Chain and the Accounting Information System
Parts of a company’s value chain are also parts of its supply chain. A company’s supply chain describes
the flow of goods, services, and information from the suppliers of materials and services to the organization
all the way through to delivery of finished products to customers. In contrast to the value chain, the activities of a company’s supply chain also take in outside organizations.
Nearly every product that reaches an end-user represents the coordinated efforts of several organizations.
Suppliers provide components to manufacturers who in turn convert them into finished products that they
ship to distributors for shipping to retailers for purchase by the consumer. All of the organizations involved
in moving a product or service from suppliers to the end-user (the customer) are referred to collectively as
the supply chain.
A well-designed accounting information system can improve the efficiency and effectiveness of a company’s
supply chain, thus enhancing the company’s profitability.
Automated Accounting Information Systems (AIS)
Automated accounting information systems are computer-based systems that transform accounting data
into information using the fundamental elements of paper-based accounting systems, but with electronic
processing.
Elements of Automated Accounting Information Systems
Any financial accounting information system, automated or otherwise, captures transactions or business
events that affect an entity’s financial condition. The transactions are recorded in journals and ledgers.
Journals are used to record accounting transactions. A journal contains a chronological record of the events
that have impacted the business’s finances. Journal entries show all the information about a transaction,
including the transaction date, the accounts debited and credited, and a brief description. Journal entries
are then posted to the general ledger.
The general ledger contains a separate account for each type of transaction. The list of general ledger
accounts used by an organization is called its chart of accounts. In a paper-based accounting system,
only an account name is needed, but in an automated accounting information system, the accounts need
to have numbers so that input is done in a consistent manner.
An automated accounting information system stores information in files. Master files store permanent
information, such as general ledger account numbers and history or customer account numbers and historical data for each customer. Transaction files are used to update master files, and they store detailed
information about business activities, such as detail about sales transactions or purchase of inventory. For
each general ledger account number, the general ledger master file stores the transactions that have adjusted that account’s balance along with some basic information such as date and source. The detail needed
to maintain an audit trail is stored in transaction files. The detail in the transaction files may be needed in
future years if it becomes necessary to adjust or restate prior period financial statements, so it is a good
idea to maintain the transaction files for a number of years. After the transaction details are no longer
needed, though, the transaction files can be deleted.
In an automated accounting information system, accounts in the general ledger chart of accounts are numbered using block codes. Block codes are sequential codes that have specific blocks of numbers reserved
for specific uses. For example, a general ledger account numbering system is used to organize the accounts
according to assets, liabilities, equity, incomes, and expenses. An entity can use any numbering scheme it
wants, but one method of organizing account numbers uses account numbers beginning with “1” for assets,
“2” for liabilities, “3” for equity, “4” for incomes, and “5” for expenses. Thus, if 4-digit account numbers are
being used, then all asset accounts would be in the 1000 block, all liability accounts in the 2000 block, and
so forth. The numbers following the number in the first position subdivide the types of accounts more finely.
For example, in the asset section, current asset accounts might begin with “11” while noncurrent asset
accounts begin with “12.” Within the current asset section, then, cash might be 1110, accounts receivable
might be 1120, and inventory might be 1130.
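A minimal sketch of the block-coding scheme just described follows. The account numbers and names are illustrative only; as noted above, an entity can use any numbering scheme it wants.

    # Minimal sketch: the first digit of a 4-digit account number identifies the block.
    CHART_OF_ACCOUNTS = {
        1110: "Cash",
        1120: "Accounts receivable",
        1130: "Inventory",
        2100: "Accounts payable",
        3100: "Common stock",
        4100: "Sales revenue",
        5100: "Advertising expense",
    }

    BLOCKS = {1: "Asset", 2: "Liability", 3: "Equity", 4: "Income", 5: "Expense"}

    def classify(account_number):
        return BLOCKS[account_number // 1000]   # 1110 // 1000 == 1, i.e., an asset

    for number, name in CHART_OF_ACCOUNTS.items():
        print(number, name, "->", classify(number))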
Special journals are used for specific kinds of transactions, and in a computerized system, the journals are
known as modules. For example, an order entry module may be used to record sales. When a credit sale
is recorded in the order entry module, the order entry module updates the customer’s account in the accounts receivable module (to add the charge to the account); it updates the sales module (to increase the
sales revenue); and it updates the inventory module (to reduce the items on hand by the items sold). It
also updates the general ledger to record the increase to accounts receivable, the increase to sales revenue,
the decrease in the value of inventory by the cost of the items sold, and the equivalent increase in cost of
goods sold expense. If sales tax or value-added tax is invoiced, those items are recorded as well in the
general ledger. When a customer pays, the receipt is recorded in the cash receipts module, which updates
the customer’s account in the accounts receivable module at the same time. The journal transactions update
the general ledger to record the increase to the cash account and the decrease to the accounts receivable
account.
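The way one credit sale recorded in the order entry module ripples through the other modules and the general ledger can be sketched as follows. The modules are reduced to simple dictionaries, and the item, cost, and price are assumptions for illustration only.

    # Minimal sketch: recording one credit sale updates accounts receivable, inventory,
    # and the general ledger (AR, sales revenue, inventory, and cost of goods sold).
    general_ledger = {"Accounts receivable": 0.0, "Sales revenue": 0.0,
                      "Inventory": 10000.0, "Cost of goods sold": 0.0}
    accounts_receivable = {}                                   # customer subledger
    inventory = {"BLENDER": {"on_hand": 50, "unit_cost": 18.00}}

    def record_credit_sale(customer, item, qty, unit_price):
        revenue = qty * unit_price
        cost = qty * inventory[item]["unit_cost"]
        accounts_receivable[customer] = accounts_receivable.get(customer, 0) + revenue
        inventory[item]["on_hand"] -= qty                      # reduce items on hand
        general_ledger["Accounts receivable"] += revenue
        general_ledger["Sales revenue"] += revenue
        general_ledger["Inventory"] -= cost
        general_ledger["Cost of goods sold"] += cost

    record_credit_sale("Ronnie Retailer", "BLENDER", qty=10, unit_price=29.95)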
In an automated accounting system, transactions are recorded electronically. They may be input by employees, but they may just as easily be recorded automatically. For example, a transaction may be created
automatically in the order entry module when a customer places an order over the Internet.
When a transaction is created in a module, the input includes a transaction code that identifies, for
example, a transaction in the order entry module as a sale transaction. The code causes the data entered
with that transaction code to be recorded in the other modules and in the proper general ledger accounts.
The specific transaction code for a sale may be set as the default code in the transaction code input field
within the order entry module, although the code can be changed if instead, for example, a sales return is
being processed.
Codes, both numeric and alphanumeric, are used elsewhere in an automated AIS, as well. For example,
sequence codes are used to identify customer sales invoices created in the order entry module and may
be used for new customer accounts in the accounts receivable master file. Sequence codes are simply
assigned consecutively by the accounting system. For example, when a new customer account is added to
the accounts receivable master file, the AIS may automatically assign the next consecutive unused account
number to the new account. The number has no other meaning.
In a responsibility accounting system, a code is used to identify the responsibility center that is the source
of the transaction, and that code is part of transactions input to the AIS, as well.
Example: General ledger expense account numbers used for advertising expenses include a code for
the department that initiated the cost. The expense account number identifies the type of advertising
medium, for example television advertising, followed by codes indicating the type of advertising expense
and the product advertised.
Thus, a production expense for a television commercial advertising a specific kitchen appliance such as
a blender is coded to the television advertising account for commercial production for blenders. As the
cost is recorded in the AIS, the cost is directed to the correct responsibility center code and expense
account, as follows:
Department code: 120 = Advertising Department
Account number segments: 5 = Expense; 16 = Advertising; 2 = Television; 7 = Production Costs; 31 = Blender
Full coding of the charge: department 120, account number 5-16-2-7-31
Thus, the full expense account number charged is 5162731 in department 120. That account number in
that responsibility center accumulates only expenses for television advertising production costs for
blender advertising that have been committed to by the advertising department.
As a result, the different types of advertising expenses are clearly delineated in the general ledger according to responsibility center, type of expense, advertising medium, type of cost, and product
advertised, enabling easier analysis of the data.
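The coded account number in the example can be taken apart programmatically, which is what makes the analysis described above easy. The following sketch simply decodes 5162731 for department 120 using the segment meanings from the example; the lookup tables are limited to this one illustration.

    # Minimal sketch: split the coded account number 5162731 (department 120) back
    # into its segments so expenses can be analyzed by each dimension.
    SEGMENT_MEANINGS = {
        "account type":       {"5": "Expense"},
        "expense type":       {"16": "Advertising"},
        "advertising medium": {"2": "Television"},
        "type of cost":       {"7": "Production costs"},
        "product":            {"31": "Blender"},
    }

    def decode(department, account_number):
        d = str(account_number)                       # "5162731"
        return {
            "department": department,                 # 120 = Advertising Department
            "account type": SEGMENT_MEANINGS["account type"][d[0]],
            "expense type": SEGMENT_MEANINGS["expense type"][d[1:3]],
            "advertising medium": SEGMENT_MEANINGS["advertising medium"][d[3]],
            "type of cost": SEGMENT_MEANINGS["type of cost"][d[4]],
            "product": SEGMENT_MEANINGS["product"][d[5:7]],
        }

    print(decode(120, 5162731))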
Output of an Automated Accounting Information System
The data collected by an AIS is reported on internal reports. The internal reports are used by accountants
to prepare adjusting entries, by management for analysis and decision-making, and by both accountants
and management to produce the external financial reports needed.
An AIS needs to be designed so that it will be able to produce the reports that users will need. In the
preceding example, for instance, before the expense codes were developed, management decided it needed
to know not only total advertising expense, but also how much was spent for advertising in each advertising
medium, within each medium how much was spent for advertising production and how much for media
charges, and within those subdivisions, how much was spent to advertise each product.
Reports from an AIS may be paper reports, screen reports, or reports in various other forms such as audio
reports. They could be regularly scheduled reports or they could be produced on demand. Good reports
should have the following characteristics:
1) The report should include a date or dates. For example, a current customer list should show the date the report is "as of." A balance sheet should also show the "as of" date of the report. An income statement for a period of time such as the year to date should indicate the period covered by the report, such as "January 1 through April 30, 20X9."
2) The report should be consistent over time so managers can compare information from different time periods, consistent across segments so management can compare segment performance, and consistent with generally accepted accounting principles so the report can be understood and used.
3) The report should be in a convenient format and should contain useful information that is easy to identify. Summary reports should contain financial totals, and comparative reports should provide related numbers, such as actual versus budgeted amounts, in adjacent columns.
Accounting Information System Cycles
Transaction cycles are grouped business processes for which the transactions are interrelated. They include:
•
Revenue to cash cycle.
•
Purchasing and expenditures cycle.
•
Production cycle.
•
Human resources and payroll cycle.
•
Financing cycle.
•
Fixed asset cycle (property, plant, and equipment).
•
General ledger and reporting systems.
Revenue to Cash Cycle
The revenue to cash cycle involves activities related to the sale of goods and services and the collection of
customers’ cash payments. The cycle begins with a customer order and ends with the collection of the cash
from the customer.
To include the collection of customers’ cash payments for sales, the company needs to maintain accurate
records of customers’ outstanding invoices. Thus, the accounts receivable subsidiary ledger is an important
function of the AIS. Customer records also include payment history, assigned credit limit, and credit rating.
The accounting information system is used for:
• Tracking sales of goods and services to customers.
• Recording the fulfillment of customer orders.
• Maintaining customer records.
• Billing for goods and services.
• Recording payments collected for goods and services provided.
• Forecasting sales and cash receipts using the outputs of the AIS.
The process of entering an order and sale into the AIS also updates the inventory module so that items
sold are deducted from on-hand inventory and their costs charged to cost of goods sold.
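Example (illustrative sketch): The following Python fragment shows, in highly simplified form, how an order entry might reduce on-hand inventory and generate the related journal entries. The function name post_sale and the account names are hypothetical and are not drawn from any particular AIS.

    # Illustrative only: posting a sale so inventory is reduced and cost of goods sold is recorded.
    def post_sale(on_hand, quantity_sold, unit_price, unit_cost):
        """Return updated on-hand inventory and the journal entries for one sale."""
        if quantity_sold > on_hand:
            raise ValueError("Insufficient inventory; the item must be backordered")
        on_hand -= quantity_sold                      # deduct the items sold from inventory on hand
        revenue = quantity_sold * unit_price
        cogs = quantity_sold * unit_cost
        journal_entries = [
            ("Accounts Receivable", "debit",  revenue),
            ("Sales Revenue",       "credit", revenue),
            ("Cost of Goods Sold",  "debit",  cogs),
            ("Inventory",           "credit", cogs),
        ]
        return on_hand, journal_entries

    # 5 units sold at $100 each that cost $60 each, with 20 units on hand:
    remaining, entries = post_sale(on_hand=20, quantity_sold=5, unit_price=100, unit_cost=60)
    print(remaining)          # 15
    for entry in entries:
        print(entry)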
The AIS should include a means to invoice customers as products are shipped. The production of the invoice
and the production of the packing slip should occur simultaneously, and nothing should be shipped without
a packing slip.
The AIS needs to be able to produce analytical reports of sales orders, sales terms, and payment histories
for customers for use in predictive analytics. Sales orders can be used to predict future sales, and the sale
terms can be used to make cash flow forecasts.
Activities in and inputs to the revenue to cash cycle include:
• Receipt of customer orders and creation of sales orders. The sales order serves as input to the AIS and contains full customer information, payment method, item or items sold, prices, and terms.
• Inventory is checked to determine whether the item or items ordered are in stock. A process needs to be in place to track and follow up on backordered items.
• An order confirmation may be emailed to the customer, particularly for orders received on the Internet.
• If the order includes a request for credit, the customer’s credit is checked. A process needs to be in place to screen for fraudulent orders, as well, and much of that can be automated in the AIS.
• If the order is to be paid by credit card, preliminary credit card authorization is obtained for the amount due.
• Input is created to the AIS for the invoice and the packing slip. (If the order was received over the Internet, the input to the AIS may be automatically created.)
• Notice is sent to the warehouse to print the packing slip, pick the order, and prepare it for shipping. For a service, the service is scheduled.
• Shipping information is processed, the package is moved to the shipping dock to be picked up by the carrier, or the service is provided. For a credit card sale, the credit card charge is released. For a sale made on credit, the invoice is sent to the customer.
• The accounts receivable module, inventory module, and general ledger are updated and reports are processed.
• A process needs to be in place to handle approval and processing of returns. If a return is approved, the process needs to include making sure the returned item is physically received and, if it can be resold, that it is physically delivered back to the warehouse and added to inventory on hand in the inventory module at the correct cost. A credit is created to the customer’s account and an appropriate journal entry is made to the general ledger.
• When payment is received for a credit sale, a remittance advice (usually detached from the invoice and sent back by the customer) should accompany the customer’s payment. The remittance advice is used to input the information about the payment received to the correct customer account in the accounts receivable module, including the amount received and the invoice or invoices the payment is to be applied to. The AIS also updates the cash receipts journal and the general ledger to record the increase to cash and the decrease to accounts receivable.
Inputs to the revenue cycle can be made by desktop computer, but they may also be voice inputs, input
with touch-tone telephones, or input by wireless capability using tablet computers. Sales made on the
Internet may automatically update the subledgers and the general ledger in the AIS. Outside sales people
may use laptops, mobile devices, portable bar code scanners, and other types of electronic input devices
to enter sales orders.
Outputs of the revenue to cash cycle include:
• Internal management reports such as sales reports and inventory reports.
• Customer statements summarizing outstanding sales invoices by customer, payments received, and balance owed.
• Customer aging reports, showing the total accounts receivable outstanding balance broken down according to ranges of time outstanding, such as amounts outstanding for 0 to 30 days, 31 to 60 days, and so forth. (A simplified aging computation is sketched after this list.)
• Collection reports, showing specific accounts that need follow-up for overdue balances.
• Information from sales reports, receivables aging reports, and receipts reports is used as input to a cash receipts forecast.
• Various outputs are used in preparing closing entries and producing external financial statements. For example, the aging report and other information is used in estimating credit loss expense for the period.
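Example (illustrative sketch): The following Python fragment groups outstanding invoices into the aging buckets described above. The invoice data and bucket boundaries are made up for illustration.

    from datetime import date

    def age_receivables(invoices, as_of):
        """invoices: list of (customer, amount, invoice_date); returns the total outstanding in each bucket."""
        buckets = {"0-30": 0.0, "31-60": 0.0, "61-90": 0.0, "over 90": 0.0}
        for customer, amount, invoice_date in invoices:
            days = (as_of - invoice_date).days          # days outstanding as of the report date
            if days <= 30:
                buckets["0-30"] += amount
            elif days <= 60:
                buckets["31-60"] += amount
            elif days <= 90:
                buckets["61-90"] += amount
            else:
                buckets["over 90"] += amount
        return buckets

    open_invoices = [
        ("Customer A", 1200.00, date(2019, 4, 12)),
        ("Customer B",  450.00, date(2019, 2, 28)),
        ("Customer C",  975.00, date(2019, 1, 15)),
    ]
    print(age_receivables(open_invoices, as_of=date(2019, 4, 30)))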
Purchasing and Expenditures Cycle
The purchasing and expenditures process involves obtaining items and services in a timely manner at the
lowest price consistent with the quality required, managing the inventory, and seeing that payment is made
for items purchased and services received. The responsibilities of the purchasing function include:
• Locating and approving reputable vendors who offer quality goods and services at reasonable prices. Vendor shipping and billing policies, discount terms, and reliability are important concerns in approving vendors.
• Maintaining relationships with suppliers and other partners in the company’s supply chain.
The purchasing and expenditures cycle begins with a request for goods or services and ends with payment
to the vendor for the goods or to the provider of the service.
The accounting information system is used for:
• Tracking purchases of goods or services.
• Tracking amounts owed and making timely and accurate vendor payments.
• Maintaining vendor records and a list of authorized vendors.
• Managing inventory to ensure that all goods purchased are received, properly recorded in the AIS, and properly dispensed from inventory.
• Forecasting purchasing needs and cash outflows.
The inventory control function in the AIS interfaces with production departments, purchasing, vendors, and
the receiving department.
Activities and inputs to the purchasing and expenditures cycle include:
• An internal purchase requisition is prepared by the requesting manager or department. Requests for raw materials may be automated in the AIS when inventories fall below pre-determined levels, or employees may key in the requests.
• The request is transmitted electronically to the purchasing department.
• The purchasing department selects the vendor, prepares input to the AIS for a purchase order, and transmits the purchase order to the vendor.
• The purchasing department also transmits the information from the purchase order to the company’s receiving department so it will have the record of the order when the order is delivered. However, as a control, the information the receiving department has access to should not include the quantities ordered of each item. Thus, the receiving clerk must actually count and record the items received rather than simply assuming the quantities ordered were the quantities received.
• When items are received, the receiving department creates an electronic receiving report. The receiving report may be integrated in the AIS with the purchase order input as long as the receiving personnel cannot access the quantities ordered. In other words, the receiving clerk may create the receiving report by accessing each item on the purchase order electronically and inputting the quantity counted that was received, but the receiving clerk should not be able to see the quantity ordered.
• Recording the receipt of the items electronically in the AIS updates the inventory on hand in the inventory module, increases inventory in the general ledger by the amount of the cost, and creates a payable in the accounts payable module and in the general ledger.
Note: Items shipped FOB shipping point belong to the purchaser as soon as they are shipped.
Items shipped FOB shipping point that have been shipped but not yet received as of a financial
statement date should be accrued as payables and the items should be included in ending inventory. The AIS should have a means of properly handling those transactions.
• The invoice received from the vendor is compared with the purchase order and the receiving report. A packing slip and a bill of lading from the freight carrier may also be received and they should be included in the review.
• A process should be in place to investigate any differences in the items, quantity, and prices between and among the purchase order, receiving report, invoice, packing slip, and bill of lading.
• The invoice information is input into the accounts payable module to complete the information that was added to the accounts payable module by the receiving report.
• The AIS should include controls that limit the potential for duplicate invoices to be paid. For example, if an invoice is input that matches one from the same vendor with the same invoice number or amount that has already been paid, it should be flagged for investigation before payment.
• If the item “received” was a service, approval should be received from a manager above the level of the requesting manager before payment is prepared and sent. The higher-level approval is a control to limit the opportunity for a manager to create a fictitious company, give fictitious service business to that company, and approve payment to be sent to that company (that is actually sent to him- or herself).
• If everything on the purchase order was received and if the items, quantities, and prices on the invoice match the items, quantities, and prices on the purchase order, payment is prepared in the AIS and sent. The payment may be sent as a paper check printed by the AIS. However, payments are increasingly being sent by electronic funds transfer (EFT) through the automated clearing house (ACH) whereby the funds are deducted from the purchaser’s bank account and sent electronically to the vendor.
• The accounts payable module and the general ledger are updated and reports are processed.
• Petty cash or procurement cards may be used for smaller purchases.
Inputs to the purchasing and expenditures cycle can be made with desktop computers, bar code scanners,
radio or video signals, magnetic ink characters on checks, scanned images, or entered into a tablet computer and transmitted wirelessly.
Outputs of the purchasing and expenditures cycle include:
• The check (if a paper check is used) or payment advice (for an ACH transfer) and the payment register.
• A cash requirements forecast.
• Discrepancy reports that note differences in items, quantities, or amounts on the purchase order, the receiving report, and the vendor’s invoice or duplicate invoice numbers or duplicate amounts to a vendor.
As noted previously, the discrepancy report is needed to prevent authorization of payment to a vendor until
any differences between or among the items, quantities, or prices on the purchase order, the receiving
report, and the purchase invoice or any potential duplicate invoices have been investigated and resolved.
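Example (illustrative sketch): The following Python fragment performs a simple three-way match of the purchase order, receiving report, and vendor invoice and also flags a possible duplicate invoice. The record layouts and values are hypothetical.

    def review_invoice(purchase_order, receiving_report, invoice, paid_invoices):
        """Return a list of discrepancies; an empty list means the invoice can be approved for payment."""
        discrepancies = []
        if invoice["quantity"] != purchase_order["quantity"]:
            discrepancies.append("Invoice quantity differs from the purchase order")
        if invoice["quantity"] != receiving_report["quantity_received"]:
            discrepancies.append("Invoice quantity differs from the quantity received")
        if invoice["unit_price"] != purchase_order["unit_price"]:
            discrepancies.append("Invoice price differs from the purchase order price")
        for paid in paid_invoices:                       # duplicate check: same vendor, same invoice number
            if paid["vendor"] == invoice["vendor"] and paid["invoice_number"] == invoice["invoice_number"]:
                discrepancies.append("Possible duplicate of an invoice already paid")
        return discrepancies

    po      = {"vendor": "V-100", "quantity": 50, "unit_price": 4.00}
    receipt = {"quantity_received": 48}
    inv     = {"vendor": "V-100", "invoice_number": "8831", "quantity": 50, "unit_price": 4.00}
    print(review_invoice(po, receipt, inv, paid_invoices=[]))
    # ['Invoice quantity differs from the quantity received']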
Using unpaid vendor invoices, outstanding purchase orders, and reports of items received for which the
invoices have not yet been received, the AIS can predict future cash payments that will be needed and the
dates they will be needed.
Production Cycle
The production cycle involves the conversion of raw materials into finished goods, and the goal is to do so as efficiently as possible. Computer-aided design (CAD) technology and robotics are often used.
The production process begins with a request for raw materials for the production process. It ends with the
completion of manufacturing and the transfer of finished goods inventory to warehouses.
The accounting information system is used for:
• Tracking purchases of raw materials.
• Monitoring and controlling manufacturing costs.
• Managing and controlling inventories.
• Controlling and coordinating the production process.
• Providing input for budgets.
• Collecting cost accounting data for operational managers to use in making decisions.
• Providing information for manufacturing variance reports, usually using job costing, process costing, or activity-based costing systems.
Activities of and inputs to the production cycle include:
• Production managers issue materials requisition forms when they need to acquire raw material from the storeroom.
• Physical inventory is taken periodically and reconciled to inventory records. The number of items physically counted is input to the raw materials inventory module for reconciliation, and the accounting records are updated.
• If the level of raw materials inventory in the storeroom falls below a predetermined level, a purchase requisition is issued to the purchasing department. The issuance of the purchase requisition may be automated in the accounting information system so that it occurs automatically when the inventory reaches the reorder point. (A simplified automated reorder check is sketched after this list.)
• A bill of materials for each product is used to show the components needed and the quantities of each needed to manufacture a single unit of product.
• The master production schedule shows the quantities of goods needed to meet anticipated sales and when the quantities need to be produced in order to fulfill sales projections and maintain desired inventory levels.
• Labor time needs to be tracked for costing purposes. Job time cards may be used to capture the distribution of labor to specific orders.
• Enterprise Resource Planning systems are used in most large- and medium-sized firms to collect, store, manage, and interpret data across the organization. ERP systems can help a manufacturer to track, monitor, and manage production planning, raw materials purchasing, and inventory management, and are also integrated with tracking of sales and customer service.
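Example (illustrative sketch): The following Python fragment shows the kind of automated reorder check described above, generating a purchase requisition for every raw material that has fallen to or below its reorder point. The item data is hypothetical.

    raw_materials = [
        {"item": "Steel sheet", "on_hand": 120, "reorder_point": 200, "reorder_qty": 500},
        {"item": "Blue paint",  "on_hand": 340, "reorder_point": 150, "reorder_qty": 300},
    ]

    def generate_requisitions(materials):
        """Create a purchase requisition for every item at or below its reorder point."""
        requisitions = []
        for material in materials:
            if material["on_hand"] <= material["reorder_point"]:
                requisitions.append({"item": material["item"], "quantity": material["reorder_qty"]})
        return requisitions

    print(generate_requisitions(raw_materials))
    # [{'item': 'Steel sheet', 'quantity': 500}]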
Data entry is accomplished with automated technology such as bar code scanners, RFID (radio frequency
identification systems), GPS locators, and other input technologies that can reduce input errors and support
fast and accurate data collection for production.
RFID can be used to track components to products and the products themselves along the production
process and the supply chain. Tags containing electronically-stored information are attached to components
and products.
Example: An RFID tag attached to an automobile can track its progress through the assembly line, and
RFID tags attached to individual components of the automobile can be used to make sure everything is
assembled properly.
Outputs of the production cycle include:
• Materials price lists showing prices paid for raw materials, kept up to date by the purchasing department and used by cost accountants to determine actual and standard costs for production.
• Usage reports showing usage of raw materials by various production departments, used by cost accountants and managers to detect waste by comparing actual raw material usage to standard raw material usage for the actual production.
• Inventory reconciliations comparing physical inventories with book balances.
• Inventory status reports that enable purchasing and production managers to monitor inventory levels.
• Production cost reports detailing actual costs for cost elements, production processes, or jobs. These reports are used by cost accountants to calculate variances for materials, labor, and overhead.
• Manufacturing status reports, providing managers with information about the status of specific processes or jobs.
• Reports output from the production process are used in developing financial statements.
Human Resources and Payroll Cycle
The human resources management and payroll cycle involves hiring, training, paying, and terminating
employees.
The accounting information system is used for:
• Recording the hiring and training of employees.
• Processes associated with employee terminations.
• Maintaining employee earnings records.
• Complying with regulatory reporting requirements, including payroll tax withholdings.
• Reporting on payroll benefit deductions such as for pensions or medical insurance.
• Making timely and accurate payroll payments to employees, payments for benefits such as pensions and medical insurance, and payroll tax payments to taxing authorities.
Inputs to the human resource management and payroll cycle include forms sent by the human
resources department to payroll processing:
• Personnel action forms documenting hiring of employees and changes in employee pay rates and employee status.
• Time sheets or time cards that record hours worked.
• Payroll deduction authorization forms.
• Tax withholding forms.
Outputs of the human resource management and payroll cycle include:
• Employee listings showing current employees and information about them such as home addresses.
• Payroll payment registers listing gross pay, deductions, and net pay for each employee, used to make journal entries to the general ledger. The journal entries may be automated in the AIS. (A simplified payroll register computation is sketched after this list.)
• Paychecks or, for direct deposits to employees’ bank accounts, electronic funds transfers.
• Deduction reports containing company-wide or individual segment employee deduction information.
• Payroll summaries used by managers to analyze payroll expenses.
• Tax reports, used for remitting payroll taxes to the taxing authorities, both amounts withheld from employees’ pay and employer taxes.
• Reporting to employees the information on their income and taxes paid that they need for their personal tax reporting.
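Example (illustrative sketch): The following Python fragment computes a simple payroll payment register with gross pay, deductions, and net pay for each employee. The pay rates, hours, and deduction amounts are made up; a real payroll calculation would be considerably more involved.

    employees = [
        {"name": "A. Jones", "hours": 80, "rate": 25.00, "tax_rate": 0.20, "benefits": 50.00},
        {"name": "B. Smith", "hours": 75, "rate": 30.00, "tax_rate": 0.22, "benefits": 60.00},
    ]

    def payroll_register(staff):
        register = []
        for e in staff:
            gross = e["hours"] * e["rate"]
            withholding = round(gross * e["tax_rate"], 2)    # payroll tax withheld from the employee
            deductions = withholding + e["benefits"]         # total deductions
            register.append({"name": e["name"], "gross": gross,
                             "deductions": deductions, "net": gross - deductions})
        return register

    for line in payroll_register(employees):
        print(line)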
The payroll process is outsourced by many companies.
Financing Cycle
The financing process is responsible for acquiring financial resources by borrowing cash or selling stock and
for investing financial resources. It involves managing cash effectively, minimizing the cost of capital,51
investing in a manner that balances risk and reward, and making cash flow projections in order to make
any adjustments necessary to have cash on hand when it is needed for operations, investing, and repaying
debt.
Minimizing the cost of capital requires determining how much capital should be in the form of debt and how
much in the form of equity. Financial planning models are often used by management to help them
determine the optimum strategies for acquiring and investing financial resources.
Managing cash effectively includes collecting cash as quickly as possible, and many firms use lockbox arrangements to reduce the collection time for payments received.52 Managing cash effectively also means
paying invoices as they are due and taking advantage of discounts for prompt payment when the discounts
offered are favorable. Electronic funds transfer through the automated clearing house is often used to pay
accounts payable and also to pay employees by depositing the funds directly to the payees’ bank accounts.
Use of electronic funds transfer enables a business to closely control the timing of funds being deducted
from its operating and payroll accounts.
Projecting cash flows involves using a cash receipts forecast—an output of the revenue cycle—and cash
disbursements forecasts—outputs of the purchasing and expenditures and the human resources and payroll
cycles.
51 “Capital” as used in the context of the “cost of capital” is the term used for the long-term funding used by firms that is supplied by its lenders or bondholders and its owners (its shareholders). A company’s capital consists of its long-term debt and its equity. A company’s cost of capital is the return expected by investors on a portfolio consisting of all the company’s outstanding long-term debt and its equity securities. The cost of capital is tested on the CMA Part 2 exam and is covered in more depth in the study materials for that exam.
52 With a lockbox system, a company maintains special post office boxes, called lockboxes, in different locations. Invoices sent to customers contain the address of the lockbox nearest to each customer as that customer’s remittance address, so customers send their payments to the closest lockbox. The company then authorizes local banks with which it maintains deposit relationships to check these post office boxes as often as is reasonable, given the number of receipts expected. Because the banks are making the collections, the funds that have been received are immediately deposited into the company’s accounts without first having to be processed by the company’s accounting system, thereby speeding up cash collection. Cash management and the use of lockboxes are tested on the CMA Part 2 exam and are covered in more depth in the study materials for that exam.
Although the finance department utilizes the accounting information system, finance responsibilities should
be segregated from accounting responsibilities. A common approach is to have a controller who manages
the accounting function and a treasurer or CFO who manages the finance function.
The accounting information system is used for:
• The AIS can provide information about how quickly customers are paying their bills and can show trends in cash collections for use in managing the collection of cash.
• An AIS with EFT capability can be used to make payments by electronic funds transfer through the automated clearing house, or a separate EFT application that interfaces with the AIS can be used.
• Estimates of interest and dividend payments and receipts are used to develop cash flow forecasts.
Activities of and inputs to the financing cycle mostly originate outside the organization, as follows:
• Paper checks and remittance advices returned by customers with their payments are used to apply the payments to customers’ accounts.
• Deposit receipts issued by the bank are used to document bank deposits.
• Bank statements are used to reconcile the cash balance according to the company’s ledger with the cash balance in the bank account.
• Economic and market data, interest rate data and forecasts, and financial institution data are used in planning for financing.
Outputs of the financing cycle
The production of periodic financial statements draws on general ledger information about the financing
processes, as follows:
• Interest revenue and expense.
• Dividend revenue and dividends paid.
• Summaries of cash collections and disbursements.
• Balances in investment accounts and in debt and equity.
• Cash budget showing projected cash flows.
Reports about investments and borrowings produced by the AIS for the financing cycle include:
• Changes in investments for a period.
• Dividends paid.
• Interest earned.
• New debt and retired debt for the period, including payments of principal and interest made for the period and information about lending institutions and interest rates.
• Significant ratios such as return on investment for the organization as a whole and for individual segments of it can be calculated by a financial planning model to help management make decisions regarding investing and borrowing.
Property, Plant, and Equipment (Fixed Asset) System
The fixed asset management system manages the purchase, valuation, maintenance, and disposal of the
firm’s fixed assets, also called property, plant, and equipment.
The accounting information system is used for:
• Recording newly-purchased fixed assets in the fixed asset module and the general ledger.
• Maintaining depreciation schedules and recording depreciation in order to calculate the book values of fixed assets. (A simplified straight-line depreciation schedule is sketched after this list.)
• Tracking differences between depreciation as calculated for book purposes and depreciation as calculated for tax purposes in order to maintain deferred tax records.
• Maintaining records of the physical locations of fixed assets, as some of them may be moved frequently.
• Tracking repair costs and distinguishing between repair costs that are expensed and repair costs that are capitalized.
• Recording impairment of fixed assets.
• Tracking disposal of fixed assets and calculating the amount of gain or loss on the sale.
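Example (illustrative sketch): The following Python fragment produces a straight-line depreciation schedule of the kind a fixed asset module might maintain for one asset. The cost, salvage value, and useful life shown are hypothetical.

    def straight_line_schedule(cost, salvage_value, useful_life_years):
        """Return a year-by-year schedule of depreciation expense, accumulated depreciation, and book value."""
        annual_expense = (cost - salvage_value) / useful_life_years
        schedule = []
        accumulated = 0.0
        for year in range(1, useful_life_years + 1):
            accumulated += annual_expense
            schedule.append({"year": year,
                             "depreciation_expense": annual_expense,
                             "accumulated_depreciation": accumulated,
                             "book_value": cost - accumulated})
        return schedule

    for row in straight_line_schedule(cost=50000, salvage_value=5000, useful_life_years=5):
        print(row)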
Activities and inputs to the fixed asset management system include:
• A request for a fixed asset purchase. The individual making the request uses a purchase requisition form, usually input electronically. The request usually requires approval by one or more higher-level managers.
• The purchasing department usually gets involved in vendor selection and issues a purchase order, similar to the way it is done for other purchases. Receiving reports and supplier invoices are handled the way they are for other purchases.
• If the company builds the fixed asset rather than acquiring it, a work order is used that details the costs of the construction.
• Repair and maintenance records need to be maintained for each fixed asset or for categories of fixed assets. Activities should be recorded on a repair and maintenance form so either the asset account can be updated or an expense account can be debited in the AIS, as appropriate, for the cost.
• When an existing fixed asset is moved from one location to another, those responsible should complete a fixed asset change form so that its location can be tracked.
• A fixed asset change form should also be used to record the sale, trade, or retirement of fixed assets.
Outputs of the fixed asset management system include:
• A list of all fixed assets acquired during a particular period.
• A fixed asset register listing the assigned identification numbers of each fixed asset held and each asset’s location as of the register date.
• A depreciation register showing depreciation expense and accumulated depreciation for each fixed asset owned.
• Repair and maintenance reports showing the current period’s repair and maintenance expenses and each fixed asset’s repair and maintenance history.
• A report on retired assets showing the disposition of fixed assets during the current period.
All of the reports are used in developing information for the external financial statements.
The General Ledger and Reporting Systems
The general ledger contains accounts for all the assets, liabilities, equity, revenues, and expenses. The
individual modules (or journals) support the general ledger and provide the detail behind the activity recorded in the general ledger.
In a responsibility accounting system, a journal such as accounts receivable can be subdivided according
to responsibility center using responsibility center codes. Subsidiary ledgers are maintained for each responsibility center containing only the accounts used by that responsibility center.
In an automated accounting information system, most day-to-day transactions are initially recorded in the
individual modules such as the accounts receivable or accounts payable module. Recording transactions in
a module updates that module and possibly other modules and automatically creates transactions to the
general ledger. Thus, the general ledger obtains data from the other cycles and processes it so that financial
reports may be prepared. To be in accordance with generally accepted accounting principles, however,
many valuation and adjusting entries are also required.
Financial Reporting Systems
The primary purpose of the financial reporting system is to produce external financial statements for the
company’s stakeholders and other external users such as analysts. The reports include the statement of
financial position (balance sheet), income statement, statement of cash flows, statement of comprehensive
income, and statement of changes in stockholders’ equity.
Various internal reports are used by accountants who are reviewing data and making adjusting entries in
order to produce the external financial statements. Two of them are:
• A trial balance is a columnar report that lists each account in the general ledger in the first column, followed by its balance as of a certain date and time in either the second or the third column from the left. Debit balances are shown in the second column from the left and credit balances are shown in the third column from the left. All the debits are totaled and all the credits are totaled, and the total debits and total credits on the trial balance must balance. A trial balance is used to check preliminary balances before adjusting entries are made. It is printed after the adjustments are recorded to confirm that the adjusted balances are correct. (A simplified trial balance computation is sketched after this list.)
• A general ledger report is a report covering a specific period of time that shows either all the individual general ledger accounts and the transactions that adjusted them during that period, or it may be printed for only one or for only a few accounts. A general ledger report is used to analyze transactions posted to a specific account.
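Example (illustrative sketch): The following Python fragment builds a trial balance from general ledger balances and confirms that total debits equal total credits. The accounts and amounts are made up for illustration.

    general_ledger = {
        "Cash":                12000.0,   # debit-balance accounts are stored as positive amounts
        "Accounts Receivable":  8000.0,
        "Inventory":            5000.0,
        "Accounts Payable":    -6000.0,   # credit-balance accounts are stored as negative amounts
        "Common Stock":       -10000.0,
        "Sales Revenue":      -15000.0,
        "Cost of Goods Sold":   6000.0,
    }

    def trial_balance(ledger):
        rows, total_debits, total_credits = [], 0.0, 0.0
        for account, balance in ledger.items():
            debit = balance if balance > 0 else 0.0
            credit = -balance if balance < 0 else 0.0
            total_debits += debit
            total_credits += credit
            rows.append((account, debit, credit))
        return rows, total_debits, total_credits

    rows, debits, credits = trial_balance(general_ledger)
    for account, debit, credit in rows:
        print(f"{account:25s} {debit:10.2f} {credit:10.2f}")
    print("In balance:", debits == credits)    # total debits must equal total credits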
Management Reporting Systems
The information in the external financial statements is not what managers need to know for their decision
making. Managers need the detailed internal statements produced by the AIS.
Cost accounting systems collect labor, material, and overhead costs that are used to determine the
inventory costs of manufactured goods. Their output is used to determine the value of the inventories
reported on the balance sheet, but cost accounting systems are also used to report variances from anticipated costs that production managers need in order to control production, as described in Section C,
Performance Management, in Volume 1 of this textbook.
Profitability reporting systems and responsibility reporting systems involve comparisons between
actual and planned amounts and variances. They are usually prepared according to responsibility center.
Responsibility reporting traces events to the responsibility of a particular responsibility center or a particular
manager. Each significant variance should be explained by the manager who has knowledge of what caused
the variance. A variance may be a favorable variance, and an explanation of how it occurred may be useful
information for other responsibility centers in the organization. If a variance is unfavorable, knowing how
it occurred can be the first step to taking corrective action.
Databases
A database is a collection of related data files, combined in one location to eliminate redundancy, that can
be used by different application programs and accessed by multiple users.
Basic Data Structure
The most commonly-used type of database is a relational database, which is a group of related tables.
When the database is developed, specific data fields and records are defined. The data must be organized
into a logical structure so it can be accessed and used. Data is stored according to a data hierarchy, and
the data is structured in levels.
A data field is the first level in the data hierarchy. A field is information that describes one attribute of an
item, or entity, in the database such as a person or an object. In an employee file, for example, one data
field would be one employee’s last name. Another field would be the same employee’s first name. A field
may also be called an “attribute,” or a “column.”
A database record is the second level of data. A database record contains all the information about one
item, or entity, in the database. For example, a single database record would contain information about
one employee. Each item of information, such as the employee’s Employee ID number, last name, first
name, address, department code, pay rate, and date of hire, is in a separate field within the employee’s
record. The data fields contained in each record are part of the record structure. The number of fields in
each record and the size of each field is specified for each record.
A file, also called a table, is the third level of the data hierarchy. A table is a set of common records, such
as records for all employees.
A complete database is the highest level. Several related files or tables make up a database. For example,
in an accounting information system, the collection of tables will contain all the information needed for an
accounting application.
[Data hierarchy, from the highest to the lowest level: Database, File or Table, Record, Field]
Example: Consider the example of a worker at a company who, over time, will have received numerous
monthly paychecks. The relational database will contain at least two database files, or tables, for
employees. The first table, the “Employees” table, contains all the records of the individual employees’
IDs and their names. The second table, the “Paychecks” table, contains data on all the paychecks that
have been issued and, for each paycheck, the Employee ID of the employee to whom it was issued.
A database management system can be used to locate all of the paychecks issued for one particular
employee by using the employee ID attached to the person’s name in that employee’s record in the
Employees table and locating all of the individual paycheck records for that same employee ID in the
Paychecks table.
The Employee ID ties the information in the Paychecks table to the information in the Employees table.
Database Keys
The primary key is a data field in a record that distinguishes one record from another in the table. Every
record in a database has a primary key, and each primary key is unique. The primary key is used to find a
specific record, such as the record for a specific employee. A primary key may consist of one data field or
more than one data field. For example, in an Employees table, each employee record contains an Employee
ID. The Employee ID is the primary key in the Employees table.
Every record will have a primary key, and some records will also have foreign keys. Foreign keys connect
the information in a record to one or more records in other tables.
Example: Using the example of employees, the first table, the “Employees” table, contains all the records of the individual employees’ IDs and their names. The second table, the “Paychecks” table, contains
records of all the paychecks that have been issued and, for each paycheck, the Employee ID of the
employee to whom it was issued. In the Employees table, the Employee ID in each employee record
serves as the primary key. In the Paychecks table, the Employee ID in each paycheck record is a
foreign key.
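Example (illustrative sketch): The following Python fragment uses the sqlite3 module from the standard library to build the two tables described above and to locate all paychecks for one employee by matching the foreign key in the Paychecks table to the primary key in the Employees table. The column names and data are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Employees (EmployeeID INTEGER PRIMARY KEY, LastName TEXT, FirstName TEXT)")
    conn.execute("""CREATE TABLE Paychecks (
                        CheckNumber INTEGER PRIMARY KEY,
                        EmployeeID  INTEGER REFERENCES Employees(EmployeeID),   -- foreign key
                        PayDate     TEXT,
                        NetPay      REAL)""")
    conn.execute("INSERT INTO Employees VALUES (101, 'Garcia', 'Ana')")
    conn.executemany("INSERT INTO Paychecks VALUES (?, ?, ?, ?)",
                     [(5001, 101, '2019-01-31', 2500.00),
                      (5002, 101, '2019-02-28', 2500.00)])

    # Locate every paycheck for one employee by joining on the Employee ID.
    for row in conn.execute("""SELECT e.LastName, p.PayDate, p.NetPay
                               FROM Employees e JOIN Paychecks p ON e.EmployeeID = p.EmployeeID
                               WHERE e.EmployeeID = 101"""):
        print(row)
    conn.close()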
Entity-Relationship Modeling
Database administrators use the Entity-Relationship Model to plan and analyze relational database files and
records. An entity-relationship diagram utilizes symbols to represent the relationships between and
among the different entities in the database. The three most important relationship types are one-to-one,
one-to-many, and many-to-many. These relationship types are known as database cardinalities and
show the nature of the relationship between the entities in the different files or tables within the database.
Example: Using the example of employees again, the connection between each employee (one person)
and each employee’s paychecks (which are many) is an example of a one-to-many relationship.
Database Management System (DBMS)
A database management system is a software package that serves as an interface between users and the
database. A database management system manages a set of interrelated, centrally-coordinated data files
by standardizing the storage, manipulation, and retrieval of data. It is used to create the database, maintain
it, safeguard the data, and make the data available for applications and inquiries. Because of its standardized format, a database can be accessed and updated by multiple applications.
Database management systems perform four primary functions:
1) Database development. Database administrators use database management systems to develop
databases and create database records.
2) Database maintenance. Database maintenance includes record deletion, alteration, and reorganization.
3) Database interrogation. Users can retrieve data from a database using the database management system and a query language in order to select subsets of records to extract information.
4) Application development. Application development involves developing queries, forms, reports,
and labels for a business application and allowing many different application programs to easily
access a single database.
Note: A database management system is not a database but rather a set of separate computer programs
that enables the database administrator to create, modify, and utilize database information; it also enables applications and users to query the database.
A database management system provides languages for database development, database maintenance,
and database interrogation, and it provides programming languages to be used for application development.
The languages use statements, which is another word for commands. For example, a database administrator uses statements to create a database. A database administrator uses the DBMS not only to create a
database, but also sometimes to create applications that will access the data in the database.
Database Development
When a relational database is developed, the data fields and records to be used in the database must be
structured. The database administrator uses a database management system and a Data Definition
Language (DDL) to create a description of the logical and physical structure or organization of the database and to structure the database by specifying and defining data fields, records, and files or tables. The
database administrator also specifies how data is recorded, how fields relate to each other, and how data
is viewed or reported.
The structure of the database includes the database’s schema, subschemas, and record structures.
• The schema is a map or plan of the entire database—its logical structure. It specifies the names of the data elements contained in the database and their relationships to the other data elements.
• A particular application or user may be limited to accessing only a subset of the information in the database. The limited access for an application or a user is called a subschema or a view. One common use of views is to provide read-only access to data that anyone can query, but only some users can update. Subschemas are important in the design of a database because they determine what data each user has access to while protecting sensitive data from unauthorized access.
Note: The schema describes the design of the database, while the subschemas describe the
uses of the database. The database schema should be flexible enough to permit creation of all
of the subschemas required by the users.
• In defining the record structure for each table, the database administrator gives each field a name and a description, determines how many characters the field will have and what type of data each field will contain (for example, text, integer, decimal, date), and may specify other requirements such as how much disk space is needed.
The database administrator also defines the format of the input (for example, a U.S. telephone number
will be formatted as [XXX] XXX-XXXX).
The input mask for a data field creates the appearance of the input screen a user will use to enter data
into the table so that the user will see a blank field or fields in the style of the format. For example, a date
field will appear as __ __ /__ __ / __ __ __ __. The input mask helps ensure input accuracy.
Once the record structure of the database table is in place, the records can be created.
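Example (illustrative sketch): The following data definition language (DDL) statement defines the record structure of a hypothetical Employees table, executed here through Python’s sqlite3 module. The field names and data types are assumptions for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE Employees (
                        EmployeeID     INTEGER PRIMARY KEY,   -- unique key for each record
                        LastName       TEXT NOT NULL,
                        FirstName      TEXT NOT NULL,
                        DepartmentCode TEXT,
                        PayRate        REAL,
                        DateOfHire     TEXT                   -- dates are stored as text in SQLite
                    )""")
    print(conn.execute("PRAGMA table_info(Employees)").fetchall())   # displays the record structure
    conn.close()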
Database Maintenance
A data manipulation language (DML) is used to maintain a database and consists of “insert,” “delete,”
and “update” statements (commands). Databases are usually updated by means of transaction processing
programs that utilize the data manipulation language. As a result, users do not need to know the specific
format of the data manipulation commands.
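Example (illustrative sketch): The following data manipulation language (DML) statements insert, update, and delete a record, executed here through Python’s sqlite3 module against a hypothetical table.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Employees (EmployeeID INTEGER PRIMARY KEY, LastName TEXT, PayRate REAL)")
    conn.execute("INSERT INTO Employees VALUES (101, 'Garcia', 25.00)")            # insert a record
    conn.execute("UPDATE Employees SET PayRate = 27.50 WHERE EmployeeID = 101")    # update the record
    conn.execute("DELETE FROM Employees WHERE EmployeeID = 101")                   # delete the record
    conn.close()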
Database Interrogation
Users can retrieve data from a database by using a query language. Structured Query Language (SQL)
is a query language, and it is also a data definition language and a data manipulation language. SQL has
been adopted as a standard language by the American National Standards Institute (ANSI). All relational
databases in use today allow the user to query the database directly using SQL commands. SQL uses the
“select” command to query a database. However, business application programs usually provide a graphical user interface (GUI) that creates the SQL commands to query the database for the user, so users
do not need to know the specific format of SQL commands.
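Example (illustrative sketch): The following “select” statement retrieves a subset of records, executed here through Python’s sqlite3 module. The table and data are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Invoices (InvoiceNo INTEGER, Customer TEXT, Amount REAL, Paid INTEGER)")
    conn.executemany("INSERT INTO Invoices VALUES (?, ?, ?, ?)",
                     [(1, "Customer A", 1200.00, 0),
                      (2, "Customer B",  450.00, 1),
                      (3, "Customer A",  975.00, 0)])

    # Query the database for the unpaid invoices of one customer.
    rows = conn.execute("SELECT InvoiceNo, Amount FROM Invoices WHERE Customer = ? AND Paid = 0",
                        ("Customer A",)).fetchall()
    print(rows)    # [(1, 1200.0), (3, 975.0)]
    conn.close()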
Application Development
Database management systems usually include one or more programming languages that can be used
to develop custom applications by writing programs that contain statements calling on the DBMS to perform
the necessary data handling functions. When writing a program that uses a database that is accessed with
a DBMS, the programmer needs only the name of the data item, and the DBMS locates the data item in
the storage media.
Note: One of the key characteristics of a database management system is that the applications that
access the database are programmed to be independent of the data itself, meaning the programs
do not refer to a specific number or item, but rather to the name of the data item.
This independence is similar to changing a number in a spreadsheet cell that is referenced in a formula
in another cell elsewhere in the spreadsheet. It is not necessary to change the formula because the
formula relates to the cell and not to the number itself. Whatever number appears in that cell will be
used in the formula in the other cell.
Enterprise Resource Planning Systems
An accounting information system utilizes a database specific to the accounting, finance, and budgeting
functions. Information systems are used throughout organizations for much more than financial applications, though, and they all require their own databases. For example, materials requirements planning
systems are used to determine what raw materials to order for production, when to order them, and how
much to order. Personnel resource systems are used to determine personnel needs and to track other
personnel data.
However, problems arise when an organization’s various systems do not “talk” to one another. For instance:
• Production is budgeted based on expected sales, and raw materials need to be ordered to support the budgeted production. If the materials requirements planning system cannot access the financial and budgeting system, someone needs to manually input the planned production of every product into the materials requirements planning system after the sales have been budgeted and the budgeted production has been set.
• A salesperson takes an order. If the items ordered are in stock, the salesperson submits the order to an order entry clerk, who prepares the invoice and shipping documents. The documents are delivered manually to the shipping department, and the shipping department prepares the shipment and ships the order. After shipping, the sale is recorded in the accounting information system and the customer’s account is updated with the receivable due. The order information is entered separately into the database used by the customer relations management system, so that if the customer calls about the order, the customer service personnel will be able to locate the order information, because the customer service agents do not have access to the accounting records.
Inputting what is basically the same information multiple times is duplication of effort. Not only does it
waste time, but the multiple input tasks cause delays in making the information available to those who
need it. Furthermore, each time information is input manually into a separate system, the opportunity for
input errors increases. Thus, when the information finally is available, it may not even be accurate.
Enterprise Resource Planning (ERP) can help to overcome the challenges of separate systems because it
integrates all aspects of an organization’s activities—operational as well as financial—into a single system
that utilizes a single database.
ERP systems consist of the following components:
•
Production planning, including determining what raw materials to order for production, when to
order them, and how much to order.
•
Logistics, both inbound (materials management) and outbound (distribution).
•
Accounting and finance.
•
Human resources.
•
Sales, distribution, and order management.
Features of ERP systems include:
1) Integration. The ERP software integrates the accounting, customer relations management, business services, human resources, and supply chain management so that the data needed by all areas of the organization will be available for planning, manufacturing, order fulfillment, and other uses. The system tracks all of a firm’s resources, including cash, raw materials, inventory, fixed assets, and human resources, forecasts their requirements, and tracks shipping, invoicing, and the status of commitments such as orders, purchase orders, and payroll.
2) Centralized database. The data from the separate areas of the organization flows into a secure and centralized database rather than several separate databases in different locations. All users use the same data that has been derived through common processes.
3) Usually require business process reengineering. An ERP system usually forces organizations to reengineer or redesign their business processes in order to use the system. Because ERP software is “off-the-shelf” software, customization is usually either impossible or prohibitively expensive. Thus, business processes used must accommodate the needs of the system and many may need to be redesigned.
When budgeted production is set, the information will immediately be available to determine what raw
materials should be ordered, how much, and when. When a salesperson enters a customer’s order into the
system (or when it is automatically entered as a result of an online order), inventory availability can be
immediately checked. If the order is a credit order, the customer’s credit limit and payment history are checked. If everything is OK, the warehouse is notified to ship, the accounting information system is automatically updated, and if the order is a credit card order, the customer’s credit card is charged. Information
on the order and its status is immediately visible to customer service personnel so they can answer questions about the order if they receive an inquiry from the customer.
Information about exceptions is also immediately available and can be addressed automatically. For example:
• If the customer’s credit card charge is declined, shipment of the order can be automatically held until an investigation can be performed and the order possibly cancelled.
• If the item ordered is not in stock, a backorder report is automatically generated for follow-up with the customer, and the ERP system can trigger the production system to manufacture more product. The production system can revise the production schedules accordingly, and human resources may be involved if additional employees will be required.
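Example (illustrative sketch): The following Python fragment imitates the integrated order handling just described, including the automatic handling of exceptions. The function and the data are simplified, hypothetical stand-ins for the cooperating ERP modules.

    def process_order(order, inventory, credit_limits):
        item, quantity, customer = order["item"], order["quantity"], order["customer"]

        if inventory.get(item, 0) < quantity:
            return "Backorder created; production notified to schedule additional output"
        if order["payment"] == "credit" and order["amount"] > credit_limits.get(customer, 0):
            return "Order held; the customer is over the assigned credit limit"
        if order["payment"] == "card" and not order["card_authorized"]:
            return "Shipment held pending investigation of the declined credit card"

        inventory[item] -= quantity    # every module sees the same updated data in the shared database
        return "Warehouse notified to ship; accounting updated; order visible to customer service"

    inventory = {"Widget": 40}
    credit_limits = {"Customer A": 5000.00}
    order = {"customer": "Customer A", "item": "Widget", "quantity": 10,
             "amount": 1200.00, "payment": "credit", "card_authorized": True}
    print(process_order(order, inventory, credit_limits))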
Extended ERP Systems
Extended enterprise resource planning systems include customers, suppliers, and other business partners. The systems interface with customers and suppliers through supply chain management
applications that give partners along the supply chain access to internal information of their suppliers and
customers. Suppliers can access the company’s internal information such as inventory levels and sales
orders, enabling the company to reduce its cycle time for procuring raw materials for manufacturing or
goods for sale. Customers can also view their supplier’s information about their pending orders.
Advantages of ERP Systems
• Integrated back-office systems result in better customer service and production and distribution efficiencies.
• Centralizing computing resources and IT staff reduces IT costs versus every department maintaining its own systems and IT staff.
• Centralization of data provides a secure location for all data that has been derived through common processes, and all users are using the same data.
• Day-to-day operations are facilitated. All employees can easily gain access to real-time information they need to do their jobs. Cross-functional information is quickly available to managers regarding business processes and performance, significantly improving their ability to make business decisions and control the factors of production. As a result, the business is able to adapt more easily to change and quickly take advantage of new business opportunities.
• Business processes can be monitored in new and different ways, such as with dashboards.
• Communication and coordination are improved across departments, leading to greater efficiencies in production, planning, and decision-making that can lead to lower production costs, lower marketing expenses, and other efficiencies.
• Data duplication is reduced and labor required to create inputs and distribute and use system outputs is reduced. Potential errors caused by inputting the same data multiple times are reduced.
• Expenses can be better managed and controlled.
• Inventory management is facilitated. Detailed inventory records are available, simplifying inventory transactions. Inventories can be managed more effectively to keep them at optimal levels.
• Trends can be more easily identified.
• The efficiency of financial reporting can be increased.
• Resource planning as a part of strategic planning is simplified. Senior management has access to the information it needs in order to do strategic planning.
Disadvantages of ERP Systems
• Business re-engineering (developing business-wide integrated processes for the new ERP system) is usually required to implement an ERP system and it is time-consuming and requires careful planning.
• Converting data from existing systems into the new ERP system can be time-consuming and costly and, if done incorrectly, can result in an ERP system that contains inaccurate information.
• Training employees to use the new system disrupts existing workflows and requires employees to learn new processes.
• An unsuccessful ERP transition can result in system-wide failures that disrupt production, inventory management, and sales, leading to huge financial losses. Customers who are inconvenienced by the implementation may leave. Because the entire business relies on the new ERP system, it is critical that it be completely functional and completely understood by all employees before it “goes live.” No opportunities are available to “work out the bugs” or “learn the ropes” when the entire business relies on the one system.
• Ongoing costs after implementation include hardware costs, system maintenance costs, and upgrade costs.
Data Warehouse, Data Mart, and Data Lake
Data Warehouse
A copy of all of the historical data for the entire organization can be stored in a single location known as a
data warehouse, or an enterprise data warehouse. A data warehouse is separate from an ERP system
because a data warehouse is not used for everyday transaction processing. By having all of the company’s
information from different departments in one location for analysis, a company is able to more efficiently
manage and access the information for data mining to discover patterns in the data for use in making
decisions. For example, the marketing department can access production data and be better able to inform
customers about the future availability of products.
Managers can use business intelligence tools to extract information from the data warehouse. For instance, a company can determine which of its customers are most profitable or can analyze buying trends.
Note: Business intelligence is the use of software and services to collect, store, and analyze data produced by a firm’s business activities. Business intelligence tools are used to access and analyze data
generated by a business and present easy-to-understand reports, summaries, dashboards, graphs, and
charts containing performance measures and trends that provide users with detailed information about
the business that can be used to make strategic and tactical management decisions. Business intelligence
is covered in more detail later in this section.
To be useful, data stored in a data warehouse should:
1) Be free of errors.
2) Be uniformly defined.
3) Cover a longer time span than the company’s transactions systems to enable historical research.
4) Allow users to write queries that can draw information from several different areas of the database.
Note: The data in a data warehouse is a copy of historical data, and therefore is not complete with the
latest real-time data. Furthermore, information in a data warehouse is read-only, meaning users cannot
change the data in the warehouse.
Because the data stored in a data warehouse exists in different formats in the various sources from which
it is copied, all differences need to be resolved to make the data available in a unified format for analysis.
The process of making the data available in the data warehouse involves the following.
1) Periodically, data is uploaded from the various data sources, usually to a staging server before going to the data warehouse. The data upload may occur daily, weekly, or with any other established frequency.
2) The datasets from the various sources are transformed to be compatible with one another by adjusting formats and resolving conflicts. The transformation that must take place before the data can be loaded into a data warehouse is known as Schema-on-Write because the schema is applied before the data is loaded into the data warehouse.
3) The transformed data is loaded into the data warehouse to be used for research, analysis, and other business intelligence functions. (A simplified extract-transform-load sketch follows this list.)
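Example (illustrative sketch): The following Python fragment shows a greatly simplified extract-transform-load step in which records from two hypothetical source systems are converted to one uniform format (Schema-on-Write) before being loaded into the warehouse.

    sales_system_records = [{"cust": "Customer A", "amt_usd": "1,200.50", "dt": "04/30/2019"}]
    web_store_records    = [{"customer_name": "Customer B", "total": 450.00, "date": "2019-04-29"}]

    def transform(record):
        """Resolve format differences so every record matches the warehouse schema."""
        if "cust" in record:                               # record came from the sales system
            month, day, year = record["dt"].split("/")
            return {"customer": record["cust"],
                    "amount": float(record["amt_usd"].replace(",", "")),
                    "date": f"{year}-{month}-{day}"}
        return {"customer": record["customer_name"],       # record came from the web store
                "amount": float(record["total"]),
                "date": record["date"]}

    staging   = sales_system_records + web_store_records   # 1) upload the data to a staging area
    warehouse = [transform(r) for r in staging]            # 2) transform and 3) load into the warehouse
    print(warehouse)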
Data Mart
A data mart is a subsection of a data warehouse that provides users with analytical capabilities for a
restricted set of data. For example, a data mart can provide users in a department such as accounts receivable access to only the data that is relevant to them so that the accounts receivable staff do not need
to sift through unneeded data to find what they need.
A data mart can provide security for sensitive data because it isolates the data certain people are authorized
to use and prevents them from seeing data that needs to be kept confidential. Furthermore, because each
data mart is used only by one department, the demands on the data servers can be distributed; one department’s usage does not affect other departments’ workloads.
Data marts can be of three different types:
1) A dependent data mart draws on an existing data warehouse. It is constructed using a top-down approach and withdraws a defined portion of the data in the data warehouse when it is needed for analysis. Several dependent data marts, used by different areas of the organization, can draw on a single data warehouse.
2) An independent data mart is created without the use of a data warehouse through a bottom-up approach. The data for just one data mart for a single business function is uploaded, transformed, and then loaded directly into the data mart. If an organization needs several data marts for different areas of the organization, each one would need to be created and updated separately, which is not optimal. Furthermore, the idea of independent data marts is antithetical to the motivation for developing a data warehouse.
3) A hybrid data mart combines elements of dependent and independent data marts, drawing some data from an existing data warehouse and some data from transactional systems. Only a hybrid data mart allows analysis of data from a data warehouse with data from other sources.
Data Lake
Much of the data captured by businesses is unstructured, such as social media data, videos, emails, chat
logs, and images of invoices, checks, and other items. Such data cannot be stored in a data warehouse
because the types of data are so disparate and unpredictable that the data cannot be transformed to be
compatible with the data in a data warehouse. A data lake is used for unstructured data.
A data lake is a massive body of information fed by multiple sources, the content of which has not been processed. Unlike data warehouses and data marts, data lakes are not "user friendly." Data lakes have important capabilities for data mining and generating insights, but usually only a data scientist is able to work with them because of the analytical skills needed to make sense of the raw information. The information in a data lake is transformed and a schema is applied by the data scientist when it is analyzed, an approach called Schema-on-Read.
Data lakes are discussed in more detail later in the topic of Data Mining.
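To contrast Schema-on-Read with Schema-on-Write, the following sketch stores raw, unprocessed records as they arrive (as a data lake would) and applies a structure only at the moment of analysis. The record shapes, field names, and parsing rules are hypothetical and for illustration only.

```python
import json
from datetime import datetime

# Raw, untransformed items as they might land in a data lake: mixed shapes, no schema enforced.
raw_lake = [
    '{"type": "chat", "ts": "2019-06-01T10:15:00", "text": "Is item X in stock?"}',
    '{"type": "invoice_scan", "ts": "2019-06-02T09:00:00", "file": "inv_0012.png"}',
]

def read_chat_messages(lake):
    # Schema-on-Read: the analyst decides the structure while reading,
    # keeping only the records that fit the question being asked.
    messages = []
    for item in lake:
        record = json.loads(item)
        if record.get("type") == "chat":
            messages.append({
                "timestamp": datetime.fromisoformat(record["ts"]),
                "text": record["text"],
            })
    return messages

print(read_chat_messages(raw_lake))
```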
Enterprise Performance Management
Enterprise Performance Management (EPM), also known as Corporate Performance Management (CPM) or
Business Performance Management (BPM), is a method of monitoring and managing the performance of an
organization in reaching its performance goals. It is the process of linking strategies to plans and execution.
The organization’s strategic goals and objectives must be clearly communicated to managers and be incorporated into their budgets and plans. Then, periodically the performance of the organization is reviewed
with respect to its progress in attaining the strategic goals and objectives. Key Performance Indicators
(KPIs), Balanced Scorecards, and Strategy Maps are frequently used and are monitored and managed. If
the organization or a segment of it is not performing as planned, adjustments are made, either in the
strategy or in the operations.
Enterprise Performance Management software is available that integrates with an organization's accounting information system, ERP system, customer relationship management system, data warehouse, and other systems. It is designed to gather data from multiple sources and consolidate it to support performance management by automating the collection and management of the data needed to monitor the organization's performance in relation to its strategy. Users can create reports and monitor performance using the data captured and generated in the other systems. Some examples of an EPM system's capabilities include:
•	Reports comparing actual performance to goals.
•	Reports on attainment of KPIs by department.
•	Balanced scorecards, strategy maps, and other management tools.
•	Creating and revising forecasts and performing modeling.
•	Generating dashboards presenting current information customized to the needs of individual users.
EPM software can also automate budgeting and consolidations. Tasks that in the past may have required
days or weeks can now be completed very quickly.
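As a simplified illustration of the first capability in the list above (reports comparing actual performance to goals), the sketch below compares hypothetical departmental KPI targets with actual results and flags shortfalls. The KPI names and figures are invented for the example and are not part of any particular EPM product.

```python
# Hypothetical KPI targets and actuals gathered from other systems by an EPM tool.
kpi_targets = {"Sales revenue": 1_200_000, "On-time delivery %": 95.0, "Customer complaints": 40}
kpi_actuals = {"Sales revenue": 1_150_000, "On-time delivery %": 96.2, "Customer complaints": 55}

# KPIs for which a lower number is better, so the variance test is reversed.
lower_is_better = {"Customer complaints"}

def performance_report(targets, actuals):
    # Compare each actual result to its goal and flag unfavorable variances.
    lines = []
    for kpi, target in targets.items():
        actual = actuals[kpi]
        favorable = actual <= target if kpi in lower_is_better else actual >= target
        status = "on target" if favorable else "SHORTFALL"
        lines.append(f"{kpi}: target {target:,}, actual {actual:,} -> {status}")
    return "\n".join(lines)

print(performance_report(kpi_targets, kpi_actuals))
```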
Note: EPM software can be on premises or it can be deployed as Software as a Service (SaaS), otherwise
known as “the cloud.” Cloud computing is covered in this section in the topic Technology-enabled Finance
Transformation.
F.2. Data Governance
Definition of Data Governance
Corporate governance includes all of the means by which businesses are directed and controlled, including the rules, regulations, processes, customs, policies, procedures, institutions, and laws that affect the
way the business is administered. Corporate governance spells out the rules and procedures to be followed
in making decisions for the corporation.
Data governance is similar, but it is specific to data and information technology. Data governance encompasses the practices, procedures, processes, methods, technologies, and activities that deal with the
overall management of the data assets and data flows within an organization. Data governance is a process
that helps the organization better manage and control its data assets. In a sense, data governance is quality
control for data. It enables reliable and consistent data, which in turn makes it possible for management to
properly assess the organization’s performance and make management decisions.
Data governance includes the management of the following.
•	Data availability, or the process of making the data available to users and applications when it is needed and where it is needed.
•	Data usability, including its accessibility to users and applications, its quality, and its accuracy.
•	Data integrity, or the completeness, consistency, reliability, and accuracy of data.
•	Data security, meaning data protection, including prevention of unauthorized access and protection from corruption and other loss, including backup procedures.
•	Data privacy, that is, determining who is authorized to access data and which items of data each authorized person can access.
•	Data integration, which involves combining data from different sources (which can be both internal and external) and providing users with a unified view of all the data.
•	System availability, that is, maximizing the probability that the system will function as required and when required.
•	System maintenance, including modifications of the system done to correct a problem, to improve the system's performance, to update it, or to adapt it to changed requirements or a changed environment.
•	Compliance with regulations, such as laws regulating privacy protections.
•	Determination of roles and responsibilities of managers and employees.
•	Internal and external data flows within the organization.
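Several of the governance areas above, data usability, data integrity, and data integration in particular, depend on routine data-quality checks. The sketch below runs a few basic validations (completeness, uniqueness, and value ranges) over hypothetical customer records; the field names and rules are assumptions made only for illustration.

```python
# Hypothetical customer records pulled from an operational system.
customers = [
    {"customer_id": 1, "name": "Alpha Ltd.", "credit_limit": 50_000},
    {"customer_id": 2, "name": "", "credit_limit": 20_000},           # missing name
    {"customer_id": 2, "name": "Beta GmbH", "credit_limit": -5_000},  # duplicate ID, negative limit
]

def data_quality_issues(records):
    # Collect simple integrity problems: missing values, duplicate keys, out-of-range amounts.
    issues = []
    seen_ids = set()
    for record in records:
        cid = record["customer_id"]
        if cid in seen_ids:
            issues.append(f"Duplicate customer_id {cid}")
        seen_ids.add(cid)
        if not record["name"]:
            issues.append(f"Missing name for customer_id {cid}")
        if record["credit_limit"] < 0:
            issues.append(f"Negative credit limit for customer_id {cid}")
    return issues

for issue in data_quality_issues(customers):
    print(issue)
```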
IT Governance and Control Frameworks
IT governance and control frameworks have been developed to provide models, or sets of standardized
guidelines, for the management of IT resources and processes. Frameworks provide numerous benefits to
an organization.
•	They identify specific roles and responsibilities that need to be met.
•	They provide a benchmark for assessing risks and controls.
•	Following a framework provides a higher likelihood of implementing effective governance and controls.
•	Frameworks break down objectives and activities into groups.
•	Regulatory compliance may be easier to achieve by following effective governance and control frameworks.
Following is an overview of two of the most prominent IT governance frameworks currently in use: COSO’s
internal control framework and ISACA’s COBIT.
Internal Control – Integrated Framework by COSO, the Committee of Sponsoring Organizations
The Committee of Sponsoring Organizations (COSO)53, which consists of five professional organizations, created one of the first internal control frameworks in 1992 with the publication of Internal Control—Integrated Framework, which introduced the concept of controls for IT systems. Internal Control—Integrated Framework was updated in 2013. It is covered in this volume in Section E, Internal Control, so it will be reviewed only briefly here.
Internal Control—Integrated Framework defines internal control as “a process, effected by54 an entity’s
board of directors, management, and other personnel, designed to provide reasonable assurance regarding
the achievement of objectives relating to operations, reporting, and compliance.” According to the Integrated Framework, the internal control system should consist of the following five interrelated components.
1) The control environment: the standards, processes, and structures that provide the foundation for carrying out internal control.
2) Risk assessment: the process of identifying, analyzing, and managing the risks that have the potential to prevent the organization from achieving its objectives, relative to the organization's established risk tolerance.
3) Control activities: the actions established by policies and procedures that help ensure that management's instructions intended to limit risks to the achievement of the organization's objectives are carried out.
4) Information and communication: obtaining, generating, using, and communicating relevant, quality information necessary to support the functioning of internal control. Communication needs to be both internal and external.
5) Monitoring: overseeing the entire internal control system to assess the operation of existing internal controls to ensure that the internal control system continues to operate effectively.
COBIT® by ISACA
COBIT® is an I & T (Information and Technology) framework for the governance and management of enterprise information and technology. “Enterprise information and technology” refers to all the technology
and information processing used by the whole enterprise to achieve its goals, no matter where the technology and information processing occurs in the enterprise. Thus, while enterprise I & T includes the
organization’s IT department, it is not limited to the IT department.
The COBIT® Framework was first introduced for information systems in 1996 by ISACA, an independent,
nonprofit, global association dedicated to the development, adoption, and use of globally-accepted
knowledge and practices for information systems. ISACA was previously known as the Information Systems
Audit and Control Association. However, ISACA now serves a wide constituency of other IT professionals,
including consultants, educators, security professionals, risk professionals, and chief information officers.
To reflect the fact that it now serves such a broad range of IT professionals, the organization is now known
simply by its acronym, ISACA.
53 The Committee of Sponsoring Organizations sponsored the Treadway Commission, the National Commission on Fraudulent Financial Reporting, that was created in 1985 to identify the causal factors of fraudulent financial reporting and to make recommendations to reduce its incidence. The sponsoring organizations included the AICPA (American Institute of Certified Public Accountants), the AAA (American Accounting Association), the IIA (Institute of Internal Auditors), the FEI (Financial Executives International), and the IMA (Institute of Management Accountants).
54 To "effect" something means to cause it to happen, put it into effect, or to accomplish it. So "effected by" means "put into effect by" or "accomplished by."
In line with the early identity of the organization, the early focus of COBIT® was on information systems
audit and control, and COBIT® was an acronym for Control OBjectives for Information and Related Technology. However, in the intervening years the focus of the Framework has changed to IT governance and
management in recognition of the needs of the wide range of IT professionals that ISACA serves. When
COBIT® 5 was introduced in 2012, ISACA dropped the Framework’s full name entirely, and like ISACA,
COBIT® is now known simply by its acronym.55
ISACA published an updated version of the Framework, COBIT® 2019, in November 2018. The information
that follows is from COBIT® 2019.
COBIT® 2019 draws a distinction between governance and management. According to the Framework, governance and management are different disciplines that involve different activities, require different organizational structures, and serve different purposes.
Governance is usually the responsibility of the board of directors under the leadership of the chair of the
board of directors. The purpose of governance is to ensure that:
•	Stakeholder56 needs are considered and conditions and options are evaluated in order to determine balanced, agreed-on enterprise objectives.
•	Prioritization and decision-making are used to set direction.
•	Performance and compliance are monitored in terms of the agreed-on direction and enterprise objectives.
Management is usually the responsibility of the executive management under the chief executive officer’s
(CEO’s) leadership. The purpose of management is to plan, build, run, and monitor activities, in accordance
with the direction set by the body responsible for governance such as the board of directors, in order to
achieve the enterprise objectives.57
The guidance in the COBIT® Framework is generic in nature so that users can customize it into focused guidance for their own enterprise.58
Components of a Governance System
Each enterprise needs to establish and sustain a governance system made up of the following components, or factors, which together contribute to the operation of the enterprise's governance of information and technology (I & T).
•	Processes. A process is a set of practices and activities needed to achieve a specific objective and produce outputs that support achievement of IT-related goals.
•	Organizational structures. Organizational structures are the primary decision-making entities within the enterprise.
•	Principles, policies, and frameworks. Principles, policies, and frameworks provide practical guidance for the day-to-day management of the enterprise.
•	Information. Information includes all the information produced and used by the enterprise. COBIT® 2019 focuses on the information needed for effective governance of the enterprise.
55 ISACA also promulgates separate IS auditing and IS control standards, so IS audit and control have not been left behind.
56 Stakeholders for enterprise governance of information and technology include members of the board of directors, executive management, business managers, IT managers, assurance providers such as auditors, regulators, business partners, and IT vendors. (COBIT® 2019 Framework: Introduction and Methodology, p. 15, © 2018 ISACA. All rights reserved. Used by permission.) Stakeholders are discussed in more detail later in this topic.
57 COBIT® 2019 Framework: Introduction and Methodology, p. 13, © 2018 ISACA. All rights reserved. Used by permission.
58 Ibid., p. 15.
•	Culture, ethics, and behavior. The culture of the enterprise and the ethics and behavior of both the enterprise and the individuals in it are important factors in the success of governance and management activities.
•	People, skills, and competencies. People and their skills and competencies are important for making good decisions, for corrective action, and for successful completion of activities.
•	Services, infrastructure, and applications. These include the infrastructure, technology, and applications used to provide the governance system for information and technology processing.59
Components of a Governance System According to COBIT® 2019: the Governance System is surrounded by its seven components: Processes; Organizational Structures; Principles, Policies, and Frameworks; Information; Culture, Ethics, and Behavior; People, Skills, and Competencies; and Services, Infrastructure, and Applications.
59 Ibid., pp. 21-22.
Governance and Management Objectives
Certain governance and management objectives need to be achieved in order for information and technology to contribute to enterprise goals. COBIT® 2019 organizes the governance and management objectives
into five domains: one domain for governance and four domains for management.
Governance objectives are organized in one domain: EDM ‒ Evaluate, Direct and Monitor. The governance body—usually the board of directors—evaluates strategic options, directs senior management in
achieving the chosen strategies, and monitors the achievement of the strategies.
Management objectives are organized in the following four domains.
1) APO ‒ Align, Plan, and Organize includes the overall organization, strategy, and supporting activities for information and technology.
2) BAI ‒ Build, Acquire, and Implement covers defining, acquiring, and implementing I & T solutions and integrating them into business processes.
3) DSS ‒ Deliver, Service, and Support addresses operations for delivery and support of information and technology services. DSS includes security.
4) MEA ‒ Monitor, Evaluate, and Assess involves performance monitoring of I & T and its conformance with internal performance targets, internal control objectives, and external requirements such as laws and regulations.60
Summary of the five domains: Governance Objectives are organized in the EDM (Evaluate, Direct, and Monitor) domain; Management Objectives are organized in the APO (Align, Plan, and Organize), BAI (Build, Acquire, and Implement), DSS (Deliver, Service, and Support), and MEA (Monitor, Evaluate, and Assess) domains.
Design Factors for a Governance System
Design factors are factors that can impact the governance system, influence its design, and position the
enterprise for success in its use of information and technology. Design factors include any combination of
the following:
1) Enterprise strategy. Enterprise strategy is developed through the strategic planning process as described in Section B, Planning, Budgeting, and Forecasting, in Volume 1 of this textbook. For example, an enterprise might focus on differentiation, cost leadership, a particular marketing niche or segment, or a number of other specific strategies. The strategies suggested in the COBIT® 2019 Framework are growth/acquisition, innovation/differentiation, cost leadership, and client service/stability.
2) Enterprise goals. Enterprise goals support the enterprise strategy, and their achievement is necessary in order to realize the enterprise strategy. The COBIT® 2019 Framework defines these goals in terms of the four perspectives used in the Balanced Scorecard—Financial, Customer, Internal Process (called simply "Internal" in COBIT® 2019), and Learning and Growth (called simply "Growth" in COBIT® 2019). The Balanced Scorecard is covered in Section C, Performance Management, in Volume 1 of this textbook.
3) Risk profile. The enterprise's risk profile identifies the types of information and technology risks the enterprise is exposed to and indicates which risks exceed the enterprise's risk appetite.
60 Ibid., p. 20.
4) I & T-related issues. I & T-related issues are I & T-related risks that have materialized.
5) Threat landscape. The "threat landscape" is the general environment of threats under which the enterprise operates. For example, a political situation may be causing a high-threat environment, or the particular industry in which the enterprise operates may be experiencing challenges.
6) Compliance requirements. The compliance requirements the enterprise is subject to may be low, normal, or high, depending on the extent to which the industry within which the enterprise operates is regulated.
7) Role of IT (information technology). The role of IT within the enterprise may be any of the following:
   a. Support – IT is not critical for the running and continuity of the business processes and services or for innovation of business processes and services.
   b. Factory – Any failure in IT would have an immediate impact on the running and continuity of business processes and services, but IT is not a driver for innovation in business processes and services.
   c. Turnaround – IT is a driver for innovating business processes and services, but the enterprise does not have a critical dependency on IT for the current running and continuity of its business processes and services.
   d. Strategic – IT is critical for running business processes and services and also for innovation in the enterprise's business processes and services.
8) Sourcing model for IT. Choices of sourcing include outsourcing (using the services of a third party to provide IT services), the cloud, insourcing (the enterprise employs its own IT staff and provides its own IT services), and hybrid (using a combination of outsourcing, the cloud, and insourcing).
9) IT implementation methods. Methods of implementing IT can be any of the following:
   a. Agile – Agile development methods61 are used for software development.
   b. DevOps – DevOps methods62 are used for software building, deployment, and operations.
   c. Traditional – A traditional approach is used for software development, and software development is separated from operations.63
   d. Hybrid, or bimodal, IT – A mix of traditional and modern methods of IT implementation is used.
61 Agile software development refers to a set of frameworks and software development practices based on a document called the "Manifesto for Agile Software Development," which says: "We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
"That is, while there is value in the items on the right, we value the items on the left more." (© 2001, the Agile Manifesto authors, https://www.agilealliance.org/agile101/the-agile-manifesto/, accessed April 9, 2019.)
62 DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. It is also characterized by operations staff making use of many of the same techniques as developers for their systems work. (https://theagileadmin.com/what-is-devops/, accessed April 9, 2019.)
63 The traditional method of developing systems and programs is covered in System and Program Development and Change Controls in Section E of this volume, Systems Controls and Security Measures. It is also reviewed briefly as the Systems Development Life Cycle (SDLC) in this Section as part of Section F.3., Technology-enabled Finance Transformation.
10) Technology strategy. Methods of adopting new technologies followed by the enterprise may be any of the following:
   a. First mover – The enterprise tries to gain first-mover advantage by generally adopting new technologies as early as possible.
   b. Follower – The enterprise usually waits to adopt new technologies until they have been proven and have become mainstream technologies.
   c. Slow adopter – The enterprise is usually exceptionally slow to adopt new technologies.
11) Enterprise size. An enterprise may be considered a large enterprise—one with more than 250 full-time equivalent employees—or a small and medium enterprise—one with 50 to 250 full-time equivalent employees. (Micro-enterprises, that is, those with fewer than 50 employees, are not addressed by COBIT® 2019.)64
Goals Cascade
Enterprise goals, one of the design factors for a governance system, are involved in transforming stakeholder needs into actionable strategy for the enterprise.
Stakeholders for enterprise governance of information and technology (EGIT) and the target audience for
COBIT® include the following:
Internal stakeholders:
•	Members of the board of directors, for whom COBIT® provides insight into how to obtain value from the use of I & T and explains relevant board responsibilities.
•	Executive management, for whom COBIT® provides guidance in organizing and monitoring performance of I & T.
•	Business managers, for whom COBIT® helps in understanding how to obtain the I & T solutions that the enterprise requires and how best to use new technology and exploit it for new strategic opportunities.
•	IT managers, for whom COBIT® provides guidance in how best to structure and operate the IT department and manage its performance.
•	Assurance providers such as auditors, for whom COBIT® helps in managing assurance over IT, managing dependency on external service providers, and ensuring an effective and efficient system of internal controls.
•	Risk management, for whom COBIT® helps with identification and management of IT-related risk.
External stakeholders:
•	Regulators, in relation to which COBIT® helps ensure an enterprise is compliant with applicable rules and regulations and has an adequate governance system in place to manage compliance.
•	Business partners, in relation to which COBIT® helps the enterprise to ensure that a business partner's operations are secure, reliable, and compliant with applicable rules and regulations.
•	IT vendors, in relation to which COBIT® helps the enterprise to ensure that IT vendors' operations are secure, reliable, and compliant with applicable rules and regulations.65
64 COBIT® 2019 Framework: Introduction and Methodology, pp. 23-27, © 2018 ISACA. All rights reserved. Used by permission.
65 Ibid., p. 15.
Management objectives are prioritized based on the prioritization of enterprise goals, which in turn are prioritized based on stakeholder drivers and needs. Alignment goals emphasize the alignment of IT efforts with the goals of the enterprise.66
COBIT® 2019 Goals Cascade: Stakeholder Drivers and Needs cascade to Enterprise Goals, which cascade to Alignment Goals, which cascade to Governance and Management Objectives.
The Enterprise Goals and the Alignment Goals are organized according to the Balanced Scorecard perspectives:
•	Financial
•	Customer
•	Internal Process (called "Internal" in COBIT® 2019)
•	Learning and Growth (called "Growth" in COBIT® 2019)
The Governance and Management Objectives are organized according to the five domains:
•	EDM ‒ Evaluate, Direct and Monitor for governance objectives
•	APO ‒ Align, Plan, and Organize for management objectives
•	BAI ‒ Build, Acquire, and Implement for management objectives
•	DSS ‒ Deliver, Service, and Support for management objectives
•	MEA ‒ Monitor, Evaluate, and Assess for management objectives
66 Ibid., p. 28.
Performance Management in COBIT® 2019
Performance management includes the activities and methods used to express how well the governance
and management systems and the components of an enterprise work, and if they are not achieving the
required level, how they can be improved. Performance management utilizes the concepts of capability levels and maturity levels.67
Performance management is organized in COBIT® 2019 according to the components that make up the enterprise's governance system over information and technology. Although COBIT® 2019 does not yet specifically address performance management issues for all of the components, to review, the components include:
•	Processes
•	Organizational structures
•	Principles, policies, and frameworks
•	Information
•	Culture, ethics, and behavior
•	People, skills, and competencies
•	Services, infrastructure, and applications
Those components for which performance management issues have been addressed include the following.
Managing the Performance of the Processes Component of the I & T Governance System
Governance and management objectives consist of several processes, and a capability level is assigned
to all process activities. The capability level is an expression of how well the process is implemented and is
performing. A process reaches a certain capability level when all the activities of that level are performed
successfully. Capability levels range from 0 (zero) to 5, as follows:
Level 0: Lack of any basic capability; incomplete approach to address governance and management purpose; may or may not be meeting the intent of any Process practices.
Level 1: The process more or less achieves its purpose through the application of an incomplete set of activities that can be characterized as initial or intuitive—not very organized.
Level 2: The process achieves its purpose through the application of a basic, yet complete, set of activities that can be characterized as performed.
Level 3: The process achieves its purpose in a much more organized way using organizational assets. Processes typically are well defined.
Level 4: The process achieves its purpose, is well defined, and its performance is (quantitatively) measured.
Level 5: The process achieves its purpose, is well defined, its performance is measured to improve performance, and continuous improvement is pursued.68
67 Ibid., p. 37.
68 COBIT® 2019 Framework: Governance and Management Objectives, pp. 19-20, © 2018 ISACA. All rights reserved. Used by permission.
Performance Management of the Organizational Structures Component of the I & T Governance System
Organizational structures (the primary decision-making entities within the enterprise) can be less formally
assessed according to criteria such as the following:
•	Successful performance of processes for which the organizational structure or role has accountability or responsibility.
•	Successful application of good practices for organizational structures.69
Performance Management of the Information Component of the I & T Governance System
No generally accepted method exists for assessing Information items. However, according to COBIT® 2019,
Information items can be less formally assessed using the information reference model from the previous
version of COBIT®, COBIT® 5: Enabling Information.70
According to COBIT® 5, the goals of the information component are organized into three dimensions of
quality: Intrinsic, Contextual, and Security/Accessibility.
1) Intrinsic – The extent to which data values are in conformance with the actual or true values.
2) Contextual – The extent to which information is applicable to the task of the information user and is presented in an intelligible and clear manner, recognizing that information quality depends on the context of use.
3) Security/Accessibility – The extent to which information is available or obtainable.
Each dimension is subdivided into several quality criteria. For example, a criterion of the Intrinsic dimension
is “accuracy,” a criterion of the Contextual dimension is “relevancy,” and a criterion of the Security/Accessibility dimension is “availability.”71
The performance of an information item is assessed by evaluating the extent to which the relevant quality
criteria are achieved.72
Performance Management of the Culture, Ethics, and Behavior Component of the I & T Governance
System
To manage the performance of the Culture, Ethics, and Behavior component, a set of desirable behaviors
for good governance and management of IT should be defined and levels of capability should be assigned
to each.73
Some examples of desirable behaviors from the COBIT® 2019 Framework: Governance and Management
Objectives publication are:
•	The culture and underlying values should fit the overall business strategy of the enterprise.
•	Find ways to speed up processes. Introduce a culture and behavior that support moving faster, perhaps by having more frequent strategy leadership meetings or by automating activities.74
•	Leaders must create a culture of continuous improvement in IT solutions and services.75
69 COBIT® 2019 Framework: Introduction and Methodology, p. 40, © 2018 ISACA. All rights reserved. Used by permission.
70 Ibid., p. 41.
71 COBIT® 5: Enabling Information, p. 31, © 2013 ISACA. All rights reserved. Used by permission.
72 COBIT® 2019 Framework: Introduction and Methodology, p. 42, © 2018 ISACA. All rights reserved. Used by permission.
73 Ibid., p. 43.
74 COBIT® 2019 Framework: Governance and Management Objectives, p. 71, © 2018 ISACA. All rights reserved. Used by permission.
75 Ibid., p. 196.
Focus Areas
A focus area is a governance topic, domain, or issue such as small and medium enterprises, information
security, DevOps, and cloud computing. A focus area can be addressed by governance and management
objectives.76
Managing the Performance of Focus Areas
Focus areas such as information security or DevOps are associated with maturity levels. Using the capability levels associated with the Processes component as a basis, a given maturity level is achieved if all
the processes within the focus area have achieved that particular capability level. Like the capability levels
associated with the Processes component, maturity levels range from 0 (zero) to 5.
The maturity levels used in managing the performance of focus areas are as follows:
Level 0: Incomplete — Work may or may not be completed toward achieving the purpose of governance and management objectives in the focus area.
Level 1: Initial — Work is completed, but the full goal and intent of the focus area are not yet achieved.
Level 2: Managed — Planning and performance measurement take place, although not yet in a standardized way.
Level 3: Defined — Enterprise-wide standards provide guidance across the enterprise.
Level 4: Quantitative — The enterprise is data driven, with quantitative performance improvement.
Level 5: Optimizing — The enterprise is focused on continuous improvement.77
76 COBIT® 2019 Framework: Introduction and Methodology, p. 22, © 2018 ISACA. All rights reserved. Used by permission.
77 Ibid., p. 40.
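Because a focus area reaches a given maturity level only when every process within it has reached that capability level, the focus-area maturity level can be thought of as the lowest capability level among its processes. The sketch below expresses that relationship; the process names and capability ratings are hypothetical.

```python
# Hypothetical capability levels (0-5) assessed for the processes in an information security focus area.
process_capability = {
    "Managed Security": 4,
    "Managed Security Services": 3,
    "Managed Risk": 3,
}

def focus_area_maturity(capabilities):
    # The focus area achieves only a level that every one of its processes has reached,
    # so its maturity level is the minimum capability level across those processes.
    return min(capabilities.values())

print("Information security maturity level:", focus_area_maturity(process_capability))  # 3
```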
Data Life Cycle
The data life cycle encompasses the period from creation of data and its initial storage through the time
the data becomes out of date or no longer needed and is purged. The stages of the data life cycle include
data capture, data maintenance, data synthesis, data usage, data analytics, data publication, data archival,
and data purging.
The stages do not describe sequential data flows, because data may pass through various stages several
times during its life cycle. Furthermore, data does not have to pass through all of the stages. However,
data governance challenges exist in all of the stages and each stage has distinct governance needs, so it is
helpful to recognize the various stages and some of the governance challenges associated with each.
•	Data capture is the process of creating new data values that have not existed before within the organization. Data can be captured through external acquisition, data entry, or signal reception.
   o External acquisition. Data can be acquired from an outside organization, often through contracts governing how the data may be used. Monitoring performance with the contracts is a significant governance challenge.
   o Data entry. Data can be created through entry by human operators or by devices that generate data. Data governance requires monitoring the accuracy of the input.
   o Signal reception. Data may be received by transmission, for example from sensors. Data governance challenges include monitoring the function of the devices and the accuracy of the data received.
•	Data maintenance is the processing of data before deriving any value from it, such as performing data integration. A governance issue is determining how best to supply the data to the stages at which data synthesis and data usage occur.
•	Data synthesis is the creation of new data values using other data as input. To "synthesize" something means to combine different things to make something new. Data synthesis is therefore combining data from different sources to create new data. Governance issues include concerns about data ownership and the need for citation, the quality and adequacy of the input data used, and the validity of the synthesized data.
•	Data usage is the application of the data to tasks, whether the data is used in support of the organization or used by others as part of a product or service the organization offers. A governance challenge with respect to data usage is whether the data can legally be used in the ways the users want to use it. Regulatory or contractual constraints on the use of the data may exist, and the organization must ensure that all constraints are observed.
•	Data analytics is the process of gathering and analyzing data in a way that produces meaningful information to aid in decision-making. As businesses become more technologically sophisticated, their capacity to collect data increases. However, the stockpiling of data is meaningless without a method of efficiently collecting, analyzing, and utilizing it for the benefit of the company. A governance issue with respect to data analytics is ensuring that the company's data is accurately recorded, stored, evaluated, and reported.
•	Data publication occurs when data is sent outside of the organization or leaves it in other ways. Periodic account statements sent to customers are an example of data publication, but data breaches also constitute data publication. The governance issue is that once data has been released it cannot be recalled, because it is beyond the reach of the organization. If published data is incorrect or if a data breach has occurred, data governance is needed in deciding how to deal with it.
•	Data archival is the removal of data from active environments and its storage in case it is needed again. No maintenance, usage, or publication occurs while the data is archived, but if it is needed, it can be restored to an environment where it will again be maintained, used, or published.
•	Data purging occurs at the end of the data's life cycle. Data purging is the removal of every copy of a data item from all locations within the organization. Purging should generally be done only of data that has been previously archived. Data governance considerations include development and maintenance of data retention and destruction policies that comply with all laws and regulations regarding record retention; conformance with established policies; confirmation that purging has been done properly; and documentation of data purged.
Records Management
Every organization should have a documented records management policy that establishes how records are to be maintained, identified, retrieved, and preserved, and when and how they are to be destroyed. The policy
should apply to everything defined by the organization as a “record,” which includes both paper documents
and data records. The concern for information systems is, of course, data records.
Although not every item of data will be determined to be a “record,” consideration in the policy should also
be given to management of data that is not actually “records,” such as drafts of documents. The records
management policy should identify the information that is considered records and the information that is
not considered records but that nevertheless should be subject to the guidance in the policy.
Factors to be considered in developing a records management policy include:
•	Federal, state, and local document retention requirements. U.S. federal requirements include Internal Revenue Service requirements for retaining income tax information and various employment laws and laws governing employee benefits. State and local requirements may also apply. Regulations and laws provide minimum records retention requirements, but those should not be regarded as guidance on when records must be destroyed. Decisions may be made to retain specific records longer than their required minimum periods due to other factors such as ongoing business use or internal audit requirements.
•	Requirements of the Sarbanes-Oxley Act of 2002. Section 802 of the Sarbanes-Oxley Act prohibits altering, destroying, mutilating, concealing, or falsifying records, documents, or tangible objects with the intent to obstruct, impede, or influence a potential or actual federal investigation or bankruptcy proceeding. Violation is punishable by fines and/or imprisonment for up to 20 years. Furthermore, accountants must maintain certain audit records or review work papers for a period of five years from the end of the fiscal period during which the audit or review was concluded. Section 1102 of the Act states that corruptly altering or destroying a record or other object with the intent to impair its integrity or availability for use in an official proceeding is also punishable with fines and/or imprisonment for up to 20 years.
•	Statute of limitations information. A statute of limitations is the time period during which an organization may sue or be sued or the time period within which a government agency can conduct an examination.
•	Accessibility. An important consideration with electronic records is hardware, software, and media obsolescence. Records can become inaccessible if they are in an obsolete format, so the records management policy must include a means either to migrate the records to new versions or to retain the old hardware and software so that the records can still be accessed. If the records are to be migrated to new formats, quality control procedures must be in place to ensure none of the content is lost or corrupted.
•	Records of records. The records management policy should establish the practice of maintaining an index of active and inactive records and their locations and of maintaining logs containing records of all purged data.
Needless to say, it is not enough to simply have a documented records management policy. The policy
must be followed consistently. However, if a lawsuit is pending or anticipated, all document and data destruction should cease. All documents and data should be preserved, even if they would otherwise be purged
under the policy.
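A fragment of a records-retention check is sketched below. It applies hypothetical retention periods by record type and illustrates the legal-hold rule from the paragraph above: nothing is purged, regardless of age, while litigation is pending or anticipated. The record types and retention periods are assumptions for illustration, not legal guidance.

```python
from datetime import date

# Hypothetical minimum retention periods, in years, by record type.
RETENTION_YEARS = {"invoice": 7, "audit_workpaper": 5, "marketing_draft": 2}

def eligible_for_purge(record_type, created, legal_hold, today=None):
    # A record may be purged only if its retention period has expired
    # AND no lawsuit is pending or anticipated (legal hold).
    today = today or date.today()
    if legal_hold:
        return False
    years_old = (today - created).days / 365.25
    return years_old >= RETENTION_YEARS[record_type]

print(eligible_for_purge("invoice", date(2010, 5, 1), legal_hold=False))  # True: older than 7 years
print(eligible_for_purge("invoice", date(2010, 5, 1), legal_hold=True))   # False: preserved under legal hold
```

In practice, each purge decision and the purge itself would also be logged to support the "records of records" requirement described above.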
Electronic data management should be supported by top management, and responsibility for custody of the
records should be assigned.
Benefits of a Documented and Well-Executed Records Management Policy
•	Locating documents that are needed is easier and more efficient.
•	In the event of an examination, investigation, or lawsuit, having a documented records management policy that is consistently followed demonstrates a legitimate and neutral purpose for having previously destroyed any requested documents or data purged in accordance with the policy.
•	Having a documented policy that is consistently followed increases the probability of the organization's compliance with all federal, state, and local regulations relating to document and data retention and destruction.
•	Records will be adequately protected and their accessibility maintained.
•	Records that are no longer needed or that are no longer of value will be destroyed at the appropriate time.
Cyberattacks
Cybersecurity comprises the processes and methods of protecting Internet-connected networks, devices, and data from attacks. Cyberattacks are usually made to access, change, or destroy data; to interrupt normal business operations; or, as with ransomware, to extort payment.
Some specific cybersecurity risks include the following:
•	Copyright infringement is the theft and replication of copyrighted material, whether intellectual property, such as computer programs or textbooks, or entertainment property such as music and movies.
•	Denial of Service (DOS) attacks occur when a website or server is accessed so frequently that legitimate users cannot connect to it. Distributed Denial of Service (DDOS) attacks use multiple systems in multiple locations to attack one site or server, which makes stopping or blocking the attack difficult. Hackers gain access to unsecured Internet of Things (IoT)78 devices on which the default passwords have not been changed and use malware tools to create a botnet made up of innumerable devices. A botnet is a network of devices connected through the Internet that are all infected with the malware. A hacker or a group of hackers controls the botnet without the owners' knowledge. The hacker directs the botnet to continuously and simultaneously send junk Internet traffic to a targeted server, making it unreachable by legitimate traffic. Sophisticated firewalls and network monitoring software can help to mitigate DOS and DDOS attacks.
•	Buffer overflow attacks are designed to send more data than expected to a computer system, causing the system to crash, permitting the attacker to run malicious code, or even allowing for a complete takeover of the system. Buffer overflow attacks can be easily prevented by software that adequately checks the amount of data received, but this common preventive measure is often overlooked during software development.
78 Internet of Things (IoT) devices are products used in homes and businesses that can be controlled over the Internet by the owner. Examples are door locks, appliances, lights, energy- and resource-saving devices, and other devices that can be controlled either remotely or on-premises using voice commands.
•	Password attacks are attempts to break into a system by guessing passwords. Brute force attacks use programs that repeatedly attempt to log in with common and/or random passwords, although most modern systems effectively prevent brute force attacks by blocking login attempts after several incorrect tries. Two-factor authentication can also prevent brute force attacks from being successful because a password alone will not allow access to the system. Systems should include sophisticated logging and intrusion-detection systems to prevent password attacks, and password requirements should be in place to reject short or basic passwords such as "password" or "123456."
•	Phishing is a high-tech scam that uses spam email to deceive people into disclosing sensitive personal information such as credit card numbers, bank account information, Social Security numbers, or passwords. Sophisticated phishing scams can mock up emails to look like the information request is coming from a trusted source, such as a state or local government, a bank, or even a coworker. The best defense against phishing is awareness and common sense. Recipients should be wary about any email that requests personal or financial information and should resist the impulse to click on an embedded link.
•	Malware broadly refers to malicious software, including viruses. Spyware can secretly gather data, such as recording keystrokes in order to harvest banking details, credit card information, and passwords. Other types of malware can turn a PC into a bot or zombie, giving hackers full control over the machine without alerting the owner to the problem. Hackers can then set up "botnets," which are networks consisting of thousands or millions of "zombies," which can be made to send out spam emails, emails infected with viruses, or, as described above, junk traffic that causes distributed denial of service attacks.
•	Ransomware is particularly dangerous malware that encrypts data on a system and then demands a ransom (a payment) for decryption. If the ransom is not paid, the data is lost forever. The most common way that ransomware is installed is through a malicious attachment or a download that appears to come from a trusted source. The primary defenses against ransomware are avoiding installing it in the first place and having data backups.
•	"Pay-per-click" abuse refers to fraudulent clicks on paid online search ads (for example, on Google or Bing) that drive up the target company's advertising costs. Furthermore, if there is a set limit on daily spending, the ads are pushed off the search engine site after the maximum-clicks threshold is reached, resulting in lost business as well as inflated advertising costs. Such scams are usually run by one company against a competitor.
Some cybercrime is conducted in a more personal, in-person fashion. Through social engineering, an
individual may pose as a trustworthy coworker, perhaps someone from the company’s IT support division,
and politely ask for passwords or other confidential information. Dumpster diving is the act of sifting
through a company’s trash for information that can be used either to break into its computers directly or
to assist in social engineering.
Outsiders are not the only threat to the security of a company’s systems and data. Insiders can also be a
source of security risks. For example, disgruntled employees or those who are planning to work for a
competitor can sabotage computer systems or steal proprietary information.
Defenses Against Cyberattack
Encryption is an essential protection against hacking. Encryption protects both stored data and data that
could be intercepted during transmission. If a hacker gains access to encrypted files, the hacker is not able
to read the information.
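A minimal sketch of encrypting data at rest follows, using the Fernet interface from the widely used third-party cryptography package (an assumption about the environment; any comparable encryption library could be used). Without the key, the stored ciphertext is unreadable to a hacker.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the secret key must itself be stored and managed securely
cipher = Fernet(key)

# Encrypt sensitive data before storing or transmitting it.
token = cipher.encrypt(b"Customer 4411: credit card on file ending 9042")
print(token)                     # unreadable ciphertext if intercepted or stolen

# Only a holder of the key can recover the plaintext.
print(cipher.decrypt(token).decode())
```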
Ethical hackers are network and computer experts with hacking skills who attempt to attack a secured
system. They use the same methods as are used by malicious hackers, but if they find vulnerabilities that
could be exploited by a malicious hacker, they report them to the owner or manager so they can be remedied. Ethical hacking is also known as intrusion testing, penetration testing, or vulnerability testing.
Advanced firewalls are firewalls that perform traditional firewall protection but have other capabilities,
as well. Traditional firewalls use packet filtering to control network access by monitoring outgoing and
incoming packets. Packets are used in networking to carry data during transmission from its source to its
destination. They have a header that contains information that is used to help them find their way and to
reassemble the data after transmission. Traditional firewalls permit or deny access based on the information
in the header about the source and the destination Internet Protocol (IP) addresses, protocols, and ports.
Advanced firewalls are called Next Generation Firewalls (NGFW). In addition to the traditional firewall
protection, advanced firewalls can filter packets based on applications and can distinguish between safe
applications and unwanted applications because they base their detection on packet contents rather than
on information in packet headers. Thus, they are able to block malware from entering a network, which is
something that traditional firewalls cannot do. However, both traditional and advanced firewalls rely on hardware. For cloud-based (Software as a Service, or SaaS) applications, a cloud-based firewall (Firewall as a Service) is needed.
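Traditional packet filtering can be pictured as a list of rules matched against the header fields of each packet. The sketch below is a highly simplified, hypothetical illustration of that idea; real firewalls operate at the network level rather than in application code, and the rules shown are invented for the example.

```python
# Hypothetical rule list: each rule matches header fields and says whether to allow the packet.
RULES = [
    {"protocol": "TCP", "dest_port": 443, "action": "allow"},   # HTTPS traffic
    {"protocol": "TCP", "dest_port": 22,  "action": "deny"},    # block inbound SSH
]
DEFAULT_ACTION = "deny"  # anything not explicitly allowed is denied

def filter_packet(header):
    # Compare the packet header with each rule in order; the first match wins.
    for rule in RULES:
        if header["protocol"] == rule["protocol"] and header["dest_port"] == rule["dest_port"]:
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet({"protocol": "TCP", "dest_port": 443, "source_ip": "203.0.113.9"}))  # allow
print(filter_packet({"protocol": "TCP", "dest_port": 22,  "source_ip": "203.0.113.9"}))  # deny
```

A next-generation firewall would additionally inspect the packet contents (the application data), not just these header fields.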
Access Controls
Access controls provide additional defenses against cyberattack. Access controls include logical access
controls and physical access controls.
Logical Access Controls
Companies need to have strict controls over access to their proprietary data. Poor data oversight can leave
a company vulnerable to accidents, fraud, and malicious parties who manipulate equipment and assets.
Logical security focuses on who can use which computer equipment and who can access data.
Logical access controls identify authorized users and control the actions that they can perform.
To restrict data access only to authorized users, one or more of the following strategies can be adopted:
1) Something you know
2) Something you are
3) Something you have
Something You Know
User IDs and passwords are the most common "something you know" way of authenticating users. Security software can be used to encrypt passwords, require changing passwords after a certain period of time, and require passwords to conform to a certain structure (for example, minimum length, no dictionary words, restrictions on the use of symbols). Procedures should be established for issuing, suspending, and closing user accounts, and access rights should be reviewed periodically.
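The structural requirements mentioned above can be enforced programmatically. The following sketch checks a candidate password against a few hypothetical rules (minimum length, no common passwords, no dictionary words); actual requirements would come from the organization's own security policy.

```python
# Hypothetical policy parameters for illustration only.
MIN_LENGTH = 10
COMMON_PASSWORDS = {"password", "123456", "qwerty"}
DICTIONARY_WORDS = {"summer", "dragon", "welcome"}   # stand-in for a real dictionary list

def password_violations(candidate):
    # Return a list of the policy rules that the candidate password breaks.
    problems = []
    lowered = candidate.lower()
    if len(candidate) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if lowered in COMMON_PASSWORDS:
        problems.append("matches a commonly used password")
    if any(word in lowered for word in DICTIONARY_WORDS):
        problems.append("contains a dictionary word")
    return problems

print(password_violations("password"))       # too short and too common
print(password_violations("Tz8#r!vQ2m&x"))   # [] - acceptable under these hypothetical rules
```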
Something You Are
Biometrics is the most common form of “something you are” authentication. Biometrics can recognize
physical characteristics such as:
•	Iris or retina of the eyes
•	Fingerprints
•	Vein patterns
•	Faces
•	Voices
Biometric scanners can be expensive and are generally used only when a high level of security is required.
Something You Have
Some very high-security systems require the presence of a physical object to certify an authorized user’s
identity. The most common example of this “something you have” authentication is a fob, a tiny electronic
device that generates a unique code to permit access; for increased security, the code changes at regular
intervals. A lost fob may be inconvenient but not a significant problem because the fob by itself is useless.
Furthermore, a stolen fob can be remotely deactivated.
Two-Factor Authentication
Two-factor authentication requires two independent, simultaneous actions before access to a system is
granted. The following are examples of two-factor authentication:
•	In addition to a password, some systems require entering additional information known only to the authorized user, such as a mother's maiden name or the answer to another security question chosen by the authorized person. However, this security feature can be undermined if the secondary information can be obtained easily by an unauthorized third party.
•	Passwords can be linked to biometrics.
•	In addition to a password, a verification code is emailed or sent via text message that must be entered within a few minutes to complete the login.
•	A biometric scan and a code from a fob are combined to allow access.
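The emailed or texted verification code in the third example above can be sketched as follows: a random code is generated, sent through a separate channel, and accepted only within a short window. The code length and expiry time are assumptions made for the example.

```python
import secrets
from datetime import datetime, timedelta

CODE_LIFETIME = timedelta(minutes=5)   # hypothetical validity window

def issue_code():
    # Generate a random six-digit code and record when it expires.
    code = f"{secrets.randbelow(1_000_000):06d}"
    expires_at = datetime.now() + CODE_LIFETIME
    # In practice the code would be emailed or texted to the user here.
    return code, expires_at

def verify_code(entered, issued_code, expires_at):
    # The second factor succeeds only if the code matches and has not expired.
    return entered == issued_code and datetime.now() <= expires_at

code, expires_at = issue_code()
print(verify_code(code, code, expires_at))           # True within the window
print(verify_code("not-the-code", code, expires_at)) # False: wrong code
```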
Considerations when evaluating the effectiveness of a logical data security system include:
•	Does the system provide assurance that only authorized users have access to data?
•	Is the level of access for each person appropriate to that person's needs?
•	Is there a complete audit trail whenever access rights and data are modified?
•	Are unauthorized access attempts denied and reported?
Other User Access Considerations
Besides user authentication, other security controls related to user access and authentication to prevent
abuse or fraud include:
•	Automatic locking or logoff policies. Any login that is inactive for a specified period of time can automatically be logged out in order to limit the window of time available for someone to take advantage of an unattended system.
•	Logs of all login attempts, whether successful or not. Automatic logging of all login attempts can detect activities designed to gain access to an account by repeatedly guessing passwords. Accounts under attack can then be proactively locked in order to prevent unauthorized access.
•	Accounts that automatically expire. If a user needs access to a system only for a short period of time, the user's access to the system should be set to automatically expire at the end of that period, thus preventing open-ended access.
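Logging login attempts and proactively locking accounts under attack, as described above, can be sketched roughly as follows. The threshold of allowed failures is a hypothetical policy choice.

```python
from collections import defaultdict
from datetime import datetime

MAX_FAILED_ATTEMPTS = 5                 # hypothetical lockout threshold

failed_attempts = defaultdict(int)      # running count of consecutive failures per account
locked_accounts = set()
login_log = []                          # audit trail of every attempt, successful or not

def record_login_attempt(account, success):
    # Log every attempt, count consecutive failures, and lock accounts under attack.
    login_log.append((datetime.now(), account, "success" if success else "failure"))
    if account in locked_accounts:
        return "locked"
    if success:
        failed_attempts[account] = 0
        return "granted"
    failed_attempts[account] += 1
    if failed_attempts[account] >= MAX_FAILED_ATTEMPTS:
        locked_accounts.add(account)
        return "locked"
    return "denied"

for _ in range(6):
    result = record_login_attempt("jsmith", success=False)
print(result)            # "locked" after repeated failed guesses
print(len(login_log))    # all six attempts were logged
```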
Physical Access Controls
Physical access controls are used to secure equipment and premises. The goal of physical access controls
is to reduce or eliminate the risk of harm to employees and of losing organizational assets. Controls should
be identified, selected, and implemented based on a thorough risk analysis. Some common examples of
general physical security controls include:
• Walls and fences
• Locked gates and doors
• Manned guard posts
• Monitored security cameras
• Guard dogs
• Alarm systems
• Smoke detectors and fire suppression systems
Physical access to servers and networking equipment should be limited to authorized persons. Keys are
the least expensive way to manage access but also the weakest way because keys can be copied. A more
effective method is card access, in which a magnetically encoded card is inserted into or placed near a
reader. Card access also provides an audit trail that records the date, time, and identity of the person
(or at least of the card of the person) who entered. One significant limitation of card access, however, is
that a lost or stolen card can be used by anyone until it is deactivated.
Biometric access systems, discussed above as logical access controls, also serve as physical access controls and can be used when physical security needs to be rigorous. Biometric access systems use physical characteristics such as blood vessel patterns on the retina, handprints, or voice authentication to authorize access. In general, such systems have a low error rate. That said, no single system is completely error-free, so biometric access systems are usually combined with other controls.
Controls can also be designed to limit activities that can be performed remotely. For example, changes
to employee pay rates can be restricted to computers physically located in the payroll department. Thus,
even if online thieves managed to steal a payroll password, they would be prevented from changing pay
rates because they would not have access to the premises.
F.3. Technology-enabled Finance Transformation
Systems Development Life Cycle (SDLC)
The systems development life cycle was described previously in System Controls and Security Measures as
System and Program Development and Change Controls, and it will be briefly reviewed here.
1) Statement of objectives. A written proposal is prepared, including the need for the new system, the nature and scope of the project, and timing issues. A risk assessment is done to document security threats, potential vulnerabilities, and the feasible security and internal control safeguards needed to mitigate the identified risks.
2) Investigation and feasibility study of alternative solutions. Needs are identified and the feasibility of alternative solutions is assessed, including the availability of required technology and resources. A cost-benefit analysis is done.
3) Systems analysis. The current system is analyzed to identify its strong and weak points, and the information needs for the new system are determined, such as reports to be generated, database needs, and the characteristics of its operation.
4) Conceptual design. Systems analysts work with users to create the design specifications and verify them against user requirements.
5) Physical design. The physical design involves determining the workflow; what programs and controls are needed and where; and the needed hardware, backups, security measures, and data communications.
6) Development and testing. The design is implemented into source code, the technical and physical configurations are fine-tuned, and the system is integrated and tested. Data conversion procedures are developed.
7) System implementation and conversion. The site is prepared, equipment is acquired and installed, and conversion procedures, including data conversion, are implemented. System documentation is completed, procedures are developed and documented, and users are trained.
8) Operations and maintenance. The system is put into a production environment and used to conduct business. Continuous monitoring and evaluation take place to determine what is working and what needs improvement. Maintenance includes modifying the system as necessary to adapt to changing needs, replacing outdated hardware as necessary, upgrading software, and making needed security upgrades.
If follow-up studies indicate that new problems have developed or that previous problems have recurred,
the organization begins a new systems study.
Business Process Analysis
Business process reengineering and redesign, discussed in Section D, Cost Management,79 relies on technology to accomplish its objectives. When a business process needs to be reengineered or redesigned, new
information systems will be needed.
For example, if the same input for a process is being keyed in more than once, such as into a database and
also into a spreadsheet, that process needs to be redesigned so that the input is keyed in only one time.
Not only is the duplication of effort wasteful, but inputting the information multiple times opens the process
to the risk that the data will be keyed in differently each time. However, in order to successfully accomplish
the redesign of the process, the information system being used will need to be redesigned, as well.
79 See Business Process Reengineering and Accounting Process Redesign.
Any business operation, or process, consists of a group of related tasks or events that has a particular
objective. Business process analysis is used to analyze a business process to determine the specific way
the process is presently being accomplished from beginning to end. Business process analysis can provide
information needed to monitor efficiency and productivity, locate process weaknesses, pinpoint potential
improvements, and determine whether the potential improvements should be carried out.
Business process analysis involves the following steps:
1) Determine the process to be analyzed. Processes that influence revenue, expenses, the end product, and other critical functions should be analyzed periodically, as should processes that appear to be underperforming. A newly implemented process might also be analyzed to determine whether it is working as intended. Determine the beginning and ending points of the process to be analyzed.
2) Collect information about the process that will be needed to analyze it. Go through the documentation, interview the people involved, and do any other research necessary to answer any questions that arise.
3) Map the process. Business process mapping is visualizing the whole process from start to finish to better understand the various roles and responsibilities involved. Mapping makes it easier to see the big picture, what is working and what is not working, and where the risks are. A flowchart can be created for this step, or one of several software solutions, called business process analysis tools, can be used for business process mapping.
4) Analyze the process. For example, determine the most important components and whether they could be improved. Look for any delays or other problems and determine whether they can be fixed. Look for ways to streamline the process so it uses fewer resources.
5) Determine potential improvements. Make recommendations for ways to improve the process. For example, determine whether incremental changes are needed and, if so, what they are, or whether the process needs to be completely reengineered. Business process analysis tools can be an important part of this step because they can be used to model changes to the process and prepare visuals.
Robotic Process Automation (RPA)
Robotic process automation (RPA), a type of artificial intelligence (see next topic), is not the same thing as
the use of industrial robots. Robotic process automation software automates repetitive tasks by interacting
with other IT applications to execute business processes that would otherwise require a human. RPA software can communicate with other systems to perform a vast number of repetitive jobs, and it can operate
around the clock with no human errors. The automation of the repetitive part of a job frees up employees
to do other things. RPA software cannot replace an employee, but it can increase the employee’s productivity.
For example, RPA can be used when a change in policy necessitates a change in processing that would
otherwise require additional employee time to implement or when sales growth causes changes in a system
that would require either costly integration with another system or employee intervention.
The RPA software is not part of the organization’s IT infrastructure. Instead, it interacts with the IT infrastructure, so no change is needed to the existing IT systems. Thus, RPA allows the organization to automate
what would otherwise be a manual process without changing the existing systems.
Note: Robotic process automation allows users to create their own robots that can perform high-volume, repeatable tasks of low complexity faster and more accurately than humans can.
The software robots, also called “clients” or “agents,” can log into applications, move files, copy and paste
items, enter data, execute queries, do calculations, maintain records and transactions, upload scanned
documents, verify information for automatic approvals or rejections, and perform many other tasks.
• RPA can be used to automate portions of transaction reporting and budgeting in the accounting area.
• RPA can automate manual consolidations of financial statements, leaving the accountants more time to follow up on items that require investigation, perhaps because of significant variances.
• Financial institutions can use RPA to automate account openings and closings.
• Insurance companies can use it to automate claims processing.
• RPA can be used in supply chain management for procurement, automating order processing and payments, monitoring inventory levels, and tracking shipments.
Any task that is high volume, rules-driven, and repeatable qualifies for robotic process automation. The
RPA software can create reports and exception alerts so employees or management know when to get
involved. The result can be cost savings, a higher accuracy rate, faster cycle times, and improved scalability
if volumes increase or decrease. Employees can focus on more value-added activities.
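Commercial RPA tools are configured through visual interfaces rather than written as code, but the logic they execute is rules-driven. The hypothetical Python sketch below illustrates that logic: a fixed rule is applied to every record, routine items are processed automatically, and exceptions are flagged for a person. The invoice fields and the approval limit are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    number: str
    amount: float
    po_matched: bool          # whether a matching purchase order was found

APPROVAL_LIMIT = 5_000.00     # hypothetical business rule

def process(invoices):
    """Apply the same rules to every invoice; route anything unusual to a human."""
    approved, exceptions = [], []
    for inv in invoices:
        if inv.po_matched and inv.amount <= APPROVAL_LIMIT:
            approved.append(inv.number)          # high-volume, repeatable case handled automatically
        else:
            exceptions.append(inv.number)        # flagged for human review
    return approved, exceptions

approved, exceptions = process([
    Invoice("INV-001", 1_200.00, True),
    Invoice("INV-002", 9_800.00, True),
    Invoice("INV-003", 300.00, False),
])
print("Auto-approved:", approved)    # ['INV-001']
print("Needs review:", exceptions)   # ['INV-002', 'INV-003']
```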
Benefits of Robotic Process Automation
• Developing a process in RPA software does not require coding knowledge. RPA software usually has "drag-and-drop" functionality and simple configuration wizards that the user can employ to create an automated process. Some RPA solutions can be used to define a process by simply capturing a sequence of user actions.
• It enables employees to be more productive because they can focus on more advanced and engaging tasks, resulting in lower turnover and higher employee morale.
• It can be used to ensure that business operations and processes comply with regulations and standards.
• The tasks performed can be monitored and recorded, creating valuable data and an audit trail to further help with regulatory compliance as well as to support process improvement.
• Once an RPA process has been set up, the process can be completed much more rapidly.
• Robotic process automation can result in cost savings.
• It can help provide better customer service by automating customer service tasks. Customer service personnel can make use of it, or in some cases, RPA can even be used to converse with customers, gathering information and resolving their queries faster and more consistently than a person could.
• Low-volume or low-value processes that would not be economical to automate via other means can be automated using RPA.
• Business process outsourcing providers can use RPA tools to lower their cost of delivery or to offer "robots-as-a-service."
• Robots follow rules consistently, do not need to sleep, do not take vacations, do not get sick, and do not make typographical errors.
Limitations of Robotic Process Automation
• Robots are not infallible. Like any machine, their reliability is not 100%. Poor quality data input can cause exceptions, and their accuracy can be affected by system changes.
• Robots cannot replicate human reasoning. RPA software can mimic human behavior in the way it interacts with application user interfaces, but it can only follow highly methodical instructions.
• Robots have no "common sense." If a flaw in the instructions creates an error that would be obvious to a human, the robot will continue to follow the instructions provided without deviation, and the error may be replicated hundreds or thousands of times before it is recognized by a human. Then, correcting all the incidents of the error could be very difficult (unless the errors could be corrected using the same automated tools).
• Because RPA can be used to automate processes in a "noninvasive" manner (in other words, without changing the IT system), management may be tempted to deploy RPA without relying on assistance from the IT department. However, although RPA can be deployed without involving the IT department, doing so may lead to unexpected problems. The IT department needs to be involved in the effort so the deployment is stable.
Artificial Intelligence (AI)
Artificial intelligence is a field in computer science dedicated to creating intelligent machines, especially
computers, that can simulate human intelligence processes. AI uses algorithms, which are sets of step-by-step instructions that a computer can execute to perform a task. Some AI applications are able to learn
from data and self-correct, according to the instructions given.
Artificial intelligence is categorized as either weak AI, also called narrow AI, or strong AI, also called
artificial general intelligence (AGI).
• Weak AI is an AI system that can simulate human cognitive functions; although it appears to think, it is not actually conscious. A weak AI system is designed to perform a specific task, is "trained" to act on the rules programmed into it, and cannot go beyond those rules.
o Apple's Siri80 voice recognition software is an example of weak AI. It has access to the whole Internet as a database and is able to hold a conversation in a narrow, predefined manner; but if the conversation turns to things it is not programmed to respond to, it presents inaccurate results.
o Industrial robots and robotic process automation are other examples of weak AI. Robots can perform complicated actions, but they can perform only in situations they have been programmed for. Outside of those situations, they have no way to determine what to do.
• Strong AI is equal to human intelligence and exists only in theory. A strong AI system would be able to reason, make judgments, learn, plan, solve problems, communicate, create and build its own knowledge base, and program itself. A strong AI system would be able to find a solution for an unfamiliar task without human intervention. It could theoretically handle all the same work that a human could, even the work of a highly-skilled knowledge worker.
Artificial intelligence is increasingly being used in administrative procedures and accounting. Robotic process automation, covered in the previous topic, is one application of AI. Other applications are digital
assistants powered by AI and speech recognition (such as Siri), machine vision, and machine learning.
80 Siri is a trademark of Apple Inc., registered in the U.S. and other countries.
Digital assistants have become standard in smartphones and for controlling home electronics, and their
use has expanded into enterprise applications, as well. Some Enterprise Resource Planning systems incorporate digital assistants. The Oracle Cloud application includes functionality for enterprises to create
chatbots and virtual assistants. A chatbot is software that can conduct a conversation via auditory or text
methods. Chatbots are used, for example, in customer service and to gather information.
Machine vision includes cameras, image sensors, and image processing software. It can automate industrial processes such as quality inspections by enabling robots to “see” their surroundings. Machine vision is
also used in non-industrial settings such as surveillance and medical applications. It is increasingly being
used in administrative and accounting applications, as well.
• Machine vision can be used to analyze satellite imagery for several purposes.
o Insurance agents can use it to verify property information provided by existing clients or, prior to providing an insurance quote to a new client, to identify physical features of a property such as the roof condition and to validate property features such as building size, thereby reducing inspection costs.
o Investment firms can use it to determine economic trends and forecast retail sales based on the number of cars in a retail parking lot on an hourly basis.
o Financial institutions can monitor the status of construction on projects for construction lending purposes.
o Businesses making investments in projects can use it to assess the degree to which a project is complete for accounting purposes and for monitoring and management.
• Machine vision can be used to automate document data extraction.
o Businesses can assess large numbers of incoming paper documents or forms to extract the information from them. When documents are fed to the system, the software can identify each document as to its type and sort the documents to be forwarded to the appropriate processing group.
o Incoming paper documents can be digitized for review by human employees, eliminating the need for manual data entry. Instead of doing the manual data input, the human employees can spend their time reviewing and ensuring the accuracy of fields entered by the machine learning software. The machine vision can even "read" handwritten text. When data extraction is done manually, in many cases not all of the data is input; only the most important data points are extracted. With machine vision, the resulting data is complete and organized, and greater insights can be gained from data analytics.
Machine learning is another aspect of artificial intelligence being put to use in the accounting area. In
machine learning, computers can learn by using algorithms to interpret data in order to predict outcomes
and learn from successes and failures. Computers can “learn” to perform repeatable and time-consuming
jobs such as the following.
• Checking expense reports. Computers can learn a company's expense reimbursement policies, read receipts, and audit expense reports to ensure compliance. The computer can recognize questionable expense reimbursement claims and forward them to a human to review.
• Analyzing payments received on invoices. When a customer makes a payment that needs to be applied to multiple invoices or that does not match any single invoice in the system, accounts receivable staff might need to spend time figuring out the proper combination of invoices to clear or may need to place a call to the customer, requiring considerable time and effort. However, a "smart machine" can analyze the possible invoices and match the payment to the right combination of invoices (see the sketch following this list). Or, if the payment is short (is less than the amount due), the computer can apply the short payment and automatically generate an invoice for the remaining amount without any human intervention.
• Risk assessment. Machine learning can be used to compile data from completed projects to be used to assess risk in a proposed project.
• Data analytics. Using available data, machines can learn to perform one-time analytical projects such as how much the sales of a division have grown over a period of time or what the revenue from sales of a specific product was during a period of time.
• Bank reconciliations. Machines can learn to perform bank reconciliations.
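As a simple illustration of the payment-matching task described in the list above, the following Python sketch searches for a combination of open invoices whose total equals a payment received. A real "smart machine" would rely on learned patterns rather than brute force, and the invoice numbers and amounts here are hypothetical.

```python
from itertools import combinations

def match_payment(payment, open_invoices, tolerance=0.01):
    """Return the invoice numbers whose amounts sum to the payment, or None if no combination fits."""
    items = list(open_invoices.items())
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            if abs(sum(amount for _, amount in combo) - payment) <= tolerance:
                return [number for number, _ in combo]
    return None                                   # no match: route to accounts receivable staff

open_invoices = {"INV-101": 250.00, "INV-102": 480.00, "INV-103": 1_270.00}
print(match_payment(730.00, open_invoices))       # ['INV-101', 'INV-102']
print(match_payment(999.99, open_invoices))       # None -> human follow-up needed
```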
AI-enabled robots will not replace accountants, but they will substantially transform what accountants do.
When machines are able to do the repetitious work of calculating, reconciling, transaction coding, and
responding to inquiries, accountants can focus less on tasks that can be automated and more on work such
as advisory services that can be done only by humans, thereby increasing their worth in an organization.
Accountants will need to monitor the interpretation of the data processed by AI to ensure that it continues
to be useful for decision making. Accountants will need to embrace AI, keep their AI and analytical skills
current, and be adaptive and innovative in order to remain competitive.
Considerations in Instituting Artificial Intelligence
• Processes should be re-imagined where possible, rather than just using the AI to replicate existing processes.
• Activities to be performed by AI should be those that are standardized and not often changed.
• Processes that are automated should be fully documented.
• Data quality, both input and output, must be reviewed. Potential exceptions and errors requiring human intervention must be identified and investigated.
Cloud Computing
Cloud computing is a method of essentially outsourcing the IT function. It is a way to increase IT capacity
or add capabilities without having to invest in new infrastructure or license new software.
The National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce defines
cloud computing as follows:
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers,
storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.81
Thus, cloud computing means the use of business applications offered over the Internet. Cloud computing
resources include data storage, infrastructure and platform (that is, hardware and operating system), and
application software. Cloud service providers offer all three types of resources.
81 The NIST Definition of Cloud Computing, Special Publication (NIST SP) Report Number 800-145, Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, U.S. Department of Commerce, Gaithersburg, MD, September 2011, p. 2, https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf, accessed April 22, 2019.
Software as a Service (SaaS) is defined by NIST as follows:
The capability provided to the consumer is to use the provider’s applications running on a
cloud infrastructure. The applications are accessible from various client devices through
either a thin client interface, such as a web browser (e.g., web-based email), or a program
interface. The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration
settings.82
In other words, SaaS is software that has been developed by a cloud provider for use by multiple businesses
(called multi-tenant use), and all business customers use the same software. Applications available as
SaaS applications include enterprise resource planning (ERP), customer relationship management (CRM),
accounting, tax and payroll processing and tax filing, human resource management, document management, service desk management, online word processing and spreadsheet applications, email, and many
others.
Cloud computing also includes Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
NIST’s definition of Platform as a Service is
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries,
services, and tools supported by the provider. The consumer does not manage or control
the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for
the application-hosting environment.83
If a company uses Platform as a Service, the company deploys its own applications to the cloud using the
cloud provider’s operating systems, programming languages, libraries, services, and tools. PaaS services
include operating systems, database solutions, Web servers, and application development tools.84
Infrastructure in the context of cloud computing is both the hardware resources that support the cloud
services being provided, including server, storage, and network components, and the software deployed. 85
The definition of Infrastructure as a Service (IaaS) according to NIST is
The capability provided to the consumer is to provision processing, storage, networks, and
other fundamental computing resources where the consumer is able to deploy and run
arbitrary software, which can include operating systems and applications. The consumer
does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications and possibly limited control of select
networking components (e.g., host firewalls).86
A company using Infrastructure as a Service is provided with physical and virtual processing, storage,
networks, and other computing resources. The company can use the infrastructure to run software and
operating systems. Although the company does not manage or control the cloud infrastructure, it does have
control over the operating systems, storage, and deployed applications it uses, and it may have some
control over things like configuration of a host firewall. Examples of Infrastructure as a Service include
storage servers, network components, virtual machines, firewalls, and virtual local area networks. 87
82 Ibid.
83 Ibid., pp. 2-3.
84 Moving to the Cloud, Joseph Howell, Strategic Finance magazine, June 2015, © Institute of Management Accountants, https://sfmagazine.com/post-entry/june-2015-moving-to-the-cloud/, accessed April 22, 2019.
85 The NIST Definition of Cloud Computing, Special Publication (NIST SP) Report Number 800-145, Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, U.S. Department of Commerce, Gaithersburg, MD, September 2011, p. 2, note 2, https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf, accessed April 22, 2019.
86 Ibid., p. 3.
87 Moving to the Cloud, Joseph Howell, Strategic Finance magazine, June 2015, © Institute of Management Accountants, https://sfmagazine.com/post-entry/june-2015-moving-to-the-cloud/, accessed April 22, 2019.
Benefits of Cloud Computing, SaaS, PaaS, and IaaS
• Users pay for only what they use, either on a periodic basis or on a usage basis. Thus, cloud computing is scalable: a firm can quickly increase or decrease the scale of its IT capability.
• Since the provider owns and operates the hardware and software, a user organization may be able to decrease its investment in its own hardware and software.
• The provider keeps the software updated, so the user organizations do not need to invest in upgrades or be concerned with applying them.
• Applications and data resident in the cloud can be accessed from anywhere, from any compatible device.
• Technology available in the cloud can be leveraged in responding to new and existing requirements for external compliance reporting, sustainability and integrated reporting, internal management reporting, strategic planning, budgeting and forecasting, performance measurement, risk management, advanced analytics, and many others.
• Cloud technology can be used to free up accountants so they can handle higher-value activities and streamline lower-value processes.
• The cloud can enable the CFO to move into a more strategic role instead of spending time on transactional activities.
• The cloud can provide greater redundancy of systems than an on-site IT department may be able to offer, particularly for small to medium-sized entities that may not be able to afford backup systems.
• The cloud can offer companies of all sizes the advanced computing power needed for advanced analytics, something that otherwise only the largest companies would be able to afford. As a result, small to medium-sized businesses can be better positioned to compete with much larger competitors.
• Although security is a concern with the cloud, security is a concern with on-site IT as well. The cloud frequently can provide stronger infrastructure and better protection than an on-site IT department may be able to.
Limitations, Costs, and Risks of Cloud Computing, SaaS, PaaS, and IaaS
• Reliability of the Internet is a concern. If the Internet goes down, operations stop.
• The quality of the service given by the provider needs to be monitored, and the service contract needs to be carefully structured.
• Loss of control over data and processing introduces security concerns. Selection of the cloud vendor must include due diligence. The cloud vendor must demonstrate that it has the proper internal controls and security infrastructure in place, and the vendor's financial viability as a going concern needs to be ascertained. Furthermore, the vendor's internal controls over data security and its infrastructure, as well as its continued viability as a going concern, need to be monitored on an ongoing basis.
• Contracting with overseas providers may lead to language barrier problems and time-zone problems as well as quality control difficulties.
• The ability to customize cloud solutions is limited, and that may hamper management from achieving all that it hopes to achieve.
• Automatic backup services may be problematic because the timing of automatic backups may not be controllable by, or convenient for, the user.
• The cloud cannot overcome weak internal controls. People are the greatest area of weakness with both internal IT and with cloud technologies. Security awareness training, proper hiring procedures, good governance, and protection from malware continue to be necessary after a company moves to the cloud, just as they are when the IT is on-site.
• The company's data governance must be structured to cover the cloud and the risks inherent in it, such as employees downloading new applications without authorization.
• Expected cost savings may not materialize. An organization may find that managing its own IT internally, even with all of its attendant problems, is less expensive than using the cloud.
Blockchains, Distributed Ledgers, and Smart Contracts
The concept of the blockchain and the first cryptocurrency, bitcoin, was first presented in a 2008 paper titled Bitcoin: A Peer-to-Peer Electronic Cash System, ostensibly written by Satoshi Nakamoto; the Bitcoin network went live in 2009. No one
knows who Satoshi Nakamoto is, and it may be a pseudonym for one or more people.88
The blockchain was initially envisioned as a peer-to-peer system for sending online payments from one
party to another party without using a financial institution. While online payments are still important, blockchain technology has expanded and is now used in many other areas.
Blockchain Terminology
Blockchain – A blockchain is a public record of transactions in chronological order, more specifically “a
way for one Internet user to transfer a unique piece of digital property to another Internet user, such that
the transfer is guaranteed to be safe and secure, everyone knows that the transfer has taken place, and
nobody can challenge the legitimacy of the transfer.” 89 Thus, a blockchain is a continuously growing digital
record in the form of packages, called blocks, that are linked together and secured using cryptography. A
blockchain is a system of digital interactions that does not need an intermediary such as a financial institution to act as a third party to transactions. Many users write entries into a record of information, the
transactions are timestamped and broadcast, and the community of users controls how the record of information is updated. The digital interactions are secured by the network architecture of blockchain
technology. The blocks are maintained via a peer-to-peer network of computers, and the same chain of
blocks, called a ledger, is stored on many different computers. A blockchain can be public, private, or a
hybrid, which is a combination of public and private.
Public blockchain - A public blockchain is open to anyone, anyone can contribute data to the ledger, and
all participants possess an identical copy of the ledger. A public blockchain is also called a permissionless
ledger. The public blockchain has no owner or administrator, but it does have members who secure the
network, and they usually receive an economic incentive for their efforts.
Private blockchain – A private blockchain, also called a permissioned ledger, allows only invited participants to join the network. Permissioned ledgers are controlled by one or more network administrators. All
of the members—but only the members—have copies of the ledger. Private blockchains can be used by a
single entity, for example to manage its value chain.
Hybrid blockchain – A hybrid blockchain is a mix of a public and private blockchain. Some processes are
kept private and others are public. Participants in public or private networks are able to communicate with
each other, enabling transactions between them across networks. A hybrid blockchain can be used by a
supply chain group to control the supply chain.
88 For more information, see https://en.wikipedia.org/wiki/Satoshi_Nakamoto.
89 Marc Andreessen, quoted on https://www.coindesk.com/information/what-is-blockchain-technology, accessed April 25, 2019.
Encryption - Encryption is used to keep data on a blockchain private. Encryption makes use of an algorithm
that transforms the information into an unreadable format. A key is required to decrypt the data into its
original, readable format. A key for electronically or digitally encrypted information is a string of bits that an algorithm uses to lock and unlock the information.
Private key encryption - Private key encryption is a form of encryption in which a single private key can
both encrypt and decrypt information. While private key encryption is fast because it uses a single key, the
privacy of that secret key is essential because anyone with that key can decrypt the information. Because
of the risk of the private key being stolen or leaked, the private keys must be changed frequently.
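The following minimal Python sketch, which assumes the third-party cryptography package is installed, illustrates single-key (symmetric) encryption: the same key both encrypts and decrypts, so anyone who obtains the key can read the message. The message text is illustrative only.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the single secret key; whoever holds it can decrypt
cipher = Fernet(key)

token = cipher.encrypt(b"transfer 100 units to account 4711")   # unreadable ciphertext
print(token)
print(cipher.decrypt(token))       # the same key restores the original, readable message
```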
Public key infrastructure – Public key infrastructure uses two keys: private and public. The public key is
distributed, but the private key is never shared. Each public key has a corresponding private key, and the
two keys are mathematically linked based on very large prime numbers.
Private key/public key encryption – A blockchain uses public key/private key encryption. On a blockchain, the private key of a user is converted into a private address, and the user’s public key is converted
into a public address. On a blockchain, private key cryptography fulfills the need to authenticate users. A
private key is used to create a digital signature. Possession of a private key equals ownership and
also prevents its owner from having to share any more personal information than is needed for the transaction, which protects the user from hackers.
When a user wants to send funds, the sender “signs” the transaction with his or her private key, and the
software on the user’s computer creates the digital signature, which is sent out to the network for validation.
Validation includes confirming that the sender owns the funds being sent and that the sender has not
already sent those same funds to someone else. The network validates the transaction by entering the
digital signature and the sender’s public key (which is publicly known) into the software. If the signature
that was created with the user’s private key corresponds to the user’s public key, the program validates
the transaction.
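The following Python sketch, again assuming the third-party cryptography package, illustrates the sign-and-verify flow just described using the SECP256K1 elliptic curve (the curve used by Bitcoin). The transaction text is illustrative; a real blockchain client signs a structured transaction rather than a plain string.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())   # kept secret by the sender
public_key = private_key.public_key()                   # shared with the network

transaction = b"send 2.5 tokens from address A to address B"    # illustrative payload
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Any node can validate the signature using only the sender's public key.
try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature valid: transaction accepted")
except InvalidSignature:
    print("signature invalid: transaction rejected")
```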
Node – A node is a powerful computer running software that keeps the blockchain running by participating
in the relay of information. Nodes communicate with each other to spread information around the network.
A node sends information to a few nodes, which in turn relay the information to other nodes, and so forth.
The information moves around the network quickly.
Mining nodes, or “miners” – Miners are nodes (computers) on the blockchain that group outstanding
transactions into blocks and add them to the blockchain.
Distributed ledger – A distributed ledger is a database held by each node in a network, and each node
updates the database independently. Records are independently constructed and passed around the network by the various nodes—they are not held by any central authority. Every node on the network has
information on every transaction and then comes to its own conclusion as to whether each transaction is
authentic (that is, whether the people are who they say they are) and, if the transaction is a payment
transaction, whether the sender has enough funds to cover the payment. The data sent in a transaction
contains all the information needed to authenticate and authorize the transaction. When there is a consensus among the nodes that a transaction is authentic and should be authorized, the transaction is added to
the distributed ledger, and all nodes maintain an identical copy of the ledger.
Hash – Hashing is taking an input string of any length and producing an output of a fixed length using a hashing algorithm. For example, the Bitcoin network uses the hashing algorithm SHA-256 (Secure Hash Algorithm 256). Any input, no matter how big or small, always produces a fixed output of 64 hexadecimal characters, representing 256 bits, which is the source of the "256" in the name. The fixed output is the hash.
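The fixed-length property is easy to demonstrate with Python's standard library: whatever the size of the input, the SHA-256 digest is always 64 hexadecimal characters (256 bits).

```python
import hashlib

for message in (b"a", b"a much longer input string of arbitrary length"):
    digest = hashlib.sha256(message).hexdigest()
    print(len(digest), digest)    # always 64 hex characters, regardless of input size
```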
Block – A block is a record in a blockchain that contains and confirms many waiting transactions. It is a
group of cryptocurrency transactions that have been encrypted and aggregated into the block by a miner.
Each block has a header that contains (1) the details of the transactions in the block, including the senders’
and receivers’ addresses for each transaction and the amount of funds to be transferred from each sender
to each receiver, (2) the hash of the information in the block just preceding it (which connects it to the
blockchain), (3) a “nonce” (see below), and (4) the hash of the information in the block, including the
nonce.
Nonce - The nonce in a block is a random string of characters that is appended to the transaction information in the block before the block is hashed and it is used to verify the block. After a nonce is added to
the block, the information in the block, including the nonce, is hashed. The nonce needs to be a string of
characters that causes the hash of the whole block, including the nonce, to conform to a particular requirement: the hash of the block that results after the nonce is included must contain a certain number of leading
zeroes. If it does not, the nonce must be changed and the block hashed again.
The powerful mining nodes on the network all compete to determine the block’s nonce by trying different
values and re-hashing the block multiple times until one miner determines a nonce that results in the hash
of the block conforming to the requirement for the number of leading zeroes. The mining node on the
network that is the first to “solve the puzzle”—that is, calculate a nonce that results in a hash value for the
block that meets the requirement for the number of leading zeroes—receives a reward of a certain number
of units of the digital currency. (That is the source of the term “mining nodes.” Receiving the currency
reward is called “mining” the block because new digital currency is created and received.)
The blocks are hard to solve but easy to verify by the rest of the network once they are solved. Therefore,
after one mining node solves the block, the other nodes on the network check the work to determine
whether the block’s nonce and its hash have been correctly calculated. If the other nodes agree that the
calculations have been done correctly, the block is validated.
Determining a nonce that fulfills the requirement for a new block's hash usually takes about 10 minutes, and the qualifying nonce is called the "Proof of Work."
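The following Python sketch shows the nonce search in miniature. It uses a small difficulty (four leading zeros in the hash) so that it finishes in a moment; real networks demand far more leading zeros, which is why mining requires so much computing power. The block contents and the required difficulty are invented for illustration.

```python
import hashlib
from itertools import count

def mine(block_data: str, difficulty: int = 4):
    """Try nonces until the block's SHA-256 hash starts with `difficulty` leading zeros."""
    target = "0" * difficulty
    for nonce in count():                                            # 0, 1, 2, ...
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest                                     # the "proof of work"

nonce, digest = mine("prev_hash=0000abcd|tx1;tx2;tx3")               # illustrative block contents
print(nonce, digest)    # finding the nonce took many attempts; verifying it takes a single hash
```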
Note: “Proof of Work” is the consensus algorithm used on the Bitcoin blockchain. A proof of work is
a piece of data that satisfies certain requirements, is costly and time-consuming to produce, but is easy
for others to verify.
A consensus algorithm is a set of rules and steps for reaching a generally accepted decision among a group of participants. For blockchains, consensus algorithms are used to ensure that a
group of nodes on the network agree that all transactions are validated and authentic. The Proof of Work
(PoW) algorithm is the most widely used consensus algorithm, but other blockchains may use different
algorithms to accomplish the same thing.
Consensus algorithms are used to prevent double spending by a user, that is, spending the same digital
currency more than once. In a traditional means of exchange, financial intermediaries ensure that currencies are removed from one account and placed into another, so that the same money is not spent
more than once. However, digital currencies are decentralized and so there are no financial intermediaries. Consensus algorithms are used to make sure that all the transactions on the blockchain are
authentic and that the currency moves from one entity to another entity. All the transactions are stored
on the public ledger and can be seen by all participants, creating full transparency.
Confirmation – When a block has been validated on a blockchain, the transactions processed in it are
confirmed. Confirmation means the transactions in the block have been processed by the network. Transactions receive a confirmation when they are included in a block and when each subsequent block is linked
to them. Once a transaction has received just one confirmation it is not likely to be changed because
changing anything in the block would change the block’s hash value. Since each block’s hash value is part
of the hash of the following block on the blockchain, changing anything in one block would require re-hashing all of the blocks following the changed block. Re-hashing all of the following blocks would necessitate recalculating a proper nonce for each subsequent block in turn.
The general rule is that after six confirmations (in other words, six subsequent new blocks have been
created following an added block, requiring work of about 60 minutes, that is, 6 blocks at 10 minutes per
block), the work involved to make a change in the previously-added block makes doing so prohibitive. Each
added confirmation exponentially decreases the risk that a transaction will be changed or reversed.
If any change to a confirmed transaction is needed, a new transaction must be created.
Note: It is not impossible to change a previously-recorded transaction, but it would be very, very difficult
to do so. Because of the time that would be required to recalculate all the nonces in all the subsequent
blocks, it would be next to impossible to "catch up" to the most recent block, since new blocks would continually be added to the chain. Thus, transactions on a blockchain are considered immutable,
which means “not subject or susceptible to change.”
Immutability is important because if a transaction can be changed, a hacker could change the receivers’
addresses and divert the payments, thus stealing a significant amount of the currency. That has actually
happened to some of the smaller, less active, cryptocurrencies.
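The following Python sketch illustrates why confirmed blocks are effectively immutable: each block stores the hash of the block before it, so altering an old transaction breaks the chain of hashes and is immediately detectable. The blocks and nonces are invented for illustration, and no proof-of-work difficulty is enforced here.

```python
import hashlib

def block_hash(prev_hash, transactions, nonce):
    return hashlib.sha256(f"{prev_hash}|{transactions}|{nonce}".encode()).hexdigest()

# Build a toy three-block chain; each block records the hash of the block before it.
chain, prev = [], "0" * 64
for transactions, nonce in [("A pays B 5", 17), ("B pays C 2", 42), ("C pays A 1", 7)]:
    h = block_hash(prev, transactions, nonce)
    chain.append({"prev_hash": prev, "transactions": transactions, "nonce": nonce, "hash": h})
    prev = h

def verify(chain):
    prev = "0" * 64
    for i, block in enumerate(chain):
        recomputed = block_hash(block["prev_hash"], block["transactions"], block["nonce"])
        if block["prev_hash"] != prev or recomputed != block["hash"]:
            return f"chain breaks at block {i}"
        prev = block["hash"]
    return "chain is consistent"

print(verify(chain))                          # chain is consistent
chain[0]["transactions"] = "A pays B 500"     # tamper with an already-recorded transaction
print(verify(chain))                          # chain breaks at block 0
```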
Bitcoin and Other Cryptocurrencies
The first usage of a blockchain was to transfer virtual currency, or cryptocurrency. A virtual currency is a
digital representation of value that functions as a medium of exchange, a unit of account, and/or a store of
value.90 It is a piece of computer code that represents ownership of a digital asset.
Virtual currencies that have an equivalent value in real currency or that can act as a substitute for real
currency are called “convertible virtual currency.” Bitcoin was the first cryptocurrency and it is a convertible
virtual currency. It can be digitally traded between users and can be purchased for and exchanged into U.S.
dollars, Euros, and other currencies.91
The term “Bitcoin” is also used to refer to the protocol for the distributed ledger, that is, the distributed
network that maintains a ledger of balances held in bitcoin tokens by users on the network. The word
“bitcoin” with a lower-case “b” refers to the tokens, while the word “Bitcoin” with a capital “B” refers to the
Bitcoin protocol and the Bitcoin network.
Note: Do not confuse the "distributed ledger" of a blockchain with an accounting ledger in which double-entry accounting is performed. When a blockchain is used to make payments, the transactions in the
blockchain are single entry transactions for each entity and they represent the amount of cryptocurrency
to be paid by the payor and received by the receiver. Although the transactions are not double-entry
accounting entries for each entity, in a sense each currency transaction will be double entry since a
sender’s payment is equal to the receiver’s receipt.
Bitcoin tokens can be used to make payments electronically if both parties agree to use it. The system
enables payments in bitcoin tokens to be sent from one user to another user without having to pass through
a central authority. When a transaction is finalized by being added to the blockchain, the sender’s balance
of bitcoin is reduced and the receiver’s balance of bitcoin is increased.
Bitcoin was the first cryptocurrency, but over a thousand other cryptocurrencies are being used now, as
well—ether, bitcoin cash, litecoin, XRP, and binance coin, to name just a few.
Bitcoin and other cryptocurrencies are not backed by any government or commodity such as gold. They are
backed by technology and cryptographic proof and do not rely on a trusted third party such as a financial
institution to act as intermediary. Users of cryptocurrencies put their trust in a digital, decentralized, and
basically anonymous system that maintains its integrity through cryptography and peer-to-peer networking. Cryptocurrencies are unregulated, and no central bank can take corrective actions to protect the value
of a cryptocurrency in the event of a crisis. In the U.S., cryptocurrencies do not currently have legal tender
status. Nevertheless, many cryptocurrencies can be purchased, sold, and exchanged on cryptocurrency
exchanges such as Bitstamp, Wirex, and Coinbase. The values of cryptocurrencies in terms of other currencies such as the U.S. dollar or the Euro are extremely volatile, however.
90 "A CFTC Primer on Virtual Currencies," October 17, 2017, https://www.cftc.gov/LabCFTC/Primers/Index.htm, p. 4, accessed April 29, 2019.
91 Ibid.
Other uses of blockchain include:
• Private, permissioned blockchains can be used by financial institutions for trading, payments, clearings, settlements, and repurchase agreement transactions (short-term borrowing of securities).92
• Intercompany transactions where different ERP systems are in use can be streamlined using a blockchain.
• Procurement and supply chain operations on a blockchain can be used to optimize accounts payable or accounts receivable functions.
Note: Investor Alert
On April 26, 2019, the U.S. CFTC (Commodity Futures Trading Commission) and the U.S. SEC (Securities
and Exchange Commission) jointly issued an investor alert warning about websites purporting to operate
advisory and trading businesses related to digital assets. In some cases, the fraudsters claim to invest
customers’ funds in proprietary crypto trading systems or in “mining” farms, promising high guaranteed
returns with little or no risk. After an investor makes an investment, usually using a digital currency such
as bitcoin, sometimes the fraudsters stop communicating. Sometimes the fraudsters require the investor
to pay additional costs to withdraw fake “profits” earned on the investment, “profits” that are nonexistent.
Investors should beware of “guaranteed” high investment returns, complicated language that is difficult
to understand, unlicensed and unregistered firms, and any offer that sounds too good to be true.
Other investor alerts issued recently by the CFTC have included warnings about virtual currency "pump-and-dump" schemes,93 "IRS approved" virtual currency IRAs, fraudulent "Initial Coin Offerings" (ICOs), and various other fraudulent promotions.
For more information, see:
https://www.cftc.gov/PressRoom/PressReleases/7917-19
https://www.cftc.gov/Bitcoin/index.htm
92 "A CFTC Primer on Virtual Currencies," The U.S. Commodity Futures Trading Commission, October 17, 2017, p. 8, https://www.cftc.gov/LabCFTC/Primers/Index.htm, accessed April 29, 2019.
93 A "pump-and-dump" scheme involves aggressively promoting an investment by falsely promising outsize gains to come ("pumping" it), thus inflating demand for the investment. The inflated demand inflates the investment's market price. When the price reaches a certain point, the promoters sell their holdings ("dump" them) on the open market, the market price crashes, and the investors are left with large losses on a nearly worthless investment while the promoters make off with the capital gain.
Smart Contracts
Blockchain technology can be used for more than payments. Blockchains are being used to store all kinds
of digital information and to execute contracts automatically. A contract that has been digitized and uploaded to a blockchain is called a smart contract.
According to Nick Szabo, a computer scientist who envisioned smart contracts in 1996,
“A smart contract is a set of promises, specified in digital form, including protocols within
which the parties perform on these promises.”94
A smart contract is created by translating the terms and conditions of a traditional agreement into a computational code written by blockchain developers in a programming language. It is basically a set of coded
computer functions with a set of rules. The computer code is self-executing and performs an action at
specified times or based on the occurrence or non-occurrence of an action or event such as delivery of an
asset or a change in a reference rate. A simple example is a translation of “If X occurs, then Y makes a
payment to Z.”
A smart contract may include the elements of a binding contract (offer, acceptance, and consideration), or
it may simply execute certain terms of a contract. When a smart contract is uploaded to a blockchain, the
validity of the contract is checked and the required steps are enabled. After that, it is automatically executed.
A blockchain executes smart contracts in basically the same way as it executes transactions, and when the
contract calls for payments to be made, they are made automatically in the cryptocurrency of the particular
blockchain system being used. As with payments, the information in the contract is encrypted and hashed
to a standardized size. A smart contract in a block of data is linked to the previous block. The smart contract
is executed and payments are transferred according to its terms within the blockchain. The contract is
funded by the payor so it can perform transactions automatically according to the agreement.
Specific technologies are available, and a smart contract is executed on a blockchain-based platform specific
to the technology being used. For example, an Ethereum smart contract is executed on an Ethereum blockchain using the Ethereum Virtual Machine (EVM). Ethereum is a blockchain-based platform that was
developed with smart contracts in mind and the smart contracts on Ethereum are self-executing. The cryptocurrency used to transfer funds on Ethereum is called Ether. Other platforms and cryptocurrencies are
also available for smart contracts, but Ethereum is one of the more well-known platforms.
Following are some examples of the uses of smart contracts on blockchains.
• A blockchain can be used to ensure the authenticity of a product, so a purchaser of the product can be assured that the product he or she is buying is genuine and not a counterfeit. The information stored on the blockchain is unchangeable, so it is easier to prove the origins of a given product.
• It can be used to protect intellectual property. For example, artists can protect and sell music on a blockchain system. Artists who are due royalties from each sale of their material can receive the payments due them automatically through a smart contract as sales are made. They do not need to wait until the end of a period or wonder whether the publisher is being truthful about the number of sales made during the period.
• Blockchain and smart contracts have an important place in supply chain management, freight, and logistics, particularly in international transactions. Blockchain supply chain management does not rely on freight brokers, paper documents, or banks around the world to move goods and payments. The blockchain can provide secure digital versions of all documents that can be accessed by all the parties to a transaction. Defined events cause execution of the contract when the requirements are fulfilled. The smart contract can manage the flow of approvals and make the transfers of currency upon approval. For example, after goods have been received, the payment to the shipper takes place automatically. The transaction has complete transparency from beginning to end.
•
On-demand manufacturing can be performed by machines that are automated and running on a
blockchain network. A design would be sent to a locked machine along with an order for a specific
number of units. The contract would be executed, the machine would be unlocked to produce the
correct number of units, the goods would be produced, and the machine would be locked again.
•
An insurance contract can be in the form of a smart contract. For example, an orchard owner is
concerned about a freeze that could destroy the year’s fruit crop. An insurance company offers
insurance against a freeze through a self-executing smart contract. The orchard owner and the
insurance company agree to the contract terms and digitally sign a smart contract that is uploaded
to a blockchain. The orchard owner’s periodic premium payments are automated, and the blockchain checks a third party source such as the National Weather Service daily for a possible freeze
event. If a freeze event occurs, payment is sent automatically from the insurance company to the
orchard owner. (A simplified sketch of this logic appears after this list of examples.)
Note: The third party source for a smart contract is called an oracle.
•
Fish are being tracked from their sources to consumers in world markets and restaurants to prevent illegal fishing.
•
The title to real property can be transferred using a blockchain. The whole history of the property
owners and all the buy and sell transactions can be maintained in a blockchain dedicated to that
piece of property. The blockchain can provide consensus regarding the current owner of the property and the historic record of property owners. The blockchain technology makes changing the
historical records prohibitively difficult and costly. As a result, a title agency—a third party—is not
needed to research the property records each time a piece of property is sold. The owner of the
property can be identified using public key cryptography.
•
Various other public records are being maintained by state and local governments on blockchains.
•
The provenance of high-value investment items such as art can be tracked on a blockchain to help
protect against forgeries and fraud.
•
Blockchains can be used to store data on archaeological artifacts held in museums. If an artifact
or artifacts are stolen, the museum could release data using the hash codes to law enforcement
so they could prevent the export or sale of artifacts matching the descriptions of the stolen items.
Furthermore, if an artifact has a blockchain record, each time it crossed a border or was sold to a
new collector the record would be updated to show it was legitimate. Looted antiquities would have
no record, and faked records could not be manufactured.
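The insurance example above can be illustrated in code. The following Python sketch is not an actual smart contract (a deployed contract would be written in a blockchain language such as Solidity and executed on a platform such as Ethereum); it is a minimal, hypothetical simulation of the contract logic, in which an oracle supplies daily temperatures and a payout is released automatically when a freeze event is reported. The class names, temperature threshold, and amounts are assumptions made for illustration.

```python
# Minimal sketch (not real smart-contract code) of the freeze-insurance logic
# described above. The oracle, threshold, and amounts are hypothetical.

class FreezeOracle:
    """Stands in for a third-party data source such as a weather service."""
    def __init__(self, daily_low_temps_celsius):
        self.daily_low_temps_celsius = daily_low_temps_celsius

    def low_temperature_for_day(self, day):
        return self.daily_low_temps_celsius[day]


class FreezeInsuranceContract:
    """Simulates self-executing contract terms agreed to by both parties."""
    FREEZE_THRESHOLD_C = 0.0  # assumed trigger: daily low at or below 0 degrees C

    def __init__(self, oracle, payout_amount, premium_per_day):
        self.oracle = oracle
        self.payout_amount = payout_amount
        self.premium_per_day = premium_per_day
        self.paid_out = False

    def run_daily_check(self, day):
        """Collect the premium, query the oracle, and pay out automatically on a freeze."""
        premium_collected = self.premium_per_day          # automated premium payment
        low_temp = self.oracle.low_temperature_for_day(day)
        payout = 0.0
        if not self.paid_out and low_temp <= self.FREEZE_THRESHOLD_C:
            payout = self.payout_amount                   # automatic transfer to the orchard owner
            self.paid_out = True
        return premium_collected, payout


# Usage example with made-up temperatures: a freeze occurs on day 3.
oracle = FreezeOracle({1: 4.0, 2: 2.5, 3: -1.0, 4: 3.0})
contract = FreezeInsuranceContract(oracle, payout_amount=50_000.0, premium_per_day=100.0)
for day in (1, 2, 3, 4):
    premium, payout = contract.run_daily_check(day)
    print(f"Day {day}: premium collected {premium}, payout {payout}")
```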
Benefits of Smart Contracts
•
Smart contracts can authenticate counter-party identities, the ownership of assets, and claims of
right by using digital signatures created with private cryptographic keys held by each party.
•
Smart contracts can access outside information or data to trigger actions, for example, commodity
prices, weather data, interest rates, or an occurrence of an event.
•
Smart contracts can self-execute. The smart contract will take an action such as transferring a payment without any action required by the counter-parties. The automatic execution can reduce
counter-party risk and settlement risk.
•
The decentralized, distributed ledger on the blockchain prevents modifications not authorized or
agreed to by the parties.
•
Smart contracts can enhance market activity and efficiency by facilitating trade execution.
•
Use of standardized code and execution may reduce costs of negotiations.
•
Automation reduces transaction times and manual processes.
•
Smart contracts can perform prompt regulatory reporting.
Limitations and Risks of Smart Contracts
•
The operation of a smart contract is only as smart as the information it receives and the computer
code that directs it.
•
Existing laws and regulations apply to all contracts equally regardless of what form a contract takes,
so contracts or parts of contracts that are written in code are subject to otherwise applicable law and
regulation. However, if a smart contract unlawfully circumvents rules and protections, to the extent
it violates the law, it is not enforceable. For example, if a U.S. derivative contract is traded on or
processed by a facility that is not appropriately registered with the U.S. Commodity Futures Trading
Commission (CFTC) or is executed by entities required to be registered with the CFTC but which are
not and do not have an exception or exemption from registration, the contract is prohibited.95
•
A smart contract could introduce operational, technical, and cybersecurity risk.
Operational risk: For example, smart contracts may not include sufficient backup and failover mechanisms in case of operational problems, or the other systems they depend on to fulfill contract terms
may have vulnerabilities that could prevent the smart contract from functioning as intended.
Technical risk: For example, humans could make a typographical error when coding, Internet service
can go down, user interfaces may become incompatible, the oracle (the third party source used by
the smart contract to authorize payments) may fail or other disruptions can occur with the external
sources used to obtain information on reference prices, events, or other data.
Cybersecurity risk: Smart contract systems may be vulnerable to hacking, causing loss of digital
assets. For example, an attacker may compromise the oracle, causing the smart contract to improperly transfer assets. There may be limited or no recourse if hackers transfer digital assets to
themselves or others.96
•
A smart contract may be subject to fraud and manipulation. For example, smart contracts can include
deliberately damaging code that does not behave as promised or that may be manipulated. Oracles
may be subject to manipulation or may themselves be fraudulent and may disseminate fraudulent information that results in fraudulent outcomes.97
95 A Primer on Smart Contracts, The U.S. Commodity Futures Trading Commission, Nov. 27, 2018, p. 25, https://www.cftc.gov/sites/default/files/2018-11/LabCFTC_PrimerSmartContracts112718_0.pdf, accessed April 29, 2019.
96 Ibid., pp. 27-29.
97 Ibid., p. 30.
Governance for Smart Contracts
Good governance is important for smart contracts, which require ongoing attention and may require action
and possible revision. Governance standards and frameworks are needed and appear to be in the early
stages of development.
•
Governance standards may assign responsibility for smart contract design and operation and establish mechanisms for dispute resolution.
•
Standards may incorporate terms or conditions that smart contracts need to have in order to be
enforceable.
•
Standards could create presumptions regarding the legal character of a smart contract, depending
on its attributes and manner of use.
•
Good governance standards may help address the risks that smart contracts present.98
F.4. Data Analytics
Data analytics is the process of gathering and analyzing data in a way that produces meaningful
information that can be used to aid in decision-making. As businesses become more technologically
sophisticated, their capacity to collect data increases. However, the stockpiling of data is meaningless without a method of efficiently collecting, aggregating, analyzing, and utilizing it for the benefit of the company.
Data analytics can be classified into four types: descriptive analytics, diagnostic analytics, predictive
analytics, and prescriptive analytics.
1)
Descriptive analytics report past performance. Descriptive analytics are the simplest type of
data analytics, and they answer the question, “What happened?”
2)
Diagnostic analytics are used with descriptive analytics to answer the question, “Why did it happen?” The historical data is mined to understand past performance and to look for the reasons behind success or failure. For example, sales data might be broken down into segments such as revenue by region or by product rather than revenue in total (a small illustration in code follows this list).
3)
Predictive analytics focus on the future using correlative99 analysis. Predictive analytics answer
the question, “What is likely to happen?” Historical data is combined with other data using rules
and algorithms. Large quantities of data are processed to identify patterns and relationships between and among known random variables or data sets in order to make predictions about what
is likely to occur in the future. A sales forecast made using past sales trends is a form of predictive
analytics.
4)
Prescriptive analytics answer the question “What needs to happen?” by charting the best course
of action based on an objective interpretation of the data. Prescriptive analytics make use of
structured and unstructured data and apply rules to predict what will happen and to prescribe how
to take advantage of the predicted events. For example, prescriptive analytics might generate a
sales forecast and then use that information to determine what additional production lines and
employees are needed to meet the sales forecast. In addition to anticipating what will happen and
determining what needs to happen, prescriptive analytics can help determine why it will happen.
Prescriptive analytics can incorporate new data and re-predict and re-prescribe, as well. Prescriptive analytics is likely to yield the greatest impact for an organization, but it is also the most complex type of analytics.
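The difference between descriptive and diagnostic analytics described above can be illustrated with a short sketch using the Python pandas library: the period total answers “What happened?”, and the breakdown by region and product looks for the reasons behind the result. The column names and figures are hypothetical.

```python
# Minimal sketch: descriptive vs. diagnostic analytics on a made-up dataset.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "West", "West"],
    "product": ["A", "B", "A", "B", "A", "B"],
    "revenue": [120_000, 80_000, 60_000, 150_000, 90_000, 40_000],
})

# Descriptive: what happened? Total revenue for the period.
print("Total revenue:", sales["revenue"].sum())

# Diagnostic: why did it happen? Break the total down into segments.
print(sales.groupby("region")["revenue"].sum())
print(sales.groupby(["region", "product"])["revenue"].sum())
```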
98 Ibid., p. 31.
99 If two things are correlated with one another, it means there is a close connection between them. It may be that one of the things causes or influences the other, or it may be that something entirely different is causing or influencing both of the things that are correlated.
Business Intelligence (BI)
Business intelligence is the combination of architectures, analytical and other tools, databases, applications,
and methodologies that enable interactive access—sometimes in real time—to data such as sales revenue,
costs, income, and product data. Business intelligence provides historical, current, and predicted values for
internal, structured data regarding products and segments. Further, business intelligence gives managers
and analysts the ability to conduct analysis to be used to make more informed strategic decisions and thus
optimize performance.
The business intelligence process involves the transformation of data into information, then to knowledge,
then to insight, then to strategic decisions, and finally to action.
•
Data is facts and figures, but data by itself is not information.
•
Information is data that has been processed, analyzed, interpreted, organized, and put into context such as in a report, in order to be meaningful and useful.
•
Knowledge is the theoretical or practical understanding of something. It is facts, information, and
skills acquired through experience or study. Thus, information becomes knowledge through experience, study, or both.
•
Insight is a deep and clear understanding of a complex situation. Insight can be gained through
perception or intuition, but it can also be gained through use of business intelligence: data analytics, modeling, and other tools.
•
The insights gained from the use of business intelligence lead to recommendations for the best
action to take. Strategic decisions are made by choosing from among the recommendations.
•
The strategic decisions made are implemented and turned into action.
A Business Intelligence system has four main components:
1)
A data warehouse (DW) containing the source data.
2)
Business analytics, that is, the collection of tools used to mine, manipulate, and analyze the data
in the DW. Many Business Intelligence systems include artificial intelligence capabilities, as well as
analytical capabilities.
3)
A business performance management component (BPM) to monitor and analyze performance.
4)
A user interface, usually in the form of a dashboard.
Note: A dashboard is a screen in a software application, a browser-based application, or a desktop
application that displays in one place information relevant to a given objective or process, or for senior
management, it may show patterns and trends in data across the organization.
For example, a dashboard for a manufacturing process might show productivity information for a period,
variances from standards, and quality information such as the average number of failed inspections per
hour. For senior management, it might present key performance indicators, balanced scorecard data, or
sales performance data, to name just a few possible metrics that might be chosen.
A dashboard for a senior manager may show data on manufacturing processes, sales activity, and current
financial metrics.
A dashboard may be linked to a database that allows the data presented to be constantly updated.
Big Data and the Four “V”s of Big Data
Big Data refers to vast datasets that are too large to be analyzed using standard software tools and so
require new processing technologies. Those new processing technologies are data analytics.
Big Data can be broken down into three categories:
1)
Structured data is in an organized format that enables it to be input into a relational database
management system and analyzed. Examples include the data in CRM or ERP systems, such as
transaction data, customer data, financial data, employee data, and vendor data.
2)
Unstructured data has no defined format or structure. It is typically free-form and text-heavy,
making in-depth analysis difficult. Examples include word processing documents, email, call center
communications, contracts, audio and video, photos, data from radio-frequency identification
(RFID) tags, and information contained on websites and social media.
3)
Semi-structured data has some format or structure but does not follow a defined model. Examples include XML files, CSV files, and most server log files.
Big Data is characterized by four attributes, known as the four V’s: volume, velocity, variety, and veracity.
1)
Volume: Volume refers to the amount of data that exists. The volume of data available is increasing exponentially as people and processes become more connected, creating problems for
accountants. The tools used to analyze data in the past—spreadsheet programs such as Excel and
database software such as Access—are no longer adequate to handle the complex analyses that
are needed. Data analytics is best suited to processing immense amounts of data.
2)
Velocity: Velocity refers to the speed at which data is generated and changed, also called its
flow rate. As more devices are connected to the Internet, the velocity of data grows and organizations can be overwhelmed with the speed at which the data arrives. The velocity of data can
make it difficult to discern which data items are useful for a given decision. Data analytics is designed to handle the rapid influx of new data.
3)
Variety: Variety refers to the diverse forms of data that organizations create and collect. In the
past, data was created and collected primarily by processing transactions. The information was in
the form of currency, dates, numbers, text, and so forth. It was structured, that is, it was easily
stored in relational databases and flat files. However, today unstructured data such as media files,
scanned documents, Web pages, texts, emails, and sensor data are being captured and collected.
These forms of data are incompatible with traditional relational database management systems
and traditional data analysis tools. Data analytics can capture and process diverse and complex
forms of information.
4)
Veracity: Veracity is the accuracy of data, or the extent to which it can be trusted for decision-making. Data must be objective and relevant to the decision at hand in order to have value for use
in making decisions. However, various distributed processes—such as millions of people signing
up online for services or free downloads—generate data, and the information they input is not
subject to controls or quality checks. If biased, ambiguous, irrelevant, inconsistent, incomplete, or
even deceptive data is used in analysis, poor decisions will result. Controls and governance over
data to be used in decision-making are essential to ensure the data’s accuracy. Poor-quality data
leads to inaccurate analysis and results, commonly referred to as “garbage in, garbage out.”
Some data experts have added two additional Vs that characterize data:
5)
Variability: Data flows can be inconsistent, for example, they can exhibit seasonal peaks. Furthermore, data can be interpreted in varying ways. Different questions require different
interpretations.
6)
Value: Value is the benefit that the organization receives from data. Without the necessary data
analytics processes and tools, the information is more likely to overwhelm an organization than to
help the organization. The organization must be able to determine the relative importance of different data to the decision-making process. Furthermore, an investment in Big Data and data
analytics should provide benefits that are measurable.
Data Science
Data science is a field of study and analysis that uses algorithms and processes to extract hidden
knowledge and insights from data. The objective of data science is to use both structured and unstructured
data to extract information that can be used to develop knowledge and insights for forecasting and strategic
decision making.
The difference between data analytics and data science is in their goals.
•
The goal of data analytics is to provide information about issues that the analyst or manager either
knows or knows he or she does not know (that is, “known unknowns”).
•
On the other hand, the goal of data science is to provide actionable insights into issues where
the analyst or manager does not know what he or she does not know (that is, “unknown unknowns”).
Example: Data science would be used to try to identify a future technology that does not exist today
but that will impact the organization in the future.
Decision science, machine learning (that is, the use of algorithms that learn from the data in order to
predict outcomes), and prescriptive analytics are three examples of means by which actionable insights
can be discovered in a situation where “unknowns are unknown.”
Data science involves data mining, analysis of Big Data, data extraction, and data retrieval. Data science
draws on knowledge of data engineering, social engineering, data storage, natural language processing,
and many other fields.
The size, value, and importance of Big Data has brought about the development of the profession of data
scientist. Data science is a multi-disciplinary field that unifies several specialized areas, including statistics,
data analysis, machine learning, math, programming, business, and information technology. A data scientist is a person with skills in all the areas, though most data scientists have deep skills in one area and less
deep skills in the other areas.
Note: “Data mining” involves using algorithms in complex data sets to find patterns in the data that can
be used to extract usable data from the data set. Data mining is discussed in more detail below.
Data and Data Science as Assets
Data and data science capabilities are strategic assets to an organization, but they are complementary
assets.
•
Data science is of little use without usable data.
•
Good data cannot be useful in decision-making without good data science talent.
Good data and good data science, used together, can lead to large productivity gains for a company and
the ability to do things it has never done before. Data and data science together can provide the following
opportunities and benefits to an organization:
•
They can enable the organization to make decisions based on data and evidence.
•
The organization can leverage relevant information from various data sources in a timely manner.
•
When the cloud is used, the organization can get the answers it needs using any device, any time.
•
The organization can transform data into actionable insights.
•
The organization can discover new opportunities.
•
The organization can increase its competitive advantage.
•
Management can explore data to get answers to questions.
The result can be maximized revenue, improved operations, and mitigated risks. The return on investment
from the better decision-making that results from using data and data science can be significant.
As with any strategic asset, it is necessary to make investments in data and data science. The investments
include building a modern business intelligence architecture using the right tools, investing in people with
data science skills, and investing in the training needed to enable the staff to use the business intelligence
and data analytics tools.
Challenges of Managing Data Analytics
Some general challenges of managing data analytics include data capture, data curation (that is, the organization and integration of disparate data collected from various sources), data storage, security and
privacy protection, data search, data sharing, data transfer, data analysis, and data visualization.
In addition, some specific challenges of managing data analytics include:
•
The growth of data and especially of unstructured data.
•
The need to generate insights in a timely manner in order for the data to be useful.
•
Recruiting and retaining Big Data talent. Demand has increased for data engineers, data scientists,
and business intelligence analysts, causing higher salaries and creating difficulty filling positions.
Data Mining
Data mining is the use of statistical techniques to search large data sets to extract and analyze data in
order to discover previously unknown, useful patterns, trends, and relationships within the data that go
beyond simple analysis and that can be used to make decisions. Data mining uses specialized computational
methods derived from the fields of statistics, machine learning, and artificial intelligence.
Data mining involves trying different hypotheses repeatedly and making inferences from the results that
can be applied to new data. Data mining is thus an iterative process. Iteration is the repetition of a
process in order to generate a sequence of outcomes. Each repetition of the process is a single iteration,
and the outcome of each iteration is the starting point of the next iteration. 100
Data mining is a process with defined steps, and thus it is a science. Science is the pursuit and application
of knowledge and understanding of the natural and social world following a systematic methodology based
on evidence.101
Data mining is also an art. In data mining, decisions must be made regarding what data to use, what tools
to use, and what algorithms to use. For example, one word can have many different meanings. In mining
text, the context of words must be considered. Therefore, instead of just looking for words in relation to
other words, the data scientist looks for whole phrases in relation to other phrases. The data scientist must
make thoughtful choices in order to get usable results.
Data mining differs from statistics. Statistics focuses on explaining or quantifying the average effect of an
input or inputs on an outcome, such as determining the average demand for a product based on some
variable like price or advertising expenditures. Statistical analysis includes determining whether the relationships observed could be a result of the variable or could be a matter of chance instead. A simple example
of statistical analysis is a linear regression model102 that relates total historical sales revenues (the dependent variable) to various levels of historical advertising expenditures (the independent variable) to discover
whether the level of advertising expenditures affects total sales revenues. Statistics may involve using a
sample from a dataset to make predictions about the population as a whole. Alternatively, it may involve
using the entire dataset to estimate the best-fit model in order to maximize the information available about
the hypothesized relationship in the population and predict future results.
In contrast, data mining involves open-ended exploring and searching within a large dataset without putting
limits around the question being addressed. The goal is to predict outcomes for new individual records. The
data is usually divided into a training set and a validation set. The training set is used to estimate the
model, and the validation set is used to assess the model’s predictive performance on new data.
Data mining might be used to classify potential customers into different groups to receive different marketing approaches based on some characteristic common to each group that is yet to be discovered. It may
be used to answer questions such as what specific online advertisement should be presented to a particular
person browsing on the Internet based on their previous browsing habits and the fact that other people
who browsed the same topics purchased a particular item.
Thus, data mining involves generalization of patterns from a data set. “Generalization” is the ability to
predict or assign a label to a “new” observation based on a model built from past experience. In other
words, the generalizations developed in data mining should be valid not just for the data set used in observing the pattern but should also be valid for new, unknown data.
Software used for data mining uses statistical models, but it also incorporates algorithms that can “learn”
from the patterns in the data. An algorithm is applied to the historical data to create a mining model, and
then the model is applied to new data to create predictions and make inferences about relationships. For
example, data mining software can help find customers with common interests and determine which products customers with each particular interest typically purchase, in order to direct advertising messages about specific products to the customers who are most likely to purchase those products.
100 Definition of “iteration” from Wikipedia, https://en.wikipedia.org/wiki/Iteration, accessed May 8, 2019.
101 Definition of “science” from the Science Council, https://sciencecouncil.org/about-science/our-definition-of-science/, accessed May 8, 2019.
102 In statistics, a “model” is the representation of a relationship between variables in the data. It describes how one or more variables in the data are related to other variables. “Modeling” is building a representative abstraction from the observed data set.
Data mining is used in predictive analytics. Basic concepts of predictive analytics include:
•
Classification – Any data analysis involves classification, such as whether a customer will purchase or not purchase. Data mining is used when the classification of the data is not known. Similar
data where the classification is known is used to develop rules, and then those rules are applied
to the data with the unknown classification to predict what the classification is or will be. For
example, customers are classified as predicted purchasers or predicted non-purchasers.
•
Prediction – Prediction is similar to classification, but the goal is to predict the numerical value of
a variable such as the amount of a purchase rather than (for example) simply classifying customers as predicted purchasers or predicted non-purchasers. Although classification also involves
prediction, “prediction” in this context refers to prediction of a numerical value, which can be an
integer (a whole number such as 1, 2, or 3) or a continuous variable.103
•
Association rules – Also called affinity analysis, association rules are used to find patterns of
association between items in large databases, such as associations among items purchased from
a retail store, or “what goes with what.” For example, when customers purchase a 3-ring notebook,
do they usually also purchase a package of 3-hole punched paper? If so, the 3-hole punched paper
can be placed on the store shelf next to the 3-ring notebooks. Similar rules can be used for bundling
products.104 (A small sketch of this idea in code appears after this list.)
•
Online recommendation systems – In contrast to association rules, which generate rules that
apply to an entire population, online recommendation systems use collaborative filtering to
deliver personalized recommendations to users. Collaborative filtering generates rules for “what
goes with what” at the individual user level. It makes recommendations to individuals based on
their historical purchases, online browsing history, or other measurable behaviors that indicate
their preferences, as well as other users’ historical purchases, browsing, or other behaviors.
•
Data reduction – Data reduction is the process of consolidating a large number of records into a
smaller set by grouping the records into homogeneous groups.
•
Clustering – Clustering is discovering groups in data sets that have similar characteristics without
using known structures in the data. Clustering can be used in data reduction to reduce the number
of groups to be included in the data mining algorithm.
•
Dimension reduction – Dimension reduction entails reducing the number of variables in the data
before using it for data mining, in order to improve its manageability, interpretability, and predictive ability.
•
Data exploration – Data exploration is used to understand the data and detect unusual values.
The analyst explores the data by looking at each variable individually and looking at relationships
between and among the variables in order to discover patterns in the data. Data exploration can
include creating charts and dashboards, called data visualization or visual analytics (see next item).
Data exploration can lead to the generation of a hypothesis.
•
Data visualization – Data visualization is another type of data exploration. Visualization, or visual
discovery, consists of creating graphics such as histograms and boxplots for numerical data in
order to visualize the distribution of the variables and to detect outliers.105 Pairs of numerical variables can be plotted on a scatter plot graph in order to discover possible relationships. When the variables are categorical, bar charts can be used. Visualization is covered in more detail later in this section.
103 A continuous variable is a numerical variable that can take on any value at all. It does not need to be an integer such as 1, 2, 3, or 4, though it can be an integer. A continuous variable can be 8, 8.456, 10.62, 12.3179, or any other number, and the variable can have any number of decimal places.
104 Product bundling occurs when a seller bundles products, features, or services together and offers the bundle at a price that is lower than the price of the items if purchased individually. For example, a software vendor may create a suite of software applications and offer the applications together at a reduced price.
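The association rules concept in the list above (“what goes with what”) can be illustrated with a short Python sketch that counts how often pairs of items appear together in a set of made-up transactions. Full association-rule mining (for example, the Apriori algorithm) also computes measures such as confidence and lift; the simple co-occurrence counts below are only meant to convey the idea, and the transaction data is hypothetical.

```python
# Minimal sketch of "what goes with what": count item pairs that appear
# together in the same transaction. Transaction data is made up.
from collections import Counter
from itertools import combinations

transactions = [
    {"3-ring notebook", "3-hole punched paper", "pens"},
    {"3-ring notebook", "3-hole punched paper"},
    {"pens", "stapler"},
    {"3-ring notebook", "3-hole punched paper", "stapler"},
    {"pens", "3-hole punched paper"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = fraction of all transactions that contain the pair.
for pair, count in pair_counts.most_common(3):
    print(f"{pair}: appears in {count} of {len(transactions)} transactions "
          f"(support {count / len(transactions):.0%})")
```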
Supervised and Unsupervised Learning in Data Mining
Supervised learning algorithms are used in classification and prediction. In order to “train” the algorithm, it is necessary to have a dataset in which the value of the outcome to be predicted is already known,
such as whether or not the customer made a purchase. The dataset with the known outcome is called the
training data because that dataset is used to “train” the algorithm. The data in the dataset is called
labeled data because it contains the outcome value (called the label) for each record. The classification
or prediction algorithm “learns” or is “trained” about the relationship between the predictor variables and
the outcome variable in the training data. After the algorithm has “learned” from the training data, it is
tested by applying it to another sample of labeled data for which the outcome is already known but is
initially hidden (called the validation data) to see if it works properly. If several different algorithms are
being tested, additional test data with known outcomes should be used with the selected algorithm to
predict how well it will work. After the algorithm has been thoroughly tested, it can be used to classify or
make predictions in data where the outcome is unknown.
Example: Simple linear regression is an example of a basic supervised learning algorithm. The x
variable, the independent variable, serves as the predictor variable. The y variable, the dependent variable, is the outcome variable in the training and test data where the y value for each x value is known.
The regression line is drawn so that it minimizes the sum of the squared deviations between the actual
y values and the values predicted by the regression line. Then, the regression line is used to predict the
y values that will result for new values of x for which the y values are unknown.106
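The training and validation steps described above can be sketched with the scikit-learn library in Python. A simple linear regression is fitted to labeled training data and then evaluated on held-out validation data that the algorithm did not see during training. The synthetic data and the 70/30 split are assumptions made for illustration.

```python
# Minimal sketch of supervised learning: train on labeled data, then check
# the fitted model on held-out validation data. The data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
x = rng.uniform(0, 10, size=(200, 1))              # predictor (independent) variable
y = 3.0 * x[:, 0] + 5.0 + rng.normal(0, 2.0, 200)  # known outcome (the "label")

# Partition the labeled data: 70% for training, 30% held back for validation.
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.3, random_state=0)

model = LinearRegression()
model.fit(x_train, y_train)        # the algorithm "learns" from the training data

# Evaluate on validation data the model has not seen.
valid_error = mean_squared_error(y_valid, model.predict(x_valid))
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("validation mean squared error:", valid_error)
```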
Unsupervised learning algorithms are used when there is no outcome variable to predict or classify.
Association rules, dimension reduction, and clustering are unsupervised learning methods.
Neural Networks in Data Mining
Neural networks are systems that can recognize patterns in data and use the patterns to make predictions
using new data. Neural networks derive their knowledge from their own data by sifting through the data
and recognizing patterns. Neural networks are used to learn about the relationships in the data and combine
predictor information in such a way as to capture the complicated relationships among predictor variables
and between the predictor variables and the outcome variable.
Neural networks are based on the human brain and mimic the way humans learn. In a human brain, neurons
are interconnected and humans can learn from experience. Similarly, a neural network can learn from its
mistakes by finding out the results of its predictions. In the same way as a human brain uses a network of
neurons to respond to stimuli from sensory inputs, a neural network uses a network of artificial neurons,
called nodes, to simulate the brain’s approach to problem solving. A neural network solves learning problems by modeling the relationship between a set of input signals and an output signal.
The results of the neural network’s predictions—the output of the model—become the input to the next
iteration of the model. Thus, if a prediction made did not produce the expected results, the neural network
uses that information in making future predictions.
105 Outliers are data entries that do not fit into the model because they are extreme observations.
106 Regression analysis is covered in more detail later in this section.
Neural networks can look for trends in historical data and use those trends to make predictions. Some examples of
uses of neural networks include the following.
•
Picking stocks for investment by performing technical analysis of financial markets and individual
investment holdings.
•
Making bankruptcy predictions. A neural network can be given data on firms that have gone bankrupt and firms that have not gone bankrupt. The neural network will use that information to learn
to recognize early warning signs of impending bankruptcy, and it can thus predict whether a particular firm will go bankrupt.
•
Detecting fraud in credit card and other monetary transactions by recognizing that a given transaction is outside the ordinary pattern of behavior for that customer.
•
Identifying a digital image as, for example, a cat or a dog.
•
Self-driving vehicles use neural networks with cameras on the vehicle as the inputs.
The structure of neural networks enables them to capture complex relationships between predictors and an
outcome by fitting the model to the data. The network calculates weights for the individual input variables. The weights allow each of the inputs to contribute a greater or lesser amount to the output, which is the weighted sum of the
inputs. Depending on the effect of those weights on how well the output of the model—the prediction it
makes—fits the actual output, the neural network then revises the weights for the next iteration.
Goodness of fit refers to how closely the predicted values resulting from a model match the actual, observed values.
One of the weaknesses of neural networks is that they can “overfit” the data. Overfitting the data means
the model fits the training data perfectly, but the model does not generalize well and thus does not do a
good job of making predictions using new data. Overfitting occurs because of “noise” in the training data.
Noise is outliers (extreme observations) and randomness.
Overfitting can be detected by examining the performance of the model on the validation dataset. The
errors in the model’s performance on the validation dataset decrease in the early iterations of the training,
but after a while, the errors begin to increase. Therefore, it is important to limit the number of training
iterations to the number where the validation errors are minimized. The weights for the variables used at
that stage are probably the best model to use with new data.
Underfitting can occur as well. Underfitting happens when the model is too simple, because if the model
is too simple, it will not be flexible enough in learning from the data.
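The rule of limiting training to the iteration at which validation error is lowest can be sketched in Python. The toy loop below fits a single weight by gradient descent on noisy training data, measures the error on a separate validation set after each iteration, and remembers the weight from the best iteration. It is a bookkeeping illustration only; the data, learning rate, and iteration count are assumptions, and a real neural network would have many weights and layers.

```python
# Minimal sketch of keeping the weights from the iteration where validation
# error is minimized. Toy one-weight model with synthetic data.
import numpy as np

rng = np.random.default_rng(seed=1)
x_train = rng.uniform(0, 1, 50)
y_train = 2.0 * x_train + rng.normal(0, 0.3, 50)
x_valid = rng.uniform(0, 1, 50)
y_valid = 2.0 * x_valid + rng.normal(0, 0.3, 50)

w = 0.0                     # single weight, revised on each iteration
learning_rate = 0.1
best_w, best_valid_error, best_iteration = w, float("inf"), 0

for iteration in range(1, 201):
    # One gradient-descent step on the training data.
    gradient = -2.0 * np.mean((y_train - w * x_train) * x_train)
    w -= learning_rate * gradient

    # Check performance on the validation data after each iteration.
    valid_error = np.mean((y_valid - w * x_valid) ** 2)
    if valid_error < best_valid_error:
        best_w, best_valid_error, best_iteration = w, valid_error, iteration

print("weight at lowest validation error:", best_w,
      "reached at iteration", best_iteration)
```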
Neural Networks and Sensitivity Analysis
Neural networks are called black boxes. They work best when solving problems where the input data and
the output data are fairly simple but the process that relates the input to the output is extremely complex.
Even though neural networks can approximate a function that relates the input to the output, studying the
structure of the network does not produce any insights into the function being approximated. In other
words, there is no simple link between the weights used for the relative importance or frequency of the
input variables and the resulting output.
However, in some cases it is possible to learn something about the relationships being captured by a neural
network by conducting sensitivity analysis on the validation dataset. Sensitivity analysis estimates the
amount of change in the output of a model that is caused by a single change in the model inputs. It can be
used to determine which input parameters are more important for achieving accurate output values.
First, all the predictor values are set to their means107 and the network’s prediction is obtained. Then the
process is repeated over and over with each predictor set sequentially to its minimum and then to its
maximum value. By comparing the predictions created by the different levels of the predictors, the analyst can gain a sense of which predictors affect the predictions the most and in what way. For example:
• Which model parameters contribute the most to output variability?
• Which parameters are insignificant and can be eliminated from the model?
• Which parameters interact with each other?
107 A mean is an average.
Sensitivity analysis is also used to understand the behavior of the system being modeled, to determine
whether the model is doing what it was developed to do, to evaluate whether the model is applicable to the
data, and to determine the model’s stability.
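The procedure just described can be expressed as a short Python sketch: hold every predictor at its mean, then move one predictor at a time to its minimum and then its maximum and compare the resulting predictions. The fitted linear model below is only a stand-in for any trained model, such as a neural network, and the data is synthetic.

```python
# Minimal sketch of sensitivity analysis on a fitted model: set all predictors
# to their means, then vary each predictor to its min and max in turn.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=2)
X = rng.uniform(0, 10, size=(100, 3))                     # three predictor variables
y = 5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 100)   # outcome; predictor 2 is irrelevant

model = LinearRegression().fit(X, y)                      # stand-in for any fitted model

baseline = X.mean(axis=0)                                 # every predictor at its mean
baseline_prediction = model.predict(baseline.reshape(1, -1))[0]
print("prediction with all predictors at their means:", round(baseline_prediction, 2))

for j in range(X.shape[1]):
    low, high = baseline.copy(), baseline.copy()
    low[j], high[j] = X[:, j].min(), X[:, j].max()        # vary only predictor j
    swing = (model.predict(high.reshape(1, -1))[0]
             - model.predict(low.reshape(1, -1))[0])
    print(f"predictor {j}: prediction changes by {swing:.2f} from its min to its max")
```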
Challenges of Data Mining
Some of the challenges inherent in data mining include the following.
•
Poor data quality. Data stored in relational databases may be incomplete, out of date, or inconsistent. For example, mailing lists can contain duplicate records, leading to duplicate mailings and
excess costs. Poor quality data can lead to poor decisions.
Furthermore, use of inaccurate data can cause problems for consumers. For example, when credit
rating agencies have errors in their data, consumers can have difficulty obtaining credit.
•
Information exists in multiple locations within the organization and thus is not centrally located,
for example Excel spreadsheets that are in the possession of individuals in the organization. Information that is not accessible cannot be used.
•
Biases are amplified in evaluating data. The meaning of a data analysis must be assessed by a
human being, and human beings have biases. A “bias” is a preference or an inclination that gets
in the way of impartial judgment. Most people tend to trust data that supports their pre-existing
positions and tend not to trust data that does not support their pre-existing positions. Other biases
include relying on the most recent data or trusting only data from a trusted source. All such biases
contribute to the potential for errors in data analysis.
•
Analyzed data often displays correlations.108 However, correlation does not prove causation.
Establishing a causal relationship is necessary before using correlated data in decision-making. If
a causal relationship is assumed where none exists, decisions made on the basis of the data will
be flawed.
•
Ethical issues such as data privacy related to the aggregation of personal information on millions
of people. Profiling according to ethnicity, age, education level, income, and other characteristics
results from the collection of so much personal information.
•
Data security is an issue because personal information on individuals is frequently stolen by
hackers or even employees.
•
A growing volume of unstructured data. Data items that are unstructured do not conform to
relational database management systems, making capturing and analyzing unstructured data more
complex. Unstructured data includes items such as social media posts, videos, emails, chat logs,
and images, for example images of invoices or checks received.
108 A “correlation” is a relationship between or among values in multiple sets of data where the values in one data set move in relation to the values in one or more other data set or sets.
NoSQL, Data Lakes, and the Challenge of Unstructured Data
Unstructured data cannot be stored in a data warehouse because the data in a data warehouse must be
uniformly defined so that users can write queries using Structured Query Language to extract information.
Thus, the data in a data warehouse must be transformed, that is, put into a proper format before being
added to the data warehouse. This transformation before data can be loaded to a data warehouse is known
as Schema-on-Write.109
However, unstructured items of data do not conform to any relational database schema, so they cannot be
transformed to be used in a data warehouse. Therefore, a data lake can be used for data storage and
analysis when unstructured data must be included. Because unstructured data items do not conform to
relational database management systems that use Structured Query Language (SQL), a data lake utilizes
a non-relational database management system, called NoSQL.
NoSQL stands for “Not only SQL.” A NoSQL database management system can be used to analyze high-volume and disparate data, including unstructured and unpredictable data types. Unlike relational databases, NoSQL databases do not require SQL to analyze the data, and most do not use a schema, which makes them more flexible.110 A NoSQL database management system can be used with a data lake that contains both structured and unstructured data.
Note: SQL can be used as a query language with a NoSQL database management system, but SQL is
not the main query language used because its usage is limited to structured data.
Items of structured and unstructured data from all sources are stored in a data lake as “raw data,” that is,
untransformed data items that have not been uniformly defined. The data is transformed and a schema
applied only when specific analysis is needed. Such data transformation is called Schema-on-Read.
The primary users of a data lake and a NoSQL database management system are data scientists who have
the analytical and programming skills to do deep analysis of both structured and unstructured data.
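A small Python illustration of the Schema-on-Read idea: raw, inconsistently structured records are stored as-is, as they would be in a data lake, and a simple schema is imposed only at the moment a particular analysis needs it. The record layout and field names are hypothetical.

```python
# Minimal sketch of Schema-on-Read: keep raw records as-is and apply a
# schema only when a specific question is asked. Records are made up.
import json

raw_records = [                                   # "raw data" as it might sit in a data lake
    '{"customer": "A", "amount": 120.0, "channel": "web"}',
    '{"cust_id": "B", "total": 75.5}',            # a differently structured record
    '{"customer": "C", "amount": 210.0, "note": "rush order"}',
]

def read_with_schema(raw):
    """Impose a simple (customer, amount) schema at read time."""
    record = json.loads(raw)
    customer = record.get("customer") or record.get("cust_id")
    amount = record.get("amount") or record.get("total")
    return customer, amount

# The schema is applied only now, when this particular analysis runs.
total = sum(amount for _, amount in map(read_with_schema, raw_records))
print("Total amount across raw records:", total)
```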
109 See the topic Data Warehouse, Data Mart, and Data Lake in Section F.1. – Information Systems in this volume for more information about data warehouses.
110 A database’s schema is a map or plan of the entire database, that is, the database’s logical structure. The schema specifies the names of the data elements contained in the database and their relationships to the other data elements. For more information about relational databases, please see the topic Databases in Section F.1. – Information Systems in this volume.
What Data Mining is NOT
Data mining is not simple querying of a database using Structured Query Language. SQL searches of
databases are descriptive analytics and are based on rules developed by the user, such as “find all credit
card holders in a given ZIP code who charge more than $25,000 annually and who have had no delinquent
payments in the last two years.” The database query displays information from the database, but the
information displayed is not hidden information. Data mining, on the other hand, involves statistical modeling and automated algorithmic methods of discovering previously hidden, unknown relationships.
Data mining is also not “discovering” historical patterns in data that are random relationships (rather than
real relationships) and assuming they will repeat. The results of analysis must be assessed with intelligence
to determine whether the relationships are meaningful and can be generalized to new data.
Steps in Data Mining
A typical data mining project will include the following steps.
1)
Understand the purpose of the project. The data scientist needs to understand the user’s
needs and what the user will do with the results. Also, the data scientist needs to know whether
the project will be a one-time effort or ongoing.
2)
Select the dataset to be used. The data scientist will take samples from a large database or
databases, or from other sources. The samples should reflect the characteristics of the records of
interest so the data mining results can be generalized to records outside of the sample. The data
may be internal or external.
3)
Explore, clean, and preprocess the data. Verify that the data is in usable condition, that is,
whether the values are in a reasonable range and whether there are obvious outliers. Determine
how missing data (that is, blank fields) should be handled. Visualize the data by reviewing the
information in chart form. If using structured data, ensure consistency in the definitions of fields,
units of measurement, time periods covered, and so forth. New variables may be created in this
step, for example using the start and end dates to calculate the duration of a time period.
4)
Reduce the data dimension if needed. Eliminate unneeded variables, transform variables as
necessary, and create new variables. The data scientist should be sure to understand what each
variable means and whether it makes sense to include it in the model.
5)
Determine the data mining task. Determining the task includes classification, prediction, clustering, and other activities. Translate the general question or problem from Step 1 into the specific
data mining question.
6)
Partition the data. If supervised learning will be used (classification or prediction), partition the
dataset randomly into three parts: one part for training, one for validation, and one for testing.
7)
Select the data mining techniques to use. Techniques include regression, neural networks,
hierarchical clustering, and so forth.
8)
Use algorithms to perform the task. The use of algorithms is an iterative process. The data
scientist tries multiple algorithms, often using multiple variants of the same algorithm by choosing
different variables or settings. The data scientist uses feedback from an algorithm’s performance
on validation data to refine the settings.
9)
Interpret the results of the algorithm. The data scientist chooses the best algorithm and tests
the final choice on the test data to learn how well it will perform.
10)
Deploy the model. The model is run on the actual records to produce actionable information that
can be used in decisions. The chosen model is used to predict the outcome value for each new
record, called scoring.111
A data mining project does not end when a particular solution is deployed, however. The results of the data
mining may raise new questions that can then be used to develop a more focused model.
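Step 6, partitioning the data, can be sketched in a few lines of Python. The 60/20/20 proportions and the shuffling approach below are assumptions made for illustration.

```python
# Minimal sketch of randomly partitioning a dataset into training,
# validation, and test sets (assumed 60/20/20 proportions).
import numpy as np

records = np.arange(1_000)                     # stand-in for 1,000 data records
rng = np.random.default_rng(seed=3)
shuffled = rng.permutation(records)            # random order

n_train = int(0.6 * len(shuffled))
n_valid = int(0.2 * len(shuffled))

train = shuffled[:n_train]                     # used to estimate (train) the model
valid = shuffled[n_train:n_train + n_valid]    # used to refine and compare models
test = shuffled[n_train + n_valid:]            # used to assess the final chosen model

print(len(train), len(valid), len(test))       # 600 200 200
```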
111 Shmueli, Galit, Bruce, Peter C., Yahav, Inbal, Patel, Nitin R., and Lichtendahl Jr., Kenneth C., Data Mining for Business Analytics: Concepts, Techniques, and Applications in R, 1st Edition, John Wiley & Sons, Hoboken, NJ, 2018, pp. 19-21.
Analytic Tools
Linear Regression Analysis
Regression analysis measures the extent to which an effect has historically been the result of a specific
cause. If the relationship between the cause and the effect is sufficiently strong, regression analysis using
historical data can be used to make decisions and predictions.
Time Series Analysis
Note: Time series analysis was introduced in Section B in Volume 1 of this textbook, topic B.3. Forecasting Techniques. Candidates may wish to review that information before proceeding. The trend
pattern in time series analysis was introduced in Forecasting Techniques and will be further explained in
this topic. Additional patterns will be discussed in this topic, as well.
A time series is a sequence of measurements taken at equally-spaced, ordered points in time. A time series
looks at relationships between a variable and the passage of time. The variable may be sales revenue for
a segment of the organization, production volume for a plant, expenses in one expense classification, or
anything being monitored over time. Only one set of historical time series data is used in time series analysis
and that set of historical data is not compared to any other set of data.
A time series can be descriptive or predictive. Time series analysis is used for descriptive modeling, in
which a time series is modeled to determine its components, that is, whether it demonstrates a trend
pattern, a seasonal pattern, a cyclical pattern, or an irregular pattern. The information gained from a time
series analysis can be used for decision-making and policy determination.
Time series forecasting, on the other hand, is predictive. It involves using the information from a time
series to forecast future values of that series.
A time series may have one or more of four patterns (also called components) that influence its behavior
over time:
1)
Trend
2)
Cyclical
3)
Seasonal
4)
Irregular
Trend Pattern in Time Series Analysis
A trend pattern is the most frequent time series pattern and the one most amenable to use for predicting
because the historical data exhibits a gradual shifting to a higher or lower level. If a long-term trend exists,
short-term fluctuations may take place within that trend; however, the long-term trend will be apparent.
For example, sales from year to year may fluctuate but overall, they may be trending upward, as is the
case in the graph that follows.
A trend projection is performed with simple linear regression analysis, which forecasts values using
historical information from all available past observations of the value. The regression line is called a trend
line when the regression is being performed on a time series.
Note: The x-axis on the graph of a time series is always the horizontal axis and the y-axis is always the
vertical axis. The x-axis represents the independent variable, also known as the predictor variable, and
the y-axis represents the dependent variable, also known as the outcome variable.
In a time series regression analysis, the passage of time is the independent variable and is on the x-axis.
The equation of a simple linear regression line is:

ŷ = a + bx

Where:

ŷ = the predicted value of y (the dependent variable) on the regression line corresponding to each value of x.

a = the y-intercept, or the value of ŷ on the regression line when x is zero, also called the constant coefficient.

b = the slope of the line and the amount by which the ŷ value of the regression line changes (increases or decreases) when the value of x increases by one unit, also called the variable coefficient.

x = the independent variable, the value on the x-axis that corresponds to the predicted value of ŷ on the regression line.
Example of a Trend Pattern in Time Series Analysis
Sales for each year, 20X0 through 20X9, are as follows:
Year     Sales
20X0     $2,250,000
20X1     $2,550,000
20X2     $2,300,000
20X3     $2,505,000
20X4     $2,750,000
20X5     $2,800,000
20X6     $2,600,000
20X7     $2,950,000
20X8     $3,000,000
20X9     $3,200,000
The following chart illustrates the trend pattern of the sales. It indicates a strong relationship between the
passage of time (the x variable) and sales (the y variable) because the historical data points fall close to
the regression line.
[Chart: Trend Pattern, Sales 20X0 - 20X9. X-axis: years (20X0 is x = 0, 20X1 is x = 1, and so forth); y-axis: sales. Plotted: actual sales and the regression line ŷ = 2,273,636 + 92,636x.]
The regression equation, ŷ = 2,273,636 + 92,636x, means the regression line begins at 2,273,636 and
increases by 92,636 each succeeding year.
On the chart, 20X0 is at x = 0, 20X1 is at x = 1, and so forth.
The symbol over the “y” in the formula is called a “hat,” and it is read as “y-hat.” The y-hat indicates
the predicted value of y, not the actual value of y. The predicted value of y is the value of y on the
regression line (the line created from the historical data) at any given value of x.
Thus, in 20X4, where x = 4, the predicted value of y, that is, ŷ, is
ŷ = 2,273,636 + (92,636 × 4)
ŷ = 2,273,636 + 370,544
ŷ = 2,644,180
In 20X7, where x = 7, the predicted value of y is
ŷ = 2,273,636 + (92,636 × 7)
ŷ = 2,273,636 + 648,452
ŷ = 2,922,088
Those values for ŷ (the value on the regression line) for 20X4 and 20X7 can be confirmed by looking at the
chart.
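The trend-line coefficients in this example can be reproduced with a few lines of Python using a least-squares fit, a sketch of how the trend projection is computed from the time series above (any small differences from the figures in the text are due to rounding of the coefficients).

```python
# Sketch: fit the least-squares trend line to the sales time series above
# and predict y-hat for 20X4 (x = 4) and 20X7 (x = 7).
import numpy as np

x = np.arange(10)                      # 20X0 is x = 0, ..., 20X9 is x = 9
sales = np.array([2_250_000, 2_550_000, 2_300_000, 2_505_000, 2_750_000,
                  2_800_000, 2_600_000, 2_950_000, 3_000_000, 3_200_000])

b, a = np.polyfit(x, sales, deg=1)     # slope (b) and y-intercept (a)
print(f"regression line: y-hat = {a:,.0f} + {b:,.0f}x")   # approximately 2,273,636 + 92,636x

for year_index in (4, 7):              # 20X4 and 20X7
    print(f"x = {year_index}: predicted sales = {a + b * year_index:,.0f}")
```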
Note: It would be a good idea to calculate the predicted value of y, that is, ŷ, at various other values of
x and confirm the values on the graph. The predicted value of y at a given value of x may be
required in an exam question.
The actual equation of the regression line as shown above may not be given in an exam question.
The y-intercept and the slope of the regression line may be given instead. The constant coefficient,
2,273,636 in the above equation, is the y-intercept, and the variable coefficient, 92,636 in the above
equation, is the slope of the line.
Thus, candidates need to know that the y-intercept of a regression line is the constant coefficient (the
number that stands by itself) and the slope of the regression line is the variable coefficient (the number
next to the x in the equation).
Furthermore, an exam question may use different letters to represent the variables and the coefficients,
so candidates should be able to recognize the form of the equation and the meaning of each component
of the formula based on its usage in the formula.
Trends in a time series analysis are not always upward and linear like the preceding graph. Time series
data can exhibit an upward linear trend, a downward linear trend, a nonlinear (that is, curved) trend, or no
trend at all. A scattering of points that have no relationship to one another would represent no trend at all.
Note: The CMA exam tests linear regression only.
Cyclical Pattern in Time Series Analysis
Any recurring fluctuation that lasts longer than one year is attributable to the cyclical component of the
time series. A cyclical component in sales data is usually due to the cyclical nature of the economy.
A long-term trend can be established even if the sequential data fluctuates greatly from year to year due
to cyclical factors.
Example of a Cyclical Pattern in Time Series Analysis
Sales for each year, 20X0 through 20X9, are as follows:
Year     Sales
20X0     $1,975,000
20X1     $2,650,000
20X2     $2,250,000
20X3     $2,450,000
20X4     $2,250,000
20X5     $2,250,000
20X6     $2,600,000
20X7     $2,450,000
20X8     $3,000,000
20X9     $2,750,000
The following chart illustrates the cyclical pattern of the sales. The fluctuations from year to year are greater
than they were for the chart containing the trend pattern. However, a long-term trend is still apparent.
[Chart: Cyclical Pattern, Sales 20X0 - 20X9. Sales (y-axis) plotted against years (x-axis), with the regression line ŷ = 2,139,091 + 65,758x. 20X0 is x = 0, 20X1 is x = 1, and so forth.]
Seasonal Pattern in Time Series Analysis
Usually, trend and cyclical components of a time series are tracked as annual historical movements over
several years. However, a time series can fluctuate within a year due to seasonality in the business. For
example, a surfboard manufacturer’s sales would be highest during the warm summer months, whereas a
manufacturer of snow skis would experience its peak sales in the wintertime. Variability in a time series
due to seasonal influences is called the seasonal component.
Note: Seasonal behavior can take place within any time period. Seasonal behavior is not limited to
periods of a year. A business that is busiest at the same time every day is said to have a within-the-day seasonal component. Any pattern that repeats regularly is a seasonal component.
Seasonality in a time series is identified by regularly spaced peaks and troughs with a consistent direction
that are of approximately the same magnitude each time, relative to any trend. The graph that follows
shows a strongly seasonal pattern. Sales are low during the first quarter each year. Sales begin to increase
each year in the second quarter and they reach their peak in the third quarter, then they drop off and are
low during the fourth quarter. However, the overall trend is upward, as illustrated by the trend line.
Example of a Seasonal Pattern in Time Series Analysis
Sales for each quarter, March 20X6 through December 20X8, are as follows:
Quarter        Sales
Mar. 20X6      $1,200,000
Jun. 20X6      $2,500,000
Sep. 20X6      $3,200,000
Dec. 20X6      $1,500,000
Mar. 20X7      $1,400,000
Jun. 20X7      $2,800,000
Sep. 20X7      $3,800,000
Dec. 20X7      $1,400,000
Mar. 20X8      $1,700,000
Jun. 20X8      $2,500,000
Sep. 20X8      $3,900,000
Dec. 20X8      $1,600,000
The chart that follows contains historical sales by quarter for three years and forecasted sales by quarter
for the fourth year. The fourth year quarterly forecasts were calculated in Excel using the FORECAST.ETS
function, which is an exponential smoothing algorithm.
Note: Exponential smoothing is outside the scope of the CMA exams, so it is not covered any further in
these study materials.
The chart illustrates that sales volume begins to build in the second quarter of each year. The sales volume
reaches its peak in the third quarter and is at its lowest in the fourth quarter of each year.
[Chart: Seasonal Pattern, Historical Sales by Quarter, 20X6-20X8, with 20X9 Quarterly Forecasts. Sales (y-axis) plotted against quarters (x-axis), showing actual sales, the 20X9 forecast, and a trend line.]
Irregular Pattern in a Time Series
A time series may vary randomly, not repeating itself in any regular pattern. Such a pattern is called an
irregular pattern. It is caused by short-term, non-recurring factors and its impact on the time series
cannot be predicted.
Example of an Irregular Pattern in Time Series Analysis
Sales for each year, 20X0 through 20X9, are as follows:
Year     Sales
20X0     $1,500,000
20X1     $3,200,000
20X2     $2,100,000
20X3     $2,500,000
20X4     $1,400,000
20X5     $1,600,000
20X6     $3,600,000
20X7     $2,000,000
20X8     $2,500,000
20X9     $1,700,000
The following chart exhibits the irregular pattern of the sales:
[Chart: Irregular Pattern, Sales 20X0 - 20X9. Sales (y-axis) plotted against years (x-axis).]
Time Series and Regression Analysis
A time series that has a long-term upward or downward trend can be used to make a forecast. Simple
linear regression analysis is used to create a trend projection and to forecast values using historical information from all available past observations of the value.
Note: Simple regression analysis is called “simple” to differentiate it from multiple regression analysis.
The difference between simple linear regression and multiple linear regression is in the number of independent variables.
• A simple linear regression has only one independent variable. In a time series, that independent
variable is the passage of time.
• A multiple linear regression has more than one independent variable.
Linear regression means the regression equation graphs as a straight line.
Simple linear regression analysis relies on two assumptions:
•	Variations in the dependent variable (the value being predicted) are explained by variations in one single independent variable (the passage of time, for a time series).
•	The relationship between the independent variable and the dependent variable (whatever is being predicted) is linear. A linear relationship is one in which the relationship between the independent variable and the dependent variable can be approximated by a straight line on a graph. The regression equation, which approximates the relationship, will graph as a straight line.
The equation of a simple linear regression line is:
ŷ = a + bx
Where:
ŷ = the predicted value of the dependent variable, ŷ, on the regression line corresponding to each value of x.
a = the constant coefficient, or the y-intercept: the value of ŷ on the regression line when x is zero.
b = the variable coefficient and the slope of the regression line, which is the amount by which the ŷ value of the regression line changes (either increases or decreases) when the value of x increases by one unit.
x = the independent variable, or the value of x on the x-axis that corresponds to the predicted value of ŷ on the regression line.
Note: The equation of a simple linear regression line graphs as a straight line because none of the
variables in the equation are squared or cubed or have any other exponents. If an equation contains any
exponents, the graph of the equation will be a curved line.
The line of best fit as determined by simple linear regression is a formalization of the way one would fit a
trend line through the graphed data just by looking at it. To fit a line by looking at it, one would use a ruler
or some other straight edge and move it up and down, changing the angle, until it appears the differences
between the points and the line drawn with the straight edge have been minimized. The line that results
will be a straight line located at the position where approximately the same number of points are above the
line as are below it and the distance between each point and the line has been minimized (that is, the
distance is as small as possible).
Linear regression is used to calculate the location of the regression line mathematically. Linear
regression analysis is performed on a computer or a financial calculator, using the observed values of x and
y.
On a graph, the difference between each actual, observed point and its corresponding point on the calculated regression line is called a deviation. When the position of the regression line is calculated
mathematically, the line will be in the position where the deviations between each graphed value and
the regression line have been minimized. The resulting regression line is the line of best fit. That line
can then be used to predict the value of y for any given value of x.
Note: The statistical method used to perform simple regression analysis is called the Least Squares method, also known as the Ordinary Least Squares method or OLS. The regression line is called the least squares regression line.
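Although these study materials rely on a computer or a financial calculator to perform the calculation, the least squares coefficients are, for reference, determined as follows, where x̄ and ȳ are the means of the observed x-values and y-values:

b (slope) = Σ(x − x̄)(y − ȳ) ÷ Σ(x − x̄)²
a (y-intercept) = ȳ − (b × x̄)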
Simple linear regression was used to calculate the regression line and the forecast on the graph presented
earlier as an example of a trend pattern. The regression line was extended out for one additional year to
create a forecast for that year.
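Programming is not tested on the CMA exam, but for readers who want to reproduce the trend-pattern example, the following is a minimal sketch in Python, assuming the numpy library is available. numpy.polyfit is used here simply as one readily available least squares routine; a spreadsheet or statistical calculator applied to the same data should return essentially the same coefficients.

```python
import numpy as np

# Years 20X0 through 20X9 coded as x = 0 through 9, with the sales
# from the trend-pattern example earlier in this topic.
x = np.arange(10)
sales = np.array([2_250_000, 2_550_000, 2_300_000, 2_505_000, 2_750_000,
                  2_800_000, 2_600_000, 2_950_000, 3_000_000, 3_200_000])

# Fit a first-degree (straight-line) polynomial by ordinary least squares.
slope, intercept = np.polyfit(x, sales, 1)
print(intercept, slope)        # approximately 2,273,636 and 92,636

# Extend the fitted line one year past the data to forecast the year
# after 20X9 (x = 10).
forecast = intercept + slope * 10
print(forecast)                # approximately 3,200,000
```

The printed coefficients correspond to the regression line shown on the trend-pattern chart, and the x = 10 value is the kind of one-year extension of the line described above.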
Before Developing a Prediction Using Regression Analysis
Before using regression analysis to predict a value, determine whether regression analysis can even be
used to make a prediction.
1) The dependent variable, y, must have a linear relationship with the independent variable, x.
To determine whether a linear relationship exists, make a scatter plot of the actual historical
values in the time series and review the results. Plotting the x and y coordinates on a scatter plot
will indicate whether or not there is a linear relationship between them.
Note: The x-axis on a scatter plot is on the horizontal and the y-axis is on the vertical, and
each observation is plotted at the intersection of its x-value and its y-value.
If the long-term trend appears to be linear, simple linear regression analysis may be used (subject to correlation analysis as described below) to determine the location of the linear regression line, and that linear regression line can be used to make a prediction.
Below is a scatter plot that exhibits no correlation between the x-variable, time, and the y-variable, sales. The chart below is also an example of the irregular pattern described previously.
[Scatter plot: Sales, Year 1 - Year 10. Sales (y-axis) plotted against years (x-axis); the points show no apparent pattern.]
Historical sales like the above indicate that regression analysis using a time series would not be a
good way to make a prediction.
On the other hand, the following scatter plot does display a linear relationship between the x-values and y-values, and the use of regression analysis to make a prediction could be helpful.
[Scatter plot: Sales, Year 1 - Year 10. Sales (y-axis) plotted against years (x-axis); the points rise over time in an approximately straight-line pattern.]
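Scatter plots such as these are ordinarily produced in a spreadsheet. Purely as an illustration, a minimal Python sketch (assuming the matplotlib library is available) that plots a time series this way might look like the following; the sales from the trend-pattern example are reused here as sample data.

```python
import matplotlib.pyplot as plt

# Sample data: year codes on the x-axis and observed sales on the y-axis.
years = list(range(10))
sales = [2_250_000, 2_550_000, 2_300_000, 2_505_000, 2_750_000,
         2_800_000, 2_600_000, 2_950_000, 3_000_000, 3_200_000]

# Plot each observation at the intersection of its x-value and y-value.
plt.scatter(years, sales)
plt.xlabel("Years")
plt.ylabel("Sales")
plt.title("Scatter Plot of Sales")
plt.show()
```

If the plotted points fall roughly along a straight line, as in the second scatter plot above, linear regression is worth pursuing; if they look like the first scatter plot, it is not.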
2) Correlation analysis should indicate a high degree of correlation between the independent variable, x, and the dependent variable, y.
In addition to plotting the points on a scatter plot graph, correlation analysis should be performed before relying on regression analysis to develop a prediction.
Correlation analysis determines the strength of the linear relationship between the x-values
and their related y-values. The results of the correlation analysis tell the analyst whether the
relationship between the independent variable (the passage of time, for a time series) and the
dependent variable (sales, for example) is reasonable.
Note: Correlation describes the degree of the relationship between two variables. If two
things are correlated with one another, it means there is a close connection between them.
•	If high measurements of one variable tend to be associated with high measurements of the other variable, or low measurements of one variable tend to be associated with low measurements of the other variable, the two variables are said to be positively correlated.
•	If high measurements of one variable tend to be associated with low measurements of the other variable, the two variables are said to be negatively correlated.
•	If there is a close match in the movements of the two variables over a period of time, either positive or negative, it is said that the degree of correlation is high.
However, correlation alone does not prove causation. Rather than one variable causing the
other variable to occur, it may be that some other, entirely different, factor is affecting both variables.
In a time series, the only independent variable is the passage of time. Many factors in addition to
time can affect the dependent variable. For example, if sales are being predicted, economic cycles,
promotional programs undertaken, and industry-wide conditions such as new government regulations can cause changes in sales volume. If time series regression analysis is used to develop a
prediction, the prediction should be adjusted for other known factors that may have affected the
historical data and that may affect the prediction.
Note: Correlation does not prove causation.
Correlation Analysis
Correlation analysis is used to assess how well a model can predict an outcome.
Correlation analysis involves several statistical calculations, all done with a computer or a financial calculator, using the observed values of x and y. Correlation analysis is used to determine how well correlated the
variables are in order to decide whether the independent variable or variables can be used to make decisions
regarding the dependent variable used in the analysis.
Some of the most important statistical calculations for determining correlation are:
1) The correlation coefficient, R
2) The standard error of the estimate, also called the standard error of the regression
3) The coefficient of determination, R²
4) The T-statistic
The Correlation Coefficient (R)
The correlation coefficient measures the relationship between the independent variable and the dependent variable. The coefficient of correlation is a number that expresses how closely related, or correlated,
the two variables are and the extent to which a variation in one variable has historically resulted in a
variation in the other variable.
Mathematically, the correlation coefficient, represented by R, is a numerical measure that expresses both
the direction (positive or negative) and the strength of the linear association between the two variables.
In a time series using linear regression analysis, the period of time serves as the independent variable (x-axis) while a variable such as the sales level serves as the dependent variable (y-axis).
When a time series (such as sales over a period of several years) is graphed, the data points on the graph
may show an upsloping linear pattern, a downsloping linear pattern, a nonlinear pattern (such as a curve),
or no pattern at all. The pattern of the data points indicates the amount of correlation between the values
on the x-axis (time) and the values on the y-axis (sales).
The amount of correlation, or correlation coefficient (R), is expressed as a number between −1 and
+1. The sign of the correlation coefficient and the absolute value of the correlation coefficient describe
the direction and the magnitude of the relationship between the two variables.
•	A correlation coefficient (R) of +1 means the linear relationship between each value for x and its corresponding value for y is perfectly positive (upsloping). When x increases, y increases by the same proportion; when x decreases, y decreases by the same proportion.
•	A correlation coefficient (R) of −1 means the linear relationship between each value for x and its corresponding value for y is perfectly negative (downsloping). When x increases, y decreases by the same proportion; when x decreases, y increases by the same proportion. In a time series, the x value (time) can only increase, so if the correlation coefficient is negative, the level of the y value decreases with the passage of time.
•	A correlation coefficient (R) that is close to zero usually means there is very little or no relationship between each value of x and its corresponding y value. However, a correlation coefficient that is close to zero may mean there is a strong relationship, but the relationship is not a linear one. (Candidates do not need to know how to recognize a non-linear relationship. Just be aware that non-linear relationships occur.)
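As an illustration only, the correlation coefficient can be obtained from any statistical tool. The following minimal Python sketch (assuming the numpy library is available) computes R for the trend-pattern sales data used earlier in this topic; numpy.corrcoef returns a small correlation matrix whose off-diagonal entry is R.

```python
import numpy as np

# Year codes and sales from the trend-pattern example.
x = np.arange(10)
sales = np.array([2_250_000, 2_550_000, 2_300_000, 2_505_000, 2_750_000,
                  2_800_000, 2_600_000, 2_950_000, 3_000_000, 3_200_000])

# The off-diagonal element of the correlation matrix is R.
r = np.corrcoef(x, sales)[0, 1]
print(round(r, 2))             # roughly 0.91, a high positive correlation
```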
A high correlation coefficient (R), that is, a number close to either +1 or −1, means that simple linear
regression analysis would be useful as a way of making a projection. Generally, a correlation coefficient of
±0.50 or higher indicates enough correlation that a linear regression can be useful for forecasting. The
closer R is to ±1, the better the forecast should be.
•	If R is a positive number close to +1 (such as 0.83), it indicates that the data p