UNIT-I
FUNDAMENTALS OF SOFTWARE QUALITY ENGINEERING
CONCEPTS OF QUALITY
Definition of Quality:
The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs. (ASQC)
Quality:
Ability of the product/service to fulfill its function
Hard to define
Impossible to measure
Easy to recognize in its absence
Transparent when present
Characteristics of Quality:
Quality is not absolute
Quality is multidimensional
Quality is subject to constraints
Quality is about acceptable compromises
Quality criteria are not independent, but interact with each other causing
conflicts
DIMENSIONS OF QUALITY
• Performance
• Aesthetics
• Special features: convenience, high tech
• Safety
• Reliability
• Durability
• Perceived Quality
• Service after sale
IMPORTANCE OF QUALITY
• Lower costs (less labor, rework, scrap)
• Motivated employees
• Market Share
• Reputation
• International competitiveness
• Increased revenue generation (ultimate goal)
SOFTWARE QUALITY FACTORS
A software quality factor is a non-functional requirement for a software
program which is not called up by the customer's contract, but nevertheless is a
desirable requirement which enhances the quality of the software program.
Understandability
Completeness
Maintainability
Conciseness
Portability
Consistency
Testability
Usability
Reliability
Structuredness
Efficiency
Security
HIERARCHICAL MODEL OF QUALITY:

To compare quality in different situations, both qualitatively and quantitatively, it is necessary to establish a model of quality. Many models have been suggested for quality; most are hierarchical in nature. A qualitative assessment is generally made, along with a more quantified assessment.
There are two principal models of this type, one by Boehm (1978) and one by McCall (1977). A hierarchical model of software quality is based upon a set of quality criteria, each of which has a set of measures or metrics associated with it.
The issues relating to the criteria of quality are:
• What criteria of quality should be employed?
• How do they inter-relate?
• How may the associated metrics be combined into a meaningful overall measure of quality?
THE HIERARCHICAL MODELS OF BOEHM AND MCCALL
THE GE MODEL (MCCALL, 1977 AND 1980)
This model was first proposed by McCall in 1977. It was later revised as the MQ model, and it is aimed at system developers, to be used during the development process.
In an early attempt to bridge the gap between users and developers, the criteria were chosen to reflect users' views as well as developers' priorities. The criteria appear to be technically oriented, but they are described by a series of questions which define them in terms accessible to non-specialist managers.
The three areas addressed by McCall's model (1977) are:
Product operation: requires that the software can be learnt easily, operated efficiently, and that its results are those required by the users.
Product revision: concerned with error correction and adaptation of the system; this is the most costly part of software development.
Product transition: concerned with moving the software to new environments; its importance is likely to increase as distributed processing and the rapid rate of change in hardware become more significant.
Quality Models
Why a quality model? It enables quality comparison both qualitatively and quantitatively.
Hierarchical models consider quality under a series of quality characteristics or criteria, each having a set of associated measures or metrics, combined in a hierarchical manner into an overall assessment of quality.
Questions:
• What criteria should be employed?
• How do they inter-relate?
• How can they be combined to provide an overall assessment of quality?
McCall’s Model
Boehm’s Model
Barry W. Boehm is known for his many contributions to software engineering.
He was the first to identify software as the primary expense of future computer
systems; he developed COCOMO, the spiral model, wideband Delphi, and many
more contributions through his involvement in industry and academia.
McCall's criteria of quality defined
Efficiency is concerned with the use of resources, e.g. processor time and storage. It falls into two categories: execution efficiency and storage efficiency.
Integrity is the protection of the program from unauthorized access.
Correctness is the extent to which a program fulfils its specification.
Reliability is its ability not to fail.
Maintainability is the effort required to locate and fix a fault in the program within its operating environment.
Flexibility is the ease of making changes required by changes in the operating environment.
Testability is the ease of testing the program, to ensure that it is error-free and meets its specification.
Portability is the effort required to transfer a program from one environment to another.
Reusability is the ease of reusing software in a different context.
Interoperability is the effort required to couple the system to another system.
McCall's Quality Model - 1977
Jim McCall produced this model for the US Air Force, and the intention was to bridge the gap between users and developers. He tried to map the user view onto the developer's priority.
McCall identified three main perspectives for characterizing the quality attributes of a software product. These perspectives are:
Product revision
The product revision perspective identifies quality factors that influence the ability to change the software product. These factors are:
• Maintainability, the ability to find and fix a defect.
• Flexibility, the ability to make changes required as dictated by the business.
• Testability, the ability to validate the software requirements.
Product transition
The product transition perspective identifies quality factors that influence the ability to adapt the software to new environments:
• Portability, the ability to transfer the software from one environment to another.
• Reusability, the ease of using existing software components in a different context.
• Interoperability, the extent, or ease, to which software components work together.
Product operations
The product operations perspective identifies quality factors that influence the extent to which the software fulfils its specification:
• Correctness, the extent to which the functionality matches the specification.
• Reliability, the extent to which the system fails.
• Efficiency, the system's use of resources.
• Integrity, protection from unauthorized access.
• Usability, the ease of use.
In total McCall identified the 11 quality factors broken down by the 3 perspectives, as listed above.
For each quality factor McCall defined one or more quality criteria (a way of measurement); in this way an overall quality assessment could be made of a given software product by evaluating the criteria for each factor. For example, the Maintainability quality factor would have criteria of simplicity, conciseness and modularity.
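As an illustration of how such an assessment can be computed, the sketch below scores each factor from its criteria and combines the factors by weighted summation, the combination rule both hierarchical models rely on. It is a minimal Python sketch; the factor weights and criterion scores are invented for illustration and are not part of McCall's model.

# Minimal sketch of a hierarchical quality assessment in the spirit of
# McCall's model: each quality factor is scored from its measurable
# criteria, and the overall score is a weighted summation of factors.
# All weights and scores below are illustrative only.

criteria_scores = {
    "maintainability": {"simplicity": 0.7, "conciseness": 0.6, "modularity": 0.8},
    "reliability":     {"accuracy": 0.9, "error_tolerance": 0.8},
    "usability":       {"operability": 0.75, "training": 0.6},
}

# Relative importance of each factor for this (hypothetical) project.
factor_weights = {"maintainability": 0.5, "reliability": 0.3, "usability": 0.2}

def factor_score(criteria: dict[str, float]) -> float:
    """Score a factor as the mean of its criterion measures (each in 0..1)."""
    return sum(criteria.values()) / len(criteria)

def overall_quality(scores: dict, weights: dict) -> float:
    """Weighted summation of factor scores, as the hierarchical models suggest."""
    return sum(weights[f] * factor_score(c) for f, c in scores.items())

print(f"Overall quality: {overall_quality(criteria_scores, factor_weights):.2f}")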
Boehm's Quality Model - 1978
Barry W. Boehm also defined a hierarchical model of software quality characteristics, in trying to qualitatively define software quality as a set of attributes and metrics (measurements).
At the highest level of his model, Boehm defined three primary uses (or basic software requirements). These three primary uses are:
• As-is utility, the extent to which the as-is software can be used (i.e. ease of use, reliability and efficiency).
• Maintainability, ease of identifying what needs to be changed as well as ease of modification and retesting.
• Portability, ease of changing software to accommodate a new environment.
These three primary uses had quality factors associated with them, representing the next level of Boehm's hierarchical model. Boehm identified seven quality factors, namely:
• Portability, the extent to which the software will work under different computer configurations (i.e. operating systems, databases etc.).
• Reliability, the extent to which the software performs as required, i.e. the absence of defects.
• Efficiency, optimum use of system resources during correct execution.
• Usability, ease of use.
• Testability, ease of validation that the software meets the requirements.
• Understandability, the extent to which the software is easily comprehended with regard to purpose and structure.
• Modifiability, the ease with which the software can be modified.
These quality factors are further broken down into primitive constructs that can be measured; for example, Testability is broken down into accessibility, communicativeness, structure and self-descriptiveness. As with McCall's Quality Model, the intention is to be able to measure the lowest level of the model.
The model thus provides a set of well-defined, well-differentiated characteristics of software quality. The hierarchy is extended so that quality criteria are subdivided: portability is treated as a general characteristic, while the 'as-is' utilities are a subtype of the general utilities relating to product operation. The criteria are further split into primitive characteristics which are amenable to measurement. The model is based on a much larger set of criteria than McCall's model, but retains the same emphasis on technical criteria.
SUMMARY OF THE TWO MODELS
Although only a summary of the two example software factor models has been given, some comparisons and observations can be made that generalize the overall quest to characterize software.
Both the McCall and Boehm models follow a similar structure, with a similar purpose. They both attempt to break down the software artifact into constructs that can be measured. Some quality factors are repeated, for example: usability, portability, efficiency and reliability.
The presence of more or fewer factors is not, however, indicative of a better or worse model. The value of these, and other, models is purely a pragmatic one and lies not in their semantic or structural differences. The extent to which one model allows for an accurate measurement (cost and benefit) of the software will determine its value.
It is unlikely that one model will emerge as best, and other models are always likely to be proposed; this reflects the intangible nature of software and the essential difficulties that this brings. ISO 9126 represents the ISO's more recent attempt to define a set of useful quality characteristics.
The two models share a number of common characteristics:
• The quality criteria are supposedly based upon the user's view.
• The models focus on the parts that designers can more readily analyze.
• Hierarchical models cannot be tested or validated; it cannot be shown that the metrics accurately reflect the criteria.
• The measurement of overall quality is achieved by a weighted summation of the characteristics.
The models differ in detail: Boehm talks of modifiability where McCall distinguishes expandability from adaptability, and of documentation where McCall talks of understandability and clarity.
HOW THE QUALITY CRITERIA INTERRELATE
The individual measures of software quality provided do not give an overall measure of software quality. The individual measures must be combined, and they may conflict with each other. Some of these relationships are described below.
• Integrity vs. efficiency (inverse): the control of access to data or software requires additional code and processing, leading to a longer runtime and additional storage requirements.
• Usability vs. efficiency (inverse): improvements in the human/computer interface may significantly increase the amount of code and power required.
• Maintainability and testability vs. efficiency (inverse): optimized and compact code is not easy to maintain.
• Portability vs. efficiency (inverse): the use of optimized software or system utilities will lead to a decrease in portability.
• Flexibility, reusability and interoperability vs. efficiency (inverse): the generality required for a flexible system, the use of interface routines, and the modularity desirable for reusability will all decrease efficiency.
• Flexibility and reusability vs. integrity (inverse): the general, flexible data structures required for flexible and reusable software increase the security and protection problem.
• Interoperability vs. integrity (inverse): coupled systems allow more avenues of access to more and different users.
• Reusability vs. reliability (inverse): reusable software is required to be general, and maintaining accuracy and error tolerance across all cases is difficult.
• Maintainability vs. flexibility (direct): maintainable code arises from code that is well structured.
• Maintainability vs. reusability (direct): well structured, easily maintainable code is easier to reuse in other programs, either as a library of routines or as code placed directly within another program.
• Portability vs. reusability (direct): portable code is likely to be free of environment-specific features.
• Correctness vs. efficiency (neutral): the correctness of code, i.e. its conformance to specification, does not influence its efficiency.
Process Maturity / Capability Maturity
The concepts of process or capability maturity are increasingly being applied to
many aspects of product development, both as a means of assessment and as
part of a framework for improvement. Maturity models have been proposed for
a range of activities including quality management, software development,
supplier relationships, R&D effectiveness, product development, innovation,
product design, product development collaboration and product reliability.
The principal idea of the maturity grid is that it describes in a few phrases, the
typical behaviour exhibited by a firm at a number of levels of ‘maturity’, for
each of several aspects of the area under study. This provides the opportunity to
codify what might be regarded as good practice (and bad practice), along with
some intermediate or transitional stages.
This page traces the origins of the maturity grid and reviews methods of
construction and application. It draws on insights gained during development of
maturity grids for product design and for the management of product
development collaborations, and discusses alternative definitions of 'maturity'.
One conclusion is that it is difficult to make the maturity grid development
process completely rigorous and it is suggested that some compromise is
necessary and appropriate in the interests of producing a useful and usable tool.
Origins and overview of maturity approaches
Maturity approaches have their roots in the field of quality management. One of
the earliest of these is Crosby's Quality Management Maturity Grid (QMMG)
which describes the typical behaviour exhibited by a firm at five levels of
‘maturity’, for each of six aspects of quality management (see Fig. 1). The
QMMG had a strong evolutionary theme, suggesting that companies were likely
to evolve through five phases - Uncertainty, Awakening, Enlightenment,
Wisdom, and Certainty – in their ascent to quality management excellence.
Measurement category: Management understanding and attitude
• Stage I (Uncertainty): No comprehension of quality as a management tool. Tend to blame the quality department for "quality problems".
• Stage II (Awakening): Recognising that quality management may be of value but not willing to provide money or time to make it happen.
• Stage III (Enlightenment): While going through a quality improvement program, learn more about quality management; becoming supportive and helpful.
• Stage IV (Wisdom): Participating. Understand the absolutes of quality management. Recognise their personal role in continuing emphasis.
• Stage V (Certainty): Consider quality management an essential part of the company system.
Measurement category: Quality organisation status
• Stage I: Quality is hidden in manufacturing or engineering departments. Inspection is probably not part of the organisation. Emphasis on appraisal and sorting.
• Stage II: A stronger quality leader is appointed, but the main emphasis is still on appraisal and moving the product. Still part of manufacturing or other departments.
• Stage III: The quality department reports to top management, all appraisal is incorporated, and the manager has a role in the management of the company.
• Stage IV: The quality manager is an officer of the company; effective status reporting and preventive action. Involved with consumer affairs and special assignments.
• Stage V: The quality manager is on the board of directors. Prevention is the main concern. Quality is a thought leader.
Measurement category: Problem handling
• Stage I: Problems are fought as they occur; no resolution; inadequate definition; lots of yelling and accusations.
• Stage II: Teams are set up to attack major problems. Long-range solutions are not solicited.
• Stage III: Corrective action communication established. Problems are faced openly and resolved in an orderly way.
• Stage IV: Problems are identified early in their development. All functions are open to suggestion and improvement.
• Stage V: Except in the most unusual cases, problems are prevented.
Measurement category: Cost of quality as % of sales
• Stage I: Reported: unknown; Actual: 20%
• Stage II: Reported: 3%; Actual: 18%
• Stage III: Reported: 8%; Actual: 12%
• Stage IV: Reported: 6.5%; Actual: 8%
• Stage V: Reported: 2.5%; Actual: 2.5%
Measurement category: Quality improvement actions
• Stage I: No organised activities. No understanding of such activities.
• Stage II: Trying obvious "motivational" short-range efforts.
• Stage III: Implementation of the 14-step program with thorough understanding and establishment of each step.
• Stage IV: Continuing the 14-step program and starting Make Certain.
• Stage V: Quality improvement is a normal and continued activity.
Measurement category: Summation of company quality posture
• Stage I: "We don't know why we have problems with quality."
• Stage II: "Is it absolutely necessary to always have problems with quality?"
• Stage III: "Through management commitment and quality improvement we are identifying and resolving our problems."
• Stage IV: "Defect prevention is a routine part of our operation."
• Stage V: "We know why we do not have problems with quality."
Figure 1: Crosby's Quality Management Maturity Grid (QMMG)
Perhaps the best known derivative from this line of work is the Capability
Maturity Model (CMM) for software. The CMM takes a different approach
however, identifying instead a cumulative set of ‘key process areas’ (KPAs)
which all need to be performed as the maturity level increases. As an alternative
to the complexity of CMM-derived maturity models, others have built on the
Crosby grid approach, where the performance of a number of key activities is described at each of four levels.
Level: Optimising
Description: Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.
Process areas: Defect Prevention; Technology Change Management; Process Change Management.
Level: Managed
Description: Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
Process areas: Quantitative Process Management; Software Quality Management.
Level: Defined
Description: The software process for both management and engineering activities is documented, standardised, and integrated into a standard software process for the organisation. All projects use an approved, tailored version of the organisation's standard software process for developing and maintaining software.
Process areas: Organization Process Focus; Organization Process Definition; Training Program; Integrated Software Management; Software Product Engineering; Intergroup Coordination; Peer Reviews.
Level: Repeatable
Description: Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
Process areas: Requirements Management; Software Project Planning; Software Project Tracking and Oversight; Software Subcontract Management; Software Quality Assurance; Software Configuration Management.
Level: Initial
Description: The software process is characterised as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.
Process areas: none required.
Maturity levels and process areas of the Software CMM
Typology
What is maturity? The literal meaning of the word maturity is 'ripeness',
conveying the notion of development from some initial state to some more
advanced state. Implicit in this is the notion of evolution or ageing, suggesting
that the subject may pass through a number of intermediate states on the way to
maturity.
Although a number of different types of maturity model have been proposed
(see Fig. 3), they share the common property of defining a number of
dimensions or process areas at several discrete stages or levels of maturity, with
a description of characteristic performance at various levels of granularity.
The various components which may or may not be present in each model are:
• a number of levels (typically 3-6)
• a descriptor for each level (such as initial / repeatable / defined / managed / optimising)
• a generic description or summary of the characteristics of each level as a whole
• a number of dimensions or 'process areas'
• a number of elements or activities for each process area
• a description of each activity as it might be performed at each maturity level
The number of levels is to some extent arbitrary. One key aspect of the Quality Grid approach is that it provides descriptive text for the characteristic traits of performance at each level. This becomes increasingly difficult as the number of levels is increased, and adds to the complexity of the tool, such that descriptive grids typically contain no more than 5 levels.
Figure 3: A sample of maturity models, showing the range of subject and architecture.
A distinction is made between those models in which different activities may
be scored to be at different levels, and those in which maturity levels are
'inclusive', where a cumulative number of activities must all be performed. In the terminology of the SEI CMM, these are referred to as 'continuous' and 'staged' respectively.
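A staged ('inclusive') rating can be computed mechanically: an organisation attains a level only if every key process area at that level and all lower levels is performed. A minimal Python sketch, with KPA lists abbreviated from the Software CMM table above and a hypothetical set of satisfied KPAs:

# Sketch of 'staged' maturity scoring: a level is attained only when all key
# process areas (KPAs) of that level and every level below it are performed.
# KPA lists are abbreviated; the set of satisfied KPAs is hypothetical.

KPAS_BY_LEVEL = {
    2: ["requirements management", "software project planning", "software QA"],
    3: ["organization process focus", "training program", "peer reviews"],
    4: ["quantitative process management", "software quality management"],
    5: ["defect prevention", "process change management"],
}

def maturity_level(satisfied: set[str]) -> int:
    """Return the highest staged level whose cumulative KPAs are all satisfied."""
    level = 1  # Level 1 (Initial) has no required process areas.
    for lvl in sorted(KPAS_BY_LEVEL):
        if all(kpa in satisfied for kpa in KPAS_BY_LEVEL[lvl]):
            level = lvl
        else:
            break  # staged: a gap at this level blocks all higher levels
    return level

done = {"requirements management", "software project planning",
        "software QA", "peer reviews"}
print(maturity_level(done))  # -> 2: the level 3 KPAs are incomplete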
A typology is proposed here which divides maturity models into three basic groups:
• Maturity grids
• Hybrids and Likert-like questionnaires
• CMM-like models
The Likert-like questionnaire, when constructed in a particular way, can be
considered to be a simple form of maturity model. In this case, the 'question' is
simply a statement of 'good practice' and the respondent is asked to score the
relative performance of the organisation on a scale from 1 to n. This is
equivalent to a maturity grid in which only the characteristics of the top level
are described. Note that if n=2, this form of instrument becomes a checklist.
Models combining a questionnaire approach with definitions of maturity are
referred to here as 'hybrids'. Typically, there is an overall description of the
levels of maturity, but no additional description for each activity.
CMM-like models define a number of key process areas at each level. Each key
process area is organised into five sections called 'common features': commitment to perform, ability to perform, activities performed, measurement
& analysis and verifying implementation. The common features specify the key
practices that, when collectively addressed, accomplish the goals of the key
process area. These models are typically rather verbose.
Application
We have developed maturity grids for product design and for the management
of product development collaborations, and these have been applied both
standalone and in workshop settings. The grids were found to be a useful way of
capturing good and not-so-good practice, but in the interests of usability, a
balance was necessary between richness of detail (verbosity) and conciseness
(superficiality).
Chiesa et al. produced two instruments as part of a technical innovation audit: a
simple maturity-grid ‘scorecard’ and a more detailed ‘self-evaluation’. Even
though the self-evaluation was less onerous than CMM-based procedures, they
report that companies still preferred the simplicity of the scorecard.
Maturity assessments can be performed by an external auditor, or by self-assessment. Whilst self-assessments can be performed by an individual in
isolation, they are perhaps more beneficial if approached as a team exercise.
One reason for this is to eliminate single-respondent bias. Also, by involving
people from different functional groups within a project team, the assessment
has a cross-functional perspective and provides opportunity for consensus and
team-building.
Other studies have confirmed the value of team discussion around an audit tool.
Discussing their innovation audit tool, Chiesa et al. report that ‘the feedback on
team use was very positive’. Group discussion also plays a part in use of
Cooper’s NewProd system, while Ernst and Teichert recommend using a
workshop in an NPD benchmarking exercise to build consensus and overcome
single-informant bias.
According to the software CMM,
"Software process maturity is the extent to which a specific process is
explicitly defined, managed, measured, controlled, and effective."
Furthermore, maturity implies that the process is well-understood, supported by
documentation and training, is consistently applied in projects throughout the
organisation, and is continually being monitored and improved by its users:
"Institutionalization entails building an infrastructure and a corporate culture
that supports the methods, practices, and procedures of the business so that
they endure after those who originally defined them have gone".
The CMM identifies five levels of maturity: initial, repeatable, defined,
managed and optimizing. Dooley points out that the CMM maturity definition
mixes process attributes (defined, managed, measured, controlled) with
outcomes (effective), and proposes an alternative definition of maturity:
"The extent to which a process is explicitly defined, managed, measured, and
continuously improved".
This is a logical modification, but raises an interesting issue: is the inclusion of
'effective' in the CMM definition of maturity a caveat, acknowledging that
defined, managed, measured and controlled may not be enough? Moreover, can
an ineffective process be described as mature even if it is defined, managed,
measured and controlled? Perhaps the key aspects of maturity might be better
defined as effective AND institutionalised (i.e. repeatable), regardless of the
degree of management or bureaucracy associated with it.
Our current working definition is shown below:
"The degree to which processes and activities are executed following 'good
practice' principles and are defined, managed and repeatable."
A synthesis of maturity levels is shown in Fig. 4, which represents maturity as a
combination of the presence of a process and the organization’s attitude to it.
The lowest level is characterized by ad hoc behavior. As the level increases, the
process is defined and respected. At the highest level, good practice becomes
ingrained or 'cultural', and it could be said that whilst the process exists, it is not
needed.
******************
UNIT II
DEVELOPMENTS IN MEASURING QUALITY
Selecting Quality Goals And Measures
What is software quality, and why is it so important that it is pervasive in the Software Engineering
Body of Knowledge? Within an information system, software is a tool, and tools have to be selected
for quality and for appropriateness. That is the role of requirements. But software is more than a
tool. It dictates the performance of the system, and it is therefore important to the system quality.
The notion of “quality” is not as simple as it may seem. For any engineered product, there are many
desired qualities relevant to a particular project, to be discussed and determined at the time that the
product requirements are determined. Qualities may be present or absent, or may be matters of
degree, with tradeoffs among them, with practicality and cost as major considerations. The software
engineer has a responsibility to elicit the system’s quality requirements that may not be explicit at
the outset and to discuss their importance and the difficulty of attaining them. All processes
associated with software quality (e.g. building, checking, improving quality) will be designed with
these in mind and carry costs based on the design. Thus, it is important to have in mind some of the
possible attributes of quality.
Software Quality Fundamentals
Agreement on quality requirements, as well as clear communication to the software engineer on
what constitutes quality, require that the many aspects of quality be formally defined and
discussed. A software engineer should understand the underlying meanings of quality concepts and
characteristics and their value to the software under development or to maintenance.
The important concept is that the software requirements define the required quality characteristics
of the software and influence the measurement methods and acceptance criteria for assessing these
characteristics.
Software Engineering Culture and Ethics
Software engineers are expected to share a commitment to software quality as part of their culture.
Ethics can play a significant role in software quality, the culture, and the attitudes of software
engineers. The IEEE Computer Society and the ACM have developed a code of ethics and
professional practice based on eight principles to help software engineers reinforce attitudes related
to quality and to the independence of their work.
Value and Costs of Quality
The notion of “quality” is not as simple as it may seem. For any engineered product, there are many
desired qualities relevant to a particular perspective of the product, to be discussed and determined
at the time that the product requirements are set down. Quality characteristics may be required or
not, or may be required to a greater or lesser degree, and trade-offs may be made among them.
The cost of quality can be differentiated into prevention cost, appraisal cost, internal failure cost,
and external failure cost. A motivation behind a software project is the desire to create software that
has value, and this value may or may not be quantified as a cost. The customer will have some
maximum cost in mind, in return for which it is expected that the basic purpose of the software will
be fulfilled. The customer may also have some expectation as to the quality of the software.
Sometimes customers may not have thought through the quality issues or their related costs. Is the
characteristic merely decorative, or is it essential to the software? If the answer lies somewhere in
between, as is almost always the case, it is a matter of making the customer a part of the decision
process and fully aware of both costs and benefits. Ideally, most of these decisions will be made in
the software requirements process, but these issues may arise throughout the software life cycle.
There is no definite rule as to how these decisions should be made, but the software engineer
should be able to present quality alternatives and their costs.
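The four cost-of-quality categories lend themselves to simple arithmetic: summing them and expressing the total as a percentage of sales (the measure used in Crosby's grid earlier). A minimal Python sketch with invented figures:

# Minimal sketch: tallying the cost of quality into the four categories
# named above and expressing it as a percentage of sales. Figures invented.

costs = {
    "prevention": 40_000,        # training, process design
    "appraisal": 60_000,         # reviews, testing, audits
    "internal_failure": 90_000,  # rework and scrap found before release
    "external_failure": 120_000, # defects reaching the customer
}
sales = 4_000_000

total = sum(costs.values())
print(f"cost of quality: {total:,} ({100 * total / sales:.1f}% of sales)")
for category, value in costs.items():
    print(f"  {category:17s} {100 * value / total:5.1f}% of total")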
Quality function deployment (QFD)
INTRODUCTION
The Quality Function Deployment (QFD) philosophy was pioneered by Yoji
Akao and Shigeru Mizuno. It aims to design products that assure customer satisfaction and
value - the first time, every time.
The QFD framework can be used for translating actual customer statements and needs
("The voice of the customer") into actions and designs to build and deliver a quality product .
Quality function deployment (QFD) is a “method to transform user demands into design
quality, to deploy the functions forming quality, and to deploy methods for achieving the design
quality into subsystems and component parts, and ultimately to specific elements of the
manufacturing process.”
QFD is designed to help planners focus on characteristics of a new or existing
product or service from the viewpoints of market segments, company, or technology-development needs. The technique yields graphs and matrices.
QFD helps transform customer needs (the voice of the customer {VOC}) into engineering
characteristics (and appropriate test methods) for a product or service, prioritizing each product or
service characteristic while simultaneously setting development targets for product or service.
Quality Function Deployment (QFD) was developed to bring this
personal interface to modern manufacturing and business. In today's industrial
society, where the growing distance between producers and users is a concern, QFD
links the needs of the customer (end user) with design, development, engineering,
manufacturing, and service functions.
In Short QFD is:
1. Understanding Customer Requirements
2. Quality Systems Thinking + Psychology + Knowledge/Epistemology
3. Maximizing Positive Quality That Adds Value
4. Comprehensive Quality System for Customer Satisfaction
5. Strategy to Stay Ahead of The Game
Typical tools and techniques used within QFD include:
• Affinity Diagrams. To surface the "deep structure" of voiced customer requirements.
• Relations Diagrams. To discover priorities and root causes of process problems and unspoken customer requirements.
• Hierarchy Trees. To check for missing data and other purposes.
• Various Matrixes. For documenting relationships, prioritization and responsibility.
• Process Decision Program Diagrams. To analyze potential failures of new processes and services.
• Analytic Hierarchy Process. To prioritize a set of requirements, and to select from alternatives to meet those requirements.
• Blueprinting. To depict and analyze all the processes which are involved in providing a product or service.
• House of Quality. The best-known QFD matrix (described below), which brings customer requirements, technical characteristics and their relationships together in one chart.
STEPS IN QUALITY FUNCTION DEPLOYMENT PROCESS
Typically, a QFD process has the following stages:
1. Derive top-level product requirements or technical characteristics from customer needs, using the Product Planning Matrix.
2. Develop product concepts to satisfy these requirements.
3. Evaluate product concepts to select the optimal concept, using the Concept Selection Matrix.
4. Partition the system concept or architecture into subsystems or assemblies, and transfer the higher-level requirements or technical characteristics to these subsystems or assemblies.
5. Derive lower-level product requirements (assembly or part characteristics) and specifications from subsystem/assembly requirements (Assembly/Part Deployment Matrix).
6. For critical assemblies or parts, carry these lower-level product requirements (assembly or part characteristics) into process planning.
7. Determine manufacturing process steps that correspond to these assembly or part characteristics.
8. Based on these process steps, determine set-up requirements, process controls and quality controls to assure the achievement of these critical assembly or part characteristics.
CONDITIONS OF QUALITY FUNCTION DEPLOYMENT:
QFD assumes that:
• The market survey results are accurate.
• Customer needs can be documented and captured, and they remain stable during the whole process.
BENEFITS OF THE QUALITY FUNCTION DEPLOYMENT MODEL:
Quality Function Deployment has the following benefits:
• QFD seeks out both "spoken" and "unspoken" customer requirements and maximizes "positive" quality (such as ease of use, fun, luxury) that creates value. Traditional quality systems aim at minimizing negative quality (such as defects, poor service).
• Instead of conventional design processes that focus more on engineering capabilities and less on customer needs, QFD focuses all product development activities on customer needs.
• QFD makes invisible requirements and strategic advantages visible. This allows a company to prioritize and deliver on them.
• Reduced time to market.
• Reduction in design changes.
• Decreased design and manufacturing costs.
• Improved quality.
• Increased customer satisfaction.
THE MAIN BENEFITS OF QFD ARE (IN SHORT):
• Improved Customer Satisfaction
• Reduced Development Time
• Improved Internal Communications
• Better Documentation of Key Issues
• Saved Money
SOME ADDITIONAL BENEFITS ARE:
• Improved morale, organizational harmony
• Improved efficiency
• Reduction in design changes
• Increased competitiveness
• High market acceptance
• Problems are easier to identify
DISADVANTAGES OF QUALITY FUNCTION DEPLOYMENT:
The following are the disadvantages of quality function deployment:
• As with other Japanese management techniques, some problems can occur when we apply QFD within the western business environment and culture.
• Customer perceptions are found by market survey. If the survey is performed in a poor way, then the whole analysis may end up harming the firm.
• The needs and wants of customers can change quickly nowadays. Comprehensive system and methodical thinking can make adapting to changed market needs more complex.
APPLICATIONS OF THE QUALITY FUNCTION DEPLOYMENT METHOD:
The following are the applications of the quality function deployment method:
• To prioritize customer demands and customer needs, spoken and unspoken;
• To translate these needs into actions and designs such as technical characteristics and specifications; and
• To build and deliver a quality product or service, by focusing various business functions toward achieving a common goal of customer satisfaction.
QFD has been applied in many industries: aerospace, manufacturing, software, communication, IT, chemical and pharmaceutical, transportation, defense, government, R&D, food, and the service industry.
QFD SUMMARY
• Orderly Way Of Obtaining Information & Presenting It
• Shorter Product Development Cycle
• Considerably Reduced Start-Up Costs
• Fewer Engineering Changes
• Reduced Chance Of Oversights During Design Process
• Environment Of Teamwork
• Consensus Decisions
• Preserves Everything In Writing
QFD – HISTORY
[Figure: QFD draws together the voice of the customer, statistical process control, value engineering, and design quality.]
QFD LIFE CYCLE CONSIDERATIONS
[Figure: the traditional QFD phases and the SQFD process.]
This diagram represents the QFD House of Quality concepts applied to requirements engineering, or
SQFD.
“SQFD is a front-end requirements elicitation technique, adaptable to any software
engineering methodology, that quantifiably solicits and defines critical customer requirements”
[Haag, 1996].
Step 1 – Customer requirements are solicited and recorded
Step 2 – Customer requirements are converted to technical and measurable statements and
recorded
Step 3 – Customers identify the correlations between technical requirements and their own
verbatim statements of requirement
Step 4 – Customer requirement priorities are established
Step 5 – Weights and priorities are applied to determine the technical product specification priorities (this is a calculation; see the sketch below)
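Step 5 can be sketched directly: each technical statement's priority is the sum, over customer requirements, of the requirement's priority times the correlation strength. The requirements, technical statements, and numbers below are hypothetical; the 9/3/1 values are the usual strong/medium/weak relationship weights:

# Minimal sketch of SQFD Step 5: technical specification priorities computed
# from customer priorities and the customer/technical correlation matrix.
# All names and numbers are hypothetical.

customer_priority = {"easy to learn": 5, "fast response": 4, "few defects": 3}

# correlation[customer requirement][technical statement] in {9, 3, 1, 0}
correlation = {
    "easy to learn": {"online help coverage": 9, "response time < 1s": 1},
    "fast response": {"online help coverage": 0, "response time < 1s": 9},
    "few defects":   {"online help coverage": 3, "response time < 1s": 3},
}

technical_priority: dict[str, int] = {}
for req, weight in customer_priority.items():
    for tech, strength in correlation[req].items():
        technical_priority[tech] = technical_priority.get(tech, 0) + weight * strength

for tech, score in sorted(technical_priority.items(), key=lambda kv: -kv[1]):
    print(f"{tech}: {score}")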
QFD MATRIX
[Figure: the QFD matrix (House of Quality), whose components are:
• Customer requirements (WHATs), organised into primary and secondary items, with prioritization columns: customer importance, target value, scale-up factor, sales point, and absolute weight.
• Technical descriptors (HOWs), likewise organised into primary and secondary items, with HOW MUCH target rows.
• The relationship matrix between customer requirements and technical descriptors (WHATs vs. HOWs): strong = +9, medium = +3, weak = +1.
• The correlation matrix between technical descriptors (HOWs vs. HOWs): strong positive = +9, positive = +3, negative = -3, strong negative = -9.
• Customer competitive assessment and technical competitive assessment (our product against competitors A and B).
• Prioritized technical descriptors: degree of technical difficulty, target value, absolute weight and percent, relative weight and percent.]
QFD PROCESS
The QFD process cascades through four phases, the HOWs of each phase becoming the WHATs of the next:
• Phase I, Product Planning: customer requirements are translated into design requirements.
• Phase II, Part Development: design requirements are translated into part quality characteristics.
• Phase III, Process Planning: part quality characteristics are translated into key process operations.
• Phase IV, Production Planning: key process operations are translated into production requirements, leading to production launch.
QUALITY CHARACTERISTICS TREE
The quality characteristics are used as the targets for validation (external quality) and
verification (internal quality) at the various stages of development.
They are refined into sub-characteristics, until attributes or measurable properties are obtained. In this context, a metric (or measure) is defined as a measurement method, and measurement means using a metric or measure to assign a value.
In order to monitor and control software quality during the development process, the
external quality requirements are translated or transferred into the requirements of intermediate
products, obtained from development activities. The translation and selection of the attributes is a
non-trivial activity, depending on the stakeholders' personal experience, unless the organization provides an infrastructure to collect and analyze previous experience on completed projects.
A requirement is defined as "a condition or capability to which a system must conform".
There are many different kinds of requirements. One way of categorizing them is described as the
FURPS+ model [GRA92], using the acronym FURPS to describe the major categories of requirements
with subcategories as shown below.
• Functionality
• Usability
• Reliability
• Performance
• Supportability
The "+" in FURPS+ reminds you to include such requirements as:
• Design constraints
• Implementation requirements
• Interface requirements
• Physical requirements.
Functional requirements specify actions that a system must be able to perform, without taking
physical constraints into consideration. These are often best described in a use-case model and in
use cases. Functional requirements thus specify the input and output behavior of a system.
Requirements that are not functional, such as the ones listed below, are sometimes called non-functional requirements. Many requirements are non-functional, and describe only attributes of the system or attributes of the system environment. Although some of these may be captured in use cases, those that cannot may be specified in Supplementary Specifications. Non-functional requirements are those that address issues such as those described below.
A complete definition of the software requirements, use cases, and Supplementary Specifications
may be packaged together to define a Software Requirements Specification (SRS) for a particular
"feature" or other subsystem grouping.
Functionality
Functional requirements may include:
• feature sets
• capabilities
• security
Usability
Usability requirements may include such subcategories as:
• human factors
• aesthetics
• consistency in the user interface
• online and context-sensitive help
• wizards and agents
• user documentation
• training materials
Reliability
Reliability requirements to be considered are:
• frequency and severity of failure
• recoverability
• predictability
• accuracy
• mean time between failures (MTBF)
Performance
A performance requirement imposes conditions on functional requirements. For example, for a given action, it may specify performance parameters for:
• speed
• efficiency
• availability
• accuracy
• throughput
• response time
• recovery time
• resource usage
Supportability
Supportability requirements may include:
• testability
• extensibility
• adaptability
• maintainability
• compatibility
• configurability
• serviceability
• installability
• localizability (internationalization)
FURPS
FURPS is an acronym representing a model for classifying software quality attributes (functional and non-functional requirements):
• Functionality - feature set, capabilities, generality, security
• Usability - human factors, aesthetics, consistency, documentation
• Reliability - frequency/severity of failure, recoverability, predictability, accuracy, mean time to failure
• Performance - speed, efficiency, resource consumption, throughput, response time
• Supportability - testability, extensibility, adaptability, maintainability, compatibility, configurability, serviceability, installability, localizability, portability
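One way to make the taxonomy operational is to hold it as data, so individual requirements can be tagged with a FURPS category. A minimal Python sketch; the attribute lists are abbreviated from the text and the sample requirements are invented:

# Minimal sketch: representing the FURPS taxonomy as data so that individual
# requirements can be tagged and grouped. Attribute lists are abbreviated.

FURPS = {
    "Functionality":  {"feature set", "capabilities", "security"},
    "Usability":      {"human factors", "aesthetics", "consistency", "documentation"},
    "Reliability":    {"recoverability", "predictability", "accuracy", "MTBF"},
    "Performance":    {"speed", "throughput", "response time", "resource usage"},
    "Supportability": {"testability", "adaptability", "maintainability"},
}

def classify(attribute: str) -> str:
    """Return the FURPS category an attribute belongs to, if any."""
    for category, attributes in FURPS.items():
        if attribute in attributes:
            return category
    raise KeyError(f"unclassified attribute: {attribute}")

requirements = [("login completes in under 2 s", "response time"),
                ("all actions reachable from keyboard", "human factors")]
for text, attribute in requirements:
    print(f"{classify(attribute):14s} {text}")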
There are many definitions of these software quality attributes, but a common one is the FURPS+ model, which was developed by Robert Grady at Hewlett-Packard. Under FURPS, the following characteristics are identified:
Functionality
The F in the FURPS+ acronym represents all the system-wide functional requirements that we would expect to see described. These usually represent the main product features that are familiar within the business domain of the solution being developed. For example, order processing is very natural for someone to describe if you are developing an order processing system.
The functional requirements can also be very technically oriented. Functional requirements that you may consider to be architecturally significant system-wide functional requirements may include auditing, licensing, localization, mail, online help, printing, reporting, security, system management, or workflow. Each of these may represent functionality of the system being developed, and each is a system-wide functional requirement.
Usability
Usability includes looking at, capturing, and stating requirements based around user interface issues: things such as accessibility, interface aesthetics, and consistency within the user interface.
Reliability
Reliability includes aspects such as availability, accuracy (for example, of computations), and recoverability of the system from shut-down failure.
Performance
Performance involves things such as throughput of information through the system, system response time (which also relates to usability), recovery time, and start-up time.
Supportability
Finally, we tend to include a section called Supportability, where we specify a number of other requirements such as testability, adaptability, maintainability, compatibility, configurability, installability, scalability, localizability, and so on.
The FURPS categories are of two different types: functional (F) and non-functional (URPS). These categories can be used both as product requirements and in the assessment of product quality. Functional requirements capture what behaviors the system provides; non-functional requirements capture how it provides them.
Functional requirements describe system behaviors
1. Priority: rank order the features wanted in importance
2. Criticality: how essential is each requirement to the overall system?
3. Risks: when might a requirement not be satisfied? What can be done to reduce this risk?
Non-functional requirements describe other desired attributes of overall system
1. Product cost (how do we measure cost?)
2. Performance (efficiency, response time? startup time?)
3. Portability (target platforms?), binary or byte-code compatibility?
4. Availability (how much down time is acceptable?)
5. Security (can it prevent intrusion?)
6. Safety (can it avoid damage to people or environment?)
7. Maintainability (in OO context: extensibility, reusability)
[Figure: FURPS classification]
FURPS+
Functionality
• All functional requirements.
• Usually represent main product features, e.g. order processing requirements.
• Can also be architecturally significant: auditing, licensing, localization, mail, online help, printing, reporting, security, system management, workflow.
Usability
• User interface issues such as accessibility, aesthetics and consistency.
Reliability
• Availability, accuracy, recoverability.
Performance
• Throughput, response time, recovery time, start-up time.
Supportability
• Testability, adaptability, maintainability, compatibility, configurability, installability, scalability and localizability.
+
Design requirement
A design requirement, often called a design constraint, specifies or constrains the design of
a system.
E.g. a relational database is required.
Implementation requirement
An implementation requirement specifies or constrains the coding or construction of a
system. Examples are:
• required standards
• implementation languages
• policies for database integrity
• resource limits
• operation environments
Interface requirement
An interface requirement specifies:
• an external item with which a system must interact
• constraints on formats, timings, or other factors used by such an interaction
Physical requirement
A physical constraint imposed on the hardware used to house the system; for example,
shape, size and weight.
This type of requirement can be used to represent hardware requirements, such as the physical
network configurations required.
Gilb Approach
Tom Gilb proposed an approach to defining quality described as "design by measurable
objectives"
Design by Measurable Objectives
Break down high-level abstract concepts into more concrete ideas that can be measured. For example:
• Reliability
  o Number of errors in the system over a period
  o System up-time
This approach is tailored to the product being developed, so it is more focused and relevant to product needs. However, because each product is different and will have different criteria, it is very difficult to compare the quality of different products.
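Gilb's measurable objectives can be recorded as structured data, each with a concrete scale and a planned target level, so conformance becomes a mechanical check. A minimal Python sketch; the objectives, scales, and target figures are invented:

# Minimal sketch of 'design by measurable objectives': each abstract quality
# is broken into concrete, testable measures with explicit target levels.
# All names, scales, and figures here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Objective:
    quality: str   # the abstract concept being pinned down
    measure: str   # the concrete scale it is measured on
    worst: float   # minimum acceptable level
    plan: float    # the level we commit to deliver

objectives = [
    Objective("reliability", "errors reported per month", worst=20, plan=5),
    Objective("reliability", "system up-time, percent",   worst=99.0, plan=99.9),
]

def meets_plan(obj: Objective, observed: float) -> bool:
    """Higher is better for up-time; lower is better for error counts."""
    higher_is_better = "up-time" in obj.measure
    return observed >= obj.plan if higher_is_better else observed <= obj.plan

print(meets_plan(objectives[0], observed=8))      # False: 8 errors > plan of 5
print(meets_plan(objectives[1], observed=99.95))  # True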
UNIT III
QUALITY MANAGEMENT SYSTEM
ELEMENTS OF A QUALITY ENGINEERING PROGRAM
QUALITY CONTROL
Quality control
Quality control is a process employed to ensure a certain level of quality in a product or
service. It may include whatever actions a business deems necessary to provide for the
control and verification of certain characteristics of a product or service. The basic goal of
quality control is to ensure that the products, services, or processes provided meet specific
requirements and are dependable, satisfactory, and fiscally sound.
Essentially, quality control involves the examination of a product, service, or process for
certain minimum levels of quality. The goal of a quality control team is to identify products
or services that do not meet a company’s specified standards of quality. If a problem is
identified, the job of a quality control team or professional may involve stopping production
temporarily. Depending on the particular service or product, as well as the type of problem
identified, production or implementation may not cease entirely.
Quality control (QC) is a procedure or set of procedures intended to ensure that a
manufactured product or performed service adheres to a defined set of quality criteria or
meets the requirements of the client or customer. QC is similar to, but not identical with,
quality assurance (QA). QA is defined as a procedure or set of procedures intended to ensure
that a product or service under development (before work is complete, as opposed to
afterwards) meets specified requirements. QA is sometimes expressed together with QC as a
single expression, quality assurance and control (QA/QC).
In order to implement an effective QC program, an enterprise must first decide which specific
standards the product or service must meet. Then the extent of QC actions must be
determined (for example, the percentage of units to be tested from each lot). Next, real-world
data must be collected (for example, the percentage of units that fail) and the results reported
to management personnel. After this, corrective action must be decided upon and taken (for
example, defective units must be repaired or rejected and poor service repeated at no charge
until the customer is satisfied). If too many unit failures or instances of poor service occur, a
plan must be devised to improve the production or service process and then that plan must be
put into action. Finally, the QC process must be ongoing to ensure that remedial efforts, if
required, have produced satisfactory results and to immediately detect recurrences or new
instances of trouble.
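The QC loop just described (decide the sampling extent, collect failure data, trigger corrective action) can be sketched for a single lot as follows; the lot data and thresholds are invented:

# Minimal sketch of the QC decision described above: sample a fixed fraction
# of each lot, measure the failure rate, and trigger corrective action when
# it exceeds the standard. Lot data and thresholds are invented.

import random

def inspect_lot(lot_size: int, sample_fraction: float,
                defect_rate: float, max_fail_rate: float) -> str:
    """Test a random sample and report whether corrective action is needed."""
    sample_size = max(1, int(lot_size * sample_fraction))
    # Simulate testing: each sampled unit fails with probability defect_rate.
    failures = sum(random.random() < defect_rate for _ in range(sample_size))
    observed = failures / sample_size
    if observed > max_fail_rate:
        return f"{observed:.1%} failed: repair/reject units, revise the process"
    return f"{observed:.1%} failed: within standard, continue production"

random.seed(1)
print(inspect_lot(lot_size=5000, sample_fraction=0.02,
                  defect_rate=0.06, max_fail_rate=0.03))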
Quality control is a process by which entities review the quality of all factors involved in production. This approach places an emphasis on three aspects:
1. Elements such as controls, job management, defined and well-managed processes, performance and integrity criteria, and identification of records
2. Competence, such as knowledge, skills, experience, and qualifications
3. Soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit, and quality relationships.
The quality of the outputs is at risk if any of these three aspects is deficient in any way.
Quality control emphasizes testing of products to uncover defects, and reporting to
management who make the decision to allow or deny the release, whereas quality assurance
attempts to improve and stabilize production, and associated processes, to avoid, or at least
minimize, issues that led to the defects in the first place. For contract work, particularly work
awarded by government agencies, quality control issues are among the top reasons for not
renewing a contract.
Quality control in project management
In project management, quality control requires the project manager and the project team to inspect the accomplished work to ensure its alignment with the project scope. In practice,
projects typically have a dedicated quality control team which focuses on this area.
QUALITY ASSURANCE
Quality assurance, or QA for short, is the systematic monitoring and evaluation of the
various aspects of a project, service or facility to maximize the probability that minimum
standards of quality are being attained by the production process. QA cannot absolutely
guarantee the production of quality products.
Two principles included in QA are: "Fit for purpose" - the product should be suitable for the
intended purpose; and "Right first time" - mistakes should be eliminated. QA includes
regulation of the quality of raw materials, assemblies, products and components, services
related to production, and management, production and inspection processes.
Quality is determined by the product users, clients or customers, not by society in general. It
is not the same as 'expensive' or 'high quality'. Low priced products can be considered as
having high quality if the product users determine them as such.
Steps for a typical quality assurance process
There are many forms of QA processes, of varying scope and depth. The application of a
particular process is often customized to the production process.
A typical process may include:
• test of previous articles
• plan to improve
• design to include improvements and requirements
• manufacture with improvements
• review of the new item and improvements
• test of the new item
Failure testing
A valuable process to perform on a whole consumer product is failure testing or stress testing. In mechanical terms this is the operation of a product until it fails, often under
stresses such as increasing vibration, temperature, and humidity. This exposes many
unanticipated weaknesses in a product, and the data are used to drive engineering and
manufacturing process improvements. Often quite simple changes can dramatically improve
product service, such as changing to mold-resistant paint or adding lock-washer placement to
the training for new assembly personnel.
Statistical control
Many organizations use statistical process control to bring the organization to Six Sigma levels of quality; in other words, so that the likelihood of an unexpected failure is confined to six standard deviations on the normal distribution. This probability is less than four one-millionths. Items controlled often include clerical tasks such as order entry as well as conventional manufacturing tasks.
Traditional statistical process controls in manufacturing operations usually proceed by randomly sampling and testing a fraction of the output. Variances in critical tolerances are continuously tracked and, where necessary, corrected before bad parts are produced.
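Such tracking is commonly done with Shewhart-style control limits at three standard deviations around the process mean; points outside the limits are flagged for correction. A minimal Python sketch with invented sample data (note that full Six Sigma practice demands tighter process capability than this three-sigma chart):

# Minimal sketch of statistical process control on a sampled measurement:
# points outside three-sigma control limits are flagged for correction
# before bad parts are produced. Sample data are invented.

from statistics import mean, stdev

baseline = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]  # in-control runs
centre, sigma = mean(baseline), stdev(baseline)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma  # upper/lower control limits

new_samples = [10.01, 9.96, 10.12, 10.00]
for i, x in enumerate(new_samples, 1):
    status = "OUT OF CONTROL" if not (lcl <= x <= ucl) else "ok"
    print(f"sample {i}: {x:.2f} {status}")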
Total quality management
The quality of products is dependent upon that of the participating constituents, some of
which are sustainable and effectively controlled while others are not. The process(es) which
are managed with QA pertain to Total Quality Management.
If the specification does not reflect the true quality requirements, the product's quality cannot
be guaranteed. For instance, the parameters for a pressure vessel should cover not only the
material and dimensions but operating, environmental, safety, reliability and maintainability
requirements.
QA is not limited to manufacturing, and can be applied to any business or non-business activity:
• Design work
• Administrative services
• Consulting
• Banking
• Insurance
• Computer software development
• Retailing
• Transportation
• Education
• Translation
It comprises a quality improvement process, which is generic in the sense that it can be applied to
any of these activities, and it establishes a behavior pattern which supports the achievement
of quality. This in turn is supported by quality management practices, which can include a
number of business systems and which are usually specific to the activities of the business
unit concerned. In manufacturing and construction activities, these business practices can be
equated to the models for quality assurance defined by the International Standards contained
in the ISO 9000 series and the specified specifications for quality systems. In the system of
Company Quality, the work being carried out was shop floor inspection, which did not reveal
the major quality problems. This led to quality assurance and then to total quality control, which
has come into being more recently.
Quality management can be considered to have four main components: quality planning,
quality control, quality assurance and quality improvement. Quality management is focused
not only on product/service quality, but also the means to achieve it. Quality management
therefore uses quality assurance and control of processes as well as products to achieve more
consistent quality.
VARIABLES THAT AFFECT THE QUALITY OF RESULTS
 The educational background and training of the laboratory personnel
 The condition of the specimens
 The controls used in the test runs
 Reagents
 Equipment
 The interpretation of the results
 The transcription of results
 The reporting of results
What Is the Difference Between Quality Assurance and Quality Control?
The prime focus of quality management is achieving results that satisfy the requirements for
quality; quality assurance focuses on demonstrating that the requirements for quality have
been (and can be) achieved.
Quality management is motivated by stakeholders internal to the organization, especially the
organization's management; quality assurance is motivated by stakeholders, especially
customers, external to the organization.
The goal of quality management is to satisfy all stakeholders; the goal of quality assurance is
to satisfy all customers.
For quality management, effective, efficient, and continually improving overall
quality-related performance is the intended result; for quality assurance, confidence in the
organization's products is the intended result.
The scope of quality management covers all activities that affect the total quality-related
business results of the organization; the scope of demonstration in quality assurance covers
activities that directly affect quality-related process and product results.
HISTORICAL PERSPECTIVE ELEMENTS OF QMS
TIME MANAGEMENT
Time management is the act or process of exercising conscious control over the amount of
time spent on specific activities, especially to increase efficiency or productivity. Time
management may be aided by a range of skills, tools, and techniques used to manage time
when accomplishing specific tasks, projects and goals. This set encompasses a wide scope of
activities, and these include planning, allocating, setting goals, delegation, analysis of time
spent, monitoring, organizing, scheduling, and prioritizing. Initially, time management
referred to just business or work activities, but eventually the term broadened to include
personal activities as well. A time management system is a designed combination of
processes, tools, techniques, and methods. Usually time management is a necessity in any
project development as it determines the project completion time and scope.
Time management and related concepts
Time management has been considered a subset of different concepts such as:
 Project management: time management can be considered a project management subset,
and is more commonly known as project planning and project scheduling. Time management
has also been identified as one of the core functions identified in project management.
 Attention management: attention management relates to the management of cognitive
resources, and in particular the time that humans allocate their mind (and organizations the
minds of their employees) to conduct some activities.
 Personal knowledge management (personal time management): time management
strategies are often associated with the recommendation to set personal goals. These goals are
recorded and may be broken down into a project, an action plan, or a simple task list. For
individual tasks or for goals, an importance rating may be established, deadlines may be set,
and priorities assigned. This process results in a plan with a task list or a schedule or calendar
of activities. Authors may recommend a daily, weekly, monthly or other planning period
associated with different scopes of planning or review. This is done in various ways, as
follows.
Time management also covers how to eliminate tasks that don't provide the individual or
organization value.
One goal is to help yourself become aware of how you use your time as one resource in
organizing, prioritizing, and succeeding in your studies in the context of competing activities
of friends, work, family, etc.
Strategies on using time:
These applications of time management have proven to be effective as good study habits.
As we go through each strategy, jot down an idea of what each will look like for you:
Blocks of study time and breaks
As your school term begins and your course schedule is set, develop and plan for,
blocks of study time in a typical week. Blocks ideally are around 50 minutes, but
perhaps you become restless after only 30 minutes? Some difficult material may
require more frequent breaks. Shorten your study blocks if necessary—but don’t
forget to return to the task at hand! What you do during your break should give you
an opportunity to have a snack, relax, or otherwise refresh or re-energize yourself. For
example, place blocks of time when you are most productive: are you a morning
person or a night owl?
 Jot down one best time block you can study. How long is it? What makes for a good break
for you? Can you control the activity and return to your studies?
Dedicated study spaces
Determine a place free from distraction (no cell phone or text messaging!) where you can
maximize your concentration and be free of the distractions that friends or hobbies can
bring! You should also have a back-up space that you can escape to, like the
library, departmental study center, even a coffee shop where you can be anonymous. A
change of venue may also bring extra resources.
 What is the best study space you can think of? What is another?
Weekly reviews
Weekly reviews and updates are also an important strategy. Each week, perhaps on a Sunday
night, review your assignments, your notes, and your calendar. Be mindful that as deadlines
and exams approach, your weekly routine must adapt to them!
 What is the best time in a week you can review?
Prioritize your assignments
When studying, get in the habit of beginning with the most difficult subject or task. You’ll
be fresh, and have more energy to take them on when you are at your best. For more difficult
courses of study, try to be flexible: for example, build in “reaction time” when you can get
feedback on assignments before they are due.
 What subject has always caused you problems?
Achieve "stage one": get something done!
The Chinese adage of the longest journey starting with a single step has a couple of
meanings: First, you launch the project! Second, by starting, you may realize that
there are some things you have not planned for in your process. Details of an
assignment are not always evident until you begin the assignment. Another adage is
that “perfection is the enemy of good”, especially when it prevents you from starting!
Given that you build in review, roughly draft your idea and get going! You will have
time to edit and develop later.
 What is a first step you can identify for an assignment to get yourself started?
Postpone unnecessary activities until the work is done!
Postpone tasks or routines that can be put off until your school work is finished!
This can be the most difficult challenge of time management. As learners we always meet
unexpected opportunities that look appealing, but then result in poor performance on a test, on
a paper, or in preparation for a task. Distracting activities will be more enjoyable later without
the pressure of the test, assignment, etc. hanging over your head. Think in terms of pride of
accomplishment. Instead of saying “no” learn to say “later”.
 What is one distraction that causes you to stop studying?
Identify resources to help you
Are there tutors? An "expert friend"? Have you tried a keyword search on the Internet to get
better explanations? Are there specialists in the library that can point you to resources?
What about professionals and professional organizations? Using outside resources can save
you time and energy, and solve problems.
 Write down three examples for that difficult subject above. Be as specific as possible.
Use your free time wisely
Think of times when you can study "bits", such as when walking, riding the bus, etc. Perhaps
you’ve got music to listen to for your course in music appreciation, or drills in language
learning? If you are walking or biking to school, when best to listen? Perhaps you are in a
line waiting? Perfect for routine tasks like flash cards, or if you can concentrate, to read or
review a chapter. The bottom line is to put your time to good use.
 What is one example of applying free time to your studies?
Review notes and readings just before class
This may prompt a question or two about something you don’t quite understand, to ask about
in class, or after. It also demonstrates to your teacher that you are interested and have
prepared.
 How would you make time to review? Is there free time you can use?
Review lecture notes just after class, then review lecture material immediately after class.
The first 24 hours are critical. Forgetting is greatest within 24 hours without review!
 How would you do this? Is there free time you can use?
Effective aids:
 A simple to-do program will help you identify a few items, the reason for doing them, and a
timeline for getting them done; you can then print this simple list and post it for reminders.
 Daily/weekly planner
Write down appointments, classes, and meetings on a chronological log book or chart.
If you are more visual, sketch out your schedule. First thing in the morning, check what's
ahead for the day; always go to sleep knowing you're prepared for tomorrow.
 Long term planner
Use a monthly chart so that you can plan ahead.
Long term planners will also serve as a reminder to constructively plan time for
yourself
Time Management Tips
1) Realize that time management is a myth.
No matter how organized we are, there are always only 24 hours in a day. Time doesn't
change. All we can actually manage is ourselves and what we do with the time that we have.
2) Find out where you're wasting time.
Many of us are prey to time-wasters that steal time we could be using much more
productively. What are your time-bandits? Do you spend too much time 'Net surfing, reading
email, or making personal calls? Tracking Daily Activities explains how to track your
activities so you can form an accurate picture of what you actually do, the first step to effective
time management.
3) Create time management goals.
Remember, the focus of time management is actually changing your behaviors, not changing
time. A good place to start is by eliminating your personal time-wasters. For one week, for
example, set a goal that you're not going to take personal phone calls while you're working.
(See Set Specific Goals for help with goal setting.) For a fun look at behaviors that can
interfere with successful time management, see my article Time Management Personality
Types. Find out if you're a Fireman, an Aquarian or a Chatty Kathy!
4) Implement a time management plan.
Think of this as an extension of time management tip # 3. The objective is to change your
behaviors over time to achieve whatever general goal you've set for yourself, such as
increasing your productivity or decreasing your stress. So you need to not only set your
specific goals, but track them over time to see whether or not you're accomplishing them.
5) Use time management tools.
Whether it's a Day-Timer or a software program, the first step to physically managing your
time is to know where it's going now and planning how you're going to spend your time in
the future. A software program such as Outlook, for instance, lets you schedule events easily
and can be set to remind you of events in advance, making your time management easier.
6) Prioritize ruthlessly.
You should start each day with a time management session prioritizing the tasks for that day
and setting your performance benchmark. If you have 20 tasks for a given day, how many of
them do you truly need to accomplish? For more on daily planning and prioritizing daily
tasks, see Start The Day Right With Daily Planning.
7) Learn to delegate and/or outsource.
No matter how small your business is, there's no need for you to be a one-person show. For
effective time management, you need to let other people carry some of the load. Determining
Your Personal ROI explains two ways to pinpoint which tasks you'd be better off delegating
or outsourcing, while Decide To Delegate provides tips for actually getting on with the job of
delegating.
8) Establish routines and stick to them as much as possible.
While crises will arise, you'll be much more productive if you can follow routines most of the
time.
9) Get in the habit of setting time limits for tasks.
For instance, reading and answering email can consume your whole day if you let it. Instead,
set a limit of one hour a day for this task and stick to it.
10) Be sure your systems are organized.
Are you wasting a lot of time looking for files on your computer? Take the time to organize a
file management system. Is your filing system slowing you down? Redo it, so it's organized
to the point that you can quickly lay your hands on what you need. You'll find more
information about setting up filing systems and handling data efficiently in my Data
Management library.
11) Don't waste time waiting.
From client meetings to dentist appointments, it's impossible to avoid waiting for someone or
something. But you don't need to just sit there and twiddle your thumbs. Always take
something to do with you, such as a report you need to read, a checkbook that needs to be
balanced, or just a blank pad of paper that you can use to plan your next marketing campaign.
Technology makes it easy to work wherever you are; your PDA and/or cell phone will help
you stay connected.
Improved Time Management Includes Setting Three Priorities
Everyone is looking for ways to improve time management. Whether it is the management of
an organization looking for business improvement or an individual looking for ways to better
spend their time, time management is important to both.
Better time management can be achieved if goals have been set and then all future work is
prioritized based on how it moves the individual or organization towards meeting the goals.
Many time management priority methods exist. The most popular ones are the A, B, C
method and number ranking according to the order in which tasks should be done. Both
methods encourage looking at things that move one closer to meeting important goals as the
highest priority to set. Things not related to goals would be lower priority. Here is a
description of the three priorities and how they relate to general time management practices.
• High priority items (rank A or 1) are those tasks, projects, and appointments that yield the
greatest results in accomplishing individual or organizational goals. For individuals, this
could be related to goals of career advancement or small business growth and ties directly to
promises made to customers or co-workers, or it could be unrelated to the job such as more
family or leisure time goals and promises. For organizations, this would likely be related to
increased profits, new business, key projects, and other strategic business items. High priority
items should be the first work planned for each day and blocked into a time that falls within
the individual's peak performance period.
• Medium priority items (rank B or 2) are those standard daily, weekly, or monthly tasks,
projects, and appointments that are part of the work that must be done in order to maintain the
status quo. For individuals, this would relate to getting their standard work done, and might
mean going to scheduled family or outside group activities as expected. For organizations,
this is every day business items like project meetings, cost reduction, as well as regular
administrative, sales, and manufacturing work. Medium priority work is scheduled after or
between high priority functions; because this work does not require high levels of
concentration, it can be done during non-peak periods as long as it is completed on schedule.
• Low priority items (rank C or 3) are those tasks, projects, and potential appointments that
are nice-to-do, can be put off until another time, and will not directly affect goals or standard
work practices. For individuals, this might mean learning a new skill or starting a new hobby
that may seem like good ideas but are not directly related to most desirable personal goals.
For organizations, this could be purging old files or evaluating existing work processes that
currently run smoothly enough.
It does not matter whether the time management priority method is A, B, C; numbering; or
simply marking high, medium, low using a personalized coding or coloring method. It only
matters that the practice has no more than three priorities used in moving closer to meeting
important goals. More than three priority levels can bog the time manager down in the
process of prioritizing rather than doing valuable work.
Whether for organization management or an individual looking for ways to better utilize their
time, time management is important to both. Anyone looking for ways to improve time
management will benefit from establishing and following a priority setting method for
completing work towards accomplishing goals.
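As an illustrative sketch only, the A, B, C method described above can be expressed as a tiny
program that tags each task with one of three priorities and works through them in order; the
task names below are invented:

# Sketch of the A, B, C priority method: tag each task with a priority,
# then work through the list in priority order (A before B before C).
tasks = [
    ("Purge old files", "C"),
    ("Prepare key customer demo", "A"),
    ("Weekly project meeting", "B"),
    ("Finish strategic proposal", "A"),
    ("Evaluate a smoothly running process", "C"),
]

order = {"A": 0, "B": 1, "C": 2}  # no more than three priority levels
for name, priority in sorted(tasks, key=lambda t: order[t[1]]):
    print(f"[{priority}] {name}")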
Let's start with a picture, which is always nice and polite. The illustration below shows the
relationship between life management, personal productivity, and time management.
Before we start grinding more deeply on time management, let’s take a quick look at the two
layers below it.
From Life Management to Personal Productivity to Time Management
The following picture gives an overview of our basic stance: how time management relates
to life management.
Life management
All people manage their life in one way or another. We have split life management into five
areas: personal productivity, physical training, social relationships, financial management,
and eating habits.
The split above reflects our view that we have a head (A), a body (B), and in addition to that,
an environment (C) in which we live and which we influence.
As we all know, we have to take care of our body. At its core, this is done by some kind of
physical exercise, and by striving for balanced eating habits. If there isn’t enough physical
exercise, our body may suffer. If we eat wrongly for a prolonged period of time, our body
may suffer. If we don’t focus on these areas at least somewhat, we seldom do them right.
Sooner or later our doctor will also verify this, as we develop our modern western lifestyle
diseases.
Then we have our environment. We interact with our surroundings via social interactions.
This might mean meeting with friends, living everyday life with our partner, using Facebook,
or chatting to our local store clerk while looking for the right credit card. Workplaces are also
an important place to mingle with people.
People who are too much alone often become unhappy. It is important to plan enough time
with your friends. Sports often combine both physical exercise and socializing.
As our society is built today, money is important. This is why we need financial skills too.
There exists a plethora of advisories and methods for that. Some are rip-offs, others could
benefit you. Only the people in the know survive in the long run.
Last but not least we have personal productivity, which is key for doing and achieving.
Personal productivity
We have discussed personal productivity in a separate post. To put it shortly, we define
personal productivity as a set of skills. These skills are time management, learning, personal
organization, mental control, creativity, and decision making.
As we have a separate post on the subject, we leave personal productivity here, and go on to
the beef. Here it comes, time management, give it to me baby!
Time Management
So, whereas personal productivity is an important component of life management, time
management is THE part of personal productivity that we are mostly interested in here at
Time-management-solutions.com. The six skills of time management are as follows:
1. Managing goals is important because if we don't know what we are striving at, we are lost
like elks in thick fog. Yes, we may still be doing all kinds of things with our sleeves rolled up
and sweat pearls on our forehead, but if it is the wrong thing we are doing, we are not getting
towards our true goals. In that case, we might as well lie on the couch chilling. Instantly when
we define a goal, we know our direction. The doing then automatically starts to focus in the
right direction.
2. Managing tasks is also needed, as we always end up with many goals and too many things
to do. That is why we need a system for storing these tasks, and as a place to jot down our
“AHAA! This is what I should do” moments. If we don’t manage our tasks, we forget
important things, and miss deadlines.
3. Prioritizing complements task management, because time is limited. Prioritizing means
putting things in an order of importance, and taking this importance into consideration when
making decisions. Many times your prioritizing is simple, as you only need to look at
deadlines. Other times you might need to ask for your boss's opinion (if you have one), or
rely on your gut feeling. The bottom line is that prioritizing skills help in situations where we
have several things we should start doing. It answers the questions: what is most important to
me, and what should I do next?
4. Calendar utilization. As a platform for managing appointments, we all need a calendar. It
may be in a classic paper form, your mobile phone, or your Outlook calendar (ideally the
latter two are synchronized). The best solution for you is one you feel confident using. But don't
think you can manage without one. You cannot. Trust me.
5. Managing procrastination. We all know the situations when we just can’t bring ourselves
to start something. This is called procrastination, and it is a complicated, yet very important
subject. The reasons for procrastination are often rooted deeply in our souls, and some people
have a stronger tendency toward it than others. That is why you actually know who these
people are at your work: they are the ones who activate themselves just before important
deadlines.
The good news is that there exist methods for managing procrastination. The tips for
beating procrastination are basic time management skills. Any good time manager should
have these skills in their skill set, and use them when needed (I know I have to, sometimes).
6. Follow-up systems. Finally, we need some kind of follow-up systems for ensuring that
things get finished. Many times, especially in work life, projects are started, and in the
beginning they are also closely monitored. A little later, new projects are started. The old
projects are forgotten. Most often, they are left undone. For good. With proper follow-up
systems this can be avoided. At least the old projects should be closed, or be put on hold
consciously.
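A follow-up system of the kind described in point 6 can be sketched as a list of projects with
review dates, where anything past due is surfaced so it gets closed or consciously put on hold;
the project names and dates here are invented:

# Sketch of a follow-up system: every project carries a "next review" date,
# and anything past due is flagged for closing or a conscious hold.
from datetime import date

projects = {
    "Website relaunch": date(2024, 1, 15),
    "Archive migration": date(2030, 6, 1),
}

today = date.today()
for name, next_review in projects.items():
    if next_review <= today:
        print(f"FOLLOW UP: '{name}' was due for review on {next_review}")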
Time Management Techniques and Systems
There exists many ways to exercise time management. The most classical way is to take a
piece of paper, and to write down a list of things to accomplish “today”.
Some others are found in the table below:
Technique / System: Description
Simple ad hoc to-do list: The most basic time management technique, which everyone uses
sometime.
Getting things done (GTD): David Allen's method for time management, glorified by many.
Based on multiple open lists. Process based.
Closed list time management: As opposed to open to-do lists (a list into which you constantly
add new items), there exists a school of thought which supports closed to-do lists (= finish
this list before starting anything new). Main advocate: Mark Forster.
Software based time management: Comprises a group of IT based systems and techniques.
Main advocates: M. Linenberger and S. McGhee.
The important thing to remember when talking about time management techniques is that
everyone develops their own favorites. There is no right or wrong system for managing time.
One has to use what works for them.
Time Management – Summary
By using proper time management skills for different purposes, you should be able to
maximize your free time. It is almost contradictory that it is O.K. to be motivated to learn
time management with the purpose of avoiding overly long work days (at least for longer
periods).
Life is a whole experience. We must not focus on only one area of it, which often ends
up being work. By using time management tools and techniques, we can get more time for
doing all the other things we love, too.
Luckily, skills in personal productivity and time management really help. Personally, I could
not live without these skills anymore.
QMS FOR SOFTWARE
ISO 9000 SERIES: A GENERIC QUALITY MANAGEMENT STANDARD
The ISO 9000 family of standards relates to quality management systems and is designed to
help organizations ensure they meet the needs of customers and other stakeholders
(Poksinska et al.). The standards are published by ISO, the International Organization for
Standardization, and are available through national standards bodies.
ISO 9000 deals with the fundamentals of quality management systems (Tsim et al, 2002 [2] ),
including the eight management principles (Beattie and Sohal, 1999;[3] Tsim et al, 2002 [2])
on which the family of standards is based. ISO 9001 deals with the requirements that
organizations wishing to meet the standard have to meet.
Independent confirmation that organizations meet the requirements of ISO 9001 may be
obtained from third party certification bodies. Over a million organizations worldwide [4] are
independently certified, making ISO 9001 one of the most widely used management tools in
the world today.
Reasons for use
The global adoption of ISO 9001 may be attributable to a number of factors. A number of
major purchasers require their suppliers to hold ISO 9001 certification. In addition to several
stakeholders’ benefits, a number of studies have identified significant financial benefits for
organizations certified to ISO 9001. Corbett et al. (2005) showed that certified organizations
achieved superior return on assets compared to otherwise similar organizations without
certification.
BENEFITS TO USING THE ISO-9000 STANDARDS:
- Quality improvements
- Improved liability protection
- Documented and published proof of compliance by a non-interested third party
Background
ISO 9001 was first published in 1987. It was based on the BS 5750 series of standards from
BSI that were proposed to ISO in 1979. Its history can however be traced back some twenty
years before that when the Department of Defense published its MIL-Q-9858 standard in
1959. MIL-Q-9858 was revised into the NATO AQAP series of standards in 1969, which in
turn were revised into the BS 5179 series of guidance standards published in 1974, and
finally revised into being the BS 5750 series of requirements standards in 1979, before being
submitted to ISO.
BSI has been certifying organizations for their quality management systems since 1978. Its
first certification (FM 00001) is still extant and held by the Tarmac company, a successor to
the original company which held this certificate. Today BSI claims to certify organizations at
nearly 70,000 sites globally.
Certification
ISO does not itself certify organizations. Many countries have formed accreditation bodies to
authorize certification bodies, which audit organizations applying for ISO 9001 compliance
certification. Although commonly referred to as ISO 9000:2000 certification, the actual
standard to which an organization's quality management can be certified is ISO 9001:2008.
Both the accreditation bodies and the certification bodies charge fees for their services. The
various accreditation bodies have mutual agreements with each other to ensure that
certificates issued by one of the Accredited Certification Bodies (CB) are accepted
worldwide.
The applying organization is assessed based on an extensive sample of its sites, functions,
products, services and processes; a list of problems ("action requests" or "non-compliance")
is made known to the management. If there are no major problems on this list, or after it
receives a satisfactory improvement plan from the management showing how any problems
will be resolved, the certification body will issue an ISO 9001 certificate for each
geographical site it has visited.
An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals
recommended by the certification body, usually around three years. There are no grades of
competence within ISO 9001: either a company is certified (meaning that it is committed to
the method and model of quality management described in the standard), or it is not. In this
respect, it contrasts with measurement-based quality systems such as the Capability Maturity
Model.
Auditing
Two types of auditing are required to become registered to the standard:
 auditing by an external certification body (external audit) and
 audits by internal staff trained for this process (internal audits).
The aim is a continual process of review and assessment, to verify that the system is working
as it's supposed to, find out where it can improve and to correct or prevent problems
identified. It is considered healthier for internal auditors to audit outside their usual
management line, so as to bring a degree of independence to their judgments.
Under the 1994 standard, the auditing process could be adequately addressed by performing
"compliance auditing":
 Tell me what you do (describe the business process)
 Show me where it says that (reference the procedure manuals)
 Prove that this is what happened (exhibit evidence in documented records)
The 2000 standard uses a different approach. Auditors are expected to go beyond mere
auditing for rote "compliance" by focusing on risk, status and importance. This means they
are expected to make more judgments on what is effective, rather than merely adhering to
what is formally prescribed.
Advantages
It is widely acknowledged that proper quality management improves business, often having a
positive effect on investment, market share, sales growth, sales margins, competitive
advantage, and avoidance of litigation. The quality principles in ISO 9000:2000 are also
sound, according to Wade and also to Barnes, who says that "ISO 9000 guidelines provide a
comprehensive model for quality management systems that can make any company
competitive." Implementing ISO often gives the following advantages:
1. Create a more efficient, effective operation
2. Increase customer satisfaction and retention
3. Reduce audits
4. Enhance marketing
5. Improve employee motivation, awareness, and morale
6. Promote international trade
7. Increase profit
8. Reduce waste and increase productivity
TOOLS FOR QUALITY
Seven Basic Tools of Quality
The Seven Basic Tools of Quality is a designation given to a fixed set of graphical techniques
identified as being most helpful in troubleshooting issues related to quality. They are called
basic because they are suitable for people with little formal training in statistics and because
they can be used to solve the vast majority of quality-related issues.
The tools are
 The cause-and-effect or Ishikawa diagram
 The check sheet
 The control chart
 The histogram
 The Pareto chart
 The scatter diagram
 Stratification (alternately flow chart or run chart)
The Seven Basic Tools stand in contrast with more advanced statistical methods such as
survey sampling, acceptance sampling, statistical hypothesis testing, design of experiments,
multivariate analysis, and various methods developed in the field of operations research.
1. Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many
possible causes for an effect or problem and sorts ideas into useful categories.
2. Check sheet: A structured, prepared form for collecting and analyzing data; a generic
tool that can be adapted for a wide variety of purposes.
3. Control charts: Graphs used to study how a process changes over time.
4. Histogram: The most commonly used graph for showing frequency distributions, or
how often each different value in a set of data occurs.
5. Pareto chart: Shows on a bar graph which factors are more significant.
6. Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look
for a relationship.
7. Stratification: A technique that separates data gathered from a variety of sources so
that patterns can be seen (some lists replace "stratification" with "flowchart" or "run
chart").
UNIT-IV
PRINCIPLES AND PRACTICES IN QMS
Process–Product–Project–People In Software Development And Management Spectrum
THE MANAGEMENT SPECTRUM
Effective software project management focuses on the four P’ s:
People
Product
Process
Project
The order is not arbitrary. The manager who forgets that software
engineering work is an intensely human endeavor will never have success in project management.
A manager who fails to encourage comprehensive customer communication
early in the evolution of a project risks building an elegant solution for the wrong problem.
The manager who pays little attention to the process runs the risk of
inserting competent technical methods and tools into a vacuum.
The manager who embarks without a solid project plan jeopardizes the
success of the product.
The People
• In fact, the "people factor" is so important that the Software Engineering Institute has
developed a people management capability maturity model (PM-CMM), "to enhance the
readiness of software organizations to undertake increasingly complex applications by helping
to attract, grow, motivate, deploy, and retain the talent needed to improve their software
development capability."
• The people management maturity model defines the following key practice areas for software
people: recruiting, selection, performance management, training, compensation, career
development, organization and work design, and team/culture development.
• Organizations that achieve high levels of maturity in the people management area have a
higher likelihood of implementing effective software engineering practices.
The Product
• Before a project can be planned, product objectives and scope should be established,
alternative solutions should be considered, and technical and management constraints should
be identified. Without this information, it is impossible to define reasonable (and accurate)
estimates of the cost, an effective assessment of risk, a realistic breakdown of project tasks, or
a manageable project schedule that provides a meaningful indication of progress.
• The software developer and customer must meet to define product objectives and scope. In
many cases, this activity begins as part of system engineering or business process engineering
and continues as the first step in software requirements analysis.
• Objectives identify the overall goals for the product (from the customer's point of view)
without considering how these goals will be achieved.
• Scope identifies the primary data, functions and behaviors that characterize the product, and
more important, attempts to bound these characteristics in a quantitative manner.
Once the product objectives and scope are understood, alternative solutions are considered.
Although very little detail is discussed, the alternatives enable managers and practitioners to
select a 'best' approach, given the constraints imposed by delivery deadlines, budgetary
restrictions, personnel availability, technical interfaces, and myriad other factors.
The Process
• A software process provides the framework from which a comprehensive plan for software
development can be established.
• A small number of framework activities are applicable to all software projects, regardless of
their size or complexity.
• A number of different task sets (tasks, milestones, work products, and quality assurance
points) enable the framework activities to be adapted to the characteristics of the software
project and the requirements of the project team.
The Project
• We conduct planned and controlled software projects for one primary reason: it is the only
known way to manage complexity.
• And yet, we still struggle. In 1998, industry data indicated that 26 percent of software projects
failed outright and 46 percent experienced cost and schedule overruns.
THE PEOPLE:
The Players
The software process is populated by players who can be categorized into one of five
constituencies:
1. Senior managers who define the business issues that often have significant influence on the
project.
2. Project (technical) managers who must plan, motivate, organize, and control the
practitioners who do software work.
3. Practitioners who deliver the technical skills that are necessary to engineer a product or
application.
4. Customers who specify the requirements for the software to be engineered and other
stakeholders who have a peripheral interest in the outcome.
5. End-users who interact with the software once it is released for production use.
People who fall within this taxonomy populate every software project. To be effective, the
project team must be organized in a way that maximizes each person's skills and abilities.
And that's the job of the team leader.
Team Leaders
• Project management is a people-intensive activity, and for this reason, competent
practitioners often make poor team leaders. They simply don't have the right mix of people
skills. And yet, as Edgemon states: "Unfortunately and all too frequently it seems, individuals
just fall into a project manager role and become accidental project managers."
• In an excellent book on technical leadership, Jerry Weinberg suggests an MOI model of leadership:
Motivation
The ability to encourage (by "push or pull") technical people to produce to their best ability.
Organization
The ability to mold existing processes (or invent new ones) that will enable the initial concept
to be translated into a final product.
Ideas or innovation
The ability to encourage people to create and feel creative even
when they must work within bounds established for a particular software product or application.
Weinberg suggests that successful project leaders apply a problem solving management style.
That is, a software project manager should concentrate on understanding the problem to be
solved, managing the flow of ideas, and at the same time, letting everyone on the team know
(by words and, far more important, by actions) that quality counts and that it will not be
compromised.
Characteristics of an effective project manager emphasize four key traits:
Problem solving
An effective software project manager can diagnose the technical and organizational issues that
are most relevant, systematically structure a solution or properly motivate other practitioners to
develop the solution, apply lessons learned from past projects to new situations, and remain flexible
enough to change direction if initial attempts at problem solution are fruitless.
Managerial identity
A good project manager must take charge of the project. He/She must have the
confidence to assume control when necessary and the assurance to allow good technical people
to follow their instincts.
Achievement
To optimize the productivity of a project team, a manager must reward initiative and
accomplishment and demonstrate through his own actions that controlled risk taking will not
be punished.
Influence and team building
An effective project manager must be able to “ read” people; he/she must be able to
understand verbal and nonverbal signals and react to the needs of the people sending these
signals. The manager must remain under control in high-stress situations.
The Software Team
There are almost as many human organizational structures for software development as there
are organizations that develop software.
For better or worse, organizational structure cannot be easily modified. Concerns with the
practical and political consequences of organizational change are not within the software
project manager's scope of responsibility. However, the organization of the people directly
involved in a new software project is within the project manager's purview.
The following options are available for applying human resources to a project that will require
n people working for k years:
1. n individuals are assigned to m different functional tasks, relatively little combined work
occurs; coordination is the responsibility of a software manager who may have six other
projects to be concerned with.
2. n individuals are assigned to m different functional tasks (m < n) so that informal "teams"
are established; an ad hoc team leader may be appointed; coordination among teams is the
responsibility of a software manager.
3. n individuals are organized into t teams; each team is assigned one or more functional tasks;
each team has a specific structure that is defined for all teams working on a project;
coordination is controlled by both the team and a software project manager.
Democratic decentralized (DD). This software engineering team has no permanent leader.
Rather, "task coordinators are appointed for short durations and then replaced by others who
may coordinate different tasks." Decisions on problems and approach are made by group
consensus. Communication among team members is horizontal.
Controlled decentralized (CD). This software engineering team has a defined leader who
coordinates specific tasks and secondary leaders that have responsibility for subtasks. Problem
solving remains a group activity, but implementation of solutions is partitioned among
subgroups by the team leader. Communication among subgroups and individuals is horizontal.
Vertical communication along the control hierarchy also occurs.
Controlled centralized (CC). Top-level problem solving and internal team coordination are
managed by a team leader. Communication between the leader and team members is vertical.
Mantei describes seven project factors that should be considered when planning the structure
of software engineering teams:
The difficulty of the problem to be solved.
The size of the resultant program(s) in lines of code or function points.
The time that the team will stay together (team lifetime).
The degree to which the problem can be modularized.
The required quality and reliability of the system to be built.
The rigidity of the delivery date.
The degree of sociability (communication) required for the project.
Because a centralized structure completes tasks faster, it is the most adept at handling simple
problems.
The DD team structure is best applied to problems with relatively low modularity, because of
the higher volume of communication needed. When high modularity is possible (and people
can do their own thing), the CC or CD structure will work well.
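Purely as a hypothetical sketch (not a published method), Mantei's seven factors could be
turned into a rough score that nudges the choice among DD, CD, and CC structures; the factor
values and thresholds below are invented assumptions:

# Hypothetical sketch: rate each of the seven factors between 0 (favors a
# centralized structure) and 1 (favors a decentralized one), then average.
factors = {
    "problem difficulty": 0.8,      # hard problems favor decentralized teams
    "program size": 0.7,
    "team lifetime": 0.6,
    "modularity": 0.3,              # this project is quite modular, which favors CC/CD
    "quality and reliability": 0.7,
    "delivery date rigidity": 0.2,  # a rigid deadline favors centralized control
    "required sociability": 0.9,
}

score = sum(factors.values()) / len(factors)
if score > 0.65:
    print(f"score {score:.2f}: consider a democratic decentralized (DD) team")
elif score > 0.4:
    print(f"score {score:.2f}: consider a controlled decentralized (CD) team")
else:
    print(f"score {score:.2f}: consider a controlled centralized (CC) team")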
Constantine suggests four "organizational paradigms" for software engineering teams:
1. A closed paradigm structures a team along a traditional hierarchy of authority (similar to a
CC team). Such teams can work well when producing software that is quite similar to past
efforts, but they will be less likely to be innovative when working within the closed paradigm.
2. The random paradigm structures a team loosely and depends on individual initiative of the
team members. When innovation or technological breakthrough is required, teams following
the random paradigm will excel. But such teams may struggle when "orderly performance" is
required.
3. The open paradigm attempts to structure a team in a manner that achieves some of the
controls associated with the closed paradigm but also much of the innovation that occurs when
using the random paradigm. Work is performed collaboratively, with heavy communication
and consensus-based decision making the trademarks of open paradigm teams. Open paradigm
team structures are well suited to the solution of complex problems but may not perform as
efficiently as other teams.
4. The synchronous paradigm relies on the natural compartmentalization of a problem and
organizes team members to work on pieces of the problem with little active communication
among themselves.
High-performance team:
• Team members must have trust in one another.
• The distribution of skills must be appropriate to the problem.
• Mavericks may have to be excluded from the team, if team cohesiveness is to be maintained.
Coordination and Communication Issues
There are many reasons that software projects get into trouble.
The scale of many development efforts is large, leading to complexity, confusion, and
significant difficulties in coordinating team members.
Uncertainty is common, resulting in a continuing stream of changes that ratchets the project
team.
Interoperability has become a key characteristic of many systems. New software must
communicate with existing software and conform to predefined constraints imposed by the
system or product.
These characteristics of modern software (scale, uncertainty, and interoperability) are facts of
life.
Kraut and Streeter examine a collection of project coordination techniques that are categorized
in the following manner:
Formal, impersonal approaches include software engineering documents and deliverables
(including source code), technical memos, project milestones, schedules, and project control
tools, change requests and related documentation, error tracking reports, and repository data.
Formal, interpersonal procedures focus on quality assurance activities applied to software
engineering work products. These include status review meetings and design and code
inspections.
Informal, interpersonal procedures include group meetings for information dissemination and
problem solving and "collocation of requirements and development staff."
Electronic communication:
Electronic mail
Electronic bulletin boards
Voice-based conferencing, etc.
The Product
A software project manager is confronted with a dilemma at the very beginning of a software
engineering project. Quantitative estimates and an organized plan are required, but solid
information is unavailable. A detailed analysis of software requirements would provide the
necessary information for estimates, but analysis often takes weeks or months to complete.
Worse, requirements may be fluid, changing regularly as the project proceeds. Yet, a plan is
needed "now!" Therefore, we must examine the product and the problem it is intended to
solve at the very beginning of the project. At a minimum, the scope of the product must be
established and bounded.
Software Scope
The first software project management activity is the determination of software scope. Scope
is defined by answering the following questions:
Context. How does the software to be built fit into a larger system, product, or business
context, and what constraints are imposed as a result of the context?
Information objectives. What customer-visible data objects are produced as output from the
software? What data objects are required for input?
Function and performance. What function does the software perform to transform input data
into output? Are any special performance characteristics to be addressed?
Software project scope must be unambiguous and understandable at the management and
technical levels.
Problem Decomposition
Problem decomposition, sometimes called partitioning or problem elaboration, is an activity
that sits at the core of software requirements analysis.
During the scoping activity no attempt is made to fully decompose the problem. Rather,
decomposition is applied in two major areas: (1) the functionality that must be delivered and
(2) the process that will be used to deliver it.
Human beings tend to apply a divide and conquer strategy when they are confronted with
complex problems. Stated simply, a complex problem is partitioned into smaller problems that
are more manageable. This is the strategy that applies as project planning begins. Software
functions, described in the statement of scope, are evaluated and refined to provide more detail
prior to the beginning of estimation.
As the statement of scope evolves, a first level of partitioning naturally occurs. The project
team learns that the marketing department has talked with potential customers and found that
the following functions should be part of automatic copy editing:
(1) spell checking,
(2) sentence grammar checking,
(3) reference checking for large documents (e.g., is a reference to a bibliography entry found
in the list of entries in the bibliography?), and
(4) section and chapter reference validation for large documents.
Each of these features represents a sub-function to be implemented in software. Each can be
further refined if the decomposition will make planning easier.
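A minimal sketch of this decomposition (using the copy-editing example above) represents
the statement of scope as a function with its sub-functions, each of which can then be
estimated and planned separately:

# Sketch of problem decomposition: the automatic copy editing function from
# the statement of scope, partitioned into the sub-functions named in the text.
scope = {
    "automatic copy editing": [
        "spell checking",
        "sentence grammar checking",
        "reference checking for large documents",
        "section and chapter reference validation",
    ]
}

for function, subfunctions in scope.items():
    print(function)
    for sub in subfunctions:
        print(f"  - {sub}")  # each sub-function may be refined further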
THE PROCESS
Definition: The generic phases that characterize the software process (definition, development,
and support) are applicable to all software. The problem is to select the process model that is
appropriate for the software to be engineered by a project team.
• The linear sequential model
• The prototyping model
• The RAD model
• The incremental model
• The spiral model
• The WINWIN spiral model
• The component-based development model
• The concurrent development model
• The formal methods model
• The fourth generation techniques model
The project manager must decide which process model is most appropriate for
(1) the customers who have requested the product and the people who will do the work,
(2) the characteristics of the product itself, and
(3) the project environment in which the software team works.
When a process model has been selected, the team then defines a preliminary project plan
based on the set of common process framework activities.
Once the preliminary plan is established, process decomposition begins. That is, a complete
plan, reflecting the work tasks required to populate the framework activities, must be created.
Melding the Product and the Process
Project planning begins with the melding of the product and the process. Each
function to be engineered by the software team must pass through the set of framework
activities that have been defined for a software organization. Assume that the organization
has adopted the following set of framework activities
• Customer communication: tasks required to establish effective requirements elicitation
between developer and customer.
• Planning: tasks required to define resources, timelines, and other project related information.
• Risk analysis: tasks required to assess both technical and management risks.
• Engineering: tasks required to build one or more representations of the application.
• Construction and release: tasks required to construct, test, install, and provide user support
(e.g., documentation and training).
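The melding can be pictured as a matrix of product functions against framework activities,
in which each cell will later hold the work tasks for that combination; this sketch reuses the
example functions from the earlier decomposition and is illustrative only:

# Sketch of melding product and process: every product function must pass
# through every framework activity; each (function, activity) cell holds tasks.
functions = ["spell checking", "grammar checking"]
activities = ["customer communication", "planning", "risk analysis",
              "engineering", "construction and release"]

plan = {(f, a): [] for f in functions for a in activities}  # empty task lists

# e.g. record one hypothetical planning task for the spell checking function
plan[("spell checking", "planning")].append("estimate effort for dictionary integration")

for (f, a), tasks in plan.items():
    if tasks:
        print(f"{f} / {a}: {tasks}")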
THE PROJECT
In order to manage a successful software project, we must understand what can go
wrong (so that problems can be avoided) and how to do it right. In an excellent paper on
software projects, John Reel IREE99I defines ten signs that indicate that an information
systems project is in jeopardy:
1. Software people don’ t understand their customer’ s needs.
89
2. The product scope is poorly defined.
3. Changes are made poorly
4. The chosen technology changes.
5. Business needs change. Deadlines are unrealistic.
7. Users are resistant.
8. Sponsorship is lost.
9. The project team lacks people with appropriate skills.
10. Managers [practitioners] avoid best practices and lessons learned
A five-part commonsense approach to software projects:
1. Start on the right foot. This is accomplished by working hard (very hard) to understand the
problem that is to be solved and then setting realistic objectives and expectations for everyone who
will be involved in the project. It is reinforced by building the right team and giving the team the
autonomy, authority, and technology needed to do the job.
2. Maintain momentum. Many projects get off to a good start and then slowly disintegrate. To
maintain momentum, the project manager must provide incentives to keep turnover of personnel to
an absolute minimum, the team should emphasize quality in every task it performs, and senior
management should do everything possible to stay out of the team’ s way.
3. Track progress. For a software project, progress is tracked as work products (e.g., specifications,
source code, sets of test cases) are produced and approved (using formal technical reviews) as part
of a quality assurance activity. In addition, software process and project measures (Chapter 4) can be
collected and used to assess progress against averages developed for the software development
organization.
4. Make smart decisions. In essence, the decisions of the project manager and the software
team should be to "keep it simple." Whenever possible, decide to use commercial off-the-shelf
software or existing software components, decide to avoid custom interfaces when standard
approaches are available, decide to identify and then avoid obvious risks, and decide to allocate
more time than you think is needed to complex or risky tasks (you'll need every minute).
5. Conduct a postmortem analysis. Establish a consistent mechanism for extracting lessons
learned for each project. Evaluate the planned and actual schedules, collect and analyze software
project metrics, get feedback from team members and customers, and record findings in written
form.
ISO 9001
ISO 9001 is a quality management standard. It applies to all types of organizations. It
doesn't matter what size they are or what they do. It can help both product and service
organizations achieve standards of quality that are recognized and respected throughout the world.
DEVELOP YOUR QUALITY MANAGEMENT SYSTEM (QMS)
 Establish your organization's process-based QMS.
 Document your organization's process-based QMS.
 Implement your organization's process-based QMS.
 Maintain your organization's process-based QMS.
 Improve your organization's process-based QMS.
DOCUMENT YOUR QUALITY MANAGEMENT SYSTEM (QMS)
MANAGE QUALITY MANAGEMENT SYSTEM DOCUMENTS
 Develop documents for your organization's QMS.
 Make sure that your organization's QMS documents respect and reflect what you do and how you do it.
PREPARE QUALITY MANAGEMENT SYSTEM MANUAL
 Establish a quality manual for your organization.
 Maintain your organization's quality manual.
CONTROL QUALITY MANAGEMENT SYSTEM DOCUMENTS
 Control your organization's QMS documents.
 Control documents that are used as QMS records.
ESTABLISH QUALITY MANAGEMENT SYSTEM RECORDS
 Establish your organization's QMS records.
 Establish a procedure to control your QMS records.
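ISO 9001 does not prescribe any particular data structure for documents and records, but a minimal sketch can make the idea of controlled documents concrete (Python; all field names are invented for illustration):

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ControlledDocument:
        """A QMS document under revision and approval control (illustrative only)."""
        doc_id: str
        title: str
        revision: int
        approved_by: str
        approved_on: date
        history: list = field(default_factory=list)   # superseded revisions, kept as records

        def revise(self, approver: str, when: date) -> None:
            # Keep the old revision as a record before issuing the new one.
            self.history.append((self.revision, self.approved_by, self.approved_on))
            self.revision += 1
            self.approved_by = approver
            self.approved_on = when

    manual = ControlledDocument("QM-001", "Quality Manual", 1, "QA Manager", date(2010, 1, 15))
    manual.revise("QA Manager", date(2011, 3, 2))
    print(manual.revision, manual.history)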
MANAGEMENT REQUIREMENTS
SHOW YOUR COMMITMENT TO QUALITY
 Support the development of your organization's QMS.
 Support the implementation of your organization's QMS.
 Support efforts to continually improve the effectiveness of your organization's QMS.
FOCUS ON YOUR CUSTOMERS
 Enhance customer satisfaction by ensuring that customer requirements are being identified.
 Enhance customer satisfaction by ensuring that customer requirements are being met.
SUPPORT YOUR QUALITY POLICY
 Ensure that your organization's quality policy serves its overall purpose.
 Ensure that your quality policy makes it clear that requirements must be met.
 Ensure that your quality policy makes a commitment to continually improve the effectiveness of your QMS.
 Ensure that your quality policy supports your organization's quality objectives.
 Ensure that your quality policy is communicated and discussed throughout your organization.
 Ensure that your quality policy is periodically reviewed to make sure that it is still suitable.
CARRY OUT YOUR QMS PLANNING
ESTABLISH QUALITY OBJECTIVES
 Support the establishment of quality objectives.
 Establish quality objectives for your organization.
 Make sure that your quality objectives are effective.
PLAN QUALITY MANAGEMENT SYSTEM (QMS)
 Plan the establishment of your QMS.
 Plan the documentation of your QMS.
 Plan the implementation of your QMS.
 Plan the maintenance of your QMS.
 Plan the continual improvement of your QMS.
ALLOCATE QMS RESPONSIBILITY AND AUTHORITY
DEFINE RESPONSIBILITIES AND AUTHORITIES
 Ensure that QMS responsibilities and authorities are defined.
 Ensure that QMS responsibilities and authorities are communicated throughout your organization.
CREATE MANAGEMENT REPRESENTATIVE ROLE
 Appoint a member of your organization's management to oversee your QMS.
 Give your management representative authority over and responsibility for your organization's QMS.
SUPPORT INTERNAL COMMUNICATION
 Ensure that appropriate communication processes are established within your organization.
 Ensure that internal communication occurs throughout your organization.
PERFORM QMS MANAGEMENT REVIEWS
REVIEW QUALITY MANAGEMENT SYSTEM (QMS)
 Carry out management reviews of your organization's QMS at planned intervals.
 Evaluate improvement opportunities.
 Assess the need to make changes.
 Maintain a record of your management reviews.
EXAMINE MANAGEMENT REVIEW INPUTS
 Examine information about your QMS (inputs).
GENERATE MANAGEMENT REVIEW OUTPUTS
 Generate management review decisions and actions (outputs) to improve your organization.
 Generate management review decisions and actions (outputs) to change your general quality orientation.
 Generate management review decisions and actions (outputs) to address resource needs.
6. RESOURCE REQUIREMENTS
6.1 PROVIDE REQUIRED QMS RESOURCES
 Identify the resources that your QMS needs.
 Provide the resources that your QMS needs.
6.2 PROVIDE COMPETENT QMS PERSONNEL
ENSURE THE COMPETENCE OF WORKERS
 Ensure the competence of anyone within your QMS who could directly or indirectly affect your ability to meet product requirements.
MEET COMPETENCE REQUIREMENTS
 Identify the competence requirements of personnel who perform work within your QMS that could directly or indirectly affect your organization's ability to meet product requirements.
 Provide training, or take other suitable steps, to meet your organization's QMS competence requirements.
 Evaluate the effectiveness of your organization's QMS training and awareness activities.
 Maintain suitable records which show that personnel within your QMS are competent.
6.3 PROVIDE NECESSARY INFRASTRUCTURE
 Identify the infrastructure that your organization needs in order to ensure that product requirements are met.
 Provide the infrastructure that your organization needs in order to ensure that product requirements are met.
 Maintain the infrastructure that your organization needs in order to ensure that product requirements are met.
6.4 PROVIDE SUITABLE WORK ENVIRONMENT
 Identify the work environment that your organization needs in order to ensure that product requirements are met.
 Manage the work environment that your organization needs in order to ensure that product requirements are met.
REALIZATION REQUIREMENTS
7.1 CONTROL PRODUCT REALIZATION PLANNING
 Establish a product realization planning process.
 Use your product realization planning process to plan the realization of your organization's products.
 Prepare planning outputs that are suitable and consistent with your organization's methods.
 Develop the processes that you will need to use in order to realize products.
7.2 CONTROL CUSTOMER-RELATED PROCESSES
7.2.1 IDENTIFY YOUR UNIQUE PRODUCT REQUIREMENTS
 Identify the requirements that your customers want you to comply with.
 Identify the requirements that are dictated by your product's intended use or purpose.
 Identify the requirements that are imposed on your products by external agencies.
 Identify any additional requirements that are important to your organization and must be met.
7.2.2 REVIEW CUSTOMERS' PRODUCT REQUIREMENTS
 Review your customers' product requirements.
 Maintain a record of your product requirement reviews.
 Control changes in customers' product requirements.
7.2.3 COMMUNICATE WITH YOUR CUSTOMERS
 Establish customer communication arrangements.
 Implement customer communication arrangements.
7.3 CONTROL PRODUCT DESIGN AND DEVELOPMENT
7.3.1 PLAN PRODUCT DESIGN AND DEVELOPMENT
 Plan the design and development of your products.
 Control the design and development of your products.
 Update your planning outputs whenever product design and development progress makes this necessary.
7.3.2 IDENTIFY DESIGN AND DEVELOPMENT INPUTS
 Define product design and development inputs.
 Maintain a record of design and development inputs.
 Review your product design and development inputs.
7.3.3 GENERATE DESIGN AND DEVELOPMENT OUTPUTS
 Produce product design and development outputs.
 Approve product design and development outputs before they are formally released.
 Verify that product design and development outputs meet design and development input requirements.
7.3.4 CARRY OUT DESIGN AND DEVELOPMENT REVIEWS
 Perform systematic design and development reviews throughout the design and development process.
 Maintain a record of design and development reviews.
7.3.5 PERFORM DESIGN AND DEVELOPMENT VERIFICATIONS
 Carry out design and development verifications.
 Maintain a record of design and development verifications.
7.3.6 CONDUCT DESIGN AND DEVELOPMENT VALIDATIONS
 Perform design and development validations.
 Maintain a record of design and development validations.
7.3.7 MANAGE DESIGN AND DEVELOPMENT CHANGES
 Identify changes in design and development.
 Record changes in design and development.
 Review changes in design and development.
 Verify changes in design and development.
 Validate changes in design and development.
 Approve changes in design and development before you implement these changes.
The benefits of implementing ISO 9001
Implementing a Quality Management System will motivate staff by defining their key roles and
responsibilities. Cost savings can be made through improved efficiency and productivity, as product
or service deficiencies will be highlighted. From this, improvements can be developed, resulting in
less waste, less inappropriate or rejected work, and fewer complaints. Customers will notice that
orders are met consistently, on time, and to the correct specification. This can open up the market
place to increased opportunities.
Why seek certification to ISO 9001?
 Registration to ISO 9001 by an accredited certification body shows a commitment to quality, customers, and a willingness to work towards improving efficiency.
 It demonstrates the existence of an effective quality management system that satisfies the rigours of an independent, external audit.
 An ISO 9001 certificate enhances company image in the eyes of customers, employees and shareholders alike.
 It also gives a competitive edge to an organisation's marketing.
Capability Maturity Model (CMM)
The Capability Maturity Model (CMM) is a service mark registered with the U.S. Patent
and Trademark Office by Carnegie Mellon University (CMU) and refers to a development
model that was created after study of data collected from organizations that contracted with
the U.S. Department of Defense, who funded the research. This became the foundation from
which CMU created the Software Engineering Institute (SEI). Like any model, it is an
abstraction of an existing system.
When it is applied to an existing organization's software development processes, it allows an
effective approach toward improving them. Eventually it became clear that the model could
be applied to other processes. This gave rise to a more general concept that is applied to
business processes and to developing people.
The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level evolutionary
path of increasingly organized and systematically more mature processes. CMM was
developed and is promoted by the Software Engineering Institute (SEI), a research and
development center sponsored by the U.S. Department of Defense (DoD). SEI was founded
in 1984 to address software engineering issues and, in a broad sense, to advance software
engineering methodologies. More specifically, SEI was established to optimize the process of
developing, acquiring, and maintaining heavily software-reliant systems for the DoD.
Because the processes involved are equally applicable to the software industry as a whole,
SEI advocates industry-wide adoption of the CMM.
The CMM is similar in purpose to ISO 9001, one of the ISO 9000 series of quality management standards.
Maturity model
A maturity model can be viewed as a set of structured levels that describe how well the behaviors,
practices and processes of an organization can reliably and sustainably produce required
outcomes. A maturity model may provide, for example:
 a place to start
 the benefit of a community's prior experiences
 a common language and a shared vision
 a framework for prioritizing actions
 a way to define what improvement means for your organization.
A maturity model can be used as a benchmark for comparison and as an aid to understanding
- for example, for comparative assessment of different organizations where there is
something in common that can be used as a basis for comparison. In the case of the CMM,
for example, the basis for comparison would be the organizations' software development
processes.
Structure
The Capability Maturity Model involves the following aspects:
 Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th) level is a notional ideal state where processes would be systematically managed by a combination of process optimization and continuous process improvement.
 Key Process Areas: a Key Process Area (KPA) identifies a cluster of related activities that, when performed together, achieve a set of goals considered important.
 Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.
 Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.
 Key Practices: the key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the KPAs.
Levels
There are five levels defined along the continuum of the CMM and, according to the SEI:
"Predictability, effectiveness, and control of an organization's software processes are believed
to improve as the organization moves up these five levels. While not rigorous, the empirical
evidence to date supports this belief."
1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new process.
2. Managed - the process is managed in accordance with agreed metrics.
3. Defined - the process is defined/confirmed as a standard business process, and decomposed
to levels 0, 1 and 2 (the latter being Work Instructions).
4. Quantitatively managed - the process is quantitatively measured and controlled using agreed metrics.
5. Optimizing - process management includes deliberate process optimization/improvement.
Within each of these maturity levels are Key Process Areas (KPAs) which characterize that
level, and for each KPA there are five definitions identified:
1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification
The KPAs are not necessarily unique to CMM, representing — as they do — the stages that
organizations must go through on the way to becoming mature.
The CMM provides a theoretical continuum along which process maturity can be developed
incrementally from one level to the next. Skipping levels is not allowed/feasible.
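Because skipping levels is not allowed, an organization's maturity level is the highest level for which it, and every level below it, fully satisfies the KPA goals. A minimal sketch of that rule (Python; the KPA names are abbreviated from the CMM, and the assessment data is invented):

    # Invented assessment data: maturity level -> {KPA name: goals satisfied?}.
    kpa_satisfied = {
        2: {"Requirements Management": True, "Software Project Planning": True,
            "Software Quality Assurance": True},
        3: {"Organization Process Definition": True, "Peer Reviews": False},
        4: {"Quantitative Process Management": False},
        5: {"Process Change Management": False},
    }

    def maturity_level(kpas: dict) -> int:
        """Highest level reached without skipping: all lower levels must be satisfied."""
        level = 1   # Level 1 (Initial) has no KPA requirements.
        for lvl in (2, 3, 4, 5):
            goals = kpas.get(lvl, {})
            if goals and all(goals.values()):
                level = lvl
            else:
                break
        return level

    print(maturity_level(kpa_satisfied))   # 2 -- level 3 fails on Peer Reviews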
N.B.: The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. It has been used for and may be suited
to that purpose, but critics pointed out that process maturity according to the CMM was not
necessarily mandatory for successful software development. There were/are real-life
examples where the CMM was arguably irrelevant to successful software development, and
these examples include many shrinkwrap companies (also called commercial-off-the-shelf or
"COTS" firms or software package firms). Such firms would have included, for example,
Claris, Apple, Symantec, Microsoft, and Lotus. Though these companies may have
successfully developed their software, they would not necessarily have considered or defined
or managed their processes as the CMM described as level 3 or above, and so would have
fitted level 1 or 2 of the model. This did not - on the face of it - frustrate the successful
development of their software.
Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in a
state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive
manner by users or events. This provides a chaotic or unstable environment for the
processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly
with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may
help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented
standard processes established and subject to some degree of improvement over time.
These standard processes are in place (i.e., they are the AS-IS processes) and used to
establish consistency of process performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can
effectively control the AS-IS process (e.g., for software development ). In particular,
management can identify ways to adjust and adapt the process to particular projects
without measurable losses of quality or deviations from specifications. Process Capability is
established from this level.
Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving
process performance through both incremental and innovative technological
changes/improvements.
At maturity level 5, processes are concerned with addressing statistical common causes of
process variation and changing the process (for example, to shift the mean of the process
performance) to improve process performance. This would be done at the same time as
maintaining the likelihood of achieving the established quantitative process-improvement
objectives.
Software process framework
The documented software process framework is intended to guide those wishing to assess an
organization's or project's consistency with the CMM. For each maturity level there are five
checklist types:
Policy: Describes the policy contents and KPA goals recommended by the CMM.
Standard: Describes the recommended content of select work products described in the CMM.
Process: Describes the process information content recommended by the CMM. The process checklists are further refined into checklists for:
 roles
 entry criteria
 inputs
 activities
 outputs
 exit criteria
 reviews and audits
 work products managed and controlled
 measurements
 documented procedures
 training
 tools
Procedure: Describes the recommended content of documented procedures described in the CMM.
Level overview: Provides an overview of an entire maturity level. The level overview checklists are further refined into checklists for:
 KPA purposes (Key Process Areas)
 KPA Goals
 policies
 standards
 process descriptions
 procedures
 training
 tools
 reviews and audits
 work products managed and controlled
 measurements
Six Sigma
Six Sigma stands for Six Standard Deviations (Sigma is the Greek letter used to represent standard
deviation in statistics) from the mean. Six Sigma methodology provides the techniques and tools to
improve the capability and reduce the defects in any process.
It started at Motorola, in its manufacturing division, where millions of parts are made using the
same process repeatedly. Eventually Six Sigma evolved and was applied to other non-manufacturing
processes. Today you can apply Six Sigma to many fields, such as services, medical and insurance
procedures, and call centers.
Six Sigma methodology improves any existing business process by constantly reviewing and
re-tuning the process. To achieve this, Six Sigma uses a methodology known as DMAIC (Define
opportunities, Measure performance, Analyze opportunity, Improve performance, Control
performance).
Six Sigma is a business management strategy originally developed by Motorola, USA in
1986. As of 2010, it is widely used in many sectors of industry, although its use is not without
controversy.
Six Sigma seeks to improve the quality of process outputs by identifying and removing the
causes of defects (errors) and minimizing variability in manufacturing and business processes.
It uses a set of quality management methods, including statistical methods, and creates a
special infrastructure of people within the organization ("Black Belts", "Green Belts", etc.)
who are experts in these methods. Each Six Sigma project carried out within an organization
follows a defined sequence of steps and has quantified financial targets (cost reduction or
profit increase).
The term Six Sigma originated from terminology associated with manufacturing, specifically
terms associated with statistical modeling of manufacturing processes. The maturity of a
manufacturing process can be described by a sigma rating indicating its yield, or the
percentage of defect-free products it creates. A six sigma process is one in which 99.99966%
of the products manufactured are statistically expected to be free of defects (3.4 defects per
million). Motorola set a goal of "six sigma" for all of its manufacturing operations, and this
goal became a byword for the management and engineering practices used to achieve it.
Six Sigma is a systematic process of quality improvement through a disciplined, data-driven
approach: improving the organizational process by eliminating the defects or obstacles that
prevent the organization from reaching perfection.
Six Sigma counts the total number of defects encountered in an organization's performance.
Any deviation from the customer's specification is considered a defect. With the help of the
statistical representation of Six Sigma, it is easy to find out how a process is performing in
quantitative terms. A defect, according to Six Sigma, is a nonconformity of the product or the
service of an organization.
Since the fundamental aim of Six Sigma is improvement of a specified process through a
measurement-based strategy, Six Sigma is a registered service mark/trademark with its own
rules and methodologies. To earn the Six Sigma label, a process must not produce more than
3.4 defects per million opportunities; that is, the rate of defects in the process should not
exceed 3.4 per million opportunities. The number of defects can be computed through the
Six Sigma calculation, and a sigma calculator helps with this.
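The sigma calculation is essentially an inverse lookup on the normal distribution. A minimal sketch using scipy (and including the conventional 1.5 sigma shift explained later in this unit):

    from scipy.stats import norm

    def sigma_level(dpmo: float, shift: float = 1.5) -> float:
        """Convert defects per million opportunities to a short-term sigma level."""
        # One-sided normal tail, plus the conventional 1.5 sigma shift.
        return norm.isf(dpmo / 1_000_000) + shift

    print(round(sigma_level(3.4), 2))      # 6.0 -- a "six sigma" process
    print(round(sigma_level(66_807), 2))   # 3.0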
In order to attain the fundamental objectives of Six Sigma, Six Sigma methodologies are
implemented through improvement projects, which are accomplished through two
sub-methodologies. Improvement projects involve identifying, selecting, and ranking candidate
improvements according to their importance. The two major sub-divisions are Six Sigma DMAIC
and Six Sigma DMADV. These sub-divisions are treated as processes, and their execution is
carried out through three certifications.
HISTORICAL OVERVIEW
Six Sigma originated as a set of practices designed to improve manufacturing processes and
eliminate defects, but its application was subsequently extended to other types of business
processes as well. In Six Sigma, a defect is defined as any process output that does not meet
customer specifications, or that could lead to creating an output that does not meet customer
specifications.
Bill Smith first formulated the particulars of the methodology at Motorola in 1986. Six Sigma
was heavily inspired by six preceding decades of quality improvement methodologies such as
quality control, TQM, and Zero Defects, based on the work of pioneers such as Shewhart,
Deming, Juran, Ishikawa, Taguchi and others.
Like its predecessors, Six Sigma doctrine asserts that:
 Continuous efforts to achieve stable and predictable process results (i.e., reduce process variation) are of vital importance to business success.
 Manufacturing and business processes have characteristics that can be measured, analyzed, improved and controlled.
 Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.
Features that set Six Sigma apart from previous quality improvement initiatives
include:
 A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.
 An increased emphasis on strong and passionate management leadership and support.
 A special infrastructure of "Champions," "Master Black Belts," "Black Belts," "Green Belts", etc. to lead and implement the Six Sigma approach.
 A clear commitment to making decisions on the basis of verifiable data, rather than assumptions and guesswork.
The term "Six Sigma" comes from a field of statistics known as process capability studies.
Originally, it referred to the ability of manufacturing processes to produce a very high
proportion of output within specification. Processes that operate with "six sigma quality" over
the short term are assumed to produce long-term defect levels below 3.4 defects per million
opportunities (DPMO). Six Sigma's implicit goal is to improve all processes to that level of
quality or better.
Six Sigma is a registered service mark and trademark of Motorola Inc. As of 2006, Motorola
reported over US$17 billion in savings from Six Sigma.
Other early adopters of Six Sigma who achieved well-publicized success include Honeywell
(previously known as AlliedSignal) and General Electric, where Jack Welch introduced the
method. By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six
Sigma initiatives with the aim of reducing costs and improving quality.
In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing
to yield a methodology named Lean Six Sigma.
METHODS
Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act
Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.
 DMAIC is used for projects aimed at improving an existing business process. DMAIC is pronounced as "duh-may-ick".
 DMADV is used for projects aimed at creating new product or process designs.[12] DMADV is pronounced as "duh-mad-vee".
DMAIC
The DMAIC project methodology has five phases:
 Define the problem, the voice of the customer, and the project goals, specifically.
 Measure key aspects of the current process and collect relevant data.
 Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out the root cause of the defect under investigation.
 Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.
 Control the future state process to ensure that any deviations from target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, and visual workplaces, and continuously monitor the process.
DMADV or DFSS
The DMADV project methodology, also known as DFSS ("Design For Six Sigma"),[12]
features five phases:
 Define design goals that are consistent with customer demands and the enterprise strategy.
 Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.
 Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.
 Design details, optimize the design, and plan for design verification. This phase may require simulations.
 Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).
Quality management tools and methods used in Six Sigma
Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many
established quality-management tools that are also used outside of Six Sigma.
IMPLEMENTATION ROLES
One key innovation of Six Sigma involves the "professionalizing" of quality management
functions. Prior to Six Sigma, quality management in practice was largely relegated to the
production floor and to statisticians in a separate quality department. Formal Six Sigma programs adopt
a ranking terminology (similar to some martial arts systems) to define a hierarchy (and career
path) that cuts across all business functions.
Six Sigma identifies several key roles for its successful implementation.
 Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.
 Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts.
 Master Black Belts, identified by champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent application of Six Sigma across various functions and departments.
 Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.
 Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts.
Some organizations use additional belt colours, such as Yellow Belts, for employees that have
basic training in Six Sigma tools.
Certification
In the United States, Six Sigma certification for both Green and Black Belts is offered by the
Institute of Industrial Engineers and by the American Society for Quality.[15] Many
organizations also offer certification programs to their employees. Many corporations,
including early Six Sigma pioneers General Electric and Motorola, developed certification
programs as part of their Six Sigma implementation. All branches of the US Military also
train and certify their own Black and Green Belts.
Origin and meaning of the term "six sigma process"
Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model.
The Greek letter σ (sigma) marks the distance on the horizontal axis between the mean, µ, and the
curve's inflection point. The greater this distance, the greater is the spread of values encountered.
For the curve shown above, µ = 0 and σ = 1. The upper and lower specification limits (USL, LSL) are at
a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying
that far away from the mean are extremely unlikely. Even if the mean were to move right or left by
1.5σ at some point in the future (1.5 sigma shift), there is still a good safety cushion. This is why Six
Sigma aims to have processes where the mean is at least 6σ away from the nearest specification
limit.
The term "six sigma process" comes from the notion that if one has six standard deviations
between the process mean and the nearest specification limit, as shown in the graph,
practically no items will fail to meet specifications.[8] This is based on the calculation method
employed in process capability studies.
Capability studies measure the number of standard deviations between the process mean and
the nearest specification limit in sigma units. As process standard deviation goes up, or the
mean of the process moves away from the center of the tolerance, fewer standard deviations
will fit between the mean and the nearest specification limit, decreasing the sigma number
and increasing the likelihood of items outside specification.
Role of the 1.5 sigma shift
Experience has shown that processes usually do not perform as well in the long term as they
do in the short term. As a result, the number of sigmas that will fit between the process mean
and the nearest specification limit may well drop over time, compared to an initial short-term
study. To account for this real-life increase in process variation over time, an empirically-based
1.5 sigma shift is introduced into the calculation. According to this idea, a process that
fits 6 sigma between the process mean and the nearest specification limit in a short-term
study will in the long term only fit 4.5 sigma – either because the process mean will move
over time, or because the long-term standard deviation of the process will be greater than that
observed in the short term, or both.
Hence the widely accepted definition of a six sigma process is a process that produces 3.4
defective parts per million opportunities (DPMO). This is based on the fact that a process that
is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard
deviations above or below the mean (one-sided capability study). So the 3.4 DPMO of a six
sigma process in fact corresponds to 4.5 sigma, namely 6 sigma minus the 1.5-sigma shift
introduced to account for long-term variation. This allows for the fact that special causes may
result in a deterioration in process performance over time, and is designed to prevent
underestimation of the defect levels likely to be encountered in real-life operation.
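These figures can be checked directly against the normal distribution; a short scipy sketch reproducing the numbers in this paragraph:

    from scipy.stats import norm

    # One-sided tail at 4.5 sigma: the long-term defect rate of a "six sigma" process.
    print(norm.sf(4.5) * 1e6)   # ~3.4 DPMO

    # Without the 1.5 sigma shift, a true 6 sigma one-sided tail would be far smaller.
    print(norm.sf(6.0) * 1e6)   # ~0.001 DPMO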
Sigma levels
A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward
the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality
by signaling when quality professionals should investigate a process to find and eliminate
special-cause variation.
The table below gives long-term DPMO values corresponding to various short-term sigma
levels.
It must be understood that these figures assume that the process mean will shift by 1.5 sigma
toward the side with the critical specification limit. In other words, they assume that after the
initial study determining the short-term sigma level, the long-term Cpk value will turn out to
be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1
sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification
limit (Cpk = –0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk =
0.33). Note that the defect percentages only indicate defects exceeding the specification limit
to which the process mean is nearest. Defects beyond the far specification limit are not
included in the percentages.
Sigma level   DPMO      Percent defective   Percentage yield   Short-term Cpk   Long-term Cpk
1             691,462   69%                 31%                0.33             –0.17
2             308,538   31%                 69%                0.67             0.17
3             66,807    6.7%                93.3%              1.00             0.5
4             6,210     0.62%               99.38%             1.33             0.83
5             233       0.023%              99.977%            1.67             1.17
6             3.4       0.00034%            99.99966%          2.00             1.5
7             0.019     0.0000019%          99.9999981%        2.33             1.83
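The rows of this table follow directly from the definitions above: DPMO is the one-sided normal tail beyond (sigma level - 1.5), short-term Cpk is the sigma level divided by 3, and long-term Cpk is 0.5 less. A sketch that regenerates the table with scipy:

    from scipy.stats import norm

    print(f"{'Level':>5} {'DPMO':>12} {'ST Cpk':>7} {'LT Cpk':>7}")
    for level in range(1, 8):
        dpmo = norm.sf(level - 1.5) * 1e6   # 1.5 sigma shift toward the nearest limit
        st_cpk = level / 3                  # short-term Cpk
        lt_cpk = (level - 1.5) / 3          # long-term Cpk = short-term Cpk - 0.5
        print(f"{level:>5} {dpmo:>12.3f} {st_cpk:>7.2f} {lt_cpk:>7.2f}")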
APPLICATION
Six Sigma mostly finds application in large organizations. An important factor in the spread
of Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a
figure that later grew to more than $1 billion. According to industry consultants like Thomas
Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six
Sigma implementation, or need to adapt the standard approach to make it work for them. This
is due both to the infrastructure of Black Belts that Six Sigma requires, and to the fact that
large organizations present more opportunities for the kinds of improvements Six Sigma is
suited to bringing about.
CRITICISM
Lack of originality
Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality
improvement", stating that "there is nothing new there. It includes what we used to call
facilitators. They've adopted more flamboyant terms, like belts with different colors. I think
that concept has merit to set apart, to create specialists who can be very helpful. Again, that's
not a new idea. The American Society for Quality long ago established certificates, such as
for reliability engineers."
Role of consultants
The use of "Black Belts" as itinerant change agents has (controversially) fostered an industry
of training and certification. Critics argue there is overselling of Six Sigma by too great a
number of consulting firms, many of which claim expertise in Six Sigma when they only
have a rudimentary understanding of the tools and techniques involved.[3]
Potential negative effects
A Fortune article stated that "of 58 large companies that have announced Six Sigma
programs, 91 percent have trailed the S&P 500 since". The statement was attributed to "an
analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)." The summary of the article is that Six Sigma is effective at what it is
intended to do, but that it is "narrowly designed to fix an existing process" and does not help
in "coming up with new products or disruptive technologies." Advocates of Six Sigma have
argued that many of these claims are in error or ill-informed.
A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M had the
effect of stifling creativity and reports its removal from the research function. It cites two
Wharton School professors who say that Six Sigma leads to incremental innovation at the
expense of blue-sky research. This phenomenon is further explored in the book Going Lean,
which describes a related approach known as lean dynamics and provides data to show that
Ford's "6 Sigma" program did little to change its fortunes.[25]
Based on arbitrary standards
While 3.4 defects per million opportunities might work well for certain products/processes, it
might not operate optimally or cost effectively for others. A pacemaker process might need
higher standards, for example, whereas a direct mail advertising campaign might need lower
standards. The basis and justification for choosing 6 (as opposed to 5 or 7, for example) as
the number of standard deviations is not clearly explained. In addition, the Six Sigma model
assumes that the process data always conform to the normal distribution. The calculation of
defect rates for situations where the normal distribution model does not apply is not properly
addressed in the current Six Sigma literature.[3]
Criticism of the 1.5 sigma shift
The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its
arbitrary nature. Its universal applicability is seen as doubtful.
The 1.5 sigma shift has also become contentious because it results in stated "sigma levels"
that reflect short-term rather than long-term performance: a process that has long-term defect
levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a
"six sigma process." The accepted Six Sigma scoring system thus cannot be equated to actual
normal distribution probabilities for the stated number of standard deviations, and this has
been a key bone of contention about how Six Sigma measures are defined. The fact that it is
rarely explained that a "6 sigma" process will have long-term defect rates corresponding to
4.5 sigma performance, rather than actual 6 sigma performance, has led several commentators
to criticize the Six Sigma convention as misleading.
HOW DOES SIX SIGMA WORK?
Six Sigma has both management components and technical components. Using this dual approach
allows for everyone to have a role in making the Six Sigma plan a success.
The management side focuses on using the management system to line up the right projects and
match them with the right individuals. Management also focuses on getting the right goals and
process metrics to ensure that projects are successfully completed and that these gains can be
sustained over time.
The technical side focuses on enhancing process performance using process data, statistical thinking,
and methods. This focused process improvement methodology has five key stages: Define, Measure,
Analyze, Improve and Control. Define is to define process improvement goals that are consistent
with customer demands and company strategy. Next measure the key aspects of the current
processes that your company is using and collect relevant data about these processes and current
results. Then analyze the data to verify cause-and-effect relationships, being sure to consider all
possible factors involved. Then improve or optimize the process based upon data analysis using
techniques like Design of Experiments or observational study. The last step is to control to ensure
that any deviations are corrected before they result in defects. During this step you will also set up
pilot runs to establish process capability and will continuously monitor the process.
All tools, statistical or not, are linked and sequenced in a unique way that makes Six Sigma both easy
and effective to use. The basic approach focuses on the identification of the key process drivers
(variables that have the largest effect on output) and relies on software such as Minitab for
statistical calculations.
What is Zero Defects?
Zero Defects, pioneered by Philip Crosby, is a business practice which aims to reduce and minimise
the number of defects and errors in a process and to do things right the first time. The ultimate aim
will be to reduce the level of defects to zero. However, this may not be possible in practice, and
what it means is that everything possible will be done to eliminate the likelihood of errors or defects
occurring. The overall effect of achieving zero defects is the maximisation of profitability.
More recently the concept of zero defects has led to the creation and development of Six Sigma,
pioneered by Motorola and now adopted worldwide by many other organisations.
The concept of zero defects as explained and initiated by Philip Crosby is a business system
that aims at reducing the defects in a process, and doing the process correctly the first time.
The concept holds that if a product is built to specifications without any drawbacks, then it is
an acceptable product: in terms of defects, a product will be acceptable when it is free of defects.
When considering the concept of zero defects, one might want to know what that zero defect
level is, and whether acceptable levels can be achieved for a product.
Attaining perfect zero defects may not be possible, and there is always a chance of some
errors or defect occurring. Zero defects means reaching a level of infinity sigma - which is
nearly impossible. In terms of Six Sigma, zero defects would mean maximization of
profitability and improvement in quality.
A process has to be in place that allows for the achievement of zero defects. Unless
conditions are perfect, the objective of zero defects is not possible. It is possible to measure
non-conformance in terms of waste. Unless the customer requirements are clear, you will not
be able to achieve a product that matches these requirements.
How can it be used?
The concept of zero defects can be practically utilised in any situation to improve quality and reduce
cost. However it doesn’t just happen, as the right conditions have to be established to allow this to
take place. A process, system or method of working has to be established which allows for the
achievement of zero defects. If this process and the associated conditions are not created then it will
not be possible for anyone involved in the process to achieve the desired objective of zero defects.
In such a process it will be possible to measure the cost of non-conformance in terms of wasted
materials and wasted time.
Any process that is to be designed to include this concept must be clear on its customer expectations
and desires. The ideal is to aim for a process and finished article that conforms to customer
requirements and does not fall short of or exceed these requirements. For example, in recent years
many financial organisations have made claims regarding how quickly they can process a home loan
application. But what they may have failed to realise is that in spending a great deal of time and
money reducing processing time they are exceeding customer requirements (even if they believe
that they know them). In these cases they have exceeded the cost of conformance when it was not
necessary to do so.
Impact on the Workforce and Supply Chain
Employees are aware of the need to reduce defects, and they strive to achieve continual
improvement. However, over-emphasis on zero defect levels may be demoralizing, and may
even lead to lost productivity, since anything short of zero defects would be regarded as an
unacceptable level.
When the zero defect rule is applied to suppliers and any minor defects are said to be
unacceptable, then the company's supply chain may be jeopardized - which in itself is not the
best business scenario.
It may be acceptable to have a policy of continuous improvement rather than a zero defect
one. Companies may be able to achieve decent reduction in costs and improved customer
satisfaction levels to achieve a bigger market share.
Principles of Zero Defects
The principles of the methodology are four-fold:
1. Quality is conformance to requirements
Every product or service has a requirement: a description of what the customer needs. When
a particular product meets that requirement, it has achieved quality, provided that the
requirement accurately describes what the enterprise and the customer actually need. This
technical sense should not be confused with more common usages that indicate weight or
goodness or precious materials or some absolute idealized standard. In common parlance, an
inexpensive disposable pen is a lower-quality item than a gold-plated fountain pen. In the
technical sense of Zero Defects, the inexpensive disposable pen is a quality product if it
meets requirements: it writes, does not skip nor clog under normal use, and lasts the time
specified.
2. Defect prevention is preferable to quality inspection and correction
The second principle is based on the observation that it is nearly always less troublesome,
more certain and less expensive to prevent defects than to discover and correct them.
3. Zero Defects is the quality standard
The third is based on the normative nature of requirements: if a requirement expresses what is
genuinely needed, then any unit that does not meet requirements will not satisfy the need and
is no good. If units that do not meet requirements actually do satisfy the need, then the
requirement should be changed to reflect reality.
4. Quality is measured in monetary terms – the Price of Nonconformance
(PONC)
The fourth principle is key to the methodology. Phil Crosby believes that every defect
represents a cost, which is often hidden. These costs include inspection time, rework, wasted
material and labor, lost revenue and the cost of customer dissatisfaction. When properly
identified and accounted for, the magnitude of these costs can be made apparent, which has
three advantages. First, it provides a cost-justification for steps to improve quality. The title
of the book, "Quality is free," expresses the belief that improvements in quality will return
savings more than equal to the costs. Second, it provides a way to measure progress, which is
essential to maintaining management commitment and to rewarding employees. Third, by
making the goal measurable, actions can be made concrete and decisions can be made on the
basis of relative return.
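As a toy illustration of this fourth principle, the cost categories Crosby names can simply be summed into a single PONC figure and compared with the cost of prevention (all amounts below are invented):

    # Hypothetical monthly nonconformance costs, in dollars.
    ponc_items = {
        "inspection time": 1_200.00,
        "rework": 5_400.00,
        "wasted material and labor": 2_800.00,
        "lost revenue": 3_500.00,
        "customer dissatisfaction (estimated)": 1_000.00,
    }
    ponc = sum(ponc_items.values())
    prevention_cost = 4_000.00   # cost of the proposed improvement steps

    print(f"Price of Nonconformance: ${ponc:,.2f}")
    # The improvement is cost-justified if it removes more PONC than it costs.
    print("Improvement justified:", ponc > prevention_cost)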
Advantages
 Cost reduction caused by a decrease in waste. This waste could be both wasted materials and wasted time due to unnecessary rework.
 Cost reduction due to the fact that time is now being spent on only producing goods or services that are produced according to the requirements of consumers.
 Building and delivering a finished article that conforms to consumer requirements at all times will result in increased customer satisfaction, improved customer retention and increased profitability.
 It becomes possible to measure the cost of quality.
Disadvantages
 A process can be over-engineered by an organisation in its efforts to create zero defects. Whilst endeavouring to create a situation of zero defects, increasing time and expense may be spent in an attempt to build the perfect process that delivers the perfect finished product, which in reality may not be possible. For example, a consumer requirement may be a desire to buy a motor car that is 100% reliable, never rusts and maximises fuel consumption. However, in practice, if an organisation doesn't include some kind of built-in obsolescence, the product will have a more limited life.
Statistical Quality Control
Alternative term for statistical process control.
Statistical process control
Statistical process control (SPC) is the application of statistical methods to the monitoring
and control of a process to ensure that it operates at its full potential to produce conforming
product. Under SPC, a process behaves predictably to produce as much conforming product
as possible with the least possible waste. While SPC has been applied most frequently to
controlling manufacturing lines, it applies equally well to any process with a measurable
output. Key tools in SPC are control charts, a focus on continuous improvement and designed
experiments.
Much of the power of SPC lies in the ability to examine a process and the sources of variation
in that process using tools that give weight to objective analysis over subjective opinions and
that allow the strength of each source to be determined numerically. Variations in the process
that may affect the quality of the end product or service can be detected and corrected, thus
reducing waste as well as the likelihood that problems will be passed on to the customer.
With its emphasis on early detection and prevention of problems, SPC has a distinct
advantage over other quality methods, such as inspection, that apply resources to detecting
and correcting problems after they have occurred.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce the
product or service from end to end. This is partially due to a diminished likelihood that the
final product will have to be reworked, but it may also result from using SPC data to identify
bottlenecks, wait times, and other sources of delays within the process. Process cycle time
reductions coupled with improvements in yield have made SPC a valuable tool from both a
cost reduction and a customer satisfaction standpoint.
Statistical process control (SPC) involves using statistical techniques to measure and
analyze the variation in processes. Most often used for manufacturing processes, the intent
of SPC is to monitor product quality and maintain processes to fixed targets. Statistical
quality control refers to using statistical techniques for measuring and improving the quality
of processes and includes SPC in addition to other techniques, such as sampling plans,
experimental design, variation reduction, process capability analysis, and process
improvement plans.
SPC is used to monitor the consistency of processes used to manufacture a product as
designed. It aims to get and keep processes under control. No matter how good or bad the
design, SPC can ensure that the product is being manufactured as designed and
intended. Thus, SPC will not improve a poorly designed product's reliability, but can be used
to maintain the consistency of how the product is made and, therefore, of the manufactured
product itself and its as-designed reliability.
A primary tool used for SPC is the control chart.
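A minimal individuals (XmR) control-chart sketch in Python follows; sigma is estimated from the mean moving range divided by the standard d2 constant of 1.128, as is usual for individuals charts, and the data is invented:

    from statistics import mean

    def xmr_limits(samples):
        """Center line and 3-sigma limits for an individuals (XmR) chart."""
        center = mean(samples)
        moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
        sigma_hat = mean(moving_ranges) / 1.128   # d2 constant for subgroups of 2
        return center - 3 * sigma_hat, center, center + 3 * sigma_hat

    def out_of_control(samples):
        lcl, _, ucl = xmr_limits(samples)
        return [(i, x) for i, x in enumerate(samples) if x < lcl or x > ucl]

    # Example: cereal-box fill weights in grams, with one special-cause point.
    weights = [500.2, 499.8, 500.1, 499.9, 500.3, 499.7, 500.0, 507.0]
    print(out_of_control(weights))   # [(7, 507.0)]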
HISTORY
Statistical process control was pioneered by Walter A. Shewhart in the early 1920s. W.
Edwards Deming later applied SPC methods in the United States during World War II,
thereby successfully improving quality in the manufacture of munitions and other
strategically important products. Deming was also instrumental in introducing SPC methods
to Japanese industry after the war had ended.
Shewhart created the basis for the control chart and the concept of a state of statistical control
by carefully designed experiments. While Dr. Shewhart drew from pure mathematical
statistical theories, he understood that data from physical processes seldom produces a
"normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell
curve"). He discovered that observed variation in manufacturing data did not always behave
the same way as data in nature (for example, Brownian motion of particles). Dr. Shewhart
concluded that while every process displays variation, some processes display controlled
variation that is natural to the process (common causes of variation), while others display
uncontrolled variation that is not present in the process causal system at all times (special
causes of variation).[3]
In 1988, the Software Engineering Institute introduced the notion that SPC can be usefully
applied to non-manufacturing processes, such as software engineering processes, in the
Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5
practices of the Capability Maturity Model Integration (CMMI). This notion that SPC is a
useful tool when applied to non-repetitive, knowledge-intensive processes such as
engineering processes has encountered much skepticism, and remains controversial today.
GENERAL
The following description relates to manufacturing rather than to the service industry,
although the principles of SPC can be successfully applied to either. For a description and
example of how SPC applies to a service environment, refer to Roberts (2005).[6] SPC has
also been successfully applied to detecting changes in organizational behavior with Social
Network Change Detection introduced by McCulloh (2007). Selden describes how to use SPC
in the fields of sales, marketing, and customer service, using Deming's famous Red Bead
Experiment as an easy to follow demonstration.[7]
In mass-manufacturing, the quality of the finished article was traditionally achieved through
post-manufacturing inspection of the product; accepting or rejecting each article (or samples
from a production lot) based on how well it met its design specifications. In contrast,
Statistical Process Control uses statistical tools to observe the performance of the production
process in order to predict significant deviations that may later result in rejected product.
Two kinds of variation occur in all manufacturing processes: both these types of process
variation cause subsequent variation in the final product. The first is known as natural or
common cause variation and consists of the variation inherent in the process as it is designed.
Common cause variation may include variations in temperature, properties of raw materials,
strength of an electrical current etc. The second kind of variation is known as special cause
variation, or assignable-cause variation, and happens less frequently than the first. With
sufficient investigation, a specific cause, such as abnormal raw material or incorrect set-up
parameters, can be found for special cause variations.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with
500 grams of product, but some boxes will have slightly more than 500 grams, and some will
have slightly less, in accordance with a distribution of net weights. If the production process,
its inputs, or its environment changes (for example, the machines doing the manufacture
begin to wear) this distribution can change. For example, as its cams and pulleys wear out,
the cereal filling machine may start putting more cereal into each box than specified. If this
change is allowed to continue unchecked, more and more product will be produced that falls
outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case,
the waste is in the form of "free" product for the consumer, typically waste consists of rework
or scrap.
By observing at the right time what happened in the process that led to a change, the quality
engineer or any member of the team responsible for the production line can troubleshoot the
root cause of the variation that has crept in to the process and correct the problem.
SPC indicates when an action should be taken in a process, but it also indicates when NO
action should be taken. An example is a person who would like to maintain a constant body
weight and takes weight measurements weekly. A person who does not understand SPC
concepts might start dieting every time his or her weight increased, or eat more every time his
or her weight decreased. This type of action could be harmful and possibly generate even
more variation in body weight. SPC would account for normal weight variation and better
indicate when the person is in fact gaining or losing weight.
HOW TO USE SPC
Statistical Process Control may be broadly broken down into three sets of activities:
understanding the process, understanding the causes of variation, and elimination of the
sources of special cause variation.
In understanding a process, the process is typically mapped out and the process is monitored
using control charts. Control charts are used to identify variation that may be due to special
causes, and to free the user from concern over variation due to common causes. This is a
continuous, ongoing activity. When a process is stable and does not trigger any of the
detection rules for a control chart, a process capability analysis may also be performed to
predict the ability of the current process to produce conforming (i.e. within specification)
product in the future.
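The capability indices most often used for this purpose are Cp and Cpk, which appear again in the capabilities list later in this section. The following is a minimal sketch, with hypothetical fill-weight data and specification limits rather than an example from the source:

```python
import statistics

def process_capability(samples, lsl, usl):
    """Return (Cp, Cpk) for a stable process given lower/upper spec limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)             # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)   # penalises off-centre processes
    return cp, cpk

# Hypothetical fill weights (grams) against a 495-505 g specification
weights = [501.2, 499.8, 500.5, 498.9, 500.1, 501.0, 499.5, 500.3]
print(process_capability(weights, lsl=495.0, usl=505.0))
```

By convention a process is often judged capable when Cpk exceeds about 1.33, although the threshold used is a management decision.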
When excessive variation is identified by the control chart detection rules, or the process
capability is found lacking, additional effort is exerted to determine causes of that variance.
The tools used include Ishikawa diagrams, designed experiments and Pareto charts. Designed
experiments are critical to this phase of SPC, as they are the only means of objectively
quantifying the relative importance of the many potential causes of variation.
Once the causes of variation have been quantified, effort is spent in eliminating those causes
that are both statistically and practically significant (i.e. a cause that has only a small but
statistically significant effect may not be considered cost-effective to fix; however, a cause
that is not statistically significant can never be considered practically significant). Generally,
this includes development of standard work, error-proofing and training. Additional process
changes may be required to reduce variation or align the process with the desired target,
especially if there is a problem with process capability.
In practice, most people (in a manufacturing environment) will think of SPC as a set of rules and a control chart (paper and/or digital). SPC ought to be a PROCESS; that is, when conditions change, such 'rules' should be re-evaluated and possibly updated. This does not, alas, usually take place; as a result, the set of rules known as "the Western Electric rules" can be found, with minor variations, in a great many different environments (for which they are very rarely actually suitable).
For digital SPC charts, so-called SPC rules usually come with some rule-specific logic that determines a 'derived value' to be used as the basis for some (setting) correction. One example of such a derived value, for the common 'N numbers in a row ranging up or down' rule, is: derived value = last value + average difference between the last N numbers (which, in effect, extends the row with the expected next value).
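As a concrete illustration, here is a minimal sketch of that derived value, assuming the rule in question is the 'N numbers in a row ranging up or down' rule just described:

```python
# A sketch of the derived value described above, assuming the rule in
# question is the 'N numbers in a row ranging up or down' rule.
def derived_value(values, n):
    """Last value plus the average step over the last n points,
    i.e. the value that would extend the observed run."""
    recent = values[-n:]
    steps = [b - a for a, b in zip(recent, recent[1:])]
    return recent[-1] + sum(steps) / len(steps)

# Four points trending upward by roughly 0.3 per sample (hypothetical data)
print(derived_value([10.0, 10.2, 10.5, 10.9], n=4))  # 10.9 + 0.3 = 11.2
```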
The fundamentals of Statistical Process Control (though that was not what it was called at the
time) and the associated tool of the Control Chart were developed by Dr Walter A Shewhart
in the mid-1920s. His reasoning and approach were practical, sensible and positive. In order to be so, he deliberately avoided overdoing mathematical detail. In later years, significant mathematical attributes were assigned to Shewhart's thinking, with the result that this work became better known than the pioneering application that Shewhart had worked up.
The crucial difference between Shewhart's work and the inappropriately perceived purpose of SPC that later emerged (typically involving mathematical distortion and tampering) is that his developments were in the context, and with the purpose, of process improvement, as opposed to mere process monitoring; i.e. they could be described as helping to get the process into that
“satisfactory state” which one might then be content to monitor. Note, however, that a true
adherent to Deming’s principles would probably never reach that situation, following instead
the philosophy and aim of continuous improvement.
Explanation and Illustration:
What do “in control” and “out of control” mean?
Suppose that we are recording, regularly over time, some measurements from a process. The
measurements might be lengths of steel rods after a cutting operation, or the lengths of time
to service some machine, or your weight as measured on the bathroom scales each morning,
or the percentage of defective (or non-conforming) items in batches from a supplier, or
measurements of Intelligence Quotient, or times between sending out invoices and receiving
the payment etc., etc..
A series of line graphs or histograms can be drawn to represent the data as a statistical
distribution. It is a picture of the behaviour of the variation in the measurement that is being
recorded. If a process is deemed “stable”, then the concept is that it is in statistical control.
The point is that, if an outside influence impacts upon the process, (e.g., a machine setting is
altered or you go on a diet etc.) then, in effect, the data are of course no longer all coming
from the same source. It therefore follows that no single distribution could possibly serve to
represent them. If the distribution changes unpredictably over time, then the process is said to
be out of control. As a scientist, Shewhart knew that there is always variation in anything that
can be measured. The variation may be large, or it may be imperceptibly small, or it may be
between these two extremes; but it is always there.
What inspired Shewhart’s development of the statistical control of processes was his
observation that the variability which he saw in manufacturing processes often differed in
behaviour from that which he saw in so-called “natural” processes – by which he seems to
have meant such phenomena as molecular motions.
Wheeler and Chambers combine and summarise these two important aspects as follows: "While every process displays variation, some processes display controlled variation, while others display uncontrolled variation."
In particular, Shewhart often found controlled (stable) variation in natural processes and uncontrolled (unstable) variation in manufacturing processes. The difference is clear. In the former case, we know what to expect in terms of variability; in the latter we do not. We may predict the future, with some chance of success, in the former case; we cannot do so in the latter.
Why is "in control" and "out of control" important?
Shewhart gave us a technical tool to help identify the two types of variation: the control chart. What is important is the understanding of why correct identification of the two types of variation is so vital. There are at least three prime reasons.
First, when there are irregular large deviations in output because of unexplained special
causes, it is impossible to evaluate the effects of changes in design, training, purchasing
policy etc. which might be made to the system by management. The capability of a process is
unknown, whilst the process is out of statistical control.
Second, when special causes have been eliminated, so that only common causes remain,
improvement then has to depend upon management action. For such variation is due to the
way that the processes and systems have been designed and built – and only management has
authority and responsibility to work on systems and processes. As Myron Tribus, Director of the American Quality and Productivity Institute, has often said:
“The people work in a system.
The job of the manager is
to work on the system,
to improve it, continuously,
with their help.”
Finally, something of great importance, but which has to be unknown to managers who do
not have this understanding of variation, is that by (in effect) misinterpreting either type of
cause as the other, and acting accordingly, they not only fail to improve matters – they
literally make things worse.
These implications, and consequently the whole concept of the statistical control of
processes, had a profound and lasting impact on Dr Deming. Many aspects of his
management philosophy emanate from considerations based on just these notions.
So why SPC?
The plain fact is that when a process is within statistical control, its output is indiscernible
from random variation: the kind of variation which one gets from tossing coins, throwing
dice, or shuffling cards. Whether or not the process is in control, the numbers will go up, the
numbers will go down; indeed, occasionally we shall get a number that is the highest or the
lowest for some time. Of course we shall: how could it be otherwise? The question is - do
these individual occurrences mean anything important? When the process is out of control,
the answer will sometimes be yes. When the process is in control, the answer is no.
So the main response to the question Why SPC? is therefore this: It guides us to the type of
action that is appropriate for trying to improve the functioning of a process. Should we react
to individual results from the process (which is only sensible, if such a result is signalled by a
control chart as being due to a special cause) or should we instead be going for change to the
process itself, guided by cumulated evidence from its output (which is only sensible if the
process is in control)?
Process improvement needs to be carried out in three chronological phases:
• Phase 1: Stabilisation of the process by the identification and elimination of special causes;
• Phase 2: Active improvement efforts on the process itself, i.e. tackling common causes;
• Phase 3: Monitoring the process to ensure the improvements are maintained, and incorporating additional improvements as the opportunity arises.
Control charts have an important part to play in each of these three Phases. Points beyond
control limits (plus other agreed signals) indicate when special causes should be searched for.
The control chart is therefore the prime diagnostic tool in Phase 1. All sorts of statistical tools
can aid Phase 2, including Pareto Analysis, Ishikawa Diagrams, flow-charts of various kinds,
etc., and recalculated control limits will indicate what kind of success (particularly in terms of
reduced variation) has been achieved. The control chart will also, as always, show when any
further special causes should be attended to. Advocates of the British/European approach will
consider themselves familiar with the use of the control chart in Phase 3. However, it is
strongly recommended that they consider the use of a Japanese Control Chart (q.v.) in order
to see how much more can be done even in this Phase than is normal practice in this part of
the world.
Statistical process control (SPC) involves using statistical techniques to measure and analyze the
variation in processes. Most often used for manufacturing processes, the intent of SPC is to monitor
product quality and maintain processes to fixed targets. Statistical quality control refers to using
statistical techniques for measuring and improving the quality of processes and includes SPC in
addition to other techniques, such as sampling plans, experimental design, variation reduction,
process capability analysis, and process improvement plans. SPC is used to monitor the consistency
of processes used to manufacture a product as designed. It aims to get and keep processes under
control. No matter how good or bad the design, SPC can ensure that the product is being
manufactured as designed and intended. Thus, SPC will not improve a poorly designed product's
reliability, but can be used to maintain the consistency of how the product is made and, therefore, of
the manufactured product itself and its as-designed reliability.
A primary tool used for SPC is the control chart, a graphical representation of certain descriptive
statistics for specific quantitative measurements of the manufacturing process. These descriptive
statistics are displayed in the control chart in comparison to their "in-control" sampling
distributions. The comparison detects any unusual variation in the manufacturing process, which
could indicate a problem with the process. Several different descriptive statistics can be used in
control charts and there are several different types of control charts that can test for different
causes, such as how quickly major vs. minor shifts in process means are detected. Control charts are
also used with product measurements to analyze process capability and for continuous process
improvement efforts.
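As an illustration of those descriptive statistics, here is a minimal sketch, assumed rather than taken from the source, that computes the centre line and control limits for Shewhart X-bar and R charts using the standard tabulated constants for subgroups of size 5:

```python
# A2, D3, D4 are the standard chart constants for subgroups of size 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Return centre lines and 3-sigma control limits for the
    X-bar chart and the R chart, estimated from subgroup data."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    grand_avg = sum(xbars) / len(xbars)   # centre line of the X-bar chart
    rbar = sum(ranges) / len(ranges)      # centre line of the R chart
    return {
        "xbar": (grand_avg - A2 * rbar, grand_avg, grand_avg + A2 * rbar),
        "range": (D3 * rbar, rbar, D4 * rbar),
    }

# Hypothetical subgroups of five fill weights each
subgroups = [[500.2, 499.8, 500.5, 499.9, 500.1],
             [500.4, 500.0, 499.7, 500.3, 500.2],
             [499.6, 500.1, 500.0, 499.8, 500.4]]
print(xbar_r_limits(subgroups))
```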
Benefits:
• Provides surveillance and feedback for keeping processes in control
• Signals when a problem with the process has occurred
• Detects assignable causes of variation
• Accomplishes process characterization
• Reduces need for inspection
• Monitors process quality
• Provides mechanism to make process changes and track effects of those changes
• Once a process is stable (assignable causes of variation have been eliminated), provides process capability analysis with comparison to the product tolerance
Capabilities:
• All forms of SPC control charts
  o Variable and attribute charts
  o Average (X-bar), Range (R), standard deviation (s), Shewhart, CuSum, combined Shewhart-CuSum, exponentially weighted moving average (EWMA)
• Selection of measures for SPC
• Process and machine capability analysis (Cp and Cpk)
• Process characterization
• Variation reduction
• Experimental design
• Quality problem solving
Rules for determining statistical control
Run tests
If the process is stable, then the distribution of subgroup averages will be approximately normal. With this in mind, we can also analyze the patterns on the control charts to see if they might be attributed to a special cause of variation. To do this, we divide a normal distribution into zones, with each zone one standard deviation wide. Figure IV.25 shows the approximate percentage we expect to find in each zone from a stable process.
Figure IV.25. Percentiles for a normal distribution.
Zone C is the area from the mean to the mean plus or minus one sigma, zone B is from plus or minus one to plus or minus two sigma, and zone A is from plus or minus two to plus or minus three sigma. Of course, any point beyond three sigma (i.e., outside of the control limits) is an indication of an out-of-control process.
Since the control limits are at plus and minus three standard deviations, finding the one and two sigma lines on a control chart is as simple as dividing the distance between the grand average and either control limit into thirds, which can be done using a ruler. This divides each half of the control chart into three zones. The three zones are labeled A, B, and C as shown in the figure below.
Figure: Zones on a control chart.
Based on the expected percentages in each zone, sensitive run tests can be developed for analyzing the patterns of variation in the various zones. Remember, the existence of a non-random pattern means that a special cause of variation was (or is) probably present.
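To make the zone idea concrete, the following minimal sketch labels points by zone and checks one such run test; the specific rule shown (two out of three successive points in zone A or beyond, on the same side) is an assumption in the spirit of the Western Electric rules, not a rule prescribed by this text:

```python
def zone(x, centre, sigma):
    """Label a point by its control-chart zone: C (within 1 sigma),
    B (1-2 sigma) or A+ (2 sigma and beyond, including out-of-control)."""
    d = abs(x - centre) / sigma
    return "C" if d < 1 else ("B" if d < 2 else "A+")

def two_of_three_beyond_two_sigma(points, centre, sigma):
    """Flag 2 out of 3 successive points in zone A or beyond,
    on the same side of the centre line."""
    for i in range(len(points) - 2):
        window = points[i:i + 3]
        high = sum(1 for p in window if (p - centre) / sigma >= 2)
        low = sum(1 for p in window if (p - centre) / sigma <= -2)
        if high >= 2 or low >= 2:
            return True
    return False

# Hypothetical subgroup averages with centre line 500 and sigma 1
print(two_of_three_beyond_two_sigma(
    [500.1, 499.8, 502.3, 502.5, 500.2], centre=500.0, sigma=1.0))  # True
```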
ISO 9001 is the internationally recognised standard for the quality management of businesses. ISO 9001:
• applies to the processes that create and control the products and services an organisation supplies
• prescribes systematic control of activities to ensure that the needs and expectations of customers are met
• is designed and intended to apply to virtually any product or service, made by any process anywhere in the world
ISO 9001 is one of the standards in the ISO 9000 family.
BENEFITS
Implementing a Quality Management System will motivate staff by defining their key roles and responsibilities. Cost savings can be made through improved efficiency and productivity, as product or service deficiencies will be highlighted. From this, improvements can be developed, resulting in less waste, inappropriate or rejected work and fewer complaints. Customers will notice that orders are met consistently, on time and to the correct specification. This can open up the market place to increased opportunities.
WHY SEEK ISO 9001 CERTIFICATION
• ISO 9001 certification by an accredited certification body shows commitment to quality, customers, and a willingness to work towards improving efficiency.
• It demonstrates the existence of an effective quality management system that satisfies the rigours of an independent, external audit.
• ISO 9001 certification enhances company image in the eyes of customers, employees and shareholders alike.
• It also gives a competitive edge to an organisation's marketing and can open up increased opportunities.
HOW DO YOU START TO IMPLEMENT ISO 9001
• Identify the requirements of ISO 9001 and how they apply to the business involved.
• Establish quality objectives and how they fit in to the operation of the business.
• Produce a documented quality policy indicating how these requirements are satisfied.
• Communicate them throughout the organisation.
• Evaluate the quality policy, its stated objectives and then prioritise requirements to ensure they are met.
• Identify the boundaries of the management system and produce documented procedures as required.
• Ensure these procedures are suitable and adhered to.
• Once developed, internal audits are needed to ensure the system carries on working.
ASSESSMENT TO ISO 9001
Once all the requirements of ISO 9001 have been met, it is time for an external audit. This
should be carried out by a third party, accredited certification body. In the UK, the body
should be accredited by UKAS (look for the ‘crown and tick’ logo). The chosen certification
body will review the quality manuals and procedures. This process involves looking at the
company’s evaluation of quality and ascertains if targets set for the management programme
are measurable and achievable. This is followed at a later date by a full on-site audit to ensure
that working practices observe the procedures and stated objectives and that appropriate
records are kept.
After a successful audit, a certificate of registration to ISO 9001 will be issued. There will
then be surveillance visits (usually once or twice a year) to ensure that the system continues
to work. This is covered in more detail in ISOQAR’s Audit Procedure information sheet.
• ISO 9000 – Fundamentals and Vocabulary: this introduces the user to the concepts behind the management systems and specifies the terminology used.
• ISO 9001 – Requirements: this sets out the criteria you will need to meet if you wish to operate in accordance with the standard and gain certification.
UNIT V
MEASURES AND METRICS IN PROCESS AND PROJECT DOMAINS
Key Measures For Software Engineers
Software quality may be defined as conformance to explicitly stated functional and performance
requirements, explicitly documented development standards and implicit characteristics that are
expected of all professionally developed software.
The three key points in this definition:
1. Software requirements are the foundations from which quality is measured.
Lack of conformance to requirement is lack of quality.
2. Specified standards define a set of development criteria that guide the management in
software engineering.
If criteria are not followed lack of quality will usually result.
3. A set of implicit requirements often goes unmentioned, for example ease of use,
maintainability etc.
If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is suspect.
Defects – Productivity And Quality
Dr. Robert Burnett describes the need to focus our future thinking on defect rates to better manage the analytical quality of laboratory tests. In the midst of our preoccupation with the profound changes that are taking place in health care delivery in general, and laboratory medicine in particular, it might be of some comfort to realize that there are some fundamental things that have remained the same. Two management objectives that have not changed in organizations, including clinical laboratories, are the need for high quality and the need for high productivity.
Current priority on productivity
Perhaps the emphasis has shifted. Fifteen or twenty years ago we could afford to focus
mainly on the quality of our product. It was fine to be efficient, but with a lot of inpatient
days, a high volume of ordered tests, and a predominantly fee-for-service payment system, it
didn't have to be a top priority. Now, changes in reimbursement have made laboratories cost
centers. The majority of patients' hospital bills are paid either on the basis of a fixed per diem,
a fixed amount determined by the diagnosis, or some other variety of flat rate that has nothing
to do with the actual procedures and tests performed for an individual patient. In hospital
laboratories, the sudden shift from profit center to cost center has prompted downsizing and
reorganizing. At the same time much more effort is being spent to control test utilization and
to reduce the cost of performing those tests that remain. The prevailing feeling in most
laboratories is that quality is high enough, but test costs need to be reduced more, i.e.,
productivity needs to be higher. What I will review here are the factors that determine
analytical quality from the customer's perspective, the interdependence that exists between
quality and productivity, and the trade-offs that are often made.
Current thinking about quality
What evidence do we have that analytical quality is generally high? In a recent publication
Westgard et al. found that only one of eighteen common laboratory tests was routinely
performed with precision high enough that current QC practices could detect medically
important errors. This raises questions: Why do we laboratory directors not perceive that
there is an enormous problem here? And why are we not being bombarded with complaints
from the medical staff? To answer these questions, think about how analytical quality is
perceived by people outside the laboratory. As laboratory professionals, we are aware of
several different components and indicators of analytical quality, but our customers are
140
sensitive only to the "bottom line" - which we call the defect rate. This quantity has been
defined in the literature on the basis of a fraction of analytical runs with an unacceptable
number of erroneous results, but here I want to define defect rate in terms of test results specifically, the fraction of test results reported with an error greater than TEa, the total error
deemed allowable on the basis of medical usefulness.
Need to consider defect rate
The defect rate for a test represents the best single indicator of analytical quality, as perceived
by our customers, that we can derive. Unfortunately, measuring defect rate is not as simple as
one might think. But to get a rough idea of what a typical defect rate might be, let's say we
are running a test on an automated chemistry analyzer, performing QC once a day. If we run
every day, and sometimes have an extra run thrown in, we might have 400 runs in a year
with, say, an average of 100 samples per run, for a total of 40,000 patient results per year.
Importance of error frequency
In quality control system design, it is important to know the frequency of critical error (f)
associated with the method. This is defined as the frequency of runs in which the distribution
of results has shifted such that 5% or greater have errors larger than TEa. Let's say our
automated analyzer has f equal to 2% - not an unreasonable figure for a well-designed
instrument. This means that in a year, eight of the four hundred runs will have at least five
results with errors larger than TEa. But we have a quality control system in place, the purpose
of which is to detect such problems. Unfortunately, practical considerations in QC system
design often dictate that we can expect to catch only a fraction of the errors we would like to
detect. However, even if the probability of error detection, Ped, is a modest 50% (at the
critical error level) then four of the eight runs would be expected to fail QC and erroneous
results would presumably not be reported. This leaves four runs and a total of 20 erroneous
results that would be reported, which is 0.05% of the total number of results, or 5 defects per
10,000 test results, a defect rate of 1 in 2,000.
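The arithmetic above can be captured in a few lines; this sketch simply reproduces the article's illustrative figures (they are not real laboratory data):

```python
runs_per_year   = 400
samples_per_run = 100
f   = 0.02   # frequency of runs with a critical error
ped = 0.50   # probability the QC system detects a critical error

critical_runs  = runs_per_year * f                 # 8 runs per year
escaped_runs   = critical_runs * (1 - ped)         # 4 runs slip past QC
errors_per_run = 0.05 * samples_per_run            # at least 5% of 100 results
defects        = escaped_runs * errors_per_run     # 20 erroneous results
total          = runs_per_year * samples_per_run   # 40,000 results per year
print(defects / total)   # 0.0005 -> 5 per 10,000, i.e. 1 in 2,000
```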
Defects vs. blunders
I digress here to acknowledge that the defect rate seen by the customer must also include
"blunders", or what statisticians call outliers. These include results reported on the wrong
specimen, or on specimens that were mishandled or improperly collected. Also included are
transcription errors. One might expect that we have fewer such blunders in the laboratory
than we had ten or twenty years ago because of automation, bar-coded identification labels
and instrument interfaces. On the other hand, downsizing has resulted in more pressure on the
remaining technologists, and this might be expected to increase the blunder rate, especially in
non-automated sections of the laboratory.
Difficulties in estimating error frequencies
How realistic is the above estimate of defect rate? Well, it's only as good as the estimate of
the critical error frequency of the method or instrument, so the question becomes, how can
we know the actual value of f? This is a difficult problem, and probably represents the most
serious obstacle to knowing how cost-effective our QC systems really are. It might be
expected that critical error frequency is related to the frequency of runs rejected as being out
of control. In fact these two quantities would be equal if we were using a QC system with an
ideal power curve. Such a function would have a probability of false rejection (Pfr) equal to
zero and a probability of error detection also equal to zero until error size reached the critical
point. At this point Ped would go to 100%. Such a power curve is depicted below.
In the real world, however, power curves look like the one below.
Inspection of these "real" power curves reveals a problem that I don't believe is widely
appreciated. Most quality control systems will reject many runs where the error size is less
than critical. Pfr gives the probability of rejecting a run with zero error, but what I call the
true false rejection rate is much higher, because all runs rejected with error sizes less than
critical should be considered false rejections from a medical usefulness viewpoint. Note that
although Ped is lower for these smaller errors, this is offset by the fact that small errors occur
more frequently than large ones.
Defect Tracking for Improving Product Quality and Productivity
Vision
Defect Tracking for Improving Product Quality and Productivity is useful for applications developed in an organization. It is a web based application that can be accessed throughout the organization. This system can be used for logging defects against an application/module, assigning defects to individuals and tracking the defects to resolution. There are features like email notifications, user maintenance, user access control and report generators in this system.
Project Specification
This system can be used as an application for any product based company to reduce defects and improve product quality and productivity. Users logging in should be able to upload their information.
Functional Components
The following tasks can be performed with the application:
(a) User Maintenance: Creating users, granting & revoking access, and deleting users from the application.
(b) Component Maintenance: Creating a component (an application being developed/enhanced), granting & revoking access on components to users, and marking a component as "Active" or "Closed".
(c) Defect Tracking: Creating defects, assigning defects to users, modifying and closing a defect. A defect screen should have at least the following details (a data-structure sketch follows this list):
• Defect Number and Title
• Defect priority
• Date created
• Defect description
• Defect diagnosis
• Name of originator
• Name of Assignee
• Status
• Resolution
(d) Find User: A search screen to find users and display results.
(e) Find Component: A search screen to find components and display results.
(f) Find Defect: A search screen to find defects and display results.
(g) Report: Generate reports on defects.
Accordingly there would be the following levels of user privileges:
• Application admin having all privileges.
• Component admin having privileges (b), (d), (e), (f) for the components they own.
• Users having privileges (b), (d), (e), (f) for components they have access to.
• All should have privileges for (c).
1. A user should be able to
• Log in to the system through the first page of the application.
• Change the password after logging into the system.
• View the defects assigned to the user.
• Find defects for components on which the user has access.
• Find components on which the user has access.
• Modify the defects by changing/putting values in fields.
• Assign defects to other users having access to the component.
• Find details of other users.
• Generate reports of defects for components on which the user has access.
2. As soon as a defect is assigned to a user, a mail should be sent to the user.
3. A Component Admin should be able to do the following tasks in addition to 1:
• Add a user to the component for creating and modifying defects against that component.
• Remove a user from the component.
• Mark a component as "Active" / "Closed". No new defects can be created against a "Closed" component. A component cannot be closed until all defects against the component are also closed.
4. The Application Admin should be able to do the following tasks in addition to 1 & 3:
• Add a new component.
• Add a user to a component as Component Admin.
• Remove Component Admin privilege from a user.
• Add a new user.
• Remove a user.
User Interface Requirements
Web based, user friendly interface
Database Requirements
Centralized
Integration Requirements
Web/Pervasive enabled
Preferred Technologies
Solutions must be created using
•HTML, CSS (Web Presentation )
•JavaScript (Client-side Scripting)
•Java (as programming language)
•JDBC, JNDI, Servlets, JSP (for creating web applications)
•Eclipse with MyEclipse Plug-in (IDE/Workbench)
•Oracle/SQL Server/Access (Database)
•Windows XP/2003 or Linux/Solaris (Operating System)
•BEA WebLogic/JBoss/WebSphere (Server Deployment)
Other Details
The application should be highly secure, with different levels & categories of access control.
Defects and Metrics
The Six Sigma metrics used in the manufacturing industries are equally useful for the service sector.
The metrics will change as per the service processes. The appropriate selection of the process,
qualitative as well as quantitative, in the use of Six Sigma is necessary.
For example, while developing a website, certain factors like site design, color schemes, user
interaction and easy navigation need to be kept in mind. When Six Sigma concepts are applied to the
site development process, all these variables will be examined for their effect on the customer – and
the ones that need improvement will be determined. It is a bit like carrying out a simple
improvement in the manufacturing process.
Defects in the service sector can be defined as problems in the process that lead to low customer satisfaction. These defects can be characterized as qualitative and quantitative. When a defect is measured quantitatively, it should also be converted into equivalent qualitative measures and vice versa.
For example, if the customer satisfaction for a service is being measured qualitatively, then it should
also be converted to quantitative as “satisfaction level” on a scale of 10. Below a certain level, the
process needs improvement.
Another example is defining defects in quantitative measures, such as delivery services. For example,
newspaper delivery has to happen before a certain time to be effective.
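To make the newspaper example concrete, the following minimal sketch (the cut-off time and delivery data are hypothetical) converts such a quantitative defect definition into defects per million opportunities (DPMO), a common Six Sigma metric:

```python
CUTOFF = 7.0  # hypothetical delivery deadline of 7:00 am (decimal hours)

def dpmo(delivery_times):
    """Count deliveries after the cut-off as defects and express the
    defect rate as defects per million opportunities."""
    defects = sum(1 for t in delivery_times if t > CUTOFF)
    return defects / len(delivery_times) * 1_000_000

# 3 of these 10 hypothetical deliveries miss the deadline -> 300,000 DPMO
print(dpmo([6.5, 6.8, 7.2, 6.9, 7.5, 6.7, 6.6, 6.4, 7.1, 6.9]))
```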
Measurements of Service Processes
Level of measurement: Using the appropriate level of measurement is very important for it to be useful and meaningful. For example, 20 percent of the processes may be taking 80 percent of the total time of the project. When analyzing the qualitative measures, 20 percent of the customers may account for 80 percent of customer dissatisfaction (i.e. defects).
The measurement of key areas, in contrast to detailed study, is necessary to get the larger picture of
the process defects.
Accounting for variations: In service processes there are large numbers of variations that may arise,
depending upon the complexity of the given task. The measurement of the typical task has to be
done, as well as for special cases or situations that arise.
Emphasize quantitative as well as qualitative measures: A proper mix of the qualitative and the
quantitative measures is very important to get useful results. A retailer’s process, which has more
personal customer contact, needs to measure the qualitative steps of the process.
A company that provides services where speed is relevant needs to concentrate more on the study
of quantitative measures.
Emphasize management communication and support: In a service-based industry such as insurance,
the claims process may have to be measured. There are different groups of people affected by the
process who may resist any change.
Management should communicate the relevance and effect of Six Sigma with the people involved to
achieve the support for it.
As Six Sigma in service processes is linked to customer satisfaction, ultimately leading to an increase in sales, the need to measure and improve these processes is important.
Measuring And Improving The Development Process
Measuring & Improving Processes
Think about it: all performance improvement methodologies (PDCA, Six Sigma, TQM, reengineering, etc.) have four elements in common:
1. Customer Requirements
2. Process Maps and Measures
3. Data/Root Cause Analysis
4. Improvement Strategies
Improving processes begins with the customer, be it internal or external. Understanding which customers and which requirements are most critical to your business determines which processes should be improved. Before relying on advanced Six Sigma techniques, significant process learning can be achieved using tools such as trend charts, Pareto charts, histograms and fishbone diagrams (a small Pareto sketch follows the list below). Using these tools and other techniques included in Measuring and Improving Processes, you will be able to:
• Measure process performance
• Determine the stability, capability, and flexibility of your processes
• Identify the factors that limit quality, slow service time and increase costs
• Develop results-oriented solutions that will yield improved business results
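The Pareto sketch promised above, using hypothetical complaint categories; it ranks causes by count and reports the cumulative share, to expose the 'vital few' causes:

```python
from collections import Counter

# Hypothetical customer complaint log
complaints = ["late", "damaged", "late", "wrong item", "late",
              "damaged", "late", "billing", "late", "damaged"]

counts = Counter(complaints).most_common()   # sorted by frequency
total = sum(c for _, c in counts)
cumulative = 0
for cause, c in counts:
    cumulative += c
    print(f"{cause:12s} {c:3d}  {cumulative / total:5.0%} cumulative")
```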
How To Reduce Costs and Improve Quality
Six Sigma performance is a worthy business goal. However, the investment required to train
Green Belts and Black Belts is significant, as is the cultural shift that may be needed to
embrace advanced statistical methods. Fortunately, it is not an all-or-nothing proposition for
your organization. You can begin the journey simply by enhancing current process
improvement techniques.
Ishikawa Diagram
Definition: A graphic tool used to explore and display opinion about sources of variation in a
process. (Also called a Cause and Effect Chart or Fishbone Diagram.) Ishikawa diagrams (also called
fishbone diagrams, cause-and-effect diagrams or Fishikawa) are diagrams that show the causes of a
certain event -- created by Kaoru Ishikawa (1990).[1] Common uses of the Ishikawa diagram are
product design and quality defect prevention, to identify potential factors causing an overall effect.
Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major
categories to identify these sources of variation.
The figure below shows an Ishikawa diagram. Note that this tool is referred to by several different names: Ishikawa diagram, Cause and Effect diagram, Fishbone and Root Cause Analysis. These names all refer to the same tool. The first name is after the inventor of the tool, K. Ishikawa (1969), who first used the technique in the 1960s. Cause and Effect also aptly describes the tool, since the tool is used to capture the causes of a particular effect and the relationships between cause and effect. The term fishbone is used to describe the look of the diagram on paper. The basic use of the tool is to find root causes of problems; hence, this last name.
When to Use a Fishbone Diagram
1. When you are not able to identify possible causes for a problem.
2. When your team's thinking tends to fall into ruts and you don't know where to start.
How to Construct (a data-structure sketch of the resulting diagram follows this list):
• Place the KQC of the process in a box on the right.
• Have the team generate and clarify all the potential sources of variation.
• Number the potential sources of variation.
• Sort the potential sources of variation into the decision matrix on page 26-6. Put each source's number in the appropriate column.
• Use an affinity diagram to sort the process variables into naturally related groups. The labels of these groups are the names for the major bones on the Ishikawa diagram.
• Place the process variables on the appropriate bones of the Ishikawa diagram.
• Combine each bone in turn, ensuring that the process variables are specific, measurable, and controllable. If they are not, branch or "explode" the process variables until the ends of the branches are specific, measurable, and controllable.
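The sketch referenced above; the effect and causes are hypothetical, and the point is only that a fishbone is a grouping of sources of variation under major-category bones that all point at one effect:

```python
fishbone = {
    "effect": "Late newspaper delivery",   # the KQC at the head of the fish
    "bones": {
        "People":      ["new carrier on route", "fatigue"],
        "Methods":     ["route not optimised"],
        "Machines":    ["van breakdowns"],
        "Environment": ["weather delays"],
    },
}

# Print the diagram as an indented outline
print("Effect:", fishbone["effect"])
for category, causes in fishbone["bones"].items():
    print(category)
    for cause in causes:
        print("   -", cause)
```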
Tip:
• Take care to identify causes rather than symptoms.
• Post diagrams to stimulate thinking and get input from other staff.
• Self-adhesive notes can be used to construct an Ishikawa diagram. Sources of variation can be rearranged to reflect appropriate categories with minimal rework.
• Ensure that the ideas placed on the Ishikawa diagram are process variables, not special causes, other KQCs, tampering, etc.
• Review the quick fixes and rephrase them, if possible, so that they are process variables.
Consider this figure. The basic concept in the fishbone diagram is that the name of a basic problem is
entered at the right of the diagram at the end of the main 'bone.' This is the problem of interest. At
an angle to this main bone are located typically three to six sub-bones which are the contributing
general causes to the problem under consideration. Associated with each of the sub-bones are the
causes which are responsible for the problem designated. This subdivision into ever increasing
specificity continues as long as the problem areas can be further subdivided. The practical maximum
depth of this tree is usually about four or five levels. When the fishbone is complete, one has a
rather complete picture of all the possibilities about what could be the root cause for the designated
problem.
The fishbone diagram can be used by individuals or teams; probably most effectively by a group. A
typical utilization is the drawing of a fishbone diagram on a blackboard by a team leader who first
asserts the main problem and asks for assistance from the group to determine the main causes
which are subsequently drawn on the board as the main bones of the diagram. The team assists by
making suggestions and, eventually, the entire cause and effect diagram is filled out. Once the entire
fishbone is complete, team discussion takes place to decide what are the most likely root causes of
the problem. These causes are circled to indicate items that should be acted upon, and the use of
the fishbone tool is complete.
The Ishikawa diagram, like most quality tools, is a visualization and knowledge organization tool.
Simply collecting the ideas of a group in a systematic way facilitates the understanding and ultimate
diagnosis of the problem. Several computer tools have been created for assisting in creating
Ishikawa diagrams. A tool created by the Japanese Union of Scientists and Engineers (JUSE) provides
a rather rigid tool with a limited number of bones. Other similar tools can be created using various
commercial tools. One example of creating a fishbone diagram is shown in an upcoming chapter.
Only one tool has been created that adds computer analysis to the fishbone. Bourne et al. (1991)
reported using Dempster-Shafer theory (Shafer and Logan, 1987) to systematically organize the
beliefs about the various causes that contribute to the main problem. Based on the idea that the
main problem has a total belief of one, each remaining bone has a belief assigned to it based on
several factors; these include the history of problems of a given bone, events and their causal
relationship to the bone, and the belief of the user of the tool about the likelihood that any
particular bone is the cause of the problem.
Purpose: To clearly illustrate the various sources of variation affecting a given KQC by sorting and relating the sources to the effect.
The categories typically include:
• People: Anyone involved with the process
• Methods: How the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations and laws
• Machines: Any equipment, computers, tools etc. required to accomplish the job
• Materials: Raw materials, parts, pens, paper, etc. used to produce the final product
• Measurements: Data generated from the process that are used to evaluate its quality
• Environment: The conditions, such as location, time, temperature, and culture in which the process operates
Causes
Causes in the diagram are often categorized, such as to the 8 M's, described below. Cause-and-effect diagrams can reveal key relationships among various variables, and the possible causes provide additional insight into process behavior.
Causes can be derived from brainstorming sessions. These groups can then be labeled as categories of the fishbone. They will typically be one of the traditional categories mentioned above but may be something unique to the application in a specific case. Causes can be traced back to root causes with the 5 Whys technique.
Typical categories are:
The 8 Ms (used in manufacturing)
• Machine (technology)
• Method (process)
• Material (includes raw material, consumables and information)
• Man Power (physical work)/Mind Power (brain work): Kaizens, Suggestions
• Measurement (inspection)
• Milieu/Mother Nature (environment)
• Management/Money Power
• Maintenance
The 8 Ps (used in service industry)
• Product=Service
• Price
• Place
• Promotion/Entertainment
• People (key person)
• Process
• Physical Evidence
• Productivity & Quality
The 4 Ss (used in service industry)
• Surroundings
• Suppliers
• Systems
• Skills
Questions to ask while building a Fishbone Diagram
• People
– Was the document properly interpreted? – Was the information properly disseminated? – Did the recipient understand the information? – Was the proper training to perform the task administered to the person? – Was too much judgment required to perform the task? – Were guidelines for judgment available? – Did the environment influence the actions of the individual? – Are there distractions in the workplace? – Is fatigue a mitigating factor? – How much experience does the individual have in performing this task?
• Machines
– Was the correct tool used? – Are files saved with the correct extension to the correct location? – Is the equipment affected by the environment? – Is the equipment being properly maintained (i.e., daily/weekly/monthly preventative maintenance schedule)? – Does the software or hardware need to be updated? – Does the equipment or software have the features to support our needs/usage? – Was the machine properly programmed? – Is the tooling/fixturing adequate for the job? – Does the machine have an adequate guard? – Was the equipment used within its capabilities and limitations? – Are all controls, including the emergency stop button, clearly labeled and/or color coded or size differentiated? – Is the equipment the right application for the given job?
• Measurement
– Does the gauge have a valid calibration date? – Was the proper gauge used to measure the part, process, chemical, compound, etc.? – Was a gauge capability study ever performed? – Do measurements vary significantly from operator to operator? – Do operators have a tough time using the prescribed gauge? – Is the gauge fixturing adequate? – Does the gauge have proper measurement resolution? – Did the environment influence the measurements taken?
• Material (includes raw material, consumables and information)
– Is all needed information available and accurate? – Can information be verified or cross-checked? – Has any information changed recently / do we have a way of keeping the information up to date? – What happens if we don't have all of the information we need? – Is a Material Safety Data Sheet (MSDS) readily available? – Was the material properly tested? – Was the material substituted? – Is the supplier's process defined and controlled? – Were quality requirements adequate for part function? – Was the material contaminated? – Was the material handled properly (stored, dispensed, used & disposed)?
• Environment
– Is the process affected by temperature changes over the course of a day? – Is the process affected by humidity, vibration, noise, lighting, etc.? – Does the process run in a controlled environment? – Are associates distracted by noise, uncomfortable temperatures, fluorescent lighting, etc.?
• Method
– Was the canister, barrel, etc. labeled properly? – Were the workers trained properly in the procedure? – Was the testing performed statistically significant? – Was data tested for true root cause? – How many "if necessary" and "approximately" phrases are found in this process? – Was this a process generated by an Integrated Product Development (IPD) Team? – Did the IPD Team employ Design for Environmental (DFE) principles? – Has a capability study ever been performed for this process? – Is the process under Statistical Process Control (SPC)? – Are the work instructions clearly written? – Are mistake-proofing devices/techniques employed? – Are the work instructions complete? – Is the tooling adequately designed and controlled? – Is handling/packaging adequately specified? – Was the process changed? – Was the design changed? – Was a process Failure Modes Effects Analysis (FMEA) ever performed? – Was adequate sampling done? – Are features of the process critical to safety clearly spelled out to the Operator?
Criticism of Ishikawa Diagrams
In a discussion of the nature of a cause it is customary to distinguish between necessary and sufficient conditions for the occurrence of an event. A necessary condition for the occurrence of a specified event is a circumstance in whose absence the event cannot occur. A sufficient condition for the occurrence of an event is a circumstance in whose presence the event must occur. Ishikawa diagrams have been criticized for failing to make the distinction between necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this distinction.
Advantages:
• Different opinions through teamwork
• Easy to apply
• Little effort to practise
• Better understanding of causes and effects
Disadvantages:
• No clarity in very complex problems
• Interactions and chronological dependence can't be displayed
To successfully build a cause and effect diagram:
1. Be sure everyone agrees on the effect or problem statement before beginning.
2. Be succinct.
3. For each node, think what could be its causes. Add them to the tree.
4. Pursue each line of causality back to its root cause (a minimal sketch of this traversal appears below).
5. Consider grafting relatively empty branches onto others.
6. Consider splitting up overcrowded branches.
7. Consider which root causes are most likely to merit further investigation.
Other uses for the Cause and Effect tool include organization diagramming, parts hierarchies, project planning, tree diagrams, and the 5 Whys.
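As referenced in step 4, here is a minimal sketch (in Python) of a cause-and-effect tree and a traversal that pursues each line of causality back to its root causes; the categories and causes are illustrative only.

# A cause-and-effect tree as nested dicts; an empty dict marks a root cause.
diagram = {
    "effect": "Out-of-spec parts",
    "causes": {
        "Machine": {"Worn tooling": {}, "No preventive maintenance": {}},
        "Measurement": {"Gauge out of calibration": {}},
        "Method": {"Unclear work instructions": {"No document control": {}}},
    },
}

def root_causes(tree, path=()):
    """Yield each leaf (root cause) together with the branch leading to it."""
    for cause, subtree in tree.items():
        if subtree:
            yield from root_causes(subtree, path + (cause,))
        else:
            yield path + (cause,)

for chain in root_causes(diagram["causes"]):
    print(" -> ".join(chain))

Counting leaves per category also shows at a glance which branches are relatively empty (step 5) or overcrowded (step 6).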
Creating a Fish Bone Diagram
Steps and activities:
1. Identify the problem: Write the problem/issue to be studied in the "head of the fish". From this box originates the main branch (the 'fish spine') of the diagram.
2. Identify the major factors involved: Brainstorm the major categories of causes of the problem and label each bone of the fish, writing the categories of causes as branches from the main arrow. If this is difficult, use generic headings such as the 6 M's (Man, Machine, Measurement, Material, Environment, Method), the 8 P's (Product, Price, Place, Promotion, People, Processes, Procedures, Policies) or the 4 S's (Surroundings, Suppliers, Systems, Skills).
3. Identify possible causes: Brainstorm all the possible causes of the problem, writing each cause as a branch from the appropriate category and sub-causes branching off the causes. When the group runs out of ideas, focus attention on places on the chart where ideas are few, until an adequate amount of detail has been provided under each major category.
4. Interpret your diagram: Look for causes that appear repeatedly; these become the most likely causes. From the most likely causes, the team should reach consensus on listing those items in priority order, with the first item being the most probable cause.
Metrics For Small Organizations
Small Organization
Small organizations, defined as the opposite of large organizations, have fewer than 200 employees, although it should be noted that many of the companies studied have 30 or fewer employees. This is noted because businesses with more than some arbitrary number of employees, say 100, may be considered large by organizations with fewer employees, and sufficient in manpower to provide the resources typically associated with the needs of industry-accepted software process maturity models (SPMM).
REASONS FOR SMALL ORGANIZATION APPREHENSION TO ADOPT SOFTWARE
PROCESS MODELS
Small organizations may first balk at the seemingly prohibitive cost of adopting SPMM. One study suggests that the cost to implement the Capability Maturity Model is between $490 and $2,004 per person, with the median being $1,475, and that achieving the next level of the Capability Maturity Model can cost upwards of $1,000,000. In addition, software product assessment can cost between $45,000 and $100,000 (a minimal cost sketch appears below). Meeting the goals of some key processes can be financially taxing, and it may be necessary to tailor the SPMM's key process areas.
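A minimal sketch (in Python) of the per-person arithmetic behind that concern, using the figures quoted above; the 30-person head count is hypothetical.

# Per-person CMM implementation cost figures cited in the study above.
PER_PERSON_LOW, PER_PERSON_MEDIAN, PER_PERSON_HIGH = 490, 1_475, 2_004

def adoption_cost(head_count):
    """Return (low, median, high) implementation cost estimates."""
    return (head_count * PER_PERSON_LOW,
            head_count * PER_PERSON_MEDIAN,
            head_count * PER_PERSON_HIGH)

low, median, high = adoption_cost(30)  # a hypothetical 30-person organization
print(f"low=${low:,}  median=${median:,}  high=${high:,}")
# low=$14,700  median=$44,250  high=$60,120

Even at the median, the implementation cost for such an organization approaches the lower bound of a product assessment, which is why tailoring the key process areas is attractive to small organizations.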
WHY SMALL ORGANIZATIONS SHOULD ADOPT SOFTWARE PROCESS MODELS
Despite the fundamental size differences between large and small organizations, it is possible for small organizations to overcome the hurdles that SPMM present. It has been found that structuring intentions so that software developers can generate the necessary components efficiently lessens the burden inherent in guessing. The organization is also able to keep valuable resources from having to determine the scope and intent of the project after the development phase has begun.
More Readily Predict Issues
Greatly enhanced levels of predictability can be achieved by SPMM adoption. One of the most significant benefits of adopting SPMM is being able to gather data about past and current projects and use that data to apply a set of metrics that allows the organization to measure weaknesses, ultimately being more readily able to predict where failures may occur (a minimal sketch appears below). Predictability alone is an incredible asset from which a small organization can tremendously benefit.
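As an illustration of the kind of metric involved, here is a minimal sketch (in Python) that computes defect density from past-project data and flags likely trouble spots; the project names and figures are hypothetical.

# Hypothetical historical data: size in KLOC and defects found per project.
projects = [
    {"name": "billing", "kloc": 12, "defects": 96},
    {"name": "reports", "kloc": 8,  "defects": 24},
    {"name": "gateway", "kloc": 20, "defects": 240},
]

densities = {p["name"]: p["defects"] / p["kloc"] for p in projects}
average = sum(densities.values()) / len(densities)

for name, density in densities.items():
    flag = "  <- above average: likely failure area" if density > average else ""
    print(f"{name}: {density:.1f} defects/KLOC{flag}")

Trends in such metrics across several projects are what let the organization predict, rather than discover, where failures may occur.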
Increase Workflow
The use of SPMM increases productivity and workflow by providing a standard by which the organization can abide. SPMM are the assembly line of the software industry – they provide the team of developers a structure by which they may efficiently and effectively deliver products. A structured environment is extremely beneficial in helping to prevent the unnecessary deviation found in organizations that adhere to no SPMM.
Achieve Better Marketability and Competitiveness
Successful implementations of SPMM include relationships with larger companies with greater process maturity, and also becoming a member of an organization that helps with software process improvement. Small organizations that adopt SPMM are more marketable and, from that, more competitive. Growing organizations require the operational efficiency that SPMM provide, and that efficiency is seen by the market as maturity and capability. SPMM adherence is expected by other or larger entities. Organizations that place bids on contracts are given preference if they show adherence to SPMM. Adopting and adhering to a model is a way of telling potential customers that the resulting product is one worthy of emerging from the crowded marketplace.
Improve Customer Satisfaction
Of course, the ultimate goal of an organization is to remain in business, whether it be for money or altruistic purposes. The way to achieve that is through customer satisfaction, and small organizations can significantly increase the level of customer satisfaction by adopting SPMM. SPMM adherence shows the customer that the organization is serious in its venture and assures the customer that the product meets quality assurance standards – all this can lead to improved customer satisfaction before they even have the final product.
ANALYSIS OF SMALL ORGANIZATION SOFTWARE PROCESS MODEL ADOPTION AND EXECUTION
Small organizations have adopted CMMI with great success despite initial perceived negatives. They see organizational success after applying the model and can expect to grow their business because of SPMM adoption. CMMI improves the quality of the product and its perception both within and from outside the organization, helping to give them the essential skills and structure necessary to produce better software.
Adopting the Model
Small organizations often believe they can't afford expensive models because of perceived implementation cost while still being able to bid low on competitive contracts. They also feel that they simply do not have the manpower and resources necessary to be able to adopt SPMM, but it is of utmost importance for the small organization to establish a continuous improvement activity. Re-organization, education, and new hires are all potential necessities when trying to comply with CMMI, while establishing configuration and requirement procedures is a priority. Initially, it seems easy to identify technical area improvements and hard to analyze organizational improvements. The organization may need to perform organizational improvements before being able to apply CMMI to software operations. Small organization SPMM implementation strategy has higher employee involvement and flexibility than large organizations; the main difference between small and large organizations is that smaller organizations adapt to change or instability through exploration. It has been shown that small organizations can improve business by implementing SPMM elements and that size does not affect SPMM benefits. Small organization size also allows for all employees to be trained, leading to better cohesion and understanding of why the process was adopted and how it applies to software development – versus larger organizations, where time and money may inhibit that, as well as less cohesive groups.
Organizational Improvement
CMMI is a framework that other projects can adapt to, not just a one-off model, and it is enterprise-wide. Adoption of CMMI for small organizations can begin with few written policies. Moving to adopt some other models, then fully adopting CMMI while achieving the target capability level within one year of beginning the model, is a strategy employed by some fledgling businesses first entering SPMM maturity. Small organization adoption objectives can include fitting in with their current system, having little overhead, not interfering with projects, and having measurable pay-offs and benefits, requiring less software rework, and providing national recognition.
**************