INF5180 Third delivery

INF5180 Product and Process Improvement in Software Development
UNIVERSITY OF OSLO
Faculty of Mathematics and Natural
Sciences
INF5180 – Product and Process
Improvement in Software Development
Autumn 2006
Redesign of a SPI system to make it more
sustainable: A case of COBOL software
maintenance at the Norwegian
Directorate of Taxes
Written by:
Petter Øgland
Draft version 3.1
Delivered: 28.11.06
Fall 2006
Page 1 of 24
INF5180 Product and Process Improvement in Software Development
Abstract
We argue that software process improvement (SPI) systems are often difficult to design
and implement, and that they have a tendency to break down after a relatively short
period of time. While there seems to be limited research on this for SPI systems, in the
case of total quality management (TQM) systems there is a body of knowledge on why
such systems break down and what can be done to make them more sustainable
(Deming, 1992). We notice that Deming’s theory links with action research (AR), and
provides a similar but more profound alternative to SPI cycles like IDEAL, QIP and
PDCA.
We have tested this idea of using AR by collecting data from an SPI project run among
COBOL programmers at the Norwegian Directorate of Taxes. The results show six years
of continuous improvement. When we compare the results against the way we have
interpreted Deming’s theory for doing SPI, we notice that the SPI system has survived for
at least six years, indicating at least a certain level of sustainability, as most SPI systems
last no longer than three years.
Table of Contents
Abstract ............................................................................................................................... 2
1 Introduction ................................................................................................................. 4
2 Theoretical framework ................................................................................................ 6
2.1 Appreciation of a system .................................................................................... 6
2.2 Understanding of variation ................................................................................. 7
2.3 Theory of knowledge .......................................................................................... 7
2.4 Psychology .......................................................................................................... 8
3 Methodology ............................................................................................................... 9
3.1 Introduction ......................................................................................................... 9
3.2 Population ........................................................................................................... 9
3.3 Research instruments ........................................................................................ 10
3.4 Data collection .................................................................................................. 10
4 Case description ........................................................................................................ 11
4.1 The COBOL software maintenance system ...................................................... 11
4.2 Measurements and feedback ............................................................................. 11
4.3 Changing and integrating the SPI system ......................................................... 13
4.3.1 Changes in data collection procedures ...................................................... 13
4.3.2 Changes in presentation formats ............................................................... 13
4.3.3 Integration with other systems .................................................................. 14
4.4 Understanding programmer psychology ........................................................... 14
5 Discussion ................................................................................................................. 16
5.1 Overview of findings ........................................................................................ 16
5.1.1 Systems theory .......................................................................................... 16
5.1.2 Theory of variation ................................................................................... 16
5.1.3 Theory of knowledge ................................................................................ 17
5.1.4 Psychology ................................................................................................ 17
5.2 A consideration of the findings in light of existing research studies ................ 18
5.3 Implications for the study of the current theory................................................ 18
6 Conclusion ................................................................................................................ 19
References ......................................................................................................................... 20
Appendix ........................................................................................................................... 24
1 Introduction
The Capability Maturity Model (CMM) has been described as an application of the
concepts of Total Quality Management (TQM) to software development and maintenance
(Paulk, 1995). W. E. Deming (1900-1993) has often been mentioned as one of the
leading authorities in the TQM movement, and his name is often quoted in software
process improvement (SPI) literature (e.g. Humphrey, 1989; Florac & Carleton, 1999;
Chrissis, Konrad & Shrum, 2003; Poppendieck & Poppendieck, 2003; Sommerville,
2004; Ahern, Clouse & Turner, 2004; Boehm & Turner, 2004).
Deming’s systematic summary of his thinking on performance improvement, arguably
his most important contribution to management theory, was published posthumously
(Deming, 1994). In this final work, he puts his ontology, epistemology and ethics into an
integrated system of thought (figure 1).
Figure 1 – Deming’s “System of Profound Knowledge” (www.pmi.co.uk/values)
In our interpretation, Deming’s main contribution to management theory has to do with
his contributions to the theory of scientific management (Taylor, 1911) through the
application of statistical thinking and statistical methods. More specifically, we believe
his main insight was to link Shewhart’s view on the scientific method from the
viewpoint of quality control (Shewhart, 1939) with the early writing on operations
research (Churchman, Ackoff & Arnoff, 1957) into a modernized philosophy of scientific
management (Tsutsui, 1998: chapter 6).
Although Deming was working as a consultant on statistical methods in industry, he was
also a professor at New York University with more than 150 scientific publications.
Some of his books (Deming, 1986; 1994) read more like comments on the philosophy of
science for doing process improvement than engineering advice. In this area, however,
we believe there may be a gap in the SPI literature, as it seems to focus on how to
build systems for software process improvement rather than how to build systems
(instruments) for researching software process improvement.
The aim of this document is to investigate whether it is possible to describe a software
process improvement system from the perspective of being an instrument for process
improvement research.
Our argument goes as follows: In chapter 2 we will give a short summary of Deming’s
systematic philosophy, as viewed through the perspective of SPI. In chapter 3 we will
explain some SPI challenges at the Norwegian Directorate of Taxes (NTAX) and how we
designed an SPI system as an instrument for doing SPI research. In chapter 4 we present
a summary of the results of the experiment. In chapter 5 we discuss and analyze the case
through the use of Deming’s theory.
2 Theoretical framework
In order to understand management and process improvement, Deming (1994) suggests
looking at four components that are all related to each other:

• Appreciation of a system
• Understanding variation
• Theory of knowledge
• Psychology
If we were to use this framework for the behavioral sciences, such as Tolman’s studies of
rats finding food by running through mazes (Wikipedia, 2006a), the mazes would be the
systems, the variation could be the variation in the time it takes the rats to find food,
theory of knowledge relates to how we conceptualize the patterns of behavior from the
experiments, and the psychology would be the psychology of the rats.
While Tolman was interested in carrying out such experiments in order to understand
aspects of cognition in animals and humans, Deming’s area of research was how to
optimize systems of workflow. If we think of a software development company
(including suppliers and customers) as rats in a maze, the problem is to coordinate the
behavior of the rats, so the total workflow reaches an optimum for the total system.
2.1 Appreciation of a system
Focusing on individual events rather than seeing the organization as a total system may
cause workflow problems, like unclear expectations, poor predictions, optimal solutions
for one area causing havoc in another area, and people getting blamed for events they
have little or no control over.
Software processes and management systems are a way of introducing systems into a
software development organization (Sommerville, 2004). Requirements standards, like
some of the ISO, IEEE and SEI standards, can be used for assessing current practice and
provide goals for structural improvements.
If we look at the SEI-CMM framework (Humphrey, 1989; Zahran, 1997), in addition to
the roles of the software developers and the managers, there are additional roles defined,
such as software quality assurance (SQA) and the software engineering process group
(SEPG). Both SQA and SEPG fulfil roles that are more or less external to the production
system. Why the CMM family of models seems to differentiate between the SQA and
the SEPG (i.e. quality assurance and process improvement) is not fully clear, as most of
the references in the SWEBOK on software engineering process and software quality are
the same (Abran & Moore, 2004: chapters 9 & 11). However, from the perspective of
Deming, it seems reasonable to think of the SQA people as organizational scientists,
collecting data in order to understand how the system works, while the SEPG people are
organizational engineers, using the analysis from the SQA people in order to redesign
processes.
2.2 Understanding of variation
Deming writes (1986: pp. 475-6): “There is no such thing as arrival exactly on time. In
fact, exactly on time can not be defined. This principle came to my mind one day in
Japan as I stepped on to the railway platform and observed the time to be six seconds
before the scheduled time of arrival. Of course, I observed, it has to be ahead half the
time, and behind half the time, if its performance is on time”.
The idea behind statistical process control (SPC) is to help distinguish between variation
inherent in the process and signals indicating something unexpected. Not being able to
distinguish between process noise and process signals may lead to wrong conclusions and
wrong decisions. One may see trends where there are no trends, and miss trends where
there are trends. One may not be able to understand past performance, and may not be
able to predict future performance.
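The distinction between noise and signals is operationalized through control limits. The sketch below is a minimal illustration using hypothetical arrival-time data (in the spirit of Deming's railway example above), not any NTAX measurements; it estimates 3-sigma limits for a Shewhart individuals chart from the average moving range and flags points outside the limits as signals:

```python
# Minimal sketch of a Shewhart individuals (XmR) chart check.
# The arrival-time data are hypothetical; sigma is estimated from the
# average moving range, as is conventional for individuals charts.

def control_limits(series):
    """Centre line and 3-sigma limits estimated from the average moving range."""
    avg = sum(series) / len(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # d2 constant for subgroups of size 2
    return avg, avg - 3 * sigma, avg + 3 * sigma

def signals(series):
    """Indices of points outside the control limits (candidate special causes)."""
    avg, lcl, ucl = control_limits(series)
    return [i for i, x in enumerate(series) if x < lcl or x > ucl]

# Seconds early (-) or late (+) for ten train arrivals (hypothetical):
arrivals = [-4, 2, -6, 3, -1, 5, -3, 60, -2, 1]
print(signals(arrivals))  # only the 60-second delay is flagged as a signal
```

Points inside the limits are treated as common-cause noise; reacting to them individually would be exactly the kind of over-adjustment Deming warns against.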
2.3 Theory of knowledge
From the perspective of trying to see quality control as science, the basic argument comes
from Shewhart (1939: pp. 44-5): “It may be helpful to think of the three steps in the mass
production process as steps in the scientific method. In this sense, specification,
production, and inspection correspond respectively to making a hypothesis, carrying out
an experiment, and testing the hypothesis. The three steps constitute a dynamic process
of acquiring knowledge. From this viewpoint, it might be better to show them as forming
a sort of spiral gradually approaching a circular path which would represent the idealized
case where no evidence is found (during inspection) to indicate a need for changing the
specification (or scientific hypothesis) no matter how many times we repeat the three
steps. Mass production viewed this way constitutes a continuing and self-corrective method
for making the most efficient use of raw and fabricated materials”.
Deming refers to this cycle as the “Shewhart cycle” or the Plan-Do-Check-Act (PDCA)
cycle (Deming, 1986: p. 88). This is the learning cycle of the Shewhart/Deming quality
management thinking, producing knowledge about the workflow. When going into more
philosophical detail on the theory of knowledge (epistemology), he (Deming, 1994: pp.
101-107) talks about prediction. In other words, from the perspective of the problems
Shewhart and Deming want to understand, a proper understanding corresponds to having
the statistical distribution of the parameter of concern. As a consequence of this, both
Shewhart and Deming refer to C. I. Lewis (1929).
Unlike other American pragmatists, Lewis based his philosophy on Kant and thus
believes in Cartesian dualism. This means that he believes in cognitive maps (mental
models) or what could perhaps be described as computer ontologies (Wikipedia, 2006b).
From the viewpoint of TQM and SPI this may perhaps be interpreted in a simplified
manner as knowledge consisting of pairs of ontologies and epistemologies, such as
flowcharts and SPC diagrams, or Ishikawa diagrams and Pareto charts. In other words,
the Shewhart/Deming definition of knowledge seems to be quite similar to what has been
suggested among people researching artificial intelligence (Russell & Norvig, 2003;
Hawkins & Blakeslee, 2004).
Given this perspective, learning can be measured along two dimensions. Firstly, a
change in understanding will cause a change in the ontology model (i.e. a rewriting of a
flowchart or an Ishikawa diagram). Secondly, a change in understanding will result in a
change in behaviour that will cause a new statistical distribution.
If the theory (ontology model) cannot be changed, behaviour remains the same, and
there will be no sustainable improvements.
2.4 Psychology
Sometimes we may have simplistic and wrong theories about why people behave the way
they do. Without an understanding of psychology, it is difficult to motivate people and
difficult to predict how they will behave.
3 Methodology
3.1 Introduction
In 1997, one of the COBOL programmers at the Norwegian Directorate of Taxes
(NTAX) died, and other programmers had to step in. Due to the lack of a standard way
of programming, this caused major problems, and everybody quickly realized that there
was a severe need for a way of programming that would make the software maintainable.
A standard was suggested by the programmers, it was accepted by the management, it
was monitored by quality assurance personnel, and it has now been running for six years,
producing statistical trends that show continuous improvement. The case is documented
through NTAX reports and has also been described and analysed from the perspective of
sociological analysis (Øgland, 2006a) and from the perspective of how the improvement
process gradually changed into an action research process (Øgland, 2006b).
3.2 Population
In this study we are primarily interested in describing and analyzing the case from the
perspective of the four components in Deming’s “system of profound knowledge”
(Deming, 1994: chapter 4).
This section describes the research setting and the empirical strategy adopted. The
research was conducted at the Software Maintenance Section at the IT department of the
Norwegian Directorate of Taxes (NTAX).
[Organizational chart: the IT Department comprises the Information Security staff and
the Systems Development, Systems Maintenance, Systems Production and Technical
Infrastructure sections. The Systems Maintenance Section contains Group 1 (projects
MVA, FOS, DSF), Group 2 (projects GLD, ER@, FLT) and Group 4 (projects PSA,
LFP, LEP, RISK); the research scientist is attached to the Systems Development Section.]
Figure 2 – Simplified organizational chart for the IT department at NTAX
Ten of the NTAX mainframe information systems are based on COBOL software, and
need to be maintained on an annual basis. Seven of the systems (LFP, LEP, PSA, ER@,
GLD, FLT, FOS) follow annual life cycles, meaning that maintenance and COBOL
standardization are carried out during specific times of the year. The remaining three
(MVA, DSF, RISK) are maintained on an ongoing basis.
The maintenance is taken care of by approximately 40 programmers, with the projects
delegated among three groups. The distribution between male and female programmers
is about 50/50. The age distribution ranges from about the mid-thirties to the mid-sixties,
with most of the people being between 40 and 50. Few of the programmers have formal
computer education, although the employment policy in recent years has focused on
getting people with a formal computer background.
3.3 Research instruments
The research was designed as part of a quality improvement strategy in 2000, and, as
illustrated in figure 2, the research is carried out by a researcher who is part of the
organization. In order to handle the problems of doing research in one’s own
organization, an action research approach has been adopted (Coghlan & Brannick, 2001).
The study is part of the broader context of an action research initiative dealing with
several NTAX processes. The case study is based on the empirical data collected from
the COBOL standardization process 1998-2006 by the researcher who during the period
of research held the position of quality manager, a function organized as a part of the
system development section (figure 2).
3.4 Data collection
The empirical data was collected through unstructured interviews, document analysis and
observation. Interviews were held with programmers when presenting them with results
from document analysis. Managers were mostly interviewed at the end of one cycle and
the beginning of the next. Document analysis consisted of going through various drafts
and final versions of the internal COBOL standard, system documentation made by the
programmers, quality statistics provided by the programmers, plus various sorts of
literature the programmers were using for designing and updating the internal COBOL
standard.
During the whole period from 2000 to the present, about 50 interviews with programmers
were conducted, about 3 interviews with group managers, about 3 interviews with the
systems maintenance section manager and 2 interviews with the IT manager. The
interviews were conducted in an improvisational manner without notes or minutes being
written.
4 Case description
As the process of collecting and analyzing data from the COBOL programmers has been
going on for several years, it seems reasonable to present the case by explaining the
development within each iteration.
4.1 The COBOL software maintenance system
The diagram in figure 3 illustrates how the software is made maintainable through a
simple cycle: define or improve the standard, develop code, evaluate the code against
the standard, and act on the findings.

[Flowchart: start → Define/improve standard → Develop code → Evaluate against standard → Act → end]

Figure 3 – Flowchart for making software maintainable
The SPI change agent (“action researcher”) believed the best way to make the metrics
system work would be to have the programmers themselves complete the standard,
present it to management for acceptance, define the metrics and create the software
needed for producing the statistics, while the role of the SPI change agent would be
restricted to statistical analysis of the measurements, as this was the only task requiring
a sort of competence (statistical competence) not found among the programmers.
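As an illustration of the kind of metrics software the programmers wrote, the sketch below counts non-conformities (NCs) in COBOL source against two made-up rules, “no GO TO statements” and “no paragraphs longer than 25 lines”; the rules, the limit and the simplified paragraph detection are our assumptions, not the actual NTAX standard:

```python
# Hypothetical sketch of an NC-counting metrics program. The two rules and
# the paragraph-detection heuristic are illustrative assumptions, not the
# actual NTAX COBOL standard.
import re

MAX_PARAGRAPH_LINES = 25  # assumed limit on paragraph length

def count_nonconformities(source: str) -> int:
    lines = [l.strip() for l in source.upper().splitlines() if l.strip()]
    # Rule 1: every GO TO statement counts as one non-conformity.
    nc = sum(1 for l in lines if re.search(r"\bGO\s+TO\b", l))
    # Rule 2: paragraphs longer than the limit. A paragraph header is
    # approximated as a single identifier followed by a period.
    in_para, para_len = False, 0
    for l in lines:
        if re.fullmatch(r"[A-Z][A-Z0-9-]*\.", l):
            if in_para and para_len > MAX_PARAGRAPH_LINES:
                nc += 1
            in_para, para_len = True, 0
        elif in_para:
            para_len += 1
    if in_para and para_len > MAX_PARAGRAPH_LINES:
        nc += 1
    return nc

sample = """
MAIN-PARA.
    PERFORM CHECK-PARA.
    GO TO EXIT-PARA.
CHECK-PARA.
    DISPLAY 'OK'.
"""
print(count_nonconformities(sample))  # → 1 (one GO TO, no over-long paragraphs)
```

Summing such counts per project gives the NC level that the statistics in section 4.2 are based on.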
4.2 Measurements and feedback
<… here are some diagrams… the data for 2006 have not yet been fully collected, so the
measurements are approximations… even though I choose to include the whole data
series for all diagrams, it might be better to comment on them in the years where
something happens; for example, the diagrams below show interesting jumps for
2003/2004 following the revision of the standard, and they should probably be presented
in connection with the fifth iteration… the problem now is that the improvement rate is
not as good as it used to be, several programmers are sabotaging the scheme, and
management no longer shows any commitment to the process… the scheme will possibly
terminate unless we come up with something clever…>
[Two charts for the period 1999–2006: a run chart of the overall NC level, and an SPC
chart of the annual improvement (Delta NC) with UCL = 22, AVG = 14 and LCL = 6]
Figure 5 – Improvement and improvement rates (NC = Non-Conformity)
<…>
[Two charts: a declining 1998–2006 time series of the NC level for the LFP project, with
the linear regression y = -16.327x + 174.36 and R² = 0.9865; and a bar chart
benchmarking the 2006 NC levels of the ten projects (PSA, FLT, FOS, RISK, LFP, DSF,
GLD, MVA, ER@, LEP)]
Figure 6 – Standardization results for one project (LFP) and benchmarking results for this project (LFP) as
compared to the other nine projects.
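The trend line in the figure is an ordinary least-squares fit. The sketch below shows how such a fit and its R² can be computed; the NC series used here is hypothetical, while the real LFP series gave y = -16.327x + 174.36 with R² = 0.9865:

```python
# Ordinary least-squares fit of a linear trend to an annual NC series.
# The data points below are hypothetical, not the actual LFP measurements.

def linear_fit(xs, ys):
    """Return (slope, intercept, r_squared) for a least-squares line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

years = list(range(1, 10))                     # 1 = 1998, ..., 9 = 2006
nc = [160, 145, 120, 110, 95, 80, 60, 40, 20]  # hypothetical NC levels
slope, intercept, r2 = linear_fit(years, nc)
print(round(slope, 1), round(r2, 3))  # a steep negative trend with high R²
```

A high R² on such a fit is what justifies using the regression line for predicting next year's NC level.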
[Bar chart benchmarking the annual NC improvement rates per project, ranging from
16.5 down to -4.7; and a scatter plot of NC level against delta NC with a least-squares
parabola, R² = 0.4758]
Figure 7 – Benchmarking against NC improvement rates and performing correlation analysis for
understanding the relationship between individual NC levels and NC improvement rates
4.3 Changing and integrating the SPI system
As the system in figure 3 was rather simple to begin with, not many changes were made
after the system was established. Basically, there have been three major types of changes:
4.3.1 Changes in data collection procedures
During the first year of collection, the time of year for selecting data from each of the
projects was chosen somewhat speculatively. When the results were given to the
programmers, they were asked how the data collection fitted with the annual life cycles,
and adjustments were made as needed.
The program used for collecting data also contained errors of various sorts, as it had not
been tested systematically before being put to use. Over time, however, the program was
adjusted several times. The biggest change was due to the COBOL standard being
revised. Historical data had to be adjusted accordingly, in order to remain usable for
identifying trends and making predictions.
Having the standard rewritten could be thought of as double-loop learning (Argyris &
Schön, 1978).
4.3.2 Changes in presentation formats
The project report has changed format every year (NTAX, 2001; 2002a; 2004; 2005;
2006). The first report was short and simple, but as we wanted to investigate the SPI
process in a more scientific manner, experiments with different theories, layouts,
structures, types of statistics etc. were carried out. Changing the project report is costly
in terms of SQA man-hours, but as the right format hasn’t been found yet, it still has to
be changed.
The individual summary statistics, trends and presentations for programmers are simpler,
and have changed less. The aim of these reports is only to give simple indications to
programmers and managers of whether there are good improvements or not, and to
identify areas for improvement.
4.3.3 Integration with other systems
A draft standard was completed and circulated among the programmers for comments.
It was then revised and presented to management for acceptance. IT management then
decided to ask the director general of the Directorate of Taxes to give a lecture to the
programmers on the importance of following standards. This was followed by one of the
programmers developing the metrics software, in order to produce data on how the
various software packages deviated from the requirements of the standard. The SPI
change agent was given the data, analysed them, and discussed the results with the programmers.
[Lifecycle diagram with five quality control gates: I. Start development (V10.1);
II. Approve requirements specification (V10.2); III. Approve design specification
(V10.3); IV. Approve system (V10.4-6; N7); V. Approve changes (V10.1). The phases
requirements specification, design, implementation and operations/maintenance produce
documents such as the requirements specification, solution description, test plan, system
description, test report, source code, user guide, operations guide, experience report and
approved changes.]
Figure 4 – Generic NTAX software lifecycle model (adapted from NTAX, 1998)
In figure 4 we illustrate how the measurement of software maintainability is done as part
of quality control phase IV, as one of the measurements that are supposed to be done
with quality control procedure V10, step 4, to evaluate the system documentation.
As it had been possible to get some historical data as well as new data, the results showed
that most software projects had been producing software that got more and more filled up
with gotos, long paragraphs and other issues that the programmers themselves considered
“bad practices”. The major achievement of the first iteration was that the current
measurements defined a baseline for further improvement and that an SPI system was
now up and running. The results were documented in a project report (NTAX, 2001) that
was distributed among programmers and management.
4.4 Understanding programmer psychology
5 Discussion
5.1 Overview of findings
5.1.1 Systems theory
As illustrated in figure 3, the process of making current and future software readable and
maintainable could be seen as a system consisting of three processes:
(1) The software engineering process of defining and improving methods, such as
making sure all relevant procedures and standards exist and are maintained.
(2) The process of writing new software and maintaining old software, in compliance
with the standards and procedures.
(3) The software quality assurance process of measuring practice against procedures
and products against standards, which in this case consisted of measuring the practice
of updating the COBOL standard against the procedure for updating standards,
and measuring the COBOL software against the current version of the COBOL
software standard.
Various groups at NTAX were responsible for each of these processes. The Software
Engineering Process Group (SEPG) was responsible for the first task and the Software
Quality Assurance (SQA) group was responsible for the third task, but when it came
down to actual work, the programmers themselves were the ones who defined what the
standard should be like and what the quality control should be like.
During the description of the case, it was also mentioned that writing and maintaining
COBOL software was not something done independently of other work; it was done as
an integrated part of the system implementation within the annual life cycles of the
NTAX systems.
5.1.2 Theory of variation
Statistical process control (SPC) was used extensively for monitoring and predicting
the behavior of the COBOL software maintenance system, from the simple level of
monitoring and predicting the number of code lines in each software project to the
complicated level of monitoring and predicting the overall improvement rates.
As the maintenance cycle at NTAX is an annual cycle, it takes several years to get good
estimates of the statistical process parameters needed for SPC, and it also means that the
improvement process has to be stable during those years. As illustrated by the case
presentation and the diagrams in figure 5, it is far from certain that the improvement
process is stable. However, a pragmatic estimate of the process average in the SPC
diagram resulted in an average improvement rate of 14 units per year, with a standard
deviation of 3 units per year.
If next year’s improvement is less than 2 units or more than 26 units, then the SPI process
may be out of control; otherwise the improvement result is as expected.
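The check above can be written out as a small function. The parameters are the estimates from the case (average 14, standard deviation 3); note that the 2–26 band corresponds to k = 4 standard deviations, while conventional Shewhart limits would use k = 3:

```python
# Sketch of the out-of-control check for next year's improvement, using the
# process estimates from the case (average 14 units/year, sd 3 units/year).
# The 2-26 band in the text corresponds to k = 4; Shewhart charts normally use k = 3.

def in_control(delta_nc, avg=14.0, sigma=3.0, k=4):
    """True if the annual improvement lies within avg ± k*sigma."""
    return abs(delta_nc - avg) <= k * sigma

print(in_control(20))  # within the 2-26 band: as expected
print(in_control(30))  # outside the 2-26 band: the SPI process may be out of control
```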
5.1.3 Theory of knowledge
Once a system has been established, such as the COBOL maintenance system, production
costs should be optimized (minimized) in order to move resources into innovation
activities. From the perspective of Deming and TQM, process knowledge consists of a
procedure (flow chart), explaining how the process is carried out, and it consists of
statistical distributions. Such distributions are typically related to time, cost, quality or
whatever that might be relevant and important for the process.
In the case of the COBOL maintenance system, illustrated in figure 3, we showed in
chapter 4 how the system is broken down into operational procedures, and we have
stated that the current implementation of the system appears to result in a normally
distributed improvement rate with an average of 14 units per year and a standard
deviation of 3 units per year.
As we believe it would be difficult to increase the improvement rates, it should be easier
to reduce the costs while maintaining the same improvement rates.
As making the COBOL software more compliant with the COBOL standard was fully
integrated with the ordinary programming activities, it was not possible to measure the
cost of the actual updates. Furthermore, as neither the SEPG group nor the programmers
keep cost accounts, it would be impossible to figure out the cost of the maintenance
anyway. However, cost statistics from the SQA group show that they spend an annual
average of 99 man-hours working on COBOL maintenance activities, so an overall
reference value for improving the system could be measured against the initial index
value of 14 / 99 ≈ 0.14 units of improvement per man-hour.
The knowledge that produces our current results is explained through the flow charts and
statistics in chapter 4. What we need now is knowledge in terms of how to simplify the
tasks performed by the SQA group.
5.1.4 Psychology
As pointed out in chapter 4, using internal benchmarking seemed to stimulate the
competitive instinct among some programmers, giving them a motivational boost in
terms of cleaning up old software and making sure that new software was compliant with
the standards. Other programmers, however, seemed to ignore the benchmarks, perhaps
getting irritated or frustrated, and for one of the projects we have six years of software
getting less and less compliant with the standard every year.
There may be some insights on this problem to be gained from the literature on the
sociology and psychology of programmers (e.g. Hohmann, 1997; Weinberg, 1971), but as
“what gets measured gets done” seems to work in 9 out of 10 cases, it may also be a case
of a few “difficult” people doing as they please, no matter how much one might measure
and motivate.
Fall 2006
Page 17 of 24
INF5180 Product and Process Improvement in Software Development
5.2 A consideration of the findings in light of existing research
studies
As pointed out in chapter 2, there have been…
5.3 Implications for the study of the current theory
Comparing with the literature we have gone through so far, we have found many
references to fragments of Deming’s ideas, but no reference placing an emphasis on
using Deming’s systematic approach for performance improvement (Deming, 1994).
As we have illustrated through this case study, however, Deming’s systematic approach
is simple, flexible and seems to be just as relevant for software process improvement as
for any other type of process improvement.
6 Conclusion
We started by identifying two ways of implementing software process improvement
(SPI): either from the perspective of engineering or from the perspective of research. We
argued that following the path of total quality management (TQM) expert W. E. Deming
should imply a research approach, while most of the SPI literature based on Deming
takes the engineering approach.
We then presented Deming’s holistic philosophy of quality management, described an
SPI experiment carried out at the Norwegian Directorate of Taxes (NTAX), and
discussed the results of that experiment from the viewpoint of the Deming theory.
We used this to illustrate a way of implementing the Deming philosophy that differs
from traditional SPI in that it aims for theoretical insights as its primary goal and
expects practical improvement results as a side effect.
We believe this “action research” approach is more consistent with what Deming actually
says in his books (Deming, 1986; 1994) than with the way his ideas have been interpreted
in the SPI literature (e.g. Humphrey, 1989; Florac & Carleton, 1999; Poppendieck &
Poppendieck, 2003).
References
Ahern, D.M., Clouse, A. and Turner, R. (2004). CMMI Distilled. Second Edition. SEI
Series in Software Engineering. Addison-Wesley: Reading, Massachusetts.
Abran, A. and Moore, J.W. (2004). Swebok: Guide to the Software Engineering Body of
Knowledge. IEEE Computer Society: Los Alamitos, California.
Argyris, C. and Schön, D. (1978). Organizational Learning: A Theory of Action
Perspective. Addison-Wesley: Reading, Massachusetts.
Axelrod, R. and Cohen, M. D. (2000). Harnessing Complexity: Organizational
Implications of a Scientific Frontier. Basic Books: New York.
Bach, J. (1994) The Immaturity of the CMM, American Programmer, 7(9), 13-18.
Bach, J. (1995) Enough about Process: What We need are Heroes, IEEE Software, 12
(March), 96-98.
Boehm, B. and Turner, R. (2004). Balancing Agility and Discipline: A Guide for the
Perplexed. Addison-Wesley: Boston.
Bollinger, T. B. and McGowan, C. (1991) A Critical Look at Software Capability
Evaluations, IEEE Software, 8 (4), 25-41.
Ciborra, C. U. et al (2000). From Control to Drift: The Dynamics of Corporate
Information Infrastructures. Oxford University Press: Oxford.
Chrissis, M.B., Konrad, M. and Shrum, S. (2003). CMMI: Guidelines for Process
Integration and Product Improvement. SEI Series in Software Engineering. Addison-Wesley: Boston.
Churchman, C.W., Ackoff, R.L. and Arnoff, E.L. (1957). Introduction to Operations
Research. John Wiley & Sons: New York.
Coghlan, D. and Brannick, T. (2001). Doing Action Research in Your Own
Organization. SAGE: London.
Curtis, B. (1994) A Mature View of the CMM, American Programmer, 7 (9), 13-18.
Curtis, B. (1998) Which Comes First, the Organization or Its Processes? IEEE Software,
Nov/Dec, 10-13.
Dahlbom, B. (2000): “Postface. From Infrastructure to Networking”. In Ciborra, C. e. a.
(Ed.) From Control to Drift, Oxford University Press: Oxford.
Davenport, T.H. and Prusak, L. (1998). Working Knowledge: How Organizations
Manage What They Know. Harvard Business School Press: Boston.
Deming, W.E. (1986). Out of the Crisis. The MIT Press: Cambridge, Massachusetts.
Deming, W.E. (1994). The New Economics for Industry, Government, Education.
Second Edition. The MIT Press: Cambridge, Massachusetts.
Dybå, T., Dingsøyr, T. and Moe, N. B. (2002). Praktisk prosessforbedring: En håndbok
for IT-bedrifter. Fagbokforlaget: Bergen.
EFQM (2006). http://efqm.org, accessed 12 May 2006.
Florac, W.A. and Carleton, A.D. (1999). Measuring the Software Process: Statistical
Process Control for Software Process Improvement. SEI Series in Software Engineering.
Addison-Wesley: Reading, Massachusetts.
Fujimoto, T. (1999). The Evolution of a Manufacturing System at Toyota. Oxford
University Press: Oxford.
Hanseth, O. and Monteiro, E. (1998). Understanding Information Infrastructure. Oslo.
Hawkins, J. and Blakeslee, S. (2004). On Intelligence. Owl Books: New York.
Humphrey, W.S. (1989). Managing the Software Process. SEI Series in Software
Engineering. Addison-Wesley: Reading, Massachusetts.
Imai, M. (1986). Kaizen: The Key to Japan’s Competitive Success. McGraw-Hill/Irwin: New York.
ISO (2000a). Quality Management Systems – Terms and Definitions (ISO 9000:2000).
International Organization for Standardization: Geneva.
ISO (2000b). Quality Management Systems – Requirements (ISO 9001:2000).
International Organization for Standardization: Geneva.
ISO (2000c). Quality Management Systems – Guidelines for Performance Improvement
(ISO 9004:2000). International Organization for Standardization: Geneva.
Juran, J. (1964). Managerial Breakthrough. McGraw-Hill: New York.
Lewis, C.I. (1929). Mind and the World Order: Outline of a Theory of Knowledge.
Dover Publications: New York.
NTAX (1998). Strategisk plan for bruk av IT i skatteetaten, SKD nr 62/96, Oslo.
NTAX (2001). Opprydding og standardisering av COBOL-baserte IT-systemer, SKD nr
61/01, Norwegian Directorate of Taxes, Oslo.
NTAX (2002a). Opprydding og standardisering av COBOL-baserte IT-systemer, SKD
2002–018, Norwegian Directorate of Taxes, Oslo.
NTAX (2002b). Forbedring av IT-prosesser ved bruk av ISO 15504, SKD 2002–032,
Norwegian Directorate of Taxes, Oslo.
NTAX (2004). Opprydding og standardisering av COBOL-baserte IT-systemer, SKD
2004–001, Norwegian Directorate of Taxes, Oslo.
NTAX (2005). Opprydding og standardisering av COBOL-baserte IT-systemer, SKD
2005–003, Norwegian Directorate of Taxes, Oslo.
NTAX (2006). Opprydding og standardisering av COBOL-programvare, Norwegian
Directorate of Taxes, Oslo.
Paulk, M. (1995). The Rational Planning of (Software) Projects. Proceedings of the First
World Congress for Software Quality. San Francisco, CA, 20-22 June 1995, section 4.
Poppendieck, M. and Poppendieck, T. (2003). Lean Software Development: An Agile
Toolkit. The Agile Software Development Series. Addison-Wesley: Boston.
Russell, S. and Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Second
Edition. Prentice-Hall: London.
Shewhart, W.A. (1939). Statistical Method from the Viewpoint of Quality Control.
Dover: New York.
Sommerville, I. (2004). Software Engineering. 6th Edition. Addison-Wesley: London.
Tsutsui, W.M. (1998). Manufacturing Ideology: Scientific Management in Twentieth-Century Japan. Princeton University Press: Princeton.
Wikipedia (2006a). “Edward C. Tolman”, online at
http://en.wikipedia.org/wiki/Edward_Tolman, accessed 19 November 2006.
Wikipedia (2006b). “Ontology (Computer Science)”, online at
http://en.wikipedia.org/wiki/Ontology_%28computer_science%29, accessed 19 November 2006.
Womack, J.P., Jones, D.T. and Roos, D. (1990). The Machine That Changed the World:
The Story of Lean Production. Harper Perennial: New York.
Womack, J. P. and Jones, D. T. (2003). Lean Thinking. Second Edition. Harper
Perennial: New York.
Zahran, S. (1998). Software Process Improvement: Practical Guidelines for Business
Success. Addison-Wesley: Harlow, England.
Øgland, P. (2006a). Using internal benchmarking as strategy for cultivation: A case of
improving COBOL software maintainability. In Proceedings of the 29th Information
Systems Research in Scandinavia (IRIS 29): “Paradigms Politics Paradoxes”, 12-15
August, 2006, Helsingør.
Øgland, P. (2006b). Improving Research Methodology as Part of Doing Software
Process Improvement. Submitted to: 15th European Conference on Information Systems
(ECIS 15): "Relevant rigor - Rigorous relevance", June 07-09, 2007, St. Gallen.
Appendix
The table below shows how I assess this paper myself:

Assessment area: Connection between business conditions and the improvement plan
Objective: The relevant business framework conditions and objectives are very well
analysed and described. The measures that are put in place are directly linked to
experienced problems and objectives, and they are prioritised.

Assessment area: Use of curriculum knowledge and references
Objective: The curriculum is used extensively in the answer, with central and relevant
references. Where other literature is used as a basis, those references are also good and
relevant.

Assessment area: Argumentation
Objective: The measures that are put in place are argued for exceptionally well. The
realism of the measures is assessed in a convincing way.

Assessment area: Structure
Objective: The answer is well written, with a logical structure that leads the reader
through the improvement plan and argumentation without unnecessary repetition.