Applying Statistical Methods in Business and Industry

Deliverable 8.7 Report of the state of the art in
Applying Statistical Methods in Business and
Industry
Mission Statement for the report

To promote the wider understanding and application of contemporary and
emerging statistical methods

For Students

For people working in business and industry worldwide

And for the professional development of statistical practitioners (a statistical
practitioner is any person using statistical methods, whether formally trained or
not),

To foster best practice for the benefit of business and industry
Readership
All members of ENBIS
Contents
1. Introduction to the report, to ENBIS and pro-ENBIS
2. Industrial statistics - history and background
3. Statistical Consultancy
4. From Process improvement to Quality by design
5. Management statistics
6. Service quality
7. Risk, Finance and Insurance
8. Applying data mining methods to business and industry
9. Process Monitoring, Improvement and Control
10. Measurement system analysis
11. Design and analysis of experiments (DoE)
12. Safety and reliability engineering
13. Multivariate analysis focussing on multiscale modelling
14. Simulation
15. Communication
16. Summary, cross-fertilisation and future directions
Chapter 1
Introduction to the report, to ENBIS and pro-ENBIS
Shirley Coleman, Tony Fouweather and Dave Stewardson
The report was conceived as a deliverable in the original description of the pro-ENBIS
project proposal, as a natural consequence of the three-year working partnership
between many of the foremost players in this field. Contributions have
been volunteered from each of the work package leaders and from most of the other
project partners and members. Many of the contributions are collaborative efforts not
only ensuring that the subjects covered are dealt with from more than one perspective,
but also carrying on the initial drive of the project in promoting collaborations to
further the subject as a whole. The resulting report provides an overview of the state
of the art in business and industrial statistics in Europe.
The subject areas are dealt with in a logical manner, starting with a brief overview of
the history of industrial statistics, before tackling a variety of subjects which have
become vital to many areas of business and industry throughout the world. The
subjects and methods covered by this report are essential for any company wishing to
compete in an increasingly efficient market place.
With the current popularity of the Six Sigma movement, which seeks to embed
statistical methods and tools in business and industry, there is a growing need for
a comprehensive overview of the subject. The applications of these techniques are far
reaching, and can be extended to improve every aspect of the way a company is
managed. Often the use of these methods is the only way a business can gain
an edge over its competitors, and increasingly their use is crucial to the survival of
the business.
The report covers methodology and techniques currently in use and those about to be
launched, and includes references to miscellaneous sources of information, including
websites.
Such a collection of experts working together in partnership has been possible through
the support of the European Commission’s Fifth Framework Growth programme and
the thematic network pro-ENBIS, and also via the recently established ENBIS web-based society.
For completeness, this introductory chapter gives a brief description of ENBIS
followed by an overview of the pro-ENBIS project.
In August 1999 a group of about 20 statisticians and statistical practitioners met in
Linköping, Sweden at the end of the First International Symposium on Industrial
Statistics to initiate the creation of a European Society for Industrial and Applied Statistics
(ESIAS). As a first step, a list of interested members was drawn up and dialogue
started. The internet was identified as a feasible platform to co-ordinate cheaply and
efficiently the activities of any body formed out of these meetings. A network of
people from industry and academia from all European nations interested in applying,
promoting and facilitating the use of statistics in business and industry was created to
address two main observations:
 As with many disciplines, applied statisticians and statistical practitioners
often work in professional environments where they are rather isolated from
interactions and stimulation from like-minded professionals.
 Statistics is vital for the economic and technical development and improved
competitiveness of European industry.
By February 2000 an executive committee was formed which held a founding
meeting at EURANDOM in Eindhoven, the Netherlands, on February 26th and 27th.
The name ENBIS (European Network for Business and Industrial Statistics) was
adopted as the definitive name for the society. It was decided to have a founding
conference on December 11, 2000 in Amsterdam and during that meeting, ENBIS
was formally launched. The conference was accompanied by a 3-day workshop on
design of experiments led by Søren Bisgaard, a renowned European statistician.
The mission of ENBIS:
 Foster and facilitate the application and understanding of statistical methods to
the benefit of European business and industry,
 Provide a forum for the dynamic exchange of ideas and facilitate networking
among statistical practitioners (a statistical practitioner is any person using
statistical methods whether formally trained or not),
 Nurture interactions and professional development of statistical practitioners
regionally and internationally.
ENBIS has adopted the following points as its vision:
 To promote the widespread use of sound science driven, applied statistical
methods in European business and industry,
 That membership consists primarily of statistical practitioners from business
and industry,
 To emphasise multidisciplinary problem solving involving statistics,
 To facilitate the rapid transfer of statistical methods and related technologies
to and from business and industry,
 To link academic teaching and research in statistics with industrial and
business practice,
 To facilitate and sponsor continuing professional development,
 To keep its membership up to date in the field of statistics and related
technologies,
 To seek collaborative agreements with related organisations.
ENBIS is a web-based society and its activities can be found at www.enbis.org
ENBIS has so far arranged five annual conferences at various locations around Europe
which have allowed the showcasing of a broad spectrum of applications and generated
discussion about the use of statistics in a wide range of business and industrial areas.
ENBIS also plans to start a mid-year workshop meeting. Several ideas for new
projects are being developed and the group of partners and members continues to seek
ways to further their working relationships. Pro-ENBIS encouraged these
collaborations.
The thematic network pro-ENBIS sought to build on the success of ENBIS and to
develop partnerships within Europe to support selected projects at the forefront of
industrial and business statistics, with the specific mission “to promote the widespread
use of sound science driven, applied statistical methods in European business and
industry.”
Pro-ENBIS was contracted for three years until 31st December 2004 by the European
Commission with a budget of about 800 000 Euros. The project was co-ordinated by
the University of Newcastle upon Tyne (UK) and had contractors and members from
across Europe.
The thematic network was funded so that it could achieve specific outcomes, such as
promoting industrial statistics through workshops, industrial visits and through the
publishing of papers and articles. These activities relate to the network’s aim to
provide a forum for the dissemination of industrial statistical methodology directly
from statisticians and practitioners to European industry.
The deliverables were grouped around statistical themes, with 8 work packages.
 WP1 Design of experiments
 WP2 Data mining/warehousing
 WP3 General statistical modelling, process modelling and control
 WP4 Reliability, safety and quality improvement
 WP5 Discovering European resources and expertise
 WP6 Drafting and initiating further activity
 WP7 Management statistics
 WP8 Network management
Outputs can be viewed on the project web-site http://www.enbis.org/pro-enbis/
Achievements and the future
 ENBIS Magazine within Scientific Computing World posted free to ENBIS
members every 2 months
 George Box medal for exceptional contributions to industrial statistics
established and awarded annually at ENBIS conference
 Prizes for best presentation, young statistician and supporting manager
awarded annually at ENBIS conference
 Establish local networks in most European countries.
 Have ENBIS members in the top 10 companies in each European country
 Continue Pro-ENBIS type workshop and research activities.
 Promote ENBIS membership widely
Presidents of ENBIS:
Henry Wynn (2000), Dave Stewardson (2001), Tony Greenfield (2002), Poul
Thyregod (2003), Shirley Coleman (2004), Fabrizio Ruggeri (2005).
Chapter 2
Industrial Statistics – History and Background
Jeroen de Mast, Ronald Does
Industrial statistics in a historical perspective
The quality situation pre 1980:
Before the industrial revolution, quality was assured by the guild system and
craftsmanship. The industrial revolution led to standardization, specifications and the
invention of management as a discipline. There followed Taylorism/Fordism, which
became the standard way of working in the West from 1910 until 1980. Mass
fabrication spread across Europe and the western world. Quality was seen as a trade-off
with productivity. There was an inward perspective, with the focus firmly on
efficiency and volume. The usual organization was command-and-control.
Early statistical contributions to industry:
The earliest well-documented work was by Gosset during his time at the Guinness
brewery in Dublin. It was there that he proposed the t-test, which he famously
published under the pseudonym ‘Student’. Walter Shewhart developed control charts and statistical
process control during his time in munitions. He promoted statistics as a catalyst to
empirical learning. Dodge perfected inspection plans for use in production.
Statistical contributions post World War II:
George Box worked extensively on experimentation and response surface
methodology for process optimisation. W. Edwards Deming went to Japan and created
the movement towards company-wide quality improvement.
Deming’s work ignited the Japanese revolution, in which quality and productivity are
seen as synergistic. This was accompanied by a new outward-in perspective (what and
how to produce was determined not by a company’s tradition, technical specialists,
etc., but by what the customer needs and wants and how much he is willing to pay for
it). This led to decentralization and empowerment and the involvement of top
management. It was the start of a race for operational efficiency and effectiveness: the
Japanese had developed a manufacturing system (lean manufacturing) far superior to
the Taylor/Ford system. Western companies embarked on a race to make up
the difference. This resulted in heavily increased competitiveness over issues like
operational efficiency, innovativeness and quality.
Statistical quality improvement
Amongst the early statistical quality improvement methodologies, two of the most
influential were the Taguchi method and the Shainin System. Taguchi has become a
by-word for experimental design and the sequential step-wise approach of Shainin has
been adopted by a number of major companies.
The emphasis has now moved from a focus on statistical tools to statistical
thinking. This is exemplified by the notion that statistical tools and
inferential procedures should be generally mastered by employees as members of
organisations.
Progress in the last 30 years has been marked by a steady institutionalisation of approaches
to quality control and improvement, for example the ISO 9000 series. Professional
statisticians have tended to stand aside and watch the quality movement overemphasize the
qualitative aspects of quality management.
The quality improvement initiative Six Sigma was the next powerful influence in
these developments. Six Sigma draws from previous innovations, for example
Taylor’s focus on efficiency, SPC’s use of data and statistical methods, Japan’s lean
manufacturing and aggressive defect reduction, customer focus and decentralized
project structure. But it is also a step forward, being the first programme to combine
statistical and non-statistical tools, a project methodology, an organizational structure
for projects, a metric system for performance management, and more aspects into a
single integrated programme. It is also innovative in prescribing a taskforce of
dedicated project leaders: the investments in time spent on process improvement in
Six Sigma companies are unprecedented.
Industrial statistics as a discipline
Historically there has always been a strong mathematical background to industrial
statistics, in particular the relationship with probability and mathematical statistics.
Statistics can be seen as a reconstruction of inferential procedures. Probabilistic
reasoning underpins much statistical problem solving.
Statistics has always had strong relationships with other disciplines: economics,
management science, philosophy of science. This is even more so the case with
industrial statistics, which requires a genuine interest in the processes under
investigation. Industrial statistics has now emerged as a discipline in itself. It is
motivated by its own programme of development and progress including scientific
research of methodologies and analysis techniques.
References
Box GEP, Hunter WG and Hunter JS (1978), Statistics for Experimenters, Wiley,
New York.
Deming WE (1986) Out of the Crisis. MIT, Cambridge (MA).
Drucker PF (1954) The Practice of Management, Harper & Row, New York.
Fisher Box J (1987) “Guinness, Gosset, Fisher, and small samples” Statistical Science
2(1) 45–52.
Ishikawa K (1982) Guide to Quality Control. Asian Productivity Organization.
Juran JM (1989) Juran on Leadership for Quality: An Executive Handbook. Free
Press, New York.
Shewhart WA (1931) Economic Control of Quality of Manufactured Product. Van
Nostrand Reinhold, Princeton.
Student (WS Gosset) (1908), On the probable error of a mean, Biometrika 6, pp. 1-25.
Chapter 3
Statistical Consultancy
Ronald Does
The statistician in industry
The role of statisticians in industry has for many years been a source of anxiety to
statisticians. There are many papers about this topic, and many colleagues have
considerable experience as SPC and Six Sigma experts and as statistical consultants.
Skills for a statistician working in consultancy
Caulcutt (2001) describes Six Sigma by listing the characteristics shared by the
organisations that have successfully adopted the Six Sigma approach. His list
includes “Black Belts” and he emphasises the important contribution that Black Belts
make in the pursuit of business improvement. There must be few, if any, companies
that have successfully pursued the objectives of Six Sigma without using employees
in the Black Belt role. Caulcutt’s paper focuses on Black Belts and describes what
they do and what skills they require if they are to perform this important role.
Statistical Consultancy units
Statistical consulting units can be successful and much thought has gone into how
they can be established to enhance their chances of being successful. They can be set up in Departments of Statistics & Mathematics, Business Studies and in other bodies.
Good examples exist of European statistical consulting units at universities.
Running a commercial consultancy bureau
As a contrast to being located within a university, it is possible to run a consultancy
unit on a commercial basis. Such consultancy bureaus exist by providing statistical
services in exchange for financial remuneration.
Statistical Structuring
Statistical Consultancy is, by nature, based on the two fundamental disciplines of
applied statistics: ‘data analysis’ and ‘probability calculus’. But too often a statistical
consultant, especially in industry, is puzzled when reviewing a finished project. Although
his or her contribution is recognized as breakthrough added value, he or she probably
used only simple statistical methods. What made the contribution so valuable for the
customer? The real added value of a statistical consultant is his or her power to structure a
problem (one might also say, reality) in a specific way. This distinguishes our view on
problems from all other disciplines. This is what is meant by ‘Statistical Structuring’.
References
R. Caulcutt (2004), Black Belt types, Quality and Reliability Engineering International 20,
pp. 427-432.
R.J.M.M. Does & A. Trip (2001), The impact of statistics in industry and the role of
statisticians, Austrian Journal of Statistics 30, pp. 7-20.
R.J.M.M. Does & A. Zempléni (2001), Establishing a commercial statistical consulting unit at
universities, Kwantitatieve Methoden 67, pp. 51-63.
Chapter 4
From Process Improvement to Quality by Design
Ron Kenett
Keywords: Improvement teams, process improvement, quality by design, robust
design, optimization, Six Sigma, DMAIC, Design for Six Sigma, problem solving,
new product development, quality ladder.
The Quality Ladder
Modern industrial organizations in manufacturing and services are subjected to
increased competitive pressures and rising customer expectations. Management teams
on all five continents are attempting to satisfy and even “delight” their customers while
simultaneously cutting costs. An increasing number of organizations have shown that
the apparent conflict between high productivity and high quality can be resolved
through improvements in work processes and quality of designs. The different
approaches to the management of industrial organizations can be summarized and
classified using a four-step Quality Ladder. The four approaches are:
1) Fire Fighting,
2) Inspection,
3) Process Control and
4) Quality by Design. To each management approach there corresponds a particular set
of statistical methods. Managers involved in reactive fire fighting need to be exposed to
basic statistical thinking. Managers attempting to contain quality and inefficiency
problems through inspection and 100% control can have their tasks alleviated by
implementing sampling techniques. More proactive managers investing in process
control and process improvement are well aware of the advantages of control charts and
process control procedures. At the top of the quality ladder is the quality by design
approach where up front investments are secured in order to run experiments designed
to impact product and process specifications. Efficient implementation of statistical
methods requires a proper match between management approach and statistical tools.
There are many well-known companies who have gone up the quality ladder and are
now enjoying increased efficiency and profitability.
Improvement teams
Revolutionary improvement of quality within an organization involves improvement
projects. A project by project improvement strategy relies on successful employee
participation in project identification, analysis and implementation. The factors relating
to a project's success, or that maximise its chances of success, are numerous, but they are
often overlooked or simply unknown. A project-by-project quality improvement program must be
supported by quality principles, analysis techniques, effective leaders and facilitators,
and extensive training. It is feasible to identify specific factors that affect, or maximise,
the success of a quality improvement project team.
The Juran Trilogy and the Six Sigma DMAIC steps
It is feasible to review the Juran Trilogy of Improvement, Planning and Control and
map it into the DMAIC process.
The Taguchi approach and Design For Six Sigma
It is feasible to review the Taguchi approach of robust design and discuss its
implementation within the DFSS context.
Practical Statistical Efficiency
Practical Statistical Efficiency is an attempt to measure the full impact of statistical
tools on real life problems. We define Practical Statistical Efficiency (PSE) as:
PSE = E{R} x T{I} x P{I} x P{S} x V{PS} x V{P} x V{M} x V{D}
Where:
V{D} = value of the data actually collected.
V{M} = value of the statistical method employed.
V{P} = value of the problem to be solved.
V{PS} = value of the problem actually solved.
P{S} = probability that the problem actually gets solved.
P{I} = probability the solution is actually implemented.
T{I} = time the solution stays implemented.
E{R} = expected number of replications.
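As a rough illustration of how the PSE formula can be applied, the following sketch multiplies a set of made-up component scores. The values, and the 1-5 scoring convention used for the V and T terms, are assumptions made purely for illustration and are not taken from the chapter.

```python
# Illustrative (made-up) scores for each PSE component; the multiplicative
# form follows the definition of PSE given above.
pse_components = {
    "V{D}":  4,    # value of the data actually collected (1-5 scale, assumed)
    "V{M}":  3,    # value of the statistical method employed
    "V{P}":  5,    # value of the problem to be solved
    "V{PS}": 4,    # value of the problem actually solved
    "P{S}":  0.8,  # probability that the problem actually gets solved
    "P{I}":  0.7,  # probability the solution is actually implemented
    "T{I}":  2,    # time (e.g. years) the solution stays implemented (assumed)
    "E{R}":  3,    # expected number of replications (assumed)
}

pse = 1.0
for value in pse_components.values():
    pse *= value

print(f"Practical Statistical Efficiency (illustrative): {pse:.1f}")
```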
Conclusion
It can be shown how “going up the quality ladder” – from process improvement to
quality by design – can increase practical statistical efficiency.
References
Aubrey, C. A. and P. K. Felkins, Teamwork: Involving People in Quality and
Productivity Improvement, 1988, Quality Press.
Bickel, P. and K. Doksum, Mathematical Statistics: Basic Ideas and Selected
Topics, 1977, Holden-Day: San Francisco.
Chambers, P.R.G, J.L. Piggott and S.Y.Coleman, “SPC – a team effort for process
improvement across four area control centers”, J. Appl. Stats, 2001, 28,3: 307-324.
Coleman, S.Y and D.J. Stewardson “Use of measurement and charts to inform
management decisions”, Managerial Auditing Journal, 2002, 17,1/2: 16-19.
Coleman, S.Y., A. Gordon and P.R. Chambers, “SPC – making it work for the gas
transportation industry”, J. Appl. Stats, 2001, 28,3: 343-351.
Coleman, S.Y., G. Arunakumar, F. Foldvary and R. Feltham, “SPC as a tool for
creating a successful business measurement framework”, J. Appl. Stats, 2001, 28,3:
325-334.
Godfrey, A. Blanton, "Statistics, Quality and the Bottom Line," Part 1, ASQC
Statistics Division Newsletter, 1988, 9, 2, Winter: 211-213.
Godfrey, A. Blanton, "Statistics, Quality and the Bottom Line," Part 2, ASQC
Statistics Division Newsletter, 1989, 10, 1, Spring: 14-17.
Hoadley, B. “A Zero Defect Paradigm”, ASQC Quality Congress Transaction,
Anaheim, 1986.
Juran, J.M., Juran on Leadership for Quality, 1989, Free Press.
Kenett, R. S. and D. Albert, “The International Quality Manager: Translating quality
concepts into different cultures requires knowing what to do, why it should be done
and how to do it”, Quality Progress, 2001, 34, 7: 45-48.
Kenett, R.S., S. Y. Coleman and D. Stewardson, “Statistical Efficiency: The Practical
Perspective”, Quality and Reliability Engineering International, 2003, 19: 265-272.
Kenett, R. S. and S. Zacks, Modern Industrial Statistics: Design and Control of
Quality and Reliability, 1998, Duxbury Press: San Francisco.
Chapter 5
Management Statistics
Irena Ograjenšek, Ron Kenett, Jeroen de Mast, Ronald Does and Søren Bisgaard
Rationale for Evidence-Based Management
Following the logic of the WP7 research programme, the key question of what
statistics contributes to management processes in organisations can be addressed
from many different perspectives, such as the historical, the economic and the
quality management perspective.
Data Sources for Evidence-Based Management
Organisations make use of both internal and external data sources (ranging from
customer transaction and survey data to official statistics). These can be used to
support decision-making under uncertainty (the so-called informed decision-making).
Data from different sources are evaluated according to criteria such as cost, accuracy,
timeliness, etc.
Role of Key Performance Indicators in Evidence-Based Management
Three complementary approaches to setting up evidence-based Key Performance
Indicators in an organisation exist:
 Integrated Models, used to map out cause and effect relationships,
 the Balanced Scorecard, which provides management with a navigation panel,
 Economic Value Added (EVA), an economic model measuring the creation of
wealth.
A first example of an integrated model was implemented by Sears Roebuck and Co.
into what they call the employee-customer-profit model. Another example of an
Integrated Model is the American Customer Satisfaction Index (ACSI) that is based
on a model originally developed in Sweden. ACSI is a Structural Equations Model
with six endogenous variables measuring perceived and expected quality, perceived
value, satisfaction level, customer complaints and customer retention. In order to
assess the customer satisfaction level, a Partial Least Squares analysis is conducted on
exogenous data gathered through telephone surveys.
The Balanced Scorecard was created by Kaplan and Norton to translate vision and
strategy into objectives. The balanced scorecard is meant to help managers keep their
finger on the pulse of the business. It usually has four broad categories, such as
financial performance, customers, internal processes and learning and growth.
Typically, each category will have two to five measures. If the business strategy is to
increase market share and reduce operating costs, the measures may include market
share and cost per unit.
EVA is the one measure that is used to monitor the overall value creation in a
business. EVA is not the strategy; it is the way we measure the results. There are
many value drivers that need to be managed, but there can be only one measure that
demonstrates success. A single measure is needed as the ultimate reference of
performance to help managers balance conflicting objectives.
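For concreteness, the sketch below shows how EVA is commonly computed, as net operating profit after tax minus a charge for the capital employed; the formula is the standard textbook definition and the figures are invented for illustration, not taken from the chapter.

```python
# Minimal EVA illustration with made-up figures.
# EVA = NOPAT - (WACC * invested capital): profit after tax minus a capital charge.
nopat = 12.0e6             # net operating profit after tax, in euros (assumed)
invested_capital = 80.0e6  # capital employed (assumed)
wacc = 0.09                # weighted average cost of capital (assumed)

eva = nopat - wacc * invested_capital
print(f"EVA = {eva:,.0f} euros")  # positive EVA indicates wealth creation
```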
Role of Statistical Programmes in Evidence-Based Management
In order to introduce statistical methods in a coherent and operational form in
business and industry, numerous statistical programmes have been proposed, such as
Total Quality Management (TQM), Statistical Process Control (SPC), Six Sigma,
Taguchi’s method, the Shainin System. These programmes would benefit from a
detailed overview and cost-benefit comparison along with practical examples of
successful or failed applications (there are numerous references to these in the
literature).
Case for European Six Sigma Academy
The exact content of Six Sigma courses varies between institutions and countries
across Europe. Some institutions tailor their Six Sigma courses to individual
companies. A number of the institutions have accredited courses and certify people
who pass as Black Belts and Green Belts. A few of the members have also arranged
for students to complete the exams set by the ASQ to gain the ASQ Black Belt
certification in addition to their own certification.
Companies wishing to send staff on Six Sigma courses often want to know if the
course has been approved by any outside organisations. Having ENBIS accreditation
for their Six Sigma courses would be of benefit. It would also be sensible if Six Sigma
exams could count towards university qualifications.
There is a case for agreeing the core curriculum of a Six Sigma course and this should
be the basis for accrediting courses. There must be a common standard or the
accreditation will have no value.
Companies employing Black Belts and Green Belts have difficulty knowing whether
the person is indeed trained to this level; ENBIS accreditation would show that the
person had taken a course covering the core areas and meeting the set standard. This
would also be of use to the employee, as their Black Belt / Green Belt qualification
would be of a certain recognisable standard should they wish to move companies.
The best form of certification for a Black Belt is working for a company known to
have made good savings and improvements using Six Sigma.
Rather than sign up with any other supplier, ENBIS could give accreditation to courses
which it believes train students up to a standard where they would pass the ASQ
exams.
It is sensible to take a broader long term view and not concentrate solely on Six Sigma
as Six Sigma will most likely have a limited life span. Instead the academy or
accreditation scheme could be for Six Sigma and other upcoming subject areas; it
could be a general academy for all the business areas of ENBIS.
ENBIS already has a procedure relating to endorsements. At present the information
on this procedure is not widely available. It was suggested that the existing procedure
should be looked at, updated if necessary and made available (perhaps through the
website). It is also noteworthy that as Six Sigma covers statistics and management
issues, ENBIS cannot cover the management training aspects.
References
Albright, S.C., W.L. Winston and C.J. Zappe (2000): Managerial Statistics. Duxbury, Pacific
Grove.
Albright, S.C., W.L. Winston and C.J. Zappe (1999): Data Analysis and Decision Making
with Microsoft Excel. Duxbury, Pacific Grove.
Anderson, E.W. and C. Fornell (2000): The Customer Satisfaction Index as a Leading
Indicator. In Swartz, T.A. and D. Iacobucci, eds., Handbook of Services Marketing &
Management. Sage Publications, Thousand Oaks, 255-267.
Anderson, E.W. and C. Fornell (2000): Foundations of the American Customer Satisfaction
Index. Total Quality Management, 7, 869-882.
Anderson, E.W., C. Fornell and D.R. Lehmann (1994): Customer Satisfaction, Market Share
and Profitability: Findings from Sweden. Journal of Marketing, 3, 53-66.
Anderson, E.W. and M.W. Sullivan (1993): The Antecedents and Consequences of Customer
Satisfaction for Firms. Marketing Science, 2, 125-143.
Beauregard, M.R., R.J. Mikulak and B.A. Olson (1992): A Practical Guide to Statistical
Quality Improvement. Opening up the Statistical Toolbox. Van Nostrand Reinhold, New
York.
Bendell, T., J. Kelly, T. Merry and F. Sims (1993): Quality: Measuring and Monitoring.
Century Business, London.
Bisgaard, S. (2000): The Role of Scientific Method in Quality Management. Total Quality
Management, 3, 295-306.
Breyfogle, Forrest W. III (1999): Implementing Six Sigma. Smarter Solutions Using
Statistical Methods. John Wiley & Sons, New York.
Cole, W.E. and J.W. Mogab (1995): The Economics of Total Quality Management: Clashing
Paradigms in the Global Market. Blackwell Publishers, Cambridge (MA).
Czarnecki, M.T. (1999): Managing by Measuring. How to Improve Your Organisation’s
Performance Through Effective Benchmarking. AMACOM, American Management
Association, New York.
Dransfield, S.B., N.I. Fisher and N.J. Vogel (1999): Using Statistics and Statistical Thinking
to Improve Organisational Performance. International Statistical Review, 2, 99-150.
Drummond, H. (1994): The Quality Movement. Nichols Publishing, New Jersey.
Easton, G.S. (1995): A Baldrige Examiner’s Assessment of U.S. Total Quality Management.
In Cole, R.E., ed., The Death and Life of the American Quality Movement. Oxford University
Press, New York, 11-41.
Easton, G.S. and S.L. Jarrell (2000): The Effects of Total Quality Management on Corporate
Performance. An Empirical Investigation. In Cole, R.E. and W.R. Scott, eds., The Quality
Movement & Organization Theory. Sage Publications, Inc., Thousand Oaks, 237-270.
Easton, G.S. and S.L. Jarrell (2000a): Patterns in the Deployment of Total Quality
Management. An Analysis of 44 Leading Companies. In Cole, R.E. and W.R. Scott, eds., The
Quality Movement & Organization Theory. Sage Publications, Inc., Thousand Oaks, 89-130.
Fornell, C. (1992): A National Customer Satisfaction Barometer: The Swedish Experience.
Journal of Marketing, 1, 6-21.
Fornell, C., M.D. Johnson, E.W. Anderson, J. Cha and B.E. Bryant (1996): The American
Customer Satisfaction Index: Nature, Purpose and Findings. Journal of Marketing, 4, 7-18.
Garvare, R. and H. Wiklund (1997): Facilitating the Use of Statistical Methods in Small and
Medium Sized Enterprises. Proceedings of the 41st Congress of the European Organization
for Quality, Vol. 3. European Organization for Quality, Trondheim, 211-220.
Hoerl, R.W. (1998): Six Sigma and the Future of the Quality Profession. Quality Progress,
June, 35-69.
Hogg, R.V. (1997): The Quality Movement: Where It Stands and the Role of Statisticians in
Its Future. In Ghosh, S., W.R. Schucany and W.B. Smith, eds., Statistics of Quality. Marcel
Dekker, Inc., New York, 11-20.
Kaplan, R.S. and D.P. Norton (2004): Strategy Maps. Harvard Business School Press.
Kaplan, R.S. and D.P. Norton (1996): The Balanced Scorecard. Harvard Business School
Press.
Kim, J.S. and M.D. Larsen (1997): Integration of Statistical Techniques into Quality
Improvement Systems. Proceedings of the 41st Congress of the European Organization for
Quality, Vol. 2. European Organization for Quality, Trondheim, 277-284.
Ograjenšek, I. and P. Thyregod (2004): Qualitative vs. quantitative methods. Quality
Progress, Jan. 2004, Vol. 37, 1, 82-85.
Snee, R.D. (1999): Statisticians Must Develop Data-Based Management and Improvement
Systems as Well as Create Measurement Systems. International Statistical Review, 2, 139-144.
Thyregod, P. and K. Conradsen (1999): Discussion. International Statistical Review, 2, 144-146.
Chapter 6
Service Quality
Irena Ograjenšek
Rationale for Statistical Quality Control and Improvement in the Service Sector
Service industries embraced the basic quality improvement ideas simultaneously with
the manufacturing sector, but have been neglecting the use of statistical methods in
quality improvement processes even more than their manufacturing counterparts. One
of the major arguments against the use of statistical methods is given by differences in
the nature of services and manufactured goods. These differences have always been
emphasised in the literature, especially with regard to measurability of service quality
attributes and, consequently, characteristics of the measurement process. Whether
such emphasis is still valid in the 21st century is a moot point; perhaps it is now
possible to talk about a paradigm change.
Approaches to Statistical Quality Control and Improvement in the Service
Sector
There are important differences among the transcendent or philosophic approach, the
manufacturing-based approach, the product-based approach, the user-based approach
and the value-based approach to quality improvement in the service sector. The tools
used in the framework of each approach are specific in terms of their focus (internal
or external) and practical applicability.
Statistical Toolbox of the Manufacturing-Based Approach to Service Quality
There is a contrast between observational and inferential studies in service operations:
on the one hand there is the application of the seven basic tools as defined by Ishikawa,
and on the other hand the design of experiments, conjoint analysis, Markov chains,
risk analysis, etc.
Statistical Toolbox of the Product-Based Approach to Service Quality
The analysis of quality attributes is particularly important in this approach. There are
pros and cons for the application of rating scales (e.g. SERVQUAL, SERVPERF),
penalty/reward analysis, the vignette method, etc. Survey research and
structural equation modelling (SEM) are also important features of the statistical
toolbox.
Statistical Toolbox of the User-Based Approach to Service Quality
The technicalities of mystery shopping fall into the user-based approach, along with
methods and techniques for the analysis of overall quality (critical incident technique,
analysis of complaints, analysis of contacts/service blueprinting).
Statistical Toolbox of the Value-Based Approach to Service Quality
Tools such as cost of quality analysis or analysis of willingness to pay are
important to the value-based approach.
References
Babakus, E. and G.W. Boller (1992): An Empirical Assessment of the SERVQUAL Scale.
Journal of Business Research, 3, 253-268.
Babakus, E. and W.G. Mangold (1992): Adapting the SERVQUAL Scale to Hospital
Services: An Empirical Investigation. Health Services Research, 6, 767-786.
Babakus, E. and M. Inhofe (1991): Measuring Perceived Service Quality as a Multiattribute
Attitude. Journal of International Marketing.
Bisgaard, S. (2000a): Service Quality. In Belz, C. and T. Bieger, Dienstleistungskompetenz
und innovative Geschäftsmodelle. Verlag Thexis des Forschungsinstituts für Absatz und
Handel an der Universität St. Gallen, St. Gallen, 296-308.
Bolton R.N. and J.H. Drew (1994): Linking Customer Satisfaction to Service Operations and
Outcomes. In Rust R.T. and R.L. Oliver, eds., Service Quality. New Directions in Theory and
Practice. Sage Publications, Thousand Oaks, 173-200.
Bolton, R.N. and J.H. Drew (1991): A Multistage Model of Customers’ Assessments of
Service Quality and Value. Journal of Consumer Research, March, 375-384.
Boshoff, C., G. Mels and D. Nel (1995): The Dimensions of Perceived Service Quality: The
Original European Perspective Re-Visited. In Bergadaà, M., Proceedings of the 24th
European Marketing Academy Conference. ESSEC, Cergy-Pontoise, 161-175.
Brensinger, D.P. and D.M. Lambert (1990): Can the SERVQUAL Scale be Generalised to
Business-to-Business Services? In Enhancing Knowledge Development in Marketing.
American Marketing Association, Chicago, starting page 289.
Brown, S.W. and E.U. Bond III (1995): The Internal Market/External Market Framework and
Service Quality: Toward Theory in Services Marketing. Journal of Marketing Management,
1-3, 25-39.
Brown, T.J., G.A. Churchill and J.P. Peter (1993): Improving the Measurement of Service
Quality. Research Note. Journal of Retailing, 1, 127-139.
Buttle, F.A. (1995): What Future for SERVQUAL? In Bergadaà, M., Proceedings of the 24th
European Marketing Academy Conference. ESSEC, Cergy-Pontoise, 211-230.
Campanella, J. and F.J. Corcoran (1991): Principles of Quality Costs. In Drewes, W.F., ed.,
Quality Dynamics for the Service Industry. ASQC Quality Press, Milwaukee, 85-102.
Carman, J.M. (1990): Consumer Perceptions of Service Quality: An Assessment of the
SERVQUAL Dimensions. Journal of Retailing, 2, 33-55.
Carroll, J.D. and P.E. Green (1995): Psychometric Methods in Marketing Research: Part I,
Conjoint Analysis. Journal of Marketing Research, November, 385-391.
Cronin, J.J.Jr. and S.A. Taylor (1994): SERVPERF Versus SERVQUAL: Reconciling
Performance-Based and Perceptions-Minus-Expectations Measurement of Service Quality.
Journal of Marketing, 1, 125-131.
Cronin, J.J.Jr. and S.A. Taylor (1992): Measuring Service Quality: A Reexamination and
Extension. Journal of Marketing, 3, 55-68.
DeSarbo, W.S., L. Huff, M.M. Rolandelli and J. Choi (1994): On the Measurement of
Perceived Service Quality. A Conjoint Analysis Approach. In Rust R.T. and R.L. Oliver, eds.,
Service Quality. New Directions in Theory and Practice. Sage Publications, Thousand Oaks,
201-222.
Dolan, R.J. (1990): Conjoint Analysis: A Manager’s Guide. A Harvard Business School Case
Study, 9-590-059. Harvard Business School, Boston.
Drewes, W.F. (1991): The Cost of Delivering a Quality Product. In Drewes, W.F., ed.,
Quality Dynamics for the Service Industry. ASQC Quality Press, Milwaukee, 103-106.
Edvardsson, B. and B. Gustavsson (1991): Quality in Service and Quality in Service
Organizations. In Brown, S.W., E. Gummesson, B. Edvardsson and B. Gustavsson, eds.,
Service Quality. Multidisciplinary and Multinational Perspectives. Lexington Books,
Lexington, 319-340.
George, W.R. and B.E. Gibson (1991): Blueprinting. A Tool for Managing Quality in
Service. In Brown, S.W., E. Gummesson, B. Edvardsson and B. Gustavsson, eds., Service
Quality. Multidisciplinary and Multinational Perspectives. Lexington Books, Lexington, 73-91.
Gummesson, E. (1996): Service Quality is Different, Yes Different! In Sandholm, L., ed.,
Quality Without Borders. Sandholm Associates, Djursholm.
Hallowell, R. and L.A. Schlesinger (2000): The Service Profit Chain. Intellectual Roots,
Current Realities and Future Prospects. In Swartz, T.A. and D. Iacobucci, eds., Handbook of
Services Marketing & Management. Sage Publications, Thousand Oaks, 203-221.
Maas, P. (2000): Transformation von Dienstleistungsunternehmen in Netzwerken - Empirische Erkenntnisse im Bereich der Assekuranz. In Belz, C. and T. Bieger,
Dienstleistungskompetenz und innovative Geschäftsmodelle. Verlag Thexis des
Forschungsinstituts für Absatz und Handel an der Universität St. Gallen, St. Gallen, 52-74.
Ograjenšek, I. (2003): Use of Customer Data Analysis in Continuous Quality Improvement of
Service Processes. Proceedings of the Seventh Young Statisticians Meeting (Metodološki
zvezki 21), 51-69.
Ograjenšek, I. (2002): Applying Statistical Tools to Improve Quality in the Service Sector.
Developments in Social Science Methodology (Metodološki zvezki 18), 239-251.
Parasuraman, A., L.L. Berry and V. Zeithaml (1991): Understanding, Measuring and
Improving Service Quality. Findings from a Multiphase Research Program. In Brown, S.W.,
E. Gummesson, B. Edvardsson and B. Gustavsson, eds., Service Quality. Multidisciplinary
and Multinational Perspectives. Lexington Books, Lexington, 253-268.
Parasuraman, A., V. Zeithaml and L.L. Berry (1994): Reassessment of Expectations as a
Comparison Standard in Measuring Service Quality: Implications for Further Research.
Journal of Marketing, 1, 111-124.
Parasuraman, A., V. Zeithaml and L.L. Berry (1988): SERVQUAL: A Multiple-Item Scale
for Measuring Consumer Perceptions of Service Quality. Journal of Retailing, 1, 12-40.
Parasuraman, A., V. Zeithaml and L.L. Berry (1985): A Conceptual Model of Service Quality
and Its Implications for Future Research. Journal of Marketing, Fall, 41-50.
Smith, A.M. (1995): Measuring Service Quality: Is SERVQUAL Now Redundant? Journal of
Marketing Management, 1-3, 257-276.
Taylor, S.A. (1997): Assessing Regression-Based Importance Weights for Quality
Perceptions and Satisfaction Judgements in the Presence of Higher Order and/or Interaction
Effects. Journal of Retailing, Spring, 135-159.
Taylor, S.A. and T.L. Baker (1994): An Assessment of the Relationship Between Service
Quality and Customer Satisfaction in the Formation of Consumer’s Purchase Intentions.
Journal of Retailing, 2, 163-178.
Teas, R.K. (1994): Expectations as a Comparison Standard in Measuring Service Quality: An
Assessment of a Reassessment. Journal of Marketing, 1, 132-139.
Teas, R.K. (1993): Consumer Expectations and the Measurement of Perceived Service
Quality. Journal of Professional Services Marketing, 2, 33-54.
Teas, R.K. (1993a): Expectations, Performance Evaluation, and Consumers’ Perceptions of
Quality. Journal of Marketing, 4, 18-34.
Zahorik, A.J. and R.T. Rust (1992): Modeling the Impact of Service Quality on Profitability.
In Bowen, D.E., T.A. Swartz and S.E. Brown, Advances in Service Marketing and
Management. JAI, Greenwich, 247-276.
Zeithaml, V.A. (2000): Service Quality, Profitability and the Economic Worth of Customers:
What We Know and What We Need to Learn. Journal of the Academy of Marketing Science,
1, 67-85.
Zeithaml, V.A., L.L. Berry and A. Parasuraman (1996): The Behavioral Consequences of
Service Quality. Journal of Marketing, 2, 31-46.
Zeithaml, V.A. and M.J. Bitner (2000): Service Marketing. McGraw-Hill Education, New
York.
Zeithaml, V.A., L.L. Berry and A. Parasuraman (1993): The Nature and Determinants of
Customer Expectations of Service. Journal of the Academy of Marketing Science, 1, 1-12.
Chapter 7
Risk, Finance and Insurance
Henry Wynn
Introduction
Risk management and a number of specialist techniques constitute one of the fastest
growing areas in the application of statistical and data-analytic methods. These
range from what might be called soft risk assessment using simple scoring methods to
very advanced option pricing methods in finance. From the perspective of finance,
risk is usually divided into different categories: market risk, credit risk and
operational risk. The last of these is a catch-all, which covers everything normally
associated with areas such as reliability (covered separately in this report). A special
feature is that much of risk management is driven by new codes of practice in
corporate governance and financial regulation, such as the Basel I and II accords.
Risk metrics, risk scoring
The measurement of risk continues to be problematical. A principal reason is that
there is a strong element of judgement in areas not covered by adequate data capture.
Thus scoring methods as simple as a three-point scale (red, amber, green) are used.
There is a long-running debate about the different axes of risk: probability, effect
(loss) and other interesting concepts such as controllability. It is of considerable
interest that the more advanced risk metrics consider stochastic components such as
the probability of excess over a threshold and standard deviations as well as means, which
is very familiar in quality improvement (robust design, SPC, reliability). It is to be
hoped that there will be good cross-fertilization in the future.
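To illustrate the kind of stochastic risk metric mentioned above, the sketch below estimates an empirical exceedance probability, together with the mean and standard deviation, from simulated loss data; the lognormal loss model and the threshold are arbitrary assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated loss data (assumed lognormal, for illustration only).
losses = rng.lognormal(mean=10.0, sigma=0.8, size=10_000)

threshold = 60_000.0  # loss threshold of interest (assumed)

exceedance_prob = np.mean(losses > threshold)   # empirical P(loss > threshold)
mean_loss = losses.mean()
sd_loss = losses.std(ddof=1)

print(f"P(loss > {threshold:,.0f}) = {exceedance_prob:.3f}")
print(f"mean loss = {mean_loss:,.0f}, standard deviation = {sd_loss:,.0f}")
```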
Risk assessment and risk management
Broadly speaking, risk assessment means assessing the level of risk for some
operation, product, etc. which lies in the future. Risk management is the continuous
control of risk during operations. A key component of risk management is the need to
take preventive or mitigating action and to rescore periodically.
Financial markets
There has been a very large increase in the use of mathematical and statistical
methods in finance driven by trading on the stock market and the design of special
financial derivatives. The core of this is the theory of option pricing which is now
embedded into much software used for trading, even automatic trading. There is a
continual search for advantage in trading through innovations in mathematical
technique and statistical analysis. One such area is “statistical arbitrage”, where the
huge volumes of data available from trading are analysed to look for so-called “imperfect
markets” from which gains can be made.
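As an illustration of the option pricing theory referred to above, the following sketch implements the standard Black-Scholes formula for a European call option; the parameter values are assumptions chosen purely for the example and are not taken from the chapter.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(s, k, t, r, sigma):
    """Standard Black-Scholes price of a European call option."""
    n = NormalDist()  # standard normal distribution
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * n.cdf(d1) - k * exp(-r * t) * n.cdf(d2)

# Illustrative parameters (assumed): spot 100, strike 105, 1 year, 3% rate, 20% volatility.
print(f"call price = {black_scholes_call(100, 105, 1.0, 0.03, 0.20):.2f}")
```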
Credit risk
More static analysis is used to assess the creditworthiness of consumers and
companies. Regression-style models are used. For example, a yes/no response (allow /
do not allow credit) leads to the use of logistic regression, which is very familiar in medical
statistics (death/survival).
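A minimal sketch of the credit-scoring idea described above, fitting a logistic regression to a yes/no credit decision; the applicant features, the data-generating rule and the use of scikit-learn are all assumptions made for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Invented applicant features: income (kEUR) and existing debt ratio.
n = 500
income = rng.normal(40, 12, n)
debt_ratio = rng.uniform(0, 1, n)
X = np.column_stack([income, debt_ratio])

# Invented "true" rule plus noise: higher income, lower debt -> credit granted.
logit = 0.08 * income - 3.0 * debt_ratio - 1.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[35.0, 0.4]])              # a new applicant (assumed values)
prob_grant = model.predict_proba(applicant)[0, 1]
print(f"estimated probability of granting credit: {prob_grant:.2f}")
```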
Regulation
Most modern regulation relates to the annual audit function and to financial
management. It stresses openness, accountability and the realistic approach to risk,
including, importantly, operational risk. There is a greater appreciation of the
interdependence of the different activities of companies and organisations. This has
been given urgency by a number of notorious company failures or near failures.
Chapter 8
Applying data mining methods to business and industry
Paolo Giudici
Introduction
Data mining is a relatively new discipline that has developed mainly from studies
carried out in other disciplines such as in the field of computing, marketing, and
statistics. Many of the methodologies used in data mining come from two branches
of research, one developed in the machine learning community and the other
developed in the statistical community, particularly in multivariate and
computational statistics.
There are different perspectives on data mining from different scientific communities.
It is important to formalise how a data mining process should be conducted, so as to match
statistical quality criteria. There is a need for a “guide for the reader”, who is often
confused about what data mining really does. Different methods are to be considered
for each application at hand.
In many applications the aim is to gain value from customer data and investigate
business complexity. Techniques including decision trees, logistic regression, cluster
analysis and self-organising maps are used to measure customer value, segment
customer data, predict customer attrition and understand the uptake of new
technologies such as electronic purchasing.
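As a small illustration of one of the techniques listed above, the sketch below fits a decision tree to predict customer churn on synthetic data; the features and the churn mechanism are invented for the example and are not drawn from the applications described in this chapter.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(seed=2)

# Synthetic customer records: months as a customer, number of complaints.
n = 1_000
tenure = rng.integers(1, 60, n)
complaints = rng.poisson(1.0, n)
X = np.column_stack([tenure, complaints])

# Invented churn mechanism: short tenure and many complaints raise churn risk.
churn_prob = 1 / (1 + np.exp(0.08 * tenure - 1.2 * complaints))
y = (rng.uniform(size=n) < churn_prob).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

new_customer = np.array([[6, 3]])  # 6 months tenure, 3 complaints (assumed)
print("predicted churn:", bool(tree.predict(new_customer)[0]))
```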
Brief description of the most relevant business and industrial applications
Much work is underway in a number of important application areas of data mining,
from a business and industrial viewpoint. Some recent work in collaboration with a
bank has involved developing statistical models to predict the churn (abandonment) of online clients of the bank. These ideas can be extended to other industries, such as
telephone companies, where churn is a key problem. Another current area of interest
is in comparison of predictive credit rating models. Further work in collaboration with
banks involves developing internal statistical models to predict credit ratings of
customers, in compliance with Basel II requirements.
Much research is underway in web mining, in particular looking at association models
for web usage mining and comparison of different models to understand the best
association model that can be used to describe web visit patterns.
Examples of research in text mining are predictive models developed to classify
documents and the comparison of different predictive models to classify text
documents.
REFERENCES
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions
on Automatic Control, 19, 716—723.
Bernardo, J.M. and Smith, A.F.M. (1994). Bayesian Theory, New York: Wiley.
Bickel, P.J. and Doksum, K. A. (1977). Mathematical Statistics, New York: Prentice
Hall.
Berry, M and Linoff, G. (1997). Data mining techniques for marketing. New York,
Wiley.
Brooks, S.P., Giudici, P. and Roberts, G.O. (2003). Efficient construction of
reversible jump MCMC proposal distributions. Journal of the Royal
Statistical Society, Series B, 1, 1-37.
Castelo, R. and Giudici, P. (2003). Improving Markov Chain model search for data
mining. Machine learning, 50, 127-158.
Giudici, P. (2001). Bayesian data mining, with application to credit scoring and
benchmarking. Applied Stochastic Models in Business and Industry, 17, 69-81.
Giudici, P. (2003). Applied data mining, London, Wiley.
Hand, D.J., Mannila, H. and Smyth, P (2001). Principles of Data Mining, New York:
MIT Press.
Hand, D. (1997). Construction and assessment of classification rules. London: Wiley.
Han, J. and Kamber, M. (2001). Data mining: concepts and techniques. New York:
Morgan Kaufmann.
Hastie, T., Tibshirani, R., Friedman, J. (2001). The elements of statistical learning:
data mining, inference and prediction, New York: Springer-Verlag.
Heckerman, D., Chickering, D.M, Meek, C, Rounthwaite, R and Kadie, C.,
Dependency Networks for Inference, Collaborative Filtering and Data Visualization.
Journal of Machine Learning Research, 1, 49-75, 2000.
Lauritzen, S.L. (1996). Graphical models, Oxford, Oxford University Press.
Mood, A.M., Graybill, F.A. and Boes, D.C. (1991). Introduction to the theory of
Statistics, Tokyo: McGraw-Hill.
SAS Institute Inc. (2004). SAS Enterprise Miner reference manual. Cary: SAS Institute.
Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6,
461-464.
Whittaker, J. (1990). Graphical models in applied multivariate Statistics, Wiley.
Witten, I. and Frank, E. (1999). Data Mining: practical machine learning tools and
techniques with Java implementation. New York: Morgan and Kaufmann.
Zucchini, W. (2000). An introduction to model selection. Journal of Mathematical
Psychology, 44, 41-61.
Chapter 9
Process Monitoring, Improvement and Control
Oystein Evandt, Alberto Luceño, Ron Kenett, Rainer Goeb
Introductory comments
The main types of statistical processes used to model industrial processes are “Pure
Random Noise Processes” (in the sense of independent identically distributed random
deviations from a constant value), “Stationary Autocorrelated Processes” and
“Nonstationary Processes”. The differences between these processes can be
visualised when examples of data from such processes are plotted for comparison,
as in section 1.2 of Box and Luceño (1997). (A formal definition of a stationary
process is not given here; we simply describe a stationary process as one that varies
around a mean value as time goes by, as in section 1.2 of Box and Luceño (1997). For
a formal definition of a stationary process, see The Oxford Dictionary of Statistical
Terms (2003).)
SPC originated in the parts industry, where large numbers of articles are produced,
and it is aimed at making these articles as uniform as possible with respect to important
characteristics, e.g. dimensions, weight and shape. Processes in this industry can at
best be modelled (approximately) by Pure Random Noise Deviations from the target
value of product characteristics. The deviations should preferably be as small as
possible. SPC is the natural tool for achieving this ideal state of a process, and for
maintaining this state. The Taguchi Loss Function helps to reinforce the implications
of the size of deviations from the target.
Engineering process control (EPC) came into existence in the process industry, where
a typical aim is to keep the temperature at which a process is run as constant as
possible at a specified value, to keep the percentage conversion of chemicals in the
production process constant at a value as high as possible, etc. To most people it will
seem obvious that time series of measurements of such quantities cannot be modelled
by Pure Random Noise Deviations from a constant value, but that the measurements
will be autocorrelated. (If a melting furnace is too warm at a certain time, it is likely
to be too warm shortly afterwards too.) It will seem reasonable to readers that one
should be aiming at having stationary processes, varying around a mean value as
close as possible to the target value of the process, and with as little variation as
possible.
The economic gains from controlling processes in the parts industry and the process
industry respectively, when the ideal states mentioned above are close to being attained,
are often substantial.
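The distinction between a pure random noise process and a stationary autocorrelated process can be made concrete with a short simulation; the AR(1) model and its parameters below are assumptions used only to illustrate the difference in lag-1 autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 200

# Pure random noise around a constant value (i.i.d. deviations from 10).
white_noise = 10 + rng.normal(0, 1, n)

# Stationary autocorrelated process: AR(1) around the same mean, phi = 0.8 (assumed).
phi = 0.8
ar1 = np.empty(n)
ar1[0] = 10
for t in range(1, n):
    ar1[t] = 10 + phi * (ar1[t - 1] - 10) + rng.normal(0, 1)

def lag1_autocorr(x):
    """Lag-1 autocorrelation, which distinguishes the two processes."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

print(f"white noise lag-1 autocorrelation: {lag1_autocorr(white_noise):+.2f}")
print(f"AR(1)       lag-1 autocorrelation: {lag1_autocorr(ar1):+.2f}")
```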
SPC – Statistical Process Control
It is important to differentiate between retrospective analysis of data by means of
control charts, and real-time applications of such charts. The most well-known and
probably most widely used chart is the Xbar-R chart for grouped data, and variations of it,
and the XmR chart for individual values. Strictly speaking, the only alarm rule of a
Shewhart chart is the three-sigma limit rule. Three-sigma limits work well even if
data do not come from a normal distribution; see, for example, the description in
Wheeler and Chambers (1992), section 4.2, “Why Three Sigma Limits?”.
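A minimal sketch of computing three-sigma limits for an XmR (individuals and moving range) chart from the average moving range, using the standard chart constants; the data are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
x = rng.normal(50, 2, 60)           # simulated individual measurements (assumed)

moving_ranges = np.abs(np.diff(x))  # |x_i - x_{i-1}|
mr_bar = moving_ranges.mean()
x_bar = x.mean()

# Standard XmR chart constants: 2.66 = 3 / d2 (d2 = 1.128 for n = 2),
# 3.268 = D4 for the moving range chart.
ucl_x = x_bar + 2.66 * mr_bar
lcl_x = x_bar - 2.66 * mr_bar
ucl_mr = 3.268 * mr_bar

print(f"X chart:  centre {x_bar:.2f}, limits ({lcl_x:.2f}, {ucl_x:.2f})")
print(f"mR chart: centre {mr_bar:.2f}, upper limit {ucl_mr:.2f}")
print("points beyond the three-sigma limits:",
      int(np.sum((x > ucl_x) | (x < lcl_x))))
```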
Among the most usual alarm rules, in addition to the three-sigma limits rule, are two
successive observations outside the two-sigma limits, seven successive points
increasing or decreasing, and seven successive points either above or below the central line
of the chart. The application of such rules leads to a chart having memory. However,
applying too many of these rules may increase the rate of false alarms
unacceptably. The four so-called Western Electric Zone Rules often provide a good
compromise between increased ability to detect out-of-control situations and avoiding
false alarms. See, for example, Wheeler and Chambers (1992), section 5.4, “Four Rules
for Defining Lack of Control”.
The exponentially weighted moving average (EWMA) chart is particularly efficient at
detecting certain out-of-control situations, including those caused by underlying
trends. The CUSUM chart is also efficient at detecting these and other out-of-control
situations. Recently other types of control charts have been proposed, most of which
are good in particular situations and have their own strengths and weaknesses.
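A minimal sketch of the EWMA chart recursion with its asymptotic three-sigma limits, applied to simulated data containing a small sustained shift; the smoothing weight and the in-control parameters are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# Simulated process: in control at mean 0, then a sustained shift of +0.75 sigma.
x = np.concatenate([rng.normal(0, 1, 40), rng.normal(0.75, 1, 40)])

lam, mu0, sigma = 0.2, 0.0, 1.0      # EWMA weight and in-control parameters (assumed)

z = np.empty(len(x))
z_prev = mu0
for i, xi in enumerate(x):
    z_prev = lam * xi + (1 - lam) * z_prev   # EWMA recursion
    z[i] = z_prev

# Asymptotic three-sigma EWMA control limits.
limit = 3 * sigma * np.sqrt(lam / (2 - lam))
signals = np.where(np.abs(z - mu0) > limit)[0]

print(f"control limits: +/- {limit:.2f}")
print("first signal at observation:", signals[0] if signals.size else "none")
```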
Multivariate SPC
Many, if not most, situations are measured by more than one metric. When two or more
correlated characteristics are observed simultaneously in order to control them,
detection of out-of-control situations in the multivariate process cannot be properly
done by monitoring each characteristic separately by means of univariate control
charts. Multivariate SPC is usually based on Hotelling's T2 statistic. For this
statistic to follow the T2 distribution, it is assumed that the simultaneously observed
characteristics come from independent observations from a multinormal distribution.
However, multivariate control charts based on T2 are very sensitive to deviations from
the assumption of multinormally distributed observations. If this assumption is not
sufficiently well fulfilled, a way out may be to use control charts based on principal
components. How this is done is accounted for in e.g. Ryan (2000).
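A minimal sketch of the Hotelling T2 statistic for two simultaneously monitored, correlated characteristics, with the in-control mean and covariance estimated from a simulated reference sample; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=6)

# Reference (phase I) sample of two correlated characteristics (simulated).
mean_true = [10.0, 5.0]
cov_true = [[1.0, 0.6], [0.6, 1.0]]
reference = rng.multivariate_normal(mean_true, cov_true, size=100)

x_bar = reference.mean(axis=0)
s_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def hotelling_t2(x):
    """T2 = (x - x_bar)' S^{-1} (x - x_bar) for a new observation x."""
    d = x - x_bar
    return float(d @ s_inv @ d)

in_control = np.array([10.2, 5.1])
shifted = np.array([11.5, 3.8])   # a shift that the correlation structure makes unusual
print(f"T2 in control: {hotelling_t2(in_control):.2f}")
print(f"T2 shifted:    {hotelling_t2(shifted):.2f}")
```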
On using software for SPC
Site-specific SPC software is often a tempting alternative to buying a standard
software package, which may need to be adjusted to give reports in the format most
useful for a company. It is difficult to review specific software packages for SPC,
since the packages are relatively frequently updated. Besides the larger,
comprehensive and well-known packages, such as e.g. Minitab, Statistica and
StatGraphics, there is also a plethora of smaller bespoke packages available on the
market. It is important to check each package to ensure that the control limits are
computed correctly, since for some control charts in some of the packages on the
market, this is done incorrectly.
Acceptance Sampling in Quality Control
The main focus here is product control (control of the finished product) as opposed to
process control. Lot inspection is a tool of product control, and acceptance sampling
is the statistical version of lot inspection. The roots of statistical lot inspection
schemes lie in the 1920s at Bell Telephone, with the contribution of H. F. Dodge. An
early Military Standard was developed during World War II. Later there was further
development of the Military Standard and a movement to replace military and national
standards with ISO standards. Progress went alongside World Trade Organization (WTO)
agreements on tariffs and trade.
There has been a decline in statistical lot inspection in the quality movement over the
last decades. The emphasis has been on proactive as opposed to reactive quality
control, based on SPC, i.e. prevention rather than inspection (sorting). This is
generally a healthy development. It does not, however, mean that statistical lot
inspection is totally obsolete. There are situations in which statistical lot
inspection is the proper thing to do. This is well explained in Vardeman (1986).
Lot inspection and statistical lot inspection feature in ISO 9000-9004. The misleading
connotation in "acceptance" sampling is the sorting purpose (traditional) of statistical
lot inspection versus the supporting purpose (modern) of statistical lot inspection.
There are now new ISO standards on the subject. These include ISO 13448 (allocation of
priorities principle), ISO 14560 (scheme for quality levels in nonconforming items per
million), ISO 18414 (accept-zero credit-based for outgoing quality) and ISO 21247
(accept-zero single and continuous plans for attributes and variables), plus a few
sequential sampling plan standards. An important subset includes sampling plans based
on more than the traditional two states of product classification (primarily three
classes). Fundamental issues with the correct classification of products as conforming,
nonconforming, marginally conforming, etc. affect the operation of sampling plans and
schemes. There is now a movement to quantify measurement uncertainty properly and to
include it in product conformity determinations (an ISO standard was issued on this in
2003). Inspector error is another instance of this general issue. Sampling inspection
can be based on new item and lot quality indicators, e.g. squared or absolute deviation
from target, or one-sided deviation from target. There are also corresponding
applications in legal metrology and consumer protection.
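As a small illustration of how a single sampling plan behaves, the sketch below computes points on the operating characteristic (OC) curve for a hypothetical plan (sample size n = 80, acceptance number c = 2); the plan is illustrative and is not taken from any of the standards cited above.

```python
from math import comb

def accept_prob(p, n, c):
    """Probability of accepting a lot under a single sampling plan (n, c):
    accept if at most c nonconforming items are found in a sample of n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative plan: sample 80 items, accept on at most 2 nonconforming.
n, c = 80, 2
for p in (0.005, 0.01, 0.02, 0.05, 0.10):
    print(f"lot fraction nonconforming {p:.3f}: P(accept) = {accept_prob(p, n, c):.3f}")
```

Plotting P(accept) against the lot fraction nonconforming gives the familiar OC curve, from which producer's and consumer's risks for the plan can be read off.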
References
Box, George and Alberto Luceño (1997): Statistical Control by Monitoring and
Feedback Adjustment, John Wiley & Sons, Inc.
Ryan, Thomas P. (2000): Statistical Methods for Quality Improvement, John Wiley &
Sons, Inc.
The Oxford Dictionary of Statistical Terms (2003), Oxford University Press
Vardeman, Stephen B. (1986): The Legitimate Role of Inspection in Modern SQC.
The American Statistician, Vol. 40, No. 4.
Wheeler, Donald J. and David S. Chambers (1992): Understanding Statistical Process
Control, Second Edition, SPC Press, Inc., Knoxville, Tennessee.
Chapter 10
Measurement Systems Analysis
Raffaello Levi
Statistical methods play a key role in the analysis of measurement and testing
work performed within the framework of quality systems. The organisation of
measurement and testing procedures, as supported particularly by DOE, calls for
particular attention to a fundamental quality index related to both cost and benefit,
namely the uncertainty associated with the measurement under consideration.
Indication of uncertainty is mandated by current quality standards, starting from
ISO 9001 and, more specifically, ISO 17025, governing the management of testing and
measurement laboratories. Appreciation of the paramount importance of the subject led
major international organisations covering metrology and standardisation, such as
BIPM, IEC, IFCC, IUPAC, IUPAP and OIML, to draft and publish under the aegis of
ISO a fundamental reference text, the "Guide to the expression of uncertainty in
measurement", currently referred to as GUM and embodied in European standard
ENV 13005.
Statistical procedures dictated by the GUM cover a broad range of applications.
Besides defining such a delicate undertaking as the evaluation of measurement
uncertainty with a clear set of rules accepted worldwide, by no means a minor
achievement, they cater for the planning of measurement and testing work aimed at
specific levels of uncertainty, in order to avoid both failure to reach mandated
accuracy and costly overdesign. These methods - covering both specific metrological
work, such as calibration of sensors and instrument systems, and generic testing
work - deal with three main items, namely:
 Type A contributions to uncertainty, estimated according to statistical methods.
Besides managing random errors (always present in measurements) with such tools as
the normal and Student's distributions, these methods enable detection and
estimation of systematic effects (e.g. through tests of normality, of linearity, and
ANOVA), and furthermore proper treatment of the outliers and mavericks typically
associated with the ever increasing diffusion of electronic instrumentation. Their
inherently high sensitivity often leaves these instrument systems open to spurious
signals due to electromagnetic noise, leading to measurement incidents that must be
dealt with according to proper exclusion procedures;
 Type B contributions to uncertainty, assessed according to non-statistical methods
(mainly technical expertise and accumulated experience) and transformed into
equivalent variance components according to uniform, U-shaped or triangular
distributions selected according to experience;
 composition of the contributions mentioned above, and allotment of proper degrees
of freedom, leading to evaluation of the overall uncertainty in the form of a
confidence interval based on the normal or Student's distributions.
Categorisation and tabulation of the terms entering the measurement uncertainty
computation cater for the assessment and comparative evaluation of the contributions
pertaining to individual factors, according to an iterative computation routine (the
PUMA method, ISO 14253-2). An awkward part of uncertainty budget management, namely
assigning individual contributions to every uncertainty factor and suggesting which
factors are best acted upon in order to cut costs and/or measurement uncertainty if
need be, may thus be made straightforward.
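A minimal numerical sketch of the GUM-style combination described above, with illustrative numbers: a Type A standard uncertainty from repeated readings, a Type B contribution from an assumed uniform (rectangular) distribution, and their combination with a coverage factor k = 2. It deliberately ignores correlation between contributions and the effective degrees of freedom that a full uncertainty budget would include.

```python
import numpy as np

# Type A: standard uncertainty of the mean from repeated observations (illustrative data).
readings = np.array([10.021, 10.018, 10.025, 10.019, 10.022, 10.020])
u_A = readings.std(ddof=1) / np.sqrt(len(readings))

# Type B: a certificate half-width of 0.010, assumed uniform, converted to an
# equivalent standard uncertainty a / sqrt(3).
a_uniform = 0.010
u_B = a_uniform / np.sqrt(3)

# Combined standard uncertainty (uncorrelated contributions) and an expanded
# uncertainty with coverage factor k = 2 (roughly 95 % for a normal law).
u_c = np.sqrt(u_A**2 + u_B**2)
U = 2 * u_c
print(f"u_A = {u_A:.5f}, u_B = {u_B:.5f}, u_c = {u_c:.5f}, U (k=2) = {U:.5f}")
```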
Chapter 11
Design and Analysis of Experiments (DOE)
Henry Wynn.
Introduction.
There has been rapid growth in the application of experimental design to industry
since the early 1980s, largely as a component of quality improvement. This growth
was particularly rapid in the area associated with the name of Genichi Taguchi,
namely robust engineering design, but has spread to the use of experimental design
generally. Of course, DOE had already made major contributions in the areas of
factorial design and response surface methodology with the work of G. E. P. Box,
N. Draper and others. The robust design ideas have now been absorbed into the
mainstream alongside factorial design and response surface design, and to this mix
must be added developments in the optimal design of experiments, computer experiments
and simulation.
Factorial design.
This is the bedrock of the field. The basic idea is that, to estimate effects
(parameters) within the context of regression, it is not necessary to measure a
response at every level of every factor (explanatory variable). By careful choice of
configurations, that is to say an experimental design, much can be learned. Roughly
speaking, the more complex the model, the larger and more complex the experiment
needed.
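A small sketch of the idea, assuming a 2^3 full factorial in coded units and a simulated response: main effects and two-factor interactions are estimated by least squares from only eight runs. The response function and noise level are invented for illustration.

```python
import itertools
import numpy as np

# Coded 2^3 full factorial design: every combination of three factors at -1/+1.
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
A, B, C = design.T

# Invented response: factor A and the AB interaction matter, plus random noise.
rng = np.random.default_rng(4)
y = 50 + 4 * A + 2.5 * A * B + rng.normal(0, 1, len(design))

# Least-squares estimates of the intercept, main effects and two-factor interactions.
X = np.column_stack([np.ones(len(design)), A, B, C, A * B, A * C, B * C])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["mean", "A", "B", "C", "AB", "AC", "BC"], coef):
    print(f"{name:>4}: coefficient {b:6.2f}")   # effect = 2 * coefficient in coded units
```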
Response surfaces: design and models.
This term is usually taken to mean designs which lie outside the somewhat rigid
structures of traditional factorial design and may be more adventurous in layout:
centre points, star points, composites of different points, designs to guard against
hidden trends, designs for mixture experiments and so on.
Optimal design.
A contribution from the more mathematical wing of statistics was to set up
experimental design as a decision problem and to optimise it. J. Kiefer and co-workers
launched this field around 1960, but it took a number of years for the methods to move
out of academia into industry. Algorithms for optimal design are now available (as for
factorial design) in a number of packages. They are particularly useful for obtaining a
good experiment for a non-standard region of experimentation.
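The sketch below illustrates the D-optimality criterion in its simplest form, comparing log det(X'X) for two candidate six-run designs for a quadratic model in one factor; real optimal design software searches over a candidate set with exchange or similar algorithms rather than comparing two fixed designs as done here.

```python
import numpy as np

def log_d_criterion(x_points):
    """log det(X'X) for a second-order model in one factor; larger is better."""
    x = np.asarray(x_points, dtype=float)
    X = np.column_stack([np.ones_like(x), x, x**2])
    return np.linalg.slogdet(X.T @ X)[1]

# Two 6-run candidate designs on the interval [-1, 1]:
equispaced = np.linspace(-1, 1, 6)
ends_and_centre = np.array([-1, -1, 0, 0, 1, 1])   # classic pattern for a quadratic model
print("equispaced      :", round(log_d_criterion(equispaced), 3))
print("ends and centre :", round(log_d_criterion(ends_and_centre), 3))
```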
Robust Engineering Design.
The major impact of this area has been to help reduce variability by paying attention
not only to the mean response but also to the variability. Special types of
experimental design can be used to estimate both the mean response and the variability
due to so-called "noise" factors, which therefore needs to be controlled in some way
(using "control" factors). This is sometimes called the dual response surface method.
The model-based version models all factors and then propagates the variability through
the model analytically or via simulation.
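A minimal sketch of the simulation-based version of this idea: an invented transfer function in which the setting of a control factor changes how strongly a noise factor is transmitted to the response, so that comparing the simulated standard deviation at candidate settings identifies the more robust one.

```python
import numpy as np

def response(control, noise):
    """Illustrative transfer function: the control setting changes how strongly
    the noise variable is transmitted to the response."""
    return 20 + 3 * control + (2 - 1.5 * control) * noise

rng = np.random.default_rng(5)
noise_draws = rng.normal(0, 1, 10_000)          # "noise" factor we cannot control in use

for control in (-1.0, 0.0, 1.0):                # candidate settings of a "control" factor
    y = response(control, noise_draws)
    print(f"control = {control:+.1f}: mean = {y.mean():6.2f}, sd = {y.std(ddof=1):5.2f}")
```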
Computer experiments.
Many complex engineering products are designed using simulation packages. These may,
for example, solve banks of differential equations. Even though computers get faster,
a single simulation may still take hours. Design of experiments can therefore be used
to save computer runs. This has also led to the growth of more complex modelling,
including kernel methods, as opposed to the polynomial models of the usual response
surface method.
Chapter 12
Safety and reliability engineering
Chris McCollin, Maria Ramalhoto
Introduction
Safety in engineering is very often a broad subject of a hugely complex nature. In
many situations, however, it might be substantially improved if we better understood
the stochastic aspects of reliability, maintenance and control, and the interactions
among them usually present in the equipment failures that compromise safety.
Innovative ways of introducing that knowledge into the available safety frameworks
might prove useful.
Reliability analysis is a well-established part of most risk and safety studies. The
causal part of a risk analysis is usually accomplished by reliability techniques, like
failure mode and effects analysis (FMEA) and fault tree analysis. Reliability analysis
is often applied in risk analysis to evaluate the availability and applicability of safety
systems, ranging from single component safety barriers (e.g., valves) up to complex
computer based process safety systems.
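As a very small illustration of the kind of calculation that underlies fault tree and safety barrier evaluation, the sketch below combines component reliabilities in series and in parallel for a hypothetical sensor-plus-redundant-valve barrier; the reliability figures are invented and independence between components is assumed.

```python
import numpy as np

def series(*r):
    """Reliability of components in series: all must work."""
    return np.prod(r)

def parallel(*r):
    """Reliability of redundant components in parallel: at least one must work."""
    return 1 - np.prod([1 - x for x in r])

# Illustrative safety barrier: a sensor in series with two redundant shutdown valves.
r_sensor, r_valve = 0.98, 0.95
print(f"single valve   : {series(r_sensor, r_valve):.4f}")
print(f"with redundancy: {series(r_sensor, parallel(r_valve, r_valve)):.4f}")
```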
A wide range of standards with respect to reliability and safety have been issued.
Among them are British Standards BS 5760, MIL-STD (783, 882D, 756, 105D and others),
IEEE Std. 352, IEC 300 and so on. It is also interesting to note the relation between
the ISO 9000 and the IEC 300 series of standards.
As part of the formation of the European Union, a number of new EU directives have
been issued. Among these are the machine safety directive, the product safety
directive, the major hazards directive and the product liability directive. The
producers of equipment must, according to these directives, verify that their
equipment complies with those requirements. Reliability analysis and reliability
demonstration testing are necessary tools in the verification process. It has been
reported that in US courts, consultancy on these matters is becoming a booming
business.
Engineering design reliability
The areas of design which mainly affect reliability are reliability specification; the
initial systems design process incorporating feasibility and concept design; the
development process incorporating the main CAD/CAE activities including FMEA,
circuit design and simulation packages; the development process incorporating
prototype build; components and training.
Recent papers on design for reliability are given in the references. General approaches
to reliability development cover four areas:
 Costing, Management and Specification
 Design Failure Investigation Techniques (DFITs)
 Development Testing
 Failure Reporting
Skills for a statistician/engineer working in safety and reliability engineering
Aviation, naval and offshore industries, among others, have already realised the
important connection between maintenance and reliability and have implemented the
reliability centred maintenance (RCM) methodology. Identification of general and
specific industry-targeted skills is important, as is the presentation of a framework
to guarantee communication between different groups of stakeholders and to provide
up-to-date information on required skills.
Perspectives for safety and reliability engineering: trans-disciplinary issues
under the umbrella of the “Stochastics for the Quality Movement" Concept
Keeping in mind that Reliability is Quality over time and Maintenance is a Reliability
Function, an "Extended Quality Movement" integrating, among others, the Reliability
and Maintenance cultures makes sense. SQM (stochastics for the quality movement),
introduced in Ramalhoto (1999), embraces all the non-deterministic quantitative
methods relevant to industry and business practices. One of the aims is to supply a
complete body of trans-disciplinary and targeted up-to-date knowledge on theory and
practice, and to facilitate the development of synergies between them. That is a very
demanding and ambitious dream that has already been discussed among several members
of the pro-ENBIS project. A vision for an ideal targeted framework could be outlined
in the context of maritime safety.
Signature analysis and predictive maintenance.
It is very advantageous if future wear and failure can be detected before they occur.
This enables preventive action, such as maintenance, to be carried out, with cost
savings and the avoidance of possibly dangerous failure. Signature analysis is the
method of fault detection via a special pattern in, typically, a dynamic characteristic
such as noise, vibration or electrical output. Ideally every failure mode will have its
own special signature. Signal processing methods are typically used, such as
time-frequency plots based on wavelet analysis.
References
Akao, Y. (1990) Quality Function Deployment Integrating Customer Requirements
into Product Design. Productivity Press.
Andrews, J.D. and Dugan, J.B. (1999) Advances in Fault tree analysis. Proceedings of
the SARSS. pp10/1-10/12.
Bartlett, L.M. and Andrews, J.D. (1999) Comparison of variable ordering
heuristics/algorithms for binary decision diagrams. Proceedings of the SARSS.
pp12/1-12/15.
Braglia, M. (2000) MAFMA: multi-attribute failure mode analysis. International
Journal of Quality and Reliability Management, Vol. 17 No. 9, pp1017-1033.
British Standards Institution. (2000) BS EN ISO 9001: Quality Management Systems
- Requirements. London: British Standards Institution.
Cini, P.F. and Griffith, P. (1999) Designing for MFOP: towards the autonomous
aircraft. Journal of Quality in Maintenance Engineering. Vol. 5 No. 4, pp 296-306.
DEF-STAN-0041 Parts 1 to 5: MOD Practices and Procedures for R & M. HMSO.
Drake Jr., P. (1999) Dimensioning and Tolerancing Handbook McGraw-Hill New
York.
Edwards, I.J. (2000) The impact of variable hazard-rates on in-service reliability. 14th
ARTS Conference University of Manchester.
Fagan, M.E. (1976) Design and Code inspections to reduce errors in program
development. IBM Systems Journal 15(3) pp182-211.
Feynman, R.P, (1989) "What do you care what other people think?" Harper Collins.
Gall, B. (1999) An advanced method for preventing business disruption. Proceedings
of the SARSS. pp1/1-1/12.
Gastwirth, J. L. (1991) The potential effect of unchecked statistical assumptions.
Journal of the Royal Statistical Society. Series A Vol 154 part 1, pp121-123.
Goldstein, H., Rasbash, J., Plewis, I., Draper, D., Browne, W., Yang, M., Woodhouse,
G. and Healy, M.J.R. (1998) A User's Guide to MLWin. London: Institute of
Education.
Goodacre, J. (2000) Identifying Current Industrial Needs from Root Cause Analysis
activity 14th ARTS Conference University of Manchester 28th-30th November 2000.
Gray, C., Harris, N., Bendell, A. and Walker, E.V. (1988) The Reliability Analysis of
Weapon Systems. Reliability Engineering and System Safety. 21. pp 245-269.
Harvey, A.C. (1993) Time Series Models Harvester Wheatsheaf 2nd edition p285.
Ke, H. and Shen, F. (1999) Integrated Bayesian reliability assessment during
equipment development. International Journal of Quality and Reliability
Management, Vol. 16 No. 9, pp892-902.
Kenett R., Ramalhoto M. F., Shade J. (2003). A proposal for management
knowledge of stochastics in the quality movement. In T.Bedford & P. H. A. L.
M. van Gelder (Eds), Safety & Reliability - ESREL 2003, Vol. 1, pp. 881-888,
Rotterdam: Balkema.
McCollin, C. (1999) Working around failure. Manufacturing Engineer, Volume 78
No. 1. February 1999. pp37-40.
Morrison, S.J. (2000) Statistical Engineering Design. The TQM magazine. Vol. 12
No. 1 pp26-30.
Morrison, S.J. (1957) Variability in Engineering Design. Applied Statistics, Vol 6.
Pp133-138.
O’Connor, P.D.T. (2000) Commentary: reliability – past, present and future. IEEE
Transactions on Reliability Vol. 49, issue 4, pp335-341.
Parry-Jones, R. (1999) Engineering for Corporate Success in the New Millennium.
Speech to the Royal Academy of Engineering, London 10th.
and A new way to teach statistics to engineers. To be reported in MSOR Connections.
Paulk, M. et al. (editors) (1995) The Capability Maturity Model Guidelines for
improving the software process. Addison-Wesley.
Ramalhoto M. F. (1999). A way of addressing some of the new challenges of
quality management. In: G. I. Schueller and P. Kafka (Eds) Safety & Reliability -
ESREL 1999, Vol. 2, pp. 1077-1082, Rotterdam: Balkema.
Reason, J. (1999) Managing the risks of organisational accidents Ashgate, Aldershot.
pp159.
Reunen, M., Heikkila, J. and Hanninen, S. (1989) On the Safety and Reliability
Engineering during Conceptual Design Phase of Mechatronic Products in Reliability
Achievement The Commercial Incentive Ed. T. Aven Elsevier.
Sankar, N.R. and Bantwal, S.P. (2001) Modified approach for prioritization of failures
in a system failure mode and effects analysis. International Journal of Quality and
Reliability Management, Vol. 18 No. 3, pp324-335.
Sexton, C.J., Lewis, S.M. and Please, C.P. (2001) Experiments for derived factors
with application to hydraulic gear pumps. Journal of the Royal Statistical Society.
Series C (Applied Statistics) Vol 50. Part 2. pp155-170.
Silverman, M. Why HALT cannot produce a meaningful MTBF number and why this
should not be a concern. http://www.qualmark.com/hh/01MTBFpaper.htm/
Strutt, J.E. (2000) Design for Reliability: A Key Risk Management Strategy in
Product Development. 14th ARTS Conference University of Manchester.
Swift, K.G., Raines, M and Booker, J.D. (1999) Designing for Reliability: a
probabilistic approach. Proceedings of the SARSS. pp3/1-3/12.
Thompson, G. A. (2000) Multi-objective approach to design for reliability. 14th ARTS
Conference University of Manchester.
Woodall, A., Bendell, A. and McCollin, C. Results of the Engineering Quality Forum
Survey to establish ongoing requirements for education and competency for engineers
in the field of quality management. Available from the IMechE. Website:
http://www.imeche.org.uk/manufacturing/quality%5Fand%5Fengineering%5Fsurvey.
htm/
Chapter 13
Multivariate analysis focussing on Multiscale modelling
Marco S. Reis and Pedro M. Saraiva
Classical modelling approaches typically use (either explicitly or implicitly) a
single time scale (usually that corresponding to the lowest acquisition rate, or that
of the most important set of variables) on which the whole analysis is based,
according to the objectives to be attained. This happens, e.g., in state-space,
time-series, mechanistic and regression models, and this limitation is passed on to
other tasks that rely on them, such as model parameter estimation (where such
structures are used along with properly generated data and a quality criterion of fit
that measures the adequacy of predictions, usually at the lowest acquisition rate), as
well as, for instance, data rectification, fault detection and diagnosis, process
monitoring, optimization and design. However, in most of the application domains found
in practice, the relevant phenomena take place simultaneously at different time scales.
In more technical terms, this means that events usually occur at different locations
and with different localizations in the time (and frequency) domain. Thus, techniques
that focus the analysis only on the finest scale, establishing their quality criteria
in terms of the fastest dynamics, are likely to neglect important information that
corresponds to other localizations in the time-frequency plane.
In this regard, tools derived from wavelet theory (Mallat, 1998) have been finding
wide acceptance in applications where the data typically present multiscale features,
notably in signal de-noising and compression (Donoho and Johnstone, 1992; Vetterli and
Kovačević, 1995), but other applications can also be mentioned, such as process
monitoring (Bakshi, 1998; Ganesan et al., 2004) and system identification (Tsatsanis
and Giannakis, 1993). Wavelet-based methodologies enable the incorporation of the
concept of scale right into the core of data analysis tasks, constituting an adequate
mathematical language for describing multiscale phenomena.
Much of the success of wavelets arises from the efficiency with which they describe
data composed of events having different localization properties in time or frequency.
A proper analysis of such signals with the Fourier transform would require, for
instance, a large number of coefficients, meaning that it is not an "adequate"
language for a compact translation of the signal's key features into the transform
domain. This happens because the form of the time/frequency windows (Vetterli and
Kovačević, 1995; Mallat, 1998) associated with its basis functions does not change
across the time/frequency plane so as to cover effectively and efficiently the
localized high-energy zones of the several features present in the signal. Therefore,
to cope with such multiscale features, a more flexible tiling of the time/frequency
space is required, and this can be provided by the wavelet basis functions, whose
coefficients constitute the wavelet transform. In practice, it is often the case that
signals are composed of short-duration events of high frequency and long-duration
events of low frequency, and this is exactly the kind of tiling that a wavelet basis
provides, since the relative frequency bandwidth of these basis functions is constant
(i.e., the ratio between a measure of the size of the frequency band and the mean
frequency, Δω/ω, is constant for each wavelet function), a property also referred to
as a "constant-Q" scheme.
Therefore, by developing approaches that integrate wavelet theory concepts with
classical multivariate methodologies, one can handle both the data features arising
from dimensionality and correlation issues and those arising from multiscale
characteristics. As examples of efforts directed towards this goal, one can mention
Multiscale Principal Component Analysis (MS-PCA; Bakshi, 1998), Multiscale Regression
(Depczynsky et al., 1999) and Multiscale Classification (Walczak et al., 1996).
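The following self-contained sketch conveys the multiscale idea using a hand-coded Haar wavelet transform (rather than a dedicated wavelet library): a noisy signal containing both a sharp step and a slow oscillation is decomposed, the detail coefficients are soft-thresholded, and the signal is reconstructed. The signal, noise level and threshold rule are all illustrative.

```python
import numpy as np

def haar_decompose(x, levels):
    """Orthonormal Haar decomposition of a signal of length 2^J."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        details.append(d)          # finest scale first
        approx = a
    return approx, details

def haar_reconstruct(approx, details):
    for d in reversed(details):    # rebuild from the coarsest scale upwards
        up = np.empty(2 * len(approx))
        up[0::2] = (approx + d) / np.sqrt(2)
        up[1::2] = (approx - d) / np.sqrt(2)
        approx = up
    return approx

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 256)
signal = np.where(t < 0.5, 0.0, 2.0) + 0.3 * np.sin(8 * np.pi * t)   # step + slow oscillation
noisy = signal + rng.normal(0, 0.25, t.size)

approx, details = haar_decompose(noisy, levels=4)
threshold = 0.25 * np.sqrt(2 * np.log(t.size))       # universal-threshold style rule
details = [np.sign(d) * np.maximum(np.abs(d) - threshold, 0) for d in details]  # soft threshold
denoised = haar_reconstruct(approx, details)
print("residual sd before/after denoising:",
      round(np.std(noisy - signal), 3), round(np.std(denoised - signal), 3))
```

The step and the oscillation live at different localizations in the time-frequency plane, which is exactly why the wavelet representation keeps both while suppressing the noise.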
The comments above focus on the development of multiscale (or multiresolution)
approaches for conducting classical data analysis tasks, approaches which naturally
integrate the multiscale nature of the underlying phenomena through the incorporation
of a proper scale variable right into the systems modelling, analysis and optimization
paradigms. Multiscale approaches have been developed in several widely different
application domains found in industry.
References
BAKSHI, B.R.  Multiscale Principal Component Analysis with Application to
Multivariate Statistical Process Monitoring. AIChE Journal. 44:7 (1998), p. 1596 –
1610.
DEPCZYNSKY, U.; JETTER, K.; MOLT, K.; NIEMÖLLER, A. — The Fast Wavelet
Transform on Compact Intervals as a Tool in Chemometrics – II. Boundary Effects,
Denoising and Compression. Chemometrics and Intelligent Laboratory Systems. 49
(1999): 151-161.
DONOHO, D.L.; JOHNSTONE, I.M. — Ideal Spatial Adaptation by Wavelet Shrinkage.
Department of Statistics, Stanford University (1992). Technical report.
GANESAN, R., DAS, T. K., AND VENKATARAMAN, V. — Wavelet Based Multiscale
Process Monitoring - A Literature Review. IIE Trans. on Quality and Reliability Eng.,
36 (9) (2004).
MALLAT, S. — A Wavelet Tour of Signal Processing. San Diego [etc.]: Academic
Press, 1998.
TSATSANIS, M.K.; GIANNAKIS, G.B. — Time-Varying Identification and Model
Validation Using Wavelets. IEEE Transactions on Signal Processing. 41:12, (1993):
3512-3274.
VETTERLI, M.; KOVAČEVIĆ, J. — Wavelets and Subband Coding. New Jersey:
Prentice Hall, 1995.
WALCZAK, B.; VAN DEN BOGAERT, V.; MASSART, D.L. — Application of Wavelet
Packet Transform in Pattern Recognition of Near-IR Data. Anal. Chem. 68 (1996):
1742-1747.
Chapter 14
Simulation
David Rios Insua, Jorge Muruzabal, Jesus Palomo,
Fabrizio Ruggeri, Julio Holgado and Raul Moreno
Purpose
Once we have realised that a given system is not operating as desired, we look
for ways of improving it. Sometimes it is possible to experiment with the real system
and, through observation and the aid of Statistics, reach valid conclusions towards
system improvement. Experimenting with a real system may, however, entail ethical
and/or economic problems, which may be avoided by dealing with a prototype, a physical
model. Sometimes it is not feasible or possible to build a prototype. Yet we may
obtain a mathematical model describing, through equations and relations, the essential
behaviour of the system. Its analysis may sometimes be done through analytical or
numerical methods. But the model may be too complex to be dealt with in these ways. In
such cases, we may use Simulation. Essentially, Simulation consists of (i) building a
computer model that describes the behaviour of the system; and (ii) experimenting with
this model to reach conclusions that support decision making.
There are several key concepts, methods and tools from the world of Simulation
which may prove useful to the industrial statistics practitioner. The basic four-step
process in any simulation experiment is as follows:
1. Obtain a source of random numbers.
2. Transform them into inputs to the model.
3. Transform these inputs into outputs of the model.
4. Analyse the outputs to reach conclusions.
These steps have been successfully applied to many practical situations to help solve
real problems, for example modelling a workflow line within the digitalisation
industry.
It is important to consider the random number generation carefully. It is not always
sensible to use the standard generators in standard software packages. Checks on
randomness may show, for example, that there are significant patterns in the
occurrence of particular pairings of numbers. Other properties are relevant as well:
computational efficiency (e.g. speed and low memory consumption), portability,
implementation simplicity, reproducibility, mutability and a long enough period.
Good sources of random number generators are at Statlib at http://www.stat.cmu.edu/.
Other important sites in relation to random number generation are L'Ecuyer's page
at http://www.iro.umontreal.ca/lecuyer and http://random.mat.sbg.ac.at. A set of
statistical tests for random number generators is available at
http://csrc.nist.gov/rng/rng5.html.
The next step in a simulation experiment is to convert the random numbers into inputs
appropriate for the model at hand. The most popular traditional method for drawing
from a distribution F is based on inversion, i.e. generating from a uniform
distribution and computing the inverse of F at the drawn value. In Bayesian statistics
the most important techniques used to simulate from a posterior distribution are the
Markov chain Monte Carlo (MCMC) ones.
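A minimal sketch of the inversion step for a case where the inverse of F is available in closed form, here an exponential distribution; the rate parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.uniform(size=100_000)       # step 1: a source of uniform random numbers

# Step 2: inversion for an exponential distribution with rate lambda:
# F(x) = 1 - exp(-lambda x)  =>  F^{-1}(u) = -ln(1 - u) / lambda
lam = 2.0
x = -np.log(1 - u) / lam

print("sample mean (theory 1/lambda = 0.5):", round(x.mean(), 4))
print("sample sd   (theory 1/lambda = 0.5):", round(x.std(ddof=1), 4))
```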
As far as software is concerned, the Numerical Recipes (Press et al., 1992, or
http://www.nr.com) include code to generate from the exponential, normal, gamma,
Poisson and binomial distributions, from which many other distributions may be
sampled based on the principles outlined above. Many generators are available at
http://www.netlib.org/random/index.html. WinBUGS and OpenBUGS are downloadable from
http://www.mrc-bsu.cam.ac.uk/bugs, facilitating MCMC sampling in many applied
settings. Another useful library is GSL, available at http://www.gnu.org/software/gsl.
The third step in a simulation process consists of passing the inputs through the
simulation model to obtain outputs to be analysed later. Monte Carlo simulation is a
key method in industrial statistics. We may use MC methods for optimisation purposes
(say, to obtain an MLE or a posterior mode); for resampling purposes, as in the
bootstrap; within MC hypothesis tests and confidence intervals; for computations in
probabilistic expert systems, ... the key application being Monte Carlo integration,
especially within Bayesian statistics.
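A short sketch of Monte Carlo integration in this spirit: estimating an expectation under a gamma distribution by simple averaging, together with its standard error, and comparing with the value known in closed form for this particular example.

```python
import numpy as np

# Estimate E[exp(-theta)] where theta ~ Gamma(shape=3, rate=2), an integral
# that Monte Carlo handles by simple averaging of simulated draws.
rng = np.random.default_rng(8)
theta = rng.gamma(shape=3.0, scale=1.0 / 2.0, size=200_000)   # numpy uses scale = 1/rate
g = np.exp(-theta)
estimate = g.mean()
std_error = g.std(ddof=1) / np.sqrt(theta.size)

# Exact value from the moment generating function: E[e^{-theta}] = (2 / 3)^3
print(f"MC estimate = {estimate:.4f} +/- {std_error:.4f}, exact = {(2/3)**3:.4f}")
```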
The final stage of a simulation experiment consists of the analysis of the output
obtained through the experiment. To a large extent, we may use standard estimation
methods, point estimates and precision measures, with the key observation that the
output might be correlated. Clearly, as we deal with stochastic models, each repetition
of the experiment will lead to a different result, provided that we use different seeds
to initialise the random number generators at each replication. The general issue here
is to provide information about some performance measure of our system, e.g.
unbiasedness, mean square error and consistency in point estimation. In MCMC
simulation, within Bayesian statistics, convergence detection is an important issue,
for which CODA (http://www.mrc-bsu.cam.ac.uk/bugs/classic/coda04/readme.shtml) has
been developed.
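To illustrate why correlated output needs care, the sketch below applies the batch means method (one common choice among several) to an artificial autocorrelated AR(1) output stream: the naive confidence interval that ignores the correlation is markedly narrower than the batch means interval.

```python
import numpy as np

def batch_means_ci(output, n_batches=20, z=1.96):
    """Approximate 95% CI for the mean of a correlated simulation output
    using non-overlapping batch means."""
    output = np.asarray(output, dtype=float)
    usable = len(output) - len(output) % n_batches
    batches = output[:usable].reshape(n_batches, -1).mean(axis=1)
    centre = batches.mean()
    half = z * batches.std(ddof=1) / np.sqrt(n_batches)
    return centre - half, centre + half

# Illustrative autocorrelated output: an AR(1) process around a true mean of 5.
rng = np.random.default_rng(9)
y = np.empty(20_000)
y[0] = 5.0
for t in range(1, len(y)):
    y[t] = 5.0 + 0.8 * (y[t - 1] - 5.0) + rng.normal(0, 1)

naive_half = 1.96 * y.std(ddof=1) / np.sqrt(len(y))   # ignores correlation, too narrow
print("naive CI      :", (round(y.mean() - naive_half, 3), round(y.mean() + naive_half, 3)))
print("batch-means CI:", tuple(round(v, 3) for v in batch_means_ci(y)))
```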
Finally, there are some tactical issues in simulation: how to design simulation
experiments, how to combine simulation with optimisation, and how to achieve variance
reduction.
References
Balci, O. (ed) (1994). Simulation and Modelling, Annals of Operations
Research, 53.
Banks, J. (1998). Handbook of Simulation, Wiley, New York.
Bays, C. and Durham, S. (1976). Improving a poor random number generator,
ACM Transactions in Mathematical Software, 2, 59-64.
Bratley, P., Fox, B. and Schrage, L. (1987). A Guide to Simulation, Springer,
New York.
Chick, S.E. (2001). Input Distribution Selection for Simulation Experiments:
Accounting for Input Uncertainty, Operations Research, 49, 744-758.
Extend, official web page. http://www.imaginethatinc.com
Fishman, G. S. (1996). Monte Carlo: Concepts, Algorithms and Applications,
Springer, New York.
French, S. and Rios Insua, D. (2000). Statistical Decision Theory, Arnold,
London.
Fu, M.C. (1994). Optimization via simulation, Annals of Operations Research, 53,
199-248.
Gamerman, D. (1997). Markov Chain Monte Carlo: Stochastic Simulation
for Bayesian Inference, Chapman & Hall, New York.
Imaginethatinc (2000). Extend Manual, Imagine That Inc.
Kleijnen, J.P.C. (1987). Statistical Tools for Simulation Practitioners, Dekker,
New York.
Knuth, D. (1981). The Art of Computer Programming, Vol. 2: Seminumerical
Algorithms, Addison Wesley, New York.
Law, A. M. and Kelton, W. D. (1991). Simulation Modeling and Analysis,
McGraw-Hill, New York.
L'Ecuyer, P. (1990). Random numbers for simulation, Communications
ACM, 33, 85-97.
L'Ecuyer, P. (1994). Uniform random number generators, Annals of Operations
Research, 53, 77-120.
L'Ecuyer (1998). Random Number Generators and Empirical Tests, Lecture
Notes in Statistics 127, Springer-Verlag, 1998, 124-138.
L'Ecuyer, P. (2001). Software for Uniform Random Number Generation:
Distinguishing the good and the bad, Proceedings of the 2001 Winter
Simulation Conference, IEEE Press, Dec, 95-105.
L'Ecuyer, P. and Granger-Piche, J. (2003). Combined Generators with
Components from Different Families, Mathematics and Computers in
Simulation, 62, 395-404.
L'Ecuyer, P., Simard, R., Chen, E.J. and Kelton, W.D. (2002). An object-oriented
random number package with many long streams and substreams, Operations Research,
50, 1073-1075.
Lehmer, D. H. (1951). Mathematical methods in large-scale computing
units, Proceedings of the Second Symposium on Large Scale Digital
Computing Machinery, 141-146, Harvard University Press, Cambridge.
Lewis, P.A., Goodman, A.S. and Miller, J.M. (1969). A pseudo-random
number generator for the system/360, IBM Systems Journal, 8, 136-143.
Matsumoto, M. and Nishimura, T. (1998). Mersenne twister: A 623-dimensionally
equidistributed uniform pseudo-random number generator, ACM
Transactions on Modelling and Computer Simulation, 8, 3-30.
Neelamkavil, F. (1987). Computer Simulation and Modelling, Wiley, New
York.
Niederreiter, H. (1992). Random Number Generation and Quasi-Monte
Carlo Methods, SIAM, Philadelphia.
Park, S. and Miller, K. (1988). Random number generators: good ones are
hard to find, Communications ACM, 31, 1192-1201.
Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (1992).
Numerical recipes in C, Cambridge University Press, Cambridge.
R, official web page. http://www.r-project.org
RAND Co. (1955). A Million Random Digits with 100000 Normal Deviates,
Free Press.
Rios Insua, D., Rios Insua, S. and Martin, J. (1997). Simulacion, Metodos
y Aplicaciones., RA-MA, Madrid.
Ripley, B. (1987). Stochastic Simulation, Wiley, New York.
Robbins, H. and Monro, S. (1951). A stochastic approximation method,
Annals of Mathematical Statistics, 22, 400-407.
Ross, S. (1991). A Course in Simulation, MacMillan, New York.
Rubinstein, R. and Mohamed, B. (1998). Modern Simulation and Modeling,
Wiley, New York.
Schmeiser, B. (1990). Simulation Methods, in Stochastic Models (Heyman
and Sobel eds), North Holland, Amsterdam.
Schrage, L. (1979). A more portable FORTRAN random number generator,
ACM Transactions on Mathematical Software, 5, 132-138.
Spiegelhalter, D., Thomas, A., Best, N. and Gilks, W. (1994). BUGS:
Bayesian inference using Gibbs sampling, version 0.30, MRC Biostatistics Unit,
Cambridge.
Tanner, M.A. (1996). Tools for Statistical Inference: Methods for the Exploration
of Posterior Distributions and Likelihood Functions, 3rd ed.,
Springer, New York.
Whitt, W. (1989). Planning queueing simulations, Management Science,
35, 1341-1366.
Chapter 15
Communication
Tony Greenfield and John Logsdon
You have just finished a project. Why did you start it? Was it just a topic that
grabbed your interest with no other interested people in mind? Did you come across a
problem, either published or in conversation with colleagues, on which you believed
you could cast more light? Did some colleagues offer you a problem, on which they
had been working, in their belief that you could take the solution further forward? Or
was it a problem that arose in a business or industrial context and for which managers
sought your help? Will the solution benefit a small special interest group, or will it be
of wider interest: to the whole industry, to the general public, to financiers, to
government?
The reason for your project’s existence and its origin will determine the style, the
length, the medium for its onward communication. Perhaps your purpose will specify
several styles, several lengths and several media for the promotion of your work.
Your first thought on completing a project may be of a peer-reviewed paper in an
academic journal: a publication to augment your CV, to tell other workers in your
field that you are with them, that you are as clever. But, in this report, we are not
so much concerned with personal career enhancement as with promoting the vision of
ENBIS:
To promote the widespread use of sound science driven, applied statistical
methods in European business and industry.
Different styles and media are required for:
1. News items for popular newspapers
2. Feature articles for popular newspapers
3. News items for technical magazines
4. Feature articles for technical magazines
5. Articles for company house journals
6. Internal technical reports for companies
7. Promotional material for companies
8. Documentary radio scripts
9. Documentary television films
10. Company internal memoranda
11. Company training
12. Training course for wider industrial audiences
13. Short seminars
14. Public lectures
15. Posters for conferences
16. Platform presentations for conferences
17. Web pages
Each of these requires specific details in terms of the written and spoken word and of
printed and projected visual images, of 3-D models, of computer programs.
Throughout, it is vital to keep to the fore the questions:
 Why was this work done?
 For whom was it done?
 To whom do you want to communicate information about the work?
 Why would they be interested?
 What information for what audiences?
 Who may be the beneficiaries of the work?
It is also important to keep communication in mind from the start of a project:
 Who originated it?
 What exchange, style and content of memoranda were needed to clarify the purpose of the project?
 What communication measures were needed to establish high quality and timely data collection?
 What support was needed from colleagues or specialists?
 What progress memoranda and reports were written, and for whom?
References
Altman D G, Gore S M, Gardner M J, Pocock S J, (1983), ‘Statistical guidelines for
contributors to medical journals’. BMJ, 286, 1489-1493.
Good guidance for presentation of statistics in all subjects, not only medicine.
Barrass R (1978) Scientists must write. Chapman and Hall, London.
Blamires H (2000) The Penguin Guide to Plain English. Penguin, London.
Cooper B M (1975) Writing Technical Reports. Penguin, London.
Cresswell J (2000) The Penguin Dictionary of Clichés. Penguin, London.
Ehrenberg A S C (1982) A Primer in Data Reduction. Wiley, London.
Guidance for presentation of data in tables.
Finney D J (1995), Statistical science and effective scientific communication. Journal of
Applied Statistics, Vol.22 (2), pp 193-308.
O’Connor M, Woodford F P (1978) Writing Scientific Papers in English, Pitman Medical,
London.
Kirkman J (1992) Good Style: Writing for Science. E & FN Spon, London.
Lindsay D (1997) A Guide to Scientific Writing. Longman, Melbourne.
Partridge E (1962) A Dictionary of Clichés. Routledge and Kegan Paul, London.
Tufte E R, (1997) Visual Explanations, Graphics Press, Connecticut.
Graphics for quantities, evidence and narrative.
Pocket Style Book. The Economist, London.
Guide to English Style and Usage. The Times, London.
Chapter 16
Summary, cross-fertilisation and future directions
Shirley Coleman and Tony Fouweather
Summary and Conclusion
The pro-ENBIS project has been enjoyable and energising. Experts from many fields,
institutions and locations have offered their knowledge of modern business and
industrial statistics to give a concise and easily readable summary of the current state
of their science.
Future directions
Following pro-ENBIS, the partners and members are in an excellent position to direct
their shared knowledge and vast experience towards improvements in the theory and
applications of statistics. Traditional barriers between many specialties have been
responsible for slow development in those specialties. The best example was the slow
uptake of agricultural-style designed experiments in manufacturing industries.
Parochial prejudices in different sectors are evident. For example, in the non-manufacturing sector, designed experiments are called conjoint analysis.
Nomenclature is completely different. Their ‘part-worths’, for instance, are effectively
the same as ‘factor effects’ (van Lottum, 2003). The exploration of established
methods of experimental design in non-manufacturing and service sectors could
reveal powerful techniques for marketing, retail and other commercial processes.
There are many other examples where different sectors have different notations for
similar analysis methods. For example, semi-variograms used by soil scientists can be
just as useful in survey sampling in other specialties and in situations where time
replaces distance as the population dimension (Coleman et al, 2001). The cross-fertilisation of these ideas will be very fruitful and will be aided by the close
partnerships developed during the pro-ENBIS project.
Applying the wider ideas of statistical thinking from industrial statistics to
service industries including health
Business and industry are increasingly aware of the benefits of statistical methods;
continuous improvement through six-sigma is popular; the opportunities to apply
these methods have become more common. Many people in service industries, as well
as in manufacturing industries, now recognise the huge benefits that can be gained.
These methods apply equally well to service industries as to those industries, such as
steel production, where they have been used for a long time.
The DMAIC method, promoted in six-sigma, is a logical procedure for using
statistical methods and tools. It allows any process, such as the issuing of invoices,
or the discharge of patients from a hospital, to be investigated and improved in the
same way as an industrial production line. Any process in a service industry can be
improved and reap the same benefits as a process in a factory. The DMAIC scheme
logically addresses any problem by first defining what the problem is, deciding on
what measure will adequately describe the problem and then measuring the extent of
the problem. The next logical steps are to analyse the results and to improve the
process before finally controlling the process to hold any gains.
Over recent years there has been an increasing readiness to use quantitative methods
to solve problems in new application areas as well as in the more traditional areas.
The benefits are becoming apparent to practitioners in many areas where these
methods have not previously been tried, such as in the financial markets.
Companies now realise the importance of good data collection, database management
and data mining methods to monitor their businesses and to improve efficiency and
hence to increase profits. Programmers realise the power of company data in the
application of six sigma methodology to continuous improvement within their
companies. They are writing clever software to handle the data. Statistical software is
becoming more affordable especially for SMEs. Many SMEs now realise the benefits
of having in-house six sigma practitioners, trained to use the statistical techniques and
software. Companies are starting to see that the cost of training a six sigma black belt
will save them money in the long term. A company's own in-house statistical
specialist will reduce, or even obviate, the need to hire a consultant. This assumes that
the six sigma black belt will have a sound understanding of statistics. Standardisation
of six sigma curricula was an area of great interest in pro-ENBIS and the discussion
will continue in ENBIS.
Statistics is becoming more and more mainstream as businesses see the benefits and
opportunities that it brings. Even the use of basic level statistics, such as charts, to
illustrate performance or down time can have a dramatic effect on the morale of the
workforce and on the efficiency of the process. The democratisation of statistics,
which is enhanced with six sigma and other quality improvement initiatives, is good
in that it raises awareness of the power of numbers and quantitative analysis. On the
other hand, it is dangerous in that the finer detail may be overlooked and this can have
serious consequences in some, perhaps rare, circumstances (Burnham, 2004).
Any improvement in a business is vital in the face of increasing
competition. Any way to sharpen the competitive edge can make the difference
between survival and closure for the company. Statistical techniques can give this
edge to a company. A statistical practitioner who applies the techniques rigorously
can make a process more efficient so that the company will beat its competition on
such things as product quality, production costs and delivery times.
The health sector has great potential for making deeper use of statistical thinking.
In the UK, the National Health Service (NHS) is Britain’s largest employer. It
promotes scientific research at all levels but has been slow to take up modern
statistical methods for quality improvement. There is now a new initiative to promote
statistical thinking. The problems of quantifying quality are particularly relevant in
this sector. Some six sigma projects have shown the benefits of logical problem
solving techniques but also show the knock-on effects of making changes. For
example, projects aimed at reducing the length of stay in hospital have the knock-on
effect of more need for health services outside the hospital, and recently the 'hot-bedding' resulting from shorter stays in hospital has been accused of increasing cases
of MRSA infection.
Interest in SPC in the NHS is strong at the moment, for example Regional Strategic
Health Authorities are active in providing seminars to review the possibilities for SPC
in the NHS both as part of a six sigma campaign and as a stand-alone skill set. There
are vast and complex opportunities for data analysis and problem solving in the health
sector and as health care is needed in all European countries, there is ample scope for
joint projects across the pro-ENBIS partnership.
pro-ENBIS has been a great support for statistical practitioners throughout Europe.
Contributions are invited to this state of the art report and can be sent to the pro-ENBIS co-ordinators (Shirley.Coleman@ncl.ac.uk and Tony.Fouweather@ncl.ac.uk)
and via the Enbis discussion forum.
References
Burnham, S. C. (2004) ‘Democratization of statistics’, dissertation for MSc University
of Newcastle, supervisor S.Y.Coleman, passed with Distinction.
Coleman, S.Y., C. Faucherand and A. Robinson (2001) ‘Pipeline inspection with
sparse sampling’, poster presented at RSS Spatial Statistics conference in Glasgow
Van Lottum, C.E. (2003) ‘Opportunities for the application of quantitative
management and industrial statistics’, dissertation for MSc University of Newcastle,
supervisor S.Y.Coleman, passed with Distinction.