Copyright
by
Yu-Ting Lin
2005
The Dissertation Committee for Yu-Ting Lin Certifies that this is the
approved version of the following dissertation:
Cost-Effective Designs of Field Service for Electronic Systems
Committee:
Anthony P. Ambler, Supervisor
Baxter F. Womack
Magdy Abadir
Margarida Jacome
Nur Touba
Ross Baldick
Cost-Effective Designs of Field Service for Electronic Systems
by
Yu-Ting Lin, B.S.M.E.; M.S.E.E.
Dissertation
Presented to the Faculty of the Graduate School of
The University of Texas at Austin
in Partial Fulfillment
of the Requirements
for the Degree of
Doctor of Philosophy
The University of Texas at Austin
May 2005
To my family
Acknowledgements
I would like to express my gratitude to my advisor, Dr. Anthony P.
Ambler, for his guidance and help during my time at the University of Texas at
Austin. I would also like to thank Dr. Baxter F. Womack, Dr. Magdy Abadir, Dr.
Margarida Jacome, Dr. Nur Touba, and Dr. Ross Baldick for being my
dissertation committee members.
I am grateful to my colleagues at the Center for Effective Design and
Manufacture: Dr. David Williams, Il-Soo Lee, Shu Wang, Huan Zhang, Jaehoon
Jeong, and Feng Lu for their comments and invaluable suggestions on my
research. I want to especially thank Dr. David Williams for many useful
discussions and assistance, and for answering my many questions.
Finally, I would like to thank my parents and family for their support and
encouragement. They have always believed in my success and always been there
for me.
Yu-Ting Lin
The University of Texas at Austin
May 2005
Cost-Effective Designs of Field Service for Electronic Systems
Publication No._____________
Yu-Ting Lin, Ph.D.
The University of Texas at Austin, 2005
Supervisor: Anthony P. Ambler
Field service is a complex process that involves sending technicians to
perform repair tasks on consumer products. It is responsible for having failed
products fixed, achieving high customer satisfaction, and keeping the service
cost low. The costs of field service increase significantly due to the rule of ten.
Moreover, field service is an important corporate activity, but its cost is often
not well understood. Field repair service is the major part of field service, and
it is well known that field repair consumes a vast amount of money. It is a
series of diagnostic steps that decide whether failed products are to be repaired
or discarded. Minimizing the field repair cost is particularly critical to the
global competitiveness of an electronic company.
Trading off the significance of performance impact against the
manageability of problem complexity as a first step, we focus on two
system-design-level problems, alternative maintenance policies and field repair
service, in this dissertation. Exploiting knowledge-based technology as well as
multi-objective analysis, we develop a decision tree selecting model (DTSM)
for alternative maintenance policies and, further, design a weight-based
maintenance policy (WBMP) for personal computer (PC) systems. At the core
of the knowledge-based policy selecting are decision tree representation
models. This design identifies maintainability problems early in the system
design and selects economic maintenance policies for electronic systems. The
key of the WBMP system is to use a product's status as a guide for performing
maintenance tasks. A simulation model based on a field service operation is
built to estimate the value of the WBMP system. Test runs of DTSM and
WBMP over field-service-compatible environments demonstrate both the
feasibility and the potential effectiveness for applications.
For field repair test, we present a decision tree diagnosis tool to reduce the
cost of test. We also design a cost-effective approach for overall system-level
test, which incorporates a manufacturing system test model, the field failure
rate analysis, and a field repair model into a test cost model. The optimized
system design parameters from the cost model can be used to reduce the cost
of system-level test.
Table of Contents
List of Tables.......................................................................................................... xi
List of Figures .......................................................................................................xii
Chapter 1: Introduction .......................................................................................... 1
1.1 Challenges and Significance of Field Service.......................................... 2
1.2 Literature Survey...................................................................................... 5
1.3 Scope of Research .................................................................................... 6
1.4 Thesis Organization................................................................................ 10
Chapter 2: Test Economics of Field Service........................................................ 11
2.1 Field Service Model ............................................................................... 11
2.2 Field Repair Service Model.................................................................... 13
2.3 Summary ................................................................................................ 17
Chapter 3: A Decision Tree Selecting Model for Alternative Maintenance
Policies ......................................................................................................... 18
3.1 Representation Knowledge of Policy Selecting ..................................... 19
3.2 Decision Tree Selecting Model for Alternative Maintenance Policies .. 24
3.2.1 Weight Setting Module .............................................................. 26
3.2.2 Decision Making Module........................................................... 31
3.2.3 Sensitivity Analysis Module ...................................................... 36
3.3 Mathematical Models of Decision Tree.................................... 39
3.3.1 Model of Top-Down Decision Tree ........................................... 39
3.3.2 Model of Logical Decision Tree ................................................ 40
3.4 Top-Down Decision Tree as a Special Case of Logical Decision Tree . 41
3.4.1 Proof by Construction ................................................................ 42
3.4.2 An Example of Proof by Construction........................ 48
3.4.3 Model Conversion of Top-Down Decision Tree to Logical
Decision Tree ............................................................................. 51
3.5 Integration of DTSM .............................................................................. 54
3.6 Example Cases of DTSM ....................................................................... 55
3.6.1 Example Case............................................................................. 55
3.6.2 Results ........................................................................................ 60
3.7 Summary ................................................................................................ 62
Chapter 4: Weight-Based Maintenance Policy .................................................... 63
4.1 Hardware Modeling of Personal Computer Systems ............................. 64
4.2 Weight-Based Maintenance Policy for PC Systems .............................. 65
4.3 A Generic Model for Field Repair Service............................. 67
4.4 Simulation Results and Analysis............................................................ 69
4.4.1 Simulation Models ..................................................................... 70
4.4.2 Simulation Results and Analysis................................................ 74
4.5 Summary ................................................................................................ 74
Chapter 5: The Economics of Overall System-Level Test................................... 76
5.1 Overall System-Level Test..................................................................... 77
5.1.1 Manufacturing System Test ....................................................... 77
5.1.2 Field Failure Model .................................................................... 79
5.1.3 Field Service Model ................................................................... 79
5.2 Economic Diagnosis Design for Field Repair Test ................................ 81
5.2.1 Decision Tree Learning Algorithm ............................................ 82
5.2.2 Decision Tree for Failure Diagnosis .......................................... 84
5.3 Example Results and Analysis of DTDT ............................................... 86
5.4 A Cost-Effective Design for Overall System-Level Test....................... 89
5.5 Summary ................................................................................................ 93
Chapter 6: Conclusion and Future Works ............................................................ 95
Bibliography.......................................................................................................... 98
VITA ................................................................................................................... 102
List of Tables
Table 1.1: Rule of ten of test economics ............................................................. 1
Table 2.1: Cost estimation of maintenance policies.......................................... 16
Table 3.1: Relationship between selecting knowledge and FOL ...................... 30
Table 3.2: LDT score formula ........................................................................... 33
Table 3.3: Policy candidate information of the example................................... 48
Table 3.4: Attribute value for all candidates ..................................................... 50
Table 3.5: Costs of 4 possible maintenance policies......................................... 56
Table 3.6: Estimated cost of 4 possible maintenance policies .......................... 58
Table 4.1: Parameters of two simulation models .............................................. 73
Table 4.2: Simulation results of two maintenance policies............................... 74
Table 5.1: Results of using decision tree diagnosis tool ................................... 89
List of Figures
Figure 1.1: Two system-level problems of field service ...................................... 3
Figure 2.1: Field service model .......................................................................... 12
Figure 2.2: Field repair service model ................................................................ 15
Figure 3.1: Alternative maintenance policies in field service operations........... 19
Figure 3.2: Two sub-decisions of policy selecting ............................................. 20
Figure 3.3: Situation assessment of policy selecting .......................................... 21
Figure 3.4: Lot priority determination of policy selecting.................................. 22
Figure 3.5: The knowledge engineering tool ...................................................... 23
Figure 3.6: DTSM for alternative maintenance policies..................................... 25
Figure 3.7: Models of TDDT and LDT .............................................................. 26
Figure 3.8: The definition of user requirements with interview ......................... 27
Figure 3.9: The architecture of knowledge engineering ..................................... 28
Figure 3.10: Top-down decision tree model for SA ............................................. 29
Figure 3.11: LDT sub-function score calculation flows ......................... 32
Figure 3.12: Simplified LDT model ..................................................................... 34
Figure 3.13: Logical decision tree model for policy priority determination ........ 35
Figure 3.14: An example of linear utility function for attribute grading.............. 36
Figure 3.15: DTSM engine ................................................................................... 39
Figure 3.16: Weight generations of proof by construction................................... 44
Figure 3.17: Flow chart of proof by construction................................................. 47
Figure 3.18: TDDT representation model of the example.................................... 49
Figure 3.19: Model Converter of TDDT Model to LDT Model........................... 51
Figure 3.20: The basic structure of model converter ............................................ 53
Figure 3.21: System architecture of DTSM.......................................................... 55
Figure 3.22: Top-down decision tree weight base of the example case ............... 57
Figure 3.23: Logical decision tree created by Logical Decisions for Windows.... 59
Figure 3.24: Results of DTSM for the example case............................................ 60
Figure 4.1: Weight-based maintenance policy in field service........................... 64
Figure 4.2: Three levels of maintenance support................................................ 65
Figure 4.3: A generic field repair service model ................................................ 68
Figure 4.4: Simulation model without WBMP................................................... 71
Figure 4.5: Simulation model with WBMP ........................................................ 71
Figure 4.6: Simulation model with WBMP by Enterprise Dynamics ................ 72
Figure 5.1: System-level testing process of a PC manufacturing company ....... 77
Figure 5.2: The manufacturing system testing process for PC systems ............. 78
Figure 5.3: Simplified subsystem with 4 units ................................................... 82
Figure 5.4: A diagnosis decision tree for unit 1.................................................. 83
Figure 5.5: Diagnosis procedure for failed units ................................................ 85
Figure 5.6: Economic diagnosis approach in field repair service....................... 86
Figure 5.7: Subsystem modeling with 8 units..................................................... 87
Figure 5.8: An example of training data for decision tree learning algorithm ... 87
Figure 5.9: Diagnosis decision tree from Figure 5.8 .......................................... 88
Figure 5.10: New approaches for decision processes in the field repair center.... 90
Figure 5.11: A cost-effective design for system-level test ................................... 92
Chapter 1: Introduction
Field service is a complex process that involves sending technicians to
perform repair tasks on consumer products. It is responsible for having failed
products fixed, achieving high customer satisfaction, and keeping a low service
cost. The field service operation of any electronic company should be
characterized by a low service operating cost and high customer satisfaction.
The costs of field service increase significantly due to the rule of ten [1-2], as
shown in Table 1.1. Moreover, field service is an important corporate activity,
but its cost is often not well understood. Most companies do not have enough
time to devote to an economic analysis of field service, but it is clear that the
benefits of doing so are enormous. Further, minimizing the field service cost is
particularly critical to the competitiveness of an electronic company. Therefore,
cost-effective designs of field service for electronic systems are necessary to
run field service operations more profitably.
Test Economics    Chip Testing    Board Testing    System Testing    Out in Field
Cost of Test      1               10               100               1000

Table 1.1: Rule of ten of test economics
1.1 CHALLENGES AND SIGNIFICANCE OF FIELD SERVICE
To manage complex field service operations with higher levels of
profitability and customer satisfaction, cost effective designs are needed in current
field service practices. However, several characteristics of field service operations
make the service cost management particularly challenging. The following
requirements from current field service operations of an electronic company are
crucial in developing the cost-effective designs for field service.
• Increase business productivity efficiently.
• Share information easily.
• Achieve high customer satisfaction.
• Support intelligent business decisions quickly.
• Maintain a low service cost to increase service profitability.
Trading off the significance of performance impact against the
manageability of problem complexity as a first step, we focus on two
system-design-level problems in this dissertation: (1) the maintenance policy
design within a system design group; and (2) the failure diagnosis and depot
management in a repair center, as shown in Figure 1.1. The maintenance policy
design uses raw field service cost data and system-design information to decide
an economic maintenance policy. An effective, efficient repair policy
streamlines the test and repair processes, minimizes costs, manages multiple
repair tasks, and shares information with other interested departments. The
repair policy selection heavily relies on human intelligence and coordination
among decision-makers. The second problem executes test, diagnosis, and
repair commands based on the above repair policy decision.
Figure 1.1: Two system-level problems of field service
In spite of the many research results and the current field service practices,
decision-makers and field engineers make field service decisions in a heuristic
way, mainly based on their training and experience. Operational complexity,
the handling of changes and uncertainty, and the incorporation of commands
from a higher level make many research designs impractical for real
applications and waste a vast amount of money in performing field service.
Although some outstanding field service performance has been achieved via
proper field service management [3-12], there are a few problems with such
practices in the above two areas.
Maintenance Policy Design:
(1) Good practices rely only on human intelligence;
(2) Maintenance policy selecting knowledge is implicit;
(3) Training new decision-makers is not an efficient use of veteran
decision-makers' time;
(4) Policy selecting knowledge documentation is important but often
overlooked;
(5) Economic maintenance policy designs for personal computer (PC) systems
could significantly reduce the field repair cost, but little effort has been made.
Field Repair Service:
(1) Most field testing processes are based on chip-and-board methods. As
systems become more and more complex, these chip-and-board-based testing
methods are unable to cover overall system functionality;
(2) Efficient field testing processes are uncertain and not well defined;
(3) The cost of field repair service has been increasing rapidly, but little effort
has been made on economic repair policy design to reduce the field service
cost;
(4) Field test is highly related to the field repair policy, and their relationship
should be well understood and considered cooperatively.
The cost-effective designs to solve the above problems are therefore
important issues of field service management. Such cost-effective designs not
only are pertinent to an electronic company but also can be exploited in the field
service management software of a great diversity of service organizations.
1.2 LITERATURE SURVEY
In the literature of field service, Ambler [6] explored the issues of testing
costs and touched on all aspects of design through to field service. Reinhart and
Pecht [7] used computerized techniques to predict maintainability system
parameters and speed up the design process. They concluded that the cost of
field service should be lowered through maintainability design. Lin et al. [8]
developed a simulation model for field service to estimate the value of a
condition-based system. Duffuaa et al. [9] described a generic conceptual
model, which can be used to develop a discrete event maintenance simulation
model. Hori et al. [10] developed a diagnostic case-based reasoning system,
which infers possible defects in a home electrical appliance. Szczerbicki and
White [11] demonstrated the use of simulation to model the management
process for condition monitoring. Watson et al. [12] constructed a simulation
metamodel to improve response-time planning and field service operations.
They described how to use simulations to estimate the performance of the field
service model and confirmed that an economic maintenance policy for field
service has significant cost reduction impacts.
Several studies have been conducted on fault diagnosis. Sanchez et al.
[13] used a fault-based testing approach to provide a complementary technique
during the testing phase. Nolan et al. [14] described the diagnosis of incipient
faults based on fault trees derived using an induction learning algorithm. They
showed that effective diagnosis designs are suitable for use in practical
applications. Assaf and Dugan [15] presented a methodology for developing a
diagnostic map for systems that can be analyzed via a dynamic fault tree. This
approach is capable of producing diagnostic decision trees that reduce the
number of tests or checks required for a system diagnosis. Chen et al. [16]
presented a decision tree learning approach to diagnosing failures in large
Internet sites. They concluded with a look toward the future needs of using
decision tree models for fault diagnosis.
In the literature of system-level test, Williams [17] introduced the
manufacturing system test philosophy of Dell. Martin et al. [18] described the
evolution of the system test process for Motorola's GSM Systems Division. Farren
and Ambler [19] developed a system-level test cost model to characterize system
behavior. They concluded with the importance and future needs of system-level
testing. Jain and Saraidaridis [20] developed a production sampling plan for
manufacturing testing. Ghosh and Grochowski [21] presented the methods of
dynamic control for computerized manufacturing test. They showed that a
significant improvement in the manufacturing testing cost can be achieved by the
sampling and statistical prediction approaches.
1.3 SCOPE OF RESEARCH
We propose cost-effective designs of field service for electronic systems
in this dissertation. For alternative maintenance policies, a unified model for
systematic representation of decision-makers' determination knowledge is built,
which serves as a foundation for a knowledge-based decision supporting system.
Our methodology consists of knowledge extraction, building a knowledge base,
implementation, and inferring new facts [22].
There are many possible knowledge representation models for decision
supporting systems. Exploiting the advantages of knowledge-based technology
[22-23] as well as multi-objective analysis [24-26], a decision tree model for a
decision-making system to support field service operations is built. A personal
computer (PC) system is selected as the vehicle of our research. At the core of
the decision tree model are a top-down decision tree model for situation
assessment, a logical decision tree model for policy priority determination, and
a sensitivity analysis for decision uncertainty. We theoretically show that the
top-down decision tree is a special case of the logical decision tree and develop
a top-down decision tree to logical decision tree model conversion algorithm.
As a result, we get a unified model. The decision tree model has the advantage
of human-like reasoning procedures, which is important for selecting
knowledge documentation.
We then develop a weight-based maintenance policy (WBMP) for PC
systems to reduce the field repair cost. The WBMP aims at developing a
system capable of performing maintenance tasks based on a product's status to
achieve a low field depot cost as well as a short service time. Further, a
decision tree diagnosis tool (DTDT) for field repair service is built. The DTDT
can reduce the cost of test as well as achieve a high fault coverage rate. A
decision tree learning algorithm is applied to build the decision tree fault
model, which is useful for the online testing requirement. We also design a
cost-effective approach for overall system-level test, which incorporates a
manufacturing system test model, the field failure rate analysis, and a field
repair model into a test cost model. The optimized system design parameters
from the cost model can be used to reduce the cost of overall system-level test.
To evaluate how closely the decision tree selecting model (DTSM)
conforms to decision-makers' decisions on alternative maintenance policies
based on the field repair cost model, a prototype system is implemented, and
test cases are conducted to validate the DTSM. In integrating the top-down
decision tree model and the logical decision tree model, data linkages between
code and databases are required. The integration architecture is proposed to
make the system easier to modify or reuse. The Logical Decisions for Windows
[27] software is used to configure the logical decision tree model since it has a
user-friendly GUI environment and a powerful analysis tool. For validating the
DTSM, a matching rate measure is defined as the percentage of common
selections between the two decisions generated by the DTSM and the field
repair cost model for the same batch of policy candidates. We perform the
validation example over 20 cases and obtain an average matching rate of 90%.
The validation results support that our DTSM allows decision-makers to
explicitly and intuitively express their selecting knowledge by using the GUI.
Also, the DTSM clearly facilitates easy documentation of knowledge.
A field maintenance simulation model is developed to estimate the cost
reduction from implementing a WBMP system. The Enterprise Dynamics [28]
simulation software is used to implement the simulation model. From our
simulation results, the field repair depot cost is reduced by 44% with a WBMP
system for electronic systems. Example cases of using the DTDT are
conducted, and the results show that the testing cost is reduced significantly
while a high fault coverage rate is maintained.
In summary, the contributions of this dissertation study are
(1) Computerized human decision-makers' knowledge for field service: an
introduction of knowledge engineering methods;
(2) Developed a decision tree selecting model for alternative maintenance
policies: a cost-effective and human-like reasoning model, which is generic for
field service operations;
(3) Proved the top-down decision tree model compatible with the logical
decision tree model: a unified theoretic foundation;
(4) Built a weight-based maintenance policy for field repair service: a system
that uses a product's status as a guide for performing maintenance tasks;
(5) Developed a decision tree diagnosis tool for field repair testing: a diagnosis
tool that supports online testing for field repair service;
(6) Presented a cost-effective design for overall system-level test: feeding
design parameters back to the system design level to minimize and optimize
the cost of test;
(7) Proposed economic designs for field service: the purpose of this
dissertation is to develop cost-effective designs for field service to reduce the
cost of test and, further, to serve as a foundation for further knowledge
engineering toward automated decision supporting systems in field service,
such as knowledge extraction via machine learning.
1.4 THESIS ORGANIZATION
The remainder of this thesis is organized as follows. In Chapter 2, we first
summarize the economic analysis of field service. Chapter 3 describes the
alternative maintenance policy problem. The decision tree selecting model for a
decision-supporting knowledge base system is then introduced. The
mathematical models of the top-down decision tree and the logical decision
tree are also presented in Chapter 3. We then prove that the top-down decision
tree model is a special case of the logical decision tree model. A weight-based
maintenance policy design for PC systems is given in Chapter 4. To reduce the
cost of test, the decision tree diagnosis tool for field repair service is presented
in Chapter 5. The cost-effective design for overall system-level test is also
given in Chapter 5. Chapter 6 summarizes the work discussed, provides a brief
overview of future work, and concludes this dissertation.
Chapter 2: Test Economics of Field Service
As field service operations become increasingly complex, it is critical to
ensure that the service strategy is consistent with corporate productivity targets
and reaches overall corporate objectives. Configuring the field service cost
model is the first step in evaluating corporate field service strategies. To
establish the field service cost model for electronic systems, one first needs a
good understanding of field service. We first describe the field service model
and its corresponding field service cost model in Section 2.1. The major part of
field service, field repair service, is then presented in detail in Section 2.2.
Section 2.3 summarizes this chapter.
2.1 FIELD SERVICE MODEL
A field service cost model is used to analyze the economics of field
service domains for the electronic industry. The model is a straightforward
relationship derived from current electronic companies. Many factors are
incorporated, but it is not a complete model. The purpose is to introduce the
necessity of using such a model to maintain a cost-effective process, and this is
the task of the field service cost model. As Figure 2.1 shows, we assume the
field service cost consists of four main components: helpline service cost,
online service cost, online testing service cost, and repair service cost. The
overall cost of field service Cfs is given as
Cfs = Chl + Col + Cot + Cre                                              (2.1)
Chl: the cost of helpline service
Col: the cost of online service
Cot: the cost of online testing service
Cre: the cost of repair service
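As a minimal illustration of how (2.1) could be evaluated, the sketch below simply sums the four sub-costs; the class name, method name, and cost figures are hypothetical placeholders, not data from this study.

```java
// Sketch of the field service cost model (2.1): Cfs = Chl + Col + Cot + Cre.
// All names and numbers are illustrative placeholders.
public class FieldServiceCostModel {
    static double cfs(double chl, double col, double cot, double cre) {
        // The overall field service cost is the sum of the four sub-costs.
        return chl + col + cot + cre;
    }

    public static void main(String[] args) {
        // Hypothetical annual sub-costs for one product line.
        System.out.println(cfs(120_000, 45_000, 60_000, 500_000)); // 725000.0
    }
}
```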
Figure 2.1: Field service model
Chl captures the fixed costs of malfunction service, consulting service,
and service contracts. The helpline cost depends on the number of customers
and the telephone operator cost. The telephone assistant cost is a function of
the human resources required to serve customers.
Col consists of the costs of online malfunction research, frequently asked
questions, downloads, and self-help service.
Cot is the cost required to provide online testing service. It can be modeled as a
function of online testing malfunction repairs, installation and commissioning
assistance, measurements and system checks, and the testing system cost.
Cre is a major field service cost component. It consists of system design
cost, repair depot/replacement cost, built-in-self-test (BIST) cost, external test
equipment cost, reliable factor, production cost, logistic support cost, service
technician/personnel skills cost, service response time, life-cycle cost, downtime
cost, and customer satisfaction.
The inputs of the field service cost model (2.1) are the parameters of each
sub-cost, and its output is the cost of field service for an electronic company. It
represents customer-and-company relationships. In some cases, it can be
selected from a company's prior work. In other cases, it can be derived from
expert field managers. In system design flows, many complex processes and
parameters in design, manufacturing, testing, and field operations are applied
to develop the field service cost model, and these parameters make the
development of a cost model highly difficult and time consuming. Also, the
finance department of the company should approve all equations used in the
field service cost model.
2.2 FIELD REPAIR SERVICE MODEL
In current practice, the field repair cost is a major part of the field service
cost and has become a key performance measure. Here is what field repair
service might look like in the field service department of a PC manufacturer.
When a system (e.g., desktop or laptop) fails to work, a customer makes a
service call or runs an online testing program. If the failed system cannot be
repaired by a helpline operator or online repair functions, then the failed
system is repaired by field technicians or delivered to the repair center. For a
short-term warranty contract, field technicians repair the failed systems by
replacing them with new ones or new subsystems (e.g., motherboard, CDROM,
or VGA card), and then take the failed products (systems and subsystems) to
the repair center. First, all failed products in the repair center are processed and
recorded in a database application. Second, test engineers test these failed
products and determine repair tasks based on the selected maintenance policy,
and the products are then put in the waiting queue prior to being repaired. If
subsystems cannot be repaired in the repair center, they are sent back to
subsystem vendors. Failed products that would cost too much to repair are
discarded directly. Third, the remaining failed products are repaired by repair
center technicians or subsystem vendors. Fourth, the repaired products are
tested by test engineers to examine their functions. If they pass the function
test, they are sent to a repaired storage and delivered back to customers.
Otherwise, they are sent back to the third step. Figure 2.2 depicts the field
repair service model for a PC manufacturing company. The overall cost model
of field repair service Cre is given as
Cre = Csd + Crd + Cbist + Cete + Fre + Cpr + Cls + Csts + Fsrt + Clc + Cdt + Fcs   (2.2)
Csd  : the cost of system design
Crd  : the cost of repair depot/replacement
Cbist: the cost of built-in-self-test (BIST)
Cete : the cost of external test equipment
Fre  : the factor of reliability
Cpr  : the cost of production
Cls  : the cost of logistic support
Csts : the cost of service technician/personnel skills
Fsrt : the factor of service response time
Clc  : the cost of life-cycle
Cdt  : the cost of downtime
Fcs  : the factor of customer satisfaction
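As noted below, model (2.2) can be used to compare maintenance policy candidates. A minimal sketch of that comparison follows; every cost figure is an invented placeholder, not data from this study.

```java
// Sketch of the field repair cost model (2.2): Cre is the sum of the twelve
// terms Csd, Crd, Cbist, Cete, Fre, Cpr, Cls, Csts, Fsrt, Clc, Cdt, Fcs.
// All figures below are hypothetical placeholders.
public class FieldRepairCostModel {
    static double cre(double... terms) {
        double sum = 0.0;
        for (double t : terms) sum += t; // add up all twelve terms
        return sum;
    }

    public static void main(String[] args) {
        // Two hypothetical policy candidates, terms in the order listed above.
        double nonRepairable   = cre(10, 90, 10, 50, 90, 30, 30, 10, 10, 5, 5, 20);
        double fullyRepairable = cre(90, 10, 50, 90, 10, 70, 90, 70, 90, 5, 5, 20);
        System.out.println(nonRepairable <= fullyRepairable
            ? "select the non-repairable policy"
            : "select the fully repairable policy");
    }
}
```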
Figure 2.2: Field repair service model
The field repair cost model (2.2) can be used to evaluate maintenance
policy candidates to determine an economic selection. There are many possible
maintenance policies in the industry. Alternative field maintenance policies for
electronic systems fit into three main classical categories: non-repairable system,
partially repairable system, and fully repairable system, and their cost estimations
[29-30] are shown in Table 2.1.
Maintenance Policy Categories of Electronic Systems

Cost Factors                    Best   Non-Repairable   Partially        Fully
                                       System           Repairable       Repairable
                                                        System           System
System Design Cost              L      L                LM               H
Replacement Cost                L      H                LM               L
BIST Cost                       L      L                L                M
External Test Equipment Cost    L      M                MH               H
Reliable Factor                 H      H                MH               L
Production Cost                 L      LM               LM               MH
Logistic Support Cost           L      LM               LM               H
Personnel Skill Cost            L      L                L                MH
Service Response Time Factor    L      L                LM               H
Life-Cycle Cost                 L      N/A              N/A              N/A
Downtime Cost                   L      N/A              N/A              N/A
Satisfaction Factor             H      N/A              N/A              N/A

L: Low; LM: Low-to-Medium; M: Medium; MH: Medium-to-High; H: High;
BIST: Built-in-Self-Test; N/A: Not Applicable

Table 2.1: Cost estimation of maintenance policies
When a non-repairable policy is selected, no repair is accomplished, and
a failed subsystem item is replaced by a spare. A partially repairable policy
means that system repair consists of the removal and replacement of the failed
subsystem, which is itself repaired through the removal and replacement of
units. Under a fully repairable policy, individual units within a subsystem are
designed to be repairable. Note that it is important to consider the life-cycle
cost, the downtime cost, and the customer satisfaction factor in the policy
decision-making process; these are highly dependent on the operational
requirements and goals of an electronic company.
2.3 SUMMARY
The current trend in the electronic industry is toward great expansion into
international corporations. Field service management is a difficult task, as it
requires not only integrating technologies but also combining different people
and country cultures. The key to success is to build an accurate field service
cost model that identifies the possible impacts.
Two interesting problems of field service will be solved in the following
chapters: maintenance policy design within a system design group, and failure
diagnosis and depot management in a field repair center. First, we develop a
knowledge engineering tool for alternative maintenance policies in Chapter 3.
Because the mathematical relationship between a top-down decision tree
model and a logical decision tree model, which are discussed later, lacks
clarity, we also design a converter and explain their relation. Example results
and analyses are also given in Chapter 3.
Chapter 3: A Decision Tree Selecting Model for Alternative
Maintenance Policies
There are a number of possible maintenance policies, and deciding which
policy should be performed is one of the field service cost reduction problems.
Figure 3.1 shows the problem of alternative maintenance policies in field
service operations. Functionally, there are two sub-decisions of policy
selecting: situation assessment (SA) and policy priority determination (PPD).
Although the decision-making problem of policy selecting has been an
important research topic in the resource management of field service, effective
methods for alternative field maintenance policies that achieve high customer
satisfaction and a low service cost under dynamic and uncertain environments
still pose significant challenges to both researchers and practitioners.
Exploiting the advantages of knowledge-based technology [22-23] and
multi-objective analysis [24-26], this chapter presents a decision tree selecting
model (DTSM) for alternative maintenance policies to support field service
operations. A personal computer (PC) system is selected as the vehicle of our
research. The DTSM includes a top-down decision tree (TDDT) model for SA,
a logical decision tree (LDT) model for PPD, and a sensitivity analysis for
decision confidence.
Section 3.1 summarizes the decision tree representation model of policy
selecting knowledge. Section 3.2 introduces the DTSM for alternative field
maintenance policies. In Section 3.3, we construct mathematical models of
TDDT and LDT. We then prove that TDDT is a special case of LDT in Section
3.4, and
the converter between the two representations is also given in Section 3.4. As a result,
we get a unified model. The integration of DTSM is presented in Section 3.5. In
Section 3.6, example cases of DTSM demonstrate the feasibility and effectiveness
for applications. Finally, Section 3.7 summarizes this chapter.
Figure 3.1: Alternative maintenance policies in field service operations
3.1 REPRESENTATION KNOWLEDGE OF POLICY SELECTING
Policy selecting decides an economic candidate for an electronic product
under a specific service situation according to the candidates' attributes. Policy
candidates' attributes include system design cost, replacement cost,
built-in-self-test (BIST) cost, external test equipment cost, reliable factor, etc.
To handle
problem complexity and to cope with the decision levels of field service
operations, functionally, there are two sub-decisions of policy selecting: situation
assessment (SA) and policy priority determination (PPD). Figure 3.2 depicts the
two sub-decisions of policy selecting.
Figure 3.2: Two sub-decisions of policy selecting
The decision of SA classifies raw field system report data into
comprehensible terms about service situations, as shown in Figure 3.3.
Identification of the situation involves some fuzzy notions; it includes the
estimated cost of field service, the customer demand, the field technician
supporting level, etc. Operational goals include minimal field depot cost,
maximal repair throughput, maximal technician utilization, minimal service
time, etc.
Figure 3.3: Situation assessment of policy selecting
PPD sets the performing priority among policies based on the service
situation, the policy attributes, and the system operational requirements. Policy
candidate attributes, the input items of PPD, are the properties of the
candidates, such as system design cost, replacement cost, customer
satisfaction, and so on. Figure 3.4 describes the policy priority determination of
policy selecting. Selecting knowledge is the logic decision makers use to select
the maintenance policy. It represents the ideas of how decision makers do the
selecting. Objectives of policy selecting are high customer satisfaction,
minimal service time, maximal repair throughput, maximal technician
utilization, minimal service cost, etc. Objectives change with the situation; for
example, less technician support can influence the maximal throughput.
Therefore, objectives can have specific impacts on PPD.
Figure 3.4: Policy priority determination of policy selecting
In spite of the many research results, skilled decision-makers mainly
make maintenance policy selecting decisions in current field service
department practice. However, there are a few problems with current practices:
(1) Good practices rely only on human intelligence: In the current industrial
practice of using empirical or heuristic rules, there is a common set of rules
adopted by the decision makers of field service. However, individual decision
makers may either apply the rules somewhat differently or come up with new
rules for achieving good performance. The quality of human intelligence plays
an important role and leads to variation in selecting decision quality.
(2) Policy selecting knowledge is implicit: Logs of selecting decisions are in a
raw data form. Selecting results can hardly be correlated with system report
data or operational goals at the time of decision. The knowledge for selecting
is largely in decision makers' heads. In spite of regular training, knowledge of
good selecting practices will be gone with the change of decision makers.
(3) Training new decision-makers is not an efficient use of veteran
decision-makers' time: Training new decision makers takes time. There may
not be enough veteran decision makers when the business is expanding fast.
As a result, selecting quality may often be lowered during a fast business
expansion. New decision makers may also spend time and effort reinventing
the wheel.
(4) Policy selecting knowledge documentation is important but often
overlooked: Documentation of selecting knowledge is important for training
new decision makers. It may also help in transferring operational knowledge
into machine intelligence for fully automated decision supporting systems.
However, knowledge extraction and documentation are often overlooked
because decision makers face heavy workloads every day.
To document decision makers' selecting knowledge, we build a selecting
knowledge base with decision makers' selecting knowledge. A process of
building a knowledge base, called knowledge engineering (KE), is needed.
Stuart Russell and Peter Norvig [22] suggested four steps of KE: knowledge
extraction, building a knowledge base, implementation, and inferring new
facts, as depicted in Figure 3.5.
Figure 3.5: The knowledge engineering tool
A good knowledge representation should permit the use of reasoning and
inferring procedures, covering input candidates, attributes of candidates, rules,
decision logic configuration, and an output suggested candidate. It should
record expert experience systematically and be easy to use in reasoning and
inferring procedures. In addition to providing a knowledge engineering tool to
record veteran decision makers' knowledge, a knowledge representation
should facilitate a functional capability for reasoning and inferring procedures
to achieve the selecting objectives. Therefore, a good knowledge
representation should support reasoning and be expressive, concise,
unambiguous, context-insensitive, and effective.
To establish the knowledge base for maintenance policy selecting, one
first needs a good knowledge representation model. Through surveying the
literature and the selecting problem's needs, we deduce that decision tree
representation is a useful approach for extracting classification knowledge
from a set of knowledge-based selecting information from experienced
decision-makers.
3.2 DECISION TREE SELECTING MODEL FOR ALTERNATIVE MAINTENANCE POLICIES
Maintenance policy selecting is one of the most pressing problems in
field service management. There are many developed approaches toward
decision supporting systems. However, good selecting practices rely heavily
on human intelligence. We treat the problem as one of developing a knowledge
representation model for maintenance policy selecting. Therefore, a decision
supporting system with veteran decision-makers' knowledge is developed in
this dissertation.
To help decision-makers make maintenance policy trade-offs, we build a
decision tree selecting model (DTSM) for alternative field maintenance
policies, as shown in Figure 3.6. It consists of (1) a weight setting module for
situation assessment (SA) using a top-down decision tree (TDDT) model, (2) a
decision making module for policy priority determination (PPD) using a
logical decision tree (LDT) model, and (3) a sensitivity analysis module to
analyze the confidence of the decision made by the DTSM. The decision tree
model has the advantage of a human-like reasoning procedure, which is
important for policy selecting knowledge documentation. The TDDT and LDT
models are shown in Figure 3.7.
Figure 3.6: DTSM for alternative maintenance policies
Figure 3.7: Models of TDDT and LDT
3.2.1 Weight Setting Module
By interviewing decision makers, a decision tree knowledge
representation model for reasoning and documentation can be built. Figure 3.8
depicts the definition of user requirements through interviews.
Figure 3.8: The definition of user requirements with interview
The SA knowledge is the logic decision-makers use to set maintenance
policy priorities according to operational goals, and it represents the ideas of
how decision-makers make economic maintenance policy decisions. To
document decision-makers' selecting knowledge, a weight base with service
situations is built. Each service situation corresponds to a set of weights, which
can be used in the decision making module to make PPD decisions. A
knowledge engineering procedure is designed to build the knowledge base.
Our methodology consists of knowledge extraction, design of the knowledge
representation model, implementation, and validation of the model, as
described in Figure 3.9.
Figure 3.9: The architecture of knowledge engineering
A top-down decision tree (TDDT) model is used to construct the weight
base since it reflects decision-makers' reasoning in a natural way, as depicted
in Figure 3.10. In building a TDDT weight base, rules are classifying functions
according to service statuses, and each node is labeled with the rules. Each leaf
node represents a target class where a set of weights is put. By examining the
path that leads to a leaf node, a set of weights for a specific service situation
can be obtained. TDDT selecting knowledge is formed by defining rules and
configuring the TDDT via veteran decision-makers or by a decision tree
learning algorithm such as C4.5 [31]. To implement the SA knowledge, a piece
of code and a link with the database are required. Weight base verification is
done by posing queries to the inference procedure and getting responses. After
the TDDT weight base is built, we can obtain a set of SA weights for a specific
service situation. Veteran decision-makers can modify the weight setting for
verification by using a GUI design.
Figure 3.10: Top-down decision tree model for SA
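To make the weight lookup concrete, the sketch below walks a small TDDT of the kind shown in Figure 3.10, where internal nodes test service situation statuses and each leaf holds a set of SA weights; the status names, branch values (O/M, H/L), and weight values are hypothetical placeholders.

```java
import java.util.Map;

// Sketch of obtaining a set of SA weights from a TDDT weight base
// (cf. Figure 3.10). Statuses and weight values are illustrative only.
public class TddtWeightBase {
    static double[] weightsFor(Map<String, String> situation) {
        if ("O".equals(situation.get("status1"))) {      // root node: Over
            return "H".equals(situation.get("status2"))  // node: status 2
                ? new double[]{0.4, 0.3, 0.2, 0.1}       // leaf: P1 weight set
                : new double[]{0.1, 0.2, 0.3, 0.4};      // leaf: P2 weight set
        }
        // root: Meet -> further nodes would test statuses 2 and 3 (P3..P5)
        return new double[]{0.25, 0.25, 0.25, 0.25};     // placeholder leaf
    }

    public static void main(String[] args) {
        double[] w = weightsFor(Map.of("status1", "O", "status2", "H"));
        System.out.println(w[0]); // 0.4, i.e., the P1 weight set was reached
    }
}
```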
We decide on a vocabulary of candidates, attributes, rules, and a
top-down decision tree configuration before building a knowledge base. In
knowledge representation, a domain is a section of the world about which we
wish to express some knowledge. We adopt First-Order-Logic (FOL) to
represent rule relationships and mathematical sets and use classification rules
to configure the top-down selecting knowledge. FOL can represent decision
makers' selecting system well because it is universal, well understood, and
easily transformed into programs, and it has mature inference mechanisms.
First, we need to be able to classify candidates according to their
attributes. Clearly, the objects in our domain are candidates; this is handled by
naming the objects candidates. Next, we need to know the attributes of
candidates; a relation is appropriate for this: State_1(Candidate). This
introduces the state_1 of the candidate; the other attributes are called state_2,
state_3, state_4, etc.
We could have used a type relation as follows:
Bigger(State_1(Candidate), State_1_Th)
Next we consider an implication:
Bigger(State_1(Candidate), State_1_Th) ⇒ UpperClass(Candidate)
It means that if the candidate's state_1 is greater than the state_1 threshold,
then the candidate goes to an upper class.
The detailed relationship between the selecting knowledge representation
and the FOL representation is described in Table 3.1.
Selecting Knowledge Representation    First-Order-Logic    FOL Representation
Candidate Information                 Object               Candidate
State_1 of candidate                  Relation             State_1(Candidate)
If ..., then ... (syntax)             Implication          ⇒

Table 3.1: Relationship between selecting knowledge and FOL
Then, the selecting rule is shown as follows:
∃Candidate, Bigger(State_1(Candidate), State_1_Th) ⇒ RightClass(Candidate)
It represents that if the candidate's state_1 > State_1_Th, then the candidate
goes to the right class.
Finally, we can construct a TDDT configuration by using candidates,
attributes, and selecting rules. In summary, the method of constructing the
TDDT configuration is classification using First-Order-Logic as the
representation language in our knowledge-based selecting tool.
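A minimal sketch of how one such FOL classification rule might be implemented in code follows; the threshold value and class names mirror the Bigger(State_1(Candidate), State_1_Th) example above and are otherwise hypothetical.

```java
// Sketch of the selecting rule
//   Bigger(State_1(Candidate), State_1_Th) => RightClass(Candidate)
// as executable code. The threshold value is an illustrative placeholder.
public class SelectingRule {
    static final double STATE_1_TH = 10.0; // hypothetical threshold

    static String classify(double state1) {
        // If the candidate's state_1 exceeds the threshold, the candidate
        // goes to the right (upper) class; otherwise it stays in the other.
        return state1 > STATE_1_TH ? "RightClass" : "OtherClass";
    }

    public static void main(String[] args) {
        System.out.println(classify(12.0)); // RightClass
    }
}
```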
In our decision tree selecting model design, all selecting knowledge is
added into the knowledge base, and in principle the selecting knowledge is all
there is to know about the selecting domain. If we allow rules that refer to past
selecting knowledge as well as the current selecting knowledge, then we can,
in principle, extend the capabilities of a selecting tool to the point where the
selecting tool acts effectively to achieve user requirements. Writing such rules,
however, is remarkably tedious unless we adopt certain patterns of reasoning
that correspond to maintaining a knowledge representation model of the
selecting domain. It can be shown that any system that makes decisions on the
basis of past selecting knowledge can be rewritten to use instead a set of rules
about the current selecting domain, provided that these rules are kept updated.
3.2.2 Decision Making Module
The technique of logical decision tree (LDT) analysis [23-26] is used to
perform policy priority determination (PPD). LDT is a parallel-manner
multi-objective system in which candidates are given scores when they meet a
sub-function's concerns, where a sub-function is a sub-cost. The weight in this
dissertation is defined as the score of a sub-cost. Figure 3.11 depicts the LDT
score calculation flows. The weight of a sub-cost may imply which
sub-functions are dominating and have higher influence. After a candidate has
been evaluated by all sub-functions, the higher its priority score, the higher the
priority with which the candidate is selected. The main goal of the weight
setting module in Section 3.2.1 is to investigate the weights of sub-costs in the
LDT.
Figure 3.11: LDT sub-function score calculation flows
Score Formula for Each Sub-Function
Consider an LDT model with sub-functions and a set of weights, where
each sub-function transfers to a priority score according to each candidate's
concerning attributes (A). Adjustment weights (B) let engineers make
adjustments, and constants (C) are used to indicate the dominating
sub-functions. Table 3.2 depicts the LDT score formula in detail.
The score formula of each sub-function is equal to (Ai × Bi × Ci), where
Ai = the factor transformed from the original value of the attribute;
Bi = the weight given by engineers;
Ci = the pre-defined constant value of each sub-function.
                 Attribute Value    Factor Transform (A)       Adjustment (B)   Constant (C)   Score Range
Sub-Function 1   a. 0~10            a. 10                      1                100000         0~1000000
                 b. 10~+∞           b. 0
Sub-Function 2   Concerning cond.   1 (meet) or 0 (not meet)   1                10000          0~10000
Sub-Function 3   Concerning cond.   1 (meet) or 0 (not meet)   1                1000           0~1000
Sub-Function 4   Concerning cond.   1 (meet) or 0 (not meet)   1                100            0~100

Table 3.2: LDT score formula
The weight score of each candidate is obtained by adding up its sub-function scores, and the candidate with the highest priority score is selected. Intuitively, the order in which a candidate passes through the sub-functions, from first to last or from last to first, does not affect the final result. Furthermore, to ease the analysis of the LDT and TDDT models, Ai, Bi, and Ci are combined into a single weight Wi per sub-function, as described in Figure 3.12:

Wi = Ai × Bi × Ci
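As an illustration of this scoring rule, the following minimal Java sketch (ours, not part of the dissertation's implementation; class and variable names are hypothetical) accumulates Wi = Ai × Bi × Ci over the sub-functions a candidate meets, mirroring Figure 3.11 and the constants of Table 3.2:

public final class LdtScorer {
    // a[i] = transform factor Ai, b[i] = engineer adjustment Bi,
    // c[i] = sub-function constant Ci, meets[i] = whether the candidate
    // satisfies sub-function i's concern (the Y branches of Figure 3.11).
    public static double priorityScore(double[] a, double[] b, double[] c, boolean[] meets) {
        double score = 0.0;
        for (int i = 0; i < a.length; i++) {
            if (meets[i]) score += a[i] * b[i] * c[i];   // Wi = Ai * Bi * Ci
        }
        return score;
    }

    public static void main(String[] args) {
        // A candidate meeting sub-functions 1, 3, and 4 with the Table 3.2 constants.
        double[] a = {10, 1, 1, 1};
        double[] b = {1, 1, 1, 1};
        double[] c = {100000, 10000, 1000, 100};
        boolean[] meets = {true, false, true, true};
        System.out.println(priorityScore(a, b, c, meets));   // prints 1001100.0
    }
}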
Figure 3.12: Simplified LDT model (the flow of Figure 3.11, where each sub-function i simply adds the combined weight Wi to the candidate's score when its criterion is met).
A logical decision tree (LDT) model based on Table 2.1, with 12 sub-costs for alternative field maintenance policies, is built as depicted in Figure 3.13 to demonstrate how the multi-objective analysis sets priorities for maintenance policy candidates. A field service department of an electronic company is selected as the vehicle of our research. After creating the LDT model, each policy candidate's attribute levels for all sub-costs can be graded from estimated costs to attribute values by using a linear utility function. Figure 3.14 shows an example of using the linear utility function to convert an attribute level into a value for a policy candidate. Many utility functions could be used for this conversion; which one is economical depends on the profile of the field service department. We then multiply each policy candidate's attributes by the sub-cost weights and sum them to obtain a priority score for each candidate. The economic maintenance policy is the one with the highest priority score under the specific field service situation.
Figure 3.13: Logical decision tree model for policy priority determination (12 sub-cost functions feed the LDT for PPD, each with its own weight, Weight 1 through Weight 12: system design cost, replacement cost, BIST cost, external test equipment cost, reliable factor, production cost, logistic support cost, personnel skill cost, service response time factor, life-cycle cost, downtime cost, and satisfaction factor).
Figure 3.14: An example of linear utility function for attribute grading (Case 1, low is best: the attribute value falls linearly from 1 at Low through 0.75, 0.5, and 0.25 to 0 at High across the levels Low, Low-Medium, Medium, Medium-High, High; Case 2, high is best: the attribute value rises linearly from 0 at Low to 1 at High).
3.2.3 Sensitivity Analysis Module
Different veteran decision-makers will set different weights in the top-down decision tree model, and different weight settings yield different selecting results. A good maintenance policy selection, one that achieves high customer satisfaction and low service cost, comes from an experienced decision-maker with a good weight setting under a specific service situation; an improper weight setting typically comes from a new decision-maker or from an operating mistake. Therefore, a sensitivity analysis is needed to examine the uncertainty of decisions made by DTSM. With sensitivity analysis, a decision-maker knows the confidence of the selecting decision. The notations for the two sensitivity modes, conservation mode and approximated mode, are defined as follows.
In a specific field service situation:
i, k : indices of maintenance policy candidates
j : index of a sub-cost in DTSM
N : total number of maintenance policy candidates
M : total number of sub-costs
Aij : attribute value of the i-th candidate corresponding to the j-th sub-cost
Wj : weight of the j-th sub-cost
W : vector of all sub-cost weights (a set of weights)
Wj' : new weight of the j-th sub-cost
W' : vector of all new sub-cost weights
PSi : priority score of the i-th maintenance policy
PSi' : new priority score of the i-th maintenance policy
All policy candidates' priority scores are calculated by the following equation.
PSi = Σ(j=1..M) Wj · Aij,  i = 1, 2, …, N   (3.1)

where W1 + W2 + … + WM = 1.
If W is an improper setting and W' is an appropriate setting, the new priority scores are calculated as

PSi' = Σ(j=1..M) Wj' · Aij,  i = 1, 2, …, N   (3.2)

where W1' + W2' + … + WM' = 1.
Assume that the i-th candidate is the economic maintenance policy with the highest priority score.

Conservation Mode:
If PSi' − PSk' ≥ 0 for any set of weights, k = 1, 2, …, i−1, i+1, …, N,
then the selecting policy decision is absolutely confident;
otherwise, the selecting policy decision is probably confident.

Approximated Mode:
When DTSM is applied in field service, the weight setting error of a skilled decision-maker should not exceed 50%, and we assume it is within 10% in the sensitivity analysis module.
If PSi − PSk ≥ 0.1 for k = 1, 2, …, i−1, i+1, …, N,
then the selecting policy decision is approximated absolutely confident;
otherwise, the selecting policy decision is approximated probably confident.
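A minimal Java sketch of this check, under our reading of equation (3.1) and the 0.1 margin rule (the class and method names are hypothetical, not part of the DTSM implementation):

public final class SensitivityCheck {
    // PSi = sum_j Wj * Aij, eq. (3.1); weights are assumed normalized to sum to 1.
    public static double[] priorityScores(double[] w, double[][] a) {
        double[] ps = new double[a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < w.length; j++)
                ps[i] += w[j] * a[i][j];
        return ps;
    }

    // Approximated mode: the selected candidate must lead every rival by at least 0.1.
    public static String approximatedMode(double[] ps, int best) {
        for (int k = 0; k < ps.length; k++)
            if (k != best && ps[best] - ps[k] < 0.1)
                return "approximated probably confident";
        return "approximated absolutely confident";
    }
}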
A DTSM selecting engine is shown in Figure 3.15.
Figure 3.15: DTSM engine (inputs are the product types Product_1, Product_2, Product_3, …, the policy candidates Policy_1, Policy_2, Policy_3, Policy_4, … with their maintenance cost analysis, system report data, and design/test/marketing goals; decision-makers drive the DTSM, whose weight setting module, decision making module, and sensitivity module produce the policy selecting decisions for the field service department, e.g., Prod_1 → Policy_1, Prod_2 → Policy_3, Prod_3 → Policy_1).
3.3 MATHEMATICAL MODELS OF DECISION TREES
To prove that the TDDT model is a special case of the LDT model, to develop a useful conversion between the two models, and finally to propose a unified model, we start with mathematical abstractions of the two models.
3.3.1 Model of Top-Down Decision Tree
To investigate whether the TDDT model can be proved to be a special case of the LDT model, and to lead to a useful conversion between the two models, we start with the TDDT model.
Mathematical Abstraction of the TDDT Model: A TDDT model is constructed in a top-down manner based on N patterns {Si, Ci, i = 1, 2, …, N}, where Si ∈ R denotes a selecting candidate, Ci ∈ {C(L1), C(L2), …, C(Lp)} denotes the target class corresponding to Si, and p is the total number of target classes. The TDDT model for maintenance policy selection is summarized as follows:
(1) Initially, all selecting candidates are assigned to the root node. If it is the case
that all candidates assigned to a node belong to the same class, no further decision
needs to be made.
(2) If selecting candidates at this node belong to two or more classes, then one or
more candidate attributes are chosen to split the candidates.
(3) The process is recursively repeated for each of the new internal nodes until a
completely discriminating tree is obtained.
(4) Once constructed, the TDDT allows the determination of the class label of
arbitrary selecting candidates S (1 by N).
3.3.2 Model of Logical Decision Tree
To focus on the economic selecting problem for maintenance policy strategies, we make the following assumptions:
(1) the target classes C(Lx), x = 1, 2, …, p, are ordered from the upper leaf node to the bottom leaf node;
(2) selecting candidates in upper leaf nodes are ranked with higher priority.
For a TDDT model,
Ranking TDDT (S, C) = {S, Ranking C(Lx), x = 1, 2, …, p} = {(Si, C(L1)), (Sk, C(L2)), …, (Sl, C(Lp))}, i, k, l ∈ {1, 2, …, N}
Mathematical Abstraction of the LDT Model: An LDT model is constructed in a parallel manner based on N patterns {Si, PSi, i = 1, 2, …, N}, where Si ∈ R denotes a selecting candidate and PSi ∈ N+ denotes the priority score corresponding to Si. The LDT model for maintenance policy selection is summarized as follows:
(1) There is a single score formula for each sub-cost, and the total score formula means that there exists a set of weights W (1 by M) covering all sub-costs in the LDT model.
(2) The priority score calculation flow is defined: if Si meets the criterion of sub-cost j (0 < j < M+1), then the sub-cost weight Wj is added to Si's score.
(3) The process is repeated for each sub-cost until the complete priority score is obtained.
(4) The vector of priority scores PS of the selecting candidates S is
PS = W·A   (3.3)
where PS (1 by N) is the priority score matrix of the selecting candidates S in the LDT model; PSi, the i-th element of PS, is the i-th selecting candidate's priority score.
(5) Once constructed, the LDT allows the determination of the priority score of arbitrary selecting candidates S.
3.4 TOP-DOWN DECISION TREE AS A SPECIAL CASE OF LOGICAL DECISION TREE
In this section, we prove that the top-down decision tree (TDDT) representation model is a special case of the logical decision tree (LDT) representation model, and thereby propose a unified knowledge representation model. The proof by construction rests on the following idea. In the TDDT representation model, candidates are classified by individual rules according to their attributes; in the LDT representation model, the same candidates accumulate scores from individual sub-functions according to their attributes. We develop an algorithm that, given a TDDT configuration, finds a set of weights for all sub-functions of the LDT. Although rules and sub-functions are defined differently, they share the same domain knowledge sets and yield the same results.
3.4.1 Proof by Construction
In this section, we apply proof by construction (PBC) to prove that the TDDT model is a special case of the LDT model. The proof generates a set of weights for the LDT model from the TDDT model and shows that the two representation models produce the same results.
To prove the relationship between the TDDT and LDT models, let us first define some input and output formats and make some assumptions about the models.
Input and Output Formats

TDDT representation model:
  Input items: policy candidates; attributes of candidates
  Selecting representation: top-down decision tree model
  Output items: set of candidates, N ∪ {0}

LDT representation model:
  Input items: policy candidates; attributes of candidates
  Selecting representation: logical decision tree model
  Output items: priority scores (N+)
Assumptions:
(1) The TDDT representation rule set is equal to the LDT representation sub-function set.
(2) The number of rules is not necessarily equal to the number of sub-functions.
On one hand, once the TDDT and LDT models are constructed, the set of nodes and the set of sub-functions are equal, and they describe the same knowledge base with different knowledge representations; in other words, all possible behavior of the two models can be completely tracked by the same knowledge base of the selecting system. On the other hand, the number of rules need not equal the number of sub-functions, because a rule can be used more than once.
Problem Set: Given a top-down decision tree representation model TDDT (S, C),
where S is the set of selecting candidates; C is the set of target classes as depicted
in Figure 3.16, and a logical decision tree representation model LDT (S, PS),
where S is the set of selecting candidates; PS is the set of priority scores.
Show: ∃ W such that for any set of selecting candidates S,
Ranking TDDT (S, C) = Ranking LDT (S, PS)
Figure 3.16: Weight generation in the proof by construction (the TDDT layers map to weights: the first-layer sub-cost 1 gets W1 = 10^10, and the second-layer sub-costs 2 and 3 get W2 = W3 = 10^9; the branch attribute values take the form Aji ∈ {0.1^0, 0.1^1, …, 0.1^x}; the leaves are the target classes C(L1), C(L2), C(L3), …, C(Lp). Here i indexes the selecting candidates and Aji is the attribute value of the i-th candidate corresponding to the j-th sub-cost).
Proof by Construction:
Step 1) Initialization.
Express the selecting candidates Si, i = 1, 2, …, N, and collect the required input data.
Step 2) Execute the TDDT model.
Step 3) Set LDT weights from the TDDT model.
For the first-layer sub-cost 1 in the TDDT model, assign W1 = 10^10; for the second-layer sub-costs 2 and 3, assign W2 = 10^9 and W3 = 10^9. In general, for a sub-cost belonging to layer L, assign Wk = 10^(11−L), k = 1, 2, 3, …, M, with Wk ∈ {10^10, 10^9, 10^8, 10^7, …}, as shown in Figure 3.16.
Step 4) Compute Aji for the LDT model.
Aji = Fj(Vji), where Fj assigns a value aji ∈ {0.1^0, 0.1^1, 0.1^2, …, 0.1^x} according to whether Vji lies above or below the threshold θj, with F(a) > F(b) if a > b; Vji is the value of the i-th selecting candidate for the j-th sub-cost taken from the system report data, and θj is the threshold of attribute assignment.
Step 5) Calculate the priority scores of the selecting candidates in the LDT model:
PSi = Σ(j=1..M) Wj · Aji ∈ N+
This can also be represented in the following matrix form for the LDT model.
A is the M-by-N matrix of attribute values and W is the 1-by-M weight vector:

A = | A11  A12  A13  …  A1N |      W = [W1  W2  W3  …  WM]
    | A21  A22  A23  …  A2N |
    |  :    :    :        :  |
    | AM1  AM2  AM3  …  AMN |

PS = W·A

where PSi, the i-th column of PS, is the i-th selecting candidate's priority score.
Step 6) Rank PSi, where i=1, 2, 3, …., N.
Step 7) Check Ranking (S, C) in TDDT model and Ranking (S, PS) in LDT
model to examine if they have the same results.
Step 8) Stop.
In theory, therefore, a workable set of weights can be obtained from the TDDT model. The flow chart of the PBC is given in Figure 3.17. Based on the PBC, we have developed a TDDT-to-LDT model conversion algorithm, applied it to our DTSM, and obtained a unified model of the two.
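To make Step 3 of the construction concrete, here is a minimal Java sketch (ours; the class and parameter names are hypothetical) of the layer-to-weight assignment Wk = 10^(11−L):

public final class PbcWeights {
    // layerOf[k] = layer (1-based) of sub-cost k in the given TDDT configuration.
    public static double[] assignWeights(int[] layerOf) {
        double[] w = new double[layerOf.length];
        for (int k = 0; k < layerOf.length; k++) {
            w[k] = Math.pow(10, 11 - layerOf[k]);  // layer 1 -> 1e10, layer 2 -> 1e9, ...
        }
        return w;
    }
}

Because each layer's weight dwarfs the sum of all deeper-layer contributions, the LDT ranking reproduces the lexicographic ordering of the TDDT.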
Figure 3.17: Flow chart of the proof by construction (arrived input data feeds both sides: the top-down decision tree side executes the TDDT model to obtain C(Lp) and assigns a weight 10^10, 10^9, 10^8, 10^7, … to each sub-cost from the first to the last; the logical decision tree side computes the candidates' attribute value matrix A, with entries 0.1^0, 0.1^1, 0.1^2, 0.1^3, …, via Fj(Vji); Ranking C(Lp) is then compared with the ranking of W·A to check that the TDDT and LDT results agree).
3.4.2 An Example of Proof by Construction
To demonstrate the feasibility and correctness of our proof by construction (PBC), we apply it to an example with five policy candidates, each evaluated against three sub-costs: system design cost (SDC), replacement cost (RC), and BIST cost (BISTC). The policy candidates' information is given in Table 3.3. The candidates classified by the TDDT model fall into five target classes, as shown in Figure 3.18. According to the TDDT model, the output is
Ranking (S, C) = {C(L1), C(L2), C(L3), C(L4), C(L5)} = {S1, S2, S3, S4, S5}
Table 3.3: Policy candidate information of the example

Candidate index               S1   S2   S3   S4   S5
System Design Cost (SDC)       1    1    0    0    0
Replacement Cost (RC)          1    0    1    1    0
BIST Cost (BISTC)              0    1    1    0    1
Figure 3.18: TDDT representation model of the example (SDC at the root, weight 10^5, holds {S1, S2, S3, S4, S5}: the branch with attribute value 0.1^0 holds {S1, S2} and leads to RC, weight 10^4, which separates S1 into leaf L1 at 0.1^0 and S2 into leaf L2 at 0.1^1; the branch with 0.1^1 holds {S3, S4, S5} and leads to RC, weight 10^4, which routes {S3, S4} at 0.1^0 to BISTC, weight 10^3, separating S3 into leaf L3 at 0.1^0 and S4 into leaf L4 at 0.1^1, while S5 goes to leaf L5 at 0.1^1).
From the proof by construction (PBC), we can generate the weights for the LDT model according to the SDC, RC, and BISTC sub-costs and their corresponding attributes; Figure 3.18 also depicts the weights given by the PBC.
Step 1) Initialization.
Express the selecting candidates: {S1, S2, S3, S4, S5}.
Step 2) Execute the TDDT model.
Ranking (S, C) = {C(L1), C(L2), C(L3), C(L4), C(L5)} = {S1, S2, S3, S4, S5}
Step 3) Set LDT weights from the TDDT model.
W1 = 10^5, W2 = 10^4, and W3 = 10^3, so W = [10^5 10^4 10^3], as shown in Figure 3.18.
Step 4) Compute Aji for the LDT model.
Attribute values for all candidates are shown in Table 3.4.
Table 3.4: Attribute values for all candidates

Candidate index    S1      S2      S3      S4      S5
A1i               0.1^0   0.1^0   0.1^1   0.1^1   0.1^1
A2i               0.1^0   0.1^1   0.1^0   0.1^0   0.1^1
A3i               0.1^1   0.1^0   0.1^0   0.1^1   0.1^0
A = | A1,1  A1,2  A1,3  A1,4  A1,5 |   | 1    1    0.1  0.1  0.1 |
    | A2,1  A2,2  A2,3  A2,4  A2,5 | = | 1    0.1  1    1    0.1 |
    | A3,1  A3,2  A3,3  A3,4  A3,5 |   | 0.1  1    1    0.1  1   |
Step 5) Calculate the priority scores of the selecting candidates in the LDT model.

PS = W·A = [10^5 10^4 10^3] · A = [110100 102000 21000 20100 12000]

where PSi, the i-th column of PS, is the i-th selecting candidate's priority score.
Step 6) Rank PSi, where i=1, 2, 3, …., N.
Ranking (S, PS) = {S1, S2, S3, S4, S5}
Step 7) Check Ranking (S, C) in TDDT model and Ranking (S, PS) in LDT
model and find they have the same results.
Step 8) Stop.
We conclude that there exists a set of weights W, here W = [100000 10000 1000] for [SDC RC BISTC], such that Ranking (S, C) = Ranking (S, PS); that is, the TDDT model is a special case of the LDT model.
3.4.3 Model Conversion of Top-Down Decision Tree to Logical Decision Tree
Proof by construction (PBC) generates a set of weights from the TDDT model and applies them to the LDT model; the two models then produce the same results on the problem set. We have thus theoretically shown that the TDDT model is a special case of the LDT model and developed a TDDT-to-LDT model conversion algorithm, yielding a unified model.
Based on the proof by construction in the previous section, we developed a model converter that transforms a TDDT model into an LDT model. It has been applied to our decision tree selecting model for alternative maintenance policies. Figure 3.19 depicts the model converter from the TDDT model to the LDT model.
Figure 3.19: Model converter of the TDDT model to the LDT model (a TDDT interpreter and an LDT interpreter sit beneath the two representations; the model converter links the TDDT mathematical model to the LDT mathematical model).
There are two main functions in our converter design: Control Function
and Information Flow Function. Figure 3.20 depicts the basic structure of the
model converter from TDDT model to LDT model.
Control Function
This function generates the weights and attribute values of the TDDT model through three sub-functions: 'initialize TDDT model', 'execute the PBC', and 'show a set of weights'. Initializing the TDDT model sets up the TDDT model states, rules, and a TDDT configuration. Executing the PBC sub-function generates the two quantities that carry the TDDT model over to the LDT model: (1) the weight scores of the nodes, and (2) the attribute values of the candidates. Assigning weight scores of nodes finds a set of LDT weights, according to the PBC, under the TDDT configuration generated during initialization; assigning attribute values of candidates transfers the attribute levels of the TDDT model into attribute values of the LDT model via the PBC. We can then show the set of weights for the LDT model from the outputs of the PBC execution. The main idea of this model converter is to provide a set of weights that transfers the TDDT model into the LDT model, and then to check the outputs of the two representation models, which can be obtained via our implementations of the TDDT and LDT models.
Information Flow Function
Arrived input data is used to initialize the TDDT model, and the list of the adjusted TDDT configuration is obtained from the TDDT model. Finally, we export the output data, a set of weights, to the LDT model.
The model converter adopts the proof by construction (PBC) described in Section 3.4.1. On one hand, it takes a TDDT configuration initialized by the decision tree selecting model that we developed and implemented, executes the PBC under that configuration, and exports the output data to the LDT model. On the other hand, we implemented a decision tree selecting model based on the TDDT model to validate the knowledge base built in Section 3.2. We introduce the integration and implementation in the next section.
Figure 3.20: The basic structure of the model converter (Control Function: initialize the TDDT state and TDDT configuration and set up rules for new commitments; execute the PBC with the current TDDT model, first assigning weight scores to nodes and then assigning attribute values to candidates; show the resulting set of weights and attributes for the LDT. Information Flow Function: arrived input data and the list of the TDDT configuration flow in, and the output data is exported to the LDT model).
3.5 INTEGRATION OF DTSM
To evaluate how closely a DTSM conforms to decision-makers' decisions based on the field repair cost model (2.2), a prototype system is implemented. Integrating the TDDT and LDT models requires data linkages between code and databases, and the integration architecture is designed so that the system can be modified or reused easily. The Logical Decisions for Windows [26] software is used to configure the LDT model, since it has a user-friendly GUI environment and powerful analysis tools. The system architecture has the following components:
Engineer User Interface (UI): shows the current state of the decision tree selecting tool.
TDDT Configuration UI: a GUI (graphical user interface) with drag-and-drop support that makes communication between users and the system convenient.
LDT Software: based on the current TDDT configuration from the database, calculates the priority scores of all maintenance policies and suggests the economic selection.
System Database: stores the static knowledge acquired, such as rule descriptions, sets of weights, and situations.
The logical relationship among these components is shown in Figure 3.21.
Figure 3.21: System architecture of DTSM (the Engineer UI issues review requests and receives the economic policy selection from the Logical DT software; the service department statuses and tree configuration are read from the database; the TDDT configuration UI previews the DT configuration inference and saves DT configurations back to the database).
3.6 EXAMPLE CASES OF DTSM
To assess how closely a DTSM conforms to decision-makers' decisions based on the field repair cost model (2.2), test cases are conducted to validate the DTSM.
3.6.1 Example Case
The objective is to determine an economic maintenance policy for a customer product. There are 2 service department states, 12 sub-costs, and 4 possible maintenance policies in this example. The related sub-costs are shown in Table 3.5.
Table 3.5: Costs of 4 possible maintenance policies

Maintenance Policy Evaluation Sub-Cost   Policy 1   Policy 2   Policy 3   Policy 4
System Design Cost                        150000     250000     100000     175000
Replacement Cost                          150000      60000     200000     130000
BIST Cost                                  10000      20000       6000       9000
External Test Equipment Cost              130000      50000     200000     140000
Reliable Factor                            10000       2000       4000       5500
Production Cost                             5000       1500      10000       3000
Logistic Support Cost                      20000       1800       3000       7500
Personnel Skill Cost                        8000       1700       6000       5000
Service Response Time Factor                1000       4500       1000       2000
Life-Cycle Cost                             1000       3000       1000       2500
Downtime Cost                                500       1000        400        800
Satisfaction Factor                         2500       5000       2000       4000
Total                                     466500     423000     542700     484000
By summing all the sub-costs, policy 2 should be selected. However, it is difficult and time-consuming to determine the exact values of all the sub-costs shown in Table 3.5. Instead of using the equation cost model behind Table 3.5, we use the decision tree selecting model (DTSM) to make an economic decision. The top-down decision tree model is depicted in Figure 3.22. If the current service state 1 is H and service state 2 is L, then the weight setting for the current field service department is W_2. The corresponding estimated costs and the weight set W_2 are shown in Table 3.6; this set of sub-cost weights is determined under a specific field service situation.
Figure 3.22: Top-down decision tree weight base of the example case (State 1 splits into H and L; under each branch, State 2 splits into H and L, selecting the weight set W_1 for H/H, W_2 for H/L, W_3 for L/H, or W_4 for L/L).
The logical decision tree created by Logical Decisions for Windows is shown in Figure 3.23. From the DTSM results shown in Figure 3.24, the economic maintenance policy selection is policy 2, the same as obtained with the equation cost model (2.2).
Table 3.6: Estimated cost of 4 possible maintenance policies

Maintenance Policy Evaluation Sub-Cost   W_2     Policy 1   Policy 2   Policy 3   Policy 4
System Design Cost                       0.3       M          H          L          M
Replacement Cost                         0.3       MH         L          H          M
BIST Cost                                0.02      M          H          L          M
External Test Equipment Cost             0.25      MH         L          H          MH
Reliable Factor                          0.02      M          L          H          ML
Production Cost                          0.02      M          M          M          M
Logistic Support Cost                    0.01      LM         H          LM         M
Personnel Skill Cost                     0.01      M          LM         MH         M
Service Response Time Factor             0.03      L          H          L          L
Life-Cycle Cost                          0.02      L          M          L          M
Downtime Cost                            0.01      L          M          L          M
Satisfaction Factor                      0.01      LM         H          LM         MH

(Entries are estimated cost levels: L = low, LM/ML = low-medium, M = medium, MH = medium-high, H = high.)
Figure 3.23: Logical decision tree created by Logical Decisions for Windows (the goal, Economic Maintenance Policy Selection, is decomposed into twelve measures: BIST cost, downtime cost, external test equipment cost, life-cycle cost, logistic support cost, personnel skills, production cost, reliable factor, replacement cost, satisfaction factor, service response time, and system design cost).
Figure 3.24: Results of DTSM for the example case (ranking for the economic maintenance policy selection goal: Policy 2 with utility 0.603, Policy 4 with 0.455, Policy 3 with 0.408, and Policy 1 with 0.398, each utility stacked from the twelve sub-cost measures; preference set = NEW PREF. SET).
3.6.2 Results
For validating the DTSM, a matching rate is defined as the percentage of common selections between the two policy decisions generated by the DTSM and by the field repair cost model (2.2) for the same policy candidates. Over the 20 validation example cases, the average matching rate is 90%. The validation results support that our DTSM allows decision-makers to express their maintenance policy selecting knowledge explicitly and intuitively through the GUI; the DTSM also clearly facilitates easy documentation of knowledge. Figure 3.25 shows that the economic maintenance policy selected for a PC system is the partially repairable policy. From the approximated sensitivity analysis, we conclude that the partially repairable policy for the PC system is an approximated absolutely confident selection. This conclusion supports the development of the weight-based maintenance policy (WBMP), introduced in Chapter 4, for PC systems. Note that building the field repair cost model (2.2) is time-consuming in real applications, since it is difficult to convert all evaluation factors into dollars; this is another purpose of using the DTSM to document selecting knowledge.
Figure 3.25: Result of DTSM for electronic systems (ranking for the economic maintenance policy selection goal: partially repairable system with utility 0.750, non-repairable system with 0.648, and fully repairable system with 0.370, each utility stacked from the sub-cost measures; preference set = NEW PREF. SET).
Although we consider 12 sub-costs in the policy selecting knowledge, many other selecting sub-costs should be extracted into the knowledge base when the DTSM is applied in field service; incomplete sub-costs will affect the selecting results, and each corporation should have its own selecting knowledge base. Identifying sufficient sub-costs from veteran decision-makers is a challenging but worthwhile issue; one could study the interview procedure or apply machine learning algorithms for knowledge extraction to enhance DTSM performance. Even so, using the decision tree model for policy selecting should meet field service requirements and mimic an experienced decision-maker's behavior.
3.7 SUMMARY
In this chapter, we have described the alternative maintenance policy problems and the selecting knowledge. A prototype decision tree selecting model (DTSM) for maintenance policies is built on two decision tree representation models, the top-down decision tree and the logical decision tree, and we prove that the top-down decision tree is a special case of the logical decision tree.
We conducted example cases comparing our DTSM with the equation cost model. The validation results support the applicability of our DTSM, which allows decision-makers to explicitly and intuitively express their situation assessment knowledge and make the economic decision using Logical Decisions for Windows; it clearly facilitates easy documentation of knowledge. The average matching rate is 90%. Our example results demonstrate both the ideas and the potential of DTSM for automated decision support developments.
Chapter 4: Weight-Based Maintenance Policy
To reduce the field maintenance cost of personal computer (PC) systems, this chapter develops a weight-based maintenance policy (WBMP) for performing repair tasks, as shown in Figure 4.1. The WBMP aims at a system capable of performing maintenance tasks based on a product's status, achieving a low field depot cost as well as short service time. However, developing a WBMP system incurs additional cost because of the capability to replace individual components in the system. To justify the cost of implementing a WBMP and to help design the system, a tool is needed to estimate its benefit in terms of reduced maintenance cost; we developed a simulation model that serves as such a tool.
In this chapter, we combine the various aspects of field repair service into one model: a discrete-event simulation model for field repair service with an integrated WBMP system. Section 4.1 introduces the hardware modeling of PC systems. Section 4.2 presents the design of the weight-based maintenance policy. Section 4.3 describes a generic model for field repair service; using this model, we simulate the field repair operation of a PC service provider in a visual simulation environment. The main purpose of the work is to develop an integrated field repair service model to estimate the value of the WBMP. Simulation models and results are given in Section 4.4, and Section 4.5 summarizes the chapter.
Figure 4.1: Weight-based maintenance policy in field service (at the system design level, the alternative field maintenance policies block is resolved by the weight-based maintenance policy, while the other design-level blocks, alternative dispatching strategies and economic diagnosis and repair service, are fixed; users reach the task manager via telephone service and online test, and fixed products are shipped back through the maintenance service).
4.1 HARDWARE MODELING OF PERSONAL COMPUTER SYSTEMS
Organizationally, there are three hierarchical levels of maintenance support in electronic companies [29]: organizational maintenance, intermediate maintenance, and supplier maintenance, illustrated in Figure 4.2. Organizational maintenance is performed on the subsystems (e.g., motherboard, CD-ROM, or VGA card) of a system (e.g., desktop or laptop) at the customer's operational site; technicians at this level replace failed subsystems but do not repair them, forwarding removed subsystems to the intermediate level. At the intermediate level, subsystems removed from systems may be repaired through the replacement of major units (e.g., the power connector, flash memory, or DRAM socket of a motherboard); maintenance technicians here are usually more skilled than those at the organizational level. The supplier level of maintenance includes unit repair, complete overhauls, and system rebuilding.
Figure 4.2: Three levels of maintenance support (a system decomposes into subsystems, handled by organizational maintenance; subsystems decompose into units, e.g., Unit 11 through Unit 13 and Unit 21 through Unit 23, handled by intermediate maintenance; supplier maintenance sits below the unit level).
4.2 WEIGHT-BASED MAINTENANCE POLICY FOR PC SYSTEMS
For the finance department of a PC manufacturing company, the field depot cost and the service response time are two important measures of the field repair service cost. Adopting a non-repairable policy reduces the service response time, but decision-makers then face the conflicting objective of reducing the depot cost. In this section, we present a weight-based maintenance policy (WBMP) for field repair service. The WBMP aims at a system capable of performing maintenance tasks based on the product's status, achieving a low depot cost as well as short service time.
In the WBMP system, units are classified as field replaceable units (FRU), base replaceable units (BRU), and irreplaceable units (IRU): an FRU is replaced in the field, a BRU is replaced in the repair center, and an IRU is not replaced when it fails. Five steps define the WBMP: (1) identification of the units in the subsystem, (2) definition of the units' weights in the subsystem, (3) measurement of the units' statuses from built-in self-test or external test equipment, (4) calculation of the subsystem's function score, and (5) the decision whether to replace the whole subsystem or individual units.
Unit Identification: This step decomposes a system into several subsystems and partitions each subsystem into FRUs, BRUs, and IRUs.
Unit's Weight Definition: For replaceable units (FRU and BRU), we define the unit's weight, UWi, as its cost in dollars; for an IRU, we define UWi as the subsystem's cost in dollars:
UWi = Ci when the unit is replaceable
UWi = Css when the unit is irreplaceable
where Ci is the i-th unit's cost in the subsystem and Css is the subsystem cost.
Status Measurement: When a unit fails, the subsystem is also in a failed mode and exhibits malfunctions. From built-in self-test or external test results, the status of each unit can be obtained:
Si = 0 means the unit is in the good operation mode.
Si = 1 means the unit is in the failed operation mode.
Function Score Calculation: This step calculates the function score (FS) of each subsystem:
FSi = UWi × Si is the function score of the i-th unit, and
FS = Σi FSi is the function score of the subsystem.
Maintenance Decision: A replacement strategy is selected by comparing FS with the FS threshold (FSth):
If FS < FSth, replace the failed units with new units.
If FS > FSth, replace the subsystem with a new one.
FSth = Css − Cec
where Cec is the extra service cost of performing unit-level replacement tasks.
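The five steps reduce to a few lines of logic; the following minimal Java sketch (ours; the class and variable names are hypothetical) shows the weight definition, function score calculation, and maintenance decision together:

public final class Wbmp {
    // unitWeight[i] = UWi (Ci for a replaceable unit, Css for an IRU).
    // failed[i]     = Si from BIST or external test results (true = failed).
    // css           = subsystem cost; cec = extra cost of unit-level replacement.
    public static String decide(double[] unitWeight, boolean[] failed, double css, double cec) {
        double fs = 0.0;                        // FS = sum_i UWi * Si
        for (int i = 0; i < unitWeight.length; i++) {
            if (failed[i]) fs += unitWeight[i];
        }
        double fsTh = css - cec;                // FSth = Css - Cec
        return (fs < fsTh) ? "replace failed units" : "replace subsystem";
    }
}

Note how the IRU convention (UWi = Css) automatically pushes FS past the threshold when an irreplaceable unit fails, forcing subsystem replacement.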
It should be noted that with the WBMP, compared with a non-repairable system, the overall field service cost in (2.1) should be reduced; we believe a WBMP system has the potential to reduce the field service cost. The new field service cost model can be written as
Cfs' = Chl + Col + Cot + Cre'
(4.1)
where Cre' is the repair service cost of a WBMP system.
4.3 A GENERIC MODEL FOR FIELD REPAIR SERVICE
In this section, we present a generic model for field repair service with two types of service: the non-repairable maintenance policy and the weight-based maintenance policy. Five modules are defined: (1) an incoming module that models pre-processing of the failed products; (2) a decision module that determines the repair policy for the waiting tasks; (3) a performing module that simulates technician repair actions; (4) a post-testing module that tests the repaired products; and (5) a repaired-storage module that models the storage behavior. Figure 4.3 shows the five modules and their relationships.
Figure 4.3: A generic field repair service model (the incoming module feeds the decision module, which routes tasks to either the non-repairable maintenance policy or the weight-based maintenance policy within the performing module; repaired products then pass through the post-testing module to the repaired-storage module).
Incoming Module: This module simulates the data processing procedure for failed products when they are delivered to the field repair center. After this work is performed on the failed products, they are sent to the decision module for the next process.
Decision Module: This module determines the maintenance policy with a low service cost for each failed product; the selected maintenance policy is then used to perform the repair tasks in the performing module. When the weight-based maintenance policy is performed, the statuses of the components in the system must be obtained from built-in self-test results or external test equipment results; to obtain these statuses, we develop an economic approach to failure diagnosis in Chapter 5.
Performing Module: This module monitors the repair behavior of the field technicians under the maintenance policy selected by the decision module. Failed products that cannot be fixed in the field repair center are sent back to vendors/providers or discarded.
Post-Testing Module: This module tests all repaired products from the performing module and determines whether they are ready to be sent back to customers or are still in failure modes. Products that pass all functional tests are sent to the repaired-storage module; otherwise, they return to the decision module for further repair, and test engineers feed the results back to the field technicians in the performing module.
Repaired-Storage Module: This module simulates the product storage procedure in the field repair service; all products in this module await return to customers. After the work is completed, the service and product information is passed to the system database to update the corresponding product records.
It should be noted that, depending on the specific field repair operation, a simulation model may not need all of the modules mentioned above; for example, if an operation handles only non-repairable systems and no partially repairable systems, then the decision module is not needed.
4.4 SIMULATION RESULTS AND ANALYSIS
To justify the cost of implementing a WBMP system, a simulation model is used to estimate its benefit in terms of depot cost reduction. A test case is built for a repair center with 15 field technicians, each working eight hours a day.
4.4.1 Simulation Models
To estimate the value of the WBMP system, two scenario simulations are developed: one with a non-repairable system, shown in Figure 4.4, and one with a WBMP system, shown in Figure 4.5. Without a WBMP system, all failed systems are repaired by replacing failed subsystems with new ones; with a WBMP system, failed systems are repaired by replacing either new units or new subsystems according to their statuses. The Enterprise Dynamics [27] simulation software is used to implement the simulation model; an example with the WBMP system is shown in Figure 4.6. Running the simulations requires the following data: (1) product pre-processing time on arrival at the repair center, (2) the probability distribution of failed-product testing time, (3) the queuing discipline in front of the field technicians, (4) the probability distribution of product repair time, (5) the probability distribution of repaired-product testing time, (6) the queuing discipline in front of the repaired storage, and (7) the ratio of the unit replacement cost to the subsystem replacement cost. The related parameters of the non-repairable maintenance policy and the WBMP are given in Table 4.1.
Figure 4.4: Simulation model without WBMP (failed products arrive at failed-product incoming, wait in a queue for a repair technician, field technicians 1 through Nft, then pass through repaired testing and repaired storage before leaving as repaired products).
Figure 4.5: Simulation model with WBMP (after failed-product incoming, a testing window decides between replacing with a new product and replacing with a new unit, each with its own waiting queue for repair technicians, field technicians 1 through Nft; repaired products then pass through repaired testing and repaired storage).
Figure 4.6: Simulation model with WBMP by Enterprise Dynamics
Table 4.1: Parameters of two simulation models

Incoming Module
  Non-repairable maintenance policy: Tpp mins/each for the processing procedure.
  WBMP: Tpp mins/each for the processing procedure.
Policy Decision Module
  Non-repairable maintenance policy: (1) max Nwq capacity; (2) queuing discipline: FIFO; (3) send to: random channel.
  WBMP: (1) testing time is normally distributed with mean Tttm mins and std Ttts mins; (2) P% replace product, (1−P)% replace unit; (3) max Nwq capacity; (4) queuing discipline: FIFO; (5) send to: random channel.
Performing Module
  Non-repairable maintenance policy: (1) work hours of each technician Twh a day; (2) number of field technicians Nft; (3) repair time is normally distributed with mean Trtm mins and std Trts mins, with negative service times prevented.
  WBMP: (1) number of field technicians Nft; (2) product repair time is normally distributed with mean Trtm mins and std Trts mins, with negative service times prevented; (3) unit repair time is normally distributed with mean Trutm mins and std Truts mins.
Post-Testing Module
  Non-repairable maintenance policy: repaired testing time is normally distributed with mean Ttrtm mins and std Trtts mins.
  WBMP: repaired testing time is normally distributed with mean Ttrtm mins and std Trtts mins.
Repaired-Storage Module
  Non-repairable maintenance policy: (1) max Nrt capacity; (2) queuing discipline: FIFO; (3) send to: random channel.
  WBMP: (1) max Nrt capacity; (2) queuing discipline: FIFO; (3) send to: random channel.
4.4.2 Simulation Results and Analysis
To assess the service cost reduction of the WBMP system, two important measures are used: (1) the number of repaired products, and (2) the depot cost reduction of the WBMP. The first reflects the volume of service tasks; the second estimates the repair service cost. Five simulation runs are conducted for each scenario, with and without a WBMP system, and each run simulates one month of field repair operation. Assume that the component replacement cost is 30% of the subsystem replacement cost. The simulation results show that the WBMP system increases the number of repaired products by approximately 11% and reduces the field depot cost by about 44%; the detailed results are shown in Table 4.2.
Table 4.2: Simulation results of two maintenance policies (one-month, 30-day simulation run)

Non-Repairable Maintenance Policy: total failed products 2880; repaired products 2250.
WBMP: total failed products 2880; repaired products 2580 (930 by new product, 1650 by new unit).
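Reading these numbers together with the 30% assumption (our arithmetic; the text does not spell the calculation out): the non-repairable policy spends one subsystem cost per repaired product, while the WBMP run spends 930 × 1 + 1650 × 0.3 = 1425 subsystem-cost equivalents for 2580 repairs, about 0.55 per repaired product, i.e., a reduction of roughly 45%, consistent with the reported 44%. Likewise, repaired products rise from 2250 to 2580, an increase of 330 units, about 11% of the 2880 failed products.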
4.5 SUMMARY
The complexity and cost of field service are very high, and research on reducing the field service cost is increasingly important. Since not all components in a failed PC system are actually in failure modes, a cost-effective repair design can have a significant impact on the cost of field repair service. Therefore, to economically reduce the field repair cost, an economic maintenance policy based on product statuses is needed to perform the maintenance tasks of PC systems. The emphasis of this chapter is the design of a weight-based maintenance policy (WBMP) for field repair service; the WBMP aims at a system that achieves a low field depot cost as well as short service time.
A simulation model integrating component statuses and field repair activity is developed to evaluate the benefits of a WBMP system. Our simulation results show that the field repair depot cost is reduced significantly with a WBMP system. The WBMP can also be combined with the DTSM and with a cost-effective design for failure diagnosis, given in the next chapter, to build an economic field repair service model.
Chapter 5: The Economics of Overall System-Level Test
Organizationally, there are two hierarchical levels of system-level test for personal computer (PC) systems: manufacturing system test and field repair test. The costs of manufacturing system test and field service increase significantly due to the rule of ten, as shown in Table 1.1; both are therefore particularly critical to the global competitiveness of a PC manufacturing company.
This chapter presents a decision tree diagnosis tool for field repair service to reduce the cost of test. A cost-effective design for overall system-level test is also introduced, which incorporates a manufacturing system test model, the field failure rate, and a field service model into a system-level test cost model; the optimized system design parameters from the cost model can be used to reduce the cost of overall system-level test. The remainder of this chapter is organized as follows. Section 5.1 summarizes the economic analysis of the overall system-level test for a PC manufacturing company. The decision tree diagnosis tool (DTDT) for field repair test is presented in Section 5.2. Section 5.3 shows example cases and results of the DTDT for PC systems. A cost-effective design for overall system-level test is introduced in Section 5.4. Finally, Section 5.5 summarizes this chapter.
5.1 OVERALL SYSTEM-LEVEL TEST
A system-level test process is characterized by three sub-models, as depicted in Figure 5.1: a manufacturing system test cost model, a field failure rate analysis model, and a field service model.
Figure 5.1: System-level testing process of a PC manufacturing company (manufacturing system test, field failure analysis, and field repair test in sequence).
5.1.1 Manufacturing System Test
Manufacturing system testing is characterized by high functionality uncertainty and verification complexity. Its cost is dominated by the testing runtime; inefficient system testing delays the manufacturing process and results in low throughput.
The manufacturing system test process for PCs [17] is a pipeline of assembling, testing, and shipping, as shown in Figure 5.2. There are three main categories of system testing for computer systems: explicit system testing, implicit system testing, and software testing. Explicit system testing, Tes, runs diagnostics on specific system components; implicit system testing, Tis, runs diagnostics on system integration faults; software testing, Ts, runs diagnostics on conflicts between the hardware and the software. Manufacturing system tests take a long time to execute because of the high-volume testing process. In the classical system testing process, all tests are performed on every PC to ensure that all PCs meet the functionality requirements.
Figure 5.2: The manufacturing system testing process for PC systems (components from vendors A through N are assembled, tested, and shipped to customers; units that fail testing go to a repair decision, where they are either repaired or discarded).
The manufacturing system test model is used to manage test processes and, further, to support test strategy selection. Several manufacturing test cost models are available for test strategy selection [19, 32-33]; Williams and Ambler [32] proposed a manufacturing system test cost model for a PC manufacturing company. The overall manufacturing system test cost model can be rewritten as
Comt = Cft + Cvt + Ctt(t1) + Ctes + Cot   (5.1)

where
Comt : cost of the overall manufacturing system test
Cft : fixed cost of test
Cvt : variable cost of test
Ctt(t1) : cost of the test time
t1 : test time in the manufacturing system
Ctes : cost of the test escapes
Cot : other costs of test
5.1.2 Field Failure Model
The field failure analysis helps managers predict future warranty costs and assists field engineers in predicting the number of failures in a future life test. Farren and Ambler [19] applied a modified Goel-Okumoto (G-O) reliability model to field failure rate analysis; experiments with field data have shown that the modified model is a good approximation of the observed failure rate profile. The modified G-O model is
λ(t1 + t2) = α e^(−β(t1 + t2)) + γ   (5.2)

where
t1 : test time in the manufacturing system
t2 : customer run time
λ(t) : failure occurrence rate
α : initial failure rate
β : failure rate per latent fault
γ : average steady-state failure rate
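As a quick illustration (our sketch; the parameter values are hypothetical, not fitted from the dissertation's data), the modified G-O rate (5.2) can be evaluated directly, e.g., to see how extending the manufacturing test time t1 drives the field failure rate toward the steady-state floor γ:

public final class GoFailureRate {
    // Modified G-O rate, eq. (5.2): lambda(t1 + t2) = alpha * exp(-beta * (t1 + t2)) + gamma.
    static double lambda(double t1, double t2, double alpha, double beta, double gamma) {
        return alpha * Math.exp(-beta * (t1 + t2)) + gamma;
    }

    public static void main(String[] args) {
        double alpha = 0.05, beta = 0.01, gamma = 0.002;  // hypothetical fitted values
        for (double t1 : new double[] {0, 50, 100}) {     // longer factory test, lower field rate
            System.out.printf("t1 = %3.0f  lambda = %.5f%n", t1, lambda(t1, 200, alpha, beta, gamma));
        }
    }
}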
5.1.3 Field Service Model
Field service is a complex process that involves sending technicians to perform repair tasks on consumer products; it is responsible for getting failed products fixed while keeping a low service cost. The field service model describes the corporate activities involved with customers through the performance of maintenance tasks. There is no doubt that field service becomes more and more important as a PC manufacturing company continues to grow; in current practice, the cost of field repair is a major part of the cost of field service and a key performance measure.
The repair time distribution for electronic systems has often been approximated by a log-normal distribution [26]. Assuming a constant repair rate, the probability of completing a repair within the designated repair time can be written as

Pr = 1 − Pnp = 1 − e^(−μt3)   (5.3)

where
Pr : probability of performing a repair action within the designated repair time interval
Pnp : probability of not performing a repair action within the same designated repair time interval
μ : repair rate
t3 : designated repair and test time interval
The field service cost model (2.1) can be rewritten as

Cfs = Chl + Col + Cot + Cre' + Clcc(t2) + Crtt(t3)   (5.4)

where
Cre' : cost of repair service excluding repair test time
Clcc(t2) : life-cycle cost
Crtt(t3) : cost of the designated repair time
Although the cost reduction problems of the manufacturing system and of field service have been important research topics in the test economics of PC manufacturers, effective methods for reducing the overall system-level test cost in a complex system environment still pose significant challenges to both researchers and practitioners.
5.2 ECONOMIC DIAGNOSIS DESIGN FOR FIELD REPAIR TEST
Before the weight-based maintenance policy can be performed, however, one must first detect and diagnose the failed components in the system. We define failure detection as the task of determining when a system is experiencing problems, and failure diagnosis as the task of locating the failed components.
Many approaches to failure diagnosis have been developed; we treat the problem as one of finding the failed units in the subsystem. Fault tree analysis is widely used in industrial systems [34-38], but the fault tree model suffers efficiency and accuracy problems as systems become larger and more complex. Another approach is manual testing of all components, but this is time-consuming, requires much expertise, and does not support online testing. A cost-effective approach is therefore essential to meet field repair service requirements, which is why the decision tree diagnosis approach, discussed next, is developed here. More specifically, we apply a decision tree learning algorithm to construct a diagnosis decision tree, which is then used to predict the failed units in the subsystem. The decision tree model has the advantage of human-like reasoning procedures, which is important if the method is to be adopted in current field service applications.
5.2.1 Decision Tree Learning Algorithm
We start by describing what a decision tree might look like in our system. Suppose there are four units in the simplified subsystem shown in Figure 5.3, each with three operating modes: pass, fail, and unknown. Assume a failed unit is under a stuck-at-0 fault, and suppose there are 2 inputs, 2 outputs, and 1 internal state observed from the units' outputs. Figure 5.4 shows a possible diagnosis decision tree for unit 1 learned from training data. Each internal node is labeled with a status, which may be an input, output, or state, and each leaf node represents a failure prediction. By examining the paths that lead to failure-predicting leaf nodes, the operating mode of unit 1 can be obtained; for example, the path (Input 1 = 1, Input 2 = 0, State 1 = 0) results in a leaf node predicting failure of unit 1.
Figure 5.3: Simplified subsystem with 4 units (units 1 through 4 connected between Input 1 and Input 2 and Output 1 and Output 2, with State 1 observed at an internal OR gate).
Figure 5.4: A diagnosis decision tree for unit 1 (root: Input 1, where 0 leads to Unknown and 1 leads to Input 2; under Input 2 = 0, State 1 = 0 predicts Fail and State 1 = 1 predicts Pass; under Input 2 = 1, State 1 = 0 predicts Fail and State 1 = 1 predicts Unknown. Legend: P = unit 1 pass, F = unit 1 fail, U = unit 1 unknown).
Constructing a decision tree via a machine learning algorithm involves deciding which status to split on at each node and what the resulting tree should look like. Given the needs of the diagnosis problem for PC systems, the decision tree learning algorithm C4.5 [31], which is widely used in many applications, is applied here to construct the diagnosis decision tree. The novelty of our decision tree diagnosis tool is to apply the decision tree model to failure prediction via the machine learning algorithm; this approach has the potential to reduce the field test time and is also suitable for online testing in field service operations.
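For reference, the core of a C4.5-style learner is the choice of split attribute. The sketch below (ours, separate from the dissertation's Java implementation) shows the plain information-gain criterion on discrete statuses; C4.5 proper uses the gain ratio and additionally handles continuous attributes and pruning, so this is only the essential idea:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class GainSplit {
    // Shannon entropy of a label vector, in bits.
    static double entropy(int[] labels) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int y : labels) counts.merge(y, 1, Integer::sum);
        double h = 0.0;
        for (int c : counts.values()) {
            double p = (double) c / labels.length;
            h -= p * Math.log(p) / Math.log(2);
        }
        return h;
    }

    // data[i][j] = value of status j in training sample i; labels[i] = unit operating mode.
    static int bestAttribute(int[][] data, int[] labels) {
        double base = entropy(labels);
        double bestGain = -1.0;
        int best = -1;
        for (int j = 0; j < data[0].length; j++) {
            // Partition the labels by the value of status j.
            Map<Integer, List<Integer>> parts = new HashMap<>();
            for (int i = 0; i < data.length; i++)
                parts.computeIfAbsent(data[i][j], k -> new ArrayList<>()).add(labels[i]);
            double remainder = 0.0;               // expected entropy after the split
            for (List<Integer> part : parts.values()) {
                int[] sub = part.stream().mapToInt(Integer::intValue).toArray();
                remainder += (double) sub.length / labels.length * entropy(sub);
            }
            double gain = base - remainder;       // information gain of splitting on j
            if (gain > bestGain) { bestGain = gain; best = j; }
        }
        return best;
    }
}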
5.2.2 Decision Tree for Failure Diagnosis
After constructing the diagnosis decision tree for each unit, we then can
predict the possible failed units in the subsystem. We do this by applying the
following procedure:
Step 1) Decision Tree Construction for Each Unit:
By using the training data, diagnosis decision trees for all units can be
constructed. All of them are used in diagnosing failed units in the
subsystem.
Step 2) Tests for Failed Subsystem:
The subsystem is tested by input test patterns. Then the corresponding
statuses, states and outputs, can be obtained from testing results.
Step 3) Failure Prediction:
For unit 1, we put the statuses of the subsystem into the corresponding
diagnosis decision tree. The operating mode of unit 1 is then predicted by
using the decision tree model. If the operating mode is pass or fail, then go
to step 4. If the operating mode is unknown, we apply another input test
pattern and go to step 2. Note that all units are assumed to be independent,
and their corresponding failures are also independent.
Step 4) Failure Predictions for All Units:
Step 3 is repeated until all units' operating modes have been obtained.
Step 5) Possible Failure Predictions and Repair:
A list of possible failed units is obtained from Step 4 and sent to the repair technicians to perform repair tasks. The diagnosis procedure for failed units is depicted in Figure 5.5.
Figure 5.5: Diagnosis procedure for failed units (the observed data vector is fed to the decision trees for units 1 through n, each answering whether its unit is a failure; the resulting list of possible failed units goes to the repair technicians).
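A minimal Java sketch of this per-unit prediction loop (ours; the tree interface and names are hypothetical), applying one learned tree per unit to the observed status vector:

import java.util.ArrayList;
import java.util.List;

public final class DtDiagnosis {
    enum Mode { PASS, FAIL, UNKNOWN }

    // One learned diagnosis tree per unit; 'statuses' holds the observed inputs, states, and outputs.
    interface UnitTree { Mode predict(int[] statuses); }

    static List<Integer> suspectUnits(UnitTree[] trees, int[] statuses) {
        List<Integer> failed = new ArrayList<>();
        for (int u = 0; u < trees.length; u++) {
            Mode m = trees[u].predict(statuses);          // Step 3: failure prediction for unit u
            if (m == Mode.FAIL) failed.add(u);
            // Mode.UNKNOWN would trigger another test pattern (back to Step 2) in the full flow.
        }
        return failed;                                    // Step 5: list sent to repair technicians
    }
}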
The decision tree diagnosis tool can be used in the field repair center as shown in Figure 5.6. Its outputs can serve as the inputs of the weight-based maintenance policy presented in Chapter 4, and the two can be applied together to reduce the cost of field service.
Figure 5.6: Economic diagnosis approach in field repair service (the field service flow of Figure 4.1, with users reaching the task manager via telephone service and online test and fixed products shipped back through the maintenance service; here the economic diagnosis and repair service block is the system-design-level element of interest).
5.3 EXAMPLE RESULTS AND ANALYSIS OF DTDT
Three example cases demonstrate the value of the decision tree diagnosis tool (DTDT): one with 6 units, one with 8 units, and one with 12 units in a subsystem. A subsystem with 8 units is shown in Figure 5.7. We have implemented the C4.5 algorithm in Java; an example of the training data used to construct the diagnosis decision tree is shown in Figure 5.8, and the corresponding diagnosis decision tree is depicted in Figure 5.9. A failed subsystem usually stems from a single failed unit, so in all three example cases we assume that one failed unit occurs at a time in the subsystem. The examples are run under JDK 1.4.2 to evaluate the potential effectiveness of the DTDT, as shown in Table 5.1. Note that in the classical test process all units are tested to find the failed unit, while in the DTDT process the possible failed units are predicted by the decision tree model.
Figure 5.7: Subsystem modeling with 8 units (units 1 through 8 between Input 1 and Input 2 and Output 1 and Output 2, with three internal states, State 1 through State 3, observed at OR gates).
Figure 5.8: An example of training data for decision tree learning algorithm
Figure 5.9: Diagnosis decision tree from Figure 5.8
To evaluate the performance of the DTDT, we use three metrics: fault coverage success rate, status reduction per unit, and test time reduction. Let N denote the number of units in the subsystem, S1 the number of statuses used by the classical testing approach, S2 the number of statuses used by the DTDT, n the number of detected units in the DTDT, and T the number of test applications needed to identify all failures in the DTDT. We define:

Fault coverage success rate = n / N   (5.4)
Status reduction per unit = (S1 − S2) / N   (5.5)
Test time reduction = (N − T) / N   (5.6)
Table 5.1: Results of using the decision tree diagnosis tool

N                              6 units   8 units   12 units
Fault coverage success rate    100%      100%      100%
Status reduction per unit      33%       37.5%     41.67%
Test time reduction            66%       75%       83.3%
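Reading Table 5.1 backwards through definitions (5.5) and (5.6) (our arithmetic): for the 8-unit case, a 37.5% status reduction per unit means S1 − S2 = 0.375 × 8 = 3 statuses saved, and a 75% test time reduction means (N − T)/N = 0.75, i.e., T = 2 test applications sufficed to identify all failures. The same T = 2 also yields the 66% and 83.3% reductions of the 6- and 12-unit cases, which is why the benefit grows with subsystem size.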
Perfect diagnosis would have a fault coverage success rate of 100%, and all examples achieve 100% fault coverage success rate with the DTDT. Comparing the results of the decision tree diagnosis tool (DTDT) with those of the classical testing approach, the DTDT uses fewer statuses, and its number of test applications is significantly reduced. The example results show that the DTDT significantly reduces the cost of testing while maintaining a high fault coverage rate; we may conclude from these results that the DTDT is a competent failure diagnosis approach for electronic systems.
5.4 A COST-EFFECTIVE DESIGN FOR OVERALL SYSTEM-LEVEL TEST
We have demonstrated the applicability of using a decision tree model to
diagnose failures and a weight-based repair policy to reduce the field
depot cost. These new cost-effective designs can be applied to the field
repair center as shown in Figure 5.10. We believe that our designs have the
potential to reduce the field service cost in terms of both the repair
testing cost and the repair depot cost.
[Figure: in the field repair center, failed systems pass through the decision tree diagnosis tool and the weight-based maintenance policy, which route them to repair, storage, subsystem vendors, or discard.]
Figure 5.10: New approaches for decision processes in the field repair center
In our context, however, it is not sufficient to include cost-effective
designs in field repair service alone. Maintaining high customer
satisfaction is also necessary to increase the corporation's revenues. One
could perhaps use logistics and quality management within field service
operations to achieve high customer satisfaction.
Another issue is integrating the overall system-level test. There are two
hierarchical levels of system-level test for the personal computer (PC)
system: the manufacturing system test and the field repair test. There has
been much related work in the area of the manufacturing system test
[17-21]. However, less effort has been made in the domain of field repair
service. On one hand, an economic design of field repair service can save a
vast amount of money. On the other hand, the manufacturing system test is
highly related to the field repair test; their relationship is important
but often overlooked. Therefore, a cost-effective design for the overall
system-level test is needed to reduce the cost of test. A possible approach
is to incorporate a manufacturing system test cost model, the field failure
occurrence rate, and a field repair cost model into a system-level test
cost model, and then to optimize the system design parameters to reduce the
cost of system-level test. We start by building an overall system-level
cost model.
From Section 5.1, the cost model of overall system-level test per system is
defined as follows:
Cost(t1, t2, t3) = Comt(t1) + λ(t1 + t2) × Cfs(t2, t3)    (5.7)

where
Cost is the overall system-level test cost,
t1 is the test time in the manufacturing system,
t2 is the customer run time,
t3 is the designated repair and test time,
Comt is the cost of the manufacturing system test,
λ(t1 + t2) is the failure occurrence rate, and
Cfs is the overall cost of field service.
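Stated as code, (5.7) is a simple composition of the three component models. The Java sketch below assumes Comt, λ, and Cfs are supplied as functions; the interface and class names are ours, and the actual forms of the models depend on the company profile discussed below.

// A sketch of Equation (5.7); interface names are illustrative.
interface UnaryModel  { double at(double t); }
interface BinaryModel { double at(double a, double b); }

class SystemLevelTestCost {
    final UnaryModel  comt;    // Comt(t1): manufacturing system test cost
    final UnaryModel  lambda;  // lambda(t1 + t2): failure occurrence rate
    final BinaryModel cfs;     // Cfs(t2, t3): overall field service cost

    SystemLevelTestCost(UnaryModel comt, UnaryModel lambda, BinaryModel cfs) {
        this.comt = comt;
        this.lambda = lambda;
        this.cfs = cfs;
    }

    // Cost(t1, t2, t3) = Comt(t1) + lambda(t1 + t2) * Cfs(t2, t3)
    double cost(double t1, double t2, double t3) {
        return comt.at(t1) + lambda.at(t1 + t2) * cfs.at(t2, t3);
    }
}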
A lot of effort has been made to reduce the cost of the manufacturing
system test, Comt [17-21, 32-33]. Farren and Ambler [19] showed that this
cost can be reduced by optimizing the test time, t1, in the manufacturing
system.
Parameter t2 is the customer run time. It relates to life-cycle cost
analysis [39-42]. Designing an optimal life-cycle time for PC systems is a
major issue: on one hand, a short life-cycle design will increase the field
repair cost and lower customer satisfaction; on the other hand, a long
life-cycle design will incur extra design and manufacturing costs.
Accordingly, an optimal design of t2 will reduce the cost of the overall
system-level test.
Less work has been done on the field service cost, Cfs. There is a
trade-off between the repair cost and the testing time in the field service
model. Therefore, an optimal repair and test time, t3, will reduce the
field service cost in terms of the repair depot cost.
Our cost-effective design for system-level test is to optimize t1, t2, and
t3 in (5.7) via genetic algorithms or simulated annealing to reduce the
cost of the overall system-level test, as shown in Figure 5.11. The purpose
of this section is to develop an economic design for system-level test by
feeding design parameters back to the system design level to minimize the
cost of test.
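As one of the two optimization approaches just mentioned, a minimal simulated annealing loop over (t1, t2, t3) could look like the sketch below; the cooling schedule, step size, and bounds are illustrative assumptions, and SystemLevelTestCost is the (5.7) sketch above.

// A minimal simulated annealing sketch over (t1, t2, t3) for (5.7).
import java.util.Random;

class TestTimeOptimizer {
    static double[] anneal(SystemLevelTestCost model, double[] start) {
        Random rng = new Random();
        double[] cur = start.clone();
        double curCost = model.cost(cur[0], cur[1], cur[2]);
        for (double temp = 1000.0; temp > 1e-3; temp *= 0.995) {
            double[] cand = cur.clone();
            int i = rng.nextInt(3);  // perturb one of t1, t2, t3
            cand[i] = Math.max(0.0, cand[i] + rng.nextGaussian());
            double delta = model.cost(cand[0], cand[1], cand[2]) - curCost;
            // Accept improvements always; accept worse moves with
            // Boltzmann probability e^(-delta / temp).
            if (delta < 0 || rng.nextDouble() < Math.exp(-delta / temp)) {
                cur = cand;
                curCost += delta;
            }
        }
        return cur;  // near-optimal (t1, t2, t3)
    }
}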
[Figure: a manufacturing system test cost model (manufacturing system test model, t1), a failure occurrence rate analysis (field failure model, t2), and a field service cost model (field-repair test model, t3) feed an optimization approach for t1, t2, and t3 to reduce test time and cost, with results fed back to the system design level to optimize system test quality and cost, and further the system design.]
Figure 5.11: A cost-effective design for system-level test
The cost analysis can be used to demonstrate the relationship between the
overall system-level testing cost and the three design parameters, t1, t2,
and t3. The actual cost analysis results, however, depend heavily on the PC
manufacturing company's profile and the field service environment in which
the design is applied. It is clear that the cost of the overall
system-level test depends strongly on t1, t2, and t3; therefore, optimal
settings of these three parameters should be considered simultaneously in
the system design process. We believe that this cost-effective design has
the potential to significantly reduce the cost of the overall system-level
test.
5.5 SUMMARY
The costs of manufacturing system test and field service are very high, and
research on reducing the overall system-level cost is becoming more and
more important. The emphasis of this chapter is to design a decision tree
diagnosis tool (DTDT) and, further, to develop a cost-effective design for
the overall system-level test. Development of the DTDT includes a decision
tree learning approach to construct the diagnosis decision tree and a
decision tree model to predict failed units. The decision tree diagnosis
tool developed here is suitable for on-line diagnosis in a real field
service application. Example results show that the DTDT significantly
reduces the cost of field repair testing. The cost-effective design
includes building a system-level cost model and optimizing the
manufacturing system test time, the customer run time, and the repair and
test time to minimize the cost of test.
There may be other system design parameters related to the manufacturing
system test cost model, the field failure model, and the field service cost
model; their relationships are important and need to be researched. The
main purpose of this chapter is to analyze the economics of the overall
system-level test, to adapt decision trees for failure diagnosis, and to
develop a cost-effective design for the overall system-level test.
Chapter 6: Conclusion and Future Work
The emphases of this dissertation are: (1) a survey of the economics of
field service; (2) a decision tree design for maintenance policy selecting,
which fully exploits the advantages of knowledge-based technology to
support both the current and future needs of field service operations; (3)
a weight-based maintenance policy for PC systems; (4) an economic diagnosis
approach for field repair service; and (5) a cost-effective design for
overall system-level test.
Our policy selecting studies adopt knowledge engineering methods to
computerize human selecting knowledge for alternative maintenance policies.
We have developed a top-down decision tree knowledge representation model;
key to this model are human-like reasoning procedures that are generic
across field service operations. We then developed a logical decision tree
model to calculate candidates' priority scores, which can be used to make
the selecting decision. It can readily be shown that the top-down decision
tree model is compatible with the logical decision tree model, so a unified
representation model can be proposed. We have implemented and validated the
decision tree selecting model (DTSM) by conducting example cases of field
service. The average matching rate, that is, the agreement between DTSM
results and the results of the equation cost model, is 90%. This validation
demonstrates the applicability of the DTSM. The model also conveys
selecting knowledge clearly and makes documentation easier. The decision
tree selecting model developed in this dissertation serves as a foundation
for further knowledge engineering toward an automated decision support
system.
To reduce the field repair cost of PC systems, a weight-based maintenance
policy (WBMP) is designed. The WBMP performs maintenance tasks based on a
product's status to achieve a low field depot cost as well as a short
service time. A simulation model integrating component statuses and field
repair activity is developed to evaluate the benefits of a WBMP system. The
results show that the field repair depot cost is greatly reduced with a
WBMP system. However, the cost of knowing the status of failed systems is
high. Therefore, we developed a decision tree diagnosis tool (DTDT) to
predict failures in the systems. Example results show that the DTDT
achieves low testing cost while maintaining a high fault coverage success
rate.
We also study the economics of the overall system-level test and present a
cost-effective design, which optimizes the cost model of the overall
system-level test over three design parameters: the manufacturing system
test time, the customer run time, and the field repair and test time. We
believe that this design has the potential to reduce the cost of test
significantly. For further study, there remain many interesting and
challenging issues worthy of continued investigation in the field service
domain.
1. The relationship between the decision tree selecting model (DTSM) and
the equation cost model for field service operations: The DTSM developed in
this dissertation mimics the behavior of the equation cost model with less
effort, for a short-term decision support system. For a long-term decision
support system, however, an equation cost model for analyzing the economics
of field service is needed in the corporation. Therefore, the relationship
between the two should be clarified and defined when the DTSM is applied to
field service applications.
2. An optimal set of weights for the decision tree selecting model (DTSM):
In our DTSM, the logical decision tree model needs to find a set of weights
from the top-down decision tree model under a specific field service
situation. We did not incorporate an optimal set of weights into this
study, but it is an important issue for our DTSM. In addition, applying
knowledge engineering to find an optimal set of weights that makes
decisions economically and accurately is an interesting subject of study.
3. Logistics quality management for field repair service: Customer
satisfaction is an important measure in the corporation. Because
engineering, manufacturing, and marketing are distributed worldwide,
customer satisfaction requires the highest-quality information systems in
the industry. Logistics quality management is one useful approach to
achieving high customer satisfaction. Therefore, field service logistics is
a promising research topic, supporting both field service engineers and
original subsystem vendors with the necessary spare parts for PC
manufacturing companies.
4. System parameters of the overall system-level test: In addition to the
manufacturing system test time, the customer run time, and the field repair
and test time, there may be other system design parameters related to the
manufacturing system test cost model, the field failure model, and the
field service cost model. Their relationships are important and need to be
researched to reduce the cost of test.
Bibliography
[1] B. Davis, The Economics of Automatic Testing. McGraw-Hill, London, United Kingdom, 1982.

[2] Michael L. Bushnell and Vishwani D. Agrawal, Essentials of Electronic Testing for Digital, Memory & Mixed-Signal VLSI Circuits. Kluwer Academic Publishers, Norwell, Massachusetts, USA, 2000.

[3] Chryssla Dislis, Jochen Dick, Ian Dear, and Anthony Ambler, Test Economics and Design for Testability: For Electronic Circuits and Systems. Ellis Horwood, Hemel Hempstead, 1995.

[4] M. Abadir and A. Ambler, Economics of Electronic Design, Manufacture, and Test. Kluwer Academic Publishers, Los Alamitos, California, 1994.

[5] A. Ambler, M. Abadir, and S. Sastry, Economics of Design and Test: For Electronic Circuits and Systems. Ellis Horwood Limited, London, 1992.

[6] A. Ambler, "The Cost of Test," Proceedings of AutoTestCon, IEEE Systems Readiness Technology Conference, 2001, pp. 515-523.

[7] Hugh Reinhart and Michael Pecht, "Automated Design for Maintainability," Proceedings of the IEEE 1990 National Aerospace and Electronics Conference, Dayton, OH, May 1990, pp. 1227-1232.

[8] Yiqing Lin, Arthur Hsu, and Ravi Rajamani, "A Simulation Model for Field Service with Condition-Based Maintenance," Proceedings of the 2002 Winter Simulation Conference, Dec. 8-11, 2002, pp. 1885-1890.

[9] S. O. Duffuaa, M. Ben-Daya, K. S. Al-Sultan, and A. A. Andijani, "A Generic Conceptual Simulation Model for Maintenance Systems," Journal of Quality in Maintenance Engineering, vol. 7, pp. 207-219, 2001.

[10] Satoshi Hori, Hiromitsu Sugimatsu, Soshi Furukawa, and Hirokazu Taki, "Utilizing Repair Cases of Home Electrical Appliances," IEICE Trans. Inf. & Syst., vol. E82-D, no. 12, 1999.

[11] E. Szczerbicki and W. White, "System Modeling and Simulation for Predictive Maintenance," Cybernetics and Systems, vol. 29, pp. 481-498, 1998.

[12] E. F. Watson, P. P. Chawda, B. McCarthy, M. J. Drevna, and R. P. Sadowski, "A Simulation Metamodel for Response-Time Planning," Decision Sciences, vol. 29, pp. 217-241, 1998.

[13] Marisa Sanchez, Juan C. Augusto, and Miguel Felder, "Fault-Based Testing of E-Commerce Applications," Proceedings of the 2nd Workshop on Verification and Validation of Enterprise Information Systems, pp. 66-71, 2004.

[14] P. J. Nolan, M. G. Madden, and P. Muldoon, "Diagnosis Using Fault Trees Induced from Simulated Incipient Fault Case Data," Second International Conference on Intelligent Systems Engineering, pp. 304-309, 1994.

[15] T. Assaf and J. B. Dugan, "Diagnostic Expert Systems from Dynamic Fault Trees," 2004 Annual Symposium on Reliability and Maintainability, pp. 444-450, 2004.

[16] M. Chen, A. X. Zheng, J. Lloyd, M. I. Jordan, and E. Brewer, "Failure Diagnosis Using Decision Trees," Proceedings of the International Conference on Autonomic Computing, pp. 36-43, 2004.

[17] David Williams, "PC Manufacturing Test in a High Volume Environment," Proceedings of the IEEE International Test Conference, Atlantic City, NJ, 1999, pp. 698-704.

[18] S. Martin, R. Bleck, C. Dislis, and D. Farren, "The Evolution of a System Test Process," Proceedings of the IEEE International Test Conference, Atlantic City, NJ, 1999, pp. 680-688.

[19] D. Farren and T. Ambler, "The Economics of System-Level Testing," IEEE Design & Test of Computers, vol. 14, issue 3, pp. 51-58, 1997.

[20] A. K. Jain and C. I. Saraidaridis, "Improving the Manufacturing Test Interval and Costs for Telecommunication Equipment," Proceedings of the IEEE Annual Reliability and Maintainability Symposium, 1999, pp. 410-417.

[21] S. P. Ghosh and E. G. Grochowski, "Dynamic Statistical Control of Manufacturing Test," IEEE Design & Test of Computers, vol. 7, issue 4, pp. 39-51, 1990.

[22] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River, New Jersey, 1995, pp. 217-264.

[23] Y.-T. Lin, S.-C. Chang, M. Chien, C.-H. Chang, K.-H. Hsu, H.-R. Chen, and B.-W. Hsieh, "Design and Implementation of a Knowledge Representation Model for Tool Dispatching," Proceedings of the International Symposium on Semiconductor Manufacturing, Tokyo, Oct. 15-17, 2002, pp. 109-112.

[24] Craig W. Kirkwood, Strategic Decision Making: Multiobjective Decision Analysis with Spreadsheets. Belmont, CA: Wadsworth, 1997, pp. 155-190.

[25] Robert T. Clemen, Making Hard Decisions: An Introduction to Decision Analysis. Duxbury Press, Brooks/Cole Publishing Company, California, 1996.

[26] B. S. Blanchard, System Engineering Management. John Wiley & Sons, N.Y., 1991.

[27] Logical Decisions for Windows. Available: http://www.logicaldecisions.com

[28] Enterprise Dynamics. Available: http://www.enterprisedynamics.com/

[29] Benjamin S. Blanchard, Dinesh Verma, and Elmer L. Peterson, Maintainability: A Key to Effective Serviceability and Maintenance Management. John Wiley & Sons, N.Y., 1995, pp. 154-165.

[30] Michael Pecht, Product Reliability, Maintainability, and Supportability Handbook. ARINC Research Corporation, CRC Press, Inc., Florida, 1995.

[31] J. R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.

[32] David Williams and Anthony P. Ambler, "System Manufacturing Test Cost Model," Proceedings of the IEEE International Test Conference, 2002, pp. 482-490.

[33] Des Farren and Anthony P. Ambler, "System Test Cost Modelling Based on Event Rate Analysis," Proceedings of the IEEE International Test Conference, 1994, pp. 84-92.

[34] Lisa M. Bartlett and John D. Andrews, "Choosing a Heuristic for the Fault Tree to Binary Decision Diagram Conversion, Using Neural Networks," IEEE Transactions on Reliability, vol. 51, no. 3, pp. 344-349, 2002.

[35] Juan A. Carrasco and Victor Sune, "An Algorithm to Find Minimal Cuts of Coherent Fault-Trees with Event-Classes, Using a Decision Tree," IEEE Transactions on Reliability, vol. 48, no. 1, pp. 31-41, 1999.

[36] Peter Liggesmeyer and Martin Rothfelder, "Improving System Reliability with Automatic Fault Tree Generation," Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing, pp. 90-99, 1998.

[37] Laura L. Pullum and Joanne Bechta Dugan, "Fault Tree Models for the Analysis of Complex Computer-Based Systems," IEEE Proceedings Annual Reliability and Maintainability Symposium, pp. 200-207, 1996.

[38] David L. Iverson, "Automatic Translation of Digraph to Fault-Tree Models," IEEE Proceedings Annual Reliability and Maintainability Symposium, pp. 354-362, 1992.

[39] Thorsten Riedel, Nicole Tiemann, Michael G. Wahl, and Tony Ambler, "LCCA-Life Cycle Cost Analysis," IEEE Systems Readiness Technology Conference AUTOTESTCON, pp. 43-47, 1998.

[40] Thorsten Riedel, Michael G. Wahl, and Tony Ambler, "Cost Analysis Linking Design, Test & Maintenance," European Test Workshop, 1999.

[41] Thorsten Riedel, Jens Kaminski, Michael G. Wahl, and Tony Ambler, "Effective Testability Design for the Product Life-Cycle," IEEE Systems Readiness Technology Conference AUTOTESTCON, pp. 607-612, 1999.

[42] T. Ambler and B. Bennetts, "Guest Editor's Introduction: Test and Product Life Cycle," IEEE Design & Test of Computers, vol. 16, issue 3, pp. 20-22, 1999.
VITA
Yu-Ting Lin was born in Kaohsiung, Taiwan on July 22, 1976, the son of
Ming-Chung Lin and Hsiu-Feng Ko. He received the B.S. degree in Mechanical
Engineering from National Taiwan University, Taiwan, in 1998 and the M.S.
degree in Electrical Engineering from National Taiwan University, Taiwan, in
2002. From 1998 to 2000, he was in the military service. In the fall of 2002, he
attended the University of Michigan at Ann Arbor. In August 2003, he
entered the doctoral program in the Department of Electrical and Computer
Engineering at the University of Texas at Austin. His research interests
include the test economics of chip design and field service, control
systems, and the development of dispatching tools for semiconductor
manufacturing.
Permanent Address: No.70-1, Bao-an Rd., Yong-an Township, Kaohsiung
County 828, Taiwan (R.O.C.)
This dissertation was typed by the author.