International Journal of Engineering Trends and Technology (IJETT) – Volume 4 Issue 8- August 2013
A Novel Methodology of Perception-Based Decision
Making for Fault Diagnosis of Complex Systems
Dr. Tarun Chopra
Associate Professor, Department of Electrical Engineering, Govt. Engineering College Bikaner,
(India)-334004
Abstract— This research paper proposes a methodology for the implementation of a perception-based decision-making architecture for fault diagnosis applications, as introduced by the author in his previous paper.
Keywords— Perception, CTP
I. INTRODUCTION
As pointed out in the author's previous paper, the Primary Level Decision Making System involves deciding on the presence of a Normal or Faulty state of operation according to perception-based rules formulated by granulation of the measured parameters.
Fault diagnosis is an important and difficult task
in process engineering involving decision making.
Presentation of process data to the operators is of
crucial importance. The data should be presented to
the operator in a compact and understandable
manner with an ability to provide explanations and
offer insights into process’s near-term and future
behavior.
Perception-based information [1] helps operators recall a similar-looking situation from past experience associated with a known fault, and recommend the corresponding diagnostic tests. Thus, operators are not required to consider an exhaustive set of diagnostic tests, but only the most probable ones. This forms the basis of our proposed methodology.
II. METHODOLOGY
FIGURE 1: PROPOSED ARCHITECTURE OF PERCEPTION-BASED DECISION MAKING

The architecture of the perception-based decision-making system proposed in the previous paper is presented again as Figure 1 for the convenience of readers. The description of the major components of both the Primary Level Decision Making System and the Secondary Level Decision Making System is presented in the following sub-sections.

A. Primary Level Decision Making System
In general, measurements are crisp numbers, whereas perceptions are fuzzy numbers or, more generally, fuzzy granules, that is, clumps of objects in which the transition from membership to non-membership is gradual rather than abrupt.
In the Computational Theory of Perceptions (CTP), reasoning is viewed as a form of computation, using methods based on Computing with Words (CW) [2]. Reasoning with perceptions is a process of arriving at an answer to a specific question, given a
collection of perceptions as propositions expressed in a natural language. These propositions are converted into their canonical forms in the generalized constraint language. The computation with perceptions is based on the propagation of generalized constraints from premises (antecedent propositions) to conclusions (consequent propositions). The canonical forms of propositions are usually constraints of the basic, possibilistic type, which are applied in the fuzzy IF-THEN rules.
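This constraint-propagation view can be illustrated with a minimal sketch: a perception is encoded as a possibilistic constraint (a fuzzy set) and propagated through a single fuzzy IF-THEN rule by sup-min composition. The triangular membership functions, universes and the premise below are invented for illustration and are not taken from the paper.

```python
# Sketch: propagate a possibilistic constraint "X is A*" through the rule
# "IF X is A THEN Y is B" using the compositional rule of inference
# (sup-min composition). All membership functions are illustrative.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(xs, mu_A, mu_B, mu_Astar, ys):
    """mu_B*(y) = sup_x min(mu_A*(x), min(mu_A(x), mu_B(y)))."""
    return {y: max(min(mu_Astar(x), min(mu_A(x), mu_B(y))) for x in xs)
            for y in ys}

xs = [i / 10 for i in range(11)]             # universe of X
ys = [i / 10 for i in range(11)]             # universe of Y
mu_A     = lambda x: tri(x, 0.4, 0.7, 1.0)   # antecedent "X is high"
mu_Astar = lambda x: tri(x, 0.5, 0.8, 1.0)   # observed perception, close to "high"
mu_B     = lambda y: tri(y, 0.4, 0.7, 1.0)   # consequent "Y is high"

result = infer(xs, mu_A, mu_B, mu_Astar, ys)
print(max(result.values()))  # strong activation, since A* overlaps A
```

The output fuzzy set `result` is itself a possibilistic constraint on Y, which is the sense in which constraints propagate from premises to conclusions.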
With regard to the fault diagnosis problem, the constraint-propagation rule is designed as the inference rule in fuzzy logic. The proposed approach has been applied to data from the actuator in the Evaporator section of a Sugar plant. Table I shows the measured parameters in the Evaporator section of the Sugar plant and their descriptions.
TABLE I: DESCRIPTION OF MEASURED PARAMETERS

S. No.  Measured Parameter  Parameter Description                Range      Units
1       CV                  Control value (controller output)    0 - 100    %
2       P1                  Juice pressure (valve inlet)         0 - 1000   kPa
3       P2                  Juice pressure (valve outlet)        0 - 1000   kPa
4       T                   Juice temperature (valve inlet)      0 - 150    °C
5       F                   Juice flow (5th evaporator outlet)   0 - 100    m3/h
6       X                   Servomotor rod displacement          0 - 100    %
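The per-unit values used in Table II are consistent with a simple range-based normalization against the Table I ranges; the sketch below assumes that normalization, since the paper does not state it explicitly.

```python
# Sketch: convert raw measurements to per-unit values by scaling against
# the parameter ranges of Table I. The normalization itself is an assumed
# convention; the (min, max) pairs are taken from Table I.

RANGES = {
    "CV": (0.0, 100.0),   # %
    "P1": (0.0, 1000.0),  # kPa
    "P2": (0.0, 1000.0),  # kPa
    "T":  (0.0, 150.0),   # deg C
    "F":  (0.0, 100.0),   # m3/h
    "X":  (0.0, 100.0),   # %
}

def per_unit(param, value):
    """Map a raw measurement onto [0, 1] using its Table I range."""
    lo, hi = RANGES[param]
    return (value - lo) / (hi - lo)

# A raw inlet pressure of 848.398 kPa corresponds to the per-unit
# P1 value 0.848398 seen in row 1 of Table II.
print(per_unit("P1", 848.398))
```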
In order to facilitate the comparison of results obtained from the Primary Level Decision Making System, four of the six measured parameters, i.e., CV, P1, P2 and T, have been chosen as inputs for classification, in a manner similar to that used by the different research groups working on the DAMADICS problem [3]. Samples of the data sets used, along with their corresponding States of Operation, are shown in Table II.
TABLE II: SAMPLES OF NORMAL AND FAULT CONDITION DATA SETS

        Per Unit Values
S.N.  CV        P1        P2        T         State
1     0.288918  0.848398  0.649772  0.215597  N
2     0.280923  0.833168  0.657497  0.215279  N
3     0.273793  0.834738  0.645968  0.213769  N
4     0.267556  0.849471  0.652681  0.214891  N
5     0.262236  0.876689  0.657492  0.212955  N
6     0.257854  0.899756  0.645002  0.214835  N
7     0.254428  0.918182  0.648521  0.216716  N
8     0.251971  0.915854  0.657435  0.213865  N
9     0.250493  0.898526  0.646777  0.219413  N
10    0.25      0.877526  0.644796  0.214909  N
11    0.646946  0.87281   0.644814  0.215475  F
12    0.633957  0.902156  0.649424  0.212305  F
13    0.620438  0.916895  0.654207  0.215314  F
14    0.606445  0.914581  0.644305  0.214386  F
15    0.592031  0.899666  0.648835  0.214558  F
16    0.577254  0.875233  0.657562  0.214471  F
17    0.562172  0.848311  0.645474  0.215474  F
18    0.546845  0.833449  0.647429  0.216462  F
19    0.531333  0.836225  0.658318  0.216847  F
20    0.515698  0.847036  0.647919  0.214857  F
The fuzzy IF-THEN rules have been formulated based on the granulation of the data. The general form of these rules is as follows:

R_k: IF x_1 is A_1^k AND ... AND x_4 is A_4^k THEN Class k, for k = 0, 1        (1)

where A_i^k = U_j A_ij for class k, with i = 1, 2, 3, 4 and j = 1, 2, and U denotes the union of the fuzzy sets A_ij associated with class k, realized by the maximum operation over the membership functions.
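A rule base of form (1) can be sketched as follows: the union A_i^k is realized by the maximum over the granule membership functions, and the rule strength here combines the four antecedents by a minimum. The Gaussian membership functions and all parameter values are illustrative assumptions, not the paper's tuned classifier.

```python
import math

# Sketch of rule (1): A_i^k is the max-union of fuzzy sets A_ij for class k;
# the rule activation takes the minimum over the four antecedents.
# All membership-function parameters below are invented for illustration.

def gauss(x, c, s):
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

# Two granules (j = 1, 2) per input i and class k: A_ij -> (centre, spread).
GRANULES = {
    0: {"CV": [(0.26, 0.02), (0.28, 0.02)], "P1": [(0.85, 0.03), (0.90, 0.03)],
        "P2": [(0.65, 0.01), (0.65, 0.01)], "T":  [(0.215, 0.003), (0.215, 0.003)]},
    1: {"CV": [(0.55, 0.05), (0.62, 0.05)], "P1": [(0.85, 0.03), (0.90, 0.03)],
        "P2": [(0.65, 0.01), (0.65, 0.01)], "T":  [(0.215, 0.003), (0.215, 0.003)]},
}

def A_ik(k, name, x):
    """Union (maximum) over the granules A_ij associated with class k."""
    return max(gauss(x, c, s) for c, s in GRANULES[k][name])

def classify(sample):
    """Return the class whose rule activation (min over antecedents) is largest."""
    strengths = {k: min(A_ik(k, n, v) for n, v in sample.items()) for k in (0, 1)}
    return max(strengths, key=strengths.get)

normal = {"CV": 0.27, "P1": 0.87, "P2": 0.65, "T": 0.215}   # near Table II row 4
faulty = {"CV": 0.59, "P1": 0.87, "P2": 0.65, "T": 0.215}   # near Table II row 15
print(classify(normal), classify(faulty))
```

Because the P1, P2 and T granules are shared by both classes in this toy set-up, the decision is driven by CV, mirroring how the Normal and Fault rows of Table II differ mainly in CV.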
The fuzzy classification system, in the form of a connectionist architecture, has finally been employed to draw the fault diagnosis inference.
B. Secondary Level Decision Making System
The Secondary Level Decision Making System carries out further analysis and confirmation of the results obtained from the first level, i.e. the Primary Level Decision Making System. It is essential, since an unreported fault condition bearing resemblance to the normal state of operation (i.e. an incipient fault condition misclassified as a normal condition) may seriously harm the costly plant machinery and halt the entire operation if not dealt with at the right time. In addition, it also separates out the abrupt faults from the incipient faults.

Since the data sets in most complex systems, including the sugar plant problem considered here, are large, it is very difficult for a human being (the technical operator) to capture and understand their contents. In such cases, a generalized description in terms of rules in natural language is very helpful. This section describes the process of extracting such rules from the time series database of faulty and normal operating conditions, measured at uniformly spaced time moments, for the Evaporator section of the Sugar plant under consideration.

Trends in the time series database are identified as linearly increasing, stable or decreasing functions, and therefore a given time series may be represented as a piecewise linear function. Evidently, the intensity of an increase or decrease (the slope) matters too. Thus, terms like "slowly decreasing" have been formulated for proper representation of trends.

Perception-based functions linguistically describe dependencies between variables on the basis of perceptions about the character of such dependencies. These perceptions may be of different natures: they may be based on experience, on common sense, on authoritative opinion, or on the results of non-fixed observations or measurements (non-fixed statistics).

An example of such a perception is:

"In the Incipient fault condition, the Flow is slowly decreasing".

In canonical form, such sentences may be rewritten as rules:

R_k: IF X is T_k THEN Y is S_k

where T_k is a linguistic value, such as "Incipient Fault", of an independent variable X, such as "Flow", and S_k is a linguistic evaluation of the rate of change in function Y, such as "slowly decreasing".

The methods of reconstruction of a fuzzy function given by such rules are based on the general methodology of information granulation and computing with words.

In the proposed approach, while summarizing trends in the time series data, the dynamics of change, i.e. the rate of change of the variables F and X, has been considered. It can be described by the slope of a line representing the trend. Thus, to quantify the dynamics of change, the interval of possible angles [−90°, 90°] may be used. However, it might be impractical, and not consistent with human reasoning, to use such a scale directly while describing trends. Therefore, fuzzy granulation may be used in order to meet the operators' needs and the task specificity.

As mentioned earlier, four of the six measured parameters were considered in the primary level decision making. The hitherto unused variables F and X from the data sets are hence considered in the Secondary Level decision making, with Normal, Abrupt fault and Incipient fault as the three outputs. The selection of only these two variables is supported by the results of the Correlation and Principal Component Analysis carried out on the data sets comprising all six parameters.

Correlation and Principal Component Analysis are used for exploratory analysis to understand the nature of, and the relationships between, the variables. Correlation is a measure of the association between two variables, i.e., it indicates whether the value of one variable changes reliably in response to changes in the value of the other. The purpose of Principal Component Analysis is to identify a set of underlying factors that explain the relationships between correlated variables. Generally, there are fewer underlying factors than variables, so the result of the factor analysis is simpler than the original set of variables.
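The trend-granulation step described above (slope of a fitted line, angle in [−90°, 90°], linguistic labels) can be sketched as follows. The least-squares fit is standard, but the granule boundaries, the sampling interval and the sample window are illustrative assumptions, not the paper's settings.

```python
import math

# Sketch: fit a line to a window of equally spaced samples, convert its
# slope to an angle in [-90, 90] degrees, and granulate the angle into
# coarse linguistic trend labels. Boundaries below are assumptions.

def slope_angle(values, dt=1.0):
    """Least-squares slope of equally spaced samples, as an angle in degrees."""
    n = len(values)
    xs = [i * dt for i in range(n)]
    xm, ym = sum(xs) / n, sum(values) / n
    num = sum((x - xm) * (y - ym) for x, y in zip(xs, values))
    den = sum((x - xm) ** 2 for x in xs)
    return math.degrees(math.atan(num / den))

def trend_label(angle):
    """Map an angle to a linguistic granule (crisp boundaries for brevity;
    a fuzzy version would overlap these intervals)."""
    if angle < -30:  return "quickly decreasing"
    if angle < -5:   return "slowly decreasing"
    if angle <= 5:   return "stable"
    if angle <= 30:  return "slowly increasing"
    return "quickly increasing"

flow = [0.52, 0.50, 0.49, 0.47, 0.46, 0.44]        # invented per-unit flow (F) window
print(trend_label(slope_angle(flow, dt=0.1)))       # -> "slowly decreasing"
```

A genuinely fuzzy granulation would replace the crisp boundaries with overlapping membership functions over the angle axis, so that a trend can belong partially to two adjacent labels.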
Correlation and Principal Component Analysis were carried out on the available data sets. The format of the various data elements is described below.

Input Data:
All six measured parameters have been used as predictor variables, as indicated in Table III, with the type of fault as the target variable. The data set includes all nineteen categories of fault, with equal weights assigned to them.

TABLE III: SUMMARY OF VARIABLES

S.No.  Variable       Class      Type
1      CV             Predictor  Continuous
2      P1             Predictor  Continuous
3      P2             Predictor  Continuous
4      T              Predictor  Continuous
5      X              Predictor  Continuous
6      F              Predictor  Continuous
7      Type of fault  Target     Categorical

III. RESULTS
The correlation between the six variables is presented in the form of a six-by-six matrix, as shown in Table IV:

TABLE IV: CORRELATION MATRIX

        CV       P1       P2       X        F        T
CV     1.0000  -0.3719  -0.0419   0.4973  -0.5184   0.0004
P1    -0.3719   1.0000  -0.0518  -0.2102   0.2624   0.0006
P2    -0.0419  -0.0518   1.0000  -0.0001  -0.0096  -0.0006
X      0.4973  -0.2102  -0.0001   1.0000  -0.8584  -0.6589
F     -0.5184   0.2624  -0.0096  -0.8584   1.0000   0.4314
T      0.0004   0.0006  -0.0006  -0.6589   0.4314   1.0000

Using Principal Component Analysis as the factor extraction method, the relative and cumulative amounts of variance explained by each factor have been calculated in Table V.

TABLE V: FACTOR IMPORTANCE

Factor  Eigenvalue  Variance %  Cumulative %
1        2.37766    83.156       83.156
2        0.46142    16.137       99.293
3        0.02020     0.707      100.000
4       -0.00057       .            .
5       -0.16852       .            .
6       -0.33434       .            .

The following criteria were set for the analysis:
Stop when cumulative explained variance = 80%
Minimum allowed Eigenvalue = 0.50000

After analysis of the performance, the number of factors retained is 1.

There are several methods of rotating the factor matrix that make the relationship between the variables and the factors easier to understand. Here, the most popular method, Varimax rotation, has been used. A Varimax rotation is an orthogonal transformation: the factor axes remain orthogonal to each other, and the factors are uncorrelated. After a Varimax rotation, some of the factor loadings will be large and the rest will be close to zero, making it easy to see which variables correlate strongly with the factor. After performing the Varimax rotation, it can be observed from Table VI that Factor 1 retained by PCA is related to the variables X and F.

TABLE VI: ROTATED FACTOR LOADING MATRIX

Variable  Fac1
CV        0.5006
P1       -0.2745
P2        0.0008
X         1.0108 *
F         0.8881 *
T         0.4911

This justifies the selection of the two variables F and X for further analysis. It is now required to generate epistemic rules using these two variables (F and X) as inputs and the State of Operation as output. There are several options available for this purpose, such as k-Means Clustering, Fuzzy Subtractive Clustering and Decision Trees. Fuzzy Subtractive Clustering offers several advantages, such as time savings, since it is a one-pass algorithm, and higher accuracy.
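Subtractive clustering, in Chiu's widely used formulation, assigns each data point a potential based on the density of its neighbours, picks the highest-potential point as a cluster centre, subtracts that centre's influence from every point's potential, and repeats, all in a single pass over the data. A compact sketch on an invented two-dimensional (F, X) data set follows; the radii and the stopping threshold are assumed, not the paper's settings.

```python
import math

# Sketch of subtractive clustering (Chiu-style): potentials are sums of
# Gaussian neighbour contributions; the highest-potential point becomes a
# centre and its neighbourhood's potential is subtracted. Parameters and
# the toy (F, X) data are illustrative assumptions.

def subtractive_clustering(points, ra=0.5, rb=0.75, eps=0.15):
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    # Initial potential of each point: density of its neighbourhood.
    pot = [sum(math.exp(-alpha * d2(p, q)) for q in points) for p in points]
    centres, p_first = [], max(pot)
    while True:
        i = max(range(len(points)), key=lambda j: pot[j])
        if pot[i] < eps * p_first:       # stop when remaining potential is low
            break
        c = points[i]
        centres.append(c)
        # Subtract the selected centre's influence from all potentials.
        pot = [p - pot[i] * math.exp(-beta * d2(q, c)) for p, q in zip(pot, points)]
    return centres

# Two well-separated blobs in per-unit (F, X) space -> expect two centres.
data = [(0.20 + 0.01 * i, 0.25 + 0.01 * i) for i in range(5)] + \
       [(0.80 + 0.01 * i, 0.70 + 0.01 * i) for i in range(5)]
print(len(subtractive_clustering(data)))   # -> 2
```

Each recovered centre can then seed one fuzzy rule (one membership function per input), which is how subtractive clustering is typically used for model identification.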
Fuzzy Subtractive Clustering has hence been
selected for model identification and generation of
epistemic rules. The results obtained from trend
granulation confirm the nature of operation as
Normal or help in separating the abrupt faults from
the incipient faults.
Decision making for fault diagnosis needs a classification framework that can perform better than existing classifiers even in the presence of uncertainties. These include uncertainty about the meanings of the words that are used in a rule, uncertainty about the consequent that is used in a rule, uncertainty about the measurements that activate the FLS, and uncertainty about the data that are used to tune the parameters of the FLS. In the considered case, this becomes very important in view of the close resemblance of the different categories of faults to one another, and of the incipient fault to the normal state of operation. In order to accomplish this, and to counteract the impact of uncertainty on the classification framework, the concept of a Type-2 FLS has been incorporated here. The experiments carried out using a Type-2 FLS strengthen the intuition that a Type-2 FLS can handle such uncertainties more effectively.
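The key device of a Type-2 FLS is that each membership grade becomes an interval [lower, upper] rather than a single number; the gap between the two bounds, the footprint of uncertainty, is what absorbs uncertainty about word meanings and measurements. A minimal sketch of an interval type-2 Gaussian membership function with an uncertain mean follows (all parameter values invented):

```python
import math

# Sketch: an interval type-2 Gaussian membership function whose mean is
# uncertain within [m1, m2]. For an input x it returns [lower, upper]
# membership bounds; their gap is the footprint of uncertainty (FOU).
# All parameter values are illustrative.

def it2_gauss(x, m1, m2, sigma):
    g = lambda m: math.exp(-((x - m) ** 2) / (2 * sigma ** 2))
    if m1 <= x <= m2:
        upper = 1.0                       # some admissible mean matches x exactly
    else:
        upper = g(m1) if x < m1 else g(m2)
    lower = min(g(m1), g(m2))             # worst-case admissible mean
    return lower, upper

lo, hi = it2_gauss(0.48, m1=0.45, m2=0.55, sigma=0.05)
print(lo, hi)   # lower < upper, and upper == 1.0 inside the uncertain-mean band
```

A full interval type-2 FLS would combine such bounded grades through interval-valued rule firing and a type-reduction step before defuzzification.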
IV. CONCLUSIONS
A new classification system has hence been proposed using the Type-2 Fuzzy C-Means clustering algorithm (T2FCM). T2FCM is applied to the training data for the selection of the best segments. Thus, the proposed methodology finally succeeds in separating the three states of operation (including the overlapping cases), i.e. the Normal, Abrupt fault and Incipient fault conditions, in the form of distinct clusters.
REFERENCES
[1] Zadeh L.A., "Toward a perception-based theory of probabilistic reasoning with imprecise probabilities", Journal of Statistical Planning and Inference 105, 2002, 233–264.
[2] Zadeh L.A., "From computing with numbers to computing with words—from manipulation of measurements to manipulation of perceptions", IEEE Transactions on Circuits and Systems 45, 1999, 105–119.
[3] Puig V., Stancu A., Escobet T., Nejjari F., Quevedo J., Patton R.J., "Passive robust fault detection using interval observers: Application to the DAMADICS benchmark problem", Control Engineering Practice 14, 2006, 621–633.
ISSN: 2231-5381
http://www.ijettjournal.org