FAULT CLASSIFICATION IN POWER SYSTEM
USING MICROPROCESSOR
Sumit Kapadia
B.E., North Maharashtra University, India, 2006
PROJECT
Submitted in partial satisfaction of
the requirements for the degree of
MASTER OF SCIENCE
in
ELECTRICAL AND ELECTRONIC ENGINEERING
at
CALIFORNIA STATE UNIVERSITY, SACRAMENTO
FALL
2010
FAULT CLASSIFICATION IN POWER SYSTEM
USING MICROPROCESSOR
A Project
by
Sumit Kapadia
Approved by:
______________________________, Committee Chair
John C. Balachandra, Ph.D.
_______________________________, Second Reader
Fethi Belkhouche, Ph.D.
__________
Date
Student: Sumit Kapadia
I certify that this student has met the requirements for format contained in the university
format manual, and that this project is suitable for shelving in the library and credit is to
be awarded for the Project.
_________________________, Graduate Coordinator
Preetham B. Kumar, Ph.D.
Department of Electrical and Electronic Engineering
___________
Date
Abstract
of
FAULT CLASSIFICATION IN POWER SYSTEM
USING MICROPROCESSOR
by
Sumit Kapadia
This project report introduces an artificial-intelligence-based algorithm for classifying fault
type and determining fault location on a power system, which can be implemented on
a microprocessor-based relay. This new concept relies on the principle of pattern
recognition and identifies fault type easily and efficiently. The approach utilizes a
self-organized Adaptive Resonance Theory (ART) neural network, combined with a
K-Nearest-Neighbor (K-NN) decision rule for interpretation of the neural network outputs.
A selected simplified power network is used to simulate all possible fault scenarios and to
generate test cases. An overview model of how this method can be implemented in
hardware is also given. The performance efficiency of this method of fault classification
and fault location determination is computed, and the variation of the neural network
efficiency with different parameters is studied.
_______________________________, Committee Chair
John C. Balachandra, Ph.D.
____________
Date
ACKNOWLEDGEMENTS
I take this opportunity to express my gratitude to my project coordinator Dr. John
Balachandra, Professor, Department of Electrical and Electronics Engineering,
California State University, Sacramento, for his guidance, cooperation, and both moral
and technical support, and for giving an ample amount of time for the successful
completion of the project. He gave constant support and the freedom to choose the
direction in which the project should proceed.
I would like to extend my acknowledgements to the chairperson of the committee and
the graduate coordinator of the Electrical & Electronic Engineering Department, Dr.
Preetham Kumar, for providing various facilities which helped me throughout my
project.
I am thankful to Professor Fethi Belkhouche for being the second reader of this project
and for bringing to my attention the various points to be taken into consideration during
the preparation of the report.
TABLE OF CONTENTS

Acknowledgments
List of Tables
List of Figures
Chapter
1 INTRODUCTION
   1.1. Project Overview
   1.2. Report Outline
2 BACKGROUND
   2.1. Introduction
   2.2. Power System and the Concept of Protective Relaying
   2.3. Traditional Protective Relaying for Transmission Lines
   2.4. Distance Relaying
3 NEURAL NETWORKS
   3.1. Introduction
   3.2. Basis of Neural Networks
   3.3. History of Neural Networks
   3.4. Types of Neural Networks
   3.5. Neural Networks for Pattern Classification
   3.6. Competitive Networks
   3.7. Adaptive Resonance Theory
4 NEURAL NETWORK ALGORITHM
   4.1. Introduction
   4.2. Adaptive Neural Network (ART) Structure
   4.3. Unsupervised Learning
   4.4. Supervised Learning
   4.5. Implementation
5 HARDWARE AND SOFTWARE SOLUTION
   5.1. Introduction
   5.2. Relay Architecture
   5.3. Data Acquisition and Signal Processing
   5.4. Digital Processing Subsystem
   5.5. Command Execution System
6 SIMPLE POWER SYSTEM MODEL
   6.1. Introduction
   6.2. Power System Model
7 SIMULATION AND IMPLEMENTATION
   7.1. Introduction
   7.2. Power System Design and Simulation
   7.3. Model Interfacing
   7.4. Generation of Simulation Cases
   7.5. Testing
8 SIMULATION RESULTS
   8.1. Simulation Waveforms
      8.1.1. Waveforms for Line to Ground Fault
      8.1.2. Waveforms for Line to Line Fault
   8.2. Neural Network Training Results
   8.3. Neural Network Classification Results
9 PROGRAM LISTINGS
   9.1. Program to Generate Training Patterns
   9.2. Program to Plot Fault Waveforms
   9.3. Program to Train Neural Network
   9.4. Program to Generate Test Patterns
   9.5. Program to Classify Faults using Neural Network
   9.6. Program to Plot Clusters and Member Patterns
10 CONCLUSION
References
LIST OF TABLES

1. Table 4.1: Algorithm for Unsupervised Training of Neural Network
2. Table 8.1: Results of Neural Network Training for Fault Type Classification
3. Table 8.2: Results of Neural Network Training for Fault Location Classification
4. Table 8.3: Results of Neural Network Fault Type Classifier
5. Table 8.4: Results of Neural Network Fault Location Classifier
LIST OF FIGURES

1. Figure 2.1: Example of a Power System
2. Figure 2.2: Transmission Line with Fault Detection and Clearance
3. Figure 2.3: Relay Coordination
4. Figure 2.4: Equivalent Circuit of Transmission Line during Pre-fault Condition
5. Figure 2.5: Equivalent Circuit of Transmission Line during Fault Condition
6. Figure 2.6: Apparent Impedance seen from Relay Location during Pre-fault and Fault Condition
7. Figure 2.7: Mho and Quadrilateral Distance Relay Characteristics
8. Figure 3.1: Principle of Artificial Neural Network
9. Figure 3.2: Multilayer Perceptron Network
10. Figure 3.3: Structure of Adaptive Resonance Theory Network
11. Figure 4.1: Combined Unsupervised and Supervised Neural Network Training
12. Figure 4.2: Unsupervised Learning (Initialization Phase)
13. Figure 4.3: Unsupervised Learning (Stabilization Phase)
14. Figure 4.4: Supervised Learning
15. Figure 4.5: Implementation of Trained Network for Classification
16. Figure 5.1: Proposed Hardware/Software Design for Microprocessor Based Relay
17. Figure 5.2: Moving Data Window for Voltage and Current Samples
18. Figure 6.1: Power System Model
19. Figure 8.1: Cluster with Member Patterns
20. Figure 8.2: Waveforms for Line to Ground Fault
21. Figure 8.3: Waveforms for Line to Ground Fault at Different Inception Angle
22. Figure 8.4: Waveforms for Line to Line Fault
23. Figure 8.5: Waveforms for Line to Line Fault at Different Inception Angle
Chapter 1
INTRODUCTION
1.1 Project Overview
The problem of detecting and classifying transmission line faults based on three-phase
voltage and current signals has been known for a long time. Traditionally, over-current,
distance, over-voltage, under-voltage, and differential relay based protection schemes are
implemented using either models of the transmission line or fault signals. All of these
principles are based on a comparison between the measurements and predetermined
settings calculated taking into account only anticipated operating conditions and
fault events. Thus, if the actual power system conditions deviate from the anticipated
ones, the measurements and settings determined by the classical relay design have
inherent limitations in classifying certain fault conditions, and the performance of
classical protective relays may significantly deteriorate. Consequently, a more
sensitive, selective, and reliable relaying principle is needed, capable of classifying
faults under a variety of operating conditions.
This report introduces a protective relaying principle for transmission lines that is
based on pattern recognition rather than traditional methods. The approach
utilizes an artificial neural network algorithm in which the prevailing system conditions
are taken into account through the learning mechanism.
Artificial neural networks possess an ability to capture complex and nonlinear
relationships between inputs and outputs through a learning process. They are
particularly useful for solving difficult signal processing and pattern recognition
problems. The new protective relaying concept is based on a special type of artificial
neural network, called Adaptive Resonance Theory (ART), which is ideally suited for
classifying large, highly dimensional and time varying sets of input data. The new
classification approach has to reliably conclude, in a short time which type of fault
occurs under a variety of operating conditions. Samples of current and voltage signals
from three transmission line phases are recognized as the features of various
disturbance and fault events in the power network, and aligned into a set of input
patterns. Through combined unsupervised and supervised learning steps, the neural
network establishes categorized prototypes of typical input patterns. In the
implementation phase, the trained neural network classifies new fault events into one
of the known categories based on an interpretation of the match between the
corresponding patterns and set of prototypes using K-NN decision rule.
A simplified model of a power network is used to simulate different fault scenarios.
MATLAB software based algorithm is used for training and evaluation of the neural
network based fault classifier.
1.2 Report Outline
The report is organized as follows. A background of power systems and protective
relaying is provided in Chapter 2. Chapter 3 discusses neural networks, summarizes
their applications to protective relaying, and gives an overview of the neural network
approach adopted here. The neural network algorithms for training and implementation
(ART) are explained in Chapter 4.
Required hardware and software modules for this protective relaying method are
outlined in Chapter 5. A model of the simplified power network is presented in
Chapter 6. Chapter 7 specifies implementation steps, including fault simulation,
pattern generation, and the design of ART neural network algorithms.
The simulation results are shown in Chapter 8. Software code written to simulate
faults and to train and test the neural network is presented in Chapter 9. The
conclusions derived from the project work are given in Chapter 10. References
related to this project report are enclosed at the end.
Chapter 2
BACKGROUND
2.1 Introduction
This chapter discusses topics relevant to the project work and is organized into three
sections. The first section describes the role and structure of a power system and the
effect of power system faults, and emphasizes the concept of protective relaying. The
next section focuses on traditional protective relaying of transmission lines. The most
common relaying principle, distance protection, is analyzed in the third section.
2.2 Power System and the Concept of Protective Relaying
A power system is a grid of electrical sources, loads, transmission lines, power
transformers, circuit breakers, and other equipment, connected to provide power
generation, transmission and distribution. A simple example of a typical power
system is given in Fig 2.1. Transmission lines link the sources and loads in the system
and allow energy transmission. The loss of one or more transmission lines is often
critical for power transfer capability, system stability, and sustainability of the required
system voltage and frequency. Circuit breakers are devices for connecting power
system components and carrying transmission line currents under normal operating
conditions, and for interrupting the currents under specified abnormal operating
conditions. They are located at the transmission line ends and, by interrupting the
current, they isolate the faulted portion from the system.
Figure 2.1 – Example of a Power System (two sources and five loads, A–E, interconnected through buses 1–7, transmission lines, and circuit breakers)
The purpose of protective relaying is to minimize the effect of power system faults by
preserving service availability and minimizing equipment damage. Since the damage
caused by a fault is mainly proportional to its duration, the protection is required to
operate as quickly as possible. Transmission line faults happen randomly, and they
are typically an outcome of severe weather or other unpredictable conditions. Various
fault parameters as well as conditions imposed by actual network configuration and
operating mode determine the corresponding transient current and voltage waveforms
detected by the relays at line ends. The main fault parameters are:
1. Type of fault, defined as single-phase-to-ground, phase-to-phase, two-phase-to-ground, three-phase, and three-phase-to-ground.
2. Fault distance, defined as the distance between the relay and the faulted point.
3. Fault impedance, defined as the impedance of the earth path.
4. Fault incidence time, defined as the time instant on the voltage waveform at which the fault occurs.
2.3 Traditional Protective Relaying for Transmission Lines
Protective relays are intelligent devices located at both ends of each transmission line,
near network buses or other connection points. They enable the high speed actions
required to protect the power network equipment when operating limits are violated
due to the occurrence of a fault. The relay serves as primary protection for the
corresponding transmission line as well as backup for the relays at adjacent lines,
beyond the remote line end. Although a transmission line is protected by relays in
each phase, for simplicity only one phase connection is illustrated in Fig 2.2.
The measurement and control chain for transmission line fault detection and clearance
contains instrument transformers, a distance relay, and a circuit breaker. Instrument
transformers (VTs and CTs) acquire voltage and current measurements and reduce the
transmission-level signals to appropriate lower-level signals convenient for use
by the relay. The main role of protective relays is recognizing faults in a
particular area of the power network based on the measured three-phase voltage and
current signals.
Figure 2.2 – Transmission Line with Fault Detection and Clearance (VT, CT, relay, and circuit breaker at each end of the three-phase transmission line)
In a very short time, around 20 ms, the relay has to reliably conclude whether a fault
has occurred and of which type, and issue a command for opening the circuit breaker
to disconnect the faulted phases accordingly.
The relay's responsibility for protection of a segment of the power system is defined by
the concept of zones of protection. A zone of protection is a setting defined in terms of
a percentage of the line length, measured from the relay location. When a fault occurs
within the zone of protection, the protection system activates the circuit breakers,
isolating the faulty segment of the power system defined by the zone boundaries. The
zones of protection always overlap, in order to ensure back-up protection for all
portions of the power system.
Figure 2.3 – Relay Coordination
Fig 2.3 shows a small segment of a power system with protection zones enclosed by
dashed lines, distinctly indicating the zones for the relays located at A, B, C and D.
For instance, the relay at A has three zones of protection, where zone I is the zone of
primary protection, while zones II and III provide backup protection if the relays at C
and E, respectively, malfunction for faults in their zone I. Coordination of protective
devices requires calculation of settings to achieve selectivity for faults at different
locations. The zone I setting is selected for the relay to trip with no intentional delay,
except for the relay and circuit breaker operating time t1. Zone I is set to under-reach
the remote end of the line, and usually covers around 80-85% of the line length. The
entire line length cannot be covered by zone I because the relay is not capable of
precisely determining the fault location; hence undesired instantaneous operation might
happen for faults beyond the remote bus. The purpose of zone II is to cover the remote
end of the line which is not covered by zone I. A time delay t2 for faults in zone II is
required to allow coordination with zone I of the relay at the adjacent line. The zone II
setting overreaches the remote line end and is typically set to 120% of the line
length. The zone III setting provides backup protection for the adjacent line and
operates with the time delay t3. A typical setting for zone III is 150-200% of the
primary line length, and is limited by the shortest adjacent line.
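As a rough numeric sketch of these reach settings (the line impedance and exact percentages below are assumed for illustration, not taken from an actual relay coordination study):

% Illustrative zone reach calculation for a distance relay.
Zline = 2 + 25j;             % assumed impedance of the protected line (ohms)
Z1 = 0.85 * Zline;           % zone I: under-reaches, ~80-85% of the line
Z2 = 1.20 * Zline;           % zone II: overreaches the remote end, ~120%
Z3 = 1.75 * Zline;           % zone III: backup, ~150-200% of the line
fprintf('|Z1| = %.1f, |Z2| = %.1f, |Z3| = %.1f ohms\n', abs(Z1), abs(Z2), abs(Z3));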
2.4 Distance Relaying
In the traditional protective relaying approach, a distance relay is the most common
relay type for protection of multi-terminal transmission lines. It operates when the
apparent impedance seen by the relay decreases below a setting value. Relay settings
are determined through a range of short circuit studies using the power system model
and simulations of the worst-case fault conditions. Simplified examples of a
single-phase electrical circuit, with voltage source, load, and pre-fault and faulted
transmission line with neglected shunt capacitance, are shown in Fig 2.4 and 2.5,
respectively.
Figure 2.4 – Equivalent Circuit of Transmission Line during Pre-fault Condition (relay takes U, I measurements; line impedance Rline, Xline in series with load Rload, Xload across source U)
The conditions before the fault determine the impedance seen by the relay:
$Z_{line} = R_{line} + jX_{line}, \quad R_{line} \ll X_{line}$   (2.1)

$Z_{load} = R_{load} + jX_{load}, \quad R_{load} > X_{load}$   (2.2)

$Z_{pre\text{-}fault} = \dfrac{U_{pre\text{-}fault}}{I_{pre\text{-}fault}} = Z_{line} + Z_{load}$   (2.3)
The distance relay operates on the principle of dividing the voltage and current
phasors to measure the impedance from the relay location to the fault. The condition in
(2.1) is due to the highly inductive impedance of a transmission line, while the condition
in (2.2) is due to the mostly resistive load characteristics.
Figure 2.5 – Equivalent Circuit of Transmission Line during Fault Condition (relay takes U, I measurements; fault resistance Rfault at fraction m of the line, with (1-m)Rline, (1-m)Xline and the load beyond the fault)
The impedance seen by the relay at the moment of the fault is given as:
$Z_{fault} = mZ_{line} + ((1-m)Z_{line} + Z_{load}) \,\|\, R_{fault} = mZ_{line} + \dfrac{((1-m)Z_{line} + Z_{load})\, R_{fault}}{(1-m)Z_{line} + Z_{load} + R_{fault}}$   (2.4)

$Z_{fault} = \dfrac{U_{fault}}{I_{fault}} \approx mZ_{line} + R_{fault}, \quad R_{fault} \ll R_{load}$   (2.5)
The apparent impedance at the relay location is equal to the quotient of measured
phase voltage and current, according to (2.3) and (2.5). A graphical representation of
the measured impedance is usually shown in the complex resistance-reactance, or R-X,
plane given in Fig 2.6, where the origin of the plane represents the relay location
and the beginning of the protected line.
Comparing (2.3) and (2.5), the impedance measured at the relay location under
normal load conditions is generally significantly higher than the impedance during
the fault, which usually includes arc and earth resistance. It depends on the fault type
and the location in the system where it occurs. The apparent impedance also varies with
fault impedance, as well as the impedance of equivalent sources and loads. However, in
real situations the problem of detecting the apparent impedance is not as simple as the
single-phase analysis given for the simplified network and transmission line may suggest.
The interconnection of many transmission lines imposes a complex set of relay settings.
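A small numeric sketch of this comparison between (2.3) and (2.5); the line, load, and fault values below are assumed for illustration only:

% Apparent impedance before and during a fault, per (2.3)-(2.5).
Zline = 2 + 25j;      % line impedance, Rline << Xline
Zload = 80 + 30j;     % load impedance, Rload > Xload
Rf    = 5;            % fault resistance
m     = 0.6;          % per-unit distance from relay to fault
Zpre   = Zline + Zload;                        % eq. (2.3)
Zrem   = (1-m)*Zline + Zload;                  % healthy remainder plus load
Zfault = m*Zline + (Zrem*Rf)/(Zrem + Rf);      % eq. (2.4)
fprintf('|Zpre| = %.1f ohms, |Zfault| = %.1f ohms\n', abs(Zpre), abs(Zfault));
% The fault impedance is far smaller, which is what the distance relay detects.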
Figure 2.6 – Apparent Impedance seen from Relay Location during Pre-fault and Fault Condition (R-X plane showing the line characteristic Zline, the fault point m·Zline offset by Rfault giving Zfault, and Zpre-fault = Zline + Zload)
The impedance computed by the distance relay has to be compared against the relay
settings. Settings are defined as operating characteristics in the complex R-X
plane that enclose all faults along the protected section and take into account varying
fault resistance. A distance relay is designed to operate whenever the estimated
impedance remains within a particular area of the operating characteristic during a
specified time interval.
Impedance relay characteristics for three protection zones are shown in Fig 2.7, where
the mho characteristic is designed for faults not including ground, while the quadrilateral
characteristic is designed for faults including ground. The mho operating
characteristic is a circle with its center in the middle of the impedance of the protected
section. The quadrilateral operating characteristic is a polygon, where the resistive
reach is set to cover the desired level of ground fault resistance, but also to be far
enough from the minimum load impedance. The typical distance relaying algorithm
involves three sets of three-zone characteristics for each phase-to-phase and
phase-to-ground fault. As voltages and currents are being acquired, the apparent
impedance is estimated by assuming each of the possible phase-to-phase and
phase-to-ground fault types. Selection of either the mho or the quadrilateral characteristic
depends on the value of the zero-sequence current. The existence of zero-sequence
current indicates a ground fault, and the quadrilateral characteristic should be considered;
otherwise the fault does not include ground and the mho characteristic should be
examined. When the operating characteristic has been chosen, the impedance is checked
against all three sets of individual settings. Depending on the type and location of an
actual fault, only the corresponding apparent impedances will fall inside their operating
characteristics.
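The zone test itself reduces to a point-in-region check in the R-X plane. A minimal sketch for the mho circle, which passes through the origin with its center at half the reach impedance (the reach setting and measured impedance below are assumed; the quadrilateral test would be a point-in-polygon check instead):

% Mho characteristic test for one zone.
Zreach = 0.85 * (2 + 25j);           % assumed zone I reach setting
Zc     = Zreach / 2;                 % center of the mho circle
Zapp   = 1.5 + 16j;                  % assumed measured apparent impedance
inZone = abs(Zapp - Zc) <= abs(Zc);  % inside the circle -> operate condition
fprintf('apparent impedance inside mho zone: %d\n', inZone);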
Figure 2.7 – Mho and Quadrilateral Distance Relay Characteristics [15]
Chapter 3
NEURAL NETWORKS
3.1 Introduction
Generally, power systems are highly complex, large-scale systems of a statistical
nature, with subsystems whose models usually cannot be easily identified. Quite often
there is no suitable analytical technique to deal with this degree of complexity, and the
number of computational possibilities is too high, leading to unsatisfactory solutions.
These problems can be overcome successfully using artificial neural networks (ANNs).
3.2 Basis of Neural Networks
It is generally known that all biological neural functions, including memory, are
stored in very complex grids of neurons and their interconnections. Learning is a
process of acquiring data from the environment and establishing new neurons and
connections between neurons, or modifying existing connections. The idea of
artificial neural networks, shown in Fig 3.1, is to construct a small set of simple
artificial neurons and train them to capture general, usually complex and nonlinear,
relationships among data.
Figure 3.1 – Principle of Artificial Neural Network (the input is mapped to an output, the output is compared with a target, and the resulting error drives adaptation of the network structure)
The function of a neural network is determined by the structure of its neurons, the
connection strengths, and the type of processing performed at the elements or nodes. In
classification tasks, the output being predicted is a categorical variable, while in
regression problems the output is a quantitative variable. A neural network uses
individual examples, such as a set of inputs or input-output pairs, and an appropriate
training mechanism to periodically adjust the number of neurons and the weights of
neuron interconnections so as to perform the desired function. Afterwards, it possesses
generalization ability, and can successfully accomplish mapping or classification of
input signals that have not been presented in the training process.
3.3 History of Neural Networks
The origin of neural network theory was in the 1940s with the work of McCulloch
and Pitts, who showed that networks with a sufficient number of artificial neurons
could, in principle, compute any arithmetic or logical function [7]. They were
followed by Hebb, who presented a mechanism for learning in biological neurons.
The first practical application of artificial neural networks came in the late 1950s,
when Rosenblatt invented the perceptron network and associated learning rule.
However, the basic perceptron network could solve only a limited class of problems.
A few years later, Widrow and Hoff introduced a new learning algorithm and used it to
train adaptive learning neural networks, with structure and capability similar to the
perceptron network. In 1972 Kohonen and Anderson independently developed new
neural networks that can serve as memories, by learning association between input
and output vectors. Grossberg was investigating the self-organizing neural networks,
capable of performing error correction by themselves, without any external help and
supervision [7]. In 1982, Hopfield described the use of statistical analysis to define
the operation of a certain class of recurrent networks, which could be used as an
associative memory. The back propagation algorithm for training multilayer
perceptron neural networks was discovered in 1986 by Rumelhart and McClelland.
Radial basis function networks were invented in 1988 by Broomhead and Lowe as an
alternative to multilayer perceptron networks. In the 1990s, Vapnik invented support
vector machines, a powerful class of supervised learning networks [7]. In the last
fifteen years theoretical and practical work in the area of neural networks has been
rapidly growing.
Neural networks are used in a broad range of applications, including pattern
classification, function approximation, data compression, associative memory,
optimization, prediction, nonlinear system modeling, and control. They are applied to
a wide variety of problems in many fields including aerospace, automotive, banking,
chemistry, defense, electronics, engineering, entertainment, finance, games,
manufacturing, medical, oil and gas, robotics, speech, securities, telecommunications,
transportation.
3.4 Types of Neural Networks
Generally, there are three categories of artificial neural networks: feedforward,
recurrent (feedback), and competitive learning networks. In feedforward networks the
outputs are computed directly from the inputs, and no feedback is involved. Recurrent
networks are dynamical systems and have feedback connections between outputs and
inputs. In competitive learning networks the outputs are computed based on some
measure of distance between the inputs and the stored prototypes. Feedforward networks
are used for pattern recognition and function approximation, recurrent networks are used
as associative memories and for optimization problems, and competitive learning
networks are used for pattern classification.
3.5 Neural Networks for Pattern Classification
Pattern recognition or classification is learning with categorical outputs. Categories
are symbolic values assigned to the patterns, connecting each pattern to the specific
event that it represents. Categorical variables take only a finite number of
possible values. The union of all regions where a given category is predicted is known
as the decision region for that category. In pattern classification the main problem is
estimation of the decision regions between categories that are not perfectly separable
in the pattern space. Neural network learning techniques for pattern recognition can
be classified into two broad categories: unsupervised and supervised learning.
During supervised learning a desired set of outputs is presented together with the set of
inputs, and each input is associated with a corresponding output. In this case, the neural
network learns the matching between inputs and desired outputs (targets) by adjusting
the network weights so as to minimize the error between the desired and actual
network outputs over the entire training set. During unsupervised learning, however,
desired outputs are not known or not taken into account, and the network learns to
identify similarity or the inner structure of the input patterns, by adjusting the network
weights until similar inputs start to produce similar outputs.
The most commonly used type of feedforward neural network is the Multilayer
Perceptron, shown in Fig 3.2. It has multiple layers of parallel neurons: typically one or
more hidden layers, each forming the inner product of its inputs, weights, and biases
followed by a nonlinear transfer function, and an output layer forming the inner product
of its inputs, weights, and biases followed by a linear or nonlinear transfer function. The
use of nonlinear transfer functions allows the network to learn complex nonlinear
relationships between input and output data.
Figure 3.2 – Multilayer Perceptron Network [5] (input layer x1…xJ, hidden layer z1…zn with weights wNJ, output layer y1…yL with weights wLN)
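As a sketch of the computation Fig 3.2 depicts (the layer sizes and random weights below are placeholders, not a trained network):

% Forward pass of a small multilayer perceptron: sigmoid hidden layer,
% linear output layer.
J = 3; N = 4; L = 2;                 % input, hidden, and output sizes
x  = rand(J, 1);                     % one input pattern
W1 = randn(N, J); b1 = randn(N, 1);  % hidden-layer weights and biases
W2 = randn(L, N); b2 = randn(L, 1);  % output-layer weights and biases
z = 1 ./ (1 + exp(-(W1*x + b1)));    % hidden activations (nonlinear)
y = W2*z + b2;                       % network output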
3.6 Competitive Networks
Competitive or self-organized neural networks try to identify natural groupings of
data from a large data set through clustering [5]. The aim of clustering is to allocate
input patterns into a much smaller number of groups called clusters, such that each
pattern is assigned to a unique cluster [8]. Clustering is the process of grouping the data
into groups, or clusters, so that objects within a cluster have high similarity to one
another but are dissimilar to objects in other clusters [8]. Two widely used clustering
algorithms are K-means, where the number of clusters is given in advance, and
ISODATA, where the clusters are allocated incrementally. Competitive networks learn
to recognize groups or clusters of similar input patterns, each having members that are
as much alike as possible. The similarity between input patterns is estimated by some
distance measure, usually the Euclidean distance. The neurons of the network are cluster
centers, defined as prototypes of the input patterns encountered by the clusters. During
learning, each prototype becomes sensitive to, and triggered by, a different domain of
input patterns. When learning is completed, the set of prototypes represents the structure
of the input data. For supervised classification, each cluster belongs to one of the
existing categories, and the number of neural network outputs corresponds to the desired
number of categories, determined by the given classification task.
The Competitive Learning network consists of two layers. The first layer performs a
correlation between the input pattern and the prototypes. The second, output layer
performs a competition between the prototypes to determine a winner, indicating
which prototype is the best representative of the input pattern. An associative learning
rule is used to adapt the weights in a competitive network; typical examples are the
Hebb rule in (3.1) and the Kohonen rule in (3.2). In the Hebb rule, the winning prototype
$w_{i,j}$ is adjusted by moving it toward the product of input $x_i$ and output $y_j$
with learning rate $\alpha$:

$w_{i,j}(k) = w_{i,j}(k-1) + \alpha\, x_i(k)\, y_j(k)$   (3.1)

In the Kohonen rule, the winning prototype $w_{i,j}$ is moved toward the input $x_i$:

$w_{i,j}(k) = w_{i,j}(k-1) + \alpha\,(x_i(k) - w_{i,j}(k-1))$ for $i = \arg\min_i \|x_i(k) - w_{i,j}(k-1)\|$   (3.2)
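A minimal sketch of one competitive-learning step using the Kohonen rule (3.2); the prototype matrix, pattern, and learning rate below are placeholders:

% One Kohonen-rule update: find the prototype nearest to the input
% pattern and move it toward that pattern.
W     = rand(5, 8);                        % 5 prototypes, 8 features each
x     = rand(1, 8);                        % one input pattern
alpha = 0.1;                               % learning rate
[~, p]  = min(sum((W - x).^2, 2));         % winner: minimum-distance row
W(p, :) = W(p, :) + alpha * (x - W(p, :)); % move winner toward the pattern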
A typical example of a Competitive Learning network is the Vector Quantization (VQ)
network, where the prototype closest to the input pattern is moved toward that
pattern. Competitive Learning networks are efficient adaptive classifiers, but they
suffer from certain problems. The first problem is that the choice of learning rate
forces a trade-off between the speed of learning and stability of the prototypes.
The second problem occurs when clusters are close together such that they may
overlap and encompass patterns already encountered by other clusters. The third
problem is that occasionally an initial prototype is located so far from any input
pattern that it never wins the competition, and therefore never learns. Finally, a
competitive layer always has as many clusters as neurons. This may not be acceptable
for some applications when the number of clusters cannot be estimated in advance.
Some of the problems discussed can be solved by Self-Organized-Feature-Map
(SOM) networks and Learning Vector Quantization (LVQ) networks.
Self-Organized-Feature-Maps learn to capture both the distribution, as competitive layers
do, and the topology of the input vectors. Learning Vector Quantization networks try to
improve Vector Quantization networks by adapting the prototypes using supervised
learning. The Competitive Learning networks mentioned above, like many other types of
neural network learning algorithms, suffer from a problem of unstable learning, usually
called the stability/plasticity dilemma. If a learning algorithm is plastic, or sensitive to
novel inputs, then it becomes unstable and to a certain degree forgets prior learning.
The stability/plasticity problem has been solved by the Adaptive Resonance Theory
(ART) neural networks, which adapt themselves to new inputs without destroying past
training.
3.7 Adaptive Resonance Theory
The ART neural network is a modified type of competitive learning, used in the Grossberg
network, and has a unique concept for discovering the most representative positions of
the prototypes in the pattern space. Similar to SOM and LVQ networks, the prototype
positions are dynamically updated during presentation of input patterns. However,
contrary to SOM and LVQ, the initial number of clusters and the cluster centers are not
specified in advance; the clusters are allocated incrementally whenever a presented
pattern is sufficiently different from all existing prototypes. Consequently, ART networks
are sensitive to the order of presentation of the input patterns. The main advantage of the
ART network is its ability to self-adjust the underlying number of clusters. This offers a
flexible neural network structure that can handle an infinite stream of input data,
because the cluster prototype units contain an implicit representation of all the input
patterns previously encountered. Since ART architectures are capable of continuous
learning with non-stationary inputs, an on-line learning feature may be easily
appended. The ART architecture, shown in Fig 3.3, consists of input, hidden and
output layers and their specific interconnections: hidden-to-output layer activations,
output-to-hidden layer expectations, a mismatch detection subsystem, and gain control.
The key innovation of ART is the use of expectation or resonance, where each input
pattern is presented to the network and compared with the prototype that it most
closely matches.
Figure 3.3 – Structure of Adaptive Resonance Theory Network (input layer x1…xJ, hidden layer z1…zJ, output layer y1…yL; activation weights WLJ and expectation weights WJL connect the hidden and output layers, with gain control and a mismatch detection subsystem issuing the reset signal)
When an input pattern is presented to the network, it is normalized and multiplied by
the cluster prototypes, i.e., the weights of the hidden-output layer. Then a competition is
performed at the output layer to determine which cluster prototype is closest to the input
pattern. The output layer employs a winner-take-all competition, leaving only one unit
with a non-zero response. Thus, the input pattern activates one of the cluster prototypes.
When a prototype in the output layer is activated, it is reproduced as an expectation at the
hidden layer. The hidden layer then performs a comparison between the expectation and
the input pattern, and the degree of their match is determined. If the match is adequate
(resonance occurs), the pattern is added to the winning cluster and its prototype is
updated by moving it toward the input pattern. When the expectation and the input pattern
are not closely matched (resonance does not occur), the mismatch is detected and
causes a reset in the output layer. This reset signal disables the current winning
cluster, and the current expectation is removed to allow learning of a new cluster. In
this way, previously learned prototypes are not affected by new learning. The amount
of mismatch required for a reset is determined by the controlled gain, or vigilance
parameter, which defines how well the current input should match a prototype of the
winning cluster. The set of input patterns continues to be applied to the network until
the weights converge and stable clusters are formed. The obtained cluster prototypes
generalize the density of the input space, and during implementation have to be combined
with the K-Nearest Neighbor (K-NN) or any other classifier for classification of new
patterns.
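One presentation step of this resonance mechanism can be sketched as follows (the prototypes, pattern, and vigilance value are placeholders; Chapter 4 and Table 4.1 give the full training algorithm):

% Single ART-style presentation: winner-take-all competition followed by
% a vigilance test; resonance updates the winner, mismatch adds a cluster.
W   = rand(4, 8);  n = ones(4, 1);        % prototypes and member counts
x   = rand(1, 8);                         % normalized input pattern
rho = 0.5;                                % vigilance/threshold parameter
[d, p] = min(sqrt(sum((W - x).^2, 2)));   % competition at the output layer
if d <= rho                               % resonance: adequate match
    W(p, :) = (n(p)*W(p, :) + x) / (n(p) + 1);  % move prototype toward pattern
    n(p) = n(p) + 1;
else                                      % mismatch: reset, form a new cluster
    W(end+1, :) = x;  n(end+1) = 1;
end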
There are many different variations of ART available today. ART1 performs
unsupervised learning for binary input patterns, while ART2 is modified to handle
analog input patterns. ART-2A is a version of ART2 with faster learning enabled.
ART3 performs parallel searches of distributed prototypes in a hierarchical network
structure. ARTMAP is a supervised version of ART1. There are also many other
neural networks derived from these basic ART structures.
Chapter 4
NEURAL NETWORK ALGORITHM
4.1 Introduction
This chapter presents the ART neural network pattern recognition algorithm. The first
section discusses a very important feature of the proposed neural network: the inherent
adaptivity of its structure. An extensive description of the training of the neural network,
utilizing unsupervised and supervised learning stages, is provided in the second and third
sections. The fourth section explains the implementation of the trained neural network,
and specifies the decision rule for interpreting the neural network outputs.
4.2 Adaptive Neural Network (ART) Structure
The proposed neural network does not have a typical predetermined structure with a
specified number of neurons, but rather an adaptive structure with self-evolving
neurons. The structure depends only upon the characteristics and presentation order of
the patterns in the input data set. The diagram of the complete procedure of neural
network training is shown in Fig 4.1. The training consists of numerous iterations of
alternating unsupervised and supervised learning stages, suitably combined to achieve
maximum efficiency. Groups of similar patterns are allocated into clusters, defined as
hyper-spheres in a multidimensional space, where the space dimension is determined
by the length of the input patterns. The neural network initially uses unsupervised
learning with unlabelled input patterns to form tentative clusters. It tries to discover
the pattern density by grouping the patterns into clusters, and to estimate cluster centers
that can serve as prototypes of typical input patterns.
Figure 4.1 – Combined Unsupervised and Supervised Neural Network Training [1]
The category labels are then assigned to the clusters during the supervised learning
stage. A tuning parameter, called the threshold parameter, controls the cluster size and
hence the number of generated clusters, and is consecutively decreased during the
iterations. If the threshold parameter is high, many different patterns can be
incorporated into one cluster, which leads to a small number of coarse clusters. If the
threshold parameter is low, only very similar patterns activate the same cluster, and
this leads to a large number of fine clusters.
After training, the cluster centers serve as the neural network neurons and represent
typical pattern prototypes. The structure of the prototypes depends solely on the density
of the input patterns. Each training pattern has been allocated into a single cluster, while
each cluster contains one or more similar input patterns. A prototype is centrally
located in its respective cluster, and is either identical to one of the actual patterns or
a prototype synthesized from the encountered patterns. A category label that symbolizes a
group of clusters with a common symbolic characteristic is assigned to each cluster,
meaning that each cluster belongs to one of the existing categories. The number of
categories corresponds to the desired number of neural network outputs, determined
by the given classification task. During implementation of the trained network, the
distances between each new pattern and the established prototypes are calculated, and
using the K-Nearest Neighbor classifier the most representative category amongst the
nearest prototypes is assigned to the pattern.
4.3 Unsupervised Learning
The initial data set, containing all the patterns, is first processed using unsupervised
learning, realized as a modified ISODATA clustering algorithm. During this stage
patterns are presented without their category labels. An initial guess of the number of
clusters and their positions is not specified in advance; only a distance measure
between cluster prototypes is constrained, using the threshold parameter.
Unsupervised learning consists of two steps: the initialization and stabilization phases.
The initialization phase, shown in Fig 4.2, incrementally iterates over all the patterns and
establishes an initial cluster structure based on similarity between the patterns.
Figure 4.2 – Unsupervised Learning (Initialization Phase): each test pattern Xi is compared, via distance computations |dist|, with the layer of trained competitive neurons W1…WL; the minimum-distance winner takes the vigilance test — on a pass the winning neuron is adapted, on a fail a new neuron WL+1 is added
The entire pattern set is presented only once. Since the number of cluster prototypes
is not specified, training starts by forming the first cluster with only the first input pattern
assigned. A cluster is formed by defining a hyper-sphere located at the cluster center,
with radius equal to the actual value of the threshold parameter. New clusters are formed
incrementally whenever a new pattern appears that is rather dissimilar to all previously
presented patterns. Otherwise, the pattern is allocated into the cluster with the most
similar patterns. The similarity is measured by calculating the Euclidean distance between
a pattern and the existing prototypes. Presentation of each input pattern updates the
position of exactly one cluster prototype. Whenever an input pattern is presented, the
cluster prototypes that serve as neurons compete among themselves. Each pattern is
compared to all existing prototypes and the winning prototype with minimum distance to
the pattern is selected. If the winning prototype passes the vigilance or similarity test,
meaning that it satisfactorily matches the pattern, the pattern is assigned to the
cluster of the winning prototype and the cluster is updated by adding that pattern.
Otherwise, if the winning prototype fails the vigilance test, a new cluster with a
prototype identical to the pattern is added. The entire procedure continues until
all patterns are examined.
The initialization phase does not reiterate, and although the clusters change their positions
during incremental presentation of the patterns, already presented patterns are not
able to change clusters. Consequently, the final output of the initialization phase is
a set of unstable clusters. Since the initialization phase does not reiterate the patterns, a
stabilization phase is needed to refine the number and positions of the clusters.
The stabilization phase, shown in Fig 4.3, is reiterated numerous times until a stable
cluster structure is obtained, i.e., until none of the patterns exchange clusters during a
single iteration. The stabilization phase starts by presenting all the patterns again. Each
pattern is compared again to all existing prototypes, and the winning prototype is
selected. If for an actual pattern the winning prototype fails the vigilance test, a new
cluster is formed and the previous winning prototype is updated by erasing the pattern
from that cluster. If for the pattern the winning prototype is identical in two
consecutive iterations and passes the vigilance test, no learning occurs, since the
pattern has not recently changed clusters. Otherwise, if the winning prototypes are
different in the current and previous iterations, the pattern is moved to the current
winning cluster and its prototype is updated by adding the pattern, while the previous
winning prototype is updated by erasing the pattern from the corresponding cluster. The
stabilization phase is completed when all clusters retain all their patterns after a single
iteration.
Unsupervised learning produces a set of stable clusters, comprising homogeneous
clusters, which contain patterns of a single category, and non-homogeneous clusters,
which contain patterns of two or more categories. It requires relatively high computation
time due to the large number of iterations needed for convergence of the stabilization
phase. The steps of unsupervised learning, performed through the initialization and
stabilization phases, are defined through mathematical expressions in Table 4.1.
Figure 4.3 – Unsupervised Learning (Stabilization Phase): each test pattern Xi is again compared, via |dist|, with the competitive neurons W1…WL; the minimum-distance winner takes the vigilance test — on a fail a new neuron is added and the previous winning neuron adapted; on a pass, if the winner differs between two consecutive iterations both the new and previous winning neurons are adapted, and if it is identical no learning occurs
4.4 Supervised Learning
During supervised learning, shown in Fig 4.4, the category label is associated with
each input pattern, allowing identification and separation of the homogeneous and
non-homogeneous clusters produced by unsupervised learning. Category labels are
assigned to the homogeneous clusters, and these clusters are added to the memory of
stored clusters, including their characteristics: prototype position, size, and category.
The patterns from homogeneous clusters are removed from further
unsupervised/supervised learning iterations. The set of remaining patterns, present in
non-homogeneous clusters, is transformed into a new, reduced input data set and used in
the next iteration. The convergence of the learning process is efficiently controlled by the
threshold parameter, which is slightly decreased after each iteration. The learning is
completed when all the patterns are grouped into homogeneous clusters. Contrary to the
slow-converging unsupervised learning, supervised learning is relatively fast because
it does not need to iterate.
Whenever allocated clusters with different categories mutually overlap, a certain
number of their patterns may fall in the overlapping regions. Although each pattern has
been nominally assigned to the nearest cluster, the presence of patterns in clusters of
another category leads to questionable validity of those clusters.
The ambiguity can be solved by redefining supervised learning and introducing
imposed restrictions for identification of homogeneous clusters. Supervised learning
is implemented by requiring a homogeneous cluster to encompass patterns of
exactly one category. Therefore, both subsets of patterns, one assigned to the cluster,
and the other encompassed by the cluster although assigned to some other cluster, are taken
into account during supervised learning.
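A sketch of the homogeneity check at the core of this stage (the cluster assignments and labels are placeholder data, and the overlap restriction described above is omitted for brevity):

% Identify homogeneous clusters: label a cluster only if every pattern
% assigned to it carries the same category label.
assign = [1 1 2 2 2 3 3];          % cluster index of each pattern (assumed)
labels = [1 1 1 2 1 2 2];          % category label of each pattern (assumed)
for l = unique(assign)
    cats = unique(labels(assign == l));
    if numel(cats) == 1
        fprintf('cluster %d is homogeneous, category %d\n', l, cats);
    else
        fprintf('cluster %d is non-homogeneous, kept for next pass\n', l);
    end
end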
Figure 4.4 – Supervised Learning: starting from the first cluster, each cluster is checked — a homogeneous cluster is assigned a category, extracted, its patterns removed from the input set, and the cluster (position w, size r, category c) added to the memory of stored clusters; a non-homogeneous cluster is discarded; the loop continues until all clusters are examined
4.5 Implementation
During implementation or testing of the trained neural network, new patterns are
classified according to their similarity to the cluster prototypes generated during
training. Classification is performed by interpreting the outputs of the trained neural
network through a K-Nearest Neighbor (K-NN) classifier. The K-NN classifier
determines the category of a new pattern based on the majority of the categories
represented in a pre-specified small number of nearest clusters retrieved from the cluster
structure established during training. It requires only the number K, which determines
how many neighbors have to be taken into account. The K-NN classifier is very
straightforward and is reasonably employed, since the number of prototypes is
significantly smaller than the number of patterns in the training set. Fig 4.5 shows an
implementation of the trained neural network with the simplest Nearest Neighbor
classifier, which is identical to K-NN for K = 1.
Figure 4.5 – Implementation of Trained Network for Classification: a test pattern Xi is compared, via |dist|, with the trained competitive neurons W1…WL, and the category of the minimum-distance winning prototype is assigned to the pattern
Given a set of categorized clusters, the K-NN classifier determines the category of a
new pattern $x_i$ based only on the categories of the K nearest clusters:

$\mu_c(x_i) = \dfrac{1}{K} \sum_{k=1}^{K} \mu_c(w_k)$   (4.1)

where $w_k$ is the prototype of cluster k, $\mu_c(w_k)$ is the membership degree of cluster k
belonging to category c, and $\mu_c(x_i)$ is the membership degree of pattern $x_i$ belonging to
category c, with i = 1, ..., I; c = 1, ..., C; k = 1, ..., K; where I, K, and C are the
numbers of patterns, nearest neighbors, and categories, respectively. The classifier
allows $\mu_c(w_k)$ to take only the crisp values 0 or 1, depending on whether or not cluster k
belongs to category c:

$\mu_c(w_k) = \begin{cases} 1 & \text{if cluster } k \text{ belongs to category } c \\ 0 & \text{otherwise} \end{cases}$   (4.2)

If two or more of the K nearest clusters have the same category, they add their
membership degrees to the cumulative membership value of that category. Finally,
when the contributions of all neighbors are accounted for, the most representative
category is assigned to the pattern:

$g(x_i) = \arg\max_c \, \mu_c(x_i)$   (4.3)

where $g(x_i)$ is the category assigned to pattern $x_i$, and c = 1, ..., C. Thus the outputs
of a given neural network reflect different categories of input events.
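A minimal sketch of this decision rule (the prototypes, cluster categories, and test pattern are placeholders):

% K-NN interpretation of the trained network outputs, per (4.1)-(4.3):
% majority category among the K nearest cluster prototypes.
W  = rand(10, 8);                   % cluster prototypes (placeholder)
cl = randi(4, 10, 1);               % category of each cluster (placeholder)
x  = rand(1, 8);                    % new pattern to classify
K  = 3;
[~, idx] = sort(sqrt(sum((W - x).^2, 2)));  % rank prototypes by distance
mu = accumarray(cl(idx(1:K)), 1/K);         % eqs. (4.1)-(4.2): memberships
[~, g] = max(mu);                           % eq. (4.3): winning category
fprintf('pattern assigned to category %d\n', g);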
Table 4.1 – Algorithm for Unsupervised Training of Neural Network

Step 0: Definitions. J is the length of an input pattern, I is the number of training patterns, L is the number of clusters (neurons), and $n_l$ is the number of patterns that belong to cluster l. $x_i = [x_{i1}\; x_{i2}\; x_{i3}\; \ldots\; x_{iJ}]$ is the i-th input pattern, $x_{ij}$ is the j-th feature of the i-th input pattern, and $w_l = [w_{l1}\; w_{l2}\; w_{l3}\; \ldots\; w_{lJ}]$ is the prototype of the l-th cluster.

Step 1: Set i = 1, L = 0, $n_l$ = 0.

Step 2: Form a new cluster: $w_{L+1} = x_i$, $n_{L+1} = 1$, L = L + 1.

Step 3: Set i = i + 1. If i > I go to Step 7; if i ≤ I go to Step 4.

Step 4: Compute the Euclidean distance $d_l$ between pattern i and the prototype of cluster l: $d_l = \sqrt{(w_l - x_i)(w_l - x_i)^T}$ for l = 1, 2, ..., L. Find the winning prototype p for which $d_p = \min_l d_l$.

Step 5: Compare the minimum distance $d_p$ to the threshold parameter $\rho$: if $d_p > \rho$ go to Step 2; if $d_p \le \rho$ go to Step 6.

Step 6: Pattern i is assigned to cluster p, and the winning prototype is updated: $w_p \leftarrow \frac{n_p}{n_p+1} w_p + \frac{1}{n_p+1} x_i$, $n_p \leftarrow n_p + 1$; go to Step 3.

Step 7: Set i = 0; every pattern is presented again.

Step 8: Set i = i + 1. If i > I go to Step 13; if i ≤ I go to Step 9.

Step 9: Pattern i currently belongs to cluster q. Compute the Euclidean distance $d_l$ between pattern i and the prototype of cluster l: $d_l = \sqrt{(w_l - x_i)(w_l - x_i)^T}$ for l = 1, 2, ..., L. Find the winning prototype p for which $d_p = \min_l d_l$.

Step 10: If $d_p > \rho$ go to Step 11 to form a new cluster; if $p \ne q$ and $d_p \le \rho$ go to Step 12 to change the cluster for pattern i; if p = q and $d_p \le \rho$ go to Step 8, because learning does not occur for pattern i.

Step 11: Form new cluster L + 1 and update the previous winning prototype q: $w_{L+1} = x_i$, $n_{L+1} = 1$, L = L + 1; $w_q \leftarrow \frac{n_q}{n_q-1} w_q - \frac{1}{n_q-1} x_i$, $n_q \leftarrow n_q - 1$; go to Step 13.

Step 12: Change the cluster for pattern i and update the new and previous winning prototypes p and q: $w_p \leftarrow \frac{n_p}{n_p+1} w_p + \frac{1}{n_p+1} x_i$, $n_p \leftarrow n_p + 1$; $w_q \leftarrow \frac{n_q}{n_q-1} w_q - \frac{1}{n_q-1} x_i$, $n_q \leftarrow n_q - 1$.

Step 13: If in Steps 8-12 any pattern has changed its cluster membership, go to Step 7; otherwise unsupervised learning is completed.
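Note that the Step 6 update w_p = (n_p/(n_p + 1)) w_p + (1/(n_p + 1)) x_i keeps each prototype equal to the mean of its member patterns. The following is a minimal MATLAB sketch of the first pass (Steps 1-6); the variable names are illustrative, and the full training program used in the project appears in Section 9.3.

function [W, n] = art_first_pass(X, tau)
% Sketch of the first training pass (Steps 1-6 of Table 4.1).
% X   - I x J matrix of training patterns, one pattern per row
% tau - distance threshold; W - cluster prototypes; n - member counts
I = size(X, 1);
W = X(1,:);                                  % Steps 1-2: first pattern seeds a cluster
n = 1;
for i = 2:I                                  % Step 3: present the remaining patterns
    D = W - repmat(X(i,:), size(W,1), 1);
    d = sqrt(sum(D.^2, 2));                  % Step 4: distance to every prototype
    [dp, p] = min(d);
    if dp > tau                              % Step 5: too far from all prototypes
        W = [W; X(i,:)];                     % Step 2: form a new cluster
        n = [n; 1];
    else                                     % Step 6: absorb into the winning cluster
        W(p,:) = (n(p)*W(p,:) + X(i,:)) / (n(p) + 1);
        n(p) = n(p) + 1;
    end
end
end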
Chapter 5
HARDWARE AND SOFTWARE SOLUTION
5.1 Introduction
A complete hardware and software solution for the Neural Network based digital relaying
concept is proposed in this chapter. The first section shows and explains the
architecture of the neural network based microprocessor relaying solution. The second
section describes the data acquisition and signal preprocessing steps. The procedures
for designing and implementing the neural network based protective relaying algorithm
are summarized in the third section. The execution of the relay output command is
addressed in the fourth section.
5.2 Relay Architecture
A functional block diagram of the proposed hardware and software solution for the
microprocessor based protective relay is shown in Fig 5.1. The given relaying
principle, like any other digital relay, generally comprises three fundamental hardware
subsystems: a signal conditioning subsystem, a digital processing subsystem, and a
command execution subsystem.
The main part of the relay is the software realization of the protective algorithm. The
algorithm is a set of numerical operations used to process the input signals to identify
and classify faults, and subsequently to initiate the action necessary to isolate the
faulted section of the power system. The protective relay must operate in the presence
of a fault within its zone of protection, and must restrain from operating in the
absence of a fault, or in the case of faults outside its protective zone.
Figure 5.1 – Proposed Hardware/Software Design for Microprocessor Based Relay[1]
5.3 Data Acquisition and Signal Processing
The training of the neural network based protective algorithm can be performed either
on-line, using measurement data taken directly from the field, or off-line, by accessing
historical records of fault and disturbance cases. Since interesting cases do not happen
frequently, initial off-line training becomes inevitable, and sufficient training data
are provided using simulations of relevant power system scenarios. Various fault and
disturbance events and operating states need to be simulated by changing the power
network topology and parameters.
Transmission line current and voltage signal levels at the relay location are usually
very high (kV and kA ranges). They are measured with current and voltage transformers
(CTs and VTs) and reduced to the lower operating range typical of A/D converters. The
process of pattern extraction, or forming the neural network input signals from the
obtained measurements, depends on several signal preprocessing steps. The attenuated,
continuously varying, three-phase current and voltage sinusoidal signals are filtered
by a low-pass analog anti-aliasing filter to remove noise and higher frequency
components. According to the sampling theorem, the ideal filter cut-off frequency has
to be less than or equal to one-half of the sampling rate used by the A/D converter.
The filtered analog signals are then sampled by the A/D converter at the specified
sampling frequency and converted to their digital representation.
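As an illustration, the corresponding filtering stage can be set up with MATLAB's butter and filtfilt functions, as is done digitally on the simulated waveforms in this project; the sketch below mirrors the values used in the Section 9.1 listing (60 kHz sampling, 200 Hz cut-off), with a placeholder waveform standing in for a real measurement.

% Example anti-aliasing stage: the cut-off frequency is kept well below
% one-half of the sampling frequency, per the sampling theorem.
fs       = 60e3;                          % A/D sampling frequency in Hz
cutoff   = 200;                           % low-pass cut-off frequency in Hz
raw      = randn(1000, 1);                % placeholder for a sampled waveform
[b, a]   = butter(5, cutoff*2/fs);        % 5th-order Butterworth, normalized cut-off
filtered = filtfilt(b, a, raw);           % zero-phase low-pass filtering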
Figure 5.2 – Moving Data Window for Voltage and Current Samples [4]
The sampled data selected for further processing generally include both the three-phase
currents and voltages, but may also be limited to either the three-phase currents or the
three-phase voltages. The samples are extracted in a dynamic data window of the desired
length (Fig 5.2), normalized, and aligned together to form a common input vector of
pattern components. Since a pattern is composed of voltage and current samples that
originate from two different quantities with different signal levels and sensitivities,
independently scaling the two subsets of samples in a pattern may improve the
algorithm's generalization capabilities. Using this scaling factor, the current
contribution to the relay decision may be increased relative to the voltage
contribution, or vice versa.
The specified conditioning of the input signals determines the length and
characteristics of the input patterns and governs the trade-off between performing the
fault classification more accurately and making the real-time decision sooner. The
values of the preprocessing parameters directly affect the algorithm's behavior during
training as well as its performance during implementation, and should be re-optimized
whenever the relay location, classification task, or expected scenarios change.
5.4 Digital Processing Subsystem
This subsystem consists of a microprocessor which continuously runs the software
code responsible for the fault classification. The microprocessor should be capable of
floating point operations, or a math coprocessor could be used. A 16- or 32-bit
microprocessor will satisfy the requirements. Since the computation time is
significant, a microprocessor supporting high clock speeds is required.
The ART neural network explained in Chapter 4 is applied to the patterns extracted from
the voltage and current measurements. Neural network training is a complex incremental
procedure of adding new pattern prototypes and updating existing ones. The outcome of
the training process is a structure of labeled prototypes, each of which belongs to one
of the categories. A category represents a single fault type.
During real-time (on-line) implementation, the trained neural network possesses
generalization ability and is expected to successfully classify new patterns that were
not presented during the training process. The prototypes established during training
are dynamically compared with new patterns extracted from the actual measurements. The
pattern's similarity to all the prototypes is calculated, and a subset of the most
similar prototypes is retrieved. Finally, the pattern is classified by interpreting the
relationship between the pattern and the chosen prototypes using the decision rule
defined in Chapter 4. If the pattern is classified into the un-faulted (normal)
category, the input data window is shifted by one sample, the pattern vector is
updated, and the comparison is performed again, until a fault is detected and
classified.
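The following is a minimal MATLAB sketch of this on-line loop, reusing the knn_decide sketch from Chapter 4; the function handle next_sample, the initial window win, and the label normal_cat are illustrative assumptions, not part of the project code.

function g = classify_online(next_sample, W, cat, K, win, normal_cat)
% Sketch of the on-line loop: the data window slides by one sample
% until a pattern is classified into a fault category.
% next_sample - function handle returning the newest sample row
% win         - initial data window, one sample instant per row
g = normal_cat;
while g == normal_cat
    pat = win(:)' / norm(win(:));            % form and normalize the input pattern
    g = knn_decide(W, cat, pat, K);          % decision rule sketched in Chapter 4
    if g == normal_cat
        win = [win(2:end,:); next_sample()]; % shift the data window by one sample
    end
end
end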
5.5 Command Execution System
A pattern categorized by the protection algorithm is converted into a digital output,
which the control circuitry transforms into a corresponding signal for circuit breaker
activation. During normal operation, the circuit breaker is closed and current flows
through the transmission line. Depending on the recognized event and the designer's
selected logic, the circuit breaker either trips the faulted phases within the
prescribed time period, removing the fault from the rest of the system, or remains
unaffected.
Whenever a possibility of temporary faults exists, additional reclosing relays are
usually used to automatically reclose the circuit breaker after the fault has been
isolated for a short time interval, and restore the normal status of the power system.
Chapter 6
SIMPLE POWER SYSTEM MODEL
6.1 Introduction
This chapter introduces the simplified power network model used for the design and
evaluation of the Neural Network based fault classification algorithm. A detailed and
accurate model of a real power network was not built, due to the computational
requirements of simulating such a system. The sets of faults and disturbances simulated
for studying and training the Neural Network based relay are discussed in the second
section.
6.2 Power System Model
The model used for the entire simulation is shown in Fig 6.1. The model selected was
the bare minimum required to simulate the faults and generate the fault waveforms. It
was built using the SimPowerSystems toolbox in MATLAB. The model consists of a single
synchronous alternator connected to an infinite bus-bar through a transmission line. A
load is connected on the load side of the transmission line, and a local load is also
connected at the generating station itself. Shunt capacitors are connected on both
sides of the load-side bus. A fault breaker unit is connected near the load-side bus,
and transformers are connected as shown in Fig 6.1.
Figure 6.1 – Power System Model
Chapter 7
SIMULATION AND IMPLEMENTATION
7.1 Introduction
This chapter focuses on the software implementation details of the Neural Network
based fault classifier. The first section provides a summary of available software
tools for power system modeling and simulation. The second section describes the
modules for power network model and algorithm interfacing. Scenario setup and automatic
generation of scenario cases are explained in the third section. The last section
describes the performance testing of the Neural Network based fault classifier.
7.2 Power System Design and Simulation
Modeling and simulation of complex power networks require the availability of diverse
software tools. The use of the Electromagnetic Transients Program (EMTP) and the
Alternative Transients Program (ATP) for simulating power systems and producing voltage
and current transient waveforms has been known for a long time. More recently, the
general purpose modeling and simulation tool MATLAB, with its SimPowerSystems toolbox,
has been used for power network modeling.
Moreover, manual simulation of a large number of scenarios is practically impossible
and is a limiting factor as well. Since the simulation outputs have to be used as a
signal generator for protective algorithm training and evaluation, model interfacing
for large-scale simulations is critical. The interfacing can be done using either the C
language or a commercial software package like MATLAB with pre-defined libraries of
specialized functions. In this project, MATLAB with the SimPowerSystems toolbox has
been used for simulation and design of the power system.
7.3 Model Interfacing
The simulation environment based on the MATLAB software package was selected as the
main engineering tool for performing modeling and simulation of the power system, as
well as for interfacing the user and the appropriate simulation programs. The scenario
setup and the neural network algorithm are implemented in MATLAB and interfaced with
the power network model, which is also built in MATLAB. MATLAB was chosen for its
powerful set of programming tools, signal processing and numerical functions, and
convenient user-friendly interface.
7.4 Generating Simulation Cases
Manual simulation of a large number of scenarios is practically impossible because of
the large effort and time required. Thus a method of automatically setting the power
network and fault conditions is required to generate the simulation cases to be used
for training the Neural Network algorithm.
A MATLAB based program has been written to automatically generate fault waveforms for
different settings of fault location, fault type, and fault inception angle by
communicating with the SimPowerSystems toolbox to set the parameters of the fault
breaker block. The simulation results, namely the waveforms, are saved into text files
to be used later for network training and analysis.
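In condensed form, one generated case amounts to setting the fault breaker block parameters and running the model, as sketched below using the block and model names from the project's own listings (the full program appears in Section 9.1; the parameter values here are examples).

% Condensed sketch of one automatically generated case (see Section 9.1).
fblk = 'sim_model1/fault_br';               % fault breaker block in the model
set_param(fblk, 'FaultA', 'on');            % fault type: phase A ...
set_param(fblk, 'FaultB', 'off');
set_param(fblk, 'FaultC', 'off');
set_param(fblk, 'GroundFault', 'on');       % ... to ground
set_param(fblk, 'GroundResistance', '5');   % fault resistance (example value)
set_param(fblk, 'SwitchTimes', '[0.0208]'); % fault inception instant (example value)
sim('sim_model1');                          % run the simulation; the resulting
                                            % i_fault / v_fault waveforms are then
                                            % filtered, sampled, and saved to file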
7.5 Testing
Once the Neural Network has been trained, it has to be tested for its classification
efficiency. Testing involves generation of random test cases of random faults and
checking to see if the fault is being rightly classified. By testing a number of test
cases we can obtain the classification efficiency of the Neural Network based fault
classifier. We can change the parameters of the Neural Network like pattern vector
length or reduce the contributing factors and look at the performance of the network.
The number of clusters generated by the network is a measure of the generalization
capability of the network and this too can be tested for different network parameters.
Chapter 8
SIMULATION RESULTS
8.1 Simulation Waveforms
Sections 8.1.1 and 8.1.2 show the waveforms obtained upon simulating fault conditions.
As can be seen, the actual waveforms contain various harmonics. These harmonics have to
be filtered out to obtain the filtered waveforms shown below. Filtering is required
because, when the input pattern is based on the raw waveform, the harmonics cause
abrupt deviations that can "confuse" the classifier while it tries to find similarities
between waveforms of the same category. Once the filtered waveforms are obtained, a
fixed number of points is sampled, say 10 points in 1 cycle of one fault waveform, and
the pattern vector is built by assembling the sampled waveforms for each phase of
current and/or voltage one after the other.
Figure 8.1 – Cluster with Member Patterns [6]
Before training, the current and voltage values in all training patterns are
normalized, both to reduce the magnitude of the data and to give equal importance to
the shape of the voltage and current waveforms during training. Fig 8.1 shows a cluster
prototype generated after training, along with the patterns that are members of that
cluster. The patterns were generated using 15 points in 1.5 cycles of the fault
waveform for each current and voltage phase waveform, resulting in a pattern of 90
features: 45 features each for the three-phase currents and voltages. While generating
the training data, 10 types of faults were simulated at 3 different locations, as shown
in the model diagram, for 20 different fault inception angles and 3 different fault
resistances. This gives a total of 1800 training patterns. Generating these patterns is
time consuming due to the time taken to simulate each fault; generating the 1800
training patterns took almost 4 hours.
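In outline, the normalization applied to each 90-feature pattern scales the current and voltage halves to unit length independently before recombining them, as in the training program of Section 9.3:

% Normalization of one training pattern (cf. Section 9.3): the 45 current
% features and 45 voltage features are scaled to unit length independently,
% so the shapes of both waveform types carry equal weight.
cpat = pattern(ii, 1:45);                % three-phase current samples
vpat = pattern(ii, 46:90);               % three-phase voltage samples
cpat = cpat / sqrt(cpat * cpat');        % unit-length current subset
vpat = vpat / sqrt(vpat * vpat');        % unit-length voltage subset
pattern(ii, :) = [cpat vpat] / sqrt(2);  % recombined pattern of unit length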
8.1.1 Waveforms for Line to Ground Fault
We simulate a line to ground fault and the following figures show the effect of
removal of harmonics on the current and voltage waveforms:
[Figure panels: actual and filtered current waveforms (top); actual and filtered voltage waveforms (bottom)]
Figure 8.2 – Waveforms for Line to Ground Fault
When the same fault is simulated with a different fault inception angle we get the
waveforms shown below:
[Figure panels: actual and filtered current waveforms (top); actual and filtered voltage waveforms (bottom)]
Figure 8.3 – Waveforms for Line to Ground Fault at Different Inception Angle
8.1.2 Waveforms for Line to Line Fault
We simulate a line to line fault and the following figures show the effect of removal
of harmonics on the current and voltage waveforms:
[Figure panels: actual and filtered current waveforms (top); actual and filtered voltage waveforms (bottom)]
Figure 8.4 – Waveforms for Line to Line Fault
When the same fault is simulated with a different fault inception angle we get the
waveforms shown:
[Figure panels: actual and filtered current waveforms (top); actual and filtered voltage waveforms (bottom)]
Figure 8.5 – Waveforms for Line to Line Fault at Different Inception Angle
8.2 Neural Network Training Results
Training of the neural network involves formation of clusters that represent a
particular category. The categories can be fault types or fault locations. After
training, a number of clusters are formed, each representing one particular category.
The number of clusters formed is a measure of the generalization of the neural network:
a low number of resulting clusters implies high generalization. The number of clusters
formed also determines the memory requirement of the microprocessor based system when
the hardware implementation is done.
The grouping together of patterns of a particular category depends on the number of
features present in the input patterns. For example, if only current values are used as
pattern features, the amount of information available for grouping same-category
patterns is less than if both current and voltage features were used. Table 8.1 below
presents the effect on the training results of varying the number of features in the
input patterns presented to the Neural Network for fault type classification.
Table 8.1 - Results of Neural Network Training for Fault Type Classification

No.  Features Used              Window       Pattern Length   Clusters Formed
1    3Φ Voltages and Currents   1.5 cycles   90               543
2    3Φ Currents                1.5 cycles   45               508
3    3Φ Voltages                1.5 cycles   45               691
4    3Φ Currents                1 cycle      30               627
A separate classifier has to be built to determine the fault location. As mentioned
before, faults were simulated at three different locations, as shown in Fig 6.1. The
resulting neural network classifier for fault location has to be used together with the
fault type classifier to obtain complete fault identification and classification. Table
8.2 below shows the results of training a neural network to classify fault location.
Table 8.2 - Results of Neural Network Training for Fault Location Classification

Features Used              Window       Pattern Length   Clusters Formed
3Φ Voltages and Currents   1.5 cycles   90               442
8.3 Neural Network Classification Results
Once the neural network has been trained, we obtain a set of clusters, each of which
uniquely represents one category. As discussed in Section 4.5, we need a decision rule
to decide which category a particular test pattern belongs to. In this project we use
the K-NN decision rule: the k clusters closest to the presented input pattern are taken
into account, and for each category a membership degree is accumulated over those k
clusters. The category with the highest membership degree for the current test pattern
is the winning category. For example, with k = 3, if the three nearest clusters
represent categories AG, AG, and BC, the pattern is assigned to category AG with
membership degree 2/3. After all the test patterns are presented, the classification
efficiency is calculated.
The table below shows the variation of fault classification efficiency with the number
of features in input patterns and the number of neighboring clusters considered during
the classification:
Table 8.3 - Results of Neural Network Fault Type Classifier

No.  Features Used              Pattern Length   Neighbors (k)   Classification Efficiency (%)
1    3Φ Voltages and Currents   90               1               98.33
                                                 3               91.67
                                                 4               87.5
2    3Φ Currents                45               1               98.33
                                                 3               94.17
                                                 4               87.5
3    3Φ Currents                30               1               97.5
                                                 3               92.5
                                                 4               85.83
4    3Φ Voltages                45               1               24.17
                                                 3               20.0
                                                 4               20.83
Table 8.4 below shows the classification efficiency for fault location using k = 1. As
can be seen, the classification efficiency for fault location is very good.
Table 8.4 - Results of Neural Network Fault Location Classifier

Features Used              Pattern Length   Neighbors (k)   Classification Efficiency (%)
3Φ Voltages and Currents   90               1               99.17
It is seen that, in general, fault type classification is more accurate when more
features are used. It is also seen that the line currents alone provide the Neural
Network with enough information to classify the fault type, whereas when only voltages
are used as features the fault type classification efficiency is very poor. The fault
type classification efficiency is also found to decrease as the number of neighboring
clusters (k) increases. For fault location classification, this method is found to be
very accurate, giving a classification efficiency of 99.17%.
Chapter 9
PROGRAM LISTINGS
9.1 Program to Generate Training Patterns
% Program to generate fault data
clc;
clear;
% Constants
mytime=1/60;
max_steps=20;
step_time=mytime/max_steps;
sample_size=1.6667e-5;
data_step=mytime/(sample_size*10);
time_vec=linspace(0,mytime,10);
gr=[1 5 10];
% Filter
cutoff=0.2e3;
samplef=1/sample_size;
[b_filt a_filt]=butter(5,cutoff*2/samplef);
% Model list
model_list=char('sim_model1','sim_model2','sim_model3');
% Main program
file_name='new_patterns.txt';
curr_file=strcat ('d:\MATLAB7\work\project\data_viF\',file_name);
disp ('Total patterns to generate:');
disp (10*3*20*3);
ndone=0;
for mcnt=1:3
model_name=model_list(mcnt,:);
fblk_name=strcat(model_name,'/fault_br');
for fcnt=1:10
set_param (fblk_name,'SwitchTimes','[1/60]');
for fr=1:3
text_str=sprintf ('%d',gr(fr));
set_param(fblk_name,'GroundResistance',text_str);
switch fcnt
case 1
% Phase A to ground fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','on');
case 2
% Phase B to ground fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','on');
case 3
% Phase C to ground fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','on');
case 4
% Phase AB to ground fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','on');
case 5
% Phase BC to ground fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','on');
case 6
% Phase CA to ground fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','on');
case 7
% Phase A and B fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','off');
case 8
% Phase B and C fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','off');
case 9
% Phase C and A fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','off');
case 10
% Phase A , B and C fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','off');
end
for ii=1:20
sim (model_name);
if_a=filtfilt (b_filt,a_filt,i_fault.signals.values(:,1));
if_b=filtfilt (b_filt,a_filt,i_fault.signals.values(:,2));
if_c=filtfilt (b_filt,a_filt,i_fault.signals.values(:,3));
vf_a=filtfilt (b_filt,a_filt,v_fault.signals.values(:,1));
vf_b=filtfilt (b_filt,a_filt,v_fault.signals.values(:,2));
vf_c=filtfilt (b_filt,a_filt,v_fault.signals.values(:,3));
st=(mytime+(ii-1)*step_time)/sample_size;
ed=(mytime+(ii-2)*step_time+mytime*1.5)/sample_size;
kk=1;
for jj=st:data_step:ed
jk=floor(jj);
currenta(kk)=if_a(jk);
currentb(kk)=if_b(jk);
currentc(kk)=if_c(jk);
voltagea(kk)=vf_a(jk);
voltageb(kk)=vf_b(jk);
voltagec(kk)=vf_c(jk);
kk=kk+1;
end
data_vector(1:15)=currenta(:);
data_vector(16:30)=currentb(:);
data_vector(31:45)=currentc(:);
data_vector(46:60)=voltagea(:);
data_vector(61:75)=voltageb(:);
data_vector(76:90)=voltagec(:);
data_vector(91)=fcnt;
dlmwrite (curr_file,data_vector,'-append','delimiter',';','newline','pc');
ndone=ndone+1;
disp(ndone);
text_str=sprintf ('[%f]',mytime+ii*step_time);
set_param (fblk_name,'SwitchTimes',text_str);
end
end
end
disp(strcat(model_name,' simulation done!'));
end
9.2 Program to Plot Fault Waveforms
% Program to generate AND plot exact fault waveforms
mytime=1/60;
max_steps=20;
step_time=mytime/max_steps;
sample_size=1.6667e-5;
data_b=0;
data_e=0;
data_step=mytime/(sample_size*10);
tit=char ('AG','ABG','AB','ABCG','ABC');
ii=10;
for fcnt=1:2:3
switch fcnt
case 1
% Phase A to ground fault
set_param('sim_model/fault_br','FaultA','on');
set_param('sim_model/fault_br','FaultB','off');
set_param('sim_model/fault_br','FaultC','off');
set_param('sim_model/fault_br','GroundFault','on');
case 2
% Phase AB to ground fault
set_param('sim_model/fault_br','FaultA','on');
set_param('sim_model/fault_br','FaultB','on');
set_param('sim_model/fault_br','FaultC','off');
set_param('sim_model/fault_br','GroundFault','on');
case 3
% Phase A and B fault
set_param('sim_model/fault_br','FaultA','on');
set_param('sim_model/fault_br','FaultB','on');
set_param('sim_model/fault_br','FaultC','off');
set_param('sim_model/fault_br','GroundFault','off');
case 4
% Phase A , B and C to ground fault
set_param('sim_model/fault_br','FaultA','on');
set_param('sim_model/fault_br','FaultB','on');
set_param('sim_model/fault_br','FaultC','on');
set_param('sim_model/fault_br','GroundFault','on');
case 5
% Phase A , B and C fault
set_param('sim_model/fault_br','FaultA','on');
set_param('sim_model/fault_br','FaultB','on');
set_param('sim_model/fault_br','FaultC','on');
set_param('sim_model/fault_br','GroundFault','off');
end
text_str=sprintf ('[%f]',mytime+ii*step_time);
set_param ('sim_model/fault_br','SwitchTimes',text_str);
sim ('sim_model');
currenta=i_fault.signals.values(:,1);
currentb=i_fault.signals.values(:,2);
currentc=i_fault.signals.values(:,3);
voltagea=v_fault.signals.values(:,1);
voltageb=v_fault.signals.values(:,2);
voltagec=v_fault.signals.values(:,3);
time_vec=i_fault.time;
figure;
plot (time_vec,currenta,time_vec,currentb,time_vec,currentc);
title (strcat('Current waveform :',deblank (tit(fcnt,:))));
xlabel('Time');
ylabel('Amplitude');
legend('Phase A','Phase B','Phase C');
pic_name=strcat('d:\MATLAB7\work\project\pics\fig2_i',deblank(tit(fcnt,:)),'.jpg');
print('-djpeg',pic_name);
plot (time_vec,voltagea,time_vec,voltageb,time_vec,voltagec);
title (strcat('Voltage waveform :',deblank (tit(fcnt,:))));
xlabel('Time');
ylabel('Amplitude');
legend('Phase A','Phase B','Phase C');
pic_name=strcat('d:\MATLAB7\work\project\pics\fig2_v',deblank(tit(fcnt,:)),'.jpg');
print('-djpeg',pic_name);
end
9.3 Program to Train Neural Network
% ART-2 K-Nearest Neighbour training algorithm
clc;
clear;
% Pattern loading
if (exist('pattern_loaded')==0)
pattern = dlmread('d:\MATLAB7\work\project\data_viF\new_patterns.txt',';');
pattern_loaded=1;
end
disp ('Dimensions of training batch:');
[Nex Ndim]=size(pattern);
disp (Nex)
disp (Ndim-1)
Nsamples=Nex;
categ=pattern(:,91)';
pattern(:,91)=[];
% Normalization of patterns
%%{
for ii=1:Nex
cpat=pattern(ii,1:45);
vpat=pattern(ii,46:90);
cpat=cpat/sqrt(cpat*cpat');
vpat=vpat/sqrt(vpat*vpat');
pattern(ii,:)=[cpat vpat]/sqrt(2);
end
%%}
% Randomising pattern order
pat_index=[1:Nex];
pati=0;
for ii=1:Nex-1
pati(ii)=randsample (pat_index,1);
pat_index(:,find (pat_index==pati(ii)))=[];
end
pati(Nex)=pat_index;
thr=3; % Threshold parameter
is_done=0;
nweights=0;
nncat=0;
ncat_list=0;
npatcnt=0;
cat_size=0;
cat_type=0;
ncycles=0;
max_thr=0;
while ((is_done==0)&&(thr>0))
ncycles=ncycles+1;
% Unsupervised learning step
ncat=1;
pindex=pati(1);
weights=pattern(pindex,:);
patcnt=1;
cat_list=pindex;
pat_dist=0;
% First pass of training samples
for ii=2:Nex
pindex=pati(ii);
pat_dist=0;
for jj=1:ncat
pat_dist(jj)=sqrt((weights(jj,:)-pattern(pindex,:))*(weights(jj,:)-pattern(pindex,:))');
end
min_dist=min(pat_dist);
if (min_dist>max_thr)
max_thr=min_dist;
end
min_cat_index=find (pat_dist==min_dist);
if (min_dist > thr)
ncat=ncat+1;
weights(ncat,:)=pattern(pindex,:);
patcnt(ncat)=1;
cat_list(ncat,1)=pindex;
continue;
else
a=min_cat_index;
b=patcnt(a);
cat_list(a,b+1)=pindex;
weights(a,:)=(b/(1+b))*weights(a,:)+(1/(1+b))*pattern(pindex,:);
patcnt(a)=b+1;
end
end
% Second pass, stabilization phase
is_stable=0;
while (is_stable==0)
is_stable=1;
for ii=1:Nex
pindex=pati(ii);
pat_dist=0; % reset distance vector, as in the first pass
for jj=1:ncat
pat_dist(jj)=sqrt((weights(jj,:)-pattern(pindex,:))*(weights(jj,:)-pattern(pindex,:))');
end
min_dist=min(pat_dist);
new_cat=find (pat_dist==min_dist);
[curr_cat pno]=find (cat_list==pindex);
if (min_dist > thr)
ncat=ncat+1;
weights(ncat,:)=pattern(pindex,:);
patcnt(ncat)=1;
cat_list(ncat,1)=pindex;
a=patcnt(curr_cat);
weights(curr_cat,:)=(a/(a-1))*weights(curr_cat,:)-(1/(a-1))*pattern(pindex,:);
trow=cat_list(curr_cat,:);
trow(:,pno)=[];
trow(length(trow)+1)=0;
cat_list(curr_cat,:)=trow(:);
patcnt(curr_cat)=a-1;
is_stable=0;
break;
elseif (new_cat~=curr_cat)
b=patcnt(curr_cat);
a=patcnt(new_cat);
weights(new_cat,:)=(a/(a+1))*weights(new_cat,:)+(1/(a+1))*pattern(pindex,:);
weights(curr_cat,:)=(b/(b-1))*weights(curr_cat,:)-(1/(b-1))*pattern(pindex,:);
patcnt(new_cat)=a+1;
cat_list(new_cat,a+1)=pindex;
trow=cat_list(curr_cat,:);
trow(:,pno)=[];
trow(length(trow)+1)=0;
cat_list(curr_cat,:)=trow(:);
patcnt(curr_cat)=b-1;
is_stable=0;
break;
end
end
end
% Supervised learning step
hcnt=0;
hclus=0;
for ii=1:ncat
atype=categ(cat_list(ii,1));
ok_flag=1;
for jj=1:patcnt(ii)
btype=categ(cat_list(ii,jj));
if (atype~=btype)
ok_flag=0;
break;
end
end
% Homogenous cluster check
if (ok_flag==1)
nncat=nncat+1;
npatcnt(nncat)=patcnt(ii);
for jj=1:patcnt(ii)
ncat_list(nncat,jj)=cat_list(ii,jj);
end
cat_type(nncat)=atype;
cat_size(nncat)=thr;
for jj=1:length(weights(ii,:))
nweights(nncat,jj)=weights(ii,jj);
end
hcnt=hcnt+1;
hclus(hcnt)=ii;
disp(sprintf ('New Category of %d type with %d samples, size %f',atype,npatcnt(nncat),thr));
end
end
% Reducing pattern set
for ii=1:hcnt
rcat=hclus(ii);
for jj=1:patcnt(rcat)
rpat=cat_list(rcat,jj);
pati(:,find (pati==rpat))=[];
end
Nex=Nex-patcnt(rcat);
end
thr=thr-thr/4;
% Check if all clusters are homogenous
if (Nex==0)
is_done=1;
disp ('All clusters are homogenous!');
elseif (thr<=0)
disp ('No more new categories can be formed!');
disp ('Number of samples not categorised:');
disp (Nex);
is_done=1;
end
end
disp ('Training completed!');
disp ('Number of categories:');disp (nncat);
disp ('Number of cycles:');disp (ncycles);
disp ('Max Distance:');disp (max_thr);
clusters.number=nncat;
clusters.weights=nweights;
clusters.patcnt=npatcnt;
clusters.size=cat_size;
clusters.type=cat_type;
clusters.cat_list=ncat_list;
save D:\MATLAB7\work\project\clusters\new_clusters_vi90.mat clusters;
disp ('Clusters saved to file!');
9.4 Program to Generate Test Patterns
% Program to generate TEST fault data
clc;
clear;
% Constants
model_list=char('sim_model1','sim_model2','sim_model3');
mytime=1/60;
sample_size=1.6667e-5;
data_step=mytime/(sample_size*10);
time_vec=linspace(0,mytime,50);
test_time=0;
ntimes=30;
patno=0;
for ii=1:ntimes
test_time(ii)=randsample (time_vec,1);
time_vec(:,find (time_vec==test_time(ii)))=[];
end
% Filter
cutoff=0.2e3;
samplef=1/sample_size;
[b_filt a_filt]=butter(5,cutoff*2/samplef);
% Main program
file_name='d:\MATLAB7\work\project\data_test\test1800_viF.txt';
f_open=0;
disp ('Generating test cases:');
for ii=1:ntimes
mno=randsample ([1:3],1);
model_name=model_list(mno,:);
fblk_name=strcat(model_name,'/fault_br');
fr=randsample ([1:10],1);
text_str=sprintf ('[%f]',mytime+test_time(ii));
set_param (fblk_name,'SwitchTimes',text_str);
text_str=sprintf ('%d',fr);
set_param(fblk_name,'GroundResistance',text_str);
ftype=[1:10];
for jj=1:4
fcnt=randsample (ftype,1);
ftype(:,find (ftype==fcnt))=[];
switch fcnt
case 1
% Phase A to ground fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','on');
case 2
% Phase B to ground fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','on');
case 3
% Phase C to ground fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','on');
case 4
% Phase AB to ground fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','on');
case 5
% Phase BC to ground fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','on');
case 6
% Phase CA to ground fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','on');
case 7
% Phase A and B fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','off');
set_param(fblk_name,'GroundFault','off');
case 8
% Phase B and C fault
set_param(fblk_name,'FaultA','off');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','off');
case 9
% Phase C and A fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','off');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','off');
case 10
% Phase A , B and C to ground fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','on');
case 11
% Phase A , B and C fault
set_param(fblk_name,'FaultA','on');
set_param(fblk_name,'FaultB','on');
set_param(fblk_name,'FaultC','on');
set_param(fblk_name,'GroundFault','off');
end
patno=patno+1;
disp(patno);
sim (model_name);
if_a=filtfilt (b_filt,a_filt,i_fault.signals.values(:,1));
if_b=filtfilt (b_filt,a_filt,i_fault.signals.values(:,2));
if_c=filtfilt (b_filt,a_filt,i_fault.signals.values(:,3));
vf_a=filtfilt (b_filt,a_filt,v_fault.signals.values(:,1));
vf_b=filtfilt (b_filt,a_filt,v_fault.signals.values(:,2));
vf_c=filtfilt (b_filt,a_filt,v_fault.signals.values(:,3));
st=(mytime+test_time(ii))/sample_size;
ed=st+(mytime*1.5)/sample_size;
kk=1;
for jj=st:data_step:ed
jk=floor(jj);
currenta(kk)=if_a(jk);
currentb(kk)=if_b(jk);
currentc(kk)=if_c(jk);
voltagea(kk)=vf_a(jk);
voltageb(kk)=vf_b(jk);
voltagec(kk)=vf_c(jk);
kk=kk+1;
end
data_vector(1:15)=currenta(1:15);
data_vector(16:30)=currentb(1:15);
data_vector(31:45)=currentc(1:15);
data_vector(46:60)=voltagea(1:15);
data_vector(61:75)=voltageb(1:15);
data_vector(76:90)=voltagec(1:15);
data_vector(91)=fcnt;
if (f_open==0)
dlmwrite (file_name,data_vector,'delimiter',';','newline','pc');
f_open=1;
else
dlmwrite (file_name,data_vector,'-append','delimiter',';','newline','pc');
end
end
end
9.5 Program to Classify Faults using Neural Network
% Program to classify test patterns
clc;
clear;
% Constants
k=1;
nwrong=0;
% Load test patterns
pattern=dlmread('d:\MATLAB7\work\project\data_test\test1800_viF.txt',';');
[Nex Ndim]=size(pattern);
disp ('Number of test patterns:'); disp (Nex);
disp ('Vector length:'); disp (Ndim-1);
correct_cat=pattern(:,Ndim)';
pattern(:,91)=[];
% Load cluster centers
load D:\MATLAB7\work\project\clusters\new_clusters_vi90.mat;
disp ('Clusters loaded!');
% Normalization of patterns
for ii=1:Nex
cpat=pattern(ii,1:45); % current subset, matching the training normalization
vpat=pattern(ii,46:90);
cpat=cpat/sqrt(cpat*cpat');
vpat=vpat/sqrt(vpat*vpat');
pattern(ii,:)=[cpat vpat]/sqrt(2);
end
% Randomising pattern order
pat_index=[1:Nex];
pati=0;
for ii=1:Nex-1
pati(ii)=randsample (pat_index,1);
pat_index(:,find (pat_index==pati(ii)))=[];
end
pati(Nex)=pat_index;
for ii=1:Nex
pindex=pati(ii);
pat=pattern(pindex,:);
cat_deg=zeros(1,10);
dist=0;
min_dist=0;
min_clus=0;
min_cat=0;
for jj=1:clusters.number
dist(jj)=sqrt((clusters.weights(jj,:)-pat)*(clusters.weights(jj,:)-pat)');
end
for jj=1:k
min_dist(jj)=min(dist);
min_clus(jj)=find (dist==min_dist(jj));
dist(min_clus(jj))=Inf; % exclude this cluster without shifting the indices
min_cat(jj)=clusters.type(min_clus(jj));
end
for jj=1:10
for kk=1:k
cat_deg(jj)=cat_deg(jj)+(min_cat(kk)==jj);
end
end
max_cat=find(cat_deg==max(cat_deg));
if (length(max_cat)>1)
max_cat=min_cat(1);
end
%max_cat=min_cat(1);
if (max_cat~=correct_cat(pindex))
nwrong=nwrong+1;
text_str=sprintf ('Pattern number %d of cat(%d) misclassified into cat(%d)',pindex,correct_cat(pindex),max_cat);
disp(text_str);
end
end
disp ('All patterns tested!');
disp ('Number misclassified:'); disp (nwrong);
disp ('Classification efficiency:'); disp (100*(Nex-nwrong)/Nex);
9.6 Program to Plot Clusters and Member Patterns
% Program to plot prototype and member patterns
subplot (3,3,1);
plot (clusters.weights(31,:));
for ii=1:6
subplot (3,3,ii+3);
plot (pattern(clusters.cat_list(31,ii),:));
end
figure;
subplot (3,3,1);
plot (clusters.weights(33,:));
for ii=1:6
subplot (3,3,ii+3);
plot (pattern(clusters.cat_list(33,ii),:));
end
Chapter 10
CONCLUSION
The Neural Network based power system fault classifier was found to be very efficient.
It has many advantages over traditional methods, which need extensive system modeling
to predict the type of fault occurring and must be re-modeled every time the system is
updated or modified. The Neural Network based fault classifier only needs to be trained
with appropriate examples to learn the correct classification, and it adapts to changes
in the power system.
The number and type of features used to train the network largely determine the
efficiency with which faults are correctly classified. It was found that, in general,
fault type classification is more accurate when more features are used. It was also
found that the line currents alone provide the Neural Network with enough information
to classify the fault type; when only voltages are used as features, the fault type
classification efficiency is very poor.
The variation of the classification efficiency with the number of closest clusters to
consider (k) was also studied. It was found that the fault type classification efficiency
usually decreases with the number of neighboring clusters considered.
For fault location classification, this method was found to be very accurate, giving a
classification efficiency of 99.17%.
The hardware implementation of this type of fault classifier is also simple and is a
very good option to consider when upgrading presently used methods for determining
fault type and fault location.
REFERENCES
1. Slavko Vasilic and Mladen Kezunovic, “Fuzzy ART Neural Network Algorithm
for Classifying the Power System Faults”, IEEE Transactions on Power Delivery,
Vol. 20, No. 2, April 2005
2. Nan Zhang and Mladen Kezunovic, “Coordinating Fuzzy ART Neural Networks
to Improve Transmission Line Fault Detection and Classification”, IEEE
Transactions on Power Delivery
3. M Sanaye Pasand and H Khorashadi Zadeh, “Transmission Line fault detection
and Phase selection using ANN”, ICPST 2003
4. Slavko Vasilic and Mladen Kezunovic, “An Improved Neural Network Algorithm
for Classifying the Transmission Line Faults”, IEEE Transactions on Power
Delivery, May 2002
5. Whei-Min Lin and Ming-Tong Tsay, “A Fault Classification method by RBF
Neural Network with OLS Learning procedure”, IEEE Transactions on Power
Delivery, Vol. 16, No. 4, Oct 2001
6. E Vazquez, H J Altuve and Oscar L Chacon, "Neural Network approach to Fault
Detection in Electric Power systems”, IEEE 1996
7. M Kezunovic and Igor Rikalo, “Detect and Classify Faults using Neural Nets”,
IEEE Computer Applications in Power, 1996
8. Thomas Dalstein and Bernd Kulicke, “Neural Network approach to Fault
Classification for High Speed Protective relaying”, IEEE Transactions on Power
Delivery, Vol. 10, No. 2, April 1995
9. K S Swarup and H S Chandrashekharaiah, “Fault Detection and Diagnosis of
Power systems using ANNs”, IEEE 1991
10. Bernard Widrow and Michael A Lehr, “30 Years of Adaptive Neural Networks:
Perceptron, Madaline and Backpropagation”, Proceedings of IEEE, Vol. 78, No.
9, Sept 1990
11. Torbjorn Eltoft and Rui J P deFigueiredo, "A New Neural Network for Cluster
detection and Labeling”, IEEE Transactions on Neural Networks, Vol. 9, No. 5,
Sept 1998
12. Teuvo Kohonen, “The Self-Organizing Map”, Proceedings of IEEE, Vol. 78, No.
9, Sept 1990
13. G A Carpenter and S Grossberg, “ART-2: self-organization of stable category
recognition codes for analog input patterns”, Applied Optics, Vol. 26, No. 23, Dec
1987
14. T M Cover and P E Hart, “Nearest Neighbor pattern Classification”, IEEE
Transactions on Information Theory, Vol. 13, 1967
15. Nan Zhang and Mladen Kezunovic, “Implementing an Advanced Simulation Tool
for Comprehensive Fault Analysis”, Transmission and Distribution Conference
Exhibition: Asia and Pacific, 2005 IEEE/PES