Applying Artificial Neural Networks to Energy Quality Measurement
Abstract - This work applies an Artificial Neural Network to the filtering of noisy sinusoidal signals originating from the electric power network. An eigenfilter is implemented by means of a linear neural network trained by the Hebbian Learning Algorithm. The obtained results show that the harmonic noise component is minimized with no phase lag.
1. INTRODUCTION
Market-optimized solutions for electric power distribution involve energy quality control. In recent years
the consumer market has demanded higher quality standards, aiming at efficiency improvements in the domestic and
industrial uses of electric power. Electric power quality can be assessed by a set of parameters, which includes
Total Harmonic Distortion (THD), Displacement Factor, and Power Factor, among others. These parameters are
obtained by measuring the voltage and current in the electric bus whose quality is to be assessed. Most
measurement systems employ some filtering in order to improve the measured parameters.
However, it is crucial for the measurement performance that the filter does not introduce any phase lag in the
measured voltage or current. In this work, a linear Artificial Neural Network (ANN) trained by the Generalized
Hebbian Algorithm (GHA) is used as an eigenfilter [3], so that a measured noisy sinusoidal signal is cleaned,
improving the measurement precision.
A linear ANN that uses the GHA as its learning rule performs a subspace decomposition of the training
vector set [1]. Each subspace into which the training set is decomposed contains highly correlated information.
Therefore, since the autocorrelation of the noise component is nearly zero, upon reconstructing the original vector
set from its subspaces the noise component is implicitly filtered out.
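Concretely, denoting by $w_1, \dots, w_m$ the first $m$ eigenvectors extracted by the network (the notation is ours, consistent with the standard subspace interpretation in [1], [3]), the filtered version of an input vector $x$ is its reconstruction from the retained subspaces:

$$\hat{x} = \sum_{k=1}^{m} \left( w_k^{\top} x \right) w_k$$

Components of $x$ that are weakly correlated across the training set, such as the harmonic noise, have little projection onto these directions and are therefore attenuated in $\hat{x}$.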
2. HEBBIAN RULE
The Hebbian Rule is based upon Hebb's postulate, which stems from neurobiological experiments: if the
neurons on both sides of a synapse are activated synchronously and repeatedly, the strength of that synapse is
selectively increased.
Analytically, Hebb's rule for a single neuron can be described according to Equation (1):

$w(n+1) = w(n) + \eta \left[ y(n)\,x(n) - y^{2}(n)\,w(n) \right]$   (1)

where $w(n)$ is the synaptic weight vector at instant $n$, $x(n)$ is the input vector, $\eta$ is the learning rate, and $y(n)$ is the neuron output value.
An important property of this rule is that learning occurs locally, that is, the change in a synaptic
weight depends only on the activity of the two neurons interconnected by that synapse. This significantly
reduces the complexity of the learning circuit.
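As an illustration, the single-neuron update of Equation (1) (Oja's rule; see [3]) could be implemented as in the minimal Python sketch below. The learning rate, epoch count, and Gaussian data set are illustrative assumptions, not values taken from this work.

```python
import numpy as np

def train_oja(X, eta=0.01, n_epochs=100, seed=0):
    """Train a single linear neuron on the rows of X with the rule of Eq. (1)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])   # random initial weights
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                            # neuron output y(n)
            w += eta * (y * x - y**2 * w)        # Hebbian term minus Oja decay
    return w

# Illustrative use: 2-D Gaussian data, as Figure 1 below illustrates; the
# trained weight vector aligns with the eigenvector of the data covariance
# matrix that has the largest eigenvalue.
X = np.random.default_rng(1).multivariate_normal(
    mean=[0.0, 0.0], cov=[[3.0, 1.0], [1.0, 1.0]], size=500)
w = train_oja(X)
print(w / np.linalg.norm(w))
```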
A single neuron trained with Hebb's rule exhibits orientation selectivity. Figure 1 shows this property.
The indicated points are drawn from a two-dimensional Gaussian distribution and are used to train a
neuron. The neuron's weight vector is initialized as indicated. As the training proceeds, the weight vector moves
progressively toward the direction w of maximum variance in the data. In fact, w is the eigenvector of the data
covariance matrix that corresponds to the largest eigenvalue.
Figure 1. A single neuron trained with the Hebbian algorithm exhibits orientation selectivity.
3. GENERATION OF THE INPUT VECTOR
Through simulation, 10 harmonically distorted sinusoidal signals were generated, each one comprising a
vector in $\mathbb{R}^{167}$. Each vector is a training vector for the linear ANN.
4. PARAMETERS OF THE ARTIFICIAL NEURAL NETWORK
The ANN architecture is such that only 3 sub-spaces of $\mathbb{R}^{167}$ are taken into account. The goal is to find the
eigenvectors of the training set covariance matrix that align with the highest-energy directions in $\mathbb{R}^{167}$.
The data spread along these directions can then be considered to contain just the fundamental components of the
sinusoidal waves, filtering out the harmonic noise.
The operational parameters of the ANN are as follows; a code sketch consolidating these settings is given after the list.
A. Input Vector
The input vector set comprises ten vectors (ten positive semicycles with different harmonic noise), each
of dimension 167, since each sinusoidal signal was sampled at 167 points.
B. Sub-spaces
The number of sub-spaces considered is 3. This parameter was determined experimentally so as to
minimize the output harmonic noise.
C. Initial Learning Rate
The adopted learning rate is $1 \times 10^{-20}$. This parameter was determined as a compromise between
convergence speed and convergence stability [3].
D. Alpha Rate
The adopted Alpha Rate (a parameter that adjusts the convergence speed of the neural network as a function of
the instantaneous eigenvalue) is 1500 [2].
E. Synapses Initial Values
All synapses are randomly initialized with values in the interval [−7.5, 7.5] [3].
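Putting the parameters above together, a minimal sketch of the eigenfilter is given below: a linear network with 3 outputs trained by the GHA (Sanger's rule), followed by the subspace reconstruction described in Section 1. For numerical stability this sketch uses a small weight initialization and an ordinary constant learning rate instead of the [−7.5, 7.5] initialization, $1 \times 10^{-20}$ rate, and Alpha Rate schedule adopted here [2]; `signals` is the training matrix from the Section 3 sketch.

```python
import numpy as np

N_SUBSPACES = 3                                    # parameter B

def train_gha(X, m=N_SUBSPACES, eta=1e-3, n_epochs=500, seed=0):
    """Extract the first m principal directions of the rows of X with the GHA."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(m, X.shape[1]))
    for _ in range(n_epochs):
        for x in X:
            y = W @ x
            # Sanger's rule: Hebbian term minus lower-triangular decorrelation
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def eigenfilter(x, W):
    """Project x onto the learned subspace and reconstruct: S = filtered E."""
    return W.T @ (W @ x)

W = train_gha(signals)                             # 'signals' from the Section 3 sketch
filtered = np.array([eigenfilter(x, W) for x in signals])
```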
5. EXPERIMENTAL RESULTS
Below we show some of the obtained results. The graphs indicate the Input Signal (E), the Output
Signal (S), and the Difference Signal (D). The Difference Signal consists of the harmonic noise (D = E − S). For
better visualization the Input Signal (E) curves were offset; hence, there is no DC gain involved in the process.
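As an illustrative check (not part of the procedure reported here), the difference signal and a relative-RMS residual can be computed directly from the sketches above, with E, S, and D corresponding to `signals`, `filtered`, and their difference.

```python
# D = E - S: the residual should contain mostly the harmonic noise
D = signals - filtered
rel_rms = np.linalg.norm(D, axis=1) / np.linalg.norm(signals, axis=1)
print(rel_rms)   # relative residual noise per input vector
```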
Figure 2. Graph of the results for the 1st input signal (E, S, and D versus sample index i).
Figure 3. Graph of the results for the 5th input signal (E, S, and D versus sample index i).
Figure 4. Graph of the results for the 6th input signal (E, S, and D versus sample index i).
6. CONCLUSIONS
The results obtained in this work demonstrate the potential of linear Artificial Neural Networks
trained by the Hebbian Learning Algorithm for filtering harmonic noise in the power bus. Although in
some cases the filtering was not perfectly effective, the output waveform presented less harmonic content than the
one originally presented to the Neural Network. In all cases no phase lag was observed, which is a highly desirable
feature. The obtained results suggest that further Neural Network architectures should be assessed.
7. REFERENCES
[1] M.C.D. de Castro, "Generalized Hebbian Algorithm for the Extraction of the Principal Components of a Data Set in
the Complex Domain" (in Portuguese), Master's Thesis, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre, RS,
Brazil, 1996. (http://www.epo.pucrs.br/~decastro/pdf/C_MasterThesis.pdf)
[2] M.C.D. de Castro, Artificial Neural Networks study aid, Pontifícia Universidade Católica do Rio Grande do Sul, Porto
Alegre, RS, Brazil, 2001. (http://www.epo.pucrs.br/~decastro/RNA_hp/RNA.html)
[3] S. Haykin, Neural Networks, 2nd ed., Prentice Hall, Upper Saddle River, New Jersey, 1999.