
STLF Using AGNM under Sum Square Error Gradient Function

Chandragiri Radha Charan
Assistant Professor, EEE Department, JNTUH College of Engineering, Nachupally (Kondagattu), Karimnagar Dist., Andhra Pradesh, India
crcharan@gmail.com

Abstract— Load forecasting is carried out with an adaptive Generalized Neuron Model (AGNM) under a sum squared error gradient function. Historical data were taken for the month of January 2003 and comprise load data and weather data. The outputs of testing the AGNM, namely the root mean square testing error, the maximum testing error, and the minimum testing error, are compared for different values of the momentum factor.

Keywords— Adaptivity, Load Forecasting, Generalized Neuron Model, Sum Square Error Gradient Function, Normalization Function

I. INTRODUCTION TO FORECASTING

Load forecasting is divided into four techniques: Very Short Term Load Forecasting (VSTLF), Short Term Load Forecasting (STLF), Medium Term Load Forecasting (MTLF), and Long Term Load Forecasting (LTLF). The Adaptive Generalized Neuron Model (AGNM) method uses the STLF technique under a sum square error gradient function. STLF enables optimum planning of power generation and control, and serves as an input for load flow studies, contingency analysis, and similar applications.

Various techniques have been applied to STLF, differing in computational capacity, flexibility of neurons, avoidance of local minima, size of training data, learning algorithm, and accuracy. In 1980-81 the IEEE load forecasting working group [1], [2] published a general philosophy of load forecasting covering its economic issues. Rahaman [3] and Ho [4] proposed applications of Knowledge Based Expert Systems (KBES) in 1988 and 1990 respectively. Park [5] and Peng [6] applied artificial neural networks (ANNs) to STLF in 1991-92. Khincha developed an online ANN model for STLF in 1996 [7]. Short term load forecasting with the Generalized Neuron Model was introduced by Manmohan et al. in 2002 [8]. Chandragiri Radha Charan and Manmohan examined the differences between the error gradient functions of the Generalized Neuron Model and the adaptive Generalized Neuron Model in 2010 [9].

II. GENERALIZED NEURON MODEL

The Generalized Neuron Model has several advantages: no hidden nodes, more flexibility, and less training and testing time. It employs two aggregation functions, summation (∑) and product (∏), and its activation functions are the straight line, sigmoid, and Gaussian functions.

A. Architecture of GNM

[Fig. 1. Generalized Neuron Model]

[Fig. 2. Architecture of GNM]

$$O_{pk} = f_{1out1}\,w_{1s1} + f_{2out1}\,w_{1s2} + \dots + f_{nout1}\,w_{1sn} + f_{1out2}\,w_{1p1} + f_{2out2}\,w_{1p2} + \dots + f_{nout2}\,w_{1pn} \qquad (1)$$

Here $f_{1out1}, f_{2out1}, \dots, f_{nout1}$ are the outputs of the activation functions $f_1, f_2, \dots, f_n$ applied to the summation aggregation $\sum$, and $f_{1out2}, f_{2out2}, \dots, f_{nout2}$ are the outputs of the same activation functions applied to the product aggregation $\prod$. For example, the output of activation function $f_1$ for the summation aggregation $\sigma$ is $f_{1out1} = f_1(w_{s1}\,\sigma)$, and its output for the product aggregation $\pi$ is $f_{1out2} = f_1(w_{p1}\,\pi)$.
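To make Eq. (1) concrete, the following Python sketch computes a GNM output from the summation and product aggregations. The array names (w_sigma, w_pi, w1s, w1p) and the application of all three activation functions to both aggregations are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    # Activation functions named in the text: straight line, sigmoid, Gaussian
    activations = [
        lambda s: s,                          # straight line
        lambda s: 1.0 / (1.0 + np.exp(-s)),   # sigmoid
        lambda s: np.exp(-s * s),             # Gaussian
    ]

    def gnm_output(x, w_sigma, w_pi, w1s, w1p):
        """Illustrative GNM forward pass following Eq. (1)."""
        sum_agg = np.dot(w_sigma, x)    # summation aggregation (sigma)
        prod_agg = np.prod(w_pi * x)    # product aggregation (pi)
        f_out1 = [f(sum_agg) for f in activations]    # f1out1 ... fnout1
        f_out2 = [f(prod_agg) for f in activations]   # f1out2 ... fnout2
        # Eq. (1): weighted combination of all activation outputs
        return np.dot(w1s, f_out1) + np.dot(w1p, f_out2)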

III. DATA FOR NORMALIZATION

A. Historical Data

Data were taken from the Dayalbagh Electricity and Water Department and the Dayalbagh Science Museum for the month of January 2003. Electric load is measured in watts, temperature in °C, and humidity in percent.

$$\text{Normalized value} = \frac{(Y_{max} - Y_{min})(L - L_{min})}{L_{max} - L_{min}} + Y_{min} \qquad (2)$$

where $Y_{max} = 0.9$, $Y_{min} = 0.1$, $L$ = value of the variable, $L_{min}$ = minimum value in that set, and $L_{max}$ = maximum value in that set.
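A minimal implementation of Eq. (2) is sketched below; applied to the average-maximum-temperature column of Table I below, it reproduces the paper's normalized values to within rounding.

    def normalize(values, y_min=0.1, y_max=0.9):
        """Eq. (2): scale each raw value L into [y_min, y_max]."""
        l_min, l_max = min(values), max(values)
        return [(y_max - y_min) * (v - l_min) / (l_max - l_min) + y_min
                for v in values]

    # Avg. max. temp. column of Table I:
    print(normalize([11.5, 12, 11.5, 10.83, 10.16, 10.5, 12.5]))
    # -> approx. [0.56, 0.73, 0.56, 0.33, 0.10, 0.22, 0.90]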

TABLE I
TYPE I (FIRST, SECOND, AND THIRD WEEK LOADS, AVERAGE MAXIMUM TEMPERATURE, AVERAGE MINIMUM TEMPERATURE, AND AVERAGE HUMIDITY AS INPUTS; FOURTH WEEK LOAD AS OUTPUT)

Historical data:

First week load | Second week load | Third week load | Avg. max. temp. | Avg. min. temp. | Avg. humidity | Fourth week load
2263.2 | 2461.2 | 2166   | 11.5  | 5.83 | 87   | 2383.2
2238   | 3007.2 | 2227.2 | 12    | 6.66 | 95   | 2482.2
2479.2 | 3016.8 | 2802   | 11.5  | 6.83 | 88.6 | 2025.6
2384.4 | 3285.6 | 2022   | 10.83 | 5.16 | 95   | 2557.2
2196   | 2295.6 | 2014.8 | 10.16 | 5.66 | 90   | 2548.8
2678.4 | 2286   | 3087.6 | 10.5  | 6.33 | 90   | 2560.8
2887.6 | 2458.8 | 2618.4 | 12.5  | 5.83 | 85.6 | 2800.8

Normalized data:

First week load | Second week load | Third week load | Avg. max. temp. | Avg. min. temp. | Avg. humidity | Fourth week load
0.17 | 0.25 | 0.20 | 0.55 | 0.42 | 0.21 | 0.54
0.14 | 0.67 | 0.25 | 0.72 | 0.81 | 0.90 | 0.46
0.43 | 0.68 | 0.68 | 0.55 | 0.90 | 0.35 | 0.10
0.31 | 0.90 | 0.10 | 0.32 | 0.10 | 0.90 | 0.64
0.10 | 0.10 | 0.09 | 0.10 | 0.33 | 0.64 | 0.63
0.65 | 0.10 | 0.90 | 0.21 | 0.66 | 0.47 | 0.65
0.90 | 0.23 | 0.54 | 0.90 | 0.42 | 0.10 | 0.90

B. Sum Square Error Gradient Function

$$\frac{\partial E}{\partial W_{si}} = -\sum \left( (D - O_{pk})\,\frac{\partial O_{pk}}{\partial W_{si}} \right) \qquad (3)$$

Adaptive learning adjusts the learning rate in proportion to the change in error:

$$\eta_{new} = \eta_{old} \pm \Delta E(t)\,\eta_{old} \qquad (4)$$

where $\Delta E$ = change in error, $\Delta W_{si}$ = change in weights, $O_{pk}$ = actual output, $\Delta O_{pk}$ = change in output, $D$ = desired output, $\eta_{new}$ = new learning rate, and $\eta_{old}$ = old learning rate.
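The sketch below implements Eq. (3) and one plausible reading of the adaptive rule in Eq. (4). Since the printed form of Eq. (4) is only partially recoverable, the rule shown (grow the learning rate while the error falls, shrink it when the error rises) is an assumption consistent with the stated definitions, not the paper's exact update.

    import numpy as np

    def sse_gradient(d, opk, d_opk_d_w):
        """Eq. (3): gradient of the sum square error w.r.t. weight W_si."""
        return -np.sum((d - opk) * d_opk_d_w)

    def adapt_learning_rate(eta_old, delta_error):
        """Assumed form of Eq. (4): scale eta_old by the change in error."""
        if delta_error < 0:                    # error decreased: speed up
            return eta_old * (1 + abs(delta_error))
        return eta_old / (1 + delta_error)     # error increased: slow down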

C. Simulation Results of AGNM through STLF

Case I: STLF using AGNM was simulated in MATLAB 7.0® under the sum square error gradient function with learning rate η = 0.001, momentum factor α = 0.96, gain scale factor = 1.0, tolerance = 0.002, all initial weights = 0.95, and 30,000 training epochs.

Case II: The same simulation was repeated with momentum factor α = 0.99; all other parameters were unchanged (η = 0.001, gain scale factor = 1.0, tolerance = 0.002, all initial weights = 0.95, 30,000 training epochs).
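For illustration, a minimal gradient-descent-with-momentum loop using the Case I settings is sketched below on a stand-in one-weight model. The paper's own runs used the full AGNM in MATLAB 7.0®; the toy data and single weight here are assumptions made only to show the role of the parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.1, 0.9, 50)    # normalized toy inputs
    d = 0.4 * x                      # desired outputs for the stand-in model

    # Case I settings: eta = 0.001, alpha = 0.96, tolerance = 0.002,
    # all initial weights = 0.95, at most 30,000 training epochs.
    eta, alpha, tolerance, max_epochs = 0.001, 0.96, 0.002, 30_000
    w, dw_prev = 0.95, 0.0
    for epoch in range(max_epochs):
        opk = w * x                           # actual output
        grad = -np.sum((d - opk) * x)         # Eq. (3), with d(opk)/dw = x
        dw = -eta * grad + alpha * dw_prev    # update with momentum alpha
        w, dw_prev = w + dw, dw
        if np.sum((d - opk) ** 2) < tolerance:
            break                             # stop once within tolerance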

TABLE II
SIMULATION RESULTS OF AGNM

Case | Momentum factor, α | RMS testing error | Max testing error | Minimum testing error
I    | 0.96 | 1.0802×10⁻¹⁴ | 1.0802×10⁻¹⁴ | 1.0802×10⁻¹⁴
II   | 0.99 | 1.1230×10⁻⁹  | 1.296×10⁻⁹   | −1.8346×10⁻⁹

IV. CONCLUSIONS

The Adaptive Generalized Neuron Model was simulated for STLF under the sum square error gradient function, and the resulting testing errors were recorded. Varying the momentum factor changes the RMS testing error, the maximum testing error, and the minimum testing error. The short term load forecasting model can further be implemented using a Generalized Neuro-Fuzzy model or data mining techniques.

ACKNOWLEDGMENT

Historical data were provided by the Dayalbagh Science Museum and the Dayalbagh Electricity and Water Department, Agra, Uttar Pradesh.

REFERENCES

[1] IEEE Committee Report, 'Load Forecasting Bibliography, Phase 1,' IEEE Transactions on Power Apparatus and Systems, vol. PAS-99, no. 1, 1980, p. 53.
[2] IEEE Committee Report, 'Load Forecasting Bibliography, Phase 2,' IEEE Transactions on Power Apparatus and Systems, vol. PAS-100, no. 7, 1981, p. 3217.
[3] S. D. Rahaman and R. Bhatnagar, 'Expert Systems Based Algorithm for Short Term Load Forecasting,' IEEE Transactions on Power Systems, vol. 3, no. 2, May 1988, p. 392.
[4] K. L. Ho, 'Short Term Load Forecasting of Taiwan Power System Using Knowledge Based Expert System,' IEEE Transactions on Power Systems, vol. 5, no. 4, November 1990, p. 1214.
[5] D. Park, 'Electric Load Forecasting Using an Artificial Neural Network,' IEEE Transactions on Power Systems, vol. 6, 1991, p. 442.
[6] T. M. Peng, 'Advancement in Application of Neural Network for Short Term Load Forecasting,' IEEE Transactions on Power Systems, vol. 7, no. 1, 1992, p. 250.
[7] H. P. Khincha and N. Krishnan, 'Short Term Load Forecasting Using Neural Network for a Distribution Project,' National Conference on Power Systems (NPSC'96), Indian Institute of Technology, Kanpur, December 1996, p. 417.
[8] Man Mohan, D. K. Chaturvedi, A. K. Saxena, and P. K. Kalra, 'Short Term Load Forecasting by Generalized Neuron Model,' Institution of Engineers (India), vol. 83, September 2002.
[9] Chandragiri Radha Charan and Manmohan, 'Application of Adaptive Learning in Generalized Neuron Model for Short Term Load Forecasting under Error Gradient Functions,' IC3 2010, Part I, CCIS 94, Springer-Verlag, Heidelberg, 2010, pp. 508-517.
