
Local Maximum Ozone Concentration Prediction Using Neural Networks
Dominik Wieland and Franz Wotawa
Technische Universität Wien,
Institut für Informationssysteme and
Ludwig Wittgenstein Laboratory for Information Systems,
Paniglgasse 16, A-1040 Wien,
Email: wotawa@dbai.tuwien.ac.at
Abstract

This paper describes the use of Artificial Neural Networks (ANNs) for the short-term prediction of maximum ozone concentrations in the East Austrian region. Various Multilayer Perceptron (MLP) topologies, Elman Networks (ENs), and Modified Elman Networks (MENs) were tested. The individual models used ozone, temperature, cloud cover, and wind data taken from the summer months of 1995 and 1996. The achieved results were satisfactory. Comparisons with alternative models showed that the neural approaches used in this study were superior.
Introduction
Ozone manifests itself in its effect on organisms as a poisonous gas. It causes irritation of the respiratory system and affects the health especially of children. Empirically, the highest concentrations are found downwind of conurbations such as the Viennese basin, and mostly in the summer season. A reason for this is the emission of precursor substances and the time available for producing ozone. In order to provide adequate early warnings, it is important to have accurate and reliable forecasts of future high ozone levels. For the government, the ability to forecast ozone trends or exact values is less important than forecasting that the ozone value will reach a dangerous level at which driving cars is prohibited and plants emitting too many precursor substances have to be shut down.
In practice, there are many different models for ozone forecasting. Many of these use statistical approaches, such as correlation and regression analyses. These models are mostly simple, and the accuracy of their results is not to be underestimated; however, peak ozone levels are not accurately predictable. As already suggested, it is just these peak levels which are most interesting. Therefore, we tested neural models on their ability to predict maximum ozone concentrations. Advantages and benefits of Artificial Neural Networks (ANNs) published so far include:
Fault Tolerance: ANNs can use incomplete and corrupted
data. The malfunction of part of the system causes no
sudden failure of the whole system.
Parallelism: This feature not only underlies the aforementioned fault tolerance but also allows, through efficient hardware implementation, a much quicker calculation of the system output.
Adaptivity: ANNs are often capable of self organisation.
This means that certain free system parameters do not
have to be adjusted experimentally but are often set by
the system itself.
Non-linearity: The immanent non-linearity of most ANNs allows the modelling of complex correlations that are hard to capture otherwise.
User friendly: ANNs are more user friendly than other
models with similar capacities.
Due to these characteristics of ANNs, their ability to predict maximum ozone concentrations appears very promising. In fact, the paper by Acuna et al. (Acuna, Jorquera, & Perez 1996), entitled "Neural Network Model for Maximum Ozone Concentration Prediction", introduced the application of neural networks to ozone forecasting. The authors use a Multilayer Perceptron (MLP) model for predicting the maximum ozone concentrations in Santiago de Chile. Various MLP topologies were tested and, according to the authors, provided satisfactory results. The Santiago model provided a good comparison for this work, the results of which compared favourably with those contained herein. In contrast to (Acuna, Jorquera, & Perez 1996), we add new models, e.g., Elman Networks, and compare our results with a physical/chemical model that had been developed for the East Austrian region around Vienna (Stohl, Wotawa, & Kromp-Kolb 1996).
The work described in this paper is the first part of a general research program applying Artificial Intelligence concepts and techniques to issues of environmental research, using ozone forecasting as the initial application area. In the next step we want to combine neural networks and qualitative reasoning as described in (Catala, Moreno, & Parra 1998) for the prediction of ozone values. In this context we are interested in using neural networks for directly predicting the (qualitative) severity levels of ozone concentration in an area, instead of predicting the exact values (as done in this paper) and mapping them to the levels afterwards. As an advantage, learning and forecasting should be sped up. Another direction of research is the use of qualitative physical/chemical models instead of ordinary differential equations. Finally, it is planned to compare the outcomes of all models, helping to find the appropriate AI technique in other domains.
Basics of Neural Networks
To be self-contained, we briefly recall the basics of neural networks. Neural networks can be seen as an abstract simulation of a biological neural system, such as the human brain. They are made up of many parallel working units, modelled on single brain cells or neurons, and the connections between them, which simulate the axon interlaces of the brain.
The most important part of an ANN is the neuron. Data is processed in the neurons insofar as they accept incoming data, transform it into an activation signal, and pass it on to the neuron output. Individual neurons are combined into a single network through directed and weighted connections. Such networks take on external input through their input neurons and propagate it through the neuron connections to the output neurons.
ANNs can be divided into models with supervised and unsupervised training algorithms, whereby data prediction is mostly realised with supervised approaches. Some of the most important approaches are:
Single and Multi-layered Feedforward Networks (such as
MLPs and RBFNs, see (Haykin 1999))
Recurrent Neural Networks (RNNs) (e.g. Jordan and Elman networks, see (Pham & Liu 1995) and (Pham & Liu
1993))
Stochastic models for time series prediction (Markov
Models, Hidden Markov Models, see (Kung 1993))
Unsupervised models (ART networks, self-organizing feature maps, see (Cotrell, Girard, & Rouset 1997))
Other approaches e.g. alternative neuron models (see
(Burg & Tschichold-Gürman 1997))
Evolutionary algorithms which can be used for training as
well as for determining the topology of ANNs (see (Fang
& Xi 1997))
Multilayer Perceptrons (MLPs)
MLPs come about through the joining together of multiple
non-linear perceptrons (see (Haykin 1999)) and are multilayered feedforward networks. Figure 1 shows the formal
representation of a single neuron used in MLPs, consisting
of an input, an activation, and an output function. Usually
the input function computes the sum of all inputs using the
given weights, i.e.,

$$a_i^j = \sum_{k=1}^{N_{j-1}} w_{ki}^j \, y_k^{j-1}$$

where $y_k^{j-1}$ denotes the output of the $k$-th neuron in the $(j-1)$-th layer, $w_{ki}^j$ the weight between that neuron and neuron $i$ of layer $j$, and $N_{j-1}$ the number of neurons in layer $j-1$. In most cases the identity is used as the output function; therefore this function is often ignored (as it is in our case).

[Figure 1: A single neuron, consisting of an input, an activation, and an output function.]

[Figure 2: Topology of an MLP with an input layer, a hidden layer, and an output layer.]

The activation function $act(\cdot)$ takes the input value and computes the output value. The most popular activation function is the sigmoid function

$$y_i^j = \frac{1}{1 + e^{-a_i^j}}$$

where $y_i^j$ denotes the output value of the $i$-th neuron in the $j$-th layer.
Figure 2 shows the topology of a classic MLP. MLPs are normally trained with the Backpropagation algorithm, which modifies the weights between the neurons.
The Backpropagation Algorithm (BP) The BP algorithm tries to minimize the output error function of a network by adapting the weights of the network connections in the direction of its negative gradient. The error function is half the square of the network output error compared to the desired target output:

$$E = \frac{1}{2} \sum_{k} (t_k - y_k)^2$$

where index $k$ denotes the cells of the output layer, $t_k$ represents the target output, and $y_k$ the actual output of the ANN. The change in the weights runs parallel to the negative gradient of the error function:

$$\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}$$

The direct result is the backpropagation learning rule $\Delta w_{ij} = \eta \, \delta_j \, y_i$, where the error signal $\delta_j$ is computed differently for different layers:

1. Output layer: $\delta_j = act'(a_j) \, (t_j - y_j)$

2. Other layers: $\delta_j = act'(a_j) \sum_{k} \delta_k \, w_{jk}$

Here $\eta$ is the learning rate or learning coefficient and regulates the speed of the convergence of the algorithm, $\delta_j$ is also known as the error signal of a particular cell, $act(\cdot)$ is the activation function of the respective neuron, and $act'(\cdot)$ denotes its derivative.
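To make the learning rule concrete, the following minimal sketch (in Python with NumPy; the function and variable names are our own illustration, not taken from the paper) trains a one-hidden-layer MLP with the sigmoid activation using exactly the error signals given above. Whether the original experiments used batch or pattern-wise weight updates is not stated in the paper; the sketch uses batch updates.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_mlp(X, t, n_hidden=5, lr=0.2, steps=1000, seed=0):
    """Minimal batch backpropagation for a one-hidden-layer MLP.
    X: (n_samples, n_inputs) scaled input patterns; t: (n_samples, 1) targets."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))           # hidden -> output
    for _ in range(steps):
        # Forward pass: a_i = sum_k w_ki * y_k, y = sigmoid(a)
        h = sigmoid(X @ W1)
        y = sigmoid(h @ W2)
        # Output layer: delta = act'(a) * (t - y); act'(a) = y * (1 - y)
        delta_out = y * (1.0 - y) * (t - y)
        # Hidden layer: delta_j = act'(a_j) * sum_k delta_k * w_jk
        delta_hid = h * (1.0 - h) * (delta_out @ W2.T)
        # Weight changes follow the negative gradient of E = 1/2 sum (t - y)^2
        W2 += lr * (h.T @ delta_out)
        W1 += lr * (X.T @ delta_hid)
    return W1, W2
```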
Elman Networks (ENs)
ENs (Pham & Liu 1995) belong to the class of partially recursive networks. They can be seen as an extension of the
MLPs whereby for each neuron of the hidden layer a state
neuron is added to the input layer. At each stage the contents
of the hidden neurons are copied to the state layer through
fixed feedback links and fed back to the hidden layer in the
next stage. In this way past information of the time series is implicitly kept in the network and thus used to calculate the network output. Figure 3 shows the topology of the classic EN.

[Figure 3: Topology of the EN; context units store the previous hidden-layer activations and feed them back to the hidden layer.]
The Modified Elman Network (MEN)

The MEN (see (Pham & Liu 1995)) differentiates itself from the classic EN by connecting the state neurons with themselves. In this way each state neuron gets a certain inertia, which increases the capability of the network to dynamically memorize data. The following formula gives the output value of the $j$-th neuron of the state layer:

$$x_j(t+1) = \alpha \, x_j(t) + y_j(t)$$

where $y_j(t)$ represents the output of the $j$-th hidden neuron, and $\alpha$ denotes the neuron's inertia. According to the authors, MENs are superior to the classic ENs in non-linear time series problems. Their use is effective in solving complex problems such as the prediction of local maximum ozone concentrations.
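A minimal sketch of this state-layer update (our own illustration; the paper gives no code, and the names are hypothetical):

```python
import numpy as np

def men_state_update(x_context, y_hidden, alpha):
    """One state-layer update of a Modified Elman Network:
    x_j(t+1) = alpha * x_j(t) + y_j(t).
    With alpha = 0 this reduces to the fixed copy step of a classic EN."""
    return alpha * x_context + y_hidden

# Usage: after each forward pass, the context vector is fed back
# into the hidden layer together with the external inputs.
x = np.zeros(4)                        # 4 context units, as in model M06a
y = np.array([0.2, 0.7, 0.1, 0.5])     # current hidden-layer outputs
x = men_state_update(x, y, alpha=0.4)  # inertia 0.4, as in model M08c
```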
Ozone Prediction Models

Time series of ozone measurements in the East Austrian region (given as the average of five measurement points providing fixed 3-hour average values) were available for the training and testing of the developed ANNs. In addition to the ozone measurements, model analyses and forecasts of temperature, cloud cover, and wind speed for the last, the current, and the next 2 days were available. The meteorological forecasts originated from the weather prediction model of the European Center for Medium Range Weather Forecasts (ECMWF 1995). Ozone and weather data were available for the periods 7.7.1995 to 25.9.1995 and 1.5.1996 to 30.9.1996.

In order to quantify the prediction ability of a certain forecast model, we use the Root Mean Square (RMS) error, defined as follows:

$$RMS = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (O_i - P_i)^2}$$

where $O_i$ represents the actual measured and $P_i$ the predicted ozone values at time $i$ respectively, and $N$ denotes the number of observations.
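In code, this measure is a direct transcription of the formula (the function name is ours):

```python
import numpy as np

def rms_error(observed, predicted):
    """Root Mean Square error between measured (O) and predicted (P) values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))
```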
For the training of the developed ANNs, the raw data was divided into a training and a test set: all data from 1996 were used for training the ANNs, and the data from 1995 were used for testing the prediction ability of each model. A validation set was not used due to lack of data. The optimum point for stopping the training process was determined experimentally. All models were trained with the BP algorithm. The input patterns were scaled before being fed into the ANN: the values of each measurement series were bounded by the minimum and maximum values of the respective series and then transformed into the interval [0, 1].
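A sketch of this scaling step, assuming the straightforward min-max transformation described above:

```python
import numpy as np

def scale_series(values):
    """Map a measurement series linearly onto [0, 1] using the minimum and
    maximum of the respective series, as done before feeding the patterns
    into the ANN."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    return (values - lo) / (hi - lo)
```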
As part of the work described in this paper, MLPs and
ENs were implemented and tested for their ability to perform predictions of ozone concentrations in the East Austrian region. The following model series were developed:
The model series M01 used classic MLPs. The maximum ozone value for a day t was predicted using the ozone level of the day before and the temperature forecast from a meteorological model for the same day t. Experiments were made with hidden layers of different sizes (3 to 10 hidden neurons), varying learning rates (1.0 to 0.2), and bias neurons. The first model, M01a, has 2 input, 5 hidden, and 1 output neuron.
The models in the series M02 were an extension of model M01. With the additional inputs of the forecasts for cloud cover and wind speed, similar experiments as for series M01 were carried out. The models vary in the hidden layers, the learning rates, and the use of bias neurons. The first model, M02a, has 4 input, 5 hidden, and 1 output neuron.
The models in the series M03 and M04 were dynamic models. Multiple measurements of one time series were presented to the MLP in parallel after suitable precoding. The series M03 and M04 used data of only one time series (the ozone or the temperature time series) and tried to find the optimum number of past values relevant to the target values. The models of series M03 used the ozone values from the past 4 days as inputs for forecasting today's ozone level. For the models of series M04, only the temperatures of the past 4 days were used. As model M04 achieved surprisingly good results, further experiments with various hidden layers, learning rates, and bias neurons were conducted. (A sketch of this sliding-window scheme follows the list of model series.)
The models in the M05 series were bivariate time series models; they were a combination of models M03 and M04. The exact topologies of the individual networks were originally derived from the Santiago models. Further experiments were carried out using wind and cloud cover data.

The approaches of series M06, M07, and M08 used partially recursive ANNs: while M06 tested classic ENs using bias neurons and hidden layers of varying sizes, series M07 used the MEN. Experiments with varying inertiae were carried out. Lastly, wind and cloud cover data were again included, leading to model M08. The base model M06a has 2 input, 4 hidden, and 1 output neurons, plus 4 context units. The ozone value from yesterday and the predicted temperature value for today are used as inputs to the net.

More information about the models used can be found in the appendix.
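As announced above, the dynamic model series can be understood as presenting a sliding window of past daily values to the network. The following helper (our own illustration, not from the paper) shows how such input patterns might be built:

```python
import numpy as np

def make_lag_patterns(inputs, targets, n_lags):
    """Build training pairs where the target for day i is predicted from the
    n_lags preceding daily values of `inputs`. For series M03 both arguments
    are the ozone series; for M04, `inputs` is the temperature series and
    `targets` the ozone series."""
    inputs = np.asarray(inputs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    X = np.array([inputs[i - n_lags:i] for i in range(n_lags, len(inputs))])
    t = targets[n_lags:].reshape(-1, 1)
    return X, t
```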
| Model | Version | NN used | #HU | LR | Bias | #Steps | RMS Trg. | RMS Test |
|-------|---------|---------|-----|-----|------|--------|----------|----------|
| M01 | M01k | MFFN | 5+1 | 1.0 | Y | 1,000 | 9.4942 | 11.2004 |
| M02 | M02f | MFFN | 5 | 0.4 | N | 100 | 8.3786 | 11.1768 |
| M03 | M03b | MFFN | 2 | 1.0 | N | 3,000 | 10.6892 | 15.0537 |
| M04 | M04e | MFFN | 2+1 | 0.2 | Y | 1,000 | 10.4792 | 10.6651 |
| M05 | M05g | MFFN | 5+1 | 0.2 | Y | 100 | 8.2406 | 10.8132 |
| M06 | M06f | EN | 8+1 | 0.2 | Y | 5,000 | 9.0740 | 10.3186 |
| M07 | M07a | MEN (α = 0.2) | 5+1 | 0.2 | Y | 3,000 | 8.9964 | 10.5150 |
| M08 | M08c | MEN (α = 0.4) | 5+1 | 0.2 | Y | 5,000 | 7.3741 | 9.9579 |

Table 1: Comparison of the RMS errors of various tested models
Results and Discussion
Table 1 shows the RMS errors of the respective best models of all test series during the training and abstraction phases. #HU stands for the number of hidden units, LR for the learning rate, and RMS Trg. and RMS Test for the RMS errors on the training and test sets respectively. The Bias column indicates whether bias neurons were used, while #Steps gives the number of learning steps the neural network needed in order to obtain the best result. The RMS errors for all model series during the abstraction phase are presented in the appendix. The best results were achieved by model M08c. Figure 4 shows both the actual ozone level variations and those predicted by model M08c during the learning phase (summer 1996) and the abstraction phase (summer 1995).

[Figure 4: Results of neural network model M08c on (a) the training set (1996) and (b) the test set (1995). Ozone concentration is given in ppb (parts per billion).]

In general, the results of the individual model series were stable and uniform. On the one hand, this meant that reasonably accurate forecast results were virtually guaranteed; on the other hand, it also revealed the limitations of the approach used in this work. The results obtained by the different models are given in the appendix.
A further step has been taken to compare the results
shown herein with those of other models. These models are:
The Persistence Model (PM): In this model, yesterday's maximum ozone level is used as today's forecast.
IMPO Model: This model uses a chemical/physical approach developed by the Institute for Meteorology and
Physics of the Universität für Bodenkultur Wien (BOKU,
University for Agricultural Science, Vienna) (see (Stohl,
Wotawa, & Kromp-Kolb 1996)). Figure 5 shows the results of the IMPO model within the considered time period.
A statistical model (Loibl 1996), which predicts today's ozone maximum value using a regression function of the form

$$\hat{O}_t = c_0 + c_1 \, O_{t-1} + c_2 \, T_t$$

with coefficients $c_0$, $c_1$, $c_2$ as given in (Loibl 1996), where $\hat{O}_t$ denotes the predicted ozone value for day $t$, $O_{t-1}$ the ozone value from the day before, and $T_t$ the temperature for day $t$. (A least-squares sketch of this form follows the list of comparison models.)
The Santiago Models (Acuna, Jorquera, & Perez 1996).[1]
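As announced above, a regression model of this form can be fitted by ordinary least squares; the following sketch is our own illustration and does not reproduce the coefficients of (Loibl 1996):

```python
import numpy as np

def fit_ozone_regression(ozone, temp):
    """Least-squares fit of O_t = c0 + c1 * O_(t-1) + c2 * T_t, the form of
    the statistical comparison model. Returns the coefficients c0, c1, c2."""
    ozone = np.asarray(ozone, dtype=float)
    temp = np.asarray(temp, dtype=float)
    O_prev, T_today = ozone[:-1], temp[1:]
    A = np.column_stack([np.ones_like(O_prev), O_prev, T_today])
    coeffs, *_ = np.linalg.lstsq(A, ozone[1:], rcond=None)
    return coeffs
```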
| Model | Version | NN used | #HU | LR | Bias | #Steps | RMS Trg. | RMS Test |
|-------|---------|---------|-----|-----|------|--------|----------|----------|
| PM | 1995 | Persistence | — | — | — | — | 11.2245 | 14.4540 |
| IMPO | 1995 | physic./chem. | — | — | — | — | 12.5739 | 14.8043 |
| Stat. M. | 1995 | Regression fct. | — | — | — | — | 10.5856 | 12.0291 |
| S3 | G1 | MFFN | 5+1 | — | Y | — | — | 13.7000 |
| S3 | G2 | MFFN | 5+1 | — | Y | — | — | 15.4000 |

Table 2: RMS errors of various comparison models. Note: PM, IMPO, and Stat. M. stand for the persistence model, the IMPO model, and the statistical model respectively. S3 denotes the best Santiago model (see (Acuna, Jorquera, & Perez 1996)), which was tested on the sets G1 and G2. The two rightmost columns show the RMS errors of the training and the test sets respectively.
In comparison to these reference models, the network architectures tested herein showed a more satisfactory performance. A comparison of Tables 1 and 2 shows that almost all the results of this project displayed a higher level of accuracy than those of the IMPO model, the PM, and the statistical model. All models, with the exception of model M04, scored appreciably lower RMS errors during the abstraction phase than the two Santiago models described in (Acuna, Jorquera, & Perez 1996). In the comparison with the Santiago models it must be noted, however, that the ozone level variations in Santiago were greater than those in the East Austrian test region.
For most models it is true to say that the trends of the short-term ozone development, as shown in Figure 4, could be accurately predicted. However, the tested models had problems in correctly forecasting extreme ozone peaks. The reason for this is that in all predicted measurement series the individual value variations were less extreme than in the actually measured ozone levels. This can be seen in the clearly differing variances and standard deviations of the measured and the forecast time series.
General Findings
Through the experiments with the individual ANNs, general findings on the topology and parameter settings of ANNs were gained. Amongst these are:
In most models a hidden layer with five neurons was seen to be optimal. It was observed that networks with larger hidden layers could better reproduce data they had already learned. However, this has to be weighed against the more complex architecture and longer training periods of such networks. During the abstraction phase, networks with five hidden cells as a rule showed satisfactory results.
Using lower learning rates (0.2 to 0.4), the forecasting ability of the relevant ANNs could be increased.
The introduction of bias neurons in the input and hidden layers had a positive effect on the performance of the
ANNs. In the MLP approaches lower RMS errors were
observed. In EN and MEN models bias neurons had a
stabilising effect on the prediction curves.
[1] We compare the published results of the Santiago models with ours.
A further positive effect on the ability of the ANNs to
correctly predict local ozone values was brought about
through the introduction of cloud cover and wind speed
data. The best models of each series mostly used these
supplementary values as was seen in models M02f and
M08c.
The partially recurrent networks tested last appeared to be a good choice for the forecasting of ozone levels. The results achieved were appreciably better than those of the multilayer feedforward (MFFN) networks. The overall best result (model M08c) was achieved by a MEN with an inertia of 0.4. In general, the optimum settings for the inertia parameter were between 0.2 and 0.4 and therefore surprisingly low. Large α-values (approaching 1) seemed to be inappropriate.
Comparing the outcome of the IMPO model (see Figure 5) with model M08c (Figure 4) leads to the following conclusions: while the IMPO model tends to overestimate the maximum ozone concentration (in the extreme case by a factor of nearly 2), this is not the case for the ANN. Both models follow the ozone curve, although the predicted values are not equal to the real values.
Conclusion

In the scope of this work, various ANN models for the short-term forecast of local ozone maxima were developed. Extensive tests with varying topologies and parameter settings led to the results shown in Table 1. The best result was achieved by model M08c, which used a MEN and the standard BP algorithm. Its RMS error during the abstraction phase was 9.958 (see also Figure 4). The results were all in all satisfactory, since the RMS errors of the best models of the respective series lay below the results of the comparison models (see Table 2). It was possible to develop a forecast system for the test data from 1995 which, compared to the PM and IMPO models, showed a clear improvement, and which also seems to be superior to common statistical approaches. For more information about the networks used, their results, and a discussion of possible improvements, see (Wieland 1999).

Future research includes the use of qualitative neural networks and qualitative reasoning techniques for ozone prediction. This allows comparing several different approaches applied to the same problem and may help to select the appropriate techniques in other forecasting domains.
[Figure 5: Results of the IMPO model for (a) 1996 and (b) 1995. Ozone concentration is given in ppb (parts per billion).]
Acknowledgement

The authors wish to thank Gerhard Wotawa from the Department of Meteorology and Physics, Universität für Bodenkultur, Vienna, Austria for his support and for his comments on earlier drafts of this paper. Ozone data were contributed by the Austrian Environmental Protection Agency (Umweltbundesamt, UBA) and by the government of Lower Austria. This work was supported by the Austrian Science Fund Project N Z29-INF.
References

Acuna, G.; Jorquera, H.; and Perez, R. 1996. Neural Network Model for Maximum Ozone Concentration Prediction. In Proceedings of the International Conference on Artificial Neural Networks ICANN-96, 263–268.

Burg, T., and Tschichold-Gürman, N. 1997. An Extended Neuron Model for Efficient Time Series Generation and Prediction. In Proceedings of the International Conference on Artificial Neural Networks ICANN-97, 1005–1010.

Catala, A.; Moreno, J. M.; and Parra, X. 1998. Neural Qualitative Systems. In Proceedings of the Workshop (W5) on Model-based Systems and Qualitative Reasoning of the 13th European Conference on Artificial Intelligence ECAI-98, 12–20.

Cotrell, M.; Girard, B.; and Rouset, P. 1997. Long Term Forecasting by Combining Kohonen Algorithm and Standard Prevision. In Proceedings of the International Conference on Artificial Neural Networks ICANN-97, 993–998.

European Centre for Medium Range Weather Forecasts (ECMWF), Reading, UK. 1995. User Guide to ECMWF Products, Version 2.1.

Fang, J., and Xi, Y. 1997. Neural Network Design Based on Evolutionary Programming. Artificial Intelligence in Engineering 11:155–161.

Haykin, S. 1999. Neural Networks – A Comprehensive Foundation. Prentice Hall.

Kung, S. Y. 1993. Digital Neural Networks. PTR Prentice Hall.

Loibl, W. 1996. Trendprognose regionaler Ozonmaxima unter Einbezug verschiedener meteorologischer Daten. Technical Report UBA-BE-058, Umweltbundesamt, Vienna, Austria.

Pham, D. T., and Liu, X. 1993. Identification of Linear and Nonlinear Dynamic Systems Using Recurrent Neural Networks. Artificial Intelligence in Engineering 8:67–75.

Pham, D. T., and Liu, X. 1995. Neural Networks for Identification, Prediction and Control. Springer Verlag.

Stohl, A.; Wotawa, G.; and Kromp-Kolb, H. 1996. The IMPO Modeling System: Description, Sensitivity Studies and Applications. Technical report, Universität für Bodenkultur, Institut für Meteorologie und Physik, Türkenschanzstraße 18, A-1180 Wien.

Wieland, D. 1999. Prognose lokaler Ozonmaxima unter Verwendung neuronaler Netze. Master's thesis, Technische Universität Wien, Vienna, Austria. Only available in German.
Appendix A – Used Neural Network Models

In this section the specifications of the considered network models are given. For all models, the inputs, the number of hidden units (#HU), the learning rate (LR), the use of bias neurons (Bias), and, for the Elman-type networks, the inertia of the state neurons (Inertia) are listed. Ozone, temperature, cloud cover, and wind speed values can be used as inputs. We use an index t ∈ {−2, −1, 0, 1, 2} to indicate whether the value is given for today (2), yesterday (1), the day before yesterday (0), another day before (−1), and so on (−2). All models predict the ozone value for today (Ozone_2).

| Model | Inputs | #HU | LR | Bias |
|-------|--------|-----|-----|------|
| M01a | Ozone_1, Temp_2 | 5 | 1.0 | N |
| M01b | Ozone_1, Temp_2 | 3 | 1.0 | N |
| M01c | Ozone_1, Temp_2 | 4 | 1.0 | N |
| M01d | Ozone_1, Temp_2 | 6 | 1.0 | N |
| M01e | Ozone_1, Temp_2 | 8 | 1.0 | N |
| M01f | Ozone_1, Temp_2 | 10 | 1.0 | N |
| M01g | Ozone_1, Temp_2 | 5 | 0.8 | N |
| M01h | Ozone_1, Temp_2 | 5 | 0.6 | N |
| M01i | Ozone_1, Temp_2 | 5 | 0.4 | N |
| M01j | Ozone_1, Temp_2 | 5 | 0.2 | N |
| M01k | Ozone_1, Temp_2 | 5+1 | 1.0 | Y |
| M01l | Ozone_1, Temp_2 | 5+1 | 0.4 | Y |
| M01m | Ozone_1, Temp_2 | 5+1 | 0.2 | Y |

| Model | Inputs | #HU | LR | Bias |
|-------|--------|-----|-----|------|
| M02a | Ozone_1, Cloud_2, Wind_2, Temp_2 | 5 | 1.0 | N |
| M02b | Ozone_1, Cloud_2, Wind_2, Temp_2 | 6 | 1.0 | N |
| M02c | Ozone_1, Cloud_2, Wind_2, Temp_2 | 8 | 1.0 | N |
| M02d | Ozone_1, Cloud_2, Wind_2, Temp_2 | 10 | 1.0 | N |
| M02e | Ozone_1, Cloud_2, Wind_2, Temp_2 | 4 | 1.0 | N |
| M02f | Ozone_1, Cloud_2, Wind_2, Temp_2 | 5 | 0.4 | N |
| M02g | Ozone_1, Cloud_2, Wind_2, Temp_2 | 5 | 0.2 | N |
| M02h | Ozone_1, Cloud_2, Wind_2, Temp_2 | 5+1 | 1.0 | Y |
| M02i | Ozone_1, Cloud_2, Wind_2, Temp_2 | 5+1 | 0.4 | Y |
| M02j | Ozone_1, Cloud_2, Wind_2, Temp_2 | 5+1 | 0.2 | Y |

| Model | Inputs | #HU | LR | Bias |
|-------|--------|-----|-----|------|
| M03a | Ozone_1 | 1 | 1.0 | N |
| M03b | Ozone_1, Ozone_0 | 2 | 1.0 | N |
| M03c | Ozone_1, Ozone_0, Ozone_−1 | 3 | 1.0 | N |
| M03d | Ozone_1, Ozone_0, Ozone_−1, Ozone_−2 | 4 | 1.0 | N |

| Model | Inputs | #HU | LR | Bias |
|-------|--------|-----|-----|------|
| M04a | Temp_2 | 1 | 1.0 | N |
| M04b | Temp_1, Temp_2 | 2 | 1.0 | N |
| M04c | Temp_0, Temp_1, Temp_2 | 3 | 1.0 | N |
| M04d | Temp_−1, Temp_0, Temp_1, Temp_2 | 4 | 1.0 | N |
| M04e | Temp_1, Temp_2 | 2+1 | 0.2 | Y |
| M04f | Temp_1, Temp_2 | 3+1 | 0.2 | Y |
| M04g | Temp_1, Temp_2 | 4+1 | 0.2 | Y |
| M04h | Temp_1, Temp_2 | 5+1 | 0.2 | Y |

| Model | Inputs | #HU | LR | Bias |
|-------|--------|-----|-----|------|
| M05a | Ozone_1, Temp_2 | 6+1 | 0.2 | Y |
| M05b | Ozone_0, Ozone_1, Temp_2 | 5+1 | 0.2 | Y |
| M05c | Ozone_1, Temp_1, Temp_2 | 5+1 | 0.2 | Y |
| M05d | Ozone_0, Ozone_1, Temp_1, Temp_2 | 4+1 | 0.2 | Y |
| M05e | Ozone_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y |
| M05f | Ozone_0, Ozone_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y |
| M05g | Ozone_1, Temp_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y |
| M05h | Ozone_0, Ozone_1, Temp_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y |
| M05i | Temp_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y |

| Model | Inputs | #HU | LR | Bias | Inertia |
|-------|--------|-----|-----|------|---------|
| M06a | Ozone_1, Temp_2 | 4 | 1.0 | N | 0.0 |
| M06b | Ozone_1, Temp_2 | 3+1 | 0.2 | Y | 0.0 |
| M06c | Ozone_1, Temp_2 | 4+1 | 0.2 | Y | 0.0 |
| M06d | Ozone_1, Temp_2 | 5+1 | 0.2 | Y | 0.0 |
| M06e | Ozone_1, Temp_2 | 6+1 | 0.2 | Y | 0.0 |
| M06f | Ozone_1, Temp_2 | 8+1 | 0.2 | Y | 0.0 |

| Model | Inputs | #HU | LR | Bias | Inertia |
|-------|--------|-----|-----|------|---------|
| M07a | Ozone_1, Temp_2 | 5+1 | 0.2 | Y | 0.2 |
| M07b | Ozone_1, Temp_2 | 5+1 | 0.2 | Y | 0.4 |
| M07c | Ozone_1, Temp_2 | 5+1 | 0.2 | Y | 0.6 |
| M07d | Ozone_1, Temp_2 | 5+1 | 0.2 | Y | 0.8 |
| M07e | Ozone_1, Temp_2 | 8+1 | 0.2 | Y | 0.2 |
| M07f | Ozone_1, Temp_2 | 8+1 | 0.2 | Y | 0.4 |

| Model | Inputs | #HU | LR | Bias | Inertia |
|-------|--------|-----|-----|------|---------|
| M08a | Ozone_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y | 0.0 |
| M08b | Ozone_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y | 0.2 |
| M08c | Ozone_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y | 0.4 |
| M08d | Ozone_1, Temp_2, Cloud_2, Wind_2 | 5+1 | 0.2 | Y | 0.6 |
Appendix B – RMS Errors

The following tables show the RMS errors of all models of all test series during the abstraction phase (see (Wieland 1999)). #Steps gives the number of learning steps performed by the neural network.
| #Steps | M01a | M01b | M01c | M01d | M01e | M01f | M01g | M01h | M01i | M01j | M01k | M01l | M01m |
|--------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 100 | 12.7054 | — | — | — | — | — | — | — | — | — | — | — | — |
| 500 | 12.3203 | — | — | — | — | — | — | — | — | — | — | — | — |
| 1,000 | 12.1776 | 12.1960 | 12.2554 | 12.3053 | 12.2253 | 12.3947 | 12.1442 | 12.1543 | 12.0999 | 12.0028 | 11.2004 | 11.9176 | 11.8898 |
| 3,000 | 12.1714 | 12.2140 | 12.2460 | 12.1978 | 12.1165 | 12.2114 | 12.1019 | 12.0105 | 11.9613 | 11.9310 | 11.4195 | 11.2626 | 11.7384 |
| 5,000 | 12.1799 | 12.2118 | 12.2341 | 12.2186 | 12.1671 | 12.1483 | 12.1799 | 12.0474 | 11.9500 | 11.9069 | 11.5968 | 11.2787 | 11.4646 |
| #Steps | M02a | M02b | M02c | M02d | M02e | M02f | M02g | M02h | M02i | M02j |
|--------|------|------|------|------|------|------|------|------|------|------|
| 100 | 11.3809 | 12.6112 | 12.5169 | 12.2704 | 11.5549 | 11.1768 | 11.2219 | 12.2785 | 11.3814 | 11.1829 |
| 500 | 12.3983 | 11.8879 | 12.0402 | 12.3392 | 12.2018 | 11.3759 | 11.2256 | 12.0397 | 11.9295 | 11.6844 |
| 1,000 | 12.0239 | 12.6211 | 12.9571 | 12.8433 | 12.4597 | 11.7479 | 12.0909 | 11.7547 | 12.0620 | 11.7748 |
| 3,000 | 11.9833 | 12.1773 | 13.3754 | 12.4654 | 11.9919 | 11.9430 | 12.1283 | 11.8873 | 12.7961 | 12.0353 |
| 5,000 | 12.0366 | 12.0859 | 12.9121 | 12.0287 | 12.0421 | 11.8852 | 11.7518 | 11.7883 | 13.1452 | 12.3596 |

| #Steps | M03a | M03b | M03c | M03d |
|--------|------|------|------|------|
| 100 | 17.9275 | 15.2594 | 16.3233 | 16.0686 |
| 500 | 17.8842 | 15.1088 | 15.2674 | 15.1814 |
| 1,000 | 17.8841 | 15.0786 | 15.0975 | 15.0688 |
| 3,000 | 17.8841 | 15.0537 | 15.8108 | 15.4859 |
| 5,000 | 17.8841 | 15.0939 | 16.0035 | 16.2480 |

| #Steps | M04a | M04b | M04c | M04d | M04e | M04f | M04g | M04h |
|--------|------|------|------|------|------|------|------|------|
| 100 | 17.2282 | 17.7798 | 11.7956 | 11.8978 | 15.9281 | 11.3494 | 11.6726 | 12.4379 |
| 500 | 17.1559 | 11.3929 | 11.8875 | 12.3251 | 10.9266 | 10.8152 | 10.9101 | 11.1285 |
| 1,000 | 17.1559 | 11.4399 | 12.0099 | 12.4722 | 10.6651 | 10.7187 | 10.7480 | 10.9937 |
| 3,000 | 17.1559 | 11.7363 | 12.1321 | 12.6919 | 10.8002 | 10.6816 | 10.6986 | 10.8259 |
| 5,000 | 17.1559 | 11.8890 | 12.1355 | 12.7855 | 10.8726 | 10.7025 | 10.7162 | 10.7277 |

| #Steps | M05a | M05b | M05c | M05d | M05e | M05f | M05g | M05h | M05i |
|--------|------|------|------|------|------|------|------|------|------|
| 100 | 11.7696 | 12.0077 | 12.5474 | 12.7423 | 11.1829 | 11.5014 | 10.8132 | 11.6184 | 11.6323 |
| 500 | 11.6355 | 11.5827 | 11.1635 | 11.2658 | 11.6844 | 11.6142 | 11.3031 | 11.5859 | 11.5733 |
| 1,000 | 11.5293 | 11.5422 | 11.0748 | 11.1323 | 11.7748 | 11.5846 | 11.2983 | 11.4888 | 11.6144 |
| 3,000 | 11.2261 | 11.1948 | 11.1226 | 10.9521 | 12.0353 | 11.3329 | 10.9417 | 11.3546 | 11.3036 |
| 5,000 | 11.2074 | 11.1407 | 11.0501 | 10.9327 | 12.3596 | 11.3798 | 10.8497 | 11.3501 | 11.2098 |

| #Steps | M06a | M06b | M06c | M06d | M06e | M06f |
|--------|------|------|------|------|------|------|
| 100 | 12.5614 | 12.9471 | 12.1955 | 12.1056 | 12.6153 | 12.4573 |
| 500 | 12.6118 | 12.5284 | 11.7043 | 12.1096 | 12.2515 | 12.0379 |
| 1,000 | 20.2562 | 12.0667 | 11.6131 | 11.5278 | 11.5542 | 11.9483 |
| 3,000 | 18.1967 | 11.8152 | 11.7833 | 11.2674 | 11.0893 | 11.3542 |
| 5,000 | 18.3555 | 12.0016 | 11.9388 | 11.3059 | 11.9760 | 10.3186 |

| #Steps | M07a | M07b | M07c | M07d | M07e | M07f |
|--------|------|------|------|------|------|------|
| 100 | 12.0761 | 12.1458 | 12.3928 | 16.3523 | 12.3627 | 12.6136 |
| 500 | 11.6134 | 11.5021 | 11.9894 | 12.9935 | 11.6692 | 11.6748 |
| 1,000 | 11.0486 | 10.8558 | 11.1009 | 11.6606 | 11.7382 | 11.1409 |
| 3,000 | 10.5150 | 10.6269 | 10.9646 | 11.7003 | 11.0655 | 10.8164 |
| 5,000 | 11.1970 | 11.4205 | 11.3466 | 11.7185 | 11.2189 | 11.4517 |

| #Steps | M08a | M08b | M08c | M08d |
|--------|------|------|------|------|
| 100 | 13.0530 | 13.2301 | 13.3053 | 12.8895 |
| 500 | 12.6976 | 12.4605 | 11.6085 | 11.2232 |
| 1,000 | 11.9779 | 11.6784 | 11.0446 | 11.0759 |
| 3,000 | 11.5771 | 10.2700 | 10.5488 | 10.4890 |
| 5,000 | 11.3775 | 10.2378 | 9.9579 | 10.6531 |
| 10,000 | 11.6521 | 10.8070 | 10.9865 | 11.5829 |