Using a Neural Network to Make Predictions in the Stock Market

Alicia Arnold

Advisor: Dr. Hongfei Zhang
Abstract:
This paper discusses the use of a neural network to predict stock prices. A background discussion of neural networks is given, followed by a description of the models used for the network, including their shortcomings and advantages. A radial basis network was found to fit the data best. However, these findings do not fully reflect what the stock market actually does, since so many outside factors influence it.
Background Information:
Most computers go through the following loop to calculate results for some problem (3): get an instruction from memory, retrieve the data required for the instruction, perform the instruction, and store the result in memory. This loop is very rigid, and not all problems can be computed in this fashion. Also, some data may be noisy; in other words, some of the information may diverge from the rest of the data. This type of calculating cannot account for such data. Another problem is that each piece of information is stored in memory, so if blocks of memory are destroyed, the data saved there is lost.
Neural networks can do anything that this type of computing can do, but they can also overcome these limitations and are able to solve such problems as pattern recognition, fitting functions to data, and classifying data. Neural nets can help form algorithmic solutions to problems that were previously thought unsolvable.
The human brain is made up of cells called neurons, which function in groups called networks, hence the name neural network. These networks are highly interconnected. Each neuron has thousands of inputs and may send its output to many other neurons. According to Withagen, "Metabolic machinery within the cell provides a power source for information-processing functions." He also remarks that both short-term and long-term integration of signals occurs, and that neurons can 'digitize' data for transmission. These attributes were carried over into the artificial intelligence version of a neural network.
Computerized neural networks are made up of local processing elements that act in a fashion similar to the neuron. The processing elements have many inputs and are highly interconnected. There can also be many layers of these artificial neurons, and there are many different ways to organize and structure the elements. Each element receives and processes its inputs and then generates one or more outputs.
Humans learn through repetition and pattern association. A computerized neural network can be trained in a similar way. The processing elements are presented with a set of data for which the proper output is known. The network then learns to process the inputs so that the given targets are reached. When the network can manipulate the inputs to match the targets given, the network is said to be trained.
The inputs are assigned weights based on the training data and the desired output. A bias value is also incorporated. This is sent to a layer of processing elements; the number of processing elements usually depends on the number of possible outputs. The processing elements sum the inputs and biases and send the total to a transformation function. The processing elements all work simultaneously, which is where the parallelism comes in that normal computing does not utilize (4). The transformation function generates a single output that is sent to the next layer of neurons. Sometimes a threshold detector is used instead of a transformation function. While training, after the output is generated from the last layer, the weights are sent back to the initial inputs to go through the process again until the desired output is achieved. Once trained, the neural network can be tested on validation data.
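As a concrete illustration (not taken from the toolbox used later in this paper), one layer of processing elements simply forms a weighted sum of its inputs, adds a bias, and passes the total through a transformation function such as the log sigmoid; the weights and inputs below are made-up values.

% Sketch of one layer of processing elements (illustrative values only).
W = [0.2 -0.5; 0.7 0.1];    % weight matrix: one row per processing element
b = [0.1; -0.3];            % bias for each processing element
p = [1.5; -2.0];            % input vector presented to the layer
n = W*p + b;                % weighted sum of the inputs plus the bias
a = 1 ./ (1 + exp(-n));     % log sigmoid transformation function, outputs in (0,1)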
With the advances made today in computer technology, scientists hope that neural networks can be used to solve problems and complete tasks that were once thought to be too complex. They may also help solve other problems more quickly. Some applications of neural networks are in signature analysis, in process control, in monitoring the performance of other machines, in marketing, in speech and vision recognition systems, and in weather prediction (6).
The Problem:
One other possible application for neural networks is determining whether or not there is a pattern in stock prices, so that more accurate predictions can be made. Statistics for IBM stock from January 1, 1990 until October 13, 1997 were used to try to find a structure; almost 2000 data points were available. Also available was the Neural Network Toolbox in MATLAB, which contained all of the neural network architecture tools, so creating the artificial network was much easier. A target was created for each day based on the closing price of the stock and what the stock would do in the next 5 days.
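The target rule is implemented by the ibmt function reproduced in Appendix A. In outline, for each day it compares the closing price with the average high and average low over the next five trading days, using a three-point dead band; the prices below are hypothetical, and only the rule itself comes from the appendix.

% Outline of the target rule from ibmt (Appendix A), applied to one day.
forward = 5;  range = 3;            % look-ahead window and dead band, as in Appendix A
high  = [106 107 109 108 110];      % hypothetical highs for the next 5 days
low   = [101 102 104 103 105];      % hypothetical lows for the next 5 days
close = 103;                        % hypothetical closing price for the current day
avhigh = mean(high(1:forward));
avlow  = mean(low(1:forward));
if avhigh - close > close - avlow + range
    target = 1;                     % buy more stock
elseif avhigh - close < close - avlow - range
    target = -1;                    % sell the stock
else
    target = 0;                     % retain the stock
end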
The Results:
For the first attempt at finding a pattern, all 17 available statistics, including the open, high, low, close, volume, simple averages, and volatility, were used for each day of data. This vector of information was the input to the neural network.
A three-layer back propagation network was chosen because this seemed to be the simplest and most logical method. A log sigmoid function was used for the first transformation function, a purely linear function for the second, and a tangent sigmoid function for the final layer's transformation function. The tangent sigmoid function was chosen as the final function because it returns values between -1 and 1. If -1 was the result, the stock should be sold; if 0 was the result, the stock should be retained; and if 1 was the result, then more IBM stock should be purchased.
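The scripts in Appendix A set up and exercise this architecture with the old-style Neural Network Toolbox calls initff, trainlm, and simuff, and then round the tangent sigmoid output into a trading signal. A condensed sketch (trainp, traint, and validp are the training inputs, training targets, and validation inputs; tp is the training-parameter vector, all as defined in Appendix A):

% Condensed from the back propagation scripts in Appendix A.
[w1,b1,w2,b2,w3,b3] = initff(trainp,12,'logsig',36,'purelin',1,'tansig');
[w1,b1,w2,b2,w3,b3] = trainlm(w1,b1,'logsig',w2,b2,'purelin', ...
                              w3,b3,'tansig',trainp,traint,tp);
va = simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
signal = hardlim(va-0.5) - hardlim(-va-0.5);   % +1 buy, 0 retain, -1 sell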
Two hundred points were used to train the network, and then the next 10 days were used for validation. This was repeated until all 1965 data points had been used as part of either a training set or a validation set. For the validation sets, the neural network correctly predicted whether to buy, sell, or retain the stock 90% of the time. There was not a high level of confidence in this result, though, because the sum-squared error went up after each loop. It did not appear that the weights were being carried over into the next loop, so the 90% correct result was by chance. After running the same data through again using the same weights, the network was only correct 86.6% of the time.
For the next attempt, only the open, close, high, low, volume, and volatility statistics were used for each day. Again, 200 data points were used to train the network and 10 were used to validate the weights, and the same transformation functions were used. The network's training goal for the error was not achieved in the specified number of epochs, so more neurons were added to the hidden middle layer; this was a suggestion from the MATLAB program. Again, the network predicted the correct result 86.6% of the time, but confidence was not obtained based on the individual loop results.
Then the open, close, high, low, volume, simple average, and volatility statistics were used to try to predict a pattern. This time, the network was correct 95.7% of the time, but it was saying that the stock should simply be retained forever. This was not an acceptable result.
An overlapping loop was then used on the open, close, high, low, volume, and volatility statistics. One hundred data points were used to train the network, and then the next 15 points were used to validate the weights and biases. With this overlapping loop, however, the next one hundred points used to train the network were points 16-116 instead of 116-216. Because the close price of the stock is somewhat time-dependent, this seemed to be a much better idea. Also, more data points were used to train the network this way. The network was correct 73.2% of the time, which appeared to be much more logical.
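In the overlapping scripts in Appendix A, the overlap comes from advancing the start of each window by the validation length rather than by the training length, so that successive training sets share most of their points. A cleaned-up sketch of the indexing (the data matrix here is a random placeholder, not the IBM statistics, and the indices are simplified relative to the appendix):

% Overlapping training/validation windows (cleaned-up indexing sketch).
ndatav = randn(6,300);            % placeholder: columns are days, rows are statistics
ntr = 100;  nvl = 15;             % training and validation lengths
n = floor((size(ndatav,2) - ntr)/nvl);   % number of passes through the loop
for i = 1:n
    start  = (i-1)*nvl + 1;                         % 1, 16, 31, ...
    trainp = ndatav(:, start:start+ntr-1);          % 100 consecutive days to train on
    validp = ndatav(:, start+ntr:start+ntr+nvl-1);  % the next 15 days to validate on
    % ... train on trainp and validate on validp, as in Appendix A ...
end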
Next, the same points were used, but 120 were used to train and then the next 10 were used for validation. Although it was thought that more training points would achieve better results, this was not the case: the network was only correct for 10% of the validation points!
The program used to compute the values was examined to be sure the network was training correctly, and there was a problem when normalizing the data. After correcting the normalization function and rerunning the program using the smallest set of statistics (open, close, high, low, volume, and volatility), the percent correct was only 7.7%, and the network did not appear to be training at all. The reason for this could not be ascertained.
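For reference, the corrected normalization scales each statistic (each column of the data matrix) to zero mean and unit standard deviation. The normalize function in Appendix A is equivalent to the following sketch; the sample values here are hypothetical.

% Column-wise normalization to zero mean and unit standard deviation.
sample = [100 2.1; 102 1.9; 98 2.4; 101 2.0];   % hypothetical data, columns = statistics
nmlize = zeros(size(sample));
for j = 1:size(sample,2)
    col = sample(:,j);
    nmlize(:,j) = (col - mean(col)) / std(col);
end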
One thought was that the first 50 dates have volatility statistics of 0. The other computed statistics (those other than the open, close, high, low, and volume) also have values of 0 for the first 50 data points. These were omitted from the training and validation sets, and the percent correct was 61.9% of the data points. Eighty dates were used to train, and 11 were used to validate. The back propagation method did not seem to be working well with this data, so the other method tried was the radial basis method.
Using 80 dates to train and again 11 to validate, and allowing 50 epochs (repetitions of the inputs) during the training, the percent correct was 64.7%, but the results were promising. After allowing 100 epochs for the training, the percent correct was 61.2%. MATLAB suggested increasing the spread of the radial basis functions or increasing the number of epochs. The latter had already been tried, so enlarging the spread was the next attempt. This time, the percent correct was up to 82.7% from changing the spread from 1 to 5. Most of the incorrect results occurred in entire validation groups, where it was determined that for 11 consecutive days, IBM stock should be either purchased or sold. This may not be totally incorrect, since the target was based on what would happen in the next few days; if the close price is low for all of those days compared to previous prices, then the network is going to suggest selling the stock.
Using a still wider spread of 10, the percent of data points that were guessed according to the target was 89.3%. The spread continued to be increased until it was 30, and the percent of correct guesses by the network was 92.8%. Unfortunately, the network was predicting that the stock be retained for all dates; there were no predictions to sell or buy for any date. Then it was realized that the spread was not the parameter being altered. It was the sum-squared error goal, so the network was barely training: it was allowing many mistakes in the training set yet still continuing with the current weights and biases.
After correcting this error in the program and using a sum-squared error goal of one with a spread of three, on the smallest set of statistics, using 30 dates to train and 5 to validate, the percent correct from the network was 75.5%. It was determined that 30 points of training would constitute one month of data, and the next 6 dates would be the following week. This seemed reasonable since the data is time-dependent.
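These radial basis runs use the toolbox routines solverb and simurb, with a four-element design vector that sets, in order, the display interval, the maximum number of neurons, the sum-squared error goal, and the spread of the radial basis functions (per the parameter comments reproduced in Appendix A); mixing up the third and fourth entries is exactly the mistake described above. A condensed sketch, with trainp, traint, and validp as before:

% Radial basis design and simulation, condensed from Appendix A.
dp = [5, 30, 1, 3];                          % display, max neurons, SSE goal 1, spread 3
[w1,b1,w2,b2] = solverb(trainp,traint,dp);   % design the radial basis network
va = simurb(validp,w1,b1,w2,b2);             % simulate on the validation days
signal = hardlim(va-0.5) - hardlim(-va-0.5); % +1 buy, 0 retain, -1 sell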
The spread was then increased to five, and the percent correct increased to 82.9% of the validation set for a set of 965 data points. The smaller set was used to make sure the network was training, since the large set of points can take hours or even days to compute on the available equipment. Finally, on the entire set of 1965 valid dates, with a spread of 10, a training set of 30, and a validation set of 6, the percent correct was 76.4%.
Conclusion:
Trying to predict what stock prices are going to do is very complicated. There are many things that affect stocks that cannot be accounted for by strictly using the historical statistics examined. Governmental actions or announcements, the holidays, or even the death of a famous person could affect stock prices. Even though only one stock was examined, and it was examined over a time frame of seven years, it was determined from the close values that there is no reliable pattern in the stock market to be found using MATLAB's Neural Network Toolbox back propagation method or the radial basis method. Chung and Kin suggest that historical data may not be enough for the training set (1). Further tests could be run with different sets of statistics, using different methods, or different combinations of transformation functions. There are many variables to consider, and this was just one attempt.
Bibliography
(1) Chung, Lam King and Lam Kin. "An Alternative Choice of Output in Neural Network for the Generation of Trading Signals in a Financial Market". The University of Hong Kong. http://hkusub.hku.hk:8000/~kclam/report/paper.htm.

(2) Demuth, Howard and Mark Beale (1996). Neural Network Toolbox for Use with MATLAB. Natick, Massachusetts: The MathWorks, Inc.

(3) Gurney, Dr. Kevin. "Neural Nets". Brunel University, UK. http://www.shef.ac.uk/psychology/gurney/notes/index.html.

(4) Medsker, Larry, Efraim Turban, and Robert R. Trippi. Expert Systems and Applied Artificial Intelligence. New York: Macmillan Publishing Co., 1992.

(5) Mehta, Mahendra (1995). Foreign Exchange Markets. In Refenes, Apostolos-Paul (Ed.), Neural Networks in the Capital Markets (pp. 177-198). Chichester, England: Wiley.

(6) Sarle, Warren. "The 'Frequently Asked Questions' (FAQ) File of the Usenet Newsgroup comp.ai.neural-nets". Copyright 1997 by Warren S. Sarle, Cary, NC, USA. ftp://ftp.sas.com/pub/neural/FAQ.html.

(7) Smith, Dr. Leslie. "An Introduction to Neural Networks". University of Stirling. http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides.html.

(8) Withagen, Heini. "Neural Network Information". Eindhoven University of Technology. http://www.eeb.ele.tue.nl/neural/neural.html.
Appendix A: MATLAB Programs
function target=ibmt(data)
%generate target vector
cl_close=5;
cl_high=3;
cl_low=4;
forward=5;
range=3;
close=data(:,cl_close);
high=data(:,cl_high);
low=data(:,cl_low);
target=zeros(size(close,1)-forward+1,1);
for i=1:size(close,1)-forward+1
  avhigh=mean(high(i:i+forward-1));
  avlow=mean(low(i:i+forward-1));
  if avhigh-close(i)>close(i)-avlow+range,
    target(i)=1;
  elseif avhigh-close(i)<close(i)-avlow-range,
    target(i)=-1;
  else
    target(i)=0;
  end
end
function nmlize=normalize(sample)
%normalize each column to zero mean and unit standard deviation
stdev=std(sample);
for j=1:(size(sample,2))
  stdevA=stdev(1,j);
  A=sample(:,j);
  total=sum(A);
  avg=total/size(A,1);
  A=A-avg;
  B=A/stdevA;
  nmlize(:,j)=B;
end
%back propagation, non-overlapping loop
%data: O,H,L,C,V,Volatility
ibm;                              %get all the data
targetv=ibmt(data);               %generate target for all data
targetv=targetv';
datav=data([1:1965],[2:6,18]);    %get sample set
ndatav=normalize(datav);          %normalize the set
ndatav=ndatav';

% Training parameters are:
%   TP(1) - Epochs between updating display, default = 25.
%   TP(2) - Maximum number of epochs to train, default = 1000.
%   TP(3) - Sum-squared error goal, default = 0.02.
%   TP(4) - Minimum gradient, default = 0.0001.
%   TP(5) - Initial value for MU, default = 0.001.
%   TP(6) - Multiplier for increasing MU, default = 10.
%   TP(7) - Multiplier for decreasing MU, default = 0.1.
%   TP(8) - Maximum value for MU, default = 1e10.
tp=[15,100,2];
ntr=200;
nvl=10;
n=floor((size(ndatav,2))/ntr);
if (n*ntr + nvl)>(size(ndatav,2))
  n=n-1;
end;
missed=zeros(1,n);
newva=zeros(n,1+nvl);
validt=zeros(n,1+nvl);
start=1;
trainp=ndatav(:,[start:(start+ntr)]);                    %get training set
traint=targetv(:,[start:(start+ntr)]);                   %get target for training set
validp=ndatav(:,[start+ntr+1:(start+ntr+1+nvl)]);        %get validation data
validt(1,:)=targetv(:,[start+ntr+1:(start+ntr+1+nvl)]);  %get validation target
[w1,b1,w2,b2,w3,b3]=initff(trainp,12,'logsig',36,'purelin',1,'tansig');
[w1,b1,w2,b2,w3,b3,te,tr] = ...
  trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
w3, b3
va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
newva(1,:)=hardlim(va-0.5)-hardlim(-va-0.5);
wrong=0;
for j=1:(1+nvl)
  if newva(1,j)~=validt(1,j)
    wrong=wrong+1;
  end;
end;
missed(1,1)=wrong;
for i=2:n
  i
  n
  start=((i-1)*ntr)+2;
  trainp=ndatav(:,[start:(start+ntr)]);                      %get training set
  traint=targetv(:,[start:(start+ntr)]);                     %get target for training set
  validp=ndatav(:,[(start+ntr+1):(start+ntr+1+nvl)]);        %get validation data
  validt(i,:)=targetv(:,[(start+ntr+1):(start+ntr+1+nvl)]);  %get validation target
  w3, b3
  [w1,b1,w2,b2,w3,b3,te,tr] = ...
    trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
  va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
  newva(i,:)=hardlim(va-0.5)-hardlim(-va-0.5);
  wrong=0;
  for j=1:(1+nvl)
    if newva(i,j)~=validt(i,j)
      wrong=wrong+1;
    end;
  end;
  missed(1,i)=wrong
end
percent_incorrect=sum(missed)/(n*nvl)*100
% train with an overlapping loop
%data: O,H,L,C,V,Volatility
ibm;                              %get all the data
targetv=ibmt(data);               %generate target for all data
targetv=targetv';
datav=data([1:165],[2:6,18]);     %get sample set
ndatav=normalize(datav);          %normalize the set
ndatav=ndatav';

% Training parameters are:
%   TP(1) - Epochs between updating display, default = 25.
%   TP(2) - Maximum number of epochs to train, default = 1000.
%   TP(3) - Sum-squared error goal, default = 0.02.
%   TP(4) - Minimum gradient, default = 0.0001.
%   TP(5) - Initial value for MU, default = 0.001.
%   TP(6) - Multiplier for increasing MU, default = 10.
%   TP(7) - Multiplier for decreasing MU, default = 0.1.
%   TP(8) - Maximum value for MU, default = 1e10.
tp=[5,50,1];
ntr=80;
nvl=10;
n=floor((size(ndatav,2)-ntr)/nvl);
missed=zeros(1,n);
newva=zeros(n,1+nvl);
validt=zeros(n,1+nvl);
start=1;
trainp=ndatav(:,[start:(start+ntr)]);                    %get training set
traint=targetv(:,[start:(start+ntr)]);                   %get target for training set
validp=ndatav(:,[start+ntr+1:(start+ntr+1+nvl)]);        %get validation data
validt(1,:)=targetv(:,[start+ntr+1:(start+ntr+1+nvl)]);  %get validation target
[w1,b1,w2,b2,w3,b3]=initff(trainp,12,'logsig',36,'purelin',1,'tansig');
[w1,b1,w2,b2,w3,b3,te,tr] = ...
  trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
newva(1,:)=hardlim(va-0.5)-hardlim(-va-0.5);
wrong=0;
for j=1:(nvl+1)
  if newva(1,j)~=validt(1,j)
    wrong=wrong+1;
  end;
end;
missed(1,1)=wrong;
wrong
for i=2:n
  i,n
  start=((i-1)*nvl);
  trainp=ndatav(:,[start:(start+ntr)]);                      %get training set
  traint=targetv(:,[start:(start+ntr)]);                     %get target for training set
  validp=ndatav(:,[(start+ntr+1):(start+ntr+1+nvl)]);        %get validation data
  validt(i,:)=targetv(:,[(start+ntr+1):(start+ntr+1+nvl)]);  %get validation target
  [w1,b1,w2,b2,w3,b3,te,tr] = ...
    trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
  va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
  newva(i,:)=hardlim(va-0.5)-hardlim(-va-0.5);
  wrong=0;
  for j=1:(1+nvl)
    if newva(i,j)~=validt(i,j)
      wrong=wrong+1;
    end;
  end;
  missed(1,i)=wrong;
  wrong
end
missed
percent_incorrect=sum(missed)/(n*(nvl+1))*100
newva
validt
% train with an overlapping loop -- radial basis
%data: O,H,L,C,V,Volatility
ibm;                              %get all the data
targetv=ibmt(data);               %generate target for all data
targetv=targetv';
datav=data([51:1965],[2:6,18]);   %get sample set
ndatav=normalize(datav);          %normalize the set
ndatav=ndatav';

% Design parameters are:
%   DP(1) - Iterations between updating display, default = 25.
%   DP(2) - Maximum number of neurons, default = # vectors in P.
%   DP(3) - Sum-squared error goal, default = 0.02.
%   DP(4) - Spread of radial basis functions, default = 1.0.
% Missing parameters and NaN's are replaced with defaults.
dp=[5,30,1,10];
ntr=30;
nvl=5;
n=floor((size(ndatav,2)-ntr)/nvl);
if n+ntr+nvl+1 > size(ndatav,2)
  n=n-1;
end;
missed=zeros(1,n);
newva=zeros(n,nvl);
validt=zeros(n,nvl);
start=1;
trainp=ndatav(:,[start:(start+ntr)]);                  %get training set
traint=targetv(:,[start:(start+ntr)]);                 %get target for training set
validp=ndatav(:,[start+ntr+1:(start+ntr+nvl)]);        %get validation data
validt(1,:)=targetv(:,[start+ntr+1:(start+ntr+nvl)]);  %get validation target
[w1,b1,w2,b2,te,tr] = solverb(trainp,traint,dp);
va=simurb(validp,w1,b1,w2,b2);
newva(1,:)=hardlim(va-0.5)-hardlim(-va-0.5);
wrong=0;
for j=1:(nvl)
  if newva(1,j)~=validt(1,j)
    wrong=wrong+1;
  end;
end;
missed(1,1)=wrong;
wrong
for i=2:n
  i,n
  start=((i-1)*nvl);
  trainp=ndatav(:,[start:(start+ntr)]);                    %get training set
  traint=targetv(:,[start:(start+ntr)]);                   %get target for training set
  validp=ndatav(:,[(start+ntr+1):(start+ntr+nvl)]);        %get validation data
  validt(i,:)=targetv(:,[(start+ntr+1):(start+ntr+nvl)]);  %get validation target
  [w1,b1,w2,b2,te,tr] = solverb(trainp,traint,dp);
  va=simurb(validp,w1,b1,w2,b2);
  newva(i,:)=hardlim(va-0.5)-hardlim(-va-0.5);
  wrong=0;
  for j=1:(nvl)
    if newva(i,j)~=validt(i,j)
      wrong=wrong+1;
    end;
  end;
  missed(1,i)=wrong;
  wrong
end
%dp,ntr,nvl
%missed
percent_incorrect=sum(missed)/(n*(nvl))*100;
%newva, validt
%back propagation, non-overlapping loop
%data: all 17 statistics
ibm;                              %get all the data
targetv=ibmt(data);               %generate target for all data
targetv=targetv';
datav=data([51:1965],[2:18]);     %get sample set
ndatav=normalize(datav);          %normalize the set
ndatav=ndatav';

% Training parameters are:
%   TP(1) - Epochs between updating display, default = 25.
%   TP(2) - Maximum number of epochs to train, default = 1000.
%   TP(3) - Sum-squared error goal, default = 0.02.
%   TP(4) - Minimum gradient, default = 0.0001.
%   TP(5) - Initial value for MU, default = 0.001.
%   TP(6) - Multiplier for increasing MU, default = 10.
%   TP(7) - Multiplier for decreasing MU, default = 0.1.
%   TP(8) - Maximum value for MU, default = 1e10.
tp=[15,100,2];
ntr=200;
nvl=10;
n=floor((size(ndatav,2))/ntr);
if (n*ntr + nvl)>(size(ndatav,2))
  n=n-1;
end;
n
missed=zeros(1,n);
newva=zeros(n,1+nvl);
validt=zeros(n,1+nvl);
start=1;
trainp=ndatav(:,[start:(start+ntr)]);                    %get training set
traint=targetv(:,[start:(start+ntr)]);                   %get target for training set
validp=ndatav(:,[start+ntr+1:(start+ntr+1+nvl)]);        %get validation data
validt(1,:)=targetv(:,[start+ntr+1:(start+ntr+1+nvl)]);  %get validation target
[w1,b1,w2,b2,w3,b3]=initff(trainp,17,'logsig',34,'purelin',1,'tansig');
[w1,b1,w2,b2,w3,b3,te,tr] = ...
  trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
newva(1,:)=hardlim(va-0.5)-hardlim(-va-0.5);
wrong=0;
for j=1:(1+nvl)
  if newva(1,j)~=validt(1,j)
    wrong=wrong+1;
  end;
end;
missed(1,1)=wrong;
for i=2:n
  i
  n
  start=((i-1)*ntr)+2;
  trainp=ndatav(:,[start:(start+ntr)]);                      %get training set
  traint=targetv(:,[start:(start+ntr)]);                     %get target for training set
  validp=ndatav(:,[(start+ntr+1):(start+ntr+1+nvl)]);        %get validation data
  validt(i,:)=targetv(:,[(start+ntr+1):(start+ntr+1+nvl)]);  %get validation target
  [w1,b1,w2,b2,w3,b3,te,tr] = ...
    trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
  va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
  newva(i,:)=hardlim(va-0.5)-hardlim(-va-0.5);
  wrong=0;
  for j=1:(1+nvl)
    if newva(i,j)~=validt(i,j)
      wrong=wrong+1;
    end;
  end;
  missed(1,i)=wrong
end
percent_incorrect=sum(missed)/(n*nvl)*100
% train with an overlapping loop -- radial basis
%data: all 17 statistics
ibm;                              %get all the data
targetv=ibmt(data);               %generate target for all data
targetv=targetv';
datav=data([51:1965],[2:18]);     %get sample set
ndatav=normalize(datav);          %normalize the set
ndatav=ndatav';

% Design parameters are:
%   DP(1) - Iterations between updating display, default = 25.
%   DP(2) - Maximum number of neurons, default = # vectors in P.
%   DP(3) - Sum-squared error goal, default = 0.02.
%   DP(4) - Spread of radial basis functions, default = 1.0.
% Missing parameters and NaN's are replaced with defaults.
dp=[5,100,10];
ntr=80;
nvl=10;
n=floor((size(ndatav,2)-ntr)/nvl);
missed=zeros(1,n);
newva=zeros(n,1+nvl);
validt=zeros(n,1+nvl);
start=1;
trainp=ndatav(:,[start:(start+ntr)]);                    %get training set
traint=targetv(:,[start:(start+ntr)]);                   %get target for training set
validp=ndatav(:,[start+ntr+1:(start+ntr+1+nvl)]);        %get validation data
validt(1,:)=targetv(:,[start+ntr+1:(start+ntr+1+nvl)]);  %get validation target
[w1,b1,w2,b2,te,tr] = solverb(trainp,traint,dp);
va=simurb(validp,w1,b1,w2,b2);
newva(1,:)=hardlim(va-0.5)-hardlim(-va-0.5);
wrong=0;
for j=1:(nvl+1)
  if newva(1,j)~=validt(1,j)
    wrong=wrong+1;
  end;
end;
missed(1,1)=wrong;
wrong
for i=2:n
  i,n
  start=((i-1)*nvl);
  trainp=ndatav(:,[start:(start+ntr)]);                      %get training set
  traint=targetv(:,[start:(start+ntr)]);                     %get target for training set
  validp=ndatav(:,[(start+ntr+1):(start+ntr+1+nvl)]);        %get validation data
  validt(i,:)=targetv(:,[(start+ntr+1):(start+ntr+1+nvl)]);  %get validation target
  [w1,b1,w2,b2,te,tr] = solverb(trainp,traint,dp);
  va=simurb(validp,w1,b1,w2,b2);
  newva(i,:)=hardlim(va-0.5)-hardlim(-va-0.5);
  wrong=0;
  for j=1:(1+nvl)
    if newva(i,j)~=validt(i,j)
      wrong=wrong+1;
    end;
  end;
  missed(1,i)=wrong;
  wrong
end
%dp,ntr,nvl
%missed
percent_incorrect=sum(missed)/(n*(nvl+1))*100;
%newva, validt
%back propagation, non-overlapping loop
%data: O,H,L,C,V,SimpAvg1,SimpAvg2,SimpAvg3,Volatility
ibm;                                   %get all the data
targetv=ibmt(data);                    %generate target for all data
targetv=targetv';
datav=data([51:1965],[2:6,12:14,18]);  %get sample set
ndatav=normalize(datav);               %normalize the set
ndatav=ndatav';

% Training parameters are:
%   TP(1) - Epochs between updating display, default = 25.
%   TP(2) - Maximum number of epochs to train, default = 1000.
%   TP(3) - Sum-squared error goal, default = 0.02.
%   TP(4) - Minimum gradient, default = 0.0001.
%   TP(5) - Initial value for MU, default = 0.001.
%   TP(6) - Multiplier for increasing MU, default = 10.
%   TP(7) - Multiplier for decreasing MU, default = 0.1.
%   TP(8) - Maximum value for MU, default = 1e10.
tp=[15,100,2];
ntr=200;
nvl=10;
n=floor((size(ndatav,2))/ntr);
if (n*ntr + nvl)>(size(ndatav,2))
  n=n-1;
end;
n
missed=zeros(1,n);
newva=zeros(n,1+nvl);
validt=zeros(n,1+nvl);
start=1;
trainp=ndatav(:,[start:(start+ntr)]);                    %get training set
traint=targetv(:,[start:(start+ntr)]);                   %get target for training set
validp=ndatav(:,[start+ntr+1:(start+ntr+1+nvl)]);        %get validation data
validt(1,:)=targetv(:,[start+ntr+1:(start+ntr+1+nvl)]);  %get validation target
[w1,b1,w2,b2,w3,b3]=initff(trainp,18,'logsig',27,'purelin',1,'tansig');
[w1,b1,w2,b2,w3,b3,te,tr] = ...
  trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
w3, b3
va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
newva(1,:)=hardlim(va-0.5)-hardlim(-va-0.5);
wrong=0;
for j=1:(1+nvl)
  if newva(1,j)~=validt(1,j)
    wrong=wrong+1;
  end;
end;
missed(1,1)=wrong;
for i=2:n
  i
  n
  start=((i-1)*ntr)+2;
  trainp=ndatav(:,[start:(start+ntr)]);                      %get training set
  traint=targetv(:,[start:(start+ntr)]);                     %get target for training set
  validp=ndatav(:,[(start+ntr+1):(start+ntr+1+nvl)]);        %get validation data
  validt(i,:)=targetv(:,[(start+ntr+1):(start+ntr+1+nvl)]);  %get validation target
  [w1,b1,w2,b2,w3,b3,te,tr] = ...
    trainlm(w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig',trainp,traint,tp);
  w3, b3
  va=simuff(validp,w1,b1,'logsig',w2,b2,'purelin',w3,b3,'tansig');
  newva(i,:)=hardlim(va-0.5)-hardlim(-va-0.5);
  wrong=0;
  for j=1:(1+nvl)
    if newva(i,j)~=validt(i,j)
      wrong=wrong+1;
    end;
  end;
  missed(1,i)=wrong
end
percent_incorrect=sum(missed)/(n*nvl)*100
Appendix B: Notes and Results
errors.txt

Matrix is close to singular or badly scaled. Results may be inaccurate.

next:
create loop for training and validating, without initializing in middle
able to adjust # train/validate
try different method --> radial basis

results of try5loop -- all data points available
nvl=10, ntr=200, neurons: 12, 36, 1
[newva and missed matrices omitted: the scanned console dump of the predicted signals (mostly 0, a few 1) could not be reconstructed]
percent_incorrect = 10
results of try5loop -- all data points available
nvl=10, ntr=200, neurons: 17, 34, 1
what I got:
[newva, validt, and missed matrices omitted: the scanned console dump could not be reconstructed; the predictions were mostly 0 with a few -1 and 1 entries]
percent_incorrect = 14.4444

I am concerned that each time it goes through the loop, the SSE starts out high again, even though I only initialize the weights and biases at the beginning, not in the loop. I don't know if they are carrying over or not. I am going to put in some output statements so I can try to figure this out.
try4loop -- O,H,L,C,V,Volatility
start = 1, ntr=200, nvl=10, n=9
[TRAINLM progress logs and the printed w3/b3 weights for each of the nine loops omitted: the scanned console output could not be reconstructed. Most loops ran the full 100 epochs without reaching the error goal ("TRAINLM: Network error did not reach the error goal. Further training may be necessary, or try different initial weights and biases and/or more hidden neurons."), one run produced "Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.", and the SSE at the start of each loop tended to rise, ending near 83 in the last loop.]
percent_incorrect = 14.4444

I just don't know!! It seems like the SSE should be less, especially for that last loop where the error was 83! It appears that the weights and biases are carried over, though.

I'm going to try the next combination of data points using the same method.
try6loop -- data: O,H,L,C,V,SimpAvg1,SimpAvg2,SimpAvg3,Volatility
ntr=200, nvl=10, neurons: 18, 27, 1, n=9
[TRAINLM progress logs and the printed w3/b3 weights for each loop omitted: the scanned console output could not be reconstructed. The error goal was generally not reached, and the starting SSE again grew from loop to loop.]
percent_incorrect = 5.5556
[newva and validt matrices omitted: nearly every prediction was 0 (retain), while the validation targets contained a few 1 and -1 entries]

Did printing the weights and biases help, or should I print them before and after I train so that I can compare easier from loop to loop?

Have an overlapping loop!!
ntr = 200 give or take -- try less, like 100
nvl = 15 -- get too much error, try 10
next time, 15-215, validate 216-231
adjust ntr, nvl a lot -- find best fit
more neurons but more error --> over fit
try less
play around
Also try with solverb using the loop

I did the overlapping loop with back propagation, and I got lots of errors saying the matrix was close to singular or badly scaled. It never finished the loops. This was with ntr = 100, nvl = 15. (try40loop)
Here is the finished missed matrix and percent incorrect:
[missed matrix omitted: 124 per-loop error counts ranging from 0 to 16, console dump not recoverable]
percent_incorrect = 27.3118

I'll try again with ntr=120, nvl=10 to get different values of data in each set.

Here is what I got. Apparently there is something wrong with the way I figure the percent_incorrect. Also, I do not know why it only went through one epoch for these last few values. This is all the MATLAB window stored. I will alter the program and then decrease the output to the screen so that perhaps more may be stored.
[missed matrices for the ntr=120, nvl=10 run omitted: the scanned console dump (184 loops, with most loops missing 11 of the compared points) could not be reconstructed. The TRAINLM logs for the final loops show training stopping after 0 or 1 epochs with SSE values well above the goal.]
percent_incorrect = 104.9457

errors2.txt

I don't think I did anything wrong in computing the percent_incorrect. For some reason, when it computed how many were missed, it sometimes got 11, even though there were only 10 in the validation set. I just have no idea what went wrong. I will repeat this with half of the data, using the same number in the training set and the same number in the validation set, with less output in the middle, to try to determine just what happened.

I fixed the problem with the missed matrix and the percent_incorrect. Using this program with 965 inputs instead of 1965, these are the results. Maybe I just don't understand what the SSE means. It doesn't seem like it should go up in between loops, or that it should not change during 25 epochs. But I don't know what to change, either.
» try40loop
ntr=120 nvl=10
TRAINLM: 0/100 epochs, mu
TRAINLM: 1/100 epochs, mu
i
0.001, SSE = 1.81649.
0.0001, SSE = 0.908141.
=
2
-
n
84
TRAINLM: 0/100 epochs, mu
i
0.001, SSE
0.951571.
=
3
n
84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 2.02301.
TRAINLM: 15/100 epochs, mu = 10, SSE = 1.87725.
TRAINLM: 30/100 epochs, mu = 10, SSE = 1.82213.
TRAINLM: 45/100 epochs, mu = 1, SSE = 1.78785.
TRAINLM: 60/100 epochs, mu = 0.1, SSE = 1.66055.
TRAINLM: 75/100 epochs, mu = 1, SSE = 1.63074.
TRAINLM: 90/100 epochs, mu = 0.01, SSE = 1.4792.
TRAINLM: 100/100 epochs, mu = 0.1, SSE = 1.26802.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 4
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 3.07312.
TRAINLM: 15/100 epochs, mu = 0.01, SSE = 1.94652.
TRAINLM: 30/100 epochs, mu = 0.01, SSE = 1.7766.
TRAINLM: 45/100 epochs, mu = 0.01, SSE = 1.68727.
TRAINLM: 60/100 epochs, mu = 0.001, SSE = 1.3442.
TRAINLM: 75/100 epochs, mu = 0.001, SSE = 1.05026.
TRAINLM: 83/100 epochs, mu = 0.0001, SSE = 0.974131.
i = 5
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 10.9675.
TRAINLM: 15/100 epochs, mu = 0.001, SSE = 10.8899.
TRAINLM: 30/100 epochs, mu = 0.01, SSE = 10.8773.
TRAINLM: 45/100 epochs, mu = 0.01, SSE = 10.8618.
TRAINLM: 60/100 epochs, mu = 0.01, SSE = 10.837.
TRAINLM: 75/100 epochs, mu = 0.01, SSE = 10.8243.
TRAINLM: 90/100 epochs, mu = 0.01, SSE = 10.8162.
TRAINLM: 100/100 epochs, mu = 0.01, SSE = 10.8127.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 6
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 23.811.
TRAINLM: 15/100 epochs, mu = 0.01, SSE = 23.8067.
TRAINLM: 30/100 epochs, mu = 0.0001, SSE = 23.7843.
TRAINLM: 45/100 epochs, mu = 0.0001, SSE = 23.6166.
TRAINLM: 60/100 epochs, mu = 0.0001, SSE = 23.5649.
TRAINLM: 75/100 epochs, mu = 0.0001, SSE = 23.5124.
TRAINLM: 90/100 epochs, mu = 1e-005, SSE = 23.4939.
TRAINLM: 100/100 epochs, mu = 0.1, SSE = 23.4883.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 7
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 36.4661.
TRAINLM: 15/100 epochs, mu = 10, SSE = 36.4534.
TRAINLM: 30/100 epochs, mu = 0.1, SSE = 36.4504.
TRAINLM: 45/100 epochs, mu = 0.001, SSE = 36.4348.
TRAINLM: 60/100 epochs, mu = 0.001, SSE = 36.4056.
TRAINLM: 75/100 epochs, mu = 0.001, SSE = 36.4026.
TRAINLM: 90/100 epochs, mu = 0.001, SSE = 36.4.
TRAINLM: 100/100 epochs, mu = 0.0001, SSE = 36.3909.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 8
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 46.3754.
TRAINLM: 15/100 epochs, mu = 0.001, SSE = 46.3515.
TRAINLM: 30/100 epochs, mu = 0.001, SSE = 46.3383.
TRAINLM: 45/100 epochs, mu = 0.001, SSE = 46.3271.
TRAINLM: 60/100 epochs, mu = 0.0001, SSE = 46.2551.
TRAINLM: 75/100 epochs, mu = 0.0001, SSE = 46.1844.
TRAINLM: 90/100 epochs, mu = 0.001, SSE = 46.1589.
TRAINLM: 100/100 epochs, mu = 0.001, SSE = 46.1561.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 9
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 55.1389.
TRAINLM: 15/100 epochs, mu = 0.001, SSE = 55.0709.
TRAINLM: 30/100 epochs, mu = 0.001, SSE = 55.0639.
TRAINLM: 45/100 epochs, mu = 0.001, SSE = 55.055.
TRAINLM: 60/100 epochs, mu = 0.001, SSE = 55.0489.
TRAINLM: 75/100 epochs, mu = 0.001, SSE = 55.0447.
TRAINLM: 90/100 epochs, mu = 0.0001, SSE = 55.0336.
TRAINLM: 100/100 epochs, mu = 0.0001, SSE = 55.0268.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 10
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 71.0255.
TRAINLM: 15/100 epochs, mu = 0.0001, SSE = 71.0072.
TRAINLM: 30/100 epochs, mu = 0.0001, SSE = 71.0044.
TRAINLM: 45/100 epochs, mu = 0.0001, SSE = 71.0027.
TRAINLM: 60/100 epochs, mu = 0.0001, SSE = 71.0016.
TRAINLM: 75/100 epochs, mu = 0.0001, SSE = 71.0008.
TRAINLM: 90/100 epochs, mu = 0.0001, SSE = 71.0004.
TRAINLM: 100/100 epochs, mu = 0.0001, SSE = 71.0003.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 11
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 87.0003.
TRAINLM: 15/100 epochs, mu = 0.0001, SSE = 87.
TRAINLM: 30/100 epochs, mu = 0.0001, SSE = 87.
TRAINLM: 45/100 epochs, mu = 0.0001, SSE = 87.
TRAINLM: 60/100 epochs, mu = 0.0001, SSE = 87.
TRAINLM: 75/100 epochs, mu = 0.0001, SSE = 87.
TRAINLM: 90/100 epochs, mu = 0.0001, SSE = 87.
TRAINLM: 100/100 epochs, mu = 0.0001, SSE = 87.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 12
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 97.
TRAINLM: 15/100 epochs, mu = 0.0001, SSE = 97.
TRAINLM: 30/100 epochs, mu = 0.0001, SSE = 97.
TRAINLM: 45/100 epochs, mu = 0.0001, SSE = 97.
TRAINLM: 60/100 epochs, mu = 0.0001, SSE = 97.
TRAINLM: 75/100 epochs, mu = 0.0001, SSE = 97.
TRAINLM: 90/100 epochs, mu = 0.0001, SSE = 97.
TRAINLM: 100/100 epochs, mu = 0.0001, SSE = 97.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 13
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 107.
TRAINLM: 15/100 epochs, mu = 0.0001, SSE = 107.
TRAINLM: 30/100 epochs, mu = 0.0001, SSE = 107.
TRAINLM: 45/100 epochs, mu = 0.0001, SSE = 107.
TRAINLM: 60/100 epochs, mu = 0.0001, SSE = 107.
TRAINLM: 75/100 epochs, mu = 0.0001, SSE = 107.
TRAINLM: 90/100 epochs, mu = 0.0001, SSE = 107.
TRAINLM: 100/100 epochs, mu = 0.0001, SSE = 107.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
i = 14
n = 84
TRAINLM: 0/100 epochs, mu = 0.001, SSE = 117.
TRAINLM: 15/100 epochs, mu = 0.001, SSE = 117.
TRAINLM: 30/100 epochs, mu = 0.001, SSE = 117.
TRAINLM: 45/100 epochs, mu = 0.001, SSE = 117.
missed =
[the first 24 entries: 0 1 3, then 10's and 11's]
Somehow, the numbers got stuck at -1.
newva([1:12],:)
ans =
[the first 12 rows of the validation outputs: columns of 0's followed by long runs of -1's]
» validt([1:12],:)
ans =
[the first 12 rows of the validation targets: almost all 0's, with a few 1's and -1's]
I don't know why the weights and biases weren't being adjusted correctly.
I think that only 50 epochs are needed. After that, the SSE only changes
by hundredths of numbers.
AAAAAAAAHHHHHHHHHHHH!!!
My normalize function was wrong!!!!!
This has been my problem all along!!!!!
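For the record, the fix amounts to scaling each input series onto [-1, 1]. A minimal sketch of that kind of min-max normalization is below; the actual normalize function is not reproduced in this log, so the function name here is only illustrative.

% normalize_sketch.m -- illustrative min-max scaling of a data series onto [-1, 1].
% This is a sketch of the idea, not the program's actual normalize function.
function y = normalize_sketch(x)
lo = min(x);                         % smallest value in the series
hi = max(x);                         % largest value in the series
y = 2 * (x - lo) / (hi - lo) - 1;    % lo maps to -1, hi maps to +1

So, for example, normalize_sketch([2 4 6]) would return [-1 0 1].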
results.txt
This is all that was left in the matlab window after I adjusted the
normalization function. I ran try40loop on 965 of the days, using
ntr=80, nvl=10. I only used 50 epochs.
I don't understand why no training was done on the last loop, even
though the SSE was 81. Also, notice in the newva how there are
many rows of -1's and then many rows of 1's. Why is this so?
Obviously, no training occurred in the last few loops. I wonder if
the program ran out of memory, because it worked well for the first
7 loops. What is going on??
i = 88
n = 88
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 81.
TRAINLM: Network error did not reach the error goal.
Further training may be necessary, or try different
initial weights and biases and/or more hidden neurons.
missed =
[88 entries: 0 0 1 0 0 1 3 for the first seven loops, then nothing but 10's and 11's]
percent_incorrect = 92.2521
newva =
[the full validation-output matrix: blocks of 0's and -1's for the early loops, then page after page of 1's for the later loops]
validt =
[the corresponding validation-target matrix: almost entirely 0's, with scattered 1's and -1's]
I will have to try to watch during the last few loops to see if I can
figure out what is going on.
This is the results of the first 8 loops. As is evident, the first 7
work fine.
ntr=80, nvl=10
» try40loop
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 18.9793.
TRAINLM: 1/50 epochs, mu = 0.0001, SSE = 0.868416.
wrong = 0
i = 2
n = 8
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 0.868547.
wrong = 0
i = 3
n = 8
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 0.870294.
wrong = 1
i = 4
n = 8
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 1.66391.
TRAINLM: 1/50 epochs, mu = 0.0001, SSE = 0.986673.
wrong = 0
i = 5
n = 8
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 1.14482.
TRAINLM: 1/50 epochs, mu = 0.1, SSE = 0.965875.
wrong = 0
i = 6
n = 8
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 1.00981.
TRAINLM: 1/50 epochs, mu = 0.1, SSE = 0.961386.
wrong = 1
i = 7
n = 8
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 2.00026.
TRAINLM: 5/50 epochs, mu = 0.01, SSE = 1.74484.
TRAINLM: 10/50 epochs, mu = 0.01, SSE = 1.37281.
TRAINLM: 15/50 epochs, mu = 0.01, SSE = 1.32566.
TRAINLM: 18/50 epochs, mu = 0.0001, SSE = 0.96721.
wrong = 2
i = 8
n = 8
TRAINLM: 0/50 epochs, mu = 0.001, SSE = 3.02198.
TRAINLM: 5/50 epochs, mu = 0.01, SSE = 1.82011.
TRAINLM: 10/50 epochs, mu = 0.01, SSE = 1.48499.
TRAINLM: 15/50 epochs, mu = 0.01, SSE = 1.35343.
TRAINLM: 20/50 epochs, mu = 0.01, SSE = 1.19421.
TRAINLM: 25/50 epochs, mu = 0.01, SSE = 1.08191.
TRAINLM: 30/50 epochs, mu = 0.01, SSE = 0.999096.
wrong = 11
missed =
0  0  1  0  0  1  2  11
percent_incorrect = 17.0455
newva =
[the outputs for the 8 validation loops: mostly 0's, with scattered 1's and -1's]
validt =
[the validation targets for the 8 loops: almost all 0's, with a few 1's and -1's]
Why is that last row all 1's??? THE SSE MET ITS GOAL OF BEING UNDER 1,
YET ALL 11 ARE MISSED!!! I just don't know! Is there something
weird in the data? I'll also double check the normalization function.
I think a problem might be happening because the first 50 or so dates
have a volatility of 0. I'll try this program again, starting with
51, as I do with the larger sets of data.
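Starting at day 51 just means dropping the flat opening stretch before the training and validation windows are cut. A rough sketch of that setup is below, assuming the windows advance nvl days per loop; the variable names (alldata, trn, val) are illustrative and this is not the actual try40loop code.

% Illustrative setup for skipping the zero-volatility days and looping
% over sliding training/validation windows (not the actual try40loop code).
start = 51;                       % first day whose volatility is nonzero
data  = alldata(:, start:end);    % drop the flat opening stretch
ntr   = 80;                       % training days per loop
nvl   = 10;                       % validation days per loop
n     = floor((size(data,2) - ntr) / nvl);    % number of loops
for i = 1:n
    trn = data(:, (i-1)*nvl + (1:ntr));        % training window for loop i
    val = data(:, (i-1)*nvl + ntr + (1:nvl));  % the next 10 days to validate on
    % ... train the network on trn, then count how many of val it misses ...
end

Under that assumption the loop counts printed earlier check out: with all 965 days, floor((965 - 80)/10) = 88 loops for ntr = 80, and floor((965 - 120)/10) = 84 loops for ntr = 120.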