FACULTY OF INFORMATION TECHNOLOGY
UNIVERSITY OF MORATUWA
Artificial Neural Networks Using
AForge.NET
By Zameer M.F.M.
2009
ZAMEER@BCS.ORG.UK
Table of contents

1. An overview
2. Neural network libraries
3. Supported learning algorithms
4. Supported network architectures
5. Limitations
6. A demonstration using the AForge.NET framework for ANN
   6.1 Supervised learning
       6.1.1 Comparison between Perceptron learning and Delta rule learning
       6.1.2 The use of the Back propagation algorithm
       6.1.3 The use of Momentum
       6.1.4 The effect of changes in the activation function
       6.1.5 The effect of Bias in back propagation learning
   6.2 Unsupervised learning
       6.2.1 Kohonen Self Organizing Maps for color clustering
       6.2.2 Kohonen Self Organizing Maps for finding hidden patterns
AForge.NET Framework for Artificial Neural Networks development
1. An overview
AForge.NET is a C# framework designed for developers and researchers in the fields of Computer Vision and Artificial Intelligence. The framework consists of a set of libraries which support the following features:
• AForge.Imaging - library with image processing routines and filters.
• AForge.Vision - computer vision library.
• AForge.Neuro - neural networks computation library.
• AForge.Genetic - evolution programming library.
• AForge.Fuzzy - fuzzy computations library.
• AForge.MachineLearning - machine learning library.
• AForge.Robotics - library providing support for some robotics kits.
2. Neural Network Libraries
The two main libraries used for ANN development are:
1. AForge.Neuro
2. AForge.Neuro.Learning
Class library overview (for Neural Networks): [class diagram not reproduced here]
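As a minimal sketch of how the two libraries divide the work, assuming the standard AForge.NET 2.x API: the network type comes from AForge.Neuro, the teacher from AForge.Neuro.Learning. The AND-gate data here is made up for illustration, not taken from this document.

    using System;
    using AForge.Neuro;            // network architectures and activation functions
    using AForge.Neuro.Learning;   // learning algorithms ("teachers")

    class MinimalExample
    {
        static void Main()
        {
            // Network from AForge.Neuro: 2 inputs feeding one sigmoid output neuron.
            ActivationNetwork network =
                new ActivationNetwork(new SigmoidFunction(2), 2, 1);

            // Teacher from AForge.Neuro.Learning.
            DeltaRuleLearning teacher = new DeltaRuleLearning(network);
            teacher.LearningRate = 0.1;

            // Toy AND-gate data (a placeholder, not from this document).
            double[][] input  = { new double[] { 0, 0 }, new double[] { 0, 1 },
                                  new double[] { 1, 0 }, new double[] { 1, 1 } };
            double[][] output = { new double[] { 0 }, new double[] { 0 },
                                  new double[] { 0 }, new double[] { 1 } };

            // One pass over the data set; RunEpoch returns the summary learning error.
            double error = teacher.RunEpoch(input, output);
            Console.WriteLine("Error after one epoch: " + error);
        }
    }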
3. Supported learning algorithms
Each algorithm below maps onto a learner class in the AForge.Neuro.Learning namespace; a construction sketch follows the list.

• Perceptron learning
  o A supervised learning algorithm.
  o Used to train single-layer ANNs.
  o Used with unipolar/bipolar binary activation functions.
  o Applications are limited to classification of linearly separable data.

• Delta rule learning
  o A supervised learning algorithm.
  o Applicable to single-layer activation networks only, where each neuron has a unipolar/bipolar continuous activation function.
  o Limited to some classification and recognition tasks mostly.

• Back propagation learning
  o A supervised learning algorithm.
  o Used to train multilayer ANNs.
  o Applications consist of approximation, prediction, object recognition, etc.

• SOM learning
  o One of the most famous unsupervised learning algorithms, used for clustering problems.
  o It treats the neural network as a 2D map of nodes, where each node may represent a separate class.
  o The algorithm organizes the network in such a way that it becomes possible to find correlations and similarities between data samples.

• Elastic network learning
  o This algorithm is similar to the SOM learning algorithm, but it treats the network's neurons not as a 2D map of nodes, but as a ring.
  o During the learning procedure, the ring takes a shape which represents a solution.
  o One of the most common demonstrations of this learning algorithm is the Traveling Salesman Problem (TSP).

• Evolutionary learning
  o A supervised learning algorithm based on Genetic Algorithms.
  o For the given neural network, it creates a population of chromosomes which represent the neural network's weights.
  o Then, during the learning process, the genetic population evolves, and the weights represented by the best chromosome are set on the source neural network.
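A construction sketch mapping the list above onto learner classes, assuming the standard AForge.Neuro.Learning class names (EvolutionaryLearning additionally requires the AForge.Genetic assembly; layer sizes here are arbitrary):

    using AForge.Neuro;
    using AForge.Neuro.Learning;

    class LearnerCatalog
    {
        static void Main()
        {
            // Supervised teachers operate on activation networks.
            ActivationNetwork perceptronNet = new ActivationNetwork(new ThresholdFunction(), 2, 1);
            ActivationNetwork deltaNet      = new ActivationNetwork(new SigmoidFunction(2), 2, 1);
            ActivationNetwork multiLayerNet = new ActivationNetwork(new BipolarSigmoidFunction(2), 2, 5, 1);

            ISupervisedLearning perceptron = new PerceptronLearning(perceptronNet);
            ISupervisedLearning delta      = new DeltaRuleLearning(deltaNet);
            ISupervisedLearning backProp   = new BackPropagationLearning(multiLayerNet);
            ISupervisedLearning evolution  = new EvolutionaryLearning(multiLayerNet, 100); // population of 100

            // Unsupervised teachers operate on distance networks.
            DistanceNetwork map = new DistanceNetwork(3, 10 * 10);   // 3 inputs, 10x10 map
            IUnsupervisedLearning som     = new SOMLearning(map, 10, 10);
            IUnsupervisedLearning elastic = new ElasticNetworkLearning(map);
        }
    }

All supervised teachers share the ISupervisedLearning interface (Run/RunEpoch), and all unsupervised ones share IUnsupervisedLearning, so training loops can be written once and reused across algorithms.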
4. Supported Network Architectures
• Activation Network - a neural network where each neuron computes its output as the activation function's output, the argument being a weighted sum of its inputs combined with the threshold value. The network may consist of a single layer or of multiple layers. Trained with supervised learning algorithms, the network makes it possible to solve such tasks as approximation, prediction, classification, and recognition.
• Distance Network - a neural network where each neuron computes its output as the distance between its weight values and input values. The network consists of a single layer, and may be used as the base for networks such as the Kohonen Self Organizing Map, the Elastic Network, and the Hamming Network.
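A short sketch contrasting the two architectures, assuming the standard constructors (the layer sizes are arbitrary):

    using System;
    using AForge.Neuro;

    class Architectures
    {
        static void Main()
        {
            // Activation network: output = activation(weighted sum + threshold).
            // 4 inputs, hidden layers of 20 and 5 neurons, 1 output neuron.
            ActivationNetwork ann =
                new ActivationNetwork(new BipolarSigmoidFunction(1), 4, 20, 5, 1);
            double[] y = ann.Compute(new double[] { 0.1, -0.3, 0.5, 0.9 });
            Console.WriteLine("Activation network output: " + y[0]);

            // Distance network: output = distance between weights and inputs.
            // Single layer; the base for Kohonen SOM and Elastic networks.
            DistanceNetwork dnet = new DistanceNetwork(3, 100);
            dnet.Compute(new double[] { 0.2, 0.4, 0.6 });
            Console.WriteLine("Winning neuron: " + dnet.GetWinner());
        }
    }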
5. Limitations
• Lack of user friendliness, because there is no GUI-based development environment.
• Development of an ANN is a time-consuming process, because some of the activities have to be performed manually, e.g. loading data, normalizing the input vectors, and defining the number of layers and the neurons in each layer (a normalization sketch follows this list).
• There is no support for Counter propagation network architectures, which comprise hybrid learning algorithms (i.e. training of the Kohonen layer in unsupervised mode and the Grossberg layer in supervised mode). Therefore such a learning algorithm has to be written manually.
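Because normalization is left to the developer, a hand-written min-max helper such as the following sketch (not part of AForge; the [lo, hi] range is chosen to match the activation function's output range) is typically needed:

    using System;

    static class Normalize
    {
        // Min-max scales one feature column into [lo, hi], e.g. [0, 1] for a
        // unipolar activation function or [-1, 1] for a bipolar one.
        static double[] Scale(double[] column, double lo, double hi)
        {
            double min = double.MaxValue, max = double.MinValue;
            foreach (double v in column)
            {
                if (v < min) min = v;
                if (v > max) max = v;
            }
            double[] result = new double[column.Length];
            for (int i = 0; i < column.Length; i++)
                result[i] = lo + (column[i] - min) * (hi - lo) / (max - min);
            return result;
        }

        static void Main()
        {
            double[] raw = { 44, 54, 64, 74 };   // raw feature values
            Console.WriteLine(string.Join(", ", Scale(raw, -1, 1)));
        }
    }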
6. A demonstration using the AForge.NET framework for ANN
6.1 Supervised learning
6.1.1 Comparison between Perceptron learning and Delta rule learning
The training session is conducted with the same set of data for both Perceptron and Delta rule learning.
Test case
                                         Perceptron learning   Delta rule learning
  Learning rate (η)                      0.1                   0.1
  Sigmoid's alpha value                  2                     2
  Learning error limit                   0.1                   0.1

Results
  Iterations to converge to the
  error limit                            14                    17
  Final average error                    0.06666               0.09696

Comments
  Delta rule learning shows a smoother learning curve than Perceptron learning;
  the error decreases monotonically under Delta rule learning.
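A sketch of how such a comparison can be scripted. The document's actual training data is not reproduced, so a toy linearly separable set stands in; the learning rate of 0.1 and error limit of 0.1 match the test case above.

    using System;
    using AForge.Neuro;
    using AForge.Neuro.Learning;

    class Comparison
    {
        static void Main()
        {
            // Toy linearly separable data (placeholder for the document's data set).
            double[][] input  = { new double[] { 0, 0 }, new double[] { 0, 1 },
                                  new double[] { 1, 0 }, new double[] { 1, 1 } };
            double[][] output = { new double[] { 0 }, new double[] { 0 },
                                  new double[] { 0 }, new double[] { 1 } };

            // Perceptron: single-layer network with a threshold activation function.
            ActivationNetwork pNet = new ActivationNetwork(new ThresholdFunction(), 2, 1);
            PerceptronLearning perceptron = new PerceptronLearning(pNet);
            perceptron.LearningRate = 0.1;

            // Delta rule: single-layer network with a continuous sigmoid (alpha = 2).
            ActivationNetwork dNet = new ActivationNetwork(new SigmoidFunction(2), 2, 1);
            DeltaRuleLearning delta = new DeltaRuleLearning(dNet);
            delta.LearningRate = 0.1;

            Console.WriteLine("Perceptron epochs: " + Train(perceptron, input, output));
            Console.WriteLine("Delta rule epochs: " + Train(delta, input, output));
        }

        // Runs epochs until the summary error drops below the 0.1 limit.
        static int Train(ISupervisedLearning teacher, double[][] x, double[][] y)
        {
            int epoch = 0;
            double error;
            do { error = teacher.RunEpoch(x, y); epoch++; }
            while (error > 0.1 && epoch < 10000);
            return epoch;
        }
    }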
6.1.2 The use of the Back propagation algorithm

Settings
  Training data set                             : "Cancer Causes and Cancer probability" data set (refer Appendix A)
  Learning rate                                 : 0.1
  Momentum                                      : 0
  Sigmoid's alpha value                         : 1
  Neurons per layer (Layer 1, Layer 2, Layer 3) : 20, 5, 1
  Activation function                           : Bipolar continuous
  No. of training iterations                    : 10,000

Test data
  Input data (Sports, Sun, Water, Cigarette) : 44, 54, 64, 74
  Expected output (Cancer)                   : 81

Results
  Result (cancer probability) : 80.91068571
  Final error                 : 0.071
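A sketch of the corresponding training run. The Appendix A data set is not reproduced here, so a single placeholder sample stands in; since the bipolar sigmoid outputs values in (-1, 1), inputs and targets are assumed to be pre-scaled by 1/100.

    using System;
    using AForge.Neuro;
    using AForge.Neuro.Learning;

    class BackPropDemo
    {
        static void Main()
        {
            // 4 inputs (Sports, Sun, Water, Cigarette); layers of 20, 5 and 1 neurons.
            ActivationNetwork network =
                new ActivationNetwork(new BipolarSigmoidFunction(1), 4, 20, 5, 1);

            BackPropagationLearning teacher = new BackPropagationLearning(network);
            teacher.LearningRate = 0.1;
            teacher.Momentum = 0;

            // Placeholder for the Appendix A data, assumed scaled into [-1, 1].
            double[][] input  = { new double[] { 0.44, 0.54, 0.64, 0.74 } };
            double[][] output = { new double[] { 0.81 } };

            double error = 0;
            for (int i = 0; i < 10000; i++)          // 10,000 training iterations
                error = teacher.RunEpoch(input, output);

            double[] result = network.Compute(input[0]);
            Console.WriteLine("Output: {0}, final error: {1}", result[0], error);
        }
    }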
6.1.3 The use of Momentum

Settings
  Training data set                          : "Cancer Causes and Cancer probability" data set (refer Appendix A)
  Learning rate                              : 0.5
  Sigmoid's alpha value                      : 3
  Neurons per layer (Layer 1, ..., Layer 5)  : 5, 10, 5, 6, 1
  Activation function                        : Bipolar continuous
  No. of training iterations                 : 10,000

Test 1
  Momentum    : 0
Result 1
  Final error : 80.91068571
  Comments    : The network is paralyzed.

Test 2
  Momentum    : 0.5
Result 2
  Final error : 0.420
  Comments    : The network is not paralyzed and learns fine.
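With momentum μ, each weight update carries a fraction of the previous one: Δw(t) = η·δ·x + μ·Δw(t−1). This lets learning coast through flat error regions where plain gradient steps vanish, which is why Test 1 paralyzes and Test 2 does not. A sketch of the two tests (placeholder data again stands in for Appendix A):

    using System;
    using AForge.Neuro;
    using AForge.Neuro.Learning;

    class MomentumDemo
    {
        static void Main()
        {
            foreach (double momentum in new[] { 0.0, 0.5 })
            {
                // 4 inputs; layers of 5, 10, 5, 6 and 1 neurons, as in the settings above.
                ActivationNetwork network =
                    new ActivationNetwork(new BipolarSigmoidFunction(3), 4, 5, 10, 5, 6, 1);
                BackPropagationLearning teacher = new BackPropagationLearning(network);
                teacher.LearningRate = 0.5;
                teacher.Momentum = momentum;   // the only setting that changes

                // Placeholder sample standing in for the Appendix A data.
                double[][] x = { new double[] { 0.44, 0.54, 0.64, 0.74 } };
                double[][] y = { new double[] { 0.81 } };

                double error = 0;
                for (int i = 0; i < 10000; i++) error = teacher.RunEpoch(x, y);
                Console.WriteLine("Momentum {0}: final error {1}", momentum, error);
            }
        }
    }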
6.1.4 The effect of changes in the activation function

Settings
  Training data set                             : "Cancer Causes and Cancer probability" data set (refer Appendix A)
  Learning rate                                 : 0.5
  Momentum                                      : 0.5
  Sigmoid's alpha value                         : 1
  Neurons per layer (Layer 1, Layer 2, Layer 3) : 5, 5, 1
  No. of training iterations                    : 10,000

Test data
  Activation function : Unipolar continuous (green curve)
  Activation function : Bipolar continuous (red curve)

Results
  Learning speed (unipolar) : Low when compared with bipolar activation.
  Learning speed (bipolar)  : High when compared with unipolar activation.
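The two functions differ only in output range: the unipolar sigmoid f(x) = 1 / (1 + e^(-αx)) outputs values in (0, 1), while the bipolar sigmoid f(x) = 2 / (1 + e^(-αx)) - 1 outputs values in (-1, 1). In AForge, swapping them is a one-argument change, as this sketch shows (targets must be rescaled to match the chosen output range):

    using AForge.Neuro;

    class ActivationChoice
    {
        static void Main()
        {
            // Unipolar continuous sigmoid, output in (0, 1)  -- the "green curve".
            ActivationNetwork unipolar =
                new ActivationNetwork(new SigmoidFunction(1), 4, 5, 5, 1);

            // Bipolar continuous sigmoid, output in (-1, 1)  -- the "red curve".
            ActivationNetwork bipolar =
                new ActivationNetwork(new BipolarSigmoidFunction(1), 4, 5, 5, 1);

            // Training proceeds exactly as in 6.1.2; only target scaling differs.
        }
    }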
6.1.5 The effect of Bias in back propagation learning

Settings
  Training data set                             : "Cancer Causes and Cancer probability" data set (refer Appendix A)
  Learning rate                                 : 0.5
  Momentum                                      : 0
  Sigmoid's alpha value                         : 1
  Neurons per layer (Layer 1, Layer 2, Layer 3) : 20, 5, 1
  No. of training iterations                    : 10,000

Test data
  No bias            : 0 (red curve)
  Bias with -1 value : -1 (blue curve)
  Bias with +1 value : +1 (green curve)

Results
  Without bias (0) : Learning speed comparatively low.
  With bias (-1)   : Learning speed comparatively high.
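The document does not show how the bias was wired into the network. AForge's ActivationNeuron already carries a trainable threshold term, so one plausible reading, sketched here purely as an assumption, is that the experiment appended an explicit constant bias component (0, -1, or +1) to every input vector, letting the corresponding weight act as a learned bias:

    using System;

    class BiasInput
    {
        // Appends a constant bias component to an input vector (an assumed
        // setup, not confirmed by this document). bias = 0 disables it;
        // -1 or +1 matches the tests above.
        static double[] WithBias(double[] input, double bias)
        {
            double[] extended = new double[input.Length + 1];
            Array.Copy(input, extended, input.Length);
            extended[input.Length] = bias;
            return extended;
        }

        static void Main()
        {
            double[] sample = { 0.44, 0.54, 0.64, 0.74 };
            double[] biased = WithBias(sample, -1);   // network then needs 5 inputs
            Console.WriteLine(string.Join(", ", biased));
        }
    }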
6.2 Unsupervised learning
6.2.1 Kohonen Self Organizing Maps for color clustering

Settings
  Input data set                      : A randomized color image (figure not reproduced here)
  Learning rate                       : 0.1
  Initial radius                      : 15
  Number of iterations                : 5,000
  Neurons in the layer (single layer) : 100 × 100

Results
  Number of different color clusters  : Red, green, blue, orange, yellow, and many more.
  (The accompanying figure, not reproduced here, depicts the intensity histograms of the
  three main colors, i.e. red, green, and blue.)
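A sketch of this setup, modelled loosely on AForge's color clustering sample. The image handling is omitted: random RGB triples scaled to [0, 1] stand in for pixels, and the learning rate and radius are assumed to be annealed linearly from the initial values above.

    using System;
    using AForge.Neuro;
    using AForge.Neuro.Learning;

    class ColorSom
    {
        static void Main()
        {
            Random rand = new Random();
            // 3 inputs (R, G, B); a 100x100 map of neurons.
            DistanceNetwork network = new DistanceNetwork(3, 100 * 100);
            SOMLearning trainer = new SOMLearning(network, 100, 100);

            int iterations = 5000;
            for (int i = 0; i < iterations; i++)
            {
                // Anneal learning rate from 0.1 and radius from 15 down towards 0.
                trainer.LearningRate = 0.1 * (iterations - i) / iterations;
                trainer.LearningRadius = 15.0 * (iterations - i) / iterations;

                // One random color as the next training sample.
                double[] color = { rand.NextDouble(), rand.NextDouble(), rand.NextDouble() };
                trainer.Run(color);
            }
            // Each neuron's weights now encode the color of its cluster.
        }
    }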
6.2.2 Kohonen Self Organizing Maps for finding hidden patterns

Settings
  Input data set                      : The amount of nutrients (protein, carbohydrate, fat)
                                        in grams in a 100 g sample of a food. 25 samples of
                                        food items are provided. (Refer Appendix B)
  Maximum error                       : 0.0000001
  Initial radius                      :
  Neurons in the layer (single layer) : 10 × 10

Results
  Relationships which can be found:
  - The bottom-right corner is occupied by water, which has no fat, carbohydrate, or protein.
  - In the top-right corner, sugar, which is made almost entirely of carbohydrate, has taken hold.
  - In the top-left corner, butter reigns supreme, being almost entirely fat.
  - The bottom-left corner is occupied by tuna, which has the highest protein content of the foods.
  - The remaining foods lie between these extremes.
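A sketch of the food map. The 25-row Appendix B data is not reproduced, so two illustrative rows (nutrient grams scaled by 1/100) stand in, and the shrinking radius schedule is an assumption. After training, each food's cell on the 10 × 10 map is located through the network's winning neuron.

    using System;
    using AForge.Neuro;
    using AForge.Neuro.Learning;

    class FoodMap
    {
        static void Main()
        {
            // 3 inputs (protein, carbohydrate, fat per 100 g, scaled to [0, 1]);
            // a 10x10 map of neurons.
            DistanceNetwork network = new DistanceNetwork(3, 10 * 10);
            SOMLearning trainer = new SOMLearning(network, 10, 10);

            // Two placeholder rows standing in for the 25 foods of Appendix B.
            double[][] foods = {
                new double[] { 0.00, 1.00, 0.00 },   // sugar: almost pure carbohydrate
                new double[] { 0.26, 0.00, 0.01 },   // tuna: mostly protein
            };

            // Shrink the neighbourhood and train until the maximum-error setting is met.
            double error;
            int epoch = 0;
            do
            {
                trainer.LearningRadius = Math.Max(0, 5.0 - epoch / 100.0);
                error = trainer.RunEpoch(foods);
            } while (error > 0.0000001 && ++epoch < 100000);

            // Locate each food on the map via its winning neuron.
            foreach (double[] food in foods)
            {
                network.Compute(food);
                int winner = network.GetWinner();
                Console.WriteLine("Map cell: ({0}, {1})", winner % 10, winner / 10);
            }
        }
    }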