MuLaPeGASim User Manual
– Multilayer Perceptron Genetic Algorithm Simulator –
Overview

Introduction
The menu
Net Designer
Pattern Builder
  Manual Input Mode
  OCR Pre-processing Mode
Net Trainer
Introduction
MuLaPeGASim is a small Multilayer Perceptron neural network simulator with some
special features for Optical Character Recognition (OCR) problems.
You can design a Multilayer Feed-Forward network, create training patterns, and
train the network with the Backpropagation learning algorithm (on-/offline) or with a
Genetic learning algorithm. The patterns can be entered manually or created
automatically for an OCR network. It is also possible to extract characters from an
image.
The application itself is divided into four main parts: the menu, the Net Designer,
the Pattern Builder, and the Net Trainer. In this manual these parts are described
in their logical order of use.
1. The menu
1.1. The File menu
“Load default neural network”:
Load the default 2-3-3-2 network with all its properties and patterns.
“Load neural network…”:
Load a *.net file. You can find some examples in the program directory in the
subfolder examples/net.
“Save neural network as…”:
Save the current network with all properties and patterns as a .net file.
“Load patterns…”:
Load a *.pat file which contains patterns for a neural network. The patterns have
to fit the current network. You can find some examples in the program directory
in the subfolder examples/pat.
“Save patterns as…”:
Save the current patterns as a .pat file.
“Load image…”:
Load an image for extracting the characters in the Pattern Builder’s “OCR
pre-processing” mode. You can find some examples in the program directory in the
subfolder examples/pics.
“Exit”:
Exit the application.
1.2. The Network menu
“Randomize weight”:
Randomize the weights and thresholds of the network using the selected
randomize options (see Net Designer).
“Start/stop training”:
Start the training of the network or stop the current training (see Net Trainer).
1.3. The info (“?”) menu
“Help”:
Open this fabulous user manual.
“About”:
Open a new window with some information about the system and the version
numbers of the used libraries.
You will also find two scary pictures of the authors and their email addresses.
2. Net Designer
Like the name “Net Designer” implies, this is the place to design the topology of the
neural network, set the activation function of the neurons, and set the options for
randomizing.
2.1. Area 1
Here you can set the global network options:
“Net name”:
Insert a name for the network. This name will be stored in a *.net file.
“Activation function”:
Select an activation function for the neurons. All neurons will have the same type
of activation function.
“Randomize options”:
Set the options for the pseudo-random initialization:
- Enter the seed which will be used for randomizing, or select “Use time” to use
the system time as the seed, i.e. completely new values each time you randomize
the weights and thresholds.
- Enter the range for the randomized values (values are greater than “Min” and
lower than “Max”), or select “Use optimal” to use an optimal range for the
current activation function type.
- Press the “Default” button to reset the values of the “Randomize options”
controls.
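
To make the effect of these options concrete, here is a minimal sketch in Python of seeded uniform randomization. It is an illustration only, not the simulator’s code, and all names (randomize, w_min, w_max) are hypothetical:

    import random

    def randomize(layer_sizes, w_min=-0.5, w_max=0.5, seed=None):
        """Draw all weights and thresholds uniformly from (w_min, w_max).
        A fixed seed reproduces the same values each time; seed=None
        seeds from system state, similar to the "Use time" option."""
        rng = random.Random(seed)
        weights = [[[rng.uniform(w_min, w_max) for _ in range(n_in)]
                    for _ in range(n_out)]
                   for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]
        thresholds = [[rng.uniform(w_min, w_max) for _ in range(n_out)]
                      for n_out in layer_sizes[1:]]
        return weights, thresholds

    # Example: the default 2-3-3-2 network, reproducible with seed 42
    weights, thresholds = randomize([2, 3, 3, 2], seed=42)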
2.2. Area 2
Here you can examine the current values of the network in the tree view or
change / edit the network topology. The changes are only applied when you press
the “Generate neural net” button.
“Expand all nodes” / “Collapse all nodes”:
Expand or collapse all nodes of the network topology tree view.
“Number of neurons in the layer”:
Insert the number of neurons you want to have in the new layer.
“Insert before” / “Insert after”:
Insert a new layer before / after the currently selected layer in the tree view. The
current net is not changed “physically” before you click the “Generate neural net”
button in Area 3.
“Delete layer”:
Delete the selected layer. The current net is not changed “physically” before you
click the “Generate neural net” button in Area 3.
2.3. Area 3
In this area the network topology is visualized.
“Clear view”:
Clear the tree view and the network visualization.
“Show current net”:
Show the current network in the tree view and visualize it. Most of the time this
action is done automatically.
“Generate neural net”:
Generates the network. Only when you press this button are the changes applied
to the real network. The patterns and the values of the weights and thresholds of
the current network will be discarded! A name for the network is also generated
and inserted in the “Net name” textbox. The name represents the current network
topology, e.g. a network with 2 input neurons, one hidden layer with 5 neurons,
and 1 output neuron is named “2-5-1”.
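
The naming scheme can be described in one line; a hypothetical Python sketch:

    def net_name(layer_sizes):
        """Topology name from the layer sizes, e.g. [2, 5, 1] -> "2-5-1"."""
        return "-".join(str(n) for n in layer_sizes)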
3. Pattern Builder
This tab contains two different panels: one for manually editing / creating patterns
and propagating the network for a pattern, and one for OCR-specific pattern
generation / extraction.
You can change the current panel by clicking one of the “Mode” radio buttons.
3.1. Manual input mode
The “Manual input” mode is needed to change the values of patterns manually. You
can also check the current output of the network for a selected pattern, i.e.
propagate it.
3.1.1. Area 1
In this area an overview of all current patterns is presented. When you select
one pattern in the tree view, the properties of this pattern are presented in
area 2 and area 3.
“Show current patterns”:
Show the current patterns, which are associated with the network, in the tree
view. Most of the time this action is done automatically.
“Delete current patterns”:
Delete all patterns for the network.
“Create new pattern”:
Create a new pattern and add it to the current network. The number of teach
outputs corresponds to the number of output neurons of the network, and the
number of inputs corresponds to the current input layer. All values are
initialized with 0.
“Delete selected pattern”:
Delete the selected pattern.
“Expand all nodes” / “Collapse all nodes”:
Expand or collapse all nodes of the “Patterns overview” tree view.
“Clear view”:
Clear the patterns tree view and the input and output tables.
3.1.2. Area 2
Here you can examine the current values of the network for the selected
pattern. The current real output of each output neuron for the selected pattern
is presented in the right table (“Output”) in the “Real” column. The
corresponding teach output is shown in the “Teach” column. Each time the
selected pattern is changed, the network is propagated. Use this feature to
propagate the network for a pattern (a sketch of what propagation computes
follows below).
It is also possible to manually change the values of a pattern element.
“Change”:
Change the value for the selected neuron. In the “Output” table this value
represents the teach output.
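
As announced above, here is a minimal Python sketch of what propagating a pattern computes, assuming a logistic activation and the weight layout from the earlier randomization sketch. It is an illustration, not the simulator’s code:

    import math

    def propagate(inputs, weights, thresholds):
        """Forward pass: each layer's output feeds the next layer.
        weights[l][j][i] connects neuron i of layer l to neuron j of
        layer l+1; thresholds[l][j] is subtracted before activation."""
        logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
        out = inputs
        for w_layer, t_layer in zip(weights, thresholds):
            out = [logistic(sum(w * o for w, o in zip(w_row, out)) - t)
                   for w_row, t in zip(w_layer, t_layer)]
        return out  # the "Real" column, compared against the teach outputs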
3.1.3. Area 3
This area is used to visualize the output of the 10 most responsive neurons for
the current pattern. The output of each neuron is associated with one progress
bar.
“Associate neurons with letters”:
This feature is actually only needed for OCR problems. If this box is checked,
the output neurons are associated with letters, so it is possible to see which
pattern represents a letter and which neurons respond to that letter. Of course,
this feature only makes sense and works correctly for the letters A-Z.
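
The association is presumably just a mapping from the output neuron’s index to a letter; a one-line sketch (hypothetical helper name):

    def neuron_letter(index):
        """Map output neuron 0..25 to the letters 'A'..'Z'."""
        return chr(ord('A') + index) if 0 <= index < 26 else '?'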
3.2. OCR pre-processing mode
The “OCR pre-processing” mode is especially designed for OCR pattern building.
You can generate images with letters and extract their features to train a neural
network. It is also possible to filter an image, extract a section of characters
from it, extract the features, and use these as an input for the network.
3.2.1. Area 1
In this area you can choose a font and generate training images for an OCR
neural network.
“Character range”:
Use the arrows beside the boxes to select the range of characters you would
like to generate.
“Noise in image”:
Adjust the amount of noise, in percent, that you would like to have in each image
(see the sketch after this list).
“Images for each character”:
Change the number of pictures for each letter; e.g. 2 means 2 pictures with an
‘A’, 2 pictures with a ‘B’, and so forth. This feature is intended for noised
images.
“Select font”:
Select a font (size,…) for the image generation.
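
As referenced at the “Noise in image” option, one common interpretation of a noise percentage is to invert randomly chosen pixels of a binary image; a sketch under that assumption (the manual does not state the exact noise model):

    import random

    def add_noise(pixels, percent, seed=None):
        """Invert roughly `percent` % of the pixels of a binary image.
        `pixels` is a list of rows of 0/1 values; the noise model is
        an assumption for illustration."""
        rng = random.Random(seed)
        return [[1 - p if rng.random() < percent / 100.0 else p
                 for p in row] for row in pixels]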
3.2.2. Area 2
This area is used to filter images, extract character regions, and separate the
characters in them in order to propagate a network. Of course, you have to load
an image first ([Ctrl+I]).
“Characters region”:
Select the region in the original image which contains a section of characters.
You can see your selected region as a red rectangle in the original image.
Filter sequence:
The numbers beside the checkboxes indicate the position in the filter
sequence. For example, if filter 1 and filter 2 are both checked, then the
second filter uses the output of the first as its input.
“1. Gray conversion”:
Check this box if you want to convert the original image to a gray scaled
image.
“2. Brightness normalization”:
Check this box if you want to normalize the brightness of the input image.
“3. Histogram equalization”:
Check this box if you want to increase the contrast of the input image.
“4. Binary conversion”:
Check this box if you want to convert the input image to a black-and-white
image.
You can also adjust the “Threshold”: if the brightness of a pixel is greater
than the threshold, its colour is set to white; otherwise it is set to black.
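
The rule in code form, assuming grayscale brightness values from 0 to 255 (an assumption; the simulator’s internal pixel format is not documented here):

    def to_binary(gray, threshold):
        """Brightness > threshold -> white (255), else black (0)."""
        return [[255 if p > threshold else 0 for p in row]
                for row in gray]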
“5. Smoothing”:
Check this box if you want to smooth the input image with a Gaussian
convolution matrix. Adjust the amount of smoothing with the “Sigma”
parameter of the Gaussian function.
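
“Sigma” sets the width of the Gaussian bell from which the convolution matrix is built; a sketch of a sampled 1-D kernel (the 2-D matrix is its outer product; the kernel radius choice is an assumption):

    import math

    def gaussian_kernel(sigma, radius=None):
        """Sampled, normalized 1-D Gaussian; larger sigma -> stronger blur."""
        radius = radius if radius is not None else max(1, int(3 * sigma))
        k = [math.exp(-(x * x) / (2.0 * sigma * sigma))
             for x in range(-radius, radius + 1)]
        total = sum(k)
        return [v / total for v in k]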
“6. Canny”:
Check this box if you want to detect the edges of the input image with the
Canny algorithm. You can adjust the “Low threshold” and “High threshold”: a
potential edge pixel is accepted as an edge if its value is greater than the
high threshold, and the edge is then traced until the value falls below the
low threshold.
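
The two thresholds implement what is usually called hysteresis; sketched here in 1-D for a row of gradient magnitudes (the real Canny algorithm traces edges in 2-D):

    def hysteresis_1d(magnitudes, low, high):
        """Accept pixels above `high` as edges, then extend each edge
        through neighbours while their values stay above `low`."""
        edge = [m > high for m in magnitudes]
        changed = True
        while changed:
            changed = False
            for i, m in enumerate(magnitudes):
                if not edge[i] and m > low and (
                        (i > 0 and edge[i - 1]) or
                        (i + 1 < len(edge) and edge[i + 1])):
                    edge[i] = changed = True
        return edge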
“7. Character separator properties”:
Adjust here the thresholds for the histogram-based character separation, for the
line and the column histogram: if the value of the histogram is lower than or
equal to the threshold, the line/column is split. First the lines are separated;
then the column histogram of each line is computed and the characters are
separated (see the sketch below).
You can see the found regions for single characters in the “Filtered image”
picture box as red rectangles.
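
The line/column split referenced above can be sketched as follows, assuming a binary image where 1 marks an ink pixel and the histogram counts ink pixels per row; the same function applied to the column histogram of each line separates the characters:

    def separate_lines(binary, threshold=0):
        """Split a binary image into text lines wherever the row
        histogram (ink pixels per row) is <= threshold.
        Returns (start, end) row index pairs."""
        hist = [sum(row) for row in binary]
        lines, start = [], None
        for y, h in enumerate(hist):
            if h > threshold and start is None:
                start = y                    # a text line begins
            elif h <= threshold and start is not None:
                lines.append((start, y))     # the line ends here
                start = None
        if start is not None:
            lines.append((start, len(hist)))
        return lines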
3.2.3. Area 3
In this area you can change the region of extraction for the single characters,
and you can start the generating or filtering.
Two radio buttons are docked beside Area 1 and Area 2; by changing the state
of these radio buttons you select the input of Area 3.
“Start/Abort”:
Start / abort the generating or filtering of the images, depending on your
selection (see above).
“Pic No.”:
If you click on the arrows beside the box you can change the displayed image
in the picture box above.
“Delete image”:
If you have chosen to filter and separate a given image, it can happen that
some wrong pictures are detected. With this button you can delete these
pictures.
“Extraction window”:
Select the region in the single character images which should be extracted by
the feature extractor. You can see your selected extraction window as a red
rectangle in the picture box.
“Scale images to”:
If your training images have a different size than the images used for
propagation, you can scale them to a uniform size.
“Also generate corresponding net”:
If you have chosen to generate a set of training images, it is also possible to
generate the corresponding neural network, so you don’t have to worry about
the sizes of the input and output layers.
But don’t forget to set the number of hidden layers and their sizes before
generating the training patterns (see “Generate neural net”)!
3.2.4. Area 4
Here you can change the method and its parameters for the neural network
feature extraction.
This is the place where the pixel colours are transformed into data usable by
the neural network.
“Start/Abort”:
Start / abort the feature extraction.
“Feature extraction method”:
Select a method for feature extraction; for some methods you can also
adjust their parameters. For example, the dx/dy jumps mean that only every
dx-th column and every dy-th row is extracted.
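
Under that reading, the dx/dy jump extraction is plain subsampling of the extraction window; a hypothetical sketch:

    def extract_jumps(pixels, dx, dy):
        """Keep every dy-th row and every dx-th column of the window,
        flattened into one input vector for the network."""
        return [p for row in pixels[::dy] for p in row[::dx]]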
4. Net Trainer
In the “Net Trainer” page you can change the learning algorithm used, adjust the
properties / parameters of the learning algorithm, and trace the training process.
The steepness of the Tanh or Logistic activation function can also be changed.
The parameters of the learning algorithm / activation function can be changed live
during the training; you don’t have to stop the training to do this!
4.1. Area 1
The table on the left shows the error for each cycle of the training, so it is
possible to trace the error cycle by cycle. The graph on the right draws the
course of the global summed error. If you move the mouse over the graph, a
tooltip with the cycle and the error for that cycle is shown under your mouse
cursor.
“Max error”:
Insert the value for the tolerable error. The training stops if the current global
summed error is lower than or equal to this value.
“Fast mode (no live visualization)”:
When this box is checked, the error value is not inserted live into the table, the
error graph is not drawn, and the progress of the training is not shown in the
status bar. This speeds up the training of small networks, whose weight
corrections are calculated very quickly; for them, most of the time would
otherwise be spent drawing the error graph and inserting the current error into
the table.
4.2. Area 2
In this area some common training algorithm properties can be set.
“Choose a learning algorithm”:
You can select one of the learning algorithms from the dropdown list.
“Slow-motion”:
Adjust the “sleep time” in milliseconds using the slider. The training will be paused
for the adjusted time after each cycle.
“Max cycles”:
Type in the maximum number of cycles for the training. The training stops when
this value has been reached or the error is lower than or equal to the “Max
error” (Area 1). A sketch of this stopping rule follows after this list.
“Start training”:
Start the training of the network using the selected learning algorithm and the
adjusted parameters.
“Stop training”:
Stop the execution of the training.
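
As referenced at “Max cycles”, the two limits combine into the usual stopping rule; a sketch of the outer training loop, where train_one_cycle stands in for whichever learning algorithm is selected (hypothetical names):

    def train(net, patterns, max_cycles, max_error, train_one_cycle):
        """Run training cycles until the global summed error is low
        enough ("Max error") or the cycle limit ("Max cycles") is hit."""
        cycle, error = 0, float("inf")
        for cycle in range(1, max_cycles + 1):
            error = train_one_cycle(net, patterns)  # one cycle's summed error
            if error <= max_error:
                break  # tolerable error reached before the cycle limit
        return cycle, error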
4.3. Area 3
Here you can adjust some learning-algorithm-specific properties / parameters.
If the neurons’ activation function is the Tanh or the Logistic function, you can
also adjust the steepness of the function.
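
For reference, the steepness parameter presumably scales the argument of the activation function (an assumption; the manual does not spell out the formula):

    import math

    def logistic(x, steepness=1.0):
        """Logistic activation, 1 / (1 + e^(-steepness * x))."""
        return 1.0 / (1.0 + math.exp(-steepness * x))

    def tanh(x, steepness=1.0):
        """Tanh activation with scaled argument."""
        return math.tanh(steepness * x)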
© 2004 Rene Schulte & Torsten Bär