NN_Lecture_1

Neural Networks
and
Their Applications
by
Dr. Surya Chitra
OUTLINE
• Introduction & Software
• Basic Neural Network & Processing
– Software Exercise Problem/Project
• Complementary Technologies
– Genetic Algorithms
– Fuzzy Logic
• Examples of Applications
– Manufacturing
– R&D
– Sales & Marketing
– Financial
Introduction
What is a Neural Network?
A computing system made up of a number of
highly interconnected processing elements,
which process information by their dynamic
state response to external inputs
(Dr. Robert Hecht-Nielsen)
A parallel information processing system,
based on the human nervous system,
consisting of a large number of neurons
that operate in parallel.
Biological Neuron & Its Function
Information Processed in Neuron Cell Body and
Transferred to Next Neuron via Synaptic Terminal
Processing in Biological Neuron
Neurotransmitters Carry Information to the Next Neuron,
Where It Is Further Processed in the Cell Body
Artificial Neuron & Its Function
[Figure: mapping the biological neuron onto the artificial one:
Dendrites = Inputs; Cell Body (Neuron) = Processing Element; Axon = Outputs]
Processing Steps Inside a Neuron
Electronic Implementation
• Inputs Are Weighted and Combined into a Summed Input
– Summation Options: Sum, Min, Max, Mean, OR/AND
• Bias Added to the Summed Input
• Result Passed Through a Transfer Function
– Transform Options: Sigmoid, Hyperbolic (tanh), Sine, Linear
• Processing Element Delivers the Result to Its Outputs
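Below is a minimal Python sketch of one processing element following these steps; the function and option names (combine, transform, mode, kind) and all numeric values are illustrative assumptions, not from the lecture.

```python
import math

def combine(inputs, weights, mode="sum"):
    """Combine weighted inputs; 'sum' is the usual choice."""
    weighted = [w * x for w, x in zip(weights, inputs)]
    if mode == "sum":
        return sum(weighted)
    if mode == "min":
        return min(weighted)
    if mode == "max":
        return max(weighted)
    if mode == "mean":
        return sum(weighted) / len(weighted)
    raise ValueError(f"unknown mode: {mode}")

def transform(s, kind="sigmoid"):
    """Squash the combined input through a transfer function."""
    if kind == "sigmoid":
        return 1.0 / (1.0 + math.exp(-s))
    if kind == "tanh":
        return math.tanh(s)
    if kind == "sine":
        return math.sin(s)
    if kind == "linear":
        return s
    raise ValueError(f"unknown kind: {kind}")

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum plus bias, then the sigmoid transform."""
    return transform(combine(inputs, weights) + bias)

# Example: three inputs into one processing element (made-up values)
print(neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.2))
```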
Sigmoid Transfer Function
Transfer Function = 1 / (1 + e^(-sum))

[Plot: the sigmoid f(x) and its derivative f'(x) for x from -10 to 10;
f(x) rises from 0 toward 1, crossing 0.5 at x = 0]
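A quick way to check the curve above: the sigmoid's derivative can be written in terms of its own output, f'(x) = f(x)(1 - f(x)), which is why f'(x) peaks where f(x) crosses 0.5. A small Python sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    fx = sigmoid(x)
    return fx * (1.0 - fx)  # derivative expressed via the output itself

# Tabulate f and f' at a few points along the plotted range
for x in (-10, -5, 0, 5, 10):
    print(f"x={x:4d}  f(x)={sigmoid(x):.4f}  f'(x)={sigmoid_prime(x):.4f}")
```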
Basic Neural Network & Its Elements
[Figure: layered network showing Input Neurons, Hidden Neurons,
Output Neurons, Bias Neurons, and the Clustering of Neurons into layers]
Back-Propagation Network
Forward Output Flow
• Random Set of Weights Generated
• Send Inputs to Neurons
• Each Neuron Computes Its Output
– Calculate Weighted Sum
I_j = Σ_i ( W_i,j-1 * X_i,j-1 ) + B_j
– Transform the Weighted Sum
X_j = f(I_j) = 1 / (1 + e^-(I_j + T))
• Repeat for All the Neurons (see the sketch below)
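A minimal sketch of this forward pass for a single layer, assuming list-of-lists weights (W[j][i] is the weight from input i to neuron j) and folding the threshold T into the bias; all names and numbers are illustrative.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward_layer(x, W, B):
    """Compute I_j = sum_i W[j][i] * x[i] + B[j], then X_j = f(I_j)."""
    outputs = []
    for j in range(len(W)):
        I_j = sum(W[j][i] * x[i] for i in range(len(x))) + B[j]
        outputs.append(sigmoid(I_j))
    return outputs

# Example: 2 inputs feeding a layer of 2 neurons (made-up weights)
x = [0.1, 0.9]
W = [[0.5, -0.3],
     [0.8,  0.2]]
B = [0.0, -0.1]
print(forward_layer(x, W, B))  # these outputs feed the next layer
```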
Back-Propagation Network
Backward Error Propagation
• Errors are Propagated Backwards
• Update the Network Weights
– Gradient Descent Algorithm
ΔW_ji(n) = η * δ_j * X_i
W_ji(n+1) = W_ji(n) + ΔW_ji(n)
• Add Momentum for Convergence (sketched below)
ΔW_ji(n) = η * δ_j * X_i + α * ΔW_ji(n-1)
Where
n = Iteration Number;
η = Learning Rate;
α = Rate of Momentum (0 to 1);
δ_j = Error Signal of Neuron j
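A hedged sketch of the update rule above for a single sigmoid output layer, taking δ_j = (target_j - out_j) * out_j * (1 - out_j), the standard delta for sigmoid units under mean squared error; names and numbers are illustrative.

```python
def update_weights(W, prev_dW, x, out, target, eta=0.5, alpha=0.9):
    """Gradient-descent update with momentum:
    dW_ji = eta * delta_j * x_i + alpha * dW_ji(previous iteration)."""
    for j in range(len(W)):
        # delta_j for sigmoid units under mean squared error
        delta_j = (target[j] - out[j]) * out[j] * (1.0 - out[j])
        for i in range(len(x)):
            dW = eta * delta_j * x[i] + alpha * prev_dW[j][i]
            W[j][i] += dW
            prev_dW[j][i] = dW
    return W, prev_dW

# One update step for a 1-neuron layer with 2 inputs (made-up numbers)
W, prev = [[0.5, -0.3]], [[0.0, 0.0]]
W, prev = update_weights(W, prev, x=[0.1, 0.9], out=[0.6], target=[1.0])
print(W)
```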
Back-Propagation Network
Backward Error Propagation
• Gradient Descent Algorithm
– Minimization of Mean Squared Errors
• Shape of the Error Surface
– Complex
– Multidimensional
– Bowl-Shaped
– Hills and Valleys
• Training by Iterations
– Finding the Global Minimum is Challenging (see the 1-D sketch below)
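To make the "bowl" idea concrete, here is a tiny 1-D gradient descent on E(w) = (w - 3)^2, a perfectly bowl-shaped error with its minimum at w = 3; a real network's error surface adds the hills and valleys that can trap the search in local minima.

```python
# Gradient descent on a 1-D bowl: E(w) = (w - 3)^2, so dE/dw = 2(w - 3)
w, eta = 0.0, 0.1
for step in range(50):
    grad = 2.0 * (w - 3.0)
    w -= eta * grad          # step downhill against the gradient
print(round(w, 4))           # converges toward the minimum at w = 3
```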
Simple Transfer Functions
Recurrent Neural Network
[Figure: recurrent network built from Input Units, a Bias Unit,
Context Units, and Computation Nodes]
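A minimal sketch of one recurrent (Elman-style) step, assuming the context unit simply stores the previous hidden output and feeds it back as an extra weighted input; weights and values are illustrative.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def recurrent_step(x_t, h_prev, W_in, W_ctx, bias):
    """One computation-node update: current inputs plus context feedback."""
    s = (sum(w * x for w, x in zip(W_in, x_t))
         + sum(w * h for w, h in zip(W_ctx, h_prev))
         + bias)
    return sigmoid(s)

h = [0.0]  # context unit starts empty
for x_t in ([1.0], [0.0], [1.0]):
    h = [recurrent_step(x_t, h, W_in=[0.7], W_ctx=[0.5], bias=0.1)]
    print(h)  # the same input can yield different outputs as context changes
```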
Time Delay Neural Network
[Figure: time-delay network built from Input Units, a Bias Unit,
Higher-Order Units, and Computation Nodes]
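A sketch of the time-delay idea, assuming a tapped delay line: each computation node sees the current sample plus D delayed copies, so temporal patterns become spatial ones; names and values are illustrative.

```python
import math
from collections import deque

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def tdnn_node(stream, weights, bias, delays=2):
    """Yield outputs over a window of the current + `delays` past samples."""
    window = deque([0.0] * (delays + 1), maxlen=delays + 1)
    for sample in stream:
        window.appendleft(sample)  # newest sample first, oldest drops off
        s = sum(w * x for w, x in zip(weights, window)) + bias
        yield sigmoid(s)

# Example: one node with a 3-tap window over a short signal
for y in tdnn_node([0.2, 0.9, 0.4, 0.1], weights=[0.6, 0.3, 0.1], bias=0.0):
    print(round(y, 4))
```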
Training - Supervised
• Both Inputs & Outputs are Provided
• Designer Can Manipulate
– Number of Layers
– Neurons per Layer
– Connections Between Layers
– The Summation & Transform Functions
– Initial Weights
• Rules of Training
– Back Propagation
– Adaptive Feedback Algorithm
Training - Unsupervised
• Only Inputs are Provided
• System Has to Figure Out
– Self-Organization
– Adaptation to Input Changes/Patterns
– Grouping of Neurons into Fields
– Topological Order
• Based on Mammalian Brain
• Rules of Training
– Adaptive Feedback Algorithm (Kohonen), sketched below
Topology:
Mapping one space onto another without
changing its geometric configuration
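A bare-bones sketch of a Kohonen-style update, reduced to its core: find the best-matching neuron and pull its weights toward the input. A full self-organizing map also updates the winner's neighbors with a shrinking radius, which is what produces the topological order; names and values here are illustrative.

```python
def som_step(weights, x, lr=0.1):
    """One unsupervised step: move the best-matching unit toward x."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    # best-matching unit = neuron whose weights are closest to the input
    bmu = min(range(len(weights)), key=lambda k: dist2(weights[k]))
    weights[bmu] = [wi + lr * (xi - wi) for wi, xi in zip(weights[bmu], x)]
    return bmu, weights

# Three neurons self-organizing around two input patterns (made up)
W = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 0.0]):
    bmu, W = som_step(W, x)
    print("winner:", bmu, "weights:", [[round(v, 2) for v in w] for w in W])
```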
Traditional Computing vs. NN Technology

CHARACTERISTIC      TRADITIONAL COMPUTING            ARTIFICIAL NEURAL NETWORKS
Processing Style    Sequential                       Parallel
Functions           Logic via rules, concepts,       Mapping via images, pictures,
                    and calculations                 and controls
Learning Method     By rules                         By example
Applications        Accounting, word processing,     Sensor processing, speech recognition,
                    communications, computing        pattern recognition, text recognition
Traditional Computing Vs. NN Technology
CHARACTERISTIC      TRADITIONAL COMPUTING        ARTIFICIAL NEURAL NETWORKS
Processors          VLSI (traditional)           ANN and other technologies
Approach            One rule at a time,          Multiple processes,
                    sequential                   simultaneous
Connections         Externally programmable      Dynamically self-programmable
Learning            Algorithmic                  Continuously adaptable
Fault Tolerance     None                         Significant, via neurons
Programming         Rule-based                   Self-learning
Ability to Test     Need big processors          Require multiple custom-built chips
HISTORY OF NEURAL NETWORKS
TIME PERIOD     NEURAL NETWORK ACTIVITY
Early 1950s     IBM tries to simulate the human thought process and fails;
                traditional computing progresses rapidly
1956            Dartmouth research project on AI
1959            Stanford: Bernard Widrow's ADALINE/MADALINE, the first NN
                applied to a real-world problem
1960s           PERCEPTRON, by Cornell neurobiologist Rosenblatt
1982            Hopfield (Caltech) models the brain for devices;
                Japan launches 5th-generation computing
1985            IEEE holds an NN conference, prompted by the perceived Japanese threat
1989            US defense sponsors several projects
Today           Several commercial applications; still processing limitations;
                chips (digital, analog, & optical)