1-1
Chapter One
Introduction to Artificial Neural Networks
The human mind is the most complex
computing device ever known.
□ Conventional computers are good at tasks such as:
1. Scientific and mathematical problem solving,
2. Creation, manipulation, and maintenance
of databases,
3. Electronic communication,
4. Graphics,
5. Word Processing,
6. Desktop publishing,
7. Control functions.
□ How do computers perform tasks such as
learning, analysis, organization,
comprehension, adaptation, association,
recognition, planning, and decision making?
1-2
○ Many problems are not well suited to
sequential algorithms or conventional computers
。 Example: a perceptual-organizing problem
Solving such a problem involves a number of
processes, such as analysis, association,
organization, comprehension, and recognition
(Figure: a pattern of 25 line segments)
1-3
Is local evidence of so little use?
(Figure: a pattern of 26 line segments)
The above example and the following example
illustrate that parallel-processing architectures
may be good tools for solving difficult,
or as yet unsolved, problems
1-4
。 Visual pattern recognition
Conventional processing:
Preprocessing ─ hole filling, noise removal,
edge detection, edge following
Segment ─ locate primary areas
Group ─ form significant regions
Cluster ─ associate with objects
⋮
Recognition
1-5
(Figure: processing flow: input image → thresholding → segmentation → grouping)
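A minimal Python sketch of the first stage of this flow, binarizing an image by a fixed threshold; the array contents and the threshold value are illustrative, not from the original:

```python
import numpy as np

def threshold(image, t=128):
    # Binarize an 8-bit grayscale image: 1 where intensity >= t, else 0.
    return (image >= t).astype(np.uint8)

# A tiny 4x4 "image" with a bright blob in the upper-left corner.
img = np.array([[200, 210,  30,  20],
                [205, 220,  25,  15],
                [ 10,  20,  30,  25],
                [  5,  15,  20,  10]], dtype=np.uint8)
binary = threshold(img)   # segmentation/grouping would operate on the 1-pixels
```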
* How does the nervous system perform
recognition and other such tasks?
1-6
。 Subjective inference (Reasoning)
。 Visual illusion
1-7
1.1 Neurophysiology
◎ Nervous systems
○ Distinguishing Characteristics
。Adaptation
。Self-organizing
。Parallel processing
。Nonlinear function
。Associative memory
◎ Three major components of the human
nervous system
1. Brain
2. Spinal cord
3. Periphery
1-8
○ Biological Brain
Activities: sensation, perception, cognition,
recognition, imagination, dreams,
consciousness, …
Functions manifested: thinking, feeling, willing
1-9
○ Spinal nerves
1-10
○ Neurons
3 types of neurons:
1. Unipolar
2. Bipolar
3. Multipolar
(Figure: terminal (peripheral) nerves at the sensory organs, pathway (relay) nerves in the spinal cord, and central nerves in the brain)
1-11
Signals: frequencies of pulses, potentials, firing rates
Learning: adjustment of synaptic gaps
Memory: synaptic connections, strengths of connections
1-12
1.1.2 Synaptic Function
1-13
1-14
□ Artificial Neural System (ANS)
-- A collection of parallel processing units
connected together in the form of a directed
graph
○ Architecture
○ Characteristics
○ Self-organizing
○ Associative
○ Adaptive
○ Parallel
○ Dynamic
○ Non-algorithmic
○ Non-linear
○ Non-local
1-15
○ Basic functions:
Learn, analyze, organize, comprehend,
associate, recognize, plan, decide
1-16
○ Example: Character Recognition
(a) 10 decimal characters → 10 output units,
each corresponding to one character
(b) Represent a character image
as a vector containing binary elements
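A minimal sketch of representations (a) and (b), assuming 5×7 binary character bitmaps; the bitmap of the digit "1", the image size, and the variable names are illustrative:

```python
import numpy as np

# (b) A character image as a binary vector: a 5x7 bitmap of the digit "1"
#     (0 = off pixel, 1 = on pixel), flattened to a 35-element input vector.
one = np.array([[0, 0, 1, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 1, 1, 1, 0]], dtype=np.uint8)
x = one.flatten()              # network input, length 35

# (a) 10 output units, one per decimal digit: the desired output for "1"
#     is a one-hot vector with a single 1 at index 1.
target = np.zeros(10, dtype=np.uint8)
target[1] = 1
```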
1-17
。 Two different phases of network operation
(1) Training phase – encoding information
(2) Production phase – recalling information
1-18
。Abilities of an ANS: tolerance of
(1) Noise
(2) Distortion
(3) Incompleteness
1-19
1.1.3 Neural Circuits
○ McCulloch-Pitts Theory (1943)
-- the first theory treating the brain as a
computational organism
。Assumptions:
1. Neural activity is an all-or-none (on-or-off) process
2. A fixed number of synapses must be excited to make a neuron active
3. The only significant delay is the synaptic delay
4. An excited inhibitory synapse keeps the neuron inactive
5. The interconnection structure is fixed (does not change over time)
1-20
。 Basic neural circuits:
How do these relatively simple circuits
combine to give the brain numerous
abilities?
1-21
。 Propositional logic
Statement / Assertion: T, F
Proposition: P
Predicate: P(x)
Quantifiers: ∀, ∃
Operators: ∧, ∨, ¬, If…then
Propositional expression
◎ Definitions
N_i(t): neuron i fires at time t
¬N_i(t): neuron i does not fire at time t
Graphic representation of propositional
expressions:
(1) Excitatory connection
(2) Inhibitory connection
(3) Fire
(4) Activate
1-22
(5) Precession: N_2(t) = N_1(t − 1)
Neuron 2 activates after neuron 1 fires
(6) Disjunction: N_3(t) = N_1(t − 1) ∨ N_2(t − 1)
(7) Conjunction: N_3(t) = N_1(t − 1) ∧ N_2(t − 1)
(8) Conjoined negation:
N_3(t) = N_1(t − 1) ∧ ¬N_2(t − 1)
1-23
。Example: combination of simple networks
N_3(t) = N_1(t − 1) ∨ N_a(t − 1)
N_a(t) = N_b(t − 1) ∧ ¬N_2(t − 1)
N_b(t) = N_2(t − 1)
⇒ N_a(t) = N_2(t − 2) ∧ ¬N_2(t − 1)
⇒ N_3(t) = N_1(t − 1) ∨ (N_2(t − 3) ∧ ¬N_2(t − 2))
Similarly,
N_4(t) = N_2(t − 1) ∧ N_2(t − 2)
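These equations can be simulated directly. A minimal sketch, assuming binary signal sequences and treating times before t = 0 as silent; the function and variable names are illustrative:

```python
def mp_network(n1_seq, n2_seq):
    # Simulate the combined McCulloch-Pitts network:
    #   N3(t) = N1(t-1) OR (N2(t-3) AND NOT N2(t-2))
    #   N4(t) = N2(t-2) AND N2(t-1)
    def at(seq, k):
        return seq[k] if k >= 0 else 0   # before t = 0 nothing fires
    T = len(n1_seq)
    n3 = [int(at(n1_seq, t - 1) or (at(n2_seq, t - 3) and not at(n2_seq, t - 2)))
          for t in range(T)]
    n4 = [int(at(n2_seq, t - 2) and at(n2_seq, t - 1)) for t in range(T)]
    return n3, n4

# A brief pulse on N2 (t = 0 only) makes N3 fire at t = 3,
# while a sustained N2 would drive N4 instead.
n3, n4 = mp_network([0, 0, 0, 0, 0], [1, 0, 0, 0, 0])
print(n3)   # [0, 0, 0, 1, 0]
```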
1-24
1.1.4 Hebbian Learning
○ Learning process –
Modifies the network to incorporate new
information
○ Hebb’s Learning Theory -- When cell A is near
enough to excite cell B and repeatedly or
persistently takes part in firing it, some
metabolic process takes place in one or both
cells such that A’s efficiency in firing B
is increased
Example (Pavlov experiment):
Initially:
C -- B (C fires B), A --x-- B (A cannot fire B)
After training:
C -- B and A -- B (both C and A can fire B)
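A minimal sketch of a Hebbian weight update in this spirit, assuming the plain rule Δw = η · x_pre · x_post; the learning rate and the pairing loop are illustrative:

```python
def hebb_update(w, x_pre, x_post, eta=0.5):
    # Hebb rule: strengthen w when pre- and post-synaptic cells fire together.
    return w + eta * x_pre * x_post

# Pavlov-style pairing: C (which already fires B) is presented together with A.
w_ab = 0.0                    # initially A --x-- B: no effective connection
for _ in range(5):            # repeated pairings of A with C
    a, b = 1, 1               # C fires B, so B is active while A is active
    w_ab = hebb_update(w_ab, a, b)
print(w_ab)                   # the A -- B connection has been strengthened (2.5)
```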
1-25
1.2 From Neurons to ANS
1.2.1 Processing Element (PE)
--- Sometimes called artificial neuron, node,
or processing unit.
◎ PE model
1-26
Input: excitatory, inhibitory, gain, bias x_j
Weight: connection strength w_ij
Net input: net_i = Σ_j x_j w_ij = w_i · x = w_i^T x
Activation function (or state function):
a_i(t) = F_i(a_i(t − 1), net_i(t))
Transfer function (or output function):
x_i = f_i(a_i)
Simplification: x_i = f_i(net_i)
◎ Dynamic system: a system that evolves over time
Example: ẋ_i = −x_i + f_i(net_i), where x_i is the output
If net_i ≠ 0 is held for a sufficiently long time,
⇒ x_i reaches an equilibrium value,
i.e., ẋ_i = 0, so that
x_i = f_i(net_i)
If the input is removed, net_i = 0
⇒ f_i(net_i) = 0
⇒ ẋ_i = −x_i, and the output decays to zero
1-27
◎ Learning: modification of synaptic strengths
(i.e., weight values)
ẇ_ij = G_i(w_ij, x_i, x_j, …),
where G_i is the learning law
1.2.3 Example Neural Models
◎ Photoperceptron
Receptive field, Excitatory connection,
Inhibitory connection, Feedback
1-28
In the above figure,
1. Each R unit inhibits the A units in the
complement of its own source set
2. Each R unit inhibits the other R units
These mechanisms aid in the establishment of a
winning R unit (competition)
◎ Applications
。Classify patterns into categories
。Distinguish pattern classes
1-29
◎ The following neural model (NM) can differentiate patterns
only if the patterns are linearly separable.
(Minsky & Papert 1969)
Let Φ = {φ_1, φ_2, …, φ_n} be a set of inputs,
where φ_i = 1 (on) or 0 (off)
Let {α_1, α_2, …, α_n}
be a corresponding set of weights
Output: ψ = 1 if Σ_n α_n φ_n > θ; ψ = 0 otherwise
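A minimal sketch of this linear threshold unit; the AND weights and θ below are illustrative values for one linearly separable case:

```python
import numpy as np

def ltu(phi, alpha, theta):
    # Linear threshold unit: psi = 1 if sum_i alpha_i * phi_i > theta, else 0.
    return int(np.dot(alpha, phi) > theta)

# AND is linearly separable, so a single unit suffices.
alpha, theta = np.array([1.0, 1.0]), 1.5
for phi in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(phi, ltu(np.array(phi), alpha, theta))   # fires only for (1, 1)
```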
1-30
◎ XOR problem (cannot be solved by a
perceptron)
Output function: f(net) = 1 if net > θ;
f(net) = 0 if net ≤ θ
Activation: net = w_1 x_1 + w_2 x_2
Question: select w_1 and w_2 such that each
pair of input values results in the proper
output value.
This task cannot be done
1-31
Reason:
Let θ = w_1 x_1 + w_2 x_2,
a line in the (x_1, x_2) plane.
No single line can separate the inputs {(0,1), (1,0)} (output 1)
from {(0,0), (1,1)} (output 0).
One solution: a multilayer network with a hidden unit,
as in the sketch below.
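The original figure is not recoverable here, but one standard multilayer solution can be sketched with hand-chosen weights; all weight and threshold values below are illustrative:

```python
import numpy as np

def ltu(phi, alpha, theta):
    # Same threshold unit as before: 1 if alpha . phi > theta, else 0.
    return int(np.dot(alpha, phi) > theta)

def xor_net(x1, x2):
    # Hidden layer computes OR and AND; the output fires for OR AND (NOT AND),
    # which is exactly XOR. Weights and thresholds are chosen by hand.
    h_or = ltu(np.array([x1, x2]), np.array([1.0, 1.0]), 0.5)
    h_and = ltu(np.array([x1, x2]), np.array([1.0, 1.0]), 1.5)
    return ltu(np.array([h_or, h_and]), np.array([1.0, -1.0]), 0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(*x))   # prints 0, 1, 1, 0: the XOR truth table
```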