Subject : T0293/Neuro Computing
Year    : 2009
Introduction
Meeting 1
Basic Concept of Neural Networks
• Traditional, sequential, logic-based digital computing excels in many areas but has been less successful for other types of problems.
• Artificial Neural Networks (ANNs) were motivated by a desire both to understand the brain and to emulate some of its strengths.
• The basic idea of an ANN is to adopt brain-like mechanisms, i.e.:
  1. Each processing element receives many signals
  2. Fault tolerance
  3. Parallel processing
Artificial Intelligence – what is it?
A set of scientific disciplines that aims to model the rational, intellectual behaviour of the human mind:
• Representation and accumulation of knowledge
• The ability to reason and retrieve relevant information
• The ability to adapt through learning and self-organization
We need to:
• Store knowledge
• Apply the knowledge to the solution of problems
• Acquire new knowledge
[Diagram labels: optimal team behavior for goal achievement or a hierarchy of targets, role models; meaning analysis, semantics, crisp logic, rules of inference, emotion]
• An Artificial Neural Network is characterized by:
  1. its pattern of connections between the neurons (= its architecture)
  2. its method of determining the weights on the connections (= its learning / training algorithm)
  3. its activation function
• An Artificial Neural Network consists of many processing elements called neurons / units / cells / nodes
Neural Network Characteristics:
• Philosophies:
  – Learning
  – Generalization
  – Abstraction
  – Applicability
• Techniques:
  – Node Characteristics
  – Topology (structure of node connections)
  – Learning Rules
Neural Networks Development
• The 1940s: The Beginning of Neural Nets
  – McCulloch-Pitts Neurons
    • Warren McCulloch & Walter Pitts (1943)
    • The neurons can be arranged into a net to produce any output that can be represented as a combination of logic functions
    • Feature: the idea of a threshold such that if the net input to a neuron is greater than the threshold, the unit fires (see the sketch below)
    • Mainly used as logic circuits
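A minimal sketch of the threshold idea in Python: a McCulloch-Pitts unit fires (outputs 1) only when the sum of its weighted binary inputs reaches the threshold. The weights and threshold below are illustrative choices that realize an AND gate, not values from the original paper.

    # McCulloch-Pitts unit: fire iff the net input reaches the threshold
    def mp_neuron(inputs, weights, threshold):
        net = sum(w * x for w, x in zip(weights, inputs))
        return 1 if net >= threshold else 0

    # With weights (1, 1) and threshold 2, the unit computes logical AND
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, "->", mp_neuron(x, weights=(1, 1), threshold=2))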
  – Hebb Learning
    • Donald Hebb (1949)
    • The first learning law for ANNs
    • Premise: if two neurons are active simultaneously, then the strength of the connection between them should be increased
    • Feature: weight adjustment (a minimal sketch follows below)
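A minimal sketch of Hebbian weight adjustment, assuming the common bipolar formulation Δw_i = x_i · y: each weight grows when its input and the unit's output are active together. Training on the bipolar AND function is an illustrative task, not part of Hebb's original work.

    # Hebb rule: strengthen w_i when input x_i and output y are active together
    def hebb_train(samples, n_inputs):
        w = [0.0] * n_inputs
        b = 0.0
        for x, y in samples:                    # y is the desired bipolar output
            w = [wi + xi * y for wi, xi in zip(w, x)]
            b += y                              # bias treated as a weight on input 1
        return w, b

    # Bipolar AND: output +1 only when both inputs are +1
    samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
    w, b = hebb_train(samples, n_inputs=2)
    print(w, b)  # w = [2, 2], b = -2 correctly separates the AND classes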
• The 1950s & 1960s: The First Golden Age of Neural Networks
  – Perceptrons
    • Studied by a group of researchers (Block, 1962; Minsky & Papert, 1969; Rosenblatt, 1958, 1959, 1962)
    • A typical perceptron consists of an input layer (the retina) connected by paths with fixed weights to associator neurons; the weights on the connection paths were adjustable
    • The learning rule uses an iterative weight adjustment that is more powerful than the Hebb rule (see the sketch below)
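A minimal sketch of the perceptron learning rule in its classic form: weights change only when the unit's response is incorrect, using w ← w + η·t·x with bipolar target t. The AND task, learning rate, and epoch limit are illustrative.

    # Perceptron rule: adjust weights only when the unit's response is incorrect
    def perceptron_train(samples, n_inputs, lr=1.0, epochs=20):
        w = [0.0] * n_inputs
        b = 0.0
        for _ in range(epochs):
            errors = 0
            for x, t in samples:                 # t is the bipolar target
                y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
                if y != t:                       # iterate until all responses correct
                    w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                    b += lr * t
                    errors += 1
            if errors == 0:
                break
        return w, b

    samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
    print(perceptron_train(samples, n_inputs=2))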
  – ADALINE
    • Bernard Widrow & Marcian Hoff (1960)
    • Its delta learning rule is closely related to the perceptron learning rule
    • The perceptron learning rule adjusts the connection weights to a unit whenever the response of the unit is incorrect
    • The delta rule instead adjusts the weights to reduce the difference between the net input to the output unit and the desired output (see the sketch below)
    • This learning rule for a single-layer network is a precursor of the backpropagation rule for multilayer nets
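A minimal sketch of the delta (Widrow-Hoff) rule as described above: the update is driven by the difference between the desired output and the raw net input, not the thresholded response. The bipolar AND task, learning rate, and epoch count are illustrative.

    # Delta rule: w_i += lr * (t - net) * x_i, driven by the raw net input
    def adaline_train(samples, n_inputs, lr=0.1, epochs=50):
        w = [0.0] * n_inputs
        b = 0.0
        for _ in range(epochs):
            for x, t in samples:
                net = sum(wi * xi for wi, xi in zip(w, x)) + b
                err = t - net                    # error w.r.t. net input, not response
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
    w, b = adaline_train(samples, n_inputs=2)
    print(w, b)  # approaches the least-squares solution w = (0.5, 0.5), b = -0.5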
• The 1970s: The Quiet Years
  – Kohonen
    • Teuvo Kohonen (1972) – Helsinki University of Technology
    • Dealt with associative memory neural nets
    • Basis for the development of self-organizing feature maps that use a topological structure for the cluster units (see the sketch below)
    • Applications: speech recognition, solution of the Travelling Salesman Problem (TSP), etc.
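A minimal sketch of the self-organizing map idea: cluster units sit on a topological grid (a 1-D line here), and the winning unit and its grid neighbours move toward each input, so nearby units come to represent nearby inputs. Grid size, learning rate, and neighbourhood radius are illustrative.

    import numpy as np

    def som_step(weights, x, lr=0.3, radius=1):
        """One update of a 1-D self-organizing map (weights: n_units x dim)."""
        winner = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        for j in range(len(weights)):
            if abs(j - winner) <= radius:       # topological neighbourhood on the line
                weights[j] += lr * (x - weights[j])
        return weights

    rng = np.random.default_rng(1)
    weights = rng.random((10, 2))               # 10 cluster units, 2-D inputs
    for x in rng.random((500, 2)):              # inputs drawn uniformly from the square
        weights = som_step(weights, x)
    print(weights.round(2))                     # neighbouring units end up near each other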
  – Anderson
    • James Anderson, Brown University (1968, 1972)
    • Dealt with associative memory neural nets
  – Grossberg
    • Stephen Grossberg (1967–1988)
  – Carpenter
    • Gail Carpenter, together with Grossberg, developed a theory of self-organizing neural networks called Adaptive Resonance Theory (ART1 for binary input patterns; ART2 for continuous-valued patterns)
• The 1980s: Renewed Enthusiasm
  – Backpropagation
    • Overcame the problems discovered in the previous decades, which stemmed from two things: the failure of single-layer perceptrons to solve such simple mapping problems as the XOR function, and the lack of a general method for training a multilayer net (a minimal sketch follows below)
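A minimal sketch, using NumPy, of a two-layer net trained by backpropagation on the XOR mapping that a single-layer perceptron cannot solve. Layer sizes, learning rate, seed, and iteration count are illustrative; convergence may need tweaking them.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)        # hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)        # output layer
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.5

    for _ in range(10000):
        H = sig(X @ W1 + b1)                             # forward pass
        Y = sig(H @ W2 + b2)
        dY = (Y - T) * Y * (1 - Y)                       # output-layer deltas
        dH = (dY @ W2.T) * H * (1 - H)                   # errors propagated back
        W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

    print(Y.round(2).ravel())  # should be close to [0, 1, 1, 0]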
  – Hopfield Nets
    • John Hopfield & David Tank (1982)
    • A number of NNs have been developed based on fixed weights and adaptive activations
    • These nets can serve as associative memory nets and can be used to solve constraint satisfaction problems such as the TSP (see the sketch below)
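A minimal sketch of a discrete Hopfield net used as an associative memory: weights are fixed by a Hebbian outer-product rule, and only the activations adapt during recall. The stored patterns and the noisy probe are illustrative.

    import numpy as np

    # Fixed Hebbian weights from stored bipolar patterns (zero diagonal)
    patterns = np.array([[1, 1, -1, -1, 1, -1],
                         [1, -1, 1, -1, -1, 1]])
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    def recall(probe, steps=10):
        s = probe.copy()
        for _ in range(steps):                  # activations adapt; weights stay fixed
            for i in range(len(s)):
                s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    noisy = np.array([1, 1, -1, -1, -1, -1])    # first pattern with one bit flipped
    print(recall(noisy))                        # settles on the stored pattern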
  – Neocognitron
    • Kunihiko Fukushima et al. (1975, 1988)
    • A series of specialized NNs for character recognition was developed
  – Boltzmann Machine
    • Nets in which weights or activations are changed on the basis of a probability density function (see the sketch below)
    • These nets incorporate such classical ideas as simulated annealing and Bayesian decision theory
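A minimal sketch of the probabilistic activation behind the Boltzmann machine: a unit turns on with a probability given by a logistic function of its net input, controlled by a temperature that is lowered over time (the simulated-annealing idea). All numbers here are illustrative.

    import math, random

    def stochastic_unit(net, temperature):
        # P(unit = 1) rises with net input; high temperature -> near-random behaviour
        p_on = 1.0 / (1.0 + math.exp(-net / temperature))
        return 1 if random.random() < p_on else 0

    # Annealing: the same net input gives increasingly deterministic responses
    for temperature in (10.0, 1.0, 0.1):
        samples = [stochastic_unit(net=1.0, temperature=temperature) for _ in range(1000)]
        print(temperature, sum(samples) / 1000.0)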
Artificial intellect with neural networks – application areas:
• Advanced Robotics
• Machine Vision
• Intelligent Control
• Technical Diagnostics
• Image & Pattern Recognition
• Intelligent Medical Devices
• Intelligent Security Systems
• Intelligent Data Analysis and Signal Processing
• Intelligent Expert Systems
• Future Neural Networks
  – Neural network models:
    • Hopfield Net
    • Multilayer networks (Backpropagation)
    • Bidirectional Associative Memory
    • Self-Organizing Maps, etc.
  – Applications:
    • Pattern Recognition
    • Image Compression
    • Optimization, etc.
References
WEB:
• http://www.ieeexplore.ieee.org/ (Neural Networks, Neural Computation, J. Cognitive Neuroscience)
• http://www.aic.nrl.navy.mil/galist
• http://www.illigal.ge.uiuc.edu:8080/
• http://www.cnel.ufl.edu/
• http://www.unipg.it/~sfr/publications/publications.htm