Neural Networks
Lecture 1: Motivation & History
Fall 2010
September 7, 2010

Instructor: Marc Pomplun
Rooms: S-3-171, S-3-135
Times: Tuesdays 14:30-16:00, Thursdays 19:00-20:30
Phone: 287-6443 (office), 287-6485 (lab)
E-mail: marc@cs.umb.edu
The Visual Attention Lab
Cognitive research, esp. eye movements
Example: Distribution of Visual Attention
Selectivity in Complex Scenes
Artificial Intelligence
Modeling of Brain Functions
Biologically Motivated Computer Vision
Human-Computer Interfaces
Grading
For the assignments, exams, and your course grade, the following scheme will be used to convert percentages into letter grades:
95%: A
90%: A-
86%: B+
82%: B
78%: B-
74%: C+
70%: C
66%: C-
62%: D+
56%: D
50%: D-
below 50%: F
Complaints about Grading
If you think that the grading of your assignment or exam was unfair,
• write down your complaint (handwriting is OK),
• attach it to the assignment or exam,
• and give it to me or put it in my mailbox.
I will re-grade the whole exam/assignment and return it to you in class.
“Standard” Computers vs. Neural Networks:

  “Standard” Computers       Neural Networks
  one CPU                    highly parallel processing
  fast processing units      slow processing units
  reliable units             unreliable units
  static infrastructure      dynamic infrastructure
There are two basic reasons why we are interested in building artificial neural networks (ANNs):
• Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.
• Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.
Why do we need a paradigm other than symbolic AI for building “intelligent” machines?
• Symbolic AI is well-suited for representing explicit knowledge that can be appropriately formalized.
• However, learning in biological systems is mostly implicit – it is an adaptation process based on uncertain information and reasoning.
• ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.
• The “building blocks” of neural networks are the neurons.
• In technical systems, we also refer to them as units or nodes.
• Basically, each neuron
– receives input from many other neurons,
– changes its internal state (activation) based on the current input,
– sends one output signal to many other neurons, possibly including its input neurons (recurrent network), as sketched below.
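As a concrete illustration, here is a minimal Python sketch of such a unit; the sigmoid activation, the particular weights, and all names are assumptions for illustration, not part of the lecture material:

import math

def sigmoid(x):
    # Squash the net input into an activation value in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(inputs, weights, bias=0.0):
    # One artificial neuron: form the weighted sum of the incoming
    # signals, then apply the activation function to obtain the
    # unit's new activation, which it sends as its output signal.
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(net)

# A unit receiving output signals from three other neurons:
print(unit_output([0.5, 1.0, 0.2], [0.8, -0.4, 1.5]))

In a recurrent network, the value returned here would be fed back as one of the unit's own inputs on the next time step.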
• Information is transmitted as a series of electric impulses, so-called spikes.
• The frequency and phase of these spikes encode the information (a toy rate-coding sketch follows below).
• In biological systems, one neuron can be connected to as many as 10,000 other neurons.
• Usually, a neuron receives its information from other neurons in a confined area, its so-called receptive field.
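To make the rate-coding idea tangible, here is a toy Python sketch; the number of time bins and all names are illustrative assumptions, and real spike trains also carry timing and phase information that this ignores:

import random

def rate_code(value, n_bins=10):
    # Encode a stimulus strength in [0, 1] as a spike train:
    # the stronger the input, the more time bins contain a spike,
    # i.e., the higher the firing rate.
    return [1 if random.random() < value else 0 for _ in range(n_bins)]

train = rate_code(0.8)
print(train, "firing rate ~", sum(train) / len(train))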
History of Artificial Neural Networks
1938 Rashevsky describes neural activation dynamics by means of differential equations
1943 McCulloch & Pitts propose the first mathematical model for biological neurons
1949 Hebb proposes his learning rule: repeated activation of one neuron by another strengthens their connection (see the sketch after this list)
1958 Rosenblatt invents the perceptron by essentially adding a learning algorithm to the McCulloch & Pitts model
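Hebb's rule is simple enough to state in one line of code. A hedged Python sketch, with the learning rate and all names chosen for illustration only:

def hebbian_update(w, pre, post, eta=0.1):
    # Hebb's rule: strengthen the connection when the presynaptic
    # and postsynaptic neurons are active together (dw = eta * pre * post).
    return w + eta * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)  # co-activation strengthens w
print(w)  # 0.6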
History of Artificial Neural Networks
1960 Widrow & Hoff introduce the Adaline, a simple network trained through gradient descent (a sketch of this update follows after this list)
1961 Rosenblatt proposes a scheme for training multilayer networks, but his algorithm is weak because of non-differentiable node functions
1962 Hubel & Wiesel discover properties of the visual cortex that motivate self-organizing neural network models
1963 Novikoff proves the Perceptron Convergence Theorem
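As a hedged sketch of the gradient-descent training mentioned for the Adaline, the Widrow-Hoff (LMS) update for a single linear unit can be written as follows; the learning rate, the training example, and all names are assumptions for illustration:

def lms_update(w, x, target, eta=0.05):
    # Widrow-Hoff (LMS) rule: adjust the weights along the negative
    # gradient of the squared error of a linear unit.
    y = sum(wi * xi for wi, xi in zip(w, x))  # linear output
    error = target - y
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(100):  # repeated presentations of one training example
    w = lms_update(w, x=[1.0, 0.5], target=1.0)
print(w)  # the unit's output for [1.0, 0.5] is now close to 1.0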
History of Artificial Neural Networks
1964 Taylor builds the first winner-take-all neural circuit with inhibition among output units
1969 Minsky & Papert show that perceptrons are not computationally universal; interest in neural network research decreases
1982 Hopfield develops his auto-association network
1982 Kohonen proposes the self-organizing map
1985 Ackley, Hinton & Sejnowski devise a stochastic network named the Boltzmann machine
History of Artificial Neural Networks
1986 Rumelhart, Hinton & Williams provide the backpropagation algorithm in its modern form, triggering new interest in the field
1987 Hecht-Nielsen develops the counterpropagation network
1988 Carpenter & Grossberg propose Adaptive Resonance Theory (ART)
Since then, research on artificial neural networks has remained active, leading to numerous new network types and variants, as well as hybrid algorithms and hardware for neural information processing.