
Bain on Neural Networks and Connectionism

Stephanie Rosenthal

September 9, 2015

Associationism and the Brain

Aristotle counted four laws of association when he examined the processes of remembrance and recall:

1. The law of contiguity. Things or events that occur close to each other in space or time tend to get linked together.

2. The law of frequency. The more often two things or events are linked, the more powerful that association.

3. The law of similarity. If two things are similar, the thought of one will tend to trigger the thought of the other.

4. The law of contrast. Seeing or recalling something may also trigger the recollection of something opposite.

– Dr. C. George Boeree

Dawn of Connectionism

David Hartley’s Observations on Man (1749)

• We receive input through vibrations and those are transferred to the brain

• Memories could also be small vibrations (called vibratiuncles) in the same regions

• Our brain represents compound or connected ideas by connecting our memories with our current senses

• The science of the day did not yet know about neurons

Dawn of Connectionism

Alexander Bain (The Senses and the Intellect (1855), The Emotions and the Will (1859), Mind and Body (1873))

• Knowing that the brain was composed of neurons, he tried to match what he knew about memory with the structure of the brain (associationism + structure)

• Idea 1: The “nerve currents” from a memory of an event are the same in kind as, but reduced from, the “original shock”

• Idea 2: “for every act of memory, … there is a specific grouping, or co-ordination of sensations … by virtue of specific growths in cell junctions”

Bain’s Idea 1: Neural Groupings

Neurons excite and stimulate each other

They are flexible so the combinations of stimulations can produce different results

Bain’s Idea 1: Neural Groupings

Different intensities of activation of A lead to differences in when X and Y are activated
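Bain’s grouping idea can be sketched as simple threshold units. This is a minimal illustration, not from the slides: the weights and thresholds are assumed, with one input node A feeding two output nodes X and Y through connections of different strength, so the intensity of A determines which nodes fire.

```python
def fires(activation, threshold):
    """A node fires when its incoming activation reaches its threshold."""
    return activation >= threshold

def group_response(a_intensity):
    # X is strongly connected to A, Y only weakly (assumed strengths),
    # so a weak stimulus activates X alone and a strong one activates both.
    w_ax, w_ay = 1.0, 0.4
    x = fires(a_intensity * w_ax, threshold=0.5)
    y = fires(a_intensity * w_ay, threshold=0.5)
    return x, y

print(group_response(0.6))   # weak stimulus:   (True, False)
print(group_response(1.5))   # strong stimulus: (True, True)
```

The same flexible group of nodes thus produces different results for different input intensities, as the slide describes.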

Bain’s Idea 2: Making Memories

“when two impressions concur, or closely succeed one another, the nerve currents find some bridge or place of continuity, better or worse, according to the abundance of nerve matter available for the transition.”

Hebb on Neural Nets (1949)

“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”

Sounds a lot like Bain…

Bain’s Doubts

• In 1873, Bain postulated that there must be one million neurons and 5 billion connections relating to 200,000 “acquisitions”

• In 1883, Bain was concerned that he hadn’t taken into account the number of “partially formed associations” and the number of neurons responsible for recall/learning

Connectionism

Definition: “a movement in cognitive science that hopes to explain intellectual abilities using artificial neural networks”

Alternative: Classicism argues that symbolic representations are encoded directly into memory

Training Neural Networks

Hebbian Learning is a well-known unsupervised technique that strengthens the weight between a pair of nodes when the two nodes are often active at the same time
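The Hebbian rule can be written in a few lines. This is a minimal sketch; the function name and learning rate are illustrative, not from the slides: the weight grows in proportion to the product of the two nodes’ simultaneous activations.

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen w when the pre- and post-synaptic nodes are active together."""
    return w + lr * pre * post

w = 0.0
# both nodes repeatedly active at the same time -> the weight strengthens
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 0.5
```

Note that nothing in the rule references a desired output, which is what makes it unsupervised.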

Supervised Learning requires a training set that is shown to the net in sequence, with weights adjusted until the output matches the known desired output
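The supervised scheme can be sketched with a perceptron-style delta rule. This is one simple instance of the idea, under assumptions not in the slides: a toy one-input threshold unit and made-up training data. Each example is shown in sequence and the weights move toward the desired output.

```python
def train(examples, lr=0.2, epochs=20):
    """Adjust a one-input threshold unit toward the known desired outputs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:       # training set shown in sequence
            y = 1.0 if w * x + b > 0 else 0.0
            err = target - y             # difference from desired output
            w += lr * err * x            # move weights to reduce the error
            b += lr * err
    return w, b

# toy task (assumed data): output 1 for x >= 2, else 0
examples = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0)]
w, b = train(examples)
predict = lambda x: 1.0 if w * x + b > 0 else 0.0
print([predict(x) for x, _ in examples])  # [0.0, 0.0, 1.0, 1.0]
```

Choosing the learning rate, the number of epochs, and the presentation order is exactly the “fine art” the next line refers to.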

Still a “fine art”

Strengths of NNs

Connectionist models seem particularly well matched to what we know about neurology

Neural networks are well adapted for problems that require the resolution of many conflicting constraints in parallel.

Connectionist models accommodate graded notions of category membership.

Weaknesses of NNs

Connectionists usually do not attempt to explicitly model the variety of different kinds of brain neurons, nor the effects of neurotransmitters and hormones.

It is far from clear that the brain contains the kind of reverse connections that would be needed if the brain were to learn by a process like backpropagation.

It is widely felt that neural networks are not good at the kind of rule-based processing that underlies language and reasoning.

Weaknesses of NNs

Systematicity – the ability to understand a concept in a new combination without having learned that particular application of it

Example: “John loves Mary” vs “Mary loves John”

Fodor and McLaughlin show that it is possible to construct NNs that recognize John’s love but not Mary’s, which is not how the human brain behaves

Other Ideas

Folk Psychology says people have plans, beliefs, and desires, but there does not seem to be a brain structure that supports these units

Predictive Coding says that our brains are constantly comparing reality to memory and detecting surprises. Training NNs to detect surprise requires adding backward edges that correspond to differences from the “generic”
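The compare-and-detect step can be sketched in a few lines. This is a minimal illustration with assumed names and an assumed threshold, not the slides’ method: a stored “generic” expectation is compared with the incoming observation, and a large prediction error is flagged as a surprise.

```python
def surprise(expected, observed, threshold=0.5):
    """Return the prediction error and whether it counts as a surprise."""
    error = abs(observed - expected)   # mismatch between memory and reality
    return error, error > threshold

print(surprise(expected=1.0, observed=1.1))  # small error, no surprise
print(surprise(expected=1.0, observed=2.0))  # (1.0, True): a surprise
```

In a network, the backward edges mentioned above would carry the expectation down so this error can be computed at each layer.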

Summary

Bain first discussed neurons and their ability to compute different values in networks

Connectionism is the idea that memories are stored in the computation itself, versus the classicist view of a digital memory separate from processing

NNs have been successfully applied to many different problems, but there are still challenges to make them act like human brains

Questions and Comments
