This is lecture one of 'Biologically Inspired Computing'.
Contents: course structure; motivation for BIC; BIC vs classical computing; overview of BIC techniques.
Go to my home page: www.macs.hw.ac.uk/~dwcorne/
Find my Teaching Materials page, and go on from there.
Course Delivery
Lectures run in three weekly slots: Monday 3:15 (EM306), Wednesday 11:15 (EM303) and Thursday 4:15 (EM307), in the weeks beginning 12th Jan to 30th Mar.
Week of 12th Jan: DC overview of module (Mon); DC EAs I (Wed); NT Neural Computation (Thu).
Week of 19th Jan: DC Evolutionary Computation (Mon and Wed); NT Neural Computation (Thu); DC hands out coursework 1, worth 25% of the module.
Week of 26th Jan: DC Swarm Intelligence (Mon and Wed); NT Neural Computation (Thu).
Week of 2nd Feb: DC Kohonen Networks (Mon); DC Cellular Automata (Wed); NT Neural Computation (Thu).
Weeks of 9th Feb to 9th Mar: mainly PF (Molecular Computing), with some NT (Neural Computation) slots; DC hands out coursework 2, worth 10% of the module (PF c/w TBA); Friday hand-in for coursework 2.
Weeks of 16th Mar to 30th Mar: remaining NT and PF lectures, ending with revision lectures from DC, PF and NT; Friday hand-in for coursework 1.
Course materials:
• All the slides (all online)
• A few additional papers and notes provided online
Assessment: exam 50%; DC coursework 35% (c/w 1, a programming/experiments assignment, 25%; c/w 2, a question sheet, 10%); PF coursework 15%.
• DC = David Corne, will generally lecture about bio-inspired methods for optimisation, with a focus on evolutionary computation (aka genetic algorithms). Broadly, this is about how certain aspects of nature (evolution, swarm behaviour) lead to very effective optimisation and design methods.
• PF = Pier Frisco, will generally lecture about molecular computing: how computation is done within biological cells, how that can be exploited, and how it inspires new ideas in computer science.
• NT = Nick Taylor, will generally lecture about neural computation: this is perhaps the most widely exploited bio-inspired technique, and underpins how we can build machines that learn from examples.
About the DC Coursework
Programming assignment: worth 25% of the module; released in week 2.
Quiz questions: there will be 10 questions, each worth 1 mark. The questions are at the end of my ppt lecture material, and sometimes at the end of additional (ppt) reading material; hence they are released gradually.
About the Exam
Answer THREE Questions from FOUR
You might expect that DC/NT/PF contribute roughly 1/3 each to the exam.
The (DC) MSc coursework will be based on the BSc coursework, but a bit harder with more to do.
Lecture 1:
Classical computation vs biological computation
Motivation for biologically inspired computation
Overview of some biologically inspired computing techniques
What to take from this lecture:
What 'classical computing' is, and what kinds of tasks it is naturally suited for.
What classical computing is not good at.
An appreciation of how computation and problem solving are manifest in biological systems.
Appreciation of the fact that many examples of computations done by biological systems are not yet matched by what we can do with computers.
An understanding of the motivation (consequent on the above) for studying how computation is done in nature .
A first, basic knowledge of the main BIC methods in current, successful use.
• The fridge story
• How do you tell the difference between a dog and a cat?
• How do you tell the difference between a male and a female face?
• How do you design a perfect flying machine?
• How would we design the software for a robot that could make a cup of tea in your kitchen?
• What happens if you:
– Cut off a salamander’s tail?
– Cut off a section of a CPU?
Classical computing is good at:
Number-crunching
Thought-support (glorified pen-and-paper)
Rule-based reasoning
Constant repetition of well-defined actions.
Classical computing is bad at:
Pattern recognition
Robustness to damage
Dealing with vague and incomplete information
Adapting and improving based on experience
• Automatically locate a small outburst of violent behaviour in a football crowd
• Classify a plant species from a photograph of a leaf.
• Make a cup of tea?
Things like these come up a lot when we think of what we would like to be able to do with software, but usually can't.
But these are things that seem to be done very well indeed in biology.
So it seems like a good idea to study how these things are done in biology, i.e. (usually) how computation is done by biological machines.
Pattern recognition is often called classification
Formally, a classification problem is like this:
We have a set of things: S
(e.g. images, videos, smells, vectors, …)
We have n possible classes, c1, c2, …, cn, and we know that everything in S should be labelled with precisely one of these classes.
In computational terms, the problem is:
Can we design a computational process that takes a thing s (from S) as its input, and always outputs the correct class label for s?
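To make this concrete, here is a minimal sketch (not from the course materials) of one simple way to build such a process: a nearest-neighbour classifier, which labels a new thing with the class of the most similar labelled example. The data, features and names are all illustrative.

```python
# A minimal nearest-neighbour classifier: label a new item s with the
# class of the most similar labelled example. The data is illustrative:
# each "thing" in S is a feature vector (e.g. measurements of an animal).

def classify(s, labelled_examples):
    """Return the class label of the labelled example nearest to s."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = min(labelled_examples, key=lambda ex: distance(s, ex[0]))
    return nearest[1]

# Hypothetical training data: (feature vector, class label) pairs.
examples = [((5.1, 1.4), "cat"), ((4.9, 1.3), "cat"),
            ((7.0, 4.7), "dog"), ((6.4, 4.5), "dog")]

print(classify((5.0, 1.5), examples))  # -> "cat"
```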
What S might be, and what the classes might be:
1. Images of people's faces → male, female
2. Smells (e.g. molecular spectra) → illicit_drugs, ok; or fresh meat, good meat, rotten meat
3. Utterances of "hello" (e.g. in wav files) → child, man, woman; or authorised-person, unauthorised
4. Renditions of my signature → genuine, fake
5. Images of artworks → beautiful, good, reasonable, awful
6. Patient data (results of various blood tests) → malignant, benign
7. Applications for loans → good risk, medium risk, bad risk
8. Aircraft engine diagnostics → safe, needs-maintenance, ground
9. Ground-penetrating radar images → land-mine-here, safe
The idea of these examples is to:
1. Remind you that pattern recognition is something you do easily, and all the time, and that you (probably) do it much better than we can with classical computation (e.g. 1, 2, 3, 5).
2. Remind (or inform) you that such complex pattern recognition problems are not yet done well by software (e.g. 1, 2, 3, 5).
3. Indicate that there are some very important problems we would like to solve with software (9, 8, 6, 2 and 7 are obvious, but of course we would like to do all nine and much more), which are classification problems, and note that these are just as hard as examples 1, 2, 3 and 5.
4. So, hopefully, we can learn how brains do 1, 2, 5 etc., so that we can build machines that find land mines, tell fake from genuine signatures, diagnose disease, and so on …
The business end of the brain is made of lots of neurons, joined together in networks. Much of our own "computation" is performed in/by such networks.
The brain is a complex tangle of neurons, connected by synapses
When neurons are active, they send signals to others.
A neuron with lots of 'strong' active inputs will become active.
And when connected neurons are active at the same time, the link between them gets stronger.
So, suppose these neurons happen to be active when you see a fluffy animal with big eyes, small ears and a pointed face
… and suppose your mother then says “Cat”, which excites this additional neuron.
Links will then strengthen between the active neurons
So when you see a similar animal again, this neuron will probably be activated automatically, helping you classify it.
A slightly different group of neurons will respond to dogs, and sometimes both the “cat” and “dog” group will be active, but one will be more active than the other …
What happens if we damage a single neuron (remember, in reality there will be thousands involved in simple classification-style computations)?
Compare this with damaging a line of code.
In classical computing we provide rules; but biology seems to learn gradually from examples.
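The "links strengthen between co-active neurons" idea above is known as Hebbian learning. Here is a minimal sketch of that update rule; the unit count, unit indices and learning rate are illustrative, and this is a cartoon, not a model of real neurons.

```python
# Minimal Hebbian-learning sketch: the weight between two units grows
# whenever both are active together. Unit count and rate are illustrative.
n_units = 5
weights = [[0.0] * n_units for _ in range(n_units)]
rate = 0.1

def hebbian_update(active):
    """Strengthen the link between every pair of co-active units."""
    for i in active:
        for j in active:
            if i != j:
                weights[i][j] += rate

# Suppose units 0-3 fire for "fluffy, big eyes, small ears, pointed face",
# and unit 4 fires when your mother says "cat"; all five are co-active.
for _ in range(10):                    # repeated exposures
    hebbian_update([0, 1, 2, 3, 4])

# Later, seeing a similar animal activates units 0-3; the strengthened
# links give the "cat" unit (4) a strong summed input.
print(sum(weights[i][4] for i in [0, 1, 2, 3]))   # -> 4.0
```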
We have 3 items as follows: (item 1: 20kg; item 2: 75kg; item 3: 60kg)
Suppose we want to find the subset of items with total weight closest to 100kg.
Which is the best?
101 (total 80kg)
000 (total 0kg)
100 (total 20kg)
110 (total 95kg)
010 (total 75kg)
111 (total 155kg)
011 (total 135kg)
001 (total 60kg)
Well done, you just searched the space of possible subsets. You also found the optimal one: 110, with total weight 95kg. If the above set of subsets is called S, and the subsets themselves are s1, s2, s3, etc., then you just optimised the function "closest_to_100kg(s)"; i.e. you found the s which minimises the function |weight(s) - 100|.
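For reference, here is what that exhaustive search looks like as a program: a minimal Python sketch (the function and variable names are mine), with subsets encoded as bitstrings exactly as in the list above.

```python
from itertools import product

# Brute-force search over all subsets of the three items, encoded as
# bitstrings as above: bit k = 1 means "include item k".
weights = [20, 75, 60]
target = 100

def subset_weight(bits):
    return sum(w for w, b in zip(weights, bits) if b)

# Enumerate all 2^3 = 8 subsets; keep the one minimising |weight - 100|.
best = min(product([0, 1], repeat=3),
           key=lambda bits: abs(subset_weight(bits) - target))
print(best, subset_weight(best))  # -> (1, 1, 0) with total weight 95kg
```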
In general, optimisation means that you are trying to find the best solution you can (usually in a short time) to a given problem.
We always have a set S of all possible solutions s1, s2, s3, …
S may be small (as just seen).
S may be very, very, very, very large (e.g. all possible timetables for a 500-exam/3-week diet); in fact something like 10^30 possible solutions is typical for real problems.
S may be infinitely large, e.g. all real numbers.
We wish to design something, and we want the best possible (or at least a very good) design.
The set S is the set of all possible designs. It is always much too large to search through one by one; nevertheless, we want to find good examples in S.
In nature, this problem seems to be solved wonderfully well, again and again and again, by evolution.
Nature has designed millions of extremely complex machines, each almost ideal for its task (assuming an environment that doesn't go haywire), using evolution as the only mechanism.
Clearly, this is worth trying for solving problems in science and industry.
Evolutionary algorithms:
Use nature's evolution mechanism to evolve solutions to all kinds of problems. E.g. to find a very aerodynamic wing design, we essentially simulate the evolution of a population of wing designs. Good designs stay in the population and breed; poor designs die out. EAs are highly successful and come in many variants, and there is a lot to learn about how to apply them well to new problems.
We will do quite a lot on EAs. EAs are all about optimisation; but classification can itself be cast as an optimisation problem, so EAs work there too …
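As a taster of the EA lectures to come, here is a minimal sketch of an EA applied to the 100kg subset problem from earlier. The population size, mutation rate and generation count are arbitrary illustrative choices, and real EAs usually add crossover as well.

```python
import random

# Minimal evolutionary algorithm sketch on the subset problem above.
# Population size, mutation rate and generation count are arbitrary.
weights = [20, 75, 60]

def fitness(bits):                      # lower is better
    return abs(sum(w for w, b in zip(weights, bits) if b) - 100)

def mutate(bits, p=0.2):                # flip each bit with probability p
    return tuple(b ^ (random.random() < p) for b in bits)

population = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(6)]
for generation in range(20):
    population.sort(key=fitness)        # good designs stay...
    parents = population[:3]            # ...poor designs die out
    children = [mutate(random.choice(parents)) for _ in range(3)]
    population = parents + children

print(min(population, key=fitness))     # typically (1, 1, 0)
```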
A genetically optimised three-dimensional truss with improved frequency response.
An EA-optimised concert-hall design, which improves on human designs in terms of sound quality averaged over all listening points.
Swarm Intelligence
How do swarms of birds, fish, etc. manage to move so well as a unit? How do ants manage to find the best sources of food in their environment?
Answers to these questions have led to some very powerful new optimisation methods, different from EAs: these include ant colony optimisation and particle swarm optimisation.
Also, only by studying how real swarms work are we able to simulate realistic swarming behaviour (e.g. as done in Jurassic Park, Finding Nemo, etc.).
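To give a flavour of particle swarm optimisation, here is a minimal sketch that minimises a simple one-dimensional function. The swarm size, inertia and attraction constants are illustrative choices, not recommended values.

```python
import random

# Minimal particle swarm optimisation sketch, minimising f(x) = (x - 3)^2.
# Swarm size, inertia and attraction constants are illustrative.
def f(x):
    return (x - 3) ** 2

n, steps = 10, 100
xs = [random.uniform(-10, 10) for _ in range(n)]   # positions
vs = [0.0] * n                                     # velocities
pbest = xs[:]                                      # each particle's best
gbest = min(xs, key=f)                             # swarm's best

for _ in range(steps):
    for i in range(n):
        # Each particle is pulled towards its own best and the swarm's best.
        vs[i] = (0.7 * vs[i]
                 + 1.5 * random.random() * (pbest[i] - xs[i])
                 + 1.5 * random.random() * (gbest - xs[i]))
        xs[i] += vs[i]
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest + [gbest], key=f)

print(round(gbest, 3))  # -> close to 3.0
```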
Kohonen Networks
NT will teach you about neural computation, which is largely about how we can teach machines to do classification and pattern recognition. But there is a more fundamental type of neural-inspired method, which relates to making sense of the world around us without being trained or taught: this is what a Kohonen network does.
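As a preview, here is a minimal sketch of the core Kohonen update on a one-dimensional map; the map size, learning rate and neighbourhood radius are illustrative. The point is that the map organises itself around the inputs without any class labels.

```python
import random

# Minimal 1-D Kohonen (self-organising map) sketch. Map size, learning
# rate and neighbourhood radius are illustrative assumptions.
random.seed(0)
units = [random.random() for _ in range(10)]   # a 1-D map of 10 units
rate, radius = 0.3, 1

for _ in range(500):
    x = random.random()                        # an unlabelled input
    # Find the best-matching unit, then pull it and its neighbours towards x.
    bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
    for i in range(max(0, bmu - radius), min(len(units), bmu + radius + 1)):
        units[i] += rate * (x - units[i])

print([round(u, 2) for u in units])  # units end up spread over the inputs
```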
Cellular Automata
Cellular Automata (CAs) are very simple computational systems that produce very complex behaviour, including 'lifelike' reproduction. CAs, as we will see, are also very useful for explaining/simulating biological pattern generation and other behaviours.
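A minimal sketch of a one-dimensional CA shows just how simple these systems are; Rule 30 is used here purely as an example rule. Each cell looks only at itself and its two neighbours, yet the pattern that emerges is surprisingly complex.

```python
# Minimal 1-D cellular automaton sketch (Rule 30 chosen as an example).
# Each cell's next state depends only on itself and its two neighbours.
RULE = 30
width, steps = 31, 15

cells = [0] * width
cells[width // 2] = 1                  # a single live cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    # Look up each 3-cell neighbourhood (as a 3-bit number) in the rule.
    cells = [(RULE >> (4 * cells[(i - 1) % width]
                       + 2 * cells[i]
                       + cells[(i + 1) % width])) & 1
             for i in range(width)]
```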
Neural Computing
Pattern recognition using neural networks is the most widely used form of BIC in industry and science. We will learn about the most common and successful types of neural network.
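As a glimpse of NT's material, here is a minimal single-neuron (perceptron) learning sketch, trained on a made-up two-feature dataset; everything here, data included, is illustrative.

```python
# Minimal perceptron sketch: a single artificial neuron that learns a
# class boundary from labelled examples. Data and rate are illustrative.
data = [((0.2, 0.1), 0), ((0.4, 0.3), 0),    # class 0 (e.g. "cat")
        ((0.8, 0.9), 1), ((0.7, 0.8), 1)]    # class 1 (e.g. "dog")
w, b, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(50):                          # learn gradually from examples
    for x, target in data:
        error = target - predict(x)          # nudge weights when wrong
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in data])  # -> [0, 0, 1, 1]
```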
This is Stanley, winner of the DARPA Grand Challenge: a great example of bio-inspired computing winning over all other entries, which were largely 'classical'.
How Biological Computers Compute
With molecules and cells, not with silicon chips. PF will tell you more …
Before we get into looking at Evolutionary Algorithms (as well as other methods that do optimisation), we need to understand certain things about optimisation, such as
• When we need clever methods to do it, and when we don’t
• What alternatives there are to EAs – no point designing an EA for an optimisation problem if it can be solved far more simply.
So some of the additional material, and the associated quiz questions this week, are about optimisation problems in general, and some key pure computer-science things you need to know.
The next lecture will then introduce evolutionary algorithms.