MA256: Biological Systems in Space and Time
E-mail: T.Bretschneider@warwick.ac.uk
2015-2016, Term 2
Contents

Lecture 1: History of spatial biology, review of chemical kinetics, transport processes & Brownian motion
Lecture 2: Mathematics of diffusion
Lecture 3: Turing: “The chemical basis of morphogenesis”
Lecture 4: Extended examples of Turing patterns
Lecture 5: Codimension-two Turing-Hopf bifurcation and travelling waves
Lecture 6: Numerical methods for solving reaction-diffusion problems
Lecture 1
We start out with a brief history of biological pattern formation, which loosely follows an excellent review article by Siegfried Roth, “Mathematics and biology: a Kantian view on the history of pattern formation theory” (Dev Genes Evol 221, 255-279, 2011).
1.1 A brief history of biological pattern formation
Understanding the self-organization of spatio-temporal patterns in biology remained elusive until the early 20th century. A puzzling problem was that of embryonic self-regulation and scaling, namely to understand how the body parts of an organism grow proportionally to its overall size. In experiments where cells were isolated from two-cell or four-cell sea urchin embryos, Hans Driesch showed that each cell was able to grow into a complete larva of smaller size (1894). He concluded ‘Insofar as it contains a nucleus, every cell, during development, carries the totality of all primordia...’, which posed the difficult question of how any observed asymmetry, like the head-foot axis, could be specified. Driesch developed the concept of positional information and stated ‘The relative position of a blastomere <early embryonic cell> within the whole will probably in a general way determine what shall come from it’. As he was not able to find an answer to the problem, he resorted to vitalism, assuming that biology does not follow physical principles.
A particular problem in understanding how patterns or asymmetries could form spontaneously was that thermodynamics could only deal with reversible (equilibrium) processes. Until then, the common belief was that chemical systems should reach a stable equilibrium and that, for example, oscillations of a homogeneously stirred reaction would be impossible. In 1921 William Bray, however, discovered the first chemical oscillator (a chemical iodine clock), which proved that assumption wrong.
In his work Bray referred to theoretical mathematical models of hypothetical chemical systems by Alfred Lotka (1910-1920). One important component of this coupled differential equation model was autocatalysis (self-amplification of a molecule); however, Lotka had to state that “No reaction is known which follows the above law...”. Later he applied his models to oscillations in population dynamics (predator-prey type models), and his work laid the foundation for dynamical systems theories in chemistry and biology.
It took, however, a long time before Bray’s experiments were generally accepted, as there was great skepticism about whether this was a truly homogeneous system. In an attempt to find an inorganic version of the citrate cycle¹, Boris P. Belousov discovered a chemical reaction that not only was able to oscillate, but also exhibited spatial patterns in the form of propagating spiral waves or target patterns (1951). The main underlying mechanism is the autocatalytic oxidation of Ce³⁺ to Ce⁴⁺ and the reverse reduction by citric acid. His manuscript was rejected because the ‘supposed discovery was impossible’. Belousov’s work was taken up by Anatol Zhabotinsky in 1961, with publications only in Russian, before the first English publication about the Belousov-Zhabotinsky (BZ) reaction appeared in 1967, by the Dane Hans Degn.
In a seminal paper, Alan Turing (1952) proposed a mathematical theory of how the amplification of noise could give rise to instabilities and result in the spontaneous emergence of patterns. In his model the chemical reaction of two interacting species is coupled by diffusion, whereby the two chemicals diffuse at different rates. He particularly thought about applying his model to problems in developmental biology, for example to explain how morphogens² could control tentacle formation in the fresh-water polyp Hydra. Turing, who is foremost known as a pioneer of computer science (and for his work in codebreaking during World War II), was the first to use a digital computer to perform simulations of a reaction-diffusion model, at Manchester University. Turing died in 1954³. With the excitement about the discovery of the genetic code in 1953, Turing’s work did not receive much attention among mainstream biologists.
In the 1960s Lewis Wolpert elaborated on the concept of positional information, in which the potential of a cell to differentiate into a specific cell type depends on reading out gradients of morphogen concentrations. In the famous French flag problem he proposed two opposing gradients of a blue and a red morphogen: cells turn blue where the concentration of blue is high and red is low, cells become red if red is higher than blue, and cells which see intermediate levels of blue and red turn white. This model is able to self-regulate: for example, when two domains are fused or a domain is cut, the resulting domains always show a scaled version of the French flag. More complex models to explain self-regulation and scaling, based on phase shifts between coupled oscillators, were proposed by Goodwin and Cohen (1969).
¹ The citrate cycle is responsible for the final oxidation of food (carbohydrates, fat and proteins) into carbon dioxide; it takes place in organelles called mitochondria, which are therefore sometimes called cellular power plants.
² A morphogen is a signaling molecule that specifies what type a cell should develop into.
³ The Alan Turing Centenary Conference took place in Manchester, 2012.
In 1972 Gierer and Meinhardt developed a Turing-type reaction-diffusion model for how gradients of a postulated head activator in Hydra could be established. Here the introduction of a saturation term which caps the activator concentration results in proportion regulation in growing domains, i.e. the size of the head is always a certain fraction of the body length while the animal is growing. Since then, Meinhardt has successfully applied similar models to a whole range of biological problems, from the spacing of leaves in plants to symmetries of body plans and patterns on sea shells.
Jim Murray’s work (1970-, author of the standard textbook “Mathematical Biology”) laid the foundation for a whole new discipline of spatial biological pattern formation. He wrote a number of highly influential popular articles, such as ‘How the leopard gets its spots’, Scientific American (1988).
The theoretical basis of non-equilibrium thermodynamics, which explained how non-reversibility in biological processes is possible, was founded by Ilya Prigogine, who in 1977 was awarded the Nobel Prize for Chemistry. Prigogine showed that more complex reaction schemes could give rise to structural stability in oscillatory systems: limit cycle oscillations of fixed amplitude and period, in contrast to the Lotka-Volterra equations, a harmonic-oscillator-like system where amplitude and period depend on the initial state. The reaction scheme he proposed became well known under the name “Brusselator”. In 1972 a largely complete kinetic model of the BZ reaction with 10 different reactions was developed (the Field-Körös-Noyes mechanism); a simplified three-variable model (concentrations of HBrO2, Br⁻, and Ce⁴⁺) became popular under the name “Oregonator”.
It took, however, a long time before the molecular basis of morphogens was identified, the existence of which had only been proposed by Alan Turing on theoretical grounds. In the 1980s Christiane Nüsslein-Volhard discovered how the graded distribution of Bicoid, a transcription factor in the early embryo of the fruit fly Drosophila melanogaster, determines the main (anterior-posterior) body axis. For her work she was awarded the Nobel Prize for Physiology and Medicine in 1995. The setup of the Bicoid gradient actually does not follow a Turing mechanism, but relies on a prepattern of maternal factors: Bicoid mRNA is produced in the adjacent nurse cells and then transferred to one end of the oocyte, where it is translated to form the Bicoid protein. Many biologists at the time were unjustly disappointed that this did not reflect self-organisation as proposed by Turing. One cannot, however, rate Turing’s contribution highly enough, as he was largely responsible for biologists starting to look for morphogens in the first place. “True” Turing mechanisms were later found in chemical systems such as CIMA (chlorite-iodide-malonic acid; De Kepper; Castets et al., Phys Rev Lett 64: 2953, 1990) and, for example, in the patterning of hair follicles in mice (Sick et al., Science 314: 1447, 2006).
1.2 Review of simple reaction kinetics
Let’s consider the following chemical reaction with molecules A, B, C, D:

  aA + bB ⇌ cC + dD

with forward rate constant k₊ and backward rate constant k₋. a, b, c, d are called stoichiometric coefficients. The reaction rate v(t) is given by

  v(t) = -(1/a) d[A]/dt = -(1/b) d[B]/dt = (1/c) d[C]/dt = (1/d) d[D]/dt

For simple (elementary, one-step) reactions in solution or gas, the law of mass action states that in equilibrium the rate of the forward reaction is equal to the rate of the backward reaction:

  k₊ [A]^a [B]^b = k₋ [C]^c [D]^d

For any more complex reactions the exponents can only be deduced experimentally. Far from equilibrium we can write:

  d[A]/dt = -a k₊ [A]^a [B]^b + a k₋ [C]^c [D]^d
Let’s apply the above to the Lotka-Volterra equations

  Ẋ = k₁X - k₂XY
  Ẏ = k₂XY - k₃Y

The underlying reaction scheme can be formulated asᵃ:

  A + X --k₁--> 2X    (1)
  X + Y --k₂--> 2Y    (2)
  Y --k₃--> B

The concentration of A is assumed to be large, so that we ignore any reverse reactions and any drop in the concentration of A; A is therefore incorporated into k₁.
Reactions (1,2) are autocatalytic.
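The resulting oscillations are easy to see numerically. A minimal sketch, integrating the Lotka-Volterra equations with forward Euler; the rate constants, initial conditions and step size below are arbitrary illustrative choices (forward Euler slowly spirals outward, so this is only qualitative):

    import numpy as np

    # Lotka-Volterra: dX/dt = k1*X - k2*X*Y, dY/dt = k2*X*Y - k3*Y
    k1, k2, k3 = 1.0, 1.0, 1.0          # illustrative rate constants
    dt, steps = 0.001, 30000            # integrate to t = 30
    X, Y = 1.5, 1.0                     # start away from the fixed point (1, 1)
    traj = np.empty((steps, 2))
    for i in range(steps):
        dX = k1 * X - k2 * X * Y
        dY = k2 * X * Y - k3 * Y
        X, Y = X + dt * dX, Y + dt * dY
        traj[i] = X, Y
    # X repeatedly over- and undershoots the fixed point: oscillations
    print(traj[:, 0].min(), traj[:, 0].max())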
In another famous example, the Brusselator, the following autocatalytic reaction occurs:

  2A + B → 3A

ᵃ Watch the truly inspiring movie on self-replication by Lionel Penrose from 1961 (assisted by his daughter): http://www.dump.com/automaticmechanical/
1.3 Transport in biological systems
Undirected transport processes play an important role in biological pattern formation, which is why models based on diffusion-driven instabilities have been widely applied to explain patterns at very different length and time scales: patterns of morphogen distributions in developmental biology, spreading of electric activity (heart, nerves), or spreading of infectious diseases.
Directed transport may seem a more obvious way to create patterns, and it is by no means less exciting. In particular, motor-polymer systems and the self-organisation of cytoskeletal structures, such as the self-assembly of the mitotic spindle, have attracted great attention over the last twenty-five years. Other examples of directed processes are the transport of cellular organelles along microtubules (MTs), or the directed motion of cells in response to extracellular signals.
1.4 Definition of Diffusion
“Diffusion is the process by which matter is transported from one part of a system to another as a result of random molecular motions” (Crank, p. 1). The Latin “diffundere” means to disperse. The classical experiment to demonstrate diffusion uses a tall cylindrical vessel: its lower part is filled with iodine solution, and clear water is slowly poured on top to avoid mixing. Initially we observe a sharp, well-defined boundary. In time this boundary blurs, until eventually the whole solution is uniformly coloured. Clearly there must be a transfer of iodine molecules from the bottom to the top.
1.5 Brownian motion
We can’t see individual molecules in solution, but we can observe particles that are just large enough to be seen under the microscope. In 1827 the biologist Robert Brown observed that pollen grains in water move and change direction randomly, as the result of collisions with water molecules.
We can say something about the average distance a molecule travels in a given interval of time, but we cannot say anything about its direction. The mean-square displacementᵃ increases linearly in time:

  msd(t) = ⟨r²(t)⟩ = (1/N) Σᵢ₌₁..N (r_{i,t} - r_{i,0})²

  msd(t) ∝ t

In 1D: msd(t) = 2Dt, where D is defined as the diffusion coefficient; units are typically given in cm²/sec.
In 2D: msd(t) = 4Dt
In 3D: msd(t) = 6Dt
Thus, to move 10 times as far, we need 100 times longer.

ᵃ Angle brackets denote an ensemble average, either over a number of N different molecules or over N successive measurements from a single molecule.
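A minimal numerical sketch of this behaviour (step length, time step and walker count are arbitrary choices): an ensemble of 1D random walkers whose ensemble-averaged msd grows linearly, with slope 2D = (Δx)²/Δt = 1 in these units.

    import numpy as np

    # Ensemble of 1D random walkers taking unit steps left/right with p = 1/2
    rng = np.random.default_rng(0)
    n_walkers, n_steps = 10000, 1000
    steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
    x = np.cumsum(steps, axis=1)            # positions after each step
    msd = (x**2).mean(axis=0)               # average over the ensemble
    t = np.arange(1, n_steps + 1)
    print(msd[-1] / t[-1])                  # ~1.0, i.e. 2D with D = 1/2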
1.5.1 Typical values for diffusion of proteins
We will see later that diffusion rates depend on the size of molecules, not on their mass.
Let’s consider a ‘typical’ protein with radius r_p = 2 nm and D = 1.3 × 10⁻¹⁰ m²/sec = 130 µm²/sec.
Traversing a eukaryotic cell (r = 20 µm) takes about t = ⟨r²⟩/6D ≈ 500 msec.
Traversing an E. coli bacterium (r = 1 µm) takes on the order of 1 ms.
Diffusing the length of a nerve axon (r = 1 m) would take about 10⁹ sec ≈ 30 years!
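These numbers follow directly from t = ⟨r²⟩/6D; a few lines of Python with the value D = 130 µm²/sec from above:

    # t = r²/(6D) in seconds, with r in µm and D in µm²/s
    D = 130.0
    for name, r in [("eukaryotic cell", 20.0), ("E. coli", 1.0), ("nerve axon", 1e6)]:
        print(name, r**2 / (6 * D))   # ~0.5 s, ~1.3e-3 s, ~1.3e9 s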
1.5.2 Viscous resistance
Here we will mostly be concerned with molecular transport, i.e. small particles in aqueous solutions, where diffusion is directly related to friction between the particles and the fluid.
At very short time scales the motion of a particle is dominated by its inertia, and its displacement will be linearly dependent on time: Δx = vΔt. Let’s find out whether we can put a limit on this timescale.
An external force F (gravity, an electric force field) causes a particle of mass m to accelerate (v is the particle’s velocity, dv/dt its acceleration). In a medium of low Reynolds number (no turbulence) it will be slowed down by a linear drag force -fv, with friction coefficient f. We obtain the following force balance equation, which is a first-order linear ODE:

  m dv/dt = F - fv    (3)

Integrating yields

  v(t) = F/f + C e^{-(f/m)t}
  v(t) = v∞ (1 - e^{-t/τ})

whereby C = -F/f ensures v(0) = 0, v∞ = F/f, and the time constant is τ = m/f.
In the limit t → ∞ the drag force fv balances the external force F.
τ is called the momentum relaxation time, the time to lose the initial momentum. At very short time scales Δt ≪ τ the motion of a particle is dominated by its inertia and its displacement is linear in time: Δx = vΔt (ballistic motion). It was only very recently that the instantaneous velocity of Brownian motion could be determined, for a micrometer-sized silica (SiO₂) bead held in an optical trap, in air (Li et al., Science 328, p. 1673, 2010).
For typical proteins τ ≈ 10 ps (1 ps = 1 × 10⁻¹² sec), after which time v(t) is considered constant. We note that the terminal velocity does not depend on mass.
1.5.3 Fluctuating forces and time correlation functions
Before, we were talking about a particle under the influence of a constant external force F. At room temperature a particle in solution will experience ca. 10²¹ collisions per second. As we cannot compute the resulting path in all detail, we need a statistical description. We now replace F by A, a fluctuating force resulting from all the collisions, and obtain a Langevin equation:

  du/dt = -βu + A(t)

where u is the velocity of a particle with mass m and β = f/m, with friction coefficient f.
Using the same approach as above (integrating factor µ(t) = e^{∫(f/m) dt}) we obtain the formal solution of the Langevin equation

  u(t) = u(0) e^{-βt} + ∫₀ᵗ A(τ) e^{-β(t-τ)} dτ    (4)

which is of little use, as we do not know the fluctuating force A. However, we can derive statistical statements about how u(t) behaves by looking at the autocorrelation function ⟨u(0)u(t)⟩⁴. We multiply (4) by u(0) and average:

  ⟨u(0)u(t)⟩ = ⟨u(0)u(0)⟩ e^{-βt} + ∫₀ᵗ ⟨u(0)A(τ)⟩ e^{-β(t-τ)} dτ    (5)

As u(0) and A(τ) are not correlated, the integral term equals zero and we obtain the velocity autocorrelation function

  ⟨u(0)u(t)⟩ = ⟨u(0)u(0)⟩ e^{-βt}    (6)

which tells us how fast the correlation of u decays in time.
⟨u(0)u(0)⟩ can be written as ⟨u²⟩, as statistically the average velocity at any point in time should be equal.
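This behaviour can be illustrated with a stochastic simulation. A hedged sketch (Euler-Maruyama integration, Gaussian white noise whose strength is chosen so that ⟨u²⟩ equilibrates to 1; β, the time step and the particle number are arbitrary choices):

    import numpy as np

    # Langevin equation du/dt = -beta*u + A(t), A(t) modelled as white noise
    rng = np.random.default_rng(1)
    beta, dt = 1.0, 0.01
    n_steps, n_particles = 4000, 500
    u = np.zeros(n_particles)
    traj = np.empty((n_steps, n_particles))
    for i in range(n_steps):
        noise = np.sqrt(2 * beta * dt) * rng.standard_normal(n_particles)
        u = u + (-beta * u * dt + noise)     # Euler-Maruyama step
        traj[i] = u

    # velocity autocorrelation <u(0)u(t)>, estimated on the equilibrated tail
    eq = traj[1000:]
    lags = np.arange(0, 300, 50)
    acf = np.array([(eq[: len(eq) - lag] * eq[lag:]).mean() for lag in lags])
    print(acf / acf[0])                      # ≈ exp(-beta * lags * dt)
    print(np.exp(-beta * lags * dt))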
1.5.4 Stokes and Einstein equations
The drag coefficient f depends on the properties of the fluid (dynamic viscosity η, in [N·sec/m²]) and on the size of the object (radius R), as given by the Stokes equation

  f = 6πηR

The relationship between the diffusion constant D and the friction coefficient f was shown by Einstein to be

  D = k_B T / f

whereby k_B is the Boltzmann constant, k_B = 1.381 × 10⁻²³ J/K.
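Combining both equations gives a quick plausibility check for the ‘typical’ protein of section 1.5.1; the viscosity of water at room temperature is an assumed value here:

    import math

    k_B, T = 1.381e-23, 293.0     # J/K, K (room temperature assumed)
    eta, R = 1.0e-3, 2.0e-9       # N*sec/m² (water), m (protein radius 2 nm)
    f = 6 * math.pi * eta * R     # Stokes drag coefficient
    D = k_B * T / f               # Einstein relation
    print(D)                      # ~1.1e-10 m²/sec, the same magnitude as the
                                  # 1.3e-10 m²/sec quoted earlier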
1.5.5 Dependency of D on Temperature
Brownian motion is driven by the thermal motion of fluid molecules. Recall that in Einstein’s equation diffusion directly depends on temperature, D = k_B T / f. According to the equipartition theorem a molecule carries the energy E = (1/2) k_B T for each degree of freedom (translation, rotation, etc.). Rotational kinetic energy for small globular particles is however insignificant, so that in 3 dimensions

  E_kin = (3/2) k_B T

The mean kinetic energy of a particle at thermal equilibrium is directly related to its velocity:

  E_kin = (m/2) ⟨u²⟩

With ⟨u(0)u(t)⟩ = ⟨u²⟩ e^{-βt} (see above) we see how the time correlation function of a particle’s velocity is related to its kinetic energy and temperature:

  ⟨u(0)u(t)⟩ = (3 k_B T / m) e^{-βt}

⁴ Angle brackets here denote averaging over an ensemble of particles.
Using the so-called Green-Kubo relation

  D = (1/3) ∫₀^∞ ⟨u(0)u(τ)⟩ dτ

which relates D to the time correlation function of u, one can derive Einstein’s equation: inserting ⟨u(0)u(τ)⟩ = (3k_B T/m) e^{-βτ} gives D = (1/3)(3k_B T/m)(1/β) = k_B T/(mβ) = k_B T/f.
1.5.6 Diffusion as a random walk
We started out by stating that the mean square displacement of diffusing particles is proportional to time t,

  ⟨x_n²⟩ ∝ t

where n is the summation index over many random walks.
We now consider Brownian motion as a random process consisting of a sequence of discrete steps of fixed length: a random walk. It can be shown that the time-varying probability distribution of a walker performing a random walk is a very good approximation of a diffusion process. It is a normal distribution with variance proportional to time t.
• Let’s compute P(m, N), the probability for a particle to reach position m after N steps. The initial position was x = 0.
• p(Δx = +1) = p(Δx = -1) = 1/2: the probability to make a step to the right or to the left.
• A = N!/(N_L! N_R!) is the number of alternative routes with N_L left and N_R right steps.
• Division by 2^N, which is the total number of possible routes for N steps, yields a binomial distribution:

  P(m, N) = (1/2)^N N!/(N_L! N_R!)

  (P has a maximum if N_L ≈ N_R ≈ N/2.)
• If we want the probability to make m more steps to the right than to the left we can write

  P(m, N) = (1/2)^N N! / ( [½(N-m)]! [½(N+m)]! )
• Using Stirling’s approximation ln N! ≈ (N + ½) ln N - N + ½ ln 2π + O(N⁻¹) we obtain

  ln P(m, N) ≈ (N + ½) ln N - ½ ln 2π - N ln 2
              - ½(N + m + 1) ln[ (N/2)(1 + m/N) ]
              - ½(N - m + 1) ln[ (N/2)(1 - m/N) ]

• Using the series expansion for small x we employ ln(1 + x) ≈ x - x²/2:
  ln P(m, N) ≈ (N + ½) ln N - ½ ln 2π - N ln 2
              - ½(N + m + 1) ( ln N - ln 2 + m/N - m²/2N² )
              - ½(N - m + 1) ( ln N - ln 2 - m/N - m²/2N² )
Cancelling terms gives:

  ln P(m, N) ≈ -½ ln N + ln 2 - ½ ln 2π - m²/2N

  P(m, N) ≈ √(2/πN) e^{-m²/2N}

Assuming N is large, we have approximated the binomial distribution by a Gaussian. Numerical solutions give good results already for N = 10.
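This claim is easy to check; a small sketch comparing the exact binomial with the Gaussian approximation for N = 10 (m must have the same parity as N):

    import math

    N = 10
    for m in range(0, N + 1, 2):
        nR = (N + m) // 2                          # number of right steps
        exact = math.comb(N, nR) / 2**N            # binomial P(m, N)
        approx = math.sqrt(2 / (math.pi * N)) * math.exp(-m**2 / (2 * N))
        print(m, round(exact, 4), round(approx, 4))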
For large N we introduce a new variable x, the distance from the starting point: x = mΔx, where Δx is the step size.

  P(x, N) · 2Δx = P(m, N)
  P(x, N) = P(m, N) / (2Δx)

Note that for a given number of steps N, m can only change by ±2, as m will always be even for an even N, and odd for an odd N. P(m, N) is therefore mapped onto the continuous interval [x - Δx, x + Δx]. We obtain:

  P(x, N) ≈ √( 1/(2πN(Δx)²) ) e^{-x²/(2N(Δx)²)}

Substituting N by t/Δt and defining the diffusion coefficient as D := ½(Δx)²/Δt we obtain:

  P(x, t) ≈ 1/(2√(πDt)) e^{-x²/(4Dt)}

This is the probability distribution for a single random walker in 1D.
1.5.7 The speed of diffusion, revisited
Recall the Gaussian distribution

  f(x; µ = 0, σ²) = 1/(σ√(2π)) e^{-x²/(2σ²)}

We observe that the standard deviation σ = √(2Dt) corresponds to the root mean square displacement, as we had earlier √(msd(t)) = √(2Dt). If we consider a large number of particles, the speed of diffusion therefore equates to the speed of the moving boundary which confines approximately 2/3 of all particles.
Alternatively, we can compute the mean square displacement ⟨x²⟩ from our probability distribution directly:

  P(x, t) ≈ 1/(2√(πDt)) e^{-x²/(4Dt)}

  ⟨x²⟩ = ∫₋∞^∞ x² P(x, t) dx / ∫₋∞^∞ P(x, t) dx
       = ∫₋∞^∞ x² e^{-x²/(4Dt)} dx / ∫₋∞^∞ e^{-x²/(4Dt)} dx = 2Dt
To solve the integral in the numerator we use differentiation with respect to a parameter. With c = √(4Dt) = √2 σ we have:

  J(c) = ∫₋∞^∞ x² e^{-x²/c²} dx

We now introduce

  I(c) = ∫₋∞^∞ e^{-x²/c²} dx = c√π

(this can easily be seen by recalling that the factor for normalising the area under a Gaussian is 1/√(2π)). Then

  dI(c)/dc = √π = (2/c³) ∫₋∞^∞ x² e^{-x²/c²} dx = (2/c³) J(c)

so that

  J(c) = (√π/2) c³
Since

  ⟨x²⟩ = J(c)/I(c) = ½ c² = ½ (4Dt) = 2Dt
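The same result can be confirmed by numerical quadrature; the values D = 1 and t = 3 below are illustrative:

    import numpy as np
    from scipy.integrate import quad

    D, t = 1.0, 3.0
    p = lambda x: np.exp(-x**2 / (4 * D * t))        # unnormalised P(x, t)
    num, _ = quad(lambda x: x**2 * p(x), -np.inf, np.inf)
    den, _ = quad(p, -np.inf, np.inf)
    print(num / den, 2 * D * t)                      # both ≈ 6.0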
Lecture 2

2.1 Mathematics of diffusion
The previous microscopic view of particles performing a random walk led us to a statistical approximation of diffusion in the form of a probability distribution, which for large enough t results in ⟨x²⟩ ∝ t. We will now turn to a macroscopic view. The partial differential equations (Fick, 1855) we will discuss are fundamental to a large number of problems, as for example in heat conduction (Fourier, 1822), where the “diffusing substance” is heat. In many problems diffusion coefficients will be considered constant, but we will also discuss exceptions where immobilizing reactions are involved. Generally, if diffusion coefficients are not constant (in most cases they will be concentration dependent, as self-diffusivity increases with concentration), strictly formal mathematical solutions no longer exist.
In an isotropic medium the rate of transfer of a diffusing substance through unit area of a section is proportional to the concentration gradient measured normal to the section:

  F = -D ∂C/∂x    (Fick's 1st law)

Diffusion is along the normal to the surface of constant concentration.
Let’s consider a parallelepiped with side lengths 2dx, 2dy, 2dz, whose sides are parallel to the coordinate axes and whose centre is the point (0, 0, 0).
The rate of molecules entering the left half of the box through the left face is given by:

  4 dy dz ( F_x - (∂F_x/∂x) dx )

where F_x is measured at x = 0.
Similarly, the rate of loss of molecules through the right face is given by:

  4 dy dz ( F_x + (∂F_x/∂x) dx )

The overall rate of change caused by these fluxes (influx - outflux) is:

  -8 dx dy dz ∂F_x/∂x

and similarly -8 dx dy dz ∂F_y/∂y and -8 dx dy dz ∂F_z/∂z in the y and z directions, respectively.
The rate at which the amount of substance changes can also be written as:

  8 dx dy dz ∂C/∂t

and hence:

  ∂C/∂t + ∂F_x/∂x + ∂F_y/∂y + ∂F_z/∂z = 0
Assuming D to be constant and employing Fick’s first law this becomes:

  ∂C/∂t = D ( ∂²C/∂x² + ∂²C/∂y² + ∂²C/∂z² )

For the one-dimensional case this is referred to as Fick’s 2nd law of diffusion:

  ∂C/∂t = D ∂²C/∂x²    (Fick's 2nd law)
D may be a function of x, y, z and/or C. We then need to write:

  ∂C/∂t = ∂/∂x ( D ∂C/∂x ) + ∂/∂y ( D ∂C/∂y ) + ∂/∂z ( D ∂C/∂z )

In vector notation this is written as

  ∂C/∂t = div(D grad C) = ∇·(D∇C)

which for constant D reduces to ∂C/∂t = DΔC.
If D depends on time, i.e. D = f(t), we introduce a new time-scale T such that dT = f(t) dt. The diffusion equation then becomes:

  ∂C/∂T = ∂²C/∂x² + ∂²C/∂y² + ∂²C/∂z²
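Anticipating Lecture 6, a minimal explicit finite-difference sketch of Fick’s 2nd law in 1D with no-flux boundaries; the grid and parameter values are illustrative, and the time step is chosen inside the stability limit DΔt/Δx² ≤ 1/2:

    import numpy as np

    # Explicit (finite-volume) integration of dC/dt = D d²C/dx²
    D, L, nx = 1.0, 10.0, 101
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                       # safely inside the stability limit
    C = np.zeros(nx)
    C[nx // 2] = 1.0 / dx                      # approximate point source

    for _ in range(2000):
        F = -D * np.diff(C) / dx               # Fick's 1st law between grid points
        F = np.concatenate(([0.0], F, [0.0]))  # no-flux boundary conditions
        C = C - dt * np.diff(F) / dx           # continuity: dC/dt = -dF/dx

    print(C.sum() * dx)                        # total amount M is conserved (= 1)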
2.2 Solutions of the diffusion equation for constant D
Generally, the diffusion equation is an evolution equation which describes how concentrations change over time. To solve it we need to specify initial and boundary conditions. Dirichlet boundary conditions prescribe the values the solution takes on at the boundary (for example C|_left = a, C|_right = 0). Neumann boundary conditions specify the derivatives at the boundary and hence define fluxes into or out of our system (∂C/∂x|_left = -a and ∂C/∂x|_right = -a would imply an influx at the left side and an outflux at the right side of our domain). Boundary conditions can be of a mixed type, i.e. Neumann on one side and Dirichlet on the other. Cauchy boundary conditions impose Dirichlet and Neumann boundary conditions at the same boundary at the same time.
Solutions take two standard forms:
1. Series of error functions⁵ or related integrals (for small times)
2. Trigonometrical series, which converge for longer times
Using the Laplace transform⁶ both types of solutions can be obtained.

⁵ erf z = (2/√π) ∫₀^z exp(-η²) dη
⁶ f̄(p) = ∫₀^∞ exp(-pt) f(t) dt
2.2.1 Superposition of solutions to the diffusion equation
One can easily verify by differentiation that the solution of our random walk problem in one dimension,

  P(x, t) = 1/(2√(πDt)) e^{-x²/(4Dt)}    (7)

is a solution of Fick’s 2nd law of diffusion

  ∂C/∂t = D ∂²C/∂x²

The initial condition is P(x, 0) = δ(x), where δ(x) is the Dirac delta function, which is zero except at x = 0 and whose integral is unity:

  δ(x) = lim_{σ→0} 1/√(2πσ²) e^{-x²/(2σ²)}

The domain is considered infinite here.
If the amount of substance deposited at t = 0, x = 0 is M = ∫₋∞^{+∞} C dx, we have

  C(x, t) = M/(2√(πDt)) e^{-x²/(4Dt)}

The above is the 1D solution for an instantaneous plane source in an infinite medium.
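The verification by differentiation can also be delegated to a computer algebra system; a short sympy sketch:

    import sympy as sp

    x, t, D, M = sp.symbols('x t D M', positive=True)
    C = M / (2 * sp.sqrt(sp.pi * D * t)) * sp.exp(-x**2 / (4 * D * t))
    residual = sp.diff(C, t) - D * sp.diff(C, x, 2)
    print(sp.simplify(residual))   # 0: C satisfies Fick's 2nd law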
In 2D, for an instantaneous point source on an infinite plane, we have

  C(r, t) = M/(4πDt) e^{-r²/(4Dt)}

For an instantaneous point source in a 3D volume we have

  C(r, t) = M/(8(πDt)^{3/2}) e^{-r²/(4Dt)}

2.2.2 Reflection at a boundary
We now consider a 1D semi-infinite cylinder with an impermeable boundary at x = 0.
The amount of substance that previously diffused to the left now also diffuses to the right. Since the diffusion equation is linear, the sum of two solutions is itself a solution. Using reflection and superposition we obtain

  C(x, t) = M/√(πDt) e^{-x²/(4Dt)}
2.2.3 Separation of variables
We can try to solve Fick’s second law of diffusion

  ∂C/∂t = D ∂²C/∂x²    (8)

by separating the variables, so that

  C = X(x) T(t)    (9)

Substitution in (8) yields

  X dT/dt = D T d²X/dx²

Our PDE (8) is therefore separable into two independent ODEs:

  (1/T) dT/dt = -λ² D
  (1/X) d²X/dx² = -λ²

Both sides of the equation are equal to a constant, which for convenience we have chosen to be -λ²D.
The solutions are

  T = e^{-λ²Dt}  and  X = A sin λx + B cos λx

which can easily be verified by differentiation.
Therefore solutions to (8) can be written in the form

  C = (A sin λx + B cos λx) e^{-λ²Dt}    (10)
We said before that the superposition principle holds for the diffusion equation, as it is linear. A general solution can therefore be obtained by summing up solutions (10):

  C = Σ_{n=1}^∞ (A_n sin λ_n x + B_n cos λ_n x) e^{-λ_n² Dt}

A_n, B_n, λ_n are determined by the initial and boundary conditions.
Example:
Consider a 1D cylinder of length L. The initial concentration is C(x, 0) = C₀; boundary conditions are C(0, t) = 0 and C(L, t) = 0.
First we look at the solution for X:

  X = A sin λx + B cos λx

With C(0, t) = 0: X(0) = 0 = B.
With C(L, t) = 0: X(L) = 0 = A sin λL, so λ_n L = 0, π, 2π, ..., i.e. λ_n = nπ/L, n = 1, 2, ...

  C = X(x) T(t) = Σ_{n=1}^∞ A_n sin(nπx/L) e^{-λ_n² Dt}

For t = 0 we have C(x, 0) = C₀, therefore

  C₀ = Σ_{n=1}^∞ A_n sin(nπx/L)
We multiply both sides by sin(pπx/L) and integrate from 0 to L. Using the relationship

  ∫₀^L sin(nπx/L) sin(pπx/L) dx = 0 for n ≠ p,  and = L/2 for n = p

we get

  C₀ ∫₀^L sin(pπx/L) dx = Σ_{n=1}^∞ A_n ∫₀^L sin(nπx/L) sin(pπx/L) dx = ½ A_p L

The left-hand side is C₀ 2L/(pπ) for p = 1, 3, 5, ... and 0 for p = 0, 2, 4, ....
So for n = 1, 3, 5, ...:

  C₀ 2L/(nπ) = ½ A_n L  ⟹  A_n = 4C₀/(nπ)
The final solution is

  C(x, t) = (4C₀/π) Σ_{k=0}^∞ 1/(2k+1) · e^{-D(2k+1)²π²t/L²} · sin((2k+1)πx/L)
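A small sketch evaluating the truncated series numerically; the parameter values and the truncation point k_max are arbitrary:

    import numpy as np

    def C(x, t, C0=1.0, D=1.0, L=1.0, k_max=200):
        k = np.arange(k_max)[:, None]
        n = 2 * k + 1
        terms = np.exp(-D * n**2 * np.pi**2 * t / L**2) \
                * np.sin(n * np.pi * x / L) / n
        return (4 * C0 / np.pi) * terms.sum(axis=0)

    x = np.linspace(0.0, 1.0, 5)
    print(C(x, 0.001))   # still close to C0 in the interior, 0 at both ends
    print(C(x, 0.1))     # decayed; dominated by the slowest (k = 0) mode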
2.3 Diffusion and chemical reaction
A simple one-dimensional linear gradient can be created by combining a source with a sink. If we think about morphogen gradients, for example in the French flag problem, then it is difficult to imagine how a single linear gradient can provide positional information. The following example shows how linear degradation of a chemical can produce an exponential gradient of a morphogen, which defines a more distinct region of gene activation. Turing patterns, which we will discuss in the next lecture, can give rise to even more distinctly defined regions.
We consider the following diffusion problem with degradation:

  ∂C/∂t = D ∂²C/∂x² - kC    (11)

Let us find the steady state solution by setting ∂C/∂t = 0. We obtain

  ∂²C/∂x² = (k/D) C

As derivatives of the exponential function are exponential functions themselves, we use the following trial solution:

  C(x) = C₀ e^{-√(k/D) x}

where C₀ is the boundary condition C(0, t). Initial conditions are C(x, 0) = 0.
We differentiate C(x) twice to verify that it is correct:

  ∂C(x)/∂x = -C₀ √(k/D) e^{-√(k/D) x}
  ∂²C(x)/∂x² = C₀ (k/D) e^{-√(k/D) x} = (k/D) C
A time-dependent solution can be obtained by letting C′ = C e^{kt}. Employing the product rule, it can easily be verified that

  ∂C′/∂t = D ∂²C′/∂x²

gives

  (∂C/∂t) e^{kt} + k e^{kt} C = D e^{kt} ∂²C/∂x²

which is equivalent to our original equation (11).
Thus it is enough to solve the simpler diffusion problem for C′ and then compute C according to

  C(x, t) = e^{-kt} C′(x, t)

Let’s consider for example a point source in 1D with M molecules, for which we derived the solution already (see eq. (7)):

  C′(x, t) = M/(2√(πDt)) e^{-x²/(4Dt)}
then

  C(x, t) = e^{-kt} C′(x, t) = M/(2√(πDt)) e^{-kt - x²/(4Dt)}
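A short sketch of the steady-state gradient; the parameter values are illustrative, and √(D/k) plays the role of the decay length of the profile:

    import numpy as np

    D, k, C0 = 1.0, 4.0, 1.0
    decay_length = np.sqrt(D / k)            # distance over which C drops by 1/e
    x = np.linspace(0.0, 2.0, 9)
    C = C0 * np.exp(-np.sqrt(k / D) * x)
    print(decay_length)                      # 0.5
    print(C)                                 # exponential morphogen profile
    theta = 0.1                              # hypothetical activation threshold
    print(decay_length * np.log(C0 / theta)) # position where C falls below theta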
Lecture 3

3.1 Turing systems
In his famous paper, Turing (1952) sets out by considering the following example:

  ∂X/∂t = 1 + 11X - 6Y - 6X = 1 + 5X - 6Y
  ∂Y/∂t = 1 + 6X - 7Y
  X, Y ≥ 0

(the individual terms stem from the reactions I-V below). Theoretically this scheme could be explained by the following chemical reactions, where Y enhances the degradation of X:

  I:   A → X,  B → Y
  II:  Y → D
  III: X → Y  (or X → Y + E)
  IV:  A + X → U,  U → 2X
  V:   X + C → V,  V + Y → W,  W → C + H
We are considering two compartments separated by a semipermeable membrane which allows diffusion of X and Y across it. The distance between the compartments is Δx = 1. Diffusion constants are D_X = 0.5, D_Y = 4.5. Concentrations in the left half are denoted by subscript l, in the right half by r, making the assumption that X and Y are homogeneously distributed within each compartment.
At equilibrium we have: X_l = X_r = Y_l = Y_r = 1.
Now we perturb the system at t = 0 by using the following initial concentrations:

  X_l = 1.06,  X_r = 0.94,  Y_l = 1.02,  Y_r = 0.98
We obtain (the second term in each line is the diffusion contribution):

  ∂X_l/∂t |_{t=0} = 0.18 - 0.06 = 0.12
  ∂X_r/∂t |_{t=0} = -0.18 + 0.06 = -0.12
  ∂Y_l/∂t |_{t=0} = 0.22 - 0.18 = 0.04
  ∂Y_r/∂t |_{t=0} = -0.22 + 0.18 = -0.04
We employ the following transformation, where ξ stands for our initial perturbation:

  X_l = 1 + 3ξ,  X_r = 1 - 3ξ
  Y_l = 1 + ξ,   Y_r = 1 - ξ

Inclusion of the diffusion term results in exponential growth of our perturbation with rate

  dξ/dt = 2ξ
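A direct forward-Euler sketch of the two compartments (reaction terms as above, diffusive exchange D·(neighbour - self)) shows the perturbation growing along the (3ξ, ξ) mode, approximately as e^{2t}:

    import numpy as np

    def rhs(s, Dx=0.5, Dy=4.5):
        Xl, Xr, Yl, Yr = s
        return np.array([
            1 + 5*Xl - 6*Yl + Dx*(Xr - Xl),
            1 + 5*Xr - 6*Yr + Dx*(Xl - Xr),
            1 + 6*Xl - 7*Yl + Dy*(Yr - Yl),
            1 + 6*Xr - 7*Yr + Dy*(Yl - Yr),
        ])

    s = np.array([1.06, 0.94, 1.02, 0.98])   # the perturbed state from above
    dt = 0.001
    for _ in range(1000):                    # integrate to t = 1
        s = s + dt * rhs(s)
    print(s)   # Xl, Yl have grown, Xr, Yr have fallen: the instability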
In line with the previous finding, a Turing system is defined as a reaction-diffusion system that is stable against spatial perturbations when there is no diffusion (D = 0), but becomes unstable when D > 0, with

  D = ( D_A  0
        0    D_I )

Such a system takes on the following form:

  ∂A/∂t = f₁(A, I) + D_A ∇²A
  ∂I/∂t = f₂(A, I) + D_I ∇²I

f₁ and f₂ are non-linear functions.
Famous examples of such systems are the models by Schnakenberg (1979) and Gierer and Meinhardt (1972), and the Brusselator, which we will discuss in more detail in the next lecture:
3.1.1 Schnakenberg model
The “most simple” chemically plausible oscillator, following a “substrate depletion” mechanism:

  X ⇌ A (rates k₁, k₂),   2A + I --k₃--> 3A,   Y --k₄--> I

giving

  f₁(A, I) = k₁ - k₂A + k₃A²I
  f₂(A, I) = k₄ - k₃A²I

Usually the equations are non-dimensionalised by using appropriate variable transformations; here we can use for example

  t* = (D_A/L²) t,  x* = x/L,  d = D_I/D_A

(plus a corresponding rescaling of the concentrations and rate constants) to obtain the system

  u_t = a - u + u²v + ∇²u
  v_t = b - u²v + d ∇²v
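As a preview of what such systems do, a hedged sketch of this non-dimensionalised system in 1D (explicit Euler, no-flux boundaries). The values a = 0.1, b = 0.9, d = 40 are illustrative choices inside the Turing regime; after some time a stationary spatial pattern emerges from noise around the homogeneous steady state (u*, v*) = (a + b, b/(a+b)²):

    import numpy as np

    rng = np.random.default_rng(2)
    a, b, d = 0.1, 0.9, 40.0
    nx, dx, dt = 200, 0.5, 0.001

    def lap(w):                        # no-flux (mirror) Laplacian
        wp = np.concatenate(([w[1]], w, [w[-2]]))
        return (wp[2:] - 2 * w + wp[:-2]) / dx**2

    u = (a + b) + 0.01 * rng.standard_normal(nx)   # steady state + noise
    v = b / (a + b)**2 * np.ones(nx)

    for _ in range(100000):            # integrate to t = 100
        uv2 = u * u * v
        u, v = (u + dt * (a - u + uv2 + lap(u)),
                v + dt * (b - uv2 + d * lap(v)))

    print(u.min(), u.max())            # widely separated: a spatial pattern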
3.1.2 Gierer-Meinhardt model

  f₁(A, I) = k₁ - k₂A + k₃A²/I
  f₂(A, I) = k₄A² - k₅I

The Gierer-Meinhardt model is an “activator-inhibitor” model where I inhibits the production of the activator, A, in a linear way.
3.1.3 Turing-type enhanced degradation model

  f₁(A, I) = k₁A² - k₂AI + k₃
  f₂(A, I) = k₄A - k₅I

The inhibitor I enhances the degradation of the activator, A.
3.2 Turing instability
Let’s consider the generic reaction-diffusion system

  ∂A/∂t = f₁(A, I) + D_A ∇²A
  ∂I/∂t = f₂(A, I) + D_I ∇²I

Without diffusion, we find the fixed point (A*, I*) by solving:

  f₁(A*, I*) = 0
  f₂(A*, I*) = 0

Stability can be assessed by evaluating the Jacobian J:

  J = ( ∂f₁/∂A  ∂f₁/∂I )  =  ( a₁₁  a₁₂ )
      ( ∂f₂/∂A  ∂f₂/∂I )     ( a₂₁  a₂₂ )

For a stable fixed point we have:

  tr(J) = a₁₁ + a₂₂ < 0
  det(J) = a₁₁a₂₂ - a₂₁a₁₂ > 0    (12)
Now, let’s include diffusion and linearise about the steady state, introducing new variables:

  A = A* + a,  I = I* + i

  ∂a/∂t = a₁₁ a + a₁₂ i + D_a ∇²a
  ∂i/∂t = a₂₁ a + a₂₂ i + D_i ∇²i    (13)

We consider no-flux boundary conditions, which imply generic solutions a(x, t) and i(x, t) of the form

  a(x, t) = Σₙ αₙ e^{λₙt} cos(kx)
  i(x, t) = Σₙ βₙ e^{λₙt} cos(kx)

k is the wave number (“spatial eigenvalue”). In 1D it is nπ/L, with n = 0, 1, 2, ... and L the length of the domain. The wavelength of the spatial pattern is 2π/k. αₙ and βₙ depend on the initial conditions. λ is the “temporal eigenvalue”.
We now consider the following perturbation for one specific k:

  a(x, t) = A(t) cos(kx)
  i(x, t) = I(t) cos(kx)
After substituting in eq. (13) we obtain

  dA/dt = a₁₁A + a₁₂I - k²D_a A
  dI/dt = a₂₁A + a₂₂I - k²D_i I

(Remember that d²cos(kx)/dx² = -k² cos(kx).)
The Jacobian is:

  J_D = ( a₁₁ - D_a k²   a₁₂ )
        ( a₂₁            a₂₂ - D_i k² )

Because of condition (12) the trace of J_D is always negative:

  tr(J_D) = a₁₁ - D_a k² + a₂₂ - D_i k² < 0
The only possibility to obtain an instability is therefore det(J_D) < 0:

  det(J_D) = (a₁₁ - D_a k²)(a₂₂ - D_i k²) - a₁₂a₂₁ < 0

We replace k² by x:

  det(J_D) = D_a D_i x² - (D_a a₂₂ + D_i a₁₁) x + a₁₁a₂₂ - a₂₁a₁₂ < 0

det(J_D) is a quadratic in x. We determine its minimum by setting the first derivative to zero:

  2 D_a D_i x_min - (D_a a₂₂ + D_i a₁₁) = 0

  x_min = (D_a a₂₂ + D_i a₁₁) / (2 D_a D_i)

  det(J_D)_min = -(D_a a₂₂ + D_i a₁₁)² / (4 D_a D_i) + a₁₁a₂₂ - a₂₁a₁₂ < 0

which requires

  D_a a₂₂ + D_i a₁₁ > 2 √( D_a D_i (a₁₁a₂₂ - a₂₁a₁₂) )
Together with (12) we obtain in summary the following conditions:

  a₁₁ + a₂₂ < 0                                             (14)
  a₁₁a₂₂ - a₂₁a₁₂ > 0                                       (15)
  D_a a₂₂ + D_i a₁₁ > 2 √( D_a D_i (a₁₁a₂₂ - a₂₁a₁₂) ) > 0  (16)

The corresponding wave number is

  k = √( (D_a a₂₂ + D_i a₁₁) / (2 D_a D_i) )
(14) together with (16) implies that a₁₁ and a₂₂ have opposite signs. Assume that a₁₁ > 0, which is equivalent to A acting as an activator: it results in growth of a perturbation (local instability). a₂₂ < 0 reflects the role of I acting as an inhibitor (global inhibition). Another mechanism connected to a Turing instability is that of lateral inhibition.
(16) demands that D_a a₂₂ + D_i a₁₁ > 0. With a₁₁ > 0 and a₂₂ < 0 we can state this in the following form:

  D_i / D_a > |a₂₂| / |a₁₁|

This means that a Turing instability occurs only if the diffusion constant of the inhibitor sufficiently exceeds that of the activator. The fact that the faster diffusion of the inhibitor prevents the formation of nearby activator peaks results in “lateral inhibition”.
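Conditions (14)-(16) are easy to check mechanically. A small sketch, applied to the Jacobian of the Schnakenberg model at its fixed point with a = 0.1, b = 0.9 (the values used in the simulation sketch above):

    import numpy as np

    def turing_unstable(J, Da, Di):
        (a11, a12), (a21, a22) = J
        det = a11 * a22 - a21 * a12
        cond14 = a11 + a22 < 0            # stable fixed point without diffusion
        cond15 = det > 0
        cond16 = det > 0 and Da * a22 + Di * a11 > 2 * np.sqrt(Da * Di * det)
        return cond14 and cond15 and cond16

    J = [[0.8, 1.0], [-1.8, -1.0]]        # Schnakenberg at (u*, v*) = (1, 0.9)
    print(turing_unstable(J, Da=1.0, Di=40.0))   # True: pattern expected
    print(turing_unstable(J, Da=1.0, Di=1.0))    # False: equal diffusion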