Small Signal Response
Version 0.841
The cellular system equation is given by:
dS/dt = Nv(S, p)     (1)
where S is a vector of species, N the stoichiometry matrix, v the vector of
rates and p a vector of parameters. This system is often nonlinear, which
means that in general there are no closed-form solutions. When engineers are confronted with an intractable set of equations, they will linearize.
Linear Systems
A linear system is one that obeys:
• Homogeneity
• Additivity
The two conditions can be combined to form the superposition principle.
Homogeneity
If we change an input signal by a factor α (i.e. multiply by α) and the output
is also changed by the same factor, then the system is said to be homogeneous.
If H is the system and x the input signal then the following is true if the
system is homogeneous:
H(αx) = αH(x)
Homogeneity also implies that when the input is zero the output is also zero.
Additivity
Apply two separate inputs x1 and x2 to yield two outputs y1 and y2 . Now
apply both inputs, x1 and x2 simultaneously to yield a new output, y3 . If
y3 is equal to the sum y1 + y2 then the system obeys additivity. That is:
H(x1 + x2 ) = H(x1 ) + H(x2 )
We can combine both rules to yield the superposition property:
H(αx1 + βx2 ) = αH(x1 ) + βH(x2 )
Figure 1: Linearity. If input x1 produces output y1 and input x2 produces output y2, then additivity means that the input x1 + x2 produces the output y1 + y2; homogeneity means that the input αx1 produces the output αy1.
Non-linear equations are fairly easy to spot: any equation in which a state
variable is raised to a non-unity power, or in which the state variables appear inside trigonometric or other transcendental functions, is non-linear.
Example
1. Consider the simple equation: y = mx. Homogeneity is easily satisfied since m(αx) = α(mx) = αy. To check for additivity we consider two
separate inputs:
y1 = mx1
y2 = mx2
If we now apply both inputs simultaneously we see that additivity is also
obeyed.
y3 = m(x1 + x2 ) = y1 + y2
Therefore the equation is linear.
2. Consider the simple equation, y = x². The homogeneity test for this
equation fails because
(αx)² = α²x² = α²y ≠ αy
y = x² is therefore a non-linear equation.
Linearization
To linearize means to replace the nonlinear system with a linear approximation. Such approximations are only valid for small changes
around some operating point.
In order to linearize a nonlinear system, we can use the Taylor series
expansion. The Taylor series represents a function as an infinite sum of
terms evaluated around a specific value of the independent variable. If the series is centered
at zero, the series is called the Maclaurin series. The
Taylor series is given by:
f(x) = f(a) + (f′(a)/1!)(x − a) + (f″(a)/2!)(x − a)² + (f‴(a)/3!)(x − a)³ + · · ·
The first two terms of the series represent the linearized portion of the
expansion, that is:
f(x) ≈ f(a) + (f′(a)/1!)(x − a)
where a is the operating point of the expansion. So long as x − a is small
enough, the linearized portion will be a good approximation.
For example, let us use the Taylor series to linearize f (x) = x2 . We must
first select an operating point around which to write the expansion. For the
sake of argument, assume that the operating point is x = 2. We also need
to compute the first derivative f 0 (a) at the operating point, a.
f′(a) = 2a = 4
This yields the linear approximation:
f (x) ≈ f (2) + 4(x − 2) = 4 + 4x − 8 = 4x − 4
The table below compares the exact function with the linearized version,
illustrating that the approximation is only valid near the selected operating
point.
x      x²      4x − 4    Error
0      0       −4        4
1      1       0         1
1.9    3.61    3.6       0.01
2      4       4         0
2.1    4.41    4.4       0.01
3      9       8         1
4      16      12        4
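The comparison is easy to reproduce numerically. The following short Python sketch (values taken from the table above) evaluates the exact function and its linearization:

```python
# Compare f(x) = x^2 with its linearization 4x - 4 about the operating
# point a = 2, reproducing the table above.
def f(x):
    return x**2

def f_linear(x, a=2.0):
    # f(a) + f'(a)(x - a), with f'(a) = 2a
    return f(a) + 2 * a * (x - a)

for x in [0, 1, 1.9, 2, 2.1, 3, 4]:
    exact, approx = f(x), f_linear(x)
    print(f"x = {x:4}: exact = {exact:6.2f}, "
          f"linear = {approx:6.2f}, error = {abs(exact - approx):5.2f}")
```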
Linearizing ODEs
Consider
ds/dt = f(s, p)
and linearize around the steady state operating point sss and pss such that:
dsss/dt = f(sss, pss) = 0
The Taylor expansion around the steady state is then
ds/dt ≈ [∂f(sss, pss)/∂s](s − sss) + [∂f(sss, pss)/∂p](p − pss)
Let us define δs = s − sss and δp = p − pss . Differentiating δs with respect
to time yields:
dδs/dt = ds/dt
Note that sss is independent of time. We can therefore write the Taylor
series approximation as:
dδs/dt ≈ [∂f(sss, pss)/∂s] δs + [∂f(sss, pss)/∂p] δp
The important point to note about this result is that the Taylor approximation gives us an equation that describes the rate of change of a perturbation in s.
We can apply the same linearization procedure to the system equation
around the steady state solution, that is f (sss , pss ) = 0. If the changes are
sufficiently small then we can replace the approximation with an equality:
dδs/dt = N [∂v(sss, pss)/∂s] δs + N [∂v(sss, pss)/∂p] δp     (2)
The result yields a set of linear ordinary differential equations. Such equations can be solved given initial conditions δs = δso and δp = δpo. Note that the equation only describes the evolution of δs, not δp.
Whatever the initial condition for δp, it remains constant. One can imagine three different scenarios for setting up the initial conditions:
δso ≠ 0,  δpo ≠ 0
δso ≠ 0,  δpo = 0
δso = 0,  δpo ≠ 0
See assignment.
Example
Consider the simplest nonlinear system, a two step pathway with the first
step governed by an irreversible mass-action rate law and the second step by
an irreversible Michaelis-Menten rate law. The pathway has a single species,
S which is governed by the nonlinear differential equation:
ds/dt = k1 Xo − Vm s/(Km + s)
The steady state level for s is denoted by sss. We now linearize around this
point by evaluating the appropriate matrices (the semicolon separates matrix rows):
dδs/dt = [1, −1] [∂v1/∂s; ∂v2/∂s] δs + [1, −1] [∂v1/∂p1, ∂v1/∂p2; ∂v2/∂p1, ∂v2/∂p2] δp
Each derivative is computed at sss. In addition let us focus on a perturbation
to only one parameter, Vm (i.e. p2), and to s:
dδs/dt = [1, −1] [0; Vm Km/(Km + sss)²] δs + [1, −1] [0; sss/(Km + sss)] δVm
Multiplying out the matrices yields the linear differential equation:
dδs/dt = −( [Vm Km/(Km + sss)²] δs + [sss/(Km + sss)] δVm )     (3)
which describes the rate of change of the perturbation in response to a step
change in Vm and an initial perturbation to s. Note that the terms that
include sss are constant so that the equation can be reduced to
δṡ = −(C1 δs + C2)
where C1 = Vm Km/(Km + sss)² and C2 = [sss/(Km + sss)] δVm; note that δVm is absorbed into C2 because it is constant. The solution
to this equation with initial condition δs(0) = δso is given by:
δs(t) = (C2/C1)(e^(−C1 t) − 1) + δso e^(−C1 t)
This equation describes the time evolution of δs as a result of a perturbation
in δVm and/or δs. Note that the equation only applies to small changes in
δVm and δs because of the linearization.
If we assume that the initial condition for δs is zero (i.e. δso = 0,
no perturbation in s), then the steady state solution (obtained as t goes to
infinity) is:
δs = −C2/C1
At t = 0 the perturbation is zero, as defined by the initial condition, but as t
advances δs(t) goes negative, indicating that a perturbation in Vm results
in a decline in the steady state level of s. As time continues to advance, δs
reaches a new steady state given by −C2/C1. Note, this is the delta change
in s, not the absolute value of the new level of s, which is why there is a
negative sign in the solution. The absolute level of the new steady state of s is given by sss − C2/C1.
If on the other hand δVm is zero but δso is not, then C2 = 0 so that the
evolution equation is given by δso e^(−C1 t). As t advances, this term decays to
zero so that at the new steady state δs = 0, that is, the system relaxes back
to its original state.
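A short numerical check makes the point concrete. The sketch below uses assumed parameter values (k1, Xo, Vm, Km and the size of the step δVm are not taken from the text); it integrates the full nonlinear model after a small step in Vm and compares the resulting change in s with the linear prediction −C2/C1:

```python
# Assumed parameter values (not from the text).
k1, Xo, Vm, Km = 1.0, 1.0, 5.0, 0.5
dVm = 0.1                                  # small step change in Vm

# Steady state before the perturbation: k1*Xo = Vm*s/(Km + s).
s_ss = k1 * Xo * Km / (Vm - k1 * Xo)

C1 = Vm * Km / (Km + s_ss)**2              # coefficient of delta_s
C2 = s_ss / (Km + s_ss) * dVm              # constant term (absorbs delta_Vm)

# Simple Euler integration of the full nonlinear model with Vm + dVm.
s, dt = s_ss, 0.001
for _ in range(int(50 / dt)):
    s += dt * (k1 * Xo - (Vm + dVm) * s / (Km + s))

print("nonlinear change in s:", s - s_ss)
print("linear prediction    :", -C2 / C1)
```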
State Space Representation
The state of a system refers to the minimum set of variables, known as the
state variables, that fully describe the system and its response to any given
set of inputs. In a state space representation, the state is determined fully
by the set of initial conditions at time t0 and the system inputs.
The state variables themselves are considered an internal description because not all state variables may be accessible to observation. Therefore,
we also introduce a set of output variables, y(t), which are a function of
the state variables but which are guaranteed to be observable. A typical
example of an internal variable in synthetic biology might be a transcription
factor whereas a corresponding output variable might be the level of GFP
that is responding to the transcription factor.
In synthetic biology, the state variables are the species concentrations (not
including the boundary species).
The inputs are the kinetic parameters and the boundary species.
The outputs are what we can measure, i.e. a phenotype of interest, e.g. GFP
or a pathway flux.
However, the most general form of the state space equations is non-linear,
which makes them difficult to study. Instead we focus on the linearized
form:
dx(t)/dt = Ax(t) + Bp(t)
y(t) = Cx(t) + Dp(t)
where x(t) represents the internal variables and y(t) the output variables; p
represents the parameters or inputs to the system. The A matrix is called
the state matrix (also the Jacobian) of size m × m where m is the number
of internal variables. The B matrix is called the control matrix of size m × q
where q is the number of parameters. The C matrix is called the output
matrix of size r × m where r is the number of output variables. The D
matrix is called the feed-forward matrix of size r × q.
From the linearization of the system equation:
dδs/dt = N [∂v(so, po)/∂s] δs + N [∂v(so, po)/∂p] δp
we see that the A matrix is given by:
A = N ∂v(so, po)/∂s
and the B matrix by:
B = N ∂v(so, po)/∂p
We will return to the C and D matrices later.
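As a sketch of how A and B can be obtained in practice, the code below builds them by finite differences for the two-step pathway used earlier (v1 = k1 Xo, v2 = Vm s/(Km + s)); the parameter values are assumptions, and Vm is chosen as the single input parameter:

```python
import numpy as np

k1, Xo, Vm, Km = 1.0, 1.0, 5.0, 0.5        # assumed values, not from the text
N = np.array([[1.0, -1.0]])                # stoichiometry matrix: 1 species, 2 rates

def v(s, Vm_):
    # rate vector [v1, v2] for the two-step pathway
    return np.array([k1 * Xo, Vm_ * s / (Km + s)])

s_ss = k1 * Xo * Km / (Vm - k1 * Xo)       # steady state operating point
h = 1e-6

# central finite differences for dv/ds and dv/dVm at the operating point
dvds = ((v(s_ss + h, Vm) - v(s_ss - h, Vm)) / (2 * h)).reshape(2, 1)
dvdp = ((v(s_ss, Vm + h) - v(s_ss, Vm - h)) / (2 * h)).reshape(2, 1)

A = N @ dvds                               # state (Jacobian) matrix
B = N @ dvdp                               # control matrix
print("A =", A, "B =", B)
```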
Time Invariance
In the state space representation above the various matrices such as A and B
were not functions of time. Such models are examples of time-invariant systems. Time-invariance means that the evolution of a system is independent
of the time at which the simulation starts. Thus the system:
y(t) = 5x(t)
is time invariant because it does not matter to the solution y(t) whether the
initial conditions are set at t0 = 0 or t0 = 10. However the system
y(t) = t x(t)
is time variant because it now matters what we set the initial start time to
be.
Linear Time Invariant Systems
Systems which are linear and time-invariant are called Linear Time Invariant Systems, or LTI.
Time and Frequency Domains
There are two equivalent ways to represent a dynamical system:
• Time Domain
• Frequency Domain
Each has its distinct advantages and disadvantages. The frequency domain
is often used in engineering particularly as a design aid and as a means to
assess certain performance characteristics.
In synthetic biology, the utility of the frequency domain includes:
• Provides an indication of how close to instability we might be and how
to move closer or further away.
• Provides an approach for measuring the degree of modularity in a
circuit.
• Allows us to explain the onset of oscillations in a feedback circuit.
• Allows us to understand the properties of negative feedback in more
detail.
• Gives us the machinery to reconstruct the internal structure of a system from the input/output response.
• Relates the DC component of the frequency domain to the existing
field of metabolic control analysis.
In order to proceed we must first briefly review complex numbers.
Standard form
Imaginary numbers are solutions to equations such as √(−x) and are usually
represented for convenience by the symbol √x i. Thus the imaginary number
√(−1) = i and √(−9) = 3i. (We ignore the fact that there are two solutions,
+3i and −3i.)
Although i is often used to represent the imaginary unit number, in engineering j is often used instead to avoid confusion with electrical current,
i.
Imaginary numbers can also be paired up with real numbers to form complex numbers. Such numbers have the form:
a + bj
where a represents the real part and b the imaginary part. This notation
is actually a short-hand for the more general statement:
(a, 0j) + (0, bj)
that is vector addition. For convenience the 0 values are omitted and the
notation shortened to a + bj.
A conjugate complex pair is given by the pair of complex numbers:
a − bj
a + bj
Polar form
We can express a complex number on a two dimensional plane where the
horizontal axis represents the real part and the vertical axis the imaginary
part. A complex number can therefore be represented as a point on the plane.
We can also express a complex number in terms of the distance the point is
from the origin and the angle it makes with the horizontal axis.

Figure 2: The Argand plane. The complex number a + bj is plotted as a point with real part a on the horizontal (Re) axis and imaginary part b on the vertical (Im) axis; r is the distance from the origin and θ the angle to the real axis.
In this way we can express the real and imaginary parts using trigonometric
functions:
a = r cos θ
b = r sin θ
where r is the length of the line from the origin to the point and θ the angle.
The following two representations are therefore equivalent:
a + bj = r(cos θ + j sin θ)
When written like this, r is also known as the magnitude or modulus of the
complex number, A, that is:
|A| = r = √(a² + b²)
The notation |A| is often used to denote the magnitude of a complex number.
The angle, θ, is known as the argument or phase and is given by:
θ = tan⁻¹(b/a)
In calculating the angle we must be careful about the sign of b/a. Figure 3
illustrates the four possible situations. If the point is in the second quadrant, 180° should be added to the tan⁻¹ result. If the point is in the third
quadrant, then 180° should be subtracted from the tan⁻¹ result. The 1st
and 4th quadrants need no adjustments.
Figure 3: Angles on the unit circle in each quadrant: a) both coordinates positive, (1, 1), angle = 45°; b) horizontal coordinate negative, (−1, 1), angle = 135°; c) vertical coordinate negative, (1, −1), angle = −45°; d) both coordinates negative, (−1, −1), angle = −135°.
The rules for computing the angle are summarized below. The
atan2 function found in software such as Matlab will usually automatically take the signs into consideration.

atan2(y, x) =
    tan⁻¹(y/x)          if x > 0
    π + tan⁻¹(y/x)      if y ≥ 0, x < 0
    −π + tan⁻¹(y/x)     if y < 0, x < 0
    π/2                 if y > 0, x = 0
    −π/2                if y < 0, x = 0
    undefined           if y = 0, x = 0
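A minimal sketch in Python (the math module provides atan2) reproduces the quadrant handling shown in Figure 3:

```python
import math

# atan2(y, x) returns the quadrant-aware angle; plain atan(y/x) does not.
for x, y in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
    naive = math.degrees(math.atan(y / x))
    full = math.degrees(math.atan2(y, x))
    print(f"point ({x:2}, {y:2}): atan = {naive:7.1f} deg, atan2 = {full:7.1f} deg")
```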
Exponential Form
By Euler’s formula:
e^(jθ) = cos(θ) + j sin(θ)
we can substitute the sine/cosine terms in the polar representation to give:
a + bj = r e^(jθ)
This gives us three ways to represent a complex number:
a + bj = r(cos(θ) + j sin(θ)) = r e^(jθ)
Basic Complex Arithmetic
Let a + bj and c + dj be complex numbers. Then:
1. a + bj = c + dj if and only if a = c and b = d (i.e. the real parts are
equal and the imaginary parts are equal)
2. (a + bj) + (c + dj) = (a + c) + (b + d)j (i.e. add the real parts
together and add the imaginary parts together)
3. (a + bj) − (c + dj) = (a − c) + (b − d)j
4. (a + bj)(c + dj) = (ac − bd) + (ad + bc)j
5. (a + bj)(a − bj) = a² + b²
6. (a + bj)/(c + dj) = [(ac + bd) + (bc − ad)j]/(c² + d²)
Division is accomplished by multiplying the top and bottom by the conjugate. Note that the product of a complex number and its conjugate gives
a real number; this allows us to eliminate the imaginary part from the denominator.
(a + bj)/(c + dj) = [(a + bj)/(c + dj)] · [(c − dj)/(c − dj)]
                  = [(ac − b(−d)) + (a(−d) + bc)j]/(c² + d²)
                  = [(ac + bd) + (bc − ad)j]/(c² + d²)
                  = (ac + bd)/(c² + d²) + [(bc − ad)/(c² + d²)]j
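These rules are easy to check numerically. A minimal sketch using Python's built-in complex type (which writes the imaginary unit as j) and the cmath module:

```python
import cmath

z1 = 3 + 4j
z2 = 1 - 2j

print(z1 + z2, z1 - z2, z1 * z2)       # addition, subtraction, multiplication
print(z1 / z2)                         # division (conjugate method, done internally)
print(abs(z1))                         # magnitude: sqrt(3^2 + 4^2) = 5
print(cmath.phase(z1))                 # argument (phase) in radians, via atan2
print(cmath.polar(z1))                 # (r, theta) polar form
print(cmath.rect(*cmath.polar(z1)))    # back to a + bj, i.e. r*e^(j*theta)
```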
Sinusoidal Signals
The sinusoidal signal is the most fundamental periodic signal. Any other
periodic signal can be constructed from a summation of sinusoidal signals.
We can describe a sinusoidal signal by an equation of the general form:
y(t) = A sin(ωt + θ)
This equation describes how an output, y, varies in time, t. The equation
has three terms, the amplitude (A), the angular frequency (ω) and the phase
(θ). The angular frequency is the rate of the periodic signal and is expressed
in radians per second. A sine wave traverses a full cycle (peak to peak) in
2π radians (circumference of a circle) so that the number of complete cycles
traversed in one second is then ω/2π. This is termed the frequency, f , (cycles
sec−1 ) of a sine wave and has units of Hertz. The angular frequency is then
conveniently expressed as ω = 2πf and often we will see this in sinusoid
expressions as
y(t) = A sin(2πf t + θ)
The amplitude is the extent to which the periodic function changes in the y
direction, that is the maximum height of the curve from the origin (Figure 5).
The horizontal distance peak to peak is referred to as the period, T and is
usually expressed in seconds. The inverse of the period, 1/T sec−1 is equal
to the frequency, f .
The phase, θ, indicates how delayed or advanced the periodic signal may be.
Figure 4 shows a typical plot of the sinusoidal function, y(t) = A sin(ωt + θ),
where the amplitude is set to one, the frequency to 2 cycles per second and
the phase to zero.
Figure 4: y(t) = A sin(ωt + θ) where A = 1, f = 2 Hz so that ω = 4π, θ = 0.
The left panel in Figure 5 shows the effect of varying the amplitude and the
right panel shows two signals of different frequency.

Figure 5: Left panel: amplitude change, y(t) = A sin(ωt + θ) where A = 2, f = 2 Hz, θ = 0. Right panel: frequency change, y(t) = A sin(ωt + θ) where A = 1, f = 4 Hz, θ = 0.
Sinusoidal signals can be time shifted. The two sine waves shown in Figure 6
have the same frequency and amplitude but one of them is shifted to the
left by 90 degrees (or π/2 radians), that is, phase shifted.

Figure 6: Phase change: y(t) = A sin(ωt + θ) where A = 1, f = 2 Hz, θ = 90°. The red curve is shifted 90° to the left relative to the blue curve.

The sign in the phase shift term determines whether the shift is to the left
or right. If the expression is negative, such as sin(α − β), then the phase is
delayed, that is, it starts later; in other words the signal is shifted right.
One important property of sinusoidal signals is that the sum of two sinusoidal
signals of the same frequency but different phase and amplitude is
another sinusoidal signal with a different phase and amplitude but
identical frequency:
A1 sin(ωt + θ1) + A2 sin(ωt + θ2) = A3 sin(ωt + θ3)
In fact any linear operation on a sinusoid will only change the amplitude or
phase. For example, multiplying by a constant only changes the amplitude.
Linear Systems and Sinusoidals
Given that linear systems are composed of combinations of linear operations
such as addition, multiplication by a constant or integration we can be sure
that any sinusoidal input to such a system will only experience changes to the
phase and amplitude of the signal. The frequency will remain unchanged.
Of particular interest is how the steady state responds to a sinusoidal input,
termed the sinusoidal steady state response.
We can illustrate this by way of an example. Consider the linear first-order
differential equation:
dy/dt + ay = b sin(ωt)
where the input to the equation is a sinusoidal signal, sin(ωt). This
equation is of the standard linear form:
dy/dt + P(t) y = Q(t)
We can therefore use the integrating factor technique to solve this equation.
The integrating factor is given by
ρ = e^(∫P(t) dt) = e^(∫a dt) = e^(at)
Multiplying both sides by ρ and noting that
d/dt (∫P(t) dt) = P(t)
we obtain
d/dt (y e^(at)) = e^(at) b sin(ωt)
(Note that d/dt (y e^(at)) = (dy/dt) e^(at) + y a e^(at).)
Assuming an initial condition of y(0) = 0 and integrating both sides gives:
y = b [ω e^(−at) + a sin(ωt) − ω cos(ωt)]/(a² + ω²)
At this point we only want to consider the steady state sinusoidal response,
hence as t → ∞, then
y = [b/√(a² + ω²)] · [a sin(ωt) − ω cos(ωt)]/√(a² + ω²)
To show that the frequency of the input signal is unaffected, we proceed as
follows. We start with the well known trigonometric identity:
A sin(β − α) = A cos(α) sin(β) − A sin(α) cos(β)
            = a sin(β) − ω cos(β)
where a = A cos(α) and ω = A sin(α). If we sum the squares of a and ω we
obtain a² + ω² = A²(sin²(α) + cos²(α)) = A². That is:
A = √(a² + ω²)
Similarly, ω/a = (A sin(α))/(A cos(α)) = tan(α). That is:
α = tan⁻¹(ω/a)
Since a sin(β) − ω cos(β) = A sin(β − α) where β = ωt, then
y = [b/√(a² + ω²)] sin(ωt − α)
This final result shows us that the frequency ω remains unchanged, but the
amplitude is scaled by 1/√(a² + ω²) and the phase is shifted by −α. In summary
the amplitude change is given by:
Aout/Ain = 1/√(a² + ω²)
and the phase shift by:
−α = −tan⁻¹(ω/a)
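The result can be checked by direct simulation. The sketch below uses assumed values of a, b and ω (not taken from the text), integrates the equation with a simple Euler scheme and compares the measured steady-state amplitude with b/√(a² + ω²); the predicted phase lag −tan⁻¹(ω/a) is also printed:

```python
import numpy as np

a, b, w = 2.0, 1.0, 5.0                  # assumed values
dt = 1e-4
t = np.arange(0.0, 40.0, dt)

y = np.zeros_like(t)
for i in range(1, len(t)):               # forward Euler: dy/dt = b*sin(w*t) - a*y
    y[i] = y[i - 1] + dt * (b * np.sin(w * t[i - 1]) - a * y[i - 1])

steady = y[t > 30.0]                     # transient has decayed by this time
print("measured amplitude :", steady.max())
print("predicted amplitude:", b / np.sqrt(a**2 + w**2))
print("predicted phase lag:", -np.degrees(np.arctan(w / a)), "degrees")
```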
Laplace and Fourier Transforms
In the previous section we used a relatively laborious approach to determine how a sinusoidal signal is changed by a linear system. For larger
systems this approach becomes too unwieldy. Instead we can obtain the same
information by using the unilateral Fourier transform. This transform takes
a sinusoidal input signal, applies it to a linear system and computes the resulting phase and amplitude change at the given frequency of the sinusoidal
input. The transform can be applied at all frequencies so that the complete
frequency response can be computed, indicating how the system alters sinusoidal signals at different frequencies. Analytically, the unilateral Fourier
transform is given by:
Transform is given by:
∞
Z
F (jω) =
0
18
x(t)e−jωt dt
If we compare this to the Laplace transform:
X(s) = ∫₀^∞ x(t) e^(−st) dt
we see that they are very similar. s in the Laplace transform is in general a
complex number σ + jω, whereas in the case of the unilateral Fourier transform, s = jω. To compute the
Fourier transform we can therefore take the Laplace transform and substitute s with jω. The reason this works is that the real part represents
the transient or exponential decay of the system to steady state, whereas
the imaginary part represents the steady state itself. When injecting a sinusoidal signal into a system we are primarily interested in the sinusoidal
steady state, hence we can set σ = 0.
In Fourier analysis, harmonic sines and cosines are multiplied into the system
function, f(t), and then integrated. The act of integration picks out the
strength of the response at a given frequency.
Let us use the Fourier transform to obtain the amplitude and phase change
for a general linear first-order differential equation:
dy/dt + ay = f(t)
We will use L(f(t)) to denote the Laplace transform of f(t). The table
below shows a very short list of Laplace transforms.
f(t)              F(s)
f(t) + g(t)       L[f(t)] + L[g(t)]
a f(t)            a L[f(t)]
y                 Y(s)
dy/dt             sY(s) − y(0)
Taking Laplace transforms on both sides (and assuming y(0) = 0) yields:
sY(s) + aY(s) = L(f(t))
so that
Y(s) = L(f(t))/(s + a)
The transfer function of the system, T(s), is the ratio of the Laplace transforms of the output and the input,
so that
T(s) = Y(s)/L(f(t)) = 1/(s + a)
To obtain the frequency response we set s = jω:
T(jω) = 1/(jω + a)
From this complex number we can compute both the amplitude and phase
shift. First we must get the equation into a standard form:
T(jω) = [1/(a + jω)] · [(a − jω)/(a − jω)] = a/(a² + ω²) − jω/(a² + ω²)
From this we can easily compute the amplitude change to be:
A = √[ a²/(a² + ω²)² + ω²/(a² + ω²)² ] = √[ 1/(a² + ω²) ] = 1/√(a² + ω²)
The phase shift can be computed using tan⁻¹(b/a), so that
α = −tan⁻¹(ω/a)
Note that these results are identical to the results obtained in the previous
section when the differential equation was integrated directly.
In conclusion we can determine the frequency response of a linear system
from the Laplace transform.
Laplace Transform of the System Equation
Given the linearized system equation:
dx(t)/dt = Ax(t) + Bp(t)
taking the Laplace transform on both sides yields:
L[dx/dt] = L[Ax(t) + Bp(t)]
sX(s) − x(0) = AX(s) + BP(s)
We will assume that the initial condition corresponds to the steady state,
that is x(0) = 0, therefore:
sX(s) = AX(s) + BP(s)
(sI − A)X(s) = BP(s)
X(s)/P(s) = (sI − A)⁻¹ B
The left-hand side of the above equation represents the transfer function for
the linearized system. By replacing s with jω and substituting A and B
with the network terms, we obtain the frequency response equation:
Hs(jω) = (jωI − N ∂v/∂s)⁻¹ N ∂v/∂p
The subscript on the transfer function, H, is there to emphasize that this
is the response of the species concentrations to a sinusoidal input on one or
more of the parameters.
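A minimal numerical sketch of this expression, written as a small Python function (the function name and the example rate constants are assumptions, not from the text), takes N and the two derivative matrices evaluated at the operating point and returns Hs(jω):

```python
import numpy as np

def species_frequency_response(N, dvds, dvdp, w):
    """Hs(jw) = (jw*I - N dv/ds)^-1  N dv/dp."""
    A = N @ dvds                                   # Jacobian (m x m)
    B = N @ dvdp                                   # control matrix (m x q)
    m = A.shape[0]
    return np.linalg.solve(1j * w * np.eye(m) - A, B)

# Example: the one gene network that follows, v1 = k1*Xo, v2 = k2*s,
# with assumed values k1 = 1 and k2 = 0.5; analytically Hs(jw) = k1/(jw + k2).
k1, k2 = 1.0, 0.5
N = np.array([[1.0, -1.0]])
dvds = np.array([[0.0], [k2]])                     # dv_i/ds
dvdp = np.array([[k1], [0.0]])                     # dv_i/dXo
w = 2.0
print(species_frequency_response(N, dvds, dvdp, w))
print(k1 / (1j * w + k2))                          # should agree
```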
Example
Consider the simple gene regulatory network with a single transcription
factor, s:
Figure 7: One gene network: Xo controls the expression rate v1 of S; S is degraded at rate v2.
We will assume that the expression rate for s is controlled by a factor Xo .
Let us examine the frequency response of this system to a sinusoidal input
at Xo . We will also assume that the first step is governed by the rate law
v1 = k1 Xo and the degradation step by v2 = k2 s.
We will compute the frequency response given by:
Hs(jω) = (jωI − N ∂v/∂s)⁻¹ N ∂v/∂p
First we need to collect the three matrix terms, N, ∂v/∂s and ∂v/∂p:
N = [1  −1]
∂v/∂s = [∂v1/∂s; ∂v2/∂s] = [0; k2]
∂v/∂p = [∂v1/∂Xo; ∂v2/∂Xo] = [k1; 0]
Inserting these into the transfer function yields:
Hs(jω) = (jω − [1  −1][0; k2])⁻¹ [1  −1][k1; 0] = (jω + k2)⁻¹ k1 = k1/(jω + k2)
To obtain the amplitude and phase shift we must convert the above expression into standard form by multiplying top and bottom by the complex conjugate:
[k1/(jω + k2)] · [(k2 − jω)/(k2 − jω)] = (k1 k2 − k1 jω)/(k2² + ω²) = k1 k2/(k2² + ω²) − j k1 ω/(k2² + ω²)
From this we can determine the amplitude (√(a² + b²)), given by:
Amplitude = |Hs(jω)| = √[ k1²/(k2² + ω²) ] = k1/√(k2² + ω²)
Likewise the phase change can be computed from tan⁻¹(b/a):
Phase = −tan⁻¹(ω/k2)
Plots of amplitude and phase versus frequency are called Bode plots. The
amplitude is plotted in decibels (dB), a logarithmic unit
for expressing the magnitude of a quantity, in this case the change in the
amplitude, given by the formula 20 log10(|A|). The bandwidth of a system
is the frequency at which the gain has dropped 3 dB below its peak (here the zero-frequency) value. This is also
the frequency where the output amplitude is 1/√2 of the maximum amplitude
(about 70% of the signal strength).
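A short sketch (assumed values of k1 and k2) computes the magnitude in dB for H(jω) = k1/(jω + k2) and locates the bandwidth, which for this system is analytically equal to k2:

```python
import numpy as np

k1, k2 = 1.0, 0.5                              # assumed values
w = np.logspace(-3, 4, 2000)                   # frequency range, rad/sec
H = k1 / (1j * w + k2)

mag_db = 20 * np.log10(np.abs(H))              # magnitude in decibels
dc_db = 20 * np.log10(k1 / k2)                 # zero-frequency gain in dB

# bandwidth: frequency at which the gain has fallen 3 dB below the DC gain
bandwidth = w[np.argmin(np.abs(mag_db - (dc_db - 3.0)))]
print("bandwidth ~", bandwidth, "rad/sec  (analytically k2 =", k2, ")")
```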
Figure 8: Bode plot: magnitude (dB) versus frequency, rad/sec (ω), for the one gene network.
Note that the phase shift starts at zero; that is, as the frequency is reduced
the phase shift gets smaller. At high frequencies the phase shift tends to
−90°. The way to understand this is to look at the phase expression and
realize that the smaller k2, the more likely the phase shift will be −90°. If
k2 is very small then we can assume there is very little degradation flux;
this means that the change in s is dominated by the input sine wave. The
maximum rate of increase in s occurs when the sine wave is at its maximum
peak. As the input sine wave decreases, the rate of increase in s slows until
the input sine wave crosses the steady state level of s. Once the input sine
wave reaches the steady state level, the level of s also peaks. Thus the
input sine wave and the concentration of s will be −90° out of phase, with
the concentration of s lagging. The frequency point at which the phase
reaches −90° will depend on the value of k2. Figure 10 illustrates the phase
shift argument. Also note that the amplitude tends to zero as the frequency
is increased.

Figure 9: Bode plot: phase shift (degrees) versus frequency, rad/sec (ω); the phase approaches the −90° line at high frequencies.
One question that remains is: what is the amplitude change at zero frequency? To answer this let us set ω = 0:
|Hs(0)| = k1/k2
Clearly the amplitude change is not zero, so what is it? If we look at
the steady state solution, the answer will become clear. At steady state,
v1 − v2 = 0, that is:
k1 Xo = k2 s
In other words the steady state concentration of s is:
sss = k1 Xo/k2
The sensitivity of s with respect to Xo is:
dS/dXo = k1/k2
Figure 10: −90° phase shift at high frequencies or low degradation rates. The plot shows Xo and s versus time; note that the concentration of s peaks when the input rate is at zero.
which is of course the amplitude change we observed at zero frequency.
Therefore the amplitude change at zero frequency is the sensitivity of the
particular state variable (species concentration) to the input signal.
If we write the system equation more explicitly, such that:
Nv(S(p), p) = 0
so that S is shown to be a function of p, we can differentiate this expression
implicitly with respect to p to give:
N ( (∂v/∂s)(ds/dp) + ∂v/∂p ) = 0
Expanding and rearranging the terms yields:
N (∂v/∂s)(ds/dp) = −N ∂v/∂p
If we assume that N ∂v/∂s is invertible then we can solve for ds/dp to give:
ds/dp = −(N ∂v/∂s)⁻¹ N ∂v/∂p
It should be noted that the expression on the right is the same as the frequency response equation with jω = 0. At zero frequency the frequency
response is therefore equivalent to the steady-state response to a step change in the parameter.
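A quick numerical check of this equivalence, again with assumed parameter values for the earlier two-step pathway (v1 = k1 Xo, v2 = Vm s/(Km + s)), compares the frequency response at ω = 0 with the steady-state sensitivity ds/dVm obtained from a small step in Vm:

```python
import numpy as np

k1, Xo, Km = 1.0, 1.0, 0.5                      # assumed values

def steady_state(Vm):
    # solve k1*Xo = Vm*s/(Km + s) for s
    return k1 * Xo * Km / (Vm - k1 * Xo)

Vm = 5.0
s = steady_state(Vm)
N = np.array([[1.0, -1.0]])
dvds = np.array([[0.0], [Vm * Km / (Km + s)**2]])
dvdVm = np.array([[0.0], [s / (Km + s)]])

H0 = -np.linalg.solve(N @ dvds, N @ dvdVm)      # frequency response at w = 0

h = 1e-6                                        # finite-difference step in Vm
sensitivity = (steady_state(Vm + h) - steady_state(Vm - h)) / (2 * h)
print("H(0)        :", H0[0, 0])
print("step ds/dVm :", sensitivity)
```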
Longer Genetic Networks
What happens if we want to deal with longer genetic networks, say for example a network with two transcription factors? The analysis remains the
same although the algebra becomes more convoluted. With two transcription factors the invertible matrix becomes a 2 by 2 matrix.
Figure 11: Two gene network cascade: Xo drives the expression of S1 (rate v1), S1 is degraded at rate v2, S1 drives the expression of S2 (rate v3) and S2 is degraded at rate v4.
For the sake of argument, let us again assume simple kinetics at each step,
thus v1 = k1 Xo , v2 = k2 s1 , v3 = k3 s1 and v4 = k4 s2 .
Again we need to compute:
Hs(jω) = (jωI − N ∂v/∂s)⁻¹ N ∂v/∂p
First we will collect the three matrix terms, N, ∂v/∂s and ∂v/∂p. The (i, j) entry of ∂v/∂s is ∂vi/∂sj, and the entries of ∂v/∂p are the derivatives ∂vi/∂Xo:

N = [ 1  −1   0   0
      0   0   1  −1 ]

∂v/∂s = [ 0   0
          k2  0
          k3  0
          0   k4 ]

∂v/∂p = [ k1
          0
          0
          0 ]
and insert the terms into the transfer function to yield:

Hs(jω) = [ k1/(jω + k2)
           k1 k3/((jω + k2)(jω + k4)) ]
It is, however, far easier to compute the amplitude and phase numerically, as shown in the sketch below. Many computer languages provide a function called ArcTan2, or more commonly atan2, that computes the arctangent while taking into
account the four quadrants. These functions take two arguments: the first
is usually the imaginary part and the second the real part.
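A minimal sketch along these lines, using the parameter values given in the caption of Figure 12, computes the magnitude in dB and the phase in degrees over a range of frequencies (numpy.angle uses atan2 internally):

```python
import numpy as np

k1, k2, k3, k4 = 0.1, 0.2, 0.25, 0.06          # values from Figure 12
N = np.array([[1.0, -1.0, 0.0,  0.0],
              [0.0,  0.0, 1.0, -1.0]])
dvds = np.array([[0.0, 0.0],
                 [k2,  0.0],
                 [k3,  0.0],
                 [0.0, k4]])
dvdp = np.array([[k1], [0.0], [0.0], [0.0]])

A = N @ dvds                                   # Jacobian
B = N @ dvdp                                   # control matrix

for w in np.logspace(-5, 1, 7):
    H = np.linalg.solve(1j * w * np.eye(2) - A, B)
    mag_db = 20 * np.log10(np.abs(H[1, 0]))    # response of S2
    phase = np.degrees(np.angle(H[1, 0]))      # quadrant-aware via atan2
    print(f"w = {w:9.1e}   |H| = {mag_db:8.2f} dB   phase = {phase:8.2f} deg")
```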
Figure 12: Bode plot for a two gene cascade: magnitude (dB) versus frequency (rad/sec). All rate laws are simple irreversible first-order with values: v1: k1 = 0.1, v2: k2 = 0.2, v3: k3 = 0.25, v4: k4 = 0.06, Xo = 1, see Figure 11.
Figure 13: Bode phase plot (degrees) versus frequency (rad/sec) for the two gene cascade; the phase approaches the −180° line at high frequencies. See Figure 12.
Flux Frequency Response
Since v = v(x, p) we can linearize this to give:
δv = (∂v/∂x) δx + (∂v/∂p) δp
Taking the Laplace transform of this (with C = ∂v/∂x and D = ∂v/∂p) gives:
V(s) = CX(s) + DP(s)
But X(s) = (sI − A)⁻¹ B P(s), so that:
V(s) = [C(sI − A)⁻¹ B + D] P(s)
The transfer function, HJ(s), is given by V(s)P(s)⁻¹:
HJ(s) = C(sI − A)⁻¹ B + D
Substituting the various state space terms with the network terms and setting s = jω finally yields:
HJ(jω) = (∂v/∂s)(jωI − N ∂v/∂s)⁻¹ N (∂v/∂p) + ∂v/∂p
Transfer Functions
Cs and CJ
Cs(jω) = (jωI − N ∂v/∂s)⁻¹ N
CJ(jω) = (∂v/∂s) Cs + I
In the biology community, these transfer functions are also referred to as
the control coefficients.
Structural Constraints
Summation Constraints
Let the basis for the null space of N be given by K, that is:
NK = 0
Post multiplying the canonical equation for Cs by K gives:
Cs(jω)K = (jωI − N ∂v/∂s)⁻¹ NK
yielding
Cs(jω)K = 0
Post multiplying the canonical equation for CJ by K gives:
CJ(jω)K = (∂v/∂s) Cs K + K
so that
CJ(jω)K = K
Connectivity Constraints
Cs(jω) = (jωI − N ∂v/∂s)⁻¹ N
The inverse term in the above equation can be written as:
(jωI − N ∂v/∂s)⁻¹ (jωI − N ∂v/∂s) = I
or
(jωI − N ∂v/∂s)⁻¹ jωI − (jωI − N ∂v/∂s)⁻¹ N ∂v/∂s = I
Rearranging:
(jωI − N ∂v/∂s)⁻¹ N ∂v/∂s = (jωI − N ∂v/∂s)⁻¹ jωI − I
This can be rewritten as
Cs (∂v/∂s) = (jωI − N ∂v/∂s)⁻¹ jωI − I
so that at ω = 0
Cs(0) (∂v/∂s) = −I
Similarly we can derive a connectivity theorem for the fluxes. If:
CJ(jω) = (∂v/∂s) Cs + I
then we can post-multiply by ∂v/∂s:
CJ(jω) (∂v/∂s) = (∂v/∂s) Cs (∂v/∂s) + ∂v/∂s
But at ω = 0, Cs(0) ∂v/∂s = −I. Therefore:
CJ(0) (∂v/∂s) = 0
Scaled Transfer Functions
Often in biology it is more convenient to work with relative than absolute
changes. This eliminates the need to be concerned with units and also makes
it easier to compare across different laboratories.
A scaled transfer function is defined by:
Cs = (dS/dv)(v/S) = (dS/S)/(dv/v) ≈ S%/v%
In matrix terms this can be expressed as:
Cs = dg(S)⁻¹ (dS/dv) dg(v)
where dg(x) represents the diagonal matrix with elements x. Given the
transfer function at zero frequency:
dS/dv = −(N ∂v/∂S)⁻¹ N
We can pre-multiply by dg(S)⁻¹ and post-multiply by dg(v) to yield:
dg(S)⁻¹ (dS/dv) dg(v) = −dg(S)⁻¹ (N ∂v/∂S)⁻¹ N dg(v)
therefore:
dg(S) Cs = −(N ∂v/∂S)⁻¹ N dg(v)
Pre-multiplying both sides by N ∂v/∂S, and writing N (∂v/∂S) dg(S) = N dg(v) [dg(v)⁻¹ (∂v/∂S) dg(S)] = N dg(v) ε, where ε is the matrix of scaled elasticities, gives:
(N dg(v) ε) Cs = −N dg(v)
Finally:
Cs = −(N dg(v) ε)⁻¹ N dg(v)
Let us now multiply both sides by the ones vector, 1 = [1, 1, · · · ]ᵀ, so that:
Cs 1 = −(N dg(v) ε)⁻¹ N dg(v) 1
Cs 1 = −(N dg(v) ε)⁻¹ Nv
But at steady state, Nv = 0, therefore:
Cs 1 = 0
Similarly:
CJ 1 = 1
In scalar form these relations are given by:
Σ C_i^s = 0
Σ C_i^J = 1
where the summation is over all reaction steps in the pathway.
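These summation relations are easy to verify numerically. The sketch below uses the two gene cascade of Figure 11 with the Figure 12 parameter values, builds the unscaled coefficients at ω = 0, scales them, and checks that the rows of the scaled Cs sum to zero and the rows of the scaled CJ sum to one:

```python
import numpy as np

k1, k2, k3, k4, Xo = 0.1, 0.2, 0.25, 0.06, 1.0
N = np.array([[1.0, -1.0, 0.0,  0.0],
              [0.0,  0.0, 1.0, -1.0]])

s1 = k1 * Xo / k2                               # steady state concentrations
s2 = k3 * s1 / k4
v = np.array([k1 * Xo, k2 * s1, k3 * s1, k4 * s2])   # steady state rates

dvds = np.array([[0.0, 0.0],
                 [k2,  0.0],
                 [k3,  0.0],
                 [0.0, k4]])

Cs = -np.linalg.solve(N @ dvds, N)              # unscaled Cs at w = 0
CJ = dvds @ Cs + np.eye(4)                      # unscaled CJ at w = 0

Cs_scaled = np.diag(1.0 / np.array([s1, s2])) @ Cs @ np.diag(v)
CJ_scaled = np.diag(1.0 / v) @ CJ @ np.diag(v)

print("row sums of scaled Cs:", Cs_scaled.sum(axis=1))   # ~ [0, 0]
print("row sums of scaled CJ:", CJ_scaled.sum(axis=1))   # ~ [1, 1, 1, 1]
```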
Negative Feedback
Negative feedback has a significant effect on the frequency response of a
system. Of particular interest here are four effects:
• Negative feedback reduces the overall gain.
• The bandwidth is extended, i.e. the frequency response is improved.
• The effect of loads on the output is reduced.
• The gain and phase margins are reduced
Another key property of negative feedback systems is their propensity to
become unstable. In terms of the frequency response it is straightforward
to understand the origins of this instability. In a negative feedback system,
most disturbances are damped by the action of the feedback. However,
what if the feedback mechanism takes too long to respond, so that by the
time the negative feedback acts the disturbance has already abated? In
such a situation the feedback would now attempt to restore a disturbance
that is no longer present, resulting in an incorrect action. Imagine that the
feedback system acts in the opposite direction to the disturbance because of
the delay; this would cause the disturbance to grow rather than fall. Imagine
also that there is sufficient gain in the feedback loop to amplify or at least
maintain this disturbance; the result would be a growing signal. If the loop
gain amplifies, the disturbance will grow until it reaches the physical
limits of the system, at which point the loop gain is likely to fall and the
disturbance with it. The result is a continuous growth and decline in the original
disturbance, that is, a sustained oscillation.
The key elements for sustaining an oscillation are a sufficient loop gain (at
least 1.0) and a delay in the system of exactly −180°. As we have seen,
specific phase shifts only occur at particular frequencies; however, random
disturbances in the system occur at all frequencies, therefore disturbances in a system that has sufficient loop gain will quickly locate the point
where the phase shift is −180°, resulting in a rapid destabilization of the
system.
The requirement for a −180o shift means that at this point the negative
feedback is effectively behaving as a positive feedback, and positive feedbacks
are normally destabilizing influences.
There are two terms that engineers frequently use to measure how close a
system is to instability; these terms are the gain and phase margins.
The gain margin is the amount of gain increase required to make the loop
gain unity at the frequency where the phase angle is −180°, and the phase
margin is the difference between the phase of the response and −180° when
the loop gain is 1.0, see Figure 14. Both terms can be used to measure
the relative stability of a negative feedback system. Both however must be
measured with respect to the loop gain and not the closed loop gain.
However, without having to get into any sophisticated arguments, there is
a very simple yet insightful algebraic analysis of a simple feedback system.
This analysis illustrates many of the key properties that feedback systems
possess. Consider the block diagram in Figure 15.
Figure 14: Gain and phase margins, shown on the loop gain magnitude and phase plots.
We will consider only the steady-state behaviour of the system. We take the
input u, the output y, and the error e to be constant scalars. Assume (for
now), that both the amplifier A and the feedback F act by multiplication
(take A and F as non-negative scalars). Then without feedback (i.e. F = 0),
the system behaviour is described by y = Au, which is an amplifier (with
gain A) provided that A > 1. Assuming for now that the disturbance d is
zero, and if we now include feedback in our analysis, the behaviour of the
system is as follows. From the diagram, we have
y = Ae
e = u − F y.
Eliminating e, we find
y = Au/(1 + AF),     or simply     y = Gu,
where G = A/(1 + AF) is the system (or closed loop) gain. Comparing G with
A, it is immediate that the feedback does indeed reduce the gain of the
amplifier. Further, if the loop gain AF is large (AF ≫ 1), then
G ≈ A/(AF) = 1/F.
Figure 15: Linear feedback system: input u, disturbance d, error e, amplifier A producing Ae, output y and feedback F. (Summation point; A = Amplifier; F = Feedback.)
That is, as the gain AF increases, the system behaviour becomes more
dependent on the feedback loop and less dependent on the rest of the system.
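The effect is easy to see with numbers. The sketch below (assumed values of A and F) computes the closed loop gain and the relative sensitivity for amplifier gains that vary ten-fold:

```python
def closed_loop_gain(A, F):
    # G = A / (1 + A*F)
    return A / (1.0 + A * F)

F = 0.01                                     # assumed feedback gain
for A in [1e3, 2e3, 1e4]:                    # amplifier gain varying ten-fold
    G = closed_loop_gain(A, F)
    rel_sens = 1.0 / (1.0 + A * F)           # relative sensitivity (dG/dA)(A/G)
    print(f"A = {A:8.0f}   G = {G:7.2f}   relative sensitivity = {rel_sens:.4f}")
# G stays close to 1/F = 100 even though A changes by a factor of ten.
```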
We next indicate three specific consequences of this key insight.
Resistance to internal parameter variation In all real amplifiers, both
man-made and natural, there will be variation in the amplifier (A) characteristics, either as a result of the manufacturing process or internally generated thermal noise. We can study the effect of variation in the amplifier
characteristics by investigating how variation in A causes variation in the gain G.
Considering the sensitivity of the system gain G to variation in the parameter A, we find
∂G/∂A = ∂/∂A [A/(1 + AF)] = 1/(1 + AF)².
Clearly, this sensitivity decreases as AF increases. It may be more telling
to consider the relative sensitivity, in which case we find
(∂G/∂A)(A/G) = 1/(1 + AF),
so that for a small change ∆A in the gain of the amplifier, we find the
resulting change ∆G in the system gain satisfies
∆G/G ≈ [1/(1 + AF)] (∆A/A).
As the strength of the feedback (F) increases, the influence of variation in A
decreases.
Resistance to disturbances in the output Suppose now that a nonzero
disturbance d affects the output as in Figure 15. The system behaviour is
then described by
y = Ae − d
e = u − Fy.
Eliminating e, we find
y = (Au − d)/(1 + AF).
The sensitivity of the output to the disturbance is then
∂y/∂d = −1/(1 + AF).
Again, we see that the sensitivity decreases as the loop gain AF is increased.
In practical terms, this means that the imposition of a load on the output, for
example a current drain in an electronic circuit or protein sequestration in a
signaling network, will have less of an effect on the amplifier as the feedback
strength increases. In electronics this property essentially modularizes the
network into functional units.
Improved fidelity of response Consider now the case where the amplifier A is nonlinear, for example a cascade pathway exhibiting a sigmoidal
response. Then the behaviour of the system G (now also nonlinear) is described by
G(u) = y = A(e),     e = u − Fy = u − FG(u).
Differentiating, we find
G′(u) = A′(u) de/du,     de/du = 1 − FG′(u).
Eliminating de/du, we find
G′(u) = A′(u)/(1 + A′(u)F).
We find then, that if A′(u)F is large (A′(u)F ≫ 1), then
G′(u) ≈ 1/F,
so, in particular, G is approximately linear. In this case, the feedback compensates for the nonlinearities A(·) and the system response is not distorted.
(Another feature of this analysis is that the slope of G(·) is less than that of
A(·), i.e. the response is “stretched out”. For instance, if A(·) is saturated
by inputs above and below a certain “active range”, then G(·) will exhibit
the same saturation, but with a broader active range.)
A natural objection to the implementation of feedback as described above is
that the system sensitivity is not actually reduced, but rather is shifted so
that the response is more sensitive to the feedback F and less sensitive to
the amplifier A. However, in each of the cases described above, we see that
it is the nature of the loop gain AF (and not just the feedback F ) which
determines the extent to which the feedback affects the nature of the system.
This suggests an obvious strategy. By designing a system which has a small
“clean” feedback gain and a large “sloppy” amplifier, one ensures that the
loop gain is large and the behaviour of the system is satisfactory. Engineers
employ precisely this strategy in the design of electrical feedback amplifiers,
regularly making use of amplifiers with gains several orders of magnitude
larger than the feedback gain (and the gain of the resulting system).
Implications for Drug Targeting
The analysis of feedback illustrates an important principle for those engaged
in finding new drug targets. The aim of a drug is to cause a disruption to
the network in such a way that the network is restored to its 'healthy' wild-type state. Clearly targets must be susceptible to disruption for the drug
to have any effect. The analysis of feedback suggests that targets inside
a feedback loop are not suitable because any attempt to disturb these
targets will be resisted by the feedback loop. Conversely, targets upstream
and particularly downstream are very susceptible to disturbance. Figure 16
illustrates the effect of a 20 fold decrease in enzyme activity at two points
in a simple reaction chain. In the first case, disruption at either the center
or the end of the network has a significant effect on the concentration of the
last species, S3. In the second panel, the same pathway is shown but with
a negative feedback loop from S3 to the first enzyme. The same activity
reductions are also shown, but this time note the almost insignificant effect
that modulating a step inside the loop has compared to modulating a step
outside the loop.
Thus the take-home message to pharmaceutical companies who are looking
for suitable targets is to avoid targeting reaction steps inside feedback loops!
Figure 16: Negative Feedback and Drug Targets