Algorithms for Periodic Steady State Analysis on Electric Circuits

Nat.Lab. Unclassified Report 804/99
Date of issue: 2/99

Master's Thesis
S.H.M.J. Houben

Author's address: S.H.M.J. Houben, WAY3.73; houben@prle

© Philips Electronics N.V. 1999
All rights are reserved. Reproduction in whole or in part is prohibited without the written consent of the copyright owner.

Unclassified Report: 804/99
Title:
Algorithms for Periodic Steady State Analysis on Electric Circuits
Master’s Thesis
Author:
S.H.M.J. Houben
Keywords:
circuit simulation; Periodic Steady State; Radio Frequency; autonomous; multi-tone; Finite Difference; Shooting; PSS; RF; Pstar
Abstract:
Designers of electric circuits are often interested in the behaviour
of the designed circuits after turn-on effects have damped out.
The traditional way to compute this long-term behaviour is by
simulating the electric circuit over a large time interval. However,
when the turn-on effects damp out slowly, the computational costs
may become prohibitive.
Many circuits have a periodic oscillating behaviour after turn-on effects have damped out. Typical examples of such circuits are radio-frequency (RF) circuits. Such a periodic oscillating solution, also known as the Periodic Steady State (PSS), can be computed efficiently by using two-point boundary value problem techniques. An extra complication arises because the oscillation period might be unknown a priori.
In this thesis, algorithms will be discussed for finding the PSS in the non-autonomous case (where the period is known a priori) and in the autonomous case (where the period is an additional unknown). The methods will be compared from both a theoretical and an experimental viewpoint.
This project has been a co-operation between Philips Electronic Design & Tools/Analog Simulation (ED&T/AS) and the Eindhoven University of Technology (TUE). It was supervised by Dr. E.J.W. ter Maten (ED&T/AS), Dr. W.H.A. Schilders (Philips Research, VLSI Design Automation & Test), Prof. Dr. R.M.M. Mattheij (TUE) and Dr. J.M.L. Maubach (TUE).
Conclusions:
The following conclusions can be reached:
1. In this report, several methods for finding a PSS have been compared, and a mathematical framework has been developed in which these methods fit. Several well-posedness theorems have already been proven. However, a deeper mathematical validation of all steps of the various methods described in this report and in [12] is still needed.
2. Using a dedicated PSS algorithm to find the long-term behaviour of a circuit can save computational costs, as compared to doing a transient analysis.
3. Of the methods tested, the most attractive method is the
FDM* method (see page 85), since it generally performs
about as well as the best of the other two methods tested.
4. Several recommendations have been taken into account in an already started implementation of an RF noise simulation facility in Pstar.
5. The setup of Qstar (see [3]) as a MATLAB-based parallel to Pstar proved to be of value because it created an environment for study in MATLAB. Pstar, which is developed and maintained on a customer-driven basis, does not provide such an investigation environment.
6. Qstar will also be of value for allowing future students to
study methods for Pstar.
Contents

1  Introduction                                                          1

2  Basic equations                                                       2
   2.1  Introduction                                                     2
   2.2  Nodal analysis                                                   4
        2.2.1  The reformulation of Kirchhoff's Current Law              4
        2.2.2  The reformulation of Kirchhoff's Voltage Law              6
   2.3  Modified nodal analysis                                         10
   2.4  The rank of A                                                   12

3  Applications using the circuit equations                             15
   3.1  DC analysis                                                     15
   3.2  Small-signal AC analysis                                        16
   3.3  Transient analysis                                              17
   3.4  Periodic steady state analysis                                  19
   3.5  Some final remarks                                              19

4  Hierarchical algorithm                                               20
   4.1  Introduction                                                    20
   4.2  The composition of a circuit from its sub-circuits              20
   4.3  Some basic circuit components                                   24
        4.3.1  Current source                                           25
        4.3.2  Current-defined resistor                                 26
        4.3.3  Capacitor                                                26
        4.3.4  Voltage source and voltage-defined resistor              27
        4.3.5  Inductor                                                 27
        4.3.6  Controlled circuit components                            28
   4.4  Special properties of G and C                                   29
   4.5  The elimination of zeroes on the main diagonal                  30
        4.5.1  Zeroes due to voltage-defined elements                   30
        4.5.2  Zeroes due to the hierarchy                              33

5  A complete example                                                   35
   5.1  The resonant model                                              36
   5.2  The npn-transistor                                              38
   5.3  The integration into the super-circuit                          40

6  Existence of a solution to the two-point Boundary Value Problem      43
   6.1  Introduction                                                    43
   6.2  Index of a DAE                                                  43
   6.3  Transferable DAEs                                               45
   6.4  Well-conditioning                                               48

7  Periodic Steady State analysis                                       50
   7.1  Harmonic Balance                                                50
        7.1.1  The Discrete Fourier Transform                           51
        7.1.2  Applying the DFT in Harmonic Balance                     52
        7.1.3  Solving the Harmonic Balance form                        53
   7.2  Extrapolation methods                                           53
        7.2.1  Aitken's Δ² method                                       55
        7.2.2  Reduced rank extrapolation (RRE)                         57
        7.2.3  Minimal polynomial extrapolation (MPE)                   58
        7.2.4  Epsilon extrapolation                                    59
   7.3  Shooting method                                                 60
   7.4  Waveform-Newton                                                 61
   7.5  Finite Difference Method                                        63
   7.6  Comparing Finite Difference with Shooting                       64

8  Free-running oscillators                                             66
   8.1  Introduction                                                    66
   8.2  Characterisation of free-running oscillators                    68
        8.2.1  Basic results                                            68
        8.2.2  Floquet theory for DAEs                                  70
   8.3  An adaptation to the Finite Difference Method                   73
   8.4  Finding the extra condition                                     76

9  Multi-tone analysis                                                  79
   9.1  Introduction                                                    79
   9.2  The multi-tone representation of the circuit equations          80
   9.3  Finite Difference Method                                        82
   9.4  Hierarchical Shooting                                           82

10 Numerical experiments                                                85
   10.1  Introduction                                                   85
   10.2  Comparisons between Shooting and the Finite Difference Method  85
   10.3  Conclusions                                                    89

11 Conclusions                                                          90
   11.1  Introduction                                                   90
   11.2  Conclusions                                                    90
   11.3  Suggestions for further research                               91

A  Hybrid analysis                                                      92
   A.1  Introduction                                                    92
   A.2  Method essentials                                               92

B  Circuit descriptions and diagrams                                    94
   B.1  Introduction                                                    94
   B.2  Circuits                                                        94
        B.2.1  Rectifier circuit                                        94
        B.2.2  Frequency doubler                                        97
        B.2.3  Mixer                                                   100
        B.2.4  Free-running oscillator                                 104
        B.2.5  More complicated rectifier circuit                      107
        B.2.6  Another free-running oscillator                         109
        B.2.7  An even more complicated rectifier circuit              112

Distribution                                                           119
Chapter 1
Introduction
In this report, methods for finding the Periodic Steady State (PSS) of an electric circuit will be discussed. I will start in chapter 2 by explaining how the equations that describe a circuit are derived. In the following chapters, I will give some basic insight into various circuit simulation techniques. In chapter 7, I will discuss PSS analysis in detail. In chapter 8, an extension will be made towards free-running (autonomous) oscillators. In chapter 9, an extension will be made towards multi-tone problems.

In chapter 10, I will discuss some numerical experiments which have been conducted. Finally, in chapter 11, I will arrive at some conclusions and make some recommendations for further research.
Chapter 2
Basic equations
2.1 Introduction
A circuit consists of nodes which are connected by several types of branches. A simple circuit is shown in figure 2.1. This circuit has 4 nodes and 5 branches.

[Figure 2.1: A simple circuit]

Note that some of
the branches have a direction, i.e. there is a distinction between their positive and their
negative end node. Hence a circuit is essentially a directed graph. With each branch,
we can associate a number of branch variables. Typical branch variables are the current i
through the branch and the voltage difference v over the branch.
There are two types of equations, which together describe the circuit:
1. Branch equations
These depend on the type of the branch, and they depend primarily on the circuit
variables that are associated with that particular branch. Every branch has a number
of circuit variables associated with it. Typically, a branch has n branch variables
with n − 1 branch equations describing them.
For example, a linear resistor can be described by the branch equation i = v/R,
where R is the resistance of the resistor, and i and v are the branch variables. The
branch equations can also contain derivatives of branch variables. An example of
this is a linear capacitor, which can be described by i = C v̇.
In some cases, the branch equation might also contain circuit variables that are associated with other branches. Such a circuit variable is called a controlling variable.
A branch that has controlling variables in its branch equation is called a controlled
branch.
Branch equations come in two types, namely differential and algebraical equations.
Suppose that x is the vector containing all circuit variables. An algebraical equation has the form:

    x_p = j(t, x).    (2.1)

The function j may depend on all circuit variables, hence the appearance of the complete vector x in equation 2.1.

A differential equation has the form:

    x_q = (d/dt) q(t, x) ≡ ∂q(t, x)/∂t + (∂q(t, x)/∂x) ẋ.    (2.2)
2. Topological equations
The topological equations do not depend on the type of the branches, but only on
the topology of the circuit, i.e. the way in which the branches are connected. These
equations are given by two laws, the so-called Kirchhoff’s topological laws. These
laws are:
Kirchhoff's Current Law (KCL): The total sum of currents traversing each cutset¹ in the circuit must always be zero. For example, the currents through branch a and branch b in circuit 2.1 must sum up to 0.

Kirchhoff's Voltage Law (KVL): The total sum of all voltage differences along each loop in the circuit must always be zero. For example, the voltage differences over the branches of the loop a → b → c → a must sum up to 0.
There are several methods for combining the branch equations and the Kirchhoff’s topological laws into one system of circuit equations:
    f(t, x, ẋ) := (d/dt) q(t, x) + j(t, x) = 0,    (2.3)
where x is the vector containing all relevant circuit variables.
Some authors prefer to express equation 2.3 in the following form:
    f(t, x, ẋ) := (d/dt) q(t, x) + j*(t, x) + s(t) = 0.    (2.4)
¹ A cutset is a minimal set of branches that, if removed, would divide the circuit into two separate parts. The branches b, c and d in figure 2.1 form a cutset.
In this case, the so-called sources (branches which have a circuit equation of the form
xs = s(t)) are in the function s. However, in this report I will put the sources in j.
In the next section, I will discuss two methods, nodal analysis and modified nodal analysis. The latter is the method used by Pstar. In appendix A, a different method is discussed,
called hybrid analysis.
Equation 2.3 is a so-called Differential-Algebraical Equation, or DAE for short. This
means that this system of equations might contain both differential equations, which potentially arise from the branch equation of a capacitor, and algebraical equations, which
might arise from the branch equation of a resistor, or from the topological equations.
2.2 Nodal analysis
The simplest method for constructing the circuit equations is the so-called nodal analysis
method. This method has, however, some severe restrictions; several important types
of branches, such as voltage sources, cannot be used. Hence Pstar uses a variant of this
method, called modified nodal analysis, in which the restrictions are lifted. Since modified
nodal analysis is most easily explained by comparing it to standard nodal analysis, I will
discuss standard nodal analysis first.
The method depends on the nodal incidence matrix A ∈ Rn×b , where n is the number of
nodes and b is the number of branches in the circuit. A is defined as:

    A(i, j) =   1  if node i is the positive node of branch j,
               −1  if node i is the negative node of branch j,    (2.5)
                0  if node i is not connected to branch j.

From this, it follows that the sum of all rows is a zero row: Σᵢ A(i, j) = 0 for all j.
For circuit 2.1, the incidence matrix looks like this:

            a   b   c   d   e
        0 [  1   1   0   0   0 ]
    A = 1 [  0  −1   1   0   1 ]    (2.6)
        2 [ −1   0  −1   1  −1 ]
        3 [  0   0   0  −1   0 ]
For the branches a, c and d, one of their two end nodes is arbitrarily selected as their
“positive” end node. Note that A is time-independent; the topology of the network does
not change in time.
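These topological properties are easy to check numerically. The sketch below (Python with NumPy; the entries follow the branch orientations chosen for equation 2.6, some of which are themselves arbitrary choices) verifies that every column of A sums to zero and that A has rank n − 1 = 3 for this connected circuit:

```python
import numpy as np

# Incidence matrix of circuit 2.1 (rows: nodes 0..3, columns: branches a..e).
A = np.array([
    [ 1,  1,  0,  0,  0],   # node 0
    [ 0, -1,  1,  0,  1],   # node 1
    [-1,  0, -1,  1, -1],   # node 2
    [ 0,  0,  0, -1,  0],   # node 3
])

# Every branch has exactly one positive and one negative end node,
# so each column sums to zero: sum_i A(i, j) = 0 for all j.
assert (A.sum(axis=0) == 0).all()

# Consequently the rows are linearly dependent; for a connected circuit
# with n nodes the rank is n - 1 (here 4 - 1 = 3), see also section 2.4.
assert np.linalg.matrix_rank(A) == 3
```

The rank deficiency of exactly one is what later motivates grounding a single node.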
2.2.1 The reformulation of Kirchhoff's Current Law

Let us recapitulate Kirchhoff's Current Law (KCL):

(α) The total sum of currents traversing each cutset in the circuit is always zero.

This is equivalent to the following assertion:

(β) The total sum of all currents going into some node is always zero.
I will give a short description of the proof:

"(α) ⇒ (β)": Select some node p, and consider the set B_p of branches that are directly connected to p. An example is shown in figure 2.2; in this circuit, four branches are directly connected to p.

[Figure 2.2: The set of branches connected to node p form a cutset]

Clearly, these branches form a cutset in the network, since one could disconnect p from the rest of the network by removing exactly these branches. So the total sum of currents traversing this cutset must be 0. But this sum is equal to the sum of all currents going into node p. So this sum is 0, which was to be proven.

"(β) ⇒ (α)": The assertion is obviously true if one of the subsets into which the circuit is divided by the cutset consists of only one node. By induction on the number of nodes of one subset, we can prove the assertion for any cutset.
The formulation (β) of Kirchhoff's Current Law (KCL) can be written in the form of a system of linear equations. To see this, consider again the circuit in figure 2.1. According to formulation (β), all currents entering node 0 must sum up to 0. So we get the following equation:

    i_a + i_b = 0.

The currents entering node 1 must also sum up to 0. This leads to the following equation:

    −i_b + i_c + i_e = 0.

The minus sign before i_b results from the fact that node 1 is the negative end node of branch b. The current i_b is flowing out of node 1; hence it gives a negative contribution to the equation.
If i_b(t) ∈ R^b is the vector of all branch currents at time t, the complete set of equations resulting from Kirchhoff's Current Law can be written as:

    A i_b(t) = 0,  ∀t ∈ R.    (2.7)

Another way to express this is to say that i_b is in the kernel of A: i_b ∈ N(A).
2.2.2 The reformulation of Kirchhoff's Voltage Law

Let us recapitulate Kirchhoff's Voltage Law:

(α) The total sum of voltage differences along each loop in the circuit is always zero.

This is equivalent to the following assertion:

(β) To every node i a voltage v_i can be assigned in such a way that for every branch j the following equation holds:

    v_j = v_j⁺ − v_j⁻.

Here, v_j⁺ and v_j⁻ are the voltages at the positive and negative nodes connected to branch j. The voltages are unique except for a common constant.
Again, I will give a short description of the proof:

"(α) ⇒ (β)": Select two nodes p and q, and consider two trails t1 and t2 from p to q; see figure 2.3.

[Figure 2.3: Two trails t1 and t2 from p to q can be connected to form a loop.]

The sum of the voltage differences over the branches along t1 will be called v_p, and the sum of the voltage differences over the branches along t2 will be called v_q. Now consider the loop we get by walking from p to q along t1 and then
returning to p along t2. The sum of the voltage differences over the branches along this loop is v_p − v_q, but since the sum of voltage differences along a loop must be zero, we now know that v_p = v_q. Hence the sum of the voltage differences of the branches along a trail depends only on the begin and end points of the trail.
Next, select an arbitrary node 0, and assign to this node an arbitrary voltage v0. For every node i, select some trail t_i from 0 to i. Assign to i the voltage v_i = v_{t_i} + v0, where v_{t_i} is the sum of the voltage differences over the branches along t_i. It is now easy to prove that this assignment of voltages satisfies the properties mentioned above.
“(β) ⇒ (α)” It is easy to show that for every trail t the sum of voltage differences along
t is equal to vq − v p , where p is the begin node of the trail and q is the end node of
the trail. So if we have a loop (i.e. a circular trail), the begin and the end node are
the same, and therefore the sum of the voltage differences must be zero.
Formulation (β) of Kirchhoff's Voltage Law (KVL) will now be used to get a system of linear equations. Note that formulation (β) essentially says that every voltage difference v_j over a branch j can be written as the difference between the nodal voltages of the two end nodes of branch j. So for circuit 2.1, this would lead to the following set of equations:

    v_a = v0 − v2
    v_b = v0 − v1
    v_c = v1 − v2    (2.8)
    v_d = v2 − v3
    v_e = v1 − v2
Now let v_n(t) ∈ R^n be the vector of all node voltages at time t, and let v_b(t) ∈ R^b be the vector of the voltage differences at time t over all branches. The equations 2.8 can then be rewritten as:

    v_b(t) = A^T v_n(t),  ∀t ∈ R,    (2.9)

where A is defined as in equation 2.6. So Kirchhoff's Voltage Law says essentially that v_b(t) ∈ R(A^T). The fact that, given v_b(t), the vector v_n(t) is determined uniquely only up to a constant implies that the vector [1, 1, …, 1]^T ∈ N(A^T).
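Both facts can be exercised on circuit 2.1 directly from A. A small sketch (Python/NumPy; the loop traversal direction and the random node voltages are my own choices for illustration) confirms that v_b = A^T v_n makes the loop sum vanish and that a common shift of all node voltages is invisible in v_b:

```python
import numpy as np

# Incidence matrix of circuit 2.1, with the orientations of equation 2.6.
A = np.array([
    [ 1,  1,  0,  0,  0],
    [ 0, -1,  1,  0,  1],
    [-1,  0, -1,  1, -1],
    [ 0,  0,  0, -1,  0],
])

v_n = np.random.default_rng(0).standard_normal(4)  # arbitrary node voltages
v_b = A.T @ v_n                                    # equation 2.9

# KVL around the loop a -> b -> c -> a: branch a is traversed against its
# orientation (contributing -v_a), branches b and c along their orientation.
assert abs(-v_b[0] + v_b[1] + v_b[2]) < 1e-12

# Shifting all node voltages by a common constant leaves v_b unchanged,
# i.e. [1, 1, 1, 1]^T lies in N(A^T).
assert np.allclose(A.T @ (v_n + 7.0), v_b)
```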
The combination of KCL and KVL with the branch equations

The next step is to combine the topological equations resulting from Kirchhoff's Current Law and Kirchhoff's Voltage Law with the branch equations. At first, we would like all branch equations to have the form:

    i_j = (d/dt) q̂(t, v) + ĵ(t, v),    (2.10)
where i_j is the current through branch j and v is the vector containing the voltage differences over all branches. Typically, equation 2.10 will depend only on the component v_j of the vector v, but since controlled branches are allowed, 2.10 might depend on the whole of v.
There are some types of branches whose branch equations cannot be written in the form
of 2.10; the treatment of these branches will be discussed further on. An example of a
circuit whose branch equations do have the form of 2.10 is given in figure 2.4.
[Figure 2.4: A circuit whose branch equations can be written in the form 2.10]
The incidence matrix A for this circuit is:

            r   s   c
    A = 0 [  1   1   1 ]
        1 [ −1  −1  −1 ]
For this circuit the number of nodes n = 2, and the number of branches b = 3. The branch equations of this circuit are:

    r:  i_r = (1/R) v_r          (resistor)
    s:  i_s = I                  (current source)
    c:  i_c = (d/dt)(C v_c)      (capacitor)
All branch equations are linear, which, of course, need not be true in the general case. The branch equations can be aggregated into one system, like this:

    i_b = (d/dt) q̂(t, v_b) + ĵ(t, v_b),  with q̂, ĵ : R × R^b → R^b,    (2.11)

where i_b is the vector containing all branch currents, and v_b is the vector containing the voltage differences over all branches. In our example, with v_b = [v_r, v_s, v_c]^T, the functions q̂ and ĵ are defined as:

    q̂(t, v_b) := [0, 0, C v_c]^T,    ĵ(t, v_b) := [(1/R) v_r, I, 0]^T.
By applying the equations 2.7 and 2.9, we can transform the system 2.11 into:

              i_b = (d/dt) q̂(t, v_b) + ĵ(t, v_b)
    ⇒       A i_b = (d/dt) A q̂(t, v_b) + A ĵ(t, v_b)
    ⇒(2.7)      0 = (d/dt) A q̂(t, v_b) + A ĵ(t, v_b)
    ⇒(2.9)      0 = (d/dt) A q̂(t, A^T v_n) + A ĵ(t, A^T v_n) ∈ R^n.    (2.12)
In this way, we find a system of n equations for n unknowns. This step can be performed for the circuit 2.4; the resulting equations are:

    0 = (d/dt) [C(v0 − v1); C(v1 − v0)] + [(1/R)(v0 − v1) + I; −(1/R)(v0 − v1) − I].

Of course, these equations are not independent, but this is to be expected, since we know that [1, 1]^T ∈ N(A^T); hence rank(A) < 2. In the general case, we know that since A is singular, the system of equations 2.12 has rank n − 1 at most; hence this system of equations is underdetermined. This can be fixed by prescribing the voltage in one of the nodes. This node is called the grounded node, since it could be considered as being connected to the earth.
If node g is selected to become the grounded node with voltage v_g, then we can proceed like this: remove the g-th coordinate of v_n and call the resulting vector ṽ_n ∈ R^{n−1}. Also remove the g-th row from the matrix A, resulting in the matrix Ã ∈ R^{(n−1)×b}. Since v_n(g) = v_g, we then have:

    A^T v_n = Ã^T ṽ_n + v_g A^T e_g,    (2.13)

where e_g ∈ R^n is the g-th unit vector. By substituting 2.13 into 2.12 and removing the g-th equation, we get:

    0 = (d/dt) Ã q̂(t, Ã^T ṽ_n + v_g A^T e_g) + Ã ĵ(t, Ã^T ṽ_n + v_g A^T e_g),    (2.14)

which is a system of n − 1 equations for n − 1 unknowns.
In our example circuit 2.4, we will select node 0 to become the grounded node with voltage v0 = 0. By performing the steps mentioned above, we get the following circuit equation:

    0 = (d/dt)(C v1) + (1/R) v1 − I,

which is indeed a system of n − 1 equations for n − 1 unknowns, since n = 2 in this example.
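The grounded circuit equation can be integrated step by step. Below is a minimal backward-Euler sketch (Python; the parameter values, and the sign convention that the source current I charges node 1 so that the DC solution is v1 = I·R, are illustrative assumptions, not taken from the thesis):

```python
# Backward-Euler integration of the grounded nodal equation of circuit 2.4:
#     0 = d/dt(C v1) + (1/R) v1 - I,
# with the convention that the current source I charges node 1.
# Parameter values are illustrative.
R, C, I = 2.0, 1e-3, 0.5
h, T = 1e-4, 0.05
v1, t = 0.0, 0.0            # start from an uncharged capacitor

while t < T:
    # One implicit step: (C/h)(v1_new - v1) + (1/R) v1_new - I = 0,
    # solved explicitly for v1_new (the equation is linear here).
    v1 = (C / h * v1 + I) / (C / h + 1.0 / R)
    t += h

# After many time constants (RC = 2 ms, T = 50 ms) the turn-on transient
# has damped out and v1 approaches the DC solution v1 = I * R.
assert abs(v1 - I * R) < 1e-3
```

This is exactly the long-transient approach whose cost the PSS methods of chapter 7 are designed to avoid.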
To finish the discussion of nodal analysis, I will show how the system 2.14 can be written in the form of 2.3. First, define x by x := ṽ_n. Next, q and j are defined as:

    q(t, x) := Ã q̂(t, Ã^T x + v_g A^T e_g);    j(t, x) := Ã ĵ(t, Ã^T x + v_g A^T e_g).

Finally, define f as:

    f(t, x, ẋ) := (d/dt) q(t, x) + j(t, x) = (∂q(t, x)/∂x) ẋ + ∂q(t, x)/∂t + j(t, x).

The system 2.14 can now be written in the form:

    f(t, x, ẋ) = 0 ∈ R^{n−1},

i.e. in the form of 2.3.
2.3 Modified nodal analysis
Nodal analysis has one important limitation: it requires all branch equations to fit the form 2.10. This is expressed by saying that nodal analysis requires the branches to be current-defined and voltage-controlled. However, it is also very useful to have branches that are voltage-defined and current-controlled. Such branches have a branch equation of the form:

    v_j = (d/dt) q̄(t, i_j) + j̄(t, i_j).

A very simple but important example of such a branch is the voltage source; a 5V voltage source would have the branch equation v_j = 5. Clearly, this cannot be rewritten in the form of equation 2.10. For example, if branch s in circuit 2.4 is replaced by an oscillating voltage source, we get the circuit shown in figure 2.5.
[Figure 2.5: Same as figure 2.4, but branch s is now a voltage source, which produces a sine wave voltage]
This circuit has the following branch equations:

    i_r = (1/R) v_r
    v_s = V sin ωt
    i_c = C v̇_c
In the most general setting, we have two types of branches: branches that are current-defined, and branches that are voltage-defined. Both types can be voltage-controlled, current-controlled, or even both. Suppose that there are b1 current-defined branches and b2 voltage-defined branches, giving a total of b1 + b2 = b branches. We split the vectors v_b and i_b into two parts, the first (v_b1 and i_b1) corresponding to the current-defined branches and the second (v_b2 and i_b2) corresponding to the voltage-defined branches:

    v_b = [v_b1; v_b2]    and    i_b = [i_b1; i_b2].
For circuit 2.5, this would give:

    v_b1 := [v_r; v_c],    v_b2 := v_s,

and similarly for i_b. The matrix A can also be split into two parts:

    A = [A1  A2].

In the case of circuit 2.5, we get:

               r   c                      s
    A1 := 0 [  1   1 ]    and    A2 := 0 [  1 ]
          1 [ −1  −1 ]                 1 [ −1 ]

Kirchhoff's Current Law, as formulated in equation 2.7, can now be rewritten as:

    A1 i_b1 + A2 i_b2 = 0,    (2.15)

and equation 2.9, which describes Kirchhoff's Voltage Law, becomes:

    v_b1 = A1^T v_n,    v_b2 = A2^T v_n.    (2.16)
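The split form of KCL in equation 2.15 is easy to sanity-check for circuit 2.5. In the sketch below (Python/NumPy; the particular current values are an arbitrary choice satisfying KCL at both nodes), the current-defined and voltage-defined contributions cancel as required:

```python
import numpy as np

# Incidence matrix of circuit 2.5, split into the current-defined part
# A1 (branches r, c) and the voltage-defined part A2 (branch s).
A1 = np.array([[ 1,  1],
               [-1, -1]])    # columns: r, c
A2 = np.array([[ 1],
               [-1]])        # column: s

# Branch currents obeying KCL at both nodes, e.g. i_r = 1 and i_c = 2
# flowing out of node 0 and i_s = -3 returning (illustrative values).
i_b1 = np.array([1.0, 2.0])  # currents of branches r, c
i_b2 = np.array([-3.0])      # current of branch s

# Equation 2.15: A1 i_b1 + A2 i_b2 = 0.
assert np.allclose(A1 @ i_b1 + A2 @ i_b2, 0.0)
```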
Suppose that we have the following branch equations:

    i_b1 = (d/dt) q̂(t, v_b1, i_b2) + ĵ(t, v_b1, i_b2),    (2.17)
    v_b2 = (d/dt) q̄(t, v_b1, i_b2) + j̄(t, v_b1, i_b2).    (2.18)

For circuit 2.5, the definitions of q̂, ĵ, q̄ and j̄ are:

    q̂(t, [v_r; v_c], i_s) := [0; C v_c],    ĵ(t, [v_r; v_c], i_s) := [(1/R) v_r; 0],    (2.19)
    q̄(t, [v_r; v_c], i_s) := 0,             j̄(t, [v_r; v_c], i_s) := V sin ωt.          (2.20)
By applying 2.15 and 2.16, the equations 2.17 can be transformed in the following way:

                 i_b1 = (d/dt) q̂(t, v_b1, i_b2) + ĵ(t, v_b1, i_b2)
    ⇒         A1 i_b1 = (d/dt) A1 q̂(t, v_b1, i_b2) + A1 ĵ(t, v_b1, i_b2)
    ⇒(2.15)  −A2 i_b2 = (d/dt) A1 q̂(t, v_b1, i_b2) + A1 ĵ(t, v_b1, i_b2)
    ⇒(2.16)  −A2 i_b2 = (d/dt) A1 q̂(t, A1^T v_n, i_b2) + A1 ĵ(t, A1^T v_n, i_b2).    (2.21)
Similarly, the equation 2.18 can be transformed:

                 v_b2 = (d/dt) q̄(t, v_b1, i_b2) + j̄(t, v_b1, i_b2)
    ⇒(2.16)  A2^T v_n = (d/dt) q̄(t, A1^T v_n, i_b2) + j̄(t, A1^T v_n, i_b2).    (2.22)
By substituting the equations 2.19 and 2.20 into 2.21 and 2.22, we get the following equations for circuit 2.5:

    [−i_s; i_s] = (d/dt) [C(v0 − v1); C(v1 − v0)] + [(1/R)(v0 − v1); (1/R)(v1 − v0)],    (2.23)

    v0 − v1 = V sin ωt.    (2.24)
Again, this set of equations is underdetermined. We apply the same trick as in the previous section, and get for the general case the equations:

    −Ã2 i_b2 = (d/dt) Ã1 q̂(t, Ã1^T ṽ_n + v_g A1^T e_g, i_b2)
                     + Ã1 ĵ(t, Ã1^T ṽ_n + v_g A1^T e_g, i_b2),    (2.25)

    Ã2^T ṽ_n + v_g A2^T e_g = (d/dt) q̄(t, Ã1^T ṽ_n + v_g A1^T e_g, i_b2)
                     + j̄(t, Ã1^T ṽ_n + v_g A1^T e_g, i_b2).    (2.26)
In this way, we find n − 1 + b2 equations for n − 1 + b2 unknown variables. For circuit 2.5, node 0 will again be the grounded node, resulting in the following equations:

    i_s = (d/dt)(C v1) + (1/R) v1,
    −v1 = V sin ωt.

The resulting system can again be written in the form of 2.3, but note that in this case we have all node voltages as well as some branch currents as circuit variables.
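The two grounded equations can be advanced jointly in time with a backward-Euler scheme, solving a small linear system per step for the unknowns v1 and i_s. The following sketch (Python/NumPy; the parameter values are illustrative assumptions) does exactly that and checks that the voltage-source constraint is honoured:

```python
import numpy as np

# Backward-Euler solution of the modified nodal equations of circuit 2.5
# (node 0 grounded):
#     i_s = d/dt(C v1) + (1/R) v1,
#    -v1  = V sin(omega t).
# Unknowns per time step: [v1, i_s]. Parameter values are illustrative.
R, C = 2.0, 1e-3
V, omega = 1.0, 2 * np.pi * 50.0
h = 1e-5

v1, t = 0.0, 0.0
for _ in range(2000):                       # one 50 Hz period
    t += h
    # Linear system for [v1_new, i_s_new]:
    #   (C/h + 1/R) v1_new - i_s_new = (C/h) v1
    #   -v1_new                      = V sin(omega t)
    M = np.array([[C / h + 1.0 / R, -1.0],
                  [-1.0,             0.0]])
    rhs = np.array([C / h * v1, V * np.sin(omega * t)])
    v1, i_s = np.linalg.solve(M, rhs)

# The algebraic voltage-source constraint is satisfied exactly (up to
# round-off) at every step; only i_s carries discretization error.
assert abs(v1 + V * np.sin(omega * t)) < 1e-12
```

Note how the voltage source contributes a purely algebraic row to the system, which is the DAE character of equation 2.3.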
2.4 The rank of A
In this section, I will say something about the rank of the incidence matrix A, as defined in
2.5. It is clear that the rank of A influences the rank of the complete system of equations.
First, I will prove the following theorem:

Theorem 2.4.1. The rank of the matrix A ∈ R^{n×b}, as defined in 2.5, is equal to n − p, where p is the number of "pieces" of which the circuit consists, i.e. the number of nonempty sub-circuits which are not connected to the rest of the circuit. In figure 2.6, a circuit is shown with p = 3.
[Figure 2.6: A circuit consisting of 3 "pieces" (p = 3)]
Proof. I will prove this by induction on the number of pieces p.

"P(0)": If the circuit consists of 0 pieces, we have an empty circuit (no nodes and no branches). Hence n = 0 and rank(A) = 0 = n − p.

"P(p) ⇒ P(p + 1)": Suppose that we have a circuit consisting of p + 1 pieces, n nodes and b branches. Number the nodes in such a way that the nodes 1, …, n0 lie in the first piece and the nodes n0 + 1, …, n lie in the last p pieces. Similarly, number the branches in such a way that the branches 1, …, b0 lie in the first piece and the branches b0 + 1, …, b lie in the last p pieces. The first piece will be called sub-circuit 1 and the last p pieces will together be called sub-circuit 2. Since there are (by definition) no connections between sub-circuit 1 and sub-circuit 2, the matrix A looks like this:
        [ A1   ∅  ]
    A = [         ]    (2.27)
        [ ∅    A2 ]

Here the rows 1, …, n0 and the columns 1, …, b0 correspond to sub-circuit 1, and the rows n0 + 1, …, n and the columns b0 + 1, …, b correspond to sub-circuit 2.
The matrix A2 is the incidence matrix of sub-circuit 2, which consists of p pieces and (n − n0) nodes. By applying the induction assumption, we find that rank(A2) = n − n0 − p. The matrix A1 is the incidence matrix of sub-circuit 1, which consists of n0 nodes and b0 branches. I will show that R(A1)⊥ = ⟨1⟩, where 1 = [1, 1, …, 1]^T. This implies that rank(A1) = dim(R(A1)) = n0 − 1, and therefore that rank(A) = rank(A1) + rank(A2) = (n0 − 1) + (n − n0 − p) = n − (p + 1), which had to be proven.

Every column of A1 contains, by construction, exactly one 1, one −1, and further only zeroes. Hence for every column a of A1 we have that (a, 1) = 0.
Hence ⟨1⟩ ⊆ R(A1)⊥. Now suppose that the vector n ∈ R(A1)⊥. Select two nodes i and j in sub-circuit 1. Since sub-circuit 1 consists of one piece, there is a trail (i, k1, …, kK, j) in sub-circuit 1 which connects node i with node j. In particular, a branch ℓ exists between node i and node k1. Since n ∈ R(A1)⊥, we know that (n, aℓ) = 0, where aℓ is the ℓ-th column of A1. The vector aℓ contains either 1 or −1 at position i, the opposite sign at position k1, and further only zeroes. Hence (n, aℓ) = 0 implies that n_i = n_{k1}. In the same way, we find that n_{k1} = n_{k2}, n_{k2} = n_{k3}, etcetera, and finally n_{kK} = n_j. Thus we have n_i = n_j. This is true for any choice of i and j, which means that n = ν · [1, 1, …, 1]^T = ν · 1 for some scalar ν. So n ∈ ⟨1⟩, and therefore we find that R(A1)⊥ = ⟨1⟩. ∎
Of course, a realistic circuit should consist of one piece, so the matrix A will have rank n − 1. Hence it is to be expected that “grounding” one node will make the circuit equations of a realistic circuit determinate. Pstar is able to detect degenerate situations, such as a circuit that consists of two parts, and takes special effort to produce a more or less reasonable solution even in these cases. However, I will not discuss this subject in detail.
Chapter 3
Applications using the circuit equations
As described in the previous chapter, the circuit equations typically have the following
form:
    f(t, x, ẋ) := (d/dt) q(t, x) + j(t, x) = 0.        (3.1)
From the system of equations 3.1, several types of information can be obtained, as described in the following sections. A more comprehensive overview of analysis methods
can be found in [12].
3.1 DC analysis
The simplest type of analysis that can be applied to 3.1 is the determination of the steady state solution. In the steady state, all currents are DC¹, so this type of analysis is often referred to as DC analysis. The steady state is described by:

    ẋ = 0,

so equation 3.1 reduces to:

    f(t, x, 0) = f_DC(x, 0) = 0.        (3.2)
Note that f is independent of t in this formulation; we could write f_DC as:

    f_DC(x, ẋ) := (dq_DC(x)/dx) ẋ + j_DC(x).        (3.3)
Note that q_DC and j_DC, and therefore f_DC, do not depend on the time t.

The equation 3.2 is an algebraic equation, not a differential equation, since q plays no role. Hence it can be solved by using typical techniques for solving such equations, such as the Newton method.
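As an illustration, the Newton iteration for a DC problem can be sketched as follows. The element values (a 1 kΩ resistor to ground, a diode-like current I_s(e^(v/V_t) − 1) and a 1 mA source) are hypothetical and only serve to produce a small nonlinear system j_DC(x) = 0 with Jacobian G:

```python
import numpy as np

def newton_dc(j_dc, G, x0, tol=1e-12, max_iter=50):
    """Solve the DC equation j_DC(x) = 0 with the Newton method.

    j_dc : function returning the residual vector j_DC(x)
    G    : function returning the Jacobian dj_DC/dx at x
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = j_dc(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(G(x), r)  # Newton update
    return x

# Hypothetical example: one node voltage v, a 1 kOhm resistor to ground,
# a diode-like element I_s*(exp(v/V_t) - 1) and a 1 mA current source.
R, I_s, V_t, I_src = 1e3, 1e-14, 0.025, 1e-3
j_dc = lambda x: np.array([x[0]/R + I_s*(np.exp(x[0]/V_t) - 1.0) - I_src])
G    = lambda x: np.array([[1.0/R + (I_s/V_t)*np.exp(x[0]/V_t)]])
v = newton_dc(j_dc, G, [0.6])             # converges to about 0.6 V
```

In a circuit simulator the Jacobian G would of course be assembled from the element contributions rather than supplied by hand.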
¹ Direct Current, i.e. time-independent.
3.2 Small-signal AC analysis
Consider a time-independent, or autonomous, system of equations:

    f_DC(x, ẋ) = 0,        (3.4)

and let x(t) = x_DC be a steady-state solution of 3.4, i.e. f_DC(x_DC, 0) = 0.
Small-signal AC² analysis addresses the following question: what happens when a small, simple sinewave source is added to equation 3.4? More formally, define the function f_AC as:

    f_AC(t, x, ẋ) := f_DC(x, ẋ) + s1 cos ωt + s2 sin ωt,        (3.5)

and let x_AC be a solution of f_AC(t, x, ẋ) = 0. It is important to note that the added source does not depend on x or ẋ; i.e. the source does not depend on the state of the circuit. If ‖s1‖ and ‖s2‖ are small enough, a solution x_AC of 3.5 can be approximated by doing a Taylor expansion and omitting the terms of second order and higher:

    x_AC(t) = x_DC + δx1 cos ωt + δx2 sin ωt.        (3.6)
Now substitute 3.6 into 3.5 and perform a first-order truncated Taylor expansion:

    f_DC(x_AC(t), ẋ_AC(t)) + s1 cos ωt + s2 sin ωt
        ≈ f_DC(x_DC, 0) + s1 cos ωt + s2 sin ωt
          + ∂f_DC/∂x |_{x=x_DC, ẋ=0} (δx1 cos ωt + δx2 sin ωt)
          + ∂f_DC/∂ẋ |_{x=x_DC, ẋ=0} (−ω δx1 sin ωt + ω δx2 cos ωt).

By using equation 3.3, we can derive:

    ∂f_DC/∂x |_{x=x_DC, ẋ=0} = ∂/∂x ( (dq_DC/dx) ẋ + j_DC(x) ) |_{x=x_DC, ẋ=0} = dj_DC/dx |_{x=x_DC},        (3.7)
    ∂f_DC/∂ẋ |_{x=x_DC, ẋ=0} = ∂/∂ẋ ( (dq_DC/dx) ẋ + j_DC(x) ) |_{x=x_DC, ẋ=0} = dq_DC/dx |_{x=x_DC}.        (3.8)
Since x_DC is a solution of 3.4, we have f_DC(x_DC, 0) = 0. So we find the following equation:

    G(δx1 cos ωt + δx2 sin ωt) + C(−ω δx1 sin ωt + ω δx2 cos ωt) + s1 cos ωt + s2 sin ωt = 0,        (3.9)

where:

    G := dj_DC/dx |_{x=x_DC},    C := dq_DC/dx |_{x=x_DC}.

² Alternating Current.
The sine terms and the cosine terms of equation 3.9 can be separated:

    G δx1 + ωC δx2 + s1 = 0,        (3.10a)
    −ωC δx1 + G δx2 + s2 = 0.       (3.10b)

If we use complex notation, this system can be written as:

    Ĝ δx + s = 0,        (3.11)

where Ĝ := G − iωC, δx := δx1 + i δx2, and s := s1 + i s2.
I have said that ‖s‖ needs to be small for this linearisation to be justified. When computing with the linearised system, however, it is common to scale s in such a way that ‖s‖ ≈ 1, to avoid the numerical difficulties that might arise when ‖s‖ = O(√ε), where ε is the machine precision.
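A minimal numeric sketch of equation 3.11 for a hypothetical one-node RC circuit (the values are illustrative, and the sign convention Ĝδx + s = 0 from above is used):

```python
import numpy as np

# One-node RC example: G = 1/R, C = C_cap.  With the convention of
# equation 3.11, the response is dx = -inv(G_hat) s, G_hat = G - i*omega*C.
R, C_cap = 1e3, 1e-6                  # illustrative values
G = np.array([[1.0/R]])               # conductance at the DC operating point
C = np.array([[C_cap]])               # capacitance at the DC operating point
s = np.array([1.0])                   # source scaled so that ||s|| = 1
omega = 2*np.pi*1e3

G_hat = G - 1j*omega*C
dx = np.linalg.solve(G_hat, -s)       # complex small-signal response
magnitude = abs(dx[0])                # response amplitude
```

The complex solve replaces the real 2×2 block system 3.10a-3.10b; this is exactly why the complex notation is convenient in practice.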
The solution of this problem is called the small-signal AC solution. This analysis is called AC analysis because the inputs and the results are sinewaves, which can be physically interpreted as AC currents. It is important to note that in AC analysis, the frequencies of the output signals are always the same as the frequencies of the input signals. Hence it is impossible to simulate an effect called frequency conversion, in which output is produced at frequencies different from the input frequencies. There are many interesting circuits which exhibit frequency conversion, but they cannot be simulated with AC analysis.

AC analysis cannot simulate frequency conversion because it is a linear frequency analysis. There are also nonlinear frequency analyses, e.g. Harmonic Balance. If one wants to model frequency conversion, such a nonlinear analysis should be carried out.
3.3 Transient analysis
Transient analysis determines the response of the circuit over a specific time interval [0, T]. Again, we have the basic equation:

    f(t, x, ẋ) := (d/dt) q(t, x) + j(t, x) = 0.        (3.12)

The most simple way to solve this system is by using an integration algorithm known as Backward Euler:

    (d/dt) q(t_{i+1}, x_{i+1}) ≈ ( q(t_{i+1}, x_{i+1}) − q(t_i, x_i) ) / Δt.

If we substitute this into 3.12, we obtain:

    ( q(t_{i+1}, x_{i+1}) − q(t_i, x_i) ) / Δt + j(t_{i+1}, x_{i+1}) = 0.
This equation expresses xi+1 implicitly in terms of xi . In circuit simulation software, this
equation is most commonly solved using the Newton-Raphson algorithm.
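The Backward Euler step combined with a Newton-Raphson solve at each time point can be sketched as follows. The RC-discharge example at the end is hypothetical and only serves to check the integrator against the known solution v(t) = e^(−t/(RC)):

```python
import numpy as np

def transient_be(q, j, G, C, x0, t_end, dt, newton_tol=1e-10):
    """Integrate d/dt q(x) + j(x) = 0 with Backward Euler;
    each implicit step is solved with Newton-Raphson."""
    x = np.asarray(x0, dtype=float)
    xs = [x.copy()]
    for _ in range(int(round(t_end/dt))):
        x_new = x.copy()                       # previous value as predictor
        for _ in range(30):
            F = (q(x_new) - q(x))/dt + j(x_new)
            if np.linalg.norm(F) < newton_tol:
                break
            J = C(x_new)/dt + G(x_new)         # Jacobian of the BE residual
            x_new = x_new - np.linalg.solve(J, F)
        x = x_new
        xs.append(x.copy())
    return np.array(xs)

# RC discharge: C dv/dt + v/R = 0, v(0) = 1, exact solution exp(-t/(R*C))
R, Cap = 1.0, 1.0
sol = transient_be(q=lambda x: Cap*x, j=lambda x: x/R,
                   G=lambda x: np.array([[1.0/R]]),
                   C=lambda x: np.array([[Cap]]),
                   x0=[1.0], t_end=1.0, dt=1e-3)
```

Note that the Newton matrix C/Δt + G is precisely the matrix that will reappear in section 3.5 (with β0 = 1 for Backward Euler).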
Backward Euler is a simple representative of a larger class of integration methods, called linear multistep methods, which are used by Pstar³. These methods are described by the following equation:

    (1/Δt) q(t_{i+1}, x_{i+1}) ≈ (1/Δt) Σ_{j=1}^{n} α_j q(t_{i+1−j}, x_{i+1−j}) + Σ_{j=0}^{n} β_j (d/dt) q(t_{i+1−j}, x_{i+1−j}).        (3.13)

³ Pstar is a circuit simulator developed by Philips ED&T/AS.
One of the reasons why these methods are used is that they handle stiff problems well.
In the future, other types of integration methods, such as implicit Runge-Kutta methods,
might be implemented in Pstar.
If β0 = 0, the vector q(t_{i+1}, x_{i+1}) is approximated solely in terms of solutions at previous time points. Hence, in that case the integration method is called explicit. On the other hand, if β0 ≠ 0, the method is called implicit. For an implicit method, we can reorder the terms to yield:
    (d/dt) q(t_{i+1}, x_{i+1}) ≈ −1/(β0 Δt) Σ_{j=0}^{n} α_j q(t_{i+1−j}, x_{i+1−j}) − (1/β0) Σ_{j=1}^{n} β_j (d/dt) q(t_{i+1−j}, x_{i+1−j}),        (3.14)

where α0 = −1. We can substitute equation 3.14 into 3.12, which results in:
    −1/(β0 Δt) Σ_{j=0}^{n} α_j q(t_{i+1−j}, x_{i+1−j}) − (1/β0) Σ_{j=1}^{n} β_j (d/dt) q(t_{i+1−j}, x_{i+1−j}) + j(t_{i+1}, x_{i+1}) = 0.        (3.15)
If we define the vector b_{i+1} as:

    b_{i+1} := −1/(β0 Δt) Σ_{j=1}^{n} α_j q(t_{i+1−j}, x_{i+1−j}) − (1/β0) Σ_{j=1}^{n} β_j (d/dt) q(t_{i+1−j}, x_{i+1−j}),

equation 3.15 can be written as:

    1/(β0 Δt) q(t_{i+1}, x_{i+1}) + b_{i+1} + j(t_{i+1}, x_{i+1}) = 0.        (3.16)
So we now have an algebraic equation for x_{i+1}. A possible solution method for this problem is a Newton method.
An equation of the form 3.12 is called a differential-algebraic equation, or DAE for short, since it contains both a differential part ((d/dt) q(t, x)) and an algebraic part (j(t, x)). Differential-algebraic equations can, in general, only be handled by implicit integration methods. So if some other type of integration method were used to integrate the DAE, it would also have to be implicit.
3.4 Periodic steady state analysis
Many circuits have a so-called Periodic Steady State, or PSS, solution. A PSS solution is a solution x_PSS of equation 3.12, but instead of specifying an initial value, a periodicity constraint is given:

    x_PSS(0) = x_PSS(T),        (3.17)

for some T > 0. The period T might be known beforehand, or it might be an additional unknown of the system. In the former case, we have a classic two-point boundary value problem. Solution methods for this kind of problem are described in chapter 7. If T is an unknown, we have a so-called free-running oscillator. This type of problem is discussed in more detail in chapter 8.
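To give a flavour of the two-point boundary value techniques of chapter 7, here is a minimal shooting sketch for the non-autonomous case: find x0 such that the solution returns to x0 after one period T. The forced scalar ODE and the finite-difference Jacobian are illustrative simplifications; a real implementation would obtain the sensitivity matrix from the time integration itself.

```python
import numpy as np

def pss_shooting(flow, x0, T, tol=1e-9, max_iter=20, eps=1e-7):
    """Newton iteration on r(x0) = flow(x0, T) - x0 = 0, where flow
    integrates the circuit equations over one period T."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    for _ in range(max_iter):
        phi0 = flow(x0, T)
        r = phi0 - x0
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((n, n))                    # finite-difference d(phi)/dx0
        for k in range(n):
            e = np.zeros(n); e[k] = eps
            J[:, k] = (flow(x0 + e, T) - phi0)/eps
        x0 = x0 - np.linalg.solve(J - np.eye(n), r)
    return x0

# toy forced circuit-like ODE: x' = -x + cos t, period T = 2*pi;
# flow() integrates it with 200 backward Euler steps
def flow(x0, T, n_steps=200):
    x = np.asarray(x0, dtype=float).copy()
    dt = T/n_steps; t = 0.0
    for _ in range(n_steps):
        x = (x + dt*np.cos(t + dt))/(1.0 + dt)  # BE step of x' = -x + cos t
        t += dt
    return x

x_pss = pss_shooting(flow, np.array([0.0]), 2*np.pi)
# the exact PSS of the continuous problem starts at x(0) = 0.5
```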
3.5 Some final remarks
In all these four types of analysis, the two matrices:

    G(t, x) = ∂j(t, x)/∂x    and    C(t, x) = ∂q(t, x)/∂x,
play an important role. In the DC analysis, the matrix G(x, 0) is important in the Newton-Raphson process for finding a solution of f(x, 0) = 0. In the AC analysis, the roles of G and C have already been discussed. In the transient analysis, we have to solve a system of the form 3.16 at every time step. To use Newton-Raphson in this case, we need the matrix:

    ∂[ 1/(β0 Δt) q(t_{i+1}, x_{i+1}) + b_{i+1} + j(t_{i+1}, x_{i+1}) ] / ∂x_{i+1} = G + 1/(β0 Δt) C.
So in this case we also need the matrices G and C. In chapter 7, I will show that G and C
are important for PSS analysis, too. In the next chapter I will describe a general method
for finding f, G and C from a circuit description.
Chapter 4
Hierarchical algorithm
4.1 Introduction
Circuits are often defined in terms of sub-circuits, which might consist of several sub-sub-circuits, etcetera. In this chapter I will describe how the circuit equations of the total circuit can be derived from the circuit equations of the sub-circuits. I will also describe some basic components, which can be thought of as “elementary sub-circuits”, i.e. sub-circuits which are not further divided into sub-sub-circuits.
All these circuits and sub-circuits together form a hierarchy of circuits. To the user of Pstar, the circuit designer, the sub-circuits behave as “black boxes”; the user does not need to know what their actual design looks like; only the functional behaviour matters. This is called modular design. It has advantages for the user, because the internals of the sub-circuits are hidden. But it is also an advantage for the solving algorithm, since it means that the circuit, and therefore the system of circuit equations, is already partitioned into sub-systems that are only connected with each other by a few variables. The solving algorithm in Pstar makes use of the hierarchical structure of the problem. This will be described in this chapter.
4.2 The composition of a circuit from its sub-circuits
Consider a circuit, called the super-circuit, which consists of N sub-circuits. In addition to
sub-circuits, a circuit might contain primitive elements like resistors, capacitors, etcetera,
but I will regard these as special, predefined sub-circuits for the sake of simplicity.
Every sub-circuit j has n_j variables that describe its internal state x^(j) ∈ R^{n_j}. The circuit equations for sub-circuit j are given by:

    f^(j)(t, x^(j), ẋ^(j)) := (d/dt) q^(j)(t, x^(j)) + j^(j)(t, x^(j)) = 0 ∈ R^{n_j}.
The matrices G^(j) and C^(j) are defined in the obvious way:

    G^(j)(t, x^(j)) := ∂j^(j)(t, x^(j))/∂x^(j)    and    C^(j)(t, x^(j)) := ∂q^(j)(t, x^(j))/∂x^(j).
In principle, every variable of the sub-circuit will be associated with one of the n variables of the super-circuit. In order to connect the different sub-circuits, some variables of different sub-circuits will be associated to the same variable of the super-circuit. For example, consider two sub-circuits i and j. Assume that node k of circuit i needs to be connected with node l of circuit j. In that case, variable x_k^(i), which describes the voltage in node k of sub-circuit i, and variable x_l^(j), which describes the voltage in node l of sub-circuit j, both need to be associated with the same variable of the common super-circuit, say variable x_m.
It is common to divide the variables of a sub-circuit j into two groups; one group that is invisible to the super-circuit, called the internal variables, and one group that needs to be “connected with” (i.e. associated to) variables of other sub-circuits. The latter are called terminal variables. In Pstar, the state vector x^(j) can be partitioned into two vectors y^(j) ∈ R^{m_j} and z^(j) ∈ R^{k_j}, in the following manner:

    x^(j) := [ y^(j) ]
             [ z^(j) ].

Here, y^(j) contains the terminal variables, and z^(j) contains the internal variables. The number of terminal variables is denoted as m_j, and the number of internal variables is denoted as k_j. Of course, m_j + k_j = n_j. The state vector x of the super-circuit can now be written as:
    x = [ y^T  z^(1)T  z^(2)T  ⋯  z^(N)T ]^T.

All terminal variables of the sub-circuits are associated with the variables in the vector y. The internal variables of the sub-circuits are not associated with variables of other sub-circuits; hence the vectors z^(1), …, z^(N) appear explicitly in the vector x.
All these associations between variables can concisely be represented by the N matrices B_1, …, B_N. These matrices B_j ∈ R^{n×n_j} are defined as:

    B_j(p, q) = 1 if variable q of sub-circuit j is associated with variable p of the super-circuit,
    B_j(p, q) = 0 otherwise.
If the structure of the vectors x^(j) and x as described in the previous paragraphs is used, we get the following structure for the matrix B_j:

                     y^(j)   z^(j)
           y       [ B'_j    O       ]
           z^(1)   [ O       O       ]
           ⋮       [ ⋮       ⋮       ]
    B_j =  z^(j)   [ O       I_{k_j} ]
           ⋮       [ ⋮       ⋮       ]
           z^(N)   [ O       O       ]

with B'_j ∈ R^{m×m_j}.
The matrices B_1, …, B_N are only a kind of “bookkeeping” matrices, which are introduced here as a theoretical aid. They probably won't be computed explicitly by a simulation program.
We can now express the circuit equations of the super-circuit in terms of the circuit equations of the sub-circuits:

    q(t, x) = Σ_{j=1}^{N} B_j q^(j)(t, B_j^T x),        (4.1)
    j(t, x) = Σ_{j=1}^{N} B_j j^(j)(t, B_j^T x),        (4.2)

which can be summarised into one equation:

    f(t, x, ẋ) = Σ_{j=1}^{N} B_j f^(j)(t, B_j^T x, B_j^T ẋ) = 0 ∈ R^n.        (4.3)
Similarly, we can express the matrices G and C in terms of the matrices G^(1), …, G^(N) and C^(1), …, C^(N) respectively:

    G(t, x) = Σ_{j=1}^{N} B_j G^(j)(t, B_j^T x) B_j^T,
    C(t, x) = Σ_{j=1}^{N} B_j C^(j)(t, B_j^T x) B_j^T.
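The composition G = Σ_j B_j G^(j) B_j^T can be illustrated numerically with two hypothetical two-variable sub-circuits (one terminal, one internal variable each) that share a single terminal node:

```python
import numpy as np

# Super-circuit variables: [y, z1, z2] (one shared terminal, two internals).
# Each sub-circuit has variables [terminal, internal].
B1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])
B2 = np.array([[1.0, 0.0],
               [0.0, 0.0],
               [0.0, 1.0]])

G1 = np.array([[ 2.0, -2.0],      # illustrative sub-circuit matrices
               [-2.0,  2.0]])
G2 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])

G = B1 @ G1 @ B1.T + B2 @ G2 @ B2.T
# the (0,0) entry collects both contributions to the shared terminal node,
# while the internal variables never couple with each other
```

As stressed above, a simulator would not form the B_j explicitly; the dense products here only make the bookkeeping visible.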
From these two expressions, the following two results can be obtained:
Theorem 4.2.1. If the matrices G ( j) are symmetric for all j with 1 ≤ j ≤ N , the matrix
G is symmetric.
Proof. If the matrix G ( j) is symmetric, the matrix B j G ( j) B Tj is also symmetric. So if all
the matrices G ( j) are symmetric, the matrix G, being the sum of N symmetric matrices,
will be symmetric too.
Theorem 4.2.2. If the matrices G ( j) are positive semi-definite for all j with 1 ≤ j ≤ N ,
the matrix G is positive semi-definite.
Proof. If the matrix G^(j) is positive semi-definite, the matrix B_j G^(j) B_j^T will be positive semi-definite too, since (B_j G^(j) B_j^T x, x) = (G^(j) B_j^T x, B_j^T x) ≥ 0. Hence the matrix G will be positive semi-definite, since it is the sum of N positive semi-definite matrices.
Theorems 4.2.1 and 4.2.2 also hold when C is substituted for G.
If the matrices G^(j) are divided into sub-matrices in the following manner:

    G^(j) = [ G^(j)_11   G^(j)_12 ]
            [ G^(j)_21   G^(j)_22 ],

with G^(j)_11 ∈ R^{m_j×m_j}, G^(j)_12 ∈ R^{m_j×k_j}, G^(j)_21 ∈ R^{k_j×m_j} and G^(j)_22 ∈ R^{k_j×k_j}, the matrix B_j G^(j) B_j^T can be written as:

    B_j G^(j) B_j^T = [ B'_j G^(j)_11 B'_j^T   O  ⋯  B'_j G^(j)_12  ⋯  O ]
                      [ O                      O  ⋯  O              ⋯  O ]
                      [ ⋮                      ⋮  ⋱  ⋮                 ⋮ ]
                      [ G^(j)_21 B'_j^T        O  ⋯  G^(j)_22       ⋯  O ]
                      [ ⋮                      ⋮     ⋮              ⋱  ⋮ ]
                      [ O                      O  ⋯  O              ⋯  O ]

i.e. the only non-zero blocks are the (y, y) block B'_j G^(j)_11 B'_j^T, the (y, z^(j)) block B'_j G^(j)_12, the (z^(j), y) block G^(j)_21 B'_j^T, and the (z^(j), z^(j)) block G^(j)_22.
Therefore, the matrix G can be written as:

    G = Σ_{j=1}^{N} B_j G^(j) B_j^T
      = [ Σ_{j=1}^{N} B'_j G^(j)_11 B'_j^T   B'_1 G^(1)_12   ⋯   B'_N G^(N)_12 ]
        [ G^(1)_21 B'_1^T                    G^(1)_22              ∅           ]
        [ ⋮                                                  ⋱                 ]
        [ G^(N)_21 B'_N^T                    ∅                    G^(N)_22     ]        (4.4)
Note that the resulting matrix is rather sparse. The same is, of course, true for the matrix
C. The sparsity structure is also visualised in figure 4.1. Note that all diagonal blocks,
except for the first one, recursively have a sparsity structure similar to the matrix itself.
Pstar stores the matrix not as a full matrix, but in a way that makes full use of the sparse structure of G and C. In fact, only the non-zero blocks in 4.4 are stored. Moreover, the matrices G^(1)_22, …, G^(N)_22 also have the structure of 4.4, so they too can be stored in a similar structure. Hence the resulting storage structure is sparse and recursive.
Because of the special matrix structure, some operations are very difficult to perform by
Pstar code, while others are comparatively easy. An example of a difficult (i.e. expensive)
operation is the lookup of a single element in the matrix, because in this case the whole
hierarchy of circuits and sub-circuits has to be followed. However, matrix addition and
vector or matrix multiplication are cheaper than their full-matrix variants, since the sparse
structure of the matrix limits the amount of computation required. It is also possible
Figure 4.1: Sparsity structure of the matrices G and C
to do an efficient Gaussian UL-decomposition using this storage structure. This UL-decomposition starts at the lowest levels in the circuit model and works up to the highest level; hence it is a bottom-up algorithm. The algorithm starts by decomposing the block of the matrix which is only related to the internal variables: the G_22-block. If this has been done for all the matrices belonging to the sub-circuits of a specific circuit, then the decomposed matrices G^(j)_11 can be substituted in the matrix of the super-circuit, and this super-circuit can in turn be decomposed.
The reader might have noted that Pstar makes an UL-decomposition rather than an LU-decomposition. This is the result of the rather arbitrary choice of the designers of Pstar to place the terminal variables at lower indices than the internal variables. Since matrix decomposition has to start with the internal variables if it is to be done in a hierarchical way, this means that we are forced to use an UL-decomposition.
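The bottom-up elimination can be illustrated with one level of hierarchy: eliminating the internal block G_22 first leaves a Schur complement system in the terminal variables. This is only a dense sketch of the idea, with illustrative numbers; Pstar's actual recursive UL routine works on the sparse block storage described above:

```python
import numpy as np

def eliminate_internal(G11, G12, G21, G22, f1, f2):
    """Eliminate the internal block G22 of [G11 G12; G21 G22][y; z] = [f1; f2],
    returning the Schur complement system for the terminal variables y."""
    Y = np.linalg.solve(G22, G21)        # G22^-1 G21
    g2 = np.linalg.solve(G22, f2)        # G22^-1 f2
    return G11 - G12 @ Y, f1 - G12 @ g2

G11 = np.array([[3.0]]);            G12 = np.array([[-1.0, 0.0]])
G21 = np.array([[-1.0], [0.0]]);    G22 = np.array([[2.0, -1.0], [-1.0, 2.0]])
f1 = np.array([1.0]);               f2 = np.array([0.0, 0.0])

S, g = eliminate_internal(G11, G12, G21, G22, f1, f2)
y = np.linalg.solve(S, g)                    # terminal solve (top level)
z = np.linalg.solve(G22, f2 - G21 @ y)       # back-substitute the internals
```

The reduced matrix S plays the role of the decomposed sub-circuit contribution that is substituted into the super-circuit's matrix.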
4.3 Some basic circuit components
I will now discuss some elementary circuit components. As described in the previous sections, larger circuits can be built out of these elementary components. All components described here can be interpreted as circuits consisting of two nodes a and b, connected by a single branch. The incidence matrix A of such a circuit is:

    A = [ 1  ]
        [ −1 ].

The components all give a contribution to the left-hand side of the circuit equation that has the form:

    lhs = (d/dt) q(t, x) + j(t, x).
For every circuit component, I will give the vector functions q and j and the matrices G := ∂j/∂x and C := ∂q/∂x. I will put them in a table, like this:

          j        q        G                     C
    x_1   j_1(x)   q_1(x)   G_11(x) ⋯ G_1n(x)     C_11(x) ⋯ C_1n(x)
    ⋮     ⋮        ⋮        ⋮       ⋱ ⋮           ⋮       ⋱ ⋮
    x_n   j_n(x)   q_n(x)   G_n1(x) ⋯ G_nn(x)     C_n1(x) ⋯ C_nn(x)

However, parts of this table that consist only of zeroes will be omitted.
4.3.1 Current source
This is the simplest component. It is drawn in circuit diagrams as:
    [diagram: current source branch between nodes 0 and 1]

This component has two connection nodes, 0 and 1. It produces a given current I(t) between these two points. Hence we have the following branch equation:

    i(t) = I(t).
So we get the following definitions for j and q:

    j(t, [v_0, v_1]^T) := [ I(t), −I(t) ]^T,    q(t, [v_0, v_1]^T) := [ 0, 0 ]^T.

For the matrices G and C, we find that G = C = O (the 2 × 2 zero matrix). These expressions can be summarised using the following table:

          j
    v_0   I(t)
    v_1   −I(t)
This leads to the following contributions to the circuit equations:

    v_0: rhs = I(t)
    v_1: rhs = −I(t)
Note that the equation belonging to a voltage variable contains currents as terms; we will
see further on that the reverse is also true, i.e. the equation belonging to a current variable
contains voltages as terms.
4.3.2 Current-defined resistor
This is a resistor between two connection nodes 0 and 1. It is drawn in circuit diagrams as:

    [diagram: resistor branch between nodes 0 and 1]
For a linear resistor, Ohm's law is used: i = v/R, where v = v_0 − v_1. This leads to the following expressions for j and G:

          j                       G
    v_0   (1/R)v_0 − (1/R)v_1     1/R    −1/R
    v_1   −(1/R)v_0 + (1/R)v_1    −1/R   1/R

which leads to the following contributions to the right-hand side:

    v_0: rhs = (1/R)v_0 − (1/R)v_1
    v_1: rhs = −(1/R)v_0 + (1/R)v_1
For a nonlinear resistor, we have i = I(v), leading to:

          j                 G
    v_0   I(v_0 − v_1)      dI/dv    −dI/dv
    v_1   −I(v_0 − v_1)     −dI/dv   dI/dv
For a real-world resistor, we expect the function I to be strictly increasing. Moreover, a real resistor should have the property that I(0) = 0. If this is not the case, the “resistor” should be considered as a combination of a resistor and a current source.
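The tables above translate directly into “stamps” that add each branch's contribution to the assembled matrices. The sketch below uses the common nodal-analysis convention (injected currents collected on the right-hand side), which differs in sign bookkeeping from the f = 0 formulation of this chapter; the two-resistor example is hypothetical:

```python
import numpy as np

def stamp_resistor(G, a, b, R):
    """Add the linear-resistor table for a branch between nodes a and b."""
    g = 1.0/R
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g

def stamp_current_source(rhs, a, b, I):
    """Current I flows through the source from node a to node b."""
    rhs[a] -= I
    rhs[b] += I

n = 3                                   # nodes 0, 1 and ground (node 2)
G = np.zeros((n, n)); rhs = np.zeros(n)
stamp_resistor(G, 0, 1, 1e3)
stamp_resistor(G, 1, 2, 1e3)
stamp_current_source(rhs, 2, 0, 1e-3)   # 1 mA pushed into node 0
v = np.linalg.solve(G[:2, :2], rhs[:2]) # grounded node removed -> v = [2, 1] V
```

Dropping the ground node's row and column is exactly the “grounding” step discussed at the end of chapter 2.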
4.3.3 Capacitor
The capacitor is drawn in circuit diagrams as:
    [diagram: capacitor branch between nodes 0 and 1]
A linear capacitor is described by i = (d/dt)(Cv), where C is the capacitance of the capacitor. This leads to the following expressions for q and C:

          q                C
    v_0   Cv_0 − Cv_1      C    −C
    v_1   −Cv_0 + Cv_1     −C   C

This gives the following equations for rhs:

    v_0: rhs = (d/dt)(Cv_0 − Cv_1)
    v_1: rhs = (d/dt)(−Cv_0 + Cv_1)

The nonlinear capacitor is similar to the nonlinear current-defined resistor.
4.3.4 Voltage source and voltage-defined resistor
The voltage-defined resistor has the same symbol as the current-defined resistor. The voltage source is drawn in circuit diagrams as:

    [diagram: voltage source branch between nodes 0 and 1]
The general non-linear voltage-defined resistor and the voltage source can both be described by the equation v = V(i). If V(i) does not depend on i, we have a voltage source. If V(i) is an increasing function of i, we have a resistor. Note that an extra variable i_R needs to be added in both cases.

          j                      G
    v_0   i_R                    0    0     1
    v_1   −i_R                   0    0     −1
    i_R   v_0 − v_1 − V(i_R)     1    −1    −dV/di

This leads to the equations for rhs:

    v_0: rhs = i_R
    v_1: rhs = −i_R
    i_R: rhs = v_0 − v_1 − V(i_R)
Again, we expect that V (0) = 0 for a real-world resistor. If V (0) 6= 0, the “resistor” is in
fact a combination of a resistor and a voltage source.
4.3.5 Inductor
The inductor is drawn in circuit diagrams as:
    [diagram: inductor branch between nodes 0 and 1]
The inductor is similar to the capacitor, but now we have an equation of the form v = (d/dt)(Li) in the linear case. This means that, again, an extra variable i_L needs to be added:

          j            q        G             C
    v_0   i_L          0        0   0   1     0   0   0
    v_1   −i_L         0        0   0   −1    0   0   0
    i_L   v_0 − v_1    −Li_L    1   −1  0     0   0   −L

This gives the equations:

    v_0: rhs = i_L
    v_1: rhs = −i_L
    i_L: rhs = −(d/dt)(Li_L) + v_0 − v_1
A more complex situation arises when the flux φ_L is desired as an additional unknown. A nonlinear inductor can be described by v = (d/dt)φ_L, where φ_L = Φ(i_L). This results in the following table:

          j                q       G                      C
    v_0   i_L              0       0   0   1        0     0   0   0   0
    v_1   −i_L             0       0   0   −1       0     0   0   0   0
    i_L   v_0 − v_1        −φ_L    1   −1  0        0     0   0   0   −1
    φ_L   φ_L − Φ(i_L)     0       0   0   −dΦ/di   1     0   0   0   0

In this case, we get the equations:

    v_0: i_L = 0
    v_1: −i_L = 0
    i_L: −(d/dt)φ_L + v_0 − v_1 = 0
    φ_L: φ_L − Φ(i_L) = 0
4.3.6 Controlled circuit components
Most of the components that were just discussed lead to symmetric contributions to the G and C matrices. The only exception is the inductor with the flux φ as an additional unknown, but this type of branch is rarely used in circuit descriptions. Asymmetry of G and C is most often caused by so-called controlled components. These are components whose circuit equations contain variables that do not belong directly to that component. A simple example is a controlled current source, as shown in figure 4.2.

    [Figure 4.2: A controlled current source — branch a between nodes 0 and 1, branch b between nodes 1 and 2]

Branch a is a controlled current source, and branch b is a branch of some unspecified type. Suppose
that the current through a depends on the voltage difference vb over b, according to the
following equation: i a = I (vb ) := I (v2 − v1 ). We then have the following contributions
to the right-hand side:
v0 : rhs = I (v2 − v1 )
v1 : rhs = −I (v2 − v1 )
v2 : rhs = 0 (no contribution for v2 )
Note that these are only the contributions for branch a; branch b will have its own set of
equations, but I will not discuss them here.
By putting the contributions to rhs in the table form, we obtain:

          j                 G
    v_0   I(v_2 − v_1)      0   −dI/dv   dI/dv
    v_1   −I(v_2 − v_1)     0   dI/dv    −dI/dv
    v_2   0                 0   0        0
So component a gives an asymmetric contribution to the matrix G.
4.4 Special properties of G and C
I will now formulate some conditions under which the matrices G and C, and their linear combination G + κC, are symmetric or even positive-definite. To this end, I will divide all possible circuit elements into four categories:
Category I:
1. Non-controlled current-defined resistors with a non-negative resistance for all
voltages.
2. Non-controlled current sources.
3. Non-controlled capacitors with a non-negative capacity.
Note that these branches have the property that their G and C-matrices are symmetric and positive semi-definite.
Category II:
1. Non-controlled voltage sources.
2. Non-controlled inductors in DC analysis.
These branches have symmetric G and C-matrices, but G and C are not positive semi-definite. However, the matrices G and C have the following form:

    G, C = [ O     p ]
           [ p^T   0 ].        (4.5)
For the inductor, this is only true for G, not C. Hence the restriction to the DC-case,
in which C plays no role.
Category III:
1. Non-controlled inductors in analyses other than DC.
2. Non-controlled voltage-defined resistors.
3. Non-controlled current-defined “resistors” that have a negative resistance for
some voltages.
4. Non-controlled “capacitors” that have a negative capacitance for some voltages.
These components have symmetric G and C-matrices, but these are not necessarily positive semi-definite, nor do they exhibit the special structure of the G and C-matrices of category II.
Category IV: This category contains controlled elements. These have in general asymmetric G and C-matrices, as has been discussed above.
It is now possible to classify electric circuits into four categories: circuits in category I may only contain components from category I, circuits in category II may contain components from categories I and II, circuits in category III may contain components from categories I, II and III, and circuits in category IV may contain components from all categories. The following can now be proven:
Theorem 4.4.1.
1. The matrices G and C of circuits from category I are symmetric and positive semi-definite.
2. The matrices G and C of circuits from category II are symmetric, and have a block partitioning of the form:

    G, C = [ M     P ]
           [ P^T   O ],        (4.6)

where M is a symmetric, positive-definite matrix.
3. The matrices G and C of circuits from category III are symmetric.
Proof. Part 1 follows from theorem 4.2.2, since all sub-circuit matrices are positive semi-definite for circuits of category I. Part 3 follows from theorem 4.2.1, since all sub-circuit matrices are symmetric for circuits of category III. Part 2 is easily verified by reordering the circuit variables in such a way that the voltage variables come first, and the current variables last. The described block partitioning then arises from the division of the variables into voltage and current variables.
4.5 The elimination of zeroes on the main diagonal
Some components in the circuit may give rise to zeroes on the main diagonal of the circuit
matrices. An example of such a component is the voltage source. In this section, two
strategies will be discussed to deal with this situation.
4.5.1 Zeroes due to voltage-defined elements
The matrix G_volt of the voltage source looks like this:

    G_volt = [ 0    0    1  ]
             [ 0    0    −1 ]
             [ 1    −1   0  ].
The voltage source can be combined with some other component to yield a complete circuit. A simple example is the combination of a voltage source and a linear resistor, as shown in figure 4.3.

    [Figure 4.3: A simple circuit with a voltage source s and a resistor r between nodes 0 and 1]

The circuit equations of this circuit are:

    v_0: (1/R)v_0 − (1/R)v_1 + i_s = 0
    v_1: −(1/R)v_0 + (1/R)v_1 − i_s = 0
    i_s: v_0 − v_1 − V = 0
If the G-matrix of the resistor is combined with G_volt, the matrix for the whole circuit becomes:

    G = [ 1/R    −1/R   1  ]
        [ −1/R   1/R    −1 ]
        [ 1      −1     0  ].
This matrix is, of course, singular, since [1, 1, 0]^T is in the null space. The standard way to solve this is to ground one node of the circuit, i.e. the voltage in one node is prescribed. If the first node is grounded, the first row and the first column can be removed, and the resulting system is nonsingular:

    G̃ = [ 1/R   −1 ]
        [ −1    0  ].
Although the resulting matrix is certainly nonsingular, it is impossible to find an UL-decomposition of this matrix without pivoting, for the simple reason that none exists. This can be shown in the following way: suppose such a decomposition existed:

    UL = [ u_1   u_2 ] · [ l_1   0   ] = [ u_1 l_1 + u_2 l_2   u_2 l_3 ] = [ 1/R   −1 ]
         [ 0     u_3 ]   [ l_2   l_3 ]   [ u_3 l_2             u_3 l_3 ]   [ −1    0  ].

Since u_2 l_3 = u_3 l_2 = −1, we know that l_2, l_3, u_2 and u_3 are all unequal to 0. This means that u_3 l_3 is also unequal to zero. But u_3 l_3 needs to be equal to 0 for the equation to hold. Hence such an UL-decomposition does not exist.
This situation typically arises when voltage sources are used. In DC analysis, the problem also arises with inductors. Since these components give rise to zeroes on the main diagonal, UL-decomposition becomes problematic. Of course, there is no problem when pivoting is allowed. But one restriction that arises from the sparse matrix structure in Pstar is that general pivoting is not possible. Hence, another solution has to be found. Pstar solves the problem by coupling every problematic current variable with a suitable node voltage variable in such a way that swapping the rows of these two variables will remove the zero from the main diagonal.
In the above example, there is only one current variable and one node voltage variable, so selection of a voltage variable is trivial. After swapping the two rows of the matrix, we get the matrix:

    [ −1    0  ]
    [ 1/R   −1 ],

which is lower-triangular itself; hence an UL-decomposition trivially exists.
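A small numeric check of this row swap (with the illustrative value R = 2): the swapped system is lower triangular and can be solved directly by forward substitution, while still solving the original system:

```python
import numpy as np

R = 2.0                                 # illustrative value
G_t = np.array([[1.0/R, -1.0],
                [-1.0,   0.0]])
b = np.array([1.0, 0.5])

P = np.array([[0.0, 1.0],               # permutation that swaps the two rows
              [1.0, 0.0]])
A = P @ G_t                             # [[-1, 0], [1/R, -1]], lower triangular
pb = P @ b
x0 = pb[0]/A[0, 0]                      # forward substitution
x1 = (pb[1] - A[1, 0]*x0)/A[1, 1]
x = np.array([x0, x1])                  # also solves the original G_t x = b
```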
In general, suitable voltage variables are not found by searching the matrix for a row which has a nonzero element in the column belonging to a current variable. Instead, a suitable variable is found by looking at the topological structure of the circuit. This is best explained by looking at the example circuit in figure 4.4.

    [Figure 4.4: A circuit with some voltage sources — branches a, b, c and d over nodes 1 to 5]

The above described strategy used in Pstar implies that the current through a branch is always coupled with the
voltage variable of either of its two connection nodes. So i c , the current through branch
c, is coupled with either v2 or v3 . But if v2 is chosen, it becomes impossible to couple, for
example, i b with v2 .
It is always possible to couple all current variables with a voltage variable, provided no
cycle of voltage-defined branches exists. If no such cycles occur, the graph F consisting
of all voltage-defined branches is a forest, i.e. a graph consisting of a number of disconnected trees. It is a basic result from graph theory that every non-empty forest has at least
one leaf, i.e. a node which is incident to only one branch. If such a leaf node n has been
found, the current variable i t of the voltage-defined branch t which is connected to n can
be coupled with n’s voltage variable. The branch t is then deleted from F , and the process
is repeated until F contains no branches.
The algorithm can be described in pseudo-code as follows:
F := the set of all branches that are voltage sources (or inductors in DC)
{invariant: F = the set of all branches that are voltage sources (or inductors)
 and whose current variable has not yet been coupled with a voltage variable}
while F ≠ ∅ do
    n := a leaf node of F
    t := the unique branch in F which is connected to n
    couple i_t with v_n
    F := F − {t}
Note that the graph F is represented as a set of branches.
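The loop above can be sketched in Python; the tuple representation of branches and the helper names here are our own illustration, not Pstar's internal data structures:

```python
# A sketch of the leaf-coupling loop above; branches are (name, node_a, node_b)
# tuples, and the representation is illustrative rather than Pstar's own.
def couple_currents(branches):
    F = set(branches)               # voltage-defined branches still uncoupled
    pairs = []                      # resulting (current, voltage) couples
    while F:
        # a leaf node of F is incident to exactly one branch of F
        degree = {}
        for _, a, b in F:
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1
        leaf = next(n for n, d in degree.items() if d == 1)
        t = next(br for br in F if leaf in (br[1], br[2]))
        pairs.append(("i_" + t[0], "v_%s" % leaf))
        F.remove(t)                 # delete t from F and repeat
    return pairs
```

On the tree of figure 4.4 (branches a, b, c, d) this couples every branch current with a distinct node voltage, exactly as in the worked example below.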
Example: Consider the circuit as shown in figure 4.4. The branches a, b, c and d are
voltage sources, so we initialise F := {a, b, c, d}. Note that F is a forest; in fact, it is a
tree (i.e. a connected forest). First select a leaf node of F , for example node 1. Couple
the current i a through branch a, with the voltage v1 in node 1, and remove branch a from
F . We now have F = {b, c, d}. Now select another leaf node, e.g. node 5. Couple v5
to i d and remove branch d from F . This process can be repeated until F = ∅ and every
branch current is coupled with a node voltage.
If the subgraph F of all voltage sources does contain cycles, the algorithm presented here
does not work. Such networks generally do not have a legal solution; in this case, Pstar
produces an error message.
Instead of swapping the rows, it is also possible to keep the rows in their original order,
and simply use the 2 × 2-sub-matrix on the main diagonal as a 2 × 2 pivot block. In
that case, we actually have a block Gaussian UL-decomposition, since we will find some
2 × 2-blocks on the diagonal of U and L after finishing the decomposition.
As a final remark, note that this pivoting strategy is based on topological information; it
may not be the best choice from the point of view of numerical stability. However, a pivoting
strategy based on the values of the matrix elements rather than on the topological structure
of the circuit would in general destroy the sparsity structure of the circuit matrices.
4.5.2 Zeroes due to the hierarchy
Another problem arises when a trail of voltage sources (or inductors) exists that connects
two terminal nodes of a sub-circuit. Such a situation is shown in figure 4.5. There are two
ways to couple the branch currents to the node voltages; one coupling is (i_a, v_1), (i_b, v_2)
and (i_c, v_3), and the other coupling is (i_a, v_2), (i_b, v_3) and (i_c, v_4). But in both cases,
one branch current has to be coupled with the node voltage of a terminal node. Suppose
that variable i_a is coupled with v_1. Pstar stores coupled variables in adjacent locations
in the vector. Since terminal variables and internal variables are stored in possibly very
Figure 4.5: A sub-circuit with a trail of voltage sources connecting two terminal nodes
distant parts of the vector, this means that the variable i_a has to be stored in the "terminal
variable" part of the vector, next to the variable v_1, although it is, strictly speaking, an
internal variable.
Chapter 5
A complete example
In this chapter, the complete circuit equations of a not-so-trivial circuit will be derived.
The circuit under consideration is shown in figure 5.1. It is a frequency doubler. A
Figure 5.1: A simple frequency doubler
frequency doubler is a circuit that gets an input voltage (between nodes 0 and 2) oscillating
at a specific frequency (in this case, 100kHz), and that produces an output voltage (between
nodes 3 and 4) oscillating at twice that frequency. Note that node 4 should be considered
connected to node 0 by an additional voltage source of 12V. This voltage source is not
drawn in the picture, but instead the voltage in node 4 is stated explicitly. This is common
practice when drawing circuit diagrams.
The circuit consists of some basic circuit branches, and also two sub-circuits, namely the
resonant model and the transistor. The resonant model is a sub-circuit which resonates
at a frequency of 200kHz. It is used to "filter out" the desired frequency. The transistor is
a simplified model of a bipolar npn-transistor. The transistor is strongly nonlinear, and
hence more complicated than the resonant model; therefore, I will discuss the resonant
model first.
A detailed discussion on the modelling of npn-transistors in Pstar can be found in [7].
Philips ED&T/AS has published the models used in Pstar on the Web, at
5.1 The resonant model
The internal structure of the resonant model is shown in figure 5.2. The resonant model is
Figure 5.2: The internal structure of the resonant model
modelled as a resistor, a capacitor and an inductor. The resonant model can be described
in Pstar by:
model: resonant model(3, 4);
   l_inductor (3, 4) 0.19m;
   c_capacitor (3, 4) 3.3n;
   r_resistor (3, 4) 10k;
end;
This should be pretty self-explanatory. In this sub-circuit, the nodes 3 and 4 are terminal
nodes. As said before, the function of the resonant model is to "filter out" a specific
frequency, namely the frequency of 200kHz. The fact that this model accomplishes this
task can be verified by doing an AC analysis with a harmonically oscillating voltage source
between nodes 3 and 4. To provide a reference, node 3 will be grounded, so that the
voltage difference between nodes 3 and 4 is equal to the voltage in node 4. The frequency
of the oscillation will be varied from 100kHz to 400kHz. The results of this analysis
are shown in figure 5.3. Every input signal to the resonant model is damped to some
extent, except for the component with a frequency of 200kHz. So if a voltage source
Figure 5.3: An AC analysis of the resonant model
that produces that particular frequency is hooked up to the resonant model, the amplitude
of the signal will go to ∞, since the voltage source amplifies the signal, but the circuit
doesn’t damp it.
At first sight, it may seem that this circuit doesn’t have any internal variables, but actually
it does, since the current through the inductor will become an internal variable. The
contributions to the right-hand side for the resistor are:

            j_R                         G_R
    v3     (1/R)v3 − (1/R)v4          1/R   −1/R
    v4    −(1/R)v3 + (1/R)v4         −1/R    1/R
The contributions for the capacitor are:

            q_C                   C_C
    v3     C·v3 − C·v4           C   −C
    v4    −C·v3 + C·v4          −C    C
Finally, the contributions for the inductor, in which an extra current variable i_I appears:

            j_I          q_I        G_I             C_I
    v3      i_I           0       0   0   1       0   0   0
    v4     −i_I           0       0   0  −1       0   0   0
    i_I    v3 − v4     −L·i_I     1  −1   0       0   0  −L
c Philips Electronics N.V. 1999
37
804/99
Unclassified Report
These contributions can be integrated into one table of circuit contributions by adding
all contributions belonging to v3, all contributions belonging to v4, and all contributions
belonging to i_I (of which there is only one). This results in the following table of
contributions (the superscript r.m. indicates that these contributions belong to the
resonant model):
            j^r.m.                       q^r.m.          G^r.m.                C^r.m.
    v3     i_I + (1/R)v3 − (1/R)v4      C·v3 − C·v4     1/R  −1/R   1        C  −C   0
    v4    −i_I − (1/R)v3 + (1/R)v4     −C·v3 + C·v4    −1/R   1/R  −1       −C   C   0
    i_I    v3 − v4                     −L·i_I            1   −1     0        0   0  −L
In fact, Pstar stores the variables in a different order, namely v3, i_I and v4, for the reason
discussed in section 4.5. The matrices G^r.m. and C^r.m. as computed by Pstar can now be
found by substituting the variables R, C and L by their respective values 10⁴, 3.3·10⁻⁹
and 1.9·10⁻⁴.
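As a check on the table above, the numerical matrices can be written down directly; this is our own sketch, using the variable order v3, v4, i_I of the table (not Pstar's reordered v3, i_I, v4):

```python
# Numerical G^r.m. and C^r.m. of the resonant model, variable order (v3, v4, i_I);
# a sketch for checking the combined contribution table above.
import numpy as np

R, C, L = 1.0e4, 3.3e-9, 1.9e-4
G_rm = np.array([[ 1/R, -1/R,  1.0],
                 [-1/R,  1/R, -1.0],
                 [ 1.0, -1.0,  0.0]])
C_rm = np.array([[ C,   -C,    0.0],
                 [-C,    C,    0.0],
                 [ 0.0,  0.0, -L ]])
```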
5.2 The npn-transistor
Now we come to the transistor. The transistor is a nonlinear sub-circuit; in fact, the
transistor is the only nonlinear circuit element in this circuit. The transistor is modelled
by the circuit shown in figure 5.4. Node b, called the base of the transistor, is connected
Figure 5.4: A simple transistor model
with node 2 of the super-circuit. Similarly, node c, the collector, is connected with node 3
of the super-circuit, and node e, the emitter, is connected with node 1 of the super-circuit.
All the components in this circuit are nonlinear. Furthermore, the two current sources j1
and j2 are controlled by the voltage differences vb − ve and vb − vc , respectively. The only
three variables in the model are vb , vc and ve ; they are all terminals. The model has six
parameters; these have been chosen as:
    i_s = 10⁻¹⁵      b_f = 100      τ_f = 0.1·10⁻⁶
    v_t = 0.025      b_r = 10       τ_r = 0.1·10⁻⁶
For now, I will just give the j's and q's of the individual branches; the G- and C-matrices
will be discussed later. So let's start with the resistors:

             j_r1                                   j_r2
    v_b    (i_s/b_r)(e^{(v_b−v_c)/v_t} − 1)       (i_s/b_f)(e^{(v_b−v_e)/v_t} − 1)
    v_c   −(i_s/b_r)(e^{(v_b−v_c)/v_t} − 1)        0
    v_e     0                                    −(i_s/b_f)(e^{(v_b−v_e)/v_t} − 1)
The capacitors have similar equations, but they make a contribution to the q, not the j:

             q_c1                                  q_c2
    v_b     i_s τ_r (e^{(v_b−v_c)/v_t} − 1)       i_s τ_f (e^{(v_b−v_e)/v_t} − 1)
    v_c    −i_s τ_r (e^{(v_b−v_c)/v_t} − 1)        0
    v_e      0                                   −i_s τ_f (e^{(v_b−v_e)/v_t} − 1)
Finally, the current sources are interesting because they are controlled by voltage
differences over different branches. This is visible in the contributions, which contain the
terms v_b − v_e and v_b − v_c although their end nodes are v_c and v_e:

             j_1                               j_2
    v_c     i_s (e^{(v_b−v_e)/v_t} − 1)      −i_s (e^{(v_b−v_c)/v_t} − 1)
    v_e    −i_s (e^{(v_b−v_e)/v_t} − 1)       i_s (e^{(v_b−v_c)/v_t} − 1)
By adding the contributions belonging to the same node, we get:

            j^tr.
    v_b    (i_s/b_r)(e^{(v_b−v_c)/v_t} − 1) + (i_s/b_f)(e^{(v_b−v_e)/v_t} − 1)
    v_c   −(i_s/b_r)(e^{(v_b−v_c)/v_t} − 1) + i_s(e^{(v_b−v_e)/v_t} − 1) − i_s(e^{(v_b−v_c)/v_t} − 1)
    v_e   −(i_s/b_f)(e^{(v_b−v_e)/v_t} − 1) − i_s(e^{(v_b−v_e)/v_t} − 1) + i_s(e^{(v_b−v_c)/v_t} − 1)
and:

            q^tr.
    v_b    i_s τ_r (e^{(v_b−v_c)/v_t} − 1) + i_s τ_f (e^{(v_b−v_e)/v_t} − 1)
    v_c   −i_s τ_r (e^{(v_b−v_c)/v_t} − 1)
    v_e   −i_s τ_f (e^{(v_b−v_e)/v_t} − 1)
By differentiating j^tr. and q^tr. with respect to [v_b, v_c, v_e]^T, the matrices G^tr. and C^tr. can be found explicitly. This computation is straightforward, but the resulting expressions are rather complicated; hence I will write the matrices G^tr. and C^tr. as:
             [ ∂j^tr._{v_b}/∂v_b   ∂j^tr._{v_b}/∂v_c   ∂j^tr._{v_b}/∂v_e ]
    G^tr. =  [ ∂j^tr._{v_c}/∂v_b   ∂j^tr._{v_c}/∂v_c   ∂j^tr._{v_c}/∂v_e ]
             [ ∂j^tr._{v_e}/∂v_b   ∂j^tr._{v_e}/∂v_c   ∂j^tr._{v_e}/∂v_e ]

and

             [ ∂q^tr._{v_b}/∂v_b   ∂q^tr._{v_b}/∂v_c   ∂q^tr._{v_b}/∂v_e ]
    C^tr. =  [ ∂q^tr._{v_c}/∂v_b   ∂q^tr._{v_c}/∂v_c   ∂q^tr._{v_c}/∂v_e ] .
             [ ∂q^tr._{v_e}/∂v_b   ∂q^tr._{v_e}/∂v_c   ∂q^tr._{v_e}/∂v_e ]
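Since the entries of G^tr. are tedious to write out, they can also be evaluated numerically; the following sketch builds j^tr. from the combined table above and approximates G^tr. by central finite differences (our own illustration, with the parameter values stated earlier):

```python
# Sketch: the transistor's j-vector (order v_b, v_c, v_e) and a finite-difference
# approximation of its Jacobian G^tr. = dj/dv; values follow the thesis.
import numpy as np

i_s, v_t, b_f, b_r = 1e-15, 0.025, 100.0, 10.0

def j_tr(v):
    vb, vc, ve = v
    d_bc = np.expm1((vb - vc) / v_t)      # e^{(v_b-v_c)/v_t} - 1
    d_be = np.expm1((vb - ve) / v_t)      # e^{(v_b-v_e)/v_t} - 1
    return np.array([
        (i_s / b_r) * d_bc + (i_s / b_f) * d_be,
        -(i_s / b_r) * d_bc + i_s * d_be - i_s * d_bc,
        -(i_s / b_f) * d_be - i_s * d_be + i_s * d_bc,
    ])

def G_tr(v, h=1e-9):
    # central finite differences, one column per variable
    G = np.zeros((3, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = h
        G[:, k] = (j_tr(v + e) - j_tr(v - e)) / (2 * h)
    return G
```

A useful sanity check is that the three node currents sum to zero (Kirchhoff's current law), so every column of G^tr. sums to zero as well.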
5.3 The integration into the super-circuit
On the super-circuit level, there are 8 visible variables. First we have the 5 node voltages
v0, …, v4; furthermore we have two current variables i0 and i1 arising from the two
voltage sources in the circuit. The variable i 0 is the current through the voltage source
between node 0 and node 4, and the variable i 1 is the current through the voltage source
between node 0 and node 2. Finally we have the current through the inductor in the
resonant model, i I . This current variable is coupled with variable v3 of the resonant
model, which is a terminal variable; hence i I needs to be “lifted” to the super-circuit
along with v3 . This process is described in more detail in section 4.5.
Pstar stores these 8 variables in the following order:
v0 , i 0 , v4 , i I , v3 , i 1 , v2 , v1 .
The order is rather arbitrary, but the following things can be noted: v0 , the grounded
node, is considered to be the only terminal node of the circuit; hence Pstar puts it first.
Furthermore, current i 0 is coupled with v4 , current i I is coupled with v3 and current i 1 is
coupled with v2 , so we find variable v4 next to i 0 , variable v3 next to i I , and variable i 1
next to v2 . Also note that in principle, Pstar could have chosen to couple i 0 or i 1 with v0 .
But v0 is a terminal variable, and Pstar prefers to couple current variables to non-terminal
(i.e. internal) variables if at all possible.
The matrices B r.m. and B tr. can now be constructed. These matrices have their columns
labelled with the variables of the sub-circuit, and their rows labelled with the variables
of the super-circuit. So they effectively map the sub-circuit’s variables on those of the
super-circuit’s.
                  v3   v4   iI                             vb   vc   ve
           v0   [  0    0    0 ]                    v0   [  0    0    0 ]
           i0   [  0    0    0 ]                    i0   [  0    0    0 ]
           v4   [  0    1    0 ]                    v4   [  0    0    0 ]
B^r.m. :=  iI   [  0    0    1 ],        B^tr. :=  iI   [  0    0    0 ]
           v3   [  1    0    0 ]                    v3   [  0    1    0 ]
           i1   [  0    0    0 ]                    i1   [  0    0    0 ]
           v2   [  0    0    0 ]                    v2   [  1    0    0 ]
           v1   [  0    0    0 ]                    v1   [  0    0    1 ]
Furthermore, the circuit has 4 basic branches of its own. Their branch contributions can
be stated in tabular form; for every branch, I will only list the rows and columns that are
directly associated with the branch under consideration; all other branches contain only
zeroes.
First, the voltage source between node 0 and node 4 has the following contributions:

            j^(0,4)              G^(0,4)
    v0     −i0                  0  −1   0
    i0      v4 − v0 − 12       −1   0   1
    v4      i0                  0   1   0
The voltage source between node 0 and node 2 is periodic, so we get a sine function in
the contributions:

            j^(0,2)                             G^(0,2)
    v0      i1                                 0   1   0
    i1      v0 − v2 + 0.3 sin(2π f t)          1   0  −1
    v2     −i1                                 0  −1   0
The current source between node 0 and node 1 has the following, rather simple,
contributions:

            j^(0,1)
    v0      2·10⁻³
    v1     −2·10⁻³
Finally, the capacitor between node 0 and node 1 has the contributions:

            q^(0,1)                  C^(0,1)
    v0      10⁻⁴(v0 − v1)           10⁻⁴   −10⁻⁴
    v1     −10⁻⁴(v0 − v1)          −10⁻⁴    10⁻⁴
All these contributions can be added to form one large system of contributions for the
whole circuit. This system of contributions is shown in table 5.1. Note that the q, j, G and
C of the sub-circuits haven't been written out explicitly; they are just listed in the table
form. Also note that the variable names of the sub-circuits have been used for indexing
in vectors and matrices; hence j^r.m._{v3} is the first component of vector j^r.m., and C^tr._{v_b,v_c} is the
component at position (1, 2) of the matrix C^tr..
Table 5.1: The complete system of contributions of the frequency doubler
(variables ordered v0, i0, v4, iI, v3, i1, v2, v1):

          j                                 q
    v0   −i0 + i1 + 2·10⁻³                 10⁻⁴(v0 − v1)
    i0    v4 − v0 − 12                      0
    v4    i0 + j^r.m._{v4}                  q^r.m._{v4}
    iI    j^r.m._{iI}                       q^r.m._{iI}
    v3    j^r.m._{v3} + j^tr._{vc}          q^r.m._{v3} + q^tr._{vc}
    i1    v0 − v2 + 0.3 sin(2π f t)         0
    v2   −i1 + j^tr._{vb}                   q^tr._{vb}
    v1   −2·10⁻³ + j^tr._{ve}              −10⁻⁴(v0 − v1) + q^tr._{ve}

G (rows and columns ordered v0, i0, v4, iI, v3, i1, v2, v1):

  [  0  −1   0               0               0                                1    0              0             ]
  [ −1   0   1               0               0                                0    0              0             ]
  [  0   1   G^r.m._{v4,v4}  G^r.m._{v4,iI}  G^r.m._{v4,v3}                   0    0              0             ]
  [  0   0   G^r.m._{iI,v4}  G^r.m._{iI,iI}  G^r.m._{iI,v3}                   0    0              0             ]
  [  0   0   G^r.m._{v3,v4}  G^r.m._{v3,iI}  G^r.m._{v3,v3} + G^tr._{vc,vc}   0    G^tr._{vc,vb}  G^tr._{vc,ve} ]
  [  1   0   0               0               0                                0   −1              0             ]
  [  0   0   0               0               G^tr._{vb,vc}                   −1    G^tr._{vb,vb}  G^tr._{vb,ve} ]
  [  0   0   0               0               G^tr._{ve,vc}                    0    G^tr._{ve,vb}  G^tr._{ve,ve} ]

C (same ordering):

  [ 10⁻⁴   0   0               0               0                                0   0              −10⁻⁴                  ]
  [  0     0   0               0               0                                0   0               0                     ]
  [  0     0   C^r.m._{v4,v4}  C^r.m._{v4,iI}  C^r.m._{v4,v3}                   0   0               0                     ]
  [  0     0   C^r.m._{iI,v4}  C^r.m._{iI,iI}  C^r.m._{iI,v3}                   0   0               0                     ]
  [  0     0   C^r.m._{v3,v4}  C^r.m._{v3,iI}  C^r.m._{v3,v3} + C^tr._{vc,vc}   0   C^tr._{vc,vb}   C^tr._{vc,ve}         ]
  [  0     0   0               0               0                                0   0               0                     ]
  [  0     0   0               0               C^tr._{vb,vc}                    0   C^tr._{vb,vb}   C^tr._{vb,ve}         ]
  [ −10⁻⁴  0   0               0               C^tr._{ve,vc}                    0   C^tr._{ve,vb}   C^tr._{ve,ve} + 10⁻⁴  ]
Chapter 6
Existence of a solution to the two-point
Boundary Value Problem
6.1 Introduction
In this chapter, I will give sufficient conditions for the existence of a time-periodic solution
to the two-point Boundary Value Problem:
    (d/dt) q(t, x) + j(t, x) = 0 ∈ R^N    for 0 ≤ t ≤ T,      (6.1a)
    x(0) = x(T).                                              (6.1b)
6.2 Index of a DAE
I will start with a short overview of general theory for DAE’s. In this theory, the concept
of index of a DAE plays an essential role. For a more elaborate overview of the concept
of index, please see chapter IX in [6].
Definition 6.2.1. Consider a general DAE of the form:
    k(t, x, ẋ) = 0.                                           (6.2)

The index of 6.2 is defined as the smallest ν ∈ N so that the system:

    k(t, x, ẋ) = 0,
    (d/dt) k(t, x, ẋ) = 0,
        ⋮
    (d^ν/dt^ν) k(t, x, ẋ) = 0

can be rewritten into an ordinary differential equation by using only linear combinations
and reordering of terms.
Example: consider the circuit shown in 6.1. The unknowns are v and i L ; the current J (t)
is given. This circuit has the following circuit equations:
Figure 6.1: A circuit whose circuit equations are index-2 DAE’s
    di_L/dt = (1/L) v,                                        (6.3a)
    0 = i_L − J(t).                                           (6.3b)
Differentiating 6.3b gives:
    di_L/dt = dJ(t)/dt.                                       (6.4)
Subtracting 6.4 from 6.3a gives:
    0 = (1/L) v − dJ(t)/dt,                                   (6.5)
and reordering these terms gives:
    dJ(t)/dt = (1/L) v.                                       (6.6)
Differentiating 6.6 gives:

    v̇ = L d²J(t)/dt².                                         (6.7)
So we need to differentiate twice before we get an ordinary differential equation. By
definition, this means that this is an index-2 DAE. Note that the resulting ordinary
differential equation contains the second derivative of the input signal; in general, the
ODE corresponding to a DAE of index n contains the n-th derivative of the input signal.
The practical result of this can be seen in figures 6.2 and 6.3. Due to the index-2 character
of the DAE, an initial "wiggle" is introduced in the computed solution; this wiggle is a
numerical artefact. The deviation from the real solution has been plotted in figure 6.3. A
mildly damping integration method is used; hence the deviation damps out during the
simulation. Note that this phenomenon could have been prevented, or at least reduced,
by using a strongly damping integration method, such as Euler Backward.
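The two differentiations can be checked symbolically; this sketch uses sympy with the illustrative choice J(t) = sin t (our own example, not the actual input of figure 6.1):

```python
# Symbolic check of the index-2 reduction of system 6.3, for J(t) = sin(t).
import sympy as sp

t, L = sp.symbols('t L', positive=True)
J = sp.sin(t)
i_L = J                               # 6.3b forces i_L = J(t)
# one differentiation of 6.3b combined with 6.3a gives v = L * J'(t) (eq. 6.6)
v = L * sp.diff(J, t)
# a second differentiation gives the ODE v' = L * J''(t) (equation 6.7)
ode_residual = sp.diff(v, t) - L * sp.diff(J, t, 2)
```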
Figure 6.2: Solution of equation 6.3, as computed by Pstar,
using a BDF method.
6.3 Transferable DAE’s
The following definition is needed in the rest of this chapter:
Definition 6.3.1. The differential-algebraic equation 6.1a is called transferable iff
1. The matrix functions C(t) := ∂q/∂x and G(t) := ∂j/∂x are continuous.
2. A continuously differentiable orthogonal projector function Q : [0, T ] → R N ×N
exists which satisfies the property:
R(Q(t)) = N (C(t)),
∀t ∈ [0, T ].
3. The matrix K (t) := C(t) + G(t) · Q(t) is invertible for all t ∈ [0, T ] and all x.
The following theorem can now be proven:
Theorem 6.3.1. A transferable differential-algebraic equation has index-1, and can be
written in the form:

    u̇ = g(t, u),                                              (6.8)
    y = h(t, u).                                               (6.9)
Proof. First, note that since Q(t) is continuous, trace(Q(t)) is too. Since Q(t) is a projection, rank(Q(t)) = trace(Q(t)), and therefore rank(Q(t)) is continuous too. But the
Figure 6.3: Deviation of the computed solution of 6.3 relative to the real solution.
rank of a matrix can only be a natural number, so this means that rank(Q(t)) is constant;
hence the nullity of C(t) is constant, say rank(Q(t)) = dim N(C(t)) = ν.
Since Q is an orthogonal projection, by definition Q = Q² = Q^T. Hence Q is
symmetric, and there is an orthogonal matrix S which brings it into diagonal form; since
Q is a projection, the diagonal entries consist only of 1's and 0's:


    Q(t) = S(t) · diag(0, …, 0, 1, …, 1) · S⁻¹(t).            (6.10)
The matrix S can be written as [S1 | S2], where

    S1^T Q S1 = O    and    S2^T Q S2 = I.

Note that R(S1) = N(Q) and R(S2) = R(Q). The basic idea of the proof is to express
the solution x as:

    x =: S [u; y] = [S1 | S2] [u; y] = S1 u + S2 y.
We can rewrite equation 6.1a as:
    (d/dt) q(t, S1 u + S2 y) + j(t, S1 u + S2 y) = 0.         (6.11)
Note that ∂q(t, S1 u + S2 y)/∂y = C(t, . . . )S2 , and since R(S2 ) = R(Q) = N (C), we
have that ∂q(t, S1 u + S2 y)/∂y = O, hence q(t, S1 u + S2 y) = q(t, S1 u), and we have:
    (d/dt) q(t, S1 u) + j(t, S1 u + S2 y) = 0.                (6.12)
Differentiating gives:

    C(t, S1 u)(Ṡ1 u + S1 u̇) + ∂q(t, S1 u)/∂t + j(t, S1 u + S2 y) = 0.   (6.13)

This can be rewritten as:
f̂(t, u̇, u, y) = 0.
(6.14)
The important observation is now that property 3 of the definition of transferability
implies that the Jacobian ∂f̂/∂[u̇; y] is invertible for all t and all u, u̇ and y, since:

    ∂f̂/∂u̇ = C(t, S1 u) S1                                     (6.15)

and

    ∂f̂/∂y = G(t, S1 u + S2 y) S2.                             (6.16)
From the definition of S2 it follows directly that S2 = Q S2. So we have:

    ∂f̂/∂[u̇; y] = [C·S1 | G·S2]
               = [C·S1 | G·Q·S2]
               = [C·S1 + G·Q·S1 | C·S2 + G·Q·S2]    (since Q·S1 = O and C·S2 = O)
               = [(C + G·Q)·S1 | (C + G·Q)·S2]
               = (C + G·Q)·S.                                 (6.17)

Since C + G·Q is invertible (by assumption) and S is invertible, ∂f̂/∂[u̇; y] is invertible
too.
So by using the Implicit Function Theorem, we can rewrite equation 6.14 into the
following form:

    u̇ = g(t, u),                                              (6.18)
    y = h(t, u).                                              (6.19)
So it turns out that the condition that the DAE is transferable actually implies that it
is an index-1 DAE. Note that the functions g and h are continuously differentiable,
and therefore locally Lipschitz continuous. Hence a solution u(t) to the initial value
problem u(0) = u0 exists. So the initial value problem for x with x(0) = x0 has a solution,
provided that x0 satisfies the algebraic equations at t = 0:

    S2^T x0 = h(0, S1^T x0).                                  (6.20)
6.4 Well-conditioning
The next step is to investigate how the solution of the problem 6.1 changes when the input
parameters change slightly. For this, look at the linearised problem:
    C(t)ẋ + G(t)x = r(t) ∈ R^N    for 0 ≤ t ≤ T,              (6.21a)
    x(0) = x(T).                                              (6.21b)
Expressed in terms of u and y, equation 6.21a becomes:
    C(t)S1 u̇ + G(t)S1 u + G(t)S2 y = r(t),                    (6.22)

or

    (C(t) + G(t)Q)(S1 u̇ + S2 y) + G(t)S1 u = r(t),            (6.23)

which in turn can be written as:

    u̇ = S1^T(t) K⁻¹(t) (r(t) − G(t)S1(t)u),
    y = S2^T(t) K⁻¹(t) (r(t) − G(t)S1(t)u).                    (6.24)
The solution for the periodic BVP for u can now be written as:

    u(t) = ∫₀ᵀ G(t, s) S1(s)^T (C(s) + G(s)Q(s))⁻¹ r(s) ds,   (6.25)

where G(t, s) is defined as:

    G(t, s) := U(t)U(0)U⁻¹(s)  for t > s,
    G(t, s) := U(t)U(T)U⁻¹(s)  for t < s.                     (6.26)

The matrix function U(t) is a solution of the matrix differential equation:

    U̇ = −S1^T(t) K⁻¹(t) G(t) S1(t) U.                         (6.27)
Now define κ1, κ2, κ3 as:

    κ1 := max_{t∈[0,T]} ‖I − Q(t)K⁻¹(t)G(t)‖,                 (6.28)
    κ2 := max_{t,s∈[0,T]} ‖G(t, s)‖,                          (6.29)
    κ3 := max_{s∈[0,T]} ‖K⁻¹(s)‖.                             (6.30)
Then we have by equation 6.25:

    ‖u(t)‖ ≤ κ2 ∫₀ᵀ κ3 ‖r(s)‖ ds.                             (6.31)

Since x = S1 u + S2 y, it follows that ‖x‖ ≤ ‖u‖ + ‖y‖. By using equation 6.24, we find
that ‖x‖ ≤ κ1 ‖u‖ + κ3 ‖r‖, so we finally find:

    ‖x(t)‖ ≤ (κ1 κ2 + 1) κ3 ∫₀ᵀ ‖r(s)‖ ds =: κ ∫₀ᵀ ‖r(s)‖ ds.  (6.32)
Chapter 7
Periodic Steady State analysis
As described in section 3.4, the Periodic Steady State (PSS) solution of a circuit is the
solution x ∈ C([0, T ], R N ) to the following problem:
    f(t, x, ẋ) := (d/dt) q(t, x) + j(t, x) = 0 ∈ R^N    for 0 ≤ t ≤ T,   (7.1a)
    x(0) = x(T).                                                          (7.1b)
Here T may or may not be known a priori. If T is known a priori, the circuit is called
non-autonomous, otherwise it is called autonomous. If T is not known a priori, it forms
an additional unknown in the system 7.1. This is the case in circuits like a free-running
oscillator, i.e. an oscillator that is not “driven” by periodic input signals. In this chapter, I
will discuss the non-autonomous case.
For non-autonomous circuits, T is known a priori. The resulting problem is a two-point
boundary value problem, for which many solution methods exist. Two of them, Harmonic
Balance and epsilon extrapolation, are already implemented in Pstar. This kind of problem
occurs often during the simulation of Radio Frequency (RF) circuits, which typically
exhibit strong oscillatory behaviour.
7.1 Harmonic Balance
Harmonic Balance (HB) is one of the two PSS algorithms currently implemented in Pstar.
Harmonic Balance is a nonlinear frequency-domain analysis; in fact, it is a generalisation
of the linear AC analysis. If the problem under consideration is linear, Harmonic Balance
reduces to a linear AC analysis. However, unlike AC analysis, Harmonic Balance can
be used to simulate inter-modulation effects, i.e. influences between different frequency
components of the solution. Harmonic Balance is based on the so-called Discrete Fourier
Transform, so I will explain this first.
7.1.1 The Discrete Fourier Transform
Let g : R → R be a Riemann-integrable, periodic function with period T . Then we have
numbers γk ∈ C, k ∈ Z, satisfying:
    g(t) = Σ_{k=−∞}^{∞} γ_k e^{iωkt}    for t ∈ D,            (7.2)

where ω := 2π/T and D ⊆ R is such that R\D has measure 0. The right-hand side of
equation 7.2 is called the Fourier series of g.
If g is continuous, we have D = R. If g is sufficiently smooth, the γk ’s will vanish
exponentially fast for |k| → ∞. So in that case, it makes sense to approximate g with a
truncated Fourier series:
    g(t) ≈ ĝ(t) := Σ_{k=−K}^{K} γ̂_k e^{iωkt}.                 (7.3)
If we define Γ̂ := [γ̂_{−K}, γ̂_{−K+1}, …, γ̂_K]^T, equation 7.3 can be rewritten as:

    ĝ(t) = (φ(t) · Γ̂) = (φ(t))^T Γ̂,                           (7.4)

where φ(t) := [e^{iω(−K)t}, e^{iω(−K+1)t}, …, e^{iωKt}]^T. If we select 2K+1 points
t_{−K}, …, t_K ∈ [0, T), we can define the vector ĝ := [g(t_{−K}), …, g(t_K)]^T. This
vector is called a time-profile of g, since it consists of a finite number of samplings of g
in the time domain. We can also define the matrix Φ as:

    Φ := [(φ(t_{−K}))^T ; … ; (φ(t_K))^T].

We can now derive from equation 7.4 the following compact relation between ĝ, Φ and Γ̂:

        ĝ(t) = (φ(t))^T Γ̂
    ⟹  [ĝ(t_{−K}), …, ĝ(t_K)]^T = [(φ(t_{−K}))^T Γ̂, …, (φ(t_K))^T Γ̂]^T
    ⟹  ĝ = Φ Γ̂.                                               (7.5)

For most choices of the points t_k, the matrix Φ will be invertible. This is in particular
the case if the t_k's are chosen equally spaced over the interval [0, T). If Φ is invertible,
we have of course Γ̂ = Φ⁻¹ ĝ. Note that the matrix Φ⁻¹ converts a finite (discrete)
sampling of g in the time domain into a truncated set of Fourier coefficients. Hence the
matrix Φ⁻¹ is called the Discrete Fourier Transform (DFT), and the matrix Φ is called the
Inverse Discrete Fourier Transform (IDFT).
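For equally spaced points the matrices Φ and Φ⁻¹ are easy to build explicitly; the following sketch (our own, with the illustrative choices T = 1 and K = 3) recovers the coefficients of cos(ωt) from its time-profile:

```python
# Sketch of the Phi (IDFT) matrix of equations 7.4-7.5 for equally spaced
# sample points; T and K are illustrative choices.
import numpy as np

K = 3                                   # truncation order: 2K+1 coefficients
M = 2 * K + 1
T = 1.0
omega = 2 * np.pi / T
tk = np.arange(M) * T / M               # equally spaced points in [0, T)
ks = np.arange(-K, K + 1)
Phi = np.exp(1j * omega * np.outer(tk, ks))   # row m is (phi(t_m))^T

# for a test signal with known coefficients, Phi^{-1} recovers them:
gamma = np.zeros(M, dtype=complex)
gamma[ks == 1] = 0.5                    # e^{+i w t} component
gamma[ks == -1] = 0.5                   # together: cos(w t)
g_samples = Phi @ gamma                 # time-profile of cos(w t)
gamma_rec = np.linalg.solve(Phi, g_samples)   # the DFT, applied to the profile
```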
7.1.2 Applying the DFT in Harmonic Balance
If the functions q and j in equation 7.1a are replaced by their Fourier expansions, we get:

    (d/dt) Σ_{k=−∞}^{∞} Q_k e^{iωkt} + Σ_{k=−∞}^{∞} J_k e^{iωkt} = 0,    (7.6)

where

    q(t, x) =: Σ_{k=−∞}^{∞} Q_k e^{iωkt}   and   j(t, x) =: Σ_{k=−∞}^{∞} J_k e^{iωkt}.
Again, we can truncate the Fourier series, resulting in:

    (d/dt) Σ_{k=−K}^{K} Q̂_k e^{iωkt} + Σ_{k=−K}^{K} Ĵ_k e^{iωkt} = 0,    (7.7)

where

    q(t, x̂) =: Σ_{k=−K}^{K} Q̂_k e^{iωkt}   and   j(t, x̂) =: Σ_{k=−K}^{K} Ĵ_k e^{iωkt}.
Equation 7.7 can be transformed into the following set of equations:

    iω(−K) Q̂_{−K} + Ĵ_{−K} = 0,
        ⋮                                                     (7.8)
    iωK Q̂_K + Ĵ_K = 0.
By using the DFT, equation 7.8 can be rewritten as:

    Λ Φ⁻¹ [q(t_{−K}, x̂(t_{−K})); … ; q(t_K, x̂(t_K))]
      + Φ⁻¹ [j(t_{−K}, x̂(t_{−K})); … ; j(t_K, x̂(t_K))] = 0,   (7.9)

where Λ := diag(iω(−K), …, iωK). By defining:

    q⃗([x_{−K}; … ; x_K]) := [q(t_{−K}, x_{−K}); … ; q(t_K, x_K)]   and
    j⃗([x_{−K}; … ; x_K]) := [j(t_{−K}, x_{−K}); … ; j(t_K, x_K)],

equation 7.9 can be rewritten as:

    Λ Φ⁻¹ q⃗([x̂(t_{−K}); … ; x̂(t_K)]) + Φ⁻¹ j⃗([x̂(t_{−K}); … ; x̂(t_K)]) = 0.   (7.10)
Now define X̂ := Φ⁻¹ [x̂(t_{−K}), …, x̂(t_K)]^T. Equation 7.10 can now be written as:

    F_HB(X̂) := Λ Φ⁻¹ q⃗(Φ X̂) + Φ⁻¹ j⃗(Φ X̂) = 0.                 (7.11)

Equation 7.11 is called the Harmonic Balance form. This is the equation that is solved
in the Harmonic Balance algorithm. Note that the equations are in the frequency domain
and that the unknown X̂ consists of Fourier coefficients.
7.1.3 Solving the Harmonic Balance form
Equation 7.11 is often solved using some type of Newton method. Hence the Jacobian
matrix dF_HB/dX̂ needs to be computed. Differentiation of F_HB yields:

    dF_HB/dX̂ = (d/dX̂) (Λ Φ⁻¹ q⃗(Φ X̂) + Φ⁻¹ j⃗(Φ X̂))
             = Λ Φ⁻¹ (dq⃗/d(ΦX̂))(Φ X̂) Φ + Φ⁻¹ (dj⃗/d(ΦX̂))(Φ X̂) Φ
             = Λ Φ⁻¹ C⃗(Φ X̂) Φ + Φ⁻¹ G⃗(Φ X̂) Φ,                 (7.12)

where the block diagonal matrices G⃗ and C⃗ are defined by:

    G⃗([x_{−K}; … ; x_K]) := diag(G(t_{−K}, x_{−K}), …, G(t_K, x_K))

and

    C⃗([x_{−K}; … ; x_K]) := diag(C(t_{−K}, x_{−K}), …, C(t_K, x_K)).

So the Jacobian matrix dF_HB/dX̂ can be computed by using the matrices G and C.
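For a scalar linear test equation, the Harmonic Balance form 7.11 can be assembled and solved in a few lines; this is our own illustration with q(t, x) = x and j(t, x) = x − cos(ωt), whose exact PSS is (cos ωt + ω sin ωt)/(1 + ω²):

```python
# Sketch: Harmonic Balance (equation 7.11) for the scalar linear problem
# dx/dt + x = cos(w t); the setup and names are our own illustration.
import numpy as np

K, T = 4, 1.0
M = 2 * K + 1
w = 2 * np.pi / T
tk = np.arange(M) * T / M
ks = np.arange(-K, K + 1)
Phi = np.exp(1j * w * np.outer(tk, ks))       # IDFT matrix
Phi_inv = np.linalg.inv(Phi)                  # DFT matrix
Lam = np.diag(1j * w * ks)                    # frequency-domain d/dt

# here q(t,x) = x and j(t,x) = x - cos(w t), so F_HB reduces to the
# linear system (Lam + I) X = Phi^{-1} cos-profile; solve it directly:
rhs = Phi_inv @ np.cos(w * tk)
Xhat = np.linalg.solve(Lam + np.eye(M), rhs)
x_time = (Phi @ Xhat).real                    # time-profile of the PSS

x_exact = (np.cos(w * tk) + w * np.sin(w * tk)) / (1 + w**2)
```

For a nonlinear q and j this linear solve becomes one Newton step with the Jacobian of equation 7.12.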
7.2 Extrapolation methods
The other method currently implemented in Pstar for determining the PSS of a circuit is
epsilon extrapolation. I will discuss this algorithm later on, but first I will give a general
overview of extrapolation methods.
To this end, consider again the system of equations 7.1a and 7.1b:
    f(t, x, ẋ) = 0 ∈ R^N    for 0 ≤ t ≤ T,
    x(0) = x(T).

Suppose that for some domain D ⊆ R^N the implicit initial value problems:

    f(t, x, ẋ) = 0    for 0 ≤ t ≤ T,
    x(0) = x0,    with x0 ∈ D,                                (7.13)
have a unique solution on the interval [0, T]. Now define the function F : D → R^N as:

    F(x0) := x(T),                                            (7.14)

where x is the solution of f(t, x, ẋ) = 0 with x(0) = x0.
Figure 7.1: The function F maps the initial value in t = 0 on the end value in t = T .
Thus the function F gives the value of the solution in t = T, given the initial value in
t = 0; see also figure 7.1. Solving the system 7.1 is now equivalent to finding a solution
to the equation

    x = F(x).                                                 (7.15)

A solution x∗ to 7.15 is called a fixed point of F. Now suppose that F satisfies the
following two criteria:

    F(D) ⊆ D,                                                 (7.16a)
    ‖F(x) − F(y)‖ ≤ α‖x − y‖    for some α ∈ [0, 1).          (7.16b)

A function which satisfies these two criteria is called a contraction mapping. According
to the Banach Contraction Theorem, every contraction mapping has exactly one fixed
point. So F has a fixed point, and the system of equations 7.1 has a solution.
According to the Banach Contraction Theorem, the fixed point x∗ satisfies
x∗ = lim_{n→∞} F^n(x0) for every x0 ∈ D. So one way to find a solution to 7.15, and
thus to 7.1, is to calculate the sequence x0, F(x0), F²(x0), …, until the sequence has
converged sufficiently to the solution x∗. Note that this algorithm results in at least
linear convergence, since:

    ‖F^n(x0) − x∗‖ = ‖F^n(x0) − F(x∗)‖          (by 7.15)
                   ≤ α ‖F^{n−1}(x0) − x∗‖.      (by 7.16b)
If α is close to 1, this process might converge slowly. In fact, calculating the sequence x0 ,
F(x0 ), F 2 (x0 ), . . . is completely equivalent to calculating the transient analysis over the
intervals [0, T ], [T, 2T ], [2T, 3T ], etcetera. So we actually have here a transient analysis
in disguise. This has been visualised in figure 7.2.
54
c Philips Electronics N.V. 1999
Unclassified Report
804/99
Figure 7.2: Computing the sequence x0 , F(x0 ), F 2 (x0 ), . . . is equivalent to doing a transient analysis over the interval [0, T ], [T, 2T ], etcetera.
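The slow linear convergence is easy to reproduce on a toy contraction; here F is the scalar map x ↦ 0.9x + 0.1 (our own stand-in for one period of transient integration), whose fixed point is 1:

```python
# Fixed-point iteration x_{n+1} = F(x_n), i.e. repeated "transient analysis"
# over one period; F here is a toy contraction with alpha = 0.9.
def iterate_to_pss(F, x0, tol=1e-12, max_iter=1000):
    x = x0
    for n in range(1, max_iter + 1):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next, n
        x = x_next
    return x, max_iter

x_star, n_steps = iterate_to_pss(lambda x: 0.9 * x + 0.1, 0.0)
# with contraction factor 0.9, well over 200 iterations are needed
```

With α = 0.9 the error shrinks by only 10% per period, which mirrors the slow approach to the PSS seen in figure 7.3.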
It might take a long time before the transient solution approximates the Periodic Steady
State; after all, this was the reason to explore dedicated PSS methods in the first place.
So we actually have a slowly converging sequence with a known order of convergence.
Several methods exist to speed up the convergence of such a sequence. These so-called
extrapolation methods are often used to speed up the convergence of numerical
differentiation, but they can be applied here as well.
A good example of how slow the unaccelerated convergence to the PSS can be is given
by figure 7.3, which shows the result of a transient analysis of the frequency doubler in
figure 5.1. The period T of the Periodic Steady State of this circuit will be 10⁻⁵ s, because
the circuit operates at a base frequency of 100kHz. But the PSS hasn't been reached
even after 10⁻³ s, which means that even after 100 evaluations of F, the fixed point hasn't
been approximated well enough. So there is clearly room for improvement here.
These extrapolation methods are almost always based on the computation of a sequence
(y_n), which is defined by:

y_0 = x_0,
y_{n+1} = EXTRAP(F^p(y_n), F^{p+1}(y_n), . . . , F^{p+q}(y_n)).  (7.17)
Here, EXTRAP is an extrapolation formula, and p and q are natural numbers. Note
that if we take p = 1, q = 0 and EXTRAP(F^p(y_n)) = F^p(y_n), we have the sequence
y_n = F^n(x_0), i.e. the original, unaccelerated sequence. There are several choices possible
for EXTRAP. I will discuss a number of them in the following subsections.
7.2.1 Aitken's Δ² method
One of the best-known methods to accelerate the convergence of a sequence is Aitken's
Δ² method. First, I will discuss this method for the scalar case. Suppose that we have a
sequence (xn )n , defined by some starting value x0 and a recursion relation x j+1 = F(x j ),
which converges to a limit x.

[Figure 7.3 shows Pstar output: a transient analysis (TR) of test circuit 2, the frequency doubler, with node voltages VN(1) and VN(3) plotted against t over [0, 1.0 ms].]
Figure 7.3: A transient analysis of the circuit in figure 5.1

Now define the sequences of first and second order differences (Δx_n)_n and (Δ²x_n)_n by:
Δx_j := x_{j+1} − x_j,  (7.18)
Δ²x_j := Δx_{j+1} − Δx_j = x_{j+2} − 2x_{j+1} + x_j.  (7.19)
Now construct the sequence (x̂_n)_n by:

x̂_j := x_j − (Δx_j)² / Δ²x_j.  (7.21)
In [11], the following theorem is proven:
Theorem 7.2.1. Assume that the sequence (x_n)_n converges to x according to:

x_j − x = (γ + δ_j)(x_{j−1} − x),   |γ| < 1,   lim_{j→∞} δ_j = 0,  (7.22)

and that x_j ≠ x for all j ∈ N. Then there is an M ∈ N such that the sequence
x̂_M, x̂_{M+1}, . . . defined by 7.21 exists and has the property:

lim_{j→∞} (x̂_j − x)/(x_j − x) = 0.  (7.23)
This theorem says, in effect, that the accelerated sequence (x̂_n)_n converges to the same
limit as (x_n)_n, only faster. But in general, the sequence (x̂_n)_n will still converge linearly.
This is solved by a variant of Aitken's method, called Steffensen's method. The idea of
Steffensen's method is to apply Aitken's method to the first three elements of (x_n)_n,
producing the first element of the accelerated sequence, x̂_0. Next, x̂_0 is taken as the starting
point of a new sequence by applying the original recursion relation x_{j+1} = F(x_j), and
Aitken's method is again applied to the first three elements of that sequence, etcetera. So
the general idea is to compute an accelerated sequence (yn )n , in the following way:
y_0 = x_0,  (7.24)
y_{j+1} = AITKEN(y_j, F(y_j), F²(y_j)).  (7.25)
Note that this method has the form of 7.17. The sequence y0 , y1, y2 , . . . will converge to
x with at least quadratic convergence, as shown in [11].
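As a minimal illustration of Steffensen's scheme (a generic numerical example, not taken from the thesis), the following Python sketch restarts Aitken's Δ² formula after every acceleration step and applies it to the fixed point of F(x) = cos x:

```python
import math

def steffensen(F, y, tol=1e-10, max_iter=50):
    """Steffensen's method: one Aitken Delta^2 step (7.21) per iteration,
    restarted from the accelerated value."""
    for _ in range(max_iter):
        y1 = F(y)
        y2 = F(y1)
        d2 = y2 - 2.0 * y1 + y            # Delta^2 of (y, y1, y2)
        if d2 == 0.0:                     # already at machine precision
            return y2
        y_new = y - (y1 - y) ** 2 / d2    # Aitken's formula (7.21)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Fixed point of cos (the Dottie number, roughly 0.7390851)
x = steffensen(math.cos, 1.0)
```

Plain fixed-point iteration of cos converges only linearly with rate |sin x*| ≈ 0.67; the restarted Aitken scheme reaches full precision in a handful of steps.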
7.2.2 Reduced rank extrapolation (RRE)
Aitken's method (and Steffensen's variant of it) can be generalised to operate on a sequence (x_n)_n ⊂ R^N of vectors. In order to do this, we define:
Δx_j := x_{j+1} − x_j,  (7.26)
Δ²x_j := Δx_{j+1} − Δx_j = x_{j+2} − 2x_{j+1} + x_j,  (7.27)
ΔX_j := [Δx_j | Δx_{j+1} | · · · | Δx_{j+N−1}],  (7.28)
Δ²X_j := [Δ²x_j | Δ²x_{j+1} | · · · | Δ²x_{j+N−1}].  (7.29)
Find the extrapolated value x̂_j by:

x̂_j := x_j − ΔX_j (Δ²X_j)^{−1} Δx_j.  (7.31)
The process can now be restarted with x̂_j; this is Steffensen's modification. Note that the
scalar Aitken method is recovered when N = 1. The resulting method is called the
Full Rank Extrapolation method, but it is hardly ever used in this form, because it involves
inverting an N × N matrix. However, it forms the foundation of a much more important method, the Reduced Rank Extrapolation, or RRE for short, which is described in
full detail in [2].
The idea behind RRE is to replace the full N × N matrices ΔX_j and Δ²X_j by
N × k matrices ΔX_j^{(k)} and Δ²X_j^{(k)}, where k < N. These matrices are defined as:

ΔX_j^{(k)} := [Δx_j | Δx_{j+1} | · · · | Δx_{j+k−1}],  (7.32)
Δ²X_j^{(k)} := [Δ²x_j | Δ²x_{j+1} | · · · | Δ²x_{j+k−1}].  (7.33)
The RRE-accelerated sequence (x̃_n)_n is now defined as:

x̃_j := x_j − ΔX_j^{(k)} (Δ²X_j^{(k)})⁺ Δx_j.  (7.35)
Note that the ordinary matrix inverse (Δ²X_j)^{−1} has been replaced by the Penrose pseudo-inverse (Δ²X_j^{(k)})⁺. For a suitable choice of k, this method will converge; if Steffensen's
modification is applied, it might even converge quadratically. In [10], the convergence
criteria of RRE are discussed, together with some strategies for the actual implementation
of the method.
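A minimal sketch of one RRE step is given below. It is not a production implementation: the pseudo-inverse is applied through the normal equations instead of a QR or SVD factorisation, k is fixed at 2, and the test sequence x_{j+1} = Ax_j + b is an assumption chosen so that the extrapolation is exact:

```python
def solve2(M, r):
    # Solve a 2x2 linear system by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - r[0] * M[1][0]) / det]

def rre_step(xs, k=2):
    """One RRE extrapolation (7.35): x0 - DX (D2X)^+ dx0, with the
    pseudo-inverse applied through the normal equations (k = 2 only)."""
    n = len(xs[0])
    dx = [[xs[j + 1][i] - xs[j][i] for i in range(n)] for j in range(k + 1)]
    d2 = [[dx[j + 1][i] - dx[j][i] for i in range(n)] for j in range(k)]
    # least-squares solution c of (D2X) c = dx0
    G = [[sum(d2[a][i] * d2[b][i] for i in range(n)) for b in range(k)]
         for a in range(k)]
    r = [sum(d2[a][i] * dx[0][i] for i in range(n)) for a in range(k)]
    c = solve2(G, r)
    return [xs[0][i] - sum(dx[j][i] * c[j] for j in range(k)) for i in range(n)]

# Sequence x_{j+1} = A x_j + b with A = [[0.5, 0.2], [0.1, 0.4]], b = [1, 1]:
xs = [[0.0, 0.0], [1.0, 1.0], [1.7, 1.5], [2.15, 1.77]]
x_tilde = rre_step(xs)       # fixed point (I - A)^{-1} b = [20/7, 15/7]
```

Because this sequence is generated by an affine map with a 2 × 2 matrix, one extrapolation with k = 2 already reproduces the limit (I − A)^{−1}b exactly, up to rounding.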
7.2.3 Minimal polynomial extrapolation (MPE)
In order to derive the MPE method, I will first suppose that the sequence (x_n)_n is generated
linearly from a starting point x_0 according to the equation:

x_{j+1} = F(x_j) = Ax_j + b,   for j ∈ N.  (7.36)
In general, A and b are not known explicitly; we only have a way to evaluate the function
F, namely by integrating the equation 7.1a over one interval of length T. Note that if the
limit lim_{n→∞} x_n =: x exists, this limit satisfies the relation x = (I − A)^{−1} b. Moreover,
the following properties are satisfied by the first and second order differences Δx_j and
Δ²x_j, which can easily be verified by the reader:

Δx_{j+1} = AΔx_j = A^{j+1} Δx_0,  (7.37)
Δ²x_j = (A − I)Δx_j.  (7.38)
Now define the polynomial p_j(λ) to be the minimal polynomial of A with respect to Δx_j,
i.e. p_j(λ) is the unique monic polynomial of least degree satisfying:

p_j(A)Δx_j = 0,  (7.39)

i.e. Δx_j ∈ N(p_j(A)). Let k_j be the degree of p_j. Then we have coefficients c_j^{(0)}, . . . ,
c_j^{(k_j)}, such that:

p_j(λ) = Σ_{i=0}^{k_j} c_j^{(i)} λ^i,   where c_j^{(k_j)} = 1.  (7.40)

From 7.39 and 7.40, it follows (for j = 0, writing k := k_0 and c_i := c_0^{(i)}) that:

ΔX_0^{(k)} c = −Δx_k,   where c := [c_0, . . . , c_{k−1}]^T and ΔX_0^{(k)} := [Δx_0 | · · · | Δx_{k−1}].  (7.41)
It can be shown (see [10]) that c is uniquely determined by 7.41, even though the matrix
ΔX_0^{(k)} is rectangular. So c can be written as:

c = −(ΔX_0^{(k)})⁺ Δx_k,  (7.42)

where (ΔX_0^{(k)})⁺ denotes the Penrose pseudo-inverse of ΔX_0^{(k)}. The MPE approximation of the limit x is then formed as the weighted average (Σ_{i=0}^{k} c_i x_i)/(Σ_{i=0}^{k} c_i), with c_k = 1.
If F is not an affine transformation (i.e. F might be a general nonlinear function), it still
remains possible to perform this algorithm, although it is not a priori clear whether this
will result in anything useful. It can be shown (see [10]) that RRE and MPE
are equivalent except for a different scaling of the equations. However, this scaling might
make a difference when doing computations with finite precision. It is to be expected
that MPE will have fewer numerical difficulties than RRE; the reason is that RRE uses
second-order differences, while MPE uses only first-order differences.
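Under similar assumptions as for the RRE sketch (a toy affine test sequence, normal equations instead of a robust pseudo-inverse, k fixed at 2), one MPE step can be sketched as follows; the final weighted average Σ c_i x_i / Σ c_i is the standard MPE approximation of the limit:

```python
def solve2(M, r):
    # Solve a 2x2 linear system by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - r[0] * M[1][0]) / det]

def mpe_step(xs, k=2):
    """One MPE extrapolation: least-squares solve of (7.41) for the minimal
    polynomial coefficients (with c_k = 1), then a weighted average of x_0..x_k."""
    n = len(xs[0])
    dx = [[xs[j + 1][i] - xs[j][i] for i in range(n)] for j in range(k + 1)]
    # normal equations for c = argmin || DX0 c + dx_k ||  (cf. 7.42)
    G = [[sum(dx[a][i] * dx[b][i] for i in range(n)) for b in range(k)]
         for a in range(k)]
    r = [-sum(dx[a][i] * dx[k][i] for i in range(n)) for a in range(k)]
    c = solve2(G, r) + [1.0]
    s = sum(c)                   # p(1), nonzero when 1 is not an eigenvalue of A
    return [sum(c[j] * xs[j][i] for j in range(k + 1)) / s for i in range(n)]

# Sequence x_{j+1} = A x_j + b with A = [[0.5, 0.2], [0.1, 0.4]], b = [1, 1]:
xs = [[0.0, 0.0], [1.0, 1.0], [1.7, 1.5], [2.15, 1.77]]
x_mpe = mpe_step(xs)             # fixed point (I - A)^{-1} b = [20/7, 15/7]
```

For this affine sequence the recovered coefficients are exactly those of the minimal polynomial λ² − 0.9λ + 0.18 of A, so the weighted average hits the limit exactly.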
7.2.4 Epsilon extrapolation
The extrapolation method which is implemented in Pstar is the Scalar Epsilon Algorithm
(SEA), which is discussed in [9]. The function EXTRAP, as defined by SEA, is:
EXTRAP(z_0, z_1, . . . , z_q) := ε_q^{(0)},

where ε_s^{(r)} is defined by:

ε_{−1}^{(r)} = 0,   r = 1, 2, . . . , q,  (7.43a)
ε_0^{(r)} = z_r,   r = 0, 1, . . . , q,  (7.43b)
ε_{s+1}^{(r)} = ε_{s−1}^{(r+1)} + (ε_s^{(r+1)} − ε_s^{(r)})^{−1},   s = 0, . . . , q − 1 and r = 0, . . . , q − s − 1.  (7.43c)

The dependencies between all the ε_s^{(r)} for q = 3 have been visualised in figure 7.4.
In equation 7.43c a vector inverse appears. There are several possible definitions for
this vector inverse; the one used by SEA is the so-called primitive inverse, v^{−1} :=
[v_1^{−1}, . . . , v_n^{−1}]^T. Another possibility is the so-called Samelson inverse, v^{−1} := v/‖v‖².
Figure 7.4: The epsilon algorithm visualised. The relation → means: “is needed in the
computation of”.
If the Samelson inverse is used in 7.43c, the resulting algorithm is called the Vector Epsilon Algorithm (VEA).
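As a small self-contained illustration (an example from outside the circuit context), the scalar epsilon algorithm 7.43 can be coded directly. Below it accelerates the partial sums of the alternating harmonic series, which converge slowly to ln 2; note that ε_q^{(0)} is only a useful approximation for even q, so an odd number of terms is supplied:

```python
import math

def epsilon_extrap(z):
    """Scalar epsilon algorithm (7.43): returns eps_q^{(0)} for q = len(z) - 1."""
    q = len(z) - 1
    eps_prev = [0.0] * (q + 2)        # column s - 1, starting with eps_{-1} = 0
    eps_curr = list(z)                # column s = 0: eps_0^{(r)} = z_r
    for s in range(q):
        eps_next = [eps_prev[r + 1] + 1.0 / (eps_curr[r + 1] - eps_curr[r])
                    for r in range(q - s)]
        eps_prev, eps_curr = eps_curr, eps_next
    return eps_curr[0]

partial = []
s = 0.0
for n in range(1, 8):                 # seven partial sums, so q = 6 (even)
    s += (-1) ** (n + 1) / n
    partial.append(s)
approx = epsilon_extrap(partial)      # far closer to ln 2 than partial[-1]
```

The raw seventh partial sum is still off by several percent, while the extrapolated value agrees with ln 2 to roughly five decimals.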
7.3 Shooting method
Just like the extrapolation method, the shooting method is based on the function F as defined
in equation 7.14. However, in this case we are trying to solve the equation:
F(x_0) − x_0 = 0.  (7.44)
This is called Single Shooting. Equation 7.44 is typically solved using some type of
Newton method; hence the matrix Φ(x_0) := dF(x_0)/dx_0 is needed. This matrix can be
computed as a by-product of the transient computation of x(T) from x_0.
The shooting method can now be described by the following pseudo-code:

x_0 := some initial guess
{integrate (7.1a) over [0, T]:}
x_T, Φ := F(x_0), Φ(x_0)
while ‖x_0 − x_T‖ ≥ ε do
    {compute Newton update:}
    x_0 := (I − Φ)^{−1}(x_T − Φx_0)
    {integrate (7.1a) over [0, T]:}
    x_T, Φ := F(x_0), Φ(x_0)

Here, ε is the convergence criterion of the Newton algorithm. If equation 7.1a is linear,
the while-loop will be executed only once. So the shooting method can be simplified for
linear problems to yield:

x_0 := some initial guess
{integrate (7.1a) over [0, T]:}
x_T, Φ := F(x_0), Φ(x_0)
{compute Newton update:}
x_0 := (I − Φ)^{−1}(x_T − Φx_0)
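A runnable toy version of single shooting is sketched below (an assumption for illustration, not the Pstar implementation). The test equation is the scalar linear ODE ẋ = −x + cos t with T = 2π, whose PSS satisfies x(0) = 1/2; the matrix Φ is obtained by integrating the variational equation alongside the state:

```python
import math

def rhs(t, y):
    # y = [x, phi]: state equation x' = -x + cos(t), variational phi' = -phi
    return [-y[0] + math.cos(t), -y[1]]

def integrate(x0, T, n=2000):
    """Classical RK4 over [0, T]; returns x(T) and Phi = dx(T)/dx0."""
    y = [x0, 1.0]
    h = T / n
    t = 0.0
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
        k3 = rhs(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
        k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return y[0], y[1]

def single_shooting(x0, T, tol=1e-7, max_iter=20):
    for _ in range(max_iter):
        xT, Phi = integrate(x0, T)              # transient over one period
        if abs(x0 - xT) < tol:
            break
        x0 = (xT - Phi * x0) / (1.0 - Phi)      # (I - Phi)^{-1}(xT - Phi*x0)
    return x0

x_pss = single_shooting(5.0, 2.0 * math.pi)     # PSS initial value, near 0.5
```

Since this problem is linear, the Newton loop effectively terminates after a single update, just as the simplified pseudo-code above suggests.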
7.4 Wave-form-Newton
This is a variant on the shooting method. Suppose that some initial profile x(0) (t) is
provided. We are now going to find a better guess x(1) , which is defined as:
x(1) (t) := x(0) (t) + δx(1) (t).
(7.45)
By substituting this guess into equation 7.1a, we obtain:

d/dt q(t, x^{(1)}) + j(t, x^{(1)}) = 0
(7.45) ⟹  d/dt q(t, x^{(0)} + δx^{(1)}) + j(t, x^{(0)} + δx^{(1)}) = 0
(Taylor) ⟹  d/dt (C_0 δx^{(1)}) + G_0 δx^{(1)} + s_0 + O(‖δx^{(1)}‖²) = 0,  (7.46)
where

C_0(t) := C(t, x^{(0)}(t)) = ∂q(t, x^{(0)}(t))/∂x,  (7.47)
G_0(t) := G(t, x^{(0)}(t)) = ∂j(t, x^{(0)}(t))/∂x,  (7.48)
s_0(t) := d/dt q(t, x^{(0)}) + j(t, x^{(0)}).  (7.49)
If the higher-order terms are neglected in equation 7.46, we get the following linear DAE
for δx^{(1)}:

d/dt (C_0 δx^{(1)}) + G_0 δx^{(1)} = −s_0,  (7.50a)

which can be combined with the periodicity condition:

x^{(0)}(0) + δx^{(1)}(0) = x^{(0)}(T) + δx^{(1)}(T).  (7.50b)
This linear system can be solved using the linear shooting method, as described in the
previous section. Using the solution δx^{(1)}, a new (and hopefully better) approximation
x^{(1)} := x^{(0)} + δx^{(1)} can be computed. This process can be repeated until a sufficiently
good solution has been found. The resulting algorithm can be described in pseudo-code as:

x^{(0)} := some initial guess
{compute G_0, C_0 and s_0:}
G_0(t) := G(t, x^{(0)}(t))   for 0 ≤ t ≤ T
C_0(t) := C(t, x^{(0)}(t))   for 0 ≤ t ≤ T
s_0(t) := d/dt q(t, x^{(0)}) + j(t, x^{(0)})   for 0 ≤ t ≤ T
i := 0
while ‖s_i‖ ≥ ε do
    solve δx^{(i+1)} from
        d/dt (C_i δx^{(i+1)}) + G_i δx^{(i+1)} = −s_i,
        x^{(i)}(0) + δx^{(i+1)}(0) = x^{(i)}(T) + δx^{(i+1)}(T)
    x^{(i+1)} := x^{(i)} + δx^{(i+1)}
    {compute G_{i+1}, C_{i+1} and s_{i+1}:}
    G_{i+1}(t) := G(t, x^{(i+1)}(t))   for 0 ≤ t ≤ T
    C_{i+1}(t) := C(t, x^{(i+1)}(t))   for 0 ≤ t ≤ T
    s_{i+1}(t) := d/dt q(t, x^{(i+1)}) + j(t, x^{(i+1)})   for 0 ≤ t ≤ T
    i := i + 1
There are two essential differences between (single) shooting and wave-form-Newton:

1. Wave-form-Newton uses the whole function x on the interval [0, T] in its iteration.
Single shooting stores only the value x(0). Of course, practical implementations of
wave-form-Newton will only store x at a finite number of points t_0, . . . , t_N.
2. The approximations computed by Wave-form-Newton satisfy the periodicity equation 7.1b during the whole iteration, except for the initial guess x^{(0)}. However, the
DAE 7.1a is not satisfied by the intermediate results. On the other hand, the shooting method satisfies 7.1a during the whole iteration, but 7.1b is not satisfied by the
intermediate results. This can be summarised by saying that the shooting method
takes equation 7.1a as an invariant and tries to establish 7.1b, while Wave-form-Newton keeps 7.1b as an invariant and tries to establish 7.1a.
Finally, I want to remark that the pseudo-code above should not be taken too literally.
In particular, the matrices G_i and C_i and the vector s_i can be computed while solving the
linear system. Since they are only needed at the current time point in the solution process
of 7.50a, they do not have to be stored for all time points, so less memory
is needed. However, Wave-form-Newton will still need much more memory than single
shooting, since single shooting stores the solution at only one time point, while Wave-form-Newton needs the solution at all the time points t_0, . . . , t_N.
7.5 Finite Difference Method
The Finite Difference Method tries to solve the DAE and the boundary equations all "at
once", i.e. by accumulating the linearised equations in one big matrix and solving the
resulting matrix equation. The disadvantage of this method is that the size of this matrix
can be huge; however, the matrix has a sparse structure, which can be exploited to keep
memory usage reasonable.
First, the system of equations 7.1 has to be discretised. For this, choose M + 1 points
0 = t_0 < t_1 < · · · < t_M = T. I will choose the θ-method for the discretisation:

(q(t_j, x_j) − q(t_{j−1}, x_{j−1}))/(t_j − t_{j−1}) + θ j(t_j, x_j) + (1 − θ) j(t_{j−1}, x_{j−1}) = 0 ∈ R^N.  (7.51)

Note that for θ = 1/2, the θ-method is equivalent to the trapezoidal rule. By linearising
equation 7.51, we obtain the following system of equations:
((1/Δt_j) C_j + θ G_j) x_j + (−(1/Δt_j) C_{j−1} + (1 − θ) G_{j−1}) x_{j−1} = f_j,  (7.52)

combined with the periodicity condition:

x_0 − x_M = 0.  (7.53)
Define A_j := (1/Δt_j) C_j + θ G_j and B_j := −(1/Δt_j) C_j + (1 − θ) G_j. We can now put all these
equations in a matrix:

[ I                          −I  ] [ x_0 ]   [ 0   ]
[ B_0  A_1                       ] [ x_1 ]   [ f_1 ]
[      B_1  A_2                  ] [ x_2 ] = [ f_2 ]   (7.54)
[           ...   ...            ] [ ... ]   [ ... ]
[               B_{M−1}  A_M     ] [ x_M ]   [ f_M ]
Now, if we have a UL-decomposition of the A_j's, A_j = P_j U_j L_j, we can easily obtain
the following almost-UL-decomposition:

[ I                          −I  ]
[ B_0  A_1                       ]
[      B_1  A_2                  ]  =
[           ...   ...            ]
[               B_{M−1}  A_M     ]

[ I                        ] [ I                                −I  ]
[   P_1 U_1                ] [ U_1^{−1} P_1^T B_0   L_1            ]
[        P_2 U_2           ] [      U_2^{−1} P_2^T B_1   L_2       ]   (7.55)
[             ...          ] [             ...        ...          ]
[                P_M U_M   ] [       U_M^{−1} P_M^T B_{M−1}   L_M  ]
Efficient and stable algorithms to obtain a complete LU-factorisation are described in [1],
chapter 7.
7.6 Comparing Finite Difference with Shooting
The Finite Difference Method and the Shooting Method are very much alike.
As I will show in this section, in the linear case and with exact arithmetic, both methods
produce exactly the same result. To make things a bit more concrete, suppose that both
use the θ -method (7.51) as their discretisation method.
Suppose that we have a linear set of equations, so we have:

j(t, x) = G(t)x + j_0(t),  (7.56a)
q(t, x) = C(t)x + q_0(t).  (7.56b)

Now consider both the Finite Difference Method (FDM) and Single Shooting (SS).

FDM For FDM, we have a set of equations for every time point t_j, 1 ≤ j ≤ M:

((1/Δt_j) C_j + θ G_j) x_j + (−(1/Δt_j) C_{j−1} + (1 − θ) G_{j−1}) x_{j−1} = f_j.  (7.57)
Moreover, we have the periodicity equation:

x(t_0) = x(t_M).  (7.58)

This is a set of linear equations. One way to solve this is to put everything into a
big matrix, but another way would be to express x(t_M) in x(t_{M−1}), using 7.57, and
then x(t_{M−1}) in x(t_{M−2}), etc., until we have something like:

x(t_0) = Φx(t_0) + φ_0.  (7.59)

In this way, it is possible to condense the FDM matrix into an N × N matrix Φ.
SS In the case of SS, a relation is given to find x_j given x_{j−1}, for every j ∈ {1, 2, . . . , M}.
This is, in fact, just the relationship expressed in 7.57. So again, we finally find a
relation between x(t_0) and x(t_M); this is again the relation 7.59. It is customary for
Shooting to condense the relations 7.57 into the system 7.59, but there is nothing
that stops us from building one large matrix, just as with the Finite Difference
Method.
So in the linear case, Shooting and Finite Difference are actually the same. It is quite
common to describe FDM in the expanded matrix form, while Shooting is often described
with the compacted form as in equation 7.59. But this is by no means necessary.
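This equivalence is easy to check numerically. The sketch below (a toy scalar example, assuming the linear test equation ẋ + x = cos t and the trapezoidal rule θ = 1/2) computes the PSS initial value once by condensing the recursion 7.57 into the scalar relation 7.59, and once by solving the expanded FDM matrix; the two results agree to rounding error:

```python
import math

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for c in range(k, n):
                A[i][c] -= m * A[k][c]
            b[i] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

# Scalar test problem: x' + x = cos(t), T = 2*pi; its PSS has x(0) = 0.5.
theta, M = 0.5, 100
T = 2.0 * math.pi
h = T / M
t = [i * h for i in range(M + 1)]
A_c = 1.0 / h + theta                  # scalar A_j of (7.52)
B_c = -1.0 / h + (1.0 - theta)        # scalar B_j
f = [0.0] + [theta * math.cos(t[i]) + (1.0 - theta) * math.cos(t[i - 1])
             for i in range(1, M + 1)]

# Shooting-style condensation: propagate x_i = phi*x_0 + psi through the grid,
# then impose the periodicity equation 7.58
phi, psi = 1.0, 0.0
for i in range(1, M + 1):
    phi, psi = -B_c * phi / A_c, (f[i] - B_c * psi) / A_c
x0_condensed = psi / (1.0 - phi)

# FDM-style: assemble and solve the full (M+1) x (M+1) matrix of (7.54)
Mat = [[0.0] * (M + 1) for _ in range(M + 1)]
Mat[0][0], Mat[0][M] = 1.0, -1.0
for i in range(1, M + 1):
    Mat[i][i], Mat[i][i - 1] = A_c, B_c
x0_fdm = solve_dense(Mat, list(f))[0]
```

Both routes solve the same linear system, so they agree to machine precision, and both are within the O(h²) trapezoidal discretisation error of the exact value 0.5.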
In the nonlinear case, however, there is a difference. To see this, consider that FDM
starts out with the whole wave-form computed in the previous Newton step, and linearises
around this. Shooting, on the other hand, only uses the value of x(t0 ) in its next Newton
step. It then generates a complete waveform by using an IVP¹ solver. This leads to the
most important difference between Shooting and FDM:
Shooting: The intermediate solutions all satisfy the discretised version of the original DAE. The periodicity equation is satisfied as the Newton method converges.

FDM: The intermediate solutions do not satisfy the (discretised) DAE. However, they do satisfy the periodicity equation. The DAE is satisfied as the Newton method converges.
Which method is better? This depends on the problem at hand. In the classical single
shooting setting, the shooting matrix Φ is formed as a by-product of the transient analysis.
This is typically done by repeatedly pre-multiplying Φ with some matrix for every time point. In general, this may lead to a very badly conditioned matrix Φ. This is a well-known disadvantage of Single Shooting, and for this reason Single Shooting is little used
for real-world problems.
However, there is an important category of circuits for which Single Shooting performs
very well. This is the type of circuit that contains only strongly decaying modes. The
result is that the matrix Φ is small in matrix norm, hence the matrix I − Φ is close to I
and therefore very well-conditioned. Furthermore, since Φ = Φ(x_0) is small, the matrix
I − Φ is relatively independent of the initial value x_0. This means that the problem is
almost linear, and therefore Newton will converge rapidly.
¹ Initial Value Problem
Chapter 8
Free-running oscillators
8.1 Introduction
A free-running oscillator is a circuit defined by the following properties:

1. The circuit equations do not depend explicitly on the time t, i.e. the circuit equations are autonomous, hence they look like this:

dq(x)/dt + j(x) = 0.  (8.1)

This excludes any time-dependent sources, or other time-dependent circuit elements.
2. A periodic solution x(t) exists which is not a DC solution. This can be expressed
mathematically as:

∃T>0 ∀t∈R (x(t) = x(t + T)),  (8.2)
∃t_0∈R (ẋ(t_0) ≠ 0).  (8.3)

In this chapter, I will refer to a solution x(t) which satisfies these conditions as a
non-trivial periodic solution. The DC solution will be called the trivial periodic
solution.
We are now interested in the non-trivial periodic solution x(t). The problem is that T will,
in general, not be known. This can be accommodated by scaling the interval [0, T ] to the
interval [0, 1] (or some other canonical interval you might like), by defining τ := t/T .
Equation 8.1 can then be rewritten as:

(1/T) dq(x(τT))/dτ + j(x(τT)) = 0.  (8.4)
By defining x̂(τ) := x(τT), we arrive at the following Boundary Value Problem:

(1/T) dq(x̂)/dτ + j(x̂) = 0,   0 ≤ τ ≤ 1,  (8.5a)
x̂(0) = x̂(1),  (8.5b)
T > 0.  (8.5c)
Of course, the system 8.5 is not well-posed, since the solution is not unique. After all, if
x̂(τ) is a solution to 8.5, then x̂(τ + τ_0) is also a solution. So we find at least two, at least
one-dimensional, manifolds of solutions, as shown in figure 8.1. Note that there might be
even more solutions.

Figure 8.1: The solution space of 8.5. The solution set consists of at least two one-dimensional manifolds.
Given that we have found a non-trivial periodic solution to 8.5, we now know that this solution lies on an at least one-dimensional manifold, so it follows that the linearised
equation:

(1/T) d(C x̂(τ))/dτ + G x̂(τ) = b(τ),   0 ≤ τ ≤ 1,  (8.6a)
x̂(0) = x̂(1),  (8.6b)
T > 0,  (8.6c)

also has several solutions. In fact, by differentiating 8.5a with respect to τ, we find:

(1/T) d(C dx̂/dτ)/dτ + G dx̂/dτ = 0.  (8.7)

Hence dx̂/dτ is a homogeneous solution to 8.6a.
So if we try to solve equation 8.5 by using a Newton method (i.e. by repeatedly linearising), this will fail, since the Newton matrix will become singular when the solution is
approached. Hence an extra condition is needed to make the problem well-posed; this is
reasonable, since we added an extra unknown T to the problem. Discussion of the exact
nature of this added equation will be postponed until the end of this chapter.
8.2 Characterisation of free-running oscillators
8.2.1 Basic results
In order to characterise types of free-running oscillators, I will look at the linearisation
of 8.1 around the DC solution. For now, I will assume that a DC solution does exist and
is locally unique; this is equivalent to the claim that for some x_DC, the matrix G(x_DC)
is invertible¹. Linearising around the DC solution results in the following system for the
correction:

C Δẋ + G Δx = −dq(x_DC)/dt − j(x_DC).  (8.8)
In order to investigate whether this system has a unique solution, it is sufficient to consider
the homogeneous part:

C Δẋ + G Δx = 0.  (8.9)

The system 8.1 has a (locally) unique DC solution x(t) = x_DC iff 8.9 has a unique solution. Note that Δx(t) = 0 is always a solution to 8.9. Furthermore, if Δx(t) is a solution
of 8.9, then Δẋ(t) is a solution too.
Still under the assumption that G is invertible, equation 8.9 can be rewritten as:

Δx = −G^{−1} C Δẋ.  (8.10)

More generally, we have:

Δx = (−G^{−1} C)^k Δx^{(k)},   for k ∈ N.  (8.11)
It is a basic result from linear algebra that every matrix can be written as the direct sum of
an invertible and a nilpotent part. This means that an invertible matrix S exists such that:

−G^{−1} C = S diag(A, N) S^{−1},

where A is invertible and N nilpotent, say with index n. Then define y ∈ R(A), z ∈ R(N)
as:

[y ; z] := S^{−1} Δx.  (8.12)

The equation 8.10 can now be rewritten as:

y = A ẏ,  (8.13)
z = N ż.  (8.14)
¹ The fact that a locally unique DC solution exists does not mean that no other solution exists in the vicinity of the DC solution; it just means that no other DC solution exists in this vicinity.
Since N is nilpotent with index n, we have that z = N^n z^{(n)} = 0. Since A is invertible, we
get the following equation:

ẏ = A^{−1} y,  (8.15)
z = 0.  (8.16)
Now if A has only eigenvalues λ with Re(λ) < 0, then the DC solution is stable. But
if there are eigenvalues λ with Re(λ) ≥ 0, the DC solution is not asymptotically stable,
and if there are eigenvalues λ with Re(λ) > 0, the DC solution is unstable. However, if there are only
eigenvalues with Re(λ) ≤ 0 and there is some non-zero purely imaginary eigenvalue,
then the circuit will behave as an oscillator. Such a circuit is called a harmonic oscillator,
since it will produce a harmonic oscillation. The simplest example of such a circuit is
given in figure 8.2. The following table can be assembled for this circuit:
Figure 8.2: The simplest harmonic oscillator possible.

          v_0  v_1  i_L                 v_0  v_1  i_L
G =  [     0    0    1  ]      C =  [    C   −C    0  ]
     [     0    0   −1  ]           [   −C    C    0  ]
     [     1   −1    0  ]           [    0    0   −L  ]

Here, C is the capacitance of the capacitor, and L is the inductance of the inductor. The
first row and column can be omitted, since they correspond to the ground node. With
C = L = 1, the following G- and C-matrices are obtained:

G = [0 −1; −1 0]   and   C = [1 0; 0 −1],  (8.17)
which means that:

A = −G^{−1} C = [0 −1; 1 0],  (8.18)

and we have the following system for x:

ẋ = A^{−1} x = [0 1; −1 0] x.  (8.19)
Obviously, this produces a harmonic oscillating solution.
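This claim can be checked numerically in a few lines (a small verification script, not from the thesis): build A = −G⁻¹C from the reduced matrices in 8.17 and inspect the characteristic polynomial λ² − tr(A)λ + det(A):

```python
# Reduced G and C matrices of the LC circuit of figure 8.2 with C = L = 1,
# as in (8.17)
G = [[0.0, -1.0], [-1.0, 0.0]]
C = [[1.0, 0.0], [0.0, -1.0]]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def mul2(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-v for v in row] for row in mul2(inv2(G), C)]   # A = -G^{-1} C, cf. (8.18)
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# characteristic polynomial: lambda^2 - trace*lambda + det
```

Trace 0 and determinant 1 give the purely imaginary eigenvalue pair ±i, so the linearised circuit indeed oscillates harmonically.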
8.2.2 Floquet theory for DAE's
The previous discussion only holds for the case where the matrices are constant. In general, when we have a nonlinear oscillator with period T , the linearised DAE will have
T -periodic coefficient matrices G and C. Fortunately, in [4], it is shown that a linear
index-1 DAE with periodic matrices can be transformed into a linear DAE with constant
matrices. This result is generalised to index-2 DAE’s in [5]. First some definitions:
Definition 8.2.1. The two linear homogeneous DAEs:
C1 (t)ẋ(t) + G 1 (t)x(t) = 0
(8.20a)
C2 (t)ẋ(t) + G 2 (t)x(t) = 0
(8.20b)
and
are kinematically equivalent iff a nonsingular matrix function F ∈ C(R, L(R N )) exists
such that:
x(t) is a solution of 8.20a
⇐⇒
F −1 (t)x(t) is a solution of 8.20b,
where F should satisfy:
sup_{t∈R} ‖F(t)‖ < ∞   and   sup_{t∈R} ‖F^{−1}(t)‖ < ∞.  (8.21)
Two DAE’s can be shown to be kinematically equivalent when their matrices exhibit
certain algebraic relationships.
Theorem 8.2.1. Suppose that for two DAE's 8.20a and 8.20b, nonsingular matrix functions E, F ∈ C(R, L(R^N)) exist such that:

C_2(t) = E(t) C_1(t) F(t),   ∀t ∈ R,  (8.22a)
G_2(t) = E(t) (G_1(t) F(t) + C_1(t) Ḟ(t)),   ∀t ∈ R,  (8.22b)
sup_{t∈R} ‖F(t)‖ < ∞,   sup_{t∈R} ‖F^{−1}(t)‖ < ∞.  (8.22c)

Then 8.20a and 8.20b are kinematically equivalent.
Proof. The proof is based on some simple algebraic manipulation:

C_1(t)ẋ(t) + G_1(t)x(t)
  = C_1(t) d/dt [F(t)F^{−1}(t)x(t)] + G_1(t)F(t)F^{−1}(t)x(t)
  = C_1(t)Ḟ(t)F^{−1}(t)x(t) + C_1(t)F(t) d/dt [F^{−1}(t)x(t)] + G_1(t)F(t)F^{−1}(t)x(t)
  = E^{−1}(t)E(t)C_1(t)F(t) d/dt [F^{−1}(t)x(t)] + E^{−1}(t)E(t)(C_1(t)Ḟ(t) + G_1(t)F(t))F^{−1}(t)x(t)
  = E^{−1}(t)C_2(t) d/dt [F^{−1}(t)x(t)] + E^{−1}(t)G_2(t)F^{−1}(t)x(t).

So if x(t) is a solution of 8.20a, then F^{−1}(t)x(t) is a solution of 8.20b, and vice versa. Hence
8.20a and 8.20b are kinematically equivalent.
Definition 8.2.2. The two linear homogeneous, T-periodic DAEs

C_1(t)ẋ(t) + G_1(t)x(t) = 0   and   C_2(t)ẋ(t) + G_2(t)x(t) = 0

are periodically equivalent if they are kinematically equivalent and the matrix function
F is T-periodic.
The importance of the concept of kinematical equivalence is that it allows us to apply
conclusions about existence, uniqueness and stability of solutions of a DAE to any DAE
which is kinematically equivalent. This follows from the following two lemmata.
Lemma 8.2.1. Let the DAE's 8.20a and 8.20b be kinematically equivalent, and suppose
that 8.20a has k linearly independent solutions x_1, . . . , x_k. Then 8.20b has k linearly independent solutions x̂_1, . . . , x̂_k.

Proof. Define x̂_j(t) := F^{−1}(t)x_j(t). Since F(t) is invertible, we find that x̂_1, . . . , x̂_k
must be linearly independent. So 8.20b has at least k independent solutions. If 8.20b
actually had more than k linearly independent solutions, then 8.20a would also have more than k
linearly independent solutions, which violates the assumption. So 8.20b has exactly k linearly
independent solutions.
Lemma 8.2.2. Let the DAE's 8.20a and 8.20b be kinematically equivalent. Then we have
the following results:

1. If 8.20a has only stable solutions, i.e. for every solution there is an M such that ‖x(t)‖ ≤ M for all t ∈ R, then 8.20b has only stable solutions, too.

2. If all solutions x of 8.20a have the property that x(t) → 0 for t → ∞, then the
solutions of 8.20b have this property, too.

Proof. Let x̂ be a solution of 8.20b. Then x̂(t) = F^{−1}(t)x(t), where x is a solution of
8.20a.

1. Suppose that 8.20a has only stable solutions, so ‖x(t)‖ ≤ M for all t ∈ R.
Hence ‖x̂(t)‖ = ‖F^{−1}(t)x(t)‖ ≤ ‖F^{−1}(t)‖ ‖x(t)‖ ≤ ‖F^{−1}(t)‖ M < ∞. Hence
x̂(t) stays bounded for all t, so 8.20b has only stable solutions.

2. Suppose that all solutions x of 8.20a have the property that x(t) → 0 for t → ∞.
Then ‖x̂(t)‖ = ‖F^{−1}(t)x(t)‖ ≤ ‖F^{−1}(t)‖ ‖x(t)‖ → 0 for t → ∞.
Now consider a linear, homogeneous, T-periodic index-1 DAE:

C(t)ẋ + G(t)x = 0.  (8.23)

It can be shown that for every t ∈ [0, T], the space R^N can be written as the direct sum
of the subspaces N(t) := N(C(t)) and S(t) := {z ∈ R^N | G(t)z ∈ R(C(t))}. Note
that for any solution x(t) of the DAE, x(t) ∈ S(t) for all t ∈ [0, T]. Now assume that
N(t) is smooth, i.e. it can be spanned by T-periodic C¹-functions n_{r+1}(t), . . . , n_N(t).
Furthermore, suppose that S(t) is continuous, i.e. it can be spanned by T-periodic C-functions s_1(t), . . . , s_r(t).
Now define the matrix function V as:

V(t) := [s_1(t) | · · · | s_r(t) | n_{r+1}(t) | · · · | n_N(t)].  (8.24)

The projection P(t) on S(t) along N(t) can now be written as:

P(t) := V(t) diag(I_r, ∅) V^{−1}(t).  (8.25)
The fundamental solution Φ(t; x_0) : R × S(0) → S(t) is defined as follows: Φ(t; x_0) =
x(t), where x is the solution with x(0) = x_0. Since the DAE is linear, Φ is linear in x_0. So,
for fixed t, the function z_t(x_0) := Φ(t; x_0) is a linear operator; z_t ∈ L(S(0); S(t)). Now
take s_1(0), . . . , s_r(0) as a basis for S(0) and s_1(t), . . . , s_r(t) as a basis for S(t). It is then
possible to define the matrix Z(t) corresponding to the linear operator z_t. It can be shown
that Z(t) is invertible for all t. This means that Z(T) can be written as Z(T) = e^{TW_0}, for
some complex-valued matrix W_0.
Now define the transformation F(t) by:

F(t) := V(t) diag(Z(t)e^{−tW_0}, I).  (8.26)
The claim is now that any solution x of the DAE can be written as:

x(t) = F(t) diag(e^{tW_0}, ∅) F^{−1}(0) x_0  (8.27a)
⟺ x(t) = V(t) diag(Z(t)e^{−tW_0}, I) diag(e^{tW_0}, ∅) diag(Z^{−1}(0), I) V^{−1}(0) x_0  (8.27b)
⟺ x(t) = V(t) diag(Z(t)Z^{−1}(0), ∅) V^{−1}(0) x_0  (8.27c)

Equation 8.27c can easily be verified by using the definition of Z.

From 8.27a, we find immediately that:

F^{−1}(t) x(t) = diag(e^{tW_0}, ∅) F^{−1}(0) x(0).  (8.28)
Hence x̂(t) := F^{−1}(t)x(t) is a solution of the system:

dx̂_1/dt = W_0 x̂_1,  (8.29a)
x̂_2 = 0,  (8.29b)

where [x̂_1 ; x̂_2] := x̂. From this, we conclude that 8.23 and 8.29 are periodically equivalent.
To summarise this result:
Theorem 8.2.2. Consider the linear, homogeneous, T-periodic index-1 DAE 8.23, and
suppose that:

1. N(C(t)) is smooth, i.e. it can be spanned by T-periodic C¹-functions n_{r+1}(t), . . . , n_N(t).

2. S(t) := {z ∈ R^N | G(t)z ∈ R(C(t))} is continuous, i.e. it can be spanned by
T-periodic C-functions s_1(t), . . . , s_r(t).

Then 8.23 is periodically equivalent to the constant-matrix DAE 8.29.
By linearising an arbitrary autonomous DAE around a periodic solution, we find a linear
DAE whose homogeneous part has the form of 8.23. It is now possible to generalise the
conclusions for constant-matrix DAE's to arbitrary DAE's linearised around their periodic
solutions. Periodic solutions of an autonomous DAE can therefore be classified according
to the 2-norm of e^{W_0}:

‖e^{W_0}‖_2 < 1: The solution of the original DAE is locally unique and stable.

‖e^{W_0}‖_2 = 1: The solution of the original DAE is stable, but might not be locally unique.
The DAE linearised around the solution does not have a unique solution.

‖e^{W_0}‖_2 > 1: The solution of the original DAE is unstable.
8.3 An adaptation to the Finite Difference Method
To derive the FDM formulation, we first have to linearise 8.5; but note that T has now
also become an unknown, so we have to linearise with respect to T as well:

−(ΔT/T²) dq(x̂)/dτ + (1/T) d(C(x̂)Δx̂)/dτ + G(x̂)Δx̂ = −(1/T) dq(x̂)/dτ − j(x̂).  (8.30)

Now we can discretise 8.30 on a grid of time points τ_0, . . . , τ_M, e.g. by using the θ-method. This gives the following equations for 0 ≤ i < M:

−(1/T²) ((q(x̂_{i+1}) − q(x̂_i))/Δτ) ΔT + (1/T) ((C(x̂_{i+1})Δx̂_{i+1} − C(x̂_i)Δx̂_i)/Δτ)
    + θG(x̂_{i+1})Δx̂_{i+1} + (1 − θ)G(x̂_i)Δx̂_i
    = −(1/T) (q(x̂_{i+1}) − q(x̂_i))/Δτ − θj(x̂_{i+1}) − (1 − θ)j(x̂_i).  (8.31)
In addition, the periodicity constraint can be added:

    x̂(τ₀) = x̂(τ_M).    (8.32)

Now we have (M + 1) × N equations for (M + 1) × N + 1 unknowns, so we need an extra equation. Just for now, suppose that this "magical" equation is given:

    X(x̂₀, ..., x̂_M, T) = 0.    (8.33)
It is now possible to use a Newton method to find a solution. The resulting matrix equation has a bordered structure: the marked part forms the matrix equation without the extra variable T and the extra equation 8.33. This reduced matrix equation has the same structure as the matrix equation resulting from the standard FDM method, so it seems possible to reuse an existing solver for the FDM method. However, as I already discussed, this equation has no unique solution, since a phase shift remains possible.
This can be resolved by adding an extra voltage source E to the circuit. This voltage source is specified by the voltages e₀, ..., e_M which it produces at the time points t₀, ..., t_M. These voltages e_j become extra unknowns of the circuit, hence M + 1 extra equations are needed. For this, take the conditions that the current I_E through the voltage source is 0 at all time points, i.e. the extra equations are I_E(i) = 0 for 0 ≤ i ≤ M. However, these equations (and the additional unknowns e_j) are not added to the marked part of the matrix, but to the unmarked part. Note that the marked part does deal with the given additional source E (and thus with extra unknowns for the currents I_E(i)). This results in an overall matrix that again has a bordered structure.
In algebraic form, we have the system:

    Ax + Bv + cT = f,    (8.34a)
    Bᵀx = 0,             (8.34b)
    (k, v) = γ.          (8.34c)

Here, 8.34c is the extra condition which should be imposed on the system to make it unique. A is nonsingular, which means that a Block Gaussian Elimination procedure can be applied.
Now define the following vectors and matrices:

    y := A⁻¹f,    (8.35a)
    D := A⁻¹B,    (8.35b)
    d := A⁻¹c.    (8.35c)

Note that these three tensors can be computed together, by reusing the same UL-decomposition of A for all three. Equation 8.34a can now be rewritten as:

    x = y − Dv − dT.    (8.36)

By applying 8.34b, we find:

    Bᵀy − BᵀDv − BᵀdT = 0.    (8.37)
Now define the tensors ŷ, D̂ and d̂ by ŷ := Bᵀy, D̂ := BᵀD and d̂ := Bᵀd. By using these definitions, 8.37 can be rewritten as:

    ŷ − D̂v − d̂T = 0.    (8.38)

Now define the vectors:

    z := D̂⁻¹ŷ,    (8.39a)
    e := D̂⁻¹d̂.    (8.39b)

Again, when computing these vectors, the UL-decomposition of D̂ can be reused. By using equation 8.38, we find:

    v = z − eT.    (8.40)

By applying 8.34c to 8.40, the following result can be derived:

    γ = (k, z) − (k, e)T.    (8.41)

From 8.41 we find that T = ((k, z) − γ)/(k, e). Now v can be found by using equation 8.40, and x can be found by using equation 8.36.
The resulting algorithm can be summarised as follows:

    [y  D  d] := A⁻¹ [f  B  c]
    [ŷ  D̂  d̂] := Bᵀ [y  D  d]
    [z  e]    := D̂⁻¹ [ŷ  d̂]
    T := ((k, z) − γ) / (k, e)
    v := z − eT
    x := y − Dv − dT
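As an illustration, the summarised algorithm can be written out in NumPy. This is a sketch with illustrative names, not the Pstar/Qstar implementation; the UL-decomposition reuse is mimicked by solving all right-hand sides of A (and of D̂) in a single call.

```python
import numpy as np

def solve_bordered_system(A, B, c, f, k, gamma):
    """Block Gaussian elimination for the bordered system
        A x + B v + c T = f,   B^T x = 0,   (k, v) = gamma.
    Sketch of the reduction derived above; one factorisation of A (and one
    of D_hat) serves all right-hand sides."""
    n, m = B.shape
    # [y D d] := A^{-1} [f B c], computed together with one factorisation
    sol = np.linalg.solve(A, np.column_stack([f, B, c]))
    y, D, d = sol[:, 0], sol[:, 1:m + 1], sol[:, m + 1]
    # [y^ D^ d^] := B^T [y D d]
    red = B.T @ sol
    y_hat, D_hat, d_hat = red[:, 0], red[:, 1:m + 1], red[:, m + 1]
    # [z e] := D^{-1} [y^ d^], again one factorisation for both
    ze = np.linalg.solve(D_hat, np.column_stack([y_hat, d_hat]))
    z, e = ze[:, 0], ze[:, 1]
    T = (k @ z - gamma) / (k @ e)   # from gamma = (k, z) - (k, e) T
    v = z - e * T                   # equation (8.40)
    x = y - D @ v - d * T           # equation (8.36)
    return x, v, T
```

Substituting the result back into 8.34a–8.34c confirms that all three equations are satisfied exactly (up to round-off), which is a convenient sanity check on the elimination order.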
8.4 Finding the extra condition
The only problem which remains to be solved is finding a suitable equation 8.33. This
equation should satisfy the following properties:
1. It is satisfied by at least one non-trivial periodic solution.
2. It disallows phase shifts in the solution, so that the resulting set of equations has a locally unique solution.
3. It should forbid the DC solution.
4. If a non-trivial solution with period T exists, then this waveform repeated twice will produce a non-trivial solution with period 2T. A suitable equation for 8.33 should disallow the period-2T solution if T is a solution.
Note that conditions 1 and 2 are necessary for convergence of the Newton process; an equation that fails to provide 1 will naturally not lead to a solution, while an equation that fails to provide 2 will lead to a problem that is not locally unique. Conditions 3 and 4 are properties that assert some form of global uniqueness; equations that fail to provide 3 or 4 might still produce a solution for some problems.
The general approach is to select a certain circuit variable x_j as the variable on which an additional condition will be forced. The additional condition in general looks like this:

    f(x̂_j) = γ.    (8.42)
Here, f is a linear functional on the function space C¹([0, 1]). Such a functional can be represented as:

    f(x̂_j) = ∫₀¹ x̂_j(θ)φ(θ) dθ,    (8.43)

for some distribution φ. Some possible choices for φ are:
1. φ(τ ) = δ(τ ),
2. φ(τ ) = cos 2π τ ,
3. φ(τ ) = 1.
The resulting conditions satisfy the following of the requirements:
1. φ(τ ) = δ(τ ).
This condition may satisfy requirement 1, if a suitable value for γ is chosen. Phase
shifts are forbidden by this condition, except when x̂(τ ) is constant around τ = 0.
If a suitable value for γ is chosen, requirement 3 may be fulfilled. However, this
condition cannot cater for requirement 4, since the delta function δ(τ ) is invariant
under transformations of the form τ → kτ .
2. φ(τ ) = cos 2π τ .
This condition may satisfy requirement 1 and forbids phase shifts, therefore it fulfils
requirement 2. For γ 6= 0, requirement 3 is fulfilled. Requirement 4 is also fulfilled
for γ 6= 0. However, this might pose a problem if the base harmonic in the circuit
is much smaller than the first harmonic.
3. φ(τ) = 1.
Again, this condition may satisfy requirement 1 for suitable choices of γ. However, requirements 2 and 4 cannot be fulfilled, since the function φ(τ) = 1 is invariant not only under multiplications but also under translations, so phase shifts remain possible. This means that this condition cannot make the system locally unique. From this discussion, it follows that φ(τ) = 1 is not suitable as an extra condition.
In the Finite Difference Method, choices 1 and 2 can be implemented easily.
1. φ(τ) = δ(τ).
The condition:

    ∫₀¹ x̂_j(τ)δ(τ) dτ = α,

is equivalent to the condition:

    x̂_j(0) = α.
This is also easily handled by the Shooting Method, since the Shooting Method explicitly computes the circuit variables at t = 0, but not at other time-points (as FDM does).
2. φ(τ) = cos 2πτ.
By using a simple quadrature scheme, the following discretisation can be obtained:

    (1/M) Σ_{i=0}^{M−1} x̂_j(τ_i) cos 2πτ_i = α.    (8.44)

The problem with this approach is that it is not easily applicable to the Shooting Method. However, the integral equation can be rewritten into a differential equation:

    ȧ = x̂_j(τ) cos 2πτ,    (8.45)

with two boundary conditions:

    a(0) = 0,
    a(1) = α.
In this way, the additional equation can be handled in the same framework as the
circuit equations.
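For the FDM variant, the discretised condition 8.44 amounts to a single extra scalar equation appended to the system. A minimal sketch (the uniform grid τ_i = i/M and the function name are my assumptions):

```python
import numpy as np

def phase_condition(x_j_samples, alpha):
    """Residual of the discretised phase condition (8.44) with
    phi(tau) = cos(2 pi tau):
        (1/M) * sum_i x_j(tau_i) * cos(2 pi tau_i) - alpha = 0.
    x_j_samples holds one circuit variable at tau_i = i/M, i = 0..M-1
    (uniform grid assumed for this sketch)."""
    M = len(x_j_samples)
    tau = np.arange(M) / M
    return np.dot(x_j_samples, np.cos(2 * np.pi * tau)) / M - alpha
```

Driving this residual to zero inside the Newton iteration pins down the phase of the periodic solution, which is exactly the role of equation 8.33.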
Chapter 9
Multi-tone analysis
9.1 Introduction
In the previous chapter, some methods were discussed to find a Periodic Steady State of
the circuit using time-domain analysis. A common property of all these methods is that
the period T of the PSS has to be able to accommodate all input signals. So if we have M
input signals with frequencies f 1 , . . . , f M , then T has to satisfy
    ∃ k ∈ N, k ≠ 0:  T = k · lcm(1/f₁, ..., 1/f_M).    (9.1)
This leads to two related problems when input signals with different frequencies exist:

1. Two input frequencies f₁ and f₂ may fail to have a common divisor, in which case lcm(1/f₁, 1/f₂) is not defined. Hence T is not defined either. This will be the case when f₁/f₂ ∉ Q.
2. Even if T is defined, it might be so large as to make the computation prohibitive. The execution times of the algorithms in the previous chapter are proportional to T/t_min, where t_min is the time scale of the fastest input signal. So when T becomes large, so will the execution time.
In fact, problem 1 is just a limiting case of problem 2 for T → ∞. Both problems are related to the fact that the function lcm is not continuous; hence one essentially needs to know f₁ and f₂ to infinite precision to compute even one significant digit of lcm(1/f₁, 1/f₂).
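This sensitivity is easy to demonstrate: if the frequencies are treated as exact rationals the common period can be computed, but a tiny perturbation of one frequency changes it by orders of magnitude. A sketch (the helper name is mine; exact rational input is assumed):

```python
from fractions import Fraction
from math import gcd, lcm

def common_period(freqs):
    """Least common multiple of the periods 1/f_i, for frequencies given
    exactly as integers or Fractions (illustrative sketch). With floating
    point frequencies this computation is ill-posed, which is precisely
    the discontinuity of lcm discussed above."""
    periods = [1 / Fraction(f) for f in freqs]
    # lcm of fractions n1/d1, n2/d2 equals lcm(n1, n2) / gcd(d1, d2)
    num = periods[0].numerator
    den = periods[0].denominator
    for p in periods[1:]:
        num = lcm(num, p.numerator)
        den = gcd(den, p.denominator)
    return Fraction(num, den)
```

For 60 Hz and 50 Hz the common period is 1/10 s, but perturbing the second frequency to 50.0001 Hz (i.e. 500001/10000) blows the period up to 10000/3 s, more than four orders of magnitude larger.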
In [8], a new approach is described to solve this problem. It is based on a more efficient representation of a multi-tone signal. Here, "efficient" means "needing only a few sample points to represent". As an example, consider the two-tone signal described by the function:

    b(t) = sin 2πf₁t · sin 2πf₂t.    (9.2)
The function b is periodic for f₁/f₂ ∈ Q. But for f₁/f₂ ∉ Q, this is not the case. Worse, even if b is periodic with period T, T might be much larger than either 1/f₁ or 1/f₂. In order to represent b in the time domain, one needs a number of sampling points in the order of T · max(f₁, f₂).
In this case, it is a better idea to use the fact that there are two different time scales in the signal. This leads to the bi-variate representation:

    b̂(t₁, t₂) = sin 2πf₁t₁ · sin 2πf₂t₂.    (9.3)

One can easily obtain the original signal from b̂, since b(t) = b̂(t, t). But note that b̂ is periodic both in t₁ (with period 1/f₁) and in t₂ (with period 1/f₂). The number of sample points needed to represent b̂ is completely independent of the actual values of f₁ and f₂. In fact, if f₁ or f₂ is changed, the only thing that happens is that b̂ is scaled in either the t₁- or the t₂-direction.
This technique can be generalised to an arbitrary number of frequencies M, by introducing time variables t₁, ..., t_M. Note that the multi-tone representation b̂ contains more information than the single-tone representation b, since it is easy to go from b̂ to b, but in general very complicated to find a suitable b̂ given b.
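The relation b(t) = b̂(t, t) between the representations 9.2 and 9.3 can be checked directly; a small sketch:

```python
import numpy as np

def b(t, f1, f2):
    """Single-tone (one time variable) representation, equation (9.2)."""
    return np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t)

def b_hat(t1, t2, f1, f2):
    """Bi-variate representation, equation (9.3): periodic in t1 with
    period 1/f1 and in t2 with period 1/f2, independently."""
    return np.sin(2 * np.pi * f1 * t1) * np.sin(2 * np.pi * f2 * t2)
```

Evaluating b̂ on the diagonal t₁ = t₂ = t recovers the original signal exactly, while a fixed rectangular sample grid for b̂ keeps working unchanged when f₁ or f₂ is varied.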
9.2 The multi-tone representation of the circuit equations
Consider the circuit equation:

    dq(t, x)/dt + j(t, x) = 0.    (9.4)
The functions q and j depend on t. Now suppose that these functions contain only expressions that are either T₁-periodic or T₂-periodic, for some T₁, T₂ ∈ R⁺. In that case, it is possible to define functions q̂(t₁, t₂, x) and ĵ(t₁, t₂, x), which satisfy:

    q̂(t, t, x) = q(t, x)  and  ĵ(t, t, x) = j(t, x),    (9.5)
and have the additional property that they are T1 -periodic in t1 and T2 -periodic in t2 .
Now suppose we have a solution x̂ to the following partial differential-algebraic equation (PDAE):

    dq̂/dt₁ + dq̂/dt₂ + ĵ = 0,    (9.6)
    x̂(0, t₂) = x̂(T₁, t₂)  ∀ 0 ≤ t₂ ≤ T₂,    (9.7)
    x̂(t₁, 0) = x̂(t₁, T₂)  ∀ 0 ≤ t₁ ≤ T₁.    (9.8)
Now define x(t) as x(t) = x̂(t, t). We can then prove the following:
Theorem 9.2.1. The function x(t) as defined above is a solution to 9.4. Moreover, if T1
and T2 have a least common multiple T , then x will be T -periodic.
Proof.

    dq(t, x)/dt = dq̂(t, t, x̂(t, t))/dt    (9.9)
    = ∂q̂(t, t, x̂)/∂t₁ + ∂q̂(t, t, x̂)/∂t₂ + (∂q̂(t, t, x̂)/∂x)·(∂x̂/∂t₁ + ∂x̂/∂t₂)    (9.10)
    = dq̂/dt₁ + dq̂/dt₂    (9.11)
    = −ĵ(t, t, x̂(t, t)).    (9.12)

Thus dq(t, x)/dt + j(t, x) = 0.
An interesting question is what happens when a different angle is used to traverse the t₁ × t₂-plane. If we define the function x_θ(t) := x̂(t, θt), and:

    q_θ(t, x) := q̂(t, θt, x),    (9.13)
    j_θ(t, x) := ĵ(t, θt, x),    (9.14)

we can derive:
    dq_θ(t, x̂(t, θt))/dt = dq̂(t, θt, x̂(t, θt))/dt    (9.15)
    = ∂q̂/∂t₁ + θ·∂q̂/∂t₂ + (∂q̂/∂x)·(∂x/∂t₁ + θ·∂x/∂t₂)    (9.16)
    = [dq̂/dt₁ + θ·dq̂/dt₂] at t₁ = t, t₂ = θt    (9.17)
    = −j_θ(t, x_θ) + (θ − 1)·[dq̂/dt₂] at t₁ = t, t₂ = θt.    (9.18)
So x_θ(t) is actually the solution of the DAE:

    dq_θ(t, x)/dt + j_θ(t, x) = (1 − θ)·[dq̂/dt₂] at t₁ = t, t₂ = θt.    (9.19)
This means that if θ lies close to 1, the function x_θ would be an excellent first guess for solving the DAE:

    dq_θ(t, x)/dt + j_θ(t, x) = 0.    (9.20)
This situation arises when doing a frequency sweep, since in that case equation 9.20 has
to be solved over and over again for slightly different values of θ ; when using xθ as a first
approximation, we would expect to find rapid convergence after the first instance of this
problem has been solved.
In the next section, I will describe several methods to compute the solution x̂.
9.3 Finite Difference Method
The Finite Difference Method can also be used to solve multi-dimensional boundary value
problems. The idea is to use a rectangular grid, and then discretise in both the t1 - and the
t2 -directions. (Or for a general M-tone problem, in all of the t j ’s, 1 ≤ j ≤ M.) Since this
is in fact a standard technique for solving boundary value problems, I will not elaborate
on it further.
Figure 9.1: A two-tone solution of the voltage in a frequency mixer
9.4 Hierarchical Shooting
The idea of Hierarchical Shooting is to view a partial differential equation in 2 (or M, in general) variables as an ordinary differential equation in a function space F, where F consists of functions of 1 (or M − 1, in general) variables. So the problem 9.6–9.8 can be rewritten as:

    D_{t₁}[Q(t₂, X)] + dQ(t₂, X)/dt₂ + J(t₂, X) = 0,    (9.21)
    X(0) = X(T₂).    (9.22)
Here, (Q(t₂, X))(t₁) := q̂(t₁, t₂, (X(t₂))(t₁)), (J(t₂, X))(t₁) := ĵ(t₁, t₂, (X(t₂))(t₁)), and X : R → (R → R^N). Finally, D_{t₁} is the function operator that differentiates a function with respect to t₁. Note that for any three sets U, V, W, the sets U → (V → W) and U × V → W are isomorphic.
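This isomorphism is just currying; a minimal sketch (the function names are mine):

```python
def curry(f):
    """Turn f : U x V -> W into a function U -> (V -> W)."""
    return lambda u: (lambda v: f(u, v))

def uncurry(g):
    """Inverse direction: turn g : U -> (V -> W) back into U x V -> W."""
    return lambda u, v: g(u)(v)

# In this notation, the function-space unknown X : R -> (R -> R^N) and the
# bivariate solution x_hat : R x R -> R^N are related by X = curry(x_hat).
```

The two directions are mutually inverse, which is what lets the bivariate problem be reinterpreted as an ODE in the function space F without losing information.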
Figure 9.2: The two-tone solution of a frequency mixer converted to a single-tone solution. The curve V(4) corresponds with the two-tone solution in figure 9.1.
Now equation 9.21 can be discretised using for example Euler Backward:

    (Q(t₂^(i), X) − Q(t₂^(i−1)))/Δt₂ + D_{t₁}[Q(t₂^(i), X)] + J(t₂^(i), X) = 0.    (9.23)

But equation 9.23 is itself a differential equation, since it can be rewritten as:

    dQ(t₂^(i), X)/dt₁ + (Q(t₂^(i), X) − Q(t₂^(i−1)))/Δt₂ + J(t₂^(i), X) = 0.    (9.24)
Equation 9.24 can be solved by using a Shooting Method, thus giving rise to an “inner
loop”. The “outer loop” consists of solving 9.21 by using a Shooting Method, hence the
name Hierarchical Shooting.
Hierarchical Shooting has two advantages over a Finite Difference approach:

1. The linear system that has to be solved is smaller. The Finite Difference method needs to solve a square matrix of size N · M₁ · M₂, whereas Hierarchical Shooting needs to solve a matrix of size N for the inner loop and a matrix of size N · M₁ for the outer loop. The downside is that Hierarchical Shooting might need more Newton steps before convergence is reached.
2. It is much easier to use an adaptive grid in a Hierarchical Shooting method than in a Finite Difference method. When using Hierarchical Shooting (or any shooting method, for that matter), the time points are generated adaptively by the integration method on which the shooting method is based, so in general no special care has to be taken to use an adaptive grid. With Finite Difference, this is less trivial.
Chapter 10
Numerical experiments
10.1 Introduction
In this chapter, I will give the results of some numerical experiments which I have conducted to compare the Shooting Method and the Finite Difference Method.
10.2 Comparisons between Shooting and the Finite Difference Method
In the rest of this chapter, I will use the following abbreviations:
SS Single Shooting method. An initial guess at time point t0 is produced by a DC analysis.
FDM Finite Difference Method. An initial guess x(0) on the entire interval [t0 , t1 ] is
produced by x(0) (t) = x DC , where x DC is the DC solution.
FDM* Finite Difference Method. An initial guess x(0) on the entire interval [t0 , t1] is
produced by an initial transient analysis from t0 to t1 . The initial value in t0 of this
transient analysis is produced by a DC analysis.
The following parameters are used by these methods:

θ — Specifies the discretisation method. The DAEs are discretised using:

    dq/dt + j ≈ (q_{i+1} − q_i)/Δt + θ·j_{i+1} + (1 − θ)·j_i.    (10.1)

    Sensible values for θ are 1/2 < θ ≤ 1.

eps — The tolerance of the Newton method.
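For a scalar linear test problem with q(x) = x and j(x) = a·x, one step of scheme 10.1 can be solved in closed form, which makes the role of θ easy to see. This example is mine, not from the report:

```python
import numpy as np

def theta_method_linear(a, x0, dt, steps, theta):
    """Apply the theta-scheme (10.1) to dq/dt + j = 0 with q(x) = x and
    j(x) = a*x, i.e. dx/dt = -a*x. Each step solves
        (x_{i+1} - x_i)/dt + theta*a*x_{i+1} + (1 - theta)*a*x_i = 0
    exactly, which is possible here because the problem is linear."""
    x = x0
    for _ in range(steps):
        x = x * (1.0 - dt * (1.0 - theta) * a) / (1.0 + dt * theta * a)
    return x
```

Here theta = 1 gives Euler Backward and theta = 1/2 the trapezoidal rule; for 1/2 < θ ≤ 1 the scheme remains stable for stiff decay rates a > 0, which is why these values are called sensible above.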
The following pages contain graphs comparing the number of flops needed for several
test runs. Note that these are the flops as returned by the MATLAB flops command;
MATLAB has a peculiar way of counting flops, which means that MATLAB flops cannot
be directly compared with “ordinary” flops. However, since all computations have been
done using MATLAB, the flop counts can be used to compare the results with each other.
The circuit diagrams of the various circuits used can be found in appendix B.
[Bar charts comparing the MATLAB flop counts of FDM, FDM* and SS at x-axis values 10, 50 and 100, for the runs:
Circuit 1: theta=1, eps=1e-6
Circuit 1: theta=1, eps=1e-12
Circuit 1: theta=0.6, eps=1e-12
Circuit 2: theta=1.0, eps=1e-12
Circuit 3: theta=1.0, eps=1e-12
Circuit 5: theta=1.0, eps=1e-12]
10.3 Conclusions
The tests show that neither FDM or SS perform better in all cases. However, the slightly
modified FDM called FDM* generally performs as well as the best of either FDM or SS.
This is to be expected, as I explained in section 7.10. Whenever SS is superior to FDM, it
will converge in a very small number of iterations, so first doing a SS will almost certainly
produce a very good initial approximation for FDM. On the other hand, even when SS is
not very good, it will give an initial approximation to FDM that is at least not noticeable
worse than the DC solution.
c Philips Electronics N.V. 1999
89
Chapter 11
Conclusions
11.1 Introduction
In this chapter, I will summarise the conclusions reached in the previous chapters. After
that, I will give some suggestions for further research.
11.2 Conclusions
The following conclusions can be reached:
1. In this report, several methods for finding a PSS have been compared, and a mathematical framework to fit these methods in has been developed. Several well-posedness theorems have already been proven. However, a deeper mathematical validation of all steps of the various methods described in this report and in [12] is still needed.
2. Using a dedicated PSS algorithm to find the long-term behaviour of a circuit can
save computational costs, as compared to doing a transient analysis.
3. Of the methods tested, the most attractive method is the FDM* method (see page
85), since it generally performs about as well as the best of the other two methods
tested.
4. Several recommendations have been taken into account in an already started implementation of an RF noise simulation facility in Pstar.
5. The setup of Qstar (see [3]) as a MATLAB based parallel to Pstar proved itself to
be of value because it created an environment for study in MATLAB. Pstar, which
is developed and maintained on a customer-driven basis, does not provide such an
investigation environment.
6. Qstar will also be of value for allowing future students to study methods for Pstar.
11.3 Suggestions for further research
1. It seems worthwhile to think of an integration of Qstar and Pstar. The idea is that
Pstar does the matrix assembly, and then Qstar exposes the assembled matrices in
a Rapid Application Development tool like MATLAB or NumLab1 . This allows
fast prototyping of methods in a RAD environment, and then testing the methods
against real-world problems.
2. Research directed towards implementing the Finite Difference Method and other methods using the special hierarchical matrix-storage scheme of Pstar.
3. Research directed towards multi-tone methods instead of single-tone methods.
4. Comparisons of the presented methods for adding an equation to autonomous problems. This includes studying the validity of gauging a circuit by using additional
elements in the circuit.
5. Research on algorithms for intuiting an initial guess for the oscillation period for
an autonomous circuit, so that an explicit initial guess doesn’t have to be provided
by the user. A reliable estimate of the error in the oscillation period is also of high
importance to the user, so this is also an interesting direction for future research.
6. Research directed towards finding efficient algorithms to make clear the relation between frequency bands of input noise and frequency bands in the response quantity.
Also the inverse problem should be studied.
7. Research directed towards the handling of phase noise in autonomous circuits in
the case of white noise and 1/ f noise.
8. In [12] various methods are described for studying the effect of noise under RF
conditions. In several cases this involves a further analysis after determining a
noiseless PSS solution. For most methods a study from a mathematical point of
view is still lacking.
1
NumLab is a graphical programming environment for numerical methods, currently under development
at the Eindhoven University of Technology.
Appendix A
Hybrid analysis
A.1 Introduction
In this appendix, I give a short description of the hybrid analysis method for formulating the circuit equations. Note that this method is not used in Pstar; modified nodal analysis is used instead.
A.2 Method essentials
Consider a circuit with n nodes and b branches. Every branch j has a branch equation
of the form h j (t, i j , i̇ j , v j , v̇ j ) = 0. These equations can be put together in the following
system of b equations:
    h(t, i, i̇, v, v̇) = 0 ∈ R^b.    (A.1)
But this is a system of only b equations for 2b unknowns. Additional equations have to be derived using Kirchhoff's topological laws.
Consider a spanning tree T of the circuit, as shown in figure A.1. Note that if we add
one branch t 6∈ T to T , the resulting graph T + t will contain exactly one cycle. In the
same way, if we remove one branch t 0 ∈ T from T , the resulting graph T − t 0 will be
disconnected. The set of branches that are not in T − t 0 will contain exactly one cutset as
a subset. This cutset is shown in figure A.1 with a dashed line.
If the circuit contains b branches and T contains bT branches, we can define the bT cutsets
that are created by removing one of the bT branches of T . We can also define the b − bT
cycles that are created by adding one of the b − bT branches not in T to T . By applying
the current law and the voltage law, we get a total system of b equations. Note that these
equations are linear; in fact, the only coefficients that can occur in them are 1, −1 and 0.
Hence we can write these linear equations down as:

    T (i, v)ᵀ = 0 ∈ R^b.    (A.2)
Figure A.1: A spanning tree T of a circuit. The branches that belong to T are drawn with thick lines.
Combining equation A.2 with equation A.1, we get a system of 2b equations for 2b unknowns, which can be written as:
f(t, i, i̇, v, v̇) = 0.
This is again the form of equation 2.3.
Appendix B
Circuit descriptions and diagrams
B.1 Introduction
In this appendix I will describe several circuits that can be used to test the performance of several methods to compute a Periodic Steady State of a circuit. In the next section, I discuss each circuit used in the testing in detail.
B.2 Circuits
B.2.1 Rectifier circuit
A rectifier circuit is a circuit for transforming an AC voltage source into a DC voltage.
[Circuit diagram: a 2 V, 60 Hz voltage source between nodes 0 and 1, a diode between nodes 1 and 2, and a 10 kΩ resistor in parallel with a 1 µF capacitor between nodes 2 and 0]
The description of this circuit in Pstar is:
title: Test circuit 1: simple rectifier circuit;
/* author:  S.H.M.J. Houben
 * date:    April 7, 1998
 * version: 0.1
 */
circuit;
e_1     (0, 1);
diode_1 (1, 2);
r_1     (2, 0) 10k;
c_1     (2, 0) 1u;
end;
transient;
   t   = auto(0, 10/f);
   f   = 60;  /*frequency of voltage source*/
   amp = 2;   /*amplitude of voltage source*/
   e_1 = amp * sin(2*pi*f*t);
   file: vn(1), vn(2);
end;
run;
Pstar produces the following output during transient analysis:
[Pstar plot: VN(1) and VN(2) versus T, 0 to 180 ms (TR analysis, 23-06-98)]
The description of this circuit in Qstar is:
#Circuit 1
#A simple rectifier circuit
#Author: Stephan Houben
#In Matlab, do: solveTransient(’circuit1’, 0 , 2/60)
def f 60
def amp 2
ground v0
node v1 v2
voltage_source v0 v1 amp*sin(2*pi*f*t)
diode v1 v2
resistor v2 v0 10e3
capacitor v2 v0 1e-6
Qstar produces the following output during transient analysis:
[Qstar plot "circuit1": V(1), V(2) and I(0,1) versus time, 0 to 0.09 s]
B.2.2 Frequency doubler
A frequency doubler is a circuit that produces an output signal at two times the frequency
of its input signal.
[Circuit diagram: a 0.3 V, 100 kHz source driving a transistor stage; a 2 mA current source and 100 µF capacitor at node 1; a tank of 0.19 mH inductor, 3.3 nF capacitor and 10 kΩ resistor between nodes 3 and 4; a 12 V supply at node 4]
The description of this circuit in Pstar is:
title: Test circuit 2: frequency doubler;
/* author:  S.H.M.J. Houben
 * date:    April 7, 1998
 * version: 0.1
 */
change;
print_matrix=true;
max_abs_jac_par=1e30;
end;
model: device(3, 4);
   l_inductor  (3, 4) 0.19ml;
   c_capacitor (3, 4) 3.3n;
   r_resistor  (3, 4) 10k;
end;
circuit;
e_1          (0, 2);
j_1          (1, 0) 2ml;
c_1          (1, 0) 100u;
transistor_1 (3, 2, 1);
device_1     (3, 4);
e_0          (4, 0) 12;
end;
transient;
   t   = auto(0, 10/f);
   f   = 100k;  c:frequency of voltage source;
   amp = 0.3;   c:amplitude of voltage source;
   e_1 = amp * sin(2*pi*f*t);
   file: vn(1), vn(2), vn(3);
end;
run;
Pstar produces the following output during transient analysis:
[Pstar plot: VN(1), VN(2) (left axis) and VN(3) (right axis) versus T, 0 to 100 µs (TR analysis, 23-06-98)]
The description of this circuit in Qstar is:
#Circuit 2
#A frequency doubler
#author: Stephan Houben
#In Matlab, do: solveTransient(’circuit2’, 0, 2/100e3)
ground v0
node v1 v2 v3 v4
def amp 0.3
def f   100e3
voltage_source v0 v2 amp*sin(2*pi*f*t)
current_source v1 v0 2e-3
capacitor      v1 v0 100e-6
transistor     v3 v2 v1
inductor       v3 v4 0.19e-3
capacitor      v3 v4 3.3e-9
resistor       v3 v4 10e3
voltage_source v4 v0 12
Qstar produces the following output during transient analysis:
[Qstar plot "circuit2": V(1)–V(4) and I(0,2), I(3,4), I(4,0) versus time]
B.2.3 Mixer
A mixer gets two different input signals at different frequencies. This simple mixer subsequently produces a whole range of output frequencies.
[Circuit diagram: a 12 V supply at node 5; a parallel combination of 10 kΩ, 0.19 mH and 3.3 nF between nodes 5 and 4; a transistor (4, 3, 0); two input sources, 1 MHz and 1.2 MHz, each coupled to node 3 through a 10 kΩ resistor]
The description of this circuit in Pstar is:
title : Test circuit3 : simple mixer;
/* author: Stephan Houben
*/
circuit;
e_1 (5,0) 12;
r_1 (5,4) 10k;
l_1 (5,4) 0.19ml;
c_1 (5,4) 3.3n;
transistor2_1 (4,3,0);
e_rf (1,0) 0.7 + vrf*sin(2*pi*frf*t);
e_lo (2,0) 0.7 + vlo*sin(2*pi*flo*t);
r_2 (1,3) 10k;
r_3 (2,3) 10k;
vrf = 0.1;
vlo = 0.3;
frf = 1.2mg;
flo = 1mg;
end;
transient;
t = auto(0,50u);
file: vn, i(transistor2_1\c);
end;
run;
Pstar produces the following output during transient analysis:
[Pstar plot: VN(1), VN(2) and VN(4) versus T, 0 to 16 µs (TR analysis, 23-06-98)]
The description of this circuit in Qstar is:
#Circuit 3
#A simple mixer
#author: Stephan Houben
#In Matlab, do: solveTransient(’circuit3’, 0, 1/100e3)
ground v0
node v1 v2 v3 v4 v5
def vrf 0.1
def vlo 0.3
def frf 1.2e6
def flo 1e6
voltage_source v5 v0 12
resistor v5 v4 10e3
inductor v5 v4 0.19e-3
capacitor v5 v4 3.3e-9
transistor2 v4 v3 v0
voltage_source v1 v0 0.7+vrf*sin(2*pi*frf*t)
voltage_source v2 v0 0.7+vlo*sin(2*pi*flo*t)
resistor v1 v3 10e3
resistor v2 v3 10e3
Qstar produces the following output during transient analysis:
[Qstar plot "circuit3": V(1)–V(5) and I(5,0), I(5,4), I(1,0), I(2,0) versus time]
B.2.4 Free-running oscillator
An oscillator produces an output signal at a specific frequency. This is an example of a
free-running oscillating circuit; no oscillating input signal is given, yet the output signal
does oscillate.
[Circuit diagram: an astable multivibrator with a 12 V supply, two transistors with 10 kΩ collector and 100 kΩ base resistors, cross-coupled by two 1 pF capacitors]
The description of this circuit in Pstar is:
title: Test circuit 4;
/*author: Stephan Houben
*/
circuit;
e_1 (1,0) 12;
transistor2_1 (2,3,0);
r_c1 (1,2) 10k;
r_b1 (1,3) 100k;
transistor2_2 (4,5,0);
r_c2 (1,4) 10k;
r_b2 (1,5) 100k;
c_1 (2,5) 1p;
c_2 (3,4) 1p;
vbr_1 (3,0) 0.5;
end;
transient;
t = auto(0, 1u);
file: vn, i;
end;
run;
Pstar produces the following output during transient analysis:
[Pstar plot: VN(1)–VN(5) versus T, 0 to 1 µs (TR analysis, 23-06-98)]
The description of this circuit in Qstar is:
#Circuit 4
#A relaxation oscillator
#author: Stephan Houben
#In Matlab, do: solveTransient(’circuit4’, 0, 2e-7, ’x0’, [0;1;0;0;0;0]
ground v0
node v1 v2 v3 v4 v5
voltage_source v1 v0 12
transistor2 v2 v3 v0
resistor v1 v2 10e3
resistor v1 v3 100e3
transistor2 v4 v5 v0
resistor v1 v4 10e3
resistor v1 v5 100e3
capacitor v2 v5 1e-12
capacitor v3 v4 1e-12
Qstar produces the following output during transient analysis:
[Qstar plot "circuit4": V(1)–V(5) and I(1,0) versus time]
B.2.5 More complicated rectifier circuit
This rectifier circuit does a better job in converting AC to DC. It is designed in such a way
that it damps the incoming sine-wave.
[Circuit diagram: a 10 V, 60 Hz source, a diode with parallel 1 µF capacitor, a 5 Ω resistor, a 0.1 H inductor, two 1 mF smoothing capacitors and a 1 kΩ load]
The description of this circuit in Pstar is:
title: Test circuit 5;
circuit;
e_1(0,1) 10*sin(2*pi*60*t);
diode_1(1,2);
c_1
(1,2) 1u;
r_1 (2,3) 5;
l_1 (3,4) 0.1;
c_3 (3,0) 1ml;
c_4 (4,0) 1ml;
r_2 (4,0) 1k;
end;
tr;
t=auto(0,4/60);
file: vn(1), vn(2), vn(3), vn(4);
end;
run;
Pstar produces the following output during transient analysis:
[Pstar plot: VN(1)–VN(4) versus T, 0 to 70 ms (TR analysis, 23-06-98)]
The description of this circuit in Qstar is:
#Circuit 5
#A more complex rectifier circuit
#author: Stephan Houben
#In Matlab, do: solveTransient(’circuit5’, 0, 2/60)
ground v0
node v1 v2 v3 v4
def f 60
voltage_source v0 v1 10*sin(2*pi*f*t)
diode v1 v2
capacitor v1 v2 1e-6
resistor v2 v3 5
inductor v3 v4 0.1
capacitor v3 v0 1e-3
capacitor v4 v0 1e-3
resistor v4 v0 1e3
Qstar produces the following output during transient analysis:
[Plot: Qstar transient analysis of circuit5, traces V(1)-V(4), I(0,1) and I(3,4) against time]
B.2.6 Another free-running oscillator
This is another example of a free-running oscillator.
[Schematic: 10 V supply; resistors 12 kΩ, 8.1 kΩ, 3 Ω and 1.5 kΩ; 0.01 H inductor; capacitors 0.1 µF and 47 nF; transistor; nodes 0-5]
The description of this circuit in Pstar is:
Title: Test circuit 6;
change;
MAX_INTEGR_ORDER = 1;
end;
circuit;
e_1(0,5) 10;
r_1(0,2) 12k;
r_2(2,5) 8.1k;
l_1(0,3) 0.01;
r_3(3,4) 3;
c_1(0,1) 0.1u;
c_2(1,4) 47n;
transistor_1(4,2,1);
r_4(1,5) 1.5k;
vbr_1(0,1) 1;
end;
tr;
t=auto(0,1.5e-3);
file: vn(1), vn(2), vn(3), vn(4);
end;
run;
Pstar produces the following output during transient analysis:
[Plot: Pstar transient analysis of Test circuit 6, traces VN(1)-VN(4) against T; simulation date 23-06-98, 13:33:51; file /home/contr_6/products/pstar/test/circuit6.sdif]
The description of this circuit in Qstar is:
#Circuit 6
#A free-running oscillator
#author: Stephan Houben
#In Matlab, do: solveTransient('circuit6', 0, 1e-3, 'x0', [0;1;0;0;0;0;0])
#solveOscillatorPSS('circuit6',3,'T0',1e-4,'method',1,'tol',1e-12,'theta',1,'amp',0.1,'maxdT',0.1e-5,'debug',0)
ground v0
node v1 v2 v3 v4 v5
voltage_source v0 v5 10
resistor v0 v2 12e3
resistor v2 v5 8.1e3
inductor v0 v3 0.01
resistor v3 v4 3
capacitor v0 v1 0.1e-6
capacitor v1 v4 47e-9
transistor v4 v2 v1
resistor v1 v5 1.5e3
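The solveOscillatorPSS call in the comments above solves for a periodic steady state. For a forced (non-autonomous) problem with known period T, the core idea of single shooting reduces to solving phi_T(x0) = x0, where phi_T is the flow over one period. The sketch below (plain Python, on the hypothetical scalar test equation x' = -x + cos(t) with T = 2*pi, not the circuit above) illustrates this fixed-point formulation with a finite-difference Newton iteration.

```python
import math

# Single shooting for the periodic steady state of x' = -x + cos(t),
# which is 2*pi-periodic; the exact PSS satisfies x(0) = 0.5.
T = 2.0 * math.pi

def flow(x0, steps=2000):
    """phi_T(x0): integrate one period with classical RK4."""
    h, x, t = T / steps, x0, 0.0
    f = lambda t, x: -x + math.cos(t)
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Newton iteration on F(x0) = phi_T(x0) - x0, with a finite-difference
# derivative; for this linear test problem one step already converges.
x0, eps = 0.0, 1e-6
for _ in range(5):
    F = flow(x0) - x0
    dF = (flow(x0 + eps) - (x0 + eps) - F) / eps
    x0 -= F / dF
# x0 is now close to the exact periodic initial value 0.5
```

For the autonomous oscillator the same idea applies, but the period T enters as an additional unknown alongside x0, which is what makes calls like solveOscillatorPSS need a period guess T0.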
Qstar produces the following output during transient analysis:
[Plot: Qstar transient analysis of circuit6, traces V(1)-V(5), I(0,5) and I(0,3) against time (x 10^-3 s)]
B.2.7 An even more complicated rectifier circuit
This rectifier circuit has four diodes instead of only one.
[Schematic: 50 Hz, 20 V source; diode bridge, each diode with a 100 pF capacitor in parallel; 1 mF capacitor; 0.633 mH inductor; 25 Ω resistor; 10 mF capacitor; nodes 0-4]
The description of this circuit in Pstar is:
title: Circuit 7: full wave rectifier circuit;
change;
MAX_INTEGR_ORDER = 1;
end;
circuit;
e_0 (0,3);
diode_1(1,0);
c_1(1,0) 100e-12;
diode_2(1,3);
c_2(1,3) 100e-12;
diode_3(0,2);
c_3(0,2) 100e-12;
diode_4(3,2);
c_4 (3,2) 100e-12;
c_a(1,2) 1e-3;
l_a(1,4) 0.633e-3;
r_a(2,4) 25;
c_b(2,4) 10e-3;
end;
transient;
amp = 20;
f = 50;
t = auto(0, 4/f);
e_0 = amp*sin(2*pi*f*t);
file: vn(2), vn(4), i(e_0);
end;
run;
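With four diodes, both half-waves of the source charge the smoothing capacitor, so it is recharged twice per period and the ripple roughly halves compared to a single-diode rectifier. The Python sketch below compares the two cases on an idealized RC model with ideal diodes (an assumption, not the Pstar netlist; the R and C values are borrowed loosely from the schematic).

```python
import math

# Idealized comparison of half-wave and full-wave rectification into
# the same RC smoothing stage (ideal diodes; a rough stand-in for the
# bridge circuit above, not the Pstar model).
f, amp = 50.0, 20.0
R, C = 25.0, 10e-3          # assumed load resistor and output capacitor
dt, t_end = 1e-5, 4.0 / f

def ripple(full_wave):
    vc, tail, t = 0.0, [], 0.0
    while t < t_end:
        vin = amp * math.sin(2.0 * math.pi * f * t)
        if full_wave:
            vin = abs(vin)          # bridge: both half-waves charge C
        if vin > vc:
            vc = vin                # ideal diode conducts
        else:
            vc -= dt * vc / (R * C) # RC discharge through the load
        if t > t_end / 2:           # measure after start-up
            tail.append(vc)
        t += dt
    return max(tail) - min(tail)

half, full = ripple(False), ripple(True)  # full-wave ripple is smaller
```

The full-wave discharge interval is half as long, which is why the bridge topology gives a visibly flatter DC output in the plots that follow.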
Pstar produces the following output during transient analysis:
[Plot: Pstar transient analysis of Circuit 7: full wave rectifier circuit, traces VN(2), VN(4) and I(E_0) against T; simulation date 23-06-98, 13:39:51; file /home/contr_6/products/pstar/test/circuit7.sdif]
The description of this circuit in Qstar is:
#Circuit 7
#A full wave rectifier circuit
#author: Stephan Houben
#In Matlab, do: solveTransient(’circuit7’, 0, 2/50)
ground v0
node v1 v2 v3 v4
def amp 20
def f 50
voltage_source v0 v3 amp*sin(2*pi*f*t)
diode v1 v0
capacitor v1 v0 100e-12
diode v1 v3
capacitor v1 v3 100e-12
diode v0 v2
capacitor v0 v2 100e-12
diode v3 v2
capacitor v3 v2 100e-12
capacitor v1 v2 1e-3
inductor v1 v4 0.633e-3
resistor v2 v4 25
capacitor v2 v4 10e-3
Qstar produces the following output during transient analysis:
[Plot: Qstar transient analysis of circuit7, traces V(1)-V(4), I(0,3) and I(1,4) against time]
Bibliography
[1] U.M. Ascher, R.M.M. Mattheij, and R.D. Russell. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Prentice-Hall, Englewood Cliffs, New Jersey, USA, 1988.
[2] R.P. Eddy. Extrapolating to the limit of a vector sequence. In P.C.C. Wang, editor,
Information Linkage between Applied Mathematics and Industry, pages 387–396.
Academic Press, New York, 1979.
[3] S.H.M.J. Houben. The Qstar user's manual. Technical report, Philips ED&T/AS, to appear in 1999.
[4] René Lamour. Floquet-theory for differential-algebraic equations (DAE). ZAMM 78 S3, 1998.
[5] René Lamour, Roswitha März, and Renate Winkler. Stability of periodic solutions of index-2 differential algebraic systems. Pre-print, available from
http://www.mathematik.hu-berlin.de/publ/pre/1998/P-9823.ps, 1998.
[6] R.M.M. Mattheij and J. Molenaar. Ordinary Differential Equations in Theory and
Practice. Wiley, 1996.
[7] Bipolar modelbook. Technical report, Philips ED&T/AS.
[8] Jaijeet Roychowdhury. Analysing circuits with widely-separated time scales using
numerical PDE methods. Report Bell-Labs, to appear in IEEE Trans CAS 1, 1997.
[9] Stig Skelboe. Computation of the periodic steady-state response of nonlinear networks by extrapolation methods. IEEE Transactions on Circuits and Systems,
27(3):161–175, 1980.
[10] David A. Smith, William F. Ford, and Avram Sidi. Extrapolation methods for vector
sequences. SIAM Review, 29(2), 1987.
[11] J. Stoer. Einführung in die Numerische Mathematik I. Springer-Verlag, Berlin, 1972.
[12] E.J.W. ter Maten, W.H.A. Schilders, and S.H.M.J. Houben. Methods for simulating
RF-noise, v.1.1. Technical report, Philips ED&T/AS, 1998.
Author(s)
S.H.M.J. Houben

Title
Algorithms for Periodic Steady State Analysis on Electric Circuits
Master's Thesis

Distribution
Nat.Lab./PI                WB-5
PRL                        Redhill, UK
PRB                        Briarcliff Manor, USA
LEP                        Limeil-Brévannes, France
PFL                        Aachen, BRD
CP&T                       WAH

Director:                  Dr. Ir. M.F.H. Schuurmans      WB56
Deputy Director:           Ir. G.F.M. Beenker             WAY52
Department Head:           "                              "

Abstract
Ir. H.A.J. van de Donk     PRLE, ED&T     WAY-31
Dr. H.H.J. Janssen         PRLE, ED&T     WAY-31
Ir. M.F. Sevat             PRLE, ED&T     WAY-31
Dr. T.G.A. Heijmen         PRLE           WAY-41
Ir. M. Stoutjesdijk        PS-CIC         Nijmegen

Full report
Mw. Drs. E.J.M. Duren-van der Aa   PRLE, ED&T   WAY-31
Ir. J.G. Fijnvandraat              PRLE, ED&T   WAY-31
Ir. J.C.H. van Gerwen              PRLE, ED&T   WAY-31
Ir. S.H.M.J. Houben                PRLE, ED&T   WAY-31
Dr. Ir. T.A.M. Kevenaar            PRLE, ED&T   WAY-31
Mw. Ir. L.S. Lengowski             PRLE, ED&T   WAY-31
Dr. E.J.W. ter Maten               PRLE, ED&T   WAY-31
Dr. Ir. S.P. Onneweer              PRLE, ED&T   WAY-31
Ir. J.F.M. Peters                  PRLE, ED&T   WAY-31
Dr. Ir. P.B.L. Meijer              PRLE         WAY-41
Ir. C. Niessen                     PRLE         WAY-41
Dr. W.H.A. Schilders               PRLE         WAY-41

Full report
Drs. M.C.J. van de Wiel            PS-ASG-LTC   BQ1, Nijmegen
Ir. C.W. Bomhof                    Utrecht University
                                   Mathematical Institute
                                   P.O. Box 80010
                                   3508 TA Utrecht
Prof. Dr. H.A. van der Vorst       PRLE (advisor)
Prof. Dr. Ir. M.J.L. Hautus        Eindhoven University of Technology
Dr. E.F. Kaasschieter              Dept. of Mathematics and Computing Science, HG8
Prof. Dr. R.M.M. Mattheij          P.O. Box 512
Dr. J.M.L. Maubach                 5600 MB Eindhoven
Dr. René Lamour                    Institut für Angewandte Mathematik am Institut für Mathematik
                                   Mathematisch-Naturwissenschaftliche Fakultät II
                                   Unter den Linden 6, Humboldt Universität zu Berlin
                                   10099 Berlin, Germany