A Hierarchical Bottom-up, Equation-Based Optimization Design Methodology
by
William R. Sanchez
S.B., Massachusetts Institute of Technology (2005)
Submitted to the Department of Electrical Engineering and Computer
Science
in partial fulfillment of the requirements for the degree of
Master of Engineering in Computer Science and Electrical Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
May 2007
© William R. Sanchez, MMVII. All rights reserved.
The author hereby grants to MIT permission to reproduce and distribute
publicly paper and electronic copies of this thesis document in whole or
in part.
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Department of Electrical Engineering and Computer Science
May 25, 2007
Certified by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Joel Dawson
Carl Richard Soderberg Assistant Professor
Thesis Supervisor
Accepted by. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Arthur C. Smith
Chairman, Department Committee on Graduate Students
A Hierarchical Bottom-up, Equation-Based Optimization
Design Methodology
by
William R. Sanchez
S.B., Massachusetts Institute of Technology (2005)
Submitted to the Department of Electrical Engineering and Computer Science
on May 25, 2007, in partial fulfillment of the
requirements for the degree of
Master of Engineering in Computer Science and Electrical Engineering
Abstract
We have implemented a segment of an RF transmitter signal chain in discrete components using bipolar transistors. We formulated both a broadband amplifier and mixer as
mathematical programs (MP) and extracted Pareto-optimal (PO) [1–3] tradeoff surfaces.
Abstracting these PO surfaces in place of the blocks at the system level, we have demonstrated a new hierarchical system design methodology. Furthermore, the optimization,
simulation, and measured results are consistent at all levels of hierarchy.
Keywords: System design, optimization, geometric programming, analog circuits, Pareto-optimal.
Thesis Supervisor: Joel Dawson
Title: Carl Richard Soderberg Assistant Professor
Acknowledgments
First and foremost, I would like to dedicate this work, as everything I ever accomplish,
to my parents. Without their incredible sacrifices, hard work, dedication, inexhaustible
source of energy, wisdom, strength, heartfelt love, and their continued faith in God and
that God is ever present, I would have been lost years ago as a child. Through example,
sheer iron-fist, and amazing savvy, they have guided my siblings and me along an
incredible path, enabling us to break barriers and climb unimaginably high. They have
been my source of inspiration and drive and all my efforts go to allowing them to rejoice
in the fruits they have vicariously reaped through their children.
In addition, I would like to thank my advisor, professor, and mentor, Dr. Joel Dawson,
for his wonderful guidance and support, and believing in me and identifying what promises
I have within. Dr. Dawson has been a true source of encouragement when things have
gotten dark. He never fails to keep his students in perspective. His optimism diffuses to
all of us catapulting us into the success we originally envision.
I would also like to thank Maria Hershenson and Sunderarajan Mohan of Sabio Labs,
Inc. for early and helpful discussions in formulations. I extend my gratitude to all the
members of our research group for their help and guidance when issues arose. In particular, thanks to Sungwon Chung and Jack Holloway for many helpful lab and measurement
suggestions; Tania Khanna and Ranko Sredojevic for helpful conversations on the optimization theory side of the work; Oh, Jose, Philip, Muyiwa, and Mike for support in the
meetings and general conversations.
I would like to thank Anne-Céline Bringer for sticking by me through this process,
dealing with the late nights and uncertainty, and being an amazing source of support,
inspiration, and happiness in my life. Thank you for returning a normal balance in my
life, forcing me to remember what life is about, and showing me how to appreciate the
beauties of the world.
I would like to thank Carrie Brown for serving as a role-model, friend, and mentor all
at the exact times when I needed it in my life. Thank you for keeping me along the right
path and being my college-mother. You will always be my role-model and I hold a dear
spot for you in my heart.
Finally, thank you to all my friends coming up, especially Br. Jerry and Colleen
McGeehan, my friends in the salsa community, and otherwise. You helped remind me of
life outside of MIT and sometimes that is necessary.
El que la sigue, la consigue.
Lo que en los libros no está, la vida te enseñará.
Preface
In this work, the task is to approach circuit design at a system level via optimization and
mathematical programming methods. Our objective is to come up with a design methodology that combines the strengths of both analog designers and optimization theory.
Our intention is not to replace analog circuit designers by automating the analog circuit
design process. When the term optimization is heard among analog designers, often
the reaction is reluctance. Skepticism is almost standard when it comes to exploiting
the richness of optimization theory. This adamant stance is partly a consequence of
how optimization as a tool has been applied. As a result, during the last decade or so,
optimization theory and operations research have developed as their own disciplines, but
without many applications. It has not been until recently that applications have increased
in various engineering, business, and scientific fields. Despite skepticism, designers are
recognizing a need for optimizers because of the relentless, breakneck pace of device
scaling. With the explosion of computational power and increased use of mixed-signal
systems, the need for analog designers to keep up with the scaling ability of their digital
counterparts has increased proportionally. Efficient and practical methods have been the
topic of much research in academia and industry.
We design a two-block system consisting of a broadband amplifier and single-balanced
mixer. Once an initial system decomposition is defined by the system-level engineer,
topologies for an amplifier and mixer using bipolar technologies are selected at the circuit
design level. Moreover, we pursue an equation-based design methodology and formulate
each block into a mathematical program (e.g., geometric program). This is a sequential
design approach, henceforth referred to as Hierarchical Bottom-up (H-BU). Design variables
or metrics of interest (e.g., device currents, power, conversion gain, bandwidth, noise,
SNR, IIP3, etc.) are swept across the design space to construct a set of tradeoff surfaces
(e.g., power vs. noise, bandwidth vs. power, IP3 vs. power, etc). This yields valuable
tradeoff information for each subsystem. The tradeoff information fully characterizes each
of the sub-blocks and naturally allows for a global, system-level optimization. We present
results of applying H-BU and compare these against simulated data (i.e., HSPICE
and Cadence) and measured data.
Contents
Preface
1 Introduction
  1.1 Background
  1.2 H-BU design methodology
  1.3 Brief history of optimization
  1.4 Document organization
2 Generating Pareto-optimal surfaces
  2.1 Motivation
  2.2 Pareto-optimal surfaces
  2.3 The utility of geometric programming
  2.4 GP theory
    2.4.1 High-level view
    2.4.2 Formulations
    2.4.3 Posynomial−1 denominator approximation
    2.4.4 Posynomial−1 denominator algorithm
  2.5 Multi-dimensional and monomial fitting of PO surfaces
3 Segment of transmitter chain example
  3.1 Introduction
  3.2 Subsystem formulations
    3.2.1 Amplifier formulation
    3.2.2 Mixer formulation
  3.3 Hierarchical system issues
    3.3.1 Parasitic-aware optimization
    3.3.2 Managing Interface variables
4 Experimental Results
5 Conclusions
  5.1 Thoughts on Future work
Appendices
A Circuit design high-level process
  A.1 Background
    A.1.1 Circuit design [4]
B MATLAB formulation pseudo code
  B.1 Amplifier pseudo code and Pareto-optimal space generation via constraint sweeping
  B.2 Mixer pseudo code and Pareto-optimal space generation via constraint sweeping
  B.3 Monomial fitting
  B.4 Hierarchical pseudo code and Pareto-optimal space generation via constraint sweeping
C Additional Pareto-optimal surfaces for Transmitter segment
List of Figures
1-1 H-BU methodology
1-2 Traditional system design cycle
1-3 H-BU design cycle
2-1 Pareto-optimality in objective space
2-2 H-BU flow for transmitter segment
2-3 Graphical interpretation of convexity and non-convexity and some examples
3-1 H-BU flow for transmitter segment
3-2 H-BU flow for transmitter segment
3-3 Two stage amplifier block of transmitter segment
3-4 Hybrid-π model for bipolar transistor
3-5 Worst-case transistor configuration for OCτ calculation
3-6 Single-balanced mixer block of transmitter segment
3-7 Effect of feedback on distortion
3-8 Output waveform due to 3rd-order harmonic distortion
3-9 Two-tone intermodulation products
3-10 Third-order intercept point, IIP3
3-11 Transconductor of Fig. 3-6
4-1 Hierarchical Pareto surfaces
4-2 System Pareto surfaces
4-3 Block 2-D GP sweeps to illustrate the cause for nonzero relative errors in Fig. 4-4
4-4 Associated relative errors due to monomial fitting of PO surfaces
4-5 GP PO, measured, and simulated 2-D amplifier tradeoff curves
C-1 Amplifier block flat PO surfaces
C-2 Mixer block flat PO surfaces
C-3 Hierarchical system PO surfaces
Chapter 1
Introduction
1.1 Background
This thesis presents an alternative system-level design methodology that incorporates the
rich techniques established in the fields of operations research and optimization theory.
Specifically, we focus on a segment of a transmitter chain to illustrate the power and
efficiency of the methodology.
Over the last decade, the use of mixed-signal circuits on system-on-chip integrated circuits (IC) has increased at a steady rate. The challenges associated with large-scale analog
system-level exploration, including early-stage trade-off analysis, create a bottleneck for
mixed-signal system design. Both equation-based and simulation-based optimization techniques have been used for the exploration of a circuit’s design space. However, at present,
these tools are only practical for small electronic systems. We overcome this limitation by
establishing a new, general design methodology for large systems. By decomposing a system into smaller, less complex building blocks, thereby adopting a Hierarchical, Bottom-up (H-BU) approach, the optimization problems remain tractable, and the tradeoff space
can be constructed in a piecewise manner up to the system level. Fig. 1-1 illustrates the
H-BU design flow.

Figure 1-1: H-BU methodology

It is instructive to examine the process that we wish to replace. The traditional, simulation-based, system-level design cycle is summarized as follows: To begin, a system designer is given an application to be implemented under a set of tight specifications. After
the system is decomposed into blocks, the system designer makes a first pass at resource
allocation to the different blocks. Resources such as power, noise budget, and linearity are
apportioned according to available insight and the system designer’s experience. Circuit
block designers then design circuits to maximally exploit the given resources, after which
the overall process is evaluated. Are some blocks infeasible as specified? Was the system
designer overly generous with other blocks? An iterative group negotiation process is
carried out in this fashion until an acceptable solution is reached. Fig. 1-2 illustrates this
process.
The bottleneck in this approach is the system designer's task of finding system-level decompositions that leave enough feasible space at the block level for the circuit designers. Without analog design experts present, system designers do not become aware of block-level infeasibility until much later in the process. Depending on the complexity of each of the blocks, the probability of system feasibility increases with each iteration of system-level resource reallocation. With a simulation-based, top-down design approach, the time required to complete a system design is therefore unpredictable, and the process is inefficient.

Figure 1-2: Traditional system design cycle. (a) The system designer initiates system identification and decomposition; (b) block-level design space exploration; (c) system-designer resource reallocation; (d) this iterative process loops until a satisfactory solution emerges.
Figure 1-3: H-BU design cycle. (a) Block designers characterize the design space in a single run; (b) the system designer selects the system operating point.
What does each loop iteration from system-level decomposition to block-level infeasibility accomplish? The efforts of each iteration by the system and circuit designers
trace out the system design space via each of the blocks’ local design space. Any feasible
solution that is found at the block level may not be satisfactory at the system level. Furthermore, optimality of any feasible solution is not guaranteed, and is highly improbable
early in the trade-off exploration phase of design. The key contribution of this work is
presenting a method whereby the number of iterations is dramatically reduced, greatly
decreasing design time, increasing efficiency, and providing design insight to system designers to complement what they have learned by experience.
1.2 H-BU design methodology
We present a design methodology where designers can map expertise to re-usable designs
that simulate as expected over all process corners of interest. Furthermore, the proposed
methodology results in a reduction in simulation runs. As a result, designers can focus
on innovation, not design centering [5].
The H-BU methodology and a suitably chosen optimization framework (e.g., GP)
coupled with an equation-based design philosophy is intended to provide an alternative
analog IC system design process. Once the initial system decomposition is settled at
the system level, each circuit designer produces a set of Pareto-optimal (PO) surfaces,
to send to the system level designer for system-level allocation. Whether a simulation- or equation-based approach is used in the generation of the surfaces, an equation-based approach should prevail at the system level. The reason for an equation-based approach at the system level is two-fold. First, it acknowledges that designers already use an equation-based methodology at the system level. An equation-based approach is evinced by the near ubiquity of the spreadsheet as a resource allocation tool. Second, the gentle nature of most tradeoff curves means that they can be easily and efficiently represented by low-order functions. These surfaces fully describe the block-level space in an optimal sense.
At the system level, the system designer can select the operating points for each block so
as to optimally distribute the allocated resources. Moreover, fitting the PO surfaces into
functions amenable to an MP allows the system designer to formulate a system level MP
and produce the PO surface characterizing the system over any range of interest.
1.3 Brief history of optimization
The first programming term to be introduced was linear programming, coined by George
Dantzig in the 1940s. In this context, the term programming does not refer to computer
programming. Instead, the term comes from the use of program by the United States
military to refer to proposed training and logistics schedules, which were the problems
that Dantzig was studying at the time. George Dantzig received his Ph.D. from Berkeley
in 1946. Initially, he was to accept a teaching post at Berkeley, but was persuaded by his
wife and former Pentagon colleagues to return as a mathematical adviser to the USAF.
It was there, in 1947, that he first posed the linear programming problem, and proposed
the simplex method to solve it. It was not until 1962 that Spendley et al presented an
alternate, efficient sequential optimization method called the basic simplex method. This
method finds the global optimum of a response with fewer iterations than non-systematic approaches or the one-variable-at-a-time method. Research has led to further
improvements of the method, such as the modified simplex method [6–8]. Interior-point
methods, a class of algorithms to solve linear and nonlinear convex optimization problems,
were first presented by Narendra Karmarkar for linear programming in 1984 [9].
In 1961, Clarence Zener, Director of Science at Westinghouse, observed that a sum
of component costs sometimes may be minimized almost by inspection when each cost
depends on products of the design variables, each raised to arbitrary but known powers
[10]. An example of such a cost function is:
$$f(x_1, \ldots, x_n) = \sum_{k=1}^{t} c_k\, x_1^{\alpha_{1k}} x_2^{\alpha_{2k}} \cdots x_n^{\alpha_{nk}}$$
For this function to be a polynomial all exponents must be positive, whole numbers. Zener
and his co-workers Duffin and Peterson coined the term posynomials when they observed
that all coefficients of f (x) must be positive. The positive coefficients of the posynomials
are needed because they may be raised to fractional powers in the geometric inequality,
an operation forbidden to negative numbers. The sense of the posynomial inequalities is
restricted by the direction of the geometric inequality [10].
Posynomials form the basis of geometric programs (GPs). Early pioneers in introducing GP into circuit design include Stephen Boyd, Mar Hershenson, et al. [8, 11–16]. GP lends itself nicely to a variety of applications, such as economics, engineering, mathematics, and operations research. The myriad properties fundamental to GPs make it an attractive optimization framework for our application and will be discussed in some detail in Chapter 2. In particular, we shall see the constructs of GPs that make it a suitable choice for circuit design.
1.4 Document organization
Chapter 2 begins by stating the motivation for the hierarchical, bottom-up (H-BU)
methodology and provides an overview of the theory and properties of geometric programming, a possible optimization framework for carrying out H-BU in the context of circuit design. We provide a treatment for a particular class of non-convex functions we shall refer to as posynomial−1 denominators, which lays the groundwork for an algorithm for handling GP-ized formulations. Finally, Chapter 2 also addresses the generation of subsystem tradeoff surfaces and
the multi-dimensional fitting problem.
Chapter 3 focuses on the formulation of the amplifier and mixer. The set of specifications is outlined there, and the derivations and significance of various metrics are discussed. Chapter 3 also addresses hierarchical system issues such as incorporating parasitic effects into the formulations. Additionally, the notion of system-level
interface variables is presented. Finally, once the multi-dimensional problem has been
completed, we construct a hierarchical formulation of the transmitter segment system.
In Chapter 4, a summary of optimization results is compared against simulated and
measured data for both sequential design and hierarchical design. Finally, concluding
remarks and future directions of this work are presented in Chapter 5.
Chapter 2
Generating Pareto-optimal surfaces
2.1 Motivation
The mathematical programming (MP) formulation of the transmitter segment constructed
in Section 3.1 takes the form of a geometric program (GP) and falls into a class of nonlinear
programs with many attractive properties [17]. While the particular circuits considered
here are easily amenable to GP, we stress that the MP framework adopted for Pareto-optimal (PO) surface generation remains open and depends on the application.
The motivation for turning to GP is that interior-point methods can solve large-scale GPs extremely efficiently and reliably [8, 18–20]. In addition, a number of practical
problems, particularly in circuit design [12,13,21], have been found to be equivalent to (or
well approximated by) GPs. A combination of GP and analog circuit design is natural:
effective solutions for the practical problems are easily achieved. Furthermore, geometric
programming brings with it the following attractive properties: global optima are assured
by convexity; there is no sensitivity to a designer-supplied starting point as in Newton-type methods; and GPs are efficiently solved, enabling the fast generation of PO surfaces.
2.2 Pareto-optimal surfaces
Consider an MP formulation taking the following form:
$$\begin{aligned}
\text{minimize} \quad & y = f(x) = (f_1(x), f_2(x), \ldots, f_n(x)) \\
\text{subject to} \quad & h(x) = (h_1(x), h_2(x), \ldots, h_m(x)) = b \\
& g(x) = (g_1(x), g_2(x), \ldots, g_p(x)) \le 0 \\
& x = (x_1, x_2, \ldots, x_k) \in X \\
& y = (y_1, y_2, \ldots, y_n) \in Y
\end{aligned} \tag{2.1}$$
The multi-objective nature of the formulation inherent in circuit design problems requires treatment on the Pareto-optimal front. This treatment is necessary due to the
coupling and conflicting nature of the objective functions. For example, consider two objectives, performance (f1 ) and power consumption (f2 ), under the speed (h1 , g1 ), precision
(h2, g2), and model (h3, g3) constraints. An optimal system design might be an architecture which achieves maximum performance at minimum power and does not violate the specification
constraints. If such a solution exists, we actually only have to solve a single objective optimization problem (SOP). The optimal solution for either objective is also the optimum
for the other objective. What makes multi-objective programs difficult is the common situation when the individual optima corresponding to the distinct objective functions are
sufficiently different and decoupling is not possible. In such cases, a satisfactory trade-off
has to be found. In our example, performance and power consumption are generally competing: high-performance architectures substantially increase power consumption, while
low-power architectures usually provide lower performance. This discussion makes clear
the need for a redefining of optimality for multi-objective programs.
In single-objective optimization, the feasible set is completely ordered according to
one objective function, f , such that for two solutions a, b in the feasible set, Xf , either
f (a) ≥ f (b) or f (b) ≥ f (a). The goal is to find the solution (or solutions) that give the
maximum value of f . However, when several objectives are involved, the situation changes:
the feasible set is, in general, not ordered, but partially ordered. This is illustrated in
Fig. 2-1 [1] on the left, where f1 and f2 denote performance and cheapness, respectively.
The solution represented by point B is better than the solution represented by point C:
it provides higher performance at lower cost. It would be even more preferable if it only
improved one objective, as is the case for C and D: despite equal cost, C achieves better
performance than D.

Figure 2-1: Pareto-optimality in objective space. Here we have two performance metrics: f1 and f2. If we let f1 = Power−1 and f2 = Speed, then for the fastest system we need the most power, and vice versa. Considering the point E (the shaded gray space is the feasible region), we see that we can always lower the Power while maintaining the same Speed, until we hit the boundary. (This boundary is what is referred to as the Pareto-optimal front; at higher dimensions, it becomes a surface.) At the boundary (e.g., point A), if we try to make an improvement in Power, it will cost us a degradation in Speed.

In order to express this situation mathematically, the relations
=, ≥, and > are extended to objective vectors by analogy to the single-objective case.
In the multi-objective case, the solutions a, b ∈ Xf have three relational possibilities:
$$f(a) \ge f(b), \qquad f(b) \ge f(a), \qquad \text{or} \qquad f(a) \ngeq f(b) \,\wedge\, f(b) \ngeq f(a) \tag{2.2}$$
In a circuit design context, it is usually interesting to obtain the maximum precision and
speed for a given choice of complexity and technology while consuming minimal power
for a given application. For an achieved power,
$$P(\text{Complexity}, \text{Technology}, \text{Speed}, \text{Precision}) = (f_1(x), f_2(x), \ldots, f_n(x)) \tag{2.3}$$
the vector x should dominate any other vector u under the following definition: For a maximization problem, a vector $x = (x_1, \ldots, x_m)$ dominates a vector $u = (u_1, \ldots, u_m)$ (written $x \succ u$) if and only if:

$$\forall\, i \in \{1, 2, \ldots, n\} : f_i(x) \ge f_i(u) \qquad \text{and} \qquad \exists\, j \in \{1, 2, \ldots, n\} : f_j(x) > f_j(u)$$

Additionally, x covers u ($x \succeq u$) if and only if $x \succ u$ or $x = u$. A decision vector $x \in X$ is now said to be Pareto-optimal if and only if there exists no $z \in X$ for which f(z) dominates f(x).
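As a concrete illustration of this dominance test (and not part of the original formulation), a short MATLAB sketch that filters a set of sampled objective vectors down to its non-dominated subset for a maximization problem might look as follows; the sample points below are arbitrary.

% Pareto filter for a maximization problem: keep only non-dominated rows.
% F is an N-by-n matrix; row i holds the objective vector f(x_i) (arbitrary data).
F = [1.0 0.2; 0.8 0.5; 0.6 0.9; 0.7 0.4; 0.3 1.0];

N = size(F, 1);
isPareto = true(N, 1);
for i = 1:N
    for j = 1:N
        % Row j dominates row i if it is >= in every objective and > in at least one.
        if j ~= i && all(F(j,:) >= F(i,:)) && any(F(j,:) > F(i,:))
            isPareto(i) = false;
            break;
        end
    end
end
paretoFront = F(isPareto, :)   % the dominated point (0.7, 0.4) is removed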
Fig. 2-1 makes the main difference between single-objective programs and multi-objective programs clear: there is no single optimal solution in the multi-objective case, but rather a set of optimal trade-offs. None of these can be identified as better than the others unless
preference information is included (e.g., a ranking of the objectives). At a Pareto-optimal
solution, any ε-deviation from the solution will cause some degradation in at least one performance metric. Thus, the useful value of the PO surface¹ should be clear. Pareto
surfaces represent the best performance that can be obtained from a given circuit topology
across its complete design space. These surfaces are generated for nominal values of
process parameters and are often used for optimizing performance parameters of a circuit
in order to meet system-level specifications. They encapsulate the set of designs describing
the metrics’ trade-offs. In constructing the design space over the governing metrics, p,
for the optimal system one needs to solve the following optimization problem [15]:
$$\begin{aligned}
\text{minimize} \quad & \lambda^T p \\
\text{subject to} \quad & f_i(x, p) \le 0, \quad i = 1, 2, \ldots, m \\
& g_i(x, p) = 0, \quad i = 1, 2, \ldots, p
\end{aligned} \tag{2.4}$$

¹ It should be mentioned here that the entirety of all Pareto-optimal solutions is called the Pareto-optimal set; the corresponding objective vectors are often cited as the Pareto-optimal front or surface in the literature.
A verification procedure for the resulting feasible set described in (2.4) is provided in [15].
Other methods have been developed and proposed in the generation and exploration of
the Pareto-optimal hyperplane [15, 21–23].
From the PO generation step of H-BU in Fig. 1-1, what emerges are smooth, well-behaved surfaces. For our PO surfaces, it is often of interest to consider power as a
function of all of the other design metrics. Because design tradeoffs tend to be smooth, it
is often possible to fit power as a monomial function of these other metrics. The relative
errors associated with these approximations can be calculated and are vanishingly small.
The system designer receives not only the trade-off surfaces from the circuit designers,
but also access to a single expression, fully describing each surface. This access allows
the system designer to formulate a system-level mathematical program (e.g., GP) of the
form in (2.5), as seen in Fig. 2-2, and run a single iteration in order to get an optimal
strategy for system-level resource allocation.
$$\begin{aligned}
\text{minimize} \quad & P_{amp}(A_{amp}, f_{amp}) + P_{mix}(IIP3, A_{mix}, f_{mix}) \\
\text{subject to} \quad & A_{mix} \cdot A_{amp} \ge A_{total} \\
& f_{mix} \ge f_{mix,min} \\
& f_{amp} \ge f_{amp,min} \\
& P_{amp} + P_{mix} \le P_{max} \\
& IIP3_{MIX,dBm} \ge IIP3_{MIN} \\
& \text{Model constraints}
\end{aligned} \tag{2.5}$$
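To make the system-level allocation idea concrete, the sketch below performs a brute-force version of (2.5) restricted to the gain split only, using hypothetical monomial power models. The coefficients, exponents, and numbers are placeholders and do not come from this work; in the actual flow, (2.5) is solved as a GP over all of the listed constraints.

% Illustrative system-level allocation in the spirit of (2.5): split the total
% gain between the amplifier and the mixer so that total power is minimized.
% The monomial power models and all values below are hypothetical placeholders.
Atotal = 100;                          % required cascade gain (V/V)
Pamp   = @(Aamp) 2e-3 * Aamp.^0.8;     % hypothetical monomial fit: W
Pmix   = @(Amix) 1e-3 * Amix.^1.2;     % hypothetical monomial fit: W

Aamp = logspace(0, 2, 500);            % candidate amplifier gains
Amix = Atotal ./ Aamp;                 % mixer gain forced by Aamp*Amix = Atotal
Ptot = Pamp(Aamp) + Pmix(Amix);

[Pmin, k] = min(Ptot);
fprintf('Best split: Aamp = %.1f, Amix = %.1f, Ptotal = %.2f mW\n', ...
        Aamp(k), Amix(k), 1e3*Pmin);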
The remainder of this chapter is dedicated to providing an overview of the rich theory of
geometric programming and the properties that make it a natural choice for many circuit
design applications.
2.3 The utility of geometric programming
A geometric program (GP) is a type of mathematical optimization problem consisting of an objective function and constraint functions constructing a subset of the Euclidean space that has a special form. The motivation for turning to GP is that recently developed interior-point methods can solve large-scale GPs extremely efficiently and reliably [8]. In addition, a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. The fundamental challenge when using GP is to express a practical problem, such as an engineering analysis or a design problem, in GP form.

Figure 2-2: H-BU flow for transmitter segment
In the best case, GP formulations are exact; otherwise, at least approximate formulations exist. GPs have been used to optimally design electronic circuits including CMOS op-amps and planar spiral inductors [12, 14]. In this work, we use the method and techniques developed to optimally design a discrete-component bipolar amplifier and single-balanced mixer cascade as a segment of a transmitter signal chain. The example illustrates
the power of combining optimization techniques with a systematic, hierarchical, and sensible methodology many times more efficient than traditional, simulation-based design
methodologies.
GPs are a subset of convex optimization problems. GP has many useful theoretical,
practical, and computational properties. A logarithmic transformation converts the apparently non-convex standard form GP into a convex optimization problem. As a result a
local optimum is also a global optimum, the duality gap is zero, and a global optimum can
be computed very efficiently. Convexity and duality properties of GP are well understood,
and large-scale, robust numerical solvers for GP are available. GP is naturally suited to
model many types of important nonlinear systems in science and engineering. It extends
the scope of linear programming (LP) to the realm of nonlinear programs.
Since its inception in the 1960s, GP has found applications in mechanical and civil engineering, chemical engineering, probability and statistics, finance and economics, control
theory, circuit design, information theory, coding and signal processing, wireless networking, etc. A detailed discussion of GP can be found in [8, 17, 19, 20]. GP
has generated renewed interest since the late 1990s. The start of the millennium has
had tremendous momentum in pushing forward the application of GP to eliminating the
bottleneck for mixed-signal, system-on-chip designs [15]. The quasi-automation of analog
circuit design and the push for a new, equation-based design philosophy is an area of
much research and industry interest [3, 12, 13, 15]. GP theory is well-developed and very
efficient GP algorithms are currently available through user-friendly software packages
(e.g., MOSEK, GGPLab [24]). Engineers, scientists, and researchers interested in using
GP need only acquire the capability of modeling or approximating problems as GP. In
using GP, the ability to understand and quantify how established models will deviate from physical observations is closely coupled with a general understanding of optimization theory. To aid in this, in what follows we provide an overview of the theory,
algorithms, and modeling methods of GP.
2.4 GP theory
There have been many research and professional activities that utilize the power of recent
developments in nonlinear convex optimization to tackle broad ranges of problems in the
analysis and design of integrated circuits. These research activities are driven by new
demands in the study of integrated circuit design, new tools emerging from optimization
theory, and the bottleneck created by analog front-ends in mixed-signal systems. The
development of highly efficient computational algorithms, like the interior-point methods
[8, 17], has lowered the reluctance to turn to optimization theory as a practical tool for aid in
design. A general nonlinear, convex optimization program takes the following form:
$$\begin{aligned}
\text{minimize} \quad & f_o(x) \\
\text{subject to} \quad & f_i(x) \le 0, \quad \text{for } i = 1, 2, \ldots, m \\
& Ax = c
\end{aligned} \tag{2.6}$$
Here fo (x) is the convex function to be minimized subject to the upper bound inequality
constraints (fi (x)) on the other convex functions and affine equality constraints (Ax = c).
The constant parameters are $A \in \mathbb{R}^{l \times n}$ and $c \in \mathbb{R}^{l}$. The objective function fo to be
minimized and m constraint functions (fi ) are convex functions. From basic results in
convex analysis, it is well known that for a convex optimization problem, a local minimum
is also a global minimum [17]. The Lagrange duality theory is also well developed for
convex optimization. For example, the duality gap is zero under constraint qualification
conditions, such as Slater’s condition [17] that requires the existence of a point on the
interior of the feasible region created by the nonlinear inequality constraints. When put
in an appropriate form with the right data structure, a convex optimization problem is
also easy to solve numerically by efficient algorithms, such as the primal-dual interior-point methods [17], which have worst-case polynomial-time complexity for a large class of functions and scale gracefully with problem size in practice.
2.4.1 High-level view
A comparison that offers insight is that of GP modeling and modeling via nonlinear programming (NLP). NLP modeling is relatively easy, since the objective and constraint
functions can be any nonlinear functions. In contrast, GP modeling can be less straightforward, since the objective and constraint functions are constrained in the forms they can
take. GP solvers exist and use simple algorithms. On the other hand, solving a general
NLP is much more difficult, and may involve compromising a global optimum for a local
one.
The most important feature of GPs is that they can be solved globally. Algorithms for
GPs always obtain a global minimum. Infeasibility is always detected. Furthermore, the
starting point does not matter; the same global solution is found regardless of the starting
point. For algorithms where the starting point is critical, heuristics must be employed to
reduce the risk of finding non-global extrema. For GP, this problem is nonexistent. In
exchange for the rigid form of GP and accepting its limitations, the user reaps its many
benefits.
Viewed another way, a linear program is an optimization problem with a stricter
limitation on the form of the objective and constraint functions (i.e., they must be linear).
Nonetheless, LP modeling is widely used, in many fields, because LPs can be solved with
great reliability and efficiency. In a similar manner, there is a science to GP modeling
and it is not a simple matter of using a software package or algorithm. Using GP requires
a basic understanding of optimization theory and the valid transformations that may be
used for effective modeling. Lastly, modeling success is not guaranteed as many problems
cannot be formulated as GPs.
2.4.2 Formulations
The theory of GP, including the convexity and duality properties, can be developed from a
basic geometric inequality: the arithmetic mean is greater than or equal to the geometric
mean. GP can be extended by other geometric inequalities. A detailed exposition of
GP theory, from both primal and dual perspectives, can be found in [17, 19]. In this
section, we provide an overview of the GP formulation and the mathematical implications
of a coordinate transformation in the properties of convexity of the standard form GP. There are two equivalent GP formulations: the standard, non-convex form and the post-transformation, convex form. The first is a constrained optimization problem constructed
There are two equivalent GP formulations: the standard, non-convex form and the posttransformation, convex form. The first is a constrained optimization problem constructed
from posynomial functions. The latter results from a logarithmic transformation of the
former. A monomial is defined as
$$f : \mathbb{R}^n_{++} \longrightarrow \mathbb{R}, \qquad f_{\text{monomial}}(x) = c \cdot x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n} \tag{2.7}$$

where $c \ge 0$ and $a_i \in \mathbb{R}$, $i = 1, 2, \ldots, n$. A sum of monomials is called a posynomial and has the following form:

$$f(x)_{\text{posynomial}} = \sum_{j=1}^{N} c_j \cdot x_1^{a_{1j}} x_2^{a_{2j}} \cdots x_n^{a_{nj}} \tag{2.8}$$

where $c_j \ge 0$, $j = 1, 2, \ldots, N$, and $a_{ij} \in \mathbb{R}$. Of key importance to note is the
positive nature of all coefficients and the convexity upon a suitable transformation. Using
definitions (2.7) and (2.8), a GP in standard form is now constructed:
$$\begin{aligned}
\text{minimize} \quad & f_o(x) \\
\text{subject to} \quad & f_i(x) \le 1, \quad i = 1, 2, \ldots, m \\
& h_l(x) = 1, \quad l = 1, 2, \ldots, M
\end{aligned} \tag{2.9}$$

where $f_i$, $i = 1, 2, \ldots, m$ are posynomials:

$$f_i(x) = \sum_{k=1}^{K_i} c_{ik}\, x_1^{a_{ik}(1)} x_2^{a_{ik}(2)} \cdots x_n^{a_{ik}(n)} \tag{2.10}$$

and $h_l$, $l = 1, 2, \ldots, M$ are monomials:

$$h_l(x) = c_l\, x_1^{a_l(1)} x_2^{a_l(2)} \cdots x_n^{a_l(n)} \tag{2.11}$$
Many decompositions and extensions of this formulation can be derived and are explored
in [8]. Additionally, a matrix form of (2.9) can be established for efficient solution using
interior-point methods traversing a feasible path within the constraint set. As can be seen,
a GP in standard form is not a convex optimization program due to the non-convexity of
the posynomial expressions. However, consider a logarithmic change of coordinates
$$y_i = \log x_i, \tag{2.12a}$$
$$b_{ik} = \log c_{ik}, \qquad b_l = \log c_l \tag{2.12b}$$
Substituting (2.12) in (2.9) yields
$$\begin{aligned}
\text{minimize} \quad & \sum_{k=1}^{K_o} \exp(a_{0k}^T y + b_{0k}) \\
\text{subject to} \quad & \sum_{k=1}^{K_i} \exp(a_{ik}^T y + b_{ik}) \le 1, \quad i = 1, 2, \ldots, m \\
& a_l^T y + b_l = 0, \quad l = 1, 2, \ldots, M
\end{aligned} \tag{2.13}$$
It is easy to see that (2.13) is equivalent to the following GP in convex form
$$\begin{aligned}
\text{minimize} \quad & \log \sum_{k=1}^{K_o} \exp(a_{0k}^T y + b_{0k}) \\
\text{subject to} \quad & \log \sum_{k=1}^{K_i} \exp(a_{ik}^T y + b_{ik}) \le 0, \quad i = 1, 2, \ldots, m \\
& a_l^T y + b_l = 0, \quad l = 1, 2, \ldots, M
\end{aligned} \tag{2.14}$$
The proof of convexity lies in the positive semi-definiteness of the Hessian of each of the objective and constraint functions of y, which follows from the following lemma:
Lemma 1. The log-sum-exp function $f(x) = \log \sum_{i=1}^{n} e^{x_i}$ is convex in x.
The proof of this statement is found in many convex optimization texts. Thus (2.14) is indeed a convex optimization program, and therefore (2.9) can be solved as one.
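To make the transformation tangible, the following MATLAB sketch solves a toy GP, minimize (x1·x2)^-1 subject to x1² + x2² ≤ 1, in its convex form (2.14) using fmincon. The toy problem is invented purely for illustration and has nothing to do with the circuit formulations of Chapter 3; in practice a dedicated GP solver (e.g., MOSEK or GGPLab, mentioned in Section 2.3) would be used.

% Toy GP:  minimize (x1*x2)^-1  subject to  x1^2 + x2^2 <= 1,  x > 0.
% In the log coordinates y = log(x) of (2.12), the objective becomes -(y1+y2)
% and the posynomial constraint becomes log(exp(2*y1) + exp(2*y2)) <= 0.
obj    = @(y) -(y(1) + y(2));                            % log of the monomial objective
nonlin = @(y) deal(log(exp(2*y(1)) + exp(2*y(2))), []);  % c(y) <= 0, no equality constraints
y0     = [-1; -1];                                       % starting point (result is independent of it)
opts   = optimoptions('fmincon', 'Display', 'off');
y_opt  = fmincon(obj, y0, [], [], [], [], [], [], nonlin, opts);
x_opt  = exp(y_opt)            % recovers x1 = x2 = 1/sqrt(2), the global optimum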
We can now understand the restrictive form of the standard form GP (2.9). First,
we see that equality constraints can only be imposed on monomial expressions. Suppose
equality constraints were to be imposed on posynomial expressions. This would result
in a non-convex problem after the change of coordinates. Secondly, in contrast to the
integer requirement of the exponential constants in a polynomial optimization problem,
GP relaxes this requirement and allows the exponential constants to be any real numbers
since they only perform an affine transformation of y. This makes the difference between
a polynomial-time problem and an NP-hard one. Lastly, the coefficients of x in (2.9)
must be positive as the logarithmic function is only defined for positive arguments. This
restriction is relaxed in an extension of GP called Signomial Programming. This results
in a general class of nonlinear, non-convex optimization problems. While posynomials are closed under positive multiplication and addition, a ratio of posynomials is not, in general, a posynomial. It can be shown that functions expressed as a ratio of posynomials are in fact
signomial expressions. We revisit this topic next in Section 2.4.3.
2.4.3 Posynomial−1 denominator approximation
In this section, we will address the inevitability of signomial terms in (2.9) in a circuit
design context. As mentioned in Section 2.4, signomial programs (SP) are an extension
of GP, where the coefficients cik in (2.10) are unrestricted in sign, and therefore cannot
be transformed into convex problems. SP relaxes the limitation of GP that lower bounds on posynomials are not allowed as inequalities (or equalities) in the standard form GP [8, 25]. Flow conservation equality constraints (i.e., Kirchhoff's current law) are
obvious examples of the need for posynomial equality constraints.
Eq. (2.15) is the relaxation introduced to the GP standard form, resulting in an SP.
$$s(x) = \sum_{i=1}^{N} c_i \prod_{j=1}^{n} x_j^{a_i(j)}, \qquad \text{where } c_i \in \mathbb{R},\; a_i(j) \in \mathbb{R}\;\; \forall\, i, j,\quad x \in \mathbb{R}^n_{++} \tag{2.15}$$
The four major approaches to solving SP, either exactly or approximately, are:
• branch and bound type methods
• relaxations that are provably tight
• reversed GP
• complementary GP
In the next subsection we discuss the need for modeling schemes to solve SPs and
present a simple SP algorithm most similar to complementary GP [19].
2.4.4 Posynomial−1 denominator algorithm
A mathematical representation of parallel impedances in the transmitter segment considered in Section 3.1 gives rise to expressions of the form given by (2.16). The ubiquity
of these types of expressions lines the boundary between generalized nonlinear programs
and GP. This discontinuity introduces a need for modeling approximation for degenerate
cases of posynomial−1 expressions. For example, (2.16) does not qualify as a generalized
posynomial [8, 19]. The negative coefficients in the Taylor expansion of (2.16) reveal its
non-convex nature.
$$\frac{1}{\sum_{i}^{n} c_i \cdot x_1^{a_{1i}} x_2^{a_{2i}} \cdots x_n^{a_{ni}}} \tag{2.16}$$
It should be clear that functions of the form (2.16) are neither GP nor convex. Now
consider the following expression
$$\frac{1}{\alpha \cdot c_k \cdot x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}} \tag{2.17}$$
A sensible approximation for functions of the form (2.16) is (2.17), where α is suitably
chosen to account for the discarded information. Care must be taken in making such an
approximation since a decrease in order of the original function results, and therefore a
critical dimension of the original space may be unaccounted for in the problem formulation. For analog circuit design problems, these approximations are usually determined
by dominant terms in the denominator sum of (2.16). Several approaches to handling
posynomial−1 expressions are presented in [19, 25]. A simple alternative is to suitably
approximate the posynomial−1 expression as a monomial and compute an error function
to determine a scalar correction factor. The following algorithm outlines this idea.
Given a function of the form:
$$f(x) = \frac{p(x)}{q(x)} \tag{2.18}$$

where both p(x) and q(x) are posynomials of the form:

$$q(x) = \sum_{k=1}^{M} c_k\, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}} \tag{2.19}$$
the algorithm for a candidate choice of α proceeds as follows:
1. Approximate the denominator as a monomial:

   $$q'(x) = c_1\, x_1^{a_{11}} x_2^{a_{21}} \cdots x_n^{a_{n1}} \tag{2.20}$$

   thus

   $$\frac{p(x)}{q'(x)} > \frac{p(x)}{q(x)} \tag{2.21}$$

2. Compute the initial error of the approximation as:

   $$\chi_{error} = \frac{q(x^*) - q'(x^*)}{q'(x^*)} \tag{2.22}$$

   $$\Longrightarrow \alpha = 1 + \chi_{error} = \frac{q(x^*)}{q'(x^*)} \tag{2.23}$$

   where x* is a solution to the GP formulation with the approximation in step 1.

3. Approximate q(x) with the adjusted value, q''(x):

   $$q''(x) = \frac{q(x^*)}{q'(x^*)} \cdot q'(x) \tag{2.24}$$

4. Check that the solution is within some tolerance:

   $$q(x) - q''(x) \le |\delta| \tag{2.25}$$
Since α ≥ 1, we can write the following inequality

$$\frac{p(x)}{q'(x)} > \frac{p(x)}{q''(x)}$$

However, there is no guarantee that

$$\frac{p(x)}{q''(x)} > \frac{p(x)}{q(x)} \tag{2.26}$$

holds, and there is no guarantee that the error decreases. Although no proof of convergence is available for this algorithm, it works well, and we have demonstrated its convergence through the example in Section 3.1.
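A minimal numerical sketch of steps 1–4 is given below for a single, arbitrary posynomial denominator evaluated at a fixed point x*. The coupling to an actual GP solve (which is where x* really comes from) is omitted, so this only illustrates how the correction factor α is computed and applied.

% Sketch of the posynomial^-1 denominator correction of Section 2.4.4.
% q(x) = c1*x1*x2 + c2*x1^2  (arbitrary example posynomial denominator).
q      = @(x) 3*x(1)*x(2) + 0.5*x(1)^2;
q_mono = @(x) 3*x(1)*x(2);           % step 1: keep the dominant monomial term

x_star = [2; 1];                     % stand-in for the solution of the GP
                                     % solved with the monomial approximation
chi    = (q(x_star) - q_mono(x_star)) / q_mono(x_star);   % step 2: error
alpha  = 1 + chi;                    % alpha = q(x*)/q'(x*) >= 1

q_adj  = @(x) alpha * q_mono(x);     % step 3: adjusted monomial denominator

tol    = 0.05 * q(x_star);           % step 4: check the adjusted fit at x*
assert(abs(q(x_star) - q_adj(x_star)) <= tol);   % exact at x* by construction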
2.5 Multi-dimensional and monomial fitting of PO surfaces
The strategy for generating the PO surfaces employed for the transmitter segment discussed in Chapter 3 involves sweeping the constraints. This is analogous to the verification
procedure in (2.4). The advantage of adopting this strategy for generating the PO surfaces is three-fold: 1) a single function objective can be used, 2) under a single function
objective, the weighting scalars, λ, do not need to be determined, and 3) often it is more
natural to think in terms of constraints rather than minimization/maximization of various
performance metrics.
In fitting the PO surfaces into monomial expressions, we must ensure that the design
space can be approximated by a monomial. In formulating (2.5) as a GP, we come halfway. As a check on the monomial fitting approximation, we require that the relative errors associated with the approximation satisfy $|\delta_{relative\ error}| \le \epsilon$, where $\epsilon$ is determined by the application.
For the case where the selected MP strategy is GP, once we have the PO data,
f(x1, ..., xn), we form the logarithmically transformed function²

$$F(y) = \log f(e^y) \tag{2.27}$$

Now, f can be approximated by a monomial if and only if F can be approximated by an affine function [8]. The converse can also be easily shown: if F can be approximated by an affine function (i.e., a convex function), then f can be approximated by a monomial. A basic result of convex analysis is that any convex function can be arbitrarily well approximated by piecewise-linear convex functions expressed as a maximum of a set of affine functions [17, 26]. Thus, we have

$$F(y) \approx \max_{i=1,\ldots,p} \left(\phi_{0i} + \phi_{1i} y_1 + \cdots + \phi_{ni} y_n\right) \tag{2.28}$$

² As discussed in Section 2.4.2, this transformation is used to transform a GP into a convex problem.
This is the same as

$$f(x) \approx \max_{i=1,\ldots,p}\; e^{\phi_{0i}}\, x_1^{\phi_{1i}} \cdots x_n^{\phi_{ni}} \tag{2.29}$$
The right-hand side is the maximum of p monomials, and therefore convex. In general,
determining convexity for f (x1 , . . . , xn ) can be difficult. A number of techniques are presented in [17] for determining convexity. Graphical interpretations of convexity/concavity
are illustrated in Fig. 2-3. A formal definition of convexity is given in [26] as:
Definition 1. A function $f : \mathbb{R}^n \mapsto \mathbb{R}$ is called convex if for every $x, y \in \mathbb{R}^n$ and every $t \in [0, 1]$, we have

$$f(tx + (1 - t)y) \le t f(x) + (1 - t) f(y) \quad \text{(see Fig. 2-3(a))}. \tag{2.30}$$

Corollary 1. Convexity of F is stated in terms of f by requiring that the following inequality hold [8]:

$$f(x_1^t \tilde{x}_1^{1-t}, \ldots, x_n^t \tilde{x}_n^{1-t}) \le f(x_1, \ldots, x_n)^t\, f(\tilde{x}_1, \ldots, \tilde{x}_n)^{1-t} \qquad \forall\; 0 \le t \le 1. \tag{2.31}$$
In general, for a function f(x1, ..., xn) with a Hessian given by

$$H(f(x_1, \ldots, x_n)) = \nabla^2 f(x_1, \ldots, x_n) = \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\
\frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2}
\end{bmatrix} \tag{2.32}$$
convexity requires H to be positive semi-definite, which is the same as requiring the
eigenvalues, vi , of H to satisfy vi ≥ 0.
In performing the monomial fitting for the PO data resulting from sweeping the constraints of (2.5), we used our own custom algorithm. Common MATLAB functions that
are readily available for the same task include fmincon, the Curve Fitting Toolbox, the Optimization Toolbox, etc.
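As a heavily simplified stand-in for that fitting step, the sketch below fits a monomial P ≈ c·A^a1·f^a2 to sampled data by least squares in log coordinates, i.e., the affine fit of (2.28) with p = 1. The synthetic data are generated from a known monomial purely so the recovered exponents can be checked; in the actual flow, the data would come from the GP constraint sweeps.

% Monomial fit P ~= c * A^a1 * f^a2 via least squares in log space (p = 1 affine fit).
% Synthetic data from a known monomial (plus small noise) so the recovered
% exponents can be sanity-checked; real data would come from the GP sweeps.
rng(0);
n = 200;
A = 10.^(2*rand(n,1));                     % gain samples in [1, 100]
f = 10.^(6 + 3*rand(n,1));                 % bandwidth samples in [1e6, 1e9] Hz
P = 1e-4 * A.^0.9 .* f.^0.5 .* exp(0.01*randn(n,1));   % "power" data (W)

X     = [ones(n,1), log(A), log(f)];       % affine model in log coordinates
theta = X \ log(P);                        % least-squares solution
c  = exp(theta(1));   a1 = theta(2);   a2 = theta(3);

P_fit   = c * A.^a1 .* f.^a2;
rel_err = max(abs(P_fit - P) ./ P)         % should be on the order of the noise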
Figure 2-3: Graphical interpretation of convexity and non-convexity and some examples. (a) 2-D definition of convexity; (b) convexity versus concavity; (c) non-convex function; (d) another non-convex function.
Chapter 3
Segment of transmitter chain example

3.1 Introduction
Fig. 3-1 illustrates the H-BU design methodology applied to a segment of a transmitter.
We consider an amplifier and mixer cascade of Fig. 3-2. We arrive at this decomposition
by specification of the system designer.
Figure 3-1: H-BU flow for transmitter segment
Figure 3-2: H-BU flow for transmitter segment

At this stage, topologies for each of the blocks are fixed and we can generate the Pareto-optimal surfaces for each block using a suitable MP solver. These surfaces fully
characterize each of the blocks in an optimal sense. Fitting the PO surfaces into a functional form and passing these to the system level allows for the formulation of a system-level MP and generation of a system PO surface for final, optimal allocation of resources.
3.2 Subsystem formulations
The block-level formulations require the manipulation of the algebraic expressions capturing the constraints into GP form. When manipulation into GP form is not possible we
must resort to the discussion in Section 2.4.3. Appendix B provides MATLAB pseudo code
for each of the formulations. The templates for the block-level formulations are given
in Eq. (3.1) and (3.2). Note that the general framework of the H-BU can accommodate
many different forms of mathematical programming. Simulation-based methods, and even
evolutionary algorithms, are possible alternatives to GP.
Amplifier Formulation:

$$\begin{aligned}
\text{minimize} \quad & P_{amp} \\
\text{subject to} \quad & A_{amp} \ge A_{amp,min} \\
& f_{amp} \ge f_{amp,min} \\
& \text{Saturation constraints} \\
& \text{Breakdown constraints} \\
& \text{Model constraints}
\end{aligned} \tag{3.1}$$

Mixer Formulation:

$$\begin{aligned}
\text{minimize} \quad & P_{mix} \\
\text{subject to} \quad & A_{mix} \ge A_{mix,min} \\
& f_{mix} \ge f_{mix,min} \\
& IIP3 \ge IIP3_{min} \\
& g_m \cdot R_E \ge 10 \\
& \text{Variations in } \beta,\, v_{BE} \\
& \text{Saturation constraints} \\
& \text{Breakdown constraints} \\
& \text{Model constraints}
\end{aligned} \tag{3.2}$$
3.2.1 Amplifier formulation
In formulating the selected amplifier topology (Fig. 3-3), we ensure proper device operation such as keeping all transistors in the forward-active regime, avoiding the breakdown
voltages, and obeying other model constraints. These requirements can all be formulated
as mathematical constraints easily amenable to GP. In what follows, we use the hybrid-π
model of Fig. 3-4.
Voltage gain, Aamp
The voltage gain expressions for the various amplifier configurations in the first stage of
Fig. 3-2 are easily derived and manipulated into GP form. These are given in Eq. (3.3)
for the first stage of our amplifier:
$$\underbrace{\frac{(\beta + 1)R_E}{R_B + r_b + r_\pi + (\beta + 1)R_E}}_{\text{Emitter-Follower}} = A_{v,EF} \tag{3.3a}$$

$$\underbrace{\frac{\beta R_L}{r_b + r_\pi}}_{\text{Common-Emitter}} = A_{v,CE} \tag{3.3b}$$

$$\underbrace{\frac{g_m r_\pi}{R_{S,EF} + r_b + r_\pi} \cdot \frac{r_b + r_\pi}{\beta + 1}}_{\text{Common-Base}} = A_{v,CB} \tag{3.3c}$$

since $\beta \gg 1 \Rightarrow (\beta + 1) \to \beta$ and $r_b \ll r_\pi \Rightarrow (r_b + r_\pi) \to r_\pi$. Now,

$$(A_{v,EF} \cdot A_{v,CE} \cdot A_{v,CB})^{-1} \cdot A_{v,total} \le 1 \tag{3.4}$$

is in standard GP form.

Figure 3-3: Two stage amplifier block of transmitter segment
Figure 3-4: Hybrid-π model for bipolar transistor
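For a quick numerical sanity check of (3.3) and (3.4), the sketch below evaluates the three stage gains for illustrative hybrid-π element values; these values are placeholders and are not the devices or bias points used in this design.

% Stage gains of (3.3) for illustrative (not design) hybrid-pi element values.
beta = 100;  gm = 40e-3;  rpi = beta/gm;  rb = 50;           % small-signal model
RB = 1e3;  RE = 100;  RL = 500;  RS_EF = 200;                % external resistors

Av_EF = (beta+1)*RE / (RB + rb + rpi + (beta+1)*RE);         % emitter follower
Av_CE = beta*RL / (rb + rpi);                                % common emitter
Av_CB = (gm*rpi/(RS_EF + rb + rpi)) * ((rb + rpi)/(beta+1)); % common base

Av_total = Av_EF * Av_CE * Av_CB;
Av_req   = 10;                                % required total gain (placeholder)
feasible = Av_total^(-1) * Av_req <= 1        % GP-form check of (3.4)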
Amplifier bandwidth, famp
The bandwidth constraint for the amplifier in Fig. 3.1 is captured using the method of
open-circuit time constants (OCτ s) [27,28]. Developed at MIT in the mid-1960s, this is a
very powerful technique for analyzing high-bandwidth circuits. Moreover, not only does
the OCτ method yield a conservative bandwidth estimate, but it also helps identify which
elements are responsible for bandwidth limitations.
Assuming all poles and zeros have negligible imaginary components, a conservative
bandwidth estimate for a circuit with a transfer function of the form
$$\frac{V_o}{V_i}(s) = \frac{1}{(\tau_1 s + 1)(\tau_2 s + 1)\cdots(\tau_n s + 1)} \tag{3.5}$$

which expands to

$$\frac{V_o}{V_i}(s) = \frac{1}{\tau_1 \tau_2 \cdots \tau_n s^n + \ldots + \left(\sum_i^n \tau_i\right)s + 1} \tag{3.6}$$

can be shown [29] to be well approximated by

$$f_h \approx \left(2\pi \sum_i \tau_i\right)^{-1} \tag{3.7}$$
where $\tau_i$ is the time constant associated with the ith capacitor in the circuit, given by

$$\tau_i = R_{io}\, C_i \tag{3.8}$$

In Eq. (3.8), $R_{io}$ is the effective resistance facing the ith capacitor with all of the other capacitors removed (i.e., open-circuited). Intuitively, the reciprocal of each ith OCτ is the bandwidth that the circuit would exhibit if that ith capacitor were the only one in
the system.
Fig. 3-5 shows the most complicated transistor configuration for computing OCτ .
Figure 3-5: Worst-case transistor configuration for OCτ calculation.

Using the hybrid-π model (Fig. 3-4) and ignoring Ccs, the only capacitances in the small-signal model of Fig. 3-5 are cµ and cπ. It can be shown that the effective resistances are given by:
$$r_{\pi o} = r_\pi \,\Big\|\, \frac{R_B + R_E}{1 + g_m R_E}$$

$$r_{\mu o} = R_B \,\|\, \big(r_\pi + (\beta + 1)R_E\big) + R_L + \frac{g_m r_\pi}{r_\pi + (\beta + 1)R_E}\,\big(R_B \,\|\, (r_\pi + (\beta + 1)R_E)\big)\, R_L \tag{3.9}$$
Clearly, Eq. (3.9) will give rise to large posynomial−1 expressions requiring the treatment given in Section 2.4.3 of Chapter 2.
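The sketch below evaluates (3.7)–(3.9) for a single transistor in the configuration of Fig. 3-5; the element and capacitance values are illustrative placeholders rather than the actual design values.

% Open-circuit time-constant bandwidth estimate, Eqs. (3.7)-(3.9), for one
% transistor in the configuration of Fig. 3-5 (illustrative values only).
beta = 100;  gm = 40e-3;  rpi = beta/gm;
RB = 1e3;  RE = 100;  RL = 500;
cpi = 10e-12;  cmu = 1e-12;
par = @(a,b) a.*b./(a+b);                       % parallel combination

r_pio = par(rpi, (RB + RE)/(1 + gm*RE));        % resistance facing c_pi
Rleft = par(RB, rpi + (beta+1)*RE);
r_muo = Rleft + RL + (gm*rpi/(rpi + (beta+1)*RE))*Rleft*RL;  % facing c_mu

tau = [r_pio*cpi, r_muo*cmu];                   % open-circuit time constants
fh  = 1/(2*pi*sum(tau))                         % conservative bandwidth estimate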
3.2.2 Mixer formulation
In formulating the selected single-balanced mixer topology (Fig. 3-6), we proceed in a
similar manner to the amplifier formulation. That is, we ensure proper device operation
such as keeping all transistors in the forward-active regime, avoiding the breakdown voltages, and obeying other model constraints. These requirements can all be formulated as
mathematical constraints easily amenable to GP.
Figure 3-6: Single-balanced mixer block of transmitter segment
Mixer conversion gain, Amix
To capture the conversion gain requirement of this block, we first note that if we assume
complete and instantaneous switching of the Q8 − Q9 pair at each zero-crossing point of
the VLO square wave, then the small-signal gain is given by that of a common-emitter
amplifier with emitter degeneration. That is,
$$|A_{v,mix}| = \left|\frac{v_{out}(t)}{v_{source,RF}(t)}\right| = \frac{g_m}{1 + g_m R_{E,MIX}} \cdot R \tag{3.10}$$
To take into account the modulating effect of the switching pair caused by VLO , we
assume hard-switching of Q8 and Q9. If Q8 and Q9 are driven by a 50% duty-cycle square wave toggling between +1 and -1, it can be shown that the fundamental of VLO is given by

$$V_{LO,fund} = \frac{4}{\pi}\cos(\omega_{LO} t). \tag{3.11}$$
Therefore Eq. (3.10) becomes,
$$|A_{v,mix}| \cdot V_{LO,fund} = \left|\frac{v_{out,IF}(\omega_{IF})}{v_{source,RF}(\omega_{RF})}\right| = \frac{g_m}{1 + g_m R_{E,MIX}} \cdot R \cdot \frac{4}{\pi}\cos(\omega_{LO} t) \tag{3.12}$$
Finally, since multiplication of vsource,RF(t) by cos ωLO t is equivalent to shifting vsource,RF(ω)
by ±ωLO and dividing the result by 2, the IF output magnitude of Eq. (3.12) is thus given by

|Av,mix|IF = |vout,IF(ωIF)/vsource,RF(ωRF)| = [gm/(1 + gm RE,MIX)] · R · (2/π).    (3.13)
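As a quick numerical sanity check of the 2/π factor in Eq. (3.13), the MATLAB fragment below multiplies a unit-amplitude RF tone by a hard-switched ±1 LO and extracts the IF component by synchronous detection. The frequencies are arbitrary illustration values, not the operating frequencies of this design.

% Check of the 2/pi IF conversion factor of an ideal switching mixer.
fRF = 900e6; fLO = 800e6; fIF = fRF - fLO;    % assumed example frequencies
fs  = 20e9;  N = 400000; t = (0:N-1)/fs;      % fine time grid, 2000 IF cycles
vRF  = cos(2*pi*fRF*t);                       % unit-amplitude RF tone
vLO  = sign(cos(2*pi*fLO*t));                 % hard-switched LO (+1/-1 square wave)
vOut = vRF .* vLO;                            % ideal switching (multiplying) mixer
aIF  = 2*mean(vOut .* cos(2*pi*fIF*t));       % synchronous detection at fIF
fprintf('IF amplitude = %.4f, 2/pi = %.4f\n', aIF, 2/pi);

The recovered IF amplitude matches 2/π ≈ 0.6366; that is, the gm R/(1 + gm RE,MIX) factor of Eq. (3.10) is simply scaled by 2/π once the switching and the frequency translation are accounted for.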
Mixer bandwidth, fmix
In considering the speed requirements for the transmitter segment, we note that we are
bandwidth-limited by the amplifier block. Since all nodes of the mixer block are low-impedance
nodes, except for the output nodes, each node contributes a time constant on
the order of 1/fT of the transistor.¹ Therefore, assuming R ≫ 1/gm, the load resistor R, which
also determines the conversion gain, will determine the bandwidth of this block. Using
the OCτ method discussed earlier, the time constant created by R is given by

τmix = R CL    (3.14)

where CL is the load capacitance the mixer output must drive.
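As a worked example with assumed values (chosen only to illustrate the arithmetic, not taken from this design), R = 500 Ω and CL = 1 pF give τmix = 0.5 ns, so the mixer output pole sits near 1/(2π τmix) ≈ 318 MHz; halving R doubles this bandwidth but, by Eq. (3.13), also halves the conversion gain.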
¹It can be shown that ωT = 2πfT = gm/(cπ + cµ).
Mixer linearity, IIP 3
Emitter degeneration is added to the transconductor stage of the mixer in Fig. 3-6 to
achieve reasonable linearity through feedback (Fig. 3-7). However, doing so degrades
the transconductance, gm , of the block and therefore the conversion gain the block can
achieve. In formulating the mixer block as a GP, we ensure the tradeoff between linearity
and gain is captured by adding a linearity constraint in the form of IIP3. This metric
quantifies the separation in magnitude between the first-order (fundamental) components
and the third-order intermodulation products that result when two signals of different
frequencies are applied to a nonlinear system [27, 30].
Figure 3-7: Effect of feedback on distortion. s = si − f so .
The nonlinearity of transistors is a fundamental consequence of the device physics.
Due to Boltzmann statistics, the collector current for Qi can be described very accurately
by
IC ≈ IS exp[(VBEo + Vin)/VT]    (3.15)
   = IS exp(VBEo/VT) · [1 + Vin/VT + (1/2)(Vin/VT)² + (1/6)(Vin/VT)³ + · · ·]    (3.16)
Expanding Eq. 3.16 for an input of the form Vin (t) = A1 cos(ω1 t) + A2 cos(ω2 t) yields an
output of the form
I(t) = α1(A1 cos ω1t + A2 cos ω2t) + α2(A1 cos ω1t + A2 cos ω2t)² + α3(A1 cos ω1t + A2 cos ω2t)³ + · · ·    (3.17)

where the coefficients αk follow from the power series in (3.16).
It is clear that for a two-tone input of the form Vin (t) = A1 cos(ω1 t) + A2 cos(ω2 t), the
collector current given by (3.16) will contain harmonic (Fig. 3-8) and intermodulation
products at ω1 ± ω2 , 2ω1 ± ω2 , and 2ω2 ± ω1 , and fundamental components at ω1 , ω2 .
If the difference between ω1 and ω2 is small, the components at 2ω1 − ω2 and 2ω2 − ω1
appear in the vicinity of ω1 and ω2 (Fig. 3-9).
Figure 3-8: Output waveform due to 3rd-order harmonic distortion.
Intermodulation is a troublesome effect in RF systems. As can be seen from Fig. 3-9,
third-order nonlinearity produces a weak component (at 2ω1,2 − ω2,1) right next to the signal of interest (at ω1,2), corrupting the desired component. Measured by a two-tone test, the
third-order intercept point parameter (IP3) helps characterize this phenomenon. It can
be shown that as A1 = A2 = A increases, the fundamentals increase in proportion to A,
whereas the third-order intermodulation products increase in proportion to A³.² This is
plotted on a logarithmic scale in Fig. 3-10. The third-order intercept point is defined to
be at the intersection of the two lines and is algebraically expressed [30] as

IIP3 = √[(4/3)(α1/α3)]    (3.18)
²Care must be taken in applying this analysis when measuring IP3. In particular, this analysis
provides an estimate for IP3 during the initial phases of design. The actual value of IP3 must be
obtained through accurate extrapolation to ensure that all nonlinear and frequency-dependent effects are
accounted for.
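Equation (3.18) can be checked against the exponential nonlinearity of Eq. (3.16) with a short two-tone experiment in MATLAB. With no degeneration, α1/α3 = 6VT², so the intercept amplitude should be 2√2·VT ≈ 73 mV; the tone frequencies and amplitudes below are arbitrary illustration values.

% Two-tone estimate of the intercept amplitude for ic = exp(vin/VT).
VT = 0.026;                                   % thermal voltage [V]
f1 = 1.00e6; f2 = 1.05e6;                     % closely spaced test tones
fs = 200e6;  N  = 400000; t = (0:N-1)/fs;     % 2 ms observation window
A  = 2e-3;                                    % small tone amplitude [V]
vin = A*cos(2*pi*f1*t) + A*cos(2*pi*f2*t);
ic  = exp(vin/VT);                            % normalized collector current
fund = abs(2*mean(ic .* exp(-1j*2*pi*f1*t)));          % component at f1
im3  = abs(2*mean(ic .* exp(-1j*2*pi*(2*f1 - f2)*t))); % component at 2f1 - f2
A_IIP3 = A*sqrt(fund/im3);                    % extrapolated intercept amplitude
fprintf('A_IIP3 = %.1f mV (2*sqrt(2)*VT = %.1f mV)\n', 1e3*A_IIP3, 1e3*2*sqrt(2)*VT);

The small discrepancy that remains is exactly the extrapolation issue raised in the footnote: at finite tone amplitude, higher-order terms perturb both the fundamental and the intermodulation product.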
Figure 3-9: Two-tone intermodulation products
Figure 3-10: Third-order intercept point, IIP3. (a) Measuring IP3 without extrapolation; (b) graphical interpretation of 3-10(a).
The transconductor of Fig. 3-6 can be represented by Fig. 3-11. The total signal
applied to the base of the amplifier is
vi + VQ = VBE + IE RE
(3.19)
Splitting the VBE and IE terms into their DC and AC components (assuming α ≈ 1)
vi + VQ = VBE,Q + vbe + (IQ + ic )RE
(3.20)
Figure 3-11: Transconductor of Fig. 3-6
Subtracting the bias terms to separate AC and DC equations
VQ = VBE,Q + IQ RE
(3.21)
vi = vbe + ic RE
(3.22)
Rewriting Eq. (3.22) as
vbe = vi − ic RE
(3.23)
and comparing to the error signal s of Fig. 3-7 we see that
s = vbe
(3.24a)
s o = ic
(3.24b)
si = v i
(3.24c)
f = RE
(3.24d)
It can be shown [31, 32] that

AIIP3 = 2√2 · √[(VT + Ic RE)³ / VT]    (3.25)

Assuming gm RE ≫ 1, Eq. (3.25) simplifies to

AIIP3 ≈ 2√2 · √[(Ic RE)³ / VT]    (3.26)

where VT = kT/q is the thermal voltage (VT ≈ 26 mV at room temperature). Note that the
requirement gm RE ≫ 1 is enforced in the pseudo-formulation of (3.2) as gm RE ≥ 10.
Eq. (3.26) is a monomial expression and is comfortably included in the mixer GP
formulation.
3.3
Hierarchical system issues
In this section, we address hierarchical system issues, namely incorporating parasitic
effects into the formulations and the notion of system level interface variables. The ability
to incorporate parasitic effects early in the design process is advantageous in the selection
of viable topologies for various blocks. Including these effects corresponds to additional
constraints in Eq. (3.1) and (3.2). Likewise, interface variables provide a means to connect
the block-level formulations to reconstruct the system-level application.
3.3.1
Parasitic-aware optimization
A major advantage of adopting an equation-based design strategy is the ability to include
parasitic effects at an arbitrary level of conservativeness. Parasitic effects that can be
modeled mathematically can be included in the block- or system-level MP. In our design
example, certain layout-related parasitic capacitances were straightforward to model and
account for. Including these effects early on can aid in the selection of feasible block-level
topologies. Moreover, sensitivities to variations in the design environment (e.g., temperature, process,
component tolerances) can also fit within various optimization frameworks [3, 5, 33].
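As a concrete illustration (the node is chosen hypothetically, not taken from the actual layout), a parasitic capacitance cparasitic hanging on the amplifier output adds a term of approximately RL · cparasitic to the sum of open-circuit time constants in Eq. (3.7); the bandwidth constraint remains a posynomial inequality, so the parasitic is absorbed into the GP at essentially no extra cost.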
3.3.2
Managing Interface variables
Once the block-level formulations are complete, they must be connected in such
a way as to minimize loading and other interconnect effects. These effects are captured
in a mathematical program via the use of interface variables since the interdependencies among blocks are not automatically accounted for. Interface variables function as
system-level couplers. For the design example presented here, we dealt with this issue by
constraining our designs to have an appropriate impedance mismatch. For example, in connecting the
amplifier with the mixer in Fig. 3-1, we ensure that Zout,amp ≪ Zin,mixer. Consequently,
the system-level MP must have access to interface variables and constraints to enforce
the interconnect requirements.
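For instance, a 10× margin (the margin value here is assumed only for illustration) can be written as the monomial constraint 10 · Zout,amp / Zin,mixer ≤ 1, which is directly compatible with the GP form of the system-level program.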
Chapter 4
Experimental Results
A set of PO surfaces for the amplifier and mixer blocks is shown in Fig. 4-1. As discussed in Chapter 2, the PO surfaces are produced by sweeping the constraints. This is
an alternative to the feasible set description proposed in [15].
Figure 4-1: Hierarchical Pareto surfaces. (a) Amplifier Pareto surface; (b) Mixer Pareto surface.
A practical sweep range is technology-dependent and can be narrowed a priori by the designer. While design
space narrowing is not required, it reduces the number of optimization runs necessary to
produce the PO data. Moreover, keeping the sweep ranges constrained to within practical operation ranges of the application minimizes the relative errors associated with the
multi-dimensional fitting problem discussed in Section 2.5 of Chapter 2. The hierarchical
surfaces resulting from sweeping the bandwidth and gain constraints of (2.5) are shown
in Fig. 4-2.
Figure 4-2: System Pareto surfaces. (a) Transmitter Pareto surface I; (b) Transmitter Pareto surface II.
Fig. 4-1(a) shows the results of sweeping the bandwidth and gain constraints of the
amplifier block. Additional block and system PO surfaces are included in Appendix C.
One should note the smooth nature of the surfaces, but also the increasing curvature as
power in the amplifier increases. Fig. 4-3(a) shows the projection of Fig. 4-1(a) onto
a 2-D plane where this curvature is more clearly seen. From Section 2.5 of Chapter 2,
we know that a monomial approximation of a function f(x1, . . . , xn) consists of an affine
approximation (i.e., φ0 + φ1 y1 + · · · + φn yn) of the convex or nearly convex function
F(y) = log f(e^y). It is easy to see that an affine function will not perform well where
there is curvature after the logarithmic transformation. This is analogous to saying that a
linear function cannot contain second-order information and therefore will necessarily have
an associated error when used as an approximation for a function containing high-order
information.
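As a small worked example (the coefficients are arbitrary, chosen only to illustrate the mapping), the monomial 3·x1^1.7·x2^−0.8 becomes the affine function log 3 + 1.7 y1 − 0.8 y2 under yi = log xi; any curvature in F(y) = log f(e^y) is therefore, by construction, unrepresentable by a single monomial and shows up as fitting error.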
Fig. 4-1(b) shows the result of sweeping the bandwidth, conversion gain, and IIP 3
constraints of the mixer block. As before, the reader may note the smooth nature of the
surface. Again, a monomial fit consists of a logarithmic transformation and first-order
approximation of the data. Curvature in the PO surfaces manifests itself as errors in a
monomial approximation. These errors are illustrated for each sub-block in Fig. 4-4.¹
Figure 4-3: Block 2-D GP sweeps to illustrate the cause for nonzero relative errors in Fig. 4-4. (a) Amplifier 2-D GP sweep; (b) Mixer 2-D GP sweep.
Figure 4-4: Associated relative errors due to monomial fitting of PO surfaces. (a) Amplifier monomial fit relative error; (b) Mixer monomial fit relative error.
We have used GP as the MP vehicle to carry out H-BU here; however, a designer's
experience may guide the selection of a more suitable MP so as to minimize the
inaccuracies that are fundamental to any MP formulation. In the case of GP, one way to
reduce the errors would be to employ posynomial fitting in the multi-dimensional fitting
step of H-BU. Techniques for fitting data into a posynomial form are presented in [8].
A posynomial approximation would capture higher-order information and could be used
to reduce the larger errors, but the effort would exceed the marginal returns. The reason
the errors are tolerable here is that one can design around them, since it is known how
the monomial approximations deviate from the block-level information. Furthermore,
the inaccuracy in going from PO data to approximation can be neglected if the order of
magnitude of the errors is less than that associated with process variations, device
mismatches, systematic offsets, fundamental noise limits, etc.
Table 4.1 summarizes the MP formulation, simulation, and hardware results for each
circuit block across three power levels. Fig. 4-5 shows a 2-D plot of the amplifier bandwidth versus power for the GP, simulated, and measured data.²
¹Notice that from these plots, we can see the order in which the constraints were swept for the generation of the amplifier PO surfaces, i.e., bandwidth then gain. This is seen from the spikes in the relative errors for the amplifier and the decreasing curvature in the top plot of Fig. 4-3(a) for increasing gains.
²Note that the GP data is consistently conservative in predicting bandwidth for a given power budget. This is to be expected since we have used the method of OCτ for capturing the bandwidth constraint in (3.1) for the formulation of the amplifier.
Figure 4-5: GP PO, measured, and simulated 2-D amplifier tradeoff curves.
Table 4.1: Amplifier & Mixer data

Amplifier data
Power    Performance metric    GP      Simulated    Measurement
50 mW    famp (MHz)            900     1197         996
         Aamp (V/V)            1.6     2.64         2.44
         Pamp (mW)             50      53           50
15 mW    famp (MHz)            900     1378         1081
         Aamp (V/V)            .672    1.07         1.09
         Pamp (mW)             15      19           16
5 mW     famp (MHz)            900     1195         1102
         Aamp (V/V)            .253    .416         .479
         Pamp (mW)             5       8.79         8.61

Mixer data
Power    Performance metric    GP      Simulated    Measurement
32 mW    IIP3 (dBm)            −6.5    −1.7         −4 to −7
         Amix (V/V)            2       2.21         3.4
         Pmix (mW)             32      33           30.9
20 mW    IIP3 (dBm)            −8.5    −4.5         −6.1 to −8
         Amix (V/V)            1.25    1.25         1.42
         Pmix (mW)             20.5    17.2         18.7
16 mW    IIP3 (dBm)            −9.5    −5.62        −10 to −12.5
         Amix (V/V)            1       1            1.34
         Pmix (mW)             16      17.2         15.6
Chapter 5
Conclusions
Relatively speaking, analog circuits are still crafted by a small number of experienced analog designers. In the world of mixed-signal design, the mostly manual, simulation-based design
methodology has serious shortcomings. There is an economic need for shortened analog
design schedules to reduce time to market. Efficient analog porting and re-use is a necessity for reducing overall design and manufacturing costs. Robust designs that meet
specifications over multiple process corners are necessary to achieve high yields.
It is only with great difficulty and a large investment of man-hours that the traditional
analog design methodology allows porting to mixed-signal applications and new process
technologies. For large systems composed of many blocks, the tradeoff information captured by the Pareto-optimal surfaces of the various blocks is of enormous value.
The value of this information lies in its ready exposure of relations hidden beneath strong
nonlinearities, indirect parameter correlations, subsystem trans-coupling, and otherwise
non-intuitive interactions among blocks. This information allows system designers to
make well-informed decisions about how to allocate the available resources.
Moreover, if each block is kept within a suitable optimization framework, then
each can be optimized efficiently.
What this work has demonstrated is that a harmony between optimization practice
and analog circuit design can lead to a more efficient, insightful, and technology-porting-friendly design methodology. H-BU reduces the time consumption and ameliorates the
inefficiencies of the traditional analog circuit design methodology. Furthermore, H-BU
promises a systematic way of compiling an analog library.
If circuit design problems can be formulated as mathematical programs, either exactly or approximately, they can be optimized efficiently across a broad design space. The
methodology proposed here is a valuable aid to a circuit designer, who, working manually, can
optimize for only one set of specifications at a time. Regardless of the particular optimization framework adopted at the bottom level, the system-level formulation
is generally amenable to a simple mathematical programming form. We see the proposed
methodology as a powerful aid for the designer of large, complex, heterogeneous systems.
5.1
Thoughts on Future work
Until now, we have carefully avoided a number of details facilitating the use of H-BU.
These issues remain mostly on the optimization theory side of the methodology. Nonetheless, it is worthwhile to understand where the limitations of an equation-based optimization approach lie and how one can work around them.
By way of the design example presented here, we have applied the H-BU design
flow to a discrete-component system. A simple modification could be made to handle
the integer optimization problem in the PO surface generation step of H-BU. This modification, if practical, would allow the optimization and measured
results to match more closely. However, integer programs are harder to solve and would require some care.
In addition, for the circuit block formulations presented in Chapter 3, we adhered
to a single, rigid optimization framework (i.e., GP). When it comes to modeling, GP
turns out to be a practical choice for circuit design. We have presented an algorithm to
handle certain classes of non-convex issues arising in circuit design, not amenable to GP,
in Chapter 2. However, if the same design example of Chapter 3 were implemented in a
CMOS process, this algorithm would not suffice. In this case, a different
class of nonlinear optimizer would be necessary.
There is no recipe for determining what optimizer would work best for a given application. When the application is unstructured and/or the design space unknown, it may
be practical to use evolutionary- or genetic-type algorithms to aid in characterizing the
problem. Generally, these methods work well for ad-hoc situations. Similarly, for
the generation of Pareto-optimal surfaces, when the bounds on the performance metrics
are unknown, a designer would do best to formulate a multi-objective problem [1].
We hope to have conveyed the power of an equation-based design strategy in this thesis.
The reduction in the number of simulations is a major advantage. At the system level,
this advantage appears as a reduction in the number of iterations between system and circuit
designers. While a basic understanding of optimization theory is required to select
a suitable optimization framework for a particular application, our hope is that circuit
design and operations research remain separate disciplines. H-BU provides a bridge for
the coupling of optimizers and circuit designers so that each may do what they do best.
Appendix A
Circuit design high-level process
A.1
Background
The problem addressed in this thesis is a circuit design and optimization one. Each
being a discipline of its own, the hybrid nature of the problem warrants discussion of
the two elements comprising the proposed design methodology: the circuit design process
and optimization methods. This document does not attempt to provide an in-depth
treatment of either topic. The reader is referred to the many textbooks dedicated to
each in turn, with some useful suggestions included as references. The goal here is to
provide an introductory, first-pass treatment of the ideas and concepts that will aid in
the construction and understanding of the problem addressed here.
It has long been maintained that analog design is more of an art than a science. Much
controversy exists on the matter, but it remains that an inevitable movement is underway
to, at least, partly automate analog design for a given application’s topology. Automation
of optimization, for a given topology, thus makes analog design closer to an art than it
currently is. In relieving analog designers of the chores that can be handled by CAD tools
and solvers, their time can be more productively spent developing enhanced topologies
using the ability that CAD tools lack: human designer judgment.
A.1.1
Circuit design [4]
The process of circuit design covers systems as large as national power grids all the way
down to individual transistors within an integrated circuit, both comparable systems in
terms of complexity. For simple circuits, one person may handle the design process, but for
more complex designs, teams of engineers and designers following a systematic approach
and using CAD tools are the norm.
The process of working out the physical form that a system will take, including the
method(s) of construction, materials, parts and technologies to use, physical layout, and
analytical tools, falls under circuit design. Often, circuit design is thought of as separate
from the entities that comprise it. These are: specification, design, costs, verification and
testing, and prototyping.
Specification
A specification is where the circuit design process begins. A specification states the functionality that the finished design must provide. At this stage however, how to achieve the
required functionality remains an open problem. The initial specification is a technically
detailed description of what a customer wants the finished design to achieve and includes
a variety of requirements, such as input and output signals, available power supplies and
power consumption bounds, and a deadline. The specification may also set some of the
physical parameters that the design must meet, such as size, weight, moisture resistance,
temperature range, thermal output, vibration tolerance, etc. The design example presented in Chapter 3 approaches circuit design by starting from single-stage transistor
blocks and evolving to a segment of a transmitter system, abstracting away from physical
parameters.
The proposed design methodology this thesis advocates differs from the traditional
design process. As a traditional system design approach progresses, the designer(s) frequently return to the specification and alter it to take into account the progress of the
design. What results is an unpredictable number of iterations between specification
and design engineers, usually with suboptimal time and resource allocation. This process
can involve tightening specifications that the customer has supplied, and adding tests
that the next-iteration design must pass in order to be accepted. These additional specifications will often be used in the verification of a design. One suboptimally allocated
resource here is time.
Design
The design process involves moving from the specification at the start, to a plan that
contains all the information needed for physical construction towards the end. This typically occurs in a number of stages. The process might begin with the conversion of the
specification into a block diagram of the various functions that the circuit must perform.
In this "black box," system-level stage, the contents of each block are not considered,
only what each block must do. This approach allows a complicated task to be broken
into smaller tasks, which may either be handled in sequence or divided amongst a design team. Subsequently, each block is abstractly considered in more detail, but with a
lot more focus on the details of the electrical functions to be provided. At this or later
stages it is common to require a large amount of research and/or mathematical modeling
into the feasible space imposed by the requirements. The results of this research may
be fed back into earlier stages of the design process. For example, if it turns out one
of the blocks cannot be designed within the parameters set for it, it may be necessary
to alter other blocks instead. Finally, the individual circuit components and topologies
are chosen to carry out the function in each of the blocks of the overall design. At this
stage, the physical layout and electrical connections of each component are also decided.
Layout commonly takes the form of artwork and requires a lot of care. The end-product
is usually a printed circuit board or integrated circuit. This stage is typically extremely
time consuming because of the vast array of choices available.
Costs
"Manufacturing costs shrink as design costs soar" is often quoted in circuit design, particularly for ICs. Naturally, it is desirable to keep costs at a minimum, or at least bounded.
This is an important part of the design process.
Prototyping
It is desirable to explore an idea before investing in it. Prototyping is a means to do just
this. Prototypes are created at any time during the development of a design. During the
planning and specification phase, when the need for exploration is greatest, breadboarding
is used. This allows for parallel verification and testing.
Verification and Testing
Upon completion of a design, the circuit must undergo verification and testing. Verification is the process of going through each stage of a design and ensuring that it will
do what the specification requires it to do. This is frequently a highly mathematical
process and can involve large-scale computer simulations of a design or parts of it. This
stage requires a large investment of time due to flaws found in the current design and the
large amounts of redesigning required in ameliorating the problems. As seen Chapter 1,
this iterative process can be summarized in n tradeoff curves, where n is the number of
blocks functioning together to fulfill the set of specifications. The methodology proposed
in this thesis, H-BU, seeks to reduce the number of iterations from specification to redesigning, and hence reduce the amount of time to reach a complete, functional design
with greater than suboptimal performance and resource allocation. Having a means to
providing system-level trade-off curves is a first step to this end. This is covered in more
detail in Chapters 1 and 2.
Appendix B
MATLAB formulation pseudo code
B.1
Amplifier pseudo code and Pareto-optimal space
generation via constraint sweeping
The following pseudo-code generates the Pareto-surface for an amplifier block by sweeping
the voltage gain, Av,i , and bandwidth, fh , constraints. First the block constants and
design variables are defined. Next, the objective function(s) are defined.¹ Lastly, the set of
performance, process, technology, and other physical and device constraints are defined.
In the case of using geometric programming for the optimization framework, use
of the posynomial−1 algorithm of Chapter 2 may be required. Note that implementing
this algorithm increases the amount of necessary memory and doubles the number of
required optimization runs for generation of the Pareto-optimal space (i.e., still Θ(n)).
¹A superposition of weighted objectives can also be used [1].

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for [constraint set]k = Γk, ∀ k = 1, . . . , M
  %% Define constants, λi:
  Vdd, Vee, VT, β, cµ, cπ, cparasitic, . . .
  %% Define design variables, pi:
  RL,i, gm,i, rπ,i, τcπ,i, τcµ,i, Av,i, . . .
  %% Auxiliary variable definitions
  %% Define objective functions:
  Pamp(pi, λi)                      ⇒ Power objective
  fh ≈ (1/2π) (Σi^N τi)^−1          ⇒ Bandwidth objective
  %% Constraint Set:
  %% Performance constraints:
  Av,amp,min ≤ Πi^(# of stages) Av,i
  fh,min ≤ fh
  Vswing ≤ Vdd,ee ∓ Ic,n(pi) RL,n   (n denotes the last stage of the amplifier)
  %% Technology & device constraints:
  gm,i rπ,i == β
  vce,sat ≤ vce,i
  vb[e,c] ≤ vbreakdown
  %% Optimization direction ⇒ minimization/maximization
  %% Evaluate current solution
  %% Perform posynomial−1 error, χi, calculation & correction here and repeat
  %% for each Γk.
end
Generate Pareto-surfaces.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
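The loop structure above is what actually produces the PO data: each pass fixes one point Γk of the swept constraint grid and solves one GP. The MATLAB sketch below shows only that outer structure; solve_block_gp is a placeholder stub standing in for the actual GP solve (performed in our flow with a GP solver such as GGPLAB [24]), and the sweep ranges are assumed for illustration.

% Outer constraint-sweep loop that builds a block-level Pareto surface.
% solve_block_gp is a stand-in stub, NOT a real solver call.
fh_grid = linspace(500e6, 1.5e9, 20);         % assumed bandwidth sweep range [Hz]
Av_grid = linspace(0.25, 2.5, 20);            % assumed gain sweep range [V/V]
Pmin    = nan(numel(fh_grid), numel(Av_grid));
solve_block_gp = @(fh, Av) 1e-3*Av.^2.*fh/900e6;   % placeholder "minimum power" [W]
for i = 1:numel(fh_grid)
  for j = 1:numel(Av_grid)
    % each (fh, Av) pair fixes the swept constraints; the GP returns the
    % minimum power meeting them (or reports infeasibility)
    Pmin(i, j) = solve_block_gp(fh_grid(i), Av_grid(j));
  end
end
surf(Av_grid, fh_grid/1e6, 1e3*Pmin);
xlabel('A_v (V/V)'); ylabel('f_h (MHz)'); zlabel('P (mW)');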
B.2
Mixer pseudo code and Pareto-optimal space generation via constraint sweeping
The following pseudo-code generates the Pareto-surface for a mixer block by sweeping
the voltage conversion gain, Amix , bandwidth, fmix , and linearity, IIP 3, constraints.
First the block constants and design variables are defined. Following is the objective
function(s)2 Lastly, the set of performance, process, technology, and other physical and
device constraints are defined.
As before, in the case of using geometric programming for the optimization framework, exercising of the posynomial−1 algorithm of Chapter 2 may be required. Note that
implementing this algorithm increases the amount of necessary memory and doubles the
number of required optimization runs for generation of the Pareto-optimal space (i.e., still
Θ(n)). However, the formulation for the mixer considered in Chapter 3 is such that the
approximation algorithm is not required.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for [constraint set ]k = Γmix,k , ∀ k = 1, . . . , L
%% Define constants, λi :
Vdd , Vee , VT , β, cµ , cπ , cparasitic . . .
%% Define design variables, pi :
RL , CL , RE , gm , rπ , τcπ , τcµ , Amix , . . .
%% Auxiliary variable definitions
%% Define objective functions:
Pmix (pi , λi ) ⇒
Power objective
%% Constraint Set:
%% Performance constraints:
2
A superposition of weighted objectives can also be used [1].
70
Appendix B. MATLAB formulation pseudo code
Amix
min
≤ Amix
fmix,min ≤ fmix
Vswing ≤ Vdd,ee ∓ Ic (pi )RL
IIP 3min ≤ IIP 3mix
%% Technology & device constraints:
gm,i rπ,i == β
vce,sat
≤
vce,i
vb[e,c]
≤
vbreakdown
%% Optimization direction ⇒ minimization/maximization
%% Evaluate current solution
%% Perform posynomial−1 error, χi , calculation & correction here and repeat
for each Γk .
end
Generate Pareto-surfaces.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
B.3
Monomial fitting
The choice of fitting depends on the selected optimization framework (See Appendix ??),
which in turn depends on the application. For the design example presented in Chapter
3, geometric programming served as a suitable mathematical programming framework.
In fact, GP is generally a convenient and popular choice for circuit design [8,12,13,15,16].
Given sets of data (x^(i), f^(i)), i = 1, . . . , N, several schemes are presented in [8] for
fitting these data into monomial and/or posynomial expressions. We have written our own
MATLAB script to perform a monomial fit that minimizes the relative error of our data,
defined as

ri = |f(x^(i)) − f^(i)| / f^(i)
Depending on the application, different error metrics may be used. Fig. 4-4 in Chapter 4
illustrates the relative errors, ri , associated with the amplifier and mixer blocks presented
in Chapter 3.
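A minimal sketch of such a fit is shown below; it is not the script used for the thesis results, and the data are synthetic. It relies on the fact that a monomial becomes affine after a log transformation, so an ordinary least-squares fit on the logs (which approximately minimizes relative error when the errors are small) recovers the monomial coefficient and exponents.

% Monomial fit of sampled data f(x1,x2) ~ c*x1^a1*x2^a2 via log-space least squares.
% Synthetic data for illustration; replace with the PO sweep results.
n  = 200;
x  = 0.5 + 2*rand(n, 2);                                    % sample points x^(i)
f  = 3.0*x(:,1).^1.7.*x(:,2).^-0.8.*(1 + 0.01*randn(n,1));  % noisy "PO" data f^(i)
Y  = [ones(n,1), log(x)];                                   % affine basis in y = log(x)
phi = Y \ log(f);                                           % least-squares fit of F(y) = log f
c  = exp(phi(1)); a1 = phi(2); a2 = phi(3);                 % recovered monomial parameters
fhat = c*x(:,1).^a1.*x(:,2).^a2;                            % monomial model at x^(i)
ri   = abs(fhat - f)./f;                                    % relative errors, as in Fig. 4-4
fprintf('c = %.2f, a1 = %.2f, a2 = %.2f, max r_i = %.3g\n', c, a1, a2, max(ri));

For data with pronounced curvature in log space (as in the high-power corner of Fig. 4-3(a)), the residuals ri grow, which is exactly the behavior visible in Fig. 4-4.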
B.4
Hierarchical pseudo code and Pareto-optimal space
generation via constraint sweeping
Once the Pareto-optimal surfaces for each of the system blocks have been generated and fit into a functional
form (see Appendix B.3), a system-level formulation can be cast for the generation of the
system-level Pareto-optimal space. This is the final H-BU step. At this point, the design space of the system has been characterized in its entirety. If modeled appropriately,
any desired set of requirements/specifications that lies in the Pareto-optimal space should be
simulated for verification and can then be constructed. The advantage over a
simulation-only approach is that the design space is first fully characterized in an efficient manner, and only candidate/feasible points are simulated for verification.
This reduces the number of simulation runs the designer needs for design space exploration.
The designer can focus on simulating/verifying feasible/optimal solutions only.
A pseudo-code formulation for a hierarchical system (i.e., the system consisting of the
connection of each of the blocks) is given below. The general form is the same as the block
formulations. The system-level constraints are swept for generation of the Pareto-optimal
surface.
For the hierarchical formulation, the objective(s) and constraints take functional forms
because the Pareto-optimal surfaces have been fitted. The auxiliary constraints ensure
that the range of the performance constraints is kept within the range of the block-level
performance parameters. For example, if the amplifier power ranges between 10^−4 and 10^−2
W, we must add a constraint on Pamp(pi) to ensure the hierarchical formulation only
searches for solutions in this region of space.
We note that the posynomial−1 algorithm of Chapter 2 is not needed here by construction. This is because at this stage of H-BU, the Pareto surfaces have been fit into
GP functional forms and thus require no further manipulation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for [constraint set]k = Γsys,k, ∀ k = 1, . . . , S
  %% Define constants, λi:
  Vdd, Vee, . . . and any other system-level constants & parameters.
  %% Define design variables, psys,i:
  Aamp, Amix, famp, fmix, Pamp, Pmix, IIP3mix, . . .
  %% Auxiliary variable definitions
  %% Define objective functions:
  Pamp(psys,i, λi) + Pmix(psys,i, λi)    ⇒ Total system power objective
  %% Constraint Set:
  %% Performance constraints:
  Atotal ≤ Aamp · Amix
  famp,min ≤ famp
  fmix,min ≤ fmix
  IIP3mix,min ≤ IIP3mix
  %% Block-level space constraints:
  Aamp,block min ≤ Aamp ≤ Aamp,block max
  famp,block min ≤ famp ≤ famp,block max
  Amix,block min ≤ Amix ≤ Amix,block max
  fmix,block min ≤ fmix ≤ fmix,block max
  IIP3mix,block min ≤ IIP3mix ≤ IIP3mix,block max
  Pamp,block min ≤ Pamp ≤ Pamp,block max
  Pmix,block min ≤ Pmix ≤ Pmix,block max
  %% Optimization direction ⇒ minimization/maximization
  %% Evaluate current solution
end
Generate Pareto-surfaces.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
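To make the final step tangible, the sketch below assembles a toy system-level problem from two fitted block models and minimizes total power subject to the cascade gain requirement. The monomial coefficients and specification values are invented purely for illustration (they are not the fitted thesis surfaces), and a brute-force grid evaluation stands in for the GP solve so the structure is easy to follow.

% System-level H-BU step with hypothetical fitted block models.
Pamp = @(A, f) 2e-3*A.^1.5.*(f/900e6).^2;   % hypothetical fitted monomial [W]
Pmix = @(A)    8e-3*A.^1.2;                 % hypothetical fitted monomial [W]
Atotal_min = 2.0; famp_min = 900e6;         % assumed system-level specs
[Aamp, Amix] = meshgrid(linspace(0.5, 2.5, 200), linspace(0.5, 3.0, 200));
Ptot = Pamp(Aamp, famp_min) + Pmix(Amix);   % total power objective on a grid
Ptot(Aamp.*Amix < Atotal_min) = NaN;        % enforce Atotal <= Aamp*Amix
[Pbest, idx] = min(Ptot(:));                % min skips the NaN (infeasible) points
fprintf('min total power %.1f mW at Aamp = %.2f, Amix = %.2f\n', ...
        1e3*Pbest, Aamp(idx), Amix(idx));

In the actual flow this search is a GP over the fitted monomials, so it is solved exactly and efficiently rather than on a grid, and sweeping Atotal (and the other system constraints) over a range of values traces out the system-level Pareto surfaces of Fig. 4-2.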
Appendix C
Additional Pareto-optimal surfaces
for Transmitter segment
Figure C-1: Amplifier block flat PO surfaces. (a), (b): Amplifier PO surfaces.
Figure C-2: Mixer block flat PO surfaces. (a)-(d): Mixer PO surfaces.
Figure C-3: Hierarchical system PO surfaces. (a)-(d): Transmitter PO surfaces.
Bibliography
[1] E. Zitzler, “Evolutionary algorithms for multi-objective optimization: Methods and
applications,” PhD thesis, Swiss Federal Institute of Technology, Zurich, Switzerland,
Nov. 1999.
[2] T. Eekeleart, T. McConaghy, and G. Gielen, “Efficient multiobjective synthesis of
analog circuits using hierarchical Pareto-optimal performance hypersurfaces,” Design
Automation and Test in Europe Conference, pp. 1070–1075, June 2005.
[3] S. Tiwary, P. Tiwary, and R. Rutenbar, “Generation of yield-aware pareto surfaces for
hierarchical circuit design space exploration,” Design and Automation Conference,
pp. 31–36, July 2006.
[4] Wikipedia-Circuit design. (2005, Dec.). [Online]. Available: http://en.wikipedia.org/
[5] X. Li, J. Wang, L. Pileggi, T.-S. Chen, and W. Chiang, “Performance-centering
optimization for system-level analog design exploration,” IEEE, pp. 421–428, 2005.
[6] E. Aberg and A. Gustavsson, “Design and evaluation of modified simplex methods,”
Analytica Chimica Acta 144, pp. 39–53, 1982.
[7] D. Betteridge, A. Wade, and A. Howard, “Reflections on the modified simplex ii,”
Talanta 32, pp. 723–734, 1985.
[8] S. Boyd, S. Kim, L. Vandenberghe, and A. Hassibi, "A tutorial on geometric programming," Optimization and Engineering, July 2005.
[9] Wikipedia-Optimization. (2005, Dec.). [Online]. Available: http://en.wikipedia.org/
[10] D. Wilde and C. Beightler, Foundations of Optimization. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1967.
[11] M. Hershenson, "GPCAD: A tool for CMOS op-amp synthesis," in IEEE/ACM International Conference on Computer-Aided Design, San Jose, CA, 1998, pp. 296–303.
[12] M. Hershenson, S. Boyd, and T. Lee, “Optimal design of a CMOS op-amp via geometric programming,” IEEE Transactions CAD, vol. 20, pp. 1–21, Jan. 2001.
[13] M. Hershenson, “Design of pipeline analog-to-digital converters via geometric programming,” IEEE/ACM ICCAD, pp. 317–324, Nov. 2002.
[14] M. Hershenson, S. Mohan, S. Boyd, and T. Lee, “Optimization of inductor circuits
via geometric programming,” DAC/ACM, pp. 994–998, 1999.
[15] M. Hershenson, S. Boyd, and T. Lee, “Efficient description of the design space of
analog circuits,” in IEEE/ACM DAC, June 2003.
[16] ——, “Automated design of folded-cascode op-amps with sensitivity analysis,” International Conference on Electronics, Circuits, and Systems, pp. 121–124, September
1998.
[17] S. Boyd and L. Vandenberghe. (2004) Introduction to convex optimization with engineering applications. [Online]. Available: http://www.stanford.edu/~boyd/cvxbook/
[18] Y. Nesterov and A. Nemirovsky, “Interior-point polynomial methods in convex programming,” Studies in Applied Mathematics, vol. 13, 1994.
[19] M. Chiang, Geometric Programming for Communication Systems. Princeton, New
Jersey: Princeton University, Jan. 1996.
[20] R. Duffin, E. Peterson, and C. Zener, Geometric Programming-Theory and Applications. New York: Wiley, 1967.
[21] J. Zou, D. Mueller, H. Graeb, and U. Schlichtmann, “A CPPLL hierarchical optimization methodology considering jitter, power and locking time,” Design and Automation Conference, pp. 19–24, July 2006.
[22] T. Eekeleart, R. Schoofs, G. Gielen, M. Steyeart, and W. Sansen, “Hierarchical
bottom-up analog optimization methodology validated by a delta-sigma A/D converter design for the 802.11a/b/g standard,” Design and Automation Conference, pp.
25–30, July 2006.
[23] G. Gielen, T. McConaghy, and T. Eekeklaert, “Performance space modeling for hierarchical synthesis of analog integrated circuits,” Design Automation Conference, pp.
881–886, June 2005.
[24] A. Mutapcic, K. Koh, S. Kim, L. Vandenberghe, and S. Boyd. (2006) GGPLAB: A Simple Matlab Toolbox for Geometric Programming. [Online]. Available: http://www.stanford.edu/~boyd/ggplab/
[25] C. Moranas and C. Floudas, Global Optimization in Generalized Geometric Programming. Hanover, MA: now Publishers Inc., 2005.
[26] D. Bertsimas and J. Tsitsiklis, Introduction to Linear Optimization. Nashua, N.H.:
Athena Scientific, 1997.
[27] T. Lee, The Design of CMOS Radio-Frequency Integrated Circuits. New York: Cambridge Univ. Press, 2004.
[28] P. Gray and R. Meyer, Analysis and Design of Analog Integrated Circuits, 4th ed.
New York: Wiley, 2001.
[29] R. Thornton, C. Searle, D. Pederson, R. Adler, and E. J. Angelo, Jr., Multistage Transistor Circuits, S.E.E.C. Series, vol. 5. New York: Wiley, 1965.
[30] B. Razavi, RF Microelectronics. New Jersey: Prentice Hall, 1998.
[31] A. M. Niknejad, “Integrated circuits for communications,” Online Course notes, 2005.
[32] J. Dawson, M. Hershenson, and T. Lee, “Optimal allocation of local feedback in
multi-stage amplifiers via geometric programming,” IEEE Transactions on Circuits
and Systems, vol. 48, pp. 1–11, Jan. 2001.
[33] F. Bernardinis, P. Nuzzo, and A. Vincentelli, “Robust system level design with analog
platforms,” ICCAD, Nov. 2006.